Cache in-memory in ASP.NET Core
By Rick Anderson, John Luo, and Steve Smith
View or download sample code (how to download)
Caching can significantly improve the performance and scalability of an app by reducing the work required to generate content. Caching works best with data that changes infrequently. Caching makes a copy of data that can be returned much faster than it can be from the original source. Apps should be written and tested to never depend on cached data.
ASP.NET Core supports several different caches. The simplest cache is based on IMemoryCache, which represents a cache stored in the memory of the web server. Apps that run on a server farm of multiple servers should ensure that sessions are sticky when using the in-memory cache. Sticky sessions ensure that subsequent requests from a client all go to the same server. For example, Azure Web apps use Application Request Routing (ARR) to route all subsequent requests to the same server.
Non-sticky sessions in a web farm require a distributed cache to avoid cache consistency problems. For some apps, a distributed cache can support higher scale out than an in-memory cache. Using a distributed cache offloads the cache memory to an external process.
The cache will evict cache entries under memory pressure unless the cache priority is set to CacheItemPriority.NeverRemove. You can set the CacheItemPriority to adjust the priority with which the cache evicts items under memory pressure.
The in-memory cache can store any object; the distributed cache interface is limited to byte[].
In-memory caching is a service that's referenced from your app using Dependency Injection. Call AddMemoryCache in ConfigureServices:
Request the IMemoryCache instance in the constructor:
IMemoryCache requires the NuGet package "Microsoft.Extensions.Caching.Memory".
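The registration and constructor-injection snippets aren't shown above; a minimal sketch consistent with the description might look like the following (the `HomeController` name is illustrative, not from the original):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Register the in-memory cache service so IMemoryCache can be injected.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMemoryCache();
        services.AddMvc();
    }
}

public class HomeController : Controller
{
    private readonly IMemoryCache _cache;

    // The DI container supplies the shared IMemoryCache instance.
    public HomeController(IMemoryCache cache)
    {
        _cache = cache;
    }
}
```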
The following code uses TryGetValue to check if a time is in the cache. If a time isn't cached, a new entry is created and added to the cache with Set.
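The code for this step isn't shown above; a sketch consistent with the description might look like this (the `_cache` field and the `CacheKeys.Entry` key constant are assumptions, not from the original):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public IActionResult CacheTryGetValueSet()
{
    // Look for the cached time; TryGetValue returns false on a cache miss.
    if (!_cache.TryGetValue(CacheKeys.Entry, out DateTime cacheEntry))
    {
        // Key not in cache, so get the data.
        cacheEntry = DateTime.Now;

        var cacheEntryOptions = new MemoryCacheEntryOptions()
            // Keep in cache for this time; reset the clock on each access.
            .SetSlidingExpiration(TimeSpan.FromSeconds(3));

        // Save the data in cache.
        _cache.Set(CacheKeys.Entry, cacheEntry, cacheEntryOptions);
    }

    return View("Cache", cacheEntry);
}
```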
The current time and the cached time are displayed:
The cached value remains in the cache while there are requests within the timeout period (and no eviction due to memory pressure). The following image shows the current time and an older time retrieved from the cache:
The following code uses GetOrCreate and GetOrCreateAsync to cache data.
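That code isn't shown above; a sketch consistent with the description might be (again assuming a `_cache` field and a hypothetical `CacheKeys.Entry` constant):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public IActionResult CacheGetOrCreate()
{
    // Returns the cached entry if present; otherwise runs the factory,
    // caches its result, and returns it.
    var cacheEntry = _cache.GetOrCreate(CacheKeys.Entry, entry =>
    {
        entry.SlidingExpiration = TimeSpan.FromSeconds(3);
        return DateTime.Now;
    });

    return View("Cache", cacheEntry);
}

public async Task<IActionResult> CacheGetOrCreateAsync()
{
    // Same pattern with an asynchronous factory.
    var cacheEntry = await _cache.GetOrCreateAsync(CacheKeys.Entry, entry =>
    {
        entry.SlidingExpiration = TimeSpan.FromSeconds(3);
        return Task.FromResult(DateTime.Now);
    });

    return View("Cache", cacheEntry);
}
```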
The following code calls Get to fetch the cached time:
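A sketch of that call (assuming the same `_cache` field and hypothetical `CacheKeys.Entry` key) might be:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public IActionResult CacheGet()
{
    // Get returns the cached value, or default (null here) on a miss.
    var cacheEntry = _cache.Get<DateTime?>(CacheKeys.Entry);
    return View("Cache", cacheEntry);
}
```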
See IMemoryCache methods and CacheExtensions methods for a description of the cache methods.
The following sample:
- Sets the absolute expiration time. This is the maximum time the entry can be cached and prevents the item from becoming too stale when the sliding expiration is continuously renewed.
- Sets a sliding expiration time. Requests that access this cached item will reset the sliding expiration clock.
- Sets the cache priority to CacheItemPriority.NeverRemove.
- Sets a PostEvictionDelegate that will be called after the entry is evicted from the cache. The callback is run on a different thread from the code that removes the item from the cache.
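The sample itself isn't shown above; a sketch matching the four points in the list might look like this (the `CacheKeys.CallbackEntry` key and the logging in the callback are assumptions):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public IActionResult CreateCallbackEntry()
{
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Pin to cache: never evict due to memory pressure.
        .SetPriority(CacheItemPriority.NeverRemove)
        // Reset the sliding-expiration clock on each access.
        .SetSlidingExpiration(TimeSpan.FromSeconds(3))
        // Hard upper bound, even if the sliding expiration keeps being renewed.
        .SetAbsoluteExpiration(TimeSpan.FromSeconds(20))
        // Called after the entry is evicted, on a different thread.
        .RegisterPostEvictionCallback(EvictionCallback, state: this);

    _cache.Set(CacheKeys.CallbackEntry, DateTime.Now, cacheEntryOptions);
    return RedirectToAction("GetCallbackEntry");
}

private static void EvictionCallback(object key, object value,
    EvictionReason reason, object state)
{
    var message = $"Entry {key} was evicted. Reason: {reason}.";
    // Record or display the message as needed.
}
```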
The following sample shows how to expire a cache entry if a dependent entry expires. A CancellationChangeToken is added to the cached item. When Cancel is called on the CancellationTokenSource, both cache entries are evicted.
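A sketch of that sample might look like this (the `CacheKeys.*` key names are assumptions; `Set` with an `IChangeToken` is the `CacheExtensions` overload):

```csharp
using System;
using System.Threading;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

public IActionResult CreateDependentEntries()
{
    var cts = new CancellationTokenSource();
    // Keep the token source in the cache so it can be retrieved and cancelled.
    _cache.Set(CacheKeys.DependentCTS, cts);

    using (var parentEntry = _cache.CreateEntry(CacheKeys.Parent))
    {
        parentEntry.Value = DateTime.Now;

        // The child entry, created inside the using block, ties both
        // entries to the same cancellation token.
        _cache.Set(CacheKeys.Child, DateTime.Now,
            new CancellationChangeToken(cts.Token));
    }

    return RedirectToAction("GetDependentEntries");
}

public IActionResult RemoveChildEntry()
{
    // Cancelling the token evicts every entry that registered it.
    _cache.Get<CancellationTokenSource>(CacheKeys.DependentCTS).Cancel();
    return RedirectToAction("GetDependentEntries");
}
```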
Using a CancellationTokenSource allows multiple cache entries to be evicted as a group. With the using pattern in the code above, cache entries created inside the using block inherit triggers and expiration settings.
When using a callback to repopulate a cache item:
- Multiple requests can find the cached value missing because the callback hasn't completed.
- This can result in several threads repopulating the cached item.
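One common mitigation, not shown in the original, is to serialize repopulation behind a lock such as a SemaphoreSlim, with a second cache check after acquiring it (the `_cache` field and `CacheKeys.Entry` key are assumptions carried over from the earlier sketches):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

private static readonly SemaphoreSlim _cacheLock = new SemaphoreSlim(1, 1);

public async Task<DateTime> GetTimeAsync()
{
    if (_cache.TryGetValue(CacheKeys.Entry, out DateTime cached))
        return cached;

    await _cacheLock.WaitAsync();
    try
    {
        // Check again: another request may have repopulated the entry
        // while this one was waiting on the semaphore.
        if (!_cache.TryGetValue(CacheKeys.Entry, out cached))
        {
            cached = DateTime.Now;
            _cache.Set(CacheKeys.Entry, cached,
                new MemoryCacheEntryOptions()
                    .SetSlidingExpiration(TimeSpan.FromSeconds(3)));
        }
    }
    finally
    {
        _cacheLock.Release();
    }

    return cached;
}
```

Only one caller at a time runs the expensive repopulation; the others wait and then read the freshly cached value.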
When one cache entry is used to create another, the child copies the parent entry's expiration tokens and time-based expiration settings. The child isn't expired by manual removal or updating of the parent entry.