pub struct Cache<Key, Val, We = UnitWeighter, B = DefaultHashBuilder, L = DefaultLifecycle<Key, Val>> {
    hash_builder: B,
    shards: Box<[RwLock<CacheShard<Key, Val, We, B, L, Arc<Placeholder<Val>>>>]>,
    shards_mask: u64,
    lifecycle: L,
}
A concurrent cache.
The concurrent cache is internally composed of equally sized shards, each of which is independently synchronized. This allows for low contention when multiple threads are accessing the cache but limits the maximum weight capacity of each shard.
§Value
Cache values are cloned when fetched. Users should wrap their values with Arc<_>
if necessary to avoid expensive clone operations. If interior mutability is required
Arc<Mutex<_>> or Arc<RwLock<_>> can also be used.
§Thread Safety and Concurrency
The cache instance can be wrapped with an Arc (or equivalent) and shared between threads.
All methods are accessible via non-mut references so no further synchronization (e.g. Mutex) is needed.
impl<Key: Eq + Hash, Val: Clone, We: Weighter<Key, Val> + Clone> Cache<Key, Val, We>
pub fn with_weighter( estimated_items_capacity: usize, weight_capacity: u64, weighter: We, ) -> Self
impl<Key: Eq + Hash, Val: Clone, We: Weighter<Key, Val> + Clone, B: BuildHasher + Clone, L: Lifecycle<Key, Val> + Clone> Cache<Key, Val, We, B, L>
pub fn with(
estimated_items_capacity: usize,
weight_capacity: u64,
weighter: We,
hash_builder: B,
lifecycle: L,
) -> Self
Creates a new cache that can hold up to weight_capacity in weight.
estimated_items_capacity is the estimated number of items the cache is expected to hold,
roughly equivalent to weight_capacity / average item weight.
pub fn with_options(
options: Options,
weighter: We,
hash_builder: B,
lifecycle: L,
) -> Self
Constructs a cache based on OptionsBuilder.
§Example
use quick_cache::{sync::{Cache, DefaultLifecycle}, OptionsBuilder, UnitWeighter, DefaultHashBuilder};
Cache::<(String, u64), String>::with_options(
    OptionsBuilder::new()
        .estimated_items_capacity(10000)
        .weight_capacity(10000)
        .build()
        .unwrap(),
    UnitWeighter,
    DefaultHashBuilder::default(),
    DefaultLifecycle::default(),
);
pub fn capacity(&self) -> u64
Returns the total maximum weight capacity of cached items.
Note that the cache may be composed of multiple shards and each shard has its own maximum weight capacity,
see Self::shard_capacity.
pub fn shard_capacity(&self) -> u64
Returns the maximum weight capacity of each shard.
pub fn num_shards(&self) -> usize
Returns the number of shards.
fn shard_for<Q>( &self, key: &Q, ) -> Option<(&RwLock<CacheShard<Key, Val, We, B, L, Arc<Placeholder<Val>>>>, u64)>
pub fn reserve(&self, additional: usize)
Reserves space for additional new entries.
Note that the reservation is counted in entries, not in weight.
pub fn contains_key<Q>(&self, key: &Q) -> bool
Checks if a key exists in the cache.
pub fn peek<Q>(&self, key: &Q) -> Option<Val>
Peeks an item from the cache whose key is key.
Contrary to gets, peeks don’t alter the key “hotness”.
pub fn remove<Q>(&self, key: &Q) -> Option<(Key, Val)>
Removes an item from the cache whose key is key.
Returns the removed entry, if any.
pub fn remove_if<Q, F>(&self, key: &Q, f: F) -> Option<(Key, Val)>
Removes an item from the cache whose key is key, but only if f(&value) returns true for that entry.
Compared to peek and remove, this method guarantees that no new value was inserted in-between.
Returns the removed entry, if any.
pub fn replace(
&self,
key: Key,
value: Val,
soft: bool,
) -> Result<(), (Key, Val)>
Inserts an item in the cache, but only if an entry with key key already exists.
If soft is set, the replace operation won’t affect the “hotness” of the entry,
even if the value is replaced.
Returns Ok if the entry was admitted and Err(_) if it wasn’t.
pub fn replace_with_lifecycle(
&self,
key: Key,
value: Val,
soft: bool,
) -> Result<L::RequestState, (Key, Val)>
Inserts an item in the cache, but only if an entry with key key already exists.
If soft is set, the replace operation won’t affect the “hotness” of the entry,
even if the value is replaced.
Returns Ok(_) with the lifecycle request state if the entry was admitted and Err(_) if it wasn’t.
pub fn retain<F>(&self, f: F)
Retains only the items specified by the predicate.
In other words, remove all items for which f(&key, &value) returns false. The
elements are visited in arbitrary order.
pub fn insert_with_lifecycle(&self, key: Key, value: Val) -> L::RequestState
Inserts an item in the cache with key key.
pub fn iter(&self) -> Iter<'_, Key, Val, We, B, L>
where
    Key: Clone,
Iterates over the items in the cache, returning cloned key-value pairs.
The iterator is guaranteed to yield all items in the cache at the time of creation provided that they are not removed or evicted from the cache while iterating. The iterator may also yield items added to the cache after the iterator is created.
pub fn drain(&self) -> Drain<'_, Key, Val, We, B, L>
Drains items from the cache.
The iterator is guaranteed to drain all items in the cache at the time of creation provided that they are not removed or evicted from the cache while draining. The iterator may also drain items added to the cache after the iterator is created. Due to the above, the cache may not be empty after the iterator is fully consumed if items are added to the cache while draining.
Note that dropping the iterator will not finish the draining process, unlike other drain methods.
pub fn set_capacity(&self, new_weight_capacity: u64)
Sets the cache to a new weight capacity.
This will adjust the weight capacity of each shard proportionally. If the new capacity is smaller than the current weight, items will be evicted to bring the cache within the new limit.
pub fn get_value_or_guard<Q>(
&self,
key: &Q,
timeout: Option<Duration>,
) -> GuardResult<'_, Key, Val, We, B, L>
Gets an item from the cache with key key.
If the corresponding value isn’t present in the cache, this function returns a guard
that can be used to insert the value once it’s computed.
While the returned guard is alive, other calls with the same key using the
get_value_or_guard or get_or_insert family of functions will wait until the guard
is dropped or the value is inserted.
A None timeout means waiting forever.
A Some(<zero>) timeout will return a Timeout error immediately if the value is not present
and a guard is alive elsewhere.
pub fn get_or_insert_with<Q, E>(
&self,
key: &Q,
with: impl FnOnce() -> Result<Val, E>,
) -> Result<Val, E>
Gets or inserts an item in the cache with key key.
See also get_value_or_guard and get_value_or_guard_async.
pub async fn get_value_or_guard_async<'a, Q>(
&'a self,
key: &Q,
) -> Result<Val, PlaceholderGuard<'a, Key, Val, We, B, L>>
Gets an item from the cache with key key.
If the corresponding value isn’t present in the cache, this function returns a guard
that can be used to insert the value once it’s computed.
While the returned guard is alive, other calls with the same key using the
get_value_or_guard or get_or_insert family of functions will wait until the guard
is dropped or the value is inserted.
pub async fn get_or_insert_async<Q, E>(
&self,
key: &Q,
with: impl Future<Output = Result<Val, E>>,
) -> Result<Val, E>
Gets or inserts an item in the cache with key key.
pub fn entry<Q, T>(
&self,
key: &Q,
timeout: Option<Duration>,
on_occupied: impl FnOnce(&Key, &mut Val) -> EntryAction<T>,
) -> EntryResult<'_, Key, Val, We, B, L, T>
Atomically accesses an existing entry, or gets a guard for insertion.
If a value exists for key, on_occupied is called with a mutable reference
to the key and value. The callback returns an EntryAction to decide what to do:
EntryAction::Retain(T) — keep the entry and return T. Weight is recalculated after the callback returns.
EntryAction::Remove — remove the entry from the cache.
EntryAction::ReplaceWithGuard — remove the entry and get a guard for re-insertion.
If no value exists, a PlaceholderGuard is returned for inserting a new value.
If another thread is already loading this key, waits up to timeout for the value
to arrive, then calls on_occupied on the result.
A None timeout means waiting forever.
A Some(<zero>) timeout will return a Timeout immediately if a guard is alive elsewhere.
The callback is FnOnce and runs at most once.
§Performance
Always acquires a write lock on the shard. For read-only lookups where
contention matters, prefer get, get_value_or_guard
or similar.
The callback runs under the shard write lock — keep it short to avoid blocking other operations on the same shard. Do not call back into the cache from the callback, as this will deadlock when the same shard is accessed.
§Panics
If the callback panics, weight accounting is automatically corrected. However, any partial mutation to the value will remain.
§Examples
use quick_cache::sync::{Cache, EntryAction, EntryResult};

let cache: Cache<String, u64> = Cache::new(5);
cache.insert("counter".to_string(), 0);

// Mutate in place: increment a counter
let result = cache.entry("counter", None, |_k, v| {
    *v += 1;
    EntryAction::Retain(*v)
});
assert!(matches!(result, EntryResult::Retained(1)));
assert_eq!(cache.get("counter"), Some(1));
pub async fn entry_async<'a, Q, T>(
&'a self,
key: &Q,
on_occupied: impl FnOnce(&Key, &mut Val) -> EntryAction<T>,
) -> EntryResult<'a, Key, Val, We, B, L, T>
Async version of Self::entry.
Atomically accesses an existing entry, or gets a guard for insertion. If another task is already loading this key, waits asynchronously for the value.
See entry for full documentation.
pub fn memory_used(&self) -> MemoryUsed
Gets the total memory used by the cache data structures.
Note that if a cache key or value is a heap-allocating type like Vec<T>,
the memory it allocates on the heap is not counted.