Struct crossbeam_epoch::Guard

pub struct Guard {
    pub(crate) local: *const Local,
}

A guard that keeps the current thread pinned.

§Pinning

The current thread is pinned by calling pin, which returns a new guard:

use crossbeam_epoch as epoch;

// It is often convenient to prefix a call to `pin` with a `&` in order to create a reference.
// This is not really necessary, but makes passing references to the guard a bit easier.
let guard = &epoch::pin();

When a guard gets dropped, the current thread is automatically unpinned.

§Pointers on the stack

Having a guard allows us to create pointers on the stack to heap-allocated objects. For example:

use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;

// Create a heap-allocated number.
let a = Atomic::new(777);

// Pin the current thread.
let guard = &epoch::pin();

// Load the heap-allocated object and create pointer `p` on the stack.
let p = a.load(SeqCst, guard);

// Dereference the pointer and print the value:
if let Some(num) = unsafe { p.as_ref() } {
    println!("The number is {}.", num);
}

§Multiple guards

Pinning is reentrant and it is perfectly legal to create multiple guards. In that case, the thread will actually be pinned only when the first guard is created and unpinned when the last one is dropped:

use crossbeam_epoch as epoch;

let guard1 = epoch::pin();
let guard2 = epoch::pin();
assert!(epoch::is_pinned());
drop(guard1);
assert!(epoch::is_pinned());
drop(guard2);
assert!(!epoch::is_pinned());

Fields§

§local: *const Local

Implementations§

impl Guard

pub fn defer<F, R>(&self, f: F)
where F: FnOnce() -> R + Send + 'static,

Stores a function so that it can be executed at some point after all currently pinned threads get unpinned.

This method first stores f into the thread-local (or handle-local) cache. If this cache becomes full, some functions are moved into the global cache. At the same time, some functions from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.

There is no guarantee when exactly f will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, f might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.

If this method is called from an unprotected guard, the function will simply be executed immediately.
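
For example, a minimal sketch (the variable names are illustrative) that defers printing a message until all currently pinned threads get unpinned:

use crossbeam_epoch as epoch;

let guard = &epoch::pin();

// The deferred closure must be `Send + 'static`, so move owned data into it.
let message = String::from("deferred");
guard.defer(move || println!("{}", message));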

pub unsafe fn defer_unchecked<F, R>(&self, f: F)
where F: FnOnce() -> R,

Stores a function so that it can be executed at some point after all currently pinned threads get unpinned.

This method first stores f into the thread-local (or handle-local) cache. If this cache becomes full, some functions are moved into the global cache. At the same time, some functions from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.

There is no guarantee when exactly f will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, f might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.

If this method is called from an unprotected guard, the function will simply be executed immediately.

§Safety

The given function must not hold any reference to data on the stack, since it may run after the current stack frame has been popped. It is highly recommended that the passed closure is always marked with move in order to prevent accidental borrows.

use crossbeam_epoch as epoch;

let guard = &epoch::pin();
let message = "Hello!";
unsafe {
    // ALWAYS use `move` when sending a closure into `defer_unchecked`.
    guard.defer_unchecked(move || {
        println!("{}", message);
    });
}

Apart from that, keep in mind that another thread may execute f, so anything accessed by the closure must be Send.

We intentionally didn’t require F: Send, because Rust’s type system usually cannot prove F: Send for typical use cases. For example, consider the following code snippet, which exemplifies the typical use case of deferring the deallocation of a shared reference:

let shared = Owned::new(7i32).into_shared(guard);
guard.defer_unchecked(move || shared.into_owned()); // `Shared` is not `Send`!

While Shared is not Send, it’s safe for another thread to call the deferred function, because it’s called only after the grace period and shared is no longer shared with other threads. But we don’t expect type systems to prove this.

§Examples

When a heap-allocated object in a data structure becomes unreachable, it has to be deallocated. However, the current thread and other threads may still be holding references on the stack to that same object. Therefore it cannot be deallocated before those references get dropped. This method can defer deallocation until all those threads get unpinned and consequently drop all their references on the stack.

use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;

let a = Atomic::new("foo");

// Now suppose that `a` is shared among multiple threads and concurrently
// accessed and modified...

// Pin the current thread.
let guard = &epoch::pin();

// Steal the object currently stored in `a` and swap it with another one.
let p = a.swap(Owned::new("bar").into_shared(guard), SeqCst, guard);

if !p.is_null() {
    // The object `p` is pointing to is now unreachable.
    // Defer its deallocation until all currently pinned threads get unpinned.
    unsafe {
        // ALWAYS use `move` when sending a closure into `defer_unchecked`.
        guard.defer_unchecked(move || {
            println!("{} is now being deallocated.", p.deref());
            // Now we have unique access to the object pointed to by `p` and can turn it
            // into an `Owned`. Dropping the `Owned` will deallocate the object.
            drop(p.into_owned());
        });
    }
}

pub unsafe fn defer_destroy<T>(&self, ptr: Shared<'_, T>)

Stores a destructor for an object so that it can be deallocated and dropped at some point after all currently pinned threads get unpinned.

This method first stores the destructor into the thread-local (or handle-local) cache. If this cache becomes full, some destructors are moved into the global cache. At the same time, some destructors from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.

There is no guarantee when exactly the destructor will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, the destructor might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.

If this method is called from an unprotected guard, the destructor will simply be executed immediately.

§Safety

The object must not be reachable by other threads anymore; otherwise, it might still be in use when the destructor runs.

Apart from that, keep in mind that another thread may execute the destructor, so the object must be sendable to other threads.

We intentionally didn’t require T: Send, because Rust’s type system usually cannot prove T: Send for typical use cases. For example, consider the following code snippet, which exemplifies the typical use case of deferring the deallocation of a shared reference:

let shared = Owned::new(7i32).into_shared(guard);
guard.defer_destroy(shared); // `Shared` is not `Send`!

While Shared is not Send, it’s safe for another thread to call the destructor, because it’s called only after the grace period and shared is no longer shared with other threads. But we don’t expect type systems to prove this.

§Examples

When a heap-allocated object in a data structure becomes unreachable, it has to be deallocated. However, the current thread and other threads may still be holding references on the stack to that same object. Therefore it cannot be deallocated before those references get dropped. This method can defer deallocation until all those threads get unpinned and consequently drop all their references on the stack.

use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;

let a = Atomic::new("foo");

// Now suppose that `a` is shared among multiple threads and concurrently
// accessed and modified...

// Pin the current thread.
let guard = &epoch::pin();

// Steal the object currently stored in `a` and swap it with another one.
let p = a.swap(Owned::new("bar").into_shared(guard), SeqCst, guard);

if !p.is_null() {
    // The object `p` is pointing to is now unreachable.
    // Defer its deallocation until all currently pinned threads get unpinned.
    unsafe {
        guard.defer_destroy(p);
    }
}

pub fn flush(&self)

Clears up the thread-local cache of deferred functions by executing them or moving them into the global cache.

Call this method after deferring execution of a function if you want to get it executed as soon as possible. Flushing will make sure the function resides in the global cache, so that any thread has a chance of taking it and executing it.

If this method is called from an unprotected guard, it is a no-op (nothing happens).

§Examples
use crossbeam_epoch as epoch;

let guard = &epoch::pin();
guard.defer(move || {
    println!("This better be printed as soon as possible!");
});
guard.flush();

pub fn repin(&mut self)

Unpins and then immediately re-pins the thread.

This method is useful when you don’t want to delay the advancement of the global epoch by holding an old epoch. For safety, you should not maintain any guard-based reference across the call (this is enforced by &mut self). The thread will only be repinned if this is the only active guard for the current thread.

If this method is called from an unprotected guard, the call is simply a no-op.

§Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;

let a = Atomic::new(777);
let mut guard = epoch::pin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
guard.repin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}

pub fn repin_after<F, R>(&mut self, f: F) -> R
where F: FnOnce() -> R,

Temporarily unpins the thread, executes the given function and then re-pins the thread.

This method is useful when you need to perform a long-running operation (e.g. sleeping) and don’t need to maintain any guard-based reference across the call (the latter is enforced by &mut self). The thread will only be unpinned if this is the only active guard for the current thread.

If this method is called from an unprotected guard, then the passed function is called directly without unpinning the thread.

§Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
use std::thread;
use std::time::Duration;

let a = Atomic::new(777);
let mut guard = epoch::pin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
guard.repin_after(|| thread::sleep(Duration::from_millis(50)));
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}

pub fn collector(&self) -> Option<&Collector>

Returns the Collector associated with this guard.

This method is useful when you need to ensure that all guards used with a data structure come from the same collector.

If this method is called from an unprotected guard, then None is returned.

§Examples
use crossbeam_epoch as epoch;

let guard1 = epoch::pin();
let guard2 = epoch::pin();
assert!(guard1.collector() == guard2.collector());
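
The same check works for guards pinned through an explicitly created collector; a minimal sketch using Collector::new and a registered handle:

use crossbeam_epoch::Collector;

// Create a separate collector and register a handle for the current thread.
let collector = Collector::new();
let handle = collector.register();

// A guard pinned through this handle reports the collector it belongs to.
let guard = handle.pin();
assert!(guard.collector() == Some(&collector));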

Trait Implementations§

impl Debug for Guard

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Drop for Guard

fn drop(&mut self)

Executes the destructor for this type.
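
Dropping the guard unpins the current thread once it is the last active guard; a minimal sketch:

use crossbeam_epoch as epoch;

let guard = epoch::pin();
assert!(epoch::is_pinned());

// Dropping the only active guard unpins the thread.
drop(guard);
assert!(!epoch::is_pinned());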

Auto Trait Implementations§

impl Freeze for Guard

impl !RefUnwindSafe for Guard

impl !Send for Guard

impl !Sync for Guard

impl Unpin for Guard

impl !UnwindSafe for Guard
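
Because Guard is neither Send nor Sync, a guard cannot be moved to or shared with another thread; each thread pins itself and works through its own guard. A minimal sketch using scoped threads (the epoch-protected data access is elided):

use crossbeam_epoch as epoch;
use std::thread;

thread::scope(|s| {
    for _ in 0..2 {
        s.spawn(|| {
            // Each thread creates its own guard; a `Guard` never crosses threads.
            let _guard = epoch::pin();
            // ... access epoch-protected data here ...
        });
    }
});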

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pointable for T

const ALIGN: usize = mem::align_of::<T>()

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.