Struct rayon_core::thread_pool::ThreadPool

pub struct ThreadPool {
    registry: Arc<Registry>,
}

Represents a user-created thread-pool.

Use a ThreadPoolBuilder to specify the number and/or names of threads in the pool. After calling ThreadPoolBuilder::build(), you can then execute functions explicitly within this ThreadPool using ThreadPool::install(). By contrast, top level rayon functions (like join()) will execute implicitly within the current thread-pool.

§Creating a ThreadPool

let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();

install() executes a closure in one of the ThreadPool’s threads. In addition, any other rayon operations called inside of install() will also execute in the context of the ThreadPool.
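
As a minimal sketch (assuming the rayon facade crate and its parallel iterator prelude are available alongside rayon_core), nested rayon operations inside install() stay on the pool's threads:

    use rayon::prelude::*;

    let pool = rayon::ThreadPoolBuilder::new().num_threads(4).build().unwrap();
    // The closure runs on one of the pool's workers, and the parallel sum
    // inside it is split across that same pool.
    let sum: i32 = pool.install(|| (1..=100).into_par_iter().sum());
    assert_eq!(sum, 5050);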

When the ThreadPool is dropped, that is a signal for the threads it manages to terminate: they will complete executing any remaining work that you have spawned and then terminate automatically.

Fields§

§registry: Arc<Registry>

Implementations§

impl ThreadPool

pub fn new(configuration: Configuration) -> Result<ThreadPool, Box<dyn Error>>

👎Deprecated: Use ThreadPoolBuilder::build

Deprecated in favor of ThreadPoolBuilder::build.

pub(crate) fn build<S>(builder: ThreadPoolBuilder<S>) -> Result<ThreadPool, ThreadPoolBuildError>
where S: ThreadSpawn,

pub fn install<OP, R>(&self, op: OP) -> R
where OP: FnOnce() -> R + Send, R: Send,

Executes op within the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.

§Warning: thread-local data

Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.

§Warning: execution order

If the current thread is part of a different thread pool, it will try to keep busy while the op completes in its target pool, similar to calling ThreadPool::yield_now() in a loop. Therefore, it may potentially schedule other tasks to run on the current thread in the meantime. For example:

fn main() {
    rayon::ThreadPoolBuilder::new().num_threads(1).build_global().unwrap();
    let pool = rayon::ThreadPoolBuilder::default().build().unwrap();
    let do_it = || {
        print!("one ");
        pool.install(|| {});
        print!("two ");
    };
    rayon::join(|| do_it(), || do_it());
}

Since we configured just one thread in the global pool, one might expect do_it() to run sequentially, producing:

one two one two

However, each call to install() yields implicitly, allowing rayon to run multiple instances of do_it() concurrently on the single, global thread. The following output would be equally valid:

one one two two

§Panics

If op should panic, that panic will be propagated.

§Using install()
    fn main() {
        let pool = rayon::ThreadPoolBuilder::new().num_threads(8).build().unwrap();
        let n = pool.install(|| fib(20));
        println!("{}", n);
    }

    fn fib(n: usize) -> usize {
        if n == 0 || n == 1 {
            return n;
        }
        let (a, b) = rayon::join(|| fib(n - 1), || fib(n - 2)); // runs inside of `pool`
        a + b
    }

pub fn broadcast<OP, R>(&self, op: OP) -> Vec<R>
where OP: Fn(BroadcastContext<'_>) -> R + Sync, R: Send,

Executes op within every thread in the threadpool. Any attempts to use join, scope, or parallel iterators will then operate within that threadpool.

Broadcasts are executed on each thread after they have exhausted their local work queue, before they attempt work-stealing from other threads. The goal of that strategy is to run everywhere in a timely manner without being too disruptive to current work. There may be alternative broadcast styles added in the future for more or less aggressive injection, if the need arises.

§Warning: thread-local data

Because op is executing within the Rayon thread-pool, thread-local data from the current thread will not be accessible.

§Panics

If op should panic on one or more threads, exactly one panic will be propagated, but only after all threads have completed (or panicked) their own op.

§Examples
    use std::sync::atomic::{AtomicUsize, Ordering};

    fn main() {
        let pool = rayon::ThreadPoolBuilder::new().num_threads(5).build().unwrap();

        // The argument gives context, including the index of each thread.
        let v: Vec<usize> = pool.broadcast(|ctx| ctx.index() * ctx.index());
        assert_eq!(v, &[0, 1, 4, 9, 16]);

        // The closure can reference the local stack
        let count = AtomicUsize::new(0);
        pool.broadcast(|_| count.fetch_add(1, Ordering::Relaxed));
        assert_eq!(count.into_inner(), 5);
    }

pub fn current_num_threads(&self) -> usize

Returns the (current) number of threads in the thread pool.
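
A small illustrative sketch, assuming the pool is built with an explicit thread count:

    let pool = rayon::ThreadPoolBuilder::new().num_threads(3).build().unwrap();
    // With an explicit `num_threads`, the reported size matches the request.
    assert_eq!(pool.current_num_threads(), 3);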

§Future compatibility note

Note that unless this thread-pool was created with a ThreadPoolBuilder that specifies the number of threads, this number may vary over time in future versions (see the num_threads() method for details).

pub fn current_thread_index(&self) -> Option<usize>

If called from a Rayon worker thread in this thread-pool, returns the index of that thread; if not called from a Rayon thread, or called from a Rayon thread that belongs to a different thread-pool, returns None.

The index for a given thread will not change over the thread’s lifetime. However, multiple threads may share the same index if they are in distinct thread-pools.
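
An illustrative sketch of both cases:

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Called from a thread outside the pool, there is no index.
    assert_eq!(pool.current_thread_index(), None);
    // Inside `install()`, the closure runs on one of the pool's workers.
    let idx = pool.install(|| pool.current_thread_index());
    assert!(matches!(idx, Some(i) if i < 2));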

§Future compatibility note

Currently, every thread-pool (including the global thread-pool) has a fixed number of threads, but this may change in future Rayon versions (see the num_threads() method for details). In that case, the index for a thread would not change during its lifetime, but thread indices may wind up being reused if threads are terminated and restarted.

pub fn current_thread_has_pending_tasks(&self) -> Option<bool>

Returns true if the current worker thread currently has “local tasks” pending. This can be useful as part of a heuristic for deciding whether to spawn a new task or execute code on the current thread, particularly in breadth-first schedulers. However, keep in mind that this is an inherently racy check, as other worker threads may be actively “stealing” tasks from our local deque.

Background: Rayon uses a work-stealing scheduler. The key idea is that each thread has its own deque of tasks. Whenever a new task is spawned – whether through join(), Scope::spawn(), or some other means – that new task is pushed onto the thread’s local deque. Worker threads prefer to execute their own tasks; if they run out, they will try to “steal” tasks from other threads. This function therefore has an inherent race with other active worker threads, which may be removing items from the local deque.
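
A rough sketch of how the check behaves (the Some case is a racy snapshot, as described above):

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Outside any worker thread of this pool, there is no local deque to inspect.
    assert_eq!(pool.current_thread_has_pending_tasks(), None);
    pool.install(|| {
        // On a worker thread, the result reflects the local deque at this instant.
        let _pending: Option<bool> = pool.current_thread_has_pending_tasks();
    });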

pub fn join<A, B, RA, RB>(&self, oper_a: A, oper_b: B) -> (RA, RB)
where A: FnOnce() -> RA + Send, B: FnOnce() -> RB + Send, RA: Send, RB: Send,

Execute oper_a and oper_b in the thread-pool and return the results. Equivalent to self.install(|| join(oper_a, oper_b)).
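
For instance:

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    // Both closures execute in the pool; their results are returned as a tuple.
    let (a, b) = pool.join(|| 1 + 1, || "hello".len());
    assert_eq!((a, b), (2, 5));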

pub fn scope<'scope, OP, R>(&self, op: OP) -> R
where OP: FnOnce(&Scope<'scope>) -> R + Send, R: Send,

Creates a scope that executes within this thread-pool. Equivalent to self.install(|| scope(...)).

See also: the scope() function.
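
A short sketch: tasks spawned in the scope may borrow data from the caller's stack, and all of them finish before scope() returns.

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    let mut left: i32 = 0;
    let mut right: i32 = 0;
    pool.scope(|s| {
        // Each task mutably borrows a distinct stack variable.
        s.spawn(|_| left = (1..=10).sum());
        s.spawn(|_| right = (1..=10).product());
    });
    assert_eq!((left, right), (55, 3_628_800));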

pub fn scope_fifo<'scope, OP, R>(&self, op: OP) -> R
where OP: FnOnce(&ScopeFifo<'scope>) -> R + Send, R: Send,

Creates a scope that executes within this thread-pool. Spawns from the same thread are prioritized in relative FIFO order. Equivalent to self.install(|| scope_fifo(...)).

See also: the scope_fifo() function.

pub fn in_place_scope<'scope, OP, R>(&self, op: OP) -> R
where OP: FnOnce(&Scope<'scope>) -> R,

Creates a scope that spawns work into this thread-pool.

See also: the in_place_scope() function.
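
A sketch of the difference from scope(): the closure itself stays on the calling thread (note the missing Send bound on OP), while spawned work runs inside the pool.

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    let mut spawned_index = None;
    pool.in_place_scope(|s| {
        // This closure runs right here (the main thread, not a pool worker)...
        assert_eq!(pool.current_thread_index(), None);
        // ...while spawned work runs on one of the pool's workers.
        s.spawn(|_| spawned_index = pool.current_thread_index());
    });
    assert!(spawned_index.is_some());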

pub fn in_place_scope_fifo<'scope, OP, R>(&self, op: OP) -> R
where OP: FnOnce(&ScopeFifo<'scope>) -> R,

Creates a scope that spawns work into this thread-pool in FIFO order.

See also: the in_place_scope_fifo() function.

pub fn spawn<OP>(&self, op: OP)
where OP: FnOnce() + Send + 'static,

Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).

See also: the spawn() function defined on scopes.
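
A small sketch: because the task may outlive the caller's stack frame, it must own everything it captures (hence move); a channel is used here purely to observe completion.

    use std::sync::mpsc::channel;

    let pool = rayon::ThreadPoolBuilder::new().num_threads(2).build().unwrap();
    let (tx, rx) = channel();
    // `move` transfers ownership of `tx` into the task, satisfying the
    // `'static` bound; the task runs at some point on a pool worker.
    pool.spawn(move || tx.send("done").unwrap());
    assert_eq!(rx.recv().unwrap(), "done");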

pub fn spawn_fifo<OP>(&self, op: OP)
where OP: FnOnce() + Send + 'static,

Spawns an asynchronous task in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).

See also: the spawn_fifo() function defined on scopes.

pub fn spawn_broadcast<OP>(&self, op: OP)
where OP: Fn(BroadcastContext<'_>) + Send + Sync + 'static,

Spawns an asynchronous task on every thread in this thread-pool. This task will run in the implicit, global scope, which means that it may outlast the current stack frame – therefore, it cannot capture any references onto the stack (you will likely need a move closure).
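
A hedged sketch: the closure must be Send + Sync + 'static because every worker gets its own invocation; an atomic counter (spin-waited on, purely for illustration) shows that each of the four workers ran it once.

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;

    let pool = rayon::ThreadPoolBuilder::new().num_threads(4).build().unwrap();
    let count = Arc::new(AtomicUsize::new(0));
    let c = Arc::clone(&count);
    // Runs once on every worker thread; captures must be owned and shareable.
    pool.spawn_broadcast(move |_ctx| {
        c.fetch_add(1, Ordering::Relaxed);
    });
    // `spawn_broadcast` returns immediately, so wait until all workers have run it.
    while count.load(Ordering::Relaxed) < 4 {
        std::thread::yield_now();
    }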

pub fn yield_now(&self) -> Option<Yield>

Cooperatively yields execution to Rayon.

This is similar to the general yield_now(), but it only has an effect if the current thread is part of this thread pool.

Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.
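
A brief sketch of the None versus Some distinction:

    let pool = rayon::ThreadPoolBuilder::new().num_threads(1).build().unwrap();
    // From a thread that is not part of this pool, there is nothing to yield to.
    assert!(pool.yield_now().is_none());
    // From inside the pool, the result reports whether any work was executed.
    let y = pool.install(|| pool.yield_now());
    assert!(y.is_some());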

pub fn yield_local(&self) -> Option<Yield>

Cooperatively yields execution to local Rayon work.

This is similar to the general yield_local(), but it only has an effect if the current thread is part of this thread pool.

Returns Some(Yield::Executed) if anything was executed, Some(Yield::Idle) if nothing was available, or None if the current thread is not part of this pool.

Trait Implementations§

impl Debug for ThreadPool

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl Drop for ThreadPool

fn drop(&mut self)

Executes the destructor for this type. Read more

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pointable for T

const ALIGN: usize = _

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.