pub struct Builder {
kind: Kind,
enable_io: bool,
nevents: usize,
enable_time: bool,
start_paused: bool,
worker_threads: Option<usize>,
max_blocking_threads: usize,
pub(super) thread_name: Arc<dyn Fn() -> String + Send + Sync + 'static>,
pub(super) thread_stack_size: Option<usize>,
pub(super) after_start: Option<Arc<dyn Fn() + Send + Sync>>,
pub(super) before_stop: Option<Arc<dyn Fn() + Send + Sync>>,
pub(super) before_park: Option<Arc<dyn Fn() + Send + Sync>>,
pub(super) after_unpark: Option<Arc<dyn Fn() + Send + Sync>>,
pub(super) before_spawn: Option<Arc<dyn Fn(&TaskMeta<'_>) + Send + Sync>>,
pub(super) after_termination: Option<Arc<dyn Fn(&TaskMeta<'_>) + Send + Sync>>,
pub(super) keep_alive: Option<Duration>,
pub(super) global_queue_interval: Option<u32>,
pub(super) event_interval: u32,
pub(super) local_queue_capacity: usize,
pub(super) disable_lifo_slot: bool,
pub(super) seed_generator: RngSeedGenerator,
pub(super) metrics_poll_count_histogram_enable: bool,
pub(super) metrics_poll_count_histogram: HistogramBuilder,
}
Builds Tokio Runtime with custom configuration values.

Methods can be chained in order to set the configuration values. The Runtime is constructed by calling build.

New instances of Builder are obtained via Builder::new_multi_thread or Builder::new_current_thread.

See function level documentation for details on the various configuration settings.
§Examples

use tokio::runtime::Builder;

fn main() {
    // build runtime
    let runtime = Builder::new_multi_thread()
        .worker_threads(4)
        .thread_name("my-custom-name")
        .thread_stack_size(3 * 1024 * 1024)
        .build()
        .unwrap();

    // use runtime ...
}
§Fields

kind: Kind
Runtime type.
enable_io: bool
Whether or not to enable the I/O driver
nevents: usize
enable_time: bool
Whether or not to enable the time driver
start_paused: bool
Whether or not the clock should start paused.
worker_threads: Option<usize>
The number of worker threads, used by Runtime.
Only used when not using the current-thread executor.
max_blocking_threads: usize
Cap on thread usage.
thread_name: Arc<dyn Fn() -> String + Send + Sync + 'static>
Name fn used for threads spawned by the runtime.
thread_stack_size: Option<usize>
Stack size used for threads spawned by the runtime.
after_start: Option<Arc<dyn Fn() + Send + Sync>>
Callback to run after each thread starts.
before_stop: Option<Arc<dyn Fn() + Send + Sync>>
To run before each worker thread stops
before_park: Option<Arc<dyn Fn() + Send + Sync>>
To run before each worker thread is parked.
after_unpark: Option<Arc<dyn Fn() + Send + Sync>>
To run after each thread is unparked.
before_spawn: Option<Arc<dyn Fn(&TaskMeta<'_>) + Send + Sync>>
To run before each task is spawned.
after_termination: Option<Arc<dyn Fn(&TaskMeta<'_>) + Send + Sync>>
To run after each task is terminated.
keep_alive: Option<Duration>
Customizable keep alive timeout for BlockingPool
global_queue_interval: Option<u32>
How many ticks before pulling a task from the global/remote queue?

When None, the value is unspecified and behavior details are left to the scheduler. Each scheduler flavor could choose to either pick its own default value or use some other strategy to decide when to poll from the global queue. For example, the multi-threaded scheduler uses a self-tuning strategy based on mean task poll times.
event_interval: u32
How many ticks before yielding to the driver for timer and I/O events?
local_queue_capacity: usize
disable_lifo_slot: bool
When true, the multi-threaded scheduler LIFO slot should not be used.
This option should only be exposed as unstable.
seed_generator: RngSeedGenerator
Specify a random number generator seed to provide deterministic results
metrics_poll_count_histogram_enable: bool
When true, enables task poll count histogram instrumentation.
metrics_poll_count_histogram: HistogramBuilder
Configures the task poll count histogram
§Implementations

impl Builder

pub fn new_current_thread() -> Builder
Returns a new builder with the current thread scheduler selected.

Configuration methods can be chained on the return value.

To spawn non-Send tasks on the resulting runtime, combine it with a LocalSet.
pub fn new_multi_thread() -> Builder
Returns a new builder with the multi thread scheduler selected.
Configuration methods can be chained on the return value.
pub(crate) fn new(kind: Kind, event_interval: u32) -> Builder
Returns a new runtime builder initialized with default configuration values.
Configuration methods can be chained on the return value.
pub fn enable_all(&mut self) -> &mut Self
Enables both I/O and time drivers.

Doing this is a shorthand for calling enable_io and enable_time individually. If additional components are added to Tokio in the future, enable_all will include these future components.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_all()
    .build()
    .unwrap();
pub fn worker_threads(&mut self, val: usize) -> &mut Self
Sets the number of worker threads the Runtime will use.

This can be any number above 0, though it is advised to keep this value on the smaller side.

This will override the value read from environment variable TOKIO_WORKER_THREADS.

§Default

The default value is the number of cores available to the system.

When using the current_thread runtime this method has no effect.
§Examples

§Multi threaded runtime with 4 threads

use tokio::runtime;

// This will spawn a work-stealing runtime with 4 worker threads.
let rt = runtime::Builder::new_multi_thread()
    .worker_threads(4)
    .build()
    .unwrap();

rt.spawn(async move {});
§Current thread runtime (will only run on the current thread via Runtime::block_on)

use tokio::runtime;

// Create a runtime that _must_ be driven from a call
// to `Runtime::block_on`.
let rt = runtime::Builder::new_current_thread()
    .build()
    .unwrap();

// This will run the runtime and future on the current thread
rt.block_on(async move {});
§Panics

This will panic if val is not larger than 0.
pub fn max_blocking_threads(&mut self, val: usize) -> &mut Self
Specifies the limit for additional threads spawned by the Runtime.

These threads are used for blocking operations like tasks spawned through spawn_blocking; this includes but is not limited to:

- fs operations
- DNS resolution through ToSocketAddrs
- writing to Stdout or Stderr
- reading from Stdin

Unlike the worker_threads, they are not always active and will exit if left idle for too long. You can change this timeout duration with thread_keep_alive.

It’s recommended to not set this limit too low in order to avoid hanging on operations requiring spawn_blocking.

The default value is 512.
§Panics

This will panic if val is not larger than 0.
§Upgrading from 0.x

In old versions max_threads limited both blocking and worker threads, but the current max_blocking_threads does not include async worker threads in the count.
pub fn thread_name(&mut self, val: impl Into<String>) -> &mut Self
Sets name of threads spawned by the Runtime’s thread pool.

The default name is “tokio-runtime-worker”.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name("my-pool")
    .build();
pub fn thread_name_fn<F>(&mut self, f: F) -> &mut Self
Sets a function used to generate the name of threads spawned by the Runtime’s thread pool.

The default name fn is || "tokio-runtime-worker".into().
§Examples

use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_name_fn(|| {
        static ATOMIC_ID: AtomicUsize = AtomicUsize::new(0);
        let id = ATOMIC_ID.fetch_add(1, Ordering::SeqCst);
        format!("my-pool-{}", id)
    })
    .build();
pub fn thread_stack_size(&mut self, val: usize) -> &mut Self
Sets the stack size (in bytes) for worker threads.
The actual stack size may be greater than this value if the platform specifies minimal stack size.
The default stack size for spawned threads is 2 MiB, though this particular stack size is subject to change in the future.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_stack_size(32 * 1024)
    .build();
pub fn on_thread_start<F>(&mut self, f: F) -> &mut Self
Executes function f after each thread is started but before it starts doing work.
This is intended for bookkeeping and monitoring use cases.
§Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_start(|| {
        println!("thread started");
    })
    .build();
pub fn on_thread_stop<F>(&mut self, f: F) -> &mut Self
Executes function f before each thread stops.
This is intended for bookkeeping and monitoring use cases.
§Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_stop(|| {
        println!("thread stopping");
    })
    .build();
pub fn on_thread_park<F>(&mut self, f: F) -> &mut Self
Executes function f just before a thread is parked (goes idle).

f is called within the Tokio context, so functions like tokio::spawn can be called, and may result in this thread being unparked immediately.
This can be used to start work only when the executor is idle, or for bookkeeping and monitoring purposes.
Note: There can only be one park callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.
§Examples

§Multithreaded executor

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_multi_thread()
    .worker_threads(1)
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
})
§Current thread executor

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::runtime;
use tokio::sync::Barrier;

let once = AtomicBool::new(true);
let barrier = Arc::new(Barrier::new(2));

let runtime = runtime::Builder::new_current_thread()
    .on_thread_park({
        let barrier = barrier.clone();
        move || {
            let barrier = barrier.clone();
            if once.swap(false, Ordering::Relaxed) {
                tokio::spawn(async move { barrier.wait().await; });
            }
        }
    })
    .build()
    .unwrap();

runtime.block_on(async {
    barrier.wait().await;
})
pub fn on_thread_unpark<F>(&mut self, f: F) -> &mut Self
Executes function f just after a thread unparks (starts executing tasks).
This is intended for bookkeeping and monitoring use cases; note that work in this callback will increase latencies when the application has allowed one or more runtime threads to go idle.
Note: There can only be one unpark callback for a runtime; calling this function more than once replaces the last callback defined, rather than adding to it.
§Examples

use tokio::runtime;

let runtime = runtime::Builder::new_multi_thread()
    .on_thread_unpark(|| {
        println!("thread unparking");
    })
    .build();

runtime.unwrap().block_on(async {
    tokio::task::yield_now().await;
    println!("Hello from Tokio!");
})
pub fn build(&mut self) -> Result<Runtime>
Creates the configured Runtime.

The returned Runtime instance is ready to spawn tasks.
§Examples

use tokio::runtime::Builder;

let rt = Builder::new_multi_thread().build().unwrap();

rt.block_on(async {
    println!("Hello from the Tokio runtime");
});
fn get_cfg(&self, workers: usize) -> Cfg
pub fn thread_keep_alive(&mut self, duration: Duration) -> &mut Self
Sets a custom timeout for a thread in the blocking pool.

By default, the timeout for a thread is set to 10 seconds. This can be overridden using .thread_keep_alive().
§Example

use std::time::Duration;
use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .thread_keep_alive(Duration::from_millis(100))
    .build();
pub fn global_queue_interval(&mut self, val: u32) -> &mut Self
Sets the number of scheduler ticks after which the scheduler will poll the global task queue.
A scheduler “tick” roughly corresponds to one poll invocation on a task.
By default the global queue interval is 31 for the current-thread scheduler. Please see the module documentation for the default behavior of the multi-thread scheduler.
Schedulers have a local queue of already-claimed tasks, and a global queue of incoming tasks. Setting the interval to a smaller value increases the fairness of the scheduler, at the cost of more synchronization overhead. That can be beneficial for prioritizing getting started on new work, especially if tasks frequently yield rather than complete or await on further I/O. Conversely, a higher value prioritizes existing work, and is a good choice when most tasks quickly complete polling.
§Panics
This function will panic if 0 is passed as an argument.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .global_queue_interval(31)
    .build();
pub fn event_interval(&mut self, val: u32) -> &mut Self
Sets the number of scheduler ticks after which the scheduler will poll for external events (timers, I/O, and so on).
A scheduler “tick” roughly corresponds to one poll invocation on a task.
By default, the event interval is 61 for all scheduler types.
Setting the event interval determines the effective “priority” of delivering these external events (which may wake up additional tasks), compared to executing tasks that are currently ready to run. A smaller value is useful when tasks frequently spend a long time in polling, or frequently yield, which can result in overly long delays picking up I/O events. Conversely, picking up new events requires extra synchronization and syscall overhead, so if tasks generally complete their polling quickly, a higher event interval will minimize that overhead while still keeping the scheduler responsive to events.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .event_interval(31)
    .build();
fn build_current_thread_runtime(&mut self) -> Result<Runtime>
fn build_current_thread_runtime_components(&mut self, local_tid: Option<ThreadId>) -> Result<(CurrentThread, Handle, BlockingPool)>
fn metrics_poll_count_histogram_builder(&self) -> Option<HistogramBuilder>
impl Builder

pub fn enable_io(&mut self) -> &mut Self
Enables the I/O driver.
Doing this enables using net, process, signal, and some I/O types on the runtime.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_multi_thread()
    .enable_io()
    .build()
    .unwrap();
pub fn max_io_events_per_tick(&mut self, capacity: usize) -> &mut Self
Enables the I/O driver and configures the max number of events to be processed per tick.
§Examples

use tokio::runtime;

let rt = runtime::Builder::new_current_thread()
    .enable_io()
    .max_io_events_per_tick(1024)
    .build()
    .unwrap();