pub struct Device {
    raw: ManuallyDrop<Box<dyn DynDevice>>,
    pub(crate) adapter: Arc<Adapter>,
    pub(crate) queue: OnceLock<Weak<Queue>>,
    queue_to_drop: OnceLock<Box<dyn DynQueue>>,
    pub(crate) zero_buffer: ManuallyDrop<Box<dyn DynBuffer>>,
    label: String,
    pub(crate) command_allocator: CommandAllocator,
    pub(crate) active_submission_index: AtomicFenceValue,
    pub(crate) last_successful_submission_index: AtomicFenceValue,
    pub(crate) fence: RwLock<ManuallyDrop<Box<dyn DynFence>>>,
    pub(crate) snatchable_lock: SnatchLock,
    pub(crate) valid: AtomicBool,
    pub(crate) trackers: Mutex<DeviceTracker>,
    pub(crate) tracker_indices: TrackerIndexAllocators,
    life_tracker: Mutex<LifetimeTracker>,
    pub(crate) bgl_pool: ResourcePool<EntryMap, BindGroupLayout>,
    pub(crate) alignments: Alignments,
    pub(crate) limits: Limits,
    pub(crate) features: Features,
    pub(crate) downlevel: DownlevelCapabilities,
    pub(crate) instance_flags: InstanceFlags,
    pub(crate) pending_writes: Mutex<ManuallyDrop<PendingWrites>>,
    pub(crate) deferred_destroy: Mutex<Vec<DeferredDestroy>>,
    pub(crate) trace: Mutex<Option<Trace>>,
    pub(crate) usage_scopes: Mutex<Vec<(BufferUsageScope, TextureUsageScope)>>,
}
Structure describing a logical device. Some members are internally mutable, stored behind mutexes.
TODO: establish a clear locking order for these: life_tracker, trackers, render_passes, pending_writes, trace.
Currently, the rules are:
- life_tracker is locked after hub.devices, enforced by the type system
- self.trackers is locked last (unenforced)
- self.trace is locked last (unenforced)
For now, avoid locking the same resource or registry twice within one call, and keep every lock's scope as small as possible. Unless otherwise specified, no lock may be acquired while holding another lock; this means you must inspect the function calls made while a lock is held to see which locks the callee may try to acquire.
Important: when locking pending_writes, check that trackers is not already locked; trackers should be locked only when needed, and for the shortest time possible, as in the sketch below.
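For illustration, here is a minimal sketch of the pending_writes/trackers rule, using simplified stand-in types rather than wgpu-core's actual code:

use std::sync::Mutex;

struct PendingWrites;
struct DeviceTracker;

struct Device {
    pending_writes: Mutex<PendingWrites>,
    trackers: Mutex<DeviceTracker>,
}

fn flush(device: &Device) {
    // `pending_writes` is taken first, per the rule above.
    let _pending = device.pending_writes.lock().unwrap();
    {
        // `trackers` is locked last and released as soon as possible.
        let _trackers = device.trackers.lock().unwrap();
        // ... record resource state transitions ...
    }
    // Work continues under `pending_writes` with `trackers` released.
}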
Fields
raw: ManuallyDrop<Box<dyn DynDevice>>
adapter: Arc<Adapter>
queue: OnceLock<Weak<Queue>>
queue_to_drop: OnceLock<Box<dyn DynQueue>>
zero_buffer: ManuallyDrop<Box<dyn DynBuffer>>
label: String
The label from the descriptor used to create the resource.
command_allocator: CommandAllocator
active_submission_index: AtomicFenceValue
The index of the last command submission that was attempted.
Note that fence may never be signalled with this value, if the command submission failed. If you need to wait for everything running on a Queue to complete, wait for last_successful_submission_index.
last_successful_submission_index: AtomicFenceValue
The index of the last successful submission to this device's hal::Queue.
Unlike active_submission_index, which is incremented each time submission is attempted, this is updated only when submission succeeds, so waiting for this value won't hang waiting for work that was never submitted.
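A minimal sketch (with simplified stand-in types, not wgpu-core's actual code) of why waiters should use last_successful_submission_index: a failed submission still consumes an attempted index, but the fence will never be signalled with it:

use std::sync::atomic::{AtomicU64, Ordering};

struct SubmissionCounters {
    active_submission_index: AtomicU64,
    last_successful_submission_index: AtomicU64,
}

impl SubmissionCounters {
    fn submit(&self, do_submit: impl FnOnce(u64) -> Result<(), ()>) {
        // The next index is reserved whether or not submission succeeds.
        let index = self.active_submission_index.fetch_add(1, Ordering::SeqCst) + 1;
        if do_submit(index).is_ok() {
            // Only a successful submission will ever signal the fence with
            // `index`, so only then is this counter advanced. Waiters should
            // wait on this value, not on the attempted index.
            self.last_successful_submission_index
                .fetch_max(index, Ordering::SeqCst);
        }
    }
}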
fence: RwLock<ManuallyDrop<Box<dyn DynFence>>>
snatchable_lock: SnatchLock
valid: AtomicBool
Is this device valid? Valid is closely associated with “lose the device”, which can be triggered by various methods, including at the end of device destroy, and by any GPU errors that cause us to no longer trust the state of the device. Ideally we would like to fold valid into the storage of the device itself (for example as an Error enum), but unfortunately we need to continue to be able to retrieve the device in poll_devices to determine if it can be dropped. If our internal accesses of devices were done through ref-counted references and external accesses checked for Error enums, we wouldn’t need this. For now, we need it. All the call sites where we check it are areas that should be revisited if we start using ref-counted references for internal access.
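A hypothetical sketch of how such a validity flag gates operations; the types below are simplified stand-ins, not wgpu-core's actual code:

use std::sync::atomic::{AtomicBool, Ordering};

struct Device {
    valid: AtomicBool,
}

#[derive(Debug)]
struct DeviceLost;

impl Device {
    fn is_valid(&self) -> bool {
        self.valid.load(Ordering::Acquire)
    }

    fn check_is_valid(&self) -> Result<(), DeviceLost> {
        if self.is_valid() { Ok(()) } else { Err(DeviceLost) }
    }

    fn lose(&self, _message: &str) {
        // Once a device is lost, it never becomes valid again.
        self.valid.store(false, Ordering::Release);
    }
}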
trackers: Mutex<DeviceTracker>
All live resources allocated with this Device.
Has to be locked temporarily only (locked last), and never before pending_writes.
tracker_indices: TrackerIndexAllocators
life_tracker: Mutex<LifetimeTracker>
bgl_pool: ResourcePool<EntryMap, BindGroupLayout>
Pool of bind group layouts, allowing deduplication.
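A hypothetical deduplicating pool, sketched with a plain HashMap; wgpu-core's ResourcePool is more involved (it also handles weak references and creation races), so this only illustrates the idea:

use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex};

struct Pool<K, V> {
    entries: Mutex<HashMap<K, Arc<V>>>,
}

impl<K: Hash + Eq, V> Pool<K, V> {
    fn get_or_create(&self, key: K, create: impl FnOnce() -> V) -> Arc<V> {
        let mut entries = self.entries.lock().unwrap();
        // Requests with an identical key share one layout instance.
        Arc::clone(entries.entry(key).or_insert_with(|| Arc::new(create())))
    }
}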
alignments: Alignments
limits: Limits
features: Features
downlevel: DownlevelCapabilities
instance_flags: InstanceFlags
pending_writes: Mutex<ManuallyDrop<PendingWrites>>
deferred_destroy: Mutex<Vec<DeferredDestroy>>
trace: Mutex<Option<Trace>>
usage_scopes: Mutex<Vec<(BufferUsageScope, TextureUsageScope)>>
Implementations
impl Device
pub(crate) fn raw(&self) -> &dyn DynDevice
pub(crate) fn require_features( &self, feature: Features, ) -> Result<(), MissingFeatures>
pub(crate) fn require_downlevel_flags( &self, flags: DownlevelFlags, ) -> Result<(), MissingDownlevelFlags>
impl Device
pub(crate) fn new( raw_device: Box<dyn DynDevice>, raw_queue: &dyn DynQueue, adapter: &Arc<Adapter>, desc: &DeviceDescriptor<'_>, trace_path: Option<&Path>, instance_flags: InstanceFlags, ) -> Result<Self, DeviceError>
pub fn is_valid(&self) -> bool
pub fn check_is_valid(&self) -> Result<(), DeviceError>
pub fn handle_hal_error(&self, error: DeviceError) -> DeviceError
pub(crate) fn release_queue(&self, queue: Box<dyn DynQueue>)
pub(crate) fn lock_life<'a>(&'a self) -> MutexGuard<'a, LifetimeTracker>
pub(crate) fn deferred_resource_destruction(&self)
Run some destroy operations that were deferred.
Destroying the resources requires taking a write lock on the device’s snatch lock, so a good reason for deferring resource destruction is when we don’t know for sure how risky it is to take the lock (typically, it shouldn’t be taken from the drop implementation of a reference-counted structure). The snatch lock must not be held while this function is called.
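A sketch of the deferral pattern with simplified stand-in types (not wgpu-core's actual code): a Drop implementation only queues the resource, and the queue is drained later, when taking the snatch write lock is safe:

use std::sync::Mutex;

enum DeferredDestroy {
    Buffer(u64), // stand-in for a raw buffer handle
}

struct Device {
    deferred_destroy: Mutex<Vec<DeferredDestroy>>,
}

impl Device {
    // Safe to call from a Drop impl: takes only a short-lived Mutex,
    // never the snatch lock.
    fn defer_destroy(&self, item: DeferredDestroy) {
        self.deferred_destroy.lock().unwrap().push(item);
    }

    // Called later, at a point where taking the snatch write lock is safe.
    fn deferred_resource_destruction(&self) {
        let items = std::mem::take(&mut *self.deferred_destroy.lock().unwrap());
        for item in items {
            match item {
                DeferredDestroy::Buffer(_raw) => { /* snatch and free the resource */ }
            }
        }
    }
}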
pub fn get_queue(&self) -> Option<Arc<Queue>>
pub fn set_queue(&self, queue: &Arc<Queue>)
pub(crate) fn maintain<'this>( &'this self, fence: RwLockReadGuard<'_, ManuallyDrop<Box<dyn DynFence>>>, maintain: Maintain<SubmissionIndex>, snatch_guard: SnatchGuard<'_>, ) -> Result<(UserClosures, bool), WaitIdleError>
Check this device for completed commands.
The maintain argument tells how the maintenance function should behave: either blocking, or just polling the current state of the GPU.
Return a pair (closures, queue_empty), where:
- closures is a list of actions to take: mapping buffers, notifying the user
- queue_empty is a boolean that is true when no queue submissions remain in flight. (We have to take the locks needed to produce this information for other reasons, so we might as well just return it to our callers.)
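A sketch of the poll-until-idle pattern this return value supports; Device, UserClosures, and maintain below are simplified stand-ins, not wgpu-core's real signatures:

struct UserClosures(Vec<Box<dyn FnOnce()>>);

impl UserClosures {
    fn fire(self) {
        // Run buffer-map callbacks and similar user notifications.
        for f in self.0 {
            f();
        }
    }
}

struct Device;

impl Device {
    fn maintain(&self) -> (UserClosures, bool) {
        (UserClosures(Vec::new()), true) // placeholder result
    }
}

fn wait_idle(device: &Device) {
    loop {
        let (closures, queue_empty) = device.maintain();
        // Fire the closures only after all device locks are released.
        closures.fire();
        if queue_empty {
            break;
        }
    }
}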
pub(crate) fn create_buffer( self: &Arc<Self>, desc: &BufferDescriptor<'_>, ) -> Result<Arc<Buffer>, CreateBufferError>
pub(crate) fn create_texture_from_hal( self: &Arc<Self>, hal_texture: Box<dyn DynTexture>, desc: &TextureDescriptor<'_>, ) -> Result<Arc<Texture>, CreateTextureError>
pub(crate) fn create_buffer_from_hal( self: &Arc<Self>, hal_buffer: Box<dyn DynBuffer>, desc: &BufferDescriptor<'_>, ) -> (Fallible<Buffer>, Option<CreateBufferError>)
pub(crate) fn create_texture( self: &Arc<Self>, desc: &TextureDescriptor<'_>, ) -> Result<Arc<Texture>, CreateTextureError>
pub(crate) fn create_texture_view( self: &Arc<Self>, texture: &Arc<Texture>, desc: &TextureViewDescriptor<'_>, ) -> Result<Arc<TextureView>, CreateTextureViewError>
pub(crate) fn create_sampler( self: &Arc<Self>, desc: &SamplerDescriptor<'_>, ) -> Result<Arc<Sampler>, CreateSamplerError>
pub(crate) fn create_shader_module<'a>( self: &Arc<Self>, desc: &ShaderModuleDescriptor<'a>, source: ShaderModuleSource<'a>, ) -> Result<Arc<ShaderModule>, CreateShaderModuleError>
pub(crate) unsafe fn create_shader_module_spirv<'a>( self: &Arc<Self>, desc: &ShaderModuleDescriptor<'a>, source: &'a [u32], ) -> Result<Arc<ShaderModule>, CreateShaderModuleError>
pub(crate) fn create_command_encoder( self: &Arc<Self>, label: &Label<'_>, ) -> Result<Arc<CommandBuffer>, DeviceError>
fn make_late_sized_buffer_groups( shader_binding_sizes: &HashMap<ResourceBinding, BufferSize, BuildHasherDefault<FxHasher>>, layout: &PipelineLayout, ) -> ArrayVec<LateSizedBufferGroup, { hal::MAX_BIND_GROUPS }>
Generate information about late-validated buffer bindings for pipelines.
pub(crate) fn create_bind_group_layout( self: &Arc<Self>, label: &Label<'_>, entry_map: EntryMap, origin: Origin, ) -> Result<Arc<BindGroupLayout>, CreateBindGroupLayoutError>
fn create_buffer_binding<'a>( &self, bb: &'a ResolvedBufferBinding, binding: u32, decl: &BindGroupLayoutEntry, used_buffer_ranges: &mut Vec<BufferInitTrackerAction>, dynamic_binding_info: &mut Vec<BindGroupDynamicBindingData>, late_buffer_binding_sizes: &mut HashMap<u32, BufferSize, BuildHasherDefault<FxHasher>>, used: &mut BindGroupStates, snatch_guard: &'a SnatchGuard<'a>, ) -> Result<BufferBinding<'a, dyn DynBuffer>, CreateBindGroupError>
fn create_sampler_binding<'a>( &self, used: &mut BindGroupStates, binding: u32, decl: &BindGroupLayoutEntry, sampler: &'a Arc<Sampler>, ) -> Result<&'a dyn DynSampler, CreateBindGroupError>
fn create_texture_binding<'a>( &self, binding: u32, decl: &BindGroupLayoutEntry, view: &'a Arc<TextureView>, used: &mut BindGroupStates, used_texture_ranges: &mut Vec<TextureInitTrackerAction>, snatch_guard: &'a SnatchGuard<'a>, ) -> Result<TextureBinding<'a, dyn DynTextureView>, CreateBindGroupError>
pub(crate) fn create_bind_group( self: &Arc<Self>, desc: ResolvedBindGroupDescriptor<'_>, ) -> Result<Arc<BindGroup>, CreateBindGroupError>
fn check_array_binding( features: Features, count: Option<NonZeroU32>, num_bindings: usize, ) -> Result<(), CreateBindGroupError>
fn texture_use_parameters( &self, binding: u32, decl: &BindGroupLayoutEntry, view: &TextureView, expected: &'static str, ) -> Result<(TextureUsages, TextureUses), CreateBindGroupError>
pub(crate) fn create_pipeline_layout( self: &Arc<Self>, desc: &ResolvedPipelineLayoutDescriptor<'_>, ) -> Result<Arc<PipelineLayout>, CreatePipelineLayoutError>
pub(crate) fn derive_pipeline_layout( self: &Arc<Self>, derived_group_layouts: ArrayVec<EntryMap, { hal::MAX_BIND_GROUPS }>, ) -> Result<Arc<PipelineLayout>, ImplicitLayoutError>
pub(crate) fn create_compute_pipeline( self: &Arc<Self>, desc: ResolvedComputePipelineDescriptor<'_>, ) -> Result<Arc<ComputePipeline>, CreateComputePipelineError>
pub(crate) fn create_render_pipeline( self: &Arc<Self>, desc: ResolvedRenderPipelineDescriptor<'_>, ) -> Result<Arc<RenderPipeline>, CreateRenderPipelineError>
pub unsafe fn create_pipeline_cache( self: &Arc<Self>, desc: &PipelineCacheDescriptor<'_>, ) -> Result<Arc<PipelineCache>, CreatePipelineCacheError>
Safety
The data field on desc must have previously been returned from crate::global::Global::pipeline_cache_get_data.
fn get_texture_format_features( &self, format: TextureFormat, ) -> TextureFormatFeatures
fn describe_format_features( &self, format: TextureFormat, ) -> Result<TextureFormatFeatures, MissingFeatures>
pub(crate) fn wait_for_submit( &self, submission_index: SubmissionIndex, ) -> Result<(), DeviceError>
pub(crate) fn create_query_set( self: &Arc<Self>, desc: &QuerySetDescriptor<'_>, ) -> Result<Arc<QuerySet>, CreateQuerySetError>
fn lose(&self, message: &str)
pub(crate) fn release_gpu_resources(&self)
pub(crate) fn new_usage_scope(&self) -> UsageScope<'_>
pub fn get_hal_counters(&self) -> HalCounters
pub fn generate_allocator_report(&self) -> Option<AllocatorReport>
impl Device
pub(crate) fn prepare_to_die(&self)
Wait for idle and remove resources that we can, before we die.