Trait webrender::render_target::RenderTarget
pub trait RenderTarget {
    // Required methods
    fn new(
        texture_id: CacheTextureId,
        screen_size: DeviceIntSize,
        gpu_supports_fast_clears: bool,
        used_rect: DeviceIntRect,
    ) -> Self;
    fn add_task(
        &mut self,
        task_id: RenderTaskId,
        ctx: &RenderTargetContext<'_, '_>,
        gpu_cache: &mut GpuCache,
        gpu_buffer_builder: &mut GpuBufferBuilder,
        render_tasks: &RenderTaskGraph,
        clip_store: &ClipStore,
        transforms: &mut TransformPalette,
    );
    fn needs_depth(&self) -> bool;
    fn texture_id(&self) -> CacheTextureId;

    // Provided method
    fn build(
        &mut self,
        _ctx: &mut RenderTargetContext<'_, '_>,
        _gpu_cache: &mut GpuCache,
        _render_tasks: &RenderTaskGraph,
        _prim_headers: &mut PrimitiveHeaders,
        _transforms: &mut TransformPalette,
        _z_generator: &mut ZBufferIdGenerator,
        _prim_instances: &[PrimitiveInstance],
        _cmd_buffers: &CommandBufferList,
        _gpu_buffer_builder: &mut GpuBufferBuilder,
    ) { ... }
}
Represents a number of rendering operations on a surface.

In graphics parlance, a “render target” usually means “a surface (texture or framebuffer) bound to the output of a shader”. This trait has a slightly different meaning, in that it represents the operations on that surface before it’s actually bound and rendered. So a RenderTarget is built by the RenderBackend by inserting tasks, and then shipped over to the Renderer, where a device surface is resolved and the tasks are transformed into draw commands on that surface.

We express this as a trait to generalize over color and alpha surfaces. A given RenderTask will draw to one or the other, depending on its type and sometimes on its parameters. See RenderTask::target_kind.
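As a rough illustration of the shape the trait expects, the sketch below implements it for a hypothetical target that merely records which tasks were assigned to it. TaskListTarget is invented for this example and is not one of the crate’s real color or alpha targets; the parameter types (CacheTextureId, RenderTaskId, and so on) are the webrender-internal types from the signatures above and are assumed to be in scope.

// A minimal sketch, assuming the crate-internal types above are imported.
// A real target would accumulate batches/instances, not just task ids.
struct TaskListTarget {
    texture_id: CacheTextureId,
    tasks: Vec<RenderTaskId>,
}

impl RenderTarget for TaskListTarget {
    fn new(
        texture_id: CacheTextureId,
        _screen_size: DeviceIntSize,
        _gpu_supports_fast_clears: bool,
        _used_rect: DeviceIntRect,
    ) -> Self {
        TaskListTarget { texture_id, tasks: Vec::new() }
    }

    fn add_task(
        &mut self,
        task_id: RenderTaskId,
        _ctx: &RenderTargetContext<'_, '_>,
        _gpu_cache: &mut GpuCache,
        _gpu_buffer_builder: &mut GpuBufferBuilder,
        _render_tasks: &RenderTaskGraph,
        _clip_store: &ClipStore,
        _transforms: &mut TransformPalette,
    ) {
        // A real target would inspect the task kind here and queue up the
        // work that the Renderer later turns into draw calls on the surface.
        self.tasks.push(task_id);
    }

    fn needs_depth(&self) -> bool {
        // Only targets that draw z-tested opaque primitives need a depth buffer.
        false
    }

    fn texture_id(&self) -> CacheTextureId {
        // CacheTextureId is a plain id, returned by value here.
        self.texture_id
    }
}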
Required Methods§
fn new(
    texture_id: CacheTextureId,
    screen_size: DeviceIntSize,
    gpu_supports_fast_clears: bool,
    used_rect: DeviceIntRect,
) -> Self
Creates a new RenderTarget of the given type.
fn add_task(
    &mut self,
    task_id: RenderTaskId,
    ctx: &RenderTargetContext<'_, '_>,
    gpu_cache: &mut GpuCache,
    gpu_buffer_builder: &mut GpuBufferBuilder,
    render_tasks: &RenderTaskGraph,
    clip_store: &ClipStore,
    transforms: &mut TransformPalette,
)
Associates a RenderTask with this target. That task must be assigned to a region returned by invoking allocate() on this target.
TODO(gw): It’s a bit odd that we need the deferred resolves and mutable GPU cache here. They are typically used by the build step above. They are used for the blit jobs to allow resolve_image to be called. It’s a bit of extra overhead to store the image key here and then resolve them in the build step separately. BUT: if/when we add more texture cache target jobs, we might want to tidy this up.
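To make the intended call order concrete, here is a hedged sketch of how a frame-building step might drive this method: construct the target with new(), then feed it the task ids assigned to the corresponding pass. The names build_pass and pass_task_ids are invented for illustration and are not the actual RenderBackend entry points.

// Illustrative only: a generic helper that builds one target from a list
// of task ids, using only the methods defined by this trait.
fn build_pass<T: RenderTarget>(
    texture_id: CacheTextureId,
    screen_size: DeviceIntSize,
    gpu_supports_fast_clears: bool,
    used_rect: DeviceIntRect,
    pass_task_ids: &[RenderTaskId],
    ctx: &RenderTargetContext<'_, '_>,
    gpu_cache: &mut GpuCache,
    gpu_buffer_builder: &mut GpuBufferBuilder,
    render_tasks: &RenderTaskGraph,
    clip_store: &ClipStore,
    transforms: &mut TransformPalette,
) -> T {
    let mut target = T::new(texture_id, screen_size, gpu_supports_fast_clears, used_rect);
    for &task_id in pass_task_ids {
        // Each task must already have been allocated a region on this
        // target (see the allocate() note above).
        target.add_task(
            task_id,
            ctx,
            gpu_cache,
            gpu_buffer_builder,
            render_tasks,
            clip_store,
            transforms,
        );
    }
    target
}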
fn needs_depth(&self) -> bool
fn texture_id(&self) -> CacheTextureId
Provided Methods§
fn build(
    &mut self,
    _ctx: &mut RenderTargetContext<'_, '_>,
    _gpu_cache: &mut GpuCache,
    _render_tasks: &RenderTaskGraph,
    _prim_headers: &mut PrimitiveHeaders,
    _transforms: &mut TransformPalette,
    _z_generator: &mut ZBufferIdGenerator,
    _prim_instances: &[PrimitiveInstance],
    _cmd_buffers: &CommandBufferList,
    _gpu_buffer_builder: &mut GpuBufferBuilder,
)
Optional hook to provide additional processing for the target at the end of the build phase.
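Continuing the illustrative TaskListTarget sketch above, an implementor could override this default no-op to do end-of-pass work before the Renderer consumes the target. The body below is purely illustrative and would sit inside the impl RenderTarget for TaskListTarget block from the earlier sketch.

// Illustrative override of the provided `build` hook; the default is a no-op.
fn build(
    &mut self,
    _ctx: &mut RenderTargetContext<'_, '_>,
    _gpu_cache: &mut GpuCache,
    _render_tasks: &RenderTaskGraph,
    _prim_headers: &mut PrimitiveHeaders,
    _transforms: &mut TransformPalette,
    _z_generator: &mut ZBufferIdGenerator,
    _prim_instances: &[PrimitiveInstance],
    _cmd_buffers: &CommandBufferList,
    _gpu_buffer_builder: &mut GpuBufferBuilder,
) {
    // Finalize whatever state add_task accumulated. A real target would
    // build its batches / instance data here; the hypothetical
    // TaskListTarget just trims its task list.
    self.tasks.shrink_to_fit();
}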