Struct exr::block::writer::ChunkWriter
pub struct ChunkWriter<W> {
header_count: usize,
byte_writer: Tracking<W>,
chunk_indices_byte_location: Range<usize>,
chunk_indices_increasing_y: OffsetTables,
chunk_count: usize,
}
Can consume compressed pixel chunks, writing them to a file.

Use sequential_blocks_compressor or parallel_blocks_compressor to compress your data, or use compress_all_blocks_sequential or compress_all_blocks_parallel.

Use on_progress to obtain a new writer that triggers a callback for each block.
Fields

header_count: usize
byte_writer: Tracking<W>
chunk_indices_byte_location: Range<usize>
chunk_indices_increasing_y: OffsetTables
chunk_count: usize
Implementations

impl<W> ChunkWriter<W>
fn new_for_buffered(
    buffered_byte_writer: W,
    headers: Headers,
    pedantic: bool,
) -> Result<(MetaData, Self)>

Writes the meta data and zeroed offset tables as a placeholder.
fn complete_meta_data(self) -> UnitResult
Seek back to the meta data, write offset tables, and flush the byte writer. Leaves the writer seeked to the middle of the file.
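The placeholder-then-patch sequence that new_for_buffered and complete_meta_data perform can be sketched with plain std::io. This is a hypothetical stand-in, not the crate's actual code: write a zeroed offset table up front, append the chunk payloads, then seek back and patch in the real byte offsets.

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

/// Sketch of the two-phase layout: reserve a zeroed offset table,
/// write the chunks, then seek back and fill in the real offsets.
fn write_with_offset_table(chunks: &[&[u8]]) -> Vec<u8> {
    let mut out = Cursor::new(Vec::new());

    // Phase 1: reserve one zeroed u64 table entry per chunk.
    let table_pos = out.position();
    for _ in chunks {
        out.write_all(&0u64.to_le_bytes()).unwrap();
    }

    // Phase 2: write the payloads, remembering where each one starts.
    let mut offsets = Vec::with_capacity(chunks.len());
    for chunk in chunks {
        offsets.push(out.position());
        out.write_all(chunk).unwrap();
    }

    // Phase 3: seek back and overwrite the placeholder table.
    out.seek(SeekFrom::Start(table_pos)).unwrap();
    for offset in &offsets {
        out.write_all(&offset.to_le_bytes()).unwrap();
    }

    out.into_inner()
}

fn main() {
    let chunks: &[&[u8]] = &[b"aaaa", b"bb"];
    let bytes = write_with_offset_table(chunks);
    // Two 8-byte table entries precede the 6 payload bytes.
    assert_eq!(bytes.len(), 22);
    assert_eq!(u64::from_le_bytes(bytes[0..8].try_into().unwrap()), 16);
    assert_eq!(u64::from_le_bytes(bytes[8..16].try_into().unwrap()), 20);
}
```

This is why the writer is left seeked to the middle of the file: the final write lands on the table near the start, not at the end of the data.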
Trait Implementations

impl<W> ChunksWriter for ChunkWriter<W>
fn total_chunks_count(&self) -> usize
The total number of chunks that the complete file will contain.
fn write_chunk(
    &mut self,
    index_in_header_increasing_y: usize,
    chunk: Chunk,
) -> UnitResult
Errors when the chunk at this index was already written. If writing results in an error, the file and the writer may remain in an invalid state and should not be used further; any further calls will result in an error and have no effect.
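The duplicate-index check described above can be sketched with a minimal stand-in (hypothetical type and field names, not the crate's internal bookkeeping):

```rust
use std::collections::HashSet;

/// Hypothetical tracker that rejects writing the same chunk index twice.
struct ChunkIndexTracker {
    written: HashSet<usize>,
}

impl ChunkIndexTracker {
    fn new() -> Self {
        Self { written: HashSet::new() }
    }

    /// Records the index; errors if it was already written.
    fn check(&mut self, index: usize) -> Result<(), String> {
        if !self.written.insert(index) {
            return Err(format!("chunk {} was already written", index));
        }
        Ok(())
    }
}

fn main() {
    let mut tracker = ChunkIndexTracker::new();
    assert!(tracker.check(0).is_ok());
    assert!(tracker.check(0).is_err()); // second write of index 0 fails
    assert!(tracker.check(1).is_ok());
}
```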
fn on_progress<F>(
    &mut self,
    on_progress: F,
) -> OnProgressChunkWriter<'_, Self, F>
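The decorator pattern behind on_progress can be illustrated with a minimal stand-in. ProgressWriter and its fields are assumptions for illustration, not the crate's OnProgressChunkWriter:

```rust
/// Hypothetical stand-in for the writer returned by `on_progress`:
/// it counts written chunks and reports the completed fraction.
struct ProgressWriter<F: FnMut(f64)> {
    written: usize,
    total: usize,
    on_progress: F,
}

impl<F: FnMut(f64)> ProgressWriter<F> {
    fn write_chunk(&mut self, _chunk: &[u8]) {
        // A real implementation would forward the chunk to the inner writer.
        self.written += 1;
        (self.on_progress)(self.written as f64 / self.total as f64);
    }
}

/// Collects the progress values reported while writing `total` chunks.
fn progress_points(total: usize) -> Vec<f64> {
    let mut seen = Vec::new();
    let mut writer = ProgressWriter {
        written: 0,
        total,
        on_progress: |p| seen.push(p),
    };
    for _ in 0..total {
        writer.write_chunk(b"");
    }
    drop(writer); // release the closure's borrow of `seen`
    seen
}

fn main() {
    assert_eq!(progress_points(4), vec![0.25, 0.5, 0.75, 1.0]);
}
```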
fn sequential_blocks_compressor<'w>(
    &'w mut self,
    meta: &'w MetaData,
) -> SequentialBlocksCompressor<'w, Self>
fn parallel_blocks_compressor<'w>(
    &'w mut self,
    meta: &'w MetaData,
) -> Option<ParallelBlocksCompressor<'w, Self>>
fn compress_all_blocks_sequential(
    self,
    meta: &MetaData,
    blocks: impl Iterator<Item = (usize, UncompressedBlock)>,
) -> UnitResult

Obtain the block iterator with MetaData::collect_ordered_blocks(...) or similar methods.

fn compress_all_blocks_parallel(
    self,
    meta: &MetaData,
    blocks: impl Iterator<Item = (usize, UncompressedBlock)>,
) -> UnitResult

Obtain the block iterator with MetaData::collect_ordered_blocks(...) or similar methods.
Falls back to sequential processing where threads are not available, or where parallelism would not speed up the process.
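The fallback decision can be sketched as follows. This is an assumed heuristic for illustration only; the crate's actual condition may differ:

```rust
use std::thread;

/// Assumed heuristic: parallel compression is only worthwhile with
/// more than one available thread and more than one chunk to compress.
fn should_compress_in_parallel(chunk_count: usize) -> bool {
    let threads = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1); // assume a single thread if the query fails
    threads > 1 && chunk_count > 1
}

fn main() {
    // A single chunk never benefits from parallelism.
    assert!(!should_compress_in_parallel(1));
    assert!(!should_compress_in_parallel(0));
}
```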