Trait exr::block::reader::ChunksReader

pub trait ChunksReader:
    Sized
    + Iterator<Item = Result<Chunk>>
    + ExactSizeIterator
{
    // Required methods
    fn meta_data(&self) -> &MetaData;
    fn expected_chunk_count(&self) -> usize;

    // Provided methods
    fn headers(&self) -> &[Header] { ... }
    fn read_next_chunk(&mut self) -> Option<Result<Chunk>> { ... }
    fn on_progress<F>(self, on_progress: F) -> OnProgressChunksReader<Self, F>
        where F: FnMut(f64) { ... }
    fn decompress_parallel(
        self,
        pedantic: bool,
        insert_block: impl FnMut(&MetaData, UncompressedBlock) -> UnitResult,
    ) -> UnitResult { ... }
    fn parallel_decompressor(
        self,
        pedantic: bool,
    ) -> Result<ParallelBlockDecompressor<Self>, Self> { ... }
    fn decompress_sequential(
        self,
        pedantic: bool,
        insert_block: impl FnMut(&MetaData, UncompressedBlock) -> UnitResult,
    ) -> UnitResult { ... }
    fn sequential_decompressor(
        self,
        pedantic: bool,
    ) -> SequentialBlockDecompressor<Self> { ... }
}
Decodes the chunks in the file.

The decoded chunks can be decompressed by calling decompress_parallel, decompress_sequential, parallel_decompressor, or sequential_decompressor. Call on_progress to attach a callback that is invoked for each block that is read. The reader also contains the image meta data.
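As a self-contained sketch of the iterator contract this trait describes (the types below are toy stand-ins, not the exr crate's own), a chunks reader is essentially an exact-size iterator of fallible chunks plus a chunk count:

```rust
// Illustrative sketch only: `Chunk` and `ToyChunksReader` are placeholders
// mimicking the shape of the trait, not the exr crate's real types.
#[derive(Debug)]
struct Chunk { index: usize }

struct ToyChunksReader { next: usize, total: usize }

impl ToyChunksReader {
    /// Mirrors `expected_chunk_count`: how many chunks this reader will yield.
    fn expected_chunk_count(&self) -> usize { self.total - self.next }
}

impl Iterator for ToyChunksReader {
    type Item = Result<Chunk, String>;
    fn next(&mut self) -> Option<Self::Item> {
        if self.next < self.total {
            let chunk = Chunk { index: self.next };
            self.next += 1;
            Some(Ok(chunk))
        } else {
            None // all chunks have been read
        }
    }
}

fn main() {
    let reader = ToyChunksReader { next: 0, total: 3 };
    assert_eq!(reader.expected_chunk_count(), 3);
    let indices: Vec<usize> = reader.map(|r| r.unwrap().index).collect();
    println!("{:?}", indices); // chunks arrive in file order here
}
```

The `Result` item type matters: a chunk can fail to read individually, so consumers must handle errors per chunk rather than once up front.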
Required Methods
fn expected_chunk_count(&self) -> usize

The total number of chunks that this reader will return. This can be less than the number of chunks in the file, if some chunks are skipped.
Provided Methods
fn read_next_chunk(&mut self) -> Option<Result<Chunk>>

Read the next compressed chunk from the file. Equivalent to .next(), as this is also an iterator. Returns None if all chunks have been read.
fn on_progress<F>(self, on_progress: F) -> OnProgressChunksReader<Self, F>

Create a new reader that calls the provided progress callback for each chunk that is read from the file. If the file is decoded successfully, the callback always receives at least 0.0 at the start and 1.0 at the end.
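A minimal sketch of this adapter pattern, with toy names (not the exr internals): wrap the underlying iterator, report a fraction in 0.0..=1.0 before each item, and emit exactly 1.0 when the iterator is exhausted.

```rust
// Sketch of an `on_progress`-style wrapper: `OnProgress` is an illustrative
// name, not the exr crate's OnProgressChunksReader implementation.
struct OnProgress<I, F> { inner: I, callback: F, seen: usize, total: usize }

impl<I: Iterator, F: FnMut(f64)> Iterator for OnProgress<I, F> {
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        match self.inner.next() {
            Some(item) => {
                // report progress before yielding: the first call reports 0.0
                (self.callback)(self.seen as f64 / self.total as f64);
                self.seen += 1;
                Some(item)
            }
            None => {
                (self.callback)(1.0); // always finish at exactly 1.0
                None
            }
        }
    }
}

fn main() {
    let mut reported = Vec::new();
    let wrapped = OnProgress {
        inner: 0..4,
        callback: |p: f64| reported.push(p),
        seen: 0,
        total: 4,
    };
    let _items: Vec<i32> = wrapped.collect(); // drives the callback
    println!("{:?}", reported); // starts at 0.0, ends at 1.0
}
```

Because the wrapper is itself an iterator, it composes with the decompression methods below without changing the items that flow through.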
fn decompress_parallel(
    self,
    pedantic: bool,
    insert_block: impl FnMut(&MetaData, UncompressedBlock) -> UnitResult,
) -> UnitResult

Decompress all blocks in the file using multiple CPU cores, calling the supplied closure for each block. The order of the blocks is not deterministic. You can also use parallel_decompressor to obtain an iterator instead. Falls back to sequential processing where threads are not available, or where parallelism would not speed up the process.
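The fan-out shape behind this method can be sketched with the standard library alone (the "decompression" below is a toy stand-in, and none of these names come from the exr crate): chunks are handed to worker threads, and finished blocks arrive on a channel in whatever order the workers complete.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of parallel block decompression: compressed chunks fan out to
// threads; decompressed blocks arrive in nondeterministic order, which is
// why the real API hands each block to an `insert_block` closure.
fn main() {
    let chunks: Vec<u64> = (0..8).collect();
    let (tx, rx) = mpsc::channel();

    let handles: Vec<_> = chunks.into_iter().map(|chunk| {
        let tx = tx.clone();
        thread::spawn(move || {
            let block = chunk * 2; // pretend this is the expensive decompress step
            tx.send(block).unwrap();
        })
    }).collect();
    drop(tx); // the channel closes once every worker's sender is dropped

    // `insert_block` equivalent: consume blocks as they complete, in any order
    let mut blocks: Vec<u64> = rx.iter().collect();
    for handle in handles { handle.join().unwrap(); }

    blocks.sort(); // sorted only to print a stable result
    println!("{:?}", blocks);
}
```

This is why the closure receives blocks rather than the caller indexing into a result vector: the caller must be prepared for any completion order.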
fn parallel_decompressor(
    self,
    pedantic: bool,
) -> Result<ParallelBlockDecompressor<Self>, Self>

Return an iterator that decompresses the chunks using multiple threads. The order of the blocks is not deterministic. Use ParallelBlockDecompressor::new if you want to use your own thread pool; by default, this uses as many threads as there are CPUs. Returns self, as the error variant, if there is no need for parallel decompression.
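The `Result<ParallelBlockDecompressor<Self>, Self>` return type encodes a recoverable refusal: when parallelism would not help, the constructor hands the reader back instead of consuming it. A sketch of that pattern, with purely illustrative types:

```rust
// Sketch of the "return self on refusal" pattern: `Reader` and
// `ParallelDecompressor` are toy names, not the exr crate's types.
struct Reader { already_uncompressed: bool }

#[allow(dead_code)]
struct ParallelDecompressor { reader: Reader }

fn parallel_decompressor(reader: Reader) -> Result<ParallelDecompressor, Reader> {
    if reader.already_uncompressed {
        Err(reader) // nothing worth parallelizing: give the reader back intact
    } else {
        Ok(ParallelDecompressor { reader })
    }
}

fn main() {
    // The caller keeps ownership either way and can fall back gracefully:
    match parallel_decompressor(Reader { already_uncompressed: true }) {
        Ok(_decompressor) => println!("decompressing in parallel"),
        Err(_reader) => println!("falling back to the sequential path"),
    }
}
```

Because the constructor takes `self` by value, returning it in the `Err` variant is the only way the caller can still reach the sequential fallback.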
fn decompress_sequential(
    self,
    pedantic: bool,
    insert_block: impl FnMut(&MetaData, UncompressedBlock) -> UnitResult,
) -> UnitResult

Decompress all blocks in the file sequentially, on the current thread, calling the supplied closure for each block. You can alternatively use sequential_decompressor if you prefer an external iterator.
fn sequential_decompressor(
    self,
    pedantic: bool,
) -> SequentialBlockDecompressor<Self>

Prepare to read the chunks sequentially, using only a single thread, but with less memory overhead.
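The lazy variant can be sketched as an iterator adapter (toy types and toy "decompression" again, not the exr implementation): each call to next decompresses exactly one chunk, so only a single block is held in memory at a time.

```rust
// Sketch of a lazy sequential decompressor: wraps a chunk iterator and
// decompresses one chunk per `next` call, buffering nothing ahead.
struct SequentialDecompressor<I> { chunks: I }

impl<I: Iterator<Item = u64>> Iterator for SequentialDecompressor<I> {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        // pull one compressed chunk and "decompress" it (toy stand-in)
        self.chunks.next().map(|chunk| chunk * 2)
    }
}

fn main() {
    let decompressor = SequentialDecompressor { chunks: 0..3u64 };
    for block in decompressor {
        println!("{}", block); // one block in memory at a time
    }
}
```

This is the memory-overhead trade-off the docs mention: the parallel path keeps several blocks in flight at once, while the sequential iterator holds only the block currently being consumed.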