Trait aho_corasick::packed::vector::Vector

pub(crate) trait Vector:
    Copy
    + Debug
    + Send
    + Sync
    + UnwindSafe
    + RefUnwindSafe {
    const BITS: usize;
    const BYTES: usize;

    // Required methods
    unsafe fn splat(byte: u8) -> Self;
    unsafe fn load_unaligned(data: *const u8) -> Self;
    unsafe fn is_zero(self) -> bool;
    unsafe fn cmpeq(self, vector2: Self) -> Self;
    unsafe fn and(self, vector2: Self) -> Self;
    unsafe fn or(self, vector2: Self) -> Self;
    unsafe fn shift_8bit_lane_right<const BITS: i32>(self) -> Self;
    unsafe fn shift_in_one_byte(self, vector2: Self) -> Self;
    unsafe fn shift_in_two_bytes(self, vector2: Self) -> Self;
    unsafe fn shift_in_three_bytes(self, vector2: Self) -> Self;
    unsafe fn shuffle_bytes(self, indices: Self) -> Self;
    unsafe fn for_each_64bit_lane<T>(
        self,
        f: impl FnMut(usize, u64) -> Option<T>,
    ) -> Option<T>;
}

A trait for describing vector operations used by vectorized searchers.

The trait is highly constrained to the low-level vector operations needed by the specific algorithms used in this crate. In general, it was invented mostly to be generic over x86’s __m128i and __m256i types. At the time of writing, it also supports the wasm and aarch64 128-bit vector types.

Safety

All methods are unsafe since they are intended to be implemented using vendor intrinsics, which are themselves unsafe. Callers must ensure that the appropriate target features are enabled in the calling function and that the current CPU supports them. All implementations should avoid marking the routines with #[target_feature] and instead mark them as #[inline(always)] to ensure they get appropriately inlined. (inline(always) cannot be used with target_feature.)
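As a sketch of the calling convention this implies (the function names, stub bodies, and the choice of SSSE3 are illustrative, not this crate's actual code), the outermost routine carries the target feature and is reached only after a runtime CPU check, while the Vector methods inline into it:

#[target_feature(enable = "ssse3")]
unsafe fn search_ssse3(haystack: &[u8]) -> Option<usize> {
    // `Vector` methods on `__m128i` are `#[inline(always)]`, so they
    // inline into this function and compile with SSSE3 enabled.
    // ... vectorized search body elided ...
    None
}

fn search(haystack: &[u8]) -> Option<usize> {
    if is_x86_feature_detected!("ssse3") {
        // SAFETY: the runtime check above confirmed CPU support, and
        // `search_ssse3` enables the matching target feature.
        unsafe { search_ssse3(haystack) }
    } else {
        None // a real caller would fall back to a scalar path here
    }
}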

Required Associated Constants

const BITS: usize

The number of bits in the vector.

const BYTES: usize

The number of bytes in the vector. That is, this is the size of the vector in memory.

Required Methods

unsafe fn splat(byte: u8) -> Self

Create a vector with 8-bit lanes with the given byte repeated into each lane.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn load_unaligned(data: *const u8) -> Self

Read a vector-size number of bytes from the given pointer. The pointer does not need to be aligned.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

Callers must guarantee that at least BYTES bytes are readable from data.

unsafe fn is_zero(self) -> bool

Returns true if and only if this vector has zero in all of its lanes.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn cmpeq(self, vector2: Self) -> Self

Do an 8-bit pairwise equality check. If lane i is equal in this vector and the one given, then lane i in the resulting vector is set to 0xFF. Otherwise, it is set to 0x00.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.
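Taken together, splat, load_unaligned, cmpeq and is_zero compose into the usual vectorized scan shape. A minimal sketch, assuming the caller has already verified the required target features (the function name and loop structure are illustrative):

unsafe fn find_byte<V: Vector>(haystack: &[u8], needle: u8) -> Option<usize> {
    let pat = V::splat(needle); // `needle` in every 8-bit lane
    let mut at = 0;
    while at + V::BYTES <= haystack.len() {
        // In bounds per the `load_unaligned` contract: at least
        // `V::BYTES` bytes are readable at this pointer.
        let chunk = V::load_unaligned(haystack.as_ptr().add(at));
        if !chunk.cmpeq(pat).is_zero() {
            // Some lane matched; rescan this chunk a byte at a time
            // to recover the exact offset.
            return haystack[at..at + V::BYTES]
                .iter()
                .position(|&b| b == needle)
                .map(|i| at + i);
        }
        at += V::BYTES;
    }
    // Finish any trailing partial chunk with a scalar scan.
    haystack[at..].iter().position(|&b| b == needle).map(|i| at + i)
}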

unsafe fn and(self, vector2: Self) -> Self

Perform a bitwise ‘and’ of this vector and the one given and return the result.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn or(self, vector2: Self) -> Self

Perform a bitwise ‘or’ of this vector and the one given and return the result.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn shift_8bit_lane_right<const BITS: i32>(self) -> Self

Shift each 8-bit lane in this vector to the right by the number of bits indicated by the BITS type parameter.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.
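A scalar model of the semantics, using an illustrative 4-byte vector (real implementations operate on 16 or 32 bytes; in Teddy this is typically called with BITS = 4 to extract each byte's high nibble):

fn shift_8bit_lane_right_model(a: [u8; 4]) -> [u8; 4] {
    // Each 8-bit lane shifts independently; bits shifted out are lost.
    [a[0] >> 4, a[1] >> 4, a[2] >> 4, a[3] >> 4]
}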

unsafe fn shift_in_one_byte(self, vector2: Self) -> Self

Shift this vector to the left by one byte and shift the most significant byte of vector2 into the least significant position of this vector.

Stated differently, this behaves as if self and vector2 were concatenated into a 2 * Self::BITS temporary buffer and then shifted right by Self::BYTES - 1 bytes.

With respect to the Teddy algorithm, vector2 is usually a previous Self::BYTES chunk from the haystack and self is the chunk immediately following it. This permits combining the last byte from the previous chunk (vector2) with the first Self::BYTES - 1 bytes from the current chunk, which aligns the results of various shuffles so that they can be and-ed together and a possible candidate discovered.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.
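A scalar model of the concatenate-and-shift behavior, using illustrative 4-byte vectors; shift_in_two_bytes and shift_in_three_bytes below differ only in how many trailing bytes of vector2 are carried in:

fn shift_in_one_byte_model(cur: [u8; 4], prev: [u8; 4]) -> [u8; 4] {
    // The last byte of `prev` followed by the first three bytes of
    // `cur`: a window starting one byte before `cur` begins.
    [prev[3], cur[0], cur[1], cur[2]]
}

With prev = *b"abcd" (the previous chunk) and cur = *b"efgh" (the chunk after it), the model returns *b"defg", a window straddling the chunk boundary by one byte. The two- and three-byte variants would return *b"cdef" and *b"bcde" respectively.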

unsafe fn shift_in_two_bytes(self, vector2: Self) -> Self

Shift this vector to the left by two bytes and shift the two most significant bytes of vector2 into the least significant position of this vector.

Stated differently, this behaves as if self and vector2 were concatenated into a 2 * Self::BITS temporary buffer and then shifted right by Self::BYTES - 2 bytes.

With respect to the Teddy algorithm, vector2 is usually a previous Self::BYTES chunk from the haystack and self is the chunk immediately following it. This permits combining the last two bytes from the previous chunk (vector2) with the first Self::BYTES - 2 bytes from the current chunk, which aligns the results of various shuffles so that they can be and-ed together and a possible candidate discovered.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn shift_in_three_bytes(self, vector2: Self) -> Self

Shift this vector to the left by three bytes and shift the three most significant bytes of vector2 into the least significant position of this vector.

Stated differently, this behaves as if self and vector2 were concatenated into a 2 * Self::BITS temporary buffer and then shifted right by Self::BYTES - 3 bytes.

With respect to the Teddy algorithm, vector2 is usually a previous Self::BYTES chunk from the haystack and self is the chunk immediately following it. This permits combining the last three bytes from the previous chunk (vector2) with the first Self::BYTES - 3 bytes from the current chunk, which aligns the results of various shuffles so that they can be and-ed together and a possible candidate discovered.

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.

unsafe fn shuffle_bytes(self, indices: Self) -> Self

Shuffles the bytes in this vector according to the indices in each of the corresponding lanes in indices.

If i is a lane index, A is this vector, B is indices, and C is the resulting vector, then C[i] = A[B[i]].

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.
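A scalar model of the indexing rule on illustrative 4-byte vectors. Real hardware adds wrinkles the model ignores: x86's pshufb, for example, zeroes a lane when the high bit of its index is set, and the 256-bit variant shuffles each 128-bit half independently.

fn shuffle_bytes_model(a: [u8; 4], indices: [u8; 4]) -> [u8; 4] {
    let mut c = [0u8; 4];
    for i in 0..4 {
        // Output lane `i` takes whichever input lane `indices[i]` names.
        c[i] = a[indices[i] as usize];
    }
    c
}

For example, shuffle_bytes_model([10, 20, 30, 40], [3, 3, 0, 1]) returns [40, 40, 10, 20].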

unsafe fn for_each_64bit_lane<T>(self, f: impl FnMut(usize, u64) -> Option<T>) -> Option<T>

Call the provided function for each 64-bit lane in this vector. The given function is provided the lane index and lane value as a u64.

If f returns Some, then iteration over the lanes is stopped and the value is returned. Otherwise, this returns None.

Notes

Conceptually it would be nice if we could have an unpack64(self) -> [u64; BITS / 64] method, but defining that is tricky given Rust’s current support for const generics. And even if we could, it would be tricky to write generic code over it. (Not impossible. We could introduce another layer that requires AsRef<[u64]> or something.)
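A hedged sketch of the extra layer that note alludes to, with all names hypothetical and not part of this crate:

trait Unpack64 {
    // An associated array type standing in for `[u64; BITS / 64]`,
    // which const generics cannot conveniently express here.
    type Lanes: AsRef<[u64]>;
    unsafe fn unpack64(self) -> Self::Lanes;
}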

Safety

Callers must ensure that this is okay to call in the current target for the current CPU.
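A usage sketch (the helper name is illustrative): find the byte offset of the first 64-bit lane with any bits set, assuming candidate came out of earlier vector operations.

unsafe fn first_nonzero_lane_offset<V: Vector>(candidate: V) -> Option<usize> {
    candidate.for_each_64bit_lane(|lane, bits| {
        if bits != 0 {
            Some(lane * 8) // each 64-bit lane covers 8 bytes
        } else {
            None // keep scanning subsequent lanes
        }
    })
}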

Object Safety

This trait is not object safe.

Implementations on Foreign Types

impl Vector for __m128i

const BITS: usize = 128usize

const BYTES: usize = 16usize

unsafe fn splat(byte: u8) -> __m128i

unsafe fn load_unaligned(data: *const u8) -> __m128i

unsafe fn is_zero(self) -> bool

unsafe fn cmpeq(self, vector2: Self) -> __m128i

unsafe fn and(self, vector2: Self) -> __m128i

unsafe fn or(self, vector2: Self) -> __m128i

unsafe fn shift_8bit_lane_right<const BITS: i32>(self) -> Self

unsafe fn shift_in_one_byte(self, vector2: Self) -> Self

unsafe fn shift_in_two_bytes(self, vector2: Self) -> Self

unsafe fn shift_in_three_bytes(self, vector2: Self) -> Self

unsafe fn shuffle_bytes(self, indices: Self) -> Self

unsafe fn for_each_64bit_lane<T>(self, f: impl FnMut(usize, u64) -> Option<T>) -> Option<T>
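For orientation, a plausible mapping of a few of these methods onto SSE2 intrinsics. This is a sketch of how such an implementation commonly looks, not necessarily this crate's exact code:

use core::arch::x86_64::*;

#[inline(always)]
unsafe fn splat_m128i(byte: u8) -> __m128i {
    _mm_set1_epi8(byte as i8)
}

#[inline(always)]
unsafe fn load_unaligned_m128i(data: *const u8) -> __m128i {
    // `_mm_loadu_si128` imposes no alignment requirement.
    _mm_loadu_si128(data as *const __m128i)
}

#[inline(always)]
unsafe fn is_zero_m128i(v: __m128i) -> bool {
    // Compare each lane to zero, then gather each lane's top bit:
    // 0xFFFF means all sixteen lanes compared equal to zero.
    let cmp = _mm_cmpeq_epi8(v, _mm_setzero_si128());
    _mm_movemask_epi8(cmp) == 0xFFFF
}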

impl Vector for __m256i

const BITS: usize = 256usize

const BYTES: usize = 32usize

unsafe fn splat(byte: u8) -> __m256i

unsafe fn load_unaligned(data: *const u8) -> __m256i

unsafe fn is_zero(self) -> bool

unsafe fn cmpeq(self, vector2: Self) -> __m256i

unsafe fn and(self, vector2: Self) -> __m256i

unsafe fn or(self, vector2: Self) -> __m256i

unsafe fn shift_8bit_lane_right<const BITS: i32>(self) -> Self

unsafe fn shift_in_one_byte(self, vector2: Self) -> Self

unsafe fn shift_in_two_bytes(self, vector2: Self) -> Self

unsafe fn shift_in_three_bytes(self, vector2: Self) -> Self

unsafe fn shuffle_bytes(self, indices: Self) -> Self

unsafe fn for_each_64bit_lane<T>(self, f: impl FnMut(usize, u64) -> Option<T>) -> Option<T>

Implementors