Trait memchr::vector::Vector

pub(crate) trait Vector: Copy + Debug {
    type Mask: MoveMask;

    const BYTES: usize;
    const ALIGN: usize;

    // Required methods
    unsafe fn splat(byte: u8) -> Self;
    unsafe fn load_aligned(data: *const u8) -> Self;
    unsafe fn load_unaligned(data: *const u8) -> Self;
    unsafe fn movemask(self) -> Self::Mask;
    unsafe fn cmpeq(self, vector2: Self) -> Self;
    unsafe fn and(self, vector2: Self) -> Self;
    unsafe fn or(self, vector2: Self) -> Self;

    // Provided method
    unsafe fn movemask_will_have_non_zero(self) -> bool { ... }
}

A trait for describing vector operations used by vectorized searchers.

The trait is highly constrained to the low-level vector operations that vectorized searchers need. In general, it was invented mostly to be generic over x86’s __m128i and __m256i types. At the time of writing, it also supports wasm and aarch64 128-bit vector types.

§Safety

All methods are unsafe since they are intended to be implemented using vendor intrinsics, which are themselves unsafe. Callers must ensure that the appropriate target features are enabled in the calling function and that the current CPU supports them. All implementations should avoid marking the routines with #[target_feature] and instead mark them as #[inline(always)] to ensure they get appropriately inlined. (#[inline(always)] cannot be combined with #[target_feature].)
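As a portable illustration of the contract these operations implement (not memchr’s actual code), the following sketch models a 16-byte vector as a plain byte array, with a u32 as a stand-in for Self::Mask. It shows how splat, cmpeq, and movemask compose into the classic "find a byte" step:

```rust
// A portable, scalar model of the Vector contract, for illustration only.
// Real implementations wrap types like __m128i and use vendor intrinsics.
#[derive(Clone, Copy, Debug)]
struct ToyVector([u8; 16]);

impl ToyVector {
    // Repeat `byte` into every 8-bit lane.
    fn splat(byte: u8) -> Self {
        ToyVector([byte; 16])
    }

    // cmpeq: each lane becomes 0xFF where the lanes are equal, 0x00 otherwise.
    fn cmpeq(self, other: Self) -> Self {
        let mut out = [0u8; 16];
        for i in 0..16 {
            out[i] = if self.0[i] == other.0[i] { 0xFF } else { 0x00 };
        }
        ToyVector(out)
    }

    // movemask: one bit per lane, set if that lane's high bit is set.
    fn movemask(self) -> u32 {
        let mut mask = 0u32;
        for i in 0..16 {
            mask |= u32::from(self.0[i] >> 7) << i;
        }
        mask
    }
}

fn main() {
    let haystack = ToyVector(*b"hello, world!!!!");
    let needle = ToyVector::splat(b'o');
    let mask = haystack.cmpeq(needle).movemask();
    // 'o' occurs at byte offsets 4 and 8.
    assert_eq!(mask, (1 << 4) | (1 << 8));
    // The offset of the first match falls out of the mask directly.
    assert_eq!(mask.trailing_zeros(), 4);
}
```

A searcher built on the real trait does exactly this per vector-sized chunk, then uses the mask’s trailing zeros to locate the first matching byte.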

Required Associated Types§

type Mask: MoveMask

The type of the value returned by Vector::movemask.

This supports abstracting over the specific representation used in order to accommodate different representations in different ISAs.

Required Associated Constants§

const BYTES: usize

The number of bytes in the vector. That is, this is the size of the vector in memory.

const ALIGN: usize

The bits that must be zero in order for a *const u8 pointer to be correctly aligned to read vector values.
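In other words, ALIGN is a mask (BYTES - 1 in the x86 implementations below), not an alignment value. A minimal sketch of how such a mask is used, assuming a 16-byte vector:

```rust
// Sketch of how an ALIGN mask tests and fixes pointer alignment.
// For a 16-byte vector, ALIGN is 15 (0b1111): the low four address
// bits must be zero for an aligned load.
const BYTES: usize = 16;
const ALIGN: usize = BYTES - 1;

// An address is aligned when none of the ALIGN bits are set.
fn is_aligned(addr: usize) -> bool {
    addr & ALIGN == 0
}

// Round an address up to the next aligned boundary, a common step
// when a searcher switches from unaligned to aligned loads.
fn round_up(addr: usize) -> usize {
    (addr + ALIGN) & !ALIGN
}

fn main() {
    assert!(is_aligned(0x1000));
    assert!(!is_aligned(0x1001));
    assert_eq!(round_up(0x1001), 0x1010);
    assert_eq!(round_up(0x1000), 0x1000);
}
```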

Required Methods§

unsafe fn splat(byte: u8) -> Self

Create a vector whose 8-bit lanes each contain the given byte.

unsafe fn load_aligned(data: *const u8) -> Self

Read a vector-size number of bytes from the given pointer. The pointer must be aligned to the size of the vector.

§Safety

Callers must guarantee that at least BYTES bytes are readable from data and that data is aligned to a BYTES boundary.

unsafe fn load_unaligned(data: *const u8) -> Self

Read a vector-size number of bytes from the given pointer. The pointer does not need to be aligned.

§Safety

Callers must guarantee that at least BYTES bytes are readable from data.

unsafe fn movemask(self) -> Self::Mask

_mm_movemask_epi8 or _mm256_movemask_epi8

unsafe fn cmpeq(self, vector2: Self) -> Self

_mm_cmpeq_epi8 or _mm256_cmpeq_epi8

unsafe fn and(self, vector2: Self) -> Self

_mm_and_si128 or _mm256_and_si256

unsafe fn or(self, vector2: Self) -> Self

_mm_or_si128 or _mm256_or_si256

Provided Methods§

unsafe fn movemask_will_have_non_zero(self) -> bool

Returns true if and only if Self::movemask would return a mask that contains at least one non-zero bit.
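A scalar sketch of this contract, again using a u32 mask as a stand-in for Self::Mask: the provided default can simply compute the full movemask and test it for any set bit, while an ISA-specific override may answer the yes/no question more cheaply than materializing the whole mask.

```rust
// Scalar model of the provided method's contract (illustration only).
fn movemask(lanes: &[u8; 16]) -> u32 {
    let mut mask = 0u32;
    for (i, &b) in lanes.iter().enumerate() {
        mask |= u32::from(b >> 7) << i;
    }
    mask
}

// Must agree with `movemask(..) != 0`, however it is implemented.
fn movemask_will_have_non_zero(lanes: &[u8; 16]) -> bool {
    movemask(lanes) != 0
}

fn main() {
    assert!(!movemask_will_have_non_zero(&[0u8; 16]));
    let mut lanes = [0u8; 16];
    lanes[7] = 0xFF;
    assert!(movemask_will_have_non_zero(&lanes));
}
```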

Object Safety§

This trait is not object safe.

Implementations on Foreign Types§

impl Vector for __m128i

const BYTES: usize = 16usize
const ALIGN: usize = 15usize
type Mask = SensibleMoveMask
unsafe fn splat(byte: u8) -> __m128i
unsafe fn load_aligned(data: *const u8) -> __m128i
unsafe fn load_unaligned(data: *const u8) -> __m128i
unsafe fn movemask(self) -> SensibleMoveMask
unsafe fn cmpeq(self, vector2: Self) -> __m128i
unsafe fn and(self, vector2: Self) -> __m128i
unsafe fn or(self, vector2: Self) -> __m128i

impl Vector for __m256i

const BYTES: usize = 32usize
const ALIGN: usize = 31usize
type Mask = SensibleMoveMask
unsafe fn splat(byte: u8) -> __m256i
unsafe fn load_aligned(data: *const u8) -> __m256i
unsafe fn load_unaligned(data: *const u8) -> __m256i
unsafe fn movemask(self) -> SensibleMoveMask
unsafe fn cmpeq(self, vector2: Self) -> __m256i
unsafe fn and(self, vector2: Self) -> __m256i
unsafe fn or(self, vector2: Self) -> __m256i

Implementors§