pub struct Bucket<T> {
    ptr: NonNull<T>,
}
A reference to a hash table bucket containing a T.
This is usually just a pointer to the element itself. However, if the element is a ZST, then we instead track the index of the element in the table so that erase works properly.
Fields§
§ptr: NonNull<T>
Implementations§
impl<T> Bucket<T>
unsafe fn from_base_index(base: NonNull<T>, index: usize) -> Self
Creates a Bucket that contains a pointer to the data. The pointer is calculated as an offset from the given base pointer (a convenience for base.as_ptr().sub(index)).
index is in units of T; e.g., an index of 3 represents a pointer offset of 3 * size_of::<T>() bytes.
If T is a ZST, then we instead track the index of the element in the table so that erase works properly (we return NonNull::new_unchecked((index + 1) as *mut T)).
§Safety
If mem::size_of::<T>() != 0, then the safety rules are directly derived from the safety rules for the <*mut T>::sub method of *mut T and the safety rules of the NonNull::new_unchecked function.
Thus, in order to uphold the safety contracts for the <*mut T>::sub method and the NonNull::new_unchecked function, as well as for the correct operation of this crate, the following rules are necessary and sufficient:
- the base pointer must not be dangling and must point to the end of the first value element of the data part of the table, i.e. it must be the pointer returned by RawTable::data_end or by RawTableInner::data_end<T>;
- index must not be greater than RawTableInner.bucket_mask, i.e. index <= RawTableInner.bucket_mask or, in other words, (index + 1) must be no greater than the number returned by RawTable::buckets or RawTableInner::buckets.
If mem::size_of::<T>() == 0, then the only requirement is that index must not be greater than RawTableInner.bucket_mask, i.e. index <= RawTableInner.bucket_mask or, in other words, (index + 1) must be no greater than the number returned by RawTable::buckets or RawTableInner::buckets.
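For orientation, here is a minimal standalone sketch of the calculation described above. It is not the crate's implementation; the helper name is hypothetical, and the caller must uphold the rules from this safety section.

use core::ptr::NonNull;

// Hypothetical illustration of the pointer calculation described above.
// `base` plays the role of the end-of-data pointer (RawTable::data_end) and
// `index` must satisfy the rules from the safety section.
unsafe fn bucket_ptr_from_base<T>(base: NonNull<T>, index: usize) -> NonNull<T> {
    if core::mem::size_of::<T>() == 0 {
        // For a ZST there is no meaningful address, so the index (plus one,
        // to keep the pointer non-null) is stored in the pointer itself.
        NonNull::new_unchecked((index + 1) as *mut T)
    } else {
        // Buckets live below the end-of-data pointer, so move downwards.
        NonNull::new_unchecked(base.as_ptr().sub(index))
    }
}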
unsafe fn to_base_index(&self, base: NonNull<T>) -> usize
Calculates the index of a Bucket as the distance between two pointers (a convenience for base.as_ptr().offset_from(self.ptr.as_ptr()) as usize).
The returned value is in units of T: the distance in bytes divided by core::mem::size_of::<T>().
If T is a ZST, then we return the index of the element in the table so that erase works properly (we return self.ptr.as_ptr() as usize - 1).
This function is the inverse of from_base_index.
§Safety
If mem::size_of::<T>() != 0, then the safety rules are directly derived from the safety rules for the <*const T>::offset_from method of *const T.
Thus, in order to uphold the safety contracts for the <*const T>::offset_from method, as well as for the correct operation of this crate, the following rules are necessary and sufficient:
- the base pointer must not be dangling and must point to the end of the first element of the data part of the table, i.e. it must be a pointer returned by RawTable::data_end or by RawTableInner::data_end<T>;
- self must not contain a dangling pointer;
- both self and base must be created from the same RawTable (or RawTableInner).
If mem::size_of::<T>() == 0
, this function is always safe.
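Correspondingly, a minimal sketch of the inverse calculation (again a hypothetical standalone helper, not the crate's code):

use core::ptr::NonNull;

// Hypothetical illustration of the index calculation described above.
// `bucket_ptr` plays the role of `self.ptr`; both pointers must come from the
// same table, per the safety rules in this section.
unsafe fn index_from_base<T>(base: NonNull<T>, bucket_ptr: NonNull<T>) -> usize {
    if core::mem::size_of::<T>() == 0 {
        // For a ZST the pointer itself stores the index plus one.
        bucket_ptr.as_ptr() as usize - 1
    } else {
        // Distance, in units of T, from the bucket back up to the end of the data.
        base.as_ptr().offset_from(bucket_ptr.as_ptr()) as usize
    }
}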
pub fn as_ptr(&self) -> *mut T
Acquires the underlying raw pointer *mut T to the data.
§Note
If T is not Copy, do not use *mut T methods that can call the destructor of T (for example the <*mut T>::drop_in_place method), because properly dropping the data also requires clearing the data control bytes. If we drop the data but do not clear the data control byte, it leads to a double drop when RawTable goes out of scope.
If you modify an already initialized value, then Hash and Eq on the new T value and its borrowed form must match those for the old T value, as the map will not re-evaluate where the new value should go, meaning the value may become “lost” if its location does not reflect its state.
§Examples
use core::hash::{BuildHasher, Hash};
use hashbrown::raw::{Bucket, RawTable};

type NewHashBuilder = core::hash::BuildHasherDefault<ahash::AHasher>;

fn make_hash<K: Hash + ?Sized, S: BuildHasher>(hash_builder: &S, key: &K) -> u64 {
    use core::hash::Hasher;
    let mut state = hash_builder.build_hasher();
    key.hash(&mut state);
    state.finish()
}

let hash_builder = NewHashBuilder::default();
let mut table = RawTable::new();

let value = ("a", 100);
let hash = make_hash(&hash_builder, &value.0);

table.insert(hash, value.clone(), |val| make_hash(&hash_builder, &val.0));

let bucket: Bucket<(&str, i32)> = table.find(hash, |(k1, _)| k1 == &value.0).unwrap();

assert_eq!(unsafe { &*bucket.as_ptr() }, &("a", 100));
unsafe fn next_n(&self, offset: usize) -> Self
Creates a new Bucket that is offset from self by the given offset. The pointer is calculated as an offset from the self pointer (a convenience for self.ptr.as_ptr().sub(offset)).
This function is used for iterators.
offset is in units of T; e.g., an offset of 3 represents a pointer offset of 3 * size_of::<T>() bytes.
§Safety
If mem::size_of::<T>() != 0, then the safety rules are directly derived from the safety rules for the <*mut T>::sub method of *mut T and the safety rules of the NonNull::new_unchecked function.
Thus, in order to uphold the safety contracts for the <*mut T>::sub method and the NonNull::new_unchecked function, as well as for the correct operation of this crate, the following rules are necessary and sufficient:
- the pointer contained in self must not be dangling;
- self.to_base_index() + offset must not be greater than RawTableInner.bucket_mask, i.e. (self.to_base_index() + offset) <= RawTableInner.bucket_mask or, in other words, self.to_base_index() + offset + 1 must be no greater than the number returned by RawTable::buckets or RawTableInner::buckets.
If mem::size_of::<T>() == 0, then the only requirement is that self.to_base_index() + offset must not be greater than RawTableInner.bucket_mask, i.e. (self.to_base_index() + offset) <= RawTableInner.bucket_mask or, in other words, self.to_base_index() + offset + 1 must be no greater than the number returned by RawTable::buckets or RawTableInner::buckets.
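As a quick illustration of the non-ZST case described above (a hypothetical standalone helper, not the crate's code):

use core::ptr::NonNull;

// Buckets sit below the end-of-data pointer, so stepping `offset` elements
// forward through the table means subtracting from the current pointer.
// The caller must uphold the bounds rules from this safety section.
unsafe fn step_bucket_ptr<T>(ptr: NonNull<T>, offset: usize) -> NonNull<T> {
    NonNull::new_unchecked(ptr.as_ptr().sub(offset))
}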
pub(crate) unsafe fn drop(&self)
Executes the destructor (if any) of the pointed-to data.
§Safety
See ptr::drop_in_place for safety concerns.
You should use RawTable::erase instead of this function, or be careful when calling this function directly, because properly dropping the data also requires clearing the data control bytes. If we drop the data but do not erase the data control byte, it leads to a double drop when RawTable goes out of scope.
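For comparison, a minimal sketch of the recommended path through RawTable::erase, reusing the setup of the earlier examples (make_hash and NewHashBuilder are the same illustrative helpers used there):

use core::hash::{BuildHasher, Hash};
use hashbrown::raw::RawTable;

type NewHashBuilder = core::hash::BuildHasherDefault<ahash::AHasher>;

fn make_hash<K: Hash + ?Sized, S: BuildHasher>(hash_builder: &S, key: &K) -> u64 {
    use core::hash::Hasher;
    let mut state = hash_builder.build_hasher();
    key.hash(&mut state);
    state.finish()
}

let hash_builder = NewHashBuilder::default();
let mut table = RawTable::new();

let value = ("a", 100);
let hash = make_hash(&hash_builder, &value.0);

table.insert(hash, value.clone(), |val| make_hash(&hash_builder, &val.0));

if let Some(bucket) = table.find(hash, |(k, _)| k == &value.0) {
    // Safety: `bucket` was just obtained from `table` and the entry is still live.
    // `erase` drops the value and clears its control byte in one step.
    unsafe { table.erase(bucket) };
}

assert!(table.find(hash, |(k, _)| k == &value.0).is_none());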
pub(crate) unsafe fn read(&self) -> T
Reads the value
from self
without moving it. This leaves the
memory in self
unchanged.
§Safety
See ptr::read for safety concerns.
You should use RawTable::remove instead of this function, or be careful when calling this function directly, because the compiler calls the destructor of the read value when it goes out of scope. This can cause a double drop when RawTable goes out of scope, since the data control byte has not been erased.
pub(crate) unsafe fn write(&self, val: T)
Overwrites a memory location with the given value without reading or dropping the old value (like the ptr::write function).
§Safety
See ptr::write
for safety concerns.
§Note
Hash and Eq on the new T value and its borrowed form must match those for the old T value, as the map will not re-evaluate where the new value should go, meaning the value may become “lost” if its location does not reflect its state.
pub unsafe fn as_ref<'a>(&self) -> &'a T
Returns a shared immutable reference to the value.
§Safety
See NonNull::as_ref
for safety concerns.
§Examples
use core::hash::{BuildHasher, Hash};
use hashbrown::raw::{Bucket, RawTable};

type NewHashBuilder = core::hash::BuildHasherDefault<ahash::AHasher>;

fn make_hash<K: Hash + ?Sized, S: BuildHasher>(hash_builder: &S, key: &K) -> u64 {
    use core::hash::Hasher;
    let mut state = hash_builder.build_hasher();
    key.hash(&mut state);
    state.finish()
}

let hash_builder = NewHashBuilder::default();
let mut table = RawTable::new();

let value: (&str, String) = ("A pony", "is a small horse".to_owned());
let hash = make_hash(&hash_builder, &value.0);

table.insert(hash, value.clone(), |val| make_hash(&hash_builder, &val.0));

let bucket: Bucket<(&str, String)> = table.find(hash, |(k, _)| k == &value.0).unwrap();

assert_eq!(
    unsafe { bucket.as_ref() },
    &("A pony", "is a small horse".to_owned())
);
pub unsafe fn as_mut<'a>(&self) -> &'a mut T
Returns a unique mutable reference to the value.
§Safety
See NonNull::as_mut
for safety concerns.
§Note
Hash and Eq on the new T value and its borrowed form must match those for the old T value, as the map will not re-evaluate where the new value should go, meaning the value may become “lost” if its location does not reflect its state.
§Examples
use core::hash::{BuildHasher, Hash};
use hashbrown::raw::{Bucket, RawTable};

type NewHashBuilder = core::hash::BuildHasherDefault<ahash::AHasher>;

fn make_hash<K: Hash + ?Sized, S: BuildHasher>(hash_builder: &S, key: &K) -> u64 {
    use core::hash::Hasher;
    let mut state = hash_builder.build_hasher();
    key.hash(&mut state);
    state.finish()
}

let hash_builder = NewHashBuilder::default();
let mut table = RawTable::new();

let value: (&str, String) = ("A pony", "is a small horse".to_owned());
let hash = make_hash(&hash_builder, &value.0);

table.insert(hash, value.clone(), |val| make_hash(&hash_builder, &val.0));

let bucket: Bucket<(&str, String)> = table.find(hash, |(k, _)| k == &value.0).unwrap();

unsafe {
    bucket
        .as_mut()
        .1
        .push_str(" less than 147 cm at the withers")
};

assert_eq!(
    unsafe { bucket.as_ref() },
    &(
        "A pony",
        "is a small horse less than 147 cm at the withers".to_owned()
    )
);
pub unsafe fn copy_from_nonoverlapping(&self, other: &Self)
Copies size_of<T>
bytes from other
to self
. The source
and destination may not overlap.
§Safety
See ptr::copy_nonoverlapping
for safety concerns.
Like read
, copy_nonoverlapping
creates a bitwise copy of T
, regardless of
whether T
is Copy
. If T
is not Copy
, using both the values
in the region beginning at *self
and the region beginning at *other
can
violate memory safety.
§Note
Hash and Eq on the new T value and its borrowed form must match those for the old T value, as the map will not re-evaluate where the new value should go, meaning the value may become “lost” if its location does not reflect its state.
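Conceptually, the copy is equivalent to a single-element ptr::copy_nonoverlapping, as in this hypothetical standalone helper (not the crate's code):

// One bitwise, non-overlapping copy of exactly one T, i.e. size_of::<T>() bytes.
// `dst` must be valid for writes, `src` for reads, and the two regions must not overlap.
unsafe fn copy_one_value<T>(dst: *mut T, src: *const T) {
    core::ptr::copy_nonoverlapping(src, dst, 1);
}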