pub struct Atomic<T: AtomicType>(/* private fields */);
A memory location which can be safely modified from multiple execution contexts.
This has the same size, alignment and bit validity as the underlying type T. It also disables niche optimization for the same reason as UnsafeCell.
The atomic operations are implemented in a way that is fully compatible with the Linux Kernel Memory (Consistency) Model, hence they should be modeled as the corresponding LKMM atomic primitives. With the help of Atomic::from_ptr() and Atomic::as_ptr(), this provides a way to interact with C-side atomic operations (including those without the atomic prefix, e.g. READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()).
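As a rough sketch of that correspondence, using the orderings exported by this module (the pairings follow the LKMM primitives named above; this is illustrative, not a normative mapping):

use kernel::sync::atomic::{Atomic, Acquire, Relaxed, Release};

let v = Atomic::new(0i32);

// A `Relaxed` load/store corresponds to `READ_ONCE()`/`WRITE_ONCE()`.
let cur = v.load(Relaxed);
v.store(cur + 1, Relaxed);

// An `Acquire` load corresponds to `smp_load_acquire()`, and a `Release`
// store corresponds to `smp_store_release()`.
v.store(2, Release);
assert_eq!(2, v.load(Acquire));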
§Invariants
self.0 is a valid T.
§Implementations
impl<T: AtomicType> Atomic<T>
pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
where
    T: Sync,
Creates a reference to an atomic T from a pointer to T.
This is usually used when communicating with the C side or manipulating a C struct; see the examples below.
§Safety
- ptr is aligned to align_of::<T>().
- ptr is valid for reads and writes for 'a.
- For the duration of 'a, other accesses to *ptr must not cause data races (defined by LKMM) against atomic operations on the returned reference. Note that if all other accesses are atomic, then this safety requirement is trivially fulfilled.
§Examples
Using Atomic::from_ptr() combined with Atomic::load() or Atomic::store() can achieve the same functionality as READ_ONCE()/smp_load_acquire() or WRITE_ONCE()/smp_store_release() on the C side:
use kernel::sync::atomic::{Atomic, Relaxed, Release};
use kernel::types::Opaque;

// Assume there is a C struct `foo`.
mod cbindings {
    #[repr(C)]
    pub(crate) struct foo {
        pub(crate) a: i32,
        pub(crate) b: i32,
    }
}

let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });

// struct foo *foo_ptr = ..;
let foo_ptr = tmp.get();

// SAFETY: `foo_ptr` is valid, and `.a` is in bounds.
let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };

// a = READ_ONCE(foo_ptr->a);
//
// SAFETY: `foo_a_ptr` is valid for reads, and all other accesses to it are atomic, so no
// data race.
let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);

// smp_store_release(&foo_ptr->a, 2);
//
// SAFETY: `foo_a_ptr` is valid for writes, and all other accesses to it are atomic, so
// no data race.
unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
impl<T: AtomicType> Atomic<T>
where
    T::Repr: AtomicBasicOps,
pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T
Loads the value from the atomic T.
§Examples
use kernel::sync::atomic::{Atomic, Relaxed};
let x = Atomic::new(42i32);
assert_eq!(42, x.load(Relaxed));
let x = Atomic::new(42i64);
assert_eq!(42, x.load(Relaxed));
pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering)
Stores a value to the atomic T.
§Examples
use kernel::sync::atomic::{Atomic, Relaxed};
let x = Atomic::new(42i32);
assert_eq!(42, x.load(Relaxed));
x.store(43, Relaxed);
assert_eq!(43, x.load(Relaxed));
impl<T: AtomicType> Atomic<T>
where
    T::Repr: AtomicExchangeOps,
pub fn xchg<Ordering: Ordering>(&self, v: T, _: Ordering) -> T
Atomic exchange.
Atomically updates *self to v and returns the old value of *self.
§Examples
use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
let x = Atomic::new(42);
assert_eq!(42, x.xchg(52, Acquire));
assert_eq!(52, x.load(Relaxed));
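Because xchg() returns the previous value, it can implement a simple one-shot claim flag; a minimal sketch (the flag protocol here is illustrative, not part of this API):

use kernel::sync::atomic::{Atomic, Acquire};

// One-shot claim flag: exactly one execution context observes `0` and wins.
let claimed = Atomic::new(0i32);

if claimed.xchg(1, Acquire) == 0 {
    // This context won the claim; perform the one-time work here.
}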
pub fn cmpxchg<Ordering: Ordering>(
    &self,
    old: T,
    new: T,
    o: Ordering,
) -> Result<T, T>
Atomic compare and exchange.
If *self == old, atomically updates *self to new. Otherwise, *self is not modified.
Compare: The comparison is done via a byte-level comparison between *self and old.
Ordering: A successful cmpxchg provides the ordering indicated by the Ordering type parameter; a failed cmpxchg provides no ordering, and its load part is a Relaxed load.
Returns Ok(value) if the cmpxchg succeeds, in which case value is guaranteed to be equal to old; otherwise returns Err(value), where value is the current value of *self.
§Examples
use kernel::sync::atomic::{Atomic, Full, Relaxed};
let x = Atomic::new(42);
// Checks whether the cmpxchg succeeded.
let success = x.cmpxchg(52, 64, Relaxed).is_ok();

// Checks whether the cmpxchg failed.
let failure = x.cmpxchg(52, 64, Relaxed).is_err();

// Uses the old value on failure; a caller would typically retry the cmpxchg.
match x.cmpxchg(52, 64, Relaxed) {
    Ok(_) => { },
    Err(old) => {
        // do something with `old`.
    }
}

// Uses the latest value regardless, same as atomic_cmpxchg() in C.
let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
assert_eq!(64, x.load(Relaxed));
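On failure, cmpxchg() returns the latest value in Err, which can feed the next attempt directly; a minimal retry-loop sketch (store_max is a hypothetical helper, not part of this API):

use kernel::sync::atomic::{Atomic, Relaxed};

// Atomically raises `v` to at least `new` via a classic cmpxchg retry loop.
// (Hypothetical helper for illustration only.)
fn store_max(v: &Atomic<i32>, new: i32) {
    let mut cur = v.load(Relaxed);
    while cur < new {
        match v.cmpxchg(cur, new, Relaxed) {
            // Succeeded: `*v` was `cur` and is now `new`.
            Ok(_) => break,
            // Failed: `latest` is the current value of `*v`; retry with it.
            Err(latest) => cur = latest,
        }
    }
}

let x = Atomic::new(42);
store_max(&x, 64);
assert_eq!(64, x.load(Relaxed));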
impl<T: AtomicType> Atomic<T>
where
    T::Repr: AtomicArithmeticOps,
pub fn add<Rhs>(&self, v: Rhs, _: Relaxed)
where
    T: AtomicAdd<Rhs>,
Atomic add.
Atomically updates *self to (*self).wrapping_add(v).
§Examples
use kernel::sync::atomic::{Atomic, Relaxed};
let x = Atomic::new(42);
assert_eq!(42, x.load(Relaxed));
x.add(12, Relaxed);
assert_eq!(54, x.load(Relaxed));
pub fn fetch_add<Rhs, Ordering: Ordering>(&self, v: Rhs, _: Ordering) -> T
where
    T: AtomicAdd<Rhs>,
Atomic fetch and add.
Atomically updates *self to (*self).wrapping_add(v), and returns the value of *self before the update.
§Examples
use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
let x = Atomic::new(42);
assert_eq!(42, x.load(Relaxed));
assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
let x = Atomic::new(42);
assert_eq!(42, x.load(Relaxed));
assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
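Because fetch_add() returns the pre-update value, each caller observes a distinct value, which makes it suitable for handing out unique IDs; a minimal sketch:

use kernel::sync::atomic::{Atomic, Relaxed};

// Each call returns the value before the increment, so IDs are unique
// even when allocated from multiple execution contexts.
let next_id = Atomic::new(0i32);

assert_eq!(0, next_id.fetch_add(1, Relaxed));
assert_eq!(1, next_id.fetch_add(1, Relaxed));
assert_eq!(2, next_id.load(Relaxed));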
§Trait Implementations
impl<T: AtomicType> Sync for Atomic<T>
§Auto Trait Implementations
impl<T> !Freeze for Atomic<T>
impl<T> !RefUnwindSafe for Atomic<T>
impl<T> Send for Atomic<T>
impl<T> Unpin for Atomic<T>
impl<T> UnwindSafe for Atomic<T>
§Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> PinInit<T> for T

unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), Infallible>

Initializes slot.