//! A "once initialization" primitive
//!
//! This primitive is meant to be used to run one-time initialization. An
//! example use case would be for initializing an FFI library.

// A "once" is a relatively simple primitive, and it's also typically provided
// by the OS as well (see `pthread_once` or `InitOnceExecuteOnce`). The OS
// primitives, however, tend to have surprising restrictions, such as the Unix
// one not allowing an argument to be passed to the function.
//
// As a result, we end up implementing it ourselves in the standard library.
// This also gives us the opportunity to optimize the implementation a bit which
// should help the fast path on call sites. Consequently, let's explain how this
// primitive works now!
//
// So to recap, the guarantees of a Once are that it will call the
// initialization closure at most once, and it will never return until the
// closure that is currently running has finished. This means that we need
// some form of blocking here while the custom callback is running, at the
// very least.
// Additionally, we add on the restriction of **poisoning**. Whenever an
// initialization closure panics, the Once enters a "poisoned" state which means
// that all future calls will immediately panic as well.
//
// So to implement this, one might first reach for a `Mutex`, but those cannot
// be put into a `static`. It also gets a lot harder with poisoning to figure
// out when the mutex needs to be deallocated because it's not after the closure
// finishes, but after the first successful closure finishes.
//
// All in all, this is instead implemented with atomics and lock-free
// operations! Whee! Each `Once` has one word of atomic state, and this state is
// CAS'd on to determine what to do. There are four possible states of a `Once`:
//
// * Incomplete - no initialization has run yet, and no thread is currently
//                using the Once.
// * Poisoned - some thread has previously attempted to initialize the Once, but
//              it panicked, so the Once is now poisoned. There are no other
//              threads currently accessing this Once.
// * Running - some thread is currently attempting to run initialization. It may
//             succeed, so all future threads need to wait for it to finish.
//             Note that this state is accompanied with a payload, described
//             below.
// * Complete - initialization has completed and all future calls should finish
//              immediately.
//
// With 4 states we need 2 bits to encode this, and we use the remaining bits
// in the word we have allocated as a queue of threads waiting for the thread
// responsible for entering the RUNNING state. This queue is just a linked list
// of Waiter nodes which is monotonically increasing in size. Each node is
// allocated on the stack, and whenever the running closure finishes it will
// consume the entire queue and notify all waiters they should try again.
//
// You'll find a few more details in the implementation, but that's the gist of
// it!
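//
// As a rough sketch (illustrative only, not extra API guarantees), the state
// transitions are:
//
//     INCOMPLETE --CAS--> RUNNING --closure returned--> COMPLETE
//                            |
//                            +----closure panicked----> POISONED
//
// and a POISONED Once can be taken back into RUNNING (and, on success, to
// COMPLETE) by `call_once_force`.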
//
// Atomic orderings:
// When running `Once` we deal with multiple atomics:
// `Once.state_and_queue` and an unknown number of `Waiter.signaled`.
// * `state_and_queue` is used (1) as a state flag, (2) for synchronizing the
//   result of the `Once`, and (3) for synchronizing `Waiter` nodes.
//     - At the end of the `call_inner` function we have to make sure the result
//       of the `Once` is acquired. So every load which can be the only one to
//       load COMPLETED must have at least Acquire ordering, which means all
//       three of them.
//     - `WaiterQueue::Drop` is the only place that may store COMPLETED, and
//       must do so with Release ordering to make the result available.
//     - `wait` inserts `Waiter` nodes as a pointer in `state_and_queue`, and
//       needs to make the nodes available with Release ordering. The load in
//       its `compare_exchange` can be Relaxed because it only has to compare
//       the atomic, not to read other data.
//     - `WaiterQueue::Drop` must see the `Waiter` nodes, so it must load
//       `state_and_queue` with Acquire ordering.
//     - There is just one store where `state_and_queue` is used only as a
//       state flag, without having to synchronize data: switching the state
//       from INCOMPLETE to RUNNING in `call_inner`. This store can be Relaxed,
//       but the read has to be Acquire because of the requirements mentioned
//       above.
// * `Waiter.signaled` is both used as a flag, and to protect a field with
//   interior mutability in `Waiter`. `Waiter.thread` is changed in
//   `WaiterQueue::Drop` which then sets `signaled` with Release ordering.
//   After `wait` loads `signaled` with Acquire and sees it is true, it needs to
//   see the changes to drop the `Waiter` struct correctly.
// * There is one place where the two atomics `Once.state_and_queue` and
//   `Waiter.signaled` come together, and might be reordered by the compiler or
//   processor. Because both use Acquire ordering such a reordering is not
//   allowed, so no need for SeqCst.
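//
// (A compact restatement of the bullets above, not additional guarantees: the
// Release half of the AcqRel `swap` in `WaiterQueue::Drop` pairs with the
// Acquire loads of `state_and_queue` in `is_completed` and `call_inner`, and
// the Release store to `signaled` in `WaiterQueue::Drop` pairs with the
// Acquire load in `wait`'s park loop.)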

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::cell::Cell;
use crate::fmt;
use crate::marker;
use crate::panic::UnwindSafe;
use crate::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use crate::thread::{self, Thread};

/// A synchronization primitive which can be used to run a one-time global
/// initialization. Useful for one-time initialization for FFI or related
/// functionality. This type can only be constructed with [`Once::new()`].
///
/// # Examples
///
/// ```
/// use std::sync::Once;
///
/// static START: Once = Once::new();
///
/// START.call_once(|| {
///     // run initialization here
/// });
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Once {
    // `state_and_queue` is actually a pointer to a `Waiter` with extra state
    // bits, so we add the `PhantomData` appropriately.
    state_and_queue: AtomicUsize,
    _marker: marker::PhantomData<*const Waiter>,
}

// The `PhantomData` of a raw pointer removes these two auto traits, but we
// enforce both below in the implementation so this should be safe to add.
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for Once {}
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Send for Once {}

#[stable(feature = "sync_once_ref_unwind_safe", since = "1.59.0")]
impl UnwindSafe for Once {}

/// State yielded to [`Once::call_once_force()`]’s closure parameter. The state
/// can be used to query the poison status of the [`Once`].
#[stable(feature = "once_poison", since = "1.51.0")]
#[derive(Debug)]
pub struct OnceState {
    poisoned: bool,
    set_state_on_drop_to: Cell<usize>,
}

/// Initialization value for static [`Once`] values.
///
/// # Examples
///
/// ```
/// use std::sync::{Once, ONCE_INIT};
///
/// static START: Once = ONCE_INIT;
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(
    since = "1.38.0",
    reason = "the `new` function is now preferred",
    suggestion = "Once::new()"
)]
pub const ONCE_INIT: Once = Once::new();

// Four states that a Once can be in, encoded into the lower bits of
// `state_and_queue` in the Once structure.
const INCOMPLETE: usize = 0x0;
const POISONED: usize = 0x1;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;

// Mask to learn about the state. All other bits are the queue of waiters if
// this is in the RUNNING state.
const STATE_MASK: usize = 0x3;
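
// As a rough, illustrative example (the address is made up): if the head
// `Waiter` of the queue were at address 0x7ffd_2000, then while initialization
// is running `state_and_queue` would hold 0x7ffd_2000 | RUNNING == 0x7ffd_2002;
// `value & STATE_MASK` recovers RUNNING and `(value & !STATE_MASK) as *const
// Waiter` recovers the queue head. The `#[repr(align(4))]` on `Waiter` below is
// what keeps the two low bits of such a pointer zero and thus free for state.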

// Representation of a node in the linked list of waiters, used while in the
// RUNNING state.
// Note: `Waiter` can't hold a mutable pointer to the next thread, because then
// `wait` would both hand out a mutable reference to its `Waiter` node, and keep
// a shared reference to check `signaled`. Instead we hold shared references and
// use interior mutability.
#[repr(align(4))] // Ensure the two lower bits are free to use as state bits.
struct Waiter {
    thread: Cell<Option<Thread>>,
    signaled: AtomicBool,
    next: *const Waiter,
}

// Head of a linked list of waiters.
// Every node is a struct on the stack of a waiting thread.
// Will wake up the waiters when it gets dropped, i.e. also on panic.
struct WaiterQueue<'a> {
    state_and_queue: &'a AtomicUsize,
    set_state_on_drop_to: usize,
}

impl Once {
    /// Creates a new `Once` value.
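    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static START: Once = Once::new();
    /// ```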
    #[inline]
    #[stable(feature = "once_new", since = "1.2.0")]
    #[rustc_const_stable(feature = "const_once_new", since = "1.32.0")]
    #[must_use]
    pub const fn new() -> Once {
        Once { state_and_queue: AtomicUsize::new(INCOMPLETE), _marker: marker::PhantomData }
    }

    /// Performs an initialization routine once and only once. The given closure
    /// will be executed if this is the first time `call_once` has been called,
    /// and otherwise the routine will *not* be invoked.
    ///
    /// This method will block the calling thread if another initialization
    /// routine is currently running.
    ///
    /// When this function returns, it is guaranteed that some initialization
    /// has run and completed (it might not be the closure specified). It is also
    /// guaranteed that any memory writes performed by the executed closure can
    /// be reliably observed by other threads at this point (there is a
    /// happens-before relation between the closure and code executing after the
    /// return).
    ///
    /// If the given closure recursively invokes `call_once` on the same [`Once`]
    /// instance, the exact behavior is not specified; allowed outcomes are
    /// a panic or a deadlock.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static mut VAL: usize = 0;
    /// static INIT: Once = Once::new();
    ///
    /// // Accessing a `static mut` is unsafe much of the time, but if we do so
    /// // in a synchronized fashion (e.g., write once or read all) then we're
    /// // good to go!
    /// //
    /// // This function will only call `expensive_computation` once, and will
    /// // otherwise always return the value returned from the first invocation.
    /// fn get_cached_val() -> usize {
    ///     unsafe {
    ///         INIT.call_once(|| {
    ///             VAL = expensive_computation();
    ///         });
    ///         VAL
    ///     }
    /// }
    ///
    /// fn expensive_computation() -> usize {
    ///     // ...
    /// # 2
    /// }
    /// ```
    ///
    /// # Panics
    ///
    /// The closure `f` will only be executed once if this is called
    /// concurrently amongst many threads. If that closure panics, however, then
    /// it will *poison* this [`Once`] instance, causing all future invocations of
    /// `call_once` to also panic.
    ///
    /// This is similar to [poisoning with mutexes][poison].
    ///
    /// [poison]: struct.Mutex.html#poisoning
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn call_once<F>(&self, f: F)
    where
        F: FnOnce(),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(false, &mut |_| f.take().unwrap()());
    }

    /// Performs the same function as [`call_once()`] except ignores poisoning.
    ///
    /// Unlike [`call_once()`], if this [`Once`] has been poisoned (i.e., a previous
    /// call to [`call_once()`] or [`call_once_force()`] caused a panic), calling
    /// [`call_once_force()`] will still invoke the closure `f` and will _not_
    /// result in an immediate panic. If `f` panics, the [`Once`] will remain
    /// in a poison state. If `f` does _not_ panic, the [`Once`] will no
    /// longer be in a poison state and all future calls to [`call_once()`] or
    /// [`call_once_force()`] will be no-ops.
    ///
    /// The closure `f` is yielded a [`OnceState`] structure which can be used
    /// to query the poison status of the [`Once`].
    ///
    /// [`call_once()`]: Once::call_once
    /// [`call_once_force()`]: Once::call_once_force
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // poisoning propagates
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| {});
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // call_once_force will still run and reset the poisoned state
    /// INIT.call_once_force(|state| {
    ///     assert!(state.is_poisoned());
    /// });
    ///
    /// // once any success happens, we stop propagating the poison
    /// INIT.call_once(|| {});
    /// ```
    #[stable(feature = "once_poison", since = "1.51.0")]
    pub fn call_once_force<F>(&self, f: F)
    where
        F: FnOnce(&OnceState),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(true, &mut |p| f.take().unwrap()(p));
    }

    /// Returns `true` if some [`call_once()`] call has completed
    /// successfully. Specifically, `is_completed` will return false in
    /// the following situations:
    /// * [`call_once()`] was not called at all,
    /// * [`call_once()`] was called, but has not yet completed,
    /// * the [`Once`] instance is poisoned
    ///
    /// This function returning `false` does not mean that [`Once`] has not been
    /// executed. For example, it may have been executed in the time between
    /// when `is_completed` starts executing and when it returns, in which case
    /// the `false` return value would be stale (but still permissible).
    ///
    /// [`call_once()`]: Once::call_once
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// INIT.call_once(|| {
    ///     assert_eq!(INIT.is_completed(), false);
    /// });
    /// assert_eq!(INIT.is_completed(), true);
    /// ```
    ///
    /// ```
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    /// assert_eq!(INIT.is_completed(), false);
    /// ```
    #[stable(feature = "once_is_completed", since = "1.43.0")]
    #[inline]
    pub fn is_completed(&self) -> bool {
        // An `Acquire` load is enough because that makes all the initialization
        // operations visible to us, and, this being a fast path, weaker
        // ordering helps with performance. This `Acquire` synchronizes with
        // `Release` operations on the slow path.
        self.state_and_queue.load(Ordering::Acquire) == COMPLETE
    }

    // This is a non-generic function to reduce the monomorphization cost of
    // using `call_once` (this isn't exactly a trivial or small implementation).
    //
    // Additionally, this is tagged with `#[cold]` as it should indeed be cold
    // and it helps let LLVM know that calls to this function should be off the
    // fast path. Essentially, this should help generate more straight line code
    // in LLVM.
    //
    // Finally, this takes an `FnMut` instead of a `FnOnce` because there's
    // currently no way to take an `FnOnce` and call it via virtual dispatch
    // without some allocation overhead.
    #[cold]
    fn call_inner(&self, ignore_poisoning: bool, init: &mut dyn FnMut(&OnceState)) {
        let mut state_and_queue = self.state_and_queue.load(Ordering::Acquire);
        loop {
            match state_and_queue {
                COMPLETE => break,
                POISONED if !ignore_poisoning => {
                    // Panic to propagate the poison.
                    panic!("Once instance has previously been poisoned");
                }
                POISONED | INCOMPLETE => {
                    // Try to register this thread as the one RUNNING.
                    let exchange_result = self.state_and_queue.compare_exchange(
                        state_and_queue,
                        RUNNING,
                        Ordering::Acquire,
                        Ordering::Acquire,
                    );
                    if let Err(old) = exchange_result {
                        state_and_queue = old;
                        continue;
                    }
                    // `waiter_queue` will manage other waiting threads, and
                    // wake them up on drop.
                    let mut waiter_queue = WaiterQueue {
                        state_and_queue: &self.state_and_queue,
                        set_state_on_drop_to: POISONED,
                    };
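                    // If `init` below panics, `waiter_queue` is dropped during
                    // the unwind with `set_state_on_drop_to` still POISONED,
                    // which is what records the poison and wakes any waiters.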
                    // Run the initialization function, letting it know if we're
                    // poisoned or not.
                    let init_state = OnceState {
                        poisoned: state_and_queue == POISONED,
                        set_state_on_drop_to: Cell::new(COMPLETE),
                    };
                    init(&init_state);
                    waiter_queue.set_state_on_drop_to = init_state.set_state_on_drop_to.get();
                    break;
                }
                _ => {
                    // All other values must be RUNNING with possibly a
                    // pointer to the waiter queue in the more significant bits.
                    assert!(state_and_queue & STATE_MASK == RUNNING);
                    wait(&self.state_and_queue, state_and_queue);
                    state_and_queue = self.state_and_queue.load(Ordering::Acquire);
                }
            }
        }
    }
}

fn wait(state_and_queue: &AtomicUsize, mut current_state: usize) {
    // Note: the following code was carefully written to avoid creating a
    // mutable reference to `node` that gets aliased.
    loop {
        // Don't queue this thread if the status is no longer running,
        // otherwise we will not be woken up.
        if current_state & STATE_MASK != RUNNING {
            return;
        }

        // Create the node for our current thread.
        let node = Waiter {
            thread: Cell::new(Some(thread::current())),
            signaled: AtomicBool::new(false),
            next: (current_state & !STATE_MASK) as *const Waiter,
        };
        let me = &node as *const Waiter as usize;
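        // Thanks to `#[repr(align(4))]` on `Waiter`, the two low bits of `me`
        // are zero, leaving them free to carry the RUNNING state below.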

        // Try to slide in the node at the head of the linked list, making sure
        // that another thread didn't just replace the head of the linked list.
        let exchange_result = state_and_queue.compare_exchange(
            current_state,
            me | RUNNING,
            Ordering::Release,
            Ordering::Relaxed,
        );
        if let Err(old) = exchange_result {
            current_state = old;
            continue;
        }

        // We have enqueued ourselves, now let's wait.
        // It is important not to return before being signaled, otherwise we
        // would drop our `Waiter` node and leave a hole in the linked list
        // (and a dangling reference). Guard against spurious wakeups by
        // reparking ourselves until we are signaled.
        while !node.signaled.load(Ordering::Acquire) {
            // If the managing thread happens to signal and unpark us before we
            // can park ourselves, the result could be this thread never gets
            // unparked. Luckily `park` comes with the guarantee that if an
            // `unpark` arrives just before (while this thread is not yet
            // parked), the `park` call returns immediately instead of blocking.
            thread::park();
        }
        break;
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl fmt::Debug for Once {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Once").finish_non_exhaustive()
    }
}

impl Drop for WaiterQueue<'_> {
    fn drop(&mut self) {
        // Swap out our state with however we finished.
        let state_and_queue =
            self.state_and_queue.swap(self.set_state_on_drop_to, Ordering::AcqRel);

        // We should only ever see an old state which was RUNNING.
        assert_eq!(state_and_queue & STATE_MASK, RUNNING);

        // Walk the entire linked list of waiters and wake them up (in lifo
        // order, last to register is first to wake up).
        unsafe {
            // Right after setting `node.signaled = true` the other thread may
            // free `node` if there happens to be a spurious wakeup.
            // So we have to take out the `thread` field and copy the pointer to
            // `next` first.
            let mut queue = (state_and_queue & !STATE_MASK) as *const Waiter;
            while !queue.is_null() {
                let next = (*queue).next;
                let thread = (*queue).thread.take().unwrap();
                (*queue).signaled.store(true, Ordering::Release);
                // ^- FIXME (maybe): This is another case of issue #55005
                // `store()` has a potentially dangling ref to `signaled`.
                queue = next;
                thread.unpark();
            }
        }
    }
}

impl OnceState {
    /// Returns `true` if the associated [`Once`] was poisoned prior to the
    /// invocation of the closure passed to [`Once::call_once_force()`].
    ///
    /// # Examples
    ///
    /// A poisoned [`Once`]:
    ///
    /// ```
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(state.is_poisoned());
    /// });
    /// ```
    ///
    /// An unpoisoned [`Once`]:
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(!state.is_poisoned());
    /// });
    /// ```
    #[stable(feature = "once_poison", since = "1.51.0")]
    pub fn is_poisoned(&self) -> bool {
        self.poisoned
    }

    /// Poison the associated [`Once`] without explicitly panicking.
    // NOTE: This is currently only exposed for the `lazy` module
    pub(crate) fn poison(&self) {
        self.set_state_on_drop_to.set(POISONED);
    }
}