Document memory orderings of `thread::{park, unpark}`
Document `thread::park/unpark` as having acquire/release synchronization. Without that guarantee, even the example in the documentation can deadlock:
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

let flag = Arc::new(AtomicBool::new(false));
let flag2 = Arc::clone(&flag);
let t2 = thread::spawn(move || {
    while !flag2.load(Ordering::Acquire) {
        thread::park();
    }
});
flag.store(true, Ordering::Release);
t2.thread().unpark();

// A possible interleaving without the acquire/release guarantee:
// t1: flag.store(true)
// t1: thread.unpark()
// t2: flag.load() == false
// t2 now parks, is immediately unblocked but never
// acquires the flag, and thus spins forever
```
Multiple calls to `unpark` should also maintain a release sequence to make sure operations released by previous `unpark`s are not lost:
```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

let a = Arc::new(AtomicBool::new(false));
let b = Arc::new(AtomicBool::new(false));
let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
let t2 = thread::spawn(move || {
    while !a2.load(Ordering::Acquire) || !b2.load(Ordering::Acquire) {
        thread::park();
    }
});
let t2_thread = t2.thread().clone();
thread::spawn(move || {
    a.store(true, Ordering::Release);
    t2_thread.unpark();
});
b.store(true, Ordering::Release);
t2.thread().unpark();

// A possible interleaving without a release sequence on `unpark`:
// t1: a.store(true)
// t1: t2.unpark()
// t3: b.store(true)
// t3: t2.unpark()
// t2 now parks, is immediately unblocked but never
// acquires the store of `a`, only the store of `b` which
// was released by the most recent unpark, and thus spins forever
```
This is of course a contrived example, but the guarantee is reasonable to rely upon in real code.
Note that all implementations of `park`/`unpark` already comply with these rules; the behavior is just undocumented.
Restructure and rename std thread_local internals to make it less of a maze
Every time I try to work on std's thread local internals, it feels like I'm trying to navigate a confusing maze made of macros, deeply nested modules, and types with multiple names/aliases. Time to clean it up a bit.
This PR:
- Exports `Key` with its own name (`Key`), instead of `__LocalKeyInner`
- Uses `pub macro` to put `__thread_local_inner` into an (unstable, hidden) module, removing `#[macro_export]` and removing it from the crate root.
- Removes the `__` from `__thread_local_inner`.
- Removes a few unnecessary `allow_internal_unstable` features from the macros
- Removes the `libstd_thread_internals` feature. (Merged with `thread_local_internals`.)
- And removes it from the unstable book
- Gets rid of the deeply nested modules for the `Key` definitions (`mod fast` / `mod os` / `mod statik`).
- Turns a `#[cfg]` mess into a single `cfg_if`, now that there's no `#[macro_export]` anymore that breaks with `cfg_if` (see the rough sketch after this list).
- Simplifies the `cfg_if` conditions so they are not repeated.
- Removes useless `normalize-stderr-test` directives, which were left over from when the `Key` types had different names on different platforms.
- Removes a seemingly unnecessary `realstd` re-export on `cfg(test)`.
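For context, here is a rough illustration of the `cfg_if` approach referenced in the list above (using the `cfg_if` crate); the conditions, module names, and placeholder `Key` types are made up for this sketch and are not std's actual ones:
```rust
use cfg_if::cfg_if;

// Illustrative only: select one flat `Key` implementation per platform
// in a single place, instead of scattering #[cfg] attributes across
// deeply nested modules. Conditions and names are placeholders.
cfg_if! {
    if #[cfg(target_family = "unix")] {
        mod os_local {
            pub struct Key; // stand-in for the pthread-key-based impl
        }
        pub use os_local::Key;
    } else if #[cfg(target_family = "windows")] {
        mod windows_local {
            pub struct Key; // stand-in for the TLS-index-based impl
        }
        pub use windows_local::Key;
    } else {
        mod static_local {
            pub struct Key; // stand-in for the single-threaded impl
        }
        pub use static_local::Key;
    }
}

fn main() {
    let _key = Key; // whichever implementation was selected above
}
```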
This PR changes nothing about the thread local implementation. That's for a later PR. (Which should hopefully be easier once all this stuff is a bit cleaned up.)
For larger applications it's important that users set `RUST_MIN_STACK` at the start of their program, because `min_stack` caches the value. Not doing so can lead to a later `env::set_var` call surprisingly having no effect.
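A minimal sketch of the recommended order, assuming an edition where `env::set_var` is a safe function; the stack size value is arbitrary:
```rust
use std::{env, thread};

fn main() {
    // Set RUST_MIN_STACK before spawning any thread: std reads and
    // caches the value on first use, so later changes are ignored.
    env::set_var("RUST_MIN_STACK", "8388608"); // 8 MiB, arbitrary

    thread::spawn(|| {
        // This thread's default stack size reflects the value above.
    })
    .join()
    .unwrap();
}
```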
This allows removing all the platform-dependent code from `library/std/src/thread/local.rs` and `library/std/src/thread/mod.rs`.
Signed-off-by: Ayush Singh <ayushsingh1325@gmail.com>
std: use `sync::Mutex` for internal statics
Since `sync::Mutex` is now `const`-constructible, it can be used for internal statics, removing the need for `sys_common::StaticMutex`. This adds some extra allocations on platforms which need to box their mutexes (currently SGX and some UNIX), but these will become unnecessary with the lock improvements tracked in #93740.
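As a rough illustration of what a `const`-constructible `Mutex` enables (the static's name and the helper below are made up for this sketch, not actual std internals):
```rust
use std::sync::Mutex;

// `Mutex::new` is const, so a static can be initialized directly,
// with no lazy initialization and no dedicated StaticMutex type.
// `ENV_LOCK` and `with_env_lock` are illustrative names only.
static ENV_LOCK: Mutex<()> = Mutex::new(());

fn with_env_lock<R>(f: impl FnOnce() -> R) -> R {
    let _guard = ENV_LOCK.lock().unwrap();
    f()
}

fn main() {
    with_env_lock(|| {
        // Runs while the global lock is held.
    });
}
```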
I changed the program argument implementation on Hermit; it does not need `Mutex` but can use atomics like some UNIX systems (ping `@mkroening` `@stlankes`).
sync thread_local key conditions exactly with what the macro uses
This makes the `cfg` in `mod.rs` syntactically the same as those in `local.rs`.
I don't think this should actually change anything, but it seems better to be consistent?
I looked into this due to https://github.com/rust-lang/rust/issues/102549, but this PR would make it *less* likely that `__OsLocalKeyInner` is going to get provided, so this cannot help with that issue.
r? `@thomcc`
scoped threads: pass closure through MaybeUninit to avoid invalid dangling references
The `main` function defined here looks roughly like this, if it were written as a more explicit stand-alone function:
```rust
// Not showing all the `'lifetime` tracking, the point is that
// this closure might live shorter than `thread`.
fn thread(control: ..., closure: impl FnOnce() + 'lifetime) {
closure();
control.signal_done();
// A lot of time can pass here.
}
```
Note that `thread` continues to run even after `signal_done`! Now consider what happens if the `closure` captures a reference of lifetime `'lifetime`:
- The type of `closure` is a struct (the implicit unnameable closure type) with a `&'lifetime mut T` field. References passed to a function are marked with `dereferenceable`, which is LLVM speak for *this reference will remain live for the entire duration of this function*.
- The closure runs, `signal_done` runs. Then -- potentially -- this thread gets scheduled away and the main thread runs, seeing the signal and returning to the user. Now `'lifetime` ends and the memory the reference points to might be deallocated.
- Now we have UB! The reference that was passed to `thread` with the promise of remaining live for the entire duration of the function actually got deallocated while the function was still running. Oops.
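The idea behind passing the closure through `MaybeUninit` can be sketched like this; it is a simplified stand-alone illustration, not the actual `std::thread::scope` internals:
```rust
use std::mem::MaybeUninit;

// Simplified illustration: the spawned function receives the closure
// wrapped in MaybeUninit, so its argument type no longer asserts that
// captured references stay dereferenceable for the whole function body.
fn run_then_linger<F: FnOnce()>(closure: MaybeUninit<F>) {
    // SAFETY: the caller always passes an initialized value.
    let closure = unsafe { closure.assume_init() };
    closure();
    // The closure and everything it captured have been dropped, so code
    // running from here on makes no validity claims about `'lifetime`.
}

fn main() {
    let mut x = 0;
    run_then_linger(MaybeUninit::new(|| x += 1));
    assert_eq!(x, 1);
}
```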
Long-term I think we should be able to use `ManuallyDrop` to fix this without `unsafe`, or maybe a new `MaybeDangling` type. I am working on an RFC for that. But in the meantime it'd be nice to fix this so that Miri with `-Zmiri-retag-fields` (which is needed for "full enforcement" of all the LLVM flags we generate) stops erroring on scoped threads.
Fixes https://github.com/rust-lang/rust/issues/101983
r? `@m-ou-se`
Add cgroupv1 support to available_parallelism
Fixes #97549
My dev machine uses cgroup v2, so I was only able to test that code path; the v1 code path is written based on documentation alone. I could use some help testing that it works on a machine with cgroup v1:
```
$ x.py build --stage 1
# quota.rs
fn main() {
    println!("{:?}", std::thread::available_parallelism());
}
# assuming stage1 is linked in rustup
$ rustc +stage1 quota.rs
# spawn a new cgroup scope for the current user
$ sudo systemd-run -p CPUQuota="300%" --uid=$(id -u) -tdS
# should print Ok(3)
$ ./quota
```
If it doesn't work as expected, an strace, the contents of `/proc/self/cgroup`, and the structure of `/sys/fs/cgroup` would help.
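For reference, here is a rough sketch of the cgroup v1 quota math the new code path is based on; the hard-coded `/sys/fs/cgroup/cpu` path, function name, and rounding are simplifications for illustration, not the actual std implementation (which resolves the process's own cgroup first):
```rust
use std::fs;

// Rough sketch, not the actual std code: under cgroup v1 the CPU limit
// is cpu.cfs_quota_us / cpu.cfs_period_us, where a quota of -1 means
// "no limit". The hard-coded controller path is a simplification.
fn cgroup_v1_cpu_limit() -> Option<usize> {
    let read = |name: &str| -> Option<i64> {
        fs::read_to_string(format!("/sys/fs/cgroup/cpu/{name}"))
            .ok()?
            .trim()
            .parse()
            .ok()
    };
    let quota = read("cpu.cfs_quota_us")?;
    let period = read("cpu.cfs_period_us")?;
    if quota <= 0 || period <= 0 {
        return None; // -1 (or missing) quota means no limit applies
    }
    // Round up, so e.g. CPUQuota=300% yields 3.
    Some(((quota + period - 1) / period) as usize)
}

fn main() {
    println!("{:?}", cgroup_v1_cpu_limit());
}
```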