This commit is intended to follow the stabilization disposition of the
FCP that has now finished in #84223. This stabilizes the ability to flag
thread-local initializers as `const` expressions, which enables the macro
to generate more efficient code for accessing the thread local, notably
removing runtime checks for initialization.
More information can also be found in #84223 as well as the tests where
the feature usage was removed in this PR.
Closes #84223
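For reference, a small usage sketch of the stabilized form (not taken from the PR itself):
```rust
use std::cell::Cell;

thread_local! {
    // `const` initializer: the expression must be usable in a constant, which
    // lets the macro skip the lazy-initialization check on every access.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| println!("counter = {}", c.get()));
}
```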
The "fixed" in "fixed steady state limits" means to exclude load-dependent resource prioritization
that would calculate to 100% of capacity on an idle system and less capacity on a loaded system.
Additionally I also exclude "system load" since it would be silly to try to identify
other, perhaps higher priority, processes hogging some CPU cores that aren't explicitly excluded
by masks/quotas/whatever.
This commit goes through and updates various `#[cfg]` as appropriate to
get the wasm64-unknown-unknown target behaving similarly to the
wasm32-unknown-unknown target. Most of this is just updating various
conditions for `target_arch = "wasm32"` to also account for `target_arch
= "wasm64"` where appropriate. This commit also lists `wasm64` as an
allow-listed architecture to not have the `restricted_std` feature
enabled, enabling experimentation with `-Z build-std` externally.
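A typical change looks roughly like this (illustrative sketch, not an exact hunk from the commit):
```rust
// Before: only 32-bit wasm took this code path.
#[cfg(target_arch = "wasm32")]
fn wasm_only_before() {}

// After: wasm32 and wasm64 are treated the same.
#[cfg(any(target_arch = "wasm32", target_arch = "wasm64"))]
fn wasm_only_after() {}

fn main() {}
```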
The main goal of this commit is to enable playing around with
`wasm64-unknown-unknown` externally via `-Z build-std` in a way that's
similar to the `wasm32-unknown-unknown` target. These targets are
effectively the same and differ only in their pointer size, but wasm64
is much newer and has much less ecosystem/library support, so it will
still take time for wasm64 to become fully fledged.
Add JoinHandle::is_running.
This adds:
```rust
impl<T> JoinHandle<T> {
/// Checks if the associated thread is still running its main function.
///
/// This might return `false` for a brief moment after the thread's main
/// function has returned, but before the thread itself has stopped running.
pub fn is_running(&self) -> bool;
}
```
The usual way to check whether a background thread is still running is to set some atomic flag at the end of its main function. We already do that, in the form of dropping an `Arc`, which decrements its reference count. So we might as well expose that information.
This is useful in applications with a main loop (e.g. a game, GUI, or control system) where you spawn a background task and then check every frame/iteration whether that task is finished, so you can `join()` it in that frame/iteration while keeping the program responsive.
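A rough usage sketch (hypothetical example assuming the `is_running` method proposed here):
```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Spawn the background task and keep the handle around for polling.
    let mut handle = Some(thread::spawn(|| {
        thread::sleep(Duration::from_millis(500)); // stand-in for real work
        42
    }));

    loop {
        // ... render a frame, handle input, etc. ...

        // Poll without blocking; `is_running` only briefly lags behind the
        // moment the thread's main function returns.
        let finished = handle.as_ref().map_or(true, |h| !h.is_running());
        if finished {
            if let Some(h) = handle.take() {
                // The main function has returned, so join() won't block for long.
                let result = h.join().unwrap();
                println!("background task produced {result}");
            }
            break;
        }
        thread::sleep(Duration::from_millis(16));
    }
}
```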
Add #[must_use] to remaining std functions (O-Z)
I've run out of compelling reasons to group functions together across crates so I'm just going to go module-by-module. This is half of the remaining items from the `std` crate, from O-Z.
`panicking::take_hook` has a side effect: it unregisters the current panic hook, returning it. I almost ignored it, but the documentation example shows `let _ = panic::take_hook();`, so following suit I went ahead and added a `#[must_use]`.
```rust
std::panicking fn take_hook() -> Box<dyn Fn(&PanicInfo<'_>) + 'static + Sync + Send>;
```
I added these functions that clippy did not flag:
```rust
std::path::Path fn starts_with<P: AsRef<Path>>(&self, base: P) -> bool;
std::path::Path fn ends_with<P: AsRef<Path>>(&self, child: P) -> bool;
std::path::Path fn with_file_name<S: AsRef<OsStr>>(&self, file_name: S) -> PathBuf;
std::path::Path fn with_extension<S: AsRef<OsStr>>(&self, extension: S) -> PathBuf;
```
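As a rough illustration of the effect (hypothetical snippet, not the actual std source), ignoring the returned `bool` of such a method produces a lint warning:
```rust
struct MyPath(String);

impl MyPath {
    // Mirrors the idea of marking `Path::starts_with` as `#[must_use]`.
    #[must_use = "this returns the result of the check, without modifying the path"]
    fn starts_with(&self, base: &str) -> bool {
        self.0.starts_with(base)
    }
}

fn main() {
    let p = MyPath("/etc/passwd".to_string());
    // warning: unused return value that must be used
    p.starts_with("/etc");
    // No warning when the result is consumed.
    assert!(p.starts_with("/etc"));
}
```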
Parent issue: #89692
r? `@joshtriplett`
Improve `std::thread::available_parallelism` docs
_Tracking issue: https://github.com/rust-lang/rust/issues/74479_
This PR reworks the documentation of `std:🧵:available_parallelism`, as requested [here](https://github.com/rust-lang/rust/pull/89324#issuecomment-934343254).
## Changes
The following changes are made:
- We've removed prior mentions of "hardware threads" and instead centered the docs around "parallelism" as a resource available to a program.
- We now provide examples of when `available_parallelism` may return numbers that differ from the number of CPU cores in the host machine.
- We now mention that the amount of available parallelism may change over time.
- We note which platform components we don't take into account, which more advanced users may want to be aware of.
- The example has been updated and should be a bit easier to follow; see the sketch after this list.
- We've added a docs alias to `num-cpus` which provides similar functionality to `available_parallelism`, and is one of the most popular crates on crates.io.
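A small sketch along the lines of the updated example (the exact wording in the docs may differ):
```rust
use std::thread;

fn main() -> std::io::Result<()> {
    // How much parallelism is currently available to this program. The value
    // may differ from the number of physical cores (cgroups, CPU affinity,
    // etc.) and may change over the program's lifetime.
    let parallelism = thread::available_parallelism()?.get();
    println!("available parallelism: {parallelism}");
    Ok(())
}
```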
---
Thanks!
r? `@BurntSushi`
Rename `std::thread::available_concurrency` to `std::thread::available_parallelism`
_Tracking issue: https://github.com/rust-lang/rust/issues/74479_
This PR renames `std::thread::available_concurrency` to `std::thread::available_parallelism`.
## Rationale
The API was initially named `std::thread::hardware_concurrency`, mirroring the [C++ API of the same name](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency). We eventually decided to omit any reference to the word "hardware" after [this comment](https://github.com/rust-lang/rust/pull/74480#issuecomment-662045841). And so we ended up with `available_concurrency` instead.
---
For a talk I was preparing this week I was reading through ["Understanding and expressing scalable concurrency" (A. Turon, 2013)](http://aturon.github.io/academic/turon-thesis.pdf), and the following passage stood out to me (emphasis mine):
> __Concurrency is a system-structuring mechanism.__ An interactive system that deals with disparate asynchronous events is naturally structured by division into concurrent threads with disparate responsibilities. Doing so creates a better fit between problem and solution, and can also decrease the average latency of the system by preventing long-running computations from obstructing quicker ones.
> __Parallelism is a resource.__ A given machine provides a certain capacity for parallelism, i.e., a bound on the number of computations it can perform simultaneously. The goal is to maximize throughput by intelligently using this resource. For interactive systems, parallelism can decrease latency as well.
_Chapter 2.1: Concurrency is not Parallelism. Page 30._
---
_"Concurrency is a system-structuring mechanism. Parallelism is a resource."_ — It feels like this accurately captures the way we should be thinking about these APIs. What this API returns is not "the amount of concurrency available to the program" which is a property of the program, and thus even with just a single thread is effectively unbounded. But instead it returns "the amount of _parallelism_ available to the program", which is a resource hard-constrained by the machine's capacity (and can be further restricted by e.g. operating systems).
That's why I'd like to propose we rename this API from `available_concurrency` to `available_parallelism`. This still meets the criteria we previously established of not attempting to define what exactly we mean by "hardware", "threads", and other such words. Instead we only talk about "parallelism" as an abstract resource available to our program.
r? `@joshtriplett`
Previously the thread name would first be heap allocated and then
re-allocated to add a nul terminator. Now it will be heap allocated only
once, with the nul terminator added from the start.
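A minimal sketch of the idea (hypothetical helper, not the actual patch):
```rust
use std::ffi::CString;

// Build the nul-terminated name with a single allocation by reserving room
// for the trailing nul up front, instead of allocating the bytes first and
// re-allocating when the nul is appended.
fn thread_name_to_cstring(name: &str) -> CString {
    let mut bytes = Vec::with_capacity(name.len() + 1);
    bytes.extend_from_slice(name.as_bytes());
    // CString::new checks for interior nul bytes and appends the terminator
    // into the already-reserved spare capacity.
    CString::new(bytes).expect("thread name must not contain interior nul bytes")
}

fn main() {
    let name = thread_name_to_cstring("worker-1");
    println!("{name:?}");
}
```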
- also clarifies how joining (`JoinHandle::join`) and detaching of threads work
- the previous prose implied that there is a relationship between a
spawning thread and the thread being spawned, and that "child" threads
couldn't outlive their parents unless detached, which is incorrect; see the sketch below.
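A small sketch of the corrected claim, i.e. a spawned thread can outlive the thread that spawned it without any explicit detaching:
```rust
use std::thread;
use std::time::Duration;

fn main() {
    let spawner = thread::spawn(|| {
        // This thread outlives its spawner; no detach step is needed.
        thread::spawn(|| thread::sleep(Duration::from_millis(50)))
        // The spawner returns (and exits) immediately, handing back the
        // grandchild's handle.
    });

    // The spawner has already exited, but the thread it spawned may still run.
    let grandchild = spawner.join().unwrap();
    grandchild.join().unwrap();
}
```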
The old documentation suggested the use of `yield_now` for repeated
polling instead of discouraging it; it also made the false claim that
channels are implemented using `yield_now` (they are not, except in
a corner case).
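A hypothetical sketch of the pattern the docs now discourage versus the preferred blocking call:
```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || tx.send(42).unwrap());

    // Discouraged: spinning on try_recv() with yield_now() burns CPU time.
    //
    // loop {
    //     match rx.try_recv() {
    //         Ok(v) => { println!("{v}"); break; }
    //         Err(_) => thread::yield_now(),
    //     }
    // }

    // Preferred: block until a value is actually available.
    let v = rx.recv().unwrap();
    println!("{v}");
}
```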
This commit adds a variant of the `thread_local!` macro as a new
`thread_local_const_init!` macro which requires that the initialization
expression is constant (e.g. could be stuck into a `const` if so
desired). This form of thread local allows for a more efficient
implementation of `LocalKey::with` both if the value has a destructor
and if it doesn't. If the value doesn't have a destructor, then `with`
should desugar to exactly what you'd get with `#[thread_local]`, given
sufficient inlining.
The purpose of this new form of thread locals is to precisely be
equivalent to `#[thread_local]` on platforms where possible for values
which fit the bill (those without destructors). This should help close
the performance gap between `thread_local!`, which is safe, and
`#[thread_local]`, which is not easy to use in a portable fashion.
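A sketch of the difference, written with the `const { ... }` initializer form the feature later took inside `thread_local!` (this commit introduced it as a separate macro):
```rust
use std::cell::Cell;

thread_local! {
    // Lazily initialized: each access must check whether the value has been
    // initialized yet before handing out a reference.
    static LAZY: Cell<u32> = Cell::new(1);

    // Const-initialized: no lazy-init check is needed, so with sufficient
    // inlining `with` can compile down to a plain `#[thread_local]` access
    // when the value has no destructor.
    static CONST_INIT: Cell<u32> = const { Cell::new(1) };
}

fn main() {
    LAZY.with(|c| c.set(c.get() + 1));
    CONST_INIT.with(|c| c.set(c.get() + 1));
}
```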
Split sys_common::Mutex in StaticMutex and MovableMutex.
The (unsafe) `Mutex` from `sys_common` had a rather complicated interface. You were supposed to call `init()` manually, unless you could guarantee it was neither moved nor used reentrantly.
Calling `destroy()` was also optional, although it was unclear 1) whether resources might be leaked if it was skipped, and 2) whether `destroy()` should only be called when `init()` had been called.
This allowed for a number of interesting (confusing?) different ways to use this `Mutex`, all captured in a single type.
In practice, this type was only ever used in two ways:
1. As a static variable. In this case, neither `init()` nor `destroy()` are called. The variable is never moved, and it is never used reentrantly. It is only ever locked using the `LockGuard`, never with `raw_lock`.
2. As a `Box`ed variable. In this case, both `init()` and `destroy()` are called, it will be moved and possibly used reentrantly.
No other combinations are used anywhere in `std`.
This change simplifies things by splitting this `Mutex` type into two types matching the two use cases: `StaticMutex` and `MovableMutex`.
The interface of both new types is now safer and simpler. The first one neither calls nor exposes `init`/`destroy`, and the second one calls those automatically in its `new()` and `Drop` functions. Also, the locking functions of `MovableMutex` are no longer unsafe.
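A self-contained sketch of the resulting shape (not the actual std internals; the `sys` module below is a stand-in for the platform mutex with the unsafe `init`/`lock`/`unlock`/`destroy` interface mentioned above):
```rust
mod sys {
    // Stand-in for e.g. a pthread_mutex_t wrapper; the real implementations
    // live in `sys` and may be unsafe to move after `init()`.
    pub struct Mutex(std::cell::UnsafeCell<bool>);
    unsafe impl Sync for Mutex {}
    impl Mutex {
        pub const fn new() -> Self { Self(std::cell::UnsafeCell::new(false)) }
        pub unsafe fn init(&mut self) {}
        pub unsafe fn lock(&self) { /* block until acquired */ }
        pub unsafe fn unlock(&self) {}
        pub unsafe fn destroy(&self) {}
    }
}

/// Only ever used behind a `static`: never moved, never used reentrantly,
/// so `init()`/`destroy()` are never called and locking is safe.
pub struct StaticMutex(sys::Mutex);

pub struct StaticMutexGuard(&'static sys::Mutex);

impl StaticMutex {
    pub const fn new() -> Self { Self(sys::Mutex::new()) }

    pub fn lock(&'static self) -> StaticMutexGuard {
        unsafe { self.0.lock() };
        StaticMutexGuard(&self.0)
    }
}

impl Drop for StaticMutexGuard {
    fn drop(&mut self) { unsafe { self.0.unlock() } }
}

/// Boxed so it can be moved freely; `init()` runs in `new()` and `destroy()`
/// runs on drop, making the locking interface safe to call.
pub struct MovableMutex(Box<sys::Mutex>);

impl MovableMutex {
    pub fn new() -> Self {
        let mut inner = Box::new(sys::Mutex::new());
        unsafe { inner.init() };
        Self(inner)
    }

    pub fn raw_lock(&self) { unsafe { self.0.lock() } }
    pub unsafe fn raw_unlock(&self) { unsafe { self.0.unlock() } }
}

impl Drop for MovableMutex {
    fn drop(&mut self) { unsafe { self.0.destroy() } }
}

fn main() {
    static GLOBAL: StaticMutex = StaticMutex::new();
    let _guard = GLOBAL.lock();

    let movable = MovableMutex::new();
    movable.raw_lock();
    unsafe { movable.raw_unlock() };
}
```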
---
This will also make it easier to conditionally box mutexes later, by moving that decision into sys/sys_common. Some of the mutex implementations (at least those of Wasm and 'sys/unsupported') are safe to move, so wouldn't need a box. ~~(But that's blocked on #76932 for now.)~~ (See #77380.)