Rollup of 10 pull requests
Successful merges:
- #118665 (Consolidate all associated items on the NonZero integer types into a single impl block per type)
- #118798 (Use AtomicU8 instead of AtomicUsize in backtrace.rs)
- #119062 (Deny braced macro invocations in let-else)
- #119138 (Docs: Use non-SeqCst in module example of atomics)
- #119907 (Update `fn()` trait implementation docs)
- #120083 (Warn when not having a profiler runtime means that coverage tests won't be run/blessed)
- #120107 (dead_code treats #[repr(transparent)] the same as #[repr(C)])
- #120110 (Update documentation for Vec::into_boxed_slice to be more clear about excess capacity)
- #120113 (Remove myself from review rotation)
- #120118 (Fix typo in documentation in base.rs)
r? `@ghost`
`@rustbot` modify labels: rollup
Expand lint tables && make clippy happy 🎉
This PR expands the lint tables in `./Cargo.toml` and thereby makes `cargo clippy` exit successfully! 🎉 Fixes #15918
## How?
At the beginning there are some warning-level lints for rustc.
Next, and most importantly, there is the clippy lint table, which has several sections:
- First, the lint groups.
- Second, all lints that are permanently allowed, with the reasoning for why each is allowed.
- Third, a large list of temporarily allowed lints. These should be removed in the mid-term, but doing so incurs a substantial amount of work, so they are allowed for now and can be worked through bit by bit.
- Fourth, all lints that should warn.

Additionally, there are a few allow attributes in the code for lints that should be permanently allowed in that specific place, but not in the whole code base.
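For illustration, such a scoped allow might look like this (a minimal sketch; the function and the justification are made up, and clippy's default threshold of seven arguments is assumed):

```rust
// A lint that is acceptable in this one place but should stay enabled
// elsewhere gets a targeted allow, ideally with a short justification.
#[allow(clippy::too_many_arguments)] // arity mirrors an external API
pub fn configure(
    a: u8, b: u8, c: u8, d: u8,
    e: u8, f: u8, g: u8, h: u8,
) {
    // ...
}
```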
## Follow up work
- [ ] Run clippy in CI
- [ ] Remove tidy test (at least `@Veykril` wrote this in #15017)
- [ ] Work on temporarily allowed lints
Pass each obligation to an fn callback with its respective inference context. This avoids needing to keep around copies of obligations or inference contexts.
Specify usability of inspect_typeck in comment.
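A minimal sketch of the pattern, using stand-in types rather than rustc's real `InferCtxt`/`Obligation` API:

```rust
// Stand-in types; the real ones live in rustc's trait solver.
struct InferCtxt;
struct Obligation;

// Instead of returning owned copies of each (context, obligation) pair,
// hand borrows of each pair to the caller's callback. Nothing is cloned
// and nothing outlives this call.
fn for_each_obligation(
    pairs: &[(InferCtxt, Obligation)],
    mut f: impl FnMut(&InferCtxt, &Obligation),
) {
    for (infcx, obligation) in pairs {
        f(infcx, obligation);
    }
}
```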
internal: Record FnAbi
This unfortunately breaks our lub coercions, so I will need to look into fixing that first, though I am not sure what is going wrong, or where...
Stubbed some stuff out for the time being.
The internal function was unsound: it could cause UB in rare cases where
the user inadvertently stored the returned object in a location that could
outlive the TyCtxt.
In order to make it safe, we now take a type context as an argument to
the internal fn, and we ensure that interned items are lifted using the
provided context.
Thus, this change ensures that the compiler can properly enforce
that the object does not outlive the type context it was lifted to.
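A simplified sketch of the discipline this enforces, using placeholder types instead of the real `TyCtxt`:

```rust
use std::marker::PhantomData;

struct TyCtxt<'tcx>(PhantomData<&'tcx ()>);
struct Interned<'tcx>(PhantomData<&'tcx ()>);

// Because the return value borrows the context's lifetime, the borrow
// checker rejects any program that stores `Interned` somewhere that
// outlives the `TyCtxt` it was lifted to.
fn lift_to<'tcx>(_tcx: &TyCtxt<'tcx>) -> Interned<'tcx> {
    Interned(PhantomData)
}
```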
With https://reviews.llvm.org/D86310, LLVM now aligns i128 to 16 bytes on
x86-based platforms; this will ship in LLVM 18. This patch updates all our
spec targets to be 16-byte aligned, and removes the alignment specification
when talking to older LLVM versions.
This results in Rust overaligning things relative to LLVM on older LLVMs.
This alignment change was discussed in rust-lang/compiler-team#683
See #54341 for additional information about why this is happening and
where this will be useful in the future.
This *does not* stabilize `i128`/`u128` for FFI.
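A quick way to observe the change (the expected output is an assumption about the affected x86 targets):

```rust
fn main() {
    // On x86-64 with the updated spec targets (LLVM 18 era), this prints 16;
    // before this change it printed 8.
    println!("align_of::<i128>() = {}", std::mem::align_of::<i128>());
}
```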
`cargo clippy --fix`
This PR is the result of running `cargo clippy --fix && cargo fmt` in the root of the repository. I did not manually review all the changes, but just skimmed through a few of them. The tests still pass, so it seems fine.
A-diagnostics is already labeled correctly thanks to the template and there usually isn't much to do on those issues, so it's fine to just add them to the pile.
Update documentation for Vec::into_boxed_slice to be more clear about excess capacity
Currently, the documentation for Vec::into_boxed_slice says that "if the vector has excess capacity, its items will be moved into a newly-allocated buffer with exactly the right capacity." This is misleading: copies do not necessarily occur, depending on whether the allocator supports in-place shrinking. I copied some of the wording from shrink_to_fit, though it could potentially still be worded better than this.
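A small example of the behavior being documented (whether the shrink happens in place or via a copy is up to the allocator, as noted above):

```rust
fn main() {
    let mut vec = Vec::with_capacity(10);
    vec.extend([1, 2, 3]);
    assert!(vec.capacity() >= 10); // excess capacity

    // Excess capacity is dropped; whether the items are copied into a new
    // buffer or the existing one is shrunk in place is allocator-dependent.
    let boxed: Box<[i32]> = vec.into_boxed_slice();
    assert_eq!(boxed.len(), 3);
}
```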
dead_code treats #[repr(transparent)] the same as #[repr(C)]
In #92972 we enabled linting on unused fields in tuple structs. In #118297 that lint was enabled by default. That exposed issues like #119659, where the fields of a struct marked `#[repr(transparent)]` were reported by the `dead_code` lint. The language team [decided](https://github.com/rust-lang/rust/issues/119659#issuecomment-1885172045) that the lint should treat `#[repr(transparent)]` the same as `#[repr(C)]`.
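A hypothetical example of the kind of code affected (names invented for illustration):

```rust
// Before this change, the never-read field here tripped the dead_code lint
// even though it is required for the transparent layout.
#[repr(transparent)]
pub struct TransparentWrapper(u64);

// #[repr(C)] structs were already exempt; the lint now treats both the same.
#[repr(C)]
pub struct CWrapper(u64);
```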
Fixes #119659
Warn when not having a profiler runtime means that coverage tests won't be run/blessed
On a few occasions (e.g. #118036, #119984) people have been tripped up by the fact that half of the coverage test suite is skipped by default, because it is marked `// needs-profiler-support` and the profiler runtime is not actually built in any of the default config profiles.
(This is made worse by the fact that it isn't enabled in any of the PR CI jobs either. So people think that they've successfully blessed the test suite, and then get a rude surprise when their merge only fails in the full CI job suite.)
This PR adds a simple warning to compiletest that should alert the user in some cases. It's not foolproof, but it should increase the chances of catching this problem earlier in the PR process.
Update `fn()` trait implementation docs
Fixes#119903
This was FCP'd and approved for the 1.70.0 release; this is just a docs update to match that change.
Docs: Use non-SeqCst in module example of atomics
I did this for these reasons:
1. The example now shows that there are more orderings than just `SeqCst`.
2. People who copy from the example will now have more suitable orderings for the job.
3. `SeqCst` is both much harder to reason about and not needed in most situations.
IMHO, we should encourage people to think about and use the memory orderings that are suitable for the task instead of blindly defaulting to `SeqCst`.
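A small example in the same spirit as the change (not the exact module example): a plain event counter needs only atomicity, so `Relaxed` suffices.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn main() {
    // No other memory accesses are synchronized through this counter,
    // so Relaxed is enough; SeqCst would be stronger than the task needs.
    COUNTER.fetch_add(1, Ordering::Relaxed);
    println!("count = {}", COUNTER.load(Ordering::Relaxed));
}
```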
r? `@m-ou-se`
Consolidate all associated items on the NonZero integer types into a single impl block per type
**Before:**
```rust
#[repr(transparent)]
#[rustc_layout_scalar_valid_range_start(1)]
pub struct NonZeroI8(i8);

impl NonZeroI8 {
    pub const fn new(n: i8) -> Option<Self> ...
    pub const fn get(self) -> i8 ...
}

impl NonZeroI8 {
    pub const fn leading_zeros(self) -> u32 ...
    pub const fn trailing_zeros(self) -> u32 ...
}

impl NonZeroI8 {
    pub const fn abs(self) -> NonZeroI8 ...
}

...
```
**After:**
```rust
#[repr(transparent)]
#[rustc_layout_scalar_valid_range_start(1)]
pub struct NonZeroI8(i8);

impl NonZeroI8 {
    pub const fn new(n: i8) -> Option<Self> ...
    pub const fn get(self) -> i8 ...
    pub const fn leading_zeros(self) -> u32 ...
    pub const fn trailing_zeros(self) -> u32 ...
    pub const fn abs(self) -> NonZeroI8 ...
    ...
}
```
Having 6-7 different impl blocks per type is not such a problem in today's implementation, but becomes awful upon the switch to a generic `NonZero<T>` type (context: https://github.com/rust-lang/rust/issues/82363#issuecomment-921513910).
In the implementation from https://github.com/rust-lang/rust/pull/100428, there end up being **67** impl blocks on that type.
*(three screenshots of the resulting rustdoc page, showing the 67 impl blocks, omitted)*
Without the refactor to a single impl block first, introducing `NonZero<T>` would be a usability regression compared to today's separate pages per type. With all those blocks expanded, Ctrl+F is obnoxious because you need to skip 12× past every match you don't care about. With all the blocks collapsed, Ctrl+F is useless. Getting to a state in which exactly one type's (e.g. `NonZero<u32>`) impl blocks are expanded while the rest are collapsed is annoying.
After this refactor to a single impl block, we can move forward with making `NonZero<T>` a generic struct whose docs all go on the same rustdoc page. The rustdoc will have 12 impl blocks, one per choice of `T` supported by the standard library. The reader can expand a single one of those impl blocks e.g. `NonZero<u32>` to understand the entire API of that type.
Note that moving the API into a generic `impl<T> NonZero<T> { ... }` is not going to be an option until after `NonZero<T>` has been stabilized, which may be months or years after its introduction. During the period while generic `NonZero` is unstable, it will be extra important to offer good documentation on all methods demonstrating the API being used through the stable aliases such as `NonZeroI8`.
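For instance, here is the API used through a stable alias today; after this refactor, the same methods would be documented in the single `NonZero<u32>` impl block:

```rust
use std::num::NonZeroU32;

fn main() {
    let n = NonZeroU32::new(40).expect("value is nonzero");
    assert_eq!(n.get(), 40);
    // 40 == 0b101000, so three trailing zero bits.
    assert_eq!(n.trailing_zeros(), 3);
}
```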
This PR follows a `key = $value` syntax for the macros, similar to the macros we already use for producing a single large impl block on the integer primitives.
1dd4db5062/library/core/src/num/mod.rs (L288-L309)
Best reviewed one commit at a time.
Optimize large array creation in const-eval
This changes repeated `memcpy`s to a `memset` for the case where we're propagating a single byte into a region of memory. It also optimizes the element-by-element copies to use a tighter loop; I'm pretty sure the old code was doing a multiply within each loop iteration.
For an 8GB array (`static SLICE: [u8; SIZE] = [0u8; 1 << 33];`) this takes us from ~23 seconds to ~6 seconds locally. That time is spent roughly 50/50 in (a) the memset to zero and (b) the memcpy of the original place into a new place when popping the stack frame. The latter seems hard to avoid but is a single big memcpy (we're copying the type rather than initializing a region), so it's pretty fast; the former is as good as it's going to get without special-casing constant-valued arrays.
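A sketch of the idea (a hypothetical helper, not the actual const-eval code):

```rust
/// Fill `dst` with repeated copies of `element`.
fn fill_with_element(dst: &mut [u8], element: &[u8]) {
    assert!(!element.is_empty());
    assert_eq!(dst.len() % element.len(), 0);
    if element.len() == 1 {
        // Single-byte element: one memset over the whole region.
        dst.fill(element[0]);
    } else {
        // Otherwise a tight copy loop, with no per-iteration
        // multiplication to recompute offsets.
        for chunk in dst.chunks_exact_mut(element.len()) {
            chunk.copy_from_slice(element);
        }
    }
}
```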
Closes https://github.com/rust-lang/rust/issues/55795. (That issue's references to lint checking no longer appear accurate, but I think this pretty fully closes it as a case of slow const-eval *time*. An 8GB array taking only 6 seconds feels reasonable enough not to merit further tracking.)
Get rid of the hir_owner query.
This query was meant as a firewall between `hir_owner_nodes` which is supposed to change often, and the queries that only depend on the item signature. That firewall was inefficient, leaking the contents of the HIR body through `HirId`s.
`hir_owner` incurs a significant cost, as we need to hash HIR twice in multiple modes. This PR proposes to remove it, and simplify the hashing scheme.
For the future, `def_kind`, `def_span`, etc. are much more efficient for incremental decoupling, and should be preferred.