Put the core unit tests in a separate coretests package
Having standard library tests in the same package as the standard library crate they test has bad side effects: the test gains a direct dependency on the locally built standard library crate while also depending on it indirectly through libtest. This currently works out fine in the context of Rust's build system because both copies are identical, but in cg_clif's tests, for example, I've found it basically impossible to compile both copies with the exact same compiler flags, so the two copies cause lang item conflicts.
This PR moves the tests of libcore to a separate package which doesn't depend on libcore, thus preventing the duplicate crates even when compiler flags don't exactly match between building the sysroot (for libtest) and building the test itself. The rest of the standard library crates still have this issue, however.
This needs more time to bake before we turn it on. Turning it on early risks people silencing the warning indefinitely, before we have the chance to make it less noisy.
Add some tracing to core bootstrap logic
Follow-up to #135391.
### Summary
Add some initial tracing logging to bootstrap, focused (in this PR) on the core logic.
Also:
- Adjusted tracing-tree style to not use indent lines (I found that more distracting than helpful).
- Avoid glob-importing `tracing` items.
- Improve the rustc-dev-guide docs on bootstrap tracing.
### Example output
```bash
$ BOOTSTRAP_TRACING=bootstrap=TRACE ./x check src/bootstrap
```
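For orientation, a minimal sketch of how such a subscriber could be wired up with the `tracing`, `tracing-subscriber` (with the `env-filter` feature), and `tracing-tree` crates. The env var name matches the example above, but the setup and span details are illustrative rather than bootstrap's actual code:
```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter};
use tracing_tree::HierarchicalLayer;

fn setup_tracing() {
    // Filter directives come from the env var, e.g. `bootstrap=TRACE`.
    let filter = EnvFilter::from_env("BOOTSTRAP_TRACING");
    tracing_subscriber::registry()
        .with(filter)
        // Indent nested spans by two spaces, without the vertical guide lines.
        .with(HierarchicalLayer::new(2).with_indent_lines(false))
        .init();
}

#[tracing::instrument(level = "trace", skip_all)]
fn check_step() {
    tracing::trace!("running check");
}
```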

r? bootstrap
compiler_fence: fix example
The old example was wrong: an acquire fence is required in the signal handler. To make the point clearer, I changed the "data" variable to use non-atomic accesses.
Fixes https://github.com/rust-lang/rust/issues/133014
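A minimal sketch of the corrected pattern under the same assumptions as the docs (a signal handler running on the same thread); names are illustrative, not the exact doc example:
```rust
use std::sync::atomic::{compiler_fence, AtomicBool, Ordering};

static mut DATA: i32 = 0;
static READY: AtomicBool = AtomicBool::new(false);

fn signal_handler() {
    if READY.load(Ordering::Relaxed) {
        // Acquire fence: pairs with the release fence in `main`, so the
        // non-atomic write to DATA is guaranteed to be visible here.
        compiler_fence(Ordering::Acquire);
        let data = unsafe { DATA };
        assert_eq!(data, 42);
    }
}

fn main() {
    unsafe { DATA = 42 };
    // Release fence: stops the compiler from reordering the non-atomic
    // write to DATA after the flag store below.
    compiler_fence(Ordering::Release);
    READY.store(true, Ordering::Relaxed);
    // In reality the handler would interrupt this thread asynchronously;
    // we call it directly here just to keep the sketch runnable.
    signal_handler();
}
```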
Move `std::io::pipe` code into its own file
Also update the docs for the new location, add a "Platform-specific behavior" section, and stop hiding required imports in code examples.
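A small usage sketch of `std::io::pipe` (an anonymous pipe returning a reader/writer pair), with the imports shown explicitly as the updated docs now do:
```rust
use std::io::{pipe, Read, Write};

fn main() -> std::io::Result<()> {
    let (mut reader, mut writer) = pipe()?;
    writer.write_all(b"hello")?;
    // Close the write end so the read below sees EOF instead of blocking.
    drop(writer);
    let mut buf = String::new();
    reader.read_to_string(&mut buf)?;
    assert_eq!(buf, "hello");
    Ok(())
}
```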
Uplift `clippy::double_neg` lint as `double_negations`
Warns about cases like this:
```rust
fn main() {
    let x = 1;
    let _b = --x; //~ WARN use of a double negation
}
```
The intent is to keep people from thinking that `--x` is a prefix decrement operator; it actually parses as `-(-x)`, a double negation that leaves the value unchanged. `++x`, `x++` and `x--` are invalid expressions and already have a helpful diagnostic.
I didn't add a machine-applicable suggestion to the lint because it's not entirely clear what the programmer was trying to achieve with the `--x` operation. The code that triggers the lint should always be reviewed manually.
Closes #82987
- `check-pass` test for an MRE of #135020
- fail test for #135138
- switch to `TooGeneric` for checking CMSE fn signatures
- switch to `TooGeneric` for computing `SizeSkeleton` (for transmute)
- fix broken tests
Consistently use the highest bit of vector masks when converting to i1 vectors
This improves the codegen for vector `select`, `gather`, `scatter` and boolean reduction intrinsics and fixes rust-lang/portable-simd#316.
The current behavior of most mask operations during LLVM codegen is to truncate the mask vector to `<N x i1>`, telling LLVM to use the least significant bit. The exception is the `simd_bitmask` intrinsic, which already used the most significant bit.
Since SSE/AVX instructions are defined to use the most significant bit, truncating means that LLVM has to insert a left shift to move the bit into the most significant position before the mask can actually be used.
Similarly, on AArch64, mask operations like blend work bit by bit, so repeating the least significant bit across the whole lane involves shifting it into the sign position and then comparing against zero.
By shifting before truncating to `<N x i1>`, we tell LLVM that we only consider the most significant bit, removing the need for additional shift instructions in the assembly.
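For context, a sketch of the kind of code whose codegen this improves; the function and values are illustrative and use the nightly `portable_simd` feature:
```rust
#![feature(portable_simd)]
use std::simd::prelude::*;

// A lanewise select driven by a comparison mask. With this change, the
// backend only needs to look at each mask lane's most significant (sign)
// bit, which matches what SSE/AVX blend instructions expect.
pub fn clamp_negatives(v: f32x4) -> f32x4 {
    let negative: mask32x4 = v.simd_lt(f32x4::splat(0.0));
    negative.select(f32x4::splat(0.0), v)
}

fn main() {
    let v = f32x4::from_array([1.0, -2.0, 3.0, -4.0]);
    assert_eq!(clamp_negatives(v).to_array(), [1.0, 0.0, 3.0, 0.0]);
}
```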
This avoids a good deal of work, since each module child can now just be compared via u32 comparison, rather than fetching the raw `&str` (which requires locking and indexing into the interner) and then comparing the two strings (also relatively expensive).
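As an illustration only (not rustc's actual interner), a toy `Symbol` type where equality is a plain u32 comparison, while fetching the `&str` requires going back through the interner:
```rust
use std::collections::HashMap;

/// Interned string handle: equality is a single u32 comparison.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Symbol(u32);

#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> Symbol {
        if let Some(&id) = self.map.get(s) {
            return Symbol(id);
        }
        let id = self.strings.len() as u32;
        self.strings.push(s.to_owned());
        self.map.insert(s.to_owned(), id);
        Symbol(id)
    }

    /// The expensive path: indexing back into the interner for the text.
    fn resolve(&self, sym: Symbol) -> &str {
        &self.strings[sym.0 as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("foo");
    let b = interner.intern("foo");
    assert_eq!(a, b); // cheap: compares two u32s, no string access
    assert_eq!(interner.resolve(a), "foo");
}
```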
Receivers which are references to `Option` and `Result`, or which implement `Deref` to one of those types, will be linted as well. See the sketch below.
changelog: [`unnecessary_map_or`]: work with references and `Deref` to `Option` and `Result` as well
Fixes #14023
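A small illustration of a newly linted case, with a hypothetical function taking a reference receiver:
```rust
fn has_flag(opt: &Option<u32>) -> bool {
    // Linted even though the receiver is a reference to `Option`;
    // the suggestion would be `opt.is_some_and(|x| x > 1)`.
    opt.map_or(false, |x| x > 1)
}

fn main() {
    assert!(has_flag(&Some(2)));
    assert!(!has_flag(&None));
}
```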
**Note:** this patch must be merged after #13998; only the second commit needs review, since the first one repeats the patch in #13998 for mergeability reasons.