These things don't need to be `Vec`s; boxed slices are enough.
The most frequent one here is call arguments. MIR building already knows the number of arguments from the THIR, so the `collect` gets the allocation size right in the first place, and this change shouldn't ever add shrink-in-place overhead.
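As a minimal sketch of the pattern (illustrative, not the actual rustc code): when the element count is known up front, collecting straight into a boxed slice allocates once and never needs to shrink.

```rust
// Hypothetical stand-in for building a call's argument list: the exact
// length is known, so `collect` into `Box<[_]>` sizes the allocation
// correctly up front and no shrink-in-place step is needed.
fn build_args(n: u32) -> Box<[u32]> {
    (0..n).map(|i| i * 2).collect()
}

fn main() {
    assert_eq!(build_args(3).as_ref(), &[0, 2, 4]);
}
```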
`PtrMetadata` doesn't care about `*const`/`*mut`/`&`/`&mut`, so GVN away those casts in its argument.
This includes updating MIR to allow calling `PtrMetadata` on references too, not just raw pointers. That means that `[T]::len` can be just `_0 = PtrMetadata(_1)`, for example.
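As a sketch of why those casts are removable (using `<*const [T]>::len` as a surface-level stand-in for the MIR-level `PtrMetadata`), the metadata of a slice pointer is the same no matter which pointer flavor it's read through:

```rust
// The slice length (the pointer metadata) is unchanged by &mut -> *mut
// and *mut -> *const conversions, so a GVN pass can look through them.
fn len_via_casts(s: &mut [u8]) -> usize {
    let p: *mut [u8] = s;     // &mut [u8] -> *mut [u8]
    let q = p as *const [u8]; // *mut -> *const
    q.len()                   // same value as s.len()
}

fn main() {
    let mut buf = [0u8; 4];
    assert_eq!(len_via_casts(&mut buf), 4);
}
```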
Apparently MIR borrowck cares about at least one of these for checking variance.
In runtime MIR, though, there's no need for them as `PtrToPtr` does the same thing.
(Banning them simplifies passes like GVN that no longer need to handle multiple cast possibilities.)
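As an illustrative guess at the kind of casts involved (the text above doesn't name them), here are two surface-level pointer casts that, at runtime, are nothing more than pointer-to-pointer conversions:

```rust
// Both casts below are pure pointer-to-pointer conversions at runtime;
// any finer distinction only matters to borrowck/variance checking.
fn casts(p: *mut u8, a: *const [u8; 4]) -> (*const u8, *const u8) {
    let mut_to_const: *const u8 = p;    // *mut -> *const coercion
    let array_to_thin = a as *const u8; // array pointer -> element pointer
    (mut_to_const, array_to_thin)
}

fn main() {
    let mut x = [1u8, 2, 3, 4];
    let _ = casts(x.as_mut_ptr(), &x as *const [u8; 4]);
}
```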
deref patterns: lower deref patterns to MIR
This lowers deref patterns to MIR. It's a bit tricky, because this is the first kind of pattern that requires storing a value in a temporary. Thanks to https://github.com/rust-lang/rust/pull/123324, false edges are no longer a problem.
The thing I'm not confident about is the handling of fake borrows. This PR ignores any fake borrows inside a deref pattern. We are guaranteed to at least fake borrow the place of the first pointer value, which could be enough, but I'm not certain.
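A minimal sketch of the kind of pattern this lowers, assuming the unstable `deref_patterns` feature as it existed around this PR (implicit deref of a `String` scrutinee against string-literal patterns); the scrutinee's pointee has to be stored in a temporary during matching:

```rust
#![feature(deref_patterns)]
#![allow(incomplete_features)]

// Matching a String against &str literal patterns goes through a Deref,
// which is what forces MIR lowering to introduce a temporary.
fn is_hello(s: String) -> bool {
    match s {
        "hello" => true,
        _ => false,
    }
}

fn main() {
    assert!(is_hello(String::from("hello")));
}
```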
A redundant size assertion for `StatementKind` was added in #122937, because
the existing assertion was in a different file.
This patch cleans that up, and also moves the `TerminatorKind` assertion into
the same file where it belongs, to avoid the same thing happening again.
Check `x86_64` size assertions on `aarch64`, too
(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)
Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.
(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)
This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.
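A sketch of the shape of the change (the type and its expected size here are made up, and the real assertions use the compiler's `static_assert_size!` macro rather than a bare const assertion):

```rust
// Placeholder for a size-critical IR node; 48 bytes is an invented figure.
struct SomeIrNode([u64; 6]);

// Before: #[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
// After: aarch64 is added to the guard so the check also runs locally there.
#[cfg(all(any(target_arch = "x86_64", target_arch = "aarch64"), target_pointer_width = "64"))]
const _: () = assert!(std::mem::size_of::<SomeIrNode>() == 48);

fn main() {}
```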
---
Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance
As discussed on [Zulip](427757066).
The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)
The new name nicely matches `ptr::without_provenance`.
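A minimal sketch of the renamed pair in use as of this PR (both functions were still unstable then; the feature-gate name below reflects that point in time and is an assumption):

```rust
#![feature(exposed_provenance)]

// `expose_addr` exposes the pointer's provenance and returns its address;
// `with_exposed_provenance` rebuilds a pointer from such an exposed address.
fn roundtrip(p: *const u8) -> *const u8 {
    let addr: usize = p.expose_addr();
    std::ptr::with_exposed_provenance::<u8>(addr)
}

fn main() {
    let x = 5u8;
    let q = roundtrip(&x);
    assert_eq!(unsafe { *q }, 5);
}
```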
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made this ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparisons, bit-ops, and branches. Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
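For reference, here are two of those "dozens of reasonable ways" sketched side by side: the if-chain form quoted above and the subtraction trick linked earlier. `BinOp::Cmp` lets each backend pick whichever lowering it prefers instead of fixing one of these in libcore:

```rust
use std::cmp::Ordering;

// The if-chain form: obvious, but a lot of MIR once inlined everywhere.
fn cmp_if_chain(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

// The subtraction trick: branchless, but relies on the backend's optimizer
// recognizing the pattern. Returns -1, 0, or 1.
fn cmp_sub_trick(a: i32, b: i32) -> i8 {
    (a > b) as i8 - (a < b) as i8
}

fn main() {
    assert_eq!(cmp_if_chain(1, 2), Ordering::Less);
    assert_eq!(cmp_sub_trick(1, 2), -1);
}
```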
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
Unbox and unwrap the contents of `StatementKind::Coverage`
The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.
``@rustbot`` label +A-code-coverage
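A sketch of the shape of the change (simplified placeholder types, not the actual rustc definitions):

```rust
#![allow(dead_code)]

// Placeholder for the real CoverageKind enum.
enum CoverageKind {
    SpanMarker,
}

// Before: the payload was a multi-field struct, so it was boxed to keep
// StatementKind small.
struct Coverage {
    kind: CoverageKind,
    // ...other fields that used to live here
}
enum StatementKindBefore {
    Coverage(Box<Coverage>),
    // ...other statement kinds
}

// After: the payload is a single relatively-small enum, stored inline.
enum StatementKindAfter {
    Coverage(CoverageKind),
    // ...other statement kinds
}

fn main() {}
```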
Some of the marker statements used by coverage are added during MIR building
for use by the InstrumentCoverage pass (during analysis), and are not needed
afterwards.
fixes #117448
For example, unnecessary imports of items that are already in the `std` prelude can be eliminated:
```rust
use std::option::Option::Some; //~ WARNING the item `Some` is imported redundantly
use std::option::Option::None; //~ WARNING the item `None` is imported redundantly
```