Commit graph

158 commits

Author SHA1 Message Date
Nicholas Nethercote
7418aa1a07 Remove extern crate rustc_data_structures from numerous crates. 2024-04-29 18:45:14 +10:00
Nicholas Nethercote
6ce258f657 Remove extern crate rustc_macros from rustc_middle. 2024-04-29 11:19:16 +10:00
León Orell Valerian Liehr
332cac2c6d
Rollup merge of #122598 - Nadrieril:full-derefpats, r=matthewjasper
deref patterns: lower deref patterns to MIR

This lowers deref patterns to MIR. It is a bit tricky because this is the first kind of pattern that requires storing a value in a temporary. Thanks to https://github.com/rust-lang/rust/pull/123324, false edges are no longer a problem.

The thing I'm not confident about is the handling of fake borrows. This PR ignores any fake borrows inside a deref pattern. We are guaranteed to at least fake borrow the place of the first pointer value, which could be enough, but I'm not certain.
2024-04-23 17:25:15 +02:00
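A rough stable-Rust approximation of what this lowering has to do for a deref pattern (the `deref!(..)` surface syntax is nightly-only; the function and names below are illustrative, not the actual MIR):

```rust
use std::ops::Deref;

// Matching through a deref calls `deref` and keeps the result in a
// temporary before the rest of the pattern is matched against it.
fn matches_zero(boxed: &Box<i32>) -> bool {
    let tmp: &i32 = Deref::deref(boxed); // the temporary this lowering introduces
    matches!(*tmp, 0)
}
```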
Scott McMurray
e6b2b764ec Add AggregateKind::RawPtr and enough support to compile 2024-04-21 11:08:37 -07:00
Nadrieril
726fb55ae2 Fix documentation of BorrowKind::Fake 2024-04-20 16:07:27 +02:00
Nadrieril
50531806ee Add a non-shallow fake borrow 2024-04-20 16:01:35 +02:00
Nicholas Nethercote
0d97669a17 Simplify static_assert_sizes.
We want to run them on all 64-bit platforms.
2024-04-18 15:36:25 +10:00
Zalathar
c0e7659cae Move size assertions for mir::syntax types into the same file
A redundant size assertion for `StatementKind` was added in #122937, because
the existing assertion was in a different file.

This patch cleans that up, and also moves the `TerminatorKind` assertion into
the same file where it belongs, to avoid the same thing happening again.
2024-04-16 15:18:43 +10:00
Jacob Pratt
4332498a6d
Rollup merge of #123401 - Zalathar:assert-size-aarch64, r=fmease
Check `x86_64` size assertions on `aarch64`, too

(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)

Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.

(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)

This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.

---

Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
2024-04-03 20:17:06 -04:00
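The shape of the change, sketched below with rustc's `static_assert_size!` macro; the asserted size and the exact `cfg` spelling are illustrative assumptions, not copied from the PR:

```rust
// Before: assertions compiled only on x86_64 hosts.
#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
static_assert_size!(StatementKind<'_>, 16);

// After: also compiled on aarch64, so contributors on e.g. Apple Silicon
// hit violations locally instead of first seeing them in CI.
#[cfg(all(any(target_arch = "x86_64", target_arch = "aarch64"), target_pointer_width = "64"))]
static_assert_size!(StatementKind<'_>, 16);
```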
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Zalathar
2d47cd77ac Check x86_64 size assertions on aarch64, too
This makes it easier for contributors on aarch64 workstations (e.g. Macs) to
notice when these assertions have been violated.
2024-04-03 16:53:03 +11:00
Jacob Pratt
e9ef8e1efa
Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](427757066).

The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
2024-04-02 20:37:39 -04:00
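With this rename (plus the later `expose_addr` -> `expose_provenance` rename, also in this log), an expose-and-reconstruct round trip looks roughly like this; at the time both operations were nightly-only behind the `exposed_provenance` feature:

```rust
#![feature(exposed_provenance)] // feature gate at the time of this PR

use std::ptr;

fn roundtrip(p: *const u32) -> *const u32 {
    // Expose the pointer's provenance and return its address.
    let addr: usize = p.expose_provenance();
    // New name for `from_exposed_addr`: reconstruct a usable pointer.
    ptr::with_exposed_provenance::<u32>(addr)
}
```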
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago.  See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
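For context, one of the many equivalent ways to spell a three-way compare; with `BinOp::Cmp` in MIR, choosing among such forms becomes the backend's job rather than libcore's. A sketch, not rustc's code:

```rust
use std::cmp::Ordering;

fn three_way(a: i32, b: i32) -> Ordering {
    // `(a > b) - (a < b)` yields -1, 0, or 1: one of the branchless
    // patterns a backend might lower `BinOp::Cmp` to.
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}
```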
DianQK
47ed73a7b5
Eliminate UbCheck for non-standard libraries 2024-03-27 21:02:40 +08:00
Scott McMurray
c59e93c753 Address PR feedback 2024-03-24 17:42:35 -07:00
Matthias Krüger
19d3827efe
Rollup merge of #122937 - Zalathar:unbox, r=oli-obk
Unbox and unwrap the contents of `StatementKind::Coverage`

The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.

Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.

This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.

``@rustbot`` label +A-code-coverage
2024-03-24 17:08:16 +01:00
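A minimal sketch of the layout concern, with stand-in types rather than rustc's real definitions:

```rust
// Stand-in for the real payload: a single, small enum.
enum CoverageKind {
    CounterIncrement { id: u32 },
    ExpressionUsed { id: u32 },
}

// Before: a multi-field payload was boxed to keep the statement enum small.
enum StatementKindBefore { Coverage(Box<CoverageKind>) /* , ... */ }

// After: the payload is one small enum, so the indirection can go.
enum StatementKindAfter { Coverage(CoverageKind) /* , ... */ }
```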
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Ralf Jung
6177530420 refactor check_{lang,library}_ub: use a single intrinsic, put policy into library 2024-03-23 18:45:05 +01:00
Ralf Jung
038e7c6c38 rename MIR int2ptr casts to match library name 2024-03-23 13:18:33 +01:00
Ralf Jung
67b9d7d184 rename ptr::from_exposed_addr -> ptr::with_exposed_provenance 2024-03-23 13:18:33 +01:00
Zalathar
ab92699f4a Unbox and unwrap the contents of StatementKind::Coverage
The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.

Now that the payload is a single relatively-small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.

This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
2024-03-23 22:05:11 +11:00
Zalathar
91aae58568 coverage: Clean up marker statements that aren't needed later
Some of the marker statements used by coverage are added during MIR building
for use by the InstrumentCoverage pass (during analysis), and are not needed
afterwards.
2024-03-22 20:20:41 +11:00
Ben Kimock
5a93a59fd5 Distinguish between library and lang UB in assert_unsafe_precondition 2024-03-08 18:53:58 -05:00
Gary Guo
3b1dd1bfa9 Implement asm goto in MIR and MIR lowering 2024-02-24 18:50:09 +00:00
Gary Guo
b044aaa905 Change InlineAsm to allow multiple targets instead 2024-02-24 18:50:09 +00:00
surechen
a61126cef6 By tracking import use types to distinguish scope uses from other situations such as module-relative uses, we can do more accurate redundant import checking.
fixes #117448

For example, unnecessary imports in std::prelude that can be eliminated:

```rust
use std::option::Option::Some;//~ WARNING the item `Some` is imported redundantly
use std::option::Option::None; //~ WARNING the item `None` is imported redundantly
```
2024-02-18 16:38:11 +08:00
Ben Kimock
8836ac5758 Add a new debug_assertions intrinsic (compiler)
And in clippy
2024-02-08 11:49:08 -05:00
Michael Goulet
c567eddec2 Add CoroutineClosure to TyKind, AggregateKind, UpvarArgs 2024-02-06 02:22:58 +00:00
Ralf Jung
1025a12b64 interpret: project_downcast: do not ICE for uninhabited variants 2024-01-26 09:01:56 +01:00
Josh Stone
167555f36a Pack the u128 in SwitchTargets 2024-01-19 20:10:39 -08:00
Martin Nordholts
16ba56c242 compiler: Lower fn call arg spans down to MIR
To enable improved accuracy of diagnostics in upcoming commits.
2024-01-15 19:07:11 +01:00
Michael Goulet
fcb42b42d6 Remove movability from TyKind::Coroutine 2023-12-28 16:35:01 +00:00
Ralf Jung
31493c70fa interpret: simplify handling of shifts by no longer trying to handle signed and unsigned shift amounts in the same branch 2023-11-12 12:49:46 +01:00
lcnr
992d93f687 rename BorrowKind::Shallow to Fake
also adds some comments
2023-11-08 22:55:28 +01:00
George Bateman
d995bd61e7
Enums in offset_of: update based on est31, scottmcm & llogiq review 2023-10-31 23:26:02 +00:00
George Bateman
e936416a8d
Support enum variants in offset_of! 2023-10-31 23:25:54 +00:00
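Roughly what the feature enables (nightly-only behind `offset_of_enum`; the example types are made up):

```rust
#![feature(offset_of_enum)]

use std::mem::offset_of;

enum Event {
    Click { x: u32, y: u32 },
    Key(char),
}

fn main() {
    // Offset of the `y` field within the `Click` variant.
    println!("{}", offset_of!(Event, Click.y));
}
```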
Oli Scherer
e96ce20b34 s/generator/coroutine/ 2023-10-20 21:14:01 +00:00
Oli Scherer
60956837cf s/Generator/Coroutine/ 2023-10-20 21:10:38 +00:00
Zalathar
753caf292c coverage: Update docs for StatementKind::Coverage
This new description reflects the changes made in this PR, and should hopefully
be more useful to non-coverage developers who need to care about coverage
statements.
2023-10-18 23:44:36 +11:00
Zalathar
6da319f635 coverage: Store all of a function's mappings in function coverage info
Previously, mappings were attached to individual coverage statements in MIR.
That necessitated special handling in MIR optimizations to avoid deleting those
statements, since otherwise codegen would be unable to reassemble the original
list of mappings.

With this change, a function's list of mappings is now attached to its MIR
body, and survives intact even if individual statements are deleted by
optimizations.
2023-10-18 23:42:39 +11:00
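A sketch of the new ownership, using stand-in types (field names here are assumptions, not rustc's exact definitions):

```rust
struct Mapping { /* counter/expression id plus its code region */ }

struct FunctionCoverageInfo {
    mappings: Vec<Mapping>, // survives even if individual statements are deleted
}

struct Body {
    // Attached to the MIR body rather than to coverage statements.
    function_coverage_info: Option<Box<FunctionCoverageInfo>>,
}
```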
Ralf Jung
28b0c87ad6 update MIR place semantics UB comment 2023-10-15 18:13:33 +02:00
Guillaume Gomez
9e28a9349c
Rollup merge of #116329 - RalfJung:swap-comments, r=scottmcm
update some comments around swap()

Based on ``@eddyb's`` comment [here](https://github.com/rust-lang/unsafe-code-guidelines/issues/461#issuecomment-1742156410).

And then I noticed the wrong capitalization for Miri and fixed it in some other places as well.
2023-10-06 13:18:35 +02:00
bors
36aab8df0a Auto merge of #115301 - Zalathar:regions-vec, r=davidtwco
coverage: Allow each coverage statement to have multiple code regions

The original implementation of coverage instrumentation was built around the assumption that a coverage counter/expression would be associated with *up to one* code region. When it was discovered that *multiple* regions would sometimes need to share a counter, a workaround was found: for the remaining regions, the instrumentor would create a fresh expression that adds zero to the existing counter/expression.

That got the job done, but resulted in some awkward code, and produces unnecessarily complicated coverage maps in the final binary.

---

This PR removes that tension by changing `StatementKind::Coverage`'s code region field from `Option<CodeRegion>` to `Vec<CodeRegion>`.

The changes on the codegen side are fairly straightforward. As long as each `CoverageKind::Counter` only injects one `llvm.instrprof.increment`, the rest of coverage codegen is happy to handle multiple regions mapped to the same counter/expression, with only minor option-to-vec adjustments.

On the instrumentor/mir-transform side, we can get rid of the code that creates extra (x + 0) expressions. Instead we gather all of the code regions associated with a single BCB, and inject them all into one coverage statement.

---

There are several patches here but they can be divided into three phases:
- Preparatory work
- Actually switching over to multiple regions per coverage statement
- Cleaning up

So viewing the patches individually may be easier.
2023-10-03 18:36:21 +00:00
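The core type change, sketched with stand-ins for the real MIR types:

```rust
struct CodeRegion; // stand-in for the real file/span info

// Before: one region per statement, so sharing a counter across N regions
// needed N-1 synthetic "counter + 0" expressions.
struct CoverageStatementBefore { region: Option<CodeRegion> }

// After: one statement carries every region mapped to its counter/expression.
struct CoverageStatementAfter { regions: Vec<CodeRegion> }
```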
Zalathar
ee9d00f6b8 coverage: Let each coverage statement hold a vector of code regions
This makes it possible for a `StatementKind::Coverage` to hold more than one
code region, but that capability is not yet used.
2023-10-03 13:03:39 +11:00
ouz-a
5d753abb30 have better explanation for relate_types 2023-10-02 23:39:45 +03:00
ouz-a
cd7f471931 Add docs, remove code, change subtyper code 2023-10-02 23:39:44 +03:00
ouz-a
3148e6a993 subtyping_projections 2023-10-02 23:37:49 +03:00
Ralf Jung
bfc0f23acb MIRI -> Miri 2023-10-02 08:35:08 +02:00
Oli Scherer
0031cf7c7e Add a mir validation check to prevent OpaqueCast after analysis passes finish 2023-09-28 16:13:38 +00:00
Camille GILLOT
8b848af325 Add global value numbering pass. 2023-09-24 09:09:04 +00:00