Make sure we handle `backwards_incompatible_lint` drops appropriately in drop elaboration
In #131326, a new kind of scheduled drop (`drop_kind: DropKind::Value` + `backwards_incompatible_lint: true`) was added so that we could insert a new kind of no-op MIR statement (`backward incompatible drop`) for linting purposes.
These drops were intended to have *no side-effects*, but the drop elaboration code forgot to handle them specially and otherwise treated them as normal drops in most places. This ends up being **unsound**, since we insert more than one drop call for some values, which means that `Drop::drop` could be called more than once.
This PR fixes this by splitting these drops out into a new `DropKind::ForLint` and adjusting the code accordingly. I'm not totally certain whether all of the places I've adjusted are reachable or correct, but I'm pretty certain that this is *more* correct than it was previously.
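For context, a minimal sketch of the shape of the split, assuming the MIR builder's existing `Value`/`Storage` drop kinds (simplified for illustration, not the real definition):

```rs
// Sketch only: simplified from the MIR builder's scheduled-drop handling.
enum DropKind {
    /// A real drop of a value; elaborates to an actual `Drop::drop` call.
    Value,
    /// Marks the end of a local's storage; never calls `Drop::drop`.
    Storage,
    /// New: a no-op marker that exists only so we can emit the
    /// `backward incompatible drop` lint; drop elaboration must never
    /// turn it into a real drop call.
    ForLint,
}
```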
cc `@dingxiangfei2009`
r? nikomatsakis
Fixes #134482
Remove a duplicated check that doesn't do anything anymore.
Fixes #134005
This code didn't actually `lub` the types of the previous expressions, but just the current type over and over again. Changing it to use the actual expression type doesn't change anything either, so we may as well remove the entire loop.
coverage: Store coverage source regions as `Span` until codegen (take 2)
This is an attempt to re-land #133418:
> Historically, coverage spans were converted into line/column coordinates during the MIR instrumentation pass.
> This PR moves that conversion step into codegen, so that coverage spans spend most of their time stored as `Span` instead.
> In addition to being conceptually nicer, this also reduces the size of coverage mappings in MIR, because `Span` is smaller than 4x u32.
That PR was reverted by #133608, because in some circumstances not covered by our test suite we were emitting coverage metadata that was causing `llvm-cov` to exit with an error (#133606).
---
The implementation here is *mostly* the same, but adapted for subsequent changes in the relevant code (e.g. #134163).
I believe that the changes in #134163 should be sufficient to prevent the problem that required the original PR to be reverted. But I haven't been able to reproduce the original breakage in a regression test, and the `llvm-cov` error message is extremely unhelpful, so I can't completely rule out the possibility of this breaking again.
r? jieyouxu (reviewer of the original PR)
compiletest: don't register predefined `MSVC`/`NONMSVC` FileCheck prefixes
This was fragile, as it was based on the host target passed to
compiletest, but the user could cross-compile and run tests for a
different target (e.g. cross from Linux to MSVC, in which case `MSVC`
would not be set even though the target is MSVC). Furthermore, it was
also very surprising, as normally revision names (other than `CHECK`)
are accepted as FileCheck prefixes.
This partially reverts the `MSVC`/`NONMSVC` predefined FileCheck
prefix registration introduced historically for some codegen tests.
This makes some codegen tests more verbose since they now need to
explicitly introduce `MSVC`/`NONMSVC` revisions, but I think that's
less surprising, e.g.:
```rs
//@ revisions: MSVC NONMSVC
//@[MSVC] only-msvc
//@[NONMSVC] ignore-msvc
```
Note that revisions are not *only* FileCheck prefixes in
FileCheck-based test suites, as they can also be used to
conditionally apply certain compiletest directives.
r? `@Zalathar` (or reroll an `r? compiletest` reviewer)
try-job: x86_64-msvc
try-job: i686-msvc
try-job: x86_64-mingw-1
try-job: i686-mingw
Arbitrary self types v2 attempts to detect cases where methods in an
"outer" type (e.g. a smart pointer) might "shadow" methods in the
referent.
There are a couple of cases where the current code makes no attempt to
detect such shadowing. Both of these cases only apply if other unstable
features are enabled.
Add a test, mostly for illustrative purposes, so we can see the
shadowing cases that can occur.
Some destructor/drop related tweaks
Two random tweaks I got from investigating some stuff around drops in edition 2024:
1. Use the `TypingEnv` of the MIR builder, rather than constructing it all over again.
2. Rename the `id` field from `Scope` to `local_id`, to reflect that it's a local id, and remove the `item_local_id()` accessor which just returned the id field.
Forbid overwriting types in typeck
While trying to figure out some type setting logic in https://github.com/rust-lang/rust/pull/134248 I realized that we sometimes set a type twice. While that should hopefully always be the same type, we didn't ensure that at all and just silently accepted it. So now we reject setting it twice, unless errors have occurred, in which case we don't care.
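A minimal sketch of the new invariant, with hypothetical names (`TypeckResults`, `write_ty`, and `tainted_by_errors` here are illustrative stand-ins, not the exact compiler code):

```rs
use std::collections::HashMap;

type NodeId = u32;
type Ty = &'static str; // stand-in for the real interned type

struct TypeckResults {
    node_types: HashMap<NodeId, Ty>,
    tainted_by_errors: bool,
}

impl TypeckResults {
    fn write_ty(&mut self, id: NodeId, ty: Ty) {
        if let Some(prev) = self.node_types.insert(id, ty) {
            // Setting a type twice is now rejected, unless errors have
            // already occurred, in which case we don't care.
            assert!(self.tainted_by_errors, "overwrote {prev} with {ty}");
        }
    }
}
```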
Best reviewed commit by commit.
No behaviour change is intended.
reduce compiler `Assemble` complexity
`compile::Assemble` is already complicated by its nature (as it handles core internals like recursive building logic, etc.) and also handles half of the `LldWrapper` tool logic for no good reason, since that should be done in the tool's build step directly.
This change moves it there to reduce complexity of `compile::Assemble` logic.
Fix intra doc links not generated inside footnote definitions
Fixes #132208.
The problem was that we were running the `Footnote` "pass" before the `LinkReplacer` one. Sadly, the change is bigger than it should be because we can't specialize the `Iterator` trait implementation, forcing me to add a new type to handle the other `Iterator` kind (the one which still has the `Range`).
r? `@notriddle`
Variants::Single: do not use invalid VariantIdx for uninhabited enums
~~Stacked on top of https://github.com/rust-lang/rust/pull/133681, only the last commit is new.~~
Currently, `Variants::Single` for an empty enum contains a `VariantIdx` of 0; looking that up in the enum variant list will ICE. That's quite confusing. So let's fix that by adding a new `Variants::Empty` case for types that have 0 variants.
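Roughly the shape of the fix, as a hedged sketch (the real definitions live in `rustc_abi`; `Multiple`'s fields are elided):

```rs
struct VariantIdx(u32); // stand-in for the real index newtype

enum Variants {
    /// New: a type with no valid variants, e.g. an uninhabited enum.
    /// Previously this case was `Single { index: 0 }`, an index with
    /// no corresponding entry in the variant list.
    Empty,
    /// A type with exactly one variant (and structs, tuples, ...).
    Single { index: VariantIdx },
    /// Enums whose variants are distinguished by a tag at runtime.
    Multiple { /* tag, tag encoding, per-variant layouts, ... */ },
}
```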
try-job: i686-msvc
cleanup region handling: add `LateParamRegionKind`
The second commit enables a split between `BoundRegionKind` and `LateParamRegionKind` by avoiding `BoundRegionKind` where it isn't necessary.
The third commit then adds `LateParamRegionKind` to avoid having the same late-param region for separate bound regions. This fixes #124021.
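A hedged sketch of the idea: the new kind presumably mirrors `BoundRegionKind`'s cases while carrying enough information (e.g. an index on the anonymous case) to keep separate bound regions from collapsing into one late-param region. All names below are assumptions for illustration:

```rs
struct DefId(u32);           // stand-in for the real def-id type
struct Symbol(&'static str); // stand-in for the real interned symbol

enum LateParamRegionKind {
    /// Anonymous regions; the index keeps two distinct anonymous
    /// bound regions from becoming the same late-param region.
    Anon(u32),
    /// A named lifetime parameter.
    Named(DefId, Symbol),
    /// The implicit region of a closure's environment.
    ClosureEnv,
}
```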
r? `@compiler-errors`
In codegen, a used function with `FunctionCoverageInfo` but no mappings has
historically indicated a bug. However, that will no longer be the case after
moving some fallible span-processing steps into codegen.
Currently it relies on having the right integer for every variant, and
if you add a variant you need to adjust the integers for all subsequent
variants, which is a pain.
This commit introduces a match guard formulation that takes advantage of
the enum-to-integer conversion to avoid specifying the integer for each
variant. And it does this via a macro to avoid lots of boilerplate.
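A minimal, self-contained sketch of the technique (the `Color` enum and `from_u32_match!` macro are illustrative, not the commit's actual code):

```rs
#[derive(Debug, PartialEq)]
enum Color { Red, Green, Blue }

// Each arm guards on the enum-to-integer conversion, so no variant
// needs an explicit integer literal, and inserting a new variant
// requires no renumbering.
macro_rules! from_u32_match {
    ($val:expr, $enum:ident, $($variant:ident),+ $(,)?) => {
        match $val {
            $(v if v == $enum::$variant as u32 => $enum::$variant,)+
            _ => panic!("unknown discriminant {}", $val),
        }
    };
}

fn color_from_u32(v: u32) -> Color {
    from_u32_match!(v, Color, Red, Green, Blue)
}

fn main() {
    assert_eq!(color_from_u32(1), Color::Green);
}
```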
The parser pushes a `TokenType` to `Parser::expected_token_types` on
every call to the various `check`/`eat` methods, and clears it on every
call to `bump`. Some of those `TokenType` values are full tokens that
require cloning and dropping. This is a *lot* of work for something
that is only used in error messages and it accounts for a significant
fraction of parsing execution time.
This commit overhauls `TokenType` so that `Parser::expected_token_types`
can be implemented as a bitset. This requires changing `TokenType` to a
C-style parameterless enum, and adding `TokenTypeSet` which uses a
`u128` for the bits. (The new `TokenType` has 105 variants.)
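A minimal sketch of the u128-backed bitset idea (names match the description above, but the bodies are simplified assumptions):

```rs
// A C-style, fieldless enum casts to its discriminant, which indexes
// a bit in the u128 (so up to 128 variants fit).
#[derive(Clone, Copy)]
enum TokenType { Eq, Lt, Star, Comma /* ...the real enum has 105 */ }

#[derive(Clone, Copy, Default)]
struct TokenTypeSet(u128);

impl TokenTypeSet {
    fn insert(&mut self, tt: TokenType) {
        self.0 |= 1u128 << tt as u32;
    }
    fn contains(self, tt: TokenType) -> bool {
        self.0 & (1u128 << tt as u32) != 0
    }
    fn clear(&mut self) {
        self.0 = 0; // what `bump` effectively does on every advance
    }
}

fn main() {
    let mut expected = TokenTypeSet::default();
    expected.insert(TokenType::Star);
    assert!(expected.contains(TokenType::Star));
    assert!(!expected.contains(TokenType::Eq));
    expected.clear();
    assert!(!expected.contains(TokenType::Star));
}
```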
The new types `ExpTokenPair` and `ExpKeywordPair` are now arguments to
the `check`/`eat` methods. This is for maximum speed. The elements in
the pairs are always statically known; e.g. a
`token::BinOp(token::Star)` is always paired with a `TokenType::Star`.
So we now compute `TokenType`s in advance and pass them in to
`check`/`eat` rather than the current approach of constructing them on
insertion into `expected_token_types`.
Values of these pair types can be produced by the new `exp!` macro,
which is used at every `check`/`eat` call site. The macro is for
convenience, allowing any pair to be generated from a single identifier.
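An illustrative sketch of the pair-plus-macro shape (simplified: strings stand in for real `TokenKind` values):

```rs
#[derive(Clone, Copy)]
enum TokenType { Star, Semi }

// Both halves of the pair are statically known at the call site.
struct ExpTokenPair {
    tok: &'static str, // stand-in for the real `&'static TokenKind`
    token_type: TokenType,
}

// `exp!` maps a single identifier to a precomputed pair, so call sites
// never construct a `TokenType` on insertion into the expected set.
macro_rules! exp {
    (Star) => { ExpTokenPair { tok: "*", token_type: TokenType::Star } };
    (Semi) => { ExpTokenPair { tok: ";", token_type: TokenType::Semi } };
}

fn main() {
    let pair = exp!(Star);
    assert!(matches!(pair.token_type, TokenType::Star));
    assert_eq!(pair.tok, "*");
}
```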
The ident/keyword filtering in `expected_one_of_not_found` is no longer
necessary. It was there to account for some sloppiness in
`TokenKind`/`TokenType` comparisons.
The existing `TokenType` is moved to a new file `token_type.rs`, and all
its new infrastructure is added to that file. There is more boilerplate
code than I would like, but I can't see how to make it shorter.
`bra`/`ket` is a naming convention used in a handful of spots in the parser for
delimiters. It confused me when I first saw it a long time ago, and I've
never liked it. A web search says "Bra-ket notation" exists in linear
algebra but the terminology has zero prior use in a programming context,
as far as I can tell.
This commit changes it to `open`/`close`, which is consistent with the
rest of the compiler.
The most significant change is to `check_keyword`: it now only pushes to
`expected_token_types` if the keyword check fails, which matches how all
the other `check` methods work.
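A minimal sketch of that behaviour, with simplified stand-in types (the real method takes an `ExpKeywordPair` and checks a real token):

```rs
#[derive(Clone, Copy)]
enum TokenType { KwFn, KwLet }

struct ExpKeywordPair { kw: &'static str, token_type: TokenType }

struct Parser {
    current: &'static str,
    expected_token_types: Vec<TokenType>, // the real field is a bitset
}

impl Parser {
    fn check_keyword(&mut self, exp: ExpKeywordPair) -> bool {
        let is_keyword = self.current == exp.kw;
        if !is_keyword {
            // Record the expectation only on failure, matching the
            // other `check` methods.
            self.expected_token_types.push(exp.token_type);
        }
        is_keyword
    }
}

fn main() {
    let mut p = Parser { current: "let", expected_token_types: Vec::new() };
    let exp = ExpKeywordPair { kw: "fn", token_type: TokenType::KwFn };
    assert!(!p.check_keyword(exp));
    assert_eq!(p.expected_token_types.len(), 1);
}
```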
The remainder are just tweaks to make these methods more consistent with
each other.