Fix deduplication mismatches in vtables leading to upcasting unsoundness
We currently have two cases where subtleties in supertraits can trigger disagreements in the vtable layout, e.g. leading to a different vtable layout being accessed at a callsite compared to what was prepared during unsizing. Namely:
### #135315
In this case, we were not normalizing supertraits when preparing vtables. In the example,
```rust
trait Supertrait<T> {
    fn _print_numbers(&self, mem: &[usize; 100]) {
        println!("{mem:?}");
    }
}
impl<T> Supertrait<T> for () {}

trait Identity {
    type Selff;
}
impl<Selff> Identity for Selff {
    type Selff = Selff;
}

trait Middle<T>: Supertrait<()> + Supertrait<T> {
    fn say_hello(&self, _: &usize) {
        println!("Hello!");
    }
}
impl<T> Middle<T> for () {}

trait Trait: Middle<<() as Identity>::Selff> {}
impl Trait for () {}

fn main() {
    (&() as &dyn Trait as &dyn Middle<()>).say_hello(&0);
}
```
When we prepare `dyn Trait`, we see a supertrait of `Middle<<() as Identity>::Selff>`, which itself has two supertraits, `Supertrait<()>` and `Supertrait<<() as Identity>::Selff>`. These two supertraits are identical, but they are not deduplicated because we were using structural equality and *not* considering normalization. This leads to a vtable layout with two trait pointers.
When we upcast to `dyn Middle<()>`, those two supertraits are now the same, leading to a vtable layout with only one trait pointer. This leads to an offset error, and we call the wrong method.
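For reference, here's a tiny standalone snippet (not compiler internals) showing that the two supertrait arguments really are the same type once the projection is normalized; `same_type` is just an illustrative helper:
```rust
trait Identity {
    type Selff;
}
impl<Selff> Identity for Selff {
    type Selff = Selff;
}

fn same_type<T>(_: T, _: T) {}

fn main() {
    // This only typechecks because `<() as Identity>::Selff` normalizes to `()`,
    // i.e. `Supertrait<()>` and `Supertrait<<() as Identity>::Selff>` name the
    // same trait ref after normalization.
    let x: <() as Identity>::Selff = ();
    same_type((), x);
}
```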
### #135316
This one is a bit more interesting, and accounts for the bulk of the changes in this PR. It's similar, except it uses binder equality instead of normalization to confuse the compiler about two vtable layouts. In the example,
```rust
trait Supertrait<T> {
    fn _print_numbers(&self, mem: &[usize; 100]) {
        println!("{mem:?}");
    }
}
impl<T> Supertrait<T> for () {}

trait Trait<T, U>: Supertrait<T> + Supertrait<U> {
    fn say_hello(&self, _: &usize) {
        println!("Hello!");
    }
}
impl<T, U> Trait<T, U> for () {}

fn main() {
    (&() as &'static dyn for<'a> Trait<&'static (), &'a ()>
        as &'static dyn Trait<&'static (), &'static ()>)
        .say_hello(&0);
}
```
When we prepare the vtable for `dyn for<'a> Trait<&'static (), &'a ()>`, we currently consider the `PolyTraitRef` of the vtable as the key for a supertrait. This leads to two supertraits -- `Supertrait<&'static ()>` and `for<'a> Supertrait<&'a ()>`.
However, we can upcast[^up] without offsetting the vtable from `dyn for<'a> Trait<&'static (), &'a ()>` to `dyn Trait<&'static (), &'static ()>`. This is just instantiating the principal trait ref for a specific `'a = 'static`. But when considering those supertraits, we now have only one distinct supertrait -- `Supertrait<&'static ()>` (which is deduplicated since there are two supertraits with the same substitutions). This leads to similar offsetting issues, and again the wrong method is called.
[^up]: I say upcast but this is a cast that is allowed on stable, since it's not changing the vtable at all, just instantiating the binder of the principal trait ref for some lifetime.
The solution here is to recognize that a vtable isn't really meaningfully higher ranked, and to just treat a vtable as corresponding to a `TraitRef` so we can do this deduplication more faithfully. That is to say, the vtables for `dyn for<'a> Tr<'a>` and `dyn Tr<'x>` are always identical, since they both would correspond to a set of free regions on an impl... Do note that `Tr<for<'a> fn(&'a ())>` and `Tr<fn(&'static ())>` are still distinct.
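To make the footnote concrete, here's a minimal stable-Rust illustration of that cast (using an illustrative trait `Tr`); the second `as` only instantiates the binder of the principal trait ref, so both object types should use the same vtable:
```rust
trait Tr<'x> {
    fn hello(&self) {
        println!("Hello!");
    }
}
impl<'x> Tr<'x> for () {}

fn main() {
    let hr = &() as &dyn for<'a> Tr<'a>;
    // Allowed on stable: this only instantiates the binder of the principal
    // trait ref (`'a := 'static`); the vtable pointer is not adjusted.
    let inst = hr as &dyn Tr<'static>;
    inst.hello();
}
```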
----
There's a bit more that can be cleaned up. In codegen, we can stop using `PolyExistentialTraitRef` basically everywhere. We can also fix SMIR to stop storing `PolyExistentialTraitRef` in its vtable allocations.
As for testing, it's difficult to actually turn this into something that can be tested with `rustc_dump_vtable`, since having multiple identical supertraits is a recipe for ambiguity errors. Maybe someone else can be more creative with getting that attr to work; it's a bit unsatisfying that the tests I added are run-pass tests. Miri also doesn't help here, since it doesn't really generate vtables that are offset by an index in the same way as codegen.
r? `@lcnr` for the vibe check? Or reassign, idk. Maybe let's talk about whether this makes sense.
<sup>(I guess an alternative would also be to not do any deduplication of vtable supertraits (or only a really conservative subset) rather than trying to normalize and deduplicate more faithfully here. Not sure if that works and is sufficient tho.)</sup>
cc `@steffahn` -- ty for the minimizations
cc `@WaffleLapkin` -- since you're overseeing the feature stabilization :3
Fixes #135315
Fixes #135316
Revert most of #133194 (except the test and the comment fixes). Then re-apply the fix so that
no location is emitted at all when the correct location discriminator value would exceed LLVM's
capacity.
`rustc_span::symbol` defines some things that are re-exported from
`rustc_span`, such as `Symbol` and `sym`. But it doesn't re-export some
closely related things such as `Ident` and `kw`. So you can do `use
rustc_span::{Symbol, sym}` but you have to do `use
rustc_span::symbol::{Ident, kw}`, which is inconsistent for no good
reason.
This commit re-exports `Ident`, `kw`, and `MacroRulesNormalizedIdent`,
and changes many `rustc_span::symbol::` qualifiers in `compiler/` to
`rustc_span::`. This is a 200+ net line of code reduction, mostly
because many files with two `use rustc_span` items can be reduced to
one.
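A generic illustration of the re-export pattern being applied (the module names below are illustrative, not `rustc_span`'s real structure):
```rust
mod span {
    pub mod symbol {
        pub struct Symbol;
        pub struct Ident;
        pub mod kw {
            pub const FN: &str = "fn";
        }
    }
    // Re-export the closely related items so downstream code can write
    // `use span::{Symbol, Ident, kw}` instead of reaching into `span::symbol`.
    pub use self::symbol::{kw, Ident, Symbol};
}

use crate::span::{kw, Ident, Symbol};

fn main() {
    let _ = (Symbol, Ident, kw::FN);
}
```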
The maximum discriminator value LLVM can currently encode is 2^12. If macro use
results in more than 2^12 calls to the same function attributed to the same
callsite, and those calls are MIR-inlined, we will require more than the maximum
discriminator value to completely represent the debug information. Once we reach
that point, drop the debug info instead.
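A hedged sketch of that policy (the constant and helper below are illustrative, not rustc's actual code):
```rust
// Per the text above, LLVM can currently encode discriminators up to 2^12.
const MAX_DISCRIMINATOR: u32 = 1 << 12;

// Hand out the next discriminator for a (function, callsite) pair, or `None`
// once the value would exceed what LLVM can encode, meaning "drop the debug
// info for this call".
fn next_discriminator(counter: &mut u32) -> Option<u32> {
    let value = *counter;
    *counter += 1;
    if value <= MAX_DISCRIMINATOR { Some(value) } else { None }
}

fn main() {
    let mut counter = 0;
    assert_eq!(next_discriminator(&mut counter), Some(0));
}
```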
The behavior of the type system depends not only on the current
assumptions, but also on the current phase of the compiler. This is
mostly necessary as we need to decide whether and how to reveal
opaque types. We track this via the `TypingMode`.
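As a rough illustration only (a simplified sketch, not the real `TypingMode` definition), the idea is an enum keyed on the compiler phase:
```rust
// Simplified sketch: key the type system's behavior on the compiler phase,
// mainly to decide how opaque types behave.
enum TypingMode {
    /// Coherence checking: opaque types are kept opaque.
    Coherence,
    /// Analysis of a body: opaque types may only be constrained in their
    /// defining scope.
    Analysis,
    /// After analysis (e.g. codegen): opaque types are revealed to their
    /// hidden types.
    PostAnalysis,
}

fn reveal_opaque_types(mode: &TypingMode) -> bool {
    match mode {
        TypingMode::Coherence | TypingMode::Analysis => false,
        TypingMode::PostAnalysis => true,
    }
}

fn main() {
    assert!(!reveal_opaque_types(&TypingMode::Coherence));
    assert!(!reveal_opaque_types(&TypingMode::Analysis));
    assert!(reveal_opaque_types(&TypingMode::PostAnalysis));
}
```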
- Don't rely on enum values defined by LLVM's C++ API
- Use safe wrapper functions instead of direct `unsafe` calls
- Consistently pass pointer/length strings instead of C strings
cg_llvm: Use a type-safe helper to cast `&str` and `&[u8]` to `*const c_char`
In `rustc_codegen_llvm` there are many uses of `.as_ptr().cast()` to convert a string or byte-slice to `*const c_char`, which then gets passed through FFI.
This works, but is fragile, because there's nothing constraining the pointer cast to actually be from `u8` to `c_char`. If the original value changes to something else that has an `as_ptr` method, or the context changes to expect something other than `c_char`, the cast will silently do the wrong thing.
By making the cast more explicit via a helper method, we can be sure that it will either perform the intended cast, or fail at compile time.
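One possible shape for such a helper, as a minimal sketch (the trait and method names here are illustrative, not necessarily the ones the PR adds):
```rust
use std::ffi::c_char;

trait AsCCharPtr {
    fn as_c_char_ptr(&self) -> *const c_char;
}

impl AsCCharPtr for [u8] {
    fn as_c_char_ptr(&self) -> *const c_char {
        // The cast is pinned to `u8 -> c_char`, so if the source type ever
        // changes, this fails to compile instead of silently doing the wrong thing.
        self.as_ptr().cast::<c_char>()
    }
}

impl AsCCharPtr for str {
    fn as_c_char_ptr(&self) -> *const c_char {
        self.as_bytes().as_c_char_ptr()
    }
}

fn main() {
    // The pointer/length pair would be passed through FFI; no NUL terminator needed.
    let name = "some string";
    let _ptr: *const c_char = name.as_c_char_ptr();
    let _len = name.len();
}
```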
- removed extra bits from predicates queries that are no longer needed in the new system
- removed the need for `non_erasable_generics` to take in `tcx` and `DefId`; removed unused arguments in callers
Supertraits of `BuilderMethods` are all called `XyzBuilderMethods`.
Supertraits of `CodegenMethods` are all called `XyzMethods`. This commit
changes the latter to `XyzCodegenMethods`, for consistency.
Because constants are currently emitted *before* the prologue, leaving the
debug location on the IRBuilder spills onto other instructions in the prologue
and messes up both line numbers as well as the point LLVM chooses to be the
prologue end.
Example LLVM IR (irrelevant IR elided):
Before:
```llvm
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17he02116165b0fc08cE(ptr align 8 %self) !dbg !347 {
start:
  %self.dbg.spill = alloca [8 x i8], align 8
  %_0 = alloca [16 x i8], align 8
  %residual.dbg.spill = alloca [0 x i8], align 1
  #dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
  store ptr %self, ptr %self.dbg.spill, align 8, !dbg !357
  #dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
After:
```llvm
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17h00b17d08874ddd90E(ptr align 8 %self) !dbg !347 {
start:
  %self.dbg.spill = alloca [8 x i8], align 8
  %_0 = alloca [16 x i8], align 8
  %residual.dbg.spill = alloca [0 x i8], align 1
  #dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
  store ptr %self, ptr %self.dbg.spill, align 8
  #dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
Note in particular how `!357` from `%residual.dbg.spill`'s `#dbg_declare` no longer
falls through onto the store to `%self.dbg.spill`. This fixes argument values
at entry when the constant is a ZST (e.g. `<Option as Try>::Residual`). This
fixes #130003 (but note that it does *not* fix issues with argument values and
non-ZST constants, which emit their own stores that have debug info on them,
like #128945).
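For context, here is a hypothetical reconstruction of the kind of Rust source that hits this path (the struct and method are guesses loosely matching the mangled name above, not the actual test case):
```rust
// The `?` operator produces a ZST residual value (`<Option<i64> as Try>::Residual`),
// and it is that constant which used to pick up a stale debug location from the prologue.
struct Foo {
    var: Option<i64>,
}

impl Foo {
    fn var_return_opt_try(&self) -> Option<i64> {
        let v = self.var?;
        Some(v)
    }
}

fn main() {
    let f = Foo { var: Some(1) };
    let _ = f.var_return_opt_try();
}
```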
Add `#[warn(unreachable_pub)]` to a bunch of compiler crates
By default `unreachable_pub` identifies things that need not be `pub` and tells you to make them `pub(crate)`. But sometimes those things don't need any kind of visibility. So the way I did these was to remove the visibility entirely for each thing the lint identifies, and then add `pub(crate)` back in everywhere the compiler said it was necessary. (Or occasionally `pub(super)` when context suggested that was appropriate.) Tedious, but results in more `pub` removal.
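A tiny illustration of the end state (the module and item names below are made up):
```rust
// End state after running the lint and applying the visibility changes.
#![warn(unreachable_pub)]

mod imp {
    // Was `pub`; not reachable from outside the crate, so now `pub(crate)`.
    pub(crate) fn helper() -> u32 {
        42
    }

    // Was `pub`; only used within this module, so the visibility was dropped.
    fn detail() -> u32 {
        1
    }

    pub(crate) fn combined() -> u32 {
        helper() + detail()
    }
}

fn main() {
    println!("{}", imp::combined());
}
```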
There are plenty more crates to do but this seems like enough for a first PR.
r? `@compiler-errors`
Avoid extra `cast()`s after `CStr::as_ptr()`
These used to be `&str` literals that did need a pointer cast, but that
became a no-op after switching to `c""` literals in #118566.
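A minimal example of why the cast became redundant (purely illustrative):
```rust
use std::ffi::{c_char, CStr};

fn main() {
    // With a C string literal, `as_ptr` already yields `*const c_char`,
    // so the old `.cast()` is now a no-op.
    let s: &CStr = c"hello";
    let _without_cast: *const c_char = s.as_ptr();
    let _with_redundant_cast: *const c_char = s.as_ptr().cast();
}
```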
Line 0 has a special meaning in DWARF. From the version 5 spec:
> The compiler may emit the value 0 in cases where an instruction cannot be attributed to any source line.
DUMMY_SP spans cannot be attributed to any line. However, because rustc
internally stores line numbers starting at zero, lookup_debug_loc() adjusts
every line number by one. Special casing DUMMY_SP to actually emit line 0
ensures rustc communicates to the debugger that there's no meaningful source
code for this instruction, rather than telling the debugger to jump to line 1
randomly.
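A hedged sketch of the adjustment described above (this is not the actual `lookup_debug_loc` code, just the arithmetic it implies):
```rust
// Internal lines are 0-based, DWARF lines are 1-based, and a dummy span maps
// to the special DWARF line 0 ("no source line").
fn dwarf_line(internal_line: Option<u32>) -> u32 {
    match internal_line {
        // DUMMY_SP / unknown location: DWARF reserves line 0 for instructions
        // that cannot be attributed to any source line.
        None => 0,
        // Normal spans: shift from 0-based internal lines to 1-based DWARF lines.
        Some(line) => line + 1,
    }
}

fn main() {
    assert_eq!(dwarf_line(None), 0);
    assert_eq!(dwarf_line(Some(0)), 1);
}
```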
Detects redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and
the code that removes redundant imports are split into two PRs.
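An example of the kind of import such a check reports (whether a warning is actually emitted depends on the compiler version):
```rust
#![warn(unused_imports)]

// `Option` is already available through the prelude, so this import is
// redundant and can be eliminated.
use std::option::Option;

fn main() {
    let x: Option<u8> = Some(1);
    println!("{x:?}");
}
```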
This fixes the changes introduced to codegen tests when effect params are
added to libcore, by not attempting to monomorphize functions that get
the host param by virtue of being `const fn`.
Use the same DISubprogram for each instance of the same inlined function within a caller
# Issue Details:
The call to `panic` within a function like `Option::unwrap` is translated to LLVM as a `tail call` (as it will never return). When multiple calls like this to the same function are inlined, LLVM will notice the common `tail call` block (i.e., loading the same panic string + location info and then calling `panic`) and merge them together.
When merging these instructions, LLVM will also attempt to merge the debug locations, but this fails (i.e., debug info is dropped) because Rust emits a new `DISubprogram` at each inline site, so LLVM doesn't recognize that these are actually the same function and thinks that there isn't a common debug location.
As an example of this, consider the following program:
```rust
#[no_mangle]
fn add_numbers(x: &Option<i32>, y: &Option<i32>) -> i32 {
    let x1 = x.unwrap();
    let y1 = y.unwrap();
    x1 + y1
}
```
When building for x86_64 Windows using 1.72, it generates the following (note the lack of `.cv_loc` before the call to `panic`, thus it will be attributed to the same line as the `addq` instruction):
```llvm
.cv_loc 0 1 3 0 # src\lib.rs:3:0
addq $40, %rsp
retq
leaq .Lalloc_f570dea0a53168780ce9a91e67646421(%rip), %rcx
leaq .Lalloc_629ace53b7e5b76aaa810d549cc84ea3(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17h12e60b9063f6dee8E
int3
```
# Fix Details:
Cache the `DISubprogram` emitted for each inlined function instance within a caller so that this can be reused if that instance is encountered again.
Ideally, we would also deduplicate child scopes and variables, however my attempt to do that with #114643 resulted in asserts when building for Linux (#115156) which would require some deep changes to Rust to fix (#115455).
Instead, when using an inlined function as a debug scope, we will also create a new child scope such that subsequent child scopes and variables do not collide (from LLVM's perspective).
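A rough sketch of the caching idea (the types and names below are stand-ins, not rustc's actual data structures):
```rust
use std::collections::HashMap;

// Stand-in for the (inlined instance, caller) key.
#[derive(PartialEq, Eq, Hash, Clone)]
struct InlinedInstance(String);

// Stand-in for the LLVM `DISubprogram` metadata handle.
#[derive(Clone, PartialEq, Debug)]
struct DISubprogram(String);

struct FunctionDebugContext {
    inlined_subprograms: HashMap<InlinedInstance, DISubprogram>,
}

impl FunctionDebugContext {
    // Return the cached DISubprogram for this inlined instance, creating it
    // only the first time the instance is encountered in this caller.
    fn subprogram_for(&mut self, inst: InlinedInstance) -> DISubprogram {
        self.inlined_subprograms
            .entry(inst.clone())
            .or_insert_with(|| DISubprogram(format!("DISubprogram({})", inst.0)))
            .clone()
    }
}

fn main() {
    let mut ctx = FunctionDebugContext { inlined_subprograms: HashMap::new() };
    let a = ctx.subprogram_for(InlinedInstance("Option::unwrap".into()));
    let b = ctx.subprogram_for(InlinedInstance("Option::unwrap".into()));
    // Both inline sites now share one DISubprogram, so LLVM can merge locations.
    assert_eq!(a, b);
}
```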
After this change, the above assembly (with <https://reviews.llvm.org/D159226> as well) now shows that the `panic!` was inlined from `unwrap` in `option.rs` at line 935 into the current function in `lib.rs` at line 0 (line 0 is emitted since it is ambiguous which line to use, as there were two inline sites that led to this same code):
```llvm
.cv_loc 0 1 3 0 # src\lib.rs:3:0
addq $40, %rsp
retq
.cv_inline_site_id 6 within 0 inlined_at 1 0 0
.cv_loc 6 2 935 0 # library\core\src\option.rs:935:0
leaq .Lalloc_5f55955de67e57c79064b537689facea(%rip), %rcx
leaq .Lalloc_e741d4de8cb5801e1fd7a6c6795c1559(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17hde1558f32d5b1c04E
int3
```