Commit graph

54 commits

Scott McMurray
b5376ba601 Remove my scalar_copy_backend_type optimization attempt
I added this back in #111999, but I no longer think it's a good idea:
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has sometimes seemed to make autovectorization worse (#115515)

So this removes it from the codegen crates entirely and instead uses <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> rather than direct `memcpy`, so things will still use load/store for immediates.
2024-04-09 08:51:32 -07:00
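
A minimal sketch (hypothetical function, not from the commit) of the kind of immediate-sized copy this affects; with `typed_place_copy`, codegen is expected to lower it to a load/store pair rather than a `memcpy` call:

```rust
// Hypothetical example: copying an immediate-sized value. This should
// lower to a single load + store in LLVM IR, not a call to memcpy.
pub fn copy_small(src: &u64, dst: &mut u64) {
    *dst = *src;
}

fn main() {
    let src = 42u64;
    let mut dst = 0u64;
    copy_small(&src, &mut dst);
    assert_eq!(dst, 42);
}
```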
Ralf Jung
f391c0793b only set noalias on Box with the global allocator 2024-03-05 15:03:33 +01:00
Erik Desjardins
4724cd4dc4 introduce and use ptradd/inbounds_ptradd instead of gep 2024-02-26 22:45:53 -05:00
bors
432fffa8af Auto merge of #118991 - nikic:scalar-pair, r=nagisa
Separate immediate and in-memory ScalarPair representation

Currently, we assume that ScalarPair is always represented using a two-element struct, both as an immediate value and when stored in memory.

This currently works fairly well, but runs into problems with https://github.com/rust-lang/rust/pull/116672, where a ScalarPair involving an i128 type can no longer be represented as a two-element struct in memory. For example, the tuple `(i32, i128)` needs to be represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy alignment requirements. Using `{ i32, i128 }` instead will result in the second element being stored at the wrong offset (prior to LLVM 18).

Resolve this issue by no longer requiring that the immediate and in-memory type for ScalarPair are the same. The in-memory type will now look the same as for normal struct types (and will include padding filler and similar), while the immediate type stays a simple two-element struct type. This also means that booleans in immediate ScalarPair are now represented as i1 rather than i8, just like we do everywhere else.

The core change here is to llvm_type (which now treats ScalarPair as a normal struct) and immediate_llvm_type (which returns the two-element struct that llvm_type used to produce). The rest is fixing things up to no longer assume these are the same. In particular, this switches places that try to get pointers to the ScalarPair elements to use byte-geps instead of struct-geps.
2024-01-05 14:31:56 +00:00
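
To illustrate the layout constraint, a small sketch (assuming a target where `i128` has 16-byte alignment, e.g. x86-64 with recent Rust/LLVM; `Pair` is a stand-in using `repr(C)` to mirror the offset math): the second field cannot sit at offset 4, which is why the in-memory LLVM type needs the `[3 x i32]` padding filler.

```rust
// Sketch: why (i32, i128) needs padding in memory on targets where
// i128 is 16-byte aligned.
use std::mem::{align_of, offset_of, size_of};

#[repr(C)]
struct Pair(i32, i128);

fn main() {
    println!("align_of::<i128>() = {}", align_of::<i128>()); // 16 on x86-64
    println!("offset of field 1  = {}", offset_of!(Pair, 1)); // 16, not 4
    println!("size of Pair       = {}", size_of::<Pair>()); // 32
}
```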
Camille GILLOT
3ea5cfaa11 Tolerate overaligned MIR constants for codegen. 2023-12-17 22:56:42 +00:00
Nikita Popov
c2fd26a115 Separate immediate and in-memory ScalarPair representation
Currently, we assume that ScalarPair is always represented using
a two-element struct, both as an immediate value and when stored
in memory.

This currently works fairly well, but runs into problems with
https://github.com/rust-lang/rust/pull/116672, where a ScalarPair
involving an i128 type can no longer be represented as a two-element
struct in memory. For example, the tuple `(i32, i128)` needs to be
represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy
alignment requirements. Using `{ i32, i128 }` instead will result in
the second element being stored at the wrong offset (prior to
LLVM 18).

Resolve this issue by no longer requiring that the immediate and
in-memory type for ScalarPair are the same. The in-memory type
will now look the same as for normal struct types (and will include
padding filler and similar), while the immediate type stays a
simple two-element struct type. This also means that booleans in
immediate ScalarPair are now represented as i1 rather than i8,
just like we do everywhere else.

The core change here is to llvm_type (which now treats ScalarPair
as a normal struct) and immediate_llvm_type (which returns the
two-element struct that llvm_type used to produce). The rest is
fixing things up to no longer assume these are the same. In
particular, this switches places that try to get pointers to the
ScalarPair elements to use byte-geps instead of struct-geps.
2023-12-15 17:42:05 +01:00
Ralf Jung
b1613ebc43 codegen: panic when trying to compute size/align of extern type 2023-12-12 08:15:17 +01:00
Ben Kimock
b0a580112b Use immediate_backend_type when reading from a const alloc 2023-12-09 17:13:11 -05:00
bors
0e7f91b75e Auto merge of #118324 - RalfJung:ctfe-read-only-pointers, r=saethlin
compile-time evaluation: detect writes through immutable pointers

This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect if the memory that we find in `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by accidental stabilization of `copy` functions in `const` that can only be called with UB

When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so completely in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need t-lang FCP here. I made the lint immediately show up for dependencies since it is nearly impossible to even trigger this lint without `const_mut_refs` -- the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers), and the three cases of UB that we know about have all been fixed in their respective crates already.

The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.

I just hope perf works out.
2023-12-07 18:11:01 +00:00
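
A minimal sketch (simplified names, not the real rustc type) of the packed representation described: an `AllocId` plus an `immutable` bit in one word.

```rust
// Simplified sketch of pointer provenance that packs an alloc id with
// an immutability flag, as the commit describes for CtfeProvenance.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct Provenance(u64);

const IMMUTABLE_BIT: u64 = 1 << 63;

impl Provenance {
    fn new(alloc_id: u64) -> Self {
        assert_eq!(alloc_id & IMMUTABLE_BIT, 0, "alloc id must fit in 63 bits");
        Provenance(alloc_id)
    }
    /// Mark a pointer created from a shared reference as immutable;
    /// the flag propagates to all pointers derived from it.
    fn make_immutable(self) -> Self {
        Provenance(self.0 | IMMUTABLE_BIT)
    }
    fn immutable(self) -> bool {
        self.0 & IMMUTABLE_BIT != 0
    }
    fn alloc_id(self) -> u64 {
        self.0 & !IMMUTABLE_BIT
    }
}

fn main() {
    let p = Provenance::new(17).make_immutable();
    assert!(p.immutable()); // a write through this pointer would be rejected
    assert_eq!(p.alloc_id(), 17);
}
```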
Ralf Jung
cb86303342 ctfe interpreter: extend provenance so that it can track whether a pointer is immutable 2023-12-07 17:46:36 +01:00
Ralf Jung
5a20bac6b3 more targeted errors when extern types end up in places they should not 2023-12-03 08:11:15 +01:00
Camille GILLOT
ac0683b783 Use correct offset when codegening mir::Const::Indirect. 2023-09-23 14:07:10 +00:00
Camille GILLOT
6992405674 Tolerate non-ptr indirect scalars in codegen. 2023-09-23 14:07:10 +00:00
Ralf Jung
ea22adbabd adjust ConstValue::Slice to work for arbitrary slice types 2023-09-19 20:17:43 +02:00
Ralf Jung
5a0a1ff0cd move ConstValue into mir
this way we have mir::ConstValue and ty::ValTree as reasonably parallel
2023-09-19 11:11:02 +02:00
Ralf Jung
89ac57db4d move required_consts check to general post-mono-check function 2023-09-14 22:30:42 +02:00
Ralf Jung
430c386821 make it more clear which functions create fresh AllocId 2023-09-14 07:27:31 +02:00
Ralf Jung
0f8908da27 cleanup op_to_const a bit; rename ConstValue::ByRef → Indirect 2023-09-14 07:27:30 +02:00
Ralf Jung
551f481ffb use AllocId instead of Allocation in ConstValue::ByRef 2023-09-14 07:26:24 +02:00
Ralf Jung
b2ebf1c23f const_eval and codegen: audit uses of is_zst 2023-08-29 09:03:46 +02:00
Erik Desjardins
04303cfb3a cg_ssa: remove pointee types and pointercast/bitcast-of-ptr 2023-07-29 13:18:20 -04:00
Scott McMurray
bf36193ef6 Add a distinct OperandValue::ZeroSized variant for ZSTs
These tend to have special handling in a bunch of places anyway, so the variant helps remember that.  And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediates` (like I got wrong and caused #109992).  As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
2023-05-31 19:10:28 -07:00
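
The shape of the change, sketched with simplified variants (the real enum in `rustc_codegen_ssa` also carries alignment and other details):

```rust
// Simplified sketch of an operand representation with a dedicated
// zero-sized variant, so ZSTs never need a poison immediate.
#[allow(dead_code)]
enum OperandValue<V> {
    /// The operand lives in memory behind a pointer.
    Ref(V),
    /// A single immediate SSA value.
    Immediate(V),
    /// A ScalarPair held as two immediates.
    Pair(V, V),
    /// A zero-sized type: there is no data, so no backend value at all.
    ZeroSized,
}

fn main() {
    let unit: OperandValue<u32> = OperandValue::ZeroSized;
    // Nothing to materialize; matching on the variant is enough.
    if let OperandValue::ZeroSized = unit {
        println!("nothing to load or store");
    }
}
```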
Oli Scherer
164d041e30 Stop creating intermediate places just to immediately convert them to operands 2023-05-26 15:01:29 +00:00
Tomasz Miąsko
83a5a69a4c Align unsized locals
Allocate extra space for unsized locals and manually align the
storage, since alloca doesn't support dynamic alignment.
2023-05-08 23:40:51 +02:00
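
The manual-alignment trick described above, sketched on a plain byte buffer (the real code applies it to an alloca): over-allocate by `align - 1` bytes and round the start address up.

```rust
// Sketch: dynamic alignment via over-allocation, since the allocation
// primitive (like LLVM's alloca) only supports a static alignment.
fn align_up(addr: usize, align: usize) -> usize {
    assert!(align.is_power_of_two());
    (addr + align - 1) & !(align - 1)
}

fn main() {
    let align = 64; // alignment known only at runtime
    let size = 100;
    let mut storage = vec![0u8; size + align - 1]; // extra space
    let base = storage.as_mut_ptr() as usize;
    let aligned = align_up(base, align);
    assert_eq!(aligned % align, 0);
    assert!(aligned + size <= base + storage.len()); // still in bounds
}
```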
Luqman Aden
c7c042ad31 Address review comments.
Remove bitcasts in OperandRef::extract_field; only pointercasts should
be needed.
2023-05-05 15:13:18 -07:00
Luqman Aden
6a5ee11027 Don't bitcast aggregate field. 2023-05-05 14:25:56 -07:00
Luqman Aden
48af94c080 Operand::extract_field: only cast llval if it's a pointer and replace bitcast w/ pointercast. 2023-05-05 14:25:55 -07:00
Scott McMurray
454bca514a Check CastKind::Transmute sizes in a better way
Fixes #110005
2023-04-06 13:53:10 -07:00
Scott McMurray
9aa9a846b6 Allow transmutes to produce OperandValues instead of always using allocas
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloca`+`store`+`load` at all when it's actually a nop at LLVM level.
2023-04-04 18:44:29 -07:00
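
The kind of no-op transmute this targets, as a small example (hypothetical newtype): the value can flow through as an `OperandValue` instead of round-tripping through a stack slot.

```rust
// A newtype transmute that is a nop at the LLVM level: same size and
// layout, so no alloca + store + load should be needed.
#[repr(transparent)]
struct Wrapper(u32);

fn unwrap_it(w: Wrapper) -> u32 {
    // Sizes match, so this transmute is just a re-labeling of the value.
    unsafe { std::mem::transmute::<Wrapper, u32>(w) }
}

fn main() {
    assert_eq!(unwrap_it(Wrapper(7)), 7);
}
```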
Scott McMurray
49798605a0 Refactor: Separate LocalRef variant for not-evaluated-yet operands 2023-03-24 20:36:59 -07:00
Nikita Popov
30331828cb Use poison instead of undef
In cases where it is legal, we should prefer poison values over
undef values.

This replaces undef with poison for aggregate construction and
for uninhabited types. There are more places where we can likely
use poison, but I wanted to stay conservative to start with.

In particular the aggregate case is important for newer LLVM
versions, which are not able to handle an undef base value during
early optimization due to poison-propagation concerns.
2023-03-16 15:07:04 +01:00
Maybe Waffle
1d42936b18 Prefer doc comments over //-comments in compiler 2022-11-27 11:19:04 +00:00
bjorn3
268e02c387 Remove type argument of array_alloca and rename to byte_array_alloca 2022-10-02 13:42:14 +00:00
Oli Scherer
9c4fb018ca Remove dead broken code from const zst handling in backends 2022-09-06 14:09:49 +00:00
Ralf Jung
4e7aaf1f44 tweak names and output and bless 2022-07-09 07:43:56 -04:00
Ralf Jung
ac265cdc19 review feedback 2022-07-09 07:27:29 -04:00
Ralf Jung
a422b42159 don't allow ZST in ScalarInt
There are several indications that we should not represent a ZST as a ScalarInt:
- We had two ways to have ZST valtrees, either an empty `Branch` or a `Leaf` with a ZST in it.
  `ValTree::zst()` used the former, but the latter could possibly arise as well.
- Likewise, the interpreter had `Immediate::Uninit` and `Immediate::Scalar(Scalar::ZST)`.
- LLVM codegen already had to special-case ZST ScalarInt.

So instead add new ZST variants to those types that did not have other variants
which could be used for this purpose.
2022-07-09 07:27:29 -04:00
DrMeepster
1d1ff36214 fix codegen assertion 2022-06-15 18:39:23 -07:00
DrMeepster
cb417881a9 remove box derefs from codegen 2022-06-15 18:38:26 -07:00
Oli Scherer
d32ce37a17 Mark scalar layout unions so that backends that do not support partially initialized scalars can special case them. 2022-04-05 13:18:21 +00:00
DrMeepster
9d72dd54d0 fix another assumption about box 2022-03-11 17:00:56 -08:00
est31
2ef8af6619 Adopt let else in more places 2022-02-19 17:27:43 +01:00
LegionMammal978
eaf39cbd9e Remove in_band_lifetimes from rustc_codegen_ssa
See #91867 for more information.
2021-12-15 00:41:41 -05:00
est31
1418df5888 Adopt let_else across the compiler
This performs a substitution of code following the pattern:

let <id> = if let <pat> = ... { identity } else { ... : ! };

To simplify it to:

let <pat> = ... else { ... : ! };

By adopting the let_else feature.
2021-10-16 07:18:05 +02:00
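
A concrete instance of the substitution (hypothetical code, for illustration):

```rust
// Before the change, code matching the pattern looked like:
//     let x = if let Some(x) = maybe { x } else { return 0 };
// After adopting let-else:
fn first_doubled(maybe: Option<i32>) -> i32 {
    let Some(x) = maybe else { return 0 };
    x * 2
}

fn main() {
    assert_eq!(first_doubled(Some(21)), 42);
    assert_eq!(first_doubled(None), 0);
}
```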
Andreas Liljeqvist
5b2f757dae Make abi::Abi Copy and remove a *lot* of refs
Remove more refs and clones
2021-09-09 10:41:19 +02:00
Eduard-Mihai Burtescu
4ce933f13f rustc_target: move LayoutOf to ty::layout. 2021-09-02 01:17:14 +03:00
Tomasz Miąsko
838042aa4e Prepare struct_gep for opaque pointers
Implement struct_gep using LLVMBuildStructGEP2, which takes an explicit
type argument instead of deriving it from a pointer type.
2021-08-04 15:51:30 +02:00
Ralf Jung
626605cea0 consistently treat None-tagged pointers as ints; get rid of some deprecated Scalar methods 2021-07-14 18:17:49 +02:00
Nikita Popov
2ce1addeba Don't access pointer element type for nontemporal store
Simply shift the bitcast from the store to the load, so that
we can use the destination type. I'm not sure the bitcast is
really necessary, but I'm keeping it for now.
2021-07-09 22:15:05 +02:00
Nikita Popov
4560efe46c Pass type when creating load
This makes load generation compatible with opaque pointers.

The generation of nontemporal copies still accesses the pointer
element type, as fixing this requires more movement.
2021-07-09 22:14:44 +02:00