Do not equate `Const`'s ty in `super_combine_const`
Fixes #114456
In #125451 we started relating the `Const`s' tys outside of a probe, so it was no longer simply an assertion to catch bugs.
This was done so that when we _do_ provide a wrongly typed const argument to an item, and we wind up relating it with some other instantiation, we'll have a `TypeError` that we can bubble up and use to taint the resulting MIR, allowing const eval to skip evaluation.
In this PR I instead change `ConstArgHasType` to correctly handle checking the types of const inference variables. Previously, if we had something like `impl<const N: u32> Trait for [(); N]`, then when using the impl we would instantiate it with inference variables and check that `?x: u32` is of type `u32`, which would succeed. Only later would we infer `?x` to some `Const` of type `usize`.
We now stall on `?x` in `ConstArgHasType` until it has a concrete value whose type we can determine. This allows us to fail when using the erroneous implementation of `Trait`, which in turn allows us to taint the MIR.
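A minimal sketch of the erroneous pattern described above (illustrative, not taken from the PR's tests; it intentionally fails to compile, which is exactly the error we want to surface and use for tainting):

```rust
trait Trait {}

// `N` is declared with type `u32`, but it is used as an array length, which
// must be a `usize`, so this impl is ill-typed. With this PR,
// `ConstArgHasType` stalls on a const inference variable standing in for `N`
// until its value (and hence its type) is known, so the mismatch produces a
// `TypeError` that taints the MIR instead of reaching const eval.
impl<const N: u32> Trait for [(); N] {}
```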
Long term we intend to remove the `ty` field on `Const`, so we would have no way of accessing the `ty` of a const inference variable anyway and would have to do this. I did not fully update `ConstArgHasType` to avoid using the `ty` field, as that's not entirely possible right now; we would need to look up `ConstArgHasType` candidates in the env.
---
As for _why_ I think we should do this: relating the types of `Const`s is not necessary for soundness of the type system. Originally this check started off as a plain `==` in `super_relate_consts` and has gradually grown in complexity as we support more complicated types. However, it was never actually required to ensure that const arguments are correctly typed for their parameters.
The way we currently check that a const argument has the correct type is a little convoluted and confusing (and will hopefully become less weird over time). Every const argument has an anon const whose return type is set to the type of the const parameter it is an argument to. When type checking the anon const, regular type checking rules require that the expression has the same type as the return type. This effectively ensures that, no matter what, every const argument _always_ has the correct type.
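A minimal illustration of this mechanism (a hypothetical example, not taken from the PR; it intentionally fails to compile):

```rust
struct Foo<const N: usize>;

// The const argument `{ 1u8 }` is lowered to an anonymous const whose return
// type is set to `usize`, the type of the parameter `N`. Ordinary type
// checking of that anon const then reports a `u8`/`usize` mismatch, so a
// wrongly typed argument is caught.
type Broken = Foo<{ 1u8 }>;
```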
An extra bit of complexity is that during `hir_ty_lowering` we do not represent everything as a `ConstKind::Unevaluated` corresponding to the anon const. For generic parameters, e.g. `[(); N]`, we simply represent them as `ConstKind::Param`, as we do not want `ConstKind::Unevaluated` with generic substs on stable under min const generics. The anon const still gets type checked, resulting in errors about type mismatches.
Eventually we intend not to create anon consts for all const arguments (for example for `ConstKind::Param`) and instead check that the argument type is correct via `ConstArgHasType` obligations (these effectively also act as a check that the anon consts have the correct return type set).
What this all means is that the only time we should ever have mismatched types when relating two `Const`s is if we have messed up our logic for ensuring that const arguments are of the correct type. Having this not be an assert is:
- Confusing as it may incorrectly lead people to believe this is an important check that is actually required
- Opens the possibility for bugs or behaviour reliant on this (unnecessary) check existing
---
This PR makes two tests go from pass->ICE (`generic_const_exprs/ice-125520-layout-mismatch-mulwithoverflow.rs` and `tests/crashes/121858.rs`). This is caused by the fact that we evaluate anon consts even if their where clauses do not hold; that is a pre-existing issue which only affects `generic_const_exprs`. I am comfortable exposing the brokenness of `generic_const_exprs` a bit more with this PR.
This PR makes a test go from ICE->pass (`const-generics/issues/issue-105821.rs`). I have no idea why this PR affects that, but I believe that ICE is an unrelated issue to do with the fact that under `generic_const_exprs`/`adt_const_params` we do not handle lifetimes in const parameter types correctly. This PR is likely just masking that bug.
Note: this PR doesn't re-introduce the assertion that the two consts' tys are equal. I'm not really sure how I feel about this, but tbh it has caused more ICEs than it's found lately, so 🤷♀️
r? `@oli-obk` `@compiler-errors`
We already handle this case this way on the coherence side, and it matches the new solver's behaviour. While there is some breakage around type-alias-impl-trait (see the new "type annotations needed" errors in tests/ui/type-alias-impl-trait/issue-84660-unsoundness.rs), no stable code breaks, and no new stable code is accepted.
Split out `ty::AliasTerm` from `ty::AliasTy`
Splitting out `AliasTerm` (for use in projection and normalization goals) and `AliasTy` (for use in `ty::Alias`).
r? lcnr
Cleanup: Rename `HAS_PROJECTIONS` to `HAS_ALIASES` etc.
The name of the bitflag `HAS_PROJECTIONS` and of its corresponding method `has_projections` is quite historical, dating back to a time when projections were the only kind of alias type.
I think it's time to update it to clear up any potential confusion for newcomers and to reduce unnecessary friction during contributor onboarding.
r? types
Make inductive cycles always ambiguous
This makes inductive cycles always result in ambiguity rather than be treated like a stack-dependent error.
This has some interactions with specialization, and so breaks a few UI tests that I don't agree should've ever worked in the first place, and also breaks a handful of crates in a way that I don't believe is a problem.
On the bright side, it puts us in a better spot when it comes to eventually enabling coinduction everywhere.
## Results
This was cratered in https://github.com/rust-lang/rust/pull/116494#issuecomment-2008657494, which boils down to two regressions:
* `lu_packets` - This code should have never compiled in the first place. More below.
* **ALL** other regressions are due to `commit_verify@0.11.0-beta.1` (edit: and `commit_verify@0.10.x`) - This actually seems to be fixed in version `0.11.0-beta.5`, which is the *most* up to date version, but it's still prerelease on crates.io so I don't think cargo ends up picking `beta.5` when building dependent crates.
### `lu_packets`
Firstly, this crate uses specialization, so I think it's automatically worth breaking. However, I've minimized [the regression](https://crater-reports.s3.amazonaws.com/pr-116494-3/try%23d614ed876e31a5f3ad1d0fbf848fcdab3a29d1d8/gh/lcdr.lu_packets/log.txt) to:
```rust
// Upstream crate
pub trait Serialize {}
impl Serialize for &() {}
impl<S> Serialize for &[S] where for<'a> &'a S: Serialize {}
// ----------------------------------------------------------------------- //
// Downstream crate
#![feature(specialization)]
#![allow(incomplete_features, unused)]
use upstream::Serialize;
trait Replica {
    fn serialize();
}
impl<T> Replica for T {
    default fn serialize() {}
}
impl<T> Replica for Option<T>
where
    for<'a> &'a T: Serialize,
{
    fn serialize() {}
}
```
Specifically this fails when computing the specialization graph for the `downstream` crate.
The code ends up cycling on `&[?0]: Serialize` when we equate `&?0 = &[?1]` during impl matching, which ends up needing to prove `&[?1]: Serialize`, which, since cycles are treated like ambiguity, ends up in a **fatal overflow**. For some reason this requires two crates; squashing them into one crate doesn't work.
Side-note: This code is subtly order dependent. When minimizing, I ended up having the code start failing on `nightly` very easily after removing and reordering impls. This seems to me all the more reason to remove this behavior altogether.
## Side-note: Item Bounds (edit: this was fixed independently in #121123)
Due to the changes in #120584 where we now consider an alias's item bounds *and* all the item bounds of the alias's nested self type aliases, I've had to add e6b64c61941120f734657106ae2479d05b463197, which is a hack to make sure we're not eagerly normalizing bounds that have nothing to do with the predicate we're trying to solve.
This is fixed in a more principled way in #121123.
---
r? lcnr for an initial review
Rollup of 8 pull requests
Successful merges:
- #114009 (compiler: allow transmute of ZST arrays with generics)
- #122195 (Note that the caller chooses a type for type param)
- #122651 (Suggest `_` for missing generic arguments in turbofish)
- #122784 (Add `tag_for_variant` query)
- #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
- #122873 (Merge my contributor emails into one using mailmap)
- #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
- #122888 (add a couple more tests)
r? `@ghost`
`@rustbot` modify labels: rollup
clean up `Sized` checking
This PR cleans up `sized_constraint` and related functions to make them simpler and faster. This should not make more or less code compile, but it can change error output in some rare cases.
## enums and unions are `Sized`, even if they are not WF
The previous code had some special handling for enums, which made them sized if and only if the last field of each variant is sized. For example, given this definition (which is not WF):
```rust
enum E<T1: ?Sized, T2: ?Sized, U1: ?Sized, U2: ?Sized> {
    A(T1, T2),
    B(U1, U2),
}
```
the enum was sized if and only if `T2` and `U2` were sized, while `T1` and `U1` were ignored for `Sized` checking. After this PR this enum will always be sized.
Unsized enums are not a thing in Rust and removing this special case allows us to return an `Option<Ty>` from `sized_constraint`, rather than a `List<Ty>`.
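A hedged illustration (mine, not from the PR) of why the enum special case buys nothing:

```rust
// A struct may be dynamically sized through its last field...
struct Tail<T: ?Sized> {
    len: usize,
    data: T, // `Tail<[u8]>` is an unsized type
}

// ...but there is no enum equivalent: a variant with an unsized field is
// rejected outright, so treating enums as always `Sized` loses nothing.
// enum E<T: ?Sized> { A(T) } // error: the size for values of type `T` cannot
//                            // be known at compilation time
```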
Similarly, the old code made a union defined like this
```rust
union Union<T: ?Sized, U: ?Sized> {
    head: T,
    tail: U,
}
```
sized if and only if `U` is sized, completely ignoring `T`. This just makes no sense at all and now this union is always sized.
## apply the "perf hack" to all (non-error) types, instead of just type parameters
This "perf hack" skips evaluating `sized_constraint(adt): Sized` if `sized_constraint(adt): Sized` exactly matches a predicate defined on `adt`, for example:
```rust
// `Foo<T>: Sized` iff `T: Sized`, but we know `T: Sized` from a predicate of `Foo`
struct Foo<T /*: Sized */>(T);
```
Previously this was only applied to type parameters; now it is applied to every type. This means that, for example, this type is now always sized:
```rust
// Note that this definition is WF, but the type `S<T>` is not WF in the global/empty ParamEnv
struct S<T>([T]) where [T]: Sized;
```
I don't anticipate this to affect compile time of any real-world program, but it makes the code a bit nicer and it also makes error messages a bit more consistent if someone does write such a cursed type.
## tuples are sized if the last type is sized
The old solver already has this behavior and this PR also implements it for the new solver and `is_trivially_sized`. This makes it so that tuples work more like a struct defined like this:
```rust
struct TupleN<T1, T2, /* ... */ Tn: ?Sized>(T1, T2, /* ... */ Tn);
```
This might improve the compile time of programs with large tuples a little, but is mostly also a consistency fix.
## `is_trivially_sized` for more types
This function is used in post-typeck code (borrowck, const eval, codegen) to skip evaluating `T: Sized` in some cases. It will now return `true` in more cases, most notably for `UnsafeCell<T>` and `ManuallyDrop<T>` where `T` is itself trivially sized.
I'm anticipating that this change will improve compile time for some real world programs.
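A hedged illustration of the kinds of types that now take the fast path (example mine, not from the PR):

```rust
use std::cell::UnsafeCell;
use std::mem::ManuallyDrop;

// With this change, both field types below are recognized as trivially
// `Sized` because their inner types are, so post-typeck consumers
// (borrowck, const eval, codegen) can skip a full `T: Sized` check.
struct Wrapped {
    a: UnsafeCell<u64>,
    b: ManuallyDrop<[u8; 4]>,
}

fn main() {
    let _ = Wrapped {
        a: UnsafeCell::new(0),
        b: ManuallyDrop::new([0; 4]),
    };
}
```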