This makes `Obligation` two words bigger, but avoids allocating a lot of
the time.
I previously tried this in #73983 and it didn't help much, but local
timings look more promising now.
This crate actually had a typo `'ctx` in one of its functions:
```diff
-pub fn same_type_modulo_infer(a: Ty<'tcx>, b: Ty<'ctx>) -> bool {
+pub fn same_type_modulo_infer<'tcx>(a: Ty<'tcx>, b: Ty<'tcx>) -> bool {
```
Implementation of GATs outlives lint
See #87479 for background. Closes #87479.
The basic premise of this lint/error is to require the user to write where clauses on a GAT when those bounds can be implied or proven from any function on the trait returning that GAT.
## Intuitive Explanation (Attempt) ##
Let's take this trait definition as an example:
```rust
trait Iterable {
    type Item<'x>;
    fn iter<'a>(&'a self) -> Self::Item<'a>;
}
```
Let's focus on the `iter` function. The first thing to realize is that we know that `Self: 'a` because of `&'a self`. If an impl wants `Self::Item` to contain any data with references, then those references must be derived from `&'a self`. Thus, they must live only as long as `'a`. Furthermore, because of the `Self: 'a` implied bound, they must live only as long as `Self`. Since `'a` is used in place of `'x`, it is reasonable to assume that any value of `Self::Item<'x>`, and thus `'x`, will only be able to live as long as `Self`. Therefore, we require this bound on `Item` in the trait.
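Concretely, the clause the lint asks for here would look roughly like the sketch below (GATs were still behind the `generic_associated_types` feature gate at the time of this PR):
```rust
#![feature(generic_associated_types)]

trait Iterable {
    // The outlives lint asks for this `where Self: 'x` clause, because every
    // value of `Self::Item<'x>` returned by `iter` is derived from `&'a self`.
    type Item<'x>
    where
        Self: 'x;

    fn iter<'a>(&'a self) -> Self::Item<'a>;
}
```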
As another example:
```rust
trait Deserializer<T> {
    type Out<'x>;
    fn deserialize<'a>(&self, input: &'a T) -> Self::Out<'a>;
}
```
The intuition is similar here, except rather than a `Self: 'a` implied bound, we have a `T: 'a` implied bound. Thus, the data in `Self::Out<'a>` is derived from `&'a T`, and so it is reasonable to expect that any value of `Self::Out<'x>`, and thus `'x`, will only be able to live as long as `T`.
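Under the same reasoning, the expected clause here would be roughly the following (again a sketch, not text from the PR):
```rust
#![feature(generic_associated_types)]

trait Deserializer<T> {
    // Data in `Self::Out<'x>` is borrowed from `&'x T`, so the lint asks for `T: 'x`.
    type Out<'x>
    where
        T: 'x;

    fn deserialize<'a>(&self, input: &'a T) -> Self::Out<'a>;
}
```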
## Implementation Algorithm ##
* Given a GAT `<P0 as Trait<P1..Pi>>::G<Pi...Pn>` declared as `trait T<A1..Ai> for A0 { type G<Ai...An>; }` and used in the return type of an associated function `F`
* Given env `E` (including implied bounds) for `F`
* For each lifetime parameter `'a` in `P0...Pn`:
  * For each other type parameter `Pi != 'a` in `P0...Pn`: // FIXME: this includes lifetime parameters too
    * If `E => (Pi: 'a)`:
      * Require where clause `Ai: 'a` (a toy sketch of this procedure follows the list)
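For illustration only, here is a small toy sketch of that procedure; this is not the compiler's implementation, and the representation, helper names, and the final translation back to the declared parameters are assumptions made for the example.
```rust
use std::collections::HashSet;

// A GAT argument `Pi`, which is either a lifetime or a type, written as a string.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Param<'s> {
    Lifetime(&'s str), // e.g. "'a"
    Type(&'s str),     // e.g. "Self", "T"
}

/// `gat_args[i]` is the argument `Pi` passed to the GAT, paired with the
/// declared parameter `Ai` it instantiates; `env` holds pairs `(P, 'a)`
/// meaning the function's environment proves `E => (P: 'a)`.
fn required_clauses<'s>(
    gat_args: &[(Param<'s>, &'s str)],
    env: &HashSet<(Param<'s>, Param<'s>)>,
) -> Vec<(&'s str, &'s str)> {
    let mut clauses = Vec::new();
    // For each lifetime parameter `'a` in `P0...Pn`:
    for &(lt, lt_decl) in gat_args {
        if !matches!(lt, Param::Lifetime(_)) {
            continue;
        }
        // For each other parameter `Pi != 'a` in `P0...Pn`:
        for &(p, p_decl) in gat_args {
            if p == lt {
                continue;
            }
            // If `E => (Pi: 'a)`, require the clause on the GAT, phrased in
            // terms of the declared parameters (`Ai: 'x` rather than `Pi: 'a`).
            if env.contains(&(p, lt)) {
                clauses.push((p_decl, lt_decl));
            }
        }
    }
    clauses
}

fn main() {
    // `Iterable::iter` returns `Self::Item<'a>`, so P0 = Self (declared as Self)
    // and P1 = 'a (declared as 'x); `&'a self` gives the implied bound `Self: 'a`.
    let gat_args = [(Param::Type("Self"), "Self"), (Param::Lifetime("'a"), "'x")];
    let env: HashSet<_> = [(Param::Type("Self"), Param::Lifetime("'a"))]
        .into_iter()
        .collect();
    // The lint therefore requires `Self: 'x` on `type Item<'x>`.
    assert_eq!(required_clauses(&gat_args, &env), vec![("Self", "'x")]);
}
```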
## Follow-up questions ##
* What should we do when we don't pass params exactly?
For this example:
```rust
trait Des {
    type Out<'x, D>;
    fn des<'z, T>(&self, data: &'z Wrap<T>) -> Self::Out<'z, Wrap<T>>;
}
```
Should we be requiring a `D: 'x` clause? We pass `Wrap<T>` as `D` and `'z` as `'x`, and should be able to prove that `Wrap<T>: 'z`.
r? `@nikomatsakis`
Revise never type fallback algorithm
This is a rebase of https://github.com/rust-lang/rust/pull/84573, but dropping the stabilization of never type (and the accompanying large test diff).
Each commit builds & has tests updated alongside it, and could be reviewed in a more or less standalone fashion. But it may make more sense to review the PR as a whole, I'm not sure. It should be noted that tests being updated isn't really a good indicator of final behavior -- never_type_fallback is not enabled by default in this PR, so we can't really see the full effects of the commits here.
This combines the work by Niko, which is [documented in this gist](https://gist.github.com/nikomatsakis/7a07b265dc12f5c3b3bd0422018fa660), with some additional rules aimed at specific known patterns that regress under Niko's algorithm alone. We build these from an intuition that:
* In general, fallback to `()` is *sound* in all cases
* But, in general, we *prefer* fallback to `!` as it accepts more code, particularly code written to intentionally use `!` (e.g., `Result`s with an `Infallible`/`!` variant).
When evaluating Niko's proposed algorithm, we find that there are certain cases where fallback to `!` leads to compilation failures in real-world code, and fallback to `()` fixes those errors. In order to allow for stabilization, we need to fix a good portion of these patterns.
The final rule set this PR proposes is that, by default, we fallback from `?T` to `!`, with the following exceptions:
1. If `?T: Foo` and `Bar::Baz = ?T` and `(): Foo`, then fallback to `()`
2. Per [Niko's algorithm](https://gist.github.com/nikomatsakis/7a07b265dc12f5c3b3bd0422018fa660#proposal-fallback-chooses-between--and--based-on-the-coercion-graph), the "live" `?T` also fallback to `()`.
The first rule is necessary to address a fairly common pattern which boils down to something like the snippet below. Without rule 1, we do not see the closure's return type as needing a () fallback, which leads to compilation failure.
```rust
#![feature(never_type_fallback)]
trait Bar { }
impl Bar for () { }
impl Bar for u32 { }
fn foo<R: Bar>(_: impl Fn() -> R) {}
fn main() {
    foo(|| panic!());
}
```
r? `@jackh726`
We now fallback type variables using the following rules (a standalone sketch of these rules follows the list):
* Construct a coercion graph `A -> B` where `A` and `B` are unresolved type variables or the `!` type.
* Let `D` be those variables that are reachable from `!`.
* Let `N` be those variables that are reachable from a variable not in `D`.
* All variables in `D \ N` fallback to `!`.
* All variables in `D & N` fallback to `()`.
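For intuition only, here is a rough standalone sketch of those reachability rules over a toy coercion graph; this is not rustc's implementation, and the node numbering, helper names, and `Fallback` enum are invented for the example.
```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum Fallback {
    Never, // fallback to `!`
    Unit,  // fallback to `()`
}

/// Nodes reachable from any of `starts` by following `edges` (A -> B coercions).
fn reachable_from(
    starts: impl IntoIterator<Item = usize>,
    edges: &[(usize, usize)],
) -> HashSet<usize> {
    let mut seen: HashSet<usize> = starts.into_iter().collect();
    let mut stack: Vec<usize> = seen.iter().copied().collect();
    while let Some(node) = stack.pop() {
        for &(a, b) in edges {
            if a == node && seen.insert(b) {
                stack.push(b);
            }
        }
    }
    seen
}

/// Node 0 stands for `!`; the unresolved type variables are numbered 1..=num_vars.
fn compute_fallback(num_vars: usize, edges: &[(usize, usize)]) -> Vec<(usize, Fallback)> {
    // D: nodes reachable from `!`.
    let d = reachable_from([0], edges);
    // N: nodes reachable from some variable that is *not* in D.
    let n = reachable_from((1..=num_vars).filter(|v| !d.contains(v)), edges);

    (1..=num_vars)
        .filter(|v| d.contains(v))
        .map(|v| {
            // D \ N falls back to `!`; D & N falls back to `()`.
            let fb = if n.contains(&v) { Fallback::Unit } else { Fallback::Never };
            (v, fb)
        })
        .collect()
}

fn main() {
    // Only `!` coerces into ?1: ?1 is in D but not N, so it falls back to `!`.
    assert_eq!(compute_fallback(1, &[(0, 1)]), vec![(1, Fallback::Never)]);

    // `!` coerces into ?1, but so does ?2, which is not reachable from `!`,
    // so ?1 is in both D and N and falls back to `()` instead.
    assert_eq!(
        compute_fallback(2, &[(0, 1), (2, 1)]),
        vec![(1, Fallback::Unit)]
    );
}
```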
Canonicalize consts before calling the `try_unify_abstract_consts` query
Fixes #88022. Fixes #86953. Fixes #77708. Fixes #82034. Fixes #85031.
These ICEs were all caused by calling the `try_unify_abstract_consts` query with inference vars in substs.
r? `@lcnr`
I didn't like the sub-unify code executing when a predicate was
ENQUEUED; that felt fragile. I would have preferred to move the
sub-unify code so that it only occurred during generalization, but
that impacted diagnostics, so having it also occur when we process
subtype predicates felt pretty reasonable. (I guess we only need one
or the other, but I kind of prefer both, since the generalizer
ultimately feels like the *right* place to guarantee the properties we
want.)
This allows opaque type inference to check for defining uses without having to pass down that def id via function arguments to every method that could possibly cause an opaque type to be compared with a concrete type.
Previously, we would 'forget' that we had `'static` regions in some
places during trait evaluation. This led to us producing
`EvaluatedToOkModuloRegions` when we could have produced
`EvaluatedToOk`, causing us to perform unnecessary work.
This PR preserves `'static` regions when we canonicalize a predicate for
`evaluate_obligation`, and when we 'freshen' a predicate during trait
evaluation. This ensures that evaluating a predicate containing
`'static` regions can produce `EvaluatedToOk` (assuming that we
don't end up introducing any region dependencies during evaluation).
Building off of this improved caching, we use
`predicate_must_hold_considering_regions` during fulfillment of
projection predicates to see if we can skip performing additional work.
We already do this for trait predicates, but doing this for projection
predicates led to mixed performance results without the above caching
improvements.
Found with https://github.com/est31/warnalyzer.
Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
wants to use it in the future?
- Don't change rustc_serialize:
  I plan to scrap most of the json module in the near future (see
  https://github.com/rust-lang/compiler-team/issues/418) and fixing the
  tests needed more work than I expected.
TODO: check if any of the comments on the deleted code should be kept.
Rust contains various size checks conditional on `target_arch = "x86_64"`,
but these checks were never intended to apply to
x86_64-unknown-linux-gnux32. Add `target_pointer_width = "64"` to the
conditions.
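As a hypothetical illustration (not lines from the actual diff), the gating pattern might look like this; `Node` and the asserted size of 16 bytes are made up for the example:
```rust
// A type whose size check assumes 8-byte pointers.
struct Node {
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

// Compile-time size check: this only builds if `size_of::<Node>() == 16`.
// Gating on `target_pointer_width = "64"` as well keeps the check on ordinary
// x86_64 targets but skips it on x86_64-unknown-linux-gnux32, where pointers
// are 4 bytes and `Node` is only 8 bytes.
#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
const _: [(); 16] = [(); std::mem::size_of::<Node>()];
```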
Introduce `TypeVisitor::BreakTy`
Implements MCP rust-lang/compiler-team#383.
r? `@ghost`
cc `@lcnr` `@oli-obk`
~~Blocked on FCP in rust-lang/compiler-team#383.~~
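For context, a visitor using the new associated type might look roughly like the sketch below; the exact paths and method signatures are assumptions about `rustc_middle` around the time of this change, not code from the PR.
```rust
use std::ops::ControlFlow;

use rustc_middle::ty::{self, Ty, TypeFoldable, TypeVisitor};

// Finds the first type parameter occurring in a type. Before `BreakTy`,
// visitors could only signal "stop" without carrying a value out of the walk.
struct FindParam;

impl<'tcx> TypeVisitor<'tcx> for FindParam {
    type BreakTy = Ty<'tcx>;

    fn visit_ty(&mut self, ty: Ty<'tcx>) -> ControlFlow<Self::BreakTy> {
        match ty.kind() {
            ty::Param(_) => ControlFlow::Break(ty),
            _ => ty.super_visit_with(self),
        }
    }
}

// Usage: `some_ty.visit_with(&mut FindParam)` yields `ControlFlow::Break(param_ty)`
// for the first parameter found, or `ControlFlow::Continue(())` if there is none.
```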