Document some builtin impls in the next solver
This does not cover all builtin impls, but the ones that I was able to go over within a cycle.
r? `@lcnr`
Let me know if the place isn't correct for these, or if you'd like me to change how the impls are presented ^^
Add stubs in IR and ABI for `f16` and `f128`
This is the very first step toward the changes in https://github.com/rust-lang/rust/pull/114607 and the [`f16` and `f128` RFC](https://rust-lang.github.io/rfcs/3453-f16-and-f128.html). It adds the types to `rustc_type_ir::FloatTy` and `rustc_abi::Primitive`, and just propagates those out as `unimplemented!` stubs where necessary.
These types do not parse yet, so there is no feature gate, and it should be okay to use `unimplemented!`.
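For a rough sense of the shape of the change, here is a hedged sketch (illustrative only, not the actual diff; the real enum lives in `rustc_type_ir::FloatTy`) of new variants being wired through with `unimplemented!` stubs:

```rust
#[allow(dead_code)]
pub enum FloatTy {
    F16, // new
    F32,
    F64,
    F128, // new
}

// Code paths that cannot handle the new types yet simply stub them out.
fn lower_float(ty: FloatTy) -> &'static str {
    match ty {
        FloatTy::F32 => "float",
        FloatTy::F64 => "double",
        FloatTy::F16 | FloatTy::F128 => unimplemented!("f16 and f128 are not yet supported"),
    }
}

fn main() {
    assert_eq!(lower_float(FloatTy::F64), "double");
}
```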
The next steps will probably be AST support with parsing and the feature gate.
r? `@compiler-errors`
cc `@Nilstrieb`, who suggested breaking the PR up in https://github.com/rust-lang/rust/pull/120645#issuecomment-1925900572
No need to `validate_alias_bound_self_from_param_env` in `assemble_alias_bound_candidates`
We already fully normalize the self type before we reach `assemble_alias_bound_candidates`, so there's no reason to double check that a projection is truly rigid by checking param-env bounds.
I think this is also blocked on us making sure to always normalize opaques: #120549.
r? lcnr
Do not assemble candidates for default impls
There is no reason (as far as I can tell?) that we should assemble an impl candidate for a default impl. This candidate itself does not prove that the impl holds, and any time that it *does* hold, there will be a more specializing non-default impl that is also assembled.
This is because `default impl<T> Foo for T {}` actually expands to `impl<T> Foo for T where T: Foo {}`. The only way to satisfy that where clause (without coinduction) is via *another* implementation that does hold -- precisely an impl that specializes it.
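As a minimal sketch of why the expanded form can never hold on its own (a hypothetical `Foo` trait, not code from this PR):

```rust
// The only impl of `Foo` requires `T: Foo` to already hold, mirroring the
// expansion of `default impl<T> Foo for T {}` described above.
trait Foo {}
impl<T> Foo for T where T: Foo {}

#[allow(dead_code)]
fn requires_foo<T: Foo>() {}

fn main() {
    // This call would not compile: no other impl can prove `u32: Foo`, so the
    // where clause on the blanket impl is never satisfied.
    // requires_foo::<u32>();
}
```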
This should fix the specialization-related regressions for #116494. That leaves one root crate regression that has nothing to do with specialization, which I think is acceptable to regress.
r? lcnr cc ``@rust-lang/types``
cc #31844
Dejargonize `subst`
In favor of #110793, this replaces almost every occurrence of `subst` and `substitution` in the rustc codebase. They still remain in subtrees under `src/tools/`, like clippy, and in test code (I'd like to replace those after this).
Harmonize `AsyncFn` implementations, make async closures conditionally impl `Fn*` traits
This PR implements several changes to the built-in and libcore-provided implementations of `Fn*` and `AsyncFn*` to address two problems:
1. async closures do not implement the `Fn*` family of traits, leading to breakage: https://crater-reports.s3.amazonaws.com/pr-120361/index.html
2. *references* to async closures do not implement `AsyncFn*`, as a consequence of the existing blanket impls of the shape `AsyncFn for F where F: Fn, F::Output: Future`.
In order to fix (1.), we implement `Fn` traits appropriately for async closures. It turns out that async closures can:
* always implement `FnOnce`, meaning that they're drop-in compatible with `FnOnce`-bound combinators like `Option::map`.
* conditionally implement `Fn`/`FnMut` if they have no captures, which means that existing usages of async closures should *probably* work without breakage (crater checking this: https://github.com/rust-lang/rust/pull/120712#issuecomment-1930587805).
In order to fix (2.), we make all of the built-in callables implement `AsyncFn*` via built-in impls, and instead adjust the blanket impls for `AsyncFn*` provided by libcore to match the blanket impls for `Fn*`.
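As a rough sketch of what the `Fn*` impls from (1.) enable (nightly-only, assuming `#![feature(async_closure)]`; the actual bounds are as described above, this is just an illustration):

```rust
#![feature(async_closure)]

fn main() {
    // Async closures implement `FnOnce`, so `FnOnce`-bound combinators accept them.
    let fut = Some(1).map(async |x| x + 1);
    let _ = fut;

    // A capture-free async closure can additionally satisfy `Fn` bounds.
    fn call_twice<F: Fn(i32) -> R, R>(f: F) -> (R, R) {
        (f(1), f(2))
    }
    let (_a, _b) = call_twice(async |x| x + 1);
}
```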
For a rigid projection, recursively look at the self type's item bounds to fix the `associated_type_bounds` feature
Given a deeply nested rigid projection like `<<<T as Trait1>::Assoc1 as Trait2>::Assoc2 as Trait3>::Assoc3`, this PR adjusts both trait solvers to look at the item bounds for all of `Assoc3`, `Assoc2`, and `Assoc1` in order to satisfy a goal. We do this because the item bounds for projections may contain relevant bounds for *other* nested projections when the `associated_type_bounds` (ATB) feature is enabled. For example:
```rust
#![feature(associated_type_bounds)]
trait Trait1 {
    type Assoc1: Trait2<Assoc2: Foo>;
    // Item bounds for `Assoc1` are:
    // `<Self as Trait1>::Assoc1: Trait2`
    // `<<Self as Trait1>::Assoc1 as Trait2>::Assoc2: Foo`
}

trait Trait2 {
    type Assoc2;
}

trait Foo {}

fn hello<T: Trait1>(x: <<T as Trait1>::Assoc1 as Trait2>::Assoc2) {
    fn is_foo(_: impl Foo) {}
    is_foo(x);
    // Currently fails with:
    // ERROR the trait bound `<<Self as Trait1>::Assoc1 as Trait2>::Assoc2: Foo` is not satisfied
}
```
This has been a long-standing place of brokenness for ATBs, and is also part of the reason why ATBs currently desugar so differently in various positions (i.e. sometimes desugaring to param-env bounds, sometimes desugaring to RPITs, etc). For example, in RPIT and TAIT position, `impl Foo<Bar: Baz>` currently desugars to `impl Foo<Bar = impl Baz>` because we do not currently take advantage of these nested item bounds if we desugared them into a single set of item bounds on the opaque. This is obviously both strange and unnecessary if we just take advantage of these bounds as we should.
## Approach
This PR repeatedly peels off each projection of a given goal's self type and tries to match its item bounds against a goal, repeating with the self type of the projection. This is pretty straightforward to implement in the new solver, only requiring us to loop on the self type of a rigid projection to discover inner rigid projections, and we also need to introduce an extra probe so we can normalize them.
In the old solver, we can do essentially the same thing; however, we rely on the fact that projections *should* already be normalized. This is obviously not always the case -- when a projection is not fully normalized (e.g. its self type still contains infer vars), we bail out with ambiguity once we hit an infer var for the self type.
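A schematic sketch of the peeling loop described above (a toy representation, not actual rustc code):

```rust
#[allow(dead_code)]
enum Ty {
    Param(&'static str),
    // `<self_ty as SomeTrait>::assoc`
    Projection { self_ty: Box<Ty>, assoc: &'static str },
    Infer,
}

#[allow(dead_code)]
fn assemble_from_nested_item_bounds(mut self_ty: &Ty) {
    // Walk e.g. `<<<T as Trait1>::Assoc1 as Trait2>::Assoc2 as Trait3>::Assoc3`
    // from the outside in, consulting each projection's item bounds.
    while let Ty::Projection { self_ty: inner, assoc } = self_ty {
        // ... try `assoc`'s item bounds against the goal here (omitted) ...
        let _ = assoc;
        // In the old solver the self type may be unnormalized; if we hit an
        // inference variable we bail with ambiguity instead of recursing.
        if let Ty::Infer = **inner {
            return; // ambiguity
        }
        self_ty = &**inner;
    }
}

fn main() {}
```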
## Caveats
⚠️ In the old solver, this has the side-effect of actually stalling some higher-ranked trait goals of the form `for<'a> <?0 as Tr<'a>>: Tr2`. Because we stall them, they are no longer eagerly treated as errors -- this causes some existing `known-bug` tests to go from fail to pass.
I'm pretty unconvinced that this is a problem since we make code that we expect to pass in the *new* solver also pass in the *old* solver, though this obviously doesn't solve the *full* problem.
## And then also...
We also adjust the desugaring of ATBs to always desugar to a regular associated type bound rather than sometimes to an `impl Trait`, **except** when the ATB is present in a `dyn Trait`. We need to lower `dyn Trait<Assoc: Bar>` to `dyn Trait<Assoc = impl Bar>` because object types need all of their associated types specified.
I would also be in favor of splitting out the ATB feature and/or removing support for object types in order to stabilize just the set of positions for which the ATB feature is consistent (i.e. always elaborates to a bound).
Remove unused args from functions
`#[instrument]` suppresses unused-argument warnings for a function, *and* it suppresses unused-method warnings too! This PR removes things which are only used via `#[instrument]`, and fixes some other errors (privacy?) that I will comment on inline.
It's possible that some of these arguments were being passed in for the purposes of being instrumented, but I am unconvinced by most of them.
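For reference, a minimal illustration of the warning-suppressing effect described above (assuming the `tracing` crate is available; this is not code from the PR):

```rust
use tracing::instrument;

// Without `#[instrument]`, `config` would trigger an unused-variable warning.
// The attribute records the argument as a span field, so it counts as used,
// which is how genuinely unused parameters can hide in instrumented functions.
#[instrument]
fn do_work(config: u32) {
    // the body never touches `config`
}

fn main() {
    do_work(42);
}
```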