
Auto merge of #109442 - Nilstrieb:rollup-seb5xsa, r=Nilstrieb

Rollup of 10 pull requests

Successful merges:

 - #106434 (Document `Iterator::sum/product` for Option/Result) (see the example below)
 - #108326 (Implement read_buf for a few more types)
 - #108842 (Enforce non-lifetime-binders in supertrait preds are not object safe)
 - #108896 (new solver: make all goal evaluation able to be automatically rerun )
 - #109124 (Add `dist.compression-profile` option to control compression speed)
 - #109240 (Walk un-shifted nested `impl Trait` in trait when setting up default trait method assumptions)
 - #109385 (fix typo)
 - #109386 (add myself to mailmap)
 - #109390 (Custom MIR: Support aggregate expressions)
 - #109408 (not *all* retags might be explicit in Runtime MIR)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
bors 2023-03-21 12:06:26 +00:00
commit a01b4cc9f3
55 changed files with 1228 additions and 556 deletions
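
As a quick illustration of the `Iterator::sum`/`product` behavior documented by #106434 (an example added here, not part of the diff): a sum over `Option`s short-circuits to `None` on the first `None`, and a sum over `Result`s returns the first `Err` instead of a total.

    fn main() {
        let words = ["1", "2", "oops", "4"];

        // Sum of Options: any None short-circuits the whole sum to None.
        let sum: Option<u32> = words.iter().map(|w| w.parse::<u32>().ok()).sum();
        assert_eq!(sum, None);

        // Sum of Results: the first parse error is returned instead of a total.
        let sum: Result<u32, _> = words.iter().map(|w| w.parse::<u32>()).sum();
        assert!(sum.is_err());
    }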

@@ -29,6 +29,8 @@ Alexander Ronald Altman <alexanderaltman@me.com>
 Alexandre Martin <martin.alex32@hotmail.fr>
 Alexis Beingessner <a.beingessner@gmail.com>
 Alfie John <alfie@alfie.wtf> Alfie John <alfiej@fastmail.fm>
+Alona Enraght-Moony <code@alona.page> <nixon.emoony@gmail.com>
+Alona Enraght-Moony <code@alona.page> <nixon@caminus.local>
 Amos Onn <amosonn@gmail.com>
 Ana-Maria Mihalache <mihalacheana.maria@yahoo.com>
 Anatoly Ikorsky <aikorsky@gmail.com>
@@ -415,7 +417,6 @@ Nicolas Abram <abramlujan@gmail.com>
 Nicole Mazzuca <npmazzuca@gmail.com>
 Nif Ward <nif.ward@gmail.com>
 Nika Layzell <nika@thelayzells.com> <michael@thelayzells.com>
-Nixon Enraght-Moony <nixon.emoony@gmail.com>
 NODA Kai <nodakai@gmail.com>
 oliver <16816606+o752d@users.noreply.github.com>
 Oliver Middleton <olliemail27@gmail.com> <ollie27@users.noreply.github.com>

@@ -605,7 +605,7 @@ fn find_opaque_ty_constraints_for_tait(tcx: TyCtxt<'_>, def_id: LocalDefId) -> T
     found: Option<ty::OpaqueHiddenType<'tcx>>,

     /// In the presence of dead code, typeck may figure out a hidden type
-    /// while borrowck will now. We collect these cases here and check at
+    /// while borrowck will not. We collect these cases here and check at
     /// the end that we actually found a type that matches (modulo regions).
     typeck_types: Vec<ty::OpaqueHiddenType<'tcx>>,
 }

@@ -78,7 +78,8 @@ pub enum MirPhase {
 /// MIR, this is UB.
 /// - Retags: If `-Zmir-emit-retag` is enabled, analysis MIR has "implicit" retags in the same way
 /// that Rust itself has them. Where exactly these are is generally subject to change, and so we
-/// don't document this here. Runtime MIR has all retags explicit.
+/// don't document this here. Runtime MIR has most retags explicit (though implicit retags
+/// can still occur at `Rvalue::{Ref,AddrOf}`).
 /// - Generator bodies: In analysis MIR, locals may actually be behind a pointer that user code has
 /// access to. This occurs in generator bodies. Such locals do not behave like other locals,
 /// because they eg may be aliased in surprising ways. Runtime MIR has no such special locals -
@@ -1165,7 +1166,7 @@ pub enum AggregateKind<'tcx> {
     Tuple,
     /// The second field is the variant index. It's equal to 0 for struct
-    /// and union expressions. The fourth field is
+    /// and union expressions. The last field is the
     /// active field number and is present only for union expressions
     /// -- e.g., for a union expression `SomeUnion { c: .. }`, the
     /// active field index would identity the field `c`
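
For context on the union case that comment describes (an illustrative example, not part of the diff): a union expression initializes exactly one field, and that field is what the active field index of the `Adt` aggregate records.

    // `SomeUnion { c: 0.0 }` writes only `c`; when lowered to MIR it becomes an
    // Rvalue::Aggregate whose active field index identifies `c`.
    union SomeUnion {
        c: f64,
        i: u64,
    }

    fn make() -> SomeUnion {
        SomeUnion { c: 0.0 }
    }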

@@ -897,6 +897,9 @@ pub enum ObjectSafetyViolation {
     /// (e.g., `trait Foo : Bar<Self>`).
     SupertraitSelf(SmallVec<[Span; 1]>),
+    // Supertrait has a non-lifetime `for<T>` binder.
+    SupertraitNonLifetimeBinder(SmallVec<[Span; 1]>),
     /// Method has something illegal.
     Method(Symbol, MethodViolationCode, Span),
@@ -919,6 +922,9 @@ impl ObjectSafetyViolation {
                        .into()
                }
            }
+            ObjectSafetyViolation::SupertraitNonLifetimeBinder(_) => {
+                format!("where clause cannot reference non-lifetime `for<...>` variables").into()
+            }
             ObjectSafetyViolation::Method(name, MethodViolationCode::StaticMethod(_), _) => {
                 format!("associated function `{}` has no `self` parameter", name).into()
             }
@@ -969,7 +975,9 @@ impl ObjectSafetyViolation {
     pub fn solution(&self, err: &mut Diagnostic) {
         match self {
-            ObjectSafetyViolation::SizedSelf(_) | ObjectSafetyViolation::SupertraitSelf(_) => {}
+            ObjectSafetyViolation::SizedSelf(_)
+            | ObjectSafetyViolation::SupertraitSelf(_)
+            | ObjectSafetyViolation::SupertraitNonLifetimeBinder(..) => {}
             ObjectSafetyViolation::Method(
                 name,
                 MethodViolationCode::StaticMethod(Some((add_self_sugg, make_sized_sugg))),
@@ -1023,7 +1031,8 @@ impl ObjectSafetyViolation {
         // diagnostics use a `note` instead of a `span_label`.
         match self {
             ObjectSafetyViolation::SupertraitSelf(spans)
-            | ObjectSafetyViolation::SizedSelf(spans) => spans.clone(),
+            | ObjectSafetyViolation::SizedSelf(spans)
+            | ObjectSafetyViolation::SupertraitNonLifetimeBinder(spans) => spans.clone(),
             ObjectSafetyViolation::AssocConst(_, span)
             | ObjectSafetyViolation::GAT(_, span)
             | ObjectSafetyViolation::Method(_, _, span)
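
The new `SupertraitNonLifetimeBinder` violation comes from #108842. A minimal sketch of a trait it now rejects, assuming the unstable `non_lifetime_binders` feature (the exact accepted syntax and diagnostic rendering may differ):

    #![feature(non_lifetime_binders)]

    trait Bar<T> {}

    // The non-lifetime `for<T>` binder in the supertrait bound makes `Foo`
    // object-unsafe, reported with the message added above:
    // "where clause cannot reference non-lifetime `for<...>` variables".
    trait Foo: for<T> Bar<T> {}

    fn takes_dyn(_: &dyn Foo) {} // error: `Foo` cannot be made into an object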

@@ -51,9 +51,7 @@ where
 // Region folder
 impl<'tcx> TyCtxt<'tcx> {
-    /// Folds the escaping and free regions in `value` using `f`, and
-    /// sets `skipped_regions` to true if any late-bound region was found
-    /// and skipped.
+    /// Folds the escaping and free regions in `value` using `f`.
     pub fn fold_regions<T>(
         self,
         value: T,
@@ -64,17 +62,6 @@ impl<'tcx> TyCtxt<'tcx> {
     {
         value.fold_with(&mut RegionFolder::new(self, &mut f))
     }
-
-    pub fn super_fold_regions<T>(
-        self,
-        value: T,
-        mut f: impl FnMut(ty::Region<'tcx>, ty::DebruijnIndex) -> ty::Region<'tcx>,
-    ) -> T
-    where
-        T: TypeSuperFoldable<TyCtxt<'tcx>>,
-    {
-        value.super_fold_with(&mut RegionFolder::new(self, &mut f))
-    }
 }

 /// Folds over the substructure of a type, visiting its component

@@ -166,6 +166,28 @@ impl<'tcx, 'body> ParseCtxt<'tcx, 'body> {
                 let cast_kind = mir_cast_kind(source_ty, expr.ty);
                 Ok(Rvalue::Cast(cast_kind, source, expr.ty))
             },
+            ExprKind::Tuple { fields } => Ok(
+                Rvalue::Aggregate(
+                    Box::new(AggregateKind::Tuple),
+                    fields.iter().map(|e| self.parse_operand(*e)).collect::<Result<_, _>>()?
+                )
+            ),
+            ExprKind::Array { fields } => {
+                let elem_ty = expr.ty.builtin_index().expect("ty must be an array");
+                Ok(Rvalue::Aggregate(
+                    Box::new(AggregateKind::Array(elem_ty)),
+                    fields.iter().map(|e| self.parse_operand(*e)).collect::<Result<_, _>>()?
+                ))
+            },
+            ExprKind::Adt(box AdtExpr{ adt_def, variant_index, substs, fields, .. }) => {
+                let is_union = adt_def.is_union();
+                let active_field_index = is_union.then(|| fields[0].name.index());
+                Ok(Rvalue::Aggregate(
+                    Box::new(AggregateKind::Adt(adt_def.did(), *variant_index, substs, None, active_field_index)),
+                    fields.iter().map(|f| self.parse_operand(f.expr)).collect::<Result<_, _>>()?
+                ))
+            },
             _ => self.parse_operand(expr_id).map(Rvalue::Use),
         )
     }
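
These new arms are what let custom MIR bodies build tuples, arrays, and ADTs directly (#109390). A minimal sketch of the user-facing side, assuming the unstable `custom_mir` feature and the `mir!` macro from `core::intrinsics::mir` (dialect name and macro surface syntax are illustrative):

    #![feature(custom_mir, core_intrinsics)]
    use core::intrinsics::mir::*;

    #[custom_mir(dialect = "built")]
    fn tuple_aggregate(a: i32, b: i32) -> (i32, i32) {
        mir!({
            // Parsed by the code above as Rvalue::Aggregate(AggregateKind::Tuple, [a, b]).
            RET = (a, b);
            Return()
        })
    }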

@ -224,7 +224,9 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
if goal.predicate.self_ty().is_ty_var() { if goal.predicate.self_ty().is_ty_var() {
return vec![Candidate { return vec![Candidate {
source: CandidateSource::BuiltinImpl, source: CandidateSource::BuiltinImpl,
result: self.make_canonical_response(Certainty::AMBIGUOUS).unwrap(), result: self
.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS)
.unwrap(),
}]; }];
} }
@ -261,8 +263,9 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
let &ty::Alias(ty::Projection, projection_ty) = goal.predicate.self_ty().kind() else { let &ty::Alias(ty::Projection, projection_ty) = goal.predicate.self_ty().kind() else {
return return
}; };
self.probe(|this| {
let normalized_ty = this.next_ty_infer(); self.probe(|ecx| {
let normalized_ty = ecx.next_ty_infer();
let normalizes_to_goal = goal.with( let normalizes_to_goal = goal.with(
tcx, tcx,
ty::Binder::dummy(ty::ProjectionPredicate { ty::Binder::dummy(ty::ProjectionPredicate {
@ -270,28 +273,16 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
term: normalized_ty.into(), term: normalized_ty.into(),
}), }),
); );
let normalization_certainty = match this.evaluate_goal(normalizes_to_goal) { ecx.add_goal(normalizes_to_goal);
Ok((_, certainty)) => certainty, if let Ok(_) = ecx.try_evaluate_added_goals() {
Err(NoSolution) => return, let normalized_ty = ecx.resolve_vars_if_possible(normalized_ty);
};
let normalized_ty = this.resolve_vars_if_possible(normalized_ty);
// NOTE: Alternatively we could call `evaluate_goal` here and only have a `Normalized` candidate. // NOTE: Alternatively we could call `evaluate_goal` here and only have a `Normalized` candidate.
// This doesn't work as long as we use `CandidateSource` in winnowing. // This doesn't work as long as we use `CandidateSource` in winnowing.
let goal = goal.with(tcx, goal.predicate.with_self_ty(tcx, normalized_ty)); let goal = goal.with(tcx, goal.predicate.with_self_ty(tcx, normalized_ty));
let normalized_candidates = this.assemble_and_evaluate_candidates(goal); candidates.extend(ecx.assemble_and_evaluate_candidates(goal));
for mut normalized_candidate in normalized_candidates {
normalized_candidate.result =
normalized_candidate.result.unchecked_map(|mut response| {
// FIXME: This currently hides overflow in the normalization step of the self type
// which is probably wrong. Maybe `unify_and` should actually keep overflow as
// we treat it as non-fatal anyways.
response.certainty = response.certainty.unify_and(normalization_certainty);
response
});
candidates.push(normalized_candidate);
} }
}) });
} }
fn assemble_impl_candidates<G: GoalKind<'tcx>>( fn assemble_impl_candidates<G: GoalKind<'tcx>>(
@ -516,7 +507,7 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
} else { } else {
Certainty::AMBIGUOUS Certainty::AMBIGUOUS
}; };
return self.make_canonical_response(certainty); return self.evaluate_added_goals_and_make_canonical_response(certainty);
} }
} }
@ -538,14 +529,16 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
} }
} }
fn discard_reservation_impl(&self, mut candidate: Candidate<'tcx>) -> Candidate<'tcx> { fn discard_reservation_impl(&mut self, mut candidate: Candidate<'tcx>) -> Candidate<'tcx> {
if let CandidateSource::Impl(def_id) = candidate.source { if let CandidateSource::Impl(def_id) = candidate.source {
if let ty::ImplPolarity::Reservation = self.tcx().impl_polarity(def_id) { if let ty::ImplPolarity::Reservation = self.tcx().impl_polarity(def_id) {
debug!("Selected reservation impl"); debug!("Selected reservation impl");
// We assemble all candidates inside of a probe so by // We assemble all candidates inside of a probe so by
// making a new canonical response here our result will // making a new canonical response here our result will
// have no constraints. // have no constraints.
candidate.result = self.make_canonical_response(Certainty::AMBIGUOUS).unwrap(); candidate.result = self
.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS)
.unwrap();
} }
} }

@@ -48,7 +48,13 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
     /// - `external_constraints`: additional constraints which aren't expressable
     ///   using simple unification of inference variables.
     #[instrument(level = "debug", skip(self))]
-    pub(super) fn make_canonical_response(&self, certainty: Certainty) -> QueryResult<'tcx> {
+    pub(super) fn evaluate_added_goals_and_make_canonical_response(
+        &mut self,
+        certainty: Certainty,
+    ) -> QueryResult<'tcx> {
+        let goals_certainty = self.try_evaluate_added_goals()?;
+        let certainty = certainty.unify_and(goals_certainty);
         let external_constraints = self.compute_external_query_constraints()?;

         let response = Response { var_values: self.var_values, external_constraints, certainty };
@@ -209,7 +215,7 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
             // FIXME: To deal with #105787 I also expect us to emit nested obligations here at
             // some point. We can figure out how to deal with this once we actually have
             // an ICE.
-            let nested_goals = self.eq(param_env, orig, response)?;
+            let nested_goals = self.eq_and_get_goals(param_env, orig, response)?;
             assert!(nested_goals.is_empty(), "{nested_goals:?}");
         }

@ -2,8 +2,11 @@ use rustc_hir::def_id::DefId;
use rustc_infer::infer::at::ToTrace; use rustc_infer::infer::at::ToTrace;
use rustc_infer::infer::canonical::CanonicalVarValues; use rustc_infer::infer::canonical::CanonicalVarValues;
use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind}; use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind};
use rustc_infer::infer::{DefineOpaqueTypes, InferCtxt, InferOk, LateBoundRegionConversionTime}; use rustc_infer::infer::{
DefineOpaqueTypes, InferCtxt, InferOk, LateBoundRegionConversionTime, TyCtxtInferExt,
};
use rustc_infer::traits::query::NoSolution; use rustc_infer::traits::query::NoSolution;
use rustc_infer::traits::solve::{CanonicalGoal, Certainty, MaybeCause, QueryResult};
use rustc_infer::traits::ObligationCause; use rustc_infer::traits::ObligationCause;
use rustc_middle::infer::unify_key::{ConstVariableOrigin, ConstVariableOriginKind}; use rustc_middle::infer::unify_key::{ConstVariableOrigin, ConstVariableOriginKind};
use rustc_middle::ty::{ use rustc_middle::ty::{
@ -13,8 +16,8 @@ use rustc_middle::ty::{
use rustc_span::DUMMY_SP; use rustc_span::DUMMY_SP;
use std::ops::ControlFlow; use std::ops::ControlFlow;
use super::search_graph::SearchGraph; use super::search_graph::{self, OverflowHandler};
use super::Goal; use super::{search_graph::SearchGraph, Goal};
pub struct EvalCtxt<'a, 'tcx> { pub struct EvalCtxt<'a, 'tcx> {
// FIXME: should be private. // FIXME: should be private.
@ -33,14 +36,305 @@ pub struct EvalCtxt<'a, 'tcx> {
pub(super) search_graph: &'a mut SearchGraph<'tcx>, pub(super) search_graph: &'a mut SearchGraph<'tcx>,
/// This field is used by a debug assertion in [`EvalCtxt::evaluate_goal`], pub(super) nested_goals: NestedGoals<'tcx>,
/// see the comment in that method for more details. }
pub in_projection_eq_hack: bool,
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub(super) enum IsNormalizesToHack {
Yes,
No,
}
#[derive(Debug, Clone)]
pub(super) struct NestedGoals<'tcx> {
pub(super) normalizes_to_hack_goal: Option<Goal<'tcx, ty::ProjectionPredicate<'tcx>>>,
pub(super) goals: Vec<Goal<'tcx, ty::Predicate<'tcx>>>,
}
impl NestedGoals<'_> {
pub(super) fn new() -> Self {
Self { normalizes_to_hack_goal: None, goals: Vec::new() }
}
pub(super) fn is_empty(&self) -> bool {
self.normalizes_to_hack_goal.is_none() && self.goals.is_empty()
}
}
pub trait InferCtxtEvalExt<'tcx> {
/// Evaluates a goal from **outside** of the trait solver.
///
/// Using this while inside of the solver is wrong as it uses a new
/// search graph which would break cycle detection.
fn evaluate_root_goal(
&self,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution>;
}
impl<'tcx> InferCtxtEvalExt<'tcx> for InferCtxt<'tcx> {
#[instrument(level = "debug", skip(self))]
fn evaluate_root_goal(
&self,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution> {
let mut search_graph = search_graph::SearchGraph::new(self.tcx);
let mut ecx = EvalCtxt {
search_graph: &mut search_graph,
infcx: self,
// Only relevant when canonicalizing the response.
max_input_universe: ty::UniverseIndex::ROOT,
var_values: CanonicalVarValues::dummy(),
nested_goals: NestedGoals::new(),
};
let result = ecx.evaluate_goal(IsNormalizesToHack::No, goal);
assert!(
ecx.nested_goals.is_empty(),
"root `EvalCtxt` should not have any goals added to it"
);
assert!(search_graph.is_empty());
result
}
}
impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
/// The entry point of the solver.
///
/// This function deals with (coinductive) cycles, overflow, and caching
/// and then calls [`EvalCtxt::compute_goal`] which contains the actual
/// logic of the solver.
///
/// Instead of calling this function directly, use either [EvalCtxt::evaluate_goal]
/// if you're inside of the solver or [InferCtxtEvalExt::evaluate_root_goal] if you're
/// outside of it.
#[instrument(level = "debug", skip(tcx, search_graph), ret)]
fn evaluate_canonical_goal(
tcx: TyCtxt<'tcx>,
search_graph: &'a mut search_graph::SearchGraph<'tcx>,
canonical_goal: CanonicalGoal<'tcx>,
) -> QueryResult<'tcx> {
// Deal with overflow, caching, and coinduction.
//
// The actual solver logic happens in `ecx.compute_goal`.
search_graph.with_new_goal(tcx, canonical_goal, |search_graph| {
let (ref infcx, goal, var_values) =
tcx.infer_ctxt().build_with_canonical(DUMMY_SP, &canonical_goal);
let mut ecx = EvalCtxt {
infcx,
var_values,
max_input_universe: canonical_goal.max_universe,
search_graph,
nested_goals: NestedGoals::new(),
};
ecx.compute_goal(goal)
})
}
/// Recursively evaluates `goal`, returning whether any inference vars have
/// been constrained and the certainty of the result.
fn evaluate_goal(
&mut self,
is_normalizes_to_hack: IsNormalizesToHack,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution> {
let (orig_values, canonical_goal) = self.canonicalize_goal(goal);
let canonical_response =
EvalCtxt::evaluate_canonical_goal(self.tcx(), self.search_graph, canonical_goal)?;
let has_changed = !canonical_response.value.var_values.is_identity();
let certainty = self.instantiate_and_apply_query_response(
goal.param_env,
orig_values,
canonical_response,
)?;
// Check that rerunning this query with its inference constraints applied
// doesn't result in new inference constraints and has the same result.
//
// If we have projection goals like `<T as Trait>::Assoc == u32` we recursively
// call `exists<U> <T as Trait>::Assoc == U` to enable better caching. This goal
// could constrain `U` to `u32` which would cause this check to result in a
// solver cycle.
if cfg!(debug_assertions)
&& has_changed
&& is_normalizes_to_hack == IsNormalizesToHack::No
&& !self.search_graph.in_cycle()
{
debug!("rerunning goal to check result is stable");
let (_orig_values, canonical_goal) = self.canonicalize_goal(goal);
let canonical_response =
EvalCtxt::evaluate_canonical_goal(self.tcx(), self.search_graph, canonical_goal)?;
if !canonical_response.value.var_values.is_identity() {
bug!("unstable result: {goal:?} {canonical_goal:?} {canonical_response:?}");
}
assert_eq!(certainty, canonical_response.value.certainty);
}
Ok((has_changed, certainty))
}
fn compute_goal(&mut self, goal: Goal<'tcx, ty::Predicate<'tcx>>) -> QueryResult<'tcx> {
let Goal { param_env, predicate } = goal;
let kind = predicate.kind();
if let Some(kind) = kind.no_bound_vars() {
match kind {
ty::PredicateKind::Clause(ty::Clause::Trait(predicate)) => {
self.compute_trait_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::Projection(predicate)) => {
self.compute_projection_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::TypeOutlives(predicate)) => {
self.compute_type_outlives_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::RegionOutlives(predicate)) => {
self.compute_region_outlives_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::ConstArgHasType(ct, ty)) => {
self.compute_const_arg_has_type_goal(Goal { param_env, predicate: (ct, ty) })
}
ty::PredicateKind::Subtype(predicate) => {
self.compute_subtype_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Coerce(predicate) => {
self.compute_coerce_goal(Goal { param_env, predicate })
}
ty::PredicateKind::ClosureKind(def_id, substs, kind) => self
.compute_closure_kind_goal(Goal {
param_env,
predicate: (def_id, substs, kind),
}),
ty::PredicateKind::ObjectSafe(trait_def_id) => {
self.compute_object_safe_goal(trait_def_id)
}
ty::PredicateKind::WellFormed(arg) => {
self.compute_well_formed_goal(Goal { param_env, predicate: arg })
}
ty::PredicateKind::Ambiguous => {
self.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS)
}
// FIXME: implement these predicates :)
ty::PredicateKind::ConstEvaluatable(_) | ty::PredicateKind::ConstEquate(_, _) => {
self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}
ty::PredicateKind::TypeWellFormedFromEnv(..) => {
bug!("TypeWellFormedFromEnv is only used for Chalk")
}
ty::PredicateKind::AliasEq(lhs, rhs) => {
self.compute_alias_eq_goal(Goal { param_env, predicate: (lhs, rhs) })
}
}
} else {
let kind = self.infcx.instantiate_binder_with_placeholders(kind);
let goal = goal.with(self.tcx(), ty::Binder::dummy(kind));
self.add_goal(goal);
self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}
}
// Recursively evaluates all the goals added to this `EvalCtxt` to completion, returning
// the certainty of all the goals.
#[instrument(level = "debug", skip(self))]
pub(super) fn try_evaluate_added_goals(&mut self) -> Result<Certainty, NoSolution> {
let mut goals = core::mem::replace(&mut self.nested_goals, NestedGoals::new());
let mut new_goals = NestedGoals::new();
let response = self.repeat_while_none(
|_| Ok(Certainty::Maybe(MaybeCause::Overflow)),
|this| {
let mut has_changed = Err(Certainty::Yes);
if let Some(goal) = goals.normalizes_to_hack_goal.take() {
let (_, certainty) = match this.evaluate_goal(
IsNormalizesToHack::Yes,
goal.with(this.tcx(), ty::Binder::dummy(goal.predicate)),
) {
Ok(r) => r,
Err(NoSolution) => return Some(Err(NoSolution)),
};
if goal.predicate.projection_ty
!= this.resolve_vars_if_possible(goal.predicate.projection_ty)
{
has_changed = Ok(())
}
match certainty {
Certainty::Yes => {}
Certainty::Maybe(_) => {
let goal = this.resolve_vars_if_possible(goal);
// The rhs of this `normalizes-to` must always be an unconstrained infer var as it is
// the hack used by `normalizes-to` to ensure that every `normalizes-to` behaves the same
// regardless of the rhs.
//
// However it is important not to unconditionally replace the rhs with a new infer var
// as otherwise we may replace the original unconstrained infer var with a new infer var
// and never propagate any constraints on the new var back to the original var.
let term = this
.term_is_fully_unconstrained(goal)
.then_some(goal.predicate.term)
.unwrap_or_else(|| {
this.next_term_infer_of_kind(goal.predicate.term)
});
let projection_pred = ty::ProjectionPredicate {
term,
projection_ty: goal.predicate.projection_ty,
};
new_goals.normalizes_to_hack_goal =
Some(goal.with(this.tcx(), projection_pred));
has_changed = has_changed.map_err(|c| c.unify_and(certainty));
}
}
}
for nested_goal in goals.goals.drain(..) {
let (changed, certainty) =
match this.evaluate_goal(IsNormalizesToHack::No, nested_goal) {
Ok(result) => result,
Err(NoSolution) => return Some(Err(NoSolution)),
};
if changed {
has_changed = Ok(());
}
match certainty {
Certainty::Yes => {}
Certainty::Maybe(_) => {
new_goals.goals.push(nested_goal);
has_changed = has_changed.map_err(|c| c.unify_and(certainty));
}
}
}
core::mem::swap(&mut new_goals, &mut goals);
match has_changed {
Ok(()) => None,
Err(certainty) => Some(Ok(certainty)),
}
},
);
self.nested_goals = goals;
response
}
} }
impl<'tcx> EvalCtxt<'_, 'tcx> { impl<'tcx> EvalCtxt<'_, 'tcx> {
pub(super) fn probe<T>(&mut self, f: impl FnOnce(&mut EvalCtxt<'_, 'tcx>) -> T) -> T { pub(super) fn probe<T>(&mut self, f: impl FnOnce(&mut EvalCtxt<'_, 'tcx>) -> T) -> T {
self.infcx.probe(|_| f(self)) let mut ecx = EvalCtxt {
infcx: self.infcx,
var_values: self.var_values,
max_input_universe: self.max_input_universe,
search_graph: self.search_graph,
nested_goals: self.nested_goals.clone(),
};
self.infcx.probe(|_| f(&mut ecx))
} }
pub(super) fn tcx(&self) -> TyCtxt<'tcx> { pub(super) fn tcx(&self) -> TyCtxt<'tcx> {
@ -61,6 +355,15 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
) )
} }
/// Returns a ty infer or a const infer depending on whether `kind` is a `Ty` or `Const`.
/// If `kind` is an integer inference variable this will still return a ty infer var.
pub(super) fn next_term_infer_of_kind(&self, kind: ty::Term<'tcx>) -> ty::Term<'tcx> {
match kind.unpack() {
ty::TermKind::Ty(_) => self.next_ty_infer().into(),
ty::TermKind::Const(ct) => self.next_const_infer(ct.ty()).into(),
}
}
/// Is the projection predicate is of the form `exists<T> <Ty as Trait>::Assoc = T`. /// Is the projection predicate is of the form `exists<T> <Ty as Trait>::Assoc = T`.
/// ///
/// This is the case if the `term` is an inference variable in the innermost universe /// This is the case if the `term` is an inference variable in the innermost universe
@ -137,6 +440,30 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
#[instrument(level = "debug", skip(self, param_env), ret)] #[instrument(level = "debug", skip(self, param_env), ret)]
pub(super) fn eq<T: ToTrace<'tcx>>( pub(super) fn eq<T: ToTrace<'tcx>>(
&mut self,
param_env: ty::ParamEnv<'tcx>,
lhs: T,
rhs: T,
) -> Result<(), NoSolution> {
self.infcx
.at(&ObligationCause::dummy(), param_env)
.eq(DefineOpaqueTypes::No, lhs, rhs)
.map(|InferOk { value: (), obligations }| {
self.add_goals(obligations.into_iter().map(|o| o.into()));
})
.map_err(|e| {
debug!(?e, "failed to equate");
NoSolution
})
}
/// Equates two values returning the nested goals without adding them
/// to the nested goals of the `EvalCtxt`.
///
/// If possible, try using `eq` instead which automatically handles nested
/// goals correctly.
#[instrument(level = "debug", skip(self, param_env), ret)]
pub(super) fn eq_and_get_goals<T: ToTrace<'tcx>>(
&self, &self,
param_env: ty::ParamEnv<'tcx>, param_env: ty::ParamEnv<'tcx>,
lhs: T, lhs: T,

@ -15,23 +15,19 @@
// FIXME: uses of `infcx.at` need to enable deferred projection equality once that's implemented. // FIXME: uses of `infcx.at` need to enable deferred projection equality once that's implemented.
use std::mem;
use rustc_hir::def_id::DefId; use rustc_hir::def_id::DefId;
use rustc_infer::infer::canonical::{Canonical, CanonicalVarValues}; use rustc_infer::infer::canonical::{Canonical, CanonicalVarValues};
use rustc_infer::infer::{DefineOpaqueTypes, InferCtxt, InferOk, TyCtxtInferExt}; use rustc_infer::infer::{DefineOpaqueTypes, InferOk};
use rustc_infer::traits::query::NoSolution; use rustc_infer::traits::query::NoSolution;
use rustc_middle::traits::solve::{ use rustc_middle::traits::solve::{
CanonicalGoal, CanonicalResponse, Certainty, ExternalConstraints, ExternalConstraintsData, CanonicalGoal, CanonicalResponse, Certainty, ExternalConstraints, ExternalConstraintsData,
Goal, MaybeCause, QueryResult, Response, Goal, QueryResult, Response,
}; };
use rustc_middle::ty::{self, Ty, TyCtxt}; use rustc_middle::ty::{self, Ty, TyCtxt};
use rustc_middle::ty::{ use rustc_middle::ty::{
CoercePredicate, RegionOutlivesPredicate, SubtypePredicate, TypeOutlivesPredicate, CoercePredicate, RegionOutlivesPredicate, SubtypePredicate, TypeOutlivesPredicate,
}; };
use rustc_span::DUMMY_SP;
use crate::solve::search_graph::OverflowHandler;
use crate::traits::ObligationCause; use crate::traits::ObligationCause;
mod assembly; mod assembly;
@ -42,7 +38,7 @@ mod project_goals;
mod search_graph; mod search_graph;
mod trait_goals; mod trait_goals;
pub use eval_ctxt::EvalCtxt; pub use eval_ctxt::{EvalCtxt, InferCtxtEvalExt};
pub use fulfill::FulfillmentCtxt; pub use fulfill::FulfillmentCtxt;
trait CanonicalResponseExt { trait CanonicalResponseExt {
@ -57,180 +53,18 @@ impl<'tcx> CanonicalResponseExt for Canonical<'tcx, Response<'tcx>> {
} }
} }
pub trait InferCtxtEvalExt<'tcx> {
/// Evaluates a goal from **outside** of the trait solver.
///
/// Using this while inside of the solver is wrong as it uses a new
/// search graph which would break cycle detection.
fn evaluate_root_goal(
&self,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution>;
}
impl<'tcx> InferCtxtEvalExt<'tcx> for InferCtxt<'tcx> {
fn evaluate_root_goal(
&self,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution> {
let mut search_graph = search_graph::SearchGraph::new(self.tcx);
let result = EvalCtxt {
search_graph: &mut search_graph,
infcx: self,
// Only relevant when canonicalizing the response.
max_input_universe: ty::UniverseIndex::ROOT,
var_values: CanonicalVarValues::dummy(),
in_projection_eq_hack: false,
}
.evaluate_goal(goal);
assert!(search_graph.is_empty());
result
}
}
impl<'a, 'tcx> EvalCtxt<'a, 'tcx> { impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
/// The entry point of the solver. #[instrument(level = "debug", skip(self))]
///
/// This function deals with (coinductive) cycles, overflow, and caching
/// and then calls [`EvalCtxt::compute_goal`] which contains the actual
/// logic of the solver.
///
/// Instead of calling this function directly, use either [EvalCtxt::evaluate_goal]
/// if you're inside of the solver or [InferCtxtEvalExt::evaluate_root_goal] if you're
/// outside of it.
#[instrument(level = "debug", skip(tcx, search_graph), ret)]
fn evaluate_canonical_goal(
tcx: TyCtxt<'tcx>,
search_graph: &'a mut search_graph::SearchGraph<'tcx>,
canonical_goal: CanonicalGoal<'tcx>,
) -> QueryResult<'tcx> {
// Deal with overflow, caching, and coinduction.
//
// The actual solver logic happens in `ecx.compute_goal`.
search_graph.with_new_goal(tcx, canonical_goal, |search_graph| {
let (ref infcx, goal, var_values) =
tcx.infer_ctxt().build_with_canonical(DUMMY_SP, &canonical_goal);
let mut ecx = EvalCtxt {
infcx,
var_values,
max_input_universe: canonical_goal.max_universe,
search_graph,
in_projection_eq_hack: false,
};
ecx.compute_goal(goal)
})
}
/// Recursively evaluates `goal`, returning whether any inference vars have
/// been constrained and the certainty of the result.
fn evaluate_goal(
&mut self,
goal: Goal<'tcx, ty::Predicate<'tcx>>,
) -> Result<(bool, Certainty), NoSolution> {
let (orig_values, canonical_goal) = self.canonicalize_goal(goal);
let canonical_response =
EvalCtxt::evaluate_canonical_goal(self.tcx(), self.search_graph, canonical_goal)?;
let has_changed = !canonical_response.value.var_values.is_identity();
let certainty = self.instantiate_and_apply_query_response(
goal.param_env,
orig_values,
canonical_response,
)?;
// Check that rerunning this query with its inference constraints applied
// doesn't result in new inference constraints and has the same result.
//
// If we have projection goals like `<T as Trait>::Assoc == u32` we recursively
// call `exists<U> <T as Trait>::Assoc == U` to enable better caching. This goal
// could constrain `U` to `u32` which would cause this check to result in a
// solver cycle.
if cfg!(debug_assertions)
&& has_changed
&& !self.in_projection_eq_hack
&& !self.search_graph.in_cycle()
&& false
{
let (_orig_values, canonical_goal) = self.canonicalize_goal(goal);
let canonical_response =
EvalCtxt::evaluate_canonical_goal(self.tcx(), self.search_graph, canonical_goal)?;
if !canonical_response.value.var_values.is_identity() {
bug!("unstable result: {goal:?} {canonical_goal:?} {canonical_response:?}");
}
assert_eq!(certainty, canonical_response.value.certainty);
}
Ok((has_changed, certainty))
}
fn compute_goal(&mut self, goal: Goal<'tcx, ty::Predicate<'tcx>>) -> QueryResult<'tcx> {
let Goal { param_env, predicate } = goal;
let kind = predicate.kind();
if let Some(kind) = kind.no_bound_vars() {
match kind {
ty::PredicateKind::Clause(ty::Clause::Trait(predicate)) => {
self.compute_trait_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::Projection(predicate)) => {
self.compute_projection_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::TypeOutlives(predicate)) => {
self.compute_type_outlives_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::RegionOutlives(predicate)) => {
self.compute_region_outlives_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Clause(ty::Clause::ConstArgHasType(ct, ty)) => {
self.compute_const_arg_has_type_goal(Goal { param_env, predicate: (ct, ty) })
}
ty::PredicateKind::Subtype(predicate) => {
self.compute_subtype_goal(Goal { param_env, predicate })
}
ty::PredicateKind::Coerce(predicate) => {
self.compute_coerce_goal(Goal { param_env, predicate })
}
ty::PredicateKind::ClosureKind(def_id, substs, kind) => self
.compute_closure_kind_goal(Goal {
param_env,
predicate: (def_id, substs, kind),
}),
ty::PredicateKind::ObjectSafe(trait_def_id) => {
self.compute_object_safe_goal(trait_def_id)
}
ty::PredicateKind::WellFormed(arg) => {
self.compute_well_formed_goal(Goal { param_env, predicate: arg })
}
ty::PredicateKind::Ambiguous => self.make_canonical_response(Certainty::AMBIGUOUS),
// FIXME: implement these predicates :)
ty::PredicateKind::ConstEvaluatable(_) | ty::PredicateKind::ConstEquate(_, _) => {
self.make_canonical_response(Certainty::Yes)
}
ty::PredicateKind::TypeWellFormedFromEnv(..) => {
bug!("TypeWellFormedFromEnv is only used for Chalk")
}
ty::PredicateKind::AliasEq(lhs, rhs) => {
self.compute_alias_eq_goal(Goal { param_env, predicate: (lhs, rhs) })
}
}
} else {
let kind = self.infcx.instantiate_binder_with_placeholders(kind);
let goal = goal.with(self.tcx(), ty::Binder::dummy(kind));
let (_, certainty) = self.evaluate_goal(goal)?;
self.make_canonical_response(certainty)
}
}
fn compute_type_outlives_goal( fn compute_type_outlives_goal(
&mut self, &mut self,
goal: Goal<'tcx, TypeOutlivesPredicate<'tcx>>, goal: Goal<'tcx, TypeOutlivesPredicate<'tcx>>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
let ty::OutlivesPredicate(ty, lt) = goal.predicate; let ty::OutlivesPredicate(ty, lt) = goal.predicate;
self.infcx.register_region_obligation_with_cause(ty, lt, &ObligationCause::dummy()); self.infcx.register_region_obligation_with_cause(ty, lt, &ObligationCause::dummy());
self.make_canonical_response(Certainty::Yes) self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
#[instrument(level = "debug", skip(self))]
fn compute_region_outlives_goal( fn compute_region_outlives_goal(
&mut self, &mut self,
goal: Goal<'tcx, RegionOutlivesPredicate<'tcx>>, goal: Goal<'tcx, RegionOutlivesPredicate<'tcx>>,
@ -239,9 +73,10 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
&ObligationCause::dummy(), &ObligationCause::dummy(),
ty::Binder::dummy(goal.predicate), ty::Binder::dummy(goal.predicate),
); );
self.make_canonical_response(Certainty::Yes) self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
#[instrument(level = "debug", skip(self))]
fn compute_coerce_goal( fn compute_coerce_goal(
&mut self, &mut self,
goal: Goal<'tcx, CoercePredicate<'tcx>>, goal: Goal<'tcx, CoercePredicate<'tcx>>,
@ -256,6 +91,7 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
}) })
} }
#[instrument(level = "debug", skip(self))]
fn compute_subtype_goal( fn compute_subtype_goal(
&mut self, &mut self,
goal: Goal<'tcx, SubtypePredicate<'tcx>>, goal: Goal<'tcx, SubtypePredicate<'tcx>>,
@ -263,18 +99,18 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
if goal.predicate.a.is_ty_var() && goal.predicate.b.is_ty_var() { if goal.predicate.a.is_ty_var() && goal.predicate.b.is_ty_var() {
// FIXME: Do we want to register a subtype relation between these vars? // FIXME: Do we want to register a subtype relation between these vars?
// That won't actually reflect in the query response, so it seems moot. // That won't actually reflect in the query response, so it seems moot.
self.make_canonical_response(Certainty::AMBIGUOUS) self.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS)
} else { } else {
let InferOk { value: (), obligations } = self let InferOk { value: (), obligations } = self
.infcx .infcx
.at(&ObligationCause::dummy(), goal.param_env) .at(&ObligationCause::dummy(), goal.param_env)
.sub(DefineOpaqueTypes::No, goal.predicate.a, goal.predicate.b)?; .sub(DefineOpaqueTypes::No, goal.predicate.a, goal.predicate.b)?;
self.evaluate_all_and_make_canonical_response( self.add_goals(obligations.into_iter().map(|pred| pred.into()));
obligations.into_iter().map(|pred| pred.into()).collect(), self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
)
} }
} }
#[instrument(level = "debug", skip(self))]
fn compute_closure_kind_goal( fn compute_closure_kind_goal(
&mut self, &mut self,
goal: Goal<'tcx, (DefId, ty::SubstsRef<'tcx>, ty::ClosureKind)>, goal: Goal<'tcx, (DefId, ty::SubstsRef<'tcx>, ty::ClosureKind)>,
@ -283,23 +119,25 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
let found_kind = substs.as_closure().kind_ty().to_opt_closure_kind(); let found_kind = substs.as_closure().kind_ty().to_opt_closure_kind();
let Some(found_kind) = found_kind else { let Some(found_kind) = found_kind else {
return self.make_canonical_response(Certainty::AMBIGUOUS); return self.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS);
}; };
if found_kind.extends(expected_kind) { if found_kind.extends(expected_kind) {
self.make_canonical_response(Certainty::Yes) self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} else { } else {
Err(NoSolution) Err(NoSolution)
} }
} }
#[instrument(level = "debug", skip(self))]
fn compute_object_safe_goal(&mut self, trait_def_id: DefId) -> QueryResult<'tcx> { fn compute_object_safe_goal(&mut self, trait_def_id: DefId) -> QueryResult<'tcx> {
if self.tcx().check_is_object_safe(trait_def_id) { if self.tcx().check_is_object_safe(trait_def_id) {
self.make_canonical_response(Certainty::Yes) self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} else { } else {
Err(NoSolution) Err(NoSolution)
} }
} }
#[instrument(level = "debug", skip(self))]
fn compute_well_formed_goal( fn compute_well_formed_goal(
&mut self, &mut self,
goal: Goal<'tcx, ty::GenericArg<'tcx>>, goal: Goal<'tcx, ty::GenericArg<'tcx>>,
@ -309,10 +147,11 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
goal.param_env, goal.param_env,
goal.predicate, goal.predicate,
) { ) {
Some(obligations) => self.evaluate_all_and_make_canonical_response( Some(obligations) => {
obligations.into_iter().map(|o| o.into()).collect(), self.add_goals(obligations.into_iter().map(|o| o.into()));
), self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
None => self.make_canonical_response(Certainty::AMBIGUOUS), }
None => self.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS),
} }
} }
@ -326,14 +165,14 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
let evaluate_normalizes_to = |ecx: &mut EvalCtxt<'_, 'tcx>, alias, other| { let evaluate_normalizes_to = |ecx: &mut EvalCtxt<'_, 'tcx>, alias, other| {
debug!("evaluate_normalizes_to(alias={:?}, other={:?})", alias, other); debug!("evaluate_normalizes_to(alias={:?}, other={:?})", alias, other);
let r = ecx.probe(|ecx| { let r = ecx.probe(|ecx| {
let (_, certainty) = ecx.evaluate_goal(goal.with( ecx.add_goal(goal.with(
tcx, tcx,
ty::Binder::dummy(ty::ProjectionPredicate { ty::Binder::dummy(ty::ProjectionPredicate {
projection_ty: alias, projection_ty: alias,
term: other, term: other,
}), }),
))?; ));
ecx.make_canonical_response(certainty) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}); });
debug!("evaluate_normalizes_to(..) -> {:?}", r); debug!("evaluate_normalizes_to(..) -> {:?}", r);
r r
@ -360,10 +199,10 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
// Evaluate all 3 potential candidates for the alias' being equal // Evaluate all 3 potential candidates for the alias' being equal
candidates.push(evaluate_normalizes_to(self, alias_lhs, goal.predicate.1)); candidates.push(evaluate_normalizes_to(self, alias_lhs, goal.predicate.1));
candidates.push(evaluate_normalizes_to(self, alias_rhs, goal.predicate.0)); candidates.push(evaluate_normalizes_to(self, alias_rhs, goal.predicate.0));
candidates.push(self.probe(|this| { candidates.push(self.probe(|ecx| {
debug!("compute_alias_eq_goal: alias defids are equal, equating substs"); debug!("compute_alias_eq_goal: alias defids are equal, equating substs");
let nested_goals = this.eq(goal.param_env, alias_lhs, alias_rhs)?; ecx.eq(goal.param_env, alias_lhs, alias_rhs)?;
this.evaluate_all_and_make_canonical_response(nested_goals) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
})); }));
debug!(?candidates); debug!(?candidates);
@ -379,62 +218,31 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
goal: Goal<'tcx, (ty::Const<'tcx>, Ty<'tcx>)>, goal: Goal<'tcx, (ty::Const<'tcx>, Ty<'tcx>)>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
let (ct, ty) = goal.predicate; let (ct, ty) = goal.predicate;
let nested_goals = self.eq(goal.param_env, ct.ty(), ty)?; self.eq(goal.param_env, ct.ty(), ty)?;
self.evaluate_all_and_make_canonical_response(nested_goals) self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
} }
impl<'tcx> EvalCtxt<'_, 'tcx> { impl<'tcx> EvalCtxt<'_, 'tcx> {
// Recursively evaluates a list of goals to completion, returning the certainty #[instrument(level = "debug", skip(self))]
// of all of the goals. fn set_normalizes_to_hack_goal(&mut self, goal: Goal<'tcx, ty::ProjectionPredicate<'tcx>>) {
fn evaluate_all( assert!(
&mut self, self.nested_goals.normalizes_to_hack_goal.is_none(),
mut goals: Vec<Goal<'tcx, ty::Predicate<'tcx>>>, "attempted to set the projection eq hack goal when one already exists"
) -> Result<Certainty, NoSolution> { );
let mut new_goals = Vec::new(); self.nested_goals.normalizes_to_hack_goal = Some(goal);
self.repeat_while_none(
|_| Ok(Certainty::Maybe(MaybeCause::Overflow)),
|this| {
let mut has_changed = Err(Certainty::Yes);
for goal in goals.drain(..) {
let (changed, certainty) = match this.evaluate_goal(goal) {
Ok(result) => result,
Err(NoSolution) => return Some(Err(NoSolution)),
};
if changed {
has_changed = Ok(());
} }
match certainty { #[instrument(level = "debug", skip(self))]
Certainty::Yes => {} fn add_goal(&mut self, goal: Goal<'tcx, ty::Predicate<'tcx>>) {
Certainty::Maybe(_) => { self.nested_goals.goals.push(goal);
new_goals.push(goal);
has_changed = has_changed.map_err(|c| c.unify_and(certainty));
}
}
} }
match has_changed { #[instrument(level = "debug", skip(self, goals))]
Ok(()) => { fn add_goals(&mut self, goals: impl IntoIterator<Item = Goal<'tcx, ty::Predicate<'tcx>>>) {
mem::swap(&mut new_goals, &mut goals); let current_len = self.nested_goals.goals.len();
None self.nested_goals.goals.extend(goals);
} debug!("added_goals={:?}", &self.nested_goals.goals[current_len..]);
Err(certainty) => Some(Ok(certainty)),
}
},
)
}
// Recursively evaluates a list of goals to completion, making a query response.
//
// This is just a convenient way of calling [`EvalCtxt::evaluate_all`],
// then [`EvalCtxt::make_canonical_response`].
fn evaluate_all_and_make_canonical_response(
&mut self,
goals: Vec<Goal<'tcx, ty::Predicate<'tcx>>>,
) -> QueryResult<'tcx> {
self.evaluate_all(goals).and_then(|certainty| self.make_canonical_response(certainty))
} }
fn try_merge_responses( fn try_merge_responses(
@ -466,7 +274,7 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
}); });
// FIXME(-Ztrait-solver=next): We should take the intersection of the constraints on all the // FIXME(-Ztrait-solver=next): We should take the intersection of the constraints on all the
// responses and use that for the constraints of this ambiguous response. // responses and use that for the constraints of this ambiguous response.
let response = self.make_canonical_response(certainty); let response = self.evaluate_added_goals_and_make_canonical_response(certainty);
if let Ok(response) = &response { if let Ok(response) = &response {
assert!(response.has_no_inference_or_external_constraints()); assert!(response.has_no_inference_or_external_constraints());
} }

@ -20,6 +20,7 @@ use rustc_span::{sym, DUMMY_SP};
use std::iter; use std::iter;
impl<'tcx> EvalCtxt<'_, 'tcx> { impl<'tcx> EvalCtxt<'_, 'tcx> {
#[instrument(level = "debug", skip(self), ret)]
pub(super) fn compute_projection_goal( pub(super) fn compute_projection_goal(
&mut self, &mut self,
goal: Goal<'tcx, ProjectionPredicate<'tcx>>, goal: Goal<'tcx, ProjectionPredicate<'tcx>>,
@ -36,54 +37,18 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
self.merge_candidates_and_discard_reservation_impls(candidates) self.merge_candidates_and_discard_reservation_impls(candidates)
} else { } else {
let predicate = goal.predicate; let predicate = goal.predicate;
let unconstrained_rhs = match predicate.term.unpack() { let unconstrained_rhs = self.next_term_infer_of_kind(predicate.term);
ty::TermKind::Ty(_) => self.next_ty_infer().into(), let unconstrained_predicate = ProjectionPredicate {
ty::TermKind::Const(ct) => self.next_const_infer(ct.ty()).into(),
};
let unconstrained_predicate = ty::Clause::Projection(ProjectionPredicate {
projection_ty: goal.predicate.projection_ty, projection_ty: goal.predicate.projection_ty,
term: unconstrained_rhs, term: unconstrained_rhs,
}); };
let (_has_changed, normalize_certainty) = self.in_projection_eq_hack(|this| {
this.evaluate_goal(goal.with(this.tcx(), unconstrained_predicate))
})?;
let nested_eq_goals = self.eq(goal.param_env, unconstrained_rhs, predicate.term)?; self.set_normalizes_to_hack_goal(goal.with(self.tcx(), unconstrained_predicate));
let eval_certainty = self.evaluate_all(nested_eq_goals)?; self.try_evaluate_added_goals()?;
self.make_canonical_response(normalize_certainty.unify_and(eval_certainty)) self.eq(goal.param_env, unconstrained_rhs, predicate.term)?;
self.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
} }
/// This sets a flag used by a debug assert in [`EvalCtxt::evaluate_goal`],
/// see the comment in that method for more details.
fn in_projection_eq_hack<T>(&mut self, f: impl FnOnce(&mut Self) -> T) -> T {
self.in_projection_eq_hack = true;
let result = f(self);
self.in_projection_eq_hack = false;
result
}
/// After normalizing the projection to `normalized_alias` with the given
/// `normalization_certainty`, constrain the inference variable `term` to it
/// and return a query response.
fn eq_term_and_make_canonical_response(
&mut self,
goal: Goal<'tcx, ProjectionPredicate<'tcx>>,
normalization_certainty: Certainty,
normalized_alias: impl Into<ty::Term<'tcx>>,
) -> QueryResult<'tcx> {
// The term of our goal should be fully unconstrained, so this should never fail.
//
// It can however be ambiguous when the `normalized_alias` contains a projection.
let nested_goals = self
.eq(goal.param_env, goal.predicate.term, normalized_alias.into())
.expect("failed to unify with unconstrained term");
let unify_certainty =
self.evaluate_all(nested_goals).expect("failed to unify with unconstrained term");
self.make_canonical_response(normalization_certainty.unify_and(unify_certainty))
}
} }
impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> { impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
@ -111,19 +76,14 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
ecx.probe(|ecx| { ecx.probe(|ecx| {
let assumption_projection_pred = let assumption_projection_pred =
ecx.instantiate_binder_with_infer(poly_projection_pred); ecx.instantiate_binder_with_infer(poly_projection_pred);
let mut nested_goals = ecx.eq( ecx.eq(
goal.param_env, goal.param_env,
goal.predicate.projection_ty, goal.predicate.projection_ty,
assumption_projection_pred.projection_ty, assumption_projection_pred.projection_ty,
)?; )?;
nested_goals.extend(requirements); ecx.eq(goal.param_env, goal.predicate.term, assumption_projection_pred.term)?;
let subst_certainty = ecx.evaluate_all(nested_goals)?; ecx.add_goals(requirements);
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.eq_term_and_make_canonical_response(
goal,
subst_certainty,
assumption_projection_pred.term,
)
}) })
} else { } else {
Err(NoSolution) Err(NoSolution)
@ -139,21 +99,22 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
&& poly_projection_pred.projection_def_id() == goal.predicate.def_id() && poly_projection_pred.projection_def_id() == goal.predicate.def_id()
{ {
ecx.probe(|ecx| { ecx.probe(|ecx| {
let tcx = ecx.tcx();
let assumption_projection_pred = let assumption_projection_pred =
ecx.instantiate_binder_with_infer(poly_projection_pred); ecx.instantiate_binder_with_infer(poly_projection_pred);
let mut nested_goals = ecx.eq( ecx.eq(
goal.param_env, goal.param_env,
goal.predicate.projection_ty, goal.predicate.projection_ty,
assumption_projection_pred.projection_ty, assumption_projection_pred.projection_ty,
)?; )?;
let tcx = ecx.tcx();
let ty::Dynamic(bounds, _, _) = *goal.predicate.self_ty().kind() else { let ty::Dynamic(bounds, _, _) = *goal.predicate.self_ty().kind() else {
bug!("expected object type in `consider_object_bound_candidate`"); bug!("expected object type in `consider_object_bound_candidate`");
}; };
nested_goals.extend( ecx.add_goals(
structural_traits::predicates_for_object_candidate( structural_traits::predicates_for_object_candidate(
ecx, &ecx,
goal.param_env, goal.param_env,
goal.predicate.projection_ty.trait_ref(tcx), goal.predicate.projection_ty.trait_ref(tcx),
bounds, bounds,
@ -161,14 +122,8 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
.into_iter() .into_iter()
.map(|pred| goal.with(tcx, pred)), .map(|pred| goal.with(tcx, pred)),
); );
ecx.eq(goal.param_env, goal.predicate.term, assumption_projection_pred.term)?;
let subst_certainty = ecx.evaluate_all(nested_goals)?; ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.eq_term_and_make_canonical_response(
goal,
subst_certainty,
assumption_projection_pred.term,
)
}) })
} else { } else {
Err(NoSolution) Err(NoSolution)
@ -195,16 +150,15 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
let impl_substs = ecx.fresh_substs_for_item(impl_def_id); let impl_substs = ecx.fresh_substs_for_item(impl_def_id);
let impl_trait_ref = impl_trait_ref.subst(tcx, impl_substs); let impl_trait_ref = impl_trait_ref.subst(tcx, impl_substs);
let mut nested_goals = ecx.eq(goal.param_env, goal_trait_ref, impl_trait_ref)?; ecx.eq(goal.param_env, goal_trait_ref, impl_trait_ref)?;
let where_clause_bounds = tcx let where_clause_bounds = tcx
.predicates_of(impl_def_id) .predicates_of(impl_def_id)
.instantiate(tcx, impl_substs) .instantiate(tcx, impl_substs)
.predicates .predicates
.into_iter() .into_iter()
.map(|pred| goal.with(tcx, pred)); .map(|pred| goal.with(tcx, pred));
ecx.add_goals(where_clause_bounds);
nested_goals.extend(where_clause_bounds);
let match_impl_certainty = ecx.evaluate_all(nested_goals)?;
// In case the associated item is hidden due to specialization, we have to // In case the associated item is hidden due to specialization, we have to
// return ambiguity this would otherwise be incomplete, resulting in // return ambiguity this would otherwise be incomplete, resulting in
@ -216,7 +170,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
goal.predicate.def_id(), goal.predicate.def_id(),
impl_def_id impl_def_id
)? else { )? else {
return ecx.make_canonical_response(match_impl_certainty.unify_and(Certainty::AMBIGUOUS)); return ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes);
}; };
if !assoc_def.item.defaultness(tcx).has_value() { if !assoc_def.item.defaultness(tcx).has_value() {
@ -263,7 +217,8 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
ty.map_bound(|ty| ty.into()) ty.map_bound(|ty| ty.into())
}; };
ecx.eq_term_and_make_canonical_response(goal, match_impl_certainty, term.subst(tcx, substs)) ecx.eq(goal.param_env, goal.predicate.term, term.subst(tcx, substs))?;
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}) })
} }
@ -308,13 +263,17 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
goal_kind: ty::ClosureKind, goal_kind: ty::ClosureKind,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
let tcx = ecx.tcx(); let tcx = ecx.tcx();
let Some(tupled_inputs_and_output) = let tupled_inputs_and_output =
structural_traits::extract_tupled_inputs_and_output_from_callable( match structural_traits::extract_tupled_inputs_and_output_from_callable(
tcx, tcx,
goal.predicate.self_ty(), goal.predicate.self_ty(),
goal_kind, goal_kind,
)? else { )? {
return ecx.make_canonical_response(Certainty::AMBIGUOUS); Some(tupled_inputs_and_output) => tupled_inputs_and_output,
None => {
return ecx
.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS);
}
}; };
let output_is_sized_pred = tupled_inputs_and_output let output_is_sized_pred = tupled_inputs_and_output
.map_bound(|(_, output)| tcx.at(DUMMY_SP).mk_trait_ref(LangItem::Sized, [output])); .map_bound(|(_, output)| tcx.at(DUMMY_SP).mk_trait_ref(LangItem::Sized, [output]));
@ -380,13 +339,9 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
[ty::GenericArg::from(goal.predicate.self_ty())], [ty::GenericArg::from(goal.predicate.self_ty())],
)); ));
let (_, is_sized_certainty) = ecx.add_goal(goal.with(tcx, sized_predicate));
ecx.evaluate_goal(goal.with(tcx, sized_predicate))?; ecx.eq(goal.param_env, goal.predicate.term, tcx.types.unit.into())?;
return ecx.eq_term_and_make_canonical_response( return ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes);
goal,
is_sized_certainty,
tcx.types.unit,
);
} }
ty::Adt(def, substs) if def.is_struct() => { ty::Adt(def, substs) if def.is_struct() => {
@ -394,12 +349,12 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
None => tcx.types.unit, None => tcx.types.unit,
Some(field_def) => { Some(field_def) => {
let self_ty = field_def.ty(tcx, substs); let self_ty = field_def.ty(tcx, substs);
let new_goal = goal.with( ecx.add_goal(goal.with(
tcx, tcx,
ty::Binder::dummy(goal.predicate.with_self_ty(tcx, self_ty)), ty::Binder::dummy(goal.predicate.with_self_ty(tcx, self_ty)),
); ));
let (_, certainty) = ecx.evaluate_goal(new_goal)?; return ecx
return ecx.make_canonical_response(certainty); .evaluate_added_goals_and_make_canonical_response(Certainty::Yes);
} }
} }
} }
@ -408,12 +363,12 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
ty::Tuple(elements) => match elements.last() { ty::Tuple(elements) => match elements.last() {
None => tcx.types.unit, None => tcx.types.unit,
Some(&self_ty) => { Some(&self_ty) => {
let new_goal = goal.with( ecx.add_goal(goal.with(
tcx, tcx,
ty::Binder::dummy(goal.predicate.with_self_ty(tcx, self_ty)), ty::Binder::dummy(goal.predicate.with_self_ty(tcx, self_ty)),
); ));
let (_, certainty) = ecx.evaluate_goal(new_goal)?; return ecx
return ecx.make_canonical_response(certainty); .evaluate_added_goals_and_make_canonical_response(Certainty::Yes);
} }
}, },
@ -426,7 +381,8 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
), ),
}; };
ecx.eq_term_and_make_canonical_response(goal, Certainty::Yes, metadata_ty) ecx.eq(goal.param_env, goal.predicate.term, metadata_ty.into())?;
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}) })
} }
@ -522,7 +478,10 @@ impl<'tcx> assembly::GoalKind<'tcx> for ProjectionPredicate<'tcx> {
goal: Goal<'tcx, Self>, goal: Goal<'tcx, Self>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
let discriminant = goal.predicate.self_ty().discriminant_ty(ecx.tcx()); let discriminant = goal.predicate.self_ty().discriminant_ty(ecx.tcx());
ecx.probe(|ecx| ecx.eq_term_and_make_canonical_response(goal, Certainty::Yes, discriminant)) ecx.probe(|ecx| {
ecx.eq(goal.param_env, goal.predicate.term, discriminant.into())?;
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
})
} }
} }

View file

@ -39,9 +39,7 @@ impl<'tcx> SearchGraph<'tcx> {
} }
pub(super) fn is_empty(&self) -> bool { pub(super) fn is_empty(&self) -> bool {
self.stack.is_empty() self.stack.is_empty() && self.provisional_cache.is_empty()
&& self.provisional_cache.is_empty()
&& !self.overflow_data.did_overflow()
} }
/// Whether we're currently in a cycle. This should only be used /// Whether we're currently in a cycle. This should only be used

View file

@ -47,7 +47,6 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let impl_substs = ecx.fresh_substs_for_item(impl_def_id); let impl_substs = ecx.fresh_substs_for_item(impl_def_id);
let impl_trait_ref = impl_trait_ref.subst(tcx, impl_substs); let impl_trait_ref = impl_trait_ref.subst(tcx, impl_substs);
let mut nested_goals =
ecx.eq(goal.param_env, goal.predicate.trait_ref, impl_trait_ref)?; ecx.eq(goal.param_env, goal.predicate.trait_ref, impl_trait_ref)?;
let where_clause_bounds = tcx let where_clause_bounds = tcx
.predicates_of(impl_def_id) .predicates_of(impl_def_id)
@ -55,8 +54,8 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
.predicates .predicates
.into_iter() .into_iter()
.map(|pred| goal.with(tcx, pred)); .map(|pred| goal.with(tcx, pred));
nested_goals.extend(where_clause_bounds); ecx.add_goals(where_clause_bounds);
ecx.evaluate_all_and_make_canonical_response(nested_goals) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}) })
} }
@ -73,13 +72,13 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
ecx.probe(|ecx| { ecx.probe(|ecx| {
let assumption_trait_pred = let assumption_trait_pred =
ecx.instantiate_binder_with_infer(poly_trait_pred); ecx.instantiate_binder_with_infer(poly_trait_pred);
let mut nested_goals = ecx.eq( ecx.eq(
goal.param_env, goal.param_env,
goal.predicate.trait_ref, goal.predicate.trait_ref,
assumption_trait_pred.trait_ref, assumption_trait_pred.trait_ref,
)?; )?;
nested_goals.extend(requirements); ecx.add_goals(requirements);
ecx.evaluate_all_and_make_canonical_response(nested_goals) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}) })
} else { } else {
Err(NoSolution) Err(NoSolution)
@ -98,7 +97,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
ecx.probe(|ecx| { ecx.probe(|ecx| {
let assumption_trait_pred = let assumption_trait_pred =
ecx.instantiate_binder_with_infer(poly_trait_pred); ecx.instantiate_binder_with_infer(poly_trait_pred);
let mut nested_goals = ecx.eq( ecx.eq(
goal.param_env, goal.param_env,
goal.predicate.trait_ref, goal.predicate.trait_ref,
assumption_trait_pred.trait_ref, assumption_trait_pred.trait_ref,
@ -108,9 +107,9 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let ty::Dynamic(bounds, _, _) = *goal.predicate.self_ty().kind() else { let ty::Dynamic(bounds, _, _) = *goal.predicate.self_ty().kind() else {
bug!("expected object type in `consider_object_bound_candidate`"); bug!("expected object type in `consider_object_bound_candidate`");
}; };
nested_goals.extend( ecx.add_goals(
structural_traits::predicates_for_object_candidate( structural_traits::predicates_for_object_candidate(
ecx, &ecx,
goal.param_env, goal.param_env,
goal.predicate.trait_ref, goal.predicate.trait_ref,
bounds, bounds,
@ -118,8 +117,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
.into_iter() .into_iter()
.map(|pred| goal.with(tcx, pred)), .map(|pred| goal.with(tcx, pred)),
); );
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.evaluate_all_and_make_canonical_response(nested_goals)
}) })
} else { } else {
Err(NoSolution) Err(NoSolution)
@ -166,9 +164,8 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let nested_obligations = tcx let nested_obligations = tcx
.predicates_of(goal.predicate.def_id()) .predicates_of(goal.predicate.def_id())
.instantiate(tcx, goal.predicate.trait_ref.substs); .instantiate(tcx, goal.predicate.trait_ref.substs);
ecx.evaluate_all_and_make_canonical_response( ecx.add_goals(nested_obligations.predicates.into_iter().map(|p| goal.with(tcx, p)));
nested_obligations.predicates.into_iter().map(|p| goal.with(tcx, p)).collect(), ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
)
}) })
} }
@ -197,7 +194,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
goal: Goal<'tcx, Self>, goal: Goal<'tcx, Self>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
if goal.predicate.self_ty().has_non_region_infer() { if goal.predicate.self_ty().has_non_region_infer() {
return ecx.make_canonical_response(Certainty::AMBIGUOUS); return ecx.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS);
} }
let tcx = ecx.tcx(); let tcx = ecx.tcx();
@ -209,7 +206,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
&& layout.layout.align().abi == usize_layout.align().abi && layout.layout.align().abi == usize_layout.align().abi
{ {
// FIXME: We could make this faster by making a no-constraints response // FIXME: We could make this faster by making a no-constraints response
ecx.make_canonical_response(Certainty::Yes) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} else { } else {
Err(NoSolution) Err(NoSolution)
} }
@ -221,13 +218,17 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
goal_kind: ty::ClosureKind, goal_kind: ty::ClosureKind,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
let tcx = ecx.tcx(); let tcx = ecx.tcx();
let Some(tupled_inputs_and_output) = let tupled_inputs_and_output =
structural_traits::extract_tupled_inputs_and_output_from_callable( match structural_traits::extract_tupled_inputs_and_output_from_callable(
tcx, tcx,
goal.predicate.self_ty(), goal.predicate.self_ty(),
goal_kind, goal_kind,
)? else { )? {
return ecx.make_canonical_response(Certainty::AMBIGUOUS); Some(a) => a,
None => {
return ecx
.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS);
}
}; };
let output_is_sized_pred = tupled_inputs_and_output let output_is_sized_pred = tupled_inputs_and_output
.map_bound(|(_, output)| tcx.at(DUMMY_SP).mk_trait_ref(LangItem::Sized, [output])); .map_bound(|(_, output)| tcx.at(DUMMY_SP).mk_trait_ref(LangItem::Sized, [output]));
@ -247,7 +248,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
goal: Goal<'tcx, Self>, goal: Goal<'tcx, Self>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
if let ty::Tuple(..) = goal.predicate.self_ty().kind() { if let ty::Tuple(..) = goal.predicate.self_ty().kind() {
ecx.make_canonical_response(Certainty::Yes) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} else { } else {
Err(NoSolution) Err(NoSolution)
} }
@ -257,7 +258,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
ecx: &mut EvalCtxt<'_, 'tcx>, ecx: &mut EvalCtxt<'_, 'tcx>,
_goal: Goal<'tcx, Self>, _goal: Goal<'tcx, Self>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
ecx.make_canonical_response(Certainty::Yes) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
fn consider_builtin_future_candidate( fn consider_builtin_future_candidate(
@ -277,7 +278,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
// Async generators unconditionally implement `Future` // Async generators unconditionally implement `Future`
// Technically, we need to check that the future output type is Sized, // Technically, we need to check that the future output type is Sized,
// but that's already proven by the generator being WF. // but that's already proven by the generator being WF.
ecx.make_canonical_response(Certainty::Yes) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
fn consider_builtin_generator_candidate( fn consider_builtin_generator_candidate(
@ -317,7 +318,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let a_ty = goal.predicate.self_ty(); let a_ty = goal.predicate.self_ty();
let b_ty = goal.predicate.trait_ref.substs.type_at(1); let b_ty = goal.predicate.trait_ref.substs.type_at(1);
if b_ty.is_ty_var() { if b_ty.is_ty_var() {
return ecx.make_canonical_response(Certainty::AMBIGUOUS); return ecx.evaluate_added_goals_and_make_canonical_response(Certainty::AMBIGUOUS);
} }
ecx.probe(|ecx| { ecx.probe(|ecx| {
match (a_ty.kind(), b_ty.kind()) { match (a_ty.kind(), b_ty.kind()) {
@ -326,7 +327,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
// Dyn upcasting is handled separately, since due to upcasting, // Dyn upcasting is handled separately, since due to upcasting,
// when there are two supertraits that differ by substs, we // when there are two supertraits that differ by substs, we
// may return more than one query response. // may return more than one query response.
return Err(NoSolution); Err(NoSolution)
} }
// `T` -> `dyn Trait` unsizing // `T` -> `dyn Trait` unsizing
(_, &ty::Dynamic(data, region, ty::Dyn)) => { (_, &ty::Dynamic(data, region, ty::Dyn)) => {
@ -341,29 +342,26 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let Some(sized_def_id) = tcx.lang_items().sized_trait() else { let Some(sized_def_id) = tcx.lang_items().sized_trait() else {
return Err(NoSolution); return Err(NoSolution);
}; };
let nested_goals: Vec<_> = data
.iter()
// Check that the type implements all of the predicates of the def-id. // Check that the type implements all of the predicates of the def-id.
// (i.e. the principal, all of the associated types match, and any auto traits) // (i.e. the principal, all of the associated types match, and any auto traits)
.map(|pred| goal.with(tcx, pred.with_self_ty(tcx, a_ty))) ecx.add_goals(
.chain([ data.iter().map(|pred| goal.with(tcx, pred.with_self_ty(tcx, a_ty))),
);
// The type must be Sized to be unsized. // The type must be Sized to be unsized.
goal.with( ecx.add_goal(
tcx, goal.with(tcx, ty::Binder::dummy(tcx.mk_trait_ref(sized_def_id, [a_ty]))),
ty::Binder::dummy(tcx.mk_trait_ref(sized_def_id, [a_ty])), );
),
// The type must outlive the lifetime of the `dyn` we're unsizing into. // The type must outlive the lifetime of the `dyn` we're unsizing into.
ecx.add_goal(
goal.with(tcx, ty::Binder::dummy(ty::OutlivesPredicate(a_ty, region))), goal.with(tcx, ty::Binder::dummy(ty::OutlivesPredicate(a_ty, region))),
]) );
.collect(); ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.evaluate_all_and_make_canonical_response(nested_goals)
} }
// `[T; n]` -> `[T]` unsizing // `[T; n]` -> `[T]` unsizing
(&ty::Array(a_elem_ty, ..), &ty::Slice(b_elem_ty)) => { (&ty::Array(a_elem_ty, ..), &ty::Slice(b_elem_ty)) => {
// We just require that the element type stays the same // We just require that the element type stays the same
let nested_goals = ecx.eq(goal.param_env, a_elem_ty, b_elem_ty)?; ecx.eq(goal.param_env, a_elem_ty, b_elem_ty)?;
ecx.evaluate_all_and_make_canonical_response(nested_goals) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
// Struct unsizing `Struct<T>` -> `Struct<U>` where `T: Unsize<U>` // Struct unsizing `Struct<T>` -> `Struct<U>` where `T: Unsize<U>`
(&ty::Adt(a_def, a_substs), &ty::Adt(b_def, b_substs)) (&ty::Adt(a_def, a_substs), &ty::Adt(b_def, b_substs))
@ -397,15 +395,14 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
// Finally, we require that `TailA: Unsize<TailB>` for the tail field // Finally, we require that `TailA: Unsize<TailB>` for the tail field
// types. // types.
let mut nested_goals = ecx.eq(goal.param_env, unsized_a_ty, b_ty)?; ecx.eq(goal.param_env, unsized_a_ty, b_ty)?;
nested_goals.push(goal.with( ecx.add_goal(goal.with(
tcx, tcx,
ty::Binder::dummy( ty::Binder::dummy(
tcx.mk_trait_ref(goal.predicate.def_id(), [a_tail_ty, b_tail_ty]), tcx.mk_trait_ref(goal.predicate.def_id(), [a_tail_ty, b_tail_ty]),
), ),
)); ));
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.evaluate_all_and_make_canonical_response(nested_goals)
} }
// Tuple unsizing `(.., T)` -> `(.., U)` where `T: Unsize<U>` // Tuple unsizing `(.., T)` -> `(.., U)` where `T: Unsize<U>`
(&ty::Tuple(a_tys), &ty::Tuple(b_tys)) (&ty::Tuple(a_tys), &ty::Tuple(b_tys))
@ -417,17 +414,16 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
// Substitute just the tail field of B, and require that they're equal. // Substitute just the tail field of B, and require that they're equal.
let unsized_a_ty = let unsized_a_ty =
tcx.mk_tup_from_iter(a_rest_tys.iter().chain([b_last_ty]).copied()); tcx.mk_tup_from_iter(a_rest_tys.iter().chain([b_last_ty]).copied());
let mut nested_goals = ecx.eq(goal.param_env, unsized_a_ty, b_ty)?; ecx.eq(goal.param_env, unsized_a_ty, b_ty)?;
// Similar to ADTs, require that the rest of the fields are equal. // Similar to ADTs, require that the rest of the fields are equal.
nested_goals.push(goal.with( ecx.add_goal(goal.with(
tcx, tcx,
ty::Binder::dummy( ty::Binder::dummy(
tcx.mk_trait_ref(goal.predicate.def_id(), [*a_last_ty, *b_last_ty]), tcx.mk_trait_ref(goal.predicate.def_id(), [*a_last_ty, *b_last_ty]),
), ),
)); ));
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.evaluate_all_and_make_canonical_response(nested_goals)
} }
_ => Err(NoSolution), _ => Err(NoSolution),
} }
@ -477,12 +473,11 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
let new_a_ty = tcx.mk_dynamic(new_a_data, b_region, ty::Dyn); let new_a_ty = tcx.mk_dynamic(new_a_data, b_region, ty::Dyn);
// We also require that A's lifetime outlives B's lifetime. // We also require that A's lifetime outlives B's lifetime.
let mut nested_obligations = ecx.eq(goal.param_env, new_a_ty, b_ty)?; ecx.eq(goal.param_env, new_a_ty, b_ty)?;
nested_obligations.push( ecx.add_goal(
goal.with(tcx, ty::Binder::dummy(ty::OutlivesPredicate(a_region, b_region))), goal.with(tcx, ty::Binder::dummy(ty::OutlivesPredicate(a_region, b_region))),
); );
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
ecx.evaluate_all_and_make_canonical_response(nested_obligations)
}) })
}; };
@ -516,7 +511,7 @@ impl<'tcx> assembly::GoalKind<'tcx> for TraitPredicate<'tcx> {
_goal: Goal<'tcx, Self>, _goal: Goal<'tcx, Self>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
// `DiscriminantKind` is automatically implemented for every type. // `DiscriminantKind` is automatically implemented for every type.
ecx.make_canonical_response(Certainty::Yes) ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
} }
} }
@ -530,21 +525,23 @@ impl<'tcx> EvalCtxt<'_, 'tcx> {
goal: Goal<'tcx, TraitPredicate<'tcx>>, goal: Goal<'tcx, TraitPredicate<'tcx>>,
constituent_tys: impl Fn(&EvalCtxt<'_, 'tcx>, Ty<'tcx>) -> Result<Vec<Ty<'tcx>>, NoSolution>, constituent_tys: impl Fn(&EvalCtxt<'_, 'tcx>, Ty<'tcx>) -> Result<Vec<Ty<'tcx>>, NoSolution>,
) -> QueryResult<'tcx> { ) -> QueryResult<'tcx> {
self.probe(|this| { self.probe(|ecx| {
this.evaluate_all_and_make_canonical_response( ecx.add_goals(
constituent_tys(this, goal.predicate.self_ty())? constituent_tys(ecx, goal.predicate.self_ty())?
.into_iter() .into_iter()
.map(|ty| { .map(|ty| {
goal.with( goal.with(
this.tcx(), ecx.tcx(),
ty::Binder::dummy(goal.predicate.with_self_ty(this.tcx(), ty)), ty::Binder::dummy(goal.predicate.with_self_ty(ecx.tcx(), ty)),
) )
}) })
.collect(), .collect::<Vec<_>>(),
) );
ecx.evaluate_added_goals_and_make_canonical_response(Certainty::Yes)
}) })
} }
#[instrument(level = "debug", skip(self))]
pub(super) fn compute_trait_goal( pub(super) fn compute_trait_goal(
&mut self, &mut self,
goal: Goal<'tcx, TraitPredicate<'tcx>>, goal: Goal<'tcx, TraitPredicate<'tcx>>,

View file

@ -333,7 +333,7 @@ impl<'tcx> TypeFolder<TyCtxt<'tcx>> for ReplaceProjectionWith<'_, 'tcx> {
// FIXME: Technically this folder could be fallible? // FIXME: Technically this folder could be fallible?
let nested = self let nested = self
.ecx .ecx
.eq(self.param_env, alias_ty, proj.projection_ty) .eq_and_get_goals(self.param_env, alias_ty, proj.projection_ty)
.expect("expected to be able to unify goal projection with dyn's projection"); .expect("expected to be able to unify goal projection with dyn's projection");
// FIXME: Technically we could register these too.. // FIXME: Technically we could register these too..
assert!(nested.is_empty(), "did not expect unification to have any nested goals"); assert!(nested.is_empty(), "did not expect unification to have any nested goals");

View file

@ -17,10 +17,10 @@ use rustc_errors::{DelayDm, FatalError, MultiSpan};
use rustc_hir as hir; use rustc_hir as hir;
use rustc_hir::def_id::DefId; use rustc_hir::def_id::DefId;
use rustc_middle::ty::subst::{GenericArg, InternalSubsts}; use rustc_middle::ty::subst::{GenericArg, InternalSubsts};
use rustc_middle::ty::ToPredicate;
use rustc_middle::ty::{ use rustc_middle::ty::{
self, EarlyBinder, Ty, TyCtxt, TypeSuperVisitable, TypeVisitable, TypeVisitor, self, EarlyBinder, Ty, TyCtxt, TypeSuperVisitable, TypeVisitable, TypeVisitor,
}; };
use rustc_middle::ty::{ToPredicate, TypeVisitableExt};
use rustc_session::lint::builtin::WHERE_CLAUSES_OBJECT_SAFETY; use rustc_session::lint::builtin::WHERE_CLAUSES_OBJECT_SAFETY;
use rustc_span::symbol::Symbol; use rustc_span::symbol::Symbol;
use rustc_span::Span; use rustc_span::Span;
@ -139,6 +139,10 @@ fn object_safety_violations_for_trait(
if !spans.is_empty() { if !spans.is_empty() {
violations.push(ObjectSafetyViolation::SupertraitSelf(spans)); violations.push(ObjectSafetyViolation::SupertraitSelf(spans));
} }
let spans = super_predicates_have_non_lifetime_binders(tcx, trait_def_id);
if !spans.is_empty() {
violations.push(ObjectSafetyViolation::SupertraitNonLifetimeBinder(spans));
}
violations.extend( violations.extend(
tcx.associated_items(trait_def_id) tcx.associated_items(trait_def_id)
@ -348,6 +352,21 @@ fn predicate_references_self<'tcx>(
} }
} }
fn super_predicates_have_non_lifetime_binders(
tcx: TyCtxt<'_>,
trait_def_id: DefId,
) -> SmallVec<[Span; 1]> {
// If non_lifetime_binders is disabled, then exit early
if !tcx.features().non_lifetime_binders {
return SmallVec::new();
}
tcx.super_predicates_of(trait_def_id)
.predicates
.iter()
.filter_map(|(pred, span)| pred.has_non_region_late_bound().then_some(*span))
.collect()
}
fn trait_has_sized_self(tcx: TyCtxt<'_>, trait_def_id: DefId) -> bool { fn trait_has_sized_self(tcx: TyCtxt<'_>, trait_def_id: DefId) -> bool {
generics_require_sized_self(tcx, trait_def_id) generics_require_sized_self(tcx, trait_def_id)
} }
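
A hedged illustration of the kind of trait the new `SupertraitNonLifetimeBinder` check is aimed at; the trait and bound below are hypothetical, assume the unstable `non_lifetime_binders` feature, and are not taken from this diff:

    // Hypothetical example: a supertrait predicate that binds a type parameter.
    #![feature(non_lifetime_binders)]

    trait Lookup
    where
        for<T> Self: PartialEq<T>, // non-lifetime binder in a super predicate
    {
    }

    // With this change, `dyn Lookup` should now be reported as not object safe
    // via `ObjectSafetyViolation::SupertraitNonLifetimeBinder`.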

View file

@ -7,7 +7,10 @@ use rustc_middle::ty::{
TyCtxt, TypeSuperVisitable, TypeVisitable, TypeVisitor, TyCtxt, TypeSuperVisitable, TypeVisitable, TypeVisitor,
}; };
use rustc_session::config::TraitSolver; use rustc_session::config::TraitSolver;
use rustc_span::def_id::{DefId, CRATE_DEF_ID}; use rustc_span::{
def_id::{DefId, CRATE_DEF_ID},
DUMMY_SP,
};
use rustc_trait_selection::traits; use rustc_trait_selection::traits;
fn sized_constraint_for_ty<'tcx>( fn sized_constraint_for_ty<'tcx>(
@ -275,16 +278,22 @@ impl<'tcx> TypeVisitor<TyCtxt<'tcx>> for ImplTraitInTraitFinder<'_, 'tcx> {
} }
fn visit_ty(&mut self, ty: Ty<'tcx>) -> std::ops::ControlFlow<Self::BreakTy> { fn visit_ty(&mut self, ty: Ty<'tcx>) -> std::ops::ControlFlow<Self::BreakTy> {
if let ty::Alias(ty::Projection, alias_ty) = *ty.kind() if let ty::Alias(ty::Projection, unshifted_alias_ty) = *ty.kind()
&& self.tcx.is_impl_trait_in_trait(alias_ty.def_id) && self.tcx.is_impl_trait_in_trait(unshifted_alias_ty.def_id)
&& self.tcx.impl_trait_in_trait_parent_fn(alias_ty.def_id) == self.fn_def_id && self.tcx.impl_trait_in_trait_parent_fn(unshifted_alias_ty.def_id) == self.fn_def_id
&& self.seen.insert(alias_ty.def_id) && self.seen.insert(unshifted_alias_ty.def_id)
{ {
// We have entered some binders as we've walked into the // We have entered some binders as we've walked into the
// bounds of the RPITIT. Shift these binders back out when // bounds of the RPITIT. Shift these binders back out when
// constructing the top-level projection predicate. // constructing the top-level projection predicate.
let alias_ty = self.tcx.fold_regions(alias_ty, |re, _| { let shifted_alias_ty = self.tcx.fold_regions(unshifted_alias_ty, |re, depth| {
if let ty::ReLateBound(index, bv) = re.kind() { if let ty::ReLateBound(index, bv) = re.kind() {
if depth != ty::INNERMOST {
return self.tcx.mk_re_error_with_message(
DUMMY_SP,
"we shouldn't walk non-predicate binders with `impl Trait`...",
);
}
self.tcx.mk_re_late_bound(index.shifted_out_to_binder(self.depth), bv) self.tcx.mk_re_late_bound(index.shifted_out_to_binder(self.depth), bv)
} else { } else {
re re
@ -295,26 +304,27 @@ impl<'tcx> TypeVisitor<TyCtxt<'tcx>> for ImplTraitInTraitFinder<'_, 'tcx> {
// the `type_of` of the trait's associated item. If we're using the old lowering // the `type_of` of the trait's associated item. If we're using the old lowering
// strategy, then just reinterpret the associated type like an opaque :^) // strategy, then just reinterpret the associated type like an opaque :^)
let default_ty = if self.tcx.lower_impl_trait_in_trait_to_assoc_ty() { let default_ty = if self.tcx.lower_impl_trait_in_trait_to_assoc_ty() {
self self.tcx.type_of(shifted_alias_ty.def_id).subst(self.tcx, shifted_alias_ty.substs)
.tcx
.type_of(alias_ty.def_id)
.subst(self.tcx, alias_ty.substs)
} else { } else {
self.tcx.mk_alias(ty::Opaque, alias_ty) self.tcx.mk_alias(ty::Opaque, shifted_alias_ty)
}; };
self.predicates.push( self.predicates.push(
ty::Binder::bind_with_vars( ty::Binder::bind_with_vars(
ty::ProjectionPredicate { ty::ProjectionPredicate { projection_ty: shifted_alias_ty, term: default_ty.into() },
projection_ty: alias_ty,
term: default_ty.into(),
},
self.bound_vars, self.bound_vars,
) )
.to_predicate(self.tcx), .to_predicate(self.tcx),
); );
for bound in self.tcx.item_bounds(alias_ty.def_id).subst_iter(self.tcx, alias_ty.substs) // We walk the *un-shifted* alias ty, because we're tracking the de bruijn
// binder depth, and if we were to walk `shifted_alias_ty` instead, we'd
// have to reset `self.depth` back to `ty::INNERMOST` or something. It's
// easier to just do this.
for bound in self
.tcx
.item_bounds(unshifted_alias_ty.def_id)
.subst_iter(self.tcx, unshifted_alias_ty.substs)
{ {
bound.visit_with(self); bound.visit_with(self);
} }
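
For orientation, a hedged sketch of the source pattern this visitor walks; the trait and method names are invented, and the snippet assumes the unstable return-position-`impl Trait`-in-trait feature:

    // Hypothetical example: a default trait method whose return type nests one
    // `impl Trait` inside another. The outer RPITIT's bounds mention the inner
    // one at a deeper binder depth, which is why the un-shifted alias ty is the
    // one that gets walked here.
    #![feature(return_position_impl_trait_in_trait)]

    trait Produce {
        fn produce(&self) -> impl Iterator<Item = impl Sized> {
            std::iter::empty::<u8>()
        }
    }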

View file

@ -806,3 +806,9 @@ changelog-seen = 2
# #
# This list must be non-empty. # This list must be non-empty.
#compression-formats = ["gz", "xz"] #compression-formats = ["gz", "xz"]
# How much time should be spent compressing the tarballs. The better the
# compression profile, the longer compression will take.
#
# Available options: fast, balanced, best
#compression-profile = "fast"

View file

@ -164,12 +164,13 @@ where
/// element is encountered: /// element is encountered:
/// ///
/// ``` /// ```
/// let f = |&x: &i32| if x < 0 { Err("Negative element found") } else { Ok(x) };
/// let v = vec![1, 2]; /// let v = vec![1, 2];
/// let res: Result<i32, &'static str> = v.iter().map(|&x: &i32| /// let res: Result<i32, _> = v.iter().map(f).sum();
/// if x < 0 { Err("Negative element found") }
/// else { Ok(x) }
/// ).sum();
/// assert_eq!(res, Ok(3)); /// assert_eq!(res, Ok(3));
/// let v = vec![1, -2];
/// let res: Result<i32, _> = v.iter().map(f).sum();
/// assert_eq!(res, Err("Negative element found"));
/// ``` /// ```
fn sum<I>(iter: I) -> Result<T, E> fn sum<I>(iter: I) -> Result<T, E>
where where
@ -187,6 +188,20 @@ where
/// Takes each element in the [`Iterator`]: if it is an [`Err`], no further /// Takes each element in the [`Iterator`]: if it is an [`Err`], no further
/// elements are taken, and the [`Err`] is returned. Should no [`Err`] /// elements are taken, and the [`Err`] is returned. Should no [`Err`]
/// occur, the product of all elements is returned. /// occur, the product of all elements is returned.
///
/// # Examples
///
/// This multiplies each number in a vector of strings;
/// if a string cannot be parsed, the operation returns `Err`:
///
/// ```
/// let nums = vec!["5", "10", "1", "2"];
/// let total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();
/// assert_eq!(total, Ok(100));
/// let nums = vec!["5", "10", "one", "2"];
/// let total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();
/// assert!(total.is_err());
/// ```
fn product<I>(iter: I) -> Result<T, E> fn product<I>(iter: I) -> Result<T, E>
where where
I: Iterator<Item = Result<U, E>>, I: Iterator<Item = Result<U, E>>,
@ -213,6 +228,9 @@ where
/// let words = vec!["have", "a", "great", "day"]; /// let words = vec!["have", "a", "great", "day"];
/// let total: Option<usize> = words.iter().map(|w| w.find('a')).sum(); /// let total: Option<usize> = words.iter().map(|w| w.find('a')).sum();
/// assert_eq!(total, Some(5)); /// assert_eq!(total, Some(5));
/// let words = vec!["have", "a", "good", "day"];
/// let total: Option<usize> = words.iter().map(|w| w.find('a')).sum();
/// assert_eq!(total, None);
/// ``` /// ```
fn sum<I>(iter: I) -> Option<T> fn sum<I>(iter: I) -> Option<T>
where where
@ -230,6 +248,20 @@ where
/// Takes each element in the [`Iterator`]: if it is a [`None`], no further /// Takes each element in the [`Iterator`]: if it is a [`None`], no further
/// elements are taken, and the [`None`] is returned. Should no [`None`] /// elements are taken, and the [`None`] is returned. Should no [`None`]
/// occur, the product of all elements is returned. /// occur, the product of all elements is returned.
///
/// # Examples
///
/// This multiplies each number in a vector of strings;
/// if a string cannot be parsed, the operation returns `None`:
///
/// ```
/// let nums = vec!["5", "10", "1", "2"];
/// let total: Option<usize> = nums.iter().map(|w| w.parse::<usize>().ok()).product();
/// assert_eq!(total, Some(100));
/// let nums = vec!["5", "10", "one", "2"];
/// let total: Option<usize> = nums.iter().map(|w| w.parse::<usize>().ok()).product();
/// assert_eq!(total, None);
/// ```
fn product<I>(iter: I) -> Option<T> fn product<I>(iter: I) -> Option<T>
where where
I: Iterator<Item = Option<U>>, I: Iterator<Item = Option<U>>,

View file

@ -3448,6 +3448,9 @@ pub trait Iterator {
/// ///
/// An empty iterator returns the zero value of the type. /// An empty iterator returns the zero value of the type.
/// ///
/// `sum()` can be used to sum any type implementing [`Sum`][`core::iter::Sum`],
/// including [`Option`][`Option::sum`] and [`Result`][`Result::sum`].
///
/// # Panics /// # Panics
/// ///
/// When calling `sum()` and a primitive integer type is being returned, this /// When calling `sum()` and a primitive integer type is being returned, this
@ -3478,6 +3481,9 @@ pub trait Iterator {
/// ///
/// An empty iterator returns the one value of the type. /// An empty iterator returns the one value of the type.
/// ///
/// `product()` can be used to multiply any type implementing [`Product`][`core::iter::Product`],
/// including [`Option`][`Option::product`] and [`Result`][`Result::product`].
///
/// # Panics /// # Panics
/// ///
/// When calling `product()` and a primitive integer type is being returned, /// When calling `product()` and a primitive integer type is being returned,
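
A small usage sketch of the behavior the added doc sentences point to, using only stable `Iterator::sum`/`Iterator::product` over `Option` and `Result` items:

    // Summing into an Option: any None short-circuits the whole sum.
    let nums = ["1", "2", "3"];
    let sum: Option<u32> = nums.iter().map(|s| s.parse::<u32>().ok()).sum();
    assert_eq!(sum, Some(6));

    // Multiplying into a Result: the first parse error is returned instead of a product.
    let nums = ["2", "three", "4"];
    let product: Result<u32, _> = nums.iter().map(|s| s.parse::<u32>()).product();
    assert!(product.is_err());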

View file

@ -2,7 +2,8 @@ use crate::io::prelude::*;
use crate::env; use crate::env;
use crate::fs::{self, File, OpenOptions}; use crate::fs::{self, File, OpenOptions};
use crate::io::{ErrorKind, SeekFrom}; use crate::io::{BorrowedBuf, ErrorKind, SeekFrom};
use crate::mem::MaybeUninit;
use crate::path::Path; use crate::path::Path;
use crate::str; use crate::str;
use crate::sync::Arc; use crate::sync::Arc;
@ -401,6 +402,23 @@ fn file_test_io_seek_read_write() {
check!(fs::remove_file(&filename)); check!(fs::remove_file(&filename));
} }
#[test]
fn file_test_read_buf() {
let tmpdir = tmpdir();
let filename = &tmpdir.join("test");
check!(fs::write(filename, &[1, 2, 3, 4]));
let mut buf: [MaybeUninit<u8>; 128] = MaybeUninit::uninit_array();
let mut buf = BorrowedBuf::from(buf.as_mut_slice());
let mut file = check!(File::open(filename));
check!(file.read_buf(buf.unfilled()));
assert_eq!(buf.filled(), &[1, 2, 3, 4]);
// File::read_buf should omit buffer initialization.
assert_eq!(buf.init_len(), 4);
check!(fs::remove_file(filename));
}
#[test] #[test]
fn file_test_stat_is_correct_on_is_file() { fn file_test_stat_is_correct_on_is_file() {
let tmpdir = tmpdir(); let tmpdir = tmpdir();
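
The tests above use the same pattern a caller of the new `read_buf` impls would; here is a hedged, nightly-only sketch against an in-memory reader (it assumes the unstable `read_buf` and `maybe_uninit_uninit_array` features):

    #![feature(read_buf, maybe_uninit_uninit_array)]
    use std::io::{BorrowedBuf, Read};
    use std::mem::MaybeUninit;

    fn main() -> std::io::Result<()> {
        // Uninitialized storage; read_buf tracks how much of it gets initialized.
        let mut storage: [MaybeUninit<u8>; 16] = MaybeUninit::uninit_array();
        let mut buf = BorrowedBuf::from(storage.as_mut_slice());
        let mut src: &[u8] = &[1, 2, 3, 4];
        src.read_buf(buf.unfilled())?;
        assert_eq!(buf.filled(), &[1, 2, 3, 4]);
        Ok(())
    }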

View file

@ -8,7 +8,7 @@ use crate::io::prelude::*;
use crate::cell::{Cell, RefCell}; use crate::cell::{Cell, RefCell};
use crate::fmt; use crate::fmt;
use crate::fs::File; use crate::fs::File;
use crate::io::{self, BufReader, IoSlice, IoSliceMut, LineWriter, Lines}; use crate::io::{self, BorrowedCursor, BufReader, IoSlice, IoSliceMut, LineWriter, Lines};
use crate::sync::atomic::{AtomicBool, Ordering}; use crate::sync::atomic::{AtomicBool, Ordering};
use crate::sync::{Arc, Mutex, MutexGuard, OnceLock, ReentrantMutex, ReentrantMutexGuard}; use crate::sync::{Arc, Mutex, MutexGuard, OnceLock, ReentrantMutex, ReentrantMutexGuard};
use crate::sys::stdio; use crate::sys::stdio;
@ -97,6 +97,10 @@ impl Read for StdinRaw {
handle_ebadf(self.0.read(buf), 0) handle_ebadf(self.0.read(buf), 0)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
handle_ebadf(self.0.read_buf(buf), ())
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
handle_ebadf(self.0.read_vectored(bufs), 0) handle_ebadf(self.0.read_vectored(bufs), 0)
} }
@ -418,6 +422,9 @@ impl Read for Stdin {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
self.lock().read(buf) self.lock().read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.lock().read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.lock().read_vectored(bufs) self.lock().read_vectored(bufs)
} }
@ -450,6 +457,10 @@ impl Read for StdinLock<'_> {
self.inner.read(buf) self.inner.read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.inner.read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.read_vectored(bufs) self.inner.read_vectored(bufs)
} }

View file

@ -6,7 +6,7 @@ mod tests;
use crate::io::prelude::*; use crate::io::prelude::*;
use crate::fmt; use crate::fmt;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::iter::FusedIterator; use crate::iter::FusedIterator;
use crate::net::{Shutdown, SocketAddr, ToSocketAddrs}; use crate::net::{Shutdown, SocketAddr, ToSocketAddrs};
use crate::sys_common::net as net_imp; use crate::sys_common::net as net_imp;
@ -619,6 +619,10 @@ impl Read for TcpStream {
self.0.read(buf) self.0.read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.0.read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.0.read_vectored(bufs) self.0.read_vectored(bufs)
} }
@ -653,6 +657,10 @@ impl Read for &TcpStream {
self.0.read(buf) self.0.read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.0.read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.0.read_vectored(bufs) self.0.read_vectored(bufs)
} }

View file

@ -1,6 +1,7 @@
use crate::fmt; use crate::fmt;
use crate::io::prelude::*; use crate::io::prelude::*;
use crate::io::{ErrorKind, IoSlice, IoSliceMut}; use crate::io::{BorrowedBuf, ErrorKind, IoSlice, IoSliceMut};
use crate::mem::MaybeUninit;
use crate::net::test::{next_test_ip4, next_test_ip6}; use crate::net::test::{next_test_ip4, next_test_ip6};
use crate::net::*; use crate::net::*;
use crate::sync::mpsc::channel; use crate::sync::mpsc::channel;
@ -279,6 +280,31 @@ fn partial_read() {
}) })
} }
#[test]
fn read_buf() {
each_ip(&mut |addr| {
let srv = t!(TcpListener::bind(&addr));
let t = thread::spawn(move || {
let mut s = t!(TcpStream::connect(&addr));
s.write_all(&[1, 2, 3, 4]).unwrap();
});
let mut s = t!(srv.accept()).0;
let mut buf: [MaybeUninit<u8>; 128] = MaybeUninit::uninit_array();
let mut buf = BorrowedBuf::from(buf.as_mut_slice());
t!(s.read_buf(buf.unfilled()));
assert_eq!(buf.filled(), &[1, 2, 3, 4]);
// FIXME: sgx uses default_read_buf that initializes the buffer.
if cfg!(not(target_env = "sgx")) {
// TcpStream::read_buf should omit buffer initialization.
assert_eq!(buf.init_len(), 4);
}
t.join().ok().expect("thread panicked");
})
}
#[test] #[test]
fn read_vectored() { fn read_vectored() {
each_ip(&mut |addr| { each_ip(&mut |addr| {

View file

@ -110,7 +110,7 @@ use crate::convert::Infallible;
use crate::ffi::OsStr; use crate::ffi::OsStr;
use crate::fmt; use crate::fmt;
use crate::fs; use crate::fs;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::num::NonZeroI32; use crate::num::NonZeroI32;
use crate::path::Path; use crate::path::Path;
use crate::str; use crate::str;
@ -354,6 +354,10 @@ impl Read for ChildStdout {
self.inner.read(buf) self.inner.read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.inner.read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.read_vectored(bufs) self.inner.read_vectored(bufs)
} }
@ -419,6 +423,10 @@ impl Read for ChildStderr {
self.inner.read(buf) self.inner.read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.inner.read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.read_vectored(bufs) self.inner.read_vectored(bufs)
} }

View file

@ -1,7 +1,8 @@
use crate::io::prelude::*; use crate::io::prelude::*;
use super::{Command, Output, Stdio}; use super::{Command, Output, Stdio};
use crate::io::ErrorKind; use crate::io::{BorrowedBuf, ErrorKind};
use crate::mem::MaybeUninit;
use crate::str; use crate::str;
fn known_command() -> Command { fn known_command() -> Command {
@ -119,6 +120,37 @@ fn stdin_works() {
assert_eq!(out, "foobar\n"); assert_eq!(out, "foobar\n");
} }
#[test]
#[cfg_attr(any(target_os = "vxworks"), ignore)]
fn child_stdout_read_buf() {
let mut cmd = if cfg!(target_os = "windows") {
let mut cmd = Command::new("cmd");
cmd.arg("/C").arg("echo abc");
cmd
} else {
let mut cmd = shell_cmd();
cmd.arg("-c").arg("echo abc");
cmd
};
cmd.stdin(Stdio::null());
cmd.stdout(Stdio::piped());
let child = cmd.spawn().unwrap();
let mut stdout = child.stdout.unwrap();
let mut buf: [MaybeUninit<u8>; 128] = MaybeUninit::uninit_array();
let mut buf = BorrowedBuf::from(buf.as_mut_slice());
stdout.read_buf(buf.unfilled()).unwrap();
// ChildStdout::read_buf should omit buffer initialization.
if cfg!(target_os = "windows") {
assert_eq!(buf.filled(), b"abc\r\n");
assert_eq!(buf.init_len(), 5);
} else {
assert_eq!(buf.filled(), b"abc\n");
assert_eq!(buf.init_len(), 4);
};
}
#[test] #[test]
#[cfg_attr(any(target_os = "vxworks"), ignore)] #[cfg_attr(any(target_os = "vxworks"), ignore)]
fn test_process_status() { fn test_process_status() {

View file

@ -1,7 +1,7 @@
use fortanix_sgx_abi::Fd; use fortanix_sgx_abi::Fd;
use super::abi::usercalls; use super::abi::usercalls;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::mem; use crate::mem;
use crate::sys::{AsInner, FromInner, IntoInner}; use crate::sys::{AsInner, FromInner, IntoInner};
@ -30,6 +30,10 @@ impl FileDesc {
usercalls::read(self.fd, &mut [IoSliceMut::new(buf)]) usercalls::read(self.fd, &mut [IoSliceMut::new(buf)])
} }
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
crate::io::default_read_buf(|b| self.read(b), buf)
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
usercalls::read(self.fd, bufs) usercalls::read(self.fd, bufs)
} }

View file

@ -1,6 +1,6 @@
use crate::error; use crate::error;
use crate::fmt; use crate::fmt;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr, ToSocketAddrs}; use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr, ToSocketAddrs};
use crate::sync::Arc; use crate::sync::Arc;
use crate::sys::fd::FileDesc; use crate::sys::fd::FileDesc;
@ -144,6 +144,10 @@ impl TcpStream {
self.inner.inner.read(buf) self.inner.inner.read(buf)
} }
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.inner.inner.read_buf(buf)
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.inner.read_vectored(bufs) self.inner.inner.read_vectored(bufs)
} }

View file

@ -469,6 +469,15 @@ impl<'a> Read for &'a FileDesc {
fn read_buf(&mut self, cursor: BorrowedCursor<'_>) -> io::Result<()> { fn read_buf(&mut self, cursor: BorrowedCursor<'_>) -> io::Result<()> {
(**self).read_buf(cursor) (**self).read_buf(cursor)
} }
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
(**self).read_vectored(bufs)
}
#[inline]
fn is_read_vectored(&self) -> bool {
(**self).is_read_vectored()
}
} }
impl AsInner<OwnedFd> for FileDesc { impl AsInner<OwnedFd> for FileDesc {

View file

@ -1,6 +1,6 @@
use crate::cmp; use crate::cmp;
use crate::ffi::CStr; use crate::ffi::CStr;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedBuf, BorrowedCursor, IoSlice, IoSliceMut};
use crate::mem; use crate::mem;
use crate::net::{Shutdown, SocketAddr}; use crate::net::{Shutdown, SocketAddr};
use crate::os::unix::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd}; use crate::os::unix::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd};
@ -242,19 +242,35 @@ impl Socket {
self.0.duplicate().map(Socket) self.0.duplicate().map(Socket)
} }
fn recv_with_flags(&self, buf: &mut [u8], flags: c_int) -> io::Result<usize> { fn recv_with_flags(&self, mut buf: BorrowedCursor<'_>, flags: c_int) -> io::Result<()> {
let ret = cvt(unsafe { let ret = cvt(unsafe {
libc::recv(self.as_raw_fd(), buf.as_mut_ptr() as *mut c_void, buf.len(), flags) libc::recv(
self.as_raw_fd(),
buf.as_mut().as_mut_ptr() as *mut c_void,
buf.capacity(),
flags,
)
})?; })?;
Ok(ret as usize) unsafe {
buf.advance(ret as usize);
}
Ok(())
} }
pub fn read(&self, buf: &mut [u8]) -> io::Result<usize> { pub fn read(&self, buf: &mut [u8]) -> io::Result<usize> {
self.recv_with_flags(buf, 0) let mut buf = BorrowedBuf::from(buf);
self.recv_with_flags(buf.unfilled(), 0)?;
Ok(buf.len())
} }
pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> { pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> {
self.recv_with_flags(buf, MSG_PEEK) let mut buf = BorrowedBuf::from(buf);
self.recv_with_flags(buf.unfilled(), MSG_PEEK)?;
Ok(buf.len())
}
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.recv_with_flags(buf, 0)
} }
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {

View file

@ -1,4 +1,4 @@
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::mem; use crate::mem;
use crate::os::unix::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd}; use crate::os::unix::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd};
use crate::sys::fd::FileDesc; use crate::sys::fd::FileDesc;
@ -49,6 +49,10 @@ impl AnonPipe {
self.0.read(buf) self.0.read(buf)
} }
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.0.read_buf(buf)
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.0.read_vectored(bufs) self.0.read_vectored(bufs)
} }

View file

@ -1,4 +1,4 @@
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::mem::ManuallyDrop; use crate::mem::ManuallyDrop;
use crate::os::unix::io::FromRawFd; use crate::os::unix::io::FromRawFd;
use crate::sys::fd::FileDesc; use crate::sys::fd::FileDesc;
@ -18,6 +18,10 @@ impl io::Read for Stdin {
unsafe { ManuallyDrop::new(FileDesc::from_raw_fd(libc::STDIN_FILENO)).read(buf) } unsafe { ManuallyDrop::new(FileDesc::from_raw_fd(libc::STDIN_FILENO)).read(buf) }
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
unsafe { ManuallyDrop::new(FileDesc::from_raw_fd(libc::STDIN_FILENO)).read_buf(buf) }
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
unsafe { ManuallyDrop::new(FileDesc::from_raw_fd(libc::STDIN_FILENO)).read_vectored(bufs) } unsafe { ManuallyDrop::new(FileDesc::from_raw_fd(libc::STDIN_FILENO)).read_vectored(bufs) }
} }

View file

@ -1,5 +1,5 @@
use crate::fmt; use crate::fmt;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr}; use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr};
use crate::sys::unsupported; use crate::sys::unsupported;
use crate::time::Duration; use crate::time::Duration;
@ -39,6 +39,10 @@ impl TcpStream {
self.0 self.0
} }
pub fn read_buf(&self, _buf: BorrowedCursor<'_>) -> io::Result<()> {
self.0
}
pub fn read_vectored(&self, _: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, _: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.0 self.0
} }

View file

@ -1,4 +1,4 @@
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
pub struct AnonPipe(!); pub struct AnonPipe(!);
@ -7,6 +7,10 @@ impl AnonPipe {
self.0 self.0
} }
pub fn read_buf(&self, _buf: BorrowedCursor<'_>) -> io::Result<()> {
self.0
}
pub fn read_vectored(&self, _bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, _bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.0 self.0
} }

View file

@ -2,7 +2,7 @@
#![allow(dead_code)] #![allow(dead_code)]
use super::err2io; use super::err2io;
use crate::io::{self, IoSlice, IoSliceMut, SeekFrom}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut, SeekFrom};
use crate::mem; use crate::mem;
use crate::net::Shutdown; use crate::net::Shutdown;
use crate::os::wasi::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, OwnedFd, RawFd}; use crate::os::wasi::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, OwnedFd, RawFd};
@ -46,6 +46,22 @@ impl WasiFd {
unsafe { wasi::fd_read(self.as_raw_fd() as wasi::Fd, iovec(bufs)).map_err(err2io) } unsafe { wasi::fd_read(self.as_raw_fd() as wasi::Fd, iovec(bufs)).map_err(err2io) }
} }
pub fn read_buf(&self, mut buf: BorrowedCursor<'_>) -> io::Result<()> {
unsafe {
let bufs = [wasi::Iovec {
buf: buf.as_mut().as_mut_ptr() as *mut u8,
buf_len: buf.capacity(),
}];
match wasi::fd_read(self.as_raw_fd() as wasi::Fd, &bufs) {
Ok(n) => {
buf.advance(n);
Ok(())
}
Err(e) => Err(err2io(e)),
}
}
}
pub fn write(&self, bufs: &[IoSlice<'_>]) -> io::Result<usize> { pub fn write(&self, bufs: &[IoSlice<'_>]) -> io::Result<usize> {
unsafe { wasi::fd_write(self.as_raw_fd() as wasi::Fd, ciovec(bufs)).map_err(err2io) } unsafe { wasi::fd_write(self.as_raw_fd() as wasi::Fd, ciovec(bufs)).map_err(err2io) }
} }

View file

@ -441,7 +441,7 @@ impl File {
} }
pub fn read_buf(&self, cursor: BorrowedCursor<'_>) -> io::Result<()> { pub fn read_buf(&self, cursor: BorrowedCursor<'_>) -> io::Result<()> {
crate::io::default_read_buf(|buf| self.read(buf), cursor) self.fd.read_buf(cursor)
} }
pub fn write(&self, buf: &[u8]) -> io::Result<usize> { pub fn write(&self, buf: &[u8]) -> io::Result<usize> {

View file

@ -3,7 +3,7 @@
use super::err2io; use super::err2io;
use super::fd::WasiFd; use super::fd::WasiFd;
use crate::fmt; use crate::fmt;
use crate::io::{self, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut};
use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr}; use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr};
use crate::os::wasi::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd}; use crate::os::wasi::io::{AsFd, AsRawFd, BorrowedFd, FromRawFd, IntoRawFd, RawFd};
use crate::sys::unsupported; use crate::sys::unsupported;
@ -91,6 +91,10 @@ impl TcpStream {
self.read_vectored(&mut [IoSliceMut::new(buf)]) self.read_vectored(&mut [IoSliceMut::new(buf)])
} }
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.socket().as_inner().read_buf(buf)
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.socket().as_inner().read(bufs) self.socket().as_inner().read(bufs)
} }

View file

@ -327,7 +327,16 @@ impl<'a> Read for &'a Handle {
(**self).read(buf) (**self).read(buf)
} }
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> io::Result<()> {
(**self).read_buf(buf)
}
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
(**self).read_vectored(bufs) (**self).read_vectored(bufs)
} }
#[inline]
fn is_read_vectored(&self) -> bool {
(**self).is_read_vectored()
}
} }

View file

@ -1,7 +1,7 @@
#![unstable(issue = "none", feature = "windows_net")] #![unstable(issue = "none", feature = "windows_net")]
use crate::cmp; use crate::cmp;
use crate::io::{self, IoSlice, IoSliceMut, Read}; use crate::io::{self, BorrowedBuf, BorrowedCursor, IoSlice, IoSliceMut, Read};
use crate::mem; use crate::mem;
use crate::net::{Shutdown, SocketAddr}; use crate::net::{Shutdown, SocketAddr};
use crate::os::windows::io::{ use crate::os::windows::io::{
@ -214,28 +214,38 @@ impl Socket {
Ok(Self(self.0.try_clone()?)) Ok(Self(self.0.try_clone()?))
} }
fn recv_with_flags(&self, buf: &mut [u8], flags: c_int) -> io::Result<usize> { fn recv_with_flags(&self, mut buf: BorrowedCursor<'_>, flags: c_int) -> io::Result<()> {
// On unix when a socket is shut down all further reads return 0, so we // On unix when a socket is shut down all further reads return 0, so we
// do the same on windows to map a shut down socket to returning EOF. // do the same on windows to map a shut down socket to returning EOF.
let length = cmp::min(buf.len(), i32::MAX as usize) as i32; let length = cmp::min(buf.capacity(), i32::MAX as usize) as i32;
let result = let result = unsafe {
unsafe { c::recv(self.as_raw_socket(), buf.as_mut_ptr() as *mut _, length, flags) }; c::recv(self.as_raw_socket(), buf.as_mut().as_mut_ptr() as *mut _, length, flags)
};
match result { match result {
c::SOCKET_ERROR => { c::SOCKET_ERROR => {
let error = unsafe { c::WSAGetLastError() }; let error = unsafe { c::WSAGetLastError() };
if error == c::WSAESHUTDOWN { if error == c::WSAESHUTDOWN {
Ok(0) Ok(())
} else { } else {
Err(io::Error::from_raw_os_error(error)) Err(io::Error::from_raw_os_error(error))
} }
} }
_ => Ok(result as usize), _ => {
unsafe { buf.advance(result as usize) };
Ok(())
}
} }
} }
pub fn read(&self, buf: &mut [u8]) -> io::Result<usize> { pub fn read(&self, buf: &mut [u8]) -> io::Result<usize> {
let mut buf = BorrowedBuf::from(buf);
self.recv_with_flags(buf.unfilled(), 0)?;
Ok(buf.len())
}
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.recv_with_flags(buf, 0) self.recv_with_flags(buf, 0)
} }
@ -277,7 +287,9 @@ impl Socket {
} }
pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> { pub fn peek(&self, buf: &mut [u8]) -> io::Result<usize> {
self.recv_with_flags(buf, c::MSG_PEEK) let mut buf = BorrowedBuf::from(buf);
self.recv_with_flags(buf.unfilled(), c::MSG_PEEK)?;
Ok(buf.len())
} }
fn recv_from_with_flags( fn recv_from_with_flags(

View file

@ -1,7 +1,7 @@
use crate::os::windows::prelude::*; use crate::os::windows::prelude::*;
use crate::ffi::OsStr; use crate::ffi::OsStr;
use crate::io::{self, IoSlice, IoSliceMut, Read}; use crate::io::{self, BorrowedCursor, IoSlice, IoSliceMut, Read};
use crate::mem; use crate::mem;
use crate::path::Path; use crate::path::Path;
use crate::ptr; use crate::ptr;
@ -252,6 +252,28 @@ impl AnonPipe {
} }
} }
pub fn read_buf(&self, mut buf: BorrowedCursor<'_>) -> io::Result<()> {
let result = unsafe {
let len = crate::cmp::min(buf.capacity(), c::DWORD::MAX as usize) as c::DWORD;
self.alertable_io_internal(c::ReadFileEx, buf.as_mut().as_mut_ptr() as _, len)
};
match result {
// The special treatment of BrokenPipe is to deal with Windows
// pipe semantics, which yields this error when *reading* from
// a pipe after the other end has closed; we interpret that as
// EOF on the pipe.
Err(ref e) if e.kind() == io::ErrorKind::BrokenPipe => Ok(()),
Err(e) => Err(e),
Ok(n) => {
unsafe {
buf.advance(n);
}
Ok(())
}
}
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.read_vectored(bufs) self.inner.read_vectored(bufs)
} }

View file

@ -4,7 +4,7 @@ mod tests;
use crate::cmp; use crate::cmp;
use crate::convert::{TryFrom, TryInto}; use crate::convert::{TryFrom, TryInto};
use crate::fmt; use crate::fmt;
use crate::io::{self, ErrorKind, IoSlice, IoSliceMut}; use crate::io::{self, BorrowedCursor, ErrorKind, IoSlice, IoSliceMut};
use crate::mem; use crate::mem;
use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr}; use crate::net::{Ipv4Addr, Ipv6Addr, Shutdown, SocketAddr};
use crate::ptr; use crate::ptr;
@ -272,6 +272,10 @@ impl TcpStream {
self.inner.read(buf) self.inner.read(buf)
} }
pub fn read_buf(&self, buf: BorrowedCursor<'_>) -> io::Result<()> {
self.inner.read_buf(buf)
}
pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> { pub fn read_vectored(&self, bufs: &mut [IoSliceMut<'_>]) -> io::Result<usize> {
self.inner.read_vectored(bufs) self.inner.read_vectored(bufs)
} }

View file

@ -191,6 +191,7 @@ pub struct Config {
pub dist_sign_folder: Option<PathBuf>, pub dist_sign_folder: Option<PathBuf>,
pub dist_upload_addr: Option<String>, pub dist_upload_addr: Option<String>,
pub dist_compression_formats: Option<Vec<String>>, pub dist_compression_formats: Option<Vec<String>>,
pub dist_compression_profile: String,
pub dist_include_mingw_linker: bool, pub dist_include_mingw_linker: bool,
// libstd features // libstd features
@ -703,6 +704,7 @@ define_config! {
src_tarball: Option<bool> = "src-tarball", src_tarball: Option<bool> = "src-tarball",
missing_tools: Option<bool> = "missing-tools", missing_tools: Option<bool> = "missing-tools",
compression_formats: Option<Vec<String>> = "compression-formats", compression_formats: Option<Vec<String>> = "compression-formats",
compression_profile: Option<String> = "compression-profile",
include_mingw_linker: Option<bool> = "include-mingw-linker", include_mingw_linker: Option<bool> = "include-mingw-linker",
} }
} }
@ -821,6 +823,7 @@ impl Config {
config.deny_warnings = true; config.deny_warnings = true;
config.bindir = "bin".into(); config.bindir = "bin".into();
config.dist_include_mingw_linker = true; config.dist_include_mingw_linker = true;
config.dist_compression_profile = "fast".into();
// set by build.rs // set by build.rs
config.build = TargetSelection::from_user(&env!("BUILD_TRIPLE")); config.build = TargetSelection::from_user(&env!("BUILD_TRIPLE"));
@ -1308,6 +1311,7 @@ impl Config {
config.dist_sign_folder = t.sign_folder.map(PathBuf::from); config.dist_sign_folder = t.sign_folder.map(PathBuf::from);
config.dist_upload_addr = t.upload_addr; config.dist_upload_addr = t.upload_addr;
config.dist_compression_formats = t.compression_formats; config.dist_compression_formats = t.compression_formats;
set(&mut config.dist_compression_profile, t.compression_profile);
set(&mut config.rust_dist_src, t.src_tarball); set(&mut config.rust_dist_src, t.src_tarball);
set(&mut config.missing_tools, t.missing_tools); set(&mut config.missing_tools, t.missing_tools);
set(&mut config.dist_include_mingw_linker, t.include_mingw_linker) set(&mut config.dist_include_mingw_linker, t.include_mingw_linker)

View file

@ -11,3 +11,7 @@ extended = true
[llvm] [llvm]
# Most users installing from source want to build all parts of the project from source, not just rustc itself. # Most users installing from source want to build all parts of the project from source, not just rustc itself.
download-ci-llvm = false download-ci-llvm = false
[dist]
# Use better compression when preparing tarballs.
compression-profile = "balanced"

View file

@ -318,6 +318,7 @@ impl<'a> Tarball<'a> {
assert!(!formats.is_empty(), "dist.compression-formats can't be empty"); assert!(!formats.is_empty(), "dist.compression-formats can't be empty");
cmd.arg("--compression-formats").arg(formats.join(",")); cmd.arg("--compression-formats").arg(formats.join(","));
} }
cmd.args(&["--compression-profile", &self.builder.config.dist_compression_profile]);
self.builder.run(&mut cmd); self.builder.run(&mut cmd);
// Ensure there are no symbolic links in the tarball. In particular, // Ensure there are no symbolic links in the tarball. In particular,

View file

@ -58,6 +58,7 @@ RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --disable-manage-submodules"
RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --enable-locked-deps" RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --enable-locked-deps"
RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --enable-cargo-native-static" RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --enable-cargo-native-static"
RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --set rust.codegen-units-std=1" RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --set rust.codegen-units-std=1"
RUST_CONFIGURE_ARGS="$RUST_CONFIGURE_ARGS --set dist.compression-profile=best"
# Only produce xz tarballs on CI. gz tarballs will be generated by the release # Only produce xz tarballs on CI. gz tarballs will be generated by the release
# process by recompressing the existing xz ones. This decreases the storage # process by recompressing the existing xz ones. This decreases the storage

View file

@ -1,7 +1,7 @@
use super::Scripter; use super::Scripter;
use super::Tarballer; use super::Tarballer;
use crate::{ use crate::{
compression::{CompressionFormat, CompressionFormats}, compression::{CompressionFormat, CompressionFormats, CompressionProfile},
util::*, util::*,
}; };
use anyhow::{bail, Context, Result}; use anyhow::{bail, Context, Result};
@ -48,6 +48,10 @@ actor! {
#[clap(value_name = "DIR")] #[clap(value_name = "DIR")]
output_dir: String = "./dist", output_dir: String = "./dist",
/// The profile used to compress the tarball.
#[clap(value_name = "FORMAT", default_value_t)]
compression_profile: CompressionProfile,
/// The formats used to compress the tarball /// The formats used to compress the tarball
#[clap(value_name = "FORMAT", default_value_t)] #[clap(value_name = "FORMAT", default_value_t)]
compression_formats: CompressionFormats, compression_formats: CompressionFormats,
@ -153,6 +157,7 @@ impl Combiner {
.work_dir(self.work_dir) .work_dir(self.work_dir)
.input(self.package_name) .input(self.package_name)
.output(path_to_str(&output)?.into()) .output(path_to_str(&output)?.into())
.compression_profile(self.compression_profile)
.compression_formats(self.compression_formats.clone());
tarballer.run()?;

View file

@ -4,6 +4,37 @@ use rayon::prelude::*;
use std::{convert::TryFrom, fmt, io::Read, io::Write, path::Path, str::FromStr};
use xz2::{read::XzDecoder, write::XzEncoder};
#[derive(Default, Debug, Copy, Clone)]
pub enum CompressionProfile {
Fast,
#[default]
Balanced,
Best,
}
impl FromStr for CompressionProfile {
type Err = Error;
fn from_str(input: &str) -> Result<Self, Error> {
Ok(match input {
"fast" => Self::Fast,
"balanced" => Self::Balanced,
"best" => Self::Best,
other => anyhow::bail!("invalid compression profile: {other}"),
})
}
}
impl fmt::Display for CompressionProfile {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
CompressionProfile::Fast => f.write_str("fast"),
CompressionProfile::Balanced => f.write_str("balanced"),
CompressionProfile::Best => f.write_str("best"),
}
}
}
#[derive(Debug, Copy, Clone)]
pub enum CompressionFormat {
Gz,
@ -26,7 +57,11 @@ impl CompressionFormat {
}
}
pub(crate) fn encode(&self, path: impl AsRef<Path>) -> Result<Box<dyn Encoder>, Error> {
pub(crate) fn encode(
&self,
path: impl AsRef<Path>,
profile: CompressionProfile,
) -> Result<Box<dyn Encoder>, Error> {
let mut os = path.as_ref().as_os_str().to_os_string();
os.push(format!(".{}", self.extension()));
let path = Path::new(&os);
@ -37,8 +72,23 @@ impl CompressionFormat {
let file = crate::util::create_new_file(path)?;
Ok(match self {
CompressionFormat::Gz => Box::new(GzEncoder::new(file, flate2::Compression::best())),
CompressionFormat::Gz => Box::new(GzEncoder::new(
file,
match profile {
CompressionProfile::Fast => flate2::Compression::fast(),
CompressionProfile::Balanced => flate2::Compression::new(6),
CompressionProfile::Best => flate2::Compression::best(),
},
)),
CompressionFormat::Xz => {
let encoder = match profile {
CompressionProfile::Fast => {
xz2::stream::MtStreamBuilder::new().threads(6).preset(1).encoder().unwrap()
}
CompressionProfile::Balanced => {
xz2::stream::MtStreamBuilder::new().threads(6).preset(6).encoder().unwrap()
}
CompressionProfile::Best => {
let mut filters = xz2::stream::Filters::new();
// the preset is overridden by the other options so it doesn't matter
let mut lzma_ops = xz2::stream::LzmaOptions::new_preset(9).unwrap();
@ -75,11 +125,11 @@ impl CompressionFormat {
} else {
builder.threads(6);
}
builder.encoder().unwrap()
}
};
let compressor = XzEncoder::new_stream(
let compressor = XzEncoder::new_stream(std::io::BufWriter::new(file), encoder);
std::io::BufWriter::new(file),
builder.encoder().unwrap(),
);
Box::new(compressor)
}
})
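
As an illustrative aside (not part of this commit): the mapping from the new compression profiles to concrete encoder settings can be sketched in a small, self-contained program. The sketch below mirrors only the gzip side of the encode() change above, assumes the flate2 crate is available, and uses a hypothetical helper gz_level_for that does not exist in the installer.

use flate2::{write::GzEncoder, Compression};
use std::io::Write;

// Map a profile name to a flate2 compression level, mirroring the
// fast/balanced/best handling introduced above (illustrative only).
fn gz_level_for(profile: &str) -> Result<Compression, String> {
    Ok(match profile {
        "fast" => Compression::fast(),
        "balanced" => Compression::new(6),
        "best" => Compression::best(),
        other => return Err(format!("invalid compression profile: {other}")),
    })
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Compress a small payload with the "balanced" profile and report the size.
    let level = gz_level_for("balanced")?;
    let mut encoder = GzEncoder::new(Vec::new(), level);
    encoder.write_all(b"example payload")?;
    let compressed = encoder.finish()?;
    println!("gzip output: {} bytes", compressed.len());
    Ok(())
}

The xz side is analogous: fast and balanced use MtStreamBuilder presets 1 and 6, while best keeps the hand-tuned filter chain shown in the diff.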

View file

@ -1,6 +1,6 @@
use super::Scripter;
use super::Tarballer;
use crate::compression::CompressionFormats;
use crate::compression::{CompressionFormats, CompressionProfile};
use crate::util::*;
use anyhow::{bail, format_err, Context, Result};
use std::collections::BTreeSet;
@ -54,6 +54,10 @@ actor! {
#[clap(value_name = "DIR")]
output_dir: String = "./dist",
/// The profile used to compress the tarball.
#[clap(value_name = "FORMAT", default_value_t)]
compression_profile: CompressionProfile,
/// The formats used to compress the tarball
#[clap(value_name = "FORMAT", default_value_t)]
compression_formats: CompressionFormats,
@ -113,6 +117,7 @@ impl Generator {
.work_dir(self.work_dir)
.input(self.package_name)
.output(path_to_str(&output)?.into())
.compression_profile(self.compression_profile)
.compression_formats(self.compression_formats.clone());
tarballer.run()?;

View file

@ -6,7 +6,7 @@ use tar::{Builder, Header};
use walkdir::WalkDir;
use crate::{
compression::{CombinedEncoder, CompressionFormats},
compression::{CombinedEncoder, CompressionFormats, CompressionProfile},
util::*,
};
@ -25,6 +25,10 @@ actor! {
#[clap(value_name = "DIR")]
work_dir: String = "./workdir",
/// The profile used to compress the tarball.
#[clap(value_name = "FORMAT", default_value_t)]
compression_profile: CompressionProfile,
/// The formats used to compress the tarball.
#[clap(value_name = "FORMAT", default_value_t)]
compression_formats: CompressionFormats,
@ -38,7 +42,7 @@ impl Tarballer {
let encoder = CombinedEncoder::new(
self.compression_formats
.iter()
.map(|f| f.encode(&tarball_name))
.map(|f| f.encode(&tarball_name, self.compression_profile))
.collect::<Result<Vec<_>>>()?,
);

View file

@ -0,0 +1,16 @@
// MIR for `adt` after built
fn adt() -> Onion {
let mut _0: Onion; // return place in scope 0 at $DIR/aggregate_exprs.rs:+0:13: +0:18
let mut _1: i32; // in scope 0 at $SRC_DIR/core/src/intrinsics/mir.rs:LL:COL
let mut _2: Foo; // in scope 0 at $SRC_DIR/core/src/intrinsics/mir.rs:LL:COL
let mut _3: Bar; // in scope 0 at $SRC_DIR/core/src/intrinsics/mir.rs:LL:COL
bb0: {
_1 = const 1_i32; // scope 0 at $DIR/aggregate_exprs.rs:+6:13: +6:20
_2 = Foo { a: const 1_i32, b: const 2_i32 }; // scope 0 at $DIR/aggregate_exprs.rs:+7:13: +10:14
_3 = Bar::Foo(move _2, _1); // scope 0 at $DIR/aggregate_exprs.rs:+11:13: +11:39
_0 = Onion { neon: ((_3 as variant#0).1: i32) }; // scope 0 at $DIR/aggregate_exprs.rs:+12:13: +12:58
return; // scope 0 at $DIR/aggregate_exprs.rs:+13:13: +13:21
}
}

View file

@ -0,0 +1,15 @@
// MIR for `array` after built
fn array() -> [i32; 2] {
let mut _0: [i32; 2]; // return place in scope 0 at $DIR/aggregate_exprs.rs:+0:15: +0:23
let mut _1: [i32; 2]; // in scope 0 at $SRC_DIR/core/src/intrinsics/mir.rs:LL:COL
let mut _2: i32; // in scope 0 at $SRC_DIR/core/src/intrinsics/mir.rs:LL:COL
bb0: {
_1 = [const 42_i32, const 43_i32]; // scope 0 at $DIR/aggregate_exprs.rs:+5:13: +5:25
_2 = const 1_i32; // scope 0 at $DIR/aggregate_exprs.rs:+6:13: +6:20
_1 = [_2, const 2_i32]; // scope 0 at $DIR/aggregate_exprs.rs:+7:13: +7:25
_0 = move _1; // scope 0 at $DIR/aggregate_exprs.rs:+8:13: +8:26
return; // scope 0 at $DIR/aggregate_exprs.rs:+9:13: +9:21
}
}

View file

@ -0,0 +1,71 @@
#![feature(custom_mir, core_intrinsics)]
extern crate core;
use core::intrinsics::mir::*;
// EMIT_MIR aggregate_exprs.tuple.built.after.mir
#[custom_mir(dialect = "built")]
fn tuple() -> (i32, bool) {
mir!(
{
RET = (1, true);
Return()
}
)
}
// EMIT_MIR aggregate_exprs.array.built.after.mir
#[custom_mir(dialect = "built")]
fn array() -> [i32; 2] {
mir!(
let x: [i32; 2];
let one: i32;
{
x = [42, 43];
one = 1;
x = [one, 2];
RET = Move(x);
Return()
}
)
}
struct Foo {
a: i32,
b: i32,
}
enum Bar {
Foo(Foo, i32),
}
union Onion {
neon: i32,
noun: f32,
}
// EMIT_MIR aggregate_exprs.adt.built.after.mir
#[custom_mir(dialect = "built")]
fn adt() -> Onion {
mir!(
let one: i32;
let x: Foo;
let y: Bar;
{
one = 1;
x = Foo {
a: 1,
b: 2,
};
y = Bar::Foo(Move(x), one);
RET = Onion { neon: Field(Variant(y, 0), 1) };
Return()
}
)
}
fn main() {
assert_eq!(tuple(), (1, true));
assert_eq!(array(), [1, 2]);
assert_eq!(unsafe { adt().neon }, 1);
}
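
For orientation only (not part of this commit): the three custom-MIR functions in this test are behaviorally equivalent to the plain Rust below, which makes it easier to see what the emitted aggregate MIR corresponds to. This is a hedged paraphrase of the test above, not code from the repository.

struct Foo {
    a: i32,
    b: i32,
}

enum Bar {
    Foo(Foo, i32),
}

union Onion {
    neon: i32,
    noun: f32,
}

// Plain-Rust counterparts of the mir! bodies above.
fn tuple() -> (i32, bool) {
    (1, true)
}

fn array() -> [i32; 2] {
    let mut x = [42, 43];
    let one = 1;
    x = [one, 2];
    x
}

fn adt() -> Onion {
    let one = 1;
    let x = Foo { a: 1, b: 2 };
    let y = Bar::Foo(x, one);
    // Read field 1 of variant 0, as Field(Variant(y, 0), 1) does above.
    match y {
        Bar::Foo(_, field) => Onion { neon: field },
    }
}

fn main() {
    assert_eq!(tuple(), (1, true));
    assert_eq!(array(), [1, 2]);
    assert_eq!(unsafe { adt().neon }, 1);
}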

View file

@ -0,0 +1,10 @@
// MIR for `tuple` after built
fn tuple() -> (i32, bool) {
let mut _0: (i32, bool); // return place in scope 0 at $DIR/aggregate_exprs.rs:+0:15: +0:26
bb0: {
_0 = (const 1_i32, const true); // scope 0 at $DIR/aggregate_exprs.rs:+3:13: +3:28
return; // scope 0 at $DIR/aggregate_exprs.rs:+4:13: +4:21
}
}

View file

@ -13,4 +13,10 @@ trait Trait {
fn method(&self) -> impl Trait<Type = impl Sized + '_>;
}
trait Trait2 {
type Type;
fn method(&self) -> impl Trait2<Type = impl Trait2<Type = impl Sized + '_> + '_>;
}
fn main() {}

View file

@ -0,0 +1,24 @@
#![feature(non_lifetime_binders)]
//~^ WARN the feature `non_lifetime_binders` is incomplete
trait Foo: for<T> Bar<T> {}
trait Bar<T: ?Sized> {
fn method(&self) {}
}
fn needs_bar(x: &(impl Bar<i32> + ?Sized)) {
x.method();
}
impl Foo for () {}
impl<T: ?Sized> Bar<T> for () {}
fn main() {
let x: &dyn Foo = &();
//~^ ERROR the trait `Foo` cannot be made into an object
//~| ERROR the trait `Foo` cannot be made into an object
needs_bar(x);
//~^ ERROR the trait `Foo` cannot be made into an object
}

View file

@ -0,0 +1,56 @@
warning: the feature `non_lifetime_binders` is incomplete and may not be safe to use and/or cause compiler crashes
--> $DIR/supertrait-object-safety.rs:1:12
|
LL | #![feature(non_lifetime_binders)]
| ^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #108185 <https://github.com/rust-lang/rust/issues/108185> for more information
= note: `#[warn(incomplete_features)]` on by default
error[E0038]: the trait `Foo` cannot be made into an object
--> $DIR/supertrait-object-safety.rs:19:23
|
LL | let x: &dyn Foo = &();
| ^^^ `Foo` cannot be made into an object
|
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> $DIR/supertrait-object-safety.rs:4:12
|
LL | trait Foo: for<T> Bar<T> {}
| --- ^^^^^^^^^^^^^ ...because where clause cannot reference non-lifetime `for<...>` variables
| |
| this trait cannot be made into an object...
= note: required for `&()` to implement `CoerceUnsized<&dyn Foo>`
= note: required by cast to type `&dyn Foo`
error[E0038]: the trait `Foo` cannot be made into an object
--> $DIR/supertrait-object-safety.rs:19:12
|
LL | let x: &dyn Foo = &();
| ^^^^^^^^ `Foo` cannot be made into an object
|
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> $DIR/supertrait-object-safety.rs:4:12
|
LL | trait Foo: for<T> Bar<T> {}
| --- ^^^^^^^^^^^^^ ...because where clause cannot reference non-lifetime `for<...>` variables
| |
| this trait cannot be made into an object...
error[E0038]: the trait `Foo` cannot be made into an object
--> $DIR/supertrait-object-safety.rs:22:5
|
LL | needs_bar(x);
| ^^^^^^^^^ `Foo` cannot be made into an object
|
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> $DIR/supertrait-object-safety.rs:4:12
|
LL | trait Foo: for<T> Bar<T> {}
| --- ^^^^^^^^^^^^^ ...because where clause cannot reference non-lifetime `for<...>` variables
| |
| this trait cannot be made into an object...
error: aborting due to 3 previous errors; 1 warning emitted
For more information about this error, try `rustc --explain E0038`.