rust/compiler/rustc_codegen_ssa/src/mir/mod.rs

use crate::base;
use crate::traits::*;
use rustc_middle::mir;
use rustc_middle::mir::interpret::ErrorHandled;
use rustc_middle::ty::layout::{FnAbiOf, HasTyCtxt, TyAndLayout};
use rustc_middle::ty::{self, Instance, Ty, TyCtxt, TypeFoldable, TypeVisitableExt};
use rustc_target::abi::call::{FnAbi, PassMode};
use std::iter;
use rustc_index::bit_set::BitSet;
use rustc_index::IndexVec;
use self::debuginfo::{FunctionDebugContext, PerLocalVarDebugInfo};
use self::place::PlaceRef;
use rustc_middle::mir::traversal;
use self::operand::{OperandRef, OperandValue};
// Used for tracking the state of generated basic blocks.
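// Morally this is an `Option<T>` with a third state: `Skip` records that a
// backend block must never be created for this MIR block, so lazy lookups can
// tell "not yet created" apart from "intentionally absent".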
enum CachedLlbb<T> {
/// Nothing created yet.
None,
/// Has been created.
Some(T),
/// Nothing created yet, and nothing should be.
Skip,
}
/// Master context for codegenning from MIR.
pub struct FunctionCx<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> {
instance: Instance<'tcx>,
mir: &'tcx mir::Body<'tcx>,
debug_context: Option<FunctionDebugContext<Bx::DIScope, Bx::DILocation>>,
llfn: Bx::Function,
cx: &'a Bx::CodegenCx,
fn_abi: &'tcx FnAbi<'tcx, Ty<'tcx>>,
/// When unwinding is initiated, we have to store this personality
/// value somewhere so that we can load it and re-use it in the
/// resume instruction. The personality is (afaik) some kind of
/// value used for C++ unwinding, which must filter by type: we
/// don't really care about it very much. Anyway, this value
/// contains an alloca into which the personality is stored and
/// then later loaded when generating the DIVERGE_BLOCK.
personality_slot: Option<PlaceRef<'tcx, Bx::Value>>,
/// A backend `BasicBlock` for each MIR `BasicBlock`, created lazily
/// as-needed (e.g. RPO reaching it or another block branching to it).
// FIXME(eddyb) rename `llbbs` and other `ll`-prefixed things to use a
// more backend-agnostic prefix such as `cg` (i.e. this would be `cgbbs`).
cached_llbbs: IndexVec<mir::BasicBlock, CachedLlbb<Bx::BasicBlock>>,
/// The funclet status of each basic block
cleanup_kinds: Option<IndexVec<mir::BasicBlock, analyze::CleanupKind>>,
/// When targeting MSVC, this stores the cleanup info for each funclet BB.
/// This is initialized at the same time as the `landing_pads` entry for the
/// funclets' head block, i.e. when needed by an unwind / `cleanup_ret` edge.
funclets: IndexVec<mir::BasicBlock, Option<Bx::Funclet>>,
/// This stores the cached landing/cleanup pad block for a given BB.
// FIXME(eddyb) rename this to `eh_pads`.
landing_pads: IndexVec<mir::BasicBlock, Option<Bx::BasicBlock>>,
/// Cached unreachable block
unreachable_block: Option<Bx::BasicBlock>,
/// Cached terminate upon unwinding block
terminate_block: Option<Bx::BasicBlock>,
/// The location where each MIR arg/var/tmp/ret is stored. This is
/// usually a `PlaceRef` representing an alloca, but not always:
/// sometimes we can skip the alloca and just store the value
/// directly using an `OperandRef`, which makes for tighter LLVM
/// IR. The conditions for using an `OperandRef` are as follows:
///
/// - the type of the local must be judged "immediate" by `is_llvm_immediate`
/// - the operand must never be referenced indirectly
/// - we should not take its address using the `&` operator
/// - nor should it appear in a place path like `tmp.a`
/// - the operand must be defined by an rvalue that can generate immediate
/// values
///
/// Avoiding allocs can also be important for certain intrinsics,
/// notably `expect`.
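///
/// Purely illustrative sketch (not code from this crate) of the distinction,
/// assuming ordinary integer locals:
///
/// ```ignore (illustrative)
/// let x = a + b;  // immediate type, never borrowed: can stay an `OperandRef`
/// let y = a + b;
/// let r = &y;     // `y`'s address is taken, so it needs a `PlaceRef` alloca
/// ```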
locals: IndexVec<mir::Local, LocalRef<'tcx, Bx::Value>>,
/// All `VarDebugInfo` from the MIR body, partitioned by `Local`.
/// This is `None` if no variable debuginfo/names are needed.
per_local_var_debug_info:
Option<IndexVec<mir::Local, Vec<PerLocalVarDebugInfo<'tcx, Bx::DIVariable>>>>,
/// Caller location propagated if this function has `#[track_caller]`.
caller_location: Option<OperandRef<'tcx, Bx::Value>>,
}
impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
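/// Substitutes the generic parameters of `self.instance` into `value` and
/// normalizes the result under `ParamEnv::reveal_all()`, so the rest of
/// codegen only ever sees fully monomorphic, region-erased values.
///
/// Illustrative usage (mirroring how this file calls it):
///
/// ```ignore (illustrative)
/// let ty = fx.monomorphize(decl.ty); // e.g. `Vec<T>` becomes `Vec<u32>`
/// ```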
pub fn monomorphize<T>(&self, value: T) -> T
where
T: Copy + TypeFoldable<TyCtxt<'tcx>>,
{
debug!("monomorphize: self.instance={:?}", self.instance);
self.instance.subst_mir_and_normalize_erasing_regions(
self.cx.tcx(),
ty::ParamEnv::reveal_all(),
ty::EarlyBinder::bind(value),
)
}
}
enum LocalRef<'tcx, V> {
Place(PlaceRef<'tcx, V>),
/// `UnsizedPlace(p)`: `p` itself is a thin pointer (indirect place).
/// `*p` is the fat pointer that references the actual unsized place.
/// Every time it is initialized, we have to reallocate the place
/// and update the fat pointer. That's the reason why it is indirect.
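/// (Such locals arise e.g. from values under the unstable `unsized_fn_params`
/// and `unsized_locals` features, where a local of type `[u8]` or `dyn Trait`
/// has no statically known size.)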
UnsizedPlace(PlaceRef<'tcx, V>),
/// The backend [`OperandValue`] has already been generated.
Operand(OperandRef<'tcx, V>),
/// Will be a `Self::Operand` once we get to its definition.
PendingOperand,
}
impl<'tcx, V: CodegenObject> LocalRef<'tcx, V> {
fn new_operand(layout: TyAndLayout<'tcx>) -> LocalRef<'tcx, V> {
if layout.is_zst() {
// Zero-size temporaries aren't always initialized, which
// doesn't matter because they don't contain data, but
// we need something in the operand.
LocalRef::Operand(OperandRef::zero_sized(layout))
} else {
LocalRef::PendingOperand
}
}
}
///////////////////////////////////////////////////////////////////////////
#[instrument(level = "debug", skip(cx))]
pub fn codegen_mir<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
cx: &'a Bx::CodegenCx,
instance: Instance<'tcx>,
) {
assert!(!instance.substs.has_infer());
let llfn = cx.get_fn(instance);
let mir = cx.tcx().instance_mir(instance.def);
let fn_abi = cx.fn_abi_of_instance(instance, ty::List::empty());
debug!("fn_abi: {:?}", fn_abi);
let debug_context = cx.create_function_debug_context(instance, &fn_abi, llfn, &mir);
let start_llbb = Bx::append_block(cx, llfn, "start");
let mut start_bx = Bx::build(cx, start_llbb);
if mir.basic_blocks.iter().any(|bb| {
bb.is_cleanup || matches!(bb.terminator().unwind(), Some(mir::UnwindAction::Terminate))
}) {
start_bx.set_personality_fn(cx.eh_personality());
}
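// Funclet-style cleanup is only needed for MSVC-style SEH; with landing-pad
// unwinding the per-block cleanup-kind analysis is skipped entirely.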
let cleanup_kinds = base::wants_msvc_seh(cx.tcx().sess).then(|| analyze::cleanup_kinds(&mir));
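// Pre-seed the block cache: only `START_BLOCK` gets a backend block up front;
// every other MIR block is materialized lazily the first time something
// branches to it.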
let cached_llbbs: IndexVec<mir::BasicBlock, CachedLlbb<Bx::BasicBlock>> =
mir.basic_blocks
.indices()
.map(|bb| {
if bb == mir::START_BLOCK { CachedLlbb::Some(start_llbb) } else { CachedLlbb::None }
})
.collect();
let mut fx = FunctionCx {
instance,
mir,
llfn,
fn_abi,
cx,
personality_slot: None,
cached_llbbs,
unreachable_block: None,
terminate_block: None,
cleanup_kinds,
landing_pads: IndexVec::from_elem(None, &mir.basic_blocks),
funclets: IndexVec::from_fn_n(|_| None, mir.basic_blocks.len()),
locals: IndexVec::new(),
debug_context,
per_local_var_debug_info: None,
caller_location: None,
};
fx.per_local_var_debug_info = fx.compute_per_local_var_debug_info(&mut start_bx);
// Evaluate all required consts; codegen later assumes that CTFE will never fail.
let mut all_consts_ok = true;
for const_ in &mir.required_consts {
if let Err(err) = fx.eval_mir_constant(const_) {
all_consts_ok = false;
match err {
// errored or at least linted
ErrorHandled::Reported(_) => {}
ErrorHandled::TooGeneric => {
span_bug!(const_.span, "codegen encountered polymorphic constant: {:?}", err)
}
}
}
}
if !all_consts_ok {
// We leave the IR in some half-built state here, and rely on this code not even being
// submitted to LLVM once an error was raised.
return;
}
let memory_locals = analyze::non_ssa_locals(&fx);
// Allocate variable and temp allocas
fx.locals = {
let args = arg_local_refs(&mut start_bx, &mut fx, &memory_locals);
let mut allocate_local = |local| {
let decl = &mir.local_decls[local];
let layout = start_bx.layout_of(fx.monomorphize(decl.ty));
assert!(!layout.ty.has_erasable_regions());
if local == mir::RETURN_PLACE && fx.fn_abi.ret.is_indirect() {
debug!("alloc: {:?} (return place) -> place", local);
let llretptr = start_bx.get_param(0);
return LocalRef::Place(PlaceRef::new_sized(llretptr, layout));
}
if memory_locals.contains(local) {
debug!("alloc: {:?} -> place", local);
if layout.is_unsized() {
LocalRef::UnsizedPlace(PlaceRef::alloca_unsized_indirect(&mut start_bx, layout))
} else {
LocalRef::Place(PlaceRef::alloca(&mut start_bx, layout))
}
} else {
debug!("alloc: {:?} -> operand", local);
LocalRef::new_operand(layout)
}
};
let retptr = allocate_local(mir::RETURN_PLACE);
iter::once(retptr)
.chain(args.into_iter())
.chain(mir.vars_and_temps_iter().map(allocate_local))
.collect()
};
// Apply debuginfo to the newly allocated locals.
fx.debug_introduce_locals(&mut start_bx);
// The builders will be created separately for each basic block at `codegen_block`.
// So drop the builder of `start_llbb` to avoid having two at the same time.
drop(start_bx);
// Codegen the body of each block using reverse postorder
for (bb, _) in traversal::reverse_postorder(&mir) {
fx.codegen_block(bb);
}
}
/// Produces, for each MIR argument, a `LocalRef` holding the argument's
/// value: usually a `PlaceRef` backed by an alloca, but arguments that never
/// need to live in memory are returned as immediate `OperandRef`s instead.
fn arg_local_refs<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
bx: &mut Bx,
fx: &mut FunctionCx<'a, 'tcx, Bx>,
memory_locals: &BitSet<mir::Local>,
) -> Vec<LocalRef<'tcx, Bx::Value>> {
let mir = fx.mir;
let mut idx = 0;
let mut llarg_idx = fx.fn_abi.ret.is_indirect() as usize;
let mut num_untupled = None;
let args = mir
.args_iter()
.enumerate()
.map(|(arg_index, local)| {
let arg_decl = &mir.local_decls[local];
if Some(local) == mir.spread_arg {
// This argument (e.g., the last argument in the "rust-call" ABI)
// is a tuple that was spread at the ABI level and now we have
// to reconstruct it into a tuple local variable, from multiple
// individual LLVM function arguments.
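// Illustrative sketch (assuming a hand-written "rust-call" function such as
// `extern "rust-call" fn f(args: (A, B))`): the ABI passes `A` and `B` as two
// separate backend arguments, and the loop below stores them back into a
// single `(A, B)` tuple alloca for the MIR local `args`.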
let arg_ty = fx.monomorphize(arg_decl.ty);
let ty::Tuple(tupled_arg_tys) = arg_ty.kind() else {
bug!("spread argument isn't a tuple?!");
2016-09-27 02:03:35 +02:00
};
let layout = bx.layout_of(arg_ty);
// FIXME: support unsized params in "rust-call" ABI
if layout.is_unsized() {
span_bug!(
arg_decl.source_info.span,
"\"rust-call\" ABI does not support unsized params",
);
}
let place = PlaceRef::alloca(bx, layout);
for i in 0..tupled_arg_tys.len() {
let arg = &fx.fn_abi.args[idx];
idx += 1;
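// A `true` second field on `PassMode::Cast` means the ABI inserts a padding
// argument with no MIR counterpart, so skip one extra backend arg slot.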
if let PassMode::Cast(_, true) = arg.mode {
llarg_idx += 1;
}
let pr_field = place.project_field(bx, i);
bx.store_fn_arg(arg, &mut llarg_idx, pr_field);
}
assert_eq!(
None,
num_untupled.replace(tupled_arg_tys.len()),
"Replaced existing num_tupled"
);
return LocalRef::Place(place);
}
if fx.fn_abi.c_variadic && arg_index == fx.fn_abi.args.len() {
let arg_ty = fx.monomorphize(arg_decl.ty);
let va_list = PlaceRef::alloca(bx, bx.layout_of(arg_ty));
bx.va_start(va_list.llval);
return LocalRef::Place(va_list);
}
let arg = &fx.fn_abi.args[idx];
idx += 1;
if let PassMode::Cast(_, true) = arg.mode {
llarg_idx += 1;
}
if !memory_locals.contains(local) {
// We don't have to cast or keep the argument in the alloca.
// FIXME(eddyb): We should figure out how to use llvm.dbg.value instead
// of putting everything in allocas just so we can use llvm.dbg.declare.
let local = |op| LocalRef::Operand(op);
match arg.mode {
PassMode::Ignore => {
return local(OperandRef::zero_sized(arg.layout));
}
PassMode::Direct(_) => {
let llarg = bx.get_param(llarg_idx);
llarg_idx += 1;
return local(OperandRef::from_immediate_or_packed_pair(
bx, llarg, arg.layout,
));
}
PassMode::Pair(..) => {
let (a, b) = (bx.get_param(llarg_idx), bx.get_param(llarg_idx + 1));
llarg_idx += 2;
return local(OperandRef {
val: OperandValue::Pair(a, b),
layout: arg.layout,
});
}
_ => {}
}
}
if arg.is_sized_indirect() {
// Don't copy an indirect argument to an alloca, the caller
// already put it in a temporary alloca and gave it up.
// FIXME: lifetimes
let llarg = bx.get_param(llarg_idx);
llarg_idx += 1;
LocalRef::Place(PlaceRef::new_sized(llarg, arg.layout))
} else if arg.is_unsized_indirect() {
// As the storage for the indirect argument lives during
// the whole function call, we just copy the fat pointer.
let llarg = bx.get_param(llarg_idx);
llarg_idx += 1;
let llextra = bx.get_param(llarg_idx);
llarg_idx += 1;
let indirect_operand = OperandValue::Pair(llarg, llextra);
let tmp = PlaceRef::alloca_unsized_indirect(bx, arg.layout);
indirect_operand.store(bx, tmp);
LocalRef::UnsizedPlace(tmp)
} else {
let tmp = PlaceRef::alloca(bx, arg.layout);
bx.store_fn_arg(arg, &mut llarg_idx, tmp);
LocalRef::Place(tmp)
}
})
.collect::<Vec<_>>();
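// `#[track_caller]` functions receive one extra, trailing ABI argument: the
// caller's `Location`. It has no MIR counterpart, so take it from the last
// `fn_abi` slot and stash it for `caller_location` lookups during codegen.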
if fx.instance.def.requires_caller_location(bx.tcx()) {
let mir_args = if let Some(num_untupled) = num_untupled {
// Subtract off the tupled argument that gets 'expanded'
args.len() - 1 + num_untupled
} else {
args.len()
};
assert_eq!(
fx.fn_abi.args.len(),
mir_args + 1,
"#[track_caller] instance {:?} must have 1 more argument in their ABI than in their MIR",
fx.instance
);
let arg = fx.fn_abi.args.last().unwrap();
match arg.mode {
PassMode::Direct(_) => (),
_ => bug!("caller location must be PassMode::Direct, found {:?}", arg.mode),
}
fx.caller_location = Some(OperandRef {
val: OperandValue::Immediate(bx.get_param(llarg_idx)),
layout: arg.layout,
});
}
args
}
mod analyze;
mod block;
pub mod constant;
pub mod coverageinfo;
pub mod debuginfo;
mod intrinsic;
pub mod operand;
pub mod place;
mod rvalue;
mod statement;