Use object crate for .rustc metadata generation

We already use the object crate for generating uncompressed .rmeta
metadata object files. This switches the generation of compressed
.rustc object files to use the object crate as well. These have
slightly different requirements, in that .rmeta should be completely
excluded from any final compilation artifacts, while .rustc should
be part of shared objects, but not loaded into memory.

The primary motivation for this change is #90326: In LLVM 14, the
current way of setting section flags (and in particular, preventing
the setting of SHF_ALLOC) will no longer work. There are other ways
we could work around this, but switching to the object crate seems
like the most elegant, as we already use it for .rmeta, and as it
makes this independent of the codegen backend. In particular, we
don't need separate handling in codegen_llvm and codegen_gcc.
codegen_cranelift should be able to reuse the implementation as
well, though I have omitted that here, as it is not based on
codegen_ssa.

This change mostly extracts the existing code for .rmeta handling
to allow using it for .rustc as well, and adjusts the codegen
infrastructure to handle the metadata object file separately: we
no longer create a backend-specific module for it, and directly
produce the compiled module instead.

This does not fix #90326 by itself yet, as .llvmbc will need to be
handled separately.
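
For reference, a minimal sketch of the technique (illustrative only: the
helper name is made up, and the exact object-crate calls are my assumption
of that crate's API, not the code added by this change):

    // Illustrative sketch -- not the actual rustc implementation.
    // Build an ELF object whose .rustc section carries the compressed
    // metadata. Clearing sh_flags drops SHF_ALLOC, so the section stays in
    // shared objects on disk but is never mapped into memory by the loader.
    use object::write::Object;
    use object::{Architecture, BinaryFormat, Endianness, SectionFlags, SectionKind};

    fn rustc_metadata_object(compressed: &[u8]) -> Vec<u8> {
        let mut file = Object::new(BinaryFormat::Elf, Architecture::X86_64, Endianness::Little);
        let section =
            file.add_section(Vec::new(), b".rustc".to_vec(), SectionKind::ReadOnlyData);
        // Override the default section flags; in particular, omit SHF_ALLOC.
        file.section_mut(section).flags = SectionFlags::Elf { sh_flags: 0 };
        file.append_section_data(section, compressed, 1);
        file.write().unwrap()
    }
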
use crate::back::metadata::create_compressed_metadata_file;
use crate::back::write::{
    compute_per_cgu_lto_type, start_async_codegen, submit_codegened_module_to_llvm,
    submit_post_lto_module_to_llvm, submit_pre_lto_module_to_llvm, ComputedLtoType, OngoingCodegen,
};
use crate::common::{IntPredicate, RealPredicate, TypeKind};
use crate::meth;
use crate::mir;
use crate::mir::operand::OperandValue;
use crate::mir::place::PlaceRef;
use crate::traits::*;
use crate::{CachedModuleCodegen, CompiledModule, CrateInfo, MemFlags, ModuleCodegen, ModuleKind};

use rustc_attr as attr;
use rustc_data_structures::fx::FxHashMap;
use rustc_data_structures::profiling::{get_resident_set_size, print_time_passes_entry};
#[cfg(parallel_compiler)]
use rustc_data_structures::sync::{par_iter, ParallelIterator};
use rustc_hir as hir;
use rustc_hir::def_id::{DefId, LOCAL_CRATE};
use rustc_hir::lang_items::LangItem;
use rustc_index::vec::Idx;
use rustc_metadata::EncodedMetadata;
use rustc_middle::middle::codegen_fn_attrs::CodegenFnAttrs;
use rustc_middle::middle::exported_symbols;
use rustc_middle::middle::lang_items;
use rustc_middle::mir::mono::{CodegenUnit, CodegenUnitNameBuilder, MonoItem};
use rustc_middle::ty::layout::{HasTyCtxt, LayoutOf, TyAndLayout};
use rustc_middle::ty::query::Providers;
use rustc_middle::ty::{self, Instance, Ty, TyCtxt};
use rustc_session::cgu_reuse_tracker::CguReuse;
use rustc_session::config::{self, EntryFnType, OutputType};
use rustc_session::Session;
use rustc_span::symbol::sym;
use rustc_target::abi::{Align, VariantIdx};

use std::convert::TryFrom;
use std::ops::{Deref, DerefMut};
use std::time::{Duration, Instant};

use itertools::Itertools;
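
/// Maps a comparison operator from HIR to the integer predicate used for the
/// `icmp` instruction, picking the signed or unsigned variant based on `signed`.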
pub fn bin_op_to_icmp_predicate(op: hir::BinOpKind, signed: bool) -> IntPredicate {
    match op {
        hir::BinOpKind::Eq => IntPredicate::IntEQ,
        hir::BinOpKind::Ne => IntPredicate::IntNE,
        hir::BinOpKind::Lt => {
            if signed {
                IntPredicate::IntSLT
            } else {
                IntPredicate::IntULT
            }
        }
        hir::BinOpKind::Le => {
            if signed {
                IntPredicate::IntSLE
            } else {
                IntPredicate::IntULE
            }
        }
        hir::BinOpKind::Gt => {
            if signed {
                IntPredicate::IntSGT
            } else {
                IntPredicate::IntUGT
            }
        }
        hir::BinOpKind::Ge => {
            if signed {
                IntPredicate::IntSGE
            } else {
                IntPredicate::IntUGE
            }
        }
        op => bug!(
            "bin_op_to_icmp_predicate: expected comparison operator, \
             found {:?}",
            op
        ),
    }
}
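
/// Maps a comparison operator from HIR to the floating-point predicate for the
/// `fcmp` instruction. `!=` uses the *unordered* predicate (`RealUNE`) so that
/// comparisons involving NaN come out true, matching Rust's `f32`/`f64`
/// semantics; the remaining operators use ordered predicates.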
pub fn bin_op_to_fcmp_predicate(op: hir::BinOpKind) -> RealPredicate {
    match op {
        hir::BinOpKind::Eq => RealPredicate::RealOEQ,
        hir::BinOpKind::Ne => RealPredicate::RealUNE,
        hir::BinOpKind::Lt => RealPredicate::RealOLT,
        hir::BinOpKind::Le => RealPredicate::RealOLE,
        hir::BinOpKind::Gt => RealPredicate::RealOGT,
        hir::BinOpKind::Ge => RealPredicate::RealOGE,
        op => {
            bug!(
                "bin_op_to_fcmp_predicate: expected comparison operator, \
                 found {:?}",
                op
            );
        }
    }
}
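
/// Emits an element-wise comparison of two SIMD vectors, sign-extending the
/// resulting `< size x i1 >` mask to the full-width return type `ret_ty`.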
pub fn compare_simd_types<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    lhs: Bx::Value,
    rhs: Bx::Value,
    t: Ty<'tcx>,
    ret_ty: Bx::Type,
    op: hir::BinOpKind,
) -> Bx::Value {
    let signed = match t.kind() {
        ty::Float(_) => {
            let cmp = bin_op_to_fcmp_predicate(op);
            let cmp = bx.fcmp(cmp, lhs, rhs);
            return bx.sext(cmp, ret_ty);
        }
        ty::Uint(_) => false,
        ty::Int(_) => true,
        _ => bug!("compare_simd_types: invalid SIMD type"),
    };

    let cmp = bin_op_to_icmp_predicate(op, signed);
    let cmp = bx.icmp(cmp, lhs, rhs);
    // LLVM outputs an `< size x i1 >`, so we need to perform a sign extension
    // to get the correctly sized type. This will compile to a single instruction
    // once the IR is converted to assembly if the SIMD instruction is supported
    // by the target architecture.
    bx.sext(cmp, ret_ty)
}

/// Retrieves the information we are losing (making dynamic) in an unsizing
/// adjustment.
///
/// The `old_info` argument is a bit odd. It is intended for use in an upcast,
/// where the new vtable for an object will be derived from the old one.
pub fn unsized_info<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    source: Ty<'tcx>,
    target: Ty<'tcx>,
    old_info: Option<Bx::Value>,
) -> Bx::Value {
    let cx = bx.cx();
    let (source, target) =
        cx.tcx().struct_lockstep_tails_erasing_lifetimes(source, target, bx.param_env());
    match (source.kind(), target.kind()) {
        (&ty::Array(_, len), &ty::Slice(_)) => {
            cx.const_usize(len.eval_usize(cx.tcx(), ty::ParamEnv::reveal_all()))
        }
        (&ty::Dynamic(ref data_a, ..), &ty::Dynamic(ref data_b, ..)) => {
            let old_info =
                old_info.expect("unsized_info: missing old info for trait upcasting coercion");
            if data_a.principal_def_id() == data_b.principal_def_id() {
                return old_info;
            }

            // trait upcasting coercion
            let vptr_entry_idx =
                cx.tcx().vtable_trait_upcasting_coercion_new_vptr_slot((source, target));

            if let Some(entry_idx) = vptr_entry_idx {
                let ptr_ty = cx.type_i8p();
                let ptr_align = cx.tcx().data_layout.pointer_align.abi;
                let llvtable = bx.pointercast(old_info, bx.type_ptr_to(ptr_ty));
                let gep = bx.inbounds_gep(
                    ptr_ty,
                    llvtable,
                    &[bx.const_usize(u64::try_from(entry_idx).unwrap())],
                );
                let new_vptr = bx.load(ptr_ty, gep, ptr_align);
                bx.nonnull_metadata(new_vptr);
                // Vtable loads are invariant.
                bx.set_invariant_load(new_vptr);
                new_vptr
            } else {
                old_info
            }
        }
        (_, &ty::Dynamic(ref data, ..)) => {
            let vtable_ptr_ty = cx.scalar_pair_element_backend_type(
                cx.layout_of(cx.tcx().mk_mut_ptr(target)),
                1,
                true,
            );
            cx.const_ptrcast(meth::get_vtable(cx, source, data.principal()), vtable_ptr_ty)
        }
        _ => bug!("unsized_info: invalid unsizing {:?} -> {:?}", source, target),
    }
}

/// Coerces `src` to `dst_ty`. `src_ty` must be a pointer.
pub fn unsize_ptr<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    src: Bx::Value,
    src_ty: Ty<'tcx>,
    dst_ty: Ty<'tcx>,
    old_info: Option<Bx::Value>,
) -> (Bx::Value, Bx::Value) {
    debug!("unsize_ptr: {:?} => {:?}", src_ty, dst_ty);
    match (src_ty.kind(), dst_ty.kind()) {
        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
        | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
            assert_eq!(bx.cx().type_is_sized(a), old_info.is_none());
            let ptr_ty = bx.cx().type_ptr_to(bx.cx().backend_type(bx.cx().layout_of(b)));
            (bx.pointercast(src, ptr_ty), unsized_info(bx, a, b, old_info))
        }
        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) => {
            assert_eq!(def_a, def_b);
            let src_layout = bx.cx().layout_of(src_ty);
            let dst_layout = bx.cx().layout_of(dst_ty);
            if src_ty == dst_ty {
                return (src, old_info.unwrap());
            }
            let mut result = None;
            for i in 0..src_layout.fields.count() {
                let src_f = src_layout.field(bx.cx(), i);
                assert_eq!(src_layout.fields.offset(i).bytes(), 0);
                assert_eq!(dst_layout.fields.offset(i).bytes(), 0);
                if src_f.is_zst() {
                    continue;
                }
                assert_eq!(src_layout.size, src_f.size);

                let dst_f = dst_layout.field(bx.cx(), i);
                assert_ne!(src_f.ty, dst_f.ty);
                assert_eq!(result, None);
                result = Some(unsize_ptr(bx, src, src_f.ty, dst_f.ty, old_info));
            }
            let (lldata, llextra) = result.unwrap();
            let lldata_ty = bx.cx().scalar_pair_element_backend_type(dst_layout, 0, true);
            let llextra_ty = bx.cx().scalar_pair_element_backend_type(dst_layout, 1, true);
            // HACK(eddyb) have to bitcast pointers until LLVM removes pointee types.
            (bx.bitcast(lldata, lldata_ty), bx.bitcast(llextra, llextra_ty))
        }
        _ => bug!("unsize_ptr: called on bad types"),
    }
}

/// Coerces `src`, which is a reference to a value of type `src_ty`,
/// to a value of type `dst_ty`, and stores the result in `dst`.
pub fn coerce_unsized_into<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    src: PlaceRef<'tcx, Bx::Value>,
    dst: PlaceRef<'tcx, Bx::Value>,
) {
    let src_ty = src.layout.ty;
    let dst_ty = dst.layout.ty;
    match (src_ty.kind(), dst_ty.kind()) {
        (&ty::Ref(..), &ty::Ref(..) | &ty::RawPtr(..)) | (&ty::RawPtr(..), &ty::RawPtr(..)) => {
            let (base, info) = match bx.load_operand(src).val {
                OperandValue::Pair(base, info) => unsize_ptr(bx, base, src_ty, dst_ty, Some(info)),
                OperandValue::Immediate(base) => unsize_ptr(bx, base, src_ty, dst_ty, None),
                OperandValue::Ref(..) => bug!(),
            };
            OperandValue::Pair(base, info).store(bx, dst);
        }

        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) => {
            assert_eq!(def_a, def_b);

            for i in 0..def_a.variant(VariantIdx::new(0)).fields.len() {
                let src_f = src.project_field(bx, i);
                let dst_f = dst.project_field(bx, i);

                if dst_f.layout.is_zst() {
                    continue;
                }

                if src_f.layout.ty == dst_f.layout.ty {
                    memcpy_ty(
                        bx,
                        dst_f.llval,
                        dst_f.align,
                        src_f.llval,
                        src_f.align,
                        src_f.layout,
                        MemFlags::empty(),
                    );
                } else {
                    coerce_unsized_into(bx, src_f, dst_f);
                }
            }
        }
        _ => bug!("coerce_unsized_into: invalid coercion {:?} -> {:?}", src_ty, dst_ty),
    }
}
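
/// Adjusts the width of a shift's right-hand side to match its left-hand side,
/// since LLVM requires both operands of a shift to have the same integer type.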
pub fn cast_shift_expr_rhs<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    op: hir::BinOpKind,
    lhs: Bx::Value,
    rhs: Bx::Value,
) -> Bx::Value {
    cast_shift_rhs(bx, op, lhs, rhs)
}

fn cast_shift_rhs<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    op: hir::BinOpKind,
    lhs: Bx::Value,
    rhs: Bx::Value,
) -> Bx::Value {
    // Shifts may have any size int on the rhs.
    if op.is_shift() {
        let mut rhs_llty = bx.cx().val_ty(rhs);
        let mut lhs_llty = bx.cx().val_ty(lhs);
        if bx.cx().type_kind(rhs_llty) == TypeKind::Vector {
            rhs_llty = bx.cx().element_type(rhs_llty)
        }
        if bx.cx().type_kind(lhs_llty) == TypeKind::Vector {
            lhs_llty = bx.cx().element_type(lhs_llty)
        }
        let rhs_sz = bx.cx().int_width(rhs_llty);
        let lhs_sz = bx.cx().int_width(lhs_llty);
        if lhs_sz < rhs_sz {
            bx.trunc(rhs, lhs_llty)
        } else if lhs_sz > rhs_sz {
            // FIXME (#1877): If in the future shifting by negative
            // values is no longer undefined, then this is wrong.
            bx.zext(rhs, lhs_llty)
        } else {
            rhs
        }
    } else {
        rhs
    }
}

/// Returns `true` if this session's target will use SEH-based unwinding.
///
/// This is only true for MSVC targets, and even then the 64-bit MSVC target
/// currently uses SEH-ish unwinding with DWARF info tables to the side (same as
/// 64-bit MinGW) instead of "full SEH".
pub fn wants_msvc_seh(sess: &Session) -> bool {
    sess.target.is_like_msvc
}
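
/// Copies a value of the given `layout` from `src` to `dst` with the given
/// alignments; a no-op for zero-sized types.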
pub fn memcpy_ty<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    bx: &mut Bx,
    dst: Bx::Value,
    dst_align: Align,
    src: Bx::Value,
    src_align: Align,
    layout: TyAndLayout<'tcx>,
    flags: MemFlags,
) {
    let size = layout.size.bytes();
    if size == 0 {
        return;
    }

    bx.memcpy(dst, dst_align, src, src_align, bx.cx().const_usize(size), flags);
}

pub fn codegen_instance<'a, 'tcx: 'a, Bx: BuilderMethods<'a, 'tcx>>(
    cx: &'a Bx::CodegenCx,
    instance: Instance<'tcx>,
) {
    // This is an `info!` to allow collecting monomorphization statistics
    // and to allow finding the last function before LLVM aborts in
    // release builds.
    info!("codegen_instance({})", instance);

    mir::codegen_mir::<Bx>(cx, instance);
}

/// Creates the `main` function, which will initialize the Rust runtime and call
/// the user's main function.
pub fn maybe_create_entry_wrapper<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    cx: &'a Bx::CodegenCx,
) -> Option<Bx::Function> {
    let (main_def_id, entry_type) = cx.tcx().entry_fn(())?;
    let main_is_local = main_def_id.is_local();
    let instance = Instance::mono(cx.tcx(), main_def_id);

    if main_is_local {
        // We want to create the wrapper in the same codegen unit as Rust's main
        // function.
        if !cx.codegen_unit().contains_item(&MonoItem::Fn(instance)) {
            return None;
        }
    } else if !cx.codegen_unit().is_primary() {
        // We want to create the wrapper only when the codegen unit is the primary one.
        return None;
    }

    let main_llfn = cx.get_fn_addr(instance);

    let use_start_lang_item = EntryFnType::Start != entry_type;
    let entry_fn = create_entry_fn::<Bx>(cx, main_llfn, main_def_id, use_start_lang_item);
    return Some(entry_fn);

    fn create_entry_fn<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
        cx: &'a Bx::CodegenCx,
        rust_main: Bx::Value,
        rust_main_def_id: DefId,
        use_start_lang_item: bool,
    ) -> Bx::Function {
        // The entry function is either `int main(void)` or `int main(int argc, char **argv)`,
        // depending on whether the target needs `argc` and `argv` to be passed in.
        let llfty = if cx.sess().target.main_needs_argc_argv {
            cx.type_func(&[cx.type_int(), cx.type_ptr_to(cx.type_i8p())], cx.type_int())
        } else {
            cx.type_func(&[], cx.type_int())
        };

        let main_ret_ty = cx.tcx().fn_sig(rust_main_def_id).output();
        // Given that `main()` has no arguments, its return type cannot have
        // late-bound regions, since late-bound regions must appear in the
        // argument listing.
        let main_ret_ty = cx.tcx().normalize_erasing_regions(
            ty::ParamEnv::reveal_all(),
            main_ret_ty.no_bound_vars().unwrap(),
        );

        let Some(llfn) = cx.declare_c_main(llfty) else {
            // FIXME: We should be smart and show a better diagnostic here.
            let span = cx.tcx().def_span(rust_main_def_id);
            cx.sess()
                .struct_span_err(span, "entry symbol `main` declared multiple times")
                .help("did you use `#[no_mangle]` on `fn main`? Use `#[start]` instead")
                .emit();
            cx.sess().abort_if_errors();
            bug!();
        };

        // `main` should respect the same config for frame pointer elimination as the rest of the code.
        cx.set_frame_pointer_type(llfn);
        cx.apply_target_cpu_attr(llfn);

        let llbb = Bx::append_block(&cx, llfn, "top");
        let mut bx = Bx::build(&cx, llbb);

        bx.insert_reference_to_gdb_debug_scripts_section_global();

        let isize_ty = cx.type_isize();
        let i8pp_ty = cx.type_ptr_to(cx.type_i8p());
        let (arg_argc, arg_argv) = get_argc_argv(cx, &mut bx);

        let (start_fn, start_ty, args) = if use_start_lang_item {
            let start_def_id = cx.tcx().require_lang_item(LangItem::Start, None);
            let start_fn = cx.get_fn_addr(
                ty::Instance::resolve(
                    cx.tcx(),
                    ty::ParamEnv::reveal_all(),
                    start_def_id,
                    cx.tcx().intern_substs(&[main_ret_ty.into()]),
                )
                .unwrap()
                .unwrap(),
            );
            let start_ty = cx.type_func(&[cx.val_ty(rust_main), isize_ty, i8pp_ty], isize_ty);
            (start_fn, start_ty, vec![rust_main, arg_argc, arg_argv])
        } else {
            debug!("using user-defined start fn");
            let start_ty = cx.type_func(&[isize_ty, i8pp_ty], isize_ty);
            (rust_main, start_ty, vec![arg_argc, arg_argv])
        };

        let result = bx.call(start_ty, start_fn, &args, None);
        let cast = bx.intcast(result, cx.type_int(), true);
        bx.ret(cast);

        llfn
    }
}

/// Obtain the `argc` and `argv` values to pass to the rust start function.
fn get_argc_argv<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
    cx: &'a Bx::CodegenCx,
    bx: &mut Bx,
) -> (Bx::Value, Bx::Value) {
    if cx.sess().target.main_needs_argc_argv {
        // Params from native `main()` used as args for rust start function
        let param_argc = bx.get_param(0);
        let param_argv = bx.get_param(1);
        let arg_argc = bx.intcast(param_argc, cx.type_isize(), true);
        let arg_argv = param_argv;
        (arg_argc, arg_argv)
    } else {
        // The Rust start function doesn't need `argc` and `argv`, so just pass zeros.
        let arg_argc = bx.const_int(cx.type_int(), 0);
        let arg_argv = bx.const_null(cx.type_ptr_to(cx.type_i8p()));
        (arg_argc, arg_argv)
    }
}
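
/// Drives codegen for the whole crate: partitions it into codegen units,
/// emits the (optional) metadata and allocator modules, and streams the CGUs
/// to the backend through the `OngoingCodegen` coordinator.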
pub fn codegen_crate<B: ExtraBackendMethods>(
    backend: B,
    tcx: TyCtxt<'_>,
    target_cpu: String,
    metadata: EncodedMetadata,
    need_metadata_module: bool,
) -> OngoingCodegen<B> {
    // Skip crate items and just output metadata in -Z no-codegen mode.
    if tcx.sess.opts.debugging_opts.no_codegen || !tcx.sess.opts.output_types.should_codegen() {
        let ongoing_codegen = start_async_codegen(backend, tcx, target_cpu, metadata, None, 1);

        ongoing_codegen.codegen_finished(tcx);

        ongoing_codegen.check_for_errors(tcx.sess);

        return ongoing_codegen;
    }

    let cgu_name_builder = &mut CodegenUnitNameBuilder::new(tcx);

    // Run the monomorphization collector and partition the collected items into
    // codegen units.
    let codegen_units = tcx.collect_and_partition_mono_items(()).1;

    // Force all codegen_unit queries so they are already either red or green
    // when compile_codegen_unit accesses them. We are not able to re-execute
    // the codegen_unit query from just the DepNode, so an unknown color would
    // lead to having to re-execute compile_codegen_unit, possibly
    // unnecessarily.
    if tcx.dep_graph.is_fully_enabled() {
        for cgu in codegen_units {
            tcx.ensure().codegen_unit(cgu.name());
        }
    }
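
    // As described in the commit message above, the compressed .rustc metadata
    // object is now written here directly as a finished `CompiledModule`,
    // rather than going through a backend-specific module.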
    let metadata_module = if need_metadata_module {
        // Emit compressed metadata object.
        let metadata_cgu_name =
            cgu_name_builder.build_cgu_name(LOCAL_CRATE, &["crate"], Some("metadata")).to_string();
        tcx.sess.time("write_compressed_metadata", || {
            let file_name =
                tcx.output_filenames(()).temp_path(OutputType::Metadata, Some(&metadata_cgu_name));
            let data = create_compressed_metadata_file(
                tcx.sess,
                &metadata,
                &exported_symbols::metadata_symbol_name(tcx),
            );
            if let Err(err) = std::fs::write(&file_name, data) {
                tcx.sess.fatal(&format!("error writing metadata object file: {}", err));
            }
            Some(CompiledModule {
                name: metadata_cgu_name,
                kind: ModuleKind::Metadata,
                object: Some(file_name),
                dwarf_object: None,
                bytecode: None,
            })
        })
    } else {
        None
    };

    let ongoing_codegen = start_async_codegen(
        backend.clone(),
        tcx,
        target_cpu,
        metadata,
        metadata_module,
        codegen_units.len(),
    );
    let ongoing_codegen = AbortCodegenOnDrop::<B>(Some(ongoing_codegen));

    // Codegen an allocator shim, if necessary.
    //
    // If the crate doesn't have an `allocator_kind` set then there's definitely
    // no shim to generate. Otherwise we also check our dependency graph for all
    // our output crate types. If anything there looks like it's a `Dynamic`
    // linkage, then it's already got an allocator shim and we'll be using that
    // one instead. If nothing exists then it's our job to generate the
    // allocator!
    let any_dynamic_crate = tcx.dependency_formats(()).iter().any(|(_, list)| {
        use rustc_middle::middle::dependency_format::Linkage;
        list.iter().any(|&linkage| linkage == Linkage::Dynamic)
    });
    let allocator_module = if any_dynamic_crate {
        None
    } else if let Some(kind) = tcx.allocator_kind(()) {
        let llmod_id =
            cgu_name_builder.build_cgu_name(LOCAL_CRATE, &["crate"], Some("allocator")).to_string();
        let mut module_llvm = backend.new_metadata(tcx, &llmod_id);
        tcx.sess.time("write_allocator_module", || {
            backend.codegen_allocator(
                tcx,
                &mut module_llvm,
                &llmod_id,
                kind,
                tcx.lang_items().oom().is_some(),
            )
        });

        Some(ModuleCodegen { name: llmod_id, module_llvm, kind: ModuleKind::Allocator })
    } else {
        None
    };

    if let Some(allocator_module) = allocator_module {
        ongoing_codegen.submit_pre_codegened_module_to_llvm(tcx, allocator_module);
    }

    // For better throughput during parallel processing by LLVM, we used to sort
    // CGUs largest to smallest. This would lead to better thread utilization
    // by, for example, preventing a large CGU from being processed last and
    // having only one LLVM thread working while the rest remained idle.
    //
    // However, this strategy would lead to high memory usage, as it meant the
    // LLVM-IR for all of the largest CGUs would be resident in memory at once.
    //
    // Instead, we can compromise by ordering CGUs such that the largest and
    // smallest are first, second largest and smallest are next, etc. If there
    // are large size variations, this can reduce memory usage significantly.
    let codegen_units: Vec<_> = {
        let mut sorted_cgus = codegen_units.iter().collect::<Vec<_>>();
        sorted_cgus.sort_by_cached_key(|cgu| cgu.size_estimate());

        let (first_half, second_half) = sorted_cgus.split_at(sorted_cgus.len() / 2);
        second_half.iter().rev().interleave(first_half).copied().collect()
    };

    // The non-parallel compiler can only translate codegen units to LLVM IR
    // on a single thread, leading to a staircase effect where the N LLVM
    // threads have to wait on the single codegen thread to generate work
    // for them. The parallel compiler does not have this restriction, so
    // we can pre-load the LLVM queue in parallel before handing off
    // coordination to the OnGoingCodegen scheduler.
    //
    // This likely is a temporary measure. Once we don't have to support the
    // non-parallel compiler anymore, we can compile CGUs end-to-end in
    // parallel and get rid of the complicated scheduling logic.
    #[cfg(parallel_compiler)]
    let pre_compile_cgus = |cgu_reuse: &[CguReuse]| {
        tcx.sess.time("compile_first_CGU_batch", || {
            // Try to find one CGU to compile per thread.
            let cgus: Vec<_> = cgu_reuse
                .iter()
                .enumerate()
                .filter(|&(_, reuse)| reuse == &CguReuse::No)
                .take(tcx.sess.threads())
                .collect();

            // Compile the found CGUs in parallel.
            let start_time = Instant::now();

            let pre_compiled_cgus = par_iter(cgus)
                .map(|(i, _)| {
                    let module = backend.compile_codegen_unit(tcx, codegen_units[i].name());
                    (i, module)
                })
                .collect();

            (pre_compiled_cgus, start_time.elapsed())
        })
    };

    #[cfg(not(parallel_compiler))]
    let pre_compile_cgus = |_: &[CguReuse]| (FxHashMap::default(), Duration::new(0, 0));

    let mut cgu_reuse = Vec::new();
    let mut pre_compiled_cgus: Option<FxHashMap<usize, _>> = None;
    let mut total_codegen_time = Duration::new(0, 0);
    let start_rss = tcx.sess.time_passes().then(|| get_resident_set_size());
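
    // Main codegen loop: hand one CGU at a time to the backend, reusing
    // pre- or post-LTO artifacts from the incremental cache where possible.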
    for (i, cgu) in codegen_units.iter().enumerate() {
        ongoing_codegen.wait_for_signal_to_codegen_item();
        ongoing_codegen.check_for_errors(tcx.sess);

        // Do some setup work in the first iteration.
        if pre_compiled_cgus.is_none() {
            // Calculate the CGU reuse.
            cgu_reuse = tcx.sess.time("find_cgu_reuse", || {
                codegen_units.iter().map(|cgu| determine_cgu_reuse(tcx, &cgu)).collect()
            });
            // Pre-compile some CGUs.
            let (compiled_cgus, codegen_time) = pre_compile_cgus(&cgu_reuse);
            pre_compiled_cgus = Some(compiled_cgus);
            total_codegen_time += codegen_time;
        }

        let cgu_reuse = cgu_reuse[i];
        tcx.sess.cgu_reuse_tracker.set_actual_reuse(cgu.name().as_str(), cgu_reuse);

        match cgu_reuse {
            CguReuse::No => {
                let (module, cost) =
                    if let Some(cgu) = pre_compiled_cgus.as_mut().unwrap().remove(&i) {
                        cgu
                    } else {
                        let start_time = Instant::now();
                        let module = backend.compile_codegen_unit(tcx, cgu.name());
                        total_codegen_time += start_time.elapsed();
                        module
                    };
                // This will unwind if there are errors, which triggers our `AbortCodegenOnDrop`
                // guard. Unfortunately, just skipping the `submit_codegened_module_to_llvm` makes
                // compilation hang on post-monomorphization errors.
                tcx.sess.abort_if_errors();

                submit_codegened_module_to_llvm(
                    &backend,
                    &ongoing_codegen.coordinator_send,
                    module,
                    cost,
                );
                false
            }
            CguReuse::PreLto => {
                submit_pre_lto_module_to_llvm(
                    &backend,
                    tcx,
                    &ongoing_codegen.coordinator_send,
                    CachedModuleCodegen {
                        name: cgu.name().to_string(),
                        source: cgu.work_product(tcx),
                    },
                );
                true
            }
            CguReuse::PostLto => {
                submit_post_lto_module_to_llvm(
                    &backend,
                    &ongoing_codegen.coordinator_send,
                    CachedModuleCodegen {
                        name: cgu.name().to_string(),
                        source: cgu.work_product(tcx),
                    },
                );
                true
            }
        };
    }

    ongoing_codegen.codegen_finished(tcx);

    // Since the main thread is sometimes blocked during codegen, we keep track
    // of the -Ztime-passes output manually.
    if tcx.sess.time_passes() {
        let end_rss = get_resident_set_size();

        print_time_passes_entry(
            "codegen_to_LLVM_IR",
            total_codegen_time,
            start_rss.unwrap(),
            end_rss,
        );
    }

    ongoing_codegen.check_for_errors(tcx.sess);

    ongoing_codegen.into_inner()
}

/// A curious wrapper structure whose only purpose is to call `codegen_aborted`
/// when it's dropped abnormally.
///
/// In the process of working on rust-lang/rust#55238 a mysterious segfault was
/// stumbled upon. The segfault was never reproduced locally, but it was
/// suspected to be related to the fact that codegen worker threads were
/// sticking around by the time the main thread was exiting, causing issues.
///
/// This structure is an attempt to fix that issue where the `codegen_aborted`
/// message will block until all workers have finished. This should ensure that
/// even if the main codegen thread panics we'll wait for pending work to
/// complete before returning from the main thread, hopefully avoiding
/// segfaults.
///
/// If you see this comment in the code, then it means that this workaround
/// worked! We may yet one day track down the mysterious cause of that
/// segfault...
struct AbortCodegenOnDrop<B: ExtraBackendMethods>(Option<OngoingCodegen<B>>);

impl<B: ExtraBackendMethods> AbortCodegenOnDrop<B> {
    fn into_inner(mut self) -> OngoingCodegen<B> {
        self.0.take().unwrap()
    }
}

impl<B: ExtraBackendMethods> Deref for AbortCodegenOnDrop<B> {
    type Target = OngoingCodegen<B>;

    fn deref(&self) -> &OngoingCodegen<B> {
        self.0.as_ref().unwrap()
    }
}

impl<B: ExtraBackendMethods> DerefMut for AbortCodegenOnDrop<B> {
    fn deref_mut(&mut self) -> &mut OngoingCodegen<B> {
        self.0.as_mut().unwrap()
    }
}

impl<B: ExtraBackendMethods> Drop for AbortCodegenOnDrop<B> {
    fn drop(&mut self) {
        if let Some(codegen) = self.0.take() {
            codegen.codegen_aborted();
        }
    }
}

impl CrateInfo {
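    /// Collects the crate-wide information needed later for linking: exported
    /// symbols, native and used libraries, dependency formats, lang-item
    /// locations, and the Windows subsystem, among others.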
    pub fn new(tcx: TyCtxt<'_>, target_cpu: String) -> CrateInfo {
        let exported_symbols = tcx
            .sess
            .crate_types()
            .iter()
            .map(|&c| (c, crate::back::linker::exported_symbols(tcx, c)))
            .collect();
        let local_crate_name = tcx.crate_name(LOCAL_CRATE);
        let crate_attrs = tcx.hir().attrs(rustc_hir::CRATE_HIR_ID);
        let subsystem = tcx.sess.first_attr_value_str_by_name(crate_attrs, sym::windows_subsystem);
        let windows_subsystem = subsystem.map(|subsystem| {
            if subsystem != sym::windows && subsystem != sym::console {
                tcx.sess.fatal(&format!(
                    "invalid windows subsystem `{}`, only \
                     `windows` and `console` are allowed",
                    subsystem
                ));
            }
            subsystem.to_string()
        });

        // This list is used when generating the command line to pass through to
        // the system linker. The linker expects undefined symbols on the left of the
        // command line to be defined in libraries on the right, not the other way
        // around. For more info, see some comments in the add_used_library function
        // below.
        //
        // In order to get this left-to-right dependency ordering, we use the reverse
        // postorder of all crates, putting the leaves at the right-most positions.
        let used_crates = tcx
            .postorder_cnums(())
            .iter()
            .rev()
            .copied()
            .filter(|&cnum| !tcx.dep_kind(cnum).macros_only())
            .collect();

        let mut info = CrateInfo {
            target_cpu,
            exported_symbols,
            local_crate_name,
            compiler_builtins: None,
            profiler_runtime: None,
            is_no_builtins: Default::default(),
            native_libraries: Default::default(),
            used_libraries: tcx.native_libraries(LOCAL_CRATE).iter().map(Into::into).collect(),
            crate_name: Default::default(),
            used_crates,
            used_crate_source: Default::default(),
            lang_item_to_crate: Default::default(),
            missing_lang_items: Default::default(),
            dependency_formats: tcx.dependency_formats(()).clone(),
            windows_subsystem,
        };
        let lang_items = tcx.lang_items();

        let crates = tcx.crates(());

        let n_crates = crates.len();
        info.native_libraries.reserve(n_crates);
        info.crate_name.reserve(n_crates);
        info.used_crate_source.reserve(n_crates);
        info.missing_lang_items.reserve(n_crates);

        for &cnum in crates.iter() {
            info.native_libraries
                .insert(cnum, tcx.native_libraries(cnum).iter().map(Into::into).collect());
            info.crate_name.insert(cnum, tcx.crate_name(cnum).to_string());
            info.used_crate_source.insert(cnum, tcx.used_crate_source(cnum).clone());
            if tcx.is_compiler_builtins(cnum) {
                info.compiler_builtins = Some(cnum);
            }
            if tcx.is_profiler_runtime(cnum) {
                info.profiler_runtime = Some(cnum);
            }
            if tcx.is_no_builtins(cnum) {
                info.is_no_builtins.insert(cnum);
            }
            let missing = tcx.missing_lang_items(cnum);
            for &item in missing.iter() {
                if let Ok(id) = lang_items.require(item) {
                    info.lang_item_to_crate.insert(item, id.krate);
                }
            }

            // No need to look for lang items that don't actually need to exist.
            let missing =
                missing.iter().cloned().filter(|&l| lang_items::required(tcx, l)).collect();
            info.missing_lang_items.insert(cnum, missing);
        }

        info
    }
}
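
/// Registers this module's query providers; currently just
/// `backend_optimization_level`, which decides the effective optimization
/// level, honoring `#[optimize(speed)]` attributes under size-optimized builds.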
pub fn provide(providers: &mut Providers) {
    providers.backend_optimization_level = |tcx, cratenum| {
        let for_speed = match tcx.sess.opts.optimize {
            // If globally no optimisation is done, #[optimize] has no effect.
            //
            // This is done because if we ended up "upgrading" to `-O2` here, we'd populate the
            // pass manager and it is likely that some module-wide passes (such as inliner or
            // cross-function constant propagation) would ignore the `optnone` annotation we put
            // on the functions, thus necessarily involving these functions in optimisations.
            config::OptLevel::No => return config::OptLevel::No,
            // If globally optimise-speed is already specified, just use that level.
            config::OptLevel::Less => return config::OptLevel::Less,
            config::OptLevel::Default => return config::OptLevel::Default,
            config::OptLevel::Aggressive => return config::OptLevel::Aggressive,
            // If globally optimize-for-size has been requested, use -O2 instead if any
            // `#[optimize(speed)]` attributes are present.
            config::OptLevel::Size => config::OptLevel::Default,
            config::OptLevel::SizeMin => config::OptLevel::Default,
        };

        let (defids, _) = tcx.collect_and_partition_mono_items(cratenum);
        for id in &*defids {
            let CodegenFnAttrs { optimize, .. } = tcx.codegen_fn_attrs(*id);
            match optimize {
                attr::OptimizeAttr::None => continue,
                attr::OptimizeAttr::Size => continue,
                attr::OptimizeAttr::Speed => {
                    return for_speed;
                }
            }
        }
        tcx.sess.opts.optimize
    };
}
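
/// Returns how (and whether) a CGU's compiled artifacts from the previous
/// incremental session can be reused, based on dep-graph greenness and the
/// LTO configuration.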
fn determine_cgu_reuse<'tcx>(tcx: TyCtxt<'tcx>, cgu: &CodegenUnit<'tcx>) -> CguReuse {
    if !tcx.dep_graph.is_fully_enabled() {
        return CguReuse::No;
    }

    let work_product_id = &cgu.work_product_id();
    if tcx.dep_graph.previous_work_product(work_product_id).is_none() {
        // We don't have anything cached for this CGU. This can happen
        // if the CGU did not exist in the previous session.
        return CguReuse::No;
    }

    // Try to mark the CGU as green. If we can do so, it means that nothing
    // affecting the LLVM module has changed and we can re-use a cached version.
    // If we compile with any kind of LTO, this means we can re-use the bitcode
    // of the Pre-LTO stage (possibly also the Post-LTO version but we'll only
    // know that later). If we are not doing LTO, there is only one optimized
    // version of each module, so we re-use that.
    let dep_node = cgu.codegen_dep_node(tcx);
    assert!(
        !tcx.dep_graph.dep_node_exists(&dep_node),
        "CompileCodegenUnit dep-node for CGU `{}` already exists before marking.",
        cgu.name()
    );

    if tcx.try_mark_green(&dep_node) {
        // We can re-use either the pre- or the post-thinlto state. If no LTO is
        // being performed then we can use post-LTO artifacts, otherwise we must
        // reuse pre-LTO artifacts.
        match compute_per_cgu_lto_type(
            &tcx.sess.lto(),
            &tcx.sess.opts,
            &tcx.sess.crate_types(),
            ModuleKind::Regular,
        ) {
            ComputedLtoType::No => CguReuse::PostLto,
            _ => CguReuse::PreLto,
        }
    } else {
        CguReuse::No
    }
}