Allow bounds checks when enumerating `IndexSlice` to be elided
Without this hint, each loop iteration has to separately bounds check the index. See https://godbolt.org/z/zrfPY4Ten for an example.
This is technically a behaviour change, but only in cases where the compiler is going to crash anyway.
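For illustration, a minimal sketch of the general bounds-check-elision pattern on plain slices (the function and names are made up, not the actual `IndexSlice` change):

```rust
// One up-front length assertion lets LLVM prove the per-iteration
// bounds checks cannot fail, so it can elide them.
fn add_pairs(xs: &[u32], ys: &[u32]) -> u32 {
    assert_eq!(xs.len(), ys.len()); // single hoisted check
    let mut total = 0;
    for i in 0..xs.len() {
        // Without the assertion above, `ys[i]` would need its own
        // bounds check on every iteration.
        total += xs[i] + ys[i];
    }
    total
}
```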
Revert <https://github.com/rust-lang/rust/pull/138084> to buy time to
consider options that avoid breaking downstream uses of cargo on
distributed `rustc-src` artifacts, where such cargo invocations fail
because they cannot inherit `lints` from the root manifest's
`workspace.lints` (this works in the rust-lang/rust source workspace,
but not in the distributed `rustc-src` artifacts).
This breakage was reported in
<https://github.com/rust-lang/rust/issues/138304>.
This reverts commit 48caf81484, reversing
changes made to c6662879b2.
By naming them in `[workspace.lints.rust]` in the top-level
`Cargo.toml`, and then making all `compiler/` crates inherit them with
`[lints] workspace = true`. (I omitted `rustc_codegen_{cranelift,gcc}`,
because they're a bit different.)
The advantages of this over the current approach:
- It uses a standard Cargo feature, rather than special handling in
bootstrap. So, easier to understand, and less likely to get
accidentally broken in the future.
- It works for proc macro crates.
It's a shame it doesn't work for rustc-specific lints, as the comments
explain.
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the
prelude instead of importing or qualifying them.
These functions were added to all preludes in Rust 1.80.
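For example (illustrative snippet, not actual compiler code), this now compiles without any import or `std::mem::` qualification:

```rust
// `size_of` and `align_of` come from the Rust 1.80 prelude.
fn u64_layout() -> (usize, usize) {
    (size_of::<u64>(), align_of::<u64>())
}
```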
Allow `IndexSlice` to be indexed by ranges.
This comes with some annoyances, as the index type can no longer be inferred from indexing expressions. The biggest offender for this is `IndexVec::from_fn_n(|idx| ..., n)`, where the index type won't be inferred from the call site or any index expressions inside the closure.
My main use case for this is mapping a `Place` to `Range<Idx>` for value tracking where the range represents all the values the place contains.
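A hedged sketch of the annotation that becomes necessary, using a made-up index type (`ValueIdx`) and assuming the `rustc_index` API:

```rust
use rustc_index::{newtype_index, IndexVec};

newtype_index! {
    struct ValueIdx {}
}

fn zeroes(n: usize) -> IndexVec<ValueIdx, usize> {
    // Nothing in the closure body indexes with `idx`, so the index
    // type can no longer be inferred and must be spelled out.
    IndexVec::from_fn_n(|_idx: ValueIdx| 0, n)
}
```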
This is similar to the existing `union`, except that bits in the RHS are
negated before being incorporated into the LHS.
Currently only `DenseBitSet` is supported. Supporting other bitset types is
possible, but non-trivial, and currently isn't needed.
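A hedged sketch of the semantics on raw words (the real `DenseBitSet` code also has to clear the unused bits of the final word so the domain size is respected):

```rust
// For each word, OR in the negation of the corresponding RHS word.
fn union_not_words(lhs: &mut [u64], rhs: &[u64]) {
    assert_eq!(lhs.len(), rhs.len());
    for (l, r) in lhs.iter_mut().zip(rhs) {
        *l |= !*r;
    }
}
```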
update `rustc_index_macros` feature handling
It seems that cargo can't [conditionally propagate features](214587c89d/compiler/rustc_index/Cargo.toml (L20)) if they were enabled by default on the target crate, but disabled with `default-features = false` in the current/parent crate.
Fixes #118129
A `ChunkedBitSet` has to be at least 2048 bits for it to outperform a
`BitSet`, because that's the chunk size. The largest `SparseBitMatrix`
encountered when compiling the compiler and the entire rustc-perf
benchmark suite is less than 600 bits.
This change is a tiny perf win, but the motivation is more about
avoiding uses of `ChunkedBitSet` outside of `MixedBitSet`.
The test change is necessary to avoid hitting the `<BitSet<T> as
BitRelations<ChunkedBitSet<T>>>::subtract` method that has
`unimplemented!` in its body and isn't otherwise used.
It just uses `BitSet` for small/medium sizes (<= 2048 bits) and
`ChunkedBitSet` for larger sizes. This is good because `ChunkedBitSet`
is slow and memory-hungry at smaller sizes.
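Structurally it amounts to something like the following sketch (simplified; the real type lives in `rustc_index::bit_set`):

```rust
use rustc_index::bit_set::{BitSet, ChunkedBitSet};

// The representation is chosen once, from the domain size given at
// construction time.
enum MixedBitSetSketch<T> {
    Small(BitSet<T>),        // domain_size <= 2048 bits
    Large(ChunkedBitSet<T>), // domain_size > 2048 bits
}
```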
The current implementation is slow because it does an operation for
every bit in the set, even zero bits. So if you have a large bitset with
many zero bits (which is common) it's very slow.
This commit improves the iterator to skip over `Zeros` chunks in a
single step, and uses the fast `BitIter` for `Mixed` chunks. It also
removes the existing `fold` implementation, which was only there because
the old iterator was slow.
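A hedged, simplified sketch of the idea (the chunk layout and sizes here are illustrative, not the real `Chunk` type):

```rust
const CHUNK_BITS: usize = 2048;

enum Chunk {
    Zeros(u16),           // no bits set in this chunk
    Ones(u16),            // every bit in this chunk is set
    Mixed(u16, Vec<u64>), // backing words, some bits set
}

fn set_bits(chunks: &[Chunk]) -> Vec<usize> {
    let mut out = Vec::new();
    for (ci, chunk) in chunks.iter().enumerate() {
        let base = ci * CHUNK_BITS;
        match chunk {
            // An all-zero chunk is skipped in a single step.
            Chunk::Zeros(_) => {}
            Chunk::Ones(size) => out.extend(base..base + *size as usize),
            // Only `Mixed` chunks are scanned, and only their set bits,
            // much like the word-based `BitIter` in the real code.
            Chunk::Mixed(_, words) => {
                for (wi, &word) in words.iter().enumerate() {
                    let mut w = word;
                    while w != 0 {
                        let bit = w.trailing_zeros() as usize;
                        out.push(base + wi * 64 + bit);
                        w &= w - 1; // clear the lowest set bit
                    }
                }
            }
        }
    }
    out
}
```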
- Fix a typo in a comment.
- Remove unnecessary `Chunk::` qualifiers.
- Rename `ChunkedBitIter::bitset` as `ChunkedBitIter::bit_set`, because
`bit_set` is the form used everywhere else.
- Avoid some unnecessary local variables.
`ChunkedBitSet::is_empty` currently does an unnecessary check. This
commit removes that check and adds clarifying comments and an assertion
that demonstrate why it's unnecessary.
take 2
open up coroutines
tweak the wording
the lint works up until 2021
We were missing one case, for ADTs, which was
causing `Result` to yield incorrect results.
only include field spans with significant types
deduplicate and eliminate field spans
switch to emit spans to impl Drops
Co-authored-by: Niko Matsakis <nikomat@amazon.com>
collect drops instead of taking liveness diff
apply some suggestions and add explanatory notes
small fix on the cache
let the query recurse through coroutine
new suggestion format with extracted variable name
fine-tune the drop span and messages
bugfix on runtime borrows
tweak message wording
filter out ecosystem types earlier
apply suggestions
clippy
check lint level at session level
further restrict applicability of the lint
translate bid into nop for stable mir
detect cycle in type structure
Fix a few relative paths in rustc doc
## Changes
- Don't inline the docs for some re-exported structs whose docs contain relative paths.
## Context
See #124028.
- Most of the relative links in the rustc docs exist because of circular imports: syntax like `[MyType]: rustc_foo::bar` is difficult to use when `rustc_xxx` cannot be imported due to a circular dependency.
- Here, I disable doc inlining for these re-exports. I think that's fine for re-exported items in `hir::*`.
- There are a few more relative links in other `rustc` crates, but they are not addressed in this PR: those items are not re-exported, so their relative paths still work.
Closes #124028.
r? `@fmease`
Let me know if I missed anything or if there's a better way to address this issue.
Fix double handling in `collect_tokens`
Double handling of AST nodes can occur in `collect_tokens`. This is when an inner call to `collect_tokens` produces an AST node, and then an outer call to `collect_tokens` produces the same AST node. This can happen in a few places, e.g. expression statements where the statement delegates `HasTokens` and `HasAttrs` to the expression. It will also happen more after #124141.
This PR fixes some double handling cases that cause problems, including #129166.
r? `@petrochenkov`
- `new_zeroed` variants move to `new_zeroed_alloc`
- the `write` fn moves to `box_uninit_write`
The remainder will be stabilized in upcoming patches, as
it was decided to only stabilize `uninit*` and `assume_init`.
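For reference, a hedged illustration of the surface being stabilized (`new_uninit` plus `assume_init`); the function is made up, and the zeroed constructors and `Box::write` remain unstable under their new gate names:

```rust
use std::mem::MaybeUninit;

fn boxed_forty_two() -> Box<u32> {
    let mut slot: Box<MaybeUninit<u32>> = Box::new_uninit();
    *slot = MaybeUninit::new(42); // initialize the value in place
    // SAFETY: the value was fully initialized just above.
    unsafe { slot.assume_init() }
}
```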
By keeping track of attributes that have been previously processed.
This fixes the `macro-rules-derive-cfg.stdout` test, and is necessary
for #124141 which removes nonterminals.
Also shrink the `SmallVec` inline size used in `IntervalSet`. 2 gives
slightly better perf than 4 now that there's an `IntervalSet` in
`Parser`, which is cloned reasonably often.
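A hedged sketch of the shrunken field, assuming `IntervalSet` keeps its intervals as `(u32, u32)` pairs in a `SmallVec` (the surrounding type is simplified):

```rust
use smallvec::SmallVec;
use std::marker::PhantomData;

struct IntervalSetSketch<I> {
    // Inline capacity of 2 instead of 4: cheaper to clone when the
    // set is usually small, as in the `Parser`.
    map: SmallVec<[(u32, u32); 2]>,
    domain: usize,
    _marker: PhantomData<I>,
}
```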