The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
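As a rough illustration, the pattern looks like this (a minimal self-contained
sketch with stand-in types; rustc itself uses its `static_assert_size!` macro,
and the 16-byte limit here is illustrative, not the real asserted size):

```rust
// Stand-ins for the real rustc types, just to show the shape of the change.
enum CoverageKind {
    CounterIncrement { id: u32 },
    ExpressionUsed { id: u32 },
}

enum StatementKind {
    Coverage(CoverageKind), // no longer `Coverage(Box<Coverage>)`
    Nop,
}

// Fails to compile if `StatementKind` accidentally grows past the limit.
const _: () = assert!(std::mem::size_of::<StatementKind>() <= 16);
```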
These assertions detect situations where a BCB node would have both a physical
counter and one or more in-edge counters/expressions.
For most BCBs, that situation would indicate an implementation bug. However,
it's perfectly fine for a BCB that has an edge looping back to itself.
Given the complexity and risk involved in fixing the assertions, and the fact
that nothing relies on them actually being true, this patch just removes them
instead.
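For context, the removed checks enforced an invariant along these lines
(a hedged, self-contained sketch; all names are stand-ins, not the actual
rustc internals):

```rust
/// Stand-in for a BCB's counter bookkeeping; not the real rustc type.
struct BcbCounters {
    physical_counter: Option<u32>,
    in_edge_counters: Vec<u32>,
}

fn assert_counter_invariant(bcb: &BcbCounters) {
    // This fires spuriously for a BCB with a self-loop edge, where having
    // both kinds of counter is legitimate, which is why the assertions were
    // removed rather than fixed.
    debug_assert!(
        bcb.physical_counter.is_none() || bcb.in_edge_counters.is_empty(),
        "BCB has both a physical counter and in-edge counters/expressions",
    );
}
```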
This will allow MIR building to check whether a function is eligible for
coverage instrumentation, and avoid collecting branch coverage info if it is
not.
coverage: Rename `is_closure` to `is_hole`
Extracted from #121433, since I was having second thoughts about some of the other changes bundled in that PR, but these changes are still fine.
---
When refining covspans, we don't specifically care which ones represent closures; we just want to know which ones represent "holes" that should be carved out of other spans and then discarded.
(Closures are currently the only source of hole spans, but in the future we might want to also create hole spans for nested items and inactive `#[cfg(..)]` regions.)
``@rustbot`` label +A-code-coverage
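A sketch of the shape of the rename (hypothetical types and field names, not
the actual covspan code):

```rust
/// Hypothetical stand-in for a span extracted during coverage refinement.
struct Covspan {
    /// Renamed from `is_closure`: refinement only cares that this span is a
    /// "hole" to be carved out of overlapping spans and then discarded, not
    /// that it specifically came from a closure.
    is_hole: bool,
}

fn discard_holes(spans: &mut Vec<Covspan>) {
    // Holes are dropped after being used to split the spans they overlap.
    spans.retain(|s| !s.is_hole);
}
```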
When we try to extract coverage-relevant spans from MIR, sometimes we see MIR
statements/terminators whose spans cover the entire function body. Those spans
tend to be unhelpful for coverage purposes, because they often represent
compiler-inserted code, e.g. the implicit return value of `()`.
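The resulting filtering amounts to something like this (a minimal sketch with
toy types; the real extraction code is more involved):

```rust
/// Toy stand-in for a source span (rustc's `Span` is more involved).
#[derive(Clone, Copy, PartialEq, Eq)]
struct Span {
    lo: u32,
    hi: u32,
}

/// Spans that cover the entire function body usually come from
/// compiler-inserted code (e.g. the implicit trailing `()`), so they are
/// skipped rather than turned into coverage spans.
fn is_unhelpful_span(span: Span, body_span: Span) -> bool {
    span == body_span
}
```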
If we only check for duplicate spans when `prev` is unmodified, we reduce the
number of situations that `update_pending_dups` needs to handle.
This could potentially change the coverage spans we produce in some unknown
corner cases, but none of our current coverage tests indicate any change.
coverage: Split out counter increment sites from BCB node/edge counters
This makes it possible for two nodes/edges in the coverage graph to share the same counter, without causing the instrumentor to inject unwanted duplicate counter-increment statements.
---
``@rustbot`` label +A-code-coverage
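A hedged sketch of the separation (simplified stand-in types; the real rustc
types carry more information):

```rust
// Stand-in index types; rustc uses dedicated newtype indices for these.
type Bcb = usize;
type CounterId = u32;

/// Where a physical counter-increment statement must be injected.
enum CounterIncrementSite {
    Node { bcb: Bcb },
    Edge { from_bcb: Bcb, to_bcb: Bcb },
}

/// Each increment site gets exactly one counter, but several nodes/edges can
/// map to the same counter ID without triggering duplicate increments.
struct CounterAssignments {
    increment_sites: Vec<CounterIncrementSite>, // one entry per counter ID
    node_counters: Vec<Option<CounterId>>,      // indexed by BCB
}
```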
This sidesteps the normal span refinement code in cases where we know that we
are only dealing with the special signature span that represents having called
an async function.
The query accepts arbitrary `DefId`s, not just owner `DefId`s.
The return type can be an `Option`, because if there are no nodes it doesn't matter whether that's due to `NonOwner` or `Phantom`.
Also rename the query to `opt_hir_owner_nodes`.
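A hedged, simplified sketch of the resulting shape (the real signature
involves `TyCtxt`, arenas, and lifetimes omitted here):

```rust
/// Simplified stand-in for rustc's per-owner HIR entry.
enum MaybeOwner<T> {
    Owner(T),
    NonOwner,
    Phantom,
}

fn opt_hir_owner_nodes<T>(entry: &MaybeOwner<T>) -> Option<&T> {
    match entry {
        MaybeOwner::Owner(nodes) => Some(nodes),
        // A single `None` covers both absent cases; callers don't need to
        // distinguish NonOwner from Phantom.
        MaybeOwner::NonOwner | MaybeOwner::Phantom => None,
    }
}
```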
coverage: Dismantle `Instrumentor` and flatten span refinement
This is a combination of two refactorings that are unrelated, but would otherwise have a merge conflict.
No functional changes, other than a small tweak to debug logging as part of rearranging some functions.
Ignoring whitespace is highly recommended, since most of the modified lines have just been reindented.
---
The first change is to dismantle `Instrumentor` into ordinary functions.
This is one of those cases where encapsulating several values into a struct ultimately hurts more than it helps. With everything stored as local variables in one main function, and passed explicitly into helper functions, it's easier to see what is used where, and make changes as necessary.
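Schematically, the pattern looks like this (hypothetical, heavily condensed
names, with stubs to keep the sketch self-contained):

```rust
// Stand-in types and helpers, not the real rustc items.
struct Body;
struct CoverageGraph;
struct Covspans;

fn build_coverage_graph(_body: &Body) -> CoverageGraph { CoverageGraph }
fn extract_spans(_body: &Body, _graph: &CoverageGraph) -> Covspans { Covspans }
fn inject_statements(_body: &mut Body, _graph: &CoverageGraph, _spans: &Covspans) {}

// Instead of an `Instrumentor` struct holding all of this as fields, the main
// function owns the values as locals and passes each helper exactly what it
// needs, making the data flow explicit.
fn instrument_function(body: &mut Body) {
    let graph = build_coverage_graph(body);
    let spans = extract_spans(body, &graph);
    inject_statements(body, &graph, &spans);
}
```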
---
The second change is to flatten the functions for extracting/refining coverage spans.
Consolidating this code into flatter functions reduces the amount of pointer-chasing required to read and modify it.
coverage: Don't instrument `#[automatically_derived]` functions
This PR makes the coverage instrumentor detect and skip functions that have [`#[automatically_derived]`](https://doc.rust-lang.org/reference/attributes/derive.html#the-automatically_derived-attribute) on their enclosing impl block.
Most notably, this means that methods generated by built-in derives (e.g. `Clone`, `Debug`, `PartialEq`) are now ignored by coverage instrumentation, and won't appear as executed or not-executed in coverage reports.
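For example, after this change the derived methods here would no longer show
up in coverage reports (a user-facing sketch, not compiler internals):

```rust
// The built-in derives expand to impl blocks marked `#[automatically_derived]`,
// so their generated methods are now skipped by `-Cinstrument-coverage`.
#[derive(Clone, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

// Hand-written impls are unaffected and remain instrumented as usual.
impl Point {
    fn manhattan(&self) -> i32 {
        self.x.abs() + self.y.abs()
    }
}
```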
This is a noticeable change in user-visible behaviour, but overall I think it's a net improvement. For example, we've had a few user requests for this sort of change (e.g. #105055, https://github.com/rust-lang/rust/issues/84605#issuecomment-1902069040), and I believe it's the behaviour that most users will expect/prefer by default.
It's possible to imagine situations where users would want to instrument these derived implementations, but I think it's OK to treat that as an opportunity to consider adding more fine-grained option flags to control the details of coverage instrumentation, while leaving this new behaviour as the default.
(Also note that while `-Cinstrument-coverage` is a stable feature, the exact details of coverage instrumentation are allowed to change. So we *can* make this change; the main question is whether we *should*.)
Fixes #105055.
coverage: Never emit improperly-ordered coverage regions
If we emit a coverage region that is improperly ordered (end < start), `llvm-cov` will fail with `coveragemap_error::malformed`, which is inconvenient for users and also very hard to debug.
Ideally we would fix the root causes of these situations, but they tend to occur in very obscure edge-case scenarios (often involving nested macros), and we don't always have a good MCVE to work from. So it makes sense to also have a catch-all check that will prevent improperly-ordered regions from ever being emitted.
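The catch-all check amounts to something like this (a minimal sketch with toy
types and assumed names, not the actual emitter code):

```rust
/// Toy stand-in for a source position within a file.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct Pos {
    line: u32,
    col: u32,
}

/// Returns only the regions that are safe to emit, dropping any whose end
/// precedes its start; emitting those would make `llvm-cov` fail with
/// `coveragemap_error::malformed`.
fn well_ordered(regions: Vec<(Pos, Pos)>) -> Vec<(Pos, Pos)> {
    regions.into_iter().filter(|(start, end)| start <= end).collect()
}
```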
---
This is mainly aimed at resolving #119453. We don't have a specific way to reproduce it, which is why I haven't been able to add a test case in this PR. But based on the information provided in that issue, this change seems likely to avoid the error in `llvm-cov`.
``@rustbot`` label +A-code-coverage
This also switches from `split_off(0)` to `std::mem::take` when emptying the
accumulated list of blocks, because `split_off(0)` handles capacity in a way
that is unintuitive when used in a loop.
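The difference is easiest to see in isolation (a standalone example of the
standard-library behavior, not rustc code):

```rust
fn main() {
    let mut acc: Vec<u32> = (0..4).collect();

    // `split_off(0)` copies the elements into a new allocation and leaves
    // `acc` empty but still holding its old capacity, which quietly keeps
    // the buffer alive across loop iterations.
    let flushed = acc.split_off(0);
    assert_eq!(flushed.len(), 4);
    assert!(acc.is_empty());
    assert!(acc.capacity() >= 4);

    // `std::mem::take` hands over the existing allocation and replaces `acc`
    // with a fresh empty Vec (capacity 0): no copy, no leftover capacity.
    acc = flushed;
    let flushed = std::mem::take(&mut acc);
    assert_eq!(flushed.len(), 4);
    assert!(acc.is_empty());
    assert_eq!(acc.capacity(), 0);
}
```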
The old loop had two separate places where it would flush the accumulated list
of straight-line blocks into a new BCB. One occurred at the start of the loop
body when the current block couldn't be chained into, and the other occurred at
the end of the loop body when the current block couldn't be chained from.
The latter check can be hoisted to the start of the loop body by making it
examine the previous block (which has added itself to the list) instead of the
current block. With that done, we can combine the two separate flushes into one
flush with two possible trigger conditions.
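In sketch form (hypothetical names, simplified control flow, with stubs to
keep it self-contained):

```rust
// Stand-ins to keep the sketch self-contained.
type BasicBlock = usize;
fn can_chain_from(_bb: BasicBlock) -> bool { true }
fn can_chain_into(_bb: BasicBlock) -> bool { true }
fn flush_into_new_bcb(_blocks: Vec<BasicBlock>) {}

fn build_bcbs(traversal: impl Iterator<Item = BasicBlock>) {
    let mut acc: Vec<BasicBlock> = Vec::new();
    for bb in traversal {
        // Single flush point with two trigger conditions: the previous block
        // (already in `acc`) can't be chained from, or the current block
        // can't be chained into.
        if let Some(&prev) = acc.last() {
            if !can_chain_from(prev) || !can_chain_into(bb) {
                flush_into_new_bcb(std::mem::take(&mut acc));
            }
        }
        acc.push(bb);
    }
    if !acc.is_empty() {
        flush_into_new_bcb(acc);
    }
}
```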
Filtering out unreachable successors is only needed by the main graph traversal
loop, so we can move the filtering step into that loop instead, eliminating the
need to pass the MIR body into `bcb_filtered_successors`.
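Roughly, the filtering moves from the successors helper to its only caller
(assumed names and a toy edge list, not the real code):

```rust
use std::collections::HashSet;

type BasicBlock = usize;

/// The helper now returns successors without reachability filtering, so it
/// no longer needs the MIR body to decide which successors to keep.
fn bcb_successors(bb: BasicBlock, edges: &[(BasicBlock, BasicBlock)]) -> Vec<BasicBlock> {
    edges.iter().filter(|(from, _)| *from == bb).map(|&(_, to)| to).collect()
}

fn traverse(start: BasicBlock, edges: &[(BasicBlock, BasicBlock)], reachable: &HashSet<BasicBlock>) {
    let mut visited = HashSet::new();
    let mut stack = vec![start];
    while let Some(bb) = stack.pop() {
        if !visited.insert(bb) {
            continue;
        }
        // Unreachable successors are filtered out here, in the one traversal
        // loop that actually needs the filtering.
        for succ in bcb_successors(bb, edges) {
            if reachable.contains(&succ) {
                stack.push(succ);
            }
        }
    }
}
```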
This is less elegant in some ways, since we no longer visit a BCB's spans as a
batch, but it will make it much easier to add support for other kinds of
coverage mapping regions (e.g. branch regions or gap regions).