`lookup_debug_loc` finds a file, line, and column, which requires two
binary searches. But this call site only needs the file.
This commit replaces the call with `lookup_source_file`, which does a
single binary search.
`lookup_debug_loc` calls `SourceMap::lookup_line`, which does a binary
search over the files, and then a binary search over the lines within
the found file. It then calls `SourceFile::line_begin_pos`, which redoes
the binary search over the lines within the found file.
This commit removes the second binary search over the lines, instead
getting the line starting pos directly using the result of the first
binary search over the lines.
(And likewise for `get_span_loc`, in the cranelift backend.)
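To illustrate the shape of both changes, here is a minimal standalone sketch (the `File`/`SourceMap` types and fields below are simplifications for illustration, not the real rustc ones): do one binary search per level, and read the line start position out of the result of the line search rather than redoing it.
```rust
/// Toy stand-in for rustc's source map: files sorted by starting
/// position, with each file storing the start position of every line.
struct File {
    start: u32,
    lines: Vec<u32>, // sorted, absolute line start positions
}

struct SourceMap {
    files: Vec<File>, // sorted by `start`
}

impl SourceMap {
    /// One binary search over the files: all that's needed when only
    /// the file matters (the `lookup_source_file` case).
    fn lookup_file(&self, pos: u32) -> &File {
        let idx = self.files.partition_point(|f| f.start <= pos) - 1;
        &self.files[idx]
    }

    /// File, line index, and line start position with one binary search
    /// per level. The line start is read straight out of the result of
    /// the line search instead of being found by a second search.
    fn lookup_line(&self, pos: u32) -> (&File, usize, u32) {
        let file = self.lookup_file(pos);
        let line = file.lines.partition_point(|&start| start <= pos) - 1;
        (file, line, file.lines[line])
    }
}

fn main() {
    let sm = SourceMap {
        files: vec![
            File { start: 0, lines: vec![0, 10, 25] },
            File { start: 40, lines: vec![40, 55] },
        ],
    };
    let (_file, line, line_start) = sm.lookup_line(30);
    assert_eq!((line, line_start), (2, 25));
}
```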
This commit introduces a minimum CGU size, because tiny CGUs make
compilation less efficient *and* result in worse generated code.
We don't do this when the number of CGUs is explicitly given, because
there are times when the requested number is very important, as
described in some comments within the commit. So the commit also
introduces a `CodegenUnits` type that distinguishes between default
values and user-specified values.
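For illustration, here is a minimal sketch of what such a type can look like (the variant names and helper methods are assumptions, not necessarily the real definition):
```rust
/// Sketch of a CGU-count setting that remembers whether the value came
/// from the user (`-Ccodegen-units=N`) or from a compiler default.
#[derive(Copy, Clone, Debug)]
enum CodegenUnits {
    /// Set explicitly by the user; must be respected as given.
    User(usize),
    /// A default chosen by the compiler; free to be reduced further,
    /// e.g. by merging CGUs that fall below a minimum size.
    Default(usize),
}

impl CodegenUnits {
    fn as_usize(self) -> usize {
        match self {
            CodegenUnits::User(n) | CodegenUnits::Default(n) => n,
        }
    }

    /// Only default values may be shrunk by size-based merging.
    fn allows_merging_small_cgus(self) -> bool {
        matches!(self, CodegenUnits::Default(_))
    }
}

fn main() {
    let from_flag = CodegenUnits::User(16);
    let chosen = CodegenUnits::Default(16);
    assert!(!from_flag.allows_merging_small_cgus());
    assert!(chosen.allows_merging_small_cgus());
    assert_eq!(from_flag.as_usize(), chosen.as_usize());
}
```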
This change has a roughly neutral effect on walltimes across the
rustc-perf benchmarks; there are some speedups and some slowdowns. But
it has significant wins for most other metrics on numerous benchmarks,
including instruction counts, cycles, binary size, and max-rss. It also
reduces parallelism, which is good for reducing jobserver competition
when multiple rustc processes are running at the same time. It's smaller
benchmarks that benefit the most; larger benchmarks already have CGUs
that are all larger than the minimum size.
Here are some example before/after CGU sizes for opt builds.
- html5ever
- CGUs: 16, mean size: 1196.1, sizes: [3908, 2992, 1706, 1652, 1572,
1136, 1045, 948, 946, 938, 579, 471, 443, 327, 286, 189]
- CGUs: 4, mean size: 4396.0, sizes: [6706, 3908, 3490, 3480]
- libc
- CGUs: 12, mean size: 35.3, sizes: [163, 93, 58, 53, 37, 8, 2 (x6)]
- CGUs: 1, mean size: 424.0, sizes: [424]
- tt-muncher
- CGUs: 5, mean size: 1819.4, sizes: [8508, 350, 198, 34, 7]
- CGUs: 1, mean size: 9075.0, sizes: [9075]
Note that CGUs of size 100,000+ aren't unusual in larger programs.
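For illustration, here is a rough sketch of the kind of size-based merging described above; the threshold and the merge-the-two-smallest strategy are assumptions for the example, not necessarily what the compiler actually does:
```rust
/// Merge the two smallest CGUs until every CGU reaches `min_size`
/// (or only one CGU remains). Sizes are in the same arbitrary units
/// as the examples above.
fn merge_small_cgus(mut sizes: Vec<u64>, min_size: u64) -> Vec<u64> {
    loop {
        sizes.sort_unstable_by(|a, b| b.cmp(a)); // descending
        let len = sizes.len();
        if len <= 1 || sizes[len - 1] >= min_size {
            return sizes;
        }
        // Combine the two smallest CGUs into one.
        let smallest = sizes.pop().unwrap();
        *sizes.last_mut().unwrap() += smallest;
    }
}

fn main() {
    // The libc example above: 12 CGUs, many of them tiny.
    let sizes = vec![163, 93, 58, 53, 37, 8, 2, 2, 2, 2, 2, 2];
    println!("{:?}", merge_small_cgus(sizes, 200));
}
```
With the (arbitrary) threshold of 200 used here, this merges the libc example down to a single CGU of size 424.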
Fix dependency tracking for debugger visualizers
This PR fixes dependency tracking for debugger visualizer files by changing the `debugger_visualizers` query to an `eval_always` query that scans the AST while it is still available. This way the set of visualizer files is already available when dep-info is emitted. Since the query is turned into an `eval_always` query, dependency tracking will now reliably detect changes to the visualizer script files themselves.
TODO:
- [x] perf.rlo
- [x] Needs a bit more documentation in some places
- [x] Needs regression test for the incr. comp. case
Fixes https://github.com/rust-lang/rust/issues/111226
Fixes https://github.com/rust-lang/rust/issues/111227
Fixes https://github.com/rust-lang/rust/issues/111295
r? `@wesleywiser`
cc `@gibbyfree`
Only depend on CFG_VERSION in rustc_interface
This avoids having to rebuild the whole compiler on each commit when `omit-git-hash = false`.
cc https://github.com/rust-lang/rust/issues/76720 - this won't fix it, and I'm not suggesting we turn this on by default, but it will make it less painful for people who do have `omit-git-hash` on as a workaround.
When we're adding a method to a type DIE, we only want a DW_AT_declaration
there, because LLVM LTO can't unify type definitions when a child DIE is a
full subprogram definition. Now the subprogram definition gets added at the
CU level with a specification link back to the abstract declaration.
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than to run them through a varint encoding. This PR adds two new types, `Hash64` and `Hash128`, which are produced by `StableHasher` and replace every case where a `u64` or `u128` is stored to represent a hash.
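As a standalone illustration of why (the LEB128 length function below is written out for the example; rustc has its own encoder):
```rust
/// Unsigned LEB128 uses 7 bits per byte, so a u64 hash with its high
/// bits set takes 10 bytes, and a u128 hash takes up to 19.
fn leb128_len(mut value: u128) -> usize {
    let mut len = 1;
    while value >= 0x80 {
        value >>= 7;
        len += 1;
    }
    len
}

fn main() {
    let hash64: u64 = 0xdead_beef_dead_beef; // high bits set, as hashes usually have
    let hash128: u128 = u128::from(hash64) << 64 | u128::from(hash64);

    // Varint encoding: length depends on the value, and hash bits are
    // roughly uniform, so most hashes hit the worst-case length.
    assert_eq!(leb128_len(u128::from(hash64)), 10);
    assert_eq!(leb128_len(hash128), 19);

    // Encoding the raw bytes instead is fixed-size and cheaper to
    // encode/decode: 8 bytes for a Hash64, 16 for a Hash128.
    assert_eq!(hash64.to_le_bytes().len(), 8);
    assert_eq!(hash128.to_le_bytes().len(), 16);
}
```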
Distribution of the byte lengths of LEB128 encodings, from `x build --stage 2` with `incremental = true`:
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining lengths of 9 or 10 bytes and 18 or 19 bytes come from `u64` and `u128` values, respectively, that have their high bits set. As far as I can tell, these come primarily from `SwitchTargets`.
`-Cdebuginfo=1` was never line-tables-only, and it cannot be made so
due to backwards-compatibility concerns. This has been clarified, and
an option for line tables only has been added. Additionally, an option
for line-info directives only has been added, which some targets need.
The debug info options should now behave the same as Clang's debug
info options.
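A rough sketch of the resulting set of levels (the enum and the accepted spellings below are assumptions for illustration, not the authoritative names):
```rust
/// Sketch of the debug info levels after this change.
#[derive(Copy, Clone, Debug, PartialEq)]
enum DebugInfo {
    None,
    LineDirectivesOnly,
    LineTablesOnly,
    Limited, // what `-Cdebuginfo=1` has always meant
    Full,
}

fn parse_debuginfo(value: &str) -> Option<DebugInfo> {
    Some(match value {
        "0" | "none" => DebugInfo::None,
        "line-directives-only" => DebugInfo::LineDirectivesOnly,
        "line-tables-only" => DebugInfo::LineTablesOnly,
        "1" | "limited" => DebugInfo::Limited,
        "2" | "full" => DebugInfo::Full,
        _ => return None,
    })
}

fn main() {
    // `1` keeps its old meaning; the new levels get their own names.
    assert_eq!(parse_debuginfo("1"), Some(DebugInfo::Limited));
    assert_eq!(parse_debuginfo("line-tables-only"), Some(DebugInfo::LineTablesOnly));
}
```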
And while doing the updates for that, this also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.
There are more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.
Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
The first PR for https://github.com/rust-lang/compiler-team/issues/606
This is just the move-and-rename, because it's plenty big-and-bitrotty already. Future PRs will start using `FieldIdx` more broadly, and concomitantly removing `FieldIdx::new`s.
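For context, a standalone sketch of the idea behind `FieldIdx` (the real type is generated by rustc's index-newtype machinery; this plain version is just illustrative):
```rust
/// A dedicated index type for struct/variant fields, so a field position
/// can't be silently confused with any other `usize` index.
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)]
pub struct FieldIdx(u32);

impl FieldIdx {
    pub fn new(idx: usize) -> Self {
        FieldIdx(u32::try_from(idx).expect("field index overflowed u32"))
    }

    pub fn as_usize(self) -> usize {
        self.0 as usize
    }
}

fn main() {
    // e.g. the second field of a struct
    let f = FieldIdx::new(1);
    assert_eq!(f.as_usize(), 1);
}
```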
debuginfo: Get pointer size/align from tcx.data_layout instead of layout_of
This avoids some type interning and a query execution. It also just makes the code simpler.
...and remove it from `PointeeInfo`, which isn't meant for this.
There are still various places (marked with FIXMEs) that assume all pointers
have the same size and alignment. Fixing this requires parsing non-default
address spaces in the data layout string, which will be done in a followup.
Pass 128-bit C-style enum enumerator values to LLVM
Pass the full 128 bits of C-style enum enumerators through to LLVM. This means that debuginfo for C-style repr128 enums is now emitted correctly for DWARF platforms (as compared to not being correctly emitted on any platform).
Tracking issue: #56071
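For reference, a sketch of the kind of enum this affects (requires a nightly compiler, since `repr128` is still feature-gated):
```rust
// A C-style enum whose discriminants need the full 128 bits.
#![feature(repr128)]

#[repr(u128)]
enum Big {
    Small = 1,
    // Doesn't fit in 64 bits; debuginfo for this enum previously
    // wasn't emitted correctly on any platform.
    Huge = 1u128 << 100,
}

fn main() {
    let b = Big::Huge;
    // Keep the value alive so the enum shows up in debuginfo.
    std::hint::black_box(b as u128);
}
```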