Bump Fuchsia integration commit
This advances Fuchsia to a checkout from 2025-01-13, which corresponds to a recent Rust roll, and hopefully avoids #135667, where a repository used by the older version of Rust was accidentally archived and broke checking out the prior version.
try-job: x86_64-fuchsia
cc `@ehuss`
new solver: prefer trivial builtin impls
As discussed [on zulip](https://rust-lang.zulipchat.com/#narrow/channel/364551-t-types.2Ftrait-system-refactor/topic/needs_help.3A.20trivial.20builtin.20impls), this PR:
- adds a new `BuiltinImplSource::Trivial` source, and marks the `Sized` builtin impls as trivial
- prefers these trivial builtin impls in `merge_trait_candidates`
The comments can likely be wordsmithed a bit better; I ~~stole~~ was inspired by the old solver's ones. Let me know how you want them improved.
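As a purely illustrative sketch (made-up types, not the rustc-internal ones), the preference added in `merge_trait_candidates` amounts to something like:
```rust
// Illustrative only; the real logic lives in `merge_trait_candidates`.
#[derive(Clone, Copy, PartialEq, Eq)]
enum CandidateSource {
    TrivialBuiltinImpl, // e.g. the builtin `Sized` impls
    OtherBuiltinImpl,
    UserImpl,
}

fn merge_candidates(candidates: &[CandidateSource]) -> Option<CandidateSource> {
    // If a trivial builtin impl applies, prefer it outright: it has no nested
    // requirements, so no other candidate can add useful information.
    if candidates.contains(&CandidateSource::TrivialBuiltinImpl) {
        return Some(CandidateSource::TrivialBuiltinImpl);
    }
    // Otherwise, fall back to the usual merging/ambiguity handling (elided).
    candidates.first().copied()
}
```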
When enabling the new solver for tests, 3 UI tests now pass:
- `regions/issue-26448-1.rs` and its sibling `regions/issue-26448-2.rs` were rejected by the new solver but accepted by the old one
- and `issues/issue-42796.rs` where the old solver emitted some overflow errors in addition to the expected error
(For some reason one of these tests is run-pass, but I can take care of that another day)
r? lcnr
Make tidy warn on unrecognized directives
This PR makes tidy warn on unrecognized directives, as recommended in [the discussion of #130984](https://github.com/rust-lang/rust/issues/130984#issuecomment-2589284620). This is edited from the previous version of this PR, which only warned on `tidy-ignore` and no other tidy directive typos.
Fixes #130984.
``@rustbot`` label A-tidy C-enhancement
Stable Hash: Ignore all HirIds that just identify the node itself
This should provide better incremental caching, but it seems there is more to it.
These IDs also serve no purpose being in the stable hash of the item they refer to; only when referring to *another* item is it important that we hash the `HirId`. So we can at least avoid the cost during stable hashing, even if we don't yet get the benefit of keeping some query caches from being invalidated.
Unsure how to make sure we do this right by construction. Would be nice to do something type-based.
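A purely illustrative sketch of the idea, with made-up types rather than the real `HashStable` machinery:
```rust
use std::hash::{Hash, Hasher};

struct HirId(u64);

struct Item {
    hir_id: HirId,          // identifies this item itself; skipped while hashing it
    referenced: Vec<HirId>, // references to *other* items; these must be hashed
    body_hash: u64,
}

fn stable_hash(item: &Item, hasher: &mut impl Hasher) {
    // Deliberately do NOT hash `item.hir_id`: it only names the node we are
    // already hashing, so it adds nothing but noise (and cache invalidation).
    for id in &item.referenced {
        id.0.hash(hasher);
    }
    item.body_hash.hash(hasher);
}
```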
Rollup of 6 pull requests
Successful merges:
- #131806 (Treat other items as functions for the purpose of type-based search)
- #134980 (Location-sensitive polonius prototype: endgame)
- #135558 (Detect if-else chains with a missing final else in type errors)
- #135594 (fix error for when results in a rustdoc-js test are in the wrong order)
- #135601 (Fix suggestion to convert dereference of raw pointer to ref)
- #135604 (Expand docs for `E0207` with additional example)
r? `@ghost`
`@rustbot` modify labels: rollup
Expand docs for `E0207` with additional example
Add an example to E0207 docs showing how to tie the lifetime of the self type to an associated type in an impl when the trait *doesn't* have a lifetime to begin with.
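A sketch of the pattern being documented (my own example, not necessarily the one added to the docs):
```rust
trait NamedRef {
    type Output;
    fn get(&self) -> Self::Output;
}

struct Wrapper<'a>(&'a str);

impl<'a> NamedRef for Wrapper<'a> {
    // `'a` is constrained because it appears in the self type `Wrapper<'a>`,
    // so using it in the associated type does not trigger E0207 even though
    // the trait itself has no lifetime parameter.
    type Output = &'a str;

    fn get(&self) -> Self::Output {
        self.0
    }
}
```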
CC #135589.
Detect if-else chains with a missing final else in type errors
```
error[E0308]: `if` and `else` have incompatible types
--> $DIR/if-else-chain-missing-else.rs:12:12
|
LL | let x = if let Ok(x) = res {
| ______________-
LL | | x
| | - expected because of this
LL | | } else if let Err(e) = res {
| | ____________^
LL | || return Err(e);
LL | || };
| || ^
| ||_____|
| |_____`if` and `else` have incompatible types
| expected `i32`, found `()`
|
= note: `if` expressions without `else` evaluate to `()`
= note: consider adding an `else` block that evaluates to the expected type
```
We probably want a longer explanation and fewer spans on this case.
Partially addresses #133316.
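For reference, code of roughly this shape (reconstructed from the spans above; it intentionally does not compile) produces the diagnostic:
```rust
fn demo(res: Result<i32, i32>) -> Result<i32, i32> {
    // The second `if let` has no final `else`, so that branch evaluates to `()`,
    // which conflicts with the `i32` produced by the first branch.
    let x = if let Ok(x) = res {
        x
    } else if let Err(e) = res {
        return Err(e);
    };
    Ok(x)
}
```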
Location-sensitive polonius prototype: endgame
This PR sets up the naive location-sensitive analysis end-to-end and replaces the location-insensitive analysis. It's roughly all the in-progress work I wanted to land for the prototype, modulo cleanups I still want to do after the holidays, the polonius debugger, and so on.
Here, we traverse the localized constraint graph, deal with kills and time-traveling (👌), and record the result as loan liveness for the existing scope and active-loans computations.
Then the near future looks like this, especially if the 2025h1 project goal is accepted:
- gradually bringing it up to completion
- analyzing and fixing the few remaining test failures
- going over the *numerous* FIXMEs in this prototype (one of which is related to a hang on a test with millions and millions of constraints)
- trying to see how to lower the impact of the missing NLL liveness optimization on diagnostics and their categorization of local variables and temporaries (the vast majority of the blessed-expectation differences), as well as the couple of ICEs hit when trying to find an NLL constraint to blame for errors
- dealing with the theoretical weaknesses around kills, conflating reachability for the two TCS, etc., which are described ad nauseam in the code
- switching the compare mode to the in-tree implementation, and blessing the diagnostics
- apart from the hang, it's not catastrophically slower on our test suite, so then we can try to enable it on CI
- checking crater, maybe trying to make it faster :3, etc.
I've tried to introduce this PR's work gradually over 4 commits, because it's kind of subtle/annoying, and Niko and I are not completely convinced yet. That one comment explaining the situation is maybe 30% of the PR 😓. Who knew that spacetime reachability and time-traveling could be mind-bending?
I kinda found this late, and the impact on this part of the computation was a bit unexpected to us. A bit more care/thought will be needed here; I've described my plan in the comments, though. In any case, I believe the current implementation is a conservative approximation that shouldn't result in unsoundness, only false positives at worst. So it feels fine for now.
r? ``@jackh726``
---
Fixes #127628, which was an assertion triggered by a difference in loan computation between NLLs and the location-insensitive analysis. That difference doesn't exist anymore, so I've removed this crash test.
Treat other items as functions for the purpose of type-based search
Specifically, constants and statics are treated as nullary functions, and struct fields as unary functions. Functions (along with methods and trait methods) are prioritized over other items, like fields and constants.
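For illustration (hypothetical items, not taken from the PR), this makes items like the following reachable through type-based queries:
```rust
pub struct Config {
    /// Treated like `fn(Config) -> u32` by the search, so a type-based query
    /// such as `Config -> u32` can find this field.
    pub timeout: u32,
}

/// Treated like `fn() -> u32` by the search, so a query such as `-> u32`
/// can find this constant.
pub const MAX_RETRIES: u32 = 3;
```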
Fixes #130204
r? ``@notriddle``
Add gpu-kernel calling convention
The amdgpu-kernel calling convention was reverted in commit f6b21e90d1 (#120495 and https://github.com/rust-lang/rust-analyzer/pull/16463) due to inactivity in the amdgpu target.
Introduce a `gpu-kernel` calling convention that translates to `ptx_kernel` or `amdgpu_kernel`, depending on the target that Rust compiles for.
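A hedged usage sketch, assuming the feature gate is named `abi_gpu_kernel` and a GPU target such as `nvptx64-nvidia-cuda` or amdgpu (details may differ):
```rust
// Sketch only: the feature name and build setup here are assumptions.
#![feature(abi_gpu_kernel)]
#![no_std]

#[no_mangle]
pub extern "gpu-kernel" fn add_one(xs: *mut u32) {
    // With the new calling convention this lowers to `ptx_kernel` on NVPTX
    // targets and to `amdgpu_kernel` on amdgpu targets.
    unsafe { *xs += 1 };
}

// `no_std` crates need a panic handler; a minimal stub for the sketch.
#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}
```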
Tracking issue: #135467
amdgpu target tracking issue: #135024
bootstrap: still require `COMPILETEST_FORCE_STAGE0` for `./x test rustdoc-js --stage 0`
This PR reverts #135375, because through some more testing I found out that `./x test rustdoc-js --stage 0` does not in fact build rustdoc, and all the tests fail. This can't be intended behavior, so we at least require `COMPILETEST_FORCE_STAGE0` to make it less likely to run `rustdoc-js --stage 0` by accident.
The problem of `--stage 0` not working at all for the rustdoc-js test suite is tracked at #135603.
cc `@lolbinarycat`
r? bootstrap
Use trait definition cycle detection for trait alias definitions, too
Fixes #133901
In general doing this for `All` is not right, but this code path is specifically for traits and trait aliases, and there we only ever use `All` for trait aliases.
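For illustration only (my own example, not necessarily the one from the linked issue), the kind of cyclic definition involved looks like:
```rust
// Hypothetical example: two mutually recursive trait aliases. The cycle is now
// detected and reported the same way cyclic trait definitions are.
#![feature(trait_alias)]

trait A = B;
trait B = A; // cycle: `A` and `B` are defined in terms of each other
```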
Add license-metadata.json to rustc-src tarball.
Adds a license-metadata.json to the source tarball.
This file was reported as missing in a comment on #133461, and its absence prevents building the compiler from the source tarball (unless you regenerate it yourself, which is non-obvious and requires `reuse` to be installed).
r? Kobzol
resolve symlinks of LLVM tool binaries before copying them
There is a chance that these tools are installed from an external LLVM, over which we have no control. If any of these tools use symlinks, they will fail during tarball distribution. This change makes the copying process resolve symlinks just before placing the tools into the destination path.
Fixes https://github.com/rust-lang/rust/issues/135554
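A minimal sketch of the idea (not the actual bootstrap code), assuming a plain `std::fs`-based copy:
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Copy `src` to `dst`, following symlinks so the destination always gets the
/// real binary rather than a dangling link.
fn copy_resolved(src: &Path, dst: &Path) -> io::Result<()> {
    // `canonicalize` resolves symlinks (and `.`/`..`) to the real file path.
    let real_src = fs::canonicalize(src)?;
    fs::copy(&real_src, dst)?;
    Ok(())
}
```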
Update docs for `-Clink-dead-code` to discourage its use
The `-Clink-dead-code` flag was originally added way back in #31368, apparently to help improve the output of some older forms of code coverage measurement, and also to address some use cases that wanted to suppress linker flags like `-dead_strip` and `--gc-sections`.
In the past it might have also been useful in conjunction with `-Cinstrument-coverage`, but subsequent improvements to coverage instrumentation have made it unnecessary there.
[It is also currently used by cargo-fuzz by default](https://github.com/rust-fuzz/cargo-fuzz/issues/391), for reasons that are possibly no longer relevant.
---
The flag currently does more than its name suggests, affecting not just linker flags but also monomorphization decisions. It has also contributed to ICEs (e.g. #135515) that would not have occurred without `-Clink-dead-code`.
---
For now, this PR just updates the documentation to be more realistic about what the flag does, and when it should be used (approximately never). In the future, it might be worth looking into properly deprecating this flag, and perhaps making it a no-op if feasible.
coverage: Completely overhaul counter assignment, using node-flow graphs
The existing code for choosing where to put physical counter-increments gets the job done, but is very ad-hoc and hard to modify without introducing tricky regressions.
This PR replaces all of that with a more principled approach, based on the algorithm described in "Optimal measurement points for program frequency counts" (Knuth & Stevenson, 1973).
---
We start by ensuring that our graph has “balanced flow”, i.e. each node's flow (execution count) is equal to the sum of all its in-edge flows, and equal to the sum of all its out-edge flows. That isn't naturally true of control-flow graphs, so we introduce a wrapper type `BalancedFlowGraph` to fix that by introducing synthetic nodes and edges as needed.
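In symbols, the balanced-flow property established for every node `n` of the wrapped graph is:
```latex
\mathrm{flow}(n) \;=\; \sum_{e \,\in\, \mathrm{in}(n)} \mathrm{flow}(e) \;=\; \sum_{e \,\in\, \mathrm{out}(n)} \mathrm{flow}(e)
```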
Once our graph has balanced flow, the next step is to create another view of that graph in which each node's successors have all been merged into one “supernode”. Consequently, each node's out-edges can be coalesced into a single out-edge to one of those supernodes. Because of the balanced-flow property, the flow of that coalesced edge is equal to the flow of the original node.
Having expressed all of our node flows as edge flows, we can then analyze node flows using techniques for analyzing edge flows. We incrementally build a spanning tree over the merged supernodes, such that each new edge in the spanning tree represents a node whose flow can be computed from that of other nodes.
When this is done, we end up with a list of “counter terms” for each node, describing which nodes need physical counters, and how the remaining nodes can have their flow calculated by adding and subtracting those physical counters.
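As a toy illustration (not the rustc data structures), a node's “counter terms” let its execution count be reconstructed as a signed sum of physical counters:
```rust
/// Toy model: one term referencing a physical counter, possibly subtracted.
struct CounterTerm {
    counter: usize, // index of a physical counter-increment site
    negate: bool,   // true if this counter's value is subtracted
}

/// Reconstruct a node's execution count from the physical counter values.
fn node_count(terms: &[CounterTerm], physical_counts: &[u64]) -> i64 {
    terms
        .iter()
        .map(|t| {
            let v = physical_counts[t.counter] as i64;
            if t.negate { -v } else { v }
        })
        .sum()
}
```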
---
The re-blessed coverage tests show that this results in modest or major improvements for our test programs. Some tests need fewer physical counters, some tests need fewer expression nodes for the same number of physical counters, and some tests show striking reductions in both.
Clarify note in `std::sync::LazyLock` example
I doubt most people know what the note means; I did not until a week ago. In its current form, it reads like a `TODO:`.
Fix overflows in the implementation of `overflowing_literals` lint's help
This PR fixes two overflow problems that cause the `overflowing_literals` lint to behave incorrectly in some edge cases.
1. When an integer literal is between `i128::MAX` and `u128::MAX`, an overflowing `as` cast can cause the suggested type to be overly small. It's fixed by using checked type conversion and returning `u128` when it's the only choice. (Fixes #135248)
2. When an integer literal is `i128::MIN` but is of a smaller type, an overflowing negation causes the compiler to panic in debug builds. Fixed by checking the number's size beforehand and using `wrapping_neg`. (Fixes #131849)
Edit: extracted the type-conversion part into a standalone function to keep the overflow handling separate.
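For concreteness, assumed reproductions based on the two cases above (both literals trip the `overflowing_literals` lint; the fix is to its help output, not to whether it fires):
```rust
fn main() {
    // Case 1: between `i128::MAX` and `u128::MAX`; the help should suggest
    // `u128`, the only integer type that can hold the value.
    let _a: i32 = 200_000_000_000_000_000_000_000_000_000_000_000_000;

    // Case 2: the value `i128::MIN` written against a smaller type; computing
    // the suggestion used to overflow on negation and panic in debug builds.
    let _b: i8 = -170_141_183_460_469_231_731_687_303_715_884_105_728;
}
```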
Update documentation for Arc::from_raw, Arc::increment_strong_count, and Arc::decrement_strong_count to clarify allocator requirement
### Related Issue:
This update addresses part of the issue raised in [#134242](https://github.com/rust-lang/rust/issues/134242), where `Arc`'s documentation lacks global-allocator safety descriptions for three APIs. This was confirmed by ``@workingjubilee``:
> Wait, nevermind. I apparently forgot the `increment_strong_count` is implicitly A = Global. Ugh. Another reason these things are hard to track, unfortunately.
### PR Description
This PR updates the documentation for the following APIs:
- `Arc::from_raw`
- `Arc::increment_strong_count`
- `Arc::decrement_strong_count`
These APIs currently lack an important piece of documentation: **the raw pointer must point to a block of memory allocated by the global allocator**. This crucial detail is specified in the source code but is not reflected in the documentation, which could lead to confusion or incorrect usage by users.
### Problem:
The following example demonstrates the potential confusion caused by the lack of documentation:
```rust
#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Layout};
use std::ptr::NonNull;
use std::sync::Arc;

// A deliberately simplistic non-global allocator, just for demonstration.
struct LocalAllocator {
    memory: NonNull<u8>,
    size: usize,
}

impl LocalAllocator {
    fn new(size: usize) -> Self {
        Self {
            memory: unsafe { NonNull::new_unchecked(&mut 0u8 as *mut u8) },
            size,
        }
    }
}

unsafe impl Allocator for LocalAllocator {
    fn allocate(&self, _layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        Ok(NonNull::slice_from_raw_parts(self.memory, self.size))
    }

    unsafe fn deallocate(&self, _ptr: NonNull<u8>, _layout: Layout) {}
}

fn main() {
    let allocator = LocalAllocator::new(64);
    let arc = Arc::new_in(5, &allocator); // Here, allocator could be any non-global allocator
    let ptr = Arc::into_raw(arc);
    unsafe {
        // These APIs assume `ptr` came from an `Arc` in the *global* allocator,
        // which is exactly the requirement this PR adds to the documentation.
        Arc::increment_strong_count(ptr);
        let arc = Arc::from_raw(ptr);
        assert_eq!(2, Arc::strong_count(&arc)); // Failed here!
    }
}
```
[cfg_match] Adjust syntax
A year has passed since the creation of #115585 and the feature, as expected, is not moving forward. Let's change that.
This PR proposes changing the arm syntax from `cfg(SOME_CONDITION) => { ... }` to `SOME_CONDITION => { ... }`.
```rust
cfg_match! {
    unix => {
        fn foo() { /* unix specific functionality */ }
    }
    target_pointer_width = "32" => {
        fn foo() { /* non-unix, 32-bit functionality */ }
    }
    _ => {
        fn foo() { /* fallback implementation */ }
    }
}
```
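For comparison, the same example under the current arm syntax (as I understand it) reads:
```rust
cfg_match! {
    cfg(unix) => {
        fn foo() { /* unix specific functionality */ }
    }
    cfg(target_pointer_width = "32") => {
        fn foo() { /* non-unix, 32-bit functionality */ }
    }
    _ => {
        fn foo() { /* fallback implementation */ }
    }
}
```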
Why? Because after several manual migrations in https://github.com/rust-lang/rust/pull/116342 it became clear, at least to me, that `cfg` prefixes are unnecessary, verbose, and redundant.
Again, everything is just a proposal to move things forward. If the shown syntax isn't ideal, feel free to close this PR or suggest other alternatives.