| Age | Commit message | Author | Lines |
|
Change its return type from Option<QueryMap> to QueryMap,
as there is currently always Some(...) inside.
|
|
|
|
`x clippy compiler -Aclippy::all -Wclippy::needless_borrow --fix`.
Then I had to remove a few unnecessary parens and muts that this
exposed.
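For illustration, a minimal hypothetical example of the pattern `clippy::needless_borrow` rewrites (not taken from the actual diff):
```rust
// Illustrative only: the kind of code `clippy::needless_borrow` flags and that
// `--fix` rewrites automatically.
fn take(_: &i32) {}

fn main() {
    // Before the fix: the outer `&` borrows a value that is already a
    // reference, so the lint fires and `--fix` removes it.
    let x: &i32 = &&5;
    take(&x);

    // After the fix: the redundant borrows are gone.
    let y: &i32 = &5;
    take(y);
}
```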
|
|
|
|
- Sort dependencies and features sections.
- Add `tidy` markers to the sorted sections so they stay sorted.
- Remove empty `[lib]` sections.
- Remove "See more keys..." comments.
Excluded files:
- rustc_codegen_{cranelift,gcc}, because they're external.
- rustc_lexer, because it has external use.
- stable_mir, because it has external use.
|
|
The comment explains it's for `unstable_offset_of`, but `offset_of` is
now stable.
|
|
Remove -Zdep-tasks.
This option is no longer useful; we can use `tracing` and `RUSTC_LOG` to debug the dep-graph.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
It lints against features that are intended to be internal to the
compiler and standard library. Implements MCP #596.
We allow `internal_features` in the standard library and compiler, as those
use many features and they _are_ the "internal to the compiler and
standard library" code after all.
Marking some features as internal wasn't exactly the most scientific approach; I just marked some
mostly obvious features. While there is a categorization in the macro,
it's not very well upheld (that should probably be fixed in another PR).
We always pass `-Ainternal_features` in the testsuite:
About 400 UI tests and several other tests use internal features.
Instead of throwing the attribute on each one, just always allow them.
There's nothing wrong with testing internal features^^
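As a rough sketch (not from the PR itself), this is what opting into an internal feature looks like once the lint exists; `core_intrinsics` is one of the features marked internal:
```rust
// Illustrative only (nightly): enabling an internal feature now triggers the
// `internal_features` lint unless the crate allows it, as the compiler and
// standard library do.
#![feature(core_intrinsics)]
#![allow(internal_features)]

fn main() {
    println!("this crate opted into a compiler-internal feature");
}
```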
|
|
filter_map_identity
iter_kv_map
needless_question_mark
redundant_at_rest_pattern
filter_next
derivable_impls
|
|
|
|
|
|
This enables usage of the offset_of!() macro in the compiler,
through the wrappers in memoffset and then in field-offset.
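For context, a minimal sketch of what the macro computes. The commit routes through the memoffset and field-offset wrappers; this example uses the now-stable `std::mem::offset_of!`, which yields the same byte offset:
```rust
use std::mem::offset_of;

#[allow(dead_code)]
#[repr(C)]
struct Header {
    tag: u8,
    len: u32,
}

fn main() {
    // Byte offset of `len` within `Header`, computed at compile time.
    println!("offset of len: {}", offset_of!(Header, len));
}
```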
|
|
Update iana-time-zone-haiku to drop a bunch of cxx* deps
```
cargo update -p iana-time-zone-haiku
    Updating crates.io index
    Updating cc v1.0.77 -> v1.0.79
    Removing codespan-reporting v0.11.1
    Removing cxx v1.0.94
    Removing cxx-build v1.0.94
    Removing cxxbridge-flags v1.0.94
    Removing cxxbridge-macro v1.0.94
    Updating iana-time-zone-haiku v0.1.1 -> v0.1.2
    Removing link-cplusplus v1.0.8
    Removing scratch v1.0.5
```
fixes known issue https://github.com/crossbeam-rs/crossbeam/pull/972
```
cargo update -p crossbeam-channel
    Updating crates.io index
    Updating crossbeam-channel v0.5.6 -> v0.5.8
```
dedupes memoffset versions
```
cargo update -p crossbeam-epoch
    Updating crates.io index
    Updating crossbeam-epoch v0.9.13 -> v0.9.14
    Removing memoffset v0.7.1
```
dedupes bstr versions
```
cargo update -p ignore -p opener
    Updating crates.io index
    Removing bstr v0.2.17
    Updating globset v0.4.9 -> v0.4.10
    Updating ignore v0.4.18 -> v0.4.20
    Updating opener v0.5.0 -> v0.5.2
```
|
|
|
|
|
|
|
|
|
|
Shorten backtraces for queries in ICEs
r? `@jyn514`
Fixes #107910
|
|
|
|
|
|
Specialize query execution for incremental and non-incremental
This specializes query execution for incremental and non-incremental compilation by passing in separate `dyn QueryEngine` types, taking advantage of virtual dispatch to avoid a branch. This ends up duplicating `try_execute_query`; hopefully the compile-time cost of that is relatively low.
This is a performance improvement for the non-incremental path:
| Benchmark | Before (time) | After (time) | Change |
|---|---:|---:|---:|
| 🟣 **clap**:check | 1.8420s | 1.8331s | -0.48% |
| 🟣 **hyper**:check | 0.2652s | 0.2631s | -0.78% |
| 🟣 **regex**:check | 1.0161s | 1.0062s | -0.98% |
| 🟣 **syn**:check | 1.6408s | 1.6197s | 💚 -1.28% |
| 🟣 **syntex_syntax**:check | 6.3939s | 6.3558s | -0.60% |
| Total | 11.1580s | 11.0780s | -0.72% |
| Summary | 1.0000s | 0.9918s | -0.82% |
The incremental path is more neutral:
| Benchmark | Before (time) | After (time) | Change |
|---|---:|---:|---:|
| 🟣 **clap**:check:initial | 2.2210s | 2.2227s | 0.08% |
| 🟣 **hyper**:check:initial | 0.3441s | 0.3443s | 0.05% |
| 🟣 **regex**:check:initial | 1.2919s | 1.2877s | -0.33% |
| 🟣 **syn**:check:initial | 2.0749s | 2.0721s | -0.14% |
| 🟣 **syntex_syntax**:check:initial | 7.9266s | 7.9206s | -0.07% |
| Total | 13.8585s | 13.8474s | -0.08% |
| Summary | 1.0000s | 0.9992s | -0.08% |
r? `@cjgillot`
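A toy sketch of the dispatch idea; the trait and types below are illustrative, not rustc's actual query plumbing:
```rust
// The incremental/non-incremental decision is made once when the engine is
// built, and virtual dispatch replaces a per-query branch.
trait QueryEngine {
    fn execute(&self, key: u64) -> u64;
}

struct NonIncremental;
struct Incremental;

impl QueryEngine for NonIncremental {
    fn execute(&self, key: u64) -> u64 {
        // No dep-graph bookkeeping on this path.
        key.wrapping_mul(2)
    }
}

impl QueryEngine for Incremental {
    fn execute(&self, key: u64) -> u64 {
        // Stand-in for recording dep-graph edges before computing.
        key.wrapping_mul(2)
    }
}

fn make_engine(incremental: bool) -> Box<dyn QueryEngine> {
    // The branch happens here, once, instead of inside every query execution.
    if incremental { Box::new(Incremental) } else { Box::new(NonIncremental) }
}

fn main() {
    let engine = make_engine(false);
    println!("{}", engine.execute(21));
}
```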
|
|
|
|
|
|
|
|
|
|
|
|
Rewrite MemDecoder around pointers not a slice
This is basically https://github.com/rust-lang/rust/pull/109910 but I'm being a lot more aggressive. The pointer-based structure means that it makes a lot more sense to absorb more complexity into `MemDecoder`, most of the diff is just complexity moving from one place to another.
The primary argument for this structure is that we only incur a single bounds check when doing multi-byte reads from a `MemDecoder`. With the slice-based implementation we need to do those with `data[position..position + len]`, which needs to account for `position + len` wrapping. It would be possible to dodge the first bounds check if we stored a slice that starts at `position`, but that would require updating the pointer and length on every read.
This PR also embeds the failure path in a separate function, which means that this PR should subsume all the perf wins observed in https://github.com/rust-lang/rust/pull/109867.
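A simplified sketch of the shape described above (illustrative, not the real `MemDecoder`): one explicit length check per multi-byte read, with the failure path kept out of line in a cold function:
```rust
// The real implementation works on raw pointers; in this safe sketch the
// optimizer is expected to elide the slice's own bounds check after the
// explicit one.
struct Decoder<'a> {
    data: &'a [u8],
    position: usize,
}

#[cold]
#[inline(never)]
fn decoder_exhausted() -> ! {
    panic!("decoder exhausted")
}

impl<'a> Decoder<'a> {
    fn read_bytes(&mut self, len: usize) -> &'a [u8] {
        // Invariant: position <= data.len(), so the subtraction cannot underflow.
        if len > self.data.len() - self.position {
            decoder_exhausted();
        }
        let start = self.position;
        self.position += len;
        &self.data[start..self.position]
    }
}

fn main() {
    let mut d = Decoder { data: &[1, 2, 3, 4], position: 0 };
    println!("{:?}", d.read_bytes(2));
}
```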
|
|
|
|
|
|
Panic instead of truncating if the incremental on-disk cache is too big
It seems _unlikely_ that anyone would hit this truncation, but if this `as` does actually truncate, that seems incredibly bad.
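For illustration (not the actual cache code), the difference between a silently truncating `as` cast and a loud failure:
```rust
// An `as` cast silently truncates, while a checked conversion (or an explicit
// panic) makes the failure visible.
fn main() {
    let len: u64 = u64::from(u32::MAX) + 1;

    let truncated = len as u32;        // silently wraps around to 0
    let checked = u32::try_from(len);  // Err(TryFromIntError), which can be .expect()ed

    println!("as: {truncated}, try_from: {checked:?}");
}
```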
|
|
|
|
|
|
Switch to `EarlyBinder` for `collect_return_position_impl_trait_in_trait_tys`
Part of the work to finish https://github.com/rust-lang/rust/issues/105779.
This PR adds `EarlyBinder` to the return type of the `collect_return_position_impl_trait_in_trait_tys` query and removes `bound_return_position_impl_trait_in_trait_tys`.
r? `@lcnr`
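A toy model of the pattern, with stand-in types rather than rustc's real `EarlyBinder`:
```rust
// The point of the change is that callers must explicitly instantiate the
// binder instead of going through a separate `bound_*` query wrapper.
struct EarlyBinder<T>(T);

impl<T> EarlyBinder<T> {
    fn instantiate(self, _args: &[&str]) -> T {
        // Real substitution would fold the generic args into the value; elided here.
        self.0
    }
}

// Stand-in for the query: returns its result wrapped in the binder.
fn collect_return_position_impl_trait_in_trait_tys() -> EarlyBinder<Vec<String>> {
    EarlyBinder(vec!["impl Sized".to_string()])
}

fn main() {
    let tys = collect_return_position_impl_trait_in_trait_tys().instantiate(&["T"]);
    println!("{tys:?}");
}
```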
|
|
collect_return_position_impl_trait_in_trait_tys query; remove bound_X version
|
|
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than to try to apply some varint encoding (a sketch of why follows the distributions below). This PR adds two new types, `Hash64` and `Hash128`, which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.
Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`:
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining 9- or 10-byte and 18- or 19-byte encodings are `u64` and `u128` values, respectively, that have the high bits set. As far as I can tell these come primarily from `SwitchTargets`.
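A minimal sketch of why varint encoding is a poor fit for hash values (illustrative, not the compiler's encoder):
```rust
// A uniformly random u64 almost always has its high bits set, so unsigned
// LEB128 needs ceil(64 / 7) = 10 bytes, while writing the raw little-endian
// bytes always takes exactly 8.
fn leb128_encode_u64(mut value: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80); // set the continuation bit: more bytes follow
    }
}

fn main() {
    let hash: u64 = 0xdead_beef_cafe_f00d; // stand-in for a StableHasher output

    let mut varint = Vec::new();
    leb128_encode_u64(hash, &mut varint);
    let raw = hash.to_le_bytes();

    println!("leb128: {} bytes, raw: {} bytes", varint.len(), raw.len());
    // leb128: 10 bytes, raw: 8 bytes
}
```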
|
|
|
|
* account
* achieved
* advising
* always
* ambiguous
* analysis
* annotations
* appropriate
* build
* candidates
* cascading
* category
* character
* clarification
* compound
* conceptually
* constituent
* consts
* convenience
* corresponds
* debruijn
* debug
* debugable
* debuggable
* deterministic
* discriminant
* display
* documentation
* doesn't
* ellipsis
* erroneous
* evaluability
* evaluate
* evaluation
* explicitly
* fallible
* fulfill
* getting
* has
* highlighting
* illustrative
* imported
* incompatible
* infringing
* initialized
* into
* intrinsic
* introduced
* javascript
* liveness
* metadata
* monomorphization
* nonexistent
* nontrivial
* obligation
* obligations
* offset
* opaque
* opportunities
* opt-in
* outlive
* overlapping
* paragraph
* parentheses
* poisson
* precisely
* predecessors
* predicates
* preexisting
* propagated
* really
* reentrant
* referent
* responsibility
* rustonomicon
* shortcircuit
* simplifiable
* simplifications
* specify
* stabilized
* structurally
* suggestibility
* translatable
* transmuting
* two
* unclosed
* uninhabited
* visibility
* volatile
* workaround
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
|
|
|
|
Rollup of 7 pull requests
Successful merges:
- #109395 (Fix issue when there are multiple candidates for edit_distance_with_substrings)
- #109755 (Implement support for `GeneratorWitnessMIR` in new solver)
- #109782 (Don't leave a comma at the start of argument list when removing arguments)
- #109977 (rustdoc: avoid including line numbers in Google SERP snippets)
- #109980 (Derive String's PartialEq implementation)
- #109984 (Remove f32 & f64 from MemDecoder/MemEncoder)
- #110004 (add `dont_check_failure_status` option in the compiler test)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
|
|
|
|
|