path: root/compiler/rustc_data_structures/src
Age | Commit message | Author | Lines
2022-03-02 | Auto merge of #94514 - matthiaskrgr:rollup-pdzn82h, r=matthiaskrgr | bors | -1/+1
Rollup of 9 pull requests

Successful merges:
- #94464 (Suggest adding a new lifetime parameter when two elided lifetimes should match up for traits and impls.)
- #94476 (7 - Make more use of `let_chains`)
- #94478 (Fix panic when handling intra doc links generated from macro)
- #94482 (compiler: fix some typos)
- #94490 (Update books)
- #94496 (tests: accept llvm intrinsic in align-checking test)
- #94498 (9 - Make more use of `let_chains`)
- #94503 (Provide C FFI types via core::ffi, not just in std)
- #94513 (update Miri)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-03-01 | compiler: fix some typos | cuishuang | -1/+1
2022-03-01 | Querify `global_backend_features` | Simonas Kazlauskas | -0/+12
At the very least, this deduplicates the diagnostics emitted for unknown target features provided via the CLI.
2022-02-27 | Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot | bors | -1/+1
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, sharding just adds needless overhead at compilation time (there is statically only one shard anyway). Removing it amounts to roughly a 10-second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results. Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
2022-02-26 | Rollup merge of #94306 - Mark-Simulacrum:dom-fixups, r=jackh726 | Matthias Krüger | -3/+13
Avoid exhausting stack space in dominator compression

This doesn't add a test case -- I ran into the problem while playing with the generated example from #43578. We could cover it with a run-make test (to avoid checking a large code snippet into the tree), but I suspect we don't want to wait for it to compile (locally it takes ~14s -- not terrible, but it doesn't seem worth it to me). In practice, stack space exhaustion is also difficult to test for: if we set the bound too low, a different call structure above us (e.g. a nearer ensure_sufficient_stack call) would most likely let the test pass even with the old implementation.

Locally this seems to perform approximately on par with the recursive version, but I will run perf to confirm.
2022-02-25 | Switch bootstrap cfgs | Mark Rousskov | -2/+2
2022-02-24 | Rollup merge of #94288 - Mark-Simulacrum:ser-opt, r=nnethercote | Matthias Krüger | -3/+1
Cleanup a few Decoder methods

This is just some simple follow-up to #93839.

r? `@nnethercote`
2022-02-23 | Avoid exhausting stack space in dominator compression | Mark Rousskov | -3/+13
2022-02-23 | Auto merge of #93984 - nnethercote:ChunkedBitSet, r=Mark-Simulacrum | bors | -8/+2
Introduce `ChunkedBitSet` and use it for some dataflow analyses.

This reduces peak memory usage significantly for some programs with very large functions.

r? `@ghost`
2022-02-23 | Introduce `ChunkedBitSet` and use it for some dataflow analyses. | Nicholas Nethercote | -8/+2
This reduces peak memory usage significantly for some programs with very large functions, such as:
- `keccak`, `unicode_normalization`, and `match-stress-enum`, from the `rustc-perf` benchmark suite;
- `http-0.2.6` from crates.io.

The new type is used in the analyses where the bitsets can get huge (e.g. tens of thousands of bits): `MaybeInitializedPlaces`, `MaybeUninitializedPlaces`, and `EverInitializedPlaces`.

Some refactoring was required in `rustc_mir_dataflow`. All existing analysis domains are either `BitSet` or a trivial wrapper around `BitSet`, and access in a few places is done via `Borrow<BitSet>` or `BorrowMut<BitSet>`. Now that some of these domains are `ChunkedBitSet`, that no longer works. So this commit replaces the `Borrow`/`BorrowMut` usage with a new trait `BitSetExt` containing the needed bitset operations. The impls just forward these to the underlying bitset type. This required fiddling with trait bounds in a few places.

The commit also:
- Moves `static_assert_size` from `rustc_data_structures` to `rustc_index` so it can be used in the latter; the former now re-exports it so existing users are unaffected.
- Factors out some common "clear excess bits in the final word" functionality in `bit_set.rs`.
- Uses `fill` in a few places instead of loops.
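The `BitSetExt` forwarding described above can be pictured with a small, self-contained sketch. The types and method set here are illustrative only (a toy bitset, not the real `rustc_index` implementation):

```rust
// Simplified sketch of the `BitSetExt` idea: dataflow code calls bitset
// operations through the trait, and domains that merely wrap a bitset
// forward to the underlying type.
trait BitSetExt<T> {
    fn contains(&self, elem: T) -> bool;
    fn insert(&mut self, elem: T) -> bool;
}

struct BitSet {
    words: Vec<u64>,
}

impl BitSet {
    fn new(domain_size: usize) -> Self {
        BitSet { words: vec![0; (domain_size + 63) / 64] }
    }
}

impl BitSetExt<usize> for BitSet {
    fn contains(&self, elem: usize) -> bool {
        (self.words[elem / 64] & (1 << (elem % 64))) != 0
    }
    fn insert(&mut self, elem: usize) -> bool {
        let was_set = self.contains(elem);
        self.words[elem / 64] |= 1 << (elem % 64);
        !was_set
    }
}

// A trivial wrapper domain: the impl just forwards to the inner bitset.
struct InitializedPlaces(BitSet);

impl BitSetExt<usize> for InitializedPlaces {
    fn contains(&self, elem: usize) -> bool {
        self.0.contains(elem)
    }
    fn insert(&mut self, elem: usize) -> bool {
        self.0.insert(elem)
    }
}

fn demo() {
    let mut domain = InitializedPlaces(BitSet::new(100));
    domain.insert(42);
    assert!(domain.contains(42));
}
```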
2022-02-22 | Provide copy-free access to raw Decoder bytes | Mark Rousskov | -3/+1
2022-02-21 | obligation forest docs | lcnr | -4/+5
2022-02-20 | Move Sharded maps into each QueryCache impl | Mark Rousskov | -1/+1
2022-02-20 | Auto merge of #93934 - rusticstuff:inline_ensure_sufficient_stack, r=estebank | bors | -0/+1
Allow inlining of `ensure_sufficient_stack()`

This function is monomorphized a lot, and allowing the compiler to inline it improves instruction counts and max RSS significantly in my local tests.
2022-02-19 | Adopt let else in more places | est31 | -11/+7
2022-02-15 | Address review comments. | Nicholas Nethercote | -11/+7
2022-02-15 | Rename `PtrKey` as `Interned` and improve it. | Nicholas Nethercote | -38/+163
In particular, there's now more protection against incorrect usage, because you can only create one via `Interned::new_unchecked`, which makes it more obvious that you must be careful. There are also some tests.
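A hedged sketch of the core idea (not the exact rustc type) shows why routing construction through a single `new_unchecked` matters: pointer-based equality and hashing are only sound if every value really comes from the interner.

```rust
use std::hash::{Hash, Hasher};
use std::ptr;

// Illustrative version of an interned-reference wrapper: equality and
// hashing go by address, not by value.
#[derive(Clone, Copy)]
pub struct Interned<'a, T>(&'a T);

impl<'a, T> Interned<'a, T> {
    // The `_unchecked` suffix is the protection mentioned above: the caller
    // promises the reference came from the interner, so each distinct value
    // has exactly one address.
    pub const fn new_unchecked(value: &'a T) -> Self {
        Interned(value)
    }
}

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        ptr::eq(self.0, other.0)
    }
}

impl<'a, T> Eq for Interned<'a, T> {}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        (self.0 as *const T).hash(state)
    }
}
```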
2022-02-14 | Call the method fork instead of clone and add proper comments | Santiago Pastorino | -0/+2
2022-02-12 | Allow inlining of ensure_sufficient_stack() | Hans Kratz | -0/+1
2022-02-05 | Use const generics in SipHasher128's short_write | Jakub Beránek | -46/+39
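For the const-generics change just above, a rough illustration of what such a signature change looks like -- the buffer type, field names, and body here are made up, not SipHasher128's real internals:

```rust
// The length of a "short write" becomes a compile-time constant instead of
// a runtime slice length.
struct Buffer {
    bytes: Vec<u8>,
}

impl Buffer {
    // Before (roughly): fn short_write(&mut self, data: &[u8])
    fn short_write<const LEN: usize>(&mut self, data: [u8; LEN]) {
        debug_assert!(LEN <= 8, "short writes are at most one u64");
        self.bytes.extend_from_slice(&data);
    }
}

fn demo(buf: &mut Buffer, x: u64, y: u16) {
    // Callers pass a value's bytes as a fixed-size array.
    buf.short_write(x.to_le_bytes()); // LEN = 8, known at compile time
    buf.short_write(y.to_le_bytes()); // LEN = 2
}
```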
2022-02-03 | Fix `isize` optimization in `StableHasher` for big-endian architectures | Jakub Beránek | -3/+8
2022-02-03 | Auto merge of #93432 - Kobzol:stable-hash-isize-hash-compression, r=the8472 | bors | -3/+51
Compress amount of hashed bytes for `isize` values in StableHasher

This is another attempt to land https://github.com/rust-lang/rust/pull/92103, this time hopefully with a correct implementation w.r.t. stable hashing guarantees. The previous PR was [reverted](https://github.com/rust-lang/rust/pull/93014) because it could produce the [same hash](https://github.com/rust-lang/rust/pull/92103#issuecomment-1014625442) for different values even in quite simple situations. I have since added a basic [test](https://github.com/rust-lang/rust/pull/93193) that should guard against that situation, and I also added a new test in this PR, specialised for this optimization.

## Why this optimization helps

Since the original PR, I have tried to analyze why this optimization even helps (and why it especially helps for `clap`). I found that the vast majority of stable-hashing of `i64` actually comes from hashing `isize` (which is converted to `i64` in the stable hasher). I found only a single place where this datatype is used directly in the compiler, and that place has also been showing up in the traces that I used to find out when `isize` is being hashed: `rustc_span::FileName::DocTest`. I suppose that isizes also come from other places, but they might not be so easy to find (there were some other entries in the trace). `clap` hashes about 8.5 million `isize`s, and all of them fit into a single byte, which is why this optimization has helped it [quite a lot](https://github.com/rust-lang/rust/pull/92103#issuecomment-1005711861).

Now, I'm not sure if special-casing `isize` is the correct solution here; maybe something could be done with that `isize` inside `DocTest` or in other places, but that's a discussion for another time. In this PR, instead of hardcoding a special case inside `SipHasher128`, I put it into `StableHasher` and only used it for `isize` (I tested that for `i64` it doesn't help, or at least not for `clap` and the few other benchmarks I was testing).

## New approach

Since the most common case is a single byte, I added a fast path for hashing `isize` values whose positive value fits within a single byte, and a cold path for the rest of the values.

To avoid the previous correctness problem, we need to make sure that each unique `isize` value produces a unique hash stream to the hasher. By hash stream I mean a sequence of bytes that will be hashed (a different sequence should produce a different hash, but that is of course not guaranteed). We have to distinguish different values that produce the same bit pattern when we combine them. For example, if we simply skipped the leading zero bytes for values that fit within a single byte, `(0xFF, 0xFFFFFFFFFFFFFFFF)` and `(0xFFFFFFFFFFFFFFFF, 0xFF)` would send the same hash stream to the hasher, which must not happen.

To avoid this situation, values in `[0, 0xFE]` are hashed as a single byte. When we hash a larger value (treating `isize` as `u64`), we first hash an additional byte `0xFF`. Since `0xFF` cannot occur when we apply the single-byte optimization, we guarantee that the hash streams will be unique when hashing two values `(a, b)` and `(b, a)` if `a != b`:
1) When both `a` and `b` are within `[0, 0xFE]`, their hash streams will be different.
2) When neither `a` nor `b` is within `[0, 0xFE]`, their hash streams will be different.
3) When `a` is within `[0, 0xFE]` and `b` isn't, the hash stream for `(a, b)` definitely does not begin with `0xFF`, while the hash stream for `(b, a)` definitely does. Therefore the hash streams will be different.

r? `@the8472`
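A minimal sketch of the encoding described above, written against a plain byte buffer instead of the real SipHasher128 sink (the function name and the `Vec<u8>` output are illustrative assumptions):

```rust
// Values in 0..=0xFE are hashed as a single byte; everything else is
// prefixed with the escape byte 0xFF, so a short encoding can never collide
// with a long one.
fn hash_isize_compressed(out: &mut Vec<u8>, value: isize) {
    let value = value as u64;
    if value < 0xFF {
        // Fast path: most isize values hashed by the compiler (small
        // lengths, indices, ...) end up here.
        out.push(value as u8);
    } else {
        // Cold path: 0xFF marks a full-width value; 0xFF itself never
        // appears as a single-byte encoding, so the streams stay distinct.
        out.push(0xFF);
        out.extend_from_slice(&value.to_le_bytes());
    }
}
```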
2022-02-02 | Rollup merge of #92528 - tmiasko:combine-commutative, r=michaelwoerister | Matthias Krüger | -1/+18
Make `Fingerprint::combine_commutative` associative

The previous implementation swapped the lower and upper 64 bits of the result of a modular addition, so the function was non-associative.

r? `@Aaron1011`
2022-02-01 | add a rustc::query_stability lint | lcnr | -0/+1
2022-01-30 | Compress amount of hashed bytes for `isize` values in StableHasher | Jakub Beránek | -3/+51
2022-01-24 | Add test stable hash uniqueness of adjacent field values | Jakub Beránek | -0/+42
2022-01-24 | Revert "Do not hash leading zero bytes of i64 numbers in Sip128 hasher" | Jakub Beránek | -16/+2
2022-01-22 | Make `Decodable` and `Decoder` infallible. | Nicholas Nethercote | -7/+7
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it currently panics on failure (e.g. if the input is too short, or on a bad `Result` discriminant), and in some places it returns an error (e.g. on a bad `Option` discriminant). The number of places where either happens is surprisingly small, just because the binary representation has very little redundancy and a lot of input reading can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the `.rlink` file production, and there's a `FIXME` comment suggesting it should change to a binary format, and (b) in a few tests in non-fundamental ways. Indeed #85993 is open to remove it entirely.

And the top-level places in the compiler that call into decoding just abort on error anyway. So the fallibility is providing little value, and getting rid of it leads to some non-trivial performance improvements.

Much of this commit is pretty boring and mechanical. Some notes about a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same optimization for small counts that the impl for `Result<T, E>` has, because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, and VecDeque now all use `collect`, which is nice; the one for `Vec` uses unsafe code, because that gave better perf on some benchmarks.
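A hedged before/after sketch of the trait shape (method set heavily trimmed, names simplified -- not the actual rustc_serialize API): the fallible decoder threads a `Result` through every read, while the infallible one simply panics on malformed input, matching how callers already aborted on error.

```rust
trait FallibleDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

trait InfallibleDecoder {
    fn read_u32(&mut self) -> u32;
}

struct OpaqueDecoder<'a> {
    data: &'a [u8],
    position: usize,
}

impl<'a> InfallibleDecoder for OpaqueDecoder<'a> {
    fn read_u32(&mut self) -> u32 {
        let end = self.position + 4;
        // Panics if the input is too short, instead of returning an error.
        let bytes: [u8; 4] = self.data[self.position..end].try_into().unwrap();
        self.position = end;
        u32::from_le_bytes(bytes)
    }
}
```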
2022-01-11 | Auto merge of #92070 - rukai:replace_vec_into_iter_with_array_into_iter, r=Mark-Simulacrum | bors | -4/+4
Replace usages of vec![].into_iter with [].into_iter

`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the vec is optimized away, in which case it would be the same), so we should change the implementation, documentation and tests to use it.

I skipped:
* `src/tools` - those are copied in from upstream
* `src/test/ui` - hard to tell whether `vec![].into_iter` was used intentionally, and not much benefit to changing it
* any case where `vec![].into_iter` was used because we specifically needed a `Vec::IntoIter<T>`
* any case where it looked like we were intentionally using `vec![].into_iter` to test it
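A tiny illustration of the replacement (hypothetical functions, just to show the two spellings): both iterate the same values by value, but the array form avoids allocating a `Vec` first.

```rust
fn sum_old() -> i32 {
    vec![1, 2, 3].into_iter().sum()
}

fn sum_new() -> i32 {
    [1, 2, 3].into_iter().sum()
}
```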
2022-01-09 | Replace usages of vec![].into_iter with [].into_iter | Lucas Kent | -4/+4
2022-01-05 | Ensure that `Fingerprint` caching respects hashing configuration | Aaron Hill | -0/+19
Fixes #92266

In some `HashStable` impls, we use a cache to avoid re-computing the same `Fingerprint` from the same structure (e.g. an `AdtDef`). However, the `StableHashingContext` used can be configured to perform hashing in different ways (e.g. skipping `Span`s). This configuration information is not included in the cache key, which will cause an incorrect `Fingerprint` to be used if we hash the same structure with different `StableHashingContext` settings.

To fix this, the configuration settings of `StableHashingContext` are split out into a separate `HashingControls` struct. This struct is used as part of the cache key, ensuring that our caches always produce the correct result for the given settings.

With this in place, we now turn off `Span` hashing during the entire process of computing the hash included in legacy symbols. This currently has no effect, but will matter when a future PR starts hashing more `Span`s that we currently skip.
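A hypothetical sketch of the caching fix (the type names, the `hash_spans` field, and the `u128` stand-in for `Fingerprint` are illustrative assumptions, not the exact rustc definitions): the cache is keyed by the hashing configuration as well as the value, so hashing the same structure under different settings can no longer return a stale result.

```rust
use std::collections::HashMap;
use std::hash::Hash;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct HashingControls {
    pub hash_spans: bool,
}

type Fingerprint = u128; // stand-in for rustc's 128-bit fingerprint

pub struct FingerprintCache<K> {
    map: HashMap<(K, HashingControls), Fingerprint>,
}

impl<K: Eq + Hash> FingerprintCache<K> {
    pub fn get_or_compute(
        &mut self,
        key: K,
        controls: HashingControls,
        compute: impl FnOnce() -> Fingerprint,
    ) -> Fingerprint {
        // The configuration is part of the key, so entries computed with
        // span hashing on and off never alias each other.
        *self.map.entry((key, controls)).or_insert_with(compute)
    }
}
```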
2022-01-04 | Do not hash zero bytes of i64 and u32 in Sip128 hasher | Jakub Beránek | -2/+16
2022-01-03 | Make `Fingerprint::combine_commutative` associative | Tomasz Miąsko | -1/+18
The previous implementation swapped the lower and upper 64 bits of the result of a modular addition, so the function was non-associative.
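A small demonstration (not rustc's exact code) of why the swap broke associativity: treating the fingerprint as a 128-bit number, wrapping addition alone is commutative and associative, while swapping the two halves after each addition keeps commutativity but loses associativity.

```rust
type Fp = (u64, u64); // (lo, hi)

fn combine(a: Fp, b: Fp) -> Fp {
    let x = (a.0 as u128) | ((a.1 as u128) << 64);
    let y = (b.0 as u128) | ((b.1 as u128) << 64);
    let c = x.wrapping_add(y);
    (c as u64, (c >> 64) as u64)
}

fn combine_swapped(a: Fp, b: Fp) -> Fp {
    let (lo, hi) = combine(a, b);
    (hi, lo) // the extra swap is what the old implementation did
}

fn main() {
    let (a, b, c) = ((1, 2), (3, 4), (5, 7));
    // Plain modular addition is associative...
    assert_eq!(combine(combine(a, b), c), combine(a, combine(b, c)));
    // ...but the swapped variant is not.
    assert_ne!(
        combine_swapped(combine_swapped(a, b), c),
        combine_swapped(a, combine_swapped(b, c)),
    );
}
```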
2022-01-01 | Rustdoc: use ThinVec for GenericArgs bindings | Jakub Beránek | -5/+9
2021-12-28 | Auto merge of #92130 - Kobzol:stable-hash-str, r=cjgillot | bors | -3/+2
Use hash_stable for hashing str

This seemed like an oversight. With this change the hash can go through the `HashStable` machinery directly.
2021-12-22 | rustc `VecGraph`: require the index type to implement Ord | pierwill | -6/+9
2021-12-22 | Remove `PartialOrd` and `Ord` from `LocalDefId` | pierwill | -1/+1
Implement `Ord`, `PartialOrd` for SpanData
2021-12-21 | Auto merge of #91903 - tmiasko:bit-set-hash, r=jackh726 | bors | -4/+31
Implement StableHash for BitSet and BitMatrix via Hash

This fixes an issue where bit sets / bit matrices with the same word content but a different domain size would receive the same hash.
2021-12-20 | Use hash_stable for hashing str | Jakub Beránek | -3/+2
2021-12-18 | Auto merge of #91837 - Kobzol:stable-hash-map-avoid-sort, r=the8472 | bors | -23/+43
Avoid sorting in hash map stable hashing

Suggested by `@the8472` [here](https://github.com/rust-lang/rust/pull/89404#issuecomment-991813333). I hope that I understood it right: I replaced the sort with modular multiplication, which should be commutative.

Can I ask for a perf run? Locally it didn't help at all -- creating the `StableHasher` all over again is probably slowing it down quite a lot. And using `FxHasher` is not straightforward, because the keys and values only implement `HashStable` (and they probably shouldn't just be hashed via `Hash` anyway for the result to actually be stable). Maybe the `StableHash` interface could be changed somehow to better support these scenarios where the hasher is short-lived, or the `StableHasher` implementation could have variants with e.g. a shorter buffer for them.
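A minimal sketch of the order-independent idea, using std's `DefaultHasher` as a stand-in for rustc's `StableHasher` (the function name is illustrative): hash each entry with its own hasher, then fold the per-entry hashes with a commutative, associative operation, so iteration order stops mattering and no sort is needed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn stable_hash_unordered<K: Hash, V: Hash>(map: &HashMap<K, V>) -> u64 {
    map.iter()
        .map(|(k, v)| {
            // Each entry gets a fresh hasher, so its hash is independent of
            // where it appears in the iteration order.
            let mut h = DefaultHasher::new();
            k.hash(&mut h);
            v.hash(&mut h);
            h.finish()
        })
        // Wrapping addition is one possible commutative combine; the PR
        // discussion above also considers modular multiplication.
        .fold(0u64, |acc, entry_hash| acc.wrapping_add(entry_hash))
}
```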
2021-12-18 | Implement StableHash for BitSet and BitMatrix via Hash | Tomasz Miąsko | -4/+31
This fixes an issue where bit sets / bit matrices with the same word content but a different domain size would receive the same hash.
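A simplified illustration of the fix (toy type, not `rustc_index::BitSet`): hashing the domain size along with the words distinguishes, say, an empty set over 10 elements from an empty set over 60 elements, both of which are backed by a single zero word.

```rust
use std::hash::{Hash, Hasher};

struct BitSet {
    domain_size: usize,
    words: Vec<u64>,
}

impl Hash for BitSet {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Including the domain size prevents collisions between sets whose
        // backing words happen to be identical.
        self.domain_size.hash(state);
        self.words.hash(state);
    }
}
```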
2021-12-13 | Add special case for length 1 | Jakub Beránek | -9/+17
2021-12-13 | Remove sort from hashing hashset, treeset and treemap | Jakub Beránek | -27/+29
2021-12-12 | Use modular arithmetic | Jakub Beránek | -3/+3
2021-12-12 | Auto merge of #91549 - fee1-dead:const_env, r=spastorino | bors | -0/+2
Eliminate ConstnessAnd again

Closes #91489. Closes #89432. Reverts #91491. Reverts #89450.

r? `@spastorino`
2021-12-12 | Avoid sorting in hash map stable hashing | Jakub Beránek | -4/+14
2021-12-12 | Small performance tweaks | Deadbeef | -0/+2
2021-12-12 | Auto merge of #89404 - Kobzol:hash-stable-sort, r=Mark-Simulacrum | bors | -1/+2
Slightly optimize hash map stable hashing

I was profiling some of the `rustc-perf` benchmarks locally and noticed that quite some time is spent inside the stable hash of hashmaps. I tried to use a `SmallVec` instead of a `Vec` there, which helped very slightly. Then I tried to remove the sorting, which was a bottleneck, and replaced it with insertion into a binary heap. Locally, it yielded nice improvements in instruction counts and RSS in several benchmarks for incremental builds. The implementation could probably be much nicer and possibly extended to other stable hashes, but first I wanted to test the perf impact properly.

Can I ask someone to do a perf run? Thank you!
2021-12-11 | Rollup merge of #91426 - eggyal:idfunctor-panic-safety, r=lcnr | Matthias Krüger | -24/+30
Make IdFunctor::try_map_id panic-safe

Addresses FIXME comment created in #78313

r? `@lcnr`
2021-12-09 | Remove redundant [..]s | est31 | -4/+4