path: root/compiler/rustc_data_structures
Age | Commit message | Author | Lines
2022-04-06 | Fix some fallout around type alias impl trait in associated types | Oli Scherer | -1/+1
2022-04-03 | Auto merge of #92686 - saethlin:unsafe-debug-asserts, r=Amanieu | bors | -4/+4
Add debug assertions to some unsafe functions

As suggested by https://github.com/rust-lang/rust/issues/51713

~~Some similar code calls `abort()` instead of `panic!()` but aborting doesn't work in a `const fn`, and the intrinsic for doing dispatch based on whether execution is in a const is unstable.~~

This picked up some invalid uses of `get_unchecked` in the compiler, and fixes them. I can confirm that they do in fact pick up invalid uses of `get_unchecked` in the wild, though the user experience is less-than-awesome:

```
Running unittests (target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50)

running 6 tests
error: test failed, to rerun pass '--lib'

Caused by:
  process didn't exit successfully: `/home/ben/rle-decode-helper/target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50` (signal: 4, SIGILL: illegal instruction)
```

~~As best I can tell these changes produce a 6% regression in the runtime of `./x.py test` when `[rust] debug = true` is set.~~ Latest commit (https://github.com/rust-lang/rust/pull/92686/commits/6894d559bdb4365243b3f4bf73f18e4b1bed04d1) brings the additional overhead from this PR down to 0.5%, while also adding a few more assertions. I think this actually covers all the places in `core` that it is reasonable to check for safety requirements at runtime. Thoughts?
2022-03-31 | Check that the cached stable hash is the right one if debug assertions are enabled | Oli Scherer | -4/+8
2022-03-31 | Move stable hash from TyS into a datastructure that can be shared with other interned types. | Oli Scherer | -0/+73
2022-03-30 | Auto merge of #95466 - Dylan-DPC:rollup-g7ddr8y, r=Dylan-DPC | bors | -2/+2
Rollup of 5 pull requests

Successful merges:

- #95294 (Document Linux kernel handoff in std::io::copy and std::fs::copy)
- #95443 (Clarify how `src/tools/x` searches for python)
- #95452 (fix since field version for termination stabilization)
- #95460 (Spellchecking compiler code)
- #95461 (Spellchecking some comments)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-03-30 | Rollup merge of #95461 - nyurik:spelling, r=lcnr | Dylan DPC | -2/+2
Spellchecking some comments

This PR attempts to clean up some minor spelling mistakes in comments.
2022-03-30 | Spellchecking some comments | Yuri Astrakhan | -2/+2
This PR attempts to clean up some minor spelling mistakes in comments
2022-03-30 | Auto merge of #94081 - oli-obk:lazy_tait_take_two, r=nikomatsakis | bors | -1/+15
Lazy type-alias-impl-trait take two

### user visible change 1: RPIT inference from recursive call sites

Lazy TAIT has an insta-stable change. The following snippet now compiles, because opaque types can now have their hidden type set from wherever the opaque type is mentioned.

```rust
fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return 42
    }
    let x: u32 = bar(false); // this errors on stable
    99
}
```

The return type of `bar` stays opaque; you can't do `bar(false) + 42`, you need to actually mention the hidden type.

### user visible change 2: divergence between RPIT and TAIT in return statements

Note that `return` statements and the trailing return expression are special with RPIT (but not TAIT). So

```rust
#![feature(type_alias_impl_trait)]
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![42];
    }
    std::iter::empty().collect() //~ ERROR `Foo` cannot be built from an iterator
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![42]
    }
    std::iter::empty().collect() // Works, magic (accidentally stabilized, not intended)
}
```

But when we are working with the return value of a recursive call, the behavior of RPIT and TAIT is the same:

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![];
    }
    let mut x = foo(false);
    x = std::iter::empty().collect(); //~ ERROR `Foo` cannot be built from an iterator
    vec![]
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![];
    }
    let mut x = bar(false);
    x = std::iter::empty().collect(); //~ ERROR `impl Debug` cannot be built from an iterator
    vec![]
}
```

### user visible change 3: TAIT does not merge types across branches

In contrast to RPIT, TAIT does not merge types across branches, so the following does not compile.

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        vec![42_i32]
    } else {
        std::iter::empty().collect()
        //~^ ERROR `Foo` cannot be built from an iterator over elements of type `_`
    }
}
```

It is easy to support, but we should make an explicit decision to include the additional complexity in the implementation (it's not much, see a721052457cf513487fb4266e3ade65c29b272d2, which needs to be reverted to enable this).

### PR formalities

previous attempt: #92007

This PR also includes #92306 and #93783, as they were reverted along with #92007 in #93893

fixes #93411
fixes #88236
fixes #89312
fixes #87340
fixes #86800
fixes #86719
fixes #84073
fixes #83919
fixes #82139
fixes #77987
fixes #74282
fixes #67830
fixes #62742
fixes #54895
2022-03-29 | Add debug assertions to some unsafe functions | Ben Kimock | -4/+4
These debug assertions are all implemented only at runtime using `const_eval_select`, and in the error path they execute `intrinsics::abort` instead of being a normal debug assertion to minimize the impact of these assertions on code size, when enabled. Of all these changes, the bounds checks for unchecked indexing are expected to be most impactful (case in point, they found a problem in rustc).
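The shape of these checks can be illustrated with a small sketch (a hypothetical illustration, not the actual `core` implementation; a `cfg!(debug_assertions)` test stands in for the `const_eval_select` dispatch described above):

```rust
/// Hypothetical example of a debug assertion inside an unsafe function.
pub unsafe fn get_byte_unchecked(slice: &[u8], index: usize) -> u8 {
    // Only checked in debug builds; abort instead of panicking to keep the
    // code-size impact of the check small.
    if cfg!(debug_assertions) && index >= slice.len() {
        std::process::abort();
    }
    // SAFETY: the caller guarantees `index < slice.len()`.
    unsafe { *slice.as_ptr().add(index) }
}
```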
2022-03-28 | Revert "Auto merge of #93893 - oli-obk:sad_revert, r=oli-obk" | Oli Scherer | -1/+15
This reverts commit 6499c5e7fc173a3f55b7a3bd1e6a50e9edef782d, reversing changes made to 78450d2d602b06d9b94349aaf8cece1a4acaf3a8.
2022-03-28 | Propagate `parallel_compiler` feature through rustc crates. Turning the feature off changes the number of built crates from 238 to 224. | klensy | -3/+6
2022-03-08 | add `#[rustc_pass_by_value]` to more types | lcnr | -70/+70
2022-03-07 | Clarify `Layout` interning. | Nicholas Nethercote | -0/+10
`Layout` is another type that is sometimes interned, sometimes not, and we always use references to refer to it, so we can't take any advantage of the uniqueness properties for hashing or equality checks.

This commit renames `Layout` as `LayoutS`, and then introduces a new `Layout` that is a newtype around an `Interned<LayoutS>`. It also interns more layouts than before. Previously layouts within layouts (via the `variants` field) were never interned, but now they are. Hence the lifetime on the new `Layout` type.

Unlike other interned types, these ones are in `rustc_target` instead of `rustc_middle`. This reflects the existing structure of the code, which does layout-specific stuff in `rustc_target` while `TyAndLayout` is generic over the `Ty`, allowing the type-specific stuff to occur in `rustc_middle`.

The commit also adds a `HashStable` impl for `Interned`, which was needed. It hashes the contents, unlike the `Hash` impl, which hashes the pointer.
2022-03-07 | Introduce `ConstAllocation`. | Nicholas Nethercote | -6/+10
Currently some `Allocation`s are interned, some are not, and it's very hard to tell at a use point which is which. This commit introduces `ConstAllocation` for the known-interned ones, which makes the division much clearer. `ConstAllocation::inner()` is used to get the underlying `Allocation`.

In some places it's natural to use an `Allocation`, in some it's natural to use a `ConstAllocation`, and in some places there's no clear choice. I've tried to make things look as nice as possible, while generally favouring `ConstAllocation`, which is the type that embodies more information. This does require quite a few calls to `inner()`.

The commit also tweaks how `PartialOrd` works for `Interned`. The previous code was too clever by half, building on `T: Ord` to make the code shorter. That caused problems with deriving `PartialOrd` and `Ord` for `ConstAllocation`, so I changed it to build on `T: PartialOrd`, which is slightly more verbose but much more standard and avoided the problems.
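As a rough sketch of the pattern (a plain reference stands in for the real `Interned<'tcx, Allocation>`, and the `Allocation` body is simplified):

```rust
// Simplified stand-in for the interpreter's allocation type.
struct Allocation {
    bytes: Vec<u8>,
}

// Newtype marking an allocation as known-interned; cheap to copy around.
#[derive(Clone, Copy)]
struct ConstAllocation<'tcx>(&'tcx Allocation);

impl<'tcx> ConstAllocation<'tcx> {
    // `inner()` is the escape hatch back to the underlying `Allocation`.
    fn inner(self) -> &'tcx Allocation {
        self.0
    }
}
```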
2022-03-06 | Auto merge of #94579 - tmiasko:target-features, r=nagisa | bors | -1/+91
Always include global target features in function attributes

This ensures that information about target features configured with `-C target-feature=...` or detected with `-C target-cpu=native` is retained for subsequent consumers of LLVM bitcode. This is crucial for linker plugin LTO, since this information is not conveyed to the plugin otherwise.

Additional test case demonstrating the issue:

```rust
extern crate core;

#[inline]
#[target_feature(enable = "aes")]
unsafe fn f(a: u128, b: u128) -> u128 {
    use core::arch::x86_64::*;
    use core::mem::transmute;
    transmute(_mm_aesenc_si128(transmute(a), transmute(b)))
}

pub fn g(a: u128, b: u128) -> u128 {
    unsafe { f(a, b) }
}

fn main() {
    let mut args = std::env::args();
    let _ = args.next().unwrap();
    let a: u128 = args.next().unwrap().parse().unwrap();
    let b: u128 = args.next().unwrap().parse().unwrap();
    println!("{}", g(a, b));
}
```

```console
$ rustc --edition=2021 a.rs -Clinker-plugin-lto -Clink-arg=-fuse-ld=lld -Ctarget-feature=+aes -O
...
  = note: LLVM ERROR: Cannot select: intrinsic %llvm.x86.aesni.aesenc
```

r? `@nagisa`
2022-03-04 | Add SmallStr | Tomasz Miąsko | -1/+90
2022-03-04 | Inline SmallCStr::deref | Tomasz Miąsko | -0/+1
2022-03-04 | Remove invalid #[cfg(tests)] in index_map | Loïc BRANSTETT | -3/+0
2022-03-02 | Auto merge of #94514 - matthiaskrgr:rollup-pdzn82h, r=matthiaskrgr | bors | -1/+1
Rollup of 9 pull requests

Successful merges:

- #94464 (Suggest adding a new lifetime parameter when two elided lifetimes should match up for traits and impls.)
- #94476 (7 - Make more use of `let_chains`)
- #94478 (Fix panic when handling intra doc links generated from macro)
- #94482 (compiler: fix some typos)
- #94490 (Update books)
- #94496 (tests: accept llvm intrinsic in align-checking test)
- #94498 (9 - Make more use of `let_chains`)
- #94503 (Provide C FFI types via core::ffi, not just in std)
- #94513 (update Miri)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-03-01 | compiler: fix some typos | cuishuang | -1/+1
2022-03-01 | Querify `global_backend_features` | Simonas Kazlauskas | -0/+12
At the very least this serves to deduplicate the diagnostics that are output about unknown target features provided via CLI.
2022-02-27 | Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot | bors | -1/+1
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, this is just adding needless overhead at compilation time (since there is only one shard statically anyway). This amounts to roughly ~10 seconds reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
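A rough sketch of the idea, assuming a simplified `Sharded` map rather than the real rustc_data_structures type (the `parallel_compiler` feature name is used here only for illustration):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::Mutex;

// With the parallel feature enabled the cache is split across several
// lock-protected shards chosen by hash; without it, one shard is enough
// and the selection is statically trivial.
const SHARDS: usize = if cfg!(feature = "parallel_compiler") { 32 } else { 1 };

struct Sharded<K, V> {
    shards: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V> Sharded<K, V> {
    fn new() -> Self {
        Sharded {
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    fn shard_for(&self, hash: u64) -> &Mutex<HashMap<K, V>> {
        // With a single shard this always picks index 0.
        &self.shards[(hash as usize) % SHARDS]
    }
}
```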
2022-02-26 | Rollup merge of #94306 - Mark-Simulacrum:dom-fixups, r=jackh726 | Matthias Krüger | -3/+13
Avoid exhausting stack space in dominator compression

This doesn't add a test case -- I ended up running into this while playing with the generated example from #43578, which we could cover with a run-make test (to avoid checking a large code snippet into the tree), but I suspect we don't want to wait for it to compile (locally it takes ~14s -- not terrible, but it doesn't seem worth it to me). In practice, stack space exhaustion is difficult to test for anyway: if we set the bound too low, a different call structure above us (e.g., a nearer ensure_sufficient_stack call) would most likely let the test pass even with the old impl.

Locally this seems to perform approximately equivalently to the recursion, but I will run perf to confirm.
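The general shape of the fix can be sketched with a plain union-find `find` (not the actual dominator code): the recursive walk up the parent chain becomes an explicit loop plus a second pass, so chain depth no longer turns into call-stack depth.

```rust
// Iterative path compression: no recursion, so a very deep parent chain
// cannot exhaust the call stack.
fn find(parent: &mut [usize], x: usize) -> usize {
    // First pass: walk up to the root, remembering the nodes visited.
    let mut path = Vec::new();
    let mut root = x;
    while parent[root] != root {
        path.push(root);
        root = parent[root];
    }
    // Second pass: point every visited node directly at the root.
    for node in path {
        parent[node] = root;
    }
    root
}
```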
2022-02-25 | Switch bootstrap cfgs | Mark Rousskov | -2/+2
2022-02-24 | Rollup merge of #94288 - Mark-Simulacrum:ser-opt, r=nnethercote | Matthias Krüger | -3/+1
Cleanup a few Decoder methods

This is just some simple follow up to #93839.

r? `@nnethercote`
2022-02-23 | Avoid exhausting stack space in dominator compression | Mark Rousskov | -3/+13
2022-02-23 | Auto merge of #93984 - nnethercote:ChunkedBitSet, r=Mark-Simulacrum | bors | -8/+2
Introduce `ChunkedBitSet` and use it for some dataflow analyses.

This reduces peak memory usage significantly for some programs with very large functions.

r? `@ghost`
2022-02-23 | Introduce `ChunkedBitSet` and use it for some dataflow analyses. | Nicholas Nethercote | -8/+2
This reduces peak memory usage significantly for some programs with very large functions, such as:

- `keccak`, `unicode_normalization`, and `match-stress-enum`, from the `rustc-perf` benchmark suite;
- `http-0.2.6` from crates.io.

The new type is used in the analyses where the bitsets can get huge (e.g. tens of thousands of bits): `MaybeInitializedPlaces`, `MaybeUninitializedPlaces`, and `EverInitializedPlaces`.

Some refactoring was required in `rustc_mir_dataflow`. All existing analysis domains are either `BitSet` or a trivial wrapper around `BitSet`, and access in a few places is done via `Borrow<BitSet>` or `BorrowMut<BitSet>`. Now that some of these domains are `ChunkedBitSet`, that no longer works. So this commit replaces the `Borrow`/`BorrowMut` usage with a new trait `BitSetExt` containing the needed bitset operations. The impls just forward these to the underlying bitset type. This required fiddling with trait bounds in a few places.

The commit also:

- Moves `static_assert_size` from `rustc_data_structures` to `rustc_index` so it can be used in the latter; the former now re-exports it so existing users are unaffected.
- Factors out some common "clear excess bits in the final word" functionality in `bit_set.rs`.
- Uses `fill` in a few places instead of loops.
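A condensed sketch of the `BitSetExt` forwarding described above (names and internals simplified from the real rustc_index / rustc_mir_dataflow code):

```rust
// The operations the dataflow framework needs from a domain.
trait BitSetExt {
    fn contains(&self, bit: usize) -> bool;
    fn insert(&mut self, bit: usize);
}

// A dense bitset implements the trait directly.
struct BitSet {
    words: Vec<u64>,
}

impl BitSetExt for BitSet {
    fn contains(&self, bit: usize) -> bool {
        self.words[bit / 64] & (1 << (bit % 64)) != 0
    }
    fn insert(&mut self, bit: usize) {
        self.words[bit / 64] |= 1 << (bit % 64);
    }
}

// A wrapper domain (e.g. one holding a chunked set) just forwards.
struct WrapperDomain {
    inner: BitSet,
}

impl BitSetExt for WrapperDomain {
    fn contains(&self, bit: usize) -> bool {
        self.inner.contains(bit)
    }
    fn insert(&mut self, bit: usize) {
        self.inner.insert(bit);
    }
}
```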
2022-02-22 | Provide copy-free access to raw Decoder bytes | Mark Rousskov | -3/+1
2022-02-21 | obligation forest docs | lcnr | -4/+5
2022-02-20 | Move Sharded maps into each QueryCache impl | Mark Rousskov | -1/+1
2022-02-20 | Auto merge of #93934 - rusticstuff:inline_ensure_sufficient_stack, r=estebank | bors | -0/+1
Allow inlining of `ensure_sufficient_stack()`

This function is monomorphized a lot, and allowing the compiler to inline it improves instruction counts and max RSS significantly in my local tests.
2022-02-19 | Adopt let else in more places | est31 | -11/+7
2022-02-15 | Address review comments. | Nicholas Nethercote | -11/+7
2022-02-15 | Rename `PtrKey` as `Interned` and improve it. | Nicholas Nethercote | -38/+163
In particular, there's now more protection against incorrect usage, because you can only create one via `Interned::new_unchecked`, which makes it more obvious that you must be careful. There are also some tests.
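A minimal sketch of the idea behind `Interned` (simplified; the real type also has a private constructor token and a `HashStable` impl that hashes the contents rather than the pointer):

```rust
use std::hash::{Hash, Hasher};

// A reference that is promised to come from an interner, so equality and
// hashing can use the pointer instead of the (possibly large) contents.
#[derive(Clone, Copy)]
struct Interned<'a, T>(&'a T);

impl<'a, T> Interned<'a, T> {
    // "unchecked": the caller must guarantee the value was interned,
    // otherwise pointer identity says nothing about value equality.
    const fn new_unchecked(value: &'a T) -> Self {
        Interned(value)
    }
}

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        std::ptr::eq(self.0, other.0)
    }
}

impl<'a, T> Eq for Interned<'a, T> {}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the address, not the contents.
        (self.0 as *const T).hash(state);
    }
}
```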
2022-02-14 | Call the method fork instead of clone and add proper comments | Santiago Pastorino | -0/+2
2022-02-12 | Allow inlining of ensure_sufficient_stack() | Hans Kratz | -0/+1
2022-02-05 | Use const generics in SipHasher128's short_write | Jakub Beránek | -46/+39
2022-02-03 | Fix `isize` optimization in `StableHasher` for big-endian architectures | Jakub Beránek | -3/+8
2022-02-03 | Auto merge of #93432 - Kobzol:stable-hash-isize-hash-compression, r=the8472 | bors | -3/+51
Compress amount of hashed bytes for `isize` values in StableHasher

This is another attempt to land https://github.com/rust-lang/rust/pull/92103, this time hopefully with a correct implementation w.r.t. stable hashing guarantees. The previous PR was [reverted](https://github.com/rust-lang/rust/pull/93014) because it could produce the [same hash](https://github.com/rust-lang/rust/pull/92103#issuecomment-1014625442) for different values even in quite simple situations. I have since added a basic [test](https://github.com/rust-lang/rust/pull/93193) that should guard against that situation, and I also added a new test in this PR, specialised for this optimization.

## Why this optimization helps

Since the original PR, I have tried to analyze why this optimization even helps (and why it especially helps for `clap`). I found that the vast majority of stable-hashing `i64` actually comes from hashing `isize` (which is converted to `i64` in the stable hasher). I only found a single place where this datatype is used directly in the compiler, and this place has also been showing up in traces that I used to find out when `isize` is being hashed. This place is `rustc_span::FileName::DocTest`; however, I suppose that isizes also come from other places, but they might not be so easy to find (there were some other entries in the trace). `clap` hashes about 8.5 million `isize`s, and all of them fit into a single byte, which is why this optimization has helped it [quite a lot](https://github.com/rust-lang/rust/pull/92103#issuecomment-1005711861).

Now, I'm not sure if special casing `isize` is the correct solution here; maybe something could be done with that `isize` inside `DocTest` or in other places, but that's for another discussion I suppose. In this PR, instead of hardcoding a special case inside `SipHasher128`, I instead put it into `StableHasher`, and only used it for `isize` (I tested that for `i64` it doesn't help, or at least not for `clap` and the few other benchmarks that I was testing).

## New approach

Since the most common case is a single byte, I added a fast path for hashing `isize` values whose positive value fits within a single byte, and a cold path for the rest of the values.

To avoid the previous correctness problem, we need to make sure that each unique `isize` value will produce a unique hash stream to the hasher. By hash stream I mean a sequence of bytes that will be hashed (a different sequence should produce a different hash, but that is of course not guaranteed). We have to distinguish different values that produce the same bit pattern when we combine them. For example, if we simply skipped the leading zero bytes for values that fit within a single byte, `(0xFF, 0xFFFFFFFFFFFFFFFF)` and `(0xFFFFFFFFFFFFFFFF, 0xFF)` would send the same hash stream to the hasher, which must not happen.

To avoid this situation, values in `[0, 0xFE]` are hashed as a single byte. When we hash a larger value (treating `isize` as `u64`), we first hash an additional byte, `0xFF`. Since `0xFF` cannot occur when we apply the single-byte optimization, we guarantee that the hash streams will be unique when hashing two values `(a, b)` and `(b, a)` if `a != b`:

1) When both `a` and `b` are within `[0, 0xFE]`, their hash streams will be different.
2) When neither `a` nor `b` is within `[0, 0xFE]`, their hash streams will be different.
3) When `a` is within `[0, 0xFE]` and `b` isn't, the hash stream for `(a, b)` will definitely not begin with `0xFF`, while the hash stream for `(b, a)` will definitely begin with `0xFF`. Therefore the hash streams will be different.

r? `@the8472`
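The encoding described above can be sketched like this, writing into a plain byte buffer instead of the real SipHasher128 sink:

```rust
// Values in [0, 0xFE] are emitted as a single byte; anything else is
// prefixed with 0xFF, which the single-byte case can never produce, so
// distinct value sequences produce distinct byte streams.
fn write_isize_compressed(out: &mut Vec<u8>, value: isize) {
    let value = value as u64; // same reinterpretation the hasher uses
    if value < 0xFF {
        out.push(value as u8);
    } else {
        out.push(0xFF);
        out.extend_from_slice(&value.to_le_bytes());
    }
}

fn main() {
    let mut a = Vec::new();
    let mut b = Vec::new();
    // The sequence (1, -1) and the swapped sequence (-1, 1) must encode
    // to different byte streams.
    for v in [1, -1] { write_isize_compressed(&mut a, v); }
    for v in [-1, 1] { write_isize_compressed(&mut b, v); }
    assert_ne!(a, b);
}
```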
2022-02-02 | Rollup merge of #92528 - tmiasko:combine-commutative, r=michaelwoerister | Matthias Krüger | -1/+18
Make `Fingerprint::combine_commutative` associative

The previous implementation swapped the lower and upper 64 bits of the result of a modular addition, so the function was non-associative.

r? `@Aaron1011`
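One way to get a combine that is both commutative and associative can be sketched like this (an illustration of the property, not the exact rustc implementation):

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Fingerprint(u64, u64);

impl Fingerprint {
    // Wrapping addition of each half is commutative and associative; it is
    // the post-addition swap of the halves that broke associativity.
    fn combine_commutative(self, other: Fingerprint) -> Fingerprint {
        Fingerprint(self.0.wrapping_add(other.0), self.1.wrapping_add(other.1))
    }
}

fn main() {
    let (a, b, c) = (Fingerprint(1, 2), Fingerprint(3, 4), Fingerprint(5, 6));
    assert_eq!(a.combine_commutative(b), b.combine_commutative(a));
    assert_eq!(
        a.combine_commutative(b).combine_commutative(c),
        a.combine_commutative(b.combine_commutative(c)),
    );
}
```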
2022-02-01 | add a rustc::query_stability lint | lcnr | -0/+1
2022-01-30 | Compress amount of hashed bytes for `isize` values in StableHasher | Jakub Beránek | -3/+51
2022-01-27 | Rollup merge of #93193 - Kobzol:stable-hash-permutation-test, r=the8472 | Matthias Krüger | -0/+42
Add test for stable hash uniqueness of adjacent field values

This PR adds a simple test to check that stable hash will produce a different hash if the order of two values that have the same combined bit pattern changes.

r? `@the8472`
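The flavour of test being added can be sketched with the standard library hasher (the real test uses rustc's `StableHasher`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Swapping two adjacent field values must change the hash, even when
    // their combined byte patterns could be confused by a sloppy encoding.
    assert_ne!(hash_of(&(0xFFu64, u64::MAX)), hash_of(&(u64::MAX, 0xFFu64)));
}
```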
2022-01-24 | Auto merge of #90842 - pierwill:localdefid-indexmap, r=wesleywiser | bors | -1/+1
Use `indexmap` to avoid sorting `LocalDefId`s

See discussion in https://github.com/rust-lang/rust/pull/90408#discussion_r745935459.

Related to work on https://github.com/rust-lang/rust/issues/90317.
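A small illustration of why an insertion-ordered map removes the need to sort (using the `indexmap` crate; plain integers stand in for `LocalDefId`s):

```rust
use indexmap::IndexMap;

fn main() {
    let mut defs: IndexMap<u32, &str> = IndexMap::new();
    defs.insert(7, "seven");
    defs.insert(3, "three");
    // Iteration follows insertion order (7 then 3): deterministic across
    // runs without ever sorting the keys.
    let order: Vec<u32> = defs.keys().copied().collect();
    assert_eq!(order, vec![7, 3]);
}
```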
2022-01-24 | Add test stable hash uniqueness of adjacent field values | Jakub Beránek | -0/+42
2022-01-24 | Revert "Do not hash leading zero bytes of i64 numbers in Sip128 hasher" | Jakub Beránek | -16/+2
2022-01-22 | Use an `indexmap` to avoid sorting `LocalDefId`s | pierwill | -1/+1
Update `indexmap` to 1.8.0. Bless test
2022-01-22 | Make `Decodable` and `Decoder` infallible. | Nicholas Nethercote | -7/+7
`Decoder` has two impls:

- opaque: this impl is already partly infallible, i.e. in some places it currently panics on failure (e.g. if the input is too short, or on a bad `Result` discriminant), and in some places it returns an error (e.g. on a bad `Option` discriminant). The number of places where either happens is surprisingly small, just because the binary representation has very little redundancy and a lot of input reading can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the `.rlink` file production, and there's a `FIXME` comment suggesting it should change to a binary format, and (b) in a few tests in non-fundamental ways. Indeed #85993 is open to remove it entirely.

And the top-level places in the compiler that call into decoding just abort on error anyway. So the fallibility is providing little value, and getting rid of it leads to some non-trivial performance improvements.

Much of this commit is pretty boring and mechanical. Some notes about a few interesting parts:

- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same optimization for small counts that the impl for `Result<T, E>` has, because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use `collect`, which is nice; the one for `Vec` uses unsafe code, because that gave better perf on some benchmarks.
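A rough before/after sketch of the API shape (illustrative names, not the real `rustc_serialize` traits):

```rust
// Before: every read threaded a Result through the caller, e.g.
//     fn read_u8(&mut self) -> Result<u8, String>;
// After: reads are infallible and malformed input just panics, which is
// what the top-level callers effectively did with the error anyway.
struct OpaqueDecoder<'a> {
    data: &'a [u8],
    position: usize,
}

impl<'a> OpaqueDecoder<'a> {
    fn read_u8(&mut self) -> u8 {
        let byte = self.data[self.position]; // panics on truncated input
        self.position += 1;
        byte
    }
}

fn main() {
    let mut d = OpaqueDecoder { data: &[42, 7], position: 0 };
    assert_eq!(d.read_u8(), 42);
    assert_eq!(d.read_u8(), 7);
}
```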
2022-01-16 | Auto merge of #92740 - cuviper:update-rayons, r=Mark-Simulacrum | bors | -2/+2
Update rayon and rustc-rayon

This updates rayon for various tools and rustc-rayon for the compiler's parallel mode.

- rayon v1.3.1 -> v1.5.1
- rayon-core v1.7.1 -> v1.9.1
- rustc-rayon v0.3.1 -> v0.3.2
- rustc-rayon-core v0.3.1 -> v0.3.2

... and indirectly, this updates all of crossbeam-* to their latest versions.

Fixes #92677 by removing crossbeam-queue, but there's still a lingering question about how tidy discovers "runtime" dependencies. None of this is truly in the standard library's dependency tree at all.