| Age | Commit message | Author | Lines |
|
|
|
outside the range of a scalar
|
|
interpret: copy_provenance: avoid large intermediate buffer for large repeat counts
Copying provenance worked in an odd way where the "preparation" phase (which is supposed to just extract the necessary information from the source range) already did all the work of repeating the result N times for the target range. This was needed to use the existing `insert_presorted` function on `SortedMap`.
This PR generalizes `insert_presorted` so that we can avoid this odd structure in `copy_provenance`, and maybe even improve performance.
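A minimal sketch of the idea (hypothetical types, not rustc's actual `SortedMap`): if `insert_presorted` takes an iterator, a caller that repeats one value N times never has to materialize an N-element buffer first.
```rust
// Sketch: a sorted Vec<(K, V)> where a presorted run can be spliced in from
// an iterator, so "repeat this value N times" callers need no buffer.
struct SortedVecMap<K: Ord, V> {
    data: Vec<(K, V)>,
}

impl<K: Ord, V> SortedVecMap<K, V> {
    fn new() -> Self {
        SortedVecMap { data: Vec::new() }
    }

    /// Insert entries whose keys are presorted and greater than any existing
    /// key (the common fast path in the copy case).
    fn insert_presorted(&mut self, entries: impl Iterator<Item = (K, V)>) {
        for (k, v) in entries {
            debug_assert!(self.data.last().map_or(true, |(last, _)| *last < k));
            self.data.push((k, v));
        }
    }
}

fn main() {
    let mut map = SortedVecMap::new();
    let n = 1_000_000;
    // E.g. repeating one provenance entry for N copies of a range, lazily:
    map.insert_presorted((0..n).map(|i| (i, "prov")));
    assert_eq!(map.data.len(), n);
}
```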
|
|
r=jdonszelmann,ralfjung,traviscross
Implement `#[rustc_align_static(N)]` on `static`s
Tracking issue: https://github.com/rust-lang/rust/issues/146177
```rust
#![feature(static_align)]
#[rustc_align_static(64)]
static SO_ALIGNED: u64 = 0;
```
We need a different attribute than `rustc_align` because unstable attributes are tied to their feature (we can't have two unstable features use the same unstable attribute). Otherwise this uses all of the same infrastructure as `#[rustc_align]`.
r? `@traviscross`
|
|
|
|
interpret: copy_provenance: avoid large intermediate buffer for large repeat counts
|
|
|
|
rename erase_regions to erase_and_anonymize_regions
I find it consistently confusing that `erase_regions` does more than replace regions with `'erased`: it also anonymizes bound regions. It makes some code look real goofy, too: people writing manual folders to erase regions with a comment saying "we can't use erase_regions" :> or code that re-calls `erase_regions` on types with regions already erased, just to anonymize all the bound regions.
r? lcnr
idk how i feel about the name being almost twice as long now
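A toy illustration (these are not rustc's actual types) of why the old name undersold what the function does, i.e. the two distinct transformations the new name spells out:
```rust
// Toy model: the function both erases free regions *and* anonymizes bound
// ones, which `erase_regions` as a name only hinted at half of.
#[derive(Debug, PartialEq)]
enum Region {
    Free(&'static str),       // e.g. a named lifetime 'a
    Bound(u32, &'static str), // de Bruijn-style bound region with a name
    Erased,
}

fn erase_and_anonymize_regions(r: Region) -> Region {
    match r {
        // Free regions are replaced with 'erased...
        Region::Free(_) => Region::Erased,
        // ...but bound regions keep their binder and only lose their name.
        Region::Bound(idx, _name) => Region::Bound(idx, "anon"),
        Region::Erased => Region::Erased,
    }
}

fn main() {
    assert_eq!(erase_and_anonymize_regions(Region::Free("'a")), Region::Erased);
    assert_eq!(
        erase_and_anonymize_regions(Region::Bound(0, "'x")),
        Region::Bound(0, "anon")
    );
}
```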
|
|
|
|
|
|
|
|
const-eval: full support for pointer fragments
This fixes https://github.com/rust-lang/const-eval/issues/72 and makes `swap_nonoverlapping` fully work in const-eval, by enhancing per-byte provenance tracking to also track *which* byte of the pointer each byte is. Later, if we see all the same bytes in the exact same order, we can treat it like a whole pointer again, without ever risking a leak of the data bytes (which encode the offset into the allocation). This lifts the limitation that was discussed quite a bit in https://github.com/rust-lang/rust/pull/137280.
For a concrete piece of code that used to fail and now works properly consider this example doing a byte-for-byte memcpy in const without using intrinsics:
```rust
use std::{mem::{self, MaybeUninit}, ptr};

type Byte = MaybeUninit<u8>;

const unsafe fn memcpy(dst: *mut Byte, src: *const Byte, n: usize) {
    let mut i = 0;
    while i < n {
        *dst.add(i) = *src.add(i);
        i += 1;
    }
}

const _MEMCPY: () = unsafe {
    let ptr = &42;
    let mut ptr2 = ptr::null::<i32>();
    // Copy from ptr to ptr2.
    memcpy(&mut ptr2 as *mut _ as *mut _, &ptr as *const _ as *const _, mem::size_of::<&i32>());
    assert!(*ptr2 == 42);
};
```
What makes this code tricky is that pointers are "opaque blobs" in const-eval, we cannot just let people look at the individual bytes since *we don't know what those bytes look like* -- that depends on the absolute address the pointed-to object will be placed at. The code above "breaks apart" a pointer into individual bytes, and then puts them back together in the same order elsewhere. This PR implements the logic to properly track how those individual bytes relate to the original pointer, and to recognize when they are in the right order again.
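A sketch of that tracking logic with hypothetical types (the interpreter's real representation differs): each provenance-carrying byte remembers which pointer it came from and its position within that pointer, and provenance only becomes "whole" again when the fragments line up.
```rust
// Per-byte provenance fragment: which allocation the original pointer
// pointed into, its offset, and which byte of the pointer this byte is.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ProvFragment {
    alloc_id: u64,
    offset: u64,
    idx: u8, // this is the `idx`-th byte of the original pointer
}

/// A run of bytes reassembles a pointer only if every fragment comes from
/// the same pointer and appears in the original order (run length <= 256).
fn reassembles_pointer(bytes: &[ProvFragment]) -> bool {
    let first = match bytes.first() {
        Some(f) => *f,
        None => return false,
    };
    first.idx == 0
        && bytes.iter().enumerate().all(|(i, f)| {
            f.alloc_id == first.alloc_id
                && f.offset == first.offset
                && f.idx == i as u8
        })
}

fn main() {
    let ptr = |idx| ProvFragment { alloc_id: 1, offset: 0, idx };
    // Bytes copied in order: valid pointer provenance again.
    assert!(reassembles_pointer(&[ptr(0), ptr(1), ptr(2), ptr(3)]));
    // Bytes shuffled (e.g. an XOR-linked-list trick): provenance not restored.
    assert!(!reassembles_pointer(&[ptr(3), ptr(2), ptr(1), ptr(0)]));
}
```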
We still reject constants where the final value contains a not-fully-put-together pointer: I have no idea how one could construct an LLVM global where one byte is defined as "the 3rd byte of a pointer to that other global over there" -- and even if LLVM supports this somehow, we can leave implementing that to a future PR. It seems unlikely to me anyone would even want this, but who knows.^^
This also changes the behavior of Miri, by tracking the order of bytes with provenance and only considering a pointer to have valid provenance if all bytes are in the original order again. This is related to https://github.com/rust-lang/unsafe-code-guidelines/issues/558. It means one cannot implement XOR linked lists with strict provenance any more, which is however only of theoretical interest. Practically I am curious if anyone will show up with any code that Miri now complains about - that would be interesting data. Cc `@rust-lang/opsem`
|
|
- no need for a type alias to default the type argument
- a `Residual` impl allows using more std APIs (like `<[T; N]>::try_map`; see the sketch below)
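For reference, a small usage sketch of `<[T; N]>::try_map` (nightly-only, behind the unstable `array_try_map` feature), which is the kind of std API a `Residual` impl unlocks:
```rust
#![feature(array_try_map)]

fn main() {
    let strs = ["1", "2", "3"];
    // With Option<T> as the try-type, one parse failure makes the whole
    // result None; on success we get the array back intact.
    let nums: Option<[i32; 3]> = strs.try_map(|s| s.parse().ok());
    assert_eq!(nums, Some([1, 2, 3]));
}
```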
|
|
|
|
|
|
|
|
|
|
Co-Authored-By: Ralf Jung <post@ralfj.de>
Co-Authored-By: Oli Scherer <github333195615777966@oli-obk.de>
|
|
interpret/allocation: expose init + write_wildcards on a range
Part of https://github.com/rust-lang/miri/pull/4456, so that we can record when a foreign access to our memory has happened. Should `prepare_for_native_access()` itself also move into Miri, given that everything it does can be implemented on Miri's side?
r? `@RalfJung`
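A toy sketch of what "write wildcards on a range" means here (hypothetical types, not Miri's real `Allocation`): after foreign code may have written to a range, we must treat it as initialized with unknown contents.
```rust
// After an FFI write we no longer know what the bytes contain, so mark the
// range initialized and give every byte "wildcard" provenance.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Prov {
    None,
    Wildcard,
}

struct Alloc {
    init: Vec<bool>,
    prov: Vec<Prov>,
}

impl Alloc {
    fn write_wildcards(&mut self, range: std::ops::Range<usize>) {
        for i in range {
            self.init[i] = true;           // FFI may have written anything here,
            self.prov[i] = Prov::Wildcard; // ...including pointer bytes.
        }
    }
}

fn main() {
    let mut a = Alloc { init: vec![false; 8], prov: vec![Prov::None; 8] };
    a.write_wildcards(2..6);
    assert!(a.init[2..6].iter().all(|&b| b));
    assert_eq!(a.prov[3], Prov::Wildcard);
}
```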
|
|
|
|
|
|
default data address space
|
|
setup typos check in CI
This enables a typo check in CI, currently for the compiler only (to keep the commit with fixes small). With the current setup the exclude list is quite short, so it seems worth trying.
Also includes commits with actual typo fixes.
MCP: https://github.com/rust-lang/compiler-team/issues/817
The typo check is currently enabled for:
* ./compiler
* ./library
* ./src/bootstrap
* ./src/librustdoc
After merging, follow-up PRs can enable the check for other crates (tools) as well.
Found typos will **not break** other jobs (tests, building the compiler for a perf run) immediately. The job will be marked as red on completion in ~20 secs, so you can fix the typos whenever you want before merging the PR.
Check typos: `python x.py test tidy --extra-checks=spellcheck`
Apply typo fixes: `python x.py test tidy --extra-checks=spellcheck:fix` (only works when each typo has exactly one suggestion)
The current failure in this PR is expected and shows how typo errors are emitted. The commit with the error will be removed after r+.
|
|
|
|
|
|
Introduce `ByteSymbol`
It's like `Symbol` but for byte strings. The interner is now used for both `Symbol` and `ByteSymbol`. E.g. if you intern `"dog"` and `b"dog"` you'll get a `Symbol` and a `ByteSymbol` with the same index and the characters will only be stored once.
The motivation for this is to eliminate the `Arc`s in `ast::LitKind`, to make `ast::LitKind` impl `Copy`, and to avoid the need to arena-allocate `ast::LitKind` in HIR. The latter change reduces peak memory by a non-trivial amount on literal-heavy benchmarks such as `deep-vector` and `tuple-stress`.
`Encoder`, `Decoder`, `SpanEncoder`, and `SpanDecoder` all get some changes so that they can handle normal strings and byte strings.
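A minimal sketch of the shared-interner idea (not rustc's real interner): both symbol types are indices into one table of byte strings, so `"dog"` and `b"dog"` land in the same slot.
```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Debug)]
struct Symbol(u32); // known to be valid UTF-8
#[derive(Clone, Copy, PartialEq, Debug)]
struct ByteSymbol(u32); // arbitrary bytes

#[derive(Default)]
struct Interner {
    map: HashMap<Vec<u8>, u32>,
    strings: Vec<Vec<u8>>,
}

impl Interner {
    fn intern_bytes(&mut self, bytes: &[u8]) -> u32 {
        if let Some(&idx) = self.map.get(bytes) {
            return idx;
        }
        let idx = self.strings.len() as u32;
        self.strings.push(bytes.to_vec());
        self.map.insert(bytes.to_vec(), idx);
        idx
    }
    fn intern_str(&mut self, s: &str) -> Symbol {
        Symbol(self.intern_bytes(s.as_bytes()))
    }
    fn intern_byte_str(&mut self, b: &[u8]) -> ByteSymbol {
        ByteSymbol(self.intern_bytes(b))
    }
}

fn main() {
    let mut i = Interner::default();
    let s = i.intern_str("dog");
    let b = i.intern_byte_str(b"dog");
    assert_eq!(s.0, b.0); // same index...
    assert_eq!(i.strings.len(), 1); // ...and the bytes stored once
}
```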
|
|
This change does slow down compilation of programs that use
`include_bytes!` on large files, because the contents of those files are
now interned (hashed). This makes `include_bytes!` more similar to
`include_str!`, though `include_bytes!` contents still aren't escaped,
and hashing is still much cheaper than escaping.
|
|
|
|
|
|
such constants as patterns
|
|
interpret/allocation: Fixup type for `alloc_bytes`
This can be `FnOnce`, which helps us avoid an extra clone in rust-lang/miri#4343
r? RalfJung
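A sketch of the general point (hypothetical function, not the real `alloc_bytes` signature): an `impl FnOnce` parameter lets the caller move a buffer into the closure, while `impl Fn` would force a clone because the closure could be called more than once.
```rust
fn alloc_bytes(init: impl FnOnce() -> Vec<u8>) -> Vec<u8> {
    init()
}

fn main() {
    let buf = vec![1u8, 2, 3];
    // `move` hands `buf` to the closure outright; no clone, since FnOnce
    // guarantees the closure runs at most once.
    let bytes = alloc_bytes(move || buf);
    assert_eq!(bytes, [1, 2, 3]);
}
```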
|
|
GCI: At their def site, actually wfcheck the where-clause & always eval free lifetime-generic constants
* 1st commit: Partially addresses [#136204](https://github.com/rust-lang/rust/issues/136204) by turning const eval errors from post to pre-mono for free lifetime-generic constants.
* As the linked issue/comment states, on master there's a difference between `const _: () = panic!();` (pre-mono error) and `const _<'a>: () = panic!();` (post-mono error) which feels wrong.
* With this PR, both become pre-mono ones!
* 2nd commit: Oof, yeah, I missed that in the initial impl!
This doesn't fully address #136204 because I still haven't figured out how and where best to suppress const eval of free constants whose predicates don't hold at the def site. The motivating example is `const _UNUSED: () = () where for<'_delay> String: Copy;` which can also be found over at the tracking issue #113521.
r? compiler-errors or reassign
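As a hedged sketch of the before/after from the first commit (nightly-only; `generic_const_items` is an incomplete feature, and both items are *expected* to fail const eval at check time now):
```rust
#![feature(generic_const_items)]
#![allow(incomplete_features)]

// Both of these now produce a pre-mono const-eval error when the crate is
// checked, even though neither constant is ever used:
const _A: () = panic!();     // always errored pre-mono
const _B<'a>: () = panic!(); // used to error only post-mono

fn main() {}
```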
|
|
|
|
|
|
intrinsics, ScalarInt: minor cleanup
Taken out of https://github.com/rust-lang/rust/pull/141507 while we resolve technical disagreements in that PR.
r? `@bjorn3`
|
|
|
|
|
|
|
|
interpret: better error message for out-of-bounds pointer arithmetic and accesses
Fixes https://github.com/rust-lang/rust/issues/93881
r? `@saethlin`
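A small const-eval example (my own, not from the PR) that triggers the out-of-bounds pointer arithmetic error whose message this improves:
```rust
const _OOB: () = {
    let x = [0u8; 4];
    let p = x.as_ptr();
    // error: pointer arithmetic goes far past the end of the 4-byte allocation
    let _bad = unsafe { p.add(100) };
};

fn main() {}
```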
|
|
interpret: better error message for out-of-bounds pointer arithmetic and accesses
|
|
|
|
|
|
|
|
Add a .bss-like scheme for encoded const allocs
The check that all bytes are zero feels like it should be too slow; instead we should have a flag that we track, but that seems hard. Let's see how this perfs first.
Also we can probably stash the "it's all zero actually" flag inside one of the other struct members that's already not using an entire byte. This optimization doesn't fire all that often, so it's possible that by sticking it in the varint length field, this PR actually makes rmeta size worse.
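A sketch of the encoding idea (hypothetical enum, not the real rmeta format): if every byte of a const allocation is zero, encode just the length instead of the bytes, like a .bss section.
```rust
enum EncodedBytes<'a> {
    Zeroes { len: usize },
    Raw(&'a [u8]),
}

fn encode(bytes: &[u8]) -> EncodedBytes<'_> {
    // The "feels too slow" check from the description: scan all bytes.
    if bytes.iter().all(|&b| b == 0) {
        EncodedBytes::Zeroes { len: bytes.len() }
    } else {
        EncodedBytes::Raw(bytes)
    }
}

fn main() {
    assert!(matches!(encode(&[0; 1024]), EncodedBytes::Zeroes { len: 1024 }));
    assert!(matches!(encode(&[0, 1, 0]), EncodedBytes::Raw(_)));
}
```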
|
|
Convert `ShardedHashMap` to use `hashbrown::HashTable`
The `hash_raw_entry` feature (#56167) has finished fcp-close, so the compiler
should stop using it to allow its removal. Several `Sharded` maps were
using raw entries to avoid re-hashing between shard and map lookup, and
we can do that with `hashbrown::HashTable` instead.
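A sketch of the pattern with `hashbrown::HashTable` (simplified; rustc's `Sharded` wrapper differs, and the shard count here is made up): hash once, pick a shard from the hash, then hand the same hash to the table instead of re-hashing the key.
```rust
use hashbrown::HashTable;
use std::collections::hash_map::RandomState;
use std::hash::BuildHasher;

const SHARDS: usize = 16;

struct ShardedSet {
    hasher: RandomState,
    shards: Vec<HashTable<String>>,
}

impl ShardedSet {
    fn new() -> Self {
        ShardedSet {
            hasher: RandomState::new(),
            shards: (0..SHARDS).map(|_| HashTable::new()).collect(),
        }
    }

    fn insert(&mut self, value: String) {
        let hash = self.hasher.hash_one(&value);
        let hasher = &self.hasher;
        let shard = &mut self.shards[(hash as usize) % SHARDS];
        if shard.find(hash, |v| *v == value).is_some() {
            return;
        }
        // `hash` was computed once above; the rehash closure is only used
        // when the table grows.
        shard.insert_unique(hash, value, |v| hasher.hash_one(v));
    }

    fn contains(&self, value: &str) -> bool {
        let hash = self.hasher.hash_one(value);
        self.shards[(hash as usize) % SHARDS]
            .find(hash, |v| v.as_str() == value)
            .is_some()
    }
}

fn main() {
    let mut set = ShardedSet::new();
    set.insert("query-key".to_string());
    assert!(set.contains("query-key"));
}
```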
|
|
memory FFI can access
|
|
|
|
r=compiler-errors
compiler: Use `size_of` from the prelude instead of imported
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the prelude instead of importing or qualifying them. Apply this change across the compiler.
These functions were added to all preludes in Rust 1.80.
r? `@compiler-errors`
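A tiny illustration of the change: since Rust 1.80 these functions are in the prelude, so no `use std::mem::...` import or `mem::` qualification is needed.
```rust
fn main() {
    // Previously: use std::mem::{size_of, size_of_val, align_of};
    assert_eq!(size_of::<u64>(), 8);
    assert_eq!(size_of_val(&0u32), 4);
    let _a = align_of::<u64>(); // target-dependent, so not asserted
}
```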
|
|
|
|
interpret/provenance_map: consistently use range_is_empty
https://github.com/rust-lang/rust/pull/137704 started using this for per-ptr provenance; let's be consistent and use it also for the per-byte provenance check. Also rename the methods to avoid having both "get" and "is_empty" in the name.
r? `@oli-obk`
|