|
Implementation of `#[repr(packed(n))]` (RFC 1399).
Tracking issue https://github.com/rust-lang/rust/issues/33158.
|
|
-Zshare-generics
|
|
Better document the implementors of Clone and Copy
There are two parts to this change. The first part is a change to the compiler and to the standard library (specifically, libcore) to allow implementations of `Clone` and `Copy` to be written for a subset of builtin types. Adding these implementations to libcore means they now show up in the documentation. This is a [breaking-change] for users of `#![no_core]`, because they will now have to supply their own copies of the implementations of `Clone` and `Copy` that were added to libcore.
The second part is purely a documentation change to document the other implementors of `Clone` and `Copy` that cannot be described in Rust code (yet) and are thus provided by the compiler.
Fixes #25893
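A rough sketch of the shape of those libcore impls (the macro name is illustrative; coherence means only libcore itself can write these impls for the builtin types, so the invocation is left commented out):
```rust
// Illustrative only: ordinary crates cannot impl Clone/Copy for primitives.
macro_rules! impl_clone_copy_for_primitives {
    ($($t:ty)*) => {
        $(
            impl Clone for $t {
                #[inline]
                fn clone(&self) -> $t { *self }
            }
            impl Copy for $t {}
        )*
    }
}
// Inside libcore this would be invoked roughly as:
// impl_clone_copy_for_primitives! { usize u8 u16 u32 u64 isize i8 i16 i32 i64 f32 f64 bool char }
```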
|
|
We only support stack probes on x86 and x86_64.
Other arches are already ignored.
|
|
Allow niche-filling dataful variants to be represented as a ScalarPair
r? @eddyb
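For illustration (not from the PR itself), the kind of layout this enables on current compilers, where the dataful variant is itself a pair of scalars and a niche in one of its fields encodes the other variant:
```rust
use std::mem::size_of;

fn main() {
    // `Some((&u8, bool))` is a pair of scalars, and `None` is encoded in unused
    // bit patterns (a niche) of one of the fields, so no extra discriminant
    // space is needed.
    assert_eq!(
        size_of::<Option<(&u8, bool)>>(),
        size_of::<(&u8, bool)>()
    );
}
```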
|
|
implement minmax intrinsics
This adds the `simd_{fmin,fmax}` intrinsics, which do a vertical (lane-wise) `min`/`max` for floating point vectors that's equivalent to Rust's `min`/`max` for `f32`/`f64`.
It might make sense to make `{f32,f64}::{min,max}` use the `minnum` and `maxnum` intrinsics as well.
---
~~HELP: I need some help with these. Either I should go to sleep or there is something I must be missing. AFAICT I am calling the `maxnum` builder correctly, yet rustc/LLVM seem to insert a call to `llvm.minnum` there instead...~~ EDIT: Rust's LLVM version is too old :/
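To illustrate the intended lane-wise semantics with plain Rust (arrays standing in for SIMD vectors; the intrinsics themselves operate on SIMD types):
```rust
fn main() {
    let a = [1.0_f32, 5.0, -3.0, f32::NAN];
    let b = [2.0_f32, 4.0, -7.0, 0.0];
    // simd_fmin is a lane-wise f32::min / f64::min.
    let min: Vec<f32> = a.iter().zip(&b).map(|(x, y)| x.min(*y)).collect();
    assert_eq!(&min[..3], &[1.0, 4.0, -7.0]);
    // Like f32::min, a lane that pairs NaN with a number yields the number.
    assert_eq!(min[3], 0.0);
}
```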
|
|
Simply checking for the presence of `llvm.memset` is too brittle because
this intrinsic can be used for seemingly trivial operations, such as
zero-initializing a `RawVec`.
|
|
Introduce unsafe offset_from on pointers
Adds `intrinsics::exact_div` to take advantage of the unsafety, which reduces the implementation from
```asm
sub rcx, rdx
mov rax, rcx
sar rax, 63
shr rax, 62
lea rax, [rax + rcx]
sar rax, 2
ret
```
down to
```asm
sub rcx, rdx
sar rcx, 2
mov rax, rcx
ret
```
(for `*const i32`)
See discussion on the `offset_to` tracking issue https://github.com/rust-lang/rust/issues/41079
Some open questions
- Would you rather I split the intrinsic PR from the library PR?
- Do we even want the safe version of the API? https://github.com/rust-lang/rust/issues/41079#issuecomment-374426786 I've added some text to its documentation that even if it's not UB, it's useless to use it between pointers into different objects.
and todos
- [x] ~~I need to make a codegen test~~ Done
- [x] ~~Can the subtraction use nsw/nuw?~~ No, it can't https://github.com/rust-lang/rust/pull/49297#discussion_r176697574
- [x] ~~Should there be `usize` variants of this, like there are now `add` and `sub` that you almost always want over `offset`? For example, I imagine `sub_ptr` that returns `usize` and where it's UB if the distance is negative.~~ Can wait for later; C gives a signed result https://github.com/rust-lang/rust/issues/41079#issuecomment-375842235, so we might as well, and this existing to go with `offset` makes sense.
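For reference, a small usage sketch of the new method (stable today; it was behind a feature gate at the time of this PR):
```rust
fn main() {
    let a = [1i32, 2, 3, 4];
    let p: *const i32 = &a[1];
    let q: *const i32 = &a[3];
    // The distance is measured in units of T (here 4-byte i32s), and both
    // pointers must point into the same allocated object.
    unsafe {
        assert_eq!(q.offset_from(p), 2);
        assert_eq!(p.offset_from(q), -2);
    }
}
```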
|
|
adds simd_select intrinsic
The select SIMD intrinsic is used to select elements from two SIMD vectors using a mask:
```rust
let mask = b8x4::new(true, false, false, true);
let a = f32x4::new(1., 2., 3., 4.);
let b = f32x4::new(5., 6., 7., 8.);
assert_eq!(simd_select(mask, a, b), f32x4::new(1., 6., 7., 4.));
```
The number of lanes in the mask must match the number of lanes in the vectors, but the lane width of the mask does not need to match that of the vectors. The mask is required to be a vector of signed integers.
Note: this intrinsic will be exposed via `std::simd`'s vector masks - users are not expected to use it directly.
|
|
This reverts commit 16ac85ce4dce1e185f2e6ce27df3833e07a9e502.
|
|
Since #47964 was merged, 64-bit mips started passing all structures
using 64-bit chunks regardless of their contents. The
repr-transparent-aggregates test needs updating to cope with this.
|
|
The differences are not part of what the test is testing, so they were simply removed.
|
|
Fixes #47311.
r? @nrc
|
|
Remove experimental -Zremap-path-prefix-from/to, and replace it with
the stabilized --remap-path-prefix=from=to variant.
This is an implementation for issue #41555.
|
|
You can now choose between the following:
- `#[unwind(allowed)]`
- `#[unwind(aborts)]`
Per rust-lang/rust#48251, the default is `#[unwind(allowed)]`, though
I think we should change this eventually.
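A hedged sketch of the two forms as they looked on nightly at the time (the `unwind_attributes` feature-gate name is my assumption; the attribute has since been superseded, so this is historical rather than something to compile today):
```rust
#![feature(unwind_attributes)] // feature name assumed; nightly-only at the time

// Unwinding may propagate out of this function across the C ABI (the default).
#[unwind(allowed)]
extern "C" fn may_unwind() {
    panic!("propagates across the FFI boundary");
}

// A panic here aborts the process instead of unwinding across the C ABI.
#[unwind(aborts)]
extern "C" fn aborts_instead() {
    panic!("turns into an abort");
}

fn main() {
    // Take the functions' addresses so the example builds without invoking them.
    let _fns: [extern "C" fn(); 2] = [may_unwind, aborts_instead];
}
```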
|
|
Fix oversized loads on x86_64 SysV FFI calls
The x86_64 SysV ABI should use exact sizes for small structs passed in
registers, i.e. a struct that occupies 3 bytes should use an i24,
instead of the i32 it currently uses.
Refs #45543
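For illustration (the type and function are mine), the kind of by-value call this affects:
```rust
#[repr(C)]
#[derive(Clone, Copy)]
struct Rgb {
    r: u8,
    g: u8,
    b: u8,
}

// A 3-byte struct passed by value under the x86_64 SysV ABI: with this fix the
// load that materializes the register argument is an i24 rather than an i32
// that could read one byte past the end of the struct.
extern "C" fn luma(c: Rgb) -> u8 {
    ((c.r as u16 + c.g as u16 + c.b as u16) / 3) as u8
}

fn main() {
    assert_eq!(luma(Rgb { r: 3, g: 6, b: 9 }), 6);
}
```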
|
|
Implement TrustedLen for Take<Repeat> and Take<RangeFrom>
This will allow optimization of simple `repeat(x).take(n).collect()` iterators, which are currently not vectorized and have capacity checks.
This only covers a few adapters over `Repeat` and `RangeFrom`, which might be enough for simple cases, but doesn't optimize more complex ones. Namely, Cycle, StepBy, Filter, FilterMap, Peekable, SkipWhile, Skip, FlatMap, Fuse and Inspect are not marked `TrustedLen` when the inner iterator is infinite.
Previous discussion can be found in #47082
r? @alexcrichton
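The kind of code this helps, per the description above:
```rust
use std::iter::repeat;

fn main() {
    // Take<Repeat<_>> now reports an exact length via TrustedLen, so collect()
    // can reserve the full capacity up front and the fill loop can vectorize.
    let v: Vec<u32> = repeat(7).take(1024).collect();
    assert_eq!(v.len(), 1024);

    // Same idea for Take<RangeFrom<_>>.
    let w: Vec<u32> = (10u32..).take(5).collect();
    assert_eq!(w, [10, 11, 12, 13, 14]);
}
```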
|
|
Make inline assembly volatile if it has no outputs. Fixes #46026
|
|
Use a range to identify SIGSEGV in stack guards
Previously, the `guard::init()` and `guard::current()` functions were
returning a `usize` address representing the top of the stack guard,
respectively for the main thread and for spawned threads. The `SIGSEGV`
handler on `unix` targets checked if a fault was within one page below that
address, if so reporting it as a stack overflow.
Now `unix` targets report a `Range<usize>` representing the guard memory,
so it can cover arbitrary guard sizes. Non-`unix` targets which always
return `None` for guards now do so with `Option<!>`, so they don't pay any
overhead.
For `linux-gnu` in particular, the previous guard upper-bound was
`stackaddr + guardsize`, as the protected memory was *inside* the stack.
This was a glibc bug, and starting from 2.27 they are moving the guard
*past* the end of the stack. However, there's no simple way for us to know
where the guard page actually lies, so now we declare it as the whole range
of `stackaddr ± guardsize`, and any fault therein will be called a stack
overflow. This fixes #47863.
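A hedged sketch of the handler check described above (names are illustrative, not the actual std internals):
```rust
use std::ops::Range;

// A fault counts as a stack overflow iff its address falls inside the reported
// guard range; targets with no guard report None and skip the check entirely.
fn is_stack_overflow(fault_addr: usize, guard: Option<&Range<usize>>) -> bool {
    guard.map_or(false, |g| g.contains(&fault_addr))
}

fn main() {
    let guard = 0x7000_0000usize..0x7000_2000;
    assert!(is_stack_overflow(0x7000_1000, Some(&guard)));
    assert!(!is_stack_overflow(0x7100_0000, Some(&guard)));
    assert!(!is_stack_overflow(0x7000_1000, None));
}
```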
|
|
rollup
|
|
Teach rustc about DW_AT_noreturn and a few more DIFlags
We achieve two small things with this PR:
1. We provide definitions for a few additional llvm debuginfo flags
2. We _use_ one of these new flags, `FlagNoReturn`, and add it to debuginfo for functions with the never return type (`!`).
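For context, the kind of function whose debuginfo now carries `FlagNoReturn` (and thus `DW_AT_noreturn`):
```rust
// A diverging function: its return type is the never type `!`.
fn fail(msg: &str) -> ! {
    panic!("unrecoverable: {}", msg)
}

fn main() {
    if std::env::args().count() > 1_000 {
        fail("too many arguments");
    }
}
```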
|
|
This commit changes the ABI of SIMD types in the "Rust" ABI to unconditionally
be passed via pointers instead of being passed as immediates. This should fix a
longstanding issue, #44367, where SIMD-using programs ended up showing very odd
behavior at runtime because the ABI between functions was mismatched.
As a bit of a recap, this is sort of an LLVM bug and sort of an LLVM feature
(today's behavior). LLVM will generate code for a function solely looking at the
function it's generating, including calls to other functions. Let's then say
you've got something that looks like:
```llvm
define void @foo() { ; no target features enabled
call void @bar(<i64 x 4> zeroinitializer)
ret void
}
define void @bar(<i64 x 4>) #0 { ; enables the AVX feature
...
}
```
LLVM will codegen the call to `bar` *without* using AVX registers because `foo`
doesn't have access to these registers. Instead it's generated with emulation
that uses two 128-bit registers. The `bar` function, on the other hand, will
expect its argument in an AVX register (as it has AVX enabled). This means we've
got a codegen problem!
Comments on #44367 have some more contextual information but the crux of the
issue is that if we want SIMD to work in general we'll need to ensure that
whenever a function calls another, the ABI of the arguments being passed is in
agreement.
One possible solution to this would be to insert "shim functions" where whenever
a `target_feature` mismatch is detected the compiler inserts a shim function
where you pass arguments via memory to the shim and then the shim loads the
values and calls the target function (where the shim and the target have the
same target features enabled). This unfortunately is quite nontrivial to
implement in rustc today (especially when accounting for function pointers and
such).
This commit takes a different solution, *always* passing SIMD arguments through
memory instead of passing as immediates. This strategy solves the problem at the
LLVM layer because the ABI between two functions never uses SIMD registers. This
also shouldn't be a hit to performance because SIMD performance is thought to
often rely on inlining anyway, where a `call` instruction, even if using SIMD
registers, would be disastrous to performance regardless. LLVM should then be
more than capable of fixing all our memory usage to use registers instead after
enough inlining has been performed.
Note that there's a few caveats to this commit though:
* The "platform intrinsic" ABI is omitted from "always pass via memory". This
ABI is used to define intrinsics like `simd_shuffle4` where LLVM and rustc
need to have the arguments as an immediate.
* Additionally this commit does *not* fix the `extern` ("C") ABI. This means
that the bug in #44367 can still happen when using non-Rust-ABI functions. My
hope is that before stabilization we can ban and/or warn about SIMD types in
these functions (as AFAIK there's not much motivation for SIMD types to be there anyway),
but I'll leave that for a later commit and if this is merged I'll file a
follow-up issue.
All in all this...
Closes #44367
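A Rust-level sketch of the mismatch scenario (function names are illustrative):
```rust
#[cfg(target_arch = "x86_64")]
mod demo {
    use std::arch::x86_64::{__m256i, _mm256_add_epi64, _mm256_setzero_si256};

    // Compiled with AVX enabled: wants its SIMD argument and result in YMM registers.
    #[target_feature(enable = "avx")]
    pub unsafe fn double(v: __m256i) -> __m256i {
        _mm256_add_epi64(v, v)
    }

    // Compiled without AVX. Before this change, the Rust-ABI call below could use a
    // different register convention than `double` expects; passing SIMD values
    // indirectly (through memory) makes both sides agree.
    pub fn call_it() -> __m256i {
        unsafe { double(_mm256_setzero_si256()) }
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        // Only exercise the call on CPUs that actually have AVX.
        if is_x86_feature_detected!("avx") {
            let _ = demo::call_it();
        }
    }
}
```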
|
|
Implement repr(transparent)
r? @eddyb for the functional changes. The bulk of the PR is error messages and docs, might be good to have a doc person look over those.
cc #43036
cc @nox
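For illustration (names are mine), the basic shape of the feature:
```rust
// A transparent newtype is guaranteed the same layout and ABI as its single
// non-zero-sized field, so it can cross FFI wherever a plain f64 could.
#[repr(transparent)]
struct Inches(f64);

extern "C" fn c_compatible(len: Inches) -> f64 {
    len.0 * 25.4
}

fn main() {
    assert_eq!(c_compatible(Inches(2.0)), 50.8);
}
```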
|
|