The splat contents are printed differently on LLVM 20.

This improves the codegen for the vector `select`, `gather`, `scatter`, and
boolean reduction intrinsics, and fixes rust-lang/portable-simd#316.

The current behavior of most mask operations during LLVM codegen is to
truncate the mask vector to `<N x i1>`, telling LLVM to use the least
significant bit. The exception is the `simd_bitmask` intrinsic, which
already used the most significant bit.

Since SSE/AVX instructions are defined to use the most significant bit,
truncating means that LLVM has to insert a left shift to move the bit
into the most significant position before the mask can actually be used.
Similarly, on AArch64, mask operations like blend work bit by bit, so
repeating the least significant bit across the whole lane involves
shifting it into the sign position and then comparing against zero.

By shifting the sign bit down before truncating to `<N x i1>`, we tell
LLVM that we only consider the most significant bit, removing the need
for the additional shift instructions in the assembly.
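To make the difference concrete, here is a minimal sketch of the two lowerings for a four-lane `select` (hand-written LLVM IR with illustrative function names, not actual rustc output), assuming a `<4 x i32>` mask whose lanes are all-ones or all-zeros:

```llvm
; LSB convention: plain truncation. The i1 is bit 0 of each lane, so on
; x86 LLVM must emit an extra left shift to move that bit into the sign
; position before a blend instruction can consume the mask.
define <4 x float> @select_lsb(<4 x i32> %m, <4 x float> %a, <4 x float> %b) {
  %mask = trunc <4 x i32> %m to <4 x i1>
  %r = select <4 x i1> %mask, <4 x float> %a, <4 x float> %b
  ret <4 x float> %r
}

; MSB convention: shift the sign bit down first. LLVM recognizes
; lshr-by-31 followed by trunc as a sign-bit test, so the blend can use
; the mask register directly, with no extra shift in the assembly.
define <4 x float> @select_msb(<4 x i32> %m, <4 x float> %a, <4 x float> %b) {
  %s = lshr <4 x i32> %m, <i32 31, i32 31, i32 31, i32 31>
  %mask = trunc <4 x i32> %s to <4 x i1>
  %r = select <4 x i1> %mask, <4 x float> %a, <4 x float> %b
  ret <4 x float> %r
}
```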
that was introduced in https://github.com/llvm/llvm-project/pull/112548
This maps to the LLVM intrinsics `llvm.masked.load` and `llvm.masked.store`.
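As a rough sketch of the mapping (illustrative element type and alignment, not the exact IR rustc emits), a masked load of four `f32` lanes looks like this:

```llvm
declare <4 x float> @llvm.masked.load.v4f32.p0(ptr, i32, <4 x i1>, <4 x float>)

define <4 x float> @masked_load_f32(ptr %p, <4 x i1> %mask, <4 x float> %passthru) {
  ; Lanes whose mask bit is set are loaded from %p (4-byte alignment here);
  ; the remaining lanes are taken from %passthru instead of being read.
  %v = call <4 x float> @llvm.masked.load.v4f32.p0(ptr %p, i32 4, <4 x i1> %mask, <4 x float> %passthru)
  ret <4 x float> %v
}
```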
With opaque pointers, there's no longer a need to generate a chain
of pointer types in the intrinsic name when arguments are pointers to
pointers.
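For example, a masked load of a vector of pointers used to encode the whole pointee chain in the mangled name (something like `llvm.masked.load.v4p0i8.p0v4p0i8` for `<4 x i8*>`); with opaque pointers every pointer mangles to a single `p0`:

```llvm
; Opaque pointers: one "p0" per pointer argument, no matter how deeply
; the old typed-pointer pointee chain would have nested.
declare <4 x ptr> @llvm.masked.load.v4p0.p0(ptr, i32, <4 x i1>, <4 x ptr>)
```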
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloc`+`store`+`load` at all when it's actually a nop at the LLVM level.
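A sketch of the difference at the IR level, for a hypothetical transmute of a `#[repr(transparent)]` newtype around a `u64`:

```llvm
; Before: the value takes a pointless round trip through a stack slot,
; and we rely on LLVM to clean it up.
define i64 @transmute_via_stack(i64 %x) {
  %slot = alloca i64
  store i64 %x, ptr %slot
  %r = load i64, ptr %slot
  ret i64 %r
}

; After: the nop transmute is emitted as a direct use of the value.
define i64 @transmute_direct(i64 %x) {
  ret i64 %x
}
```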