path: root/tests/assembly/simd-intrinsic-select.rs
Age | Commit message | Author | Lines
2025-07-22 | Rename `tests/assembly` into `tests/assembly-llvm` | Guillaume Gomez | -128/+0
2025-04-20 | make abi_unsupported_vector_types a hard error | Ralf Jung | -2/+5
2025-04-06 | update/bless tests | Bennet Bleßmann | -3/+2
2025-02-24 | tests: use minicore more | David Wood | -6/+3
minicore makes it much easier to add new language items to all of the existing `no_core` tests.
2025-02-08 | tests/assembly: use -Copt-level=3 instead of -O | Jubilee Young | -1/+1
2025-01-26 | Consistently use the most significant bit of vector masks | Jörn Horstmann | -24/+24
This improves the codegen for the vector `select`, `gather`, `scatter`, and boolean reduction intrinsics and fixes rust-lang/portable-simd#316.

The current behavior of most mask operations during LLVM codegen is to truncate the mask vector to `<N x i1>`, telling LLVM to use the least significant bit. The exception is the `simd_bitmask` intrinsic, which already used the most significant bit.

Since SSE/AVX instructions are defined to use the most significant bit, truncating means that LLVM has to insert a left shift to move the bit into the most significant position before the mask can actually be used. Similarly, on AArch64, mask operations like blend work bit by bit, so repeating the least significant bit across the whole lane involves shifting it into the sign position and then comparing against zero.

By shifting before truncating to `<N x i1>`, we tell LLVM that we only consider the most significant bit, removing the need for additional shift instructions in the assembly.
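The distinction between the two mask conventions described above can be sketched on a single scalar lane. This is a minimal illustration, not the actual codegen: the helper names `select_lsb` and `select_msb` are hypothetical, and they model per-lane behavior of an LSB-truncating select versus an MSB-based blend like SSE/AVX's.

```rust
// LSB convention: truncating a lane to i1 keeps only bit 0,
// which is what a plain trunc to <N x i1> expresses in LLVM IR.
fn select_lsb(mask: i32, a: i32, b: i32) -> i32 {
    if mask & 1 != 0 { a } else { b }
}

// MSB convention: SSE/AVX blend instructions test the sign bit.
// Repeating the sign bit across the lane (arithmetic shift right)
// yields an all-ones or all-zeros value usable as a bitwise mask.
fn select_msb(mask: i32, a: i32, b: i32) -> i32 {
    let m = mask >> 31; // all ones if the sign bit is set, else all zeros
    (a & m) | (b & !m)
}

fn main() {
    // A "true" lane produced by a vector comparison is all ones (-1),
    // so both conventions agree on comparison results.
    assert_eq!(select_lsb(-1, 10, 20), 10);
    assert_eq!(select_msb(-1, 10, 20), 10);

    // For an arbitrary integer mask the conventions can disagree,
    // which is why codegen must move the chosen bit into place.
    assert_eq!(select_lsb(1, 10, 20), 10); // bit 0 set, sign bit clear
    assert_eq!(select_msb(1, 10, 20), 20);
}
```

Since both conventions agree on all-ones/all-zeros comparison masks, the choice only changes which shift the backend must emit, which is exactly what this commit optimizes.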
2024-09-18 | Update the minimum external LLVM to 18 | Josh Stone | -1/+0
2024-06-19 | Remove c_unwind from tests and fix tests | Gary Guo | -1/+1
2024-03-12 | Add tests for the generated assembly of mask related simd instructions. | Jörn Horstmann | -0/+130
The tests show that the code generation currently uses the least significant bit of `<N x iX>` vector masks when converting to `<N x i1>`. This leads to an additional left-shift operation in the assembly for x86, since mask operations on x86 operate based on the most significant bit. On AArch64 the left shift is followed by a comparison against zero, which repeats the sign bit across the whole lane. The exception, which does not introduce an unneeded shift, is `simd_bitmask`, because the code generation already shifts before truncating.

By using the "C" calling convention the tests should be stable with respect to changes in register allocation, but it is possible that future LLVM updates will require updating some of the checks.

This additional instruction would be removed by the fix in #104693, which uses the most significant bit for all mask operations.
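The sign-bit behavior that `simd_bitmask` already relies on can be modeled per lane in plain Rust. This is a hypothetical scalar sketch (the helper name `bitmask` is made up), not the intrinsic's implementation: it collects the most significant bit of each lane into an integer bitmask, mirroring the "shift before truncating" step mentioned above.

```rust
// Collect the most significant (sign) bit of each i32 lane into an
// integer bitmask, lane 0 in bit 0 -- a scalar model of simd_bitmask.
fn bitmask(lanes: &[i32]) -> u32 {
    lanes.iter().enumerate().fold(0u32, |acc, (i, &lane)| {
        acc | ((((lane as u32) >> 31) & 1) << i)
    })
}

fn main() {
    // Vector comparison results are all-ones (-1) or all-zeros (0) lanes.
    assert_eq!(bitmask(&[-1, 0, -1, 0]), 0b0101);

    // Because only the sign bit is sampled, a lane of 1 (LSB set, sign
    // bit clear) contributes nothing, while i32::MIN (only the sign bit
    // set) does -- the MSB convention in action.
    assert_eq!(bitmask(&[1, i32::MIN, 0, -1]), 0b1010);
}
```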