path: tests/codegen/simd-intrinsic/simd-intrinsic-mask-reduce.rs

Age         Commit message                                              Author            Lines
2025-07-22  Rename `tests/codegen` into `tests/codegen-llvm`            Guillaume Gomez   -60/+0
2025-07-20  So many test updates x_x                                    Scott McMurray    -7/+6
2025-02-27  remove most `simd_` intrinsic declarations in tests         Folkert de Vries  -7/+3

    Instead, we can just import the intrinsics from core.
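
    A minimal sketch of what this change looks like in a test (the exact
    test wiring is an assumption, not the committed diff): rather than
    re-declaring each intrinsic locally, the test imports the shared
    declaration from core.

        #![feature(core_intrinsics)]
        #![crate_type = "lib"]

        // Before: every test carried its own declaration.
        // extern "rust-intrinsic" {
        //     fn simd_reduce_all<T>(x: T) -> bool;
        // }

        // After: import the intrinsic from core instead.
        use core::intrinsics::simd::simd_reduce_all;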

2025-01-27  Fix SIMD codegen tests on LLVM 20                           Nikita Popov      -4/+4

    The splat contents are printed differently on LLVM 20.
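
    LLVM 20 prints a constant splat with a `splat (...)` shorthand rather
    than spelling out every lane, so FileCheck patterns matching such
    constants must accept both spellings. An illustrative pattern (not the
    committed one), assuming an 8-lane `i8` mask shifted by 7:

        // Older LLVM prints `<i8 7, i8 7, ...>`; LLVM 20 prints `splat (i8 7)`.
        // CHECK: lshr <8 x i8> %{{.*}}, {{<(i8 7, )*i8 7>|splat \(i8 7\)}}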

2025-01-26  Consistently use the most significant bit of vector masks   Jörn Horstmann    -0/+65

    This improves the codegen for vector `select`, `gather`, `scatter` and
    boolean reduction intrinsics, and fixes rust-lang/portable-simd#316.

    The current behavior of most mask operations during LLVM codegen is to
    truncate the mask vector to `<N x i1>`, telling LLVM to use the least
    significant bit. The exception is the `simd_bitmask` intrinsic, which
    already uses the most significant bit.

    Since SSE/AVX instructions are defined to use the most significant
    bit, truncating means that LLVM has to insert a left shift to move the
    bit into the most significant position before the mask can actually be
    used. Similarly, on AArch64, mask operations like blend work bit by
    bit: repeating the least significant bit across the whole lane
    involves shifting it into the sign position and then comparing against
    zero.

    By shifting before truncating to `<N x i1>`, we tell LLVM that we only
    consider the most significant bit, removing the need for additional
    shift instructions in the assembly.
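
    A sketch of what such a codegen test can look like for a boolean mask
    reduction (the function name and exact CHECK lines are illustrative
    assumptions, not the committed test):

        #![feature(repr_simd, core_intrinsics)]
        #![crate_type = "lib"]

        use core::intrinsics::simd::simd_reduce_all;

        // An 8-lane mask vector; each lane is all-ones or all-zeros.
        #[repr(simd)]
        #[derive(Copy, Clone)]
        pub struct mask8x8([i8; 8]);

        // CHECK-LABEL: @reduce_all_i8x8
        #[no_mangle]
        pub unsafe fn reduce_all_i8x8(m: mask8x8) -> bool {
            // The lanes are shifted so the sign bit lands in the lowest
            // bit, then truncated to <8 x i1>; LLVM can fold this into
            // sign-bit instructions (e.g. movmskb on x86) instead of
            // emitting an extra left shift.
            // CHECK: lshr <8 x i8>
            // CHECK: trunc <8 x i8> {{.*}} to <8 x i1>
            // CHECK: call i1 @llvm.vector.reduce.and.v8i1
            simd_reduce_all(m)
        }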