| | | |
|---|---|---|
| author | Jed Brown <jed@jedbrown.org> | 2024-01-05 21:04:41 -0700 |
| committer | Jed Brown <jed@jedbrown.org> | 2024-10-11 15:32:56 -0600 |
| commit | 0d8a978e8a55b08778ec6ee861c2c5ed6703eb6c (patch) | |
| tree | 048738de47621c87899e28ba3c90b4c2417b5d3a /compiler/rustc_codegen_cranelift | |
| parent | 01e2fff90c7ed19e1d9fb828ebc012e7b9732297 (diff) | |
intrinsics.fmuladdf{16,32,64,128}: expose llvm.fmuladd.* semantics
Add intrinsics `fmuladd{f16,f32,f64,f128}`. Each computes `(a * b) +
c`, fused if the code generator determines that (i) the target
instruction set supports a fused operation, and (ii) the fused
operation is more efficient than the equivalent separate pair of
`mul` and `add` instructions.
https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
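To make the distinction concrete, here is a small illustrative sketch (not part of the patch) on stable Rust: `f64::mul_add` always rounds as a single fused operation, a plain `a * b + c` rounds twice, and `fmuladd` is allowed to return either result, whichever the backend decides is cheaper.

```rust
fn main() {
    // 2^27 + 1 is exactly representable in f64, but its square
    // (2^54 + 2^28 + 1) needs 55 significant bits and must be rounded.
    let a = ((1u64 << 27) + 1) as f64;
    let prod = a * a; // rounds down to 2^54 + 2^28

    // Unfused: `a * a` rounds to the same value as `prod`, so the difference is 0.
    let unfused = a * a - prod;

    // Fused: a single rounding keeps the low bit, recovering exactly 1.0.
    let fused = a.mul_add(a, -prod);

    assert_eq!(unfused, 0.0);
    assert_eq!(fused, 1.0);

    // An `fmuladd`-style operation may legally produce either of these two
    // values, depending on whether the target fuses the multiply and add.
    println!("unfused = {unfused}, fused = {fused}");
}
```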
Miri support is included for `f32` and `f64`.
codegen_cranelift uses the `fma` function from libc, which is a
correct implementation but lacks the desired performance semantics. I
think this requires an update to Cranelift to expose a suitable
instruction in its IR.
I have not tested with codegen_gcc, but it should behave the same
way (using `fma` from libc).
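For context on the performance concern: on a target without a hardware FMA instruction, a libm-style `fma` has to reproduce the single-rounding result in software, which is far slower than a separate multiply and add. The sketch below is a rough, hypothetical user-level illustration of that trade-off (the helper `mul_add_fast` is made up for this example and is not part of the change); `fmuladd` lets the code generator make the same choice automatically.

```rust
/// Hypothetical helper: use a fused multiply-add only when the hardware can
/// do it cheaply, otherwise fall back to a separate multiply and add rather
/// than a slow software-emulated `fma` from libm.
fn mul_add_fast(a: f64, b: f64, c: f64) -> f64 {
    #[cfg(target_arch = "x86_64")]
    {
        if std::is_x86_feature_detected!("fma") {
            // Hardware FMA: one instruction, one rounding.
            return a.mul_add(b, c);
        }
    }
    // No cheap FMA available: two roundings, but no emulation cost.
    a * b + c
}

fn main() {
    println!("{}", mul_add_fast(1.5, 2.0, 0.25));
}
```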
Diffstat (limited to 'compiler/rustc_codegen_cranelift')
| | | |
|---|---|---|
| -rw-r--r-- | compiler/rustc_codegen_cranelift/src/intrinsics/mod.rs | 5 |
1 file changed, 4 insertions, 1 deletion
    diff --git a/compiler/rustc_codegen_cranelift/src/intrinsics/mod.rs b/compiler/rustc_codegen_cranelift/src/intrinsics/mod.rs
    index 19e5adc2538..35f0ccff3f9 100644
    --- a/compiler/rustc_codegen_cranelift/src/intrinsics/mod.rs
    +++ b/compiler/rustc_codegen_cranelift/src/intrinsics/mod.rs
    @@ -328,6 +328,9 @@ fn codegen_float_intrinsic_call<'tcx>(
             sym::fabsf64 => ("fabs", 1, fx.tcx.types.f64, types::F64),
             sym::fmaf32 => ("fmaf", 3, fx.tcx.types.f32, types::F32),
             sym::fmaf64 => ("fma", 3, fx.tcx.types.f64, types::F64),
    +        // FIXME: calling `fma` from libc without FMA target feature uses expensive software emulation
    +        sym::fmuladdf32 => ("fmaf", 3, fx.tcx.types.f32, types::F32), // TODO: use cranelift intrinsic analogous to llvm.fmuladd.f32
    +        sym::fmuladdf64 => ("fma", 3, fx.tcx.types.f64, types::F64), // TODO: use cranelift intrinsic analogous to llvm.fmuladd.f64
             sym::copysignf32 => ("copysignf", 2, fx.tcx.types.f32, types::F32),
             sym::copysignf64 => ("copysign", 2, fx.tcx.types.f64, types::F64),
             sym::floorf32 => ("floorf", 1, fx.tcx.types.f32, types::F32),
    @@ -381,7 +384,7 @@ fn codegen_float_intrinsic_call<'tcx>(
     
         let layout = fx.layout_of(ty);
         let res = match intrinsic {
    -        sym::fmaf32 | sym::fmaf64 => {
    +        sym::fmaf32 | sym::fmaf64 | sym::fmuladdf32 | sym::fmuladdf64 => {
                 CValue::by_val(fx.bcx.ins().fma(args[0], args[1], args[2]), layout)
             }
             sym::copysignf32 | sym::copysignf64 => {
