intrinsics.fmuladdf{16,32,64,128}: expose llvm.fmuladd.* semantics
Add intrinsics `fmuladd{f16,f32,f64,f128}`. Each computes `(a * b) + c`, to be fused if the code generator determines that (i) the target instruction set has support for a fused operation, and (ii) the fused operation is more efficient than the equivalent, separate pair of `mul` and `add` instructions. See https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic.

Miri support is included for `f32` and `f64`.

codegen_cranelift uses the `fma` function from libc, which is a correct implementation but does not provide the desired performance semantics. I think this requires an update to Cranelift to expose a suitable instruction in its IR. I have not tested with codegen_gcc, but it should behave the same way (using `fma` from libc).
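To make the rounding consequences concrete, here is a minimal standalone sketch (not part of this change); `fmuladd_unfused` and `fmuladd_fused` are hypothetical helpers, with `f64::mul_add` standing in for the fused lowering:

```rust
/// Unfused evaluation: rounds once after the multiply and once after the add.
fn fmuladd_unfused(a: f64, b: f64, c: f64) -> f64 {
    (a * b) + c
}

/// Fused evaluation: a single rounding step, as an FMA instruction would do.
/// `f64::mul_add` is the stable way to request this behavior explicitly.
fn fmuladd_fused(a: f64, b: f64, c: f64) -> f64 {
    a.mul_add(b, c)
}

fn main() {
    // `llvm.fmuladd.*` permits either evaluation; the results can differ by
    // one rounding step, as they do here.
    let (a, b, c) = (0.1_f64, 10.0, -1.0);
    println!("unfused: {:e}", fmuladd_unfused(a, b, c)); // 0e0
    println!("fused:   {:e}", fmuladd_fused(a, b, c));   // ~5.55e-17
}
```

The difference from the existing `fma` intrinsics is exactly that either outcome is acceptable for `fmuladd`, so the backend is free to pick whichever form is faster on the target.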
parent 01e2fff90c
commit 0d8a978e8a
11 changed files with 222 additions and 1 deletion
@@ -86,6 +86,11 @@ fn get_simple_intrinsic<'ll>(
         sym::fmaf64 => "llvm.fma.f64",
         sym::fmaf128 => "llvm.fma.f128",
+
+        sym::fmuladdf16 => "llvm.fmuladd.f16",
+        sym::fmuladdf32 => "llvm.fmuladd.f32",
+        sym::fmuladdf64 => "llvm.fmuladd.f64",
+        sym::fmuladdf128 => "llvm.fmuladd.f128",
 
         sym::fabsf16 => "llvm.fabs.f16",
         sym::fabsf32 => "llvm.fabs.f32",
         sym::fabsf64 => "llvm.fabs.f64",
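For completeness, this is roughly how the new intrinsic can be exercised from nightly code; the `core::intrinsics::fmuladdf64` path and the safe (no `unsafe` block) call are assumptions based on how the existing `fmaf64` intrinsic is exposed, not something shown in this diff:

```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

fn main() {
    // Assumed to sit next to fmaf64 in core::intrinsics and to be callable
    // like the other float arithmetic intrinsics. The backend may emit a
    // fused fma or a separate mul + add, whichever it considers faster.
    let x = core::intrinsics::fmuladdf64(0.1, 10.0, -1.0);
    println!("{x:e}");
}
```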