[mlir][arith] Add support for `arith.flush_denormals` emulation (#192660)
Add a lowering pattern and a new pass, `arith-expand-flush-denormals`, that
rewrites `arith.flush_denormals` ops using integer arithmetic. This
lowering is useful for target architectures that cannot pattern-match
`arith.flush_denormals` plus other FP arithmetic into special instructions
with FTZ semantics.
Assisted-by: claude-opus-4.7-thinking-high
Depends on #192641.
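For reference, a hedged Python sketch of what flushing an f32 denormal with integer arithmetic looks like. The function name and exact bit-level check are illustrative only, not the pass's actual rewrite patterns:

```python
import struct

def flush_denormals_f32(x: float) -> float:
    # Hypothetical reference semantics: reinterpret the f32 as an i32;
    # a denormal has a zero biased-exponent field and a nonzero mantissa.
    # Flushing keeps only the sign bit, producing a signed zero.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF       # 23-bit mantissa
    if exponent == 0 and mantissa != 0:
        bits &= 0x80000000           # flush to signed zero
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

Normal values, zeros, infinities, and NaNs all pass through unchanged, since only the exponent-zero/mantissa-nonzero encoding is flushed.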
[X86][clang-cl] Make AVX10.2 map to the same target-cpu as AVX10.1 (#193147)
Diamondrapids contains a large feature set, APX, which should not be
enabled by AVX10.2.
[DAG] Reassociate (add (add X, Y), X) --> add(add(X, X), Y) (#162242)
Attempt to bring together self-additions, to help with folding into shift/mul/address patterns.
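A scalar sketch of why the reassociated form is preferable (illustrative Python; the real combine operates on SelectionDAG nodes):

```python
def before_combine(x: int, y: int) -> int:
    # the matched pattern: (add (add X, Y), X)
    return (x + y) + x

def after_combine(x: int, y: int) -> int:
    # (add (add X, X), Y): the doubled operand is now explicit, so
    # X + X can fold to a shift (X << 1), a mul, or scaled addressing
    return (x << 1) + y
```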
[runtimes] Protect use of undefined CMAKE_Fortran_COMPILER (#193210)
Unlike everything else in CMake, cmake_path does not assume a default
value for undefined variables, but instead throws an error:
```
CMake Error at cmake/config-Fortran.cmake:77 (cmake_path):
cmake_path undefined variable for input path.
Call Stack (most recent call first):
CMakeLists.txt:284 (include)
```
Protect the use of cmake_path to not trigger this error when
CMAKE_Fortran_COMPILER is undefined.
Fixes the flang-aarch64-out-of-tree buildbot after #171610.
[Polly] Disable PCH reuse for unit tests (#193209)
Polly library targets already disable PCH reuse because Polly
unconditionally builds with -fno-rtti and -fno-exceptions. Reusing LLVM
PCHs that were built with RTTI or exceptions enabled is incompatible
with Clang when compiling Polly targets under those flags.
After 47eb8b43c990 enabled PCH reuse for unit tests, Polly unit tests
can hit the same mismatch as the library targets. Pass DISABLE_PCH_REUSE
through the shared add_polly_unittest wrapper so all Polly unit tests
follow the existing Polly target policy.
cc @aengelke -- a minor fix for polly.
[CIR][NFCI] Remove 'isConstant' from getCIRLinkageForX (#193100)
This variable has since disappeared from the classic compiler, and we
weren't using it anywhere anyway. This patch gets us back in sync with
the classic codegen for these interfaces.
[AMDGPU] Multi dword spilling for unaligned tuples (#183701)
When spilling unaligned tuples, rather than breaking the
spill into 32-bit accesses, spill the first register as a single
32-bit spill and spill the remainder of the tuple as an aligned tuple.
Some additional bookkeeping is required in the spilling
loop to manage the state.
References: https://github.com/llvm/llvm-project/pull/177317
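A toy model of that split (hypothetical Python, not the AMDGPU spill code; register indices stand in for the tuple's lanes):

```python
def spill_plan(first_reg: int, num_regs: int, align: int):
    # Sketch of the strategy above: when a tuple's first register is
    # misaligned, spill it as a single 32-bit access; the rest of the
    # tuple then starts aligned and can be spilled in one access.
    plan = []
    reg, left = first_reg, num_regs
    if reg % align != 0 and left > 0:
        plan.append((reg, 1))        # lone 32-bit spill of the unaligned head
        reg, left = reg + 1, left - 1
    if left > 0:
        plan.append((reg, left))     # aligned multi-dword spill of the rest
    return plan
```

For example, a 4-register tuple starting at an odd register under 2-register alignment splits into one single-register spill plus one 3-register spill.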
[llvm-cov] Fix error propagation in CoverageMapping::load() (#193197)
Fix a subtle issue on the error path: if loadFromFile() fails, there is no error to consume.
[InstCombine] fold fabs(uitofp(i16 a) - uitofp(i16 b)) < 1.0 to a == b (#191378)
Fixes: https://github.com/llvm/llvm-project/issues/187088
When `a` and `b` have a bit width (16 bits) smaller than the float32
mantissa (24 bits), their conversions to float are exact, so their
absolute difference is an integer that is 1.0 or greater whenever
a != b. Conversely, if their difference is < 1.0, then a == b.
This patch exploits this fact to fold the expression to a single
icmp.
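The fold can be sanity-checked numerically (Python sketch; `as_f32` emulates f32 rounding and the helper names are made up):

```python
import struct

def as_f32(x: float) -> float:
    # round a Python float to the nearest IEEE-754 binary32 value
    return struct.unpack("<f", struct.pack("<f", x))[0]

def before_fold(a: int, b: int) -> bool:
    # fabs(uitofp(a) - uitofp(b)) < 1.0, computed in f32
    return abs(as_f32(as_f32(float(a)) - as_f32(float(b)))) < 1.0

def after_fold(a: int, b: int) -> bool:
    # the folded form: a single integer compare
    return a == b
```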
Revert "[clang-tidy][NFC] add numeric include for transform_reduce" (#193200)
Experimentation showed this didn't fix the build failure, so revert it
to keep the trunk clean.
Reverts llvm/llvm-project#193165
[mlir][arith] Add `arith.flush_denormals` operation (#192641)
Add a new `arith.flush_denormals` operation. The operation takes a
floating-point value as input and returns zero if the value is denormal.
If the input is not denormal, the operation passes through the input.
This commit also adds support to the `ArithToAPFloat` infrastructure.
Running example:
```mlir
%flush_a = arith.flush_denormals %a : f32
%flush_b = arith.flush_denormals %b : f32
%res = arith.addf %flush_a, %flush_b : f32
%flush_res = arith.flush_denormals %res : f32
```
The exact lowering path depends on the backend and is not implemented as
part of this PR:
- Per-instruction mode. E.g., on NVIDIA architectures, the above example
can lower to `add.ftz.f32 dest, a, b`.
[11 lines not shown]
[mlir] Add option to run CSE between greedy rewriter iterations (#193081)
The greedy pattern rewrite driver previously only deduplicated constant
ops between iterations (via the operation folder). Structurally
identical non-constant subexpressions remained distinct SSA values,
blocking fold patterns that only fire when operands match. Reaching the
true fixpoint required chaining an external `cse,canonicalize,...`
pipeline.
Add an opt-in `cseBetweenIterations` flag on `GreedyRewriteConfig` that
runs full CSE on the scoped region after each pattern-application
iteration, and surface it as a `cse-between-iterations` option on the
canonicalizer pass. Off by default to preserve existing performance
characteristics.
Assisted-by: Claude Code
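A toy value-numbering sketch of what the extra CSE step buys (hypothetical Python, not MLIR's CSE implementation): once the duplicate is merged, a later op's operands match, so operand-matching fold patterns can fire.

```python
def cse(ops):
    # ops: list of (name, expr) where expr is a hashable
    # (opcode, operand, ...) tuple. Structurally identical exprs
    # collapse to the first name that produced them.
    seen = {}     # expression -> canonical name
    renames = {}  # duplicate name -> canonical name
    kept = []
    for name, expr in ops:
        # rewrite operands through earlier renames first
        expr = (expr[0],) + tuple(renames.get(o, o) for o in expr[1:])
        if expr in seen:
            renames[name] = seen[expr]
        else:
            seen[expr] = name
            kept.append((name, expr))
    return kept, renames
```

Here `v1` collapses into `v0`, turning `mul v0, v1` into `mul v0, v0`, which a "both operands equal" fold can now match.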
[AMDGPU] Prefer mul24 over mad24 on SDWA targets (#193033)
If either of a mul24's operands can potentially fold into a SDWA
pattern, then don't fold into a mad24 node (which doesn't have SDWA
variants).
Fixes regressions I first noticed in #162242, but it turns out it's an
older problem.
[DAG] Add Srl combine for extracting last element of BUILD_VECTOR (#181412)
While working on another combine, I noticed some redundant zext shift
pairs `v_lshrrev_b32 + v_lshlrev_b32` coming from a `build_vector(undef,
x)` created by `TargetLowering::SimplifyDemandedBits` and a `srl`
created by `lowerEXTRACT_VECTOR_ELT`.