[SPIR-V][HIP] Disable SPV_KHR_untyped_pointers (#183530)
Support for SPV_KHR_untyped_pointers in the SPIR-V to LLVM translator is
incomplete, with a few known issues. Therefore we should not rely on this
extension for SPIR-V generation.
[AArch64] Fix type mismatch in bitconvert + vec_extract patterns (#183549)
This patch fixes a mismatch in element width during isel of bitconvert +
vec_extract nodes. This resolves the issue reported on
[this](https://github.com/llvm/llvm-project/pull/172837) PR.
[X86] Add i256 shift / funnel shift coverage to match i512 tests (#184346)
shift-i256.ll - added x86-64/x86-64-v2/x86-64-v3/x86-64-v4 coverage and retained the x86 test coverage
[SPIRV] Don't emit service function basic block names (#184206)
Right now, if a module has a service function, we always emit `OpName
entry` for the service function's basic block.
The actual service function isn't emitted and no other instruction uses
the basic block `OpName` instruction, so don't emit it.
Signed-off-by: Nick Sarnie <nick.sarnie at intel.com>
[VPlan] Preserve IsSingleScalar for sunken predicated stores. (#184329)
The predicated stores may be single scalar (e.g. for VF = 1). We should
preserve IsSingleScalar. As all stores access the same address,
IsSingleScalar must match across all stores in the group.
This fixes an assertion when interleaving-only with sunken stores.
Fixes https://github.com/llvm/llvm-project/issues/184317
PR: https://github.com/llvm/llvm-project/pull/184329
[CodeGen] Move rollback capabilities outside of the rematerializer
The rematerializer implements support for rolling back
rematerializations by modifying MIs that should normally be deleted in
an attempt to make them "transparent" to other analyses. This involves:
1. setting their opcode to DBG_VALUE and
2. setting their read register operands to the sentinel register.
This approach has several drawbacks.
1. It forces the rematerializer to support tracking these "dead MIs".
2. It is not actually clear whether this mechanism will interact well
with all other analyses. This is an issue since the intent of the
rematerializer is to be usable in as many contexts as possible.
3. In practice, it has shown itself to be relatively error-prone.
This commit removes rollback support from the rematerializer and moves
those capabilities to a rematerializer listener that can be instantiated
[5 lines not shown]
[CodeGen] Allow rematerializer to rematerialize at the end of a block
This makes the rematerializer able to rematerialize MIs at the end of a
basic block. We achieve this by tracking the parent basic block of every
region inside the rematerializer and adding an explicit target region to
some of the class's methods. The latter removes the requirement that we
track the MI of every region (`Rematerializer::MIRegion`) after the
analysis phase; the class member is therefore deleted.
This new ability will be used shortly to improve the design of the
rollback mechanism.
[clang-tidy] Handle specialization of user-defined type in `bugprone-std-namespace-modification` (#183984)
Ignore `templateSpecializationType` based on user-defined classes too.
Fixes #183752
[lld] Turn misc copy-assign to move-assign (#184145)
This is an automated patch generated by clang-tidy's
performance-use-std-move check, as a follow-up to #184136.
[CodeGen] Add listener support to the rematerializer (NFC)
This change adds support for attaching listeners to the
target-independent rematerializer; listeners can observe certain
rematerialization-related events to implement additional
functionality on top of what the rematerializer already performs.
This has no user at the moment, but the plan is to have listeners
become responsible for secondary/optional functionalities that are
currently integrated with the rematerializer itself. Two examples of
that are:
1. rollback support (currently optional), and
2. region tracking (currently mandatory, but not fundamentally necessary
to the rematerializer).
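The listener idea described above can be sketched as a plain observer pattern. This is an illustrative model only; the class and method names below (`RematListener`, `onRematerialized`, etc.) are hypothetical and do not reflect the actual LLVM API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Event payload delivered to listeners; a real implementation would carry
// MachineInstr pointers, here a string stands in for the rematerialized MI.
struct RematEvent {
  std::string MI;
};

// Listener interface: optional features (rollback bookkeeping, region
// tracking) can live here instead of inside the core rematerializer.
struct RematListener {
  virtual ~RematListener() = default;
  virtual void onRematerialized(const RematEvent &E) = 0;
};

// Core class keeps only the rematerialization logic and a notification hook.
class Rematerializer {
  std::vector<RematListener *> Listeners;

public:
  void addListener(RematListener *L) { Listeners.push_back(L); }

  void rematerialize(const std::string &MI) {
    // ... core rematerialization work would happen here ...
    for (RematListener *L : Listeners)
      L->onRematerialized({MI});
  }
};

// Example listener standing in for rollback bookkeeping: it records every
// rematerialization so the sequence could later be undone.
struct RollbackRecorder : RematListener {
  std::vector<std::string> Log;
  void onRematerialized(const RematEvent &E) override { Log.push_back(E.MI); }
};
```

With this split, the rematerializer stays minimal and each secondary feature opts in by registering a listener.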
[mlir][Vector][GPU] Distribute expanding `shape_cast` ops (#183830)
The initial implementation of `shape_cast` distribution only focused on
scenarios with collapsing shape casts. Within downstream pipelines such
as IREE, commit 962a9a3 exposes an issue with this implementation, where
the rank-expanding cast ops (stemming from the new `vector.broadcast`
canonicalization) silently fall through to the "collapsing-or-no-op"
logic. This causes rank-mismatch bugs and fires validation
assertions when distributing rather common reshaping sequences
encountered after CSE/canonicalization, such as below:
```
// Example 1: gather op
%weight = arith.constant dense_resource<__elided__> : tensor<256xi8>
%c0 = arith.constant 0 : index
...
%expand = vector.shape_cast <...> : vector<1xindex> to vector<1x1xindex>
%gather = vector.gather %weight[%c0] [%expand], <...>, <...> : memref<256xi8>, vector<1x1xindex>, vector<1x1xi1>, vector<1x1xi8> into vector<1x1xi8>
%collapse_back = vector.shape_cast %gather : vector<1x1xi8> to vector<1xi8>
// Example 2: multi-reduction
[19 lines not shown]
[Reland] [APINotes] Refactor APINotesReader to propagate llvm::Error (#184212)
Reland of #183812 with the explicit `std::move` restored to fix buildbot
failures on older compilers.
Revert "Avoid maxnum(sNaN, x) optimizations / folds (#170181)" (#184125)
This reverts commit ea3fdc5972db7f2d459e543307af05c357f2be26.
Re-enable const-folding for maxnum/minnum in the middle-end, GlobalISel,
and SelectionDAG.
Re-enable optimizations that depend on maxnum/minnum sNaN semantics in
InstCombine and DAGCombiner.
Now that maxnum(x, sNaN) is specified to non-deterministically produce
either NaN or x, these constant folds and optimizations are valid
again according to the newly clarified semantics in #172012.
[DAG] isKnownNeverZero - add ISD::OR DemandedElts handling (#183228)
This patch updates `SelectionDAG::isKnownNeverZero` to support `ISD::OR`
by forwarding the `DemandedElts` mask to its operands.
Previously, `ISD::OR` dropped the mask, causing the compiler to be
overly conservative if any lane in the vector was zero, even if that
lane wasn't demanded. This change allows the compiler to prove a vector
result is non-zero even if ignored lanes are zero.
Fixes #183037
**Tests:**
- Moved tests from the C++ file to the IR assembly file
(`known-never-zero.ll`) as requested.
- Confirmed the code now correctly tracks which parts of a vector are
actually needed for `ISD::OR`.
- This allows the compiler to prove a result is "never zero" even if
some unused lanes contain zeros.
[AArch64] Limit support to f32 and f64 in performSelectCombine (#184315)
This prevents a crash with fp128 types; other types (f16) were already
excluded.
Fixes #184300
[MLIR] Make test-block-is-in-loop pass a module pass (#184036)
This pass can't run in parallel on functions, as that would trigger race conditions.
Fixes #183999
[CIR][CUDA]: Handle duplicate mangled names (#183976)
Replace the NYI for duplicate function defs with the proper diagnostic
logic from OG codegen.
Related: #175871, #179278