[CI][SPIRV][NFC] Remove unnecessary mkdir from workflow (#184353)
The `CMake` command does the `mkdir` automatically.
Pointed out in https://github.com/llvm/llvm-project/pull/184174
Signed-off-by: Nick Sarnie <nick.sarnie at intel.com>
[libc] Various GPU allocator tweaks and optimizations (#184368)
Summary:
Some low-hanging fruit tweaks. Mostly preventing redundant loads and
unnecessary widening. Some fixes as well, like nullptr handling,
incorrect rounding, and oversized bitfields.
[Clang] Generate ptr and float atomics without integer casts (#183853)
Summary:
LLVM IR should support these for all cases except compare-exchange.
Currently the code goes through an integer indirection in these cases.
This PR changes the behavior to use atomics directly on the target
memory type.
Reapply "[SPIRV][NFCI] Use unordered data structures for SPIR-V extensions" (#184162)
Reapply https://github.com/llvm/llvm-project/pull/183567 with minor
changes.
The problem causing the revert was that we couldn't use the enum in
`DenseMap` directly because of a `TableGen` limitation, so I made the map
use the underlying type instead, but that cast caused some UB. I have since
[fixed](https://github.com/llvm/llvm-project/pull/183769) the `TableGen`
limitation, so now it just works.
Fix `assignValueToReg` function's argument (#184354)
Because of [PR#178198](https://github.com/llvm/llvm-project/pull/178198),
the arguments of `assignValueToReg` changed.
This PR fixes the experimental M68k target accordingly.
[Clang] Fix clang crash for fopenmp statement(for) inside lambda function (#146772)
C++ range-for statements introduce implicit variables such as `__range`,
`__begin`, and `__end`. When such a loop appears inside an OpenMP
loop-based directive (e.g. `#pragma omp for`) within a lambda, these
implicit variables were not emitted before OpenMP privatization logic
ran.
OMPLoopScope assumes that loop-related variables are already present in
LocalDeclMap and temporarily overrides their addresses. Since the
range-for implicit variables had not yet been emitted, they were treated
as newly introduced entries and later erased during restore(), leading
to missing mappings and a crash during codegen.
Fix this by emitting the range-for implicit variables before OpenMP
privatization (setVarAddr/apply), ensuring that existing mappings are
correctly overridden and restored.
This fixes #146335
[AMDGPU] Generate more swaps (#184164)
Generate more swaps from:
```
mov T, X
...
mov X, Y
...
mov Y, T
```
by being more careful about what use/defs of X, Y, T are allowed in
intervening code and allowing flexibility where the swap is inserted.
---------
Signed-off-by: John Lu <John.Lu at amd.com>
[SPIR-V][HIP] Disable SPV_KHR_untyped_pointers (#183530)
Support for SPV_KHR_untyped_pointers in the SPIR-V to LLVM translator is
incomplete and has a few known issues, so we'd better not rely on this
extension for SPIR-V generation.
[AArch64] Fix type mismatch in bitconvert + vec_extract patterns (#183549)
This patch fixes a mismatch in element width during isel of bitconvert +
vec_extract nodes. It resolves an issue reported on
[this](https://github.com/llvm/llvm-project/pull/172837) PR.
[X86] Add i256 shift / funnel shift coverage to match i512 tests (#184346)
shift-i256.ll - added x86-64/x86-64-v2/x86-64-v3/x86-64-v4 coverage and retained the x86 test coverage
[SPIRV] Don't emit service function basic block names (#184206)
Right now, if a module has a service function, we always emit
`OpName entry` for the service function's basic block.
The actual service function isn't emitted and no other instruction uses
the basic block's `OpName`, so don't emit it.
Signed-off-by: Nick Sarnie <nick.sarnie at intel.com>
[VPlan] Preserve IsSingleScalar for sunken predicated stores. (#184329)
Predicated stores may be single scalar (e.g. for VF = 1), so we should
preserve IsSingleScalar. As all stores in the group access the same
address, IsSingleScalar must match across all of them.
This fixes an assertion failure when interleaving only with sunken stores.
Fixes https://github.com/llvm/llvm-project/issues/184317
PR: https://github.com/llvm/llvm-project/pull/184329
[CodeGen] Move rollback capabilities outside of the rematerializer
The rematerializer implements support for rolling back
rematerializations by modifying MIs that should normally be deleted in
an attempt to make them "transparent" to other analyses. This involves:
1. setting their opcode to DBG_VALUE and
2. setting their read register operands to the sentinel register.
This approach has several drawbacks.
1. It forces the rematerializer to support tracking these "dead MIs".
2. It is not actually clear whether this mechanism will interact well
with all other analyses. This is an issue since the intent of the
rematerializer is to be usable in as many contexts as possible.
3. In practice, it has shown itself to be relatively error-prone.
This commit removes rollback support from the rematerializer and moves
those capabilities to a rematerializer listener that can be instantiated
[5 lines not shown]
[CodeGen] Allow rematerializer to rematerialize at the end of a block
This makes the rematerializer able to rematerialize MIs at the end of a
basic block. We achieve this by tracking the parent basic block of every
region inside the rematerializer and adding an explicit target region to
some of the class's methods. The latter removes the requirement that we
track the MI of every region (`Rematerializer::MIRegion`) after the
analysis phase; the class member is therefore deleted.
This new ability will be used shortly to improve the design of the
rollback mechanism.
[clang-tidy] Handle specialization of user-defined type in `bugprone-std-namespace-modification` (#183984)
Ignore `templateSpecializationType` based on user-defined classes too.
Fixes #183752
[lld] Turn misc copy-assign to move-assign (#184145)
This is an automated patch generated by the clang-tidy
`performance-use-std-move` check, as a follow-up to #184136.
[CodeGen] Add listener support to the rematerializer (NFC)
This change adds support for adding listeners to the
target-independent rematerializer; listeners can catch certain
rematerialization-related events to implement some additional
functionality on top of what the rematerializer already performs.
This has no user at the moment, but the plan is to have listeners start
being responsible for secondary/optional functionalities that are at
the moment integrated with the rematerializer itself. Two examples of
that are:
1. rollback support (currently optional), and
2. region tracking (currently mandatory, but not fundamentally necessary
to the rematerializer).
[mlir][Vector][GPU] Distribute expanding `shape_cast` ops (#183830)
The initial implementation of `shape_cast` distribution only focused on
scenarios with collapsing shape casts. Within downstream pipelines such
as IREE, commit 962a9a3 exposes an issue with this implementation, where
the rank-expanding cast ops (stemming from the new `vector.broadcast`
canonicalization) silently fall through to the "collapsing-or-no-op"
logic. This leads to rank-mismatch bugs and fires validation
assertions when distributing rather common reshaping sequences
encountered after CSE/canonicalization, such as the examples below:
```
// Example 1: gather op
%weight = arith.constant dense_resource<__elided__> : tensor<256xi8>
%c0 = arith.constant 0 : index
...
%expand = vector.shape_cast <...> : vector<1xindex> to vector<1x1xindex>
%gather = vector.gather %weight[%c0] [%expand], <...>, <...> : memref<256xi8>, vector<1x1xindex>, vector<1x1xi1>, vector<1x1xi8> into vector<1x1xi8>
%collapse_back = vector.shape_cast %gather : vector<1x1xi8> to vector<1xi8>
// Example 2: multi-reduction
[19 lines not shown]