[AMDGPU][SILoadStoreOptimizer] Fix lds address operand offset (#176816)
The offset operand of GLOBAL_LOAD_ASYNC_TO_LDS_B128, for instance, is
added to both the LDS address and the global address, but
SILoadStoreOptimizer is currently unaware of that. This PR inserts an add
to counteract the offset that is only meant for the global address. This
single add is still better than not doing the optimization at all, which
would require two adds for each global address calculation (with no
offset folded into the instruction).
```
; ENABLE-LABEL: name: promote_async_load_offset
; ENABLE: liveins: $ttmp7, $vgpr0, $sgpr0_sgpr1
; ENABLE-NEXT: {{ $}}
; ENABLE-NEXT: renamable $vgpr1 = V_LSHLREV_B32_e32 8, $vgpr0, implicit $exec
; ENABLE-NEXT: renamable $vgpr2, renamable $vcc_lo = V_ADD_CO_U32_e64 $vgpr0, 512, 0, implicit $exec
; ENABLE-NEXT: renamable $vgpr3, dead $sgpr_null = V_ADDC_U32_e64 0, killed $vgpr0, killed $vcc_lo, 0, implicit $exec
; ENABLE-NEXT: renamable $vgpr1 = disjoint V_OR_B32_e32 0, killed $vgpr1, implicit $exec
; ENABLE-NEXT: renamable $vgpr0 = V_ADD_U32_e32 256, $vgpr1, implicit $exec
; ENABLE-NEXT: GLOBAL_LOAD_ASYNC_TO_LDS_B128 killed $vgpr0, $vgpr2_vgpr3, -256, 0, implicit-def $asynccnt, implicit $exec, implicit $asynccnt :: (load store (s128), align 1, addrspace 3)
[18 lines not shown]
[clang][bytecode] Fix crash caused by overflow when casting a float to an integer (#177815)
Before this PR, the evaluation process would stop immediately regardless
of whether it is set to handle overflow. This prevented us from getting
the value from the stack, which led to a crash (with or without
assertions enabled).
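A minimal sketch of the failure mode (a hypothetical reproducer, not the
one from #177751): the value below does not fit in int, so the cast is
not a constant expression, and the evaluator has to report that instead
of crashing.
```cpp
// Hypothetical reproducer sketch: 1e30f is far outside the range of 'int',
// so this float-to-int cast overflows during constant evaluation and must
// be diagnosed as non-constant rather than crash the bytecode interpreter.
constexpr int truncated = static_cast<int>(1e30f);
```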
Closes #177751.
[clang][bytecode] Fix crash on discarded complex comparison (#177731)
Fixes llvm#176902: [clang][bytecode] crashes on ill-formed
_Static_assert comparing complex value
This patch resolves a crash in Clang's constant evaluation when handling
complex number comparisons in discarded expressions, such as those
involving short-circuiting logical operators. The crash occurred due to
unnecessary evaluation of the comparison in the experimental constant
interpreter.
The issue was originally observed and minimized in the following
example:
```cpp
#define EVAL(a, b) _Static_assert(a == b, "")
void foo() {
EVAL(; + 0, 1i);
[19 lines not shown]
[C++20] [Modules] Set ManglingContextDecl when we need to mangle a lambda but it's nullptr (#177899)
Closes https://github.com/llvm/llvm-project/issues/177385
The root cause of the problem is that when we decide to mangle a lambda
in a module interface while ManglingContextDecl is nullptr, we did not
update ManglingContextDecl, so subsequent uses of ManglingContextDecl saw
an invalid value.
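A minimal sketch of the kind of code involved (a hypothetical reduction,
not the reproducer from issue #177385): a lambda at module-interface
scope needs an externally visible mangled name, so Sema must have a valid
ManglingContextDecl when mangling its closure type.
```cpp
// a.cppm -- hypothetical module interface unit; the closure type of the
// exported lambda needs a stable mangled name as part of the module's ABI.
export module a;
export inline auto callback = [](int x) { return x + 1; };
```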
[RISC-V][MC] Introduce RVY extension feature
This adds the initial feature for the base RVY extension; other
extensions, such as the hybrid mode, will be added later.
RVY specification: https://riscv.github.io/riscv-cheri/
Co-authored-by: Jessica Clarke <jrtc27 at jrtc27.com>
Co-authored-by: Petr Vesely <petr.vesely at codasip.com>
Pull Request: https://github.com/llvm/llvm-project/pull/176870
[RISC-V] Reduce code duplication for uimm*_lsb* operands. NFC
Use a common TableGen class instead of duplicating all the data, and add
a new case macro to handle the isShiftedUInt<>() call. This refactoring
was motivated by adding RVY support, since I needed to add
uimm{9,10}_lsb0000.
Pull Request: https://github.com/llvm/llvm-project/pull/177743
[RISCV] Use inheritance to simplify RVInstSet*VL* classes. NFC (#177797)
Rename the classes to start with RVInstV to make it clearer that they are
vector-related.
[llvm][UnifyLoopExits] Avoid optimization if no exit block is found (#165343)
If there is no exit block, we should not try to unify the loop's exits;
instead we should just return.
Fixes #165252
[OpenCL] Add clang internal extension __cl_clang_function_scope_local_variables (#176726)
The OpenCL spec restricts variables in the local address space to being
declared only at kernel function scope.
Add a Clang internal extension, __cl_clang_function_scope_local_variables,
to lift that restriction.
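A minimal sketch of what the extension permits (assuming it is enabled
via the usual OPENCL EXTENSION pragma, as with other __cl_clang
extensions; this is not code from the patch):
```c
// OpenCL C sketch: with the extension enabled, a __local variable may be
// declared inside a non-kernel helper function rather than only at kernel
// function scope.
#pragma OPENCL EXTENSION __cl_clang_function_scope_local_variables : enable

void helper(void) {
  __local int scratch[64]; // normally rejected outside kernel scope
}
```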
To expose static local allocations at kernel scope, targets can either
force-inline non-kernel functions that declare local memory or pass a
kernel-allocated local buffer to those functions via an implicit argument.
Motivation: support local memory allocation in libclc's implementation
of work-group collective built-ins; see the examples at
https://github.com/intel/llvm/blob/41455e305117/libclc/libspirv/lib/amdgcn-amdhsa/group/collectives_helpers.ll and
https://github.com/intel/llvm/blob/41455e305117/libclc/libspirv/lib/amdgcn-amdhsa/group/collectives.cl#L182
Right now this is a Clang-only OpenCL extension intended for compiling
OpenCL libraries with Clang. It could be proposed as a standard OpenCL
extension in the future.
[libclc] Replace float remquo with AMD OCML implementation (#177131)
The current implementation has two issues:
* it unconditionally soft-flushes denormals.
* it can't pass the OpenCL CTS test "test_bruteforce remquo" on Intel GPUs.
This PR upstreams the remquo implementation from
https://github.com/ROCm/llvm-project/tree/amd-staging/amd/device-libs/ocml/src/remainderF_base.h
It supports denormals and passes the OpenCL CTS test. The number of LLVM
IR instructions in the function _Z6remquoffPU3AS5i increased from 96 to 680.
---------
Co-authored-by: Copilot <175728472+Copilot at users.noreply.github.com>
TargetLowering: Allow FMINNUM/FMAXNUM to lower to FMINIMUM/FMAXIMUM even without `nsz` (#177828)
This restriction was originally added in
https://reviews.llvm.org/D143256, with the given justification:
> Currently, in TargetLowering, if the target does not support fminnum,
we lower to fminimum if neither operand could be a NaN. But this isn't
quite correct because fminnum and fminimum treat +/-0 differently; so,
we need to prove that one of the operands isn't a zero.
As far as I can tell, this was never correct. Before
https://github.com/llvm/llvm-project/pull/172012, `minnum` and `maxnum`
were nondeterministic with regards to signed zero, so it's always been
perfectly legal to lower them to operations that order signed zeroes.
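For illustration, a small model of the semantic difference being relied
on (an illustrative sketch, not code from the patch): minimum must order
signed zeros (-0.0 < +0.0), while minnum is free to return either zero,
which is why the lowering is a valid refinement even without nsz.
```cpp
#include <cmath>
#include <cstdio>

// Model of 'minimum' semantics: NaN-propagating, and it orders -0.0 < +0.0.
static double fminimum_like(double a, double b) {
  if (std::isnan(a) || std::isnan(b)) return NAN;
  if (a == 0.0 && b == 0.0) return std::signbit(a) ? a : b; // prefer -0.0
  return a < b ? a : b;
}

int main() {
  // std::fmin (minnum-like) may return either zero for (+0.0, -0.0), while
  // 'minimum' must return -0.0; since minnum never promised a particular
  // zero, picking the ordered one is allowed.
  std::printf("%g %g\n", std::fmin(+0.0, -0.0), fminimum_like(+0.0, -0.0));
  return 0;
}
```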
[LV] Add additional tests for early-exit loops with loads not known deref.
Add additional test coverage for loops with loads that are not known to
be dereferenceable.
[InstCombine] Don't convert a compare+select into a minnum/maxnum intrinsic that can't be lowered back to a compare+select (#177821)
This is a step on the yak-shaving expedition to properly implement the
new `minnum`/`maxnum` signed-zero semantics.
`InstCombineSelect` will convert a `fcmp`+`select` sequence to a
`minnum`/`maxnum` intrinsic. It doesn't require the `fcmp` to have any
particular fast-math flags, just that the `select` has `nnan` and `nsz`
(or is being used in a context where the result doesn't care about
signed zero).
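A source-level sketch of the pattern (illustrative only, not code from
the patch): with nnan and nsz present on the select, the
compare-plus-select below is the shape InstCombineSelect turns into a
minnum intrinsic call.
```cpp
// Compiled with fast-math-style flags, the fcmp olt + select produced for
// this ternary is canonicalized by InstCombineSelect into llvm.minnum.f32.
float select_min(float a, float b) {
  return a < b ? a : b;
}
```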
It's not correct to propagate the `nnan` flag from the `fcmp`
instruction for poison-propagation reasons. Patches like
https://github.com/llvm/llvm-project/pull/117977 and
https://github.com/llvm/llvm-project/pull/141010 have *generously* made
it so that if `fcmp` doesn't have fast-math flags, we can still perform
the transformation by simply dropping the flags on the generated
intrinsic.
[25 lines not shown]
[Polly] Reject scalable vector types (#177871)
Polly currently does not consider types without a fixed length, which can
be encountered if an input source uses e.g. ARM SVE builtins. Such
programs have already been optimized manually. Non-fixed type lengths
also add to the difficulty of dependency analysis. Skip such types
entirely for now.
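An illustrative sketch of the kind of input involved (an assumption for
illustration, not a test case from the patch): SVE ACLE builtins produce
scalable vector types such as <vscale x 4 x float>, which have no fixed
length for Polly to reason about.
```cpp
#include <arm_sve.h>

// Every load, multiply, and store below operates on scalable vectors whose
// element count is only known at run time (a multiple of the hardware
// vector length), so the access sizes are not compile-time constants.
void scale(float *dst, const float *src, float f, long n) {
  for (long i = 0; i < n; i += (long)svcntw()) {
    svbool_t pg = svwhilelt_b32(i, n);       // predicate covering the tail
    svfloat32_t v = svld1_f32(pg, src + i);  // scalable vector load
    svst1_f32(pg, dst + i, svmul_n_f32_z(pg, v, f));
  }
}
```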
Fixes: #177859
[mlir][DialectUtils] Fix 0 step handling in `constantTripCount` (#177329)
A step size of "zero" does not indicate "zero iterations". It may
indicate an infinite number of iterations.
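A tiny worked example of why (a hypothetical loop, not from the patch): a
constant trip count would be computed along the lines of
ceil((ub - lb) / step), which is undefined for step == 0, and the loop
itself never terminates rather than running zero times.
```cpp
// lb = 0, ub = 4, step = 0: the induction variable never advances, so the
// loop spins forever; folding it away as "zero iterations" would be wrong.
void spin() {
  for (int iv = 0; iv < 4; iv += 0) {
    // no progress is ever made
  }
}
```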
This commit makes some transformations more conservative. We used to
fold away some loops with step size 0 and that's now no longer the case.
Related discussion:
https://discourse.llvm.org/t/infinite-loops-and-dead-code/89530
[SLP] Support for tree throttling in SLP graphs with gathered loads
Gathered loads form a DAG instead of a tree in the SLP vectorizer. When
doing the throttling analysis for such graphs, we need to consider
partially matched gathered-load DAG nodes and account for extract and/or
gather operations and their costs.
The patch adds this analysis and allows cutting off the expensive
sub-graphs with gathered loads.
Reviewers: hiraditya, RKSimon
Pull Request: https://github.com/llvm/llvm-project/pull/177855
[clang] Don't assert on perfect overload match with _Atomic (#176619)
An assertion incorrectly treated a difference in _Atomic qualification as
a difference in type when verifying a perfect match during overload
resolution in C++.
Fixes #170433
[VectorCombine] Fold vector.reduce.OP(F(X)) == 0 -> OP(X) == 0 (#173069)
This commit introduces a pattern to do the following fold:
vector.reduce.OP f(X_i) == 0 -> vector.reduce.OP X_i == 0
In order to decide on this fold, we use the following properties:
1. OP X_i == 0 <=> \forall i \in [1, N] X_i == 0
1'. OP X_i == 0 <=> \exists j \in [1, N] X_j == 0
2. f(x) == 0 <=> x == 0
From 1 and 2 (or 1' and 2), we can infer that
OP f(X_i) == 0 <=> OP X_i == 0.
For some of the OP's and f's, we need to have domain constraints on X to
ensure properties 1 (or 1') and 2.
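A concrete scalar model of one instance (assuming OP is a bitwise-or
reduction and f is std::abs; this is not code from the patch). Property 2
holds because abs(x) == 0 exactly when x == 0, so the two predicates
below always agree; the INT_MIN case, where abs overflows, is an example
of the domain constraints mentioned above.
```cpp
#include <cstdlib>
#include <vector>

// reduce.or(f(X_i)) == 0, with f = std::abs
bool all_zero_via_abs(const std::vector<int> &xs) {
  int acc = 0;
  for (int x : xs) acc |= std::abs(x); // abs(INT_MIN) is undefined, hence
                                       // the domain constraint on X
  return acc == 0;
}

// reduce.or(X_i) == 0 -- the folded form
bool all_zero_direct(const std::vector<int> &xs) {
  int acc = 0;
  for (int x : xs) acc |= x;
  return acc == 0;
}
```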
[52 lines not shown]
[clang][test] Fix builtin-rotate.c __int128 test failure on ARM32 (#177732)
- Run the INT128 prefix checks only on 64-bit targets, since __int128 is
not supported on ARM32
Fixes https://lab.llvm.org/buildbot/#/builders/154/builds/26813