Centralize prefetch target storage in MachineFunction. (#184194)
### Prefetch Symbol Resolution
Based on this
[suggestion](https://discourse.llvm.org/t/rfc-code-prefetch-insertion/88668/29?u=rlavaee),
we must determine whether a prefetch target is defined in the current
module to avoid **undefined symbol errors**. Since this occurs during
sequential **CodeGen**, we must rely on function names rather than IR
Module APIs.
**Key Changes:**
* **`MachineFunction` Integration:** Added a `PrefetchTargets` field
(with serialization) to track all targets associated with a function.
* **Guaranteed Emission:** All prefetch targets are now emitted
regardless of basic block or callsite index matches to ensure the symbol
exists.
* **Fallback Placement:** Targets with non-matching callsite indices are
emitted at the end of the block to resolve the reference.
[AMDGPU] Cgscc amdgpu attributor boilerplate NFC (#179719)
This PR adds the boilerplate for a CGSCC AMDGPUAttributor pass
(`amdgpu-attributor-cgscc`) by refactoring the existing Module
AMDGPUAttributor pass (`amdgpu-attributor`).
The CGSCC AMDGPUAttributor pass sets `AttributorConfig.IsModulePass =
false` and makes the Attributor's `Functions` set contain only the
functions in an SCC.
The main implementations of the abstract attributes are unchanged - NFC.
In future work, some of the AMDGPU abstract attributes might move to
being handled by the CGSCC pass.
---------
Co-authored-by: Matt Arsenault <arsenm2 at gmail.com>
[flang][cuda] Relax host intrinsic semantic check in acc routine (#185483)
The semantic check that verifies whether any actual argument is on the
device does not need to be active in an `acc routine` function or
subroutine.
[CIR] Change CmpOp assembly format to use bare keyword style (#185114)
Update the assembly format of cir.cmp from the parenthesized style
`cir.cmp(gt, %a, %b) : !s32i, !cir.bool`
to the bare keyword style used by other CIR ops like cir.cast:
`cir.cmp gt %a, %b : !s32i`
The result type (!cir.bool) is now automatically inferred as it is
always cir::BoolType.
[AArch64] Add partial reduce patterns for new fdot instructions (#184659)
This patch enables generation of the new dot instructions added under
FEAT_F16F32DOT from partial reduce nodes.
[HLSL] Implement Texture2D default template (#184207)
The Texture2D type has a default template argument of float4. This can
be written in a couple of ways: `Texture2D<>` or `Texture2D`. This must
be implemented for consistency with DXC in HLSL202x.
To implement `Texture2D<>` we simply add a default type for the template
parameter.
To implement `Texture2D`, we have to add a special case for a template
type without a template instantiation. For HLSL, we check if it is a
texture type. If so, the default type is filled in.
Note that HLSL202x does not support C++17 Class Template Argument
Deduction, so we cannot use that feature to give us `Texture2D`.
See https://github.com/llvm/wg-hlsl/pull/386 for alternatives that were
considered.
[13 lines not shown]
[clang][ssaf] Implement Entity Linker CLI and patching for JSON Format
This PR implements Entity ID patching for the JSON serialization format
and introduces `ssaf-linker`, a command-line tool that drives the
`EntityLinker`.
1. Entity ID references inside summary blobs use the sentinel
representation `{"@": <uint64>}`. Patching walks the JSON value tree
recursively, recognizes sentinels, and rewrites their indices using the
`EntityResolutionTable` provided by the linker.
2. An object with an `@` key but extra keys (`size != 1`), an `@` value
that is not a valid `uint64`, or an entity ID not present in the
resolution table leads to a patching error.
3. `JSONFormat::EntityIdConverter` is replaced with two `function_ref`
typedefs to eliminate the wrapper class.
4. `ssaf-linker` is implemented in `clang/tools/ssaf-linker/` and is
built at `bin/ssaf-linker`.
5. lit tests check CLI, verbose output, timing output, validation
errors, I/O errors, linking errors, and successful linking.
rdar://162570931
[mlir][OpenACC] Normalize loop bounds in convertACCLoopToSCFFor for negative steps (#184935)
`convertACCLoopToSCFFor` was passing `acc.loop` bounds directly to
`scf.for`, which produces an `scf.for` with a negative step when the
source is a Fortran DO loop counting down (e.g. `DO k = n, 1, -1`).
Since `scf.for` requires a positive step, this generated invalid IR that
caused downstream crashes during LLVM lowering.
`convertACCLoopToSCFParallel` already normalizes all loops
unconditionally to `lb=0, step=1, ub=tripCount`, but
`convertACCLoopToSCFFor` did not. This patch applies the same
normalization to `convertACCLoopToSCFFor`, with IV denormalization in
the loop body (`original_iv = normalized_iv * orig_step + orig_lb`), and
lets later passes fold away constants.
[libc] Use unsigned char in strcmp, strncmp, and strcoll comparisons (#185393)
According to section 7.24.1 of the C standard, character comparison in
string functions must be performed as if the characters had the type
`unsigned char`.
The previous implementations of `strcmp`, `strncmp`, and `strcoll` were
doing a direct subtraction of `char` values. On platforms where `char`
is signed, this resulted in incorrect negative values being returned
when characters exceeding 127 were being compared.
This patch fixes the comparison functions to explicitly cast the
character values to `unsigned char` prior to computing their difference.
It also adds regression tests to ensure the comparison behaves correctly
for ASCII values greater than 127.
AMDGPU: Annotate group size ABI loads with range metadata (#185420)
We previously did the same for the grid size when annotated.
The group size is easier, so it's weird that this wasn't implemented
first.
[HLSL][Matrix] Make HLSLElementwiseCast respect matrix memory layout (#184429)
Fixes #184379
Changes the implementation of HLSLElementwiseCast to respect matrix
memory layout.
The new implementation reads from the `LoadList` array in row-major
order as opposed to column-major in the old implementation, which makes
more sense because `LoadList` is always interpreted in row-major order
when read as a matrix.
The writes to the allocation `V` for the destination matrix now respect
the default matrix memory layout.
Assisted-by: claude-opus-4.6
[CIR] Fix try_call replacement for indirect calls (#185095)
We had a bug in the FlattenCFG pass where if an indirect call occurred
within a cleanup scope that required exception handling, the indirect
callee was not being preserved in the cir.try_call. This fixes that.
[CIR][AArch64] Add lowering for remaining `vabd_*` builtins
Implement the missing CIR lowerings for the AdvSIMD (Neon) `vabd_*`
(absolute difference) intrinsic group.
Most `vabd` variants were already supported (see #183595); this patch
completes the remaining cases listed in [1].
Move the corresponding tests from:
* clang/test/CodeGen/AArch64/neon_intrinsics.c
to:
* clang/test/CodeGen/AArch64/neon/intrinsics.c
The implementation mirrors the existing lowering in
CodeGen/TargetBuiltins/ARM.cpp. To support this, add the
`emitCommonNeonSISDBuiltinExpr` helper.
Reference:
[1] https://arm-software.github.io/acle/neon_intrinsics/advsimd.html#absolute-difference
[TableGen] Add let append/prepend syntax for field concatenation (#182382)
## Motivation
LLVM TableGen currently lacks a way to **accumulate** field values
across class hierarchies. When a derived class sets a field via `let`,
it completely replaces the parent's value. This forces users into
verbose workarounds like:
```tablegen
class Op { // This is generic MLIR Base
code extraClassDeclaration = ?;
}
// Some Generic shared base
class MyShared1OpClass : Op {
code shared1ExtraClassDeclaration = [{ some generic code 1 }];
}
[157 lines not shown]
[PowerPC] Add AMO load with Compare and Swap Not Equal (#178061)
This commit adds support for lwat/ldat atomic operations with function
code 16 (Compare and Swap Not Equal) via 4 clang builtins:
* `__builtin_amo_lwat_csne` for 32-bit unsigned operations
* `__builtin_amo_ldat_csne` for 64-bit unsigned operations
* `__builtin_amo_lwat_csne_s` for 32-bit signed operations
* `__builtin_amo_ldat_csne_s` for 64-bit signed operations
[LLDB] Remove C++ language runtime dependency of Clang expression parser (#185450)
From
https://github.com/llvm/llvm-project/pull/169225#issuecomment-4024377289:
There was a dependency cycle involving the C++ language runtime:
```
//lldb/source/Plugins/TypeSystem/Clang:Clang ->
//lldb/source/Plugins/ExpressionParser/Clang:Clang ->
//lldb/source/Plugins/LanguageRuntime/CPlusPlus:CPlusPlus ->
//lldb/source/Plugins/TypeSystem/Clang:Clang
```
`ExpressionParserClang` doesn't need to depend on the C++ language
runtime. It only included a file, but didn't use it. This PR removes
that dependency.
[GOFF] Set reference to ADA (#179734)
Function symbols must have a reference to the ADA, because this becomes
the value of the r5 register when the function is called. Simply get the
value from the begin symbol of the section.
[clang-doc] Cleanup CMake files and ensure benchmarks build
There is some poor formatting, and ClangDocBenchmark lists several
required targets that are only needed because clang-doc itself requires
them. We can instead inherit those requirements from the clangDoc
target.
Additionally, we can make sure the benchmark builds as part of testing
when LLVM_INCLUDE_BENCHMARKS is set.
[CIR] Extract CIR_VAOp base class for VAStartOp and VAEndOp (#185258)
Both ops share identical arguments and assembly format. Extract a common
base class to eliminate the duplication.
[TableGen] Fix ordering of register classes with artificial members. (#185448)
The current implementation would not advance `IB` to skip artificial
registers once `IA` had reached the end.