[lldb][Clang] Removed redundant code in DWARFASTParserClang (#197802)
decl_up is initialized here but doesn't appear to be used or moved
anywhere before it goes out of scope. If the Declaration info isn't
needed for the FunctionSP, it should probably be removed.
[clang] Implement constexpr DesignatedInitUpdateExpr. (#196427)
DesignatedInitUpdateExpr exists to handle some obscure edge cases in C,
where the usual InitListExpr canonicalization can't be performed.
Previously, we didn't need constant evaluation for this, but C23
constexpr means we need to evaluate this before codegen.
The implementation is mostly straightforward: evaluate the two
subexpressions, in order, and skip any NoInitExprs.
Along the way, I ran into an issue with the way we manage array APValues
for non-bytecode constant evaluation; this also fixes reallocation to
work correctly.
Fixes #193373. Fixes #196450.
[Flang][MLIR][OpenMP] Add distinct var_ptr_ptr_type to omp.map.info operations & remove ref_ptr_ptee (#177302)
This is a precursor patch to the attach and ref_ptr/ptee mapping that I
intend to upstream over the next few weeks. The attach maps require both
the type of the descriptor and the pointed-to data to calculate the
appropriate offload/base pointers and size. In the base case of
ref_ptr_ptee, all of this information can be gathered from the pointer
and pointee maps, but in cases where we have only one (i.e.
ref_ptr/ref_ptee) we will be missing one of the key elements required to
create a corresponding attach map.
So, this PR adds the ability to ferry around the type of both var_ptr
and var_ptr_ptr as opposed to just var_ptr, so that we can emit attach
maps as separate
[8 lines not shown]
[Flang][OpenMP][MLIR] Add attach and ref map type lowering to MLIR (#177301)
This doesn't implement the functionality, just the relevant map type
lowering to MLIR's omp.map.info. The more complicated changes to
MapInfoFinalizationPass.cpp and OpenMPTOLLVMIRTranslation.cpp to support
attach map and the various ref/attach semantics will come in a
subsequent set of PRs. This just helps compartmentalize the changeset.
Reland "[DirectX][ObjectYAML] Add ILDN part support" (#197749)
This relands #194508, which was reverted in #197348.
This reland addresses the revert reasons:
- Rename `DXContainerYAML::DebugName::DebugName` to `Filename` to avoid
compilation failures on older `cl.exe` versions caused by the field name
matching its class name.
- Fix layering by reverting the MC -> Object dependency introduced
previously:
`ILDNData` is no longer defined in `llvm/Object/DXContainer.h` and used
by MC. Instead, ILDN fields are represented by `mcdxbc::DebugName` in
`llvm/MC/DXContainerInfo.h`, and Object uses that type (Object already
depends on MC).
[AArch64] Make width of stack protector guard value load configurable. (#195379)
Certain embedded targets store the value of the stack protector global
in an MMIO register, which requires a load of a specific width. Allow
changing the backend to emit a 4-byte load for the value of the stack
protector, instead of an 8-byte load. (Or vice versa for an ilp32
target.)
The current version of the patch has a limitation: it still allocates a
pointer-sized stack slot for the guard. This could be fixed in the
future, if it becomes relevant.
[lldb] Use identity hashing for HashToISAMap in ObjCLanguageRuntime (#197759)
The keys of HashToISAMap are already hashes, and can be used as-is.
---------
Co-authored-by: Adrian Prantl <adrian.prantl at gmail.com>
[MIR] Save internal VirtRegMap state
Adds two optional fields to the per-vreg YAML record so MIR tests can
express VirtRegMap state that previously had no representation:
registers:
- { id: 1, class: vgpr_32, split-from: '%0', assigned-phys: '$vgpr5' }
Testing passes that consume sibling-register information (e.g.
InlineSpiller) requires constructing a VirtRegMap with split
relationships from a MIR test, which implies triggering live-range
splitting at minimum and makes reproducers unnecessarily complicated.
So this change introduces a mechanism to serialize/deserialize the state
of the VirtRegMap pass.
Mechanism:
- For serialization:
- MIRPrinter emits the new fields only when the VirtRegMap is available.
[13 lines not shown]
[MIR] Serialize/Deserialize MachineInstr::LRSplit attribute
The LRSplit MachineInstr flag is set by SplitKit on copies inserted for
live-range splitting.
Until now the flag had no MIR-text representation.
This patch fixes that, making it easier to reproduce/capture issues
that involve SplitKit.
Round-trip coverage in
llvm/test/CodeGen/MIR/AMDGPU/lr-split-flag.mir.
[AMDGPU][test] Use MIR test for regalloc issue
Use the newly introduced split-from flag to produce a more robust test case
for the hoistSpillInsideBB live-range update issue.
NFC
Allow ObjC writeback conversion in cleanup attribute type check (#195318)
Prior to #164440, CheckAssignmentConstraints in the cleanup attribute
handler ran before ObjC lifetime qualifiers were inferred on the
variable. It compared against a type without '__strong' and accepted
both 'T **' to 'T *__autoreleasing *' and 'T **' to 'T
*__unsafe_unretained *'. #164440 reversed the order, so the check now
runs after '__strong' is inferred and rejects both 'T *__strong *' to 'T
*__autoreleasing *' and 'T *__strong *' to
'T *__unsafe_unretained *'.
Fix the valid case by falling back to isObjCWritebackConversion when the
assignment check fails. This re-allows the '__strong' to
'__autoreleasing' writeback conversion while continuing to reject
'__strong' to '__unsafe_unretained'.
rdar://175133715
[flang][docs][nfc] Wordsmith -frelaxed-c-loc-checks flag description (#197804)
Clarify that unexpected behavior reflects the compiler's design choice
not to track aliases from non-target non-pointer C_LOC() arguments,
rather than implying a compiler defect. Align phrasing across
FlangOptions.td and Extensions.md.
Also add the `-checks` suffix to the flag per @sscalpone's suggestion.
[Instrumentor] Introduce BasePointerIO to communicate base pointer information
Loads, stores, and later probably calls, can request a base pointer info
object from the user runtime. This object is queried right after the
base pointer of the operation is defined, and then passed to the
pre/post runtime calls of the loads and stores. This allows users to
inspect pointers early and only once, while providing the analysis
results to all operations that might be executed in loops. A potential
use case is to look up the size and start of the underlying object and
then provide those to the access runtime calls for in-bounds checking.
[clang-doc] Removed OwnedPtr alias
The alias served a purpose during migration, but now conveys the wrong
semantics: the memory these pointers reference is interned inside a
local arena, so the pointers don't confer any sort of ownership.
[clang-doc] Use distinct APIs for fixed arena allocation sites
Typically, code either always emits data into the TransientArena or the
PersistentArena. Use more explicit APIs to convey the intent directly
instead of relying on parameters or defaults. We were not always
consistent about this.
[Instrumentor] Add a property filter for static properties (#197530)
The user can define static filters in the JSON to limit instrumentation
to opportunities that match the static expression, e.g., is_volatile==1.
The matcher logic is pretty basic for now. Integer comparisons, string
equalities and `startswith` for strings are supported.
The commit was prepared with Claude (AI) and proofread/tested by me.
[lldb] Fix data race on ThreadPlan::GetNextID's plan-ID counter (#197811)
`g_nextPlanID` is a function-local static used to hand out unique
ThreadPlan IDs. It was a plain uint32_t, so concurrent ThreadPlan
constructors (e.g. each Process's private state thread queueing its base
plan) raced on the increment.
Make it std::atomic<uint32_t>. Prefix operator++ on std::atomic is
already an atomic fetch_add that returns the new value, so the call
sites are unchanged.
Found by ThreadSanitizer as part of #197792.
[lldb] Fix data race on ValueObject's unique-id counter (#197809)
`g_value_obj_uid` is a file-scope static that hands out unique IDs to
every ValueObject. It was a plain user_id_t, so concurrent
SBTarget::FindGlobalVariables / EvaluateExpression calls raced on the
increment.
Make it std::atomic<user_id_t>. Prefix operator++ on std::atomic is
already an atomic fetch_add that returns the new value, so the call
sites are unchanged.
Found by ThreadSanitizer as part of #197792.