
LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Move the AVX v4i64 test cases down to group them together.

Increasingly I don't want to mix the integer and floating point tests,
especially with AVX where they are handled quite differently.

LLVM — cfe/trunk/include/clang/AST ASTContext.h

Add documentation stating that memory allocated by the placement new declared
in ASTContext.h does not need to be explicitly freed

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D5392

LLVM — llvm/trunk/lib/Analysis LazyValueInfo.cpp

Add two thresholds, lvi-overdefined-BB-threshold and lvi-overdefined-threshold,
to the LVI algorithm. When lowering a specific value, if the number of basic
blocks checked for an overdefined lattice value exceeds
lvi-overdefined-BB-threshold, or the number of times an overdefined value is
encountered in a single basic block exceeds lvi-overdefined-threshold, the
algorithm stops lowering the lattice value any further.
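
A minimal sketch of how such cutoffs can be wired up with cl::opt flags (only
the flag names come from this patch; the defaults and the surrounding
bookkeeping are assumptions):

    #include "llvm/Support/CommandLine.h"

    using namespace llvm;

    static cl::opt<unsigned> OverdefinedBBThreshold(
        "lvi-overdefined-BB-threshold", cl::init(32), // default is assumed
        cl::desc("Max basic blocks checked for an overdefined value"));

    static cl::opt<unsigned> OverdefinedThreshold(
        "lvi-overdefined-threshold", cl::init(8), // default is assumed
        cl::desc("Max overdefined hits allowed per basic block"));

    // Settle for the overdefined lattice value once either counter crosses
    // its threshold, bounding the worst-case work per query.
    static bool keepLowering(unsigned BBsChecked, unsigned OverdefinedHits) {
      return BBsChecked <= OverdefinedBBThreshold &&
             OverdefinedHits <= OverdefinedThreshold;
    }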

LLVM — cfe/trunk/include/clang/AST Decl.h, cfe/trunk/include/clang/Basic DiagnosticSemaKinds.td

ms-inline-asm: Scope inline asm labels to functions

Summary:
This fixes PR20023.  In order to implement this scoping rule, we piggyback
on the existing LabelDecl machinery, creating LabelDecls that carry the
"internal" name of the inline assembly label, which we rewrite the asm
label to.
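
A rough sketch of the renaming idea (the helper and the exact naming scheme
are illustrative, not the actual Sema code):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/ADT/Twine.h"
    #include <string>

    // Rewrite a user-written MS inline-asm label to a function-private
    // internal name so the same label can appear in different functions
    // without colliding. FuncID is a hypothetical per-function counter.
    std::string getInternalAsmLabel(llvm::StringRef Label, unsigned FuncID) {
      return ("__MSASMLABEL_." + llvm::Twine(FuncID) + "__" + Label).str();
    }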

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4589

LLVM — llvm/trunk/include/llvm/MC MCTargetAsmParser.h, llvm/trunk/include/llvm/MC/MCParser MCAsmParser.h

ms-inline-asm: Add a sema callback for looking up label names

The implementation of the callback in clang's Sema will return an
internal name for labels.
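
The hook has roughly the following shape (a sketch; the real signature in the
MC parser headers may differ):

    #include "llvm/ADT/StringRef.h"

    // The asm parser calls back into the frontend's semantic analysis to
    // translate a parsed label name into the frontend's internal name.
    class SemaLabelCallbackSketch {
    public:
      virtual ~SemaLabelCallbackSketch() {}
      // Returns the internal name for Name, creating the label on first
      // use when Create is set.
      virtual llvm::StringRef LookupInlineAsmLabel(llvm::StringRef Name,
                                                   bool Create) = 0;
    };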

Test Plan: Will be tested in clang.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D4587

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Back out a bad choice about lowering v4i64 and pave the way for
a more sane approach to AVX2 support.

Fundamentally, there is no useful way to lower integer vectors in AVX.
None. We always end up with a VINSERTF128 in the end, so we might as
well eagerly switch to the floating point domain and do everything
there. This cleans up lots of weird and unlikely to be correct
differences between integer and floating point shuffles when we only
have AVX1.
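
Concretely, the domain switch amounts to bitcasting into the floating point
type, shuffling there, and bitcasting back; a sketch under the usual
SelectionDAG setup (not the actual lowering routine):

    // Lower a v4i64 shuffle by doing the work as v4f64: with only AVX1
    // there is no 256-bit integer shuffle hardware to use anyway.
    static SDValue lowerV4I64AsV4F64(SDValue V1, SDValue V2,
                                     ArrayRef<int> Mask, SDLoc DL,
                                     SelectionDAG &DAG) {
      V1 = DAG.getNode(ISD::BITCAST, DL, MVT::v4f64, V1);
      V2 = DAG.getNode(ISD::BITCAST, DL, MVT::v4f64, V2);
      SDValue Shuf = DAG.getVectorShuffle(MVT::v4f64, DL, V1, V2, Mask);
      return DAG.getNode(ISD::BITCAST, DL, MVT::v4i64, Shuf);
    }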

The other nice consequence is that by doing things this way we will make
it much easier to write the integer lowering routines as we won't need
to duplicate the logic to check for AVX vs. AVX2 in each one -- if we
actually try to lower a 256-bit vector as an integer vector, we have
AVX2 and can rely on it. I think this will make the code much simpler
and more comprehensible.

Currently, I've disabled *all* support for AVX2 so that we always fall
back to AVX. This keeps everything working rather than asserting. That
will go away with the subsequent series of patches that provide
a baseline AVX2 implementation.

Please note, I'm going to implement AVX2 *without access to hardware*.
That means I cannot correctness test this path. I will be relying on
those with access to AVX2 hardware to do correctness testing and fix

    [2 lines not shown]

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Teach the new vector shuffle lowering how to cleverly lower single
input v8f32 shuffles which are not 128-bit lane crossing but have
different shuffle patterns in the low and high lanes. This removes most
of the extract/insert traffic that was unnecessary and is particularly
good at lowering cases where only one of the two lanes is shuffled at
all.

I've also added a collection of test cases with undef lanes because this
lowering is somewhat more sensitive to undef lanes than others.

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Add a bunch of test cases where we have different shuffle patterns
in the high and low 128-bit lanes of a v8f32 vector.

No functionality change yet, but wanted to set up the baseline for my
next patch which will make these quite a bit better. =]

LLVM — llvm/trunk/lib/Target/R600 SIShrinkInstructions.cpp

Fix typo

LLVM — llvm/trunk/lib/Target/R600/InstPrinter AMDGPUInstPrinter.cpp

Use llvm_unreachable instead of assert(!)

LLVM — llvm/trunk/lib/Target/R600/InstPrinter AMDGPUInstPrinter.cpp

R600/SI: Don't use strings for single characters

LLVM — llvm/trunk/lib/ExecutionEngine RTDyldMemoryManager.cpp

Remove redundant if test.

LLVM — llvm/trunk/include/llvm/Target TargetLowering.h, llvm/trunk/lib/CodeGen/SelectionDAG DAGCombiner.cpp

Refactor reciprocal square root estimate into target-independent function; NFC.

This is purely a plumbing patch. No functional changes intended.

The ultimate goal is to allow targets other than PowerPC (certainly X86 and
AArch64) to turn this:

z = y / sqrt(x)

into:

z = y * rsqrte(x)

using whatever HW magic they can use. See http://llvm.org/bugs/show_bug.cgi?id=20900 .
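
An estimate instruction suffices because one or two Newton-Raphson steps
recover most of the precision; a scalar sketch of the refinement (the HW
estimate is faked here with 1/sqrt):

    #include <cmath>

    // One Newton-Raphson step for 1/sqrt(x): given an estimate e, the
    // refined value is e * (1.5 - 0.5 * x * e * e).
    static float refineRsqrt(float x, float e) {
      return e * (1.5f - 0.5f * x * e * e);
    }

    float divBySqrt(float y, float x) {
      float e = 1.0f / std::sqrt(x); // stand-in for a HW rsqrte estimate
      return y * refineRsqrt(x, e);  // z = y * rsqrte(x), refined
    }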

The first step is to add a target hook for RSQRTE, take the already target-independent 
code selfishly hoarded by PPC, and put it into DAGCombiner.

Next steps:

- The code in DAGCombiner::BuildRSQRTE() should be refactored further; tests
  that exercise that logic need to be added.
- Logic in PPCTargetLowering::BuildRSQRTE() should be hoisted into DAGCombiner.
- X86 and AArch64 overrides for TargetLowering::BuildRSQRTE() should be added.

Differential Revision: http://reviews.llvm.org/D5425

LLVM — llvm/trunk/lib/CodeGen AggressiveAntiDepBreaker.h CriticalAntiDepBreaker.h

mop up: "Don’t duplicate function or class name at the beginning of the comment."

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] With the stronger canonicalization of shuffles added in r218216,
the new vector shuffle lowering no longer needs to check both symmetric
forms of UNPCK patterns for v4f64.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Teach the new vector shuffle lowering to re-use the SHUFPS
lowering when it can use a symmetric SHUFPS across both 128-bit lanes.

This required making the SHUFPS lowering tolerant of other vector types,
and adjusting our canonicalization to canonicalize harder.

This is the last of the clever uses of symmetry I've thought of for
v8f32. The rest of the tricks I'm aware of here are to work around
asymmetry in the mask.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] Refactor the logic that forms SHUFPS instruction patterns to
lower a generic vector shuffle mask into a helper that is independent
of the other factors influencing which lowering is chosen and of the
specific types used with the instruction.

No functionality changed.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Teach the new vector shuffle lowering the basics about insertion
of a single element into a zero vector for v4f64 and v4i64 in AVX.
Ironically, there is less to see here because xor+blend is so crazy fast
that we can't really beat that to zero the high 128-bit lane.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Teach the new vector shuffle lowering how to lower to UNPCKLPS and
UNPCKHPS with AVX vectors by recognizing those patterns when they are
repeated for both 128-bit lanes.
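
For reference, on v8f32 these instructions interleave within each 128-bit
lane, so the masks to recognize are fixed; a small sketch of the match
(helper names hypothetical, -1 meaning an undef mask element):

    #include "llvm/ADT/ArrayRef.h"

    // A mask matches a reference pattern if every non-undef element
    // agrees with it.
    static bool matchesMask(llvm::ArrayRef<int> Mask,
                            llvm::ArrayRef<int> Ref) {
      for (unsigned i = 0, e = Mask.size(); i != e; ++i)
        if (Mask[i] >= 0 && Mask[i] != Ref[i])
          return false;
      return true;
    }

    // UNPCKLPS on v8f32 produces {0,8,1,9, 4,12,5,13} and UNPCKHPS
    // produces {2,10,3,11, 6,14,7,15}: the same interleave in both lanes.
    bool isV8F32Unpckl(llvm::ArrayRef<int> Mask) {
      const int Unpckl[8] = {0, 8, 1, 9, 4, 12, 5, 13};
      return matchesMask(Mask, Unpckl);
    }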

With this, we now generate the exact same (really nice) code for
Quentin's avx_test_case.ll which was the most significant regression
reported for the new shuffle lowering. In fact, I'm out of specific test
cases for AVX lowering; the rest were AVX2, I think. However, there are
a bunch of pretty obvious remaining things to improve with AVX...

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Add test cases for UNPCK instructions with v8f32 AVX vectors in
preparation for enhancing their support in the new vector shuffle
lowering.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Begin teaching the new vector shuffle lowering among the most
important bits of cleverness: to detect and lower repeated shuffle
patterns between the two 128-bit lanes with a single instruction.

This patch just teaches it how to lower single-input shuffles that fit
this model using VPERMILPS. =] There is more that needs to happen here.
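
Detecting this boils down to checking that both 128-bit lanes apply the same
in-lane pattern; a sketch of that check (names hypothetical, same spirit as
the real helper):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/SmallVector.h"

    // Returns true if every 128-bit lane of Mask applies one shared
    // in-lane pattern, returned in RepeatedMask. Such a v8f32 mask can be
    // lowered with a single VPERMILPS immediate.
    bool is128BitLaneRepeated(llvm::ArrayRef<int> Mask,
                              llvm::SmallVectorImpl<int> &RepeatedMask) {
      const int LaneSize = 4; // four f32 elements per 128-bit lane
      RepeatedMask.assign(LaneSize, -1);
      for (int i = 0, e = (int)Mask.size(); i != e; ++i) {
        if (Mask[i] < 0)
          continue;
        if (Mask[i] / LaneSize != i / LaneSize)
          return false; // crosses a lane boundary; not VPERMILPS material
        int Local = Mask[i] % LaneSize;
        if (RepeatedMask[i % LaneSize] < 0)
          RepeatedMask[i % LaneSize] = Local;
        else if (RepeatedMask[i % LaneSize] != Local)
          return false; // the two lanes disagree at this position
      }
      return true;
    }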

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Regenerate this test case now that I've improved my script for
generating the test cases to format things more consistently and
actually catch all the operand sequences that should be elided in favor
of the asm comments. No actual changes here.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] Explicitly lower to a blend early if it is trivial to do so for
v8f32 shuffles in the new vector shuffle lowering code.

This is very cheap to do and makes it much more clear that anything more
expensive but overlapping with this lowering should be selected
afterward (for example using AVX2's VPERMPS). However, no functionality
changed here as without this code we would fall through to create no-op
shuffles of each input and a blend. =]
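
"Trivial" here means every element stays in its position and merely picks
which input it comes from; a sketch of that test:

    #include "llvm/ADT/ArrayRef.h"

    // A trivial blend mask leaves every element in place: element i is
    // either i (from the first input) or i + Size (from the second).
    bool isTrivialBlendMask(llvm::ArrayRef<int> Mask) {
      int Size = (int)Mask.size();
      for (int i = 0; i != Size; ++i)
        if (Mask[i] >= 0 && Mask[i] != i && Mask[i] != i + Size)
          return false;
      return true;
    }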

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Teach the new vector shuffle lowering of v4f64 to prefer a direct
VBLENDPD over using VSHUFPD. While the 256-bit variant of VBLENDPD slows
down to the same speed as VSHUFPD on Sandy Bridge CPUs, it has twice the
reciprocal throughput on Ivy Bridge CPUs, much as it does everywhere
for 128-bits. There isn't a downside, so just eagerly use this
instruction when it suffices.

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Add some more comprehensive tests for v4f64 blending.

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v4.ll

[x86] Re-generate a bunch of the v4f64 test cases with my new script.

This expands the integer cases to cover the fact that AVX2 moves their
lane-crossing shuffles into the integer domain. It also adds proper
support for AVX2 run lines and the "ALL" group when it doesn't matter.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] Switch the blend implementation to use a MVT switch rather than
awkward conditions. The readability improvement of this will be even
more important as I generalize it to handle more types.

No functionality changed.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] Remove some essentially lying comments from the v4f64 path of the
new vector shuffle lowering.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp

[x86] Fix a helper to reflect that what we actually care about is
128-bit lane crossings, not 'half' crossings. This came up in code
review ages ago, but I hadn't really addressed it. Also added some
documentation for the helper.

No functionality changed.

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Teach the new vector shuffle lowering the first step toward more
actual support for complex AVX shuffling tricks. We can do independent
blends of the low and high 128-bit lanes of an AVX vector, so shuffle
the inputs into place and then do the blend at 256 bits. This will in
many cases remove one blend instruction.

The next step is to permute the low and high halves in-place rather than
extracting them and re-inserting them.

LLVM — llvm/trunk/lib/MC WinCOFFStreamer.cpp MCObjectFileInfo.cpp, llvm/trunk/test/MC/COFF comm.s comm.ll

MC: Support aligned COMMON symbols for COFF

link.exe:
Fuzz testing has shown that COMMON symbols with size > 32 will always
have an alignment of at least 32 and all symbols with size < 32 will
have an alignment of at least the largest power of 2 less than the size
of the symbol.

binutils:
The BFD linker essentially works like link.exe but with alignment 4
instead of 32.  The BFD linker also supports an extension to
COFF which adds an -aligncomm argument to the .drectve section which
permits specifying a precise alignment for a variable but MC currently
doesn't support editing .drectve in this way.

With all of this in mind, we decide to play a little trick: we can
ensure that the alignment will be respected by bumping the size of the
global to its alignment.
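
The trick itself is tiny; a sketch (assuming, per the observed link.exe
behavior above, that the requested alignment never exceeds 32):

    #include <algorithm>
    #include <cstdint>

    // The linker derives a COMMON symbol's alignment from its size, so
    // rounding the size up to the requested alignment guarantees that the
    // derived alignment is at least the requested one.
    uint64_t adjustCommonSize(uint64_t Size, uint64_t ByteAlignment) {
      return std::max(Size, ByteAlignment);
    }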

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Add some more test cases covering specific blend patterns.

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-256-v8.ll

[x86] Add the beginnings of some tests for our v8f32 shuffle lowering
under AVX.

This really just documents the current state of the world. I'm going to
try to flesh it out to cover any test cases I plan to improve prior to
improving them so that the delta made by changes is actually visible to
code reviewers.

This is made easier by the fact that I now have a script to automate the
process of producing test cases including the check lines. =]

LLVM — lldb/trunk/test/functionalities/data-formatter/data-formatter-stl/libcxx/map TestDataFormatterLibccMap.py

Enable libcxx map test on FreeBSD again

The test used to trigger an assertion failure "Cannot get layout of
forward declarations!", but it no longer fails when built with
Clang 3.4.1 (system compiler) or 3.5 from SVN on FreeBSD.

llvm.org/pr17231

LLVM — llvm/trunk/lib/ExecutionEngine RTDyldMemoryManager.cpp

RTDyldMemoryManager::getSymbolAddress(): Make sure to return 0 if the symbol
name is not found. [-Wreturn-type]

LLVM — llvm/trunk/lib/CodeGen RegisterCoalescer.h

mop up: "Don’t duplicate function or class name at the beginning of the comment."

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-128-v2.ll vector-shuffle-256-v4.ll

[x86] Teach the new vector shuffle lowering to use VPERMILPD for
single-input shuffles with doubles. This allows them to fold memory
operands into the shuffle, etc. This is just the analog to the v4f32
case in my prior commit.

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-128-v2.ll

[x86] Add an AVX run to the 128-bit v2 tests, teach them to have
a generic SSE and AVX mode in addition to a specific AVX1 test path, and
flesh out the AVX tests.

LLVM — llvm/trunk/test/CodeGen/X86 coff-comdat.ll, llvm/trunk/test/DebugInfo/COFF asm.ll multifile.ll

Update tests which broke from r218189

LLVM — llvm/trunk/lib/Target/X86 X86ISelLowering.cpp, llvm/trunk/test/CodeGen/X86 vector-shuffle-128-v4.ll

[x86] Teach the new vector shuffle lowering to use the AVX VPERMILPS
instruction for single-vector floating point shuffles. This in turn
allows the shuffles to fold a load into the instruction which is one of
the common regressions hit with the new shuffle lowering.

LLVM — llvm/trunk/lib/MC MCSectionCOFF.cpp, llvm/trunk/test/MC/COFF section-passthru-flags.s

MC: Fix MCSectionCOFF::PrintSwitchToSection

We had a few bugs:
- We were considering the GVKind instead of just looking at the section
  characteristics
- We would never print out 'y' when a section was meant to be unreadable
- We would never print out 's' when a section was meant to be shared
- We translated IMAGE_SCN_MEM_DISCARDABLE to 'n' when it should've meant
  IMAGE_SCN_LNK_REMOVE

LLVM — llvm/trunk/test/CodeGen/X86 vector-shuffle-128-v4.ll

[x86] Start moving to a fancier check syntax to reduce the need for
duplication of check lines. The idea is to have broad sets of
compilation modes that will frequently diverge without having to always
and immediately explode to the precise ISA feature set.

While this already helps due to VEX encoded differences, it will help
much more as I teach the new shuffle lowering about more of the new VEX
encoded instructions which can still be used to implement 128-bit
shuffles.

LLVM — llvm/trunk/lib/ExecutionEngine RTDyldMemoryManager.cpp, llvm/trunk/lib/ExecutionEngine/MCJIT MCJIT.cpp

[MCJIT] Make RTDyldMemoryManager::getSymbolAddress's behaviour more consistent.

This patch modifies RTDyldMemoryManager::getSymbolAddress(Name)'s behavior to
make it consistent with how clients are using it: Name should be mangled, and
getSymbolAddress should demangle it on the caller's behalf before looking the
name up in the process. This patch also fixes the one client
(MCJIT::getPointerToFunction) that had been passing unmangled names (by having
it pass mangled names instead).

Background:

RTDyldMemoryManager::getSymbolAddress(Name) has always used a re-try mechanism
when looking up symbol names in the current process. Prior to this patch
getSymbolAddress first tried to look up 'Name' exactly as the user passed it in
and then, if that failed, tried to demangle 'Name' and re-try the look up. The
implication of this behavior is that getSymbolAddress expected to be called with
unmangled names, and that handling mangled names was a fallback for convenience.
This is inconsistent with how clients (particularly the RuntimeDyldImpl
subclasses, but also MCJIT) usually use this API. Most clients pass in mangled
names, and succeed only because of the fallback case. For clients passing in
mangled names, getSymbolAddress's old behavior was actually dangerous, as it
could cause unmangled names in the process to shadow mangled names being looked
up.

For example, consider:

    [28 lines not shown]
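
The new behavior can be pictured as a single demangling step before the
process lookup (a sketch; the helper name is hypothetical):

    #include <string>

    // getSymbolAddress now assumes Name is already mangled: on platforms
    // whose global prefix is '_', strip it before the dlsym-style lookup
    // instead of first trying the string verbatim.
    std::string demangleForProcessLookup(const std::string &Name,
                                         char GlobalPrefix) {
      if (GlobalPrefix && !Name.empty() && Name[0] == GlobalPrefix)
        return Name.substr(1);
      return Name;
    }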

LLVM — lldb/trunk/tools/lldb-gdbserver lldb-gdbserver.cpp

Fix lldb-gdbserver build.

Build break change by Paul Osmialowski.

Minor changes to argument passing (converted unintentional pass-by-value to pass-by-ref) 
by Todd.

LLVM — llvm/trunk/include/llvm/ProfileData CoverageMapping.h, llvm/trunk/lib/ProfileData CoverageMapping.cpp

llvm-cov: Allow creating CoverageMappings from filenames

LLVM — llvm/trunk/include/llvm/ProfileData CoverageMapping.h, llvm/trunk/lib/ProfileData CoverageMapping.cpp

llvm-cov: Disentangle the coverage data logic from the display (NFC)

This splits the logic for actually looking up coverage information
from the logic that displays it. These were tangled rather thoroughly
so this change is a bit large, but it mostly consists of moving things
around. The coverage lookup logic itself now lives in the library,
rather than being spread between the library and the tool.

LLVM — llvm/trunk/lib/ProfileData CoverageMappingReader.cpp, llvm/trunk/tools/llvm-cov CodeCoverage.cpp

llvm-cov: Move some reader debug output out of the tool.

This debug output is really for testing CoverageMappingReader, not the
llvm-cov tool. Move it to where it can be more useful.

LLVM — llvm/trunk/lib/Transforms/Scalar EarlyCSE.cpp

Using a deque to manage the stack of nodes is faster here.

Vector is slow due to many reallocations as the size regularly changes in
unpredictable ways. See the investigation provided on the mailing list for
more information:

http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120116/135228.html
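
The change is essentially a container swap; a sketch of why it helps:

    #include <deque>

    // A DFS worklist whose size oscillates unpredictably. std::deque
    // grows in fixed-size chunks, so pushes never relocate the existing
    // elements the way a std::vector reallocation does.
    template <typename NodeT> class NodeStack {
      std::deque<NodeT> Stack;

    public:
      void push(const NodeT &N) { Stack.push_back(N); }
      NodeT pop() { NodeT N = Stack.back(); Stack.pop_back(); return N; }
      bool empty() const { return Stack.empty(); }
    };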

LLVM — lldb/trunk/source/Interpreter CommandObject.cpp

Have CommandObject::CheckRequirements() report the largest missing
requirement for a command instead of the smallest.  e.g. if a command
requires a Target, Process, Thread, and Frame, and none of those
are available, report the largest -- Target -- as being missing
instead of the smallest -- Frame.
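
A sketch of the ordering change (the flag names are hypothetical; the point
is checking the broadest requirement first):

    #include <cstdint>

    enum : uint32_t { // hypothetical requirement flags, broadest first
      ReqTarget = 1u << 0,
      ReqProcess = 1u << 1,
      ReqThread = 1u << 2,
      ReqFrame = 1u << 3,
    };

    // Report the broadest missing requirement rather than the narrowest.
    const char *largestMissingRequirement(uint32_t Required, uint32_t Have) {
      uint32_t Missing = Required & ~Have;
      if (Missing & ReqTarget)
        return "target";
      if (Missing & ReqProcess)
        return "process";
      if (Missing & ReqThread)
        return "thread";
      if (Missing & ReqFrame)
        return "frame";
      return nullptr; // everything required is available
    }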

Patch by Paul Osmialowski.

LLVM — llvm/trunk/lib/CodeGen TargetLoweringObjectFileImpl.cpp, llvm/trunk/lib/MC MCSectionCOFF.cpp

MC: Treat ReadOnlyWithRel and ReadOnlyWithRelLocal as ReadOnly for COFF

A problem with our old behavior becomes observable under x86-64 COFF
when we need a read-only GV which has an initializer which is referenced
using a relocation: we would mark the section as writable.  Marking the
section as writable interferes with section merging.

This fixes PR21009.