ipfw: remove locking workarounds in the table code
Before the "upper half lock" became sleepable the table manipulation code
needed sophisticated workarounds to recover from races, where the lock is
temporarily dropped to do malloc(M_WAITOK). Remove all these workarounds
as they are no longer needed.
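For context, a minimal sketch of the kind of workaround being deleted
(illustrative only; table_exists() is a stand-in for the real lookup, and
error handling is elided):
```
/*
 * Old pattern: the lock could not be held across a sleeping
 * allocation, so it was dropped around malloc(M_WAITOK) and every
 * precondition had to be re-checked after relocking.
 */
IPFW_UH_WLOCK(ch);
if (table_exists(ch, name)) {
	IPFW_UH_WUNLOCK(ch);
	return (EEXIST);
}
IPFW_UH_WUNLOCK(ch);		/* dropped: malloc(M_WAITOK) may sleep */
tc = malloc(sizeof(*tc), M_IPFW, M_WAITOK | M_ZERO);
IPFW_UH_WLOCK(ch);
if (table_exists(ch, name)) {	/* lost a race while unlocked */
	IPFW_UH_WUNLOCK(ch);
	free(tc, M_IPFW);
	return (EEXIST);
}
/* ... insert the new table ... */
IPFW_UH_WUNLOCK(ch);
```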
Differential Revision: https://reviews.freebsd.org/D54580
ipfw: make the upper half lock sleepable
The so-called upper half ipfw lock is not used in the forwarding path. It
is used only during configuration changes and when servicing system events
like interface arrival/departure or vnet creation. The original code drops
the lock before malloc(M_WAITOK) and then goes to great lengths to recover
from possible races. But races still exist: e.g., create_table() first
checks for table existence, but then drops the lock. The change also fixes
an unlock leak in check_table_space() in a branch that apparently was
never entered.
By changing to a sleepable lock we can remove a lot of the existing
complexity associated with race recovery, and also use the lock to cover
other configuration-time allocations, like the recently added per-rule
bpf(4) taps.
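A minimal sketch of what the sleepable lock permits (names illustrative,
not the exact ipfw code):
```
/*
 * With a sleepable lock the allocation may happen while the lock is
 * held, so the existence check stays valid and no recovery is needed.
 */
IPFW_UH_WLOCK(ch);
if (table_exists(ch, name)) {
	IPFW_UH_WUNLOCK(ch);
	return (EEXIST);
}
tc = malloc(sizeof(*tc), M_IPFW, M_WAITOK | M_ZERO);
/* ... insert the new table; no re-check required ... */
IPFW_UH_WUNLOCK(ch);
```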
This change deliberately doesn't remove much of the race recovery code, to
ease bisection in case of a regression; that will be done in a separate
commit. This change just removes lock drops during configuration events.
The only
reduction is removal of get_map(), which is a straightforward reduce to a
[11 lines not shown]
[CIR] Upstream support for calling through method pointers (#176063)
This adds support to CIR for calling functions through method pointers
using the Itanium ABI on x86_64 targets. The ARM-specific handling of
method pointers is not yet implemented.
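For reference, a simplified C model of what a call through a method
pointer lowers to under the Itanium ABI (a sketch of the ABI rules, not
the actual CIR output):
```
#include <stddef.h>
#include <stdint.h>

/* Itanium ABI layout of a pointer to member function. */
struct memfn_ptr {
	ptrdiff_t ptr;	/* function address, or 1 + vtable byte offset */
	ptrdiff_t adj;	/* adjustment applied to 'this' before the call */
};

static void
call_memfn(void *obj, struct memfn_ptr mp)
{
	void *this_ = (char *)obj + mp.adj;
	void (*fn)(void *);

	if (mp.ptr & 1) {
		/* Virtual: ptr - 1 is a byte offset into the vtable. */
		char *vtable = *(char **)this_;
		fn = *(void (**)(void *))(vtable + mp.ptr - 1);
	} else {
		/* Non-virtual: ptr is the function address itself. */
		fn = (void (*)(void *))(uintptr_t)mp.ptr;
	}
	fn(this_);
}
```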
net: on interface detach purge all its routes before detaching protocols
Otherwise, a forwarding thread may use the interface being detached. This
is a regression from 0d469d23715d, which manifests itself as a reliably
reproducible panic in in6_selecthlim(). Note that there are old bug
reports about such a panic; I believe this change will not fix them, as
they stem not from a screwed-up detach sequence but from the lack of
proper epoch(9)-based synchronization between detach and forwarding.
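The fix boils down to an ordering change in the detach path; a simplified
sketch (the per-protocol loop is abbreviated, and names follow the FreeBSD
code only roughly):
```
/*
 * Purge every route referencing the interface before any per-protocol
 * (dom_ifdetach) teardown runs, so a concurrent forwarding thread can
 * no longer look up a route pointing at half-detached protocol state.
 */
rt_flushifroutes(ifp);		/* first: drop all of ifp's routes */
/* ... */
/* Then walk the domains and detach per-protocol data, roughly: */
SLIST_FOREACH(dp, &domains, dom_next)
	if (dp->dom_ifdetach != NULL &&
	    ifp->if_afdata[dp->dom_family] != NULL)
		(*dp->dom_ifdetach)(ifp, ifp->if_afdata[dp->dom_family]);
```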
Reviewed by: pouria
Reported & tested by: jhibbits
PR: 292162
Fixes: 0d469d23715d690b863787ebfa51529e1f6a9092
Differential Revision: https://reviews.freebsd.org/D54721
Create a poor-developer's msan for libc wide read functions. (#170586)
Most libcs optimize functions like strlen by reading in chunks larger
than a single character. As part of "the implementation", they can
legally do this as long as they are careful not to read invalid memory.
However, such tricks prevent those functions from being tested under
the various sanitizers.
This PR creates a test framework that can report when one of these
functions reads or writes in an invalid way, without using the sanitizers.
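The framework's exact mechanics aside, the classic way to detect such an
over-read without a sanitizer is a guard page; a minimal sketch of the
idea (a hypothetical helper, not the harness added by this PR):
```
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Place the string so its terminating NUL is the last byte before an
 * inaccessible page.  A conforming chunked strlen() never reads past
 * the aligned word holding the NUL, so it stays on the mapped page;
 * an implementation that over-reads crosses into the guard page and
 * crashes immediately.
 */
size_t guarded_strlen(const char *s, size_t n /* = strlen(s) */) {
	long pagesz = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, 2 * pagesz, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	size_t len;

	mprotect(buf + pagesz, pagesz, PROT_NONE);	/* guard page */
	memcpy(buf + pagesz - (n + 1), s, n + 1);
	len = strlen(buf + pagesz - (n + 1));	/* faults on over-read */
	munmap(buf, 2 * pagesz);
	return len;
}
```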
[llvm][Support] Move llvm::createStringErrorV to a new ErrorExtras.h header (#176491)
Introducing `llvm::createStringErrorV` caused a `0.5%` compile-time
regression because it's an inline function in a core header. This moves
the API to a new header to prevent including this function in files that
don't need it.
Also includes the header in the source files that have been using
`createStringErrorV` (which currently is just LLDB).
[flang][cuda] Emit better error when subprogram attribute is absent or bad (#176501)
This patch updates the parser for the CUDA Fortran subprogram attribute
to emit a more precise error.
Instead of an error like:
```
error: expected 'END'
attributes(managed) integer function fooj()
^
```
The parser will emit:
```
expected DEVICE, GLOBAL, GRID_GLOBAL, or HOST attribute
attributes(managed) integer function fooj()
^
```
[X86] Separate sibcall checks from guaranteed TCO (#176479)
Rename IsEligibleForTailCallOptimization to isEligibleForSiblingCallOpt.
LLVM supports two other ways to bypass this logic: musttail and
ShouldGuaranteeTCO. The result of this function doesn't really control
tail call eligibility, and returning false from it is not sufficient to
block tail call emission. Rename it to clarify the code.
Move the calling convention match check, which is the only thing that
matters in the guaranteed TCO case, out of this sibcall eligibility
check.
Move the GOT early binding check into the sibcall eligibility check,
since it is bypassed in either guaranteed TCO case. When that [diff
landed](https://reviews.llvm.org/D9799), it did not have exceptions for
`musttail`, but later in 9ff2eb1ea596a the two guaranteed tail call
cases were made to override this check, forcing lazy binding, which I
agree is the right tradeoff.
[3 lines not shown]
bootimage: make ${FATFILES} dependency more explicit for readability
Only add the FATFILES -> TARGETFS dependency in Makefile.bootimage
when FATFILES is defined in MD Makefiles.
This makes the optional nature of FATFILES clearer for MD ports and
avoids relying on make(1)'s implicit handling of empty targets.
[AArch64] Fix Windows prologue handling to pair more registers. (#170214)
Currently, there's code to suppress pairing, but we don't actually need
to suppress that; we just need to suppress the formation of
pre-decrement/post-increment instructions.
Pairing saves an instruction in some cases, and can also enable packed
unwind.