fix srp_follow() to close a use-after-free window
Use srp_enter() to get a new reference to the next element while
keeping the current element alive. Afterwards the old reference can
safely be released and the hazard in the caller-provided srp_ref
struct can be updated to the hazard of the new element.
This is just in time for almost all the SRP code in the tree to go away.
from Carsten Beckmann carsten_beckmann at genua.de
ok jmatthew@
drm/amdkfd: Fix GPU mappings for APU after prefetch
From Harish Kasiviswanathan
e71a1bafe6f68a9a406f7d59259643c4966f4bde in linux-6.12.y/6.12.62
eac32ff42393efa6657efc821231b8d802c1d485 in mainline linux
Update to 2025cgtz from https://github.com/JodaOrg/global-tz
o Baja California agreed with California’s DST rules in 1953 and in
1961 through 1975, instead of observing standard time all year.
o The leapseconds file contains commentary about the IERS and NIST
last-modified and expiration timestamps for leap second data.
o Commentary now also uses UTF-8 characters. This also affects
data in iso3166.tab and zone1970.tab.
when using install-info on compressed info files, uncompress from stdin
rather than passing "< filename" to the shell. from espie, ok tb.
leaks, but so does the surrounding code.
Protect the array that keeps track of which MMU contexts are in use with
a mutex. Also disable the context stealing code. It isn't mpsafe and we
should have more than enough MMU contexts to never need to steal one with
the current (hard) limits on the number of processes.
This enables some code that checks that a context that is being freed no
longer has live entries in the TSB. This code is somewhat expensive so
we may want to disable it again in the not too distant future.
ok deraadt@
The sun4v_send_ipi() function completely blocks interrupts. This may
result in failures if there is lots of IPI traffic between CPUs as
CPUs that are busy sending an IPI won't be able to process incoming IPIs.
Instead of blocking interrupts, use splhigh() to protect the per-CPU state
involved in sending IPIs.
The sun4v_broadcast_ipi() function did not block interrupts and therefore
lacked protection of the per-CPU state. This means an IPI sent from an
interrupt handler could overwrite the state, resulting in TLB flushes being
sent to the wrong CPUs or with the wrong parameters. Use splhigh() here
as well. This seems to fix (some) of the recent instability seen on
sparc64 after changes to how we tear down exiting processes.
ok deraadt@
Fix SMMUv3 StreamID boundary check. For a StreamID size of 32-bits, the
upper boundary crosses into 64-bit, so we need to use 1ULL for comparison.
This fixes use of PCI on Radxa Orion O6 (with SMMUv3 manually enabled).
Check that CH_DESTROY works. Also for now check that after CH_DESTROY
a CH_INSERT still works. This is because the table will be reinitialized
on first call, but it may be something that could change in the long run.
Fix memory leak of the CH tables used by the per-peer pending queues.
Define CH_DESTROY() and use it in peer_delete() via adjout_peer_free()
to cleanup the lookup tables used by the pending attribute and prefix
queues. Also rename adjout_prefix_flush_pending() to
adjout_peer_flush_pending() since that function no longer works
with struct adjout_prefix entries.
OK tb@
special case mbufs without a ifidx set in pf_match_rcvif.
this avoids generating a log message saying pf_match_rcvif can't
resolve an interface if there's no interface to resolve.
my if_get_smr change in pf_match_rcvif made a lot of logging appear
on sthen and jesper wallin's firewalls.
add missing KAME hack in netstat view
'systat netstat' would show fe80:1::1:34691 while
'netstat -n -f inet6' gave fe80::1%rge0.34691 as expected.
Copy over the infamous scope_id trick from netstat(1) to fix
displaying IPv6 addresses with embedded interface identifiers.
OK florian claudio bluhm
Prepare the adjout_prefix_dump upcalls for the next round of Adj-RIB-Out rework.
Both the peer and the pt_entry are now passed to the upcall since these
values will be removed from struct adjout_prefix.
Adjust all upcalls accordingly and also adjust other parts of the
'show rib out' control message handling. Since the pt_entry is now passed
to the callbacks, the other code should also do direct pt_lookup calls.
With this adjout_prefix_lookup() and adjout_prefix_match() become unused.
In up_generate_default() the adjout_prefix_lookup() can be removed and
replaced with an adjout_prefix_first() call after the pte is fetched.
OK tb@
replace the cas spinlock in kernel mutexes with a "parking" lock.
this is motivated because cas based locks are unfair, meaning that
no effort is made by the algorithm to try and give CPUs access to
the critical section in the order that they tried to acquire them.
cas based locks can also generate a lot of work for the cache
subsystem on a computer because every cpu ends up hammering the
same cacheline.
the combination of these effects for heavily contended mutexes can
get some systems into a situation where they don't make progress,
and are effectively livelocked.
this parking mutex mitigates against these problems.
it's called parking because it was very heavily influenced by what's
described in https://webkit.org/blog/6161/locking-in-webkit/. the
big influence is that the lock itself only has to record its state,
but the machinery for waiting for the lock is external to the lock.
[82 lines not shown]
PREFIX_ADJOUT_FLAG_DEAD is no longer needed and can be replaced with
a check that the attrs pointer is NULL. Refactor the code a bit
since the logic is now simpler.
OK tb@
Extend ptrace(2) PT_GET_THREAD_* to include thread names.
Use a new define larger than _MAXCOMLEN to keep that define from
propagating to ptrace.h. Ensure that pts_name is large enough with
a compile time assert.
okay claudio@ jca@
Introduce a bitmap API that scales dynamically up but is also minimal for
the common case.
Functions include:
- set, test, clear: set, test and clear a bit in the map
- empty: check if a bitmap is empty (has no bit set).
- id_get: return the lowest free id in map
- id_put: return an id to the map, aka clear
- init, reset: initialize and free a map
The first 127 elements are put directly into struct bitmap without further
allocation. For maps with more than 127 elements external memory is allocated
in the set function. This memory is only freed by reset, which must be
called before an object containing a bitmap is removed.
It is not possible to set bit 0 of a bitmap since that bit is used to
differentiate between access modes. In my use cases this is perfectly fine
since most code already treats 0 in a special way.
OK tb@