Add 'invaliduser' penalty to PerSourcePenalties, which is applied
to login attempts for usernames that do not match real accounts.
Defaults to 5s to match 'authfail' but allows administrators to
block such sources for longer if desired.
with & ok djm@
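A minimal sshd_config sketch of how this might be used, assuming the same
name:duration syntax as the existing penalties (the 30s value is purely
illustrative):

	# keep the default 5s for failed auth, penalise unknown users longer
	PerSourcePenalties authfail:5s invaliduser:30s
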
static int, not int static
c99 6.11.5:
"The placement of a storage-class specifier other than at the beginning
of the declaration specifiers in a declaration is an obsolescent
feature."
static const, not const static
c99 6.11.5:
"The placement of a storage-class specifier other than at the beginning
of the declaration specifiers in a declaration is an obsolescent
feature."
ok krw@
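A minimal illustration of both commits above; the declarations are made up,
but they show the obsolescent ordering next to the preferred one:

	/* obsolescent per C99 6.11.5: storage-class specifier not first */
	int static old_count;
	const static char *old_name = "example";

	/* preferred: storage-class specifier at the beginning */
	static int new_count;
	static const char *new_name = "example";
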
Rearrange command completion so callbacks are called without holding any
locks. This makes it possible to mark the interrupt handler MPSAFE, but
we're not actually doing that yet.
Releasing the cq mutex means the completion callback can't use the cq
entry, so we have to copy any fields we use from it into the ccb. For now,
that's just the flags. This simplifies the callbacks in a few places.
ok dlg@ (some time ago)
also tested by kettenis@ with aplns(4)
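A rough sketch of the resulting shape of the completion loop; the types and
helpers are stand-ins, not the real nvme(4) structures, but the locking shape
matches the description above:

	#include <sys/mutex.h>

	struct cqe { int flags; };
	struct ccb { int ccb_flags; void (*ccb_done)(struct ccb *); };
	struct cq  { struct mutex mtx; };

	struct cqe *cq_next(struct cq *);		/* stand-in helpers */
	struct ccb *cq_ccb(struct cq *, struct cqe *);

	void
	cq_complete(struct cq *cq)
	{
		struct cqe *cqe;
		struct ccb *ccb;

		mtx_enter(&cq->mtx);
		while ((cqe = cq_next(cq)) != NULL) {
			ccb = cq_ccb(cq, cqe);
			/* copy what the callback needs out of the cq entry ... */
			ccb->ccb_flags = cqe->flags;
			mtx_leave(&cq->mtx);
			/* ... so it can be called without holding the cq mutex */
			ccb->ccb_done(ccb);
			mtx_enter(&cq->mtx);
		}
		mtx_leave(&cq->mtx);
	}
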
don't include "xcall.h" in cpu.h, to avoid confusing userland.
llvm couldn't find "xcall.h". this follows the example set by amd64 now.
tb@ hit this, and says it helps.
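A generic sketch of the approach; the declarations are hypothetical and this
may not be the exact change made here, but it shows the usual shape: keep the
kernel-only header out of cpu.h and include it only where it is used.

	/* cpu.h: no #include "xcall.h"; a forward declaration is enough */
	struct xcall;

	/* some_kernel_file.c */
	#include "xcall.h"	/* only kernel code ever sees this header */
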
sndioctl: Fix confusion between SIOCTL_NAMEMAX and SIOCTL_DISPLAYMAX
Both macros have the same value, so the change results in no
difference in the binary.
Stop using PREFIX_ADJOUT_FLAG_STALE in up_generate_addpath().
Instead of marking prefixes with PREFIX_ADJOUT_FLAG_STALE,
up_generate_addpath() can use a local array of path-ids to track which
paths were present at the start of the call. On update the path-id is
cleared from the list, and at the end all paths still on the list are
removed.
The extra traversals during the update should not matter since the number
of available paths is small, so this linear search will only need one or
two cache lines.
It is possible to further optimize this by also tracking the adjout_prefix
pointer to drop the adjout_prefix_get() call at the end.
This also uses a fixed maximum of 2000 paths, which is more than an
order of magnitude larger than the biggest system I know of.
OK tb@
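A simplified, self-contained sketch of the bookkeeping; the helper names and
the representation are illustrative, not bgpd's actual code:

	#include <stddef.h>
	#include <stdint.h>

	#define ADDPATH_MAX	2000	/* fixed maximum, as described above */

	static uint32_t	pathids[ADDPATH_MAX];
	static size_t	npaths;

	/* remember a path-id that was present at the start of the call */
	static void
	pathid_remember(uint32_t id)
	{
		if (npaths < ADDPATH_MAX)
			pathids[npaths++] = id;
	}

	/* a path that is still announced: clear its id from the list */
	static void
	pathid_clear(uint32_t id)
	{
		size_t i;

		for (i = 0; i < npaths; i++) {
			if (pathids[i] == id) {
				pathids[i] = pathids[--npaths];
				return;
			}
		}
	}

	/* whatever is left on the list was not re-announced and is removed */
	static size_t
	pathid_remaining(uint32_t **ids)
	{
		*ids = pathids;
		return npaths;
	}
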
BN_get_word(): return (BN_ULONG)-1 on error rather than BN_MASK2
While the latter is more general in that it also works on ones' complement
architectures, we don't care about that. Adjust documentation and the
only error check for it in libcrypto.
ok deraadt
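The adjusted error check then looks roughly like this; the wrapper function
is illustrative, only BN_get_word() itself is real:

	#include <openssl/bn.h>

	/* returns 1 and stores the value, 0 on the (BN_ULONG)-1 error return */
	static int
	bn_to_word(const BIGNUM *bn, BN_ULONG *out)
	{
		BN_ULONG w;

		if ((w = BN_get_word(bn)) == (BN_ULONG)-1)
			return 0;
		*out = w;
		return 1;
	}
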
fix srp_follow to close a window on use-after-free
Use srp_enter() to get a new reference to the next element while
keeping the current element alive. Afterwards the old reference can
safely be released and the hazard in the caller provided srp_ref
struct can be updated to the hazard of the new element.
This is just in time for almost all the SRP code in the tree to go away.
from Carsten Beckmann carsten_beckmann at genua.de
ok jmatthew@
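Sketched from the description above (close to, but not necessarily identical
with, the committed code):

	#include <sys/srp.h>

	void *
	srp_follow(struct srp_ref *sr, struct srp *srp)
	{
		struct srp_ref next;
		void *v;

		/* take a reference to the next element first ... */
		v = srp_enter(&next, srp);
		/* ... so the current element stays alive until now */
		srp_leave(sr);
		/* hand the new hazard back in the caller-provided srp_ref */
		*sr = next;

		return (v);
	}
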
drm/amdkfd: Fix GPU mappings for APU after prefetch
From Harish Kasiviswanathan
e71a1bafe6f68a9a406f7d59259643c4966f4bde in linux-6.12.y/6.12.62
eac32ff42393efa6657efc821231b8d802c1d485 in mainline linux
Update to 2025cgtz from https://github.com/JodaOrg/global-tz
o Baja California agreed with California’s DST rules in 1953 and in
1961 through 1975, instead of observing standard time all year.
o The leapseconds file contains commentary about the IERS and NIST
last-modified and expiration timestamps for leap second data.
o Commentary now also uses UTF-8 characters. This also affects
data in iso3166.tab and zone1970.tab.
when using install-info on compressed info files, uncompress from stdin
rather than passing "< filename" to the shell. from espie, ok tb.
leaks, but so does the surrounding code.
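An illustrative sketch of the stdin approach in the child process; the
function and the choice of gunzip are assumptions, not install-info's exact
code:

	#include <fcntl.h>
	#include <unistd.h>

	static void
	exec_uncompress(const char *path)
	{
		int fd;

		/* open the file ourselves instead of handing "< path" to a shell */
		if ((fd = open(path, O_RDONLY)) == -1)
			_exit(1);
		if (dup2(fd, STDIN_FILENO) == -1)
			_exit(1);
		close(fd);
		execlp("gunzip", "gunzip", "-c", (char *)NULL);
		_exit(1);
	}
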
Protect the array that keeps track of which MMU contexts are in use with
a mutex. Also disable the context stealing code. It isn't mpsafe and we
should have more than enough MMU contexts to never need to steal one with
the current (hard) limits on the number of processes.
This enables some code that checks that a context that is being freed no
longer has live entries in the TSB. This code is somewhat expensive so
we may want to disable it again in the not too distant future.
ok deraadt@
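An illustrative sketch of the allocation side; the array layout and the
function are made up, only the mutex API is the real one:

	#include <sys/types.h>
	#include <sys/mutex.h>

	struct mutex ctx_mtx = MUTEX_INITIALIZER(IPL_VM);

	int
	ctx_alloc(u_int *ctx_in_use, u_int nctx)
	{
		u_int i;

		mtx_enter(&ctx_mtx);
		for (i = 1; i < nctx; i++) {
			if (ctx_in_use[i] == 0) {
				ctx_in_use[i] = 1;
				mtx_leave(&ctx_mtx);
				return i;
			}
		}
		mtx_leave(&ctx_mtx);
		return -1;	/* no free context; stealing is disabled */
	}
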
The sun4v_send_ipi() function completely blocks interrupts. This may
result in failures if there is lots of IPI traffic between CPUs as
CPUs that are busy sending an IPI won't be able to process incoming IPIs.
Instead of blocking interrupts, use splhigh() to protect the per-CPU state
involved in sending IPIs.
The sun4v_broadcast_ipi() function did not block interrupts and therefore
lacked protection of the per-CPU state. This means an IPI sent from an
interrupt handler could overwrite the state, resulting in TLB flushes being
sent to the wrong CPUs or with the wrong parameters. Use splhigh() here
as well. This seems to fix (some) of the recent instability seen on
sparc64 after changes to how we tear down exiting processes.
ok deraadt@
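An illustrative sketch of the bracketing, assuming the usual kernel headers;
the per-CPU state and the mondo helper are stand-ins, only splhigh()/splx()
are the real interface:

	struct ipi_state {
		u_long	func;
		u_long	arg;
	};

	void	ipi_trigger(struct ipi_state *);	/* hypothetical */

	void
	ipi_send(struct ipi_state *st, u_long func, u_long arg)
	{
		int s;

		/* splhigh() instead of blocking interrupts outright, so this
		 * CPU can still take incoming IPIs while sending its own */
		s = splhigh();
		st->func = func;
		st->arg = arg;
		ipi_trigger(st);
		splx(s);
	}
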