[ConstantFolding] Support ptrtoaddr in ConstantFoldCompareInstOperands (#162653)
This folds `icmp (ptrtoaddr x, ptrtoaddr y)` to `icmp (x, y)`, matching
the existing ptrtoint fold. Restrict both folds to only the case where
the result type matches the address type.
I think that all folds this can do in practice end up being valid
for ptrtoint to a type larger than the address size as well, but I
don't really see a way to justify this generically without making
assumptions about what kind of folding the recursive calls may do.
This is based on the icmp semantics specified in
https://github.com/llvm/llvm-project/pull/163936.
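As a rough illustration of why comparisons through ptrtoaddr need care (a Python sketch with made-up widths, not LLVM code or the actual fold): when the address is only the low bits of the pointer representation, equal pointers always yield equal addresses, but distinct pointers can share an address.

```python
PTR_BITS, ADDR_BITS = 64, 32   # hypothetical widths

def ptrtoaddr(p):
    # the address is the low ADDR_BITS of the pointer representation
    return p & ((1 << ADDR_BITS) - 1)

# equal pointers imply equal addresses, so a comparison that folds on
# the pointers can answer the comparison on the addresses
x = 0x1234_0000_8000
assert ptrtoaddr(x) == ptrtoaddr(x)

# but distinct pointers may coincide on the address bits
y = 0x5678_0000_8000
assert x != y and ptrtoaddr(x) == ptrtoaddr(y)
```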
[Verifier] Make sure all constexprs in instructions are visited (#171643)
Previously this only happened for constants of some types, so
invalid ptrtoaddr constants were missed.
[ValueTracking] Enhance overflow computation for unsigned mul (#171568)
Changed the range computation in computeOverflowForUnsignedMul to use
computeConstantRange as well.
This expands the set of patterns where InstCombine manages to narrow a
mul whose operands come from zext. For example, if a value comes from a
div operation, known bits alone don't give the narrowest possible range
for that value.
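A toy version of the idea (Python, not the actual ValueTracking code; the function names are made up): the constant range of a udiv result is tighter than what known bits alone can express, and that tighter range can prove a multiply never overflows.

```python
def udiv_range(num_max, divisor):
    # range of x / divisor for unsigned x in [0, num_max]
    return (0, num_max // divisor)

def umul_cannot_overflow(r1, r2, bits):
    # the product of the range maxima must fit in the result width
    return r1[1] * r2[1] < (1 << bits)

# x / 3 for a 32-bit x has max 0x55555555; known bits only clear the
# top bit, giving a looser max of 0x7fffffff
div = udiv_range(2**32 - 1, 3)
assert div == (0, 0x55555555)
assert umul_cannot_overflow(div, (0, 3), 32)             # range proves it
assert not umul_cannot_overflow((0, 0x7fffffff), (0, 3), 32)  # known bits can't
```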
---------
Co-authored-by: Adar Dagan <adar.dagan at mobileye.com>
turn tun_input into a wrapper around p2p_input.
tun packets have the address family as a 4 byte prefix on their
payload which is used to decide which address family input handler
to call. p2p_input does the same thing except it looks at
m_pkthdr.ph_family.
this makes tun_input use its 4 byte prefix to set m_pkthdr.ph_family
and then call p2p_input to use it.
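a userspace python sketch of that dispatch (the AF constants and the network-byte-order prefix are assumptions about the tun packet format, and the function names only mirror the kernel ones):

```python
import struct

AF_INET, AF_INET6 = 2, 24   # assumed OpenBSD address family values

def p2p_input(ph_family, payload):
    # dispatch on the equivalent of m_pkthdr.ph_family
    handlers = {AF_INET: "ipv4_input", AF_INET6: "ipv6_input"}
    return handlers.get(ph_family, "drop")

def tun_input(pkt):
    # peel the 4 byte address family prefix into ph_family, then let
    # p2p_input do the actual dispatch
    ph_family, = struct.unpack("!I", pkt[:4])
    return p2p_input(ph_family, pkt[4:])

assert tun_input(struct.pack("!I", AF_INET) + b"payload") == "ipv4_input"
```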
call ip input handlers for pf diverted packets via if_input_proto.
this is a step toward being able to run tpmr and veb without the
net lock. right now ip input needs net lock, so if if_input_proto
can move their calls to a locked context, tpmr and veb won't need
to be locked first.
add if_input_proto() as a wrapper around calls to mbuf proto handling.
this version directly calls the proto handler, but it will be used
in the future in combination with struct netstack to move the proto
handler call around.
let the softnet threads use ifnet refs without accounting for them.
currently you need a real ifnet refcnt via if_get/if_unit, or you
can use if_get_smr in an smr read critical section, but this allows
code in the softnet threads to use an ifnet ref simply by virtue
of running in the softnet thread. this means softnet can avoid the
atomic ops against ifnet refcnts like smr critical sections can
do, but still sleep, which you can't do in an smr critical
section.
this is implemented by having if_remove call net_tq_barriers() before
letting interface teardown proceed.
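the barrier idea can be sketched in python (a deliberately simplified model: queues stand in for the softnet task queues, and net_tq_barriers here is just a placeholder mirroring the name above). teardown pushes a barrier task into every queue and waits, so once it returns no softnet thread can still be inside work that started before the barrier:

```python
import queue
import threading

def softnet_worker(q):
    # each softnet thread just runs tasks off its queue in order
    while True:
        task = q.get()
        if task is None:
            return
        task()

def net_tq_barriers(task_queues):
    # queue a barrier task everywhere and wait: once this returns,
    # every worker has finished all work (and thus dropped any implicit
    # ifnet ref) that was queued before the barrier
    done = threading.Barrier(len(task_queues) + 1)
    for q in task_queues:
        q.put(done.wait)
    done.wait()

qs = [queue.Queue() for _ in range(2)]
threads = [threading.Thread(target=softnet_worker, args=(q,)) for q in qs]
for t in threads:
    t.start()
seen = []
qs[0].put(lambda: seen.append("ran"))
net_tq_barriers(qs)   # blocks until the task above has completed
assert seen == ["ran"]
for q in qs:
    q.put(None)
for t in threads:
    t.join()
```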
[RISCV] Generate Xqcilsm LWMI/SWMI load/store multiple instructions (#171079)
This patch adds support for generating the Xqcilsm load/store multiple
instructions as a part of the RISCVLoadStoreOptimizer pass. For now we
only combine two load/store instructions into a load/store multiple.
Support for converting more loads/stores will be added in follow-up
patches. These instructions are only applicable for 32-bit loads/stores
with an alignment of 4 bytes.
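The pairing constraint can be sketched like this (a Python model of the condition, not the actual MIR pass; the field names are invented): two 32-bit accesses of the same kind, off the same base, 4-byte aligned, at consecutive word offsets are a load/store-multiple candidate.

```python
from collections import namedtuple

MemOp = namedtuple("MemOp", "op base offset size align")

def can_pair(a, b):
    # same operation and base, both 32-bit, 4-byte aligned, and the
    # second access immediately follows the first
    return (a.op == b.op and a.base == b.base
            and a.size == b.size == 4
            and a.align % 4 == 0
            and b.offset == a.offset + 4)

lw0 = MemOp("lw", "x10", 0, 4, 4)
lw1 = MemOp("lw", "x10", 4, 4, 4)
assert can_pair(lw0, lw1)
assert not can_pair(lw0, lw1._replace(offset=8))   # not adjacent
```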
[LoongArch] Add support for the ud macro instruction (#171583)
This patch adds support for the `ud ui5` macro instruction. The `ui5`
operand must be in the range `0-31`. The macro expands to:
`amswap.w $rd, $r1, $rj`
where `ui5` specifies the register number used for `$rd` in the expanded
instruction, and `$rd` is the same as `$rj`.
Relevant binutils patch:
https://sourceware.org/pipermail/binutils/2025-December/146042.html
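The expansion described above can be written down directly (a Python sketch of the assembler macro, following the commit text; not the assembler's actual implementation):

```python
def expand_ud(ui5):
    # ui5 picks the register number; rd and rj are the same register
    assert 0 <= ui5 <= 31, "ui5 operand must be in the range 0-31"
    return f"amswap.w $r{ui5}, $r1, $r{ui5}"

assert expand_ud(0) == "amswap.w $r0, $r1, $r0"
assert expand_ud(31) == "amswap.w $r31, $r1, $r31"
```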
Do not spam middleware logs
This commit fixes an issue where, if networking is not working and we fail to sync interface IPs for TNC, continuous network events can cause the middleware logs to be spammed non-stop.
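One common way to stop that kind of spam (a generic Python sketch, not the middleware's actual fix) is to emit a failure message only when it differs from the last one logged:

```python
class StateChangeLogger:
    # suppress repeated identical failures: log only on change
    def __init__(self):
        self.last = None
        self.lines = []

    def error(self, msg):
        if msg != self.last:
            self.lines.append(msg)
            self.last = msg

log = StateChangeLogger()
for _ in range(100):                      # 100 identical network events
    log.error("failed to sync interface ips")
assert len(log.lines) == 1                # logged once, not 100 times
log.error("sync recovered")               # a new state logs again
assert log.lines[-1] == "sync recovered"
```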
populate the enchdr in network byte order instead of host byte order.
this prepends the packet payloads you can see via enc(4) interfaces,
and should have been populated consistently from the beginning.
better late than never.
i've already fixed tcpdump to cope with these fields in either
order, so this is more about setting a good example in the kernel
than anything else.
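the difference is just the struct byte order (a python sketch; the three-word af/spi/flags layout follows OpenBSD's struct enchdr, but treat the field details as an assumption here):

```python
import struct

AF_INET = 2   # assumed OpenBSD value

def enchdr(af, spi, flags):
    # "!" packs big-endian (network byte order); the old code
    # effectively used host byte order ("=")
    return struct.pack("!III", af, spi, flags)

# in network order the af field reads the same on any host
assert enchdr(AF_INET, 0, 0)[:4] == b"\x00\x00\x00\x02"
```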