[CodeGen] Ignore `ANNOTATION_LABEL` in scheduler (#190499)
This fixes a crash in `clang` for `armv7` targets when optimizations are
enabled.
Fixes #190497
NAS-140535 / 27.0.0-BETA.1 / add default_network to container (#18656)
## Summary
Containers created without explicit NIC devices are automatically given
a veth interface attached to `truenasbr0` (or the configured bridge) at
start time, but `container.query` didn't surface this — making it look
like the container had no network at all.
Adds a `default_network` field to `ContainerEntry` that returns the
bridge name (e.g. `"truenasbr0"`) when no NIC devices are explicitly
attached, or `null` when explicit NICs exist in `devices`.
## Changes
- **`ContainerEntry`**: New `default_network: str | None` field
- **`ContainerServicePart.extend`**: Computes `default_network` based on
whether any NIC device exists in the container's device list
- **`ContainerServicePart.extend_context_sync`**: Fetches the bridge
[11 lines not shown]
NAS-140535 / 26.0.0-BETA.2 / add default_network to container (#18655)
## Summary
Containers created without explicit NIC devices are automatically given
a veth interface attached to `truenasbr0` (or the configured bridge) at
start time, but `container.query` didn't surface this — making it look
like the container had no network at all.
Adds a `default_network` field to `ContainerEntry` that returns the
bridge name (e.g. `"truenasbr0"`) when no NIC devices are explicitly
attached, or `null` when explicit NICs exist in `devices`.
## Changes
- **`ContainerEntry`**: New `default_network: str | None` field
- **`ContainerServicePart.extend`**: Computes `default_network` based on
whether any NIC device exists in the container's device list
- **`ContainerServicePart.extend_context_sync`**: Fetches the bridge
[11 lines not shown]
[VPlan] Skip successors outside any loop when updating LoopInfo. (#190553)
Successors outside of any loop do not contribute to the innermost loop; skip
them to avoid incorrect results caused by
`getSmallestCommonLoop(nullptr, X)` returning `nullptr`.
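The failure mode can be modeled in a short Python sketch (the `Loop` class and helper names are illustrative, not the VPlan or `LoopInfo` API):

```python
class Loop:
    """Minimal stand-in for an LLVM Loop: just a parent pointer."""
    def __init__(self, parent=None):
        self.parent = parent

def smallest_common_loop(a, b):
    # Mirrors getSmallestCommonLoop: a null argument yields a null result.
    if a is None or b is None:
        return None
    ancestors = set()
    cur = a
    while cur is not None:
        ancestors.add(cur)
        cur = cur.parent
    # Walk b's parent chain until we hit an ancestor of a.
    cur = b
    while cur is not None and cur not in ancestors:
        cur = cur.parent
    return cur

def innermost_loop(successor_loops):
    """Fold the successors' loops into their smallest common loop."""
    common = None
    for lp in successor_loops:
        if lp is None:
            continue  # the fix: successors outside any loop don't contribute
        common = lp if common is None else smallest_common_loop(common, lp)
    return common
```

Without the `continue`, a single loop-free successor would poison the fold and the innermost loop would come back as `None` even when all other successors agree.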
[InstCombine] Fix #163110: Support peeling off matching shifts from icmp operands via canEvaluateShifted (#165975)
Consider a pattern like `icmp (shl nsw X, L), (add nsw (shl nsw Y, L),
K)`. When the constant K is a multiple of 2^L, this can be simplified to
`icmp X, (add nsw Y, K >> L)`.
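The identity behind the fold can be brute-force checked in Python, whose unbounded integers trivially satisfy the no-overflow (`nsw`) precondition:

```python
# For a signed less-than comparison, when K is a multiple of 2**L and no
# signed overflow occurs:
#   (x << L) < (y << L) + K   <=>   x < y + (K >> L)
L, K = 2, 12  # K = 12 is a multiple of 2**2, so the fold applies
holds = all(
    ((x << L) < ((y << L) + K)) == (x < (y + (K >> L)))
    for x in range(-16, 17)
    for y in range(-16, 17)
)
```

Both sides differ only by a common positive factor of `2**L`, which preserves signed ordering; that is exactly what the `nsw` flags guarantee at the IR level.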
This patch extends canEvaluateShifted to support `Instruction::Add` and
updates its signature to accept `Instruction::BinaryOps` instead of a
boolean. This change allows the function to distinguish between LShr and
AShr requirements, ensuring that information is preserved according to
the signedness and overflow flags (nsw/nuw) of the operands.
The logic is integrated into `foldICmpCommutative` to enable peeling off
matching shifts from both sides of a comparison even when an offset is
present.
Fixes: #163110
NAS-140418 / 25.10.2.2 / fix R50BM rear nvme mapping (variants) (by yocalebo) (#18652)
## Summary
The R50BM rear NVMe enclosure mapping was broken because slot identity
was derived from `/sys/bus/pci/slots/` physical slot names, which shift
depending on what other PCI devices are present in the root port
complex.
## Root Cause
The four rear NVMe drives on the R50BM sit behind a PLX PEX 9733 PCIe
switch, which connects to the CPU via a root port at `b2:00.0`. The
R50BM's CPU SLOT 3 shares this same root port complex. When a card (e.g.
a second SAS HBA) is installed in SLOT 3, the kernel assigns it a sysfs
physical slot entry (`0-1`), which pushes all NVMe physical slot numbers
up by one:
| Configuration | NVMe sysfs slots |
[49 lines not shown]
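The shifting-slot failure mode can be illustrated with a toy mapping (the slot names below are made up for the sketch, not the R50BM's real values):

```python
def map_rear_nvme(slot_names, expected=("1", "2", "3", "4")):
    """Fragile approach: identify the four rear NVMe bays by their
    /sys/bus/pci/slots physical slot names, assuming fixed numbers."""
    return sorted(n for n in slot_names if n in expected)

# Rear NVMe only: all four bays are found.
no_card = ["1", "2", "3", "4"]
# A card in CPU SLOT 3 claims entry "0-1" and shifts the NVMe slot
# numbers up by one, so the fixed-name lookup silently loses a bay.
with_card = ["0-1", "2", "3", "4", "5"]
```

Any mapping keyed on these names mislabels or drops drives whenever the root port complex enumerates differently, which is why the fix derives slot identity some other way.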
NAS-140418 / 26.0.0-BETA.1 / fix R50BM rear nvme mapping (variants) (by yocalebo) (#18653)
## Summary
The R50BM rear NVMe enclosure mapping was broken because slot identity
was derived from `/sys/bus/pci/slots/` physical slot names, which shift
depending on what other PCI devices are present in the root port
complex.
## Root Cause
The four rear NVMe drives on the R50BM sit behind a PLX PEX 9733 PCIe
switch, which connects to the CPU via a root port at `b2:00.0`. The
R50BM's CPU SLOT 3 shares this same root port complex. When a card (e.g.
a second SAS HBA) is installed in SLOT 3, the kernel assigns it a sysfs
physical slot entry (`0-1`), which pushes all NVMe physical slot numbers
up by one:
| Configuration | NVMe sysfs slots |
[49 lines not shown]
mvc: MenuSystem - add JavaScript wrapper, POC code for https://github.com/opnsense/core/pull/10086
Although this isn't a full implementation yet, it can help callers that need to access the menu system.
In the long run it might be practical for this class to also construct the menu system, which would give us some flexibility there.