[mlir] Add side-effect check to moveOperationDependencies (#176361)
This patch adds a side-effect check to `moveOperationDependencies` to
match the behavior of `moveValueDefinitions`. Previously,
`moveOperationDependencies` would move operations with side-effecting
dependencies, which could change program semantics.
**Note** that the existing test changes are needed because unregistered
operations (e.g., "moved_op"()) are treated as side-effecting. The tests
were updated to use pure operations within the moved slice, while keeping
unregistered ops for operations that are not moved (e.g., "before"(),
"foo"()). This ensures the tests continue to exercise their intended
functionality without being blocked by the new side-effect check.
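To make the behavioral change concrete, here is a minimal IR sketch (op names invented for illustration, not taken from the patch):

```mlir
// A pure op in the to-be-moved slice can still be relocated:
%c0 = arith.constant 0 : index
// An unregistered op has unknown side effects, so a slice containing it
// is no longer moved by moveOperationDependencies:
"side_effecting_op"() : () -> ()
```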
[mlir][tosa] Add constant folding for tosa.add_shape operation (#173112)
This commit introduces constant folding for the tosa.add_shape
operation. When both operands of the add_shape operation are constant
shapes, the operation is evaluated at compile time.
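As a rough illustration (the exact printed form of the ops is an assumption, not taken from the patch), the fold replaces an add of two constant shapes with a single constant:

```mlir
// Before folding: both operands are compile-time constants.
%0 = tosa.const_shape {values = dense<[2, 3]> : tensor<2xindex>} : !tosa.shape<2>
%1 = tosa.const_shape {values = dense<[4, 5]> : tensor<2xindex>} : !tosa.shape<2>
%2 = tosa.add_shape %0, %1 : (!tosa.shape<2>, !tosa.shape<2>) -> !tosa.shape<2>
// After folding, %2 becomes a constant shape:
// %2 = tosa.const_shape {values = dense<[6, 8]> : tensor<2xindex>} : !tosa.shape<2>
```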
[mlir][linalg] Update createWriteOrMaskedWrite (#174810)
`createWriteOrMaskedWrite` is used extensively in the Linalg vectorizer.
When a write uses non-zero indices, the helper currently computes mask
sizes as if the write started at 0 (`size = dim(d)`), which can produce
incorrect `vector.create_mask` operands for the generated
`vector.transfer_write`. Instead, the mask size should be computed as
`size = dim(d) - write_index(d)`.
EXAMPLE
-------
Let's illustrate with this example:
```mlir
%res = tensor.insert_slice %src into %dest[0, %c2] [5, 1] [1, 1]
  : tensor<5x1xi32> into tensor<?x3xi32>
```
This op is vectorized as a pair of `vector.transfer_read` +
`vector.transfer_write` ops. When calculating the mask for the
[20 lines not shown]
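Sketching the corrected mask for the example above (SSA names invented; this is not the exact vectorizer output): the write starts at index %c2 in dim 1 of `tensor<?x3xi32>`, so the mask size for that dim must be 3 - 2 = 1 rather than 3:

```mlir
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index  // dim(1) - write_index(1) = 3 - 2
%c2 = arith.constant 2 : index
// dim(0) is dynamic; write_index(0) = 0, so the mask size is the dim itself.
%d0 = tensor.dim %dest, %c0 : tensor<?x3xi32>
%mask = vector.create_mask %d0, %c1 : vector<5x1xi1>
%res = vector.transfer_write %vec, %dest[%c0, %c2], %mask
    : vector<5x1xi32>, tensor<?x3xi32>
```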
[ml_program] fix bufferizesToMemoryRead for ml_program.global_store (#177387)
This is a fix for the `BufferizableOpInterface` implementation for
`ml_program.global_store`.
`bufferizesToMemoryRead` currently returns false for
`GlobalStoreOpInterface`, but I believe it should return true, as
`ml_program.global_store` needs to read its input buffer to know what
value to store to the global.
This manifested as a bug where `one-shot-bufferize` would produce MLIR
that copies uninitialized data to the global variable instead of the
intended value.
For the following MLIR:
```
module {
ml_program.global private mutable @"state_tensor"(dense<0.0> : tensor<4x75xf32>) : tensor<4x75xf32>
[61 lines not shown]
math/scilab: pin to java 8
Does not build with jdk11+.
[javac] /wrkdirs/usr/ports/math/scilab/work/scilab-6.1.1/modules/graphic_objects/src/java/org/scilab/modules/graphic_objects/xmlloader/CSSParser.java:17: error: package javax.annotation does not exist
PR: 272855
Approved-by: no maintainer
[AMDGPU][NFC] Refine the representation of MODE register values.
- Eliminate the field masks.
- Segregate the encoding logic.
- Simplify and clarify the user code.
This should make it easier to update downstream branches that carry a
more advanced version of the same facility.