
Add a16w8 per-op test for bmm (#19599) #19599

Open

christine-long-meta wants to merge 4 commits into pytorch:main from christine-long-meta:export-D104532363

Conversation

christine-long-meta (Contributor) commented May 14, 2026

Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for aten.bmm on Ethos-U55 and Ethos-U85.

Changes

  • Add a16w8_bmm_test_parameters dict with 5 test configurations covering same-shape, different-shape, rectangular, batch-10, and negative-value tensors
  • Add test_bmm_a16w8_u55_INT using OpNotSupportedPipeline to verify that bmm with INT16 inputs is correctly rejected on U55 (which does not support bmm with int16)
  • Add test_bmm_a16w8_u85_INT using EthosU85PipelineINT with a16w8_quantization=True, symmetric_io_quantization=True
  • Remove unused aten_op_mm and exir_op_mm variables
  • Register ops/test_bmm.py in fbcode/ and xplat/ targets.bzl

Differential Revision: D104532363
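
For orientation, a minimal sketch of what the added tests might look like. The `BMM` wrapper module, the tensor shapes in the parameter dict, the exir-op string, and the exact pipeline constructor signatures are illustrative assumptions; only the class names, test names, and the `a16w8_quantization`/`symmetric_io_quantization` kwargs come from the summary above.

```python
# Illustrative sketch only -- shapes, module, and pipeline signatures
# are assumptions, not the literal diff.
from typing import Tuple

import torch
from executorch.backends.arm.test import common
from executorch.backends.arm.test.tester.test_pipeline import (
    EthosU85PipelineINT,
    OpNotSupportedPipeline,
)

input_t = Tuple[torch.Tensor, torch.Tensor]


class BMM(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return torch.bmm(x, y)


# Five configurations: same-shape, different-shape, rectangular,
# batch-10, and negative-value tensors (shapes here are made up).
a16w8_bmm_test_parameters = {
    "same_shape": lambda: (torch.rand(2, 4, 4), torch.rand(2, 4, 4)),
    "different_shape": lambda: (torch.rand(2, 4, 8), torch.rand(2, 8, 4)),
    "rectangular": lambda: (torch.rand(3, 2, 6), torch.rand(3, 6, 2)),
    "batch_10": lambda: (torch.rand(10, 4, 4), torch.rand(10, 4, 4)),
    "negative_values": lambda: (
        torch.rand(2, 4, 4) - 0.5,
        torch.rand(2, 4, 4) - 0.5,
    ),
}


@common.parametrize("test_data", a16w8_bmm_test_parameters)
def test_bmm_a16w8_u55_INT(test_data):
    # U55 does not support bmm with int16 activations, so the op must
    # stay un-delegated rather than being lowered.
    pipeline = OpNotSupportedPipeline[input_t](
        BMM(),
        test_data(),  # dict values are factories; call to build tensors
        {"executorch_exir_dialects_edge__ops_aten_bmm_default": 1},
        quantize=True,
        u55_subset=True,
    )
    pipeline.run()


@common.parametrize("test_data", a16w8_bmm_test_parameters)
def test_bmm_a16w8_u85_INT(test_data):
    # U85 supports int16 bmm; run the full INT pipeline with the
    # a16w8 config named in the summary.
    pipeline = EthosU85PipelineINT[input_t](
        BMM(),
        test_data(),
        aten_ops="torch.ops.aten.bmm.default",
        a16w8_quantization=True,
        symmetric_io_quantization=True,
    )
    pipeline.run()
```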

pytorch-bot (Bot) commented May 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19599

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 Cancelled Job

As of commit 2381b0f with merge base 42d87c4:

CANCELLED JOB - The following job was cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

github-actions (Bot) added the ciflow/trunk and module: arm labels May 14, 2026
meta-cla (Bot) added the CLA Signed label May 14, 2026
pytorch-bot (Bot) commented May 14, 2026

Workflows were awaiting approval. CI has now been triggered for the ciflow labels on this PR.

github-actions (Bot) commented

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.bmm` on Ethos-U55 and Ethos-U85.


## Changes
- Add `a16w8_bmm_test_parameters` dict with 5 test configurations covering same-shape, different-shape, rectangular, batch-10, and negative-value tensors
- Add `test_bmm_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_bmm_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Remove unused `aten_op_mm` and `exir_op_mm` variables
- Register `ops/test_bmm.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532363
meta-codesync (Bot) changed the title from "Add a16w8 per-op test for bmm" to "Add a16w8 per-op test for bmm (#19599)" May 14, 2026
meta-codesync (Bot) commented May 14, 2026

@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532363.

1 similar comment

christine-long-meta added a commit to christine-long-meta/executorch that referenced this pull request May 14, 2026
Summary:
Pull Request resolved: pytorch#19599

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.bmm` on Ethos-U55 and Ethos-U85.

## Changes
- Add `a16w8_bmm_test_parameters` dict with 5 test configurations covering same-shape, different-shape, rectangular, batch-10, and negative-value tensors
- Add `test_bmm_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_bmm_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Remove unused `aten_op_mm` and `exir_op_mm` variables
- Register `ops/test_bmm.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532363
christine-long-meta force-pushed the export-D104532363 branch 2 times, most recently from 6a8da35 to 71103e7, May 16, 2026 02:07
meta-codesync (Bot) changed the title from "Add a16w8 per-op test for bmm (#19599)" to "Add a16w8 per-op test for bmm" May 16, 2026
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.var` on Ethos-U55 and Ethos-U85.

## Changes
- Add `test_parameters_ethosu` class attribute to `Var` with 2 test configurations (4D tensors with correction=0 and correction=1)
- Switch existing `test_var_dim_u55_INT_no_dim` and `test_var_dim_u85_INT_no_dim` from `Var.test_parameters` to `Var.test_parameters_ethosu` for Ethos-U compatible tensor shapes
- Add `test_var_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`
- Add `test_var_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_var.py` in `fbcode/` and `xplat/` `targets.bzl`

Differential Revision: D104532362
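
A rough sketch of the `test_parameters_ethosu` attribute described above. The 4D shapes and the way `correction` is threaded through the module are assumptions; only the attribute name and the correction=0/correction=1 split come from the commit message.

```python
import torch


class Var(torch.nn.Module):
    # Ethos-U compatible 4D shapes, correction=0 vs correction=1.
    # (Shapes are illustrative, not the values in the diff.)
    test_parameters_ethosu = {
        "var_4d_correction_0": lambda: (Var(correction=0), torch.rand(1, 4, 8, 8)),
        "var_4d_correction_1": lambda: (Var(correction=1), torch.rand(1, 4, 8, 8)),
    }

    def __init__(self, correction: int = 1):
        super().__init__()
        self.correction = correction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.var(x, correction=self.correction)
```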
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.conv1d` on Ethos-U55 and Ethos-U85.

## Changes
- Add `test_conv1d_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`, reusing existing `test_data_INT` parameters
- Add `test_conv1d_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_conv1d.py` in `fbcode/` and `xplat/` `targets.bzl`

Reviewed By: Ninja91

Differential Revision: D104532360
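
A sketch of the conv1d U55 variant, assuming `test_data_INT` maps names to `(module, inputs)` factories in the existing test module; the suite layout, `input_t`, and the `aten_ops` string are assumptions:

```python
from executorch.backends.arm.test import common
from executorch.backends.arm.test.tester.test_pipeline import EthosU55PipelineINT


@common.parametrize("test_data", test_data_INT)
def test_conv1d_a16w8_u55_INT(test_data):
    model, inputs = test_data()  # assumed (module, example inputs) factory
    pipeline = EthosU55PipelineINT[input_t](
        model,
        inputs,
        aten_ops="torch.ops.aten.conv1d.default",
        a16w8_quantization=True,
        symmetric_io_quantization=True,
    )
    pipeline.run()
```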
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.gelu` on Ethos-U55 and Ethos-U85.

## Changes
- Add `test_gelu_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`, reusing the existing `Gelu.test_data` parameters (12 test configurations covering both `none` and `tanh` approximation modes)
- Add `test_gelu_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_gelu.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532359
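
The gelu variant follows the same pattern; the new ingredient is the int16 tolerance kwargs. A sketch, with the `Gelu` module and the `("none"|"tanh", tensor)` layout of each `Gelu.test_data` entry assumed:

```python
from executorch.backends.arm.test import common
from executorch.backends.arm.test.tester.test_pipeline import EthosU55PipelineINT


@common.parametrize("test_data", Gelu.test_data)
def test_gelu_a16w8_u55_INT(test_data):
    approximate, tensor = test_data()  # assumed entry layout
    pipeline = EthosU55PipelineINT[input_t](
        Gelu(approximate),
        (tensor,),
        aten_ops="torch.ops.aten.gelu.default",
        a16w8_quantization=True,
        symmetric_io_quantization=True,
        # Tolerances from the commit message: a wider quantized
        # tolerance (qtol) and an explicit epsilon for int16.
        qtol=128,
        epsilon=2**-16,
    )
    pipeline.run()
```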
Summary:

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.bmm` on Ethos-U55 and Ethos-U85.

## Changes
- Add `a16w8_bmm_test_parameters` dict with 5 test configurations covering same-shape, different-shape, rectangular, batch-10, and negative-value tensors
- Add `test_bmm_a16w8_u55_INT` using `OpNotSupportedPipeline` to verify that bmm with INT16 inputs is correctly rejected on U55 (which does not support bmm with int16)
- Add `test_bmm_a16w8_u85_INT` using `EthosU85PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`
- Remove unused `aten_op_mm` and `exir_op_mm` variables
- Register `ops/test_bmm.py` in `fbcode/` and `xplat/` `targets.bzl`

Differential Revision: D104532363
meta-codesync (Bot) changed the title from "Add a16w8 per-op test for bmm" to "Add a16w8 per-op test for bmm (#19599)" May 16, 2026