Add a16w8 per-op test for conv1d (#19597)
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19597
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: one SEV is currently active. If your PR is affected, please view it below.
❌ 1 New Failure: as of commit 09b0ebf with merge base 42d87c4, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Summary:
Add int16 activation / int8 weight (a16w8) quantization tests for `aten.conv1d` on Ethos-U55 and Ethos-U85.
## Changes
- Add `a16w8_conv1d_test_parameters` dict with 14 test configurations (7 conv configs × {per_channel_quant=True, False}) covering kernel sizes 1/3/5, stride 1/2, dilation, depthwise, and no-bias variants
- Add `test_conv1d_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, per_channel_quantization=<varied>, qtol=128, epsilon=2**-16`
- Add `test_conv1d_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_conv1d.py` in `fbcode/` and `xplat/` `targets.bzl`
bypass-pytorch-oss-checks
Differential Revision: D104532360
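The summary above can be made concrete with a sketch of what the a16w8 scheme computes. This is illustrative NumPy only, not the ExecuTorch test code: int16 activations and int8 weights with symmetric per-tensor scales, integer accumulation, and a single float rescale at the end.

```python
# Illustrative sketch only (not the executorch test code): simulate the
# a16w8 scheme with NumPy -- int16 activations, int8 weights, symmetric
# per-tensor scales, integer accumulation, then one float rescale.
import numpy as np

def quantize_symmetric(x, n_bits):
    """Symmetric per-tensor quantization to a signed n_bits integer grid."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int64)
    return q, scale

rng = np.random.default_rng(0)
x = rng.standard_normal(32).astype(np.float32)  # activations (a16)
w = rng.standard_normal(5).astype(np.float32)   # conv1d kernel (w8)

qx, sx = quantize_symmetric(x, 16)
qw, sw = quantize_symmetric(w, 8)

# Integer convolution (accumulated in int64), rescaled once at the end.
# np.convolve flips the kernel, but it does so on both paths, so the
# comparison against the float reference stays apples-to-apples.
y_int = np.convolve(qx, qw, mode="valid")
y_deq = y_int * (sx * sw)

y_ref = np.convolve(x, w, mode="valid")
err = np.max(np.abs(y_deq - y_ref))
```

With 16-bit activations the activation rounding error is negligible; the residual error comes almost entirely from the 8-bit weight grid, which is one reason the pipelines above can afford a relatively loose `qtol`.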
@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532360.
Summary:
Add int16 activation / int8 weight (a16w8) quantization tests for `aten.var` on Ethos-U55 and Ethos-U85.
## Changes
- Add `test_parameters_ethosu` class attribute to `Var` with 2 test configurations (4D tensors with correction=0 and correction=1)
- Switch existing `test_var_dim_u55_INT_no_dim` and `test_var_dim_u85_INT_no_dim` from `Var.test_parameters` to `Var.test_parameters_ethosu` for Ethos-U-compatible tensor shapes
- Add `test_var_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`
- Add `test_var_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_var.py` in `fbcode/` and `xplat/` `targets.bzl`

Differential Revision: D104532362
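For reference on the two `Var` configurations, `correction` controls the variance divisor: `correction=0` divides by N (population variance), `correction=1` applies Bessel's correction and divides by N-1. NumPy's `ddof` is the same knob, shown here with hypothetical sample data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # mean 2.5; squared deviations sum to 5.0

v0 = x.var(ddof=0)  # correction=0: 5.0 / 4 -> 1.25
v1 = x.var(ddof=1)  # correction=1: 5.0 / 3 -> 1.666...
```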
Summary:
Add int16 activation / int8 weight (a16w8) quantization tests for `aten.conv1d` on Ethos-U55 and Ethos-U85.
## Changes
- Add `test_conv1d_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True`, reusing existing `test_data_INT` parameters
- Add `test_conv1d_a16w8_u85_INT` using `EthosU85PipelineINT` with same kwargs
- Register `ops/test_conv1d.py` in `fbcode/` and `xplat/` `targets.bzl`

Reviewed By: Ninja91
Differential Revision: D104532360
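The conv1d test matrix elsewhere in this PR toggles `per_channel_quantization`. A minimal sketch (illustrative NumPy, not the quantizer's actual code) of why per-channel weight scales help when output channels differ in magnitude:

```python
import numpy as np

def quantize_per_channel(w, n_bits=8, axis=0):
    """Symmetric int8 quantization with one scale per output channel."""
    qmax = 2 ** (n_bits - 1) - 1
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    scale = np.max(np.abs(w), axis=reduce_axes, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

# Two output channels with very different magnitudes (shape: out_ch x k).
w = np.array([[0.01, -0.02, 0.015],
              [1.00, -2.00, 1.50]], dtype=np.float32)

qw, scale = quantize_per_channel(w)
w_hat = qw * scale  # dequantize

# A single per-tensor scale would be max|w| / 127 ~= 0.0157, so the whole
# small-magnitude channel would collapse onto a couple of grid points;
# per-channel scales keep it on its own fine grid.
per_channel_err = np.max(np.abs(w_hat - w))
```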