Add fp32 resnet50 model for vitisai ep demo#116

Open
Json288 wants to merge 2 commits into microsoft:main from Json288:main

Conversation

@Json288 Json288 commented Apr 4, 2026

Why is this change being made?

To verify and demonstrate the VitisAI EP's support for WebNN with FP32 image-classification models: ResNet50 and MobileNetV2.

What changed?

Added

  • demos/image-classification/models/amd/resnet50/config.json, preprocessor_config.json (local assets for AMD FP32 ResNet-50).
  • demos/image-classification/models/amd/MobileNetV2/config.json, preprocessor_config.json (local assets for AMD FP32 MobileNetV2).

Modified

  • demos/image-classification/index.js — FP32 AMD paths (amd/resnet50, amd/MobileNetV2), WebNN Hub layout handling (webnn/ JSON + webnn/onnx weights on remote vs. flat onnx/ locally), env.fetch rewrites for the Hub config.json / preprocessor_config.json, options.subfolder selection for remote vs. local, and an AMD pixel_values input patch for the classifier pipeline.
  • demos/image-classification/index.html — UI wiring for FP32 / model selection as needed for the new paths.
  • demos/image-classification/static/main.css — Styles for any new/updated controls.
  • fetch_models.js — Extra download entries for AMD ResNet-50 / MobileNetV2 from the Hub webnn/onnx paths into the local onnx/ mirror layout.
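To make the remote-vs-local layout handling above concrete, here is a minimal sketch of the kind of path resolution described; `resolveModelUrl` and `HUB_BASE` are hypothetical names for illustration, not the actual identifiers in index.js:

```javascript
// Sketch of the Hub-vs-local layout handling described above.
// On the Hub, the AMD repos keep JSON under webnn/ and weights under
// webnn/onnx; locally, fetch_models.js mirrors into a flat onnx/ layout.
const HUB_BASE = "https://huggingface.co";

function resolveModelUrl(modelId, file, { remote }) {
  if (remote) {
    const subfolder = file.endsWith(".onnx") ? "webnn/onnx" : "webnn";
    return `${HUB_BASE}/${modelId}/resolve/main/${subfolder}/${file}`;
  }
  // Local mirror: ONNX weights under onnx/, config JSON at the model root.
  const subfolder = file.endsWith(".onnx") ? "onnx" : "";
  return ["./models", modelId, subfolder, file].filter(Boolean).join("/");
}
```

For example, `resolveModelUrl("amd/resnet50", "model.onnx", { remote: true })` yields the same `webnn/onnx` Hub URL that appears in the fetch log below, while the local variant resolves into the mirrored `onnx/` folder.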

How was the change tested?

  • Lint: ESLint/Prettier clean on touched JS.
  • Manual: image-classification demo hosted locally — local ./models flows; WebNN NPU + FP32 + MobileNetV2 / ResNet50.
  • Fetch script: ran node fetch_models.js against the Hub when validating the AMD webnn/ asset URLs.

Json288 added 2 commits April 3, 2026 20:05
Signed-off-by: Song <jamesong@amd.com>
Signed-off-by: Song <jamesong@amd.com>
Comment thread on fetch_models.js:
url: "webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx",
path: "./demos/image-classification/models/webnn/efficientnet-lite4/onnx",
},
{
Contributor

I was testing out the changes in this branch, and it looks like I can't fetch the full suite of models (after the seventh one, the npm command exits and doesn't download the remaining ones). Are you seeing anything similar? Example output below.

PS C:\src\webnn-developer-preview-amd> npm run fetch-models

> webnn-developer-preview@1.0.0 fetch-models
> node fetch_models.js

[1/40] Downloading https://huggingface.co/xenova/resnet-50/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[1/40] Downloaded https://huggingface.co/xenova/resnet-50/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
[2/40] Downloading https://huggingface.co/webnn/mobilenet-v2/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[2/40] Downloaded https://huggingface.co/webnn/mobilenet-v2/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
[3/40] Downloading https://huggingface.co/webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[3/40] Downloaded https://huggingface.co/webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
[4/40] Downloading https://huggingface.co/amd/resnet50/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
-> downloading [========================================] 100% 0.0s
[4/40] Downloaded https://huggingface.co/amd/resnet50/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
[5/40] Downloading https://huggingface.co/amd/MobileNetV2/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
-> downloading [========================================] 100% 0.0s
[5/40] Downloaded https://huggingface.co/amd/MobileNetV2/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
[6/40] Downloading https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/text_encoder/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
-> downloading [========================================] 100% 0.0s
[6/40] Downloaded https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/text_encoder/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
[7/40] Downloading https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/unet/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\unet\model_layernorm.onnx
PS C:\src\webnn-developer-preview-amd>

Author

No, I did not see a similar issue with fetch_models.js. Based on the provided log, the added AMD models were downloaded without error. Could you check whether the AMD models exist in the corresponding folders? And could you run the fetch again to see if the problem persists?

Contributor

The new models exist, but I think the problem is that the rest of the models aren't getting downloaded for whatever reason. I'd expect to see ~40 models downloaded. So, I'm not sure if something regressed here, as I don't see this behavior on the main branch of the parent repo.

C:\src\webnn-developer-preview-amd>dir /s /b *.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\unet\model_layernorm.onnx

Contributor

@adrastogi adrastogi Apr 8, 2026

Well, not sure what changed in the previous few hours (I restarted and changed networks in the interim, so maybe something there got things unstuck), but at the moment, running the script seems to make it past the seventh model (it's downloading number 13 as I write this). We can consider this resolved.

Contributor

I jinxed it. It bailed out after the 14th model finished downloading. Trying again. 😢

Contributor

Hm, there's some degree of flakiness that I'm observing. At any rate, will treat this as a separate issue. Thanks for confirming behavior on your side.

[26/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b_01ec64.encoder-fp16.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[27/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[28/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b-encoder-int8.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[29/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx Error: HTTP error! Status: 404 Not Found
    at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
    at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[30/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
-> downloading [========================================] 100% 0.0s
[30/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
[31/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
-> downloading [========================================] 100% 0.0s
[31/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
[32/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_encoder_lm_fp16_layernorm_gelu.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_encoder_lm_fp16_layernorm_gelu.onnx
-> downloading [========================================] 100% 0.0s
[32/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_encoder_lm_fp16_layernorm_gelu.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_encoder_lm_fp16_layernorm_gelu.onnx
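The log above shows fetch_models.js already retrying ("2 retries left") before giving up. As a generic sketch of that pattern, a retry wrapper with exponential backoff looks roughly like the following; `withRetries` and its parameters are hypothetical names, not the actual code in fetch_models.js:

```javascript
// Generic retry-with-backoff wrapper for flaky downloads.
// Names and defaults here are illustrative, not taken from fetch_models.js.
async function withRetries(fn, { retries = 2, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      // Exponential backoff between attempts (500ms, 1000ms, ...).
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Note that a hard 404 like the ones in the log will never succeed on retry, so a caller may want to fail fast on 4xx responses and reserve retries for transient network errors.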

@@ -0,0 +1,2026 @@
{
"architectures": [
"ResNetForImageClassification"
Contributor

I was trying the sample locally. I did get back an inference, but I saw some log output indicating attempts at local file-system access. Is this expected?

[60792:33108:0407/135409.409:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [FATAL: onnxruntime, vaiml_mlopslib_partition_rule.cpp:1317 vaiml_mlopslib_partition_rule.cpp] Failed to open "C:\\temp\\adityar\\vaip\\.cache\\ef572deb32890331e9d986c1826163b6\\aie_unsupported_original_ops.json"

Contributor

@adrastogi adrastogi Apr 7, 2026

Including the logs from about://gpu here as well. It seems like the computation in my local testing may be happening on the CPU?

[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:193 stat.cpp] [Vitis AI EP] No. of Operators :
[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:204 stat.cpp]    CPU   124 
[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:213 stat.cpp]

about-gpu-2026-04-07T21-03-59-369Z.txt

Author

The failed file access caused the model to fall back to CPU. But that on-disk file access should not happen. Could you share the steps to reproduce this?

Contributor

@adrastogi adrastogi Apr 9, 2026

Sure. As prerequisites, you'll need the 1.8.59 AMD NPU EP installed, and the 2.0 Preview2 Windows App SDK build installed (you can run the installer from here). I used Google Chrome Canary as my test browser.

I cloned your fork locally, pulled down the models locally, and ran the site via npm run dev.

Once I had the server launched, I launched Google Chrome Canary with the following incantation:

PS C:\Users\adityar\AppData\Local\Google\Chrome SxS\Application> .\chrome.exe --no-first-run --no-default-browser-check --enable-features="WebMachineLearningNeuralNetwork,WebNNOnnxRuntime" --webnn-ort-ignore-ep-blocklist --webnn-ort-logging-level=VERBOSE --webnn-ort-library-path-for-testing="C:\Program Files\WindowsApps\Microsoft.WindowsAppRuntime.2-preview2_2.0.0.2_x64__8wekyb3d8bbwe" --webnn-ort-ep-device="VitisAIExecutionProvider,0x1022,0x17F0" --allow-third-party-modules about:blank --webnn-ort-ep-library-path-for-testing=VitisAIExecutionProvider?"C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.NPU.EP.1.8_1.8.59.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_vitisai_ep.dll"

(More details on some of these arguments are available here; if you have any questions about any of them, let me know.)

I then navigated to the local web site (http://127.0.0.1:8080/), loaded the Image Classification sample, and tried running the FP32 model. I was inspecting the verbose ORT logs via the chrome://gpu page.

If I should be using a newer AMD NPU EP, please let me know (I know that there's been ongoing work to eliminate the file accesses during model compilation, maybe I am on a build that doesn't have the latest fixes).

Author

My test EP seems to be older than the EP version shown in your log, but I also tried a newer version and saw no issue. Is it possible for you to test with a newer EP? I will also try this specific version in the meantime.

Contributor

@adrastogi adrastogi Apr 20, 2026

Sorry for the delay on my end. Hm, you are right, moving the EP binaries to a different directory seems to have gotten me further (I will need to investigate why that is the case).

My command line:
PS C:\Users\adityar\AppData\Local\Google\Chrome SxS\Application> .\chrome.exe --no-first-run --no-default-browser-check --enable-features="WebMachineLearningNeuralNetwork,WebNNOnnxRuntime" --webnn-ort-ignore-ep-blocklist --webnn-ort-logging-level=VERBOSE --webnn-ort-library-path-for-testing="C:\Program Files\WindowsApps\Microsoft.WindowsAppRuntime.2-preview2_2.0.0.2_x64__8wekyb3d8bbwe" --webnn-ort-ep-device="VitisAIExecutionProvider,0x1022,0x17F0" --allow-third-party-modules --webnn-ort-ep-library-path-for-testing=VitisAIExecutionProvider?"C:\Program Files\AMD-EP\onnxruntime_vitisai_ep.dll"

Contents of that AMD-EP directory:

PS C:\Program Files\AMD-EP> dir

    Directory: C:\Program Files\AMD-EP

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a---           3/26/2026 10:49 PM       10102944 aiecompiler_client.dll
-a---           2/24/2026  3:41 PM        1897555 AMD_WINML_0217_Drop_for_MSFT_Third_Party_Notices.pdf
-a---           3/26/2026 10:49 PM      151787168 dyn_dispatch_core.dll
-a---           3/26/2026 10:49 PM         285744 onnxruntime_providers_vitisai.dll
-a---           3/26/2026 10:49 PM        1414816 onnxruntime_vitis_ai_custom_ops.dll
-a---           3/26/2026 10:49 PM      147870368 onnxruntime_vitisai_ep.dll
-a---          10/23/2025 11:21 AM       16180139 third-party.zip
-a---           3/26/2026 10:49 PM      322732232 vaiml.dll
-a---           3/26/2026 10:49 PM         593096 zlib.dll

I got further, but there's a new error now, possibly something related to driver detection? Is this a problem with the driver on my device?

[17116:17492:0420/091155.319:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:1369 vitisai_compile_model.cpp] Vitis AI EP Load ONNX Model Success
[17116:17492:0420/091155.319:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:1370 vitisai_compile_model.cpp] Graph Input Node Name/Shape (1)
[17116:17492:0420/091155.319:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:1374 vitisai_compile_model.cpp]   input_1 : [1x3x224x224]
[17116:17492:0420/091155.320:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:1380 vitisai_compile_model.cpp] Graph Output Node Name/Shape (1)
[17116:17492:0420/091155.320:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:1384 vitisai_compile_model.cpp]   logits_0 : [1x1000]
[17116:17492:0420/091155.387:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:455 vitisai_compile_model.cpp] File base signature : 
[17116:17492:0420/091155.387:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:456 vitisai_compile_model.cpp] Algorithm-A: based on topologically ordered signature : ef572deb32890331e9d986c1826163b6
[17116:17492:0420/091155.387:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:458 vitisai_compile_model.cpp] Algorithm-B: based on graph inputs/outputs signature : 1f0eaf1ff73323fcca7c147314662faa
[17116:17492:0420/091155.387:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:460 vitisai_compile_model.cpp] Algorithm-B: node count: 122
[17116:17492:0420/091155.387:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vitisai_compile_model.cpp:436 vitisai_compile_model.cpp] Can not find signature in meptable , use in memory signature ef572deb32890331e9d986c1826163b6
[17116:17492:0420/091155.390:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, cache_dir.cpp:70 cache_dir.cpp] skip update cache dir: in-mem mode
[17116:17492:0420/091155.421:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vaiml_config.hpp:349 vaiml_config.hpp] VAIP commit: 09c5d5065a9a44c02518ac86a22065b7af38ad71
[17116:17492:0420/091155.423:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, vaip_driver_info.cpp:193 vaip_driver_info.cpp] Driver detection through setup api: 314
[17116:11764:0420/091212.629:ERROR:ui\gl\egl_util.cc:92] : EGL Driver message (Error) eglCreateContext: Requested version is not supported
[17116:11764:0420/091244.788:ERROR:ui\gl\egl_util.cc:92] : EGL Driver message (Error) eglCreateContext: Requested version is not supported
[17116:17492:0420/091318.593:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [WARNING: onnxruntime, node-arg-producer-map.cpp:138 node-arg-producer-map.cpp] NodeArgIndex graph ID (GraphId(staging=false, index=2)) does not match producer map graph ID (GraphId(staging=false, index=0))
[17116:17492:0420/091318.593:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [WARNING: onnxruntime, node-arg-producer-map.cpp:138 node-arg-producer-map.cpp] NodeArgIndex graph ID (GraphId(staging=false, index=2)) does not match producer map graph ID (GraphId(staging=false, index=0))
[17116:11764:0420/091322.213:ERROR:ui\gl\egl_util.cc:92] : EGL Driver message (Error) eglCreateContext: Requested version is not supported

about-gpu-2026-04-20T16-13-11-091Z.txt

Contributor

Separately, in this configuration with the private path for the binaries, I do see the EP still trying to reach out to the file system (maybe these are benign, but those would likely not be accessible from within the sandbox).

[image: screenshot of the EP's file-system access attempts]

Author

Those EGL driver messages are irrelevant to the compilation process and shouldn't cause any trouble. Similarly, the file-access attempts shouldn't cause errors, but we can try to remove those if needed. Did the compilation go through successfully for you, with outputs shown on the page? The process might take 5-10 minutes.

Contributor

@adrastogi adrastogi Apr 20, 2026

The process might take 5-10 min.

Ah, I might have given up too early. Let me try again.

Have you tried enabling any of the other samples, by chance? If these models are taking 5-10 minutes to compile, then the other heavier-weight ones like the GenAI ones might not be usable.

Author

No, I haven't. Good point though.

@microsoft-github-policy-service

@Json288 please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately

log("[Transformer.js] env.allowRemoteModels: " + transformers.env.allowRemoteModels);
log("[Transformer.js] env.allowLocalModels: " + transformers.env.allowLocalModels);

const FP16_MODEL_PATHS = {
Contributor

One thing I am still wondering about: I tried using just the production EPs via the following browser command:

```shell
PS C:\Users\adityar\AppData\Local\Google\Chrome SxS\Application> .\chrome.exe --no-first-run --no-default-browser-check --enable-features="WebMachineLearningNeuralNetwork,WebNNOnnxRuntime" --webnn-ort-ignore-ep-blocklist --webnn-ort-logging-level=VERBOSE --webnn-ort-library-path-for-testing="C:\Program Files\WindowsApps\Microsoft.WindowsAppRuntime.2-preview2_2.0.0.2_x64__8wekyb3d8bbwe" --webnn-ort-ep-device="VitisAIExecutionProvider,0x1022,0x17F0"
```

Both the GPU and NPU EPs failed to register in this configuration because of missing DLL dependencies, and the GPU process crashed shortly thereafter:

```
[12432:33028:0420/092011.118:ERROR:services\webnn\ort\environment.cc:591] : [WebNN] Failed to call ort_api->RegisterExecutionProviderLibrary( env.get(), ep_name.c_str(), package_info->library_path.value().c_str()): [WebNN] ORT status error code: 1 error message: Error loading "C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.GPU.EP.1.8_1.8.55.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_providers_migraphx.dll" which depends on "migraphx_c.dll" which is missing. (Error 1114: "A dynamic link library (DLL) initialization routine failed.")
[12432:33028:0420/092011.124:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, utils.cc:552 onnxruntime::LoadPluginOrProviderBridge] Loading EP library: 0000133C02659FC0 as a plugin
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:591] : [WebNN] Failed to call ort_api->RegisterExecutionProviderLibrary( env.get(), ep_name.c_str(), package_info->library_path.value().c_str()): [WebNN] ORT status error code: 1 error message: Error loading "C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.NPU.EP.1.8_1.8.59.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_providers_vitisai.dll" which depends on "onnxruntime_providers_shared.dll" which is missing. (Error 126: "The specified module could not be found.")
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\logging.cc:146] : [WebNN] [INFO] Registered OrtEpDevice #0: {ep_name: CPUExecutionProvider, ep_vendor: Microsoft, ep_metadata: {version: 1.24.4}, ep_options: {}}, OrtHardwareDevice: {type: CPU, vendor: AMD, vendor_id: 0x1022, device_id: 0x7, device_metadata: {Description: AMD Ryzen AI 9 365 w/ Radeon 880M              }}
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\logging.cc:146] : [WebNN] [INFO] Registered OrtEpDevice #1: {ep_name: DmlExecutionProvider, ep_vendor: Microsoft, ep_metadata: {version: 1.24.4}, ep_options: {device_id: 0}}, OrtHardwareDevice: {type: GPU, vendor: Advanced Micro Devices, Inc., vendor_id: 0x1002, device_id: 0x150e, device_metadata: {Description: AMD Radeon(TM) 880M Graphics, LUID: 202817, DxgiAdapterNumber: 0, DxgiHighPerformanceIndex: 0, DxgiVideoMemory: 4096 MB, Discrete: 0}}
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
GpuProcessHost: The GPU process crashed! Exit code: STATUS_BREAKPOINT.
```

The reason I am interested in this configuration is that it's closer to the production environment (i.e., using the released EPs). I'll dig around more on my end but wanted to check if this looks familiar to you or if you spot something wrong with this configuration.

Author

I noticed that you removed this arg: `--webnn-ort-ep-library-path-for-testing=VitisAIExecutionProvider?"C:\Program Files\AMD-EP\onnxruntime_vitisai_ep.dll"`. Why is that?

Contributor

Yeah, good question: it goes back to this remark:

> The reason I am interested in this configuration is that it's closer to the production environment

Basically, I'm trying to understand and diagnose why the official versions of the EP don't work from the MSIX package, since that is how AMD customers would use the feature.

Author

For the error caused by loading the EP DLL from the MSIX installation path, we have a PR to fix it that is waiting to be merged.
As for this one, I think it should try to load `onnxruntime_vitisai_ep.dll` instead of `onnxruntime_providers_vitisai.dll`. Is this expected?

Contributor

> I think it should try to load onnxruntime_vitisai_ep.dll instead of onnxruntime_providers_vitisai.dll. Is this expected?

Yeah, I noticed this as well: something is causing the EP routing to get confused and pick the wrong EP version. Not sure what's going on here; I'm trying to debug ORT to get more details.
