This issue was opened automatically by the Test Playbooks workflow after the test lemonade-chat-gpt-oss-120b-windows failed on the main branch.
Failure scope
- Playbook: n8n-automation-gpt-oss
- Test id: lemonade-chat-gpt-oss-120b-windows
- Device: halo
- Operating system: windows
- Runner labels: self-hosted, Windows, halo
- Runner name: xsj-aimlab-halo-0
- Commit: 5a22a31a3eb737edd92dfc76328b87983433199f
- Workflow run: https://github.com/amd/playbooks/actions/runs/25470865068
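To pull only the failed step logs for that exact run from the command line (standard gh CLI usage; the run id is the one in the URL above):

```powershell
# Show the failed steps' logs for the run linked above.
gh run view 25470865068 --repo amd/playbooks --log-failed
```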
Hardware / OS to use to reproduce
Run the failing test on a machine that matches the runner labels above (OS = windows, device = halo). The repo's self-hosted runners already advertise these labels; if you reproduce locally, use the same OS family and the same AMD device class.
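If you are unsure whether a local machine matches that profile, the sketch below prints the relevant basics using only built-in PowerShell cmdlets. Treating the halo label as a Ryzen AI (Strix Halo) class machine is an assumption made here for illustration, not something stated by the workflow.

```powershell
# Print the OS, CPU, and GPU so you can compare against the runner labels.
$os  = (Get-CimInstance Win32_OperatingSystem).Caption
$cpu = (Get-CimInstance Win32_Processor).Name
$gpu = (Get-CimInstance Win32_VideoController).Name -join ", "
Write-Host "OS : $os"
Write-Host "CPU: $cpu"
Write-Host "GPU: $gpu"

# The failing matrix entry is Windows-only and expects an AMD device class.
if ($os  -notmatch "Windows") { Write-Warning "Not a Windows host; this matrix entry will not reproduce here." }
if ($cpu -notmatch "AMD")     { Write-Warning "CPU does not report as AMD; results may not be representative." }
```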
How to dispatch the same test from CI
Re-run only the failing playbook on the same matrix entry by triggering the workflow with the playbook id:
gh workflow run test-playbooks.yml --repo amd/playbooks -f playbook_id=n8n-automation-gpt-oss
The workflow's matrix narrows down to this (device, platform) combination automatically based on the playbook's tested_platforms.
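After dispatching, you can follow the new run from the CLI as well; these are standard gh commands, independent of this repo's tooling:

```powershell
# List recent runs of the workflow, then watch one until it completes.
gh run list --repo amd/playbooks --workflow test-playbooks.yml --limit 5
gh run watch --repo amd/playbooks   # prompts for a run to follow when no run id is given
```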
How to run just this test locally
python .github/scripts/run_playbook_tests.py --playbook n8n-automation-gpt-oss --platform windows --device halo
The runner extracts test blocks from playbooks/*/n8n-automation-gpt-oss/README.md (the failing block starts around line 257).
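To inspect the exact block the runner will execute before running it, a minimal sketch (assuming you are at the repo root and the glob resolves to a single README) is:

```powershell
# Print the README region around the failing test block (reported to start near line 257).
$readme = Get-ChildItem "playbooks/*/n8n-automation-gpt-oss/README.md" | Select-Object -First 1
$lines  = Get-Content $readme.FullName
for ($n = 250; $n -le [Math]::Min(320, $lines.Count); $n++) {
    "{0,4}: {1}" -f $n, $lines[$n - 1]
}
```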
Failing test (verbatim from the README)
$ErrorActionPreference = "Stop"
# Wait for server to come up
$modelsJson = $null
for ($i=0; $i -lt 120; $i++) {
  $modelsJson = curl.exe -s --max-time 2 http://127.0.0.1:13305/api/v1/models
  if ($modelsJson) { break }
  Start-Sleep -Seconds 1
}
if (-not $modelsJson) { throw "Lemonade server not ready on http://127.0.0.1:13305" }
Write-Host "OK: Lemonade server is responding"
# Now that the server is responding, check if model is downloaded in Lemonade (robust JSON parse)
$parsed = $modelsJson | ConvertFrom-Json
$entry = $parsed.data | Where-Object { $_.id -eq "gpt-oss-120b-mxfp-GGUF" } | Select-Object -First 1
if (-not $entry) { throw "Model gpt-oss-120b-mxfp-GGUF is not present in Lemonade /api/v1/models." }
if (-not $entry.downloaded) { throw "Model gpt-oss-120b-mxfp-GGUF is present but not downloaded in Lemonade. Please download it." }
Write-Host "OK: gpt-oss-120b-mxfp-GGUF model is downloaded in Lemonade"
# Model chat test
$body = @{
model = "gpt-oss-120b-mxfp-GGUF"
messages = @(@{ role = "user"; content = "Reply with exactly: OK" })
temperature = 0
max_tokens = 32
} | ConvertTo-Json -Depth 5
$tmpBody = Join-Path $env:TEMP "lemonade-chat-body.json"
[System.IO.File]::WriteAllText($tmpBody, $body, [System.Text.UTF8Encoding]::new($false))
try {
  $out = curl.exe -sS --fail-with-body --max-time 300 http://127.0.0.1:13305/api/v1/chat/completions `
    -H "Content-Type: application/json" `
    --data-binary "@$tmpBody"
  if (-not $out) { throw "Empty response from Lemonade chat/completions" }
}
finally {
  Remove-Item $tmpBody -Force -ErrorAction SilentlyContinue
}
Result
- Test timeout: 1200s
- Exit code: 1
stderr (last lines)
Lemonade server not ready on http://127.0.0.1:13305
At line:10 char:25
+ ... delsJson) { throw "Lemonade server not ready on http://127.0.0.1:1330 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Lemonade server...127.0.0.1:13305:String) [], RuntimeException
+ FullyQualifiedErrorId : Lemonade server not ready on http://127.0.0.1:13305
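The stderr above means the readiness loop never got a response from 127.0.0.1:13305 within its 120 attempts, so the chat request was never sent. A first triage pass that uses only standard tooling (it makes no assumption about how Lemonade is launched on the runner) is to check whether anything is listening on that port and, if the server does come up, to replay the chat call by hand; the response-shape access at the end assumes the endpoint is OpenAI-compatible, as the test itself implies.

```powershell
# 1) Is any process listening on the port the test targets?
Get-NetTCPConnection -LocalPort 13305 -State Listen -ErrorAction SilentlyContinue |
    ForEach-Object { Get-Process -Id $_.OwningProcess | Select-Object Id, ProcessName, StartTime }

# 2) Hit the models endpoint verbosely to distinguish "connection refused" from a slow or unhealthy server.
curl.exe -v --max-time 10 http://127.0.0.1:13305/api/v1/models

# 3) If the server responds, replay the chat request without curl or a temp file.
$body = @{
    model       = "gpt-oss-120b-mxfp-GGUF"
    messages    = @(@{ role = "user"; content = "Reply with exactly: OK" })
    temperature = 0
    max_tokens  = 32
} | ConvertTo-Json -Depth 5
$resp = Invoke-RestMethod -Uri "http://127.0.0.1:13305/api/v1/chat/completions" `
    -Method Post -ContentType "application/json" -Body $body -TimeoutSec 300
$resp.choices[0].message.content
```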
This issue is opened and deduplicated by .github/scripts/create_failure_issues.py. Close it once the failure is fixed; subsequent failures with the same scope will reopen a fresh issue.