The v6.0 reference implementation uses 10240 as the max-token limit (run_mlperf.py:344), whereas the config YAMLs for the gpt-oss example use 32k.
I understand the 32k may be intended for the accuracy dataset on endpoints (I'm not sure whether that differs from the one used for v6.0), but this mismatch may cause a conflict in performance runs.
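A minimal sketch of the kind of consistency check that would surface this mismatch early, assuming the two values are 10240 (from run_mlperf.py:344) and 32k (from the gpt-oss config YAMLs); the function name and constants here are illustrative, not part of the actual repo:

```python
# Hypothetical consistency check between the max-token limit hardcoded in the
# v6.0 reference (run_mlperf.py:344) and the value used in the gpt-oss example
# config YAMLs. The names and constants below are assumptions for illustration.

REFERENCE_MAX_TOKENS = 10240   # value at run_mlperf.py:344 in the v6.0 reference
YAML_MAX_TOKENS = 32 * 1024    # 32k used in the gpt-oss example configs

def max_tokens_consistent(reference: int, config: int) -> bool:
    """Return True when both sources agree; a mismatch can skew performance runs."""
    return reference == config

if not max_tokens_consistent(REFERENCE_MAX_TOKENS, YAML_MAX_TOKENS):
    print(f"max-token mismatch: reference={REFERENCE_MAX_TOKENS}, "
          f"yaml={YAML_MAX_TOKENS}")
```

With the values above, the check fails and prints the mismatch, which is exactly the conflict described.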