Hi,
Many thanks for your work! I am currently working on integrating your solution with PyTorch and llama.cpp.
I had a breakthrough today: it seems I was able to push Qwen 3.5 35B-A3B to over 110 tokens per second (TPS), up from 45–50 TPS at baseline, running on AMD Strix Halo with ROCm + Vulkan. To be honest, though, I still need to validate these results more robustly.
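For anyone who wants to reproduce the measurement, here is a minimal sketch of one way to time raw decoding TPS using the llama-cpp-python bindings. The model filename is a placeholder, and this only measures plain end-to-end generation (no draft model wired in), so treat it as a rough sanity check rather than a rigorous benchmark:

```python
import time
from llama_cpp import Llama  # llama-cpp-python bindings

# Placeholder model path; n_gpu_layers=-1 offloads all layers to the GPU.
llm = Llama(model_path="qwen3.5-35b-a3b.gguf", n_gpu_layers=-1)

prompt = "Explain speculative decoding in one paragraph."

# Time a single generation pass and compute tokens per second.
start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} TPS")
```

Averaging over several runs with a fixed prompt length would make the comparison against the baseline more trustworthy.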
May I ask whether you already know when you will release the dflash model for the 122B version of Qwen 3.5, and when the final solution for the 27B variant will be available?
Thanks a lot and best regards!