An Open Letter to lightvector: New Year Wishes, Community Breakthroughs & My Hopes for KataGo in 2026 #1161

@krcc-151


Dear lightvector and the KataGo development team,

As we step into 2026, I am writing to extend my sincere New Year wishes to you, to share some exciting results from the Chinese Go AI community, and to express my hopes for KataGo's future in the year ahead.

First of all, I would like to express my sincere gratitude for creating and maintaining KataGo for so many years. As a long-time user and Go enthusiast from China, I have found KataGo an indispensable tool for learning and studying Go, and it has changed the way I, and countless players around the world, understand this ancient game. I have followed KataGo's development closely for a long time, and I care deeply about the future of this project.

As someone who has followed every KataGo update, I can see that your latest 28B weights, now in training for two years, remain the strongest official release to date. Over those two years, however, progress has been slow and incremental: there have been no major leaps in playing strength and no substantial exploration of new architectures or training methods. Many players around me feel the same. As we enter the brand new year of 2026, we sincerely hope to see this change, and to see KataGo embrace growth and innovation once again.

The main reason I am writing is to share exciting results and test data from a private Chinese Go AI training community (we often call it the "alchemy group"; we gather there to discuss, train, and optimize KataGo-based models). Over the past year, Go enthusiasts and independent developers in our community have explored and verified new architectures and optimization techniques for Go AI, achieving notable results in directions the official project has not yet tried.

First, repeated tests in our community indicate that the standard 18B model produced with ordinary SWA (Stochastic Weight Averaging) is weaker than the 18B "TF" model we trained on a Transformer-based architecture.
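For readers less familiar with the terminology, SWA simply maintains a running average of the weights from successive training checkpoints. Below is a toy sketch in plain Python; the flat lists and checkpoint values are invented purely for illustration, since real SWA averages full network tensors, not four-element lists:

```python
def swa_average(checkpoints):
    """Average a list of checkpoints element-wise, where each checkpoint
    is a flat list of parameter values. This is the core of SWA: the
    averaged weights, not any single checkpoint, form the final model."""
    n = len(checkpoints)
    num_params = len(checkpoints[0])
    return [sum(ck[i] for ck in checkpoints) / n for i in range(num_params)]

# Three hypothetical checkpoints of a four-parameter "network".
ckpts = [
    [0.10, -0.20, 0.30, 0.40],
    [0.12, -0.18, 0.28, 0.44],
    [0.08, -0.22, 0.32, 0.36],
]
print([round(v, 2) for v in swa_average(ckpts)])  # → [0.1, -0.2, 0.3, 0.4]
```

The averaging smooths out the noise of individual checkpoints, which is why it is the default way KataGo-style training pipelines produce release weights.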

More importantly, we have run head-to-head matches between our community-trained 18B TF model and your current strongest official 28B weights, all conducted strictly at equal visit counts. At low visit counts, the 18B TF model is slightly weaker than the official 28B weights, with a 49% win rate. At high visit counts, however, the 18B TF model wins notably more often and shows a clear strength advantage over the official 28B weights. Also notable: when our 18B TF model plays black under Chinese rules, it can often flip the win rate decisively in its favor within a few dozen moves.
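To put those win rates in perspective, a head-to-head win rate maps to an Elo gap via the standard logistic model. Only the 49% figure comes from our tests; the formula itself is just the usual Elo definition:

```python
import math

def elo_diff(win_rate):
    """Elo difference implied by a head-to-head win rate under the
    standard logistic model: p = 1 / (1 + 10^(-d/400))."""
    return 400.0 * math.log10(win_rate / (1.0 - win_rate))

# The 49% low-visit win rate corresponds to only about -7 Elo, i.e. the
# two models are nearly equal in that regime.
print(round(elo_diff(0.49), 1))  # prints -6.9
```

This is also why equal-visit-count testing matters: quoting a win rate without fixing the search budget would make the implied Elo gap meaningless.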

Beyond the improvements our community has already tested and validated, I also want to humbly offer one further suggestion: consider studying the model architectures and training paradigms of state-of-the-art AI systems in other fields. With proper adaptation to the specific structure of Go, many of these innovations could unlock further gains in KataGo's playing strength, beyond what we have achieved with architecture tweaks alone.

It is a real pity that these improvement directions have not yet been tried in the official project. Whether it is scaling up the parameter count, exploring Transformer-based architectures, or the comparatively simple change of switching to the Muon optimizer, the official project has not yet explored or verified them. Our community has done the preliminary verification and found that these methods can substantially improve model strength, even allowing an 18B-parameter model to outperform the latest official 28B weights at high visit counts.
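For context on the Muon mention: the core idea of Muon, as published by its authors, is to orthogonalize the momentum-accumulated gradient of each 2-D weight matrix before applying it. The sketch below is an illustration of that idea, not a faithful reimplementation: it substitutes the classical cubic Newton-Schulz iteration for Muon's tuned quintic variant, and the learning rate and momentum coefficient are invented values:

```python
import numpy as np

def newton_schulz_orthogonalize(g, steps=30):
    """Approximately orthogonalize a matrix with the classical cubic
    Newton-Schulz iteration X <- 1.5*X - 0.5*(X X^T) X. Frobenius
    normalization first keeps all singular values <= 1, which the
    iteration then pushes toward 1."""
    x = g / np.linalg.norm(g)
    for _ in range(steps):
        x = 1.5 * x - 0.5 * (x @ x.T) @ x
    return x

def muon_step(weight, grad, momentum, lr=0.02, beta=0.95):
    """One Muon-style update for a single 2-D weight matrix:
    momentum accumulation, then orthogonalized descent."""
    momentum = beta * momentum + grad
    update = newton_schulz_orthogonalize(momentum)
    return weight - lr * update, momentum
```

The appeal for a Go engine's convolution and attention weight matrices is that the orthogonalized update equalizes the step size across all directions of the gradient, which the Muon authors report speeds up training relative to AdamW on matrix-shaped parameters.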

So I sincerely write this letter hoping that you will refocus on KataGo's innovation in this new year of 2026. Together with countless players who love KataGo deeply, I am eagerly looking forward to seeing these verified directions explored: drawing on the latest innovations from the broader AI field, optimizing the model, scaling up the parameter count, and testing promising new architectures. We are all waiting for KataGo to break out of its recent slow pace, make new breakthroughs, become stronger again, and continue to be the most outstanding Go AI in the world.

Thank you again for your great contribution to the Go world through KataGo. I am looking forward to a brand new chapter for KataGo in 2026.

With sincere respect and best regards,
A loyal KataGo user from China
