Welcome to A-Evo Lab, a research initiative dedicated to the frontier of Self-Evolving Agents and Continual Learning. Led by Henry Lu, we aim to bridge the gap between static LLM capabilities and autonomous, adaptive intelligence.
We believe the next leap in AI won't just come from larger pre-training, but from the ability of agents to evolve through interaction, feedback, and self-correction.
- A-EVOLVE Visualizer: A real-time dashboard for tracking agentic evolution traces and error analysis. [ Live Demo ]
- A-EVOLVE Framework: Our core engine for agentic self-improvement in production environments.
By applying our open-source reference evolution algorithms to a base Claude Opus-4.6 model with zero manual harness engineering, A-Evolve lifted agents to top-tier performance across four diverse benchmarks: MCP-Atlas, SWE-bench Verified, Terminal-Bench 2.0, and SkillsBench.
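At a high level, agentic evolution of this kind runs a mutate-evaluate-select loop over agent configurations. The sketch below is a deliberately generic, toy illustration of that loop — every name in it (`evaluate`, `mutate`, `evolve`, the vector-based "agent") is illustrative and is not the A-Evolve API; a real system would score a harness or prompt configuration against benchmark tasks rather than a numeric objective.

```python
import random

# Toy fitness function: an "agent" here is just a parameter vector.
# A real framework would instead run the agent on benchmark tasks
# and return a success rate.
def evaluate(agent):
    target = [0.7, 0.2, 0.9]  # hypothetical stand-in objective
    return -sum((a - t) ** 2 for a, t in zip(agent, target))

def mutate(agent, rng, scale=0.1):
    # Self-correction step: perturb one dimension of the agent config.
    child = list(agent)
    i = rng.randrange(len(child))
    child[i] += rng.gauss(0, scale)
    return child

def evolve(pop_size=8, generations=50, seed=0):
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        survivors = scored[: pop_size // 2]             # select on feedback
        children = [mutate(p, rng) for p in survivors]  # mutate survivors
        population = survivors + children
    return max(population, key=evaluate)

best = evolve()
print(best)
```

The point of the sketch is the loop structure, not the objective: evolution-style improvement only needs a way to generate variants and a signal to rank them, which is why it can be layered on top of a fixed base model without retraining.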
- 04/20 New Algorithm Drop: A-Evolve added the new evolutionary algorithm GEPA, submitted by the GEPA team.
- 04/10 Integration: A-Evolve is officially integrated into the Orch-Research Skills Library, alongside AutoResearch, OpenRLHF, DeepSpeed, and SGLang.
- 04/07 New Agent Drop: We added the recently leaked public ClawCode (Claude Code), transplanting the evolution harness + skills we learned on Terminal-Bench 2.0 (TB2) directly onto ClawCode. Result on TB2: baseline 67.8% → 72.9% (+5.1pp uplift).
- 04/03 New Algorithm Drop: A-Evolve added the new evolutionary algorithm Meta-Harness.
- 03/30 Integration: A-Evolve is officially integrated into AutoResearchClaw.
- 03/25 🚀 Open-sourced A-Evolve, the universal infrastructure for developing and testing evolution algorithms.
- 03/25 📊 Open-sourced 4 evolution algorithms developed with A-Evolve, reaching #1, ~#5, ~#7, and #2 on MCP-Atlas, SWE-bench Verified, Terminal-Bench 2.0, and SkillsBench, respectively.
- 02/17 📄 Released the official implementation of our position paper Position: Agentic Evolution is the Path to Evolving LLMs (arXiv 2602.00359).
- [2026.01] Our position paper on Agentic Evolution: [https://arxiv.org/abs/2602.00359].
