Our paper: "Repo2Run: Automated Building Executable Environment for Code Repository at Scale" has been accepted by NeurIPS 2025 main track as a spotlight!
An LLM-based build agent system that helps manage and automate build processes in containerized environments. This project provides tools for handling dependencies, resolving conflicts, and managing build configurations.
- Docker-based sandbox environment for isolated builds
- Automated dependency management and conflict resolution
- Support for Python version management
- Waiting list and conflict list management for package dependencies
- Error format handling and output collection
- Python 3.x
- Docker
- Git
- Clone the repository:
```bash
git clone https://github.com/bytedance/repo2run.git
cd repo2run
```

- Install the required dependencies:
```bash
pip install -r requirements.txt
```

Note: The sample Dockerfile directory is named `build_agent/docker_templates/` (not `docker/`). A folder named `docker` next to `main.py` can shadow the `docker` PyPI package and cause `AttributeError: module 'docker' has no attribute 'from_env'`.
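The shadowing problem described in the note can be detected with a small path check. The helper below is illustrative and not part of the repo; it simply tests whether an imported module resolves from inside the project tree instead of `site-packages`:

```python
from pathlib import Path

def is_local_shadow(module_file: str, project_root: str) -> bool:
    """Return True if the resolved module file lives inside the project tree,
    i.e. a local folder (e.g. docker/) is shadowing the installed package."""
    try:
        Path(module_file).resolve().relative_to(Path(project_root).resolve())
        return True
    except ValueError:
        return False

# Example usage:
# import docker
# print(is_local_shadow(docker.__file__, "."))  # True means a local docker/ folder is in the way
```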
- Configure LLM credentials (see `docs/LLM_CONFIGURATION.md` for all supported providers).
OpenAI or OpenAI-compatible (e.g. DeepSeek; set the base URL):

```bash
export OPENAI_API_KEY="your-key"
export OPENAI_BASE_URL="https://api.deepseek.com"  # only if not using api.openai.com
```
Anthropic (Claude):
```bash
export ANTHROPIC_API_KEY="your-key"
```
Template without secrets: copy `.env.example` to `.env` locally (do not commit `.env`).
The main entry point is through the build agent's main script. You can run it with the following arguments:
```bash
python build_agent/main.py --full_name <repository_full_name> --sha <sha> --root_path <root_path> --llm <llm_name> [--llm-provider auto|openai_compatible|anthropic]
```

Where:
- `repository_full_name`: The full name of the repository (e.g., `user/repo`)
- `sha`: The commit SHA
- `root_path`: The root path for the build process
- `llm_name`: The model id for your vendor (default: `gpt-4o-2024-05-13`)
- `llm-provider` (optional): `auto` (default) infers the API from the model name (`claude` → Anthropic; otherwise OpenAI-compatible). Override with `openai_compatible`/`openai` or `anthropic`. Equivalent to the environment variable `REPO2RUN_LLM_PROVIDER`.
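The `auto` provider inference above (model names containing `claude` go to Anthropic, everything else to an OpenAI-compatible endpoint) might look roughly like this. This is a simplified sketch, not the actual code in `utils/llm_providers.py`:

```python
import os
from typing import Optional

def resolve_provider(model: str, override: Optional[str] = None) -> str:
    """Pick an API provider: explicit flag > env var > inference from model name."""
    choice = override or os.environ.get("REPO2RUN_LLM_PROVIDER", "auto")
    if choice in ("openai_compatible", "openai"):
        return "openai_compatible"
    if choice == "anthropic":
        return "anthropic"
    # "auto": infer the provider from the model id itself.
    return "anthropic" if "claude" in model.lower() else "openai_compatible"
```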
💡 For example, you can use the following repository (which is relatively easy to set up) to verify whether there are any issues with running it. I have already confirmed that it can be successfully configured with several mainstream models, including GPT-4o and Claude 3.5.
```bash
python build_agent/main.py --full_name "Benexl/FastAnime" --sha "677f4690fab4651163d0330786672cf1ba1351bf" --root_path . --llm "gpt-4o-2024-05-13"
```
DeepSeek (OpenAI-compatible) example:
```bash
export OPENAI_API_KEY="your-deepseek-key"
export OPENAI_BASE_URL="https://api.deepseek.com"
python build_agent/main.py --full_name "Benexl/FastAnime" --sha "677f4690fab4651163d0330786672cf1ba1351bf" --root_path . --llm "deepseek-chat" --llm-provider openai_compatible
```

You can use this relatively easy-to-configure repository as a baseline to evaluate whether your chosen model can effectively handle this type of task. If the program starts successfully, the repository contents will be saved under `utils/repo`, and an output folder will be created with a structure like the following:
- `inner_commands.json`
- `output_commands.json`
- `pip_list.json`
- `pipdeptree.json`
- `pipdeptree.txt`
- `sha.txt`
- `track.json`
- `track.txt`
If the repository is configured successfully, the following files will also be present:
- `Dockerfile`
- `code_edit.py`
- `test.txt`
Please note: if the output folder does not contain trajectory files such as `track.json`, something went wrong during execution. Check your setup yourself first; if other problems arise, feel free to open a GitHub Issue.
- `build_agent/` - Main package directory
  - `agents/` - Agent implementations for build configuration
  - `utils/` - Utility functions and helper classes (including `llm.py`, `llm_providers.py`)
  - `docker_templates/` - Sample Dockerfiles (renamed to avoid shadowing the `docker` PyPI package when running from `build_agent/`)
  - `main.py` - Main entry point
  - `multi_main.py` - Multi-process support
- `docs/` - Extra documentation (LLM setup, PR workflow, upstream diff notes)
- `tests/` - Optional repository unit tests (`pip install -r requirements-dev.txt && python -m pytest tests/`)
- `.github/` - Pull request template
The project uses Docker containers to create isolated build environments, ensuring clean and reproducible builds.
- Waiting List: Manages package installation queue
- Conflict Resolution: Handles version conflicts between packages
- Error Handling: Formats and processes build errors
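The waiting-list and conflict-list mechanism can be illustrated with a minimal sketch. This is an assumption about how such a queue could work, not the repo's actual classes:

```python
from collections import deque

class WaitingList:
    """Queue packages for installation; divert version clashes to a conflict list."""

    def __init__(self):
        self.queue = deque()   # (name, version) pairs awaiting installation
        self.pinned = {}       # name -> version already accepted into the queue
        self.conflicts = []    # (name, wanted_version, already_pinned_version)

    def add(self, name: str, version: str) -> None:
        if name in self.pinned and self.pinned[name] != version:
            # A different version of this package is already queued: record a conflict
            # instead of installing both.
            self.conflicts.append((name, version, self.pinned[name]))
        elif name not in self.pinned:
            self.pinned[name] = version
            self.queue.append((name, version))
```

A resolver would then work through `conflicts` (e.g. by relaxing one of the pins) before draining `queue`.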
Supports multiple Python versions for build environments.
Utilizes GPT models to assist in build configuration and problem resolution.
If you’d like to modify Repo2Run to better suit your needs, we’ve outlined some potential improvement plans. Due to time constraints, we may not be able to complete these changes immediately. However, if you implement any of them, we warmly welcome you to submit a PR and contribute to the project!
- Fork the repository
- Create your fix branch (`git checkout -b fix/short-description`)
- Commit your changes (`git commit -m 'fix: describe the change'`)
- Push to the branch (`git push origin fix/short-description`)
- Open a Pull Request
We’ve collected some common issues for your reference. If you encounter something that isn’t covered or resolved, feel free to open an Issue.
1. The program does not run properly, or no output files are generated

A: I recommend first running our suggested example to verify that your workflow can run end to end. If files like `track.json` are not generated in your output folder, it is usually an environment configuration issue. Check whether Docker has started correctly.
2. The program runs, but the model keeps throwing errors like: “ERROR! Your reply does not contain valid block or final answer”
A: This error originates from `agents/configuration.py`, which checks whether the LLM's reply contains a command structure wrapped in triple backticks (```` ``` ````). The required output format is clearly specified in the prompt; at least in our tests, GPT-4o and Claude-3.5-Sonnet did not exhibit this issue. If you encounter it, we suggest first inspecting the LLM's raw output (e.g., in `track.json` or `track.txt`).
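The fenced-block check can be sketched with a regex. This is illustrative only and not the exact pattern used in `agents/configuration.py`:

```python
import re

# Match ```-fenced blocks, optionally tagged bash/shell, and capture their body.
FENCE = re.compile(r"```(?:bash|shell)?\s*\n(.*?)\n```", re.DOTALL)

def extract_commands(reply: str) -> list:
    """Return the contents of fenced code blocks in an LLM reply, or [] if none."""
    return [m.strip() for m in FENCE.findall(reply)]
```

A reply for which `extract_commands` returns `[]` would trigger the "does not contain valid block" error.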
3. How can I configure a network proxy for the build environment?

A: You can modify the `generate_dockerfile` function in the `Sandbox` class located at `utils/sandbox.py`; it manages the generation of the initial Dockerfile. You can add statements like `ENV http_proxy=XXX` to configure the network proxy.
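For example, a proxy-aware Dockerfile header could be generated like this. The helper below is hypothetical and only mirrors the kind of change you would make inside `generate_dockerfile`:

```python
from typing import Optional

def dockerfile_header(python_version: str, proxy: Optional[str] = None) -> str:
    """Build the top of a Dockerfile, optionally injecting proxy ENV lines."""
    lines = [f"FROM python:{python_version}"]
    if proxy:
        # ENV lines make the proxy visible to every subsequent RUN instruction.
        lines += [f"ENV http_proxy={proxy}", f"ENV https_proxy={proxy}"]
    lines.append("WORKDIR /repo")
    return "\n".join(lines)
```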
(we’ll work on these when time permits; PRs are very welcome)
- You can modify the System Prompt in the `Configuration` class within `configuration.py`. The current prompt is tailored to GPT-4o and may not be suitable for other models (e.g., smaller models may exceed context limits).
- The current version supports Python. To add other languages, the main steps are:
  - a. Modify the prompt
  - b. Add the corresponding package management tool in `tools` (see `apt_download.py` and `pip_download.py` for reference)
  - c. Change the base image
| Language | Docker base image | Installation tool |
|---|---|---|
| Python | python:[version] | pip |
| JavaScript/TypeScript | node:[version] | npm |
| Java | openjdk:[version] | maven |
| Rust | rust:[version] | cargo |
| Ruby | ruby:[version] | bundler |
| R | r-base:[version] | install.packages |
| Go | golang:[version] | go get |
| PHP | php:[version] | composer |
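The table above maps naturally to a lookup table that a new language backend could start from. This is a sketch; the repo does not ship this mapping as code:

```python
# (base image template, installation tool) per language, taken from the table above.
LANGUAGE_BACKENDS = {
    "python":     ("python:{version}", "pip"),
    "javascript": ("node:{version}", "npm"),
    "java":       ("openjdk:{version}", "maven"),
    "rust":       ("rust:{version}", "cargo"),
    "ruby":       ("ruby:{version}", "bundler"),
    "r":          ("r-base:{version}", "install.packages"),
    "go":         ("golang:{version}", "go get"),
    "php":        ("php:{version}", "composer"),
}

def base_image(language: str, version: str) -> str:
    """Return the Docker base image tag for a language/version pair."""
    template, _tool = LANGUAGE_BACKENDS[language.lower()]
    return template.format(version=version)
```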
- Currently, success is defined narrowly: all tests must be runnable (i.e., `pytest --collect-only` does not error). In practice, many repositories contain inherently failing or non-runnable tests, which blocks configuration success.
- We think this can be improved. If you want to tailor the criteria, modify `tools/runtest.py` and `tools/poetryruntest.py`.
- This part can be flexible, for example:
  - Stricter: require tests to pass
  - Looser: only require 80% of tests to run, or passing a specific test, etc.
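A looser criterion like "80% of tests must be collectible" can be expressed as a small predicate. This is illustrative; the real logic lives in `tools/runtest.py` and `tools/poetryruntest.py`:

```python
def build_succeeded(collected: int, errors: int, min_ratio: float = 0.8) -> bool:
    """Looser success criterion: at least min_ratio of tests must be collectible.

    collected: tests pytest managed to collect; errors: collection errors.
    The strict upstream criterion corresponds to min_ratio = 1.0.
    """
    total = collected + errors
    if total == 0:
        return False  # nothing to collect at all counts as failure
    return collected / total >= min_ratio
```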
@article{hu2025repo2run,
title={Repo2Run: Automated Building Executable Environment for Code Repository at Scale},
author={Hu, Ruida and Peng, Chao and Wang, Xinchen and Xu, Junjielong and Gao, Cuiyun},
journal={arXiv preprint arXiv:2502.13681},
year={2025}
}

PS: The maintainer and author of the paper is a current Master's student. Since the project is largely implemented and maintained by a single person, various bugs 🐛 are inevitable. You're warmly welcome to discuss the project with me.
Apache-2.0
