An end-to-end AI agent platform for building, orchestrating, publishing, and operating AI applications.
Flask + LangChain/LangGraph backend, Vue 3 workspace, visual workflows, datasets, tools, and OpenAPI delivery.
Visit Website · API Docs · Chinese Documentation · GitHub
- About The Project
- Architecture
- Built With
- Getting Started
- Usage
- Project Structure
- Documentation
- Testing
- Contact
- Acknowledgments
OpenAgent is a full-stack platform for teams building AI applications rather than a single chat demo. The repository combines a Flask backend, Celery workers, a Vue 3 frontend, visual workflow authoring, dataset and document management, public app and workflow publishing, and OpenAPI-based delivery.
What the current codebase already supports:
- Use the home assistant to route user requests to published public agents through A2A, or turn natural-language requirements into new AI app creation flows.
- Build and manage AI apps from a dedicated workspace with draft, publish, analysis, version comparison, and prompt comparison flows.
- Design workflows visually with nodes for LLMs, tool calls, dataset retrieval, code execution, HTTP requests, branching, text processing, template transforms, and structured parameter extraction.
- Manage datasets, upload documents, inspect segments, and connect retrieval to agents and workflows.
- Browse public apps, tools, and workflows through store-style views.
- Expose published apps over REST and SSE through `POST /api/openapi/chat`.
Click the diagram to view the full-resolution architecture image.
- AI framework and orchestration: LangChain, LangGraph, workflow orchestration, tool calling, A2A delegation, skills, memory
- Knowledge and retrieval: RAG, semantic retrieval, full-text retrieval, hybrid retrieval, Weaviate, FAISS
- Backend: Python, Flask, SQLAlchemy, Celery, Flask-SocketIO, Redis, PostgreSQL
- Frontend: Vue 3, JavaScript / TypeScript, Vite, TailwindCSS, Pinia, Vue Flow, Arco Design
- Infrastructure and delivery: Docker Compose, Nginx, OpenAPI, SSE
- Model integrations: OpenAI, DeepSeek, Grok, Google, Moonshot, Tongyi, Wenxin, Ollama, Zhipu
- Docker 20.10+
- Docker Compose 2.x
- 8 GB+ RAM recommended for the full stack
- Access to at least one supported model provider API key
1. Clone the repository.

   ```bash
   git clone https://github.com/Haohao-end/openagent.git
   cd openagent
   ```

2. Create the runtime environment file.

   ```bash
   cp api/.env.example api/.env
   ```

3. Review the minimum required settings in `api/.env`:
   - `JWT_SECRET_KEY`
   - `POSTGRES_PASSWORD`
   - `REDIS_PASSWORD`
   - `WEAVIATE_API_KEY`
   - `VITE_API_PREFIX`
   - At least one provider key such as `OPENAI_API_KEY`, `DEEPSEEK_API_KEY`, or `DASHSCOPE_API_KEY`
4. Start the Docker stack.

   ```bash
   cd docker
   docker compose up -d --build
   ```

5. Open the local services.

   | Service  | URL                   | Notes          |
   | -------- | --------------------- | -------------- |
   | Frontend | http://localhost:3000 | Vue 3 web UI   |
   | API      | http://localhost:5001 | Flask REST API |
   | Nginx    | http://localhost      | Reverse proxy  |
Backend:

```bash
cd api
pip install -r requirements.txt
flask run --port 5001
```

Frontend:

```bash
cd ui
npm install
npm run serve
```

Vite serves the frontend on port 5173 by default. The frontend configuration resolves the API base from `VITE_API_PREFIX`, and local development commonly proxies `/api` to the Flask backend.
Useful commands:

```bash
cd api
pytest
```

```bash
cd ui
npm run type-check
npm run lint
npm run build
npm run test:unit -- --run
```
Use the home page as the default assistant entry point to route user questions to the most relevant published public agents through A2A, or describe a new idea in natural language and trigger AI app creation. The same surface also supports multi-turn chat, suggested prompts, image upload, and audio input.
Manage app drafts, published versions, analysis views, prompt comparisons, copies, and publishing actions from the app workspace.
Author workflows with nodes such as LLM, tool, dataset retrieval, code, HTTP request, template transform, text processor, variable assigner, parameter extractor, if/else, start, and end.
Create datasets, upload documents, inspect segments, and wire retrieval nodes into workflows or AI apps for knowledge-enabled behavior.
Publish an app and call it over POST /api/openapi/chat with standard or streaming responses, including support for multi-turn conversation identifiers.
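For the streaming variant, the response arrives as Server-Sent Events. The `data:` line convention below is standard SSE, but the event payload fields (such as `delta`) and the `[DONE]` sentinel are assumptions for illustration, not taken from the OpenAgent code. A minimal client-side parser might look like:

```python
import json


def parse_sse_chunk(raw: str) -> list[dict]:
    """Parse a chunk of an SSE stream into JSON event payloads.

    Assumes each event is a single `data: {...}` line, which is the
    common SSE convention; the exact event schema the API emits may
    differ and should be checked against the API Docs.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            body = line[len("data:"):].strip()
            # Some streaming APIs close the stream with a [DONE] sentinel.
            if body and body != "[DONE]":
                events.append(json.loads(body))
    return events
```

A client would then concatenate the text fragments from successive events (field name assumed here) to rebuild the full answer, and reuse the returned conversation identifier for multi-turn calls.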
```
.
├── api/          # Flask backend, services, handlers, tasks, migrations, tests
├── ui/           # Vue 3 frontend, routes, views, components, tests
├── docker/       # Docker Compose stack, nginx, postgres init, deployment config
├── README.md     # English project overview
└── README_ZH.md  # Chinese project overview
```
- `README_ZH.md` - Chinese project overview
- `api/README.md` - backend notes
- `ui/README.md` - frontend notes
- `docker/README.md` - Docker stack details
- `api/.env.example` - environment reference
The repository already includes automated backend and frontend tests.
- Backend: `cd api && pytest`
- Frontend unit tests: `cd ui && npm run test:unit -- --run`
- Frontend type check: `cd ui && npm run type-check`
- Frontend build validation: `cd ui && npm run build`
- Project Link: https://github.com/Haohao-end/openagent
- Website: https://openllm.cloud
- API Docs: https://s.apifox.cn/c76bd530-fd50-429c-94cc-f0e41c2675d1/api-305434417
- DeepWiki: https://deepwiki.com/Haohao-end/openagent
- Special thanks to Rui Yang and Haoyu Wang (Johns Hopkins University) for responsibly reporting a Host Header poisoning issue in the built-in tool icon URL construction and helping improve the security of this project.