This root directory contains the baseline implementation of an Agentic Operating System built with the agno library and Google's Gemini models. The demonstration is driven by `agno-agent.py`.

`agno-agent.py` shows how to create a highly capable single-agent AI backend. Key highlights include:
- Agent Initialization: Creates an Agno Assistant agent that leverages `gemini-2.5-flash` to process user input.
- MCP Integration: Ingests `MCPTools(url="https://docs.agno.com/mcp")`, allowing the agent to dynamically access standardized tools exposed via the Model Context Protocol.
- Arize Phoenix Tracing: Pipes all LLM traces and tool interactions into a local instance of Arize Phoenix, allowing you to observe exactly how your agent is "thinking".
- AgentOS Wrapper: Wraps the raw agent in the `AgentOS` architecture, exposing it as a fully-featured FastAPI backend application.
To run this file, we follow the modern Python ecosystem approach using `uv`.

- Install all dependencies defined in `pyproject.toml`:

  ```
  uv sync
  ```

  (If you do not want to use `uv sync`, you can run `uv pip install -r requirements.txt` after activating your virtual environment.)
Follow these steps directly in your PowerShell or CMD terminal:
You must provide a free Gemini API key to run the model.

PowerShell:

```
$env:GOOGLE_API_KEY="YOUR_API_KEY"
```

CMD:

```
set GOOGLE_API_KEY=YOUR_API_KEY
```

Since this script is heavily instrumented with OpenTelemetry, it expects Arize Phoenix to be listening on ports 6006/4317. Open a new terminal and run:
```
docker run -p 6006:6006 -p 4317:4317 -p 4318:4318 arizephoenix/phoenix:latest
```

You can view your agent telemetry traces in your browser at http://localhost:6006.
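Before starting the agent, you can sanity-check that the API key is actually visible to Python. This is a stdlib convenience helper, not part of the repo:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising if it is unset or empty."""
    value = os.environ.get(name, "")
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running agno-agent.py")
    return value

# Fail fast with a clear message instead of a cryptic auth error later:
# require_env("GOOGLE_API_KEY")
```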
Now that everything is ready, start the `agno-agent.py` FastAPI server using `uv`:

```
uv run python agno-agent.py
```

(If you are already inside your virtual environment, simply running `python agno-agent.py` works as well.)
Success!
Your backend should now be listening on http://0.0.0.0:8000. You can point your internal applications or the official Agno web dashboard at this URL.
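To confirm that both the backend and Phoenix are actually up before pointing a client at them, a small stdlib probe (a hypothetical helper, not part of the repo) can check the ports:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example:
# port_open("127.0.0.1", 8000)  # AgentOS FastAPI backend
# port_open("127.0.0.1", 6006)  # Arize Phoenix UI
```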
Note: For advanced multi-agent orchestration, please see `cookbook/teams_flow.py` and its accompanying documentation in the `cookbook/` directory.
Dojo is the frontend interface for interacting with your AgentOS. To start it:
- Open a new terminal.
- Navigate to your AgentOS scripts directory:

  ```
  cd e:\SKILLSET\AGENTOS\AGNO-OS
  ```

- Execute the launch script:

  ```
  .\scripts\launch_dojo.sh
  # Or using bash:
  bash scripts/launch_dojo.sh
  ```

- Open your browser and navigate to the local URL (usually http://localhost:3000).
- Connect to your AgentOS backend by supplying your backend URL (http://localhost:8000) in the UI connection settings.
(Alternatively, you can use the hosted platform at os.agno.com and point it to your local backend.)
Arize Phoenix is used for observing and tracing agent execution. Here is what is needed in your virtual environment, and how to run Phoenix under WSL.
Ensure the OpenTelemetry and Phoenix instrumentations are added to your Python environment. In your `requirements.txt` or `pyproject.toml`, make sure you have dependencies like:

```
arize-phoenix
openinference-instrumentation-agno
opentelemetry-sdk
opentelemetry-exporter-otlp
```
If using `uv`, you can install them via:

```
uv pip install arize-phoenix openinference-instrumentation-agno opentelemetry-sdk opentelemetry-exporter-otlp
```

The easiest way to run Phoenix in WSL is via Docker.
- Ensure Docker Desktop is running and WSL integration is enabled for your distro.
- Open your WSL terminal.
- Run the Phoenix container:

  ```
  docker run -p 6006:6006 -p 4317:4317 -p 4318:4318 arizephoenix/phoenix:latest
  ```
Once Phoenix is running, ensure your AgentOS script is correctly instrumented. When you run your Python script (e.g. `uv run python agno-agent.py`), the OpenTelemetry configuration within AgentOS will automatically pipe its traces to `localhost:4317` (gRPC) or `localhost:4318` (HTTP).
You can view the real-time traces by opening http://localhost:6006 in your Windows browser.