Agent_Zero - HGS-DSL Agent Visualizer

An implementation of the Human Goal Stack (HGS) framework with Domain-Specific Language (DSL) layers for web automation and task execution.

Overview

Agent_Zero is a modular system that bridges high-level human goals with low-level execution primitives. It uses LLM-powered reasoning to decompose goals into executable action trees following the Human Goal Stack (HGS) framework. Currently operating in visualization-only mode, the system focuses on tree generation and visualization, with execution infrastructure preserved for future use. The system features a comprehensive React frontend for visualization and control, a FastAPI backend for reasoning and tree generation, and a Chrome extension for browser interaction and demonstration recording.

Key Features

  • Goal Decomposition: LLM-powered decomposition of high-level goals into executable action trees (HGS trees)
  • Tree Visualization: Interactive tree visualization with minimalist Graphviz SVG rendering (visualization-only mode)
  • LLM-Based Generation: Pure LLM-based tree generation for flexible goal decomposition
  • Interactive Chat: Context-aware conversation interface for refining goals and tasks
  • Chrome Extension: Browser extension for demonstration recording and DOM event capture
    • Records user interactions (clicks, inputs, navigation) on any website
    • Captures DOM events, screenshots, and optional video recordings
    • Single-tab recording (records only the tab where recording started)
    • Real-time event streaming to backend via WebSocket
  • Demonstration Learning: Record user demonstrations and learn reusable workflows (infrastructure available)
  • Data Cleaning Pipeline: Automated cleaning and enrichment of demonstration data
  • Skill Library: Persistent storage and retrieval of learned skills
  • Training Interface: Train on datasets like Mind2Web for procedure synthesis
  • Real-Time Communication: WebSocket support for live recording and monitoring

Current Configuration

Mode: Visualization-only (execution disabled)

  • Tree generation and visualization are fully functional
  • Execution workflows are temporarily disabled for visualization focus

Active Tree Type: HGS Tree

  • Linear goal decomposition: Goal → Task → SubGoal → Action
  • Only HGS trees are currently enabled in the UI
  • AND/OR trees and Decision trees are available via API but commented out in the frontend

Generation Method: LLM Only

  • Pure LLM-based generation (flexible but requires API key)
  • Demo-based, frequency analysis, and hybrid methods are temporarily disabled
  • All generation uses OpenAI API or compatible LLM endpoints

Chrome Extension Status: Active and Fully Functional

  • Extension Name: Agent Zero - DOM Recorder (Version 2.0.1)
  • Manifest Version: 3 (Chrome's latest standard)
  • Recording Features:
    • DOM event capture (clicks, inputs, scrolls, navigation, form submissions)
    • Screenshot capture linked to events (throttled to prevent excessive captures)
    • Optional video recording (WebM format) of browser tab
    • Single-tab recording mode (only records from the tab where recording started)
  • User Interface:
    • Extension popup with start/stop controls
    • Live event counter during recording
    • Real-time status updates
  • Integration:
    • Works with all websites (http/https protocols)
    • Integrated with main UI for session management
    • Backend API integration for data storage
    • WebSocket support for real-time event streaming
  • Data Storage: Recording data stored in data/sessions/YYYY-MM-DD/{session_id}/ organized by date
  • Status: Extension is production-ready and actively used for demonstration recording
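The date-based session layout above can be sketched in Python; the helper name `session_dir` is illustrative, but the folder names (`jsons/`, `videos/`, `screenshots/`) come from this README:

```python
from datetime import date
from pathlib import Path


def session_dir(session_id: str, root: Path = Path("data/sessions")) -> Path:
    """Create data/sessions/YYYY-MM-DD/{session_id}/ and its subfolders."""
    base = root / date.today().isoformat() / session_id
    for sub in ("jsons", "videos", "screenshots"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base
```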

Repository

Agent_Zero is open source and available on GitHub:

git clone https://github.com/jeffelin/Agent_Zero.git
cd Agent_Zero

Architecture

Agent_Zero consists of a Python FastAPI backend for reasoning and tree generation (execution infrastructure is preserved but currently disabled), and a React frontend for visualization and control.

Core Components

  1. Human Goal Stack (HGS)

    • Goal: The high-level objective (e.g., "Research AI agents")
    • Task: A specific unit of work derived from the Goal
    • SubGoal: An actionable step that maps to a platform-specific action
    • Action: The concrete platform-specific operation that carries out a SubGoal
  2. Domain-Specific Languages (DSL)

    • DSL_sys (System Primitives): Universal instruction set for digital agents. Atomic operations like click, type, scroll, navigate, and extract. Platform-independent.
    • DSL_p (Platform Actions): Compositions of DSL_sys primitives tailored for specific platforms (e.g., ArxivSearch, HandshakeLogin)
  3. Stepper & Execution Engine (Infrastructure Available, Execution Disabled)

    • Stepper: Reasoning engine that prunes the search space and decides which DSL_p action to take based on the current SubGoal
    • Mapping Function M: Logic that maps a SubGoal to a specific DSL_p selection using heuristic inference or LLM reasoning
    • Note: Execution workflows are temporarily disabled; components remain in codebase for future use
  4. Programming by Example (PBE)

    • Learning module (backend/pbe) that generalizes new DSL_p actions from human demonstrations
    • Synthesizes reusable procedures from multiple demonstrations
    • Supports anti-unification and pattern extraction
  5. State Management

    • State detection and matching for robust execution
    • Context-aware execution that adapts to UI changes
    • State snapshots for replay and debugging
  6. Recording System

    • Records user demonstrations for learning
    • Analyzes demonstrations to extract workflows
    • Replays recorded workflows for similar tasks
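To make the hierarchy and the mapping function M concrete, here is a minimal Python sketch. The class names mirror the HGS levels above, but the keyword heuristic and all identifiers are illustrative, not the actual backend API:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SubGoal:
    description: str              # e.g. "Search arXiv for agent papers"
    action: Optional[str] = None  # resolved DSL_p action, once mapped


@dataclass
class Task:
    description: str
    subgoals: List[SubGoal] = field(default_factory=list)


@dataclass
class Goal:
    description: str              # e.g. "Research AI agents"
    tasks: List[Task] = field(default_factory=list)


# DSL_p actions compose DSL_sys primitives (click, type, scroll, navigate, extract).
DSL_P = {
    "ArxivSearch": ["navigate", "type", "click", "extract"],
    "HandshakeLogin": ["navigate", "type", "type", "click"],
}

# Toy keyword table standing in for the heuristic branch of the mapping function M.
KEYWORDS = {"arxiv": "ArxivSearch", "handshake": "HandshakeLogin"}


def mapping_m(subgoal: SubGoal) -> Optional[str]:
    """Mapping function M: select a DSL_p action for a SubGoal (keyword heuristic)."""
    text = subgoal.description.lower()
    return next((act for kw, act in KEYWORDS.items() if kw in text), None)
```

In the real system the Stepper may also fall back to LLM reasoning when no heuristic match exists; this sketch shows only the heuristic path.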

Directory Structure

Agent_Zero/
├── backend/                    # FastAPI backend application
│   ├── api/                   # REST API endpoints (routes.py)
│   ├── core/                  # Core logic (DSL, Stepper, Execution Controller)
│   │   ├── dsl_sys.py         # System primitives
│   │   ├── dsl_p.py           # Platform actions
│   │   ├── stepper.py         # Action selection logic
│   │   └── execution_controller.py
│   ├── pbe/                   # Programming by Example learning module
│   │   ├── synthesizer.py     # Procedure synthesis from demos
│   │   ├── generalizer.py     # Pattern generalization
│   │   └── evaluator.py       # Evaluation metrics
│   ├── automation/            # Browser and desktop automation
│   │   ├── backends/          # Playwright, visual, and hybrid executors
│   │   └── service.py         # Backend selector/orchestrator
│   ├── recording/             # Demonstration recording and analysis
│   │   ├── recorder.py        # Records user actions
│   │   ├── analyzer.py        # Analyzes demonstrations
│   │   └── replayer.py        # Replays recorded workflows
│   ├── memory/                # Persistent conversation + execution memory
│   │   ├── conversation_store.py  # Writes transcripts to data/conversations
│   │   └── memory_retrieval.py    # Fetches relevant memories for new runs
│   ├── state/                 # State detection and matching
│   │   ├── state_detector.py  # Detects UI state
│   │   ├── state_snapshot.py  # Captures DOM/visual diffs
│   │   └── state_context.py   # Manages execution context
│   ├── services/              # Business logic services
│   │   ├── workflow_service.py      # HGS tree generation
│   │   ├── execution_service.py     # Tree execution + subgoal planning
│   │   ├── data_cleaning_service.py  # Data cleaning pipeline
│   │   ├── session_processor.py      # Session processing and consolidation
│   │   ├── tree_visualization_service.py  # Tree visualization (Graphviz)
│   │   ├── feature_extractor_service.py   # Feature extraction for ML
│   │   ├── multi_strategy_element_finder.py  # Multi-strategy element finding
│   │   ├── storage_manager.py        # Storage management utilities
│   │   ├── storage.py               # Skill library storage
│   │   └── telemetry.py             # Structured logging
│   ├── execution/             # Controllers/orchestrators + retry policies
│   ├── pflow/                 # Plan/execute/reflect helper nodes
│   ├── data/                  # Storage for skills, procedures, and state
│   │   ├── procedures/         # Synthesized procedures
│   │   ├── mind2web_loader.py  # Mind2Web dataset loader
│   │   └── procedure_storage.py  # Procedure storage utilities
│   ├── tools/                 # External tool integrations
│   ├── scripts/               # Utilities (e.g., convert_sessions_to_demos.py)
│   ├── clients/               # LLM client integrations
│   ├── config/                # Configuration management
│   ├── models/                # Data models
│   ├── utils/                 # Utility functions
│   ├── validators/            # Validation logic
│   └── results/               # Runtime artifacts (git-ignored)
├── frontend/                  # React + TypeScript frontend
│   ├── src/                   # Source code
│   │   ├── components/        # React components
│   │   ├── pages/             # Page components
│   │   └── hooks/             # Custom React hooks
│   ├── extension/             # Chrome Extension for demonstration recording
│   │   ├── manifest.json      # Extension manifest (Manifest v3)
│   │   ├── content_script.js  # DOM event capture (~1,800 lines)
│   │   ├── background.js      # Service worker for state management
│   │   ├── popup.html         # Extension popup UI
│   │   └── popup.js           # Popup logic and controls
│   └── dist/                  # Production build artifacts
├── data/                      # Storage for skills and demonstrations
│   ├── sessions/              # Browser recording sessions (organized by date)
│   │   └── YYYY-MM-DD/        # Date-organized session folders
│   │       └── {session_id}/  # Individual session data
│   │           ├── jsons/     # JSON files (session.json, events.jsonl, processed.json)
│   │           ├── videos/    # Video recordings (video.webm)
│   │           └── screenshots/  # Screenshots linked to events
│   ├── conversations/         # Session conversation transcripts
│   ├── demonstrations/        # Recorded demonstrations (legacy)
│   ├── examples/              # Curated example workflows
│   └── skills.json            # Skill library 
├── requirements.txt           # Python dependencies
├── SETUP_AND_RUN.md           # Additional setup and run instructions
└── README.md                  # This file

Frontend Architecture

The frontend is a dual-interface system:

  1. Main UI (React + TypeScript): Visualizes HGS trees using React Flow plus backend-generated SVG rendering with minimalist styling

    • Three-panel layout: Input (left), Visualization (center), Chat/Logs (right)
    • Interactive tree visualization with scrolling and zoom
    • Chat interface for goal refinement and clarification
    • PNG export of the current tree visualization (via Export dialog)
    • Demonstration recording controls integrated in the UI
  2. Chrome Extension (frontend/extension/): Browser extension for demonstration recording (Fully Functional)

    • Manifest Version 3 (Chrome's latest extension standard)
    • Components:
      • content_script.js: Injected into web pages to capture DOM events and user interactions (~1,800 lines)
      • background.js: Service worker managing recording state and backend communication
      • popup.html / popup.js: Extension popup UI with start/stop controls and live event counter
    • Features:
      • Records clicks, inputs, scrolls, navigation, and form submissions
      • Captures screenshots linked to important events
      • Optional video recording of the browser tab (WebM format)
      • Single-tab recording mode (only records events from the tab where recording started)
      • Real-time event streaming to backend via WebSocket
      • Session management and state persistence
    • Status: Fully functional for demonstration recording
    • Permissions: <all_urls>, activeTab, scripting, storage, tabs

Setup & Installation

Prerequisites

  • Python 3.9+ (Python 3.10+ recommended)
  • Node.js 18+
  • OpenAI API key (or compatible LLM API)
  • Playwright browsers (installed automatically with dependencies)
  • Graphviz (for tree visualization)
    • macOS: brew install graphviz
    • Ubuntu/Debian: sudo apt-get install graphviz
    • Windows: Download from graphviz.org
  • Google Chrome (for extension and browser automation)

Quick Start

Use the provided script:

./RUN.sh

This will:

  1. Activate the virtual environment (if available)
  2. Check for API keys
  3. Build the frontend if needed
  4. Start the server on http://localhost:8000

Note: After starting the server, you still need to:

  • Load the Chrome extension (see "Chrome Extension Setup" below)
  • Optionally set up Mind2Web dataset (see "Mind2Web Dataset Setup" below)

Manual Installation

Backend

  1. Navigate to the project root:

    cd Agent_Zero
  2. Create and activate a virtual environment:

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Install Playwright browsers:

    playwright install
  5. Install Graphviz (required for tree visualization):

    # macOS
    brew install graphviz
    
    # Ubuntu/Debian
    sudo apt-get install graphviz
    
    # Windows
    # Download from: https://graphviz.org/download/
  6. Set environment variables:

    export OPENAI_API_KEY='your-key-here'

    Or create a .env file in the project root:

    echo "OPENAI_API_KEY=your-key-here" > .env
  7. Run the server:

    uvicorn backend.api.routes:app --reload --port 8000

    Or use the start script:

    ./backend/START_SERVER.sh

Data Directory Setup

The data/ directory is automatically created when you first use the application. It stores:

  • Session recordings (data/sessions/): Browser demonstration sessions organized by date
  • Conversation transcripts (data/conversations/): Session conversation history
  • Example workflows (data/examples/): Saved example workflows
  • Demonstrations (data/demonstrations/): Processed demonstration data (legacy)

Directory Structure:

data/
├── sessions/
│   └── YYYY-MM-DD/              # Organized by date
│       └── {session_id}/        # Individual session folders
│           ├── jsons/           # JSON files (session.json, events.jsonl, processed.json)
│           ├── videos/          # Video recordings (optional)
│           └── screenshots/     # Screenshots linked to events
├── conversations/               # Session conversation transcripts
└── examples/                    # Curated example workflows

The directory is created automatically on first use. No manual setup required.

Mind2Web Dataset Setup (Optional)

Mind2Web is a dataset containing 2,350+ real-world web tasks from 137 websites. It's used for training and evaluation.

Location: Agent_Zero/datasets/Mind2Web/data/

Quick Setup

  1. Navigate to the Mind2Web directory:

    cd Agent_Zero/datasets/Mind2Web
  2. Clone the training data from Hugging Face:

    git clone https://huggingface.co/datasets/osunlp/Mind2Web data
  3. Download and extract test splits (password: mind2web):

    cd data
    # Download these files manually from:
    # https://huggingface.co/datasets/osunlp/Mind2Web/tree/main
    
    # Then extract:
    unzip -P mind2web test_task.zip
    unzip -P mind2web test_website.zip
    unzip -P mind2web test_domain.zip
  4. Verify the structure:

    ls -la
    # Should see: train/, test_task/, test_website/, test_domain/
  5. Test the loader:

    cd Agent_Zero
    python3 scripts/testing/test_mind2web.py

Expected Output:

✅ Loader available: True
✅ Dataset root: /path/to/Agent_Zero/datasets/Mind2Web/data
✅ Available splits: ['train', 'test_task', 'test_website', 'test_domain']
✅ Sample task loaded

Note: Mind2Web setup is optional. The application works without it, but you won't be able to use the training interface with Mind2Web tasks.

For detailed setup instructions, see datasets/Mind2Web/SETUP.md.
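Once the splits are in place, iterating over task files can be sketched as below. This assumes tasks are stored as JSON files under each split directory; it is a generic sketch, not the actual mind2web_loader.py API:

```python
import json
from pathlib import Path
from typing import Iterator


def iter_tasks(split_dir: Path) -> Iterator[dict]:
    """Yield task dicts from every *.json file under a split directory (e.g. train/)."""
    for path in sorted(split_dir.rglob("*.json")):
        data = json.loads(path.read_text(encoding="utf-8"))
        # A file may contain a single task object or a list of tasks.
        yield from (data if isinstance(data, list) else [data])
```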

Chrome Extension Setup

The Chrome extension is required for demonstration recording and browser interaction. It's a Manifest v3 extension that captures user interactions for learning workflows.

Installation Steps:

  1. Open Google Chrome and navigate to chrome://extensions/

  2. Enable "Developer mode" (toggle switch in the top right corner)

  3. Click "Load unpacked"

  4. Navigate to and select the extension directory:

    Agent_Zero/frontend/extension/
    
  5. The extension "Agent Zero - DOM Recorder" should appear in your extensions list with version 2.0.1

  6. Verify installation: Click the extension icon in Chrome's toolbar; you should see the popup with a "Start Recording" button

Extension Features:

  • DOM Event Recording: Captures clicks, inputs, scrolls, form submissions, and navigation
  • Screenshot Capture: Takes screenshots linked to important events (throttled to prevent excessive captures)
  • Video Recording: Optional tab video recording (WebM format) when enabled
  • Single-Tab Mode: Records only from the tab where recording started (ignores other tabs)
  • Live Event Counter: Popup shows real-time event count during recording
  • Session Management: Tracks recording sessions and manages state persistence
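The screenshot throttling mentioned above can be illustrated with a small time-based gate. It is shown in Python for consistency with the backend; the real logic lives in the extension's JavaScript, and the interval value is illustrative:

```python
import time


def make_throttle(min_interval: float):
    """Return a predicate allowing at most one capture per min_interval seconds."""
    last = [0.0]  # timestamp of the last allowed capture

    def allow(now=None) -> bool:
        t = time.monotonic() if now is None else now
        if t - last[0] >= min_interval:
            last[0] = t
            return True
        return False

    return allow
```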

Extension Components:

  • manifest.json: Extension configuration (Manifest v3)
  • content_script.js: Injected into web pages to capture DOM events (~1,800 lines)
  • background.js: Service worker managing state and backend communication
  • popup.html / popup.js: Extension popup UI with controls

Using the Extension:

  1. From Extension Popup:

    • Click the extension icon in Chrome's toolbar
    • Click "Start Recording" button
    • Perform actions on the webpage
    • Click "Stop Recording" when done
    • View event count in the popup
  2. From Main UI:

    • Use the "Demonstrate" source option in the left panel
    • Click "Start Recording" - this communicates with the extension
    • Record your demonstration
    • Stop recording from the UI

Extension Permissions (Required):

  • activeTab: Access to current tab for recording
  • scripting: Inject content scripts into web pages
  • storage: Store recording session state locally
  • tabs: Manage and track browser tabs
  • <all_urls>: Record demonstrations on any website (http/https)

Current Status:

  • Extension is fully functional for demonstration recording
  • Recording infrastructure works end-to-end
  • Data is stored in data/sessions/YYYY-MM-DD/{session_id}/
  • Integration with backend API for session management
  • WebSocket support for real-time event streaming

Troubleshooting:

  • Reload the extension in chrome://extensions/ after code changes
  • Check browser console (F12) for extension errors
  • Verify backend is running on http://localhost:8000
  • Ensure extension has proper permissions enabled
  • Check that the website is not a Chrome internal page (chrome://, about:, etc.)

Note: The extension must be loaded before starting recording sessions. After making changes to extension files, reload it in Chrome.

API Endpoints

The backend exposes a comprehensive REST API and WebSocket endpoints. See backend/README.md for detailed API documentation.

Core Endpoints

  • GET /api - Health check
  • POST /api/generate - Generate HGS tree from prompt
  • POST /api/generate_mvp - Generate minimal viable tree with visualization
  • POST /api/execute - Execute an HGS tree (endpoint available but execution temporarily disabled)
  • POST /api/generate_and_execute - Generate and execute in one call (execution temporarily disabled)
  • POST /api/plan_primitives - Plan primitive actions
  • POST /api/execute_plan_stream - Stream execution of a plan (execution temporarily disabled)
  • POST /api/execute_plan - Execute a plan synchronously (execution temporarily disabled)
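A minimal client for the tree-generation endpoint, using only the standard library. The request body field name (`prompt`) is an assumption, not the documented schema:

```python
import json
from urllib.request import Request, urlopen


def build_generate_request(prompt: str,
                           base: str = "http://localhost:8000") -> Request:
    """Build the POST /api/generate request; the 'prompt' field name is assumed."""
    return Request(
        f"{base}/api/generate",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def generate_tree(prompt: str) -> dict:
    """Send the request and decode the JSON tree (requires the backend running)."""
    with urlopen(build_generate_request(prompt)) as resp:
        return json.load(resp)
```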

Tree Generation & Visualization

  • POST /api/tree/visualize - Generate tree visualization (SVG)
  • POST /api/and_or_tree/add_or_branch - Add OR branch to tree (for AND/OR trees) - API available but frontend disabled
  • POST /api/and_or_tree/add_condition - Add condition to tree (for AND/OR trees) - API available but frontend disabled
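The backend renders trees with Graphviz; the idea can be sketched by emitting DOT source for a linear HGS chain (Goal → Task → SubGoal → Action). Function name and styling are illustrative, not the tree_visualization_service API:

```python
from typing import List


def hgs_to_dot(levels: List[str]) -> str:
    """Emit Graphviz DOT source for a linear HGS chain of node labels."""
    lines = ["digraph HGS {", "  node [shape=box];"]
    for i, label in enumerate(levels):
        lines.append(f'  n{i} [label="{label}"];')
    for i in range(len(levels) - 1):
        lines.append(f"  n{i} -> n{i + 1};")
    lines.append("}")
    return "\n".join(lines)
```

Piping the output through `dot -Tsvg` would produce the kind of minimalist SVG the UI displays.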

Current Status:

  • Tree Type: Only HGS trees are enabled in the frontend UI
  • Generation Method: Only LLM-based generation is enabled
  • Execution: Temporarily disabled (visualization-only mode)
  • Demo-based and frequency analysis methods are temporarily disabled in the UI (code preserved for future use)

Recording & Sessions

  • POST /api/recording/start_session - Start a new recording session
  • POST /api/recording/stop_session - Stop recording session
  • POST /api/recording/event - Record a DOM event
  • POST /api/recording/video - Upload video recording
  • POST /api/recording/screenshot - Upload screenshot
  • GET /api/recording/sessions - List all recording sessions
  • GET /api/recording/sessions/{session_id} - Get session details
  • GET /api/recording_status - Get current recording status
  • POST /api/recording/clean - Clean recording data
  • POST /api/start_demonstration - Start demonstration recording
  • POST /api/stop_demonstration - Stop demonstration recording
  • WebSocket /ws/recording_stream - WebSocket stream for live recording
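Recorded events end up in events.jsonl inside the session folder; a minimal JSON Lines append in that spirit (the exact event schema is an assumption):

```python
import json
from pathlib import Path


def append_event(session_dir: Path, event: dict) -> None:
    """Append one DOM event as a JSON line to {session_dir}/jsons/events.jsonl."""
    path = session_dir / "jsons" / "events.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```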

Demonstrations

  • GET /api/demonstrations - List all demonstrations
  • GET /api/demonstrations/{demo_id} - Get specific demonstration
  • POST /api/execute_demo - Execute a demonstration
  • POST /api/execute_demonstration - Execute demonstration with options
  • POST /api/execute_demonstration_template - Execute demonstration template
  • GET /api/demonstrations/{demo_id}/template - Get demonstration template
  • DELETE /api/demonstrations/{demo_id} - Delete demonstration
  • POST /api/analyze_demonstration - Analyze a demonstration

Skills & Examples

  • GET /api/skills - List saved skills
  • GET /api/skills/{skill_id} - Get a specific skill
  • POST /api/save_skill - Save a skill
  • GET /api/examples - List examples
  • GET /api/examples/{example_id} - Get specific example
  • POST /api/save_example - Save an example
  • POST /api/execute_example - Execute an example
  • DELETE /api/examples/{example_id} - Delete example

Training & Procedures

  • GET /api/train/datasets - List available datasets
  • GET /api/train/tasks - List training tasks
  • GET /api/train/tasks/{task_id} - Get specific task
  • GET /api/train/tasks/synthesized - Get synthesized tasks
  • GET /api/train/tasks/stats/summary - Get task statistics
  • POST /api/train/tasks/similarity - Find similar tasks
  • POST /api/train/sample-task - Sample a task
  • POST /api/train/run - Start a training job
  • GET /api/train/status/{job_id} - Get training job status
  • GET /api/train/subsets - Get training subsets
  • GET /api/train/procedures - List synthesized procedures
  • GET /api/train/procedures/{procedure_id} - Get specific procedure
  • DELETE /api/train/procedures/{procedure_id} - Delete procedure
  • POST /api/train/estimate-cost - Estimate training costs

Automation

  • POST /api/automation/run - Run automation action
  • GET /api/automation/screenshots/latest - Get latest screenshot
  • POST /api/detect_state - Detect current browser/desktop state

Development

Adding a New Tool

  1. Create a new file in backend/tools/ (e.g., my_tool.py)
  2. Define your tool class/function
  3. The ToolRegistry will automatically discover it (if configured)
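A hypothetical tool file might look like the following; the attributes the ToolRegistry actually expects (name, description, run) are an assumption for illustration:

```python
# backend/tools/my_tool.py (sketch; the registry's expected interface is assumed)


class MyTool:
    """Toy tool that reverses text; replace run() with real integration logic."""

    name = "my_tool"
    description = "Reverse the input text."

    def run(self, text: str) -> str:
        return text[::-1]
```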

Running Tests

pytest tests/

Code Structure

  • Backend: FastAPI application with modular services
  • Frontend: React application with TypeScript
  • Extension: Chrome extension for browser automation
  • Experiments: Research and experimental architectures

Starting the Application

Development Mode

  1. Start the Backend (Terminal 1):

    cd Agent_Zero
    source venv/bin/activate  # or your virtual environment
    uvicorn backend.api.routes:app --reload --port 8000

    Backend will be available at: http://localhost:8000

  2. Start the Frontend (Terminal 2):

    cd Agent_Zero/frontend
    npm run dev

    Frontend will be available at: http://localhost:5173

  3. Load the Chrome Extension:

    • Open Chrome and go to chrome://extensions/
    • Enable "Developer mode"
    • Click "Load unpacked"
    • Select Agent_Zero/frontend/extension/
  4. Access the Application:

    • Open http://localhost:5173 in Chrome
    • The extension will be active and ready to record demonstrations

Production Mode

  1. Build the frontend:

    cd frontend
    npm run build
  2. Start the backend (serves both API and frontend):

    uvicorn backend.api.routes:app --port 8000
  3. Access at http://localhost:8000 (both API and frontend)

Configuration

Configuration is managed through:

  • Environment variables (.env file in project root)
  • backend/config/settings.py - Settings loader
  • backend/config/env.py - Environment configuration

Key settings:

  • OPENAI_API_KEY - LLM API key (required for tree generation)
  • LOG_LEVEL - Logging level (DEBUG, INFO, WARNING, ERROR)
  • ENABLE_TELEMETRY - Enable telemetry logging

Create a .env file in the project root:

OPENAI_API_KEY=your-key-here
LOG_LEVEL=INFO
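A sketch of reading these settings from the environment, in the spirit of backend/config/settings.py; the function name and defaults are illustrative:

```python
import os


def load_settings() -> dict:
    """Read the settings listed above from the environment (defaults are illustrative)."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is required for tree generation")
    return {
        "openai_api_key": key,
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "enable_telemetry": os.environ.get("ENABLE_TELEMETRY", "false").lower() == "true",
    }
```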


Contributing

When contributing to Agent_Zero:

  1. Backend Changes: Follow the service-oriented architecture. Keep business logic in services/, API endpoints in api/routes.py, and models in models/.
  2. Frontend Changes: Use TypeScript, follow component structure in src/components/, and maintain type safety.
  3. New Features: Document in the appropriate README and update API documentation.
  4. Testing: Add tests for new functionality and ensure existing tests pass.

Troubleshooting

Common Issues

Backend won't start:

  • Check Python version (3.9+ required)
  • Verify virtual environment is activated
  • Ensure all dependencies are installed: pip install -r requirements.txt
  • Check for API key: export OPENAI_API_KEY='your-key-here'

Frontend build errors:

  • Clear node_modules and reinstall: rm -rf node_modules && npm install
  • Check Node.js version (18+ required)
  • Verify TypeScript compilation: npm run build

Extension not working:

  • Reload extension in Chrome after code changes (chrome://extensions/ → reload icon)
  • Check browser console for errors (F12 → Console tab)
  • Verify manifest.json is valid JSON
  • Ensure backend is running on http://localhost:8000
  • Check extension permissions are enabled
  • Verify extension is loaded from correct directory: Agent_Zero/frontend/extension/

Recording issues:

  • Extension not recording:
    • Check that extension popup shows "Start Recording" button
    • Verify backend is running and accessible
    • Check browser console for connection errors
    • Ensure WebSocket connection to backend is working
  • No events captured:
    • Verify you're on a webpage (not Chrome internal pages like chrome://)
    • Check that recording was started from the active tab
    • Ensure content script is injected (check browser console)
  • Events from wrong tab:
    • Extension uses single-tab recording mode
    • Only records from the tab where recording started
    • Open a new tab before starting recording if needed
  • Backend connection errors:
    • Verify backend is running on http://localhost:8000
    • Check CORS settings in backend
    • Verify WebSocket endpoint is accessible
    • Check backend logs for recording endpoint errors
  • Storage issues:
    • Check browser storage permissions
    • Verify data/sessions/ directory is writable
    • Ensure sufficient disk space for recordings
    • Check backend logs for file write errors

For more detailed troubleshooting, see SETUP_AND_RUN.md.

About

Agent Zero @ CSAIL: Agentic Programming by Demonstration, turning web demonstrations into Human Goal Stack (HGS) trees.
