Configuration Guide

AlphaAgent loads AppConfig from ~/.alpha-agent/config.yaml, environment variables, and optional .env files (python-dotenv). Implementation: alpha_agent/config/config_manager.py.


Config file

| Item | Value |
| --- | --- |
| Path | `~/.alpha-agent/config.yaml` (`alpha_agent.config.paths.CONFIG_FILE`) |
| Secrets | Prefer `~/.alpha-agent/.env` or provider env vars; never git-tracked YAML |

Resolution order

Model id (llm.model)

Highest precedence first (see AppConfig.load):

  1. CLI --model on chat / run
  2. ALPHA_AGENT_MODEL
  3. llm.model in config.yaml

Use LiteLLM-style ids (e.g. gemini/gemini-3.1-flash-lite-preview, openai/gpt-4o-mini).
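
The precedence chain above can be sketched as a simple fallback. This is an illustrative stand-in, not the actual `AppConfig.load` code; `resolve_model` is a hypothetical name:

```python
import os

def resolve_model(cli_model, yaml_model):
    """Pick the model id: CLI --model, then ALPHA_AGENT_MODEL, then llm.model.

    An empty string is treated the same as "not provided".
    """
    return cli_model or os.environ.get("ALPHA_AGENT_MODEL") or yaml_model
```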

API key

  1. resolve_api_key_for_model in llm_env.py: provider-specific env (e.g. GEMINI_API_KEY for gemini/...), then ALPHA_AGENT_API_KEY
  2. Optional llm.api_key in YAML - loaded with a warning (discouraged)

Local providers (ollama, lm_studio, vllm, …) may resolve to an empty key unless you set a generic override such as ALPHA_AGENT_API_KEY.
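
A minimal sketch of this lookup order, assuming a provider-to-env-var map like the one `llm_env.py` maintains (the map contents and function name here are illustrative):

```python
import os

# Hypothetical subset of the provider -> env-var table in llm_env.py.
PROVIDER_KEY_VARS = {"gemini": "GEMINI_API_KEY", "openai": "OPENAI_API_KEY"}
LOCAL_PROVIDERS = {"ollama", "lm_studio", "vllm"}

def resolve_api_key(model):
    """Provider-specific env var first, then the generic ALPHA_AGENT_API_KEY."""
    provider = model.split("/", 1)[0]
    key = os.environ.get(PROVIDER_KEY_VARS.get(provider, ""), "")
    if not key:
        key = os.environ.get("ALPHA_AGENT_API_KEY", "")
    # Local providers legitimately return "" when no override is set.
    return key
```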

Environment files (dotenv)

Loaded so process environment wins over files:

  1. ~/.alpha-agent/.env
  2. Project .env in the current working directory

Implemented in AppConfig._load_dotenv_files (override=False).
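
The `override=False` semantics (existing process env always wins) can be illustrated with a stdlib-only loader; the real implementation uses python-dotenv, so this is only a behavioral sketch:

```python
import os
from pathlib import Path

def load_dotenv_file(path):
    """Minimal .env loader mirroring override=False: never clobber existing env."""
    path = Path(path)
    if not path.exists():
        return
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: a value already in the process environment wins.
        os.environ.setdefault(key.strip(), value.strip())
```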

Debug flag

  • YAML debug: boolean (or string 1/true/yes/on)
  • CLI --debug on chat / run: when passed, forces debug=True over YAML (debug_override)
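
The string-to-boolean coercion described above amounts to a small helper; the function name is illustrative:

```python
def coerce_debug(value):
    """Accept YAML booleans plus the strings 1/true/yes/on (case-insensitive)."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in {"1", "true", "yes", "on"}
```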

Security mode

Set at CLI load time: alpha-agent chat --mode / run --mode (safe, dev, unrestricted). Stored on AppConfig.security_mode for that process. Mid-session changes use /mode (updates in-memory config + new SecurityManager).


LLMConfig fields (config_manager.py)

| Field | Default / notes |
| --- | --- |
| `provider` | Optional; derived from the model prefix when omitted |
| `model` | Default `gemini/gemini-3.1-flash-lite-preview` |
| `api_base` | Optional URL (http/https) |
| `temperature` | `0.7` (range 0–2) |
| `max_tokens` | `8192` |
| `context_window` | `32000` |
| `stream` | `false` |
| `api_key` | Resolved at runtime; excluded from normal model-dump persistence |
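
As a rough shape of these fields, here is a dataclass stand-in (the real `LLMConfig` is a Pydantic model with validators; names and defaults are taken from the table above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfigSketch:
    """Illustrative stand-in for the Pydantic LLMConfig in config_manager.py."""
    model: str = "gemini/gemini-3.1-flash-lite-preview"
    provider: Optional[str] = None   # derived from the model prefix when omitted
    api_base: Optional[str] = None
    temperature: float = 0.7         # validated to the 0-2 range in the real model
    max_tokens: int = 8192
    context_window: int = 32000
    stream: bool = False

    def derived_provider(self):
        return self.provider or self.model.split("/", 1)[0]
```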

AppConfig top-level fields

| Field | Purpose |
| --- | --- |
| `config_version` | Integer; drives `migrate_config` |
| `workspace` | Tool filesystem sandbox: `~/.alpha-agent/workspace` (set at load) |
| `orchestra` | Agent-owned data root: `~/.alpha-agent/workspace/AgentOrchestra` (history, memory, logs, agents, plugins, exports, CLI history file) |
| `agents_path` | Agent definitions directory (under `orchestra/agents` by default) |
| `llm` | `LLMConfig` |
| `default_agent` | e.g. `alpha` |
| `security_mode` | From CLI when loading |
| `debug` | Verbose tracebacks (`handle_error`, etc.) |
| `rate_limit` | Concurrency / sliding window for LLM calls |
| `retry` | Backoff for retries / dispatch |
| `routing` | `dict[str, str]`: regex pattern → agent id (persisted; see `RoutingTable`) |

AppConfig.save() (optional): re-reads the YAML file, writes back the merged routing map and a config_version floor, and preserves all other keys; used after /route.
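
The merge-before-write behavior can be sketched on plain dicts (YAML I/O omitted; `merge_for_save` is a hypothetical name, not the actual method):

```python
def merge_for_save(existing, routing, version):
    """Merge a routing map and a config_version floor into the on-disk mapping.

    All other keys in the existing file are preserved untouched.
    """
    merged = dict(existing)
    merged["routing"] = {**existing.get("routing", {}), **routing}
    merged["config_version"] = max(existing.get("config_version", 0), version)
    return merged
```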


config_version and migrations

  • CURRENT_CONFIG_VERSION is 2 in alpha_agent/config/migrations.py.
  • On load, file’s config_version selects migrate_config steps (e.g. v1 → v2 normalizes legacy llm.model to provider/model).
  • Migrations are idempotent.
  • New optional keys (e.g. debug, routing) do not require a bump if defaults apply via Pydantic.
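
An idempotent v1 → v2 step might look like the following; the exact normalization rule (prefixing a bare model id with its provider) is an assumption based on the description above:

```python
def migrate_v1_to_v2(cfg):
    """Illustrative idempotent migration: prefix a bare llm.model with its provider.

    Running it twice leaves the config unchanged, since an already-prefixed
    model id contains "/" and is skipped.
    """
    llm = cfg.setdefault("llm", {})
    model = llm.get("model", "")
    if model and "/" not in model:
        llm["model"] = f"{llm.get('provider', 'gemini')}/{model}"
    cfg["config_version"] = max(cfg.get("config_version", 1), 2)
    return cfg
```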

Routing persistence

YAML example:

```yaml
routing:
  "^debug\\s": beta
  "rewrite this": researcher
```

Bindings are loaded at ChatLoop start via RoutingTable.load_from_config. /route updates the in-memory table and config.routing, then persists via AppConfig.save(). A catch-all .* bound only to the YAML default_agent may be skipped so that CLI --agent remains the session default; see routing.py.
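
A minimal sketch of pattern-to-agent dispatch, assuming first-match-wins in insertion order and `re.search` semantics (both are assumptions; see routing.py for the actual rules):

```python
import re

def route(message, routing, default_agent):
    """Return the agent id for the first pattern matching the message."""
    for pattern, agent in routing.items():
        if re.search(pattern, message):
            return agent
    return default_agent
```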


Headless / CI

  • alpha-agent init -y / --no-onboarding: no interactive prompts; directories + default config only.
  • alpha-agent --reset --reset-yes: non-interactive full wipe of ~/.alpha-agent (no Y/N prompt); then run init again.
  • --no-onboarding on chat, run, sessions list, tools list: accepted for forward compatibility (AppConfig.load).
  • If stdin is not a TTY, _non_interactive treats flows as non-interactive where applicable.

Set ALPHA_AGENT_MODEL and at least one API key env var (or local model) before AppConfig.load in CI.


Example config.yaml

```yaml
config_version: 2
default_agent: alpha
debug: false

llm:
  model: gemini/gemini-3.1-flash-lite-preview
  provider: gemini
  temperature: 0.7
  max_tokens: 8192
  context_window: 32000
  stream: false
  # api_base: https://example.com/v1  # optional

rate_limit:
  enabled: true
  max_concurrency: 5
  max_requests_per_window: 30
  window_seconds: 60

retry:
  max_retries: 3
  backoff_base_seconds: 5.0
  backoff_multiplier: 3.0

routing:
  "fix.*": alpha
```
Do not commit real llm.api_key values.


Related