diff --git a/README.md b/README.md
index 9c58ad5..2c8a002 100644
--- a/README.md
+++ b/README.md
@@ -81,6 +81,7 @@ browser-cli --help
 ```
 Installed users should start with [`docs/installed-with-uv.md`](docs/installed-with-uv.md).
+For removal and local cleanup guidance, see [`docs/uninstall.md`](docs/uninstall.md).

 ## Development
@@ -258,6 +259,8 @@ Exit codes:
 ## Documentation
 - Repo navigation and subsystem ownership: [`AGENTS.md`](AGENTS.md)
+- Installed-user guide: [`docs/installed-with-uv.md`](docs/installed-with-uv.md)
+- Uninstall and cleanup guide: [`docs/uninstall.md`](docs/uninstall.md)
 - Explore-to-task skill: [`skills/browser-cli-explore-delivery/SKILL.md`](skills/browser-cli-explore-delivery/SKILL.md)
 - Smoke checklist: [`docs/smoke-checklist.md`](docs/smoke-checklist.md)
diff --git a/docs/installed-with-uv.md b/docs/installed-with-uv.md
index 032cb36..a38b1ca 100644
--- a/docs/installed-with-uv.md
+++ b/docs/installed-with-uv.md
@@ -108,3 +108,8 @@ browser-cli automation ui
 Replace `` with the ID shown by `browser-cli automation list`. After publish,
 use the automation CLI to inspect the published snapshot rather than editing
 snapshot files directly.
+
+## Remove Browser CLI
+
+To remove Browser CLI later, including Browser CLI home data and local cleanup
+steps for maintainers, see [`docs/uninstall.md`](docs/uninstall.md).
diff --git a/docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md b/docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md
new file mode 100644
index 0000000..0ae92df
--- /dev/null
+++ b/docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md
@@ -0,0 +1,1109 @@
+# Automation Manifest Round-Trip Implementation Plan
+
+> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
+ +**Goal:** Unify Browser CLI automation manifest semantics across publish, import, export, inspect, and persistence so supported fields round-trip without silent drift. + +**Architecture:** Add a shared automation projection layer under `src/browser_cli/automation/projections.py` and route manifest-to-persistence, persistence-to-manifest, snapshot-manifest rendering, and inspect config payload generation through it. Keep `AutomationManifest` and `PersistedAutomationDefinition` as the durable semantic models, but make publisher, API, and inspect surfaces delegate field projection to one shared implementation. + +**Tech Stack:** Python 3.10, pytest, Browser CLI automation loader/publisher/API modules, `dumps_toml_sections` + +--- + +## File Map + +- Create: `src/browser_cli/automation/projections.py` + Responsibility: shared semantic projections among manifest, persisted definition, API payloads, snapshot manifests, and inspect config views. +- Create: `tests/unit/test_automation_projections.py` + Responsibility: round-trip regression tests for semantic field preservation and inspect config parity. +- Modify: `src/browser_cli/automation/models.py` + Responsibility: retain dataclasses and convert the existing manifest-to-persisted helper into a compatibility wrapper over the new projection layer. +- Modify: `src/browser_cli/automation/persistence/store.py` + Responsibility: persist the shared supported field set, including `runtime.log_level`, through sqlite storage and reload. +- Modify: `src/browser_cli/automation/publisher.py` + Responsibility: replace local snapshot-manifest assembly with shared projection helpers. +- Modify: `src/browser_cli/automation/api/server.py` + Responsibility: replace local payload conversion, export TOML rendering, and automation serialization with shared projection helpers. +- Modify: `src/browser_cli/commands/automation.py` + Responsibility: build `snapshot_config` and `live_config` from the same shared config-view projection. 
+- Modify: `tests/unit/test_task_runtime_automation.py` + Responsibility: keep existing manifest-to-persisted assertions pointed at the shared projection behavior. +- Modify: `tests/unit/test_automation_publish.py` + Responsibility: assert published snapshot manifests preserve the full supported field set after path remapping. +- Modify: `tests/unit/test_automation_api.py` + Responsibility: assert export TOML round-trips supported fields semantically. +- Modify: `tests/unit/test_automation_commands.py` + Responsibility: assert `inspect --version` renders `snapshot_config` and `live_config` with the same supported config field shape. +- Modify: `docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md` + Responsibility: mark task completion during execution if this plan is used as the working log. + +## Task 1: Add Failing Semantic Round-Trip Regression Tests + +**Files:** +- Create: `tests/unit/test_automation_projections.py` +- Test: `tests/unit/test_automation_projections.py` + +- [x] **Step 1: Write failing projection regression tests** + +Create `tests/unit/test_automation_projections.py`: + +```python +from __future__ import annotations + +from pathlib import Path + +from browser_cli.automation.loader import load_automation_manifest +from browser_cli.automation.models import PersistedAutomationDefinition +from browser_cli.automation.projections import ( + manifest_to_config_payload, + manifest_to_persisted_definition, + manifest_to_snapshot_manifest_toml, + persisted_definition_to_config_payload, + persisted_definition_to_manifest_toml, +) + + +def _write_task_fixture(base_dir: Path) -> Path: + task_dir = base_dir / "task" + task_dir.mkdir() + (task_dir / "task.py").write_text( + "def run(flow, inputs):\n return {'ok': True}\n", + encoding="utf-8", + ) + (task_dir / "task.meta.json").write_text( + 
'{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + (task_dir / "automation.toml").write_text( + "[automation]\n" + 'id = "demo"\n' + 'name = "Demo"\n' + 'description = "Semantic round-trip"\n' + 'version = "7"\n' + "[task]\n" + 'path = "task.py"\n' + 'meta_path = "task.meta.json"\n' + 'entrypoint = "run"\n' + "[inputs]\n" + 'url = "https://example.com"\n' + "[schedule]\n" + 'mode = "interval"\n' + "interval_seconds = 900\n" + 'timezone = "Asia/Shanghai"\n' + "[outputs]\n" + 'artifact_dir = "artifacts"\n' + 'result_json_path = "artifacts/result.json"\n' + 'stdout = "text"\n' + "[hooks]\n" + 'before_run = ["echo before"]\n' + 'after_success = ["echo success"]\n' + 'after_failure = ["echo failure"]\n' + "[runtime]\n" + "retry_attempts = 2\n" + "retry_backoff_seconds = 11\n" + "timeout_seconds = 42.5\n" + 'log_level = "debug"\n', + encoding="utf-8", + ) + return task_dir + + +def _build_persisted_definition(base_dir: Path) -> PersistedAutomationDefinition: + return PersistedAutomationDefinition( + id="demo", + name="Demo", + description="Semantic round-trip", + version="7", + task_path=base_dir / "live" / "task.py", + task_meta_path=base_dir / "live" / "task.meta.json", + entrypoint="run", + enabled=True, + schedule_kind="interval", + schedule_payload={ + "mode": "interval", + "interval_seconds": 900, + "timezone": "Asia/Shanghai", + }, + timezone="Asia/Shanghai", + output_dir=base_dir / "runs", + result_json_path=base_dir / "runs" / "result.json", + stdout_mode="text", + input_overrides={"url": "https://example.com"}, + before_run_hooks=("echo before",), + after_success_hooks=("echo success",), + after_failure_hooks=("echo failure",), + retry_attempts=2, + retry_backoff_seconds=11, + timeout_seconds=42.5, + log_level="debug", + ) + + +def test_manifest_to_persisted_definition_preserves_supported_fields(tmp_path: Path) -> None: + manifest = 
load_automation_manifest(_write_task_fixture(tmp_path) / "automation.toml") + + persisted = manifest_to_persisted_definition(manifest, enabled=True) + + assert persisted.description == "Semantic round-trip" + assert persisted.schedule_kind == "interval" + assert persisted.schedule_payload["interval_seconds"] == 900 + assert persisted.timezone == "Asia/Shanghai" + assert persisted.result_json_path is not None + assert persisted.stdout_mode == "text" + assert persisted.before_run_hooks == ("echo before",) + assert persisted.after_success_hooks == ("echo success",) + assert persisted.after_failure_hooks == ("echo failure",) + assert persisted.retry_attempts == 2 + assert persisted.retry_backoff_seconds == 11 + assert persisted.timeout_seconds == 42.5 + + +def test_persisted_definition_to_manifest_toml_round_trips_supported_fields( + tmp_path: Path, +) -> None: + automation = _build_persisted_definition(tmp_path) + automation.task_path.parent.mkdir(parents=True) + automation.task_path.write_text("def run(flow, inputs):\n return {}\n", encoding="utf-8") + automation.task_meta_path.write_text( + '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + + manifest_path = tmp_path / "exported.toml" + manifest_path.write_text( + persisted_definition_to_manifest_toml(automation), + encoding="utf-8", + ) + + manifest = load_automation_manifest(manifest_path) + + assert manifest.inputs == {"url": "https://example.com"} + assert manifest.schedule["interval_seconds"] == 900 + assert manifest.schedule["timezone"] == "Asia/Shanghai" + assert manifest.outputs.stdout == "text" + assert manifest.hooks.after_failure == ("echo failure",) + assert manifest.runtime.retry_backoff_seconds == 11 + assert manifest.runtime.timeout_seconds == 42.5 + assert manifest.runtime.log_level == "debug" + + +def test_manifest_to_snapshot_manifest_toml_remaps_paths_without_losing_supported_fields( + 
tmp_path: Path, +) -> None: + task_dir = _write_task_fixture(tmp_path) + manifest = load_automation_manifest(task_dir / "automation.toml") + snapshot_dir = tmp_path / "home" / "automations" / "demo" / "versions" / "3" + snapshot_dir.mkdir(parents=True) + task_path = snapshot_dir / "task.py" + task_meta_path = snapshot_dir / "task.meta.json" + task_path.write_text("def run(flow, inputs):\n return {}\n", encoding="utf-8") + task_meta_path.write_text( + '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + + manifest_path = snapshot_dir / "automation.toml" + manifest_path.write_text( + manifest_to_snapshot_manifest_toml( + manifest, + version=3, + task_path=task_path, + task_meta_path=task_meta_path, + output_dir=tmp_path / "home" / "automations" / "demo", + ), + encoding="utf-8", + ) + + snapshot_manifest = load_automation_manifest(manifest_path) + + assert snapshot_manifest.automation.version == "3" + assert snapshot_manifest.task.path == task_path + assert snapshot_manifest.task.meta_path == task_meta_path + assert snapshot_manifest.outputs.result_json_path is not None + assert snapshot_manifest.outputs.result_json_path.name == "result.json" + assert snapshot_manifest.hooks.after_success == ("echo success",) + assert snapshot_manifest.runtime.log_level == "debug" + + +def test_manifest_and_persisted_config_payloads_share_supported_keys(tmp_path: Path) -> None: + manifest = load_automation_manifest(_write_task_fixture(tmp_path) / "automation.toml") + persisted = _build_persisted_definition(tmp_path) + + manifest_payload = manifest_to_config_payload(manifest) + persisted_payload = persisted_definition_to_config_payload(persisted) + + assert set(manifest_payload) == set(persisted_payload) + assert manifest_payload["timezone"] == "Asia/Shanghai" + assert persisted_payload["timezone"] == "Asia/Shanghai" + assert manifest_payload["retry_backoff_seconds"] == 11 + 
assert persisted_payload["retry_backoff_seconds"] == 11 + assert manifest_payload["log_level"] == "debug" + assert persisted_payload["log_level"] == "debug" +``` + +- [ ] **Step 2: Run the new regression suite and verify it fails** + +Run: + +```bash +uv run pytest tests/unit/test_automation_projections.py -v +``` + +Expected: FAIL because `browser_cli.automation.projections` does not exist yet and the shared projection helpers are not implemented. + +- [ ] **Step 3: Commit the failing regression scaffold** + +```bash +git add tests/unit/test_automation_projections.py +git commit -m "test: add automation projection round-trip regressions" +``` + +## Task 2: Implement The Shared Projection Layer + +**Files:** +- Create: `src/browser_cli/automation/projections.py` +- Modify: `src/browser_cli/automation/models.py` +- Modify: `tests/unit/test_task_runtime_automation.py` +- Test: `tests/unit/test_automation_projections.py` +- Test: `tests/unit/test_task_runtime_automation.py` + +- [x] **Step 1: Implement shared projection helpers** + +Create `src/browser_cli/automation/projections.py`: + +```python +from __future__ import annotations + +from pathlib import Path +from typing import Any + +from browser_cli.automation.models import ( + AutomationManifest, + PersistedAutomationDefinition, +) +from browser_cli.automation.toml import dumps_toml_sections + + +def manifest_to_persisted_definition( + manifest: AutomationManifest, + *, + enabled: bool = False, +) -> PersistedAutomationDefinition: + return PersistedAutomationDefinition( + id=manifest.automation.id, + name=manifest.automation.name, + description=manifest.automation.description, + version=str(manifest.automation.version), + task_path=manifest.task.path, + task_meta_path=manifest.task.meta_path, + entrypoint=manifest.task.entrypoint, + enabled=enabled, + schedule_kind=str(manifest.schedule.get("mode") or "manual"), + schedule_payload=dict(manifest.schedule), + timezone=str(manifest.schedule.get("timezone") or "UTC"), 
+ output_dir=manifest.outputs.artifact_dir, + result_json_path=manifest.outputs.result_json_path, + stdout_mode=manifest.outputs.stdout, + input_overrides=dict(manifest.inputs), + before_run_hooks=manifest.hooks.before_run, + after_success_hooks=manifest.hooks.after_success, + after_failure_hooks=manifest.hooks.after_failure, + retry_attempts=manifest.runtime.retry_attempts, + retry_backoff_seconds=manifest.runtime.retry_backoff_seconds, + timeout_seconds=manifest.runtime.timeout_seconds, + log_level=manifest.runtime.log_level, + ) + + +def payload_to_persisted_definition(payload: dict[str, Any]) -> PersistedAutomationDefinition: + automation_id = str(payload.get("id") or "").strip() + output_dir_raw = str(payload.get("output_dir") or "").strip() + result_json_raw = str(payload.get("result_json_path") or "").strip() + return PersistedAutomationDefinition( + id=automation_id, + name=str(payload.get("name") or automation_id), + description=str(payload.get("description") or ""), + version=str(payload.get("version") or "0.1.0"), + task_path=Path(str(payload.get("task_path") or "")), + task_meta_path=Path(str(payload.get("task_meta_path") or "")), + entrypoint=str(payload.get("entrypoint") or "run"), + enabled=bool(payload.get("enabled")), + definition_status=str(payload.get("definition_status") or "valid"), + definition_error=str(payload.get("definition_error")) if payload.get("definition_error") else None, + schedule_kind=str(payload.get("schedule_kind") or "manual"), + schedule_payload=dict(payload.get("schedule_payload") or {}), + timezone=str(payload.get("timezone") or "UTC"), + output_dir=Path(output_dir_raw) if output_dir_raw else Path(), + result_json_path=Path(result_json_raw) if result_json_raw else None, + stdout_mode=str(payload.get("stdout_mode") or "json"), + input_overrides=dict(payload.get("input_overrides") or {}), + before_run_hooks=tuple(payload.get("before_run_hooks") or []), + after_success_hooks=tuple(payload.get("after_success_hooks") or []), + 
after_failure_hooks=tuple(payload.get("after_failure_hooks") or []), + retry_attempts=int(payload.get("retry_attempts") or 0), + retry_backoff_seconds=int(payload.get("retry_backoff_seconds") or 0), + timeout_seconds=float(payload["timeout_seconds"]) if payload.get("timeout_seconds") is not None else None, + log_level=str(payload.get("log_level") or "info"), + ) + + +def persisted_definition_to_manifest_toml( + automation: PersistedAutomationDefinition, +) -> str: + return dumps_toml_sections( + [ + ("automation", { + "id": automation.id, + "name": automation.name, + "description": automation.description, + "version": automation.version, + }), + ("task", { + "path": str(automation.task_path), + "meta_path": str(automation.task_meta_path), + "entrypoint": automation.entrypoint, + }), + ("inputs", dict(automation.input_overrides)), + ("schedule", _schedule_values(automation)), + ("outputs", { + "artifact_dir": str(automation.output_dir), + "result_json_path": str(automation.result_json_path) if automation.result_json_path else None, + "stdout": automation.stdout_mode, + }), + ("hooks", { + "before_run": list(automation.before_run_hooks), + "after_success": list(automation.after_success_hooks), + "after_failure": list(automation.after_failure_hooks), + }), + ("runtime", { + "retry_attempts": automation.retry_attempts, + "retry_backoff_seconds": automation.retry_backoff_seconds, + "timeout_seconds": automation.timeout_seconds, + "log_level": automation.log_level, + }), + ] + ) + + +def manifest_to_snapshot_manifest_toml( + manifest: AutomationManifest, + *, + version: int, + task_path: Path, + task_meta_path: Path, + output_dir: Path, +) -> str: + result_json_path = _remap_result_json_path( + manifest.outputs.artifact_dir, + manifest.outputs.result_json_path, + output_dir, + ) + return dumps_toml_sections( + [ + ("automation", { + "id": manifest.automation.id, + "name": manifest.automation.name, + "description": manifest.automation.description, + "version": 
str(version), + }), + ("task", { + "path": str(task_path), + "meta_path": str(task_meta_path), + "entrypoint": manifest.task.entrypoint, + }), + ("inputs", dict(manifest.inputs)), + ("schedule", dict(manifest.schedule)), + ("outputs", { + "artifact_dir": str(output_dir), + "result_json_path": str(result_json_path) if result_json_path else None, + "stdout": manifest.outputs.stdout, + }), + ("hooks", { + "before_run": list(manifest.hooks.before_run), + "after_success": list(manifest.hooks.after_success), + "after_failure": list(manifest.hooks.after_failure), + }), + ("runtime", { + "retry_attempts": manifest.runtime.retry_attempts, + "retry_backoff_seconds": manifest.runtime.retry_backoff_seconds, + "timeout_seconds": manifest.runtime.timeout_seconds, + "log_level": manifest.runtime.log_level, + }), + ] + ) + + +def manifest_to_config_payload(manifest: AutomationManifest) -> dict[str, object]: + return { + "id": manifest.automation.id, + "name": manifest.automation.name, + "description": manifest.automation.description, + "version": str(manifest.automation.version), + "task_path": str(manifest.task.path), + "task_meta_path": str(manifest.task.meta_path), + "entrypoint": manifest.task.entrypoint, + "schedule_kind": str(manifest.schedule.get("mode") or "manual"), + "schedule_payload": dict(manifest.schedule), + "timezone": str(manifest.schedule.get("timezone") or "UTC"), + "output_dir": str(manifest.outputs.artifact_dir), + "result_json_path": str(manifest.outputs.result_json_path) if manifest.outputs.result_json_path else None, + "stdout_mode": manifest.outputs.stdout, + "input_overrides": dict(manifest.inputs), + "before_run_hooks": list(manifest.hooks.before_run), + "after_success_hooks": list(manifest.hooks.after_success), + "after_failure_hooks": list(manifest.hooks.after_failure), + "retry_attempts": manifest.runtime.retry_attempts, + "retry_backoff_seconds": manifest.runtime.retry_backoff_seconds, + "timeout_seconds": manifest.runtime.timeout_seconds, + 
"log_level": manifest.runtime.log_level, + } + + +def persisted_definition_to_config_payload( + automation: PersistedAutomationDefinition, +) -> dict[str, object]: + return { + "id": automation.id, + "name": automation.name, + "description": automation.description, + "version": automation.version, + "task_path": str(automation.task_path), + "task_meta_path": str(automation.task_meta_path), + "entrypoint": automation.entrypoint, + "schedule_kind": automation.schedule_kind, + "schedule_payload": dict(automation.schedule_payload), + "timezone": automation.timezone, + "output_dir": str(automation.output_dir), + "result_json_path": str(automation.result_json_path) if automation.result_json_path else None, + "stdout_mode": automation.stdout_mode, + "input_overrides": dict(automation.input_overrides), + "before_run_hooks": list(automation.before_run_hooks), + "after_success_hooks": list(automation.after_success_hooks), + "after_failure_hooks": list(automation.after_failure_hooks), + "retry_attempts": automation.retry_attempts, + "retry_backoff_seconds": automation.retry_backoff_seconds, + "timeout_seconds": automation.timeout_seconds, + "log_level": automation.log_level, + } + + +def _schedule_values(automation: PersistedAutomationDefinition) -> dict[str, Any]: + values = {"mode": automation.schedule_kind, "timezone": automation.timezone} + for key, value in automation.schedule_payload.items(): + if key not in {"mode", "timezone"}: + values[key] = value + return values + + +def _remap_result_json_path( + source_artifact_dir: Path, + source_result_json_path: Path | None, + target_artifact_dir: Path, +) -> Path | None: + if source_result_json_path is None: + return None + try: + relative = source_result_json_path.relative_to(source_artifact_dir) + except ValueError: + return target_artifact_dir / source_result_json_path.name + return target_artifact_dir / relative +``` + +- [x] **Step 2: Convert `models.py` into a compatibility wrapper** + +Update 
`src/browser_cli/automation/models.py`: + +```python + log_level: str = "info" + +def manifest_to_persisted_definition( + manifest: AutomationManifest, + *, + enabled: bool = False, +) -> PersistedAutomationDefinition: + from browser_cli.automation.projections import manifest_to_persisted_definition as _impl + + return _impl(manifest, enabled=enabled) +``` + +- [x] **Step 3: Point the existing manifest regression test at the shared behavior** + +Update `tests/unit/test_task_runtime_automation.py`: + +```python +from browser_cli.automation.projections import manifest_to_persisted_definition +``` + +Keep the existing assertion body intact so the old retry-backoff/timeout check +continues covering the new projection module, and append: + +```python + assert persisted.log_level == "info" +``` + +- [x] **Step 4: Run the focused projection suites** + +Run: + +```bash +uv run pytest tests/unit/test_automation_projections.py tests/unit/test_task_runtime_automation.py -v +``` + +Expected: PASS + +- [ ] **Step 5: Commit** + +```bash +git add src/browser_cli/automation/projections.py src/browser_cli/automation/models.py tests/unit/test_automation_projections.py tests/unit/test_task_runtime_automation.py +git commit -m "refactor: add shared automation projections" +``` + +## Task 3: Route Publish And API Export Through Shared Projections + +**Files:** +- Modify: `src/browser_cli/automation/persistence/store.py` +- Modify: `src/browser_cli/automation/publisher.py` +- Modify: `src/browser_cli/automation/api/server.py` +- Modify: `tests/unit/test_automation_publish.py` +- Modify: `tests/unit/test_automation_api.py` +- Test: `tests/unit/test_automation_publish.py` +- Test: `tests/unit/test_automation_api.py` + +- [x] **Step 1: Add failing publish and export regressions for the remaining supported fields** + +Append to `tests/unit/test_automation_publish.py`: + +```python +def test_publish_task_dir_preserves_hooks_timeout_and_log_level( + tmp_path: Path, monkeypatch +) -> None: + 
monkeypatch.setenv("BROWSER_CLI_HOME", str(tmp_path / "home")) + task_dir = tmp_path / "task" + task_dir.mkdir() + (task_dir / "task.py").write_text( + "def run(flow, inputs):\n return {'ok': True}\n", + encoding="utf-8", + ) + (task_dir / "task.meta.json").write_text( + '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + (task_dir / "automation.toml").write_text( + "[automation]\n" + 'id = "demo"\n' + 'name = "Demo"\n' + "[task]\n" + 'path = "task.py"\n' + 'meta_path = "task.meta.json"\n' + "[hooks]\n" + 'before_run = [\"echo before\"]\n' + 'after_success = [\"echo success\"]\n' + 'after_failure = [\"echo failure\"]\n' + "[runtime]\n" + "timeout_seconds = 6.5\n" + 'log_level = "debug"\n', + encoding="utf-8", + ) + + published = publish_task_dir(task_dir, app_paths=get_app_paths()) + manifest = load_automation_manifest(published.manifest_path) + + assert manifest.hooks.before_run == ("echo before",) + assert manifest.hooks.after_success == ("echo success",) + assert manifest.hooks.after_failure == ("echo failure",) + assert manifest.runtime.timeout_seconds == 6.5 + assert manifest.runtime.log_level == "debug" +``` + +Append to `tests/unit/test_automation_api.py`: + +```python +def test_automation_api_export_round_trips_supported_fields(tmp_path: Path) -> None: + runtime = AutomationServiceRuntime(store=AutomationStore(tmp_path / "automations.db")) + server = AutomationHttpServer(("127.0.0.1", 0), AutomationRequestHandler, runtime) + thread = threading.Thread(target=server.serve_forever, daemon=True) + thread.start() + host, port = server.server_address[:2] + try: + create_payload = _request( + host, + int(port), + "POST", + "/api/automations", + { + "id": "demo", + "name": "Demo", + "description": "Round trip", + "version": "7", + "task_path": str(tmp_path / "task.py"), + "task_meta_path": str(tmp_path / "task.meta.json"), + "entrypoint": "run", + 
"schedule_kind": "interval", + "schedule_payload": { + "mode": "interval", + "interval_seconds": 900, + "timezone": "Asia/Shanghai", + }, + "timezone": "Asia/Shanghai", + "output_dir": str(tmp_path / "runs"), + "result_json_path": str(tmp_path / "runs" / "result.json"), + "stdout_mode": "text", + "input_overrides": {"url": "https://example.com"}, + "before_run_hooks": ["echo before"], + "after_success_hooks": ["echo success"], + "after_failure_hooks": ["echo failure"], + "retry_attempts": 2, + "retry_backoff_seconds": 11, + "timeout_seconds": 42.5, + "log_level": "debug", + }, + ) + assert create_payload["data"]["id"] == "demo" + + export_payload = _request(host, int(port), "GET", "/api/automations/demo/export") + manifest_path = tmp_path / "exported.toml" + manifest_path.write_text(export_payload["data"]["toml"], encoding="utf-8") + (tmp_path / "task.py").write_text( + "def run(flow, inputs):\n return {'ok': True}\n", + encoding="utf-8", + ) + (tmp_path / "task.meta.json").write_text( + '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + manifest = load_automation_manifest(manifest_path) + + assert manifest.inputs == {"url": "https://example.com"} + assert manifest.schedule["interval_seconds"] == 900 + assert manifest.outputs.stdout == "text" + assert manifest.hooks.after_success == ("echo success",) + assert manifest.runtime.retry_backoff_seconds == 11 + assert manifest.runtime.timeout_seconds == 42.5 + assert manifest.runtime.log_level == "debug" + finally: + server.shutdown() + server.server_close() + thread.join(timeout=2.0) +``` + +Run: + +```bash +uv run pytest tests/unit/test_automation_publish.py tests/unit/test_automation_api.py -v +``` + +Expected: FAIL because publish and export still use local assembly code paths that do not share the new projection layer. 
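The property these regressions pin down can be sketched in isolation. The snippet below is an illustrative stand-in, not the project's API: JSON substitutes for TOML purely so the sketch stays stdlib-only, and `export`/`reimport` are hypothetical stand-ins for `persisted_definition_to_manifest_toml` plus the loader. The comparison is semantic (parsed values), never textual.

```python
# Hedged sketch of the round-trip-without-drift property. JSON stands in
# for TOML; export/reimport are hypothetical stand-ins for the real
# projection helpers named in this plan.
import json

SUPPORTED_FIELDS = {
    "input_overrides", "schedule_payload", "stdout_mode",
    "after_failure_hooks", "retry_backoff_seconds",
    "timeout_seconds", "log_level",
}

def export(definition: dict) -> str:
    # Serialize only the supported field set, as the export surface should.
    return json.dumps({k: definition[k] for k in SUPPORTED_FIELDS})

def reimport(text: str) -> dict:
    # Parse the serialized form back into a semantic payload.
    return json.loads(text)

original = {
    "input_overrides": {"url": "https://example.com"},
    "schedule_payload": {"mode": "interval", "interval_seconds": 900},
    "stdout_mode": "text",
    "after_failure_hooks": ["echo failure"],
    "retry_backoff_seconds": 11,
    "timeout_seconds": 42.5,
    "log_level": "debug",
}
roundtripped = reimport(export(original))
# Silent drift would surface here as a missing key or changed value.
assert roundtripped == original
```

A field dropped by any local assembly path fails this equality check immediately, which is why the regressions above assert on individual supported fields.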
+ +- [x] **Step 2: Refactor publish and API export/import serialization** + +Update `src/browser_cli/automation/publisher.py`: + +```python +from browser_cli.automation.projections import manifest_to_snapshot_manifest_toml + + if source_manifest_path.exists(): + manifest_source = "task_dir" + source_manifest = load_automation_manifest(source_manifest_path) + rendered_manifest = manifest_to_snapshot_manifest_toml( + source_manifest, + version=version, + task_path=task_path, + task_meta_path=task_meta_path, + output_dir=automation_root, + ) +``` + +Update `src/browser_cli/automation/api/server.py`: + +```python +from browser_cli.automation.projections import ( + manifest_to_persisted_definition, + payload_to_persisted_definition, + persisted_definition_to_config_payload, + persisted_definition_to_manifest_toml, +) +``` + +Replace the direct payload conversion and export helpers: + +```python +created = self.server.runtime.store.upsert_automation(payload_to_persisted_definition(body)) +``` + +```python +automation = manifest_to_persisted_definition(manifest, enabled=enabled) +``` + +```python +self._send_json( + {"ok": True, "data": {"toml": persisted_definition_to_manifest_toml(automation)}, "meta": {}} +) +``` + +And update `_serialize_automation()` so config fields come from the shared +projection helper first: + +```python +payload = persisted_definition_to_config_payload(automation) +payload.update( + { + "enabled": automation.enabled, + "definition_status": automation.definition_status, + "definition_error": automation.definition_error, + "created_at": automation.created_at, + "updated_at": automation.updated_at, + "last_run_at": automation.last_run_at, + "next_run_at": automation.next_run_at, + } +) +``` + +Update `src/browser_cli/automation/persistence/store.py` so sqlite preserves the +new supported field: + +```python + CREATE TABLE IF NOT EXISTS automations ( + ... 
+ retry_backoff_seconds INTEGER NOT NULL DEFAULT 0, + timeout_seconds REAL, + log_level TEXT NOT NULL DEFAULT 'info', + created_at TEXT NOT NULL, + ... + ); +``` + +Add a lightweight additive migration in `_initialize()`: + +```python + self._ensure_column( + conn, + "automations", + "log_level", + "TEXT NOT NULL DEFAULT 'info'", + ) +``` + +Add the helper on `AutomationStore`: + +```python + @staticmethod + def _ensure_column( + conn: sqlite3.Connection, + table: str, + column: str, + ddl: str, + ) -> None: + columns = { + str(row["name"]) + for row in conn.execute(f"PRAGMA table_info({table})").fetchall() + } + if column not in columns: + conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}") +``` + +Update the `INSERT` / `ON CONFLICT` statement field lists to include +`log_level`: + +```python + after_failure_hooks_json, retry_attempts, retry_backoff_seconds, + timeout_seconds, log_level, created_at, updated_at, last_run_at, next_run_at +``` + +```python + retry_backoff_seconds = excluded.retry_backoff_seconds, + timeout_seconds = excluded.timeout_seconds, + log_level = excluded.log_level, + updated_at = excluded.updated_at, +``` + +And wire the row conversion helpers: + +```python + "log_level": automation.log_level, +``` + +```python + log_level=str(row["log_level"] or "info"), +``` + +Delete `_automation_to_toml()` and `_payload_to_automation()` after routing all +callers through the shared projection layer. 
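The additive-migration pattern above can be exercised standalone. This is a minimal runnable sketch with a deliberately trimmed schema (the real `automations` table carries many more columns); `ensure_column` mirrors the planned `_ensure_column` helper.

```python
# Sketch of the additive sqlite migration: probe existing columns with
# PRAGMA table_info and ALTER TABLE only when the column is missing,
# so running initialization repeatedly stays idempotent.
import sqlite3

def ensure_column(conn: sqlite3.Connection, table: str, column: str, ddl: str) -> None:
    # Row tuples from PRAGMA table_info are (cid, name, type, notnull, dflt_value, pk).
    columns = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in columns:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE automations (id TEXT PRIMARY KEY)")
ensure_column(conn, "automations", "log_level", "TEXT NOT NULL DEFAULT 'info'")
# Second call sees the column already present and is a no-op.
ensure_column(conn, "automations", "log_level", "TEXT NOT NULL DEFAULT 'info'")
conn.execute("INSERT INTO automations (id) VALUES ('demo')")
row = conn.execute("SELECT log_level FROM automations").fetchone()
assert row[0] == "info"  # pre-existing rows pick up the declared default
```

Because the column is added with a `NOT NULL DEFAULT`, rows written before the migration reload with `info`, matching the `str(row["log_level"] or "info")` fallback in the row conversion above.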
+ +- [x] **Step 3: Run the focused publish and API suites** + +Run: + +```bash +uv run pytest tests/unit/test_automation_publish.py tests/unit/test_automation_api.py -v +``` + +Expected: PASS + +- [x] **Step 4: Commit** + +```bash +git add src/browser_cli/automation/publisher.py src/browser_cli/automation/api/server.py tests/unit/test_automation_publish.py tests/unit/test_automation_api.py +git commit -m "refactor: route automation publish and export through projections" +``` + +## Task 4: Align Inspect Snapshot And Live Config Views + +**Files:** +- Modify: `src/browser_cli/commands/automation.py` +- Modify: `tests/unit/test_automation_commands.py` +- Test: `tests/unit/test_automation_commands.py` + +- [x] **Step 1: Add a failing inspect parity regression** + +Append to `tests/unit/test_automation_commands.py`: + +```python +def test_automation_inspect_version_aligns_snapshot_and_live_config_shapes( + monkeypatch, tmp_path: Path +) -> None: + monkeypatch.setenv("BROWSER_CLI_HOME", str(tmp_path / "home")) + version_dir = tmp_path / "home" / "automations" / "demo" / "versions" / "1" + version_dir.mkdir(parents=True) + (version_dir / "task.py").write_text( + "def run(flow, inputs):\n return {'ok': True}\n", + encoding="utf-8", + ) + (version_dir / "task.meta.json").write_text( + '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}', + encoding="utf-8", + ) + (version_dir / "automation.toml").write_text( + "[automation]\n" + 'id = "demo"\n' + 'name = "Demo Snapshot"\n' + 'description = "Snapshot"\n' + 'version = "1"\n' + "[task]\n" + 'path = "task.py"\n' + 'meta_path = "task.meta.json"\n' + "[schedule]\n" + 'mode = "interval"\n' + "interval_seconds = 900\n" + 'timezone = "Asia/Shanghai"\n' + "[outputs]\n" + 'artifact_dir = "artifacts"\n' + 'result_json_path = "artifacts/result.json"\n' + 'stdout = "text"\n' + "[hooks]\n" + 'before_run = [\"echo before\"]\n' + 'after_success = [\"echo 
success\"]\n' + 'after_failure = [\"echo failure\"]\n' + "[runtime]\n" + "retry_attempts = 2\n" + "retry_backoff_seconds = 11\n" + "timeout_seconds = 42.5\n" + 'log_level = "debug"\n', + encoding="utf-8", + ) + (version_dir / "publish.json").write_text( + f'{{"automation_id":"demo","version":1,"source_task_path":"/tmp/task","snapshot_dir":"{version_dir}"}}', + encoding="utf-8", + ) + monkeypatch.setattr( + "browser_cli.commands.automation.request_automation_service", + lambda method, path, body=None, start_if_needed=True: { + "ok": True, + "data": { + "id": "demo", + "name": "Demo Live", + "description": "Live", + "version": "2", + "task_path": str(version_dir / "task.py"), + "task_meta_path": str(version_dir / "task.meta.json"), + "entrypoint": "run", + "schedule_kind": "interval", + "schedule_payload": { + "mode": "interval", + "interval_seconds": 900, + "timezone": "Asia/Shanghai", + }, + "timezone": "Asia/Shanghai", + "output_dir": str(tmp_path / "home" / "automations" / "demo"), + "result_json_path": str(tmp_path / "home" / "automations" / "demo" / "result.json"), + "stdout_mode": "text", + "input_overrides": {"url": "https://example.com"}, + "before_run_hooks": ["echo before"], + "after_success_hooks": ["echo success"], + "after_failure_hooks": ["echo failure"], + "retry_attempts": 2, + "retry_backoff_seconds": 11, + "timeout_seconds": 42.5, + "log_level": "debug", + "enabled": True, + "definition_status": "valid", + "latest_run": {"status": "success"}, + }, + }, + ) + + payload = json.loads( + run_automation_command( + Namespace(automation_subcommand="inspect", automation_id="demo", version=1) + ) + ) + + snapshot_keys = set(payload["data"]["snapshot_config"]) + live_keys = set(payload["data"]["live_config"]) + + assert snapshot_keys == live_keys + assert payload["data"]["snapshot_config"]["log_level"] == "debug" + assert payload["data"]["live_config"]["retry_backoff_seconds"] == 11 + assert payload["data"]["live_config"]["log_level"] == "debug" +``` + +Run: 
+ +```bash +uv run pytest tests/unit/test_automation_commands.py -v +``` + +Expected: FAIL because `snapshot_config` and `live_config` are still built by +different code paths with different field sets and runtime defaults. + +- [x] **Step 2: Route inspect config rendering through shared projections** + +Update `src/browser_cli/commands/automation.py`: + +```python +from browser_cli.automation.projections import ( + manifest_to_config_payload, + payload_to_persisted_definition, + persisted_definition_to_config_payload, +) +``` + +Replace `_snapshot_manifest_to_automation_payload()` with: + +```python +def _snapshot_manifest_to_automation_payload(manifest: AutomationManifest) -> dict[str, object]: + return manifest_to_config_payload(manifest) +``` + +Inside `run_automation_command()` `inspect` handling, derive `live_config` from +the shared projection too: + +```python + live_automation_data = dict(payload.get("data") or {}) + live_config = ( + persisted_definition_to_config_payload( + payload_to_persisted_definition(live_automation_data) + ) + if live_automation_data + else None + ) +``` + +And return: + +```python + "snapshot_config": snapshot_config, + "snapshot_config_error": snapshot_config_error, + "live_config": live_config, +``` + +Keep `latest_run`, `versions`, and `summary` as separate operational sections. 
+ +- [x] **Step 3: Run the inspect command suite** + +Run: + +```bash +uv run pytest tests/unit/test_automation_commands.py -v +``` + +Expected: PASS + +- [ ] **Step 4: Commit** + +```bash +git add src/browser_cli/commands/automation.py tests/unit/test_automation_commands.py +git commit -m "fix: align automation inspect config views" +``` + +## Task 5: Run Full Validation And Mark The Plan Complete + +**Files:** +- Modify: `docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md` +- Test: `tests/unit/test_automation_projections.py` +- Test: `tests/unit/test_task_runtime_automation.py` +- Test: `tests/unit/test_automation_publish.py` +- Test: `tests/unit/test_automation_api.py` +- Test: `tests/unit/test_automation_commands.py` + +- [x] **Step 1: Run the focused automation round-trip suites** + +Run: + +```bash +uv run pytest \ + tests/unit/test_automation_projections.py \ + tests/unit/test_task_runtime_automation.py \ + tests/unit/test_automation_publish.py \ + tests/unit/test_automation_api.py \ + tests/unit/test_automation_commands.py -v +``` + +Expected: PASS + +- [x] **Step 2: Run repository validation** + +Run: + +```bash +./scripts/lint.sh +./scripts/test.sh +./scripts/guard.sh +``` + +Expected: all three scripts exit `0`. 
+ +- [x] **Step 3: Update this plan file to mark the completed steps** + +Update the completed steps in this file: + +```markdown +- [x] **Step 1: Run the focused automation round-trip suites** +- [x] **Step 2: Run repository validation** +- [x] **Step 3: Update this plan file to mark the completed steps** +``` + +- [ ] **Step 4: Commit** + +```bash +git add docs/superpowers/plans/2026-04-14-automation-manifest-roundtrip-implementation-plan.md +git commit -m "docs: mark automation manifest round-trip plan complete" +``` + +## Self-Review + +Spec coverage: + +- shared projection layer for manifest/persistence/publish/export/inspect: covered by Tasks 2, 3, and 4 +- semantic consistency over raw text preservation: covered by Task 1 regression style and Task 3 export/publish round-trip checks +- publish preserves supported fields while allowing path remapping: covered by Tasks 1 and 3 +- import/export round-trip through persisted live state: covered by Tasks 2 and 3 +- `inspect --version` snapshot/live config parity: covered by Task 4 +- supported field set coverage for hooks/runtime/schedule/inputs/outputs: covered by Tasks 1, 3, and 4 + +Placeholder scan: + +- no `TODO`/`TBD` placeholders remain +- every code-edit step includes concrete code +- every test step includes an exact command and expected result + +Type consistency: + +- projection helper names are consistent across tasks: + `manifest_to_persisted_definition` + `payload_to_persisted_definition` + `persisted_definition_to_manifest_toml` + `manifest_to_snapshot_manifest_toml` + `manifest_to_config_payload` + `persisted_definition_to_config_payload` +- `live_config` and `snapshot_config` are explicitly routed through the same + projection family rather than separate ad hoc dict builders diff --git a/docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md b/docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md new file mode 100644 index 0000000..0b1b26a 
--- /dev/null +++ b/docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md @@ -0,0 +1,351 @@ +# Browser CLI Uninstall Doc Implementation Plan + +> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Add a maintainer-oriented uninstall guide for Browser CLI that documents full cleanup of the repository development environment and Browser CLI home, with backup guidance and links from the main install docs. + +**Architecture:** Add a new `docs/uninstall.md` document built around the current runtime path model and cleanup commands, then link to it from `README.md` and `docs/installed-with-uv.md`. Lock the new documentation surface with a repo text-contract test so the uninstall guide and its entry points do not silently disappear later. + +**Tech Stack:** Markdown docs, pytest repo text-contract tests, Browser CLI path and lifecycle commands + +--- + +## File Map + +- Create: `docs/uninstall.md` + Responsibility: full uninstall guide for maintainers and developers, including backup, runtime cleanup, repo cleanup, Browser CLI home deletion, optional uv tool uninstall, and verification. +- Modify: `README.md` + Responsibility: expose the uninstall guide from the main docs surface. +- Modify: `docs/installed-with-uv.md` + Responsibility: point installed users at the uninstall guide. +- Modify: `tests/unit/test_repo_text_contracts.py` + Responsibility: lock the presence of the uninstall guide and its documentation entry points. +- Modify: `docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md` + Responsibility: update checkbox state during execution if this plan is used as the working log. 
+ +## Task 1: Lock The Uninstall Documentation Surface With A Repo Text Contract + +**Files:** +- Modify: `tests/unit/test_repo_text_contracts.py` +- Test: `tests/unit/test_repo_text_contracts.py` + +- [x] **Step 1: Add the failing uninstall documentation test** + +Append to `tests/unit/test_repo_text_contracts.py`: + +```python +def test_uninstall_doc_is_linked_from_primary_install_docs() -> None: + uninstall_text = _read("docs/uninstall.md") + readme_text = _read("README.md") + uv_doc_text = _read("docs/installed-with-uv.md") + + assert "# Uninstall Browser CLI" in uninstall_text + assert "browser-cli paths" in uninstall_text + assert "browser-cli automation stop" in uninstall_text + assert "browser-cli reload" in uninstall_text + assert "uv tool uninstall browser-cli" in uninstall_text + assert "docs/uninstall.md" in readme_text + assert "docs/uninstall.md" in uv_doc_text +``` + +- [ ] **Step 2: Run the test to verify it fails** + +Run: + +```bash +uv run pytest tests/unit/test_repo_text_contracts.py::test_uninstall_doc_is_linked_from_primary_install_docs -v +``` + +Expected: FAIL because `docs/uninstall.md` does not exist and the existing docs do not link to it. + +- [x] **Step 3: Add the minimal uninstall document and link placeholders** + +Create `docs/uninstall.md`: + +```markdown +# Uninstall Browser CLI +``` + +Add a docs link in `README.md`: + +```markdown +- Uninstall and cleanup guidance: [`docs/uninstall.md`](docs/uninstall.md) +``` + +Add a pointer in `docs/installed-with-uv.md`: + +```markdown +To remove Browser CLI later, see [`docs/uninstall.md`](docs/uninstall.md). +``` + +- [ ] **Step 4: Run the test to verify it still fails for missing content** + +Run: + +```bash +uv run pytest tests/unit/test_repo_text_contracts.py::test_uninstall_doc_is_linked_from_primary_install_docs -v +``` + +Expected: FAIL because the uninstall doc stub does not yet contain the required commands and headings. 
+
+- [ ] **Step 5: Commit**
+
+```bash
+git add tests/unit/test_repo_text_contracts.py docs/uninstall.md README.md docs/installed-with-uv.md
+git commit -m "test: lock uninstall documentation surface"
+```
+
+## Task 2: Write The Full Uninstall Guide
+
+**Files:**
+- Modify: `docs/uninstall.md`
+- Test: `tests/unit/test_repo_text_contracts.py`
+
+- [x] **Step 1: Replace the stub with the full uninstall document**
+
+Write `docs/uninstall.md`:
+
+````markdown
+# Uninstall Browser CLI
+
+This guide is for repository maintainers and local developers who want to fully
+remove Browser CLI from a machine. It covers:
+
+- stopping Browser CLI runtime processes
+- backing up high-value local data first
+- removing the repository development environment
+- removing Browser CLI home data
+- optionally removing any uv tool installation
+
+## Before You Delete Anything
+
+Inspect the current Browser CLI runtime paths and status first:
+
+```bash
+browser-cli paths
+browser-cli status
+```
+
+If you set `BROWSER_CLI_HOME`, Browser CLI home may not be `~/.browser-cli`.
+Use the `home` path shown by `browser-cli paths` as the deletion target.
+
+## Back Up What You Want To Keep
+
+Before deleting Browser CLI home, consider backing up these paths from
+`browser-cli paths`:
+
+- `tasks_dir`
+- `automations_dir`
+- `automation_db_path`
+- optionally `artifacts_dir`
+
+Deleting Browser CLI home removes local task source, published automation
+snapshots, automation persistence, runtime logs, and artifacts.
+
+## Stop Runtime Processes
+
+Stop the automation service and clear daemon runtime state before deleting files:
+
+```bash
+browser-cli automation stop
+browser-cli reload
+```
+
+`browser-cli reload` is runtime cleanup, not uninstall. It resets Browser CLI
+state before deletion, but it does not remove Browser CLI files by itself.
+
+## Remove The Repository Development Environment
+
+From the repository root, remove the local development environment:
+
+```bash
+rm -rf .venv
+rm -rf .pytest_cache
+find . -type d -name "__pycache__" -prune -exec rm -rf {} +
+```
+
+This step only removes repo-local development state. It does not remove Browser
+CLI home data.
+
+## Remove Browser CLI Home Data
+
+Delete the Browser CLI home reported by `browser-cli paths`. The default path is
+usually:
+
+```bash
+rm -rf ~/.browser-cli
+```
+
+If `browser-cli paths` showed a different `home`, delete that path instead.
+
+Removing Browser CLI home deletes:
+
+- `run/` runtime state and logs
+- `artifacts/`
+- `tasks/`
+- `automations/`
+- `automations.db`
+- managed-profile runtime state stored under Browser CLI home
+
+## Optional: Remove uv Tool Installation
+
+If you also installed Browser CLI as a uv tool, remove it separately:
+
+```bash
+uv tool uninstall browser-cli
+```
+
+This does not remove:
+
+- the repository checkout
+- Browser CLI home data
+
+## Verify Removal
+
+Verify repo-local cleanup:
+
+```bash
+test ! -d .venv && echo "repo venv removed"
+```
+
+Verify Browser CLI home removal by checking the path you identified earlier:
+
+```bash
+test ! -d ~/.browser-cli && echo "browser-cli home removed"
+```
+
+If you used a custom `BROWSER_CLI_HOME`, replace `~/.browser-cli` with the
+actual `home` path shown by `browser-cli paths`.
+````
+
+- [x] **Step 2: Run the focused repo text-contract test**
+
+Run:
+
+```bash
+uv run pytest tests/unit/test_repo_text_contracts.py::test_uninstall_doc_is_linked_from_primary_install_docs -v
+```
+
+Expected: PASS
+
+- [ ] **Step 3: Commit**
+
+```bash
+git add docs/uninstall.md
+git commit -m "docs: add browser-cli uninstall guide"
+```
+
+## Task 3: Link The Guide From The Main Install Docs
+
+**Files:**
+- Modify: `README.md`
+- Modify: `docs/installed-with-uv.md`
+- Test: `tests/unit/test_repo_text_contracts.py`
+
+- [x] **Step 1: Add the uninstall guide link to README documentation**
+
+Update the Documentation section in `README.md`:
+
+```markdown
+## Documentation
+
+- Repo navigation and subsystem ownership: [`AGENTS.md`](AGENTS.md)
+- Installed-user guide: [`docs/installed-with-uv.md`](docs/installed-with-uv.md)
+- Uninstall and cleanup guide: [`docs/uninstall.md`](docs/uninstall.md)
+```
+
+- [x] **Step 2: Add the uninstall pointer to the uv install guide**
+
+Append to `docs/installed-with-uv.md`:
+
+```markdown
+## Remove Browser CLI
+
+To remove Browser CLI later, including Browser CLI home data and local cleanup
+steps for maintainers, see [`docs/uninstall.md`](docs/uninstall.md). 
+``` + +- [x] **Step 3: Run the focused repo text-contract test again** + +Run: + +```bash +uv run pytest tests/unit/test_repo_text_contracts.py::test_uninstall_doc_is_linked_from_primary_install_docs -v +``` + +Expected: PASS + +- [ ] **Step 4: Commit** + +```bash +git add README.md docs/installed-with-uv.md tests/unit/test_repo_text_contracts.py +git commit -m "docs: link browser-cli uninstall guide" +``` + +## Task 4: Run Full Validation And Record The Final State + +**Files:** +- Modify: `docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md` +- Test: `tests/unit/test_repo_text_contracts.py` + +- [x] **Step 1: Run the relevant unit tests** + +Run: + +```bash +uv run pytest tests/unit/test_repo_text_contracts.py -v +``` + +Expected: PASS + +- [x] **Step 2: Run repository lint** + +Run: + +```bash +scripts/lint.sh +``` + +Expected: exit code 0 + +- [x] **Step 3: Run repository tests** + +Run: + +```bash +scripts/test.sh +``` + +Expected: exit code 0 + +- [x] **Step 4: Run repository guards** + +Run: + +```bash +scripts/guard.sh +``` + +Expected: exit code 0 + +- [ ] **Step 5: Commit the completed uninstall documentation change** + +```bash +git add docs/uninstall.md README.md docs/installed-with-uv.md tests/unit/test_repo_text_contracts.py docs/superpowers/plans/2026-04-14-browser-cli-uninstall-doc-implementation-plan.md +git commit -m "docs: document browser-cli uninstall" +``` + +## Self-Review + +- Spec coverage: + - full maintainer-oriented uninstall guidance is covered in Task 2 + - backup guidance and runtime cleanup commands are covered in Task 2 + - README and installed-user doc links are covered in Task 3 + - repo-level regression protection is covered in Task 1 +- Placeholder scan: + - no deferred implementation markers remain + - all file paths and commands are explicit +- Type and contract consistency: + - the uninstall guide uses only commands that exist today + - the docs point to `docs/uninstall.md` + - the guide 
distinguishes repo cleanup, Browser CLI home deletion, and uv tool uninstall diff --git a/docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md b/docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md new file mode 100644 index 0000000..4dae129 --- /dev/null +++ b/docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md @@ -0,0 +1,517 @@ +# Browser CLI Daemon Parameter Validation Implementation Plan + +> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Make daemon numeric argument parsing return stable `INVALID_INPUT` errors instead of leaking `INTERNAL_ERROR` for malformed user input. + +**Architecture:** Keep all request parsing changes inside `src/browser_cli/daemon/app.py`, where daemon handlers normalize `request.args` before calling `browser_service`. Add shared numeric parsing helpers on `BrowserDaemonApp`, migrate every direct `int(...)` and `float(...)` conversion in the handler layer to those helpers, and lock the behavior with request-level regression tests that execute `BrowserDaemonApp.execute()`. + +**Tech Stack:** Python 3.10, pytest, Browser CLI daemon request/response models, `InvalidInputError` + +--- + +## File Map + +- Create: `tests/unit/test_daemon_app_validation.py` + Responsibility: request-level regression tests for malformed numeric daemon input. +- Modify: `src/browser_cli/daemon/app.py` + Responsibility: add shared numeric parsing helpers and route all direct numeric request parsing through them. +- Modify: `docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md` + Responsibility: update checkbox state during execution if this plan is used as the working log. 
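Before the tasks below, the failure mode being fixed can be previewed in miniature: a raw `int(...)` on malformed input escapes as `ValueError` (surfacing as `INTERNAL_ERROR`), while the planned helper raises a typed error with a stable message. `InvalidInputError` here is a local stand-in for the real `browser_cli` exception:

```python
from typing import Any

class InvalidInputError(ValueError):
    """Local stand-in for browser_cli's InvalidInputError."""

def require_int(args: dict[str, Any], key: str) -> int:
    # Same shape as the planned BrowserDaemonApp._require_int helper.
    raw = args.get(key)
    if raw is None or str(raw).strip() == "":
        raise InvalidInputError(f"{key} is required.")
    try:
        return int(raw)
    except (TypeError, ValueError) as exc:
        raise InvalidInputError(f"{key} must be an integer.") from exc

parsed = require_int({"x": "12"}, "x")  # coercion still works for string digits
try:
    require_int({"x": "bad"}, "x")
except InvalidInputError as exc:
    message = str(exc)  # stable, user-facing message instead of a traceback
```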
+ +## Task 1: Add Request-Level Regression Tests For Numeric Input Failures + +**Files:** +- Create: `tests/unit/test_daemon_app_validation.py` +- Test: `tests/unit/test_daemon_app_validation.py` + +- [x] **Step 1: Write the failing regression tests** + +Create `tests/unit/test_daemon_app_validation.py`: + +```python +from __future__ import annotations + +import asyncio +from contextlib import asynccontextmanager +from types import SimpleNamespace + +from browser_cli import error_codes +from browser_cli.daemon.app import BrowserDaemonApp +from browser_cli.daemon.models import DaemonRequest + + +class _FakeTabs: + @asynccontextmanager + async def claim_active_tab(self, **_kwargs): + yield SimpleNamespace(page_id="page_0001") + + async def update_tab(self, *_args, **_kwargs) -> None: + return None + + +class _FakeBrowserService: + async def begin_command(self, _action: str) -> None: + return None + + async def end_command(self) -> dict[str, str]: + return {"driver": "playwright"} + + @property + def active_driver_name(self) -> str: + return "playwright" + + @property + def chrome_environment(self): + return None + + async def get_page_summary(self, _page_id: str) -> dict[str, str]: + return {"url": "https://example.com", "title": "Example"} + + async def mouse_click( + self, + page_id: str, + *, + x: int, + y: int, + button: str, + count: int, + ) -> dict[str, object]: + return {"page_id": page_id, "x": x, "y": y, "button": button, "count": count} + + async def resize(self, page_id: str, *, width: int, height: int) -> dict[str, object]: + return {"page_id": page_id, "width": width, "height": height} + + async def wait_for_network_idle( + self, page_id: str, *, timeout_seconds: float = 30.0 + ) -> dict[str, object]: + return {"page_id": page_id, "timeout_seconds": timeout_seconds} + + +class _FakeState: + def __init__(self) -> None: + self.tabs = _FakeTabs() + self.browser_service = _FakeBrowserService() + + +def _execute(request: DaemonRequest) -> dict[str, object]: 
+ async def _scenario() -> dict[str, object]: + app = BrowserDaemonApp(state=_FakeState()) # type: ignore[arg-type] + response = await app.execute(request) + return response.to_dict() + + return asyncio.run(_scenario()) + + +def test_mouse_click_missing_x_returns_invalid_input() -> None: + payload = _execute( + DaemonRequest( + action="mouse-click", + args={"y": 10}, + agent_id="agent-a", + request_id="req-1", + ) + ) + + assert payload["ok"] is False + assert payload["error_code"] == error_codes.INVALID_INPUT + assert payload["error_message"] == "x is required." + + +def test_mouse_drag_invalid_coordinate_returns_invalid_input() -> None: + payload = _execute( + DaemonRequest( + action="mouse-drag", + args={"x1": 1, "y1": 2, "x2": "bad", "y2": 4}, + agent_id="agent-a", + request_id="req-2", + ) + ) + + assert payload["ok"] is False + assert payload["error_code"] == error_codes.INVALID_INPUT + assert payload["error_message"] == "x2 must be an integer." + + +def test_wait_network_invalid_timeout_returns_invalid_input() -> None: + payload = _execute( + DaemonRequest( + action="wait-network", + args={"timeout": "slow"}, + agent_id="agent-a", + request_id="req-3", + ) + ) + + assert payload["ok"] is False + assert payload["error_code"] == error_codes.INVALID_INPUT + assert payload["error_message"] == "timeout must be a number." + + +def test_resize_non_positive_values_keep_handler_level_constraint() -> None: + payload = _execute( + DaemonRequest( + action="resize", + args={"width": 0, "height": 100}, + agent_id="agent-a", + request_id="req-4", + ) + ) + + assert payload["ok"] is False + assert payload["error_code"] == error_codes.INVALID_INPUT + assert payload["error_message"] == "width and height must be positive integers." 
+``` + +- [x] **Step 2: Run the regression tests to verify they fail** + +Run: + +```bash +uv run pytest tests/unit/test_daemon_app_validation.py -v +``` + +Expected: FAIL because `mouse-click`, `mouse-drag`, and `wait-network` still leak raw numeric conversion failures instead of returning `InvalidInputError`. + +- [ ] **Step 3: Commit the failing-test scaffold** + +```bash +git add tests/unit/test_daemon_app_validation.py +git commit -m "test: add daemon parameter validation regressions" +``` + +## Task 2: Add Shared Numeric Parsing Helpers And Migrate All Direct Conversions + +**Files:** +- Modify: `src/browser_cli/daemon/app.py` +- Test: `tests/unit/test_daemon_app_validation.py` + +- [x] **Step 1: Add shared numeric parsing helpers to `BrowserDaemonApp`** + +Insert these helpers below `_optional_str()` in `src/browser_cli/daemon/app.py`: + +```python + @classmethod + def _require_int(cls, args: dict[str, Any], key: str) -> int: + raw = args.get(key) + if raw is None or str(raw).strip() == "": + raise InvalidInputError(f"{key} is required.") + try: + return int(raw) + except (TypeError, ValueError) as exc: + raise InvalidInputError(f"{key} must be an integer.") from exc + + @classmethod + def _optional_int(cls, args: dict[str, Any], key: str) -> int | None: + raw = args.get(key) + if raw is None or str(raw).strip() == "": + return None + try: + return int(raw) + except (TypeError, ValueError) as exc: + raise InvalidInputError(f"{key} must be an integer.") from exc + + @classmethod + def _require_float(cls, args: dict[str, Any], key: str) -> float: + raw = args.get(key) + if raw is None or str(raw).strip() == "": + raise InvalidInputError(f"{key} is required.") + try: + return float(raw) + except (TypeError, ValueError) as exc: + raise InvalidInputError(f"{key} must be a number.") from exc + + @classmethod + def _optional_float(cls, args: dict[str, Any], key: str) -> float | None: + raw = args.get(key) + if raw is None or str(raw).strip() == "": + return None + 
try: + return float(raw) + except (TypeError, ValueError) as exc: + raise InvalidInputError(f"{key} must be a number.") from exc +``` + +- [x] **Step 2: Replace all direct numeric request parsing in handlers** + +Update these handler fragments in `src/browser_cli/daemon/app.py`: + +```python + async def _handle_scroll(self, request: DaemonRequest) -> dict[str, Any]: + dx = self._optional_int(request.args, "dx") or 0 + dy = self._optional_int(request.args, "dy") or 700 + return await self._run_active_page_action( + request, lambda page_id: self._state.browser_service.wheel(page_id, dx=dx, dy=dy) + ) +``` + +```python + async def _handle_mouse_click(self, request: DaemonRequest) -> dict[str, Any]: + x = self._require_int(request.args, "x") + y = self._require_int(request.args, "y") + count = self._optional_int(request.args, "count") or 1 + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.mouse_click( + page_id, + x=x, + y=y, + button=str(request.args.get("button") or "left"), + count=count, + ), + ) +``` + +```python + async def _handle_mouse_move(self, request: DaemonRequest) -> dict[str, Any]: + x = self._require_int(request.args, "x") + y = self._require_int(request.args, "y") + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.mouse_move(page_id, x=x, y=y), + ) +``` + +```python + async def _handle_mouse_drag(self, request: DaemonRequest) -> dict[str, Any]: + x1 = self._require_int(request.args, "x1") + y1 = self._require_int(request.args, "y1") + x2 = self._require_int(request.args, "x2") + y2 = self._require_int(request.args, "y2") + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.mouse_drag( + page_id, x1=x1, y1=y1, x2=x2, y2=y2 + ), + ) +``` + +```python + async def _handle_wait(self, request: DaemonRequest) -> dict[str, Any]: + seconds = self._optional_float(request.args, "seconds") + text = 
request.args.get("text") + gone = bool(request.args.get("gone")) + exact = bool(request.args.get("exact")) + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.wait( + page_id, + seconds=seconds, + text=str(text) if text else None, + gone=gone, + exact=exact, + ), + ) +``` + +```python + async def _handle_wait_network(self, request: DaemonRequest) -> dict[str, Any]: + timeout_seconds = self._optional_float(request.args, "timeout") or 30.0 + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.wait_for_network_idle( + page_id, timeout_seconds=timeout_seconds + ), + ) +``` + +```python + async def _handle_network_wait(self, request: DaemonRequest) -> dict[str, Any]: + filters = self._network_filters_from_request(request.args) + timeout_seconds = self._optional_float(request.args, "timeout_seconds") or 30.0 + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.wait_for_network_record( + page_id, + **filters, + timeout_seconds=timeout_seconds, + ), + ) +``` + +```python + async def _handle_verify_text(self, request: DaemonRequest) -> dict[str, Any]: + text = self._require_str(request.args, "text") + exact = bool(request.args.get("exact")) + timeout = self._optional_float(request.args, "timeout") or 5.0 + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.verify_text( + page_id, text=text, exact=exact, timeout_seconds=timeout + ), + ) +``` + +```python + async def _handle_verify_visible(self, request: DaemonRequest) -> dict[str, Any]: + role = self._require_str(request.args, "role") + name = self._require_str(request.args, "name") + timeout = self._optional_float(request.args, "timeout") or 5.0 + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.verify_visible( + page_id, role=role, name=name, timeout_seconds=timeout + ), + ) +``` 
+ +```python + async def _handle_video_start(self, request: DaemonRequest) -> dict[str, Any]: + width = self._optional_int(request.args, "width") + height = self._optional_int(request.args, "height") + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.start_video( + page_id, + width=width, + height=height, + ), + ) +``` + +```python + @classmethod + def _network_filters_from_request(cls, args: dict[str, Any]) -> dict[str, Any]: + return { + "url_contains": cls._optional_str(args, "url_contains"), + "url_regex": cls._optional_str(args, "url_regex"), + "method": cls._optional_str(args, "method"), + "status": cls._optional_int(args, "status"), + "resource_type": cls._optional_str(args, "resource_type"), + "mime_contains": cls._optional_str(args, "mime_contains"), + "include_static": bool(args.get("include_static")), + } +``` + +Also update `resize` and `cookie-set`: + +```python + async def _handle_resize(self, request: DaemonRequest) -> dict[str, Any]: + width = self._require_int(request.args, "width") + height = self._require_int(request.args, "height") + if width <= 0 or height <= 0: + raise InvalidInputError("width and height must be positive integers.") + return await self._run_active_page_action( + request, + lambda page_id: self._state.browser_service.resize(page_id, width=width, height=height), + ) +``` + +```python + expires=self._optional_float(request.args, "expires"), +``` + +- [x] **Step 3: Run the regression tests to verify they pass** + +Run: + +```bash +uv run pytest tests/unit/test_daemon_app_validation.py -v +``` + +Expected: PASS + +- [x] **Step 4: Add one focused success-path test for parsed values** + +Append to `tests/unit/test_daemon_app_validation.py`: + +```python +def test_mouse_click_successfully_parses_integer_fields() -> None: + payload = _execute( + DaemonRequest( + action="mouse-click", + args={"x": "12", "y": "14", "count": "2"}, + agent_id="agent-a", + request_id="req-5", + ) + ) + + 
assert payload["ok"] is True + assert payload["data"] == { + "page_id": "page_0001", + "x": 12, + "y": 14, + "button": "left", + "count": 2, + } +``` + +Run: + +```bash +uv run pytest tests/unit/test_daemon_app_validation.py::test_mouse_click_successfully_parses_integer_fields -v +``` + +Expected: PASS + +- [ ] **Step 5: Commit** + +```bash +git add src/browser_cli/daemon/app.py tests/unit/test_daemon_app_validation.py +git commit -m "fix: normalize daemon numeric parameter validation" +``` + +## Task 3: Run Focused And Full Validation + +**Files:** +- Modify: `docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md` +- Test: `tests/unit/test_daemon_app_validation.py` + +- [x] **Step 1: Run the focused daemon validation suite** + +Run: + +```bash +uv run pytest tests/unit/test_daemon_app_validation.py -v +``` + +Expected: PASS + +- [x] **Step 2: Run repository validation** + +Run: + +```bash +./scripts/lint.sh +./scripts/test.sh +./scripts/guard.sh +``` + +Expected: all three scripts exit `0`. 
+ +- [x] **Step 3: Update this plan file to mark the completed steps** + +Update the completed steps in this file: + +```markdown +- [x] **Step 1: Run the focused daemon validation suite** +- [x] **Step 2: Run repository validation** +``` + +- [ ] **Step 4: Commit** + +```bash +git add docs/superpowers/plans/2026-04-14-daemon-parameter-validation-implementation-plan.md +git commit -m "docs: mark daemon parameter validation plan complete" +``` + +## Self-Review + +Spec coverage: + +- shared helper methods in `BrowserDaemonApp`: covered by Task 2 +- handler-layer-only scope: covered by Task 2 file scope +- malformed numeric values return `INVALID_INPUT`: covered by Task 1 tests and Task 2 implementation +- preserve handler-level range validation: covered by the `resize` regression in Task 1 and Task 2 +- regression protection through `BrowserDaemonApp.execute()`: covered by Task 1 + +Placeholder scan: + +- no `TODO`, `TBD`, or deferred “handle later” markers remain +- every code-edit step includes concrete code blocks +- every verification step includes an exact command and expected result + +Type consistency: + +- helper names `_require_int`, `_optional_int`, `_require_float`, `_optional_float` are defined once and used consistently +- request/response assertions use the existing `DaemonRequest` and `DaemonResponse.to_dict()` flow diff --git a/docs/superpowers/specs/2026-04-14-automation-manifest-roundtrip-design.md b/docs/superpowers/specs/2026-04-14-automation-manifest-roundtrip-design.md new file mode 100644 index 0000000..d6cb01b --- /dev/null +++ b/docs/superpowers/specs/2026-04-14-automation-manifest-roundtrip-design.md @@ -0,0 +1,395 @@ +# Automation Manifest Round-Trip Design + +Date: 2026-04-14 +Status: Drafted for review +Repo: `browser-cli` + +## Summary + +This design unifies Browser CLI's automation manifest semantics across +`publish`, `import`, `export`, `inspect`, and persistence. + +The goal is semantic consistency, not raw TOML text preservation. 
+ +That means: + +- supported manifest fields must keep the same meaning across all round-trip + paths +- published snapshot manifests may rewrite paths into snapshot/runtime context +- exported manifests may normalize defaults and formatting +- no supported field may be silently dropped, renamed, or interpreted + differently on one path than another + +## Problem Statement + +Browser CLI already has the pieces for automation packaging and inspection, but +the field mapping logic is duplicated across several surfaces: + +- `publisher.py` renders snapshot manifests +- `api/server.py` converts between API payloads, persisted definitions, and + exported TOML +- `commands/automation.py` assembles inspect payloads +- `models.py` contains only part of the manifest-to-persistence mapping + +That duplication creates release-risking drift: + +- supported fields can survive one path but disappear on another +- `snapshot_config` and `live_config` can expose similar data with different + shapes +- publish and export can each define their own idea of the canonical manifest +- future field additions require hand-editing multiple projection sites + +For the first public release, Browser CLI needs one stable semantic model for +automation configuration. + +## Goals + +- Make `publish`, `import`, `export`, `inspect`, and persistence agree on one + supported manifest field set. +- Preserve supported field semantics across all round-trip paths. +- Keep `snapshot_config` and `live_config` separate in `inspect`, but render + them with the same config-view shape. +- Allow path normalization into snapshot or runtime context without treating it + as semantic drift. +- Reduce duplicated projection logic so future manifest fields are harder to + implement inconsistently. + +## Non-Goals + +- No raw TOML text preservation guarantee. +- No unknown-field passthrough guarantee. +- No change to automation runtime execution behavior in this work. 
+- No third configuration truth source beyond snapshot manifests and persisted + live state. +- No new public automation subcommands. + +## Chosen Direction + +Browser CLI introduces a shared automation projection layer that owns the +supported manifest semantics and the stable conversions between internal models +and user-facing views. + +Two existing truths remain: + +- immutable snapshot manifest under + `~/.browser-cli/automations//versions//automation.toml` +- current live persisted automation definition in the automation service + +The new projection layer does not replace those truths. It ensures both are +encoded, decoded, and rendered through one semantic contract. + +## Options Considered + +### 1. Shared Projection Layer + +Create one reusable mapping layer for manifest, persistence, export, publish, +and inspect. + +Advantages: + +- one semantic contract for supported fields +- fewer projection sites to keep in sync +- easier to add future supported fields safely + +Disadvantages: + +- requires touching several existing modules +- moves some logic out of current helpers + +Chosen direction. + +### 2. Surface-By-Surface Fixes + +Patch `publish`, `export`, `inspect`, and import independently. + +Advantages: + +- smaller immediate edits + +Disadvantages: + +- preserves duplicated mapping logic +- makes future drift likely +- keeps config-view inconsistency risk high + +Rejected. + +### 3. Persist Raw Manifest Blobs + +Store raw TOML or parsed source payloads as an additional truth source. + +Advantages: + +- closer to text preservation + +Disadvantages: + +- conflicts with the chosen semantic-consistency goal +- introduces redundant truth sources +- adds release-window scope without improving core user semantics + +Rejected. 
+ +## Canonical Supported Field Set + +This design standardizes the supported manifest fields that every round-trip +path must understand and preserve semantically: + +- `automation.id` +- `automation.name` +- `automation.description` +- `automation.version` +- `task.path` +- `task.meta_path` +- `task.entrypoint` +- `inputs` +- `schedule.mode` +- `schedule.timezone` +- normalized schedule payload fields beyond `mode` and `timezone` +- `outputs.artifact_dir` +- `outputs.result_json_path` +- `outputs.stdout` +- `hooks.before_run` +- `hooks.after_success` +- `hooks.after_failure` +- `runtime.retry_attempts` +- `runtime.retry_backoff_seconds` +- `runtime.timeout_seconds` +- `runtime.log_level` + +Unknown field preservation is explicitly out of scope. + +## Semantic Normalization Rules + +This design uses semantic equality rather than textual equality. + +Allowed normalization: + +- rewrite `task.path` and `task.meta_path` into snapshot-local paths during + publish +- rewrite `outputs.artifact_dir` and `outputs.result_json_path` into snapshot + or runtime output locations +- render explicit defaults when generating a manifest from persistence or + generated defaults +- normalize schedule payload ordering or TOML formatting + +Disallowed drift: + +- silent loss of a supported field +- renaming a supported field on one path +- changing the meaning of `None`, empty, or default values across surfaces +- exposing different config-view field sets for snapshot vs live inspect + +## Architecture + +### Existing Models + +The existing dataclasses remain the core semantic models: + +- `AutomationManifest` is the file-backed manifest semantic model +- `PersistedAutomationDefinition` is the live automation-service model + +### New Shared Projection Layer + +Add a shared projection module at: + +`src/browser_cli/automation/projections.py` + +This module owns the stable conversions among: + +- `AutomationManifest -> PersistedAutomationDefinition` +- `PersistedAutomationDefinition -> 
manifest sections / TOML-ready structure` +- `AutomationManifest -> snapshot manifest sections`, with path remapping +- `AutomationManifest -> inspect config view` +- `PersistedAutomationDefinition -> inspect config view` + +This module becomes the single semantic mapping authority for supported fields. + +### Ownership Boundaries + +- `models.py` keeps dataclass definitions. Projection authority moves to the + shared projection layer. +- `publisher.py` delegates snapshot manifest construction to the shared + projection layer. +- `api/server.py` delegates import/export field projection to the shared + projection layer. +- `commands/automation.py` delegates inspect config rendering to the shared + projection layer. + +## Publish Contract + +### Source Manifest Present + +If the task directory contains `automation.toml`: + +- load and validate it through `load_automation_manifest()` +- convert it through the shared projection layer into the snapshot manifest + representation +- rewrite only the fields that must move into snapshot/runtime context, such as + task and output paths +- preserve all supported fields semantically + +### Source Manifest Absent + +If the task directory does not contain `automation.toml`: + +- generate the minimal default manifest +- mark the publish result as `manifest_source=generated_defaults` + +### Invalid Source Manifest + +If `automation.toml` exists but is invalid: + +- fail publish +- do not silently regenerate from defaults + +## Import Contract + +`automation import` accepts manifest semantics only. 
+ +The import path must: + +- load the manifest through `load_automation_manifest()` +- project it into `PersistedAutomationDefinition` through the shared projection + layer +- preserve the entire supported field set in persisted live state + +## Export Contract + +`automation export` must: + +- take the persisted live automation definition +- project it back into manifest semantics through the shared projection layer +- render TOML from those semantics + +The resulting TOML does not need to match original source text, but +reloading it through `load_automation_manifest()` must produce supported-field +semantics equivalent to the persisted source definition. + +## Inspect Contract + +`browser-cli automation inspect ` remains the live operational +view. + +`browser-cli automation inspect --version N` must continue to +show: + +- `snapshot_config` from the immutable snapshot manifest +- `live_config` from the current persisted automation definition +- `latest_run` as a separate operational section + +This design adds a stronger rule: + +- `snapshot_config` and `live_config` must be rendered from the same shared + inspect-config projection shape + +Allowed differences: + +- path values may differ because one view is snapshot-scoped and the other is + live-runtime-scoped +- operational metadata such as `enabled`, timestamps, and latest run remains + live-only data + +Disallowed differences: + +- one view exposing a supported config field while the other omits it +- one view renaming or reshaping supported fields differently from the other + +## File-Level Change Plan + +### `src/browser_cli/automation/projections.py` + +New shared projection helpers for: + +- manifest to persisted definition +- persisted definition to manifest sections +- manifest to snapshot manifest sections with path remapping +- manifest/persisted definition to shared inspect config payloads + +### `src/browser_cli/automation/models.py` + +Keep dataclasses. 
Existing projection helpers either become thin wrappers or +move entirely to the new projection layer. + +### `src/browser_cli/automation/publisher.py` + +Replace local snapshot-manifest assembly with shared projection helpers. + +### `src/browser_cli/automation/api/server.py` + +Replace local import/export field projection with shared projection helpers. + +### `src/browser_cli/commands/automation.py` + +Replace separate live/snapshot inspect payload assembly with shared inspect +config projection helpers. + +## Testing Strategy + +The validation focus is semantic round-trip coverage, not raw text comparison. + +Required regression coverage: + +- source manifest -> persisted definition preserves supported fields +- persisted definition -> exported TOML -> reloaded manifest preserves + supported fields semantically +- source manifest -> published snapshot manifest -> reloaded manifest preserves + supported fields semantically after path remapping +- `inspect --version` renders `snapshot_config` and `live_config` with the same + supported config field set + +At minimum, tests must cover: + +- `inputs` +- `schedule.timezone` +- schedule payload fields beyond `mode/timezone` where supported +- `outputs.result_json_path` +- `hooks.before_run` +- `hooks.after_success` +- `hooks.after_failure` +- `runtime.retry_attempts` +- `runtime.retry_backoff_seconds` +- `runtime.timeout_seconds` +- `runtime.log_level` + +## Acceptance Criteria + +This work is complete when all of the following are true: + +1. Publishing a task with source `automation.toml` no longer drops supported + manifest fields in the published snapshot. +2. Exported automation TOML can be reloaded without semantic loss of supported + fields. +3. Import preserves the supported field set into persisted live automation + state. +4. `inspect --version` continues to separate snapshot truth from live truth. +5. `snapshot_config` and `live_config` share one stable config-view field + shape. +6. 
Path rewriting may occur, but no supported field changes meaning across + round-trip paths. + +## Risks And Mitigations + +- Risk: moving projection logic can break existing payload shapes. + Mitigation: keep the public inspect/live response structure intact and change + only config payload assembly behind it. + +- Risk: duplicated legacy helpers may continue drifting after the refactor. + Mitigation: centralize supported-field projection in one new module and make + other surfaces delegate to it. + +- Risk: tests may accidentally assert exact text output instead of semantics. + Mitigation: use `load_automation_manifest()` and structured payload + comparisons as the primary assertions. + +## Result + +After this change, Browser CLI still has two truths: + +- immutable published snapshot configuration +- current live persisted automation configuration + +But both truths are interpreted through one shared manifest semantic contract. + +That is the release-ready consistency bar for Browser CLI's automation +round-trip behavior. diff --git a/docs/superpowers/specs/2026-04-14-browser-cli-uninstall-doc-design.md b/docs/superpowers/specs/2026-04-14-browser-cli-uninstall-doc-design.md new file mode 100644 index 0000000..3f27ec3 --- /dev/null +++ b/docs/superpowers/specs/2026-04-14-browser-cli-uninstall-doc-design.md @@ -0,0 +1,266 @@ +# Browser CLI Uninstall Doc Design + +Date: 2026-04-14 +Status: Drafted for review +Repo: `/home/hongv/workspace/browser-cli` + +## Summary + +This spec adds a maintainer-oriented uninstall guide for Browser CLI. + +The new documentation should cover: + +- complete removal of the repository development environment +- complete removal of Browser CLI runtime data under the Browser CLI home +- optional removal of any uv tool installation +- backup guidance for high-value local task and automation data before deletion + +The guide is not primarily an installed-user quick-uninstall note. 
It is a +full cleanup document for maintainers and developers who may have both a local +repository checkout and Browser CLI runtime state on the same machine. + +## Problem Statement + +The repository currently documents installation and first-run guidance, but it +does not document how to remove Browser CLI cleanly. + +Current docs include: + +- `README.md` +- `docs/installed-with-uv.md` +- `docs/installed-with-pip.md` + +Those docs explain install and migration paths, but they do not explain: + +- which runtime paths Browser CLI actually owns +- how to stop the daemon and automation service before deletion +- how to remove the repo-local development environment +- how to distinguish uninstalling the uv tool from deleting Browser CLI home + data +- how to avoid accidentally deleting high-value task and automation data + +Without an uninstall guide, maintainers have to infer removal steps from the +source code and path layout. + +## Goals + +- Add a single uninstall document for Browser CLI. +- Make the primary audience repository maintainers and developers. +- Default to a full uninstall path rather than a minimal uninstall path. +- Warn users to back up high-value Browser CLI home data before deletion. +- Use only commands and paths that exist in the current implementation. +- Clarify the difference between: + - runtime cleanup + - repo development-environment cleanup + - Browser CLI home deletion + - uv tool uninstall + +## Non-Goals + +- This spec does not add a new `browser-cli uninstall` command. +- This spec does not redesign Browser CLI runtime storage. +- This spec does not make installed-user removal the primary focus. +- This spec does not automate backups. + +## Current Implementation Constraints + +The uninstall doc must align with current code behavior. + +### Runtime Path Model + +`src/browser_cli/constants.py` defines the Browser CLI home and its owned +paths. 
+ +Relevant paths include: + +- `home` +- `run/` +- `artifacts/` +- `tasks/` +- `automations/` +- `automations.db` +- daemon log and run-info files +- automation-service log and run-info files + +The home path is: + +- `BROWSER_CLI_HOME` when set +- otherwise `~/.browser-cli` + +The uninstall guide must therefore tell users to discover the real home path +before deleting anything. + +### Discovery Commands + +The current implementation already exposes the right discovery surfaces: + +- `browser-cli paths` +- `browser-cli status` + +`paths` exposes the canonical filesystem paths that matter for cleanup. +`status` exposes runtime state and helps users understand whether daemon or +automation processes are still active. + +### Runtime Cleanup Surfaces + +The current implementation provides: + +- `browser-cli automation stop` +- `browser-cli reload` + +These are runtime cleanup tools, not uninstall commands. + +The uninstall doc must explain that: + +- `automation stop` is used to stop the automation service and clear stale + runtime metadata +- `reload` is used to reset daemon runtime state before deletion +- final uninstall still depends on deleting directories and files + +## Chosen Direction + +Add a new document: + +- `docs/uninstall.md` + +This document should be the primary uninstall reference and should be linked +from: + +- `README.md` +- `docs/installed-with-uv.md` + +The document should be structured around a full cleanup flow, but it should +also call out narrower removal cases inside that flow. + +## Documentation Structure + +The uninstall guide should use this structure. + +### 1. Summary + +Explain that the document targets full removal for maintainers and local +developers. + +Clarify that full removal can include: + +- repo-local development environment cleanup +- Browser CLI home deletion +- optional uv tool uninstall + +### 2. 
Before You Delete Anything + +Require users to inspect current state first: + +```bash +browser-cli paths +browser-cli status +``` + +Explain why this matters: + +- Browser CLI home may not be `~/.browser-cli` +- active daemon or automation-service processes may still exist + +### 3. Back Up What You Want To Keep + +Explicitly list high-value paths: + +- `tasks_dir` +- `automations_dir` +- `automation_db_path` +- optionally `artifacts_dir` + +Explain that deleting Browser CLI home removes local task sources, published +automation snapshots, and automation persistence. + +### 4. Stop Runtime Processes + +Use only existing commands: + +```bash +browser-cli automation stop +browser-cli reload +``` + +Explain that these commands reduce residual runtime state before deletion but do +not by themselves uninstall Browser CLI. + +### 5. Remove The Repository Development Environment + +Provide commands for maintainers working inside the repository checkout, such as +removing: + +- `.venv` +- `.pytest_cache` +- `__pycache__` directories + +Make it explicit that this step only affects the checkout, not Browser CLI home. + +### 6. Remove Browser CLI Home Data + +Tell users to delete the home path reported by `browser-cli paths`. + +Use `~/.browser-cli` only as the default example, not as the sole target. + +The doc must explain that removing Browser CLI home deletes: + +- run state +- logs +- artifacts +- local tasks +- published automations +- automation database +- managed-profile runtime state stored there + +### 7. Optional: Remove uv Tool Installation + +Include: + +```bash +uv tool uninstall browser-cli +``` + +Explain that this only removes the installed CLI tool and does not remove: + +- the repository checkout +- Browser CLI home data + +### 8. 
Verify Removal + +Show how to verify that: + +- the repo development environment is gone +- Browser CLI home is gone +- any optional uv tool install has been removed + +The doc should avoid assuming that `browser-cli` is still available after the +user removes the tool install. + +## Required Warnings + +The uninstall doc must explicitly warn about these four cases: + +1. Deleting Browser CLI home deletes high-value local task and automation data. +2. `browser-cli reload` is runtime cleanup, not uninstall. +3. `BROWSER_CLI_HOME` changes the deletion target. +4. `uv tool uninstall browser-cli` does not remove Browser CLI home or the repo + checkout. + +## Related Documentation Changes + +`README.md` should gain a link to `docs/uninstall.md`. + +`docs/installed-with-uv.md` should also point to `docs/uninstall.md` so +installed users can still find the removal guide even though the guide is +maintainer-oriented. + +## Acceptance Criteria + +This design is complete when: + +- `docs/uninstall.md` exists +- the guide is based on current Browser CLI commands and paths +- the guide defaults to full cleanup for maintainers +- the guide warns users to back up task and automation data first +- `README.md` links to the guide +- `docs/installed-with-uv.md` links to the guide diff --git a/docs/superpowers/specs/2026-04-14-daemon-parameter-validation-design.md b/docs/superpowers/specs/2026-04-14-daemon-parameter-validation-design.md new file mode 100644 index 0000000..d57b7ed --- /dev/null +++ b/docs/superpowers/specs/2026-04-14-daemon-parameter-validation-design.md @@ -0,0 +1,287 @@ +# Browser CLI Daemon Parameter Validation Design + +Date: 2026-04-14 +Status: Drafted for review +Repo: `browser-cli` + +## Summary + +This design standardizes user-input numeric parsing in the daemon request +handler layer so malformed command arguments return stable `INVALID_INPUT` +errors instead of leaking `INTERNAL_ERROR` responses. 
+ +The change is intentionally scoped to `src/browser_cli/daemon/app.py`, where +daemon request arguments are converted from `request.args` into typed Python +values before browser operations are called. + +The design introduces shared parsing helpers for integers and floats, then +applies them across all user-facing numeric argument entrypoints, including: + +- mouse coordinate commands +- resize commands +- wait and timeout commands +- other handlers that currently perform raw `int(...)` or `float(...)` + conversions on request arguments + +## Problem Statement + +The daemon request layer currently mixes two styles of input handling: + +- some fields use explicit validation helpers such as `_require_str()` +- some numeric fields are converted inline with `int(request.args.get(...))` or + `float(request.args.get(...))` + +That inconsistency creates release-quality problems: + +1. missing or malformed numeric arguments can raise raw `TypeError` or + `ValueError` +2. those exceptions bypass user-input semantics and surface as + `INTERNAL_ERROR` +3. clients receive traceback-shaped failures for what should be ordinary usage + errors + +This is especially visible in coordinate-heavy commands such as +`mouse-click`, `mouse-move`, and `mouse-drag`, but the underlying issue is +broader: daemon handlers are not consistently responsible for normalizing and +validating user input before calling the service layer. + +## Goals + +- Make malformed numeric command arguments return `InvalidInputError`. +- Keep missing-field errors explicit and field-oriented. +- Keep range or command-specific constraints in the relevant handler. +- Standardize numeric parsing behavior across daemon handlers. +- Preserve successful command behavior. +- Add regression tests that prove malformed inputs no longer surface as + `INTERNAL_ERROR`. + +## Non-Goals + +- This design does not change browser service or driver method signatures. 
+- This design does not redesign non-numeric argument validation. +- This design does not introduce a new daemon transport or error envelope. +- This design does not move CLI/daemon parsing logic into lower layers. + +## Chosen Direction + +Browser CLI should treat daemon handlers as the single normalization layer for +user-supplied request arguments. + +Numeric parsing should be centralized in helper methods on +`BrowserDaemonApp`, then reused by every handler that accepts numeric input. + +The error model should be: + +- missing required field -> field-level message +- malformed numeric value -> field-level type message +- command-specific range/semantic violation -> handler-level constraint message + +This preserves good diagnostics without conflating internal bugs with user +input errors. + +## Options Considered + +### 1. Shared daemon parsing helpers + +Add shared helper methods such as `_require_int()` and `_require_float()` to +`BrowserDaemonApp`, then migrate handlers to use them. + +Advantages: + +- keeps parsing semantics in one place +- improves consistency across commands +- makes future handlers harder to regress +- preserves lower-layer boundaries + +Disadvantages: + +- requires touching several handlers in one pass + +Chosen direction. + +### 2. Per-handler local try/except blocks + +Patch only the currently failing handlers by catching `TypeError` and +`ValueError` inline. + +Advantages: + +- smallest initial patch + +Disadvantages: + +- duplicates parsing logic +- easy for future handlers to drift +- does not create a durable validation pattern + +Rejected. + +### 3. Outer exception remapping + +Catch numeric conversion failures at the top of `execute()` and translate them +to `InvalidInputError`. + +Advantages: + +- broad coverage with fewer edits + +Disadvantages: + +- weak context for error messages +- risks converting genuine internal bugs into usage errors +- blurs responsibility between handler code and top-level execution flow + +Rejected. 
+ +## Layering Rule + +The parsing and validation change must stay in: + +- `src/browser_cli/daemon/app.py` + +This file owns user-facing daemon argument normalization and is the correct +layer for converting request payloads into typed inputs. + +The following layers should remain unchanged in responsibility: + +- `browser_service`: consumes already-validated typed values +- drivers: execute browser behavior, not user-input parsing +- transport: carries payloads, not validation policy + +## Helper Contract + +`BrowserDaemonApp` should add shared helper methods for numeric parsing. + +At minimum: + +- `_require_int(args, key)` +- `_optional_int(args, key)` +- `_require_float(args, key)` +- `_optional_float(args, key)` + +Optional range-specific helpers may be added only if they reduce duplication +without obscuring command-level semantics. + +### Error Messages + +The helpers should produce field-level diagnostics: + +- missing required integer: + `x is required.` +- invalid integer: + `x must be an integer.` +- missing required float: + `timeout is required.` +- invalid float: + `timeout_seconds must be a number.` + +Helpers should not enforce command-specific range rules such as positive sizes +or non-negative counts. Those remain the handler's responsibility. + +## Handler Coverage + +The implementation should cover every handler in `daemon/app.py` that currently +does direct numeric parsing from `request.args`. + +This includes two categories. 
+ +### Category 1: Unsafe Inline Parsing + +These handlers currently risk surfacing internal failures and must be migrated: + +- `mouse-click` +- `mouse-move` +- `mouse-drag` +- any other handler using direct `int(request.args.get(...))` or + `float(request.args.get(...))` without protective validation + +### Category 2: Already-Constrained Commands With Duplicated Parsing + +These handlers should also be moved to the shared helpers so behavior stays +consistent: + +- `resize` +- `scroll` +- timeout-based `wait` handlers +- network wait handlers +- verify handlers with timeout parameters + +The exact list should be derived from the current `daemon/app.py` +implementation rather than guessed from public docs. + +## Error Semantics + +The resulting behavior should be: + +- missing numeric fields return `INVALID_INPUT` +- malformed numeric fields return `INVALID_INPUT` +- command-specific range checks continue returning `INVALID_INPUT` +- internal daemon failures unrelated to user input continue returning + `INTERNAL_ERROR` or other existing typed errors + +This distinction matters because clients should be able to tell whether they +need to fix the request or retry the system. + +## Testing Strategy + +This change needs regression tests at the daemon application layer. + +### Required Tests + +Add tests that execute `BrowserDaemonApp.execute()` for malformed numeric +requests and assert: + +- `ok` is `False` +- `error_code` is `INVALID_INPUT` +- the error message identifies the bad field or violated command constraint + +### Regression Cases + +At least one test should explicitly lock the previously broken behavior: + +- `mouse-click` without `x` +- or another command that previously returned `INTERNAL_ERROR` + +The test should prove the response now returns `INVALID_INPUT` instead. + +### Success-Path Safety + +Existing success-path tests should remain unchanged unless a handler signature +or public contract is actually modified. 
+ +If needed, add a small success-path sanity test for one representative command +that now uses the shared helpers. + +## Files Expected To Change + +Primary files: + +- `src/browser_cli/daemon/app.py` +- `tests/unit/test_daemon_server.py` or `tests/unit/test_daemon_browser_service.py` + only if existing coverage belongs there +- a new or expanded unit test file for daemon app request validation + +No architecture changes are expected beyond request-handler input normalization. + +## Risks + +- over-broad top-level exception remapping could hide real bugs +- inconsistent migration could leave some handlers on old parsing behavior +- helper overreach could mix field parsing with command semantics + +The design controls these risks by: + +- keeping parsing helpers narrow +- keeping command-specific range rules in handlers +- testing malformed requests through `BrowserDaemonApp.execute()` + +## Acceptance Criteria + +This design is complete when: + +- malformed numeric daemon inputs no longer return `INTERNAL_ERROR` +- missing numeric arguments return field-level `InvalidInputError` messages +- command-specific numeric constraints still return explicit usage errors +- all direct user-facing numeric conversions in `daemon/app.py` are routed + through shared helper logic +- regression tests lock the new error behavior diff --git a/docs/uninstall.md b/docs/uninstall.md new file mode 100644 index 0000000..3e9a105 --- /dev/null +++ b/docs/uninstall.md @@ -0,0 +1,110 @@ +# Uninstall Browser CLI + +This guide is for repository maintainers and local developers who want to fully +remove Browser CLI from a machine. 
It covers: + +- stopping Browser CLI runtime processes +- backing up high-value local data first +- removing the repository development environment +- removing Browser CLI home data +- optionally removing any uv tool installation + +## Before You Delete Anything + +Inspect the current Browser CLI runtime paths and status first: + +```bash +browser-cli paths +browser-cli status +``` + +If you set `BROWSER_CLI_HOME`, Browser CLI home may not be `~/.browser-cli`. +Use the `home` path shown by `browser-cli paths` as the deletion target. + +## Back Up What You Want To Keep + +Before deleting Browser CLI home, consider backing up these paths from +`browser-cli paths`: + +- `tasks_dir` +- `automations_dir` +- `automation_db_path` +- optionally `artifacts_dir` + +Deleting Browser CLI home removes local task source, published automation +snapshots, automation persistence, runtime logs, and artifacts. + +## Stop Runtime Processes + +Stop the automation service and clear daemon runtime state before deleting files: + +```bash +browser-cli automation stop +browser-cli reload +``` + +`browser-cli reload` is runtime cleanup, not uninstall. It resets Browser CLI +state before deletion, but it does not remove Browser CLI files by itself. + +## Remove The Repository Development Environment + +From the repository root, remove the local development environment: + +```bash +rm -rf .venv +rm -rf .pytest_cache +find . -type d -name "__pycache__" -prune -exec rm -rf {} + +``` + +This step only removes repo-local development state. It does not remove Browser +CLI home data. + +## Remove Browser CLI Home Data + +Delete the Browser CLI home reported by `browser-cli paths`. The default path is +usually: + +```bash +rm -rf ~/.browser-cli +``` + +If `browser-cli paths` showed a different `home`, delete that path instead. 
+ +Removing Browser CLI home deletes: + +- `run/` runtime state and logs +- `artifacts/` +- `tasks/` +- `automations/` +- `automations.db` +- managed-profile runtime state stored under Browser CLI home + +## Optional: Remove uv Tool Installation + +If you also installed Browser CLI as a uv tool, remove it separately: + +```bash +uv tool uninstall browser-cli +``` + +This does not remove: + +- the repository checkout +- Browser CLI home data + +## Verify Removal + +Verify repo-local cleanup: + +```bash +test ! -d .venv && echo "repo venv removed" +``` + +Verify Browser CLI home removal by checking the path you identified earlier: + +```bash +test ! -d ~/.browser-cli && echo "browser-cli home removed" +``` + +If you used a custom `BROWSER_CLI_HOME`, replace `~/.browser-cli` with the +actual `home` path shown by `browser-cli paths`. diff --git a/src/browser_cli/automation/api/server.py b/src/browser_cli/automation/api/server.py index cae86d0..289881a 100644 --- a/src/browser_cli/automation/api/server.py +++ b/src/browser_cli/automation/api/server.py @@ -10,11 +10,13 @@ from urllib.parse import urlparse from browser_cli.automation.loader import load_automation_manifest -from browser_cli.automation.models import ( - PersistedAutomationDefinition, +from browser_cli.automation.models import PersistedAutomationDefinition +from browser_cli.automation.projections import ( manifest_to_persisted_definition, + payload_to_persisted_definition, + persisted_definition_to_config_payload, + persisted_definition_to_manifest_toml, ) -from browser_cli.automation.toml import dumps_toml_sections from browser_cli.automation.web import render_index_html from browser_cli.errors import BrowserCliError, InvalidInputError @@ -49,7 +51,11 @@ def _handle(self, method: str) -> None: return if path == "/api/service/status": self._send_json( - {"ok": True, "data": self.server.runtime.status_payload(), "meta": {}} + { + "ok": True, + "data": self.server.runtime.status_payload(), + "meta": {}, + } ) 
return if path == "/api/service/stop" and method == "POST": @@ -66,9 +72,15 @@ def _handle(self, method: str) -> None: return if path == "/api/automations" and method == "POST": body = self._read_json_body() - created = self.server.runtime.store.upsert_automation(_payload_to_automation(body)) + created = self.server.runtime.store.upsert_automation( + payload_to_persisted_definition(body) + ) self._send_json( - {"ok": True, "data": self._serialize_automation(created), "meta": {}}, + { + "ok": True, + "data": self._serialize_automation(created), + "meta": {}, + }, status=HTTPStatus.CREATED, ) return @@ -82,7 +94,11 @@ def _handle(self, method: str) -> None: automation = manifest_to_persisted_definition(manifest, enabled=enabled) created = self.server.runtime.store.upsert_automation(automation) self._send_json( - {"ok": True, "data": self._serialize_automation(created), "meta": {}} + { + "ok": True, + "data": self._serialize_automation(created), + "meta": {}, + } ) return if path.startswith("/api/automations/") and path.endswith("/runs") and method == "GET": @@ -101,7 +117,11 @@ def _handle(self, method: str) -> None: automation_id = path.split("/")[3] updated = self.server.runtime.store.set_enabled(automation_id, True) self._send_json( - {"ok": True, "data": self._serialize_automation(updated), "meta": {}} + { + "ok": True, + "data": self._serialize_automation(updated), + "meta": {}, + } ) return if ( @@ -112,7 +132,11 @@ def _handle(self, method: str) -> None: automation_id = path.split("/")[3] updated = self.server.runtime.store.set_enabled(automation_id, False) self._send_json( - {"ok": True, "data": self._serialize_automation(updated), "meta": {}} + { + "ok": True, + "data": self._serialize_automation(updated), + "meta": {}, + } ) return if path.startswith("/api/automations/") and path.endswith("/run") and method == "POST": @@ -128,7 +152,11 @@ def _handle(self, method: str) -> None: automation_id = path.split("/")[3] automation = 
self.server.runtime.store.get_automation(automation_id) self._send_json( - {"ok": True, "data": {"toml": _automation_to_toml(automation)}, "meta": {}} + { + "ok": True, + "data": {"toml": persisted_definition_to_manifest_toml(automation)}, + "meta": {}, + } ) return if path.startswith("/api/automations/") and method == "GET": @@ -147,10 +175,14 @@ def _handle(self, method: str) -> None: payload = self._read_json_body() payload["id"] = automation_id updated = self.server.runtime.store.upsert_automation( - _payload_to_automation(payload) + payload_to_persisted_definition(payload) ) self._send_json( - {"ok": True, "data": self._serialize_automation(updated), "meta": {}} + { + "ok": True, + "data": self._serialize_automation(updated), + "meta": {}, + } ) return if path.startswith("/api/runs/") and path.endswith("/retry") and method == "POST": @@ -175,7 +207,11 @@ def _handle(self, method: str) -> None: ) return self._send_json( - {"ok": False, "error_message": f"Not found: {path}", "error_code": "NOT_FOUND"}, + { + "ok": False, + "error_message": f"Not found: {path}", + "error_code": "NOT_FOUND", + }, status=HTTPStatus.NOT_FOUND, ) except KeyError as exc: @@ -230,39 +266,23 @@ def _read_json_body(self) -> dict: return json.loads(raw.decode("utf-8")) def _serialize_automation( - self, automation: PersistedAutomationDefinition, *, include_latest_run: bool = False + self, + automation: PersistedAutomationDefinition, + *, + include_latest_run: bool = False, ) -> dict: - payload = { - "id": automation.id, - "name": automation.name, - "description": automation.description, - "version": automation.version, - "task_path": str(automation.task_path), - "task_meta_path": str(automation.task_meta_path), - "entrypoint": automation.entrypoint, - "enabled": automation.enabled, - "definition_status": automation.definition_status, - "definition_error": automation.definition_error, - "schedule_kind": automation.schedule_kind, - "schedule_payload": automation.schedule_payload, - 
"timezone": automation.timezone, - "output_dir": str(automation.output_dir), - "result_json_path": str(automation.result_json_path) - if automation.result_json_path - else None, - "stdout_mode": automation.stdout_mode, - "input_overrides": automation.input_overrides, - "before_run_hooks": list(automation.before_run_hooks), - "after_success_hooks": list(automation.after_success_hooks), - "after_failure_hooks": list(automation.after_failure_hooks), - "retry_attempts": automation.retry_attempts, - "retry_backoff_seconds": automation.retry_backoff_seconds, - "timeout_seconds": automation.timeout_seconds, - "created_at": automation.created_at, - "updated_at": automation.updated_at, - "last_run_at": automation.last_run_at, - "next_run_at": automation.next_run_at, - } + payload = persisted_definition_to_config_payload(automation) + payload.update( + { + "enabled": automation.enabled, + "definition_status": automation.definition_status, + "definition_error": automation.definition_error, + "created_at": automation.created_at, + "updated_at": automation.updated_at, + "last_run_at": automation.last_run_at, + "next_run_at": automation.next_run_at, + } + ) if include_latest_run: runs = self.server.runtime.store.list_runs(automation.id, limit=1) payload["latest_run"] = self._serialize_run(runs[0]) if runs else None @@ -295,95 +315,6 @@ def _serialize_event(self, event) -> dict: } -def _payload_to_automation(payload: dict) -> PersistedAutomationDefinition: - automation_id = str(payload.get("id") or "").strip() - if not automation_id: - raise ValueError("Automation id is required.") - output_dir_raw = str(payload.get("output_dir") or "").strip() - output_dir = Path(output_dir_raw) if output_dir_raw else Path() - result_json_raw = str(payload.get("result_json_path") or "").strip() - return PersistedAutomationDefinition( - id=automation_id, - name=str(payload.get("name") or automation_id), - description=str(payload.get("description") or ""), - version=str(payload.get("version") or 
"0.1.0"), - task_path=Path(str(payload.get("task_path") or "")), - task_meta_path=Path(str(payload.get("task_meta_path") or "")), - entrypoint=str(payload.get("entrypoint") or "run"), - enabled=bool(payload.get("enabled")), - schedule_kind=str(payload.get("schedule_kind") or "manual"), - schedule_payload=dict(payload.get("schedule_payload") or {}), - timezone=str(payload.get("timezone") or "UTC"), - output_dir=output_dir, - result_json_path=Path(result_json_raw) if result_json_raw else None, - stdout_mode=str(payload.get("stdout_mode") or "json"), - input_overrides=dict(payload.get("input_overrides") or {}), - before_run_hooks=tuple(payload.get("before_run_hooks") or []), - after_success_hooks=tuple(payload.get("after_success_hooks") or []), - after_failure_hooks=tuple(payload.get("after_failure_hooks") or []), - retry_attempts=int(payload.get("retry_attempts") or 0), - retry_backoff_seconds=int(payload.get("retry_backoff_seconds") or 0), - timeout_seconds=float(payload["timeout_seconds"]) - if payload.get("timeout_seconds") is not None - else None, - ) - - -def _automation_to_toml(automation: PersistedAutomationDefinition) -> str: - schedule_values = {"mode": automation.schedule_kind, "timezone": automation.timezone} - for key, value in automation.schedule_payload.items(): - if key not in {"mode", "timezone"}: - schedule_values[key] = value - runtime_values: dict[str, object | None] = { - "retry_attempts": automation.retry_attempts, - "retry_backoff_seconds": automation.retry_backoff_seconds, - "timeout_seconds": automation.timeout_seconds, - "log_level": "info", - } - return dumps_toml_sections( - [ - ( - "automation", - { - "id": automation.id, - "name": automation.name, - "description": automation.description, - "version": automation.version, - }, - ), - ( - "task", - { - "path": str(automation.task_path), - "meta_path": str(automation.task_meta_path), - "entrypoint": automation.entrypoint, - }, - ), - ("inputs", dict(automation.input_overrides)), - 
("schedule", schedule_values), - ( - "outputs", - { - "artifact_dir": str(automation.output_dir), - "result_json_path": ( - str(automation.result_json_path) if automation.result_json_path else "" - ), - "stdout": automation.stdout_mode, - }, - ), - ( - "hooks", - { - "before_run": list(automation.before_run_hooks), - "after_success": list(automation.after_success_hooks), - "after_failure": list(automation.after_failure_hooks), - }, - ), - ("runtime", runtime_values), - ] - ) - - def _read_text(path: Path | None) -> str: if path is None or not path.exists(): return "" diff --git a/src/browser_cli/automation/models.py b/src/browser_cli/automation/models.py index a65bd16..2281a95 100644 --- a/src/browser_cli/automation/models.py +++ b/src/browser_cli/automation/models.py @@ -83,6 +83,7 @@ class PersistedAutomationDefinition: retry_attempts: int = 0 retry_backoff_seconds: int = 0 timeout_seconds: float | None = None + log_level: str = "info" created_at: str | None = None updated_at: str | None = None last_run_at: str | None = None @@ -121,26 +122,6 @@ def manifest_to_persisted_definition( *, enabled: bool = False, ) -> PersistedAutomationDefinition: - return PersistedAutomationDefinition( - id=manifest.automation.id, - name=manifest.automation.name, - task_path=manifest.task.path, - task_meta_path=manifest.task.meta_path, - output_dir=manifest.outputs.artifact_dir, - description=manifest.automation.description, - version=str(manifest.automation.version), - entrypoint=manifest.task.entrypoint, - enabled=enabled, - schedule_kind=str(manifest.schedule.get("mode") or "manual"), - schedule_payload=dict(manifest.schedule), - timezone=str(manifest.schedule.get("timezone") or "UTC"), - result_json_path=manifest.outputs.result_json_path, - stdout_mode=manifest.outputs.stdout, - input_overrides=dict(manifest.inputs), - before_run_hooks=manifest.hooks.before_run, - after_success_hooks=manifest.hooks.after_success, - after_failure_hooks=manifest.hooks.after_failure, - 
retry_attempts=manifest.runtime.retry_attempts, - retry_backoff_seconds=manifest.runtime.retry_backoff_seconds, - timeout_seconds=manifest.runtime.timeout_seconds, - ) + from browser_cli.automation.projections import manifest_to_persisted_definition as _impl + + return _impl(manifest, enabled=enabled) diff --git a/src/browser_cli/automation/persistence/store.py b/src/browser_cli/automation/persistence/store.py index f329810..daaefa1 100644 --- a/src/browser_cli/automation/persistence/store.py +++ b/src/browser_cli/automation/persistence/store.py @@ -63,6 +63,7 @@ def _initialize(self) -> None: retry_attempts INTEGER NOT NULL DEFAULT 0, retry_backoff_seconds INTEGER NOT NULL DEFAULT 0, timeout_seconds REAL, + log_level TEXT NOT NULL DEFAULT 'info', created_at TEXT NOT NULL, updated_at TEXT NOT NULL, last_run_at TEXT, @@ -100,6 +101,25 @@ def _initialize(self) -> None: ON automation_run_events(run_id, created_at ASC); """ ) + self._ensure_column( + conn, + "automations", + "log_level", + "TEXT NOT NULL DEFAULT 'info'", + ) + + @staticmethod + def _ensure_column( + conn: sqlite3.Connection, + table: str, + column: str, + ddl: str, + ) -> None: + columns = { + str(row["name"]) for row in conn.execute(f"PRAGMA table_info({table})").fetchall() + } + if column not in columns: + conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}") @contextmanager def _connect(self) -> Iterator[sqlite3.Connection]: @@ -126,14 +146,14 @@ def upsert_automation( schedule_payload_json, timezone, output_dir, result_json_path, stdout_mode, input_overrides_json, before_run_hooks_json, after_success_hooks_json, after_failure_hooks_json, retry_attempts, retry_backoff_seconds, - timeout_seconds, created_at, updated_at, last_run_at, next_run_at + timeout_seconds, log_level, created_at, updated_at, last_run_at, next_run_at ) VALUES ( :id, :name, :description, :version, :task_path, :task_meta_path, :entrypoint, :enabled, :definition_status, :definition_error, :schedule_kind, 
:schedule_payload_json, :timezone, :output_dir, :result_json_path, :stdout_mode, :input_overrides_json, :before_run_hooks_json, :after_success_hooks_json, :after_failure_hooks_json, :retry_attempts, :retry_backoff_seconds, - :timeout_seconds, :created_at, :updated_at, :last_run_at, :next_run_at + :timeout_seconds, :log_level, :created_at, :updated_at, :last_run_at, :next_run_at ) ON CONFLICT(id) DO UPDATE SET name = excluded.name, @@ -158,6 +178,7 @@ def upsert_automation( retry_attempts = excluded.retry_attempts, retry_backoff_seconds = excluded.retry_backoff_seconds, timeout_seconds = excluded.timeout_seconds, + log_level = excluded.log_level, updated_at = excluded.updated_at, next_run_at = excluded.next_run_at """, @@ -557,6 +578,7 @@ def _automation_to_row(automation: PersistedAutomationDefinition) -> dict[str, A "retry_attempts": automation.retry_attempts, "retry_backoff_seconds": automation.retry_backoff_seconds, "timeout_seconds": automation.timeout_seconds, + "log_level": automation.log_level, "created_at": automation.created_at, "updated_at": automation.updated_at, "last_run_at": automation.last_run_at, @@ -591,6 +613,7 @@ def _row_to_automation(row: sqlite3.Row) -> PersistedAutomationDefinition: timeout_seconds=float(row["timeout_seconds"]) if row["timeout_seconds"] is not None else None, + log_level=str(row["log_level"] or "info"), created_at=str(row["created_at"]) if row["created_at"] else None, updated_at=str(row["updated_at"]) if row["updated_at"] else None, last_run_at=str(row["last_run_at"]) if row["last_run_at"] else None, diff --git a/src/browser_cli/automation/projections.py b/src/browser_cli/automation/projections.py new file mode 100644 index 0000000..993cf90 --- /dev/null +++ b/src/browser_cli/automation/projections.py @@ -0,0 +1,279 @@ +"""Shared semantic projections for automation manifests and live definitions.""" + +from __future__ import annotations + +from pathlib import Path +from typing import Any + +from browser_cli.automation.models 
import ( + AutomationManifest, + PersistedAutomationDefinition, +) +from browser_cli.automation.toml import dumps_toml_sections + + +def manifest_to_persisted_definition( + manifest: AutomationManifest, + *, + enabled: bool = False, +) -> PersistedAutomationDefinition: + return PersistedAutomationDefinition( + id=manifest.automation.id, + name=manifest.automation.name, + task_path=manifest.task.path, + task_meta_path=manifest.task.meta_path, + output_dir=manifest.outputs.artifact_dir, + description=manifest.automation.description, + version=str(manifest.automation.version), + entrypoint=manifest.task.entrypoint, + enabled=enabled, + schedule_kind=str(manifest.schedule.get("mode") or "manual"), + schedule_payload=dict(manifest.schedule), + timezone=str(manifest.schedule.get("timezone") or "UTC"), + result_json_path=manifest.outputs.result_json_path, + stdout_mode=manifest.outputs.stdout, + input_overrides=dict(manifest.inputs), + before_run_hooks=manifest.hooks.before_run, + after_success_hooks=manifest.hooks.after_success, + after_failure_hooks=manifest.hooks.after_failure, + retry_attempts=manifest.runtime.retry_attempts, + retry_backoff_seconds=manifest.runtime.retry_backoff_seconds, + timeout_seconds=manifest.runtime.timeout_seconds, + log_level=manifest.runtime.log_level, + ) + + +def payload_to_persisted_definition(payload: dict[str, Any]) -> PersistedAutomationDefinition: + automation_id = str(payload.get("id") or "").strip() + if not automation_id: + raise ValueError("Automation id is required.") + output_dir_raw = str(payload.get("output_dir") or "").strip() + result_json_raw = str(payload.get("result_json_path") or "").strip() + return PersistedAutomationDefinition( + id=automation_id, + name=str(payload.get("name") or automation_id), + description=str(payload.get("description") or ""), + version=str(payload.get("version") or "0.1.0"), + task_path=Path(str(payload.get("task_path") or "")), + task_meta_path=Path(str(payload.get("task_meta_path") or "")), + 
entrypoint=str(payload.get("entrypoint") or "run"), + enabled=bool(payload.get("enabled")), + definition_status=str(payload.get("definition_status") or "valid"), + definition_error=str(payload.get("definition_error")) + if payload.get("definition_error") + else None, + schedule_kind=str(payload.get("schedule_kind") or "manual"), + schedule_payload=dict(payload.get("schedule_payload") or {}), + timezone=str(payload.get("timezone") or "UTC"), + output_dir=Path(output_dir_raw) if output_dir_raw else Path(), + result_json_path=Path(result_json_raw) if result_json_raw else None, + stdout_mode=str(payload.get("stdout_mode") or "json"), + input_overrides=dict(payload.get("input_overrides") or {}), + before_run_hooks=tuple(payload.get("before_run_hooks") or []), + after_success_hooks=tuple(payload.get("after_success_hooks") or []), + after_failure_hooks=tuple(payload.get("after_failure_hooks") or []), + retry_attempts=int(payload.get("retry_attempts") or 0), + retry_backoff_seconds=int(payload.get("retry_backoff_seconds") or 0), + timeout_seconds=float(payload["timeout_seconds"]) + if payload.get("timeout_seconds") is not None + else None, + log_level=str(payload.get("log_level") or "info"), + ) + + +def persisted_definition_to_manifest_toml( + automation: PersistedAutomationDefinition, +) -> str: + return dumps_toml_sections( + [ + ( + "automation", + { + "id": automation.id, + "name": automation.name, + "description": automation.description, + "version": automation.version, + }, + ), + ( + "task", + { + "path": str(automation.task_path), + "meta_path": str(automation.task_meta_path), + "entrypoint": automation.entrypoint, + }, + ), + ("inputs", dict(automation.input_overrides)), + ("schedule", _schedule_values(automation)), + ( + "outputs", + { + "artifact_dir": str(automation.output_dir), + "result_json_path": str(automation.result_json_path) + if automation.result_json_path + else None, + "stdout": automation.stdout_mode, + }, + ), + ( + "hooks", + { + "before_run": 
list(automation.before_run_hooks), + "after_success": list(automation.after_success_hooks), + "after_failure": list(automation.after_failure_hooks), + }, + ), + ( + "runtime", + { + "retry_attempts": automation.retry_attempts, + "retry_backoff_seconds": automation.retry_backoff_seconds, + "timeout_seconds": automation.timeout_seconds, + "log_level": automation.log_level, + }, + ), + ] + ) + + +def manifest_to_snapshot_manifest_toml( + manifest: AutomationManifest, + *, + version: int, + task_path: Path, + task_meta_path: Path, + output_dir: Path, +) -> str: + result_json_path = _remap_result_json_path( + manifest.outputs.artifact_dir, + manifest.outputs.result_json_path, + output_dir, + ) + return dumps_toml_sections( + [ + ( + "automation", + { + "id": manifest.automation.id, + "name": manifest.automation.name, + "description": manifest.automation.description, + "version": str(version), + }, + ), + ( + "task", + { + "path": str(task_path), + "meta_path": str(task_meta_path), + "entrypoint": manifest.task.entrypoint, + }, + ), + ("inputs", dict(manifest.inputs)), + ("schedule", dict(manifest.schedule)), + ( + "outputs", + { + "artifact_dir": str(output_dir), + "result_json_path": str(result_json_path) if result_json_path else None, + "stdout": manifest.outputs.stdout, + }, + ), + ( + "hooks", + { + "before_run": list(manifest.hooks.before_run), + "after_success": list(manifest.hooks.after_success), + "after_failure": list(manifest.hooks.after_failure), + }, + ), + ( + "runtime", + { + "retry_attempts": manifest.runtime.retry_attempts, + "retry_backoff_seconds": manifest.runtime.retry_backoff_seconds, + "timeout_seconds": manifest.runtime.timeout_seconds, + "log_level": manifest.runtime.log_level, + }, + ), + ] + ) + + +def manifest_to_config_payload(manifest: AutomationManifest) -> dict[str, object]: + return { + "id": manifest.automation.id, + "name": manifest.automation.name, + "description": manifest.automation.description, + "version": 
str(manifest.automation.version), + "task_path": str(manifest.task.path), + "task_meta_path": str(manifest.task.meta_path), + "entrypoint": manifest.task.entrypoint, + "schedule_kind": str(manifest.schedule.get("mode") or "manual"), + "schedule_payload": dict(manifest.schedule), + "timezone": str(manifest.schedule.get("timezone") or "UTC"), + "output_dir": str(manifest.outputs.artifact_dir), + "result_json_path": str(manifest.outputs.result_json_path) + if manifest.outputs.result_json_path + else None, + "stdout_mode": manifest.outputs.stdout, + "input_overrides": dict(manifest.inputs), + "before_run_hooks": list(manifest.hooks.before_run), + "after_success_hooks": list(manifest.hooks.after_success), + "after_failure_hooks": list(manifest.hooks.after_failure), + "retry_attempts": manifest.runtime.retry_attempts, + "retry_backoff_seconds": manifest.runtime.retry_backoff_seconds, + "timeout_seconds": manifest.runtime.timeout_seconds, + "log_level": manifest.runtime.log_level, + } + + +def persisted_definition_to_config_payload( + automation: PersistedAutomationDefinition, +) -> dict[str, object]: + return { + "id": automation.id, + "name": automation.name, + "description": automation.description, + "version": automation.version, + "task_path": str(automation.task_path), + "task_meta_path": str(automation.task_meta_path), + "entrypoint": automation.entrypoint, + "schedule_kind": automation.schedule_kind, + "schedule_payload": dict(automation.schedule_payload), + "timezone": automation.timezone, + "output_dir": str(automation.output_dir), + "result_json_path": str(automation.result_json_path) + if automation.result_json_path + else None, + "stdout_mode": automation.stdout_mode, + "input_overrides": dict(automation.input_overrides), + "before_run_hooks": list(automation.before_run_hooks), + "after_success_hooks": list(automation.after_success_hooks), + "after_failure_hooks": list(automation.after_failure_hooks), + "retry_attempts": automation.retry_attempts, + 
"retry_backoff_seconds": automation.retry_backoff_seconds, + "timeout_seconds": automation.timeout_seconds, + "log_level": automation.log_level, + } + + +def _schedule_values(automation: PersistedAutomationDefinition) -> dict[str, Any]: + values = {"mode": automation.schedule_kind, "timezone": automation.timezone} + for key, value in automation.schedule_payload.items(): + if key not in {"mode", "timezone"}: + values[key] = value + return values + + +def _remap_result_json_path( + source_artifact_dir: Path, + source_result_json_path: Path | None, + target_artifact_dir: Path, +) -> Path | None: + if source_result_json_path is None: + return None + try: + relative = source_result_json_path.relative_to(source_artifact_dir) + except ValueError: + return target_artifact_dir / source_result_json_path.name + return target_artifact_dir / relative diff --git a/src/browser_cli/automation/publisher.py b/src/browser_cli/automation/publisher.py index c73df9a..dac1dc1 100644 --- a/src/browser_cli/automation/publisher.py +++ b/src/browser_cli/automation/publisher.py @@ -9,7 +9,7 @@ from typing import Any from browser_cli.automation.loader import load_automation_manifest -from browser_cli.automation.models import AutomationManifest +from browser_cli.automation.projections import manifest_to_snapshot_manifest_toml from browser_cli.automation.toml import dumps_toml_sections from browser_cli.constants import AppPaths from browser_cli.task_runtime import validate_task_dir @@ -51,7 +51,7 @@ def publish_task_dir(task_dir: Path, *, app_paths: AppPaths) -> PublishedAutomat if source_manifest_path.exists(): manifest_source = "task_dir" source_manifest = load_automation_manifest(source_manifest_path) - rendered_manifest = render_automation_manifest_from_manifest( + rendered_manifest = manifest_to_snapshot_manifest_toml( source_manifest, version=version, task_path=task_path, @@ -146,84 +146,6 @@ def render_automation_manifest( return dumps_toml_sections(sections) -def 
render_automation_manifest_from_manifest( - manifest: AutomationManifest, - *, - version: int, - task_path: Path, - task_meta_path: Path, - output_dir: Path, -) -> str: - schedule = dict(manifest.schedule) - artifact_dir = output_dir - result_json_path = _remap_result_json_path( - manifest.outputs.artifact_dir, - manifest.outputs.result_json_path, - artifact_dir, - ) - sections: list[tuple[str, dict[str, Any]]] = [ - ( - "automation", - { - "id": str(manifest.automation.id), - "name": str(manifest.automation.name), - "description": str(manifest.automation.description), - "version": str(version), - }, - ), - ( - "task", - { - "path": str(task_path), - "meta_path": str(task_meta_path), - "entrypoint": str(manifest.task.entrypoint), - }, - ), - ("inputs", dict(manifest.inputs)), - ("schedule", schedule), - ( - "outputs", - { - "artifact_dir": str(artifact_dir), - "result_json_path": str(result_json_path) if result_json_path else None, - "stdout": str(manifest.outputs.stdout), - }, - ), - ( - "hooks", - { - "before_run": list(manifest.hooks.before_run), - "after_success": list(manifest.hooks.after_success), - "after_failure": list(manifest.hooks.after_failure), - }, - ), - ( - "runtime", - { - "retry_attempts": int(manifest.runtime.retry_attempts), - "retry_backoff_seconds": int(manifest.runtime.retry_backoff_seconds), - "timeout_seconds": manifest.runtime.timeout_seconds, - "log_level": str(manifest.runtime.log_level), - }, - ), - ] - return dumps_toml_sections(sections) - - -def _remap_result_json_path( - source_artifact_dir: Path, - source_result_json_path: Path | None, - target_artifact_dir: Path, -) -> Path | None: - if source_result_json_path is None: - return None - try: - relative = source_result_json_path.relative_to(source_artifact_dir) - except ValueError: - return target_artifact_dir / source_result_json_path.name - return target_artifact_dir / relative - - def _next_version(versions_dir: Path) -> int: versions = [ int(path.name) for path in 
versions_dir.iterdir() if path.is_dir() and path.name.isdigit() diff --git a/src/browser_cli/commands/automation.py b/src/browser_cli/commands/automation.py index 5fde78a..4ec34ee 100644 --- a/src/browser_cli/commands/automation.py +++ b/src/browser_cli/commands/automation.py @@ -8,6 +8,12 @@ from browser_cli.automation import load_automation_manifest, publish_task_dir from browser_cli.automation.models import AutomationManifest +from browser_cli.automation.projections import ( + manifest_to_config_payload, + manifest_to_persisted_definition, + payload_to_persisted_definition, + persisted_definition_to_config_payload, +) from browser_cli.automation.service.client import ( automation_service_ui_url, ensure_automation_service_running, @@ -86,6 +92,13 @@ def run_automation_command(args: Namespace) -> str: ) versions = _load_snapshot_versions(args.automation_id) live_automation_data = dict(payload.get("data") or {}) + live_config = ( + persisted_definition_to_config_payload( + payload_to_persisted_definition(live_automation_data) + ) + if live_automation_data + else None + ) selected = ( _select_snapshot_version( args.automation_id, @@ -113,7 +126,7 @@ def run_automation_command(args: Namespace) -> str: "data": { "snapshot_config": snapshot_config, "snapshot_config_error": snapshot_config_error, - "live_config": live_automation_data, + "live_config": live_config, "versions": versions, "selected_version": selected, "latest_run": latest_run_payload, @@ -131,12 +144,15 @@ def run_automation_command(args: Namespace) -> str: if subcommand == "import": ensure_automation_service_running() manifest = load_automation_manifest(args.path) + automation = manifest_to_persisted_definition( + manifest, enabled=bool(getattr(args, "enable", False)) + ) + body = persisted_definition_to_config_payload(automation) + body["enabled"] = automation.enabled payload = request_automation_service( "POST", "/api/automations", - body=_manifest_to_automation_payload( - manifest, 
enabled=bool(getattr(args, "enable", False)) - ), + body=body, start_if_needed=False, ) payload["meta"] = {"action": "automation-import"} @@ -285,63 +301,4 @@ def _load_snapshot_manifest(path: Path) -> tuple[AutomationManifest | None, str def _snapshot_manifest_to_automation_payload(manifest: AutomationManifest) -> dict[str, object]: - schedule = dict(manifest.schedule) - return { - "id": manifest.automation.id, - "name": manifest.automation.name, - "description": manifest.automation.description, - "version": manifest.automation.version, - "task_path": str(manifest.task.path), - "task_meta_path": str(manifest.task.meta_path), - "entrypoint": manifest.task.entrypoint, - "enabled": None, - "definition_status": "snapshot", - "definition_error": None, - "schedule_kind": str(schedule.get("mode") or "manual"), - "schedule_payload": schedule, - "timezone": str(schedule.get("timezone") or "UTC"), - "output_dir": str(manifest.outputs.artifact_dir), - "result_json_path": str(manifest.outputs.result_json_path) - if manifest.outputs.result_json_path - else None, - "stdout_mode": manifest.outputs.stdout, - "input_overrides": dict(manifest.inputs), - "before_run_hooks": list(manifest.hooks.before_run), - "after_success_hooks": list(manifest.hooks.after_success), - "after_failure_hooks": list(manifest.hooks.after_failure), - "retry_attempts": int(manifest.runtime.retry_attempts or 0), - "retry_backoff_seconds": int(manifest.runtime.retry_backoff_seconds or 0), - "timeout_seconds": manifest.runtime.timeout_seconds, - "created_at": None, - "updated_at": None, - "last_run_at": None, - "next_run_at": None, - "latest_run": None, - } - - -def _manifest_to_automation_payload(manifest, *, enabled: bool) -> dict[str, object]: - schedule = dict(manifest.schedule) - return { - "id": manifest.automation.id, - "name": manifest.automation.name, - "description": manifest.automation.description, - "version": manifest.automation.version, - "task_path": str(manifest.task.path), - "task_meta_path": 
str(manifest.task.meta_path),
-        "entrypoint": manifest.task.entrypoint,
-        "enabled": enabled,
-        "schedule_kind": str(schedule.get("mode") or "manual"),
-        "schedule_payload": schedule,
-        "timezone": str(schedule.get("timezone") or "UTC"),
-        "output_dir": str(manifest.outputs.artifact_dir),
-        "result_json_path": str(manifest.outputs.result_json_path or ""),
-        "stdout_mode": manifest.outputs.stdout,
-        "input_overrides": dict(manifest.inputs),
-        "before_run_hooks": list(manifest.hooks.before_run),
-        "after_success_hooks": list(manifest.hooks.after_success),
-        "after_failure_hooks": list(manifest.hooks.after_failure),
-        "retry_attempts": int(manifest.runtime.retry_attempts or 0),
-        "retry_backoff_seconds": int(manifest.runtime.retry_backoff_seconds or 0),
-        "timeout_seconds": manifest.runtime.timeout_seconds,
-    }
+    return manifest_to_config_payload(manifest)
diff --git a/src/browser_cli/daemon/app.py b/src/browser_cli/daemon/app.py
index 7092176..015ef5a 100644
--- a/src/browser_cli/daemon/app.py
+++ b/src/browser_cli/daemon/app.py
@@ -348,8 +348,8 @@ async def _handle_forward(self, request: DaemonRequest) -> dict[str, Any]:
         return {"page": payload}

     async def _handle_resize(self, request: DaemonRequest) -> dict[str, Any]:
-        width = int(request.args.get("width") or 0)
-        height = int(request.args.get("height") or 0)
+        width = self._require_int(request.args, "width")
+        height = self._require_int(request.args, "height")
         if width <= 0 or height <= 0:
             raise InvalidInputError("width and height must be positive integers.")
         return await self._run_active_page_action(
@@ -473,43 +473,52 @@ async def _handle_key_up(self, request: DaemonRequest) -> dict[str, Any]:
         )

     async def _handle_scroll(self, request: DaemonRequest) -> dict[str, Any]:
-        dx = int(request.args.get("dx") or 0)
-        dy = int(request.args.get("dy") or 700)
+        dx = self._optional_int(request.args, "dx") or 0
+        dy = self._optional_int(request.args, "dy") or 700
         return await self._run_active_page_action(
             request, lambda page_id: self._state.browser_service.wheel(page_id, dx=dx, dy=dy)
         )

     async def _handle_mouse_click(self, request: DaemonRequest) -> dict[str, Any]:
+        x = self._require_int(request.args, "x")
+        y = self._require_int(request.args, "y")
+        count = self._optional_int(request.args, "count") or 1
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.mouse_click(
                 page_id,
-                x=int(request.args.get("x")),
-                y=int(request.args.get("y")),
+                x=x,
+                y=y,
                 button=str(request.args.get("button") or "left"),
-                count=int(request.args.get("count") or 1),
+                count=count,
             ),
         )

     async def _handle_mouse_move(self, request: DaemonRequest) -> dict[str, Any]:
+        x = self._require_int(request.args, "x")
+        y = self._require_int(request.args, "y")
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.mouse_move(
                 page_id,
-                x=int(request.args.get("x")),
-                y=int(request.args.get("y")),
+                x=x,
+                y=y,
             ),
         )

     async def _handle_mouse_drag(self, request: DaemonRequest) -> dict[str, Any]:
+        x1 = self._require_int(request.args, "x1")
+        y1 = self._require_int(request.args, "y1")
+        x2 = self._require_int(request.args, "x2")
+        y2 = self._require_int(request.args, "y2")
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.mouse_drag(
                 page_id,
-                x1=int(request.args.get("x1")),
-                y1=int(request.args.get("y1")),
-                x2=int(request.args.get("x2")),
-                y2=int(request.args.get("y2")),
+                x1=x1,
+                y1=y1,
+                x2=x2,
+                y2=y2,
             ),
         )
@@ -545,7 +554,7 @@ async def _handle_eval_on(self, request: DaemonRequest) -> dict[str, Any]:
         )

     async def _handle_wait(self, request: DaemonRequest) -> dict[str, Any]:
-        seconds = request.args.get("seconds")
+        seconds = self._optional_float(request.args, "seconds")
         text = request.args.get("text")
         gone = bool(request.args.get("gone"))
         exact = bool(request.args.get("exact"))
@@ -553,7 +562,7 @@
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.wait(
                 page_id,
-                seconds=float(seconds) if seconds is not None else None,
+                seconds=seconds,
                 text=str(text) if text else None,
                 gone=gone,
                 exact=exact,
@@ -561,7 +570,7 @@
         )

     async def _handle_wait_network(self, request: DaemonRequest) -> dict[str, Any]:
-        timeout_seconds = float(request.args.get("timeout") or 30.0)
+        timeout_seconds = self._optional_float(request.args, "timeout") or 30.0
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.wait_for_network_idle(
@@ -612,7 +621,7 @@ async def _handle_network_start(self, request: DaemonRequest) -> dict[str, Any]:

     async def _handle_network_wait(self, request: DaemonRequest) -> dict[str, Any]:
         filters = self._network_filters_from_request(request.args)
-        timeout_seconds = float(request.args.get("timeout_seconds") or 30.0)
+        timeout_seconds = self._optional_float(request.args, "timeout_seconds") or 30.0
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.wait_for_network_record(
@@ -686,9 +695,7 @@ async def _handle_cookie_set(self, request: DaemonRequest) -> dict[str, Any]:
                 value=self._require_str(request.args, "value"),
                 domain=self._optional_str(request.args, "domain"),
                 path=str(request.args.get("path") or "/"),
-                expires=float(request.args["expires"])
-                if request.args.get("expires") is not None
-                else None,
+                expires=self._optional_float(request.args, "expires"),
                 http_only=bool(request.args.get("http_only")),
                 secure=bool(request.args.get("secure")),
                 same_site=self._optional_str(request.args, "same_site"),
@@ -725,7 +732,7 @@ async def _handle_storage_load(self, request: DaemonRequest) -> dict[str, Any]:
     async def _handle_verify_text(self, request: DaemonRequest) -> dict[str, Any]:
         text = self._require_str(request.args, "text")
         exact = bool(request.args.get("exact"))
-        timeout = float(request.args.get("timeout") or 5.0)
+        timeout = self._optional_float(request.args, "timeout") or 5.0
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.verify_text(
@@ -736,7 +743,7 @@
     async def _handle_verify_visible(self, request: DaemonRequest) -> dict[str, Any]:
         role = self._require_str(request.args, "role")
         name = self._require_str(request.args, "name")
-        timeout = float(request.args.get("timeout") or 5.0)
+        timeout = self._optional_float(request.args, "timeout") or 5.0
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.verify_visible(
@@ -811,14 +818,14 @@ async def _handle_trace_stop(self, request: DaemonRequest) -> dict[str, Any]:
         )

     async def _handle_video_start(self, request: DaemonRequest) -> dict[str, Any]:
-        width = request.args.get("width")
-        height = request.args.get("height")
+        width = self._optional_int(request.args, "width")
+        height = self._optional_int(request.args, "height")
         return await self._run_active_page_action(
             request,
             lambda page_id: self._state.browser_service.start_video(
                 page_id,
-                width=int(width) if width is not None else None,
-                height=int(height) if height is not None else None,
+                width=width,
+                height=height,
             ),
         )
@@ -863,14 +870,53 @@ def _optional_str(args: dict[str, Any], key: str) -> str | None:
         value = str(args.get(key) or "").strip()
         return value or None

+    @classmethod
+    def _require_int(cls, args: dict[str, Any], key: str) -> int:
+        raw = args.get(key)
+        if raw is None or str(raw).strip() == "":
+            raise InvalidInputError(f"{key} is required.")
+        try:
+            return int(raw)
+        except (TypeError, ValueError) as exc:
+            raise InvalidInputError(f"{key} must be an integer.") from exc
+
+    @classmethod
+    def _optional_int(cls, args: dict[str, Any], key: str) -> int | None:
+        raw = args.get(key)
+        if raw is None or str(raw).strip() == "":
+            return None
+        try:
+            return int(raw)
+        except (TypeError, ValueError) as exc:
+            raise InvalidInputError(f"{key} must be an integer.") from exc
+
+    @classmethod
+    def _require_float(cls, args: dict[str, Any], key: str) -> float:
+        raw = args.get(key)
+        if raw is None or str(raw).strip() == "":
+            raise InvalidInputError(f"{key} is required.")
+        try:
+            return float(raw)
+        except (TypeError, ValueError) as exc:
+            raise InvalidInputError(f"{key} must be a number.") from exc
+
+    @classmethod
+    def _optional_float(cls, args: dict[str, Any], key: str) -> float | None:
+        raw = args.get(key)
+        if raw is None or str(raw).strip() == "":
+            return None
+        try:
+            return float(raw)
+        except (TypeError, ValueError) as exc:
+            raise InvalidInputError(f"{key} must be a number.") from exc
+
     @classmethod
     def _network_filters_from_request(cls, args: dict[str, Any]) -> dict[str, Any]:
-        status = args.get("status")
         return {
             "url_contains": cls._optional_str(args, "url_contains"),
             "url_regex": cls._optional_str(args, "url_regex"),
             "method": cls._optional_str(args, "method"),
-            "status": int(status) if status is not None else None,
+            "status": cls._optional_int(args, "status"),
             "resource_type": cls._optional_str(args, "resource_type"),
             "mime_contains": cls._optional_str(args, "mime_contains"),
             "include_static": bool(args.get("include_static")),
diff --git a/tests/unit/test_automation_api.py b/tests/unit/test_automation_api.py
index 110a64a..d1959cf 100644
--- a/tests/unit/test_automation_api.py
+++ b/tests/unit/test_automation_api.py
@@ -6,8 +6,9 @@
 from pathlib import Path

 from browser_cli.automation.api import AutomationHttpServer, AutomationRequestHandler
-from browser_cli.automation.api.server import _payload_to_automation
+from browser_cli.automation.loader import load_automation_manifest
 from browser_cli.automation.persistence import AutomationStore
+from browser_cli.automation.projections import payload_to_persisted_definition
 from browser_cli.automation.service.runtime import AutomationServiceRuntime

 REPO_ROOT = Path(__file__).resolve().parents[2]
@@ -77,7 +78,6 @@ def test_automation_api_crud_and_export(tmp_path: Path) -> None:
         )
         assert "[automation]" in export_payload["data"]["toml"]
         assert 'id = "interactive_reveal_capture"' in export_payload["data"]["toml"]
-        assert 'result_json_path = ""' in export_payload["data"]["toml"]
         assert "retry_backoff_seconds = 7" in export_payload["data"]["toml"]
         assert "timeout_seconds = 0" not in export_payload["data"]["toml"]
     finally:
@@ -108,8 +108,8 @@ def test_automation_api_returns_not_found_for_missing_run(tmp_path: Path) -> Non
         thread.join(timeout=2.0)


-def test_payload_to_automation_preserves_stdout_and_retry_backoff() -> None:
-    persisted = _payload_to_automation(
+def test_payload_to_automation_preserves_stdout_retry_backoff_and_log_level() -> None:
+    persisted = payload_to_persisted_definition(
         {
             "id": "demo",
             "name": "Demo",
@@ -120,9 +120,79 @@ def test_payload_to_automation_preserves_stdout_and_retry_backoff() -> None:
             "retry_attempts": 1,
             "retry_backoff_seconds": 9,
             "timeout_seconds": 4.0,
+            "log_level": "debug",
         }
     )

     assert persisted.stdout_mode == "text"
     assert persisted.retry_backoff_seconds == 9
     assert persisted.timeout_seconds == 4.0
+    assert persisted.log_level == "debug"
+
+
+def test_automation_api_export_round_trips_supported_fields(tmp_path: Path) -> None:
+    runtime = AutomationServiceRuntime(store=AutomationStore(tmp_path / "automations.db"))
+    server = AutomationHttpServer(("127.0.0.1", 0), AutomationRequestHandler, runtime)
+    thread = threading.Thread(target=server.serve_forever, daemon=True)
+    thread.start()
+    host, port = server.server_address[:2]
+    try:
+        (tmp_path / "task.py").write_text(
+            "def run(flow, inputs):\n return {'ok': True}\n",
+            encoding="utf-8",
+        )
+        (tmp_path / "task.meta.json").write_text(
+            '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+            encoding="utf-8",
+        )
+        create_payload = _request(
+            host,
+            int(port),
+            "POST",
+            "/api/automations",
+            {
+                "id": "demo",
+                "name": "Demo",
+                "description": "Round trip",
+                "version": "7",
+                "task_path": str(tmp_path / "task.py"),
+                "task_meta_path": str(tmp_path / "task.meta.json"),
+                "entrypoint": "run",
+                "schedule_kind": "interval",
+                "schedule_payload": {
+                    "mode": "interval",
+                    "interval_seconds": 900,
+                    "timezone": "Asia/Shanghai",
+                },
+                "timezone": "Asia/Shanghai",
+                "output_dir": str(tmp_path / "runs"),
+                "result_json_path": str(tmp_path / "runs" / "result.json"),
+                "stdout_mode": "text",
+                "input_overrides": {"url": "https://example.com"},
+                "before_run_hooks": ["echo before"],
+                "after_success_hooks": ["echo success"],
+                "after_failure_hooks": ["echo failure"],
+                "retry_attempts": 2,
+                "retry_backoff_seconds": 11,
+                "timeout_seconds": 42.5,
+                "log_level": "debug",
+            },
+        )
+        assert create_payload["data"]["id"] == "demo"
+
+        export_payload = _request(host, int(port), "GET", "/api/automations/demo/export")
+        manifest_path = tmp_path / "exported.toml"
+        manifest_path.write_text(export_payload["data"]["toml"], encoding="utf-8")
+        manifest = load_automation_manifest(manifest_path)
+
+        assert manifest.inputs == {"url": "https://example.com"}
+        assert manifest.schedule["interval_seconds"] == 900
+        assert manifest.outputs.stdout == "text"
+        assert manifest.hooks.after_success == ("echo success",)
+        assert manifest.runtime.retry_backoff_seconds == 11
+        assert manifest.runtime.timeout_seconds == 42.5
+        assert manifest.runtime.log_level == "debug"
+    finally:
+        server.shutdown()
+        server.server_close()
+        thread.join(timeout=2.0)
diff --git a/tests/unit/test_automation_commands.py b/tests/unit/test_automation_commands.py
index 0f8e4b5..af5ef4b 100644
--- a/tests/unit/test_automation_commands.py
+++ b/tests/unit/test_automation_commands.py
@@ -264,3 +264,102 @@
     assert payload["data"]["snapshot_config"] is None
     assert "snapshot_config_error" in payload["data"]
     assert payload["data"]["live_config"]["name"] == "Demo Live"
+
+
+def test_automation_inspect_version_aligns_snapshot_and_live_config_shapes(
+    monkeypatch, tmp_path: Path
+) -> None:
+    monkeypatch.setenv("BROWSER_CLI_HOME", str(tmp_path / "home"))
+    version_dir = tmp_path / "home" / "automations" / "demo" / "versions" / "1"
+    version_dir.mkdir(parents=True)
+    (version_dir / "task.py").write_text(
+        "def run(flow, inputs):\n return {'ok': True}\n", encoding="utf-8"
+    )
+    (version_dir / "task.meta.json").write_text(
+        '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+        encoding="utf-8",
+    )
+    (version_dir / "automation.toml").write_text(
+        "[automation]\n"
+        'id = "demo"\n'
+        'name = "Demo Snapshot"\n'
+        'description = "Snapshot"\n'
+        'version = "1"\n'
+        "[task]\n"
+        'path = "task.py"\n'
+        'meta_path = "task.meta.json"\n'
+        "[inputs]\n"
+        'url = "https://example.com"\n'
+        "[schedule]\n"
+        'mode = "interval"\n'
+        "interval_seconds = 900\n"
+        'timezone = "Asia/Shanghai"\n'
+        "[outputs]\n"
+        'artifact_dir = "artifacts"\n'
+        'result_json_path = "artifacts/result.json"\n'
+        'stdout = "text"\n'
+        "[hooks]\n"
+        'before_run = ["echo before"]\n'
+        'after_success = ["echo success"]\n'
+        'after_failure = ["echo failure"]\n'
+        "[runtime]\n"
+        "retry_attempts = 2\n"
+        "retry_backoff_seconds = 11\n"
+        "timeout_seconds = 42.5\n"
+        'log_level = "debug"\n',
+        encoding="utf-8",
+    )
+    (version_dir / "publish.json").write_text(
+        f'{{"automation_id":"demo","version":1,"source_task_path":"/tmp/task","snapshot_dir":"{version_dir}"}}',
+        encoding="utf-8",
+    )
+    monkeypatch.setattr(
+        "browser_cli.commands.automation.request_automation_service",
+        lambda method, path, body=None, start_if_needed=True: {
+            "ok": True,
+            "data": {
+                "id": "demo",
+                "name": "Demo Live",
+                "description": "Live",
+                "version": "2",
+                "task_path": str(version_dir / "task.py"),
+                "task_meta_path": str(version_dir / "task.meta.json"),
+                "entrypoint": "run",
+                "schedule_kind": "interval",
+                "schedule_payload": {
+                    "mode": "interval",
+                    "interval_seconds": 900,
+                    "timezone": "Asia/Shanghai",
+                },
+                "timezone": "Asia/Shanghai",
+                "output_dir": str(tmp_path / "home" / "automations" / "demo"),
+                "result_json_path": str(tmp_path / "home" / "automations" / "demo" / "result.json"),
+                "stdout_mode": "text",
+                "input_overrides": {"url": "https://example.com"},
+                "before_run_hooks": ["echo before"],
+                "after_success_hooks": ["echo success"],
+                "after_failure_hooks": ["echo failure"],
+                "retry_attempts": 2,
+                "retry_backoff_seconds": 11,
+                "timeout_seconds": 42.5,
+                "log_level": "debug",
+                "enabled": True,
+                "definition_status": "valid",
+                "latest_run": {"status": "success"},
+            },
+        },
+    )
+
+    payload = json.loads(
+        run_automation_command(
+            Namespace(automation_subcommand="inspect", automation_id="demo", version=1)
+        )
+    )
+
+    snapshot_keys = set(payload["data"]["snapshot_config"])
+    live_keys = set(payload["data"]["live_config"])
+
+    assert snapshot_keys == live_keys
+    assert payload["data"]["snapshot_config"]["log_level"] == "debug"
+    assert payload["data"]["live_config"]["retry_backoff_seconds"] == 11
+    assert payload["data"]["live_config"]["log_level"] == "debug"
diff --git a/tests/unit/test_automation_projections.py b/tests/unit/test_automation_projections.py
new file mode 100644
index 0000000..93bac7c
--- /dev/null
+++ b/tests/unit/test_automation_projections.py
@@ -0,0 +1,192 @@
+from __future__ import annotations
+
+from pathlib import Path
+
+from browser_cli.automation.loader import load_automation_manifest
+from browser_cli.automation.models import PersistedAutomationDefinition
+from browser_cli.automation.projections import (
+    manifest_to_config_payload,
+    manifest_to_persisted_definition,
+    manifest_to_snapshot_manifest_toml,
+    persisted_definition_to_config_payload,
+    persisted_definition_to_manifest_toml,
+)
+
+
+def _write_task_fixture(base_dir: Path) -> Path:
+    task_dir = base_dir / "task"
+    task_dir.mkdir()
+    (task_dir / "task.py").write_text(
+        "def run(flow, inputs):\n return {'ok': True}\n",
+        encoding="utf-8",
+    )
+    (task_dir / "task.meta.json").write_text(
+        '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+        encoding="utf-8",
+    )
+    (task_dir / "automation.toml").write_text(
+        "[automation]\n"
+        'id = "demo"\n'
+        'name = "Demo"\n'
+        'description = "Semantic round-trip"\n'
+        'version = "7"\n'
+        "[task]\n"
+        'path = "task.py"\n'
+        'meta_path = "task.meta.json"\n'
+        'entrypoint = "run"\n'
+        "[inputs]\n"
+        'url = "https://example.com"\n'
+        "[schedule]\n"
+        'mode = "interval"\n'
+        "interval_seconds = 900\n"
+        'timezone = "Asia/Shanghai"\n'
+        "[outputs]\n"
+        'artifact_dir = "artifacts"\n'
+        'result_json_path = "artifacts/result.json"\n'
+        'stdout = "text"\n'
+        "[hooks]\n"
+        'before_run = ["echo before"]\n'
+        'after_success = ["echo success"]\n'
+        'after_failure = ["echo failure"]\n'
+        "[runtime]\n"
+        "retry_attempts = 2\n"
+        "retry_backoff_seconds = 11\n"
+        "timeout_seconds = 42.5\n"
+        'log_level = "debug"\n',
+        encoding="utf-8",
+    )
+    return task_dir
+
+
+def _build_persisted_definition(base_dir: Path) -> PersistedAutomationDefinition:
+    return PersistedAutomationDefinition(
+        id="demo",
+        name="Demo",
+        description="Semantic round-trip",
+        version="7",
+        task_path=base_dir / "live" / "task.py",
+        task_meta_path=base_dir / "live" / "task.meta.json",
+        entrypoint="run",
+        enabled=True,
+        schedule_kind="interval",
+        schedule_payload={
+            "mode": "interval",
+            "interval_seconds": 900,
+            "timezone": "Asia/Shanghai",
+        },
+        timezone="Asia/Shanghai",
+        output_dir=base_dir / "runs",
+        result_json_path=base_dir / "runs" / "result.json",
+        stdout_mode="text",
+        input_overrides={"url": "https://example.com"},
+        before_run_hooks=("echo before",),
+        after_success_hooks=("echo success",),
+        after_failure_hooks=("echo failure",),
+        retry_attempts=2,
+        retry_backoff_seconds=11,
+        timeout_seconds=42.5,
+        log_level="debug",
+    )
+
+
+def test_manifest_to_persisted_definition_preserves_supported_fields(tmp_path: Path) -> None:
+    manifest = load_automation_manifest(_write_task_fixture(tmp_path) / "automation.toml")
+
+    persisted = manifest_to_persisted_definition(manifest, enabled=True)
+
+    assert persisted.description == "Semantic round-trip"
+    assert persisted.schedule_kind == "interval"
+    assert persisted.schedule_payload["interval_seconds"] == 900
+    assert persisted.timezone == "Asia/Shanghai"
+    assert persisted.result_json_path is not None
+    assert persisted.stdout_mode == "text"
+    assert persisted.before_run_hooks == ("echo before",)
+    assert persisted.after_success_hooks == ("echo success",)
+    assert persisted.after_failure_hooks == ("echo failure",)
+    assert persisted.retry_attempts == 2
+    assert persisted.retry_backoff_seconds == 11
+    assert persisted.timeout_seconds == 42.5
+    assert persisted.log_level == "debug"
+
+
+def test_persisted_definition_to_manifest_toml_round_trips_supported_fields(
+    tmp_path: Path,
+) -> None:
+    automation = _build_persisted_definition(tmp_path)
+    automation.task_path.parent.mkdir(parents=True)
+    automation.task_path.write_text("def run(flow, inputs):\n return {}\n", encoding="utf-8")
+    automation.task_meta_path.write_text(
+        '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+        encoding="utf-8",
+    )
+
+    manifest_path = tmp_path / "exported.toml"
+    manifest_path.write_text(
+        persisted_definition_to_manifest_toml(automation),
+        encoding="utf-8",
+    )
+
+    manifest = load_automation_manifest(manifest_path)
+
+    assert manifest.inputs == {"url": "https://example.com"}
+    assert manifest.schedule["interval_seconds"] == 900
+    assert manifest.schedule["timezone"] == "Asia/Shanghai"
+    assert manifest.outputs.stdout == "text"
+    assert manifest.hooks.after_failure == ("echo failure",)
+    assert manifest.runtime.retry_backoff_seconds == 11
+    assert manifest.runtime.timeout_seconds == 42.5
+    assert manifest.runtime.log_level == "debug"
+
+
+def test_manifest_to_snapshot_manifest_toml_remaps_paths_without_losing_supported_fields(
+    tmp_path: Path,
+) -> None:
+    task_dir = _write_task_fixture(tmp_path)
+    manifest = load_automation_manifest(task_dir / "automation.toml")
+    snapshot_dir = tmp_path / "home" / "automations" / "demo" / "versions" / "3"
+    snapshot_dir.mkdir(parents=True)
+    task_path = snapshot_dir / "task.py"
+    task_meta_path = snapshot_dir / "task.meta.json"
+    task_path.write_text("def run(flow, inputs):\n return {}\n", encoding="utf-8")
+    task_meta_path.write_text(
+        '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+        encoding="utf-8",
+    )
+
+    manifest_path = snapshot_dir / "automation.toml"
+    manifest_path.write_text(
+        manifest_to_snapshot_manifest_toml(
+            manifest,
+            version=3,
+            task_path=task_path,
+            task_meta_path=task_meta_path,
+            output_dir=tmp_path / "home" / "automations" / "demo",
+        ),
+        encoding="utf-8",
+    )
+
+    snapshot_manifest = load_automation_manifest(manifest_path)
+
+    assert snapshot_manifest.automation.version == "3"
+    assert snapshot_manifest.task.path == task_path
+    assert snapshot_manifest.task.meta_path == task_meta_path
+    assert snapshot_manifest.outputs.result_json_path is not None
+    assert snapshot_manifest.outputs.result_json_path.name == "result.json"
+    assert snapshot_manifest.hooks.after_success == ("echo success",)
+    assert snapshot_manifest.runtime.log_level == "debug"
+
+
+def test_manifest_and_persisted_config_payloads_share_supported_keys(tmp_path: Path) -> None:
+    manifest = load_automation_manifest(_write_task_fixture(tmp_path) / "automation.toml")
+    persisted = _build_persisted_definition(tmp_path)
+
+    manifest_payload = manifest_to_config_payload(manifest)
+    persisted_payload = persisted_definition_to_config_payload(persisted)
+
+    assert set(manifest_payload) == set(persisted_payload)
+    assert manifest_payload["timezone"] == "Asia/Shanghai"
+    assert persisted_payload["timezone"] == "Asia/Shanghai"
+    assert manifest_payload["retry_backoff_seconds"] == 11
+    assert persisted_payload["retry_backoff_seconds"] == 11
+    assert manifest_payload["log_level"] == "debug"
+    assert persisted_payload["log_level"] == "debug"
diff --git a/tests/unit/test_automation_publish.py b/tests/unit/test_automation_publish.py
index 98098eb..a458811 100644
--- a/tests/unit/test_automation_publish.py
+++ b/tests/unit/test_automation_publish.py
@@ -92,6 +92,46 @@ def test_publish_task_dir_preserves_source_manifest_fields(tmp_path: Path, monke
     assert manifest.runtime.retry_backoff_seconds == 7


+def test_publish_task_dir_preserves_hooks_timeout_and_log_level(
+    tmp_path: Path, monkeypatch
+) -> None:
+    monkeypatch.setenv("BROWSER_CLI_HOME", str(tmp_path / "home"))
+    task_dir = tmp_path / "task"
+    task_dir.mkdir()
+    (task_dir / "task.py").write_text(
+        "def run(flow, inputs):\n return {'ok': True}\n", encoding="utf-8"
+    )
+    (task_dir / "task.meta.json").write_text(
+        '{"task":{"id":"demo","name":"Demo","goal":"Run"},"environment":{},"success_path":{},"recovery_hints":{},"failures":[],"knowledge":{}}',
+        encoding="utf-8",
+    )
+    (task_dir / "automation.toml").write_text(
+        "[automation]\n"
+        'id = "demo"\n'
+        'name = "Demo"\n'
+        "[task]\n"
+        'path = "task.py"\n'
+        'meta_path = "task.meta.json"\n'
+        "[hooks]\n"
+        'before_run = ["echo before"]\n'
+        'after_success = ["echo success"]\n'
+        'after_failure = ["echo failure"]\n'
+        "[runtime]\n"
+        "timeout_seconds = 6.5\n"
+        'log_level = "debug"\n',
+        encoding="utf-8",
+    )
+
+    published = publish_task_dir(task_dir, app_paths=get_app_paths())
+    manifest = load_automation_manifest(published.manifest_path)
+
+    assert manifest.hooks.before_run == ("echo before",)
+    assert manifest.hooks.after_success == ("echo success",)
+    assert manifest.hooks.after_failure == ("echo failure",)
+    assert manifest.runtime.timeout_seconds == 6.5
+    assert manifest.runtime.log_level == "debug"
+
+
 def test_publish_task_dir_generates_defaults_when_manifest_is_absent(
     tmp_path: Path, monkeypatch
 ) -> None:
diff --git a/tests/unit/test_daemon_app_validation.py b/tests/unit/test_daemon_app_validation.py
new file mode 100644
index 0000000..a1c3585
--- /dev/null
+++ b/tests/unit/test_daemon_app_validation.py
@@ -0,0 +1,162 @@
+from __future__ import annotations
+
+import asyncio
+from contextlib import asynccontextmanager
+from types import SimpleNamespace
+
+from browser_cli import error_codes
+from browser_cli.daemon.app import BrowserDaemonApp
+from browser_cli.daemon.models import DaemonRequest
+
+
+class _FakeTabs:
+    @asynccontextmanager
+    async def claim_active_tab(self, **_kwargs):
+        yield SimpleNamespace(page_id="page_0001")
+
+    async def update_tab(self, *_args, **_kwargs) -> None:
+        return None
+
+
+class _FakeBrowserService:
+    async def begin_command(self, _action: str) -> None:
+        return None
+
+    async def end_command(self) -> dict[str, str]:
+        return {"driver": "playwright"}
+
+    @property
+    def active_driver_name(self) -> str:
+        return "playwright"
+
+    @property
+    def chrome_environment(self):
+        return None
+
+    async def get_page_summary(self, _page_id: str) -> dict[str, str]:
+        return {"url": "https://example.com", "title": "Example"}
+
+    async def mouse_click(
+        self,
+        page_id: str,
+        *,
+        x: int,
+        y: int,
+        button: str,
+        count: int,
+    ) -> dict[str, object]:
+        return {"page_id": page_id, "x": x, "y": y, "button": button, "count": count}
+
+    async def mouse_drag(
+        self,
+        page_id: str,
+        *,
+        x1: int,
+        y1: int,
+        x2: int,
+        y2: int,
+    ) -> dict[str, object]:
+        return {"page_id": page_id, "x1": x1, "y1": y1, "x2": x2, "y2": y2}
+
+    async def resize(self, page_id: str, *, width: int, height: int) -> dict[str, object]:
+        return {"page_id": page_id, "width": width, "height": height}
+
+    async def wait_for_network_idle(
+        self, page_id: str, *, timeout_seconds: float = 30.0
+    ) -> dict[str, object]:
+        return {"page_id": page_id, "timeout_seconds": timeout_seconds}
+
+
+class _FakeState:
+    def __init__(self) -> None:
+        self.tabs = _FakeTabs()
+        self.browser_service = _FakeBrowserService()
+
+
+def _execute(request: DaemonRequest) -> dict[str, object]:
+    async def _scenario() -> dict[str, object]:
+        app = BrowserDaemonApp(state=_FakeState())  # type: ignore[arg-type]
+        response = await app.execute(request)
+        return response.to_dict()
+
+    return asyncio.run(_scenario())
+
+
+def test_mouse_click_missing_x_returns_invalid_input() -> None:
+    payload = _execute(
+        DaemonRequest(
+            action="mouse-click",
+            args={"y": 10},
+            agent_id="agent-a",
+            request_id="req-1",
+        )
+    )
+
+    assert payload["ok"] is False
+    assert payload["error_code"] == error_codes.INVALID_INPUT
+    assert payload["error_message"] == "x is required."
+
+
+def test_mouse_drag_invalid_coordinate_returns_invalid_input() -> None:
+    payload = _execute(
+        DaemonRequest(
+            action="mouse-drag",
+            args={"x1": 1, "y1": 2, "x2": "bad", "y2": 4},
+            agent_id="agent-a",
+            request_id="req-2",
+        )
+    )
+
+    assert payload["ok"] is False
+    assert payload["error_code"] == error_codes.INVALID_INPUT
+    assert payload["error_message"] == "x2 must be an integer."
+
+
+def test_wait_network_invalid_timeout_returns_invalid_input() -> None:
+    payload = _execute(
+        DaemonRequest(
+            action="wait-network",
+            args={"timeout": "slow"},
+            agent_id="agent-a",
+            request_id="req-3",
+        )
+    )
+
+    assert payload["ok"] is False
+    assert payload["error_code"] == error_codes.INVALID_INPUT
+    assert payload["error_message"] == "timeout must be a number."
+
+
+def test_resize_non_positive_values_keep_handler_level_constraint() -> None:
+    payload = _execute(
+        DaemonRequest(
+            action="resize",
+            args={"width": 0, "height": 100},
+            agent_id="agent-a",
+            request_id="req-4",
+        )
+    )
+
+    assert payload["ok"] is False
+    assert payload["error_code"] == error_codes.INVALID_INPUT
+    assert payload["error_message"] == "width and height must be positive integers."
+
+
+def test_mouse_click_successfully_parses_integer_fields() -> None:
+    payload = _execute(
+        DaemonRequest(
+            action="mouse-click",
+            args={"x": "12", "y": "14", "count": "2"},
+            agent_id="agent-a",
+            request_id="req-5",
+        )
+    )
+
+    assert payload["ok"] is True
+    assert payload["data"] == {
+        "page_id": "page_0001",
+        "x": 12,
+        "y": 14,
+        "button": "left",
+        "count": 2,
+    }
diff --git a/tests/unit/test_repo_text_contracts.py b/tests/unit/test_repo_text_contracts.py
index 1dee8f3..fb04d75 100644
--- a/tests/unit/test_repo_text_contracts.py
+++ b/tests/unit/test_repo_text_contracts.py
@@ -26,3 +26,17 @@ def test_doctor_command_no_longer_describes_pip_users() -> None:
     assert '"""Install and runtime diagnostics for pip users."""' not in doctor_text

     assert "uv tool install browser-cli" in doctor_text or "uv sync --dev" in doctor_text
+
+
+def test_uninstall_doc_is_linked_from_primary_install_docs() -> None:
+    uninstall_text = _read("docs/uninstall.md")
+    readme_text = _read("README.md")
+    uv_doc_text = _read("docs/installed-with-uv.md")
+
+    assert "# Uninstall Browser CLI" in uninstall_text
+    assert "browser-cli paths" in uninstall_text
+    assert "browser-cli automation stop" in uninstall_text
+    assert "browser-cli reload" in uninstall_text
+    assert "uv tool uninstall browser-cli" in uninstall_text
+    assert "docs/uninstall.md" in readme_text
+    assert "docs/uninstall.md" in uv_doc_text
diff --git a/tests/unit/test_task_runtime_automation.py b/tests/unit/test_task_runtime_automation.py
index 8291e3d..c24714e 100644
--- a/tests/unit/test_task_runtime_automation.py
+++ b/tests/unit/test_task_runtime_automation.py
@@ -6,7 +6,7 @@

 from browser_cli.automation.hooks import run_hook_commands
 from browser_cli.automation.loader import load_automation_manifest
-from browser_cli.automation.models import manifest_to_persisted_definition
+from browser_cli.automation.projections import manifest_to_persisted_definition
 from browser_cli.task_runtime import parse_input_overrides
 from browser_cli.task_runtime.models import TaskMetadataError, validate_task_metadata
@@ -79,6 +79,7 @@ def test_load_automation_manifest_preserves_retry_backoff_and_stdout(tmp_path: P
     assert persisted.stdout_mode == "text"
     assert persisted.retry_backoff_seconds == 7
     assert persisted.timeout_seconds == 12.5
+    assert persisted.log_level == "info"


 def test_run_hook_commands_executes_shell_commands(tmp_path: Path) -> None: