From 6366f9bf47e9f58a73c0e5ae9e9ada4305ed0e7b Mon Sep 17 00:00:00 2001
From: danielmeppiel
Date: Fri, 20 Mar 2026 23:55:47 +0100
Subject: [PATCH 01/40] feat: AuthResolver + CommandLogger foundation with full auth wiring
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Phase 1 — Foundation:
- AuthResolver: per-(host,org) token resolution, host classification,
  try_with_fallback(), build_error_context(), detect_token_type()
- CommandLogger base + InstallLogger subclass with semantic lifecycle
- DiagnosticCollector: CATEGORY_AUTH, auth() method, render group

Phase 2 — Auth wiring (all touchpoints):
- github_downloader.py: AuthResolver injected, per-dep token resolution in
  _clone_with_fallback and _download_github_file
- install.py: _validate_package_exists uses try_with_fallback(unauth_first=True)
- copilot.py + operations.py: replaced os.getenv bypasses with TokenManager
- All 7 hardcoded auth error messages replaced with build_error_context()

Security fix: global env vars (GITHUB_APM_PAT) no longer leak to non-default
hosts (GHES, GHE Cloud, generic). Enterprise hosts resolve via per-org env
vars or git credential helpers only.

Phase 4 — Tests:
- test_auth.py: 34 tests (classify_host, resolve, try_with_fallback,
  build_error_context, detect_token_type)
- test_command_logger.py: 44 tests (CommandLogger, InstallLogger,
  _ValidationOutcome)
- Fixed test_github_downloader_token_precedence.py import paths
- Updated test_auth_scoping.py for AuthResolver integration

Phase 5 — Agents & Skills:
- 3 agent personas: auth-expert, python-architect, cli-logging-expert
- 2 new skills: auth, python-architecture
- Updated cli-logging-ux skill with CommandLogger awareness

All 2829 tests pass.
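Illustrative sketch of the prefix classification this patch introduces (a standalone copy of the `detect_token_type` logic added in src/apm_cli/core/auth.py; the example token strings are made up):

```python
def detect_token_type(token: str) -> str:
    """Classify a GitHub token string by its prefix (mirrors AuthResolver.detect_token_type)."""
    if token.startswith("github_pat_"):
        return "fine-grained"   # fine-grained PAT
    if token.startswith("ghp_"):
        return "classic"        # classic PAT
    if token.startswith("ghu_"):
        return "emu"            # EMU user token
    if token.startswith(("gho_", "ghs_", "ghr_")):
        return "classic"        # OAuth / server / refresh tokens grouped with classic
    return "unknown"

print(detect_token_type("ghu_example"))  # emu
```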
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 .github/agents/auth-expert.agent.md           |  46 +++
 .github/agents/cli-logging-expert.agent.md    |  50 +++
 .github/agents/python-architect.agent.md      |  53 +++
 .github/skills/auth/SKILL.md                  |  24 ++
 .github/skills/cli-logging-ux/SKILL.md        |  25 +-
 .github/skills/python-architecture/SKILL.md   |  25 ++
 src/apm_cli/adapters/client/copilot.py        |   7 +-
 src/apm_cli/commands/install.py               |  99 +++--
 src/apm_cli/core/__init__.py                  |   4 +
 src/apm_cli/core/auth.py                      | 359 ++++++++++++++++++
 src/apm_cli/core/command_logger.py            | 264 +++++++++++++
 src/apm_cli/deps/github_downloader.py         | 137 +++----
 src/apm_cli/registry/operations.py            |   8 +-
 src/apm_cli/utils/diagnostics.py              |  34 ++
 ...test_github_downloader_token_precedence.py |   4 +-
 tests/unit/test_auth.py                       | 359 ++++++++++++++++++
 tests/unit/test_auth_scoping.py               |  27 +-
 tests/unit/test_command_logger.py             | 305 +++++++++++++++
 18 files changed, 1713 insertions(+), 117 deletions(-)
 create mode 100644 .github/agents/auth-expert.agent.md
 create mode 100644 .github/agents/cli-logging-expert.agent.md
 create mode 100644 .github/agents/python-architect.agent.md
 create mode 100644 .github/skills/auth/SKILL.md
 create mode 100644 .github/skills/python-architecture/SKILL.md
 create mode 100644 src/apm_cli/core/auth.py
 create mode 100644 src/apm_cli/core/command_logger.py
 create mode 100644 tests/unit/test_auth.py
 create mode 100644 tests/unit/test_command_logger.py

diff --git a/.github/agents/auth-expert.agent.md b/.github/agents/auth-expert.agent.md
new file mode 100644
index 00000000..e517f99e
--- /dev/null
+++ b/.github/agents/auth-expert.agent.md
@@ -0,0 +1,46 @@
+---
+name: auth-expert
+description: >-
+  Expert on GitHub authentication, EMU, GHE, ADO, and APM's AuthResolver
+  architecture. Activate when reviewing or writing code that touches token
+  management, credential resolution, or remote host authentication.
+model: claude-opus-4.6
+---
+
+# Auth Expert
+
+You are an expert on Git hosting authentication across GitHub.com, GitHub Enterprise (*.ghe.com, GHES), Azure DevOps, and generic Git hosts. You have deep knowledge of APM's auth architecture and the broader credential ecosystem.
+
+## Core Knowledge
+
+- **Token types**: Fine-grained PATs (`github_pat_`), classic PATs (`ghp_`), EMU tokens (`ghu_`), OAuth tokens (`gho_`), server tokens (`ghs_`)
+- **GitHub EMU constraints**: Enterprise-scoped, cannot access public github.com, `ghu_` prefix
+- **Host classification**: github.com (public), *.ghe.com (no public repos), GHES (`GITHUB_HOST`), ADO
+- **Git credential helpers**: macOS Keychain, Windows Credential Manager, `gh auth`, `git credential fill`
+- **Rate limiting**: 60/hr unauthenticated, 5000/hr authenticated, primary (403) vs secondary (429)
+
+## APM Architecture
+
+- **AuthResolver** (`src/apm_cli/core/auth.py`): Single source of truth. Per-(host, org) resolution. Frozen `AuthContext` for thread safety.
+- **Token precedence**: `GITHUB_APM_PAT_{ORG}` → `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` → `git credential fill`
+- **Fallback chains**: unauth-first for validation (save rate limits), auth-first for download
+- **GitHubTokenManager** (`src/apm_cli/core/token_manager.py`): Low-level token lookup, wrapped by AuthResolver
+
+## Decision Framework
+
+When reviewing or writing auth code:
+
+1. **Every remote operation** must go through AuthResolver — no direct `os.getenv()` for tokens
+2. **Per-dep resolution**: Use `resolve_for_dep(dep_ref)`, never `self.github_token` instance vars
+3. **Host awareness**: *.ghe.com = auth-only, github.com = fallback chain, ADO = auth-only
+4. **Error messages**: Always use `build_error_context()` — never hardcode env var names
+5. **Thread safety**: AuthContext is resolved before `executor.submit()`, passed per-worker
+
+## Common Pitfalls
+
+- EMU PATs on public github.com repos → will fail silently
+- `git credential fill` only resolves per-host, not per-org
+- `_build_repo_url` must accept token param, not use instance var
+- Windows: `GIT_ASKPASS` must be `'echo'` not empty string
+- Classic PATs (`ghp_`) work cross-org but are being deprecated — prefer fine-grained
+- ADO uses Basic auth with base64-encoded `:PAT` — different from GitHub bearer token flow
diff --git a/.github/agents/cli-logging-expert.agent.md b/.github/agents/cli-logging-expert.agent.md
new file mode 100644
index 00000000..f6dca5b8
--- /dev/null
+++ b/.github/agents/cli-logging-expert.agent.md
@@ -0,0 +1,50 @@
+---
+name: cli-logging-expert
+description: >-
+  Expert on CLI output UX, CommandLogger patterns, and diagnostic rendering in
+  APM. Activate when designing user-facing output, progress indicators, or
+  verbose/quiet mode behavior.
+model: claude-opus-4.6
+---
+
+# CLI Logging Expert
+
+You are an expert on CLI output UX with excellent taste. You ensure verbose mode tells everything for AI agents while non-verbose is clean for humans.
+
+## Core Principles
+
+- **Traffic light rule**: Red = error (must act), Yellow = warning (should know), Green = success, Blue = info, Dim = verbose detail
+- **Newspaper test**: Most important info first. Summary before details.
+- **Signal-to-noise**: Every message must pass "So What?" test — if the user can't act on it, don't show it
+- **Context-aware**: Same event, different message depending on partial/full install, verbose/quiet, dry-run
+
+## APM Output Architecture
+
+- **CommandLogger** (`src/apm_cli/core/command_logger.py`): Base for ALL commands. Lifecycle: start → progress → complete → summary.
+- **InstallLogger**: Subclass with validation/resolution/download/summary phases. Knows partial vs full.
+- **DiagnosticCollector** (`src/apm_cli/utils/diagnostics.py`): Collect-then-render. Categories: security, auth, collision, overwrite, warning, error, info.
+- **`_rich_*` helpers** (`src/apm_cli/utils/console.py`): Low-level output. CommandLogger delegates to these.
+- **STATUS_SYMBOLS**: ASCII-safe symbols `[*]`, `[>]`, `[!]`, `[x]`, `[+]`, `[i]`, etc.
+
+## Anti-patterns
+
+- Using `_rich_*` directly instead of `CommandLogger` in command functions
+- Showing total dep count when user asked to install 1 package
+- `"[+] No dependencies to install"` — contradictory symbol
+- `"Installation complete"` when nothing was installed
+- MCP noise during APM-only partial install
+- Hardcoded env var names in error messages (use `AuthResolver.build_error_context`)
+
+## Verbose Mode Design
+
+- **For humans (default)**: Counts, summaries, actionable messages only
+- **For agents (--verbose)**: Auth chain steps, per-file details, resolution decisions, timing
+- **Progressive disclosure**: Default shows what happened; `--verbose` shows why and how
+
+## Message Writing Rules
+
+1. **Lead with the outcome** — "Installed 3 dependencies" not "The installation process has completed"
+2. **Use exact counts** — "2 prompts integrated" not "prompts integrated"
+3. **Name the thing** — "Skipping my-skill — local file exists" not "Skipping file — conflict detected"
+4. **Include the fix** — "Use `apm install --force` to overwrite" after every skip warning
+5. **No emojis** — ASCII `STATUS_SYMBOLS` only, never emoji characters
diff --git a/.github/agents/python-architect.agent.md b/.github/agents/python-architect.agent.md
new file mode 100644
index 00000000..dac40610
--- /dev/null
+++ b/.github/agents/python-architect.agent.md
@@ -0,0 +1,53 @@
+---
+name: python-architect
+description: >-
+  Expert on Python design patterns, modularization, and scalable architecture
+  for the APM CLI codebase. Activate when creating new modules, refactoring
+  class hierarchies, or making cross-cutting architectural decisions.
+model: claude-opus-4.6
+---
+
+# Python Architect
+
+You are an expert Python architect specializing in CLI tool design. You guide architectural decisions for the APM CLI codebase.
+
+## Design Philosophy
+
+- **Speed and simplicity over complexity** — don't over-engineer
+- **Solid foundation, iterate** — build minimal but extensible
+- **Pay only for what you touch** — O(work) proportional to affected files, not repo size
+
+## Patterns in APM
+
+- **Strategy + Chain of Responsibility**: `AuthResolver` — configurable fallback chains per host type
+- **Base class + subclass**: `CommandLogger` → `InstallLogger` — shared lifecycle, command-specific phases
+- **Collect-then-render**: `DiagnosticCollector` — push diagnostics during operation, render summary at end
+- **BaseIntegrator**: All file integrators share one base for collision detection, manifest sync, path security
+
+## When to Abstract vs Inline
+
+- **Abstract** when 3+ call sites share the same logic pattern
+- **Inline** when logic is truly unique to one call site
+- **Base class** when commands share lifecycle (start → progress → complete → summary)
+- **Dataclass** for structured data that flows between components (frozen when thread-safe required)
+
+## Code Quality Standards
+
+- Type hints on all public APIs
+- Lazy imports to break circular dependencies
+- Thread safety via locks or frozen dataclasses
+- No mutable shared state in parallel operations
+
+## Module Organization
+
+- `src/apm_cli/core/` — domain logic (auth, resolution, locking, compilation)
+- `src/apm_cli/integration/` — file-level integrators (BaseIntegrator subclasses)
+- `src/apm_cli/utils/` — cross-cutting helpers (console, diagnostics, file ops)
+- One class per file when the class is the primary abstraction; group small helpers
+
+## Refactoring Guidance
+
+1. **Extract when shared** — if two commands duplicate logic, extract to `core/` or `utils/`
+2. **Push down to base** — if two integrators share logic, push into `BaseIntegrator`
+3. **Prefer composition** — inject collaborators via constructor, not deep inheritance
+4. **Keep constructors thin** — expensive init goes in factory methods or lazy properties
diff --git a/.github/skills/auth/SKILL.md b/.github/skills/auth/SKILL.md
new file mode 100644
index 00000000..b4fa3757
--- /dev/null
+++ b/.github/skills/auth/SKILL.md
@@ -0,0 +1,24 @@
+---
+name: auth
+description: >
+  Activate when code touches token management, credential resolution, git auth
+  flows, GITHUB_APM_PAT, ADO_APM_PAT, AuthResolver, HostInfo, AuthContext, or
+  any remote host authentication — even if 'auth' isn't mentioned explicitly.
+---
+
+# Auth Skill
+
+[Auth expert persona](../../agents/auth-expert.agent.md)
+
+## When to activate
+
+- Any change to `src/apm_cli/core/auth.py` or `src/apm_cli/core/token_manager.py`
+- Code that reads `GITHUB_APM_PAT`, `GITHUB_TOKEN`, `GH_TOKEN`, `ADO_APM_PAT`
+- Code using `git ls-remote`, `git clone`, or GitHub/ADO API calls
+- Error messages mentioning tokens, authentication, or credentials
+- Changes to `github_downloader.py` auth paths
+- Per-host or per-org token resolution logic
+
+## Key rule
+
+All auth flows MUST go through `AuthResolver`. No direct `os.getenv()` for token variables in application code.
diff --git a/.github/skills/cli-logging-ux/SKILL.md b/.github/skills/cli-logging-ux/SKILL.md
index a7987936..edaa0a64 100644
--- a/.github/skills/cli-logging-ux/SKILL.md
+++ b/.github/skills/cli-logging-ux/SKILL.md
@@ -5,10 +5,12 @@ description: >
   error messages, progress indicators, or diagnostic summaries in the APM
   codebase. Activate whenever code touches console helpers (_rich_success,
   _rich_warning, _rich_error, _rich_info, _rich_echo), DiagnosticCollector,
-  STATUS_SYMBOLS, or any user-facing terminal output — even if the user
-  doesn't mention "logging" or "UX" explicitly.
+  STATUS_SYMBOLS, CommandLogger, or any user-facing terminal output — even
+  if the user doesn't mention "logging" or "UX" explicitly.
 ---
 
+[CLI Logging UX expert persona](../../agents/cli-logging-expert.agent.md)
+
 # CLI Logging & Developer Experience
 
 ## Decision framework
@@ -147,6 +149,19 @@ if SkillIntegrator._dirs_equal(source, target):
     continue  # Nothing changed, nothing to report
 ```
 
+## CommandLogger Architecture
+
+All CLI commands must use `CommandLogger` (or a subclass) for output:
+
+- **`CommandLogger`** (`src/apm_cli/core/command_logger.py`): Base for all commands. Provides `start()`, `progress()`, `success()`, `error()`, `warning()`, `verbose_detail()`, `dry_run_notice()`, `auth_step()`, `render_summary()`.
+- **`InstallLogger(CommandLogger)`**: Install-specific with `validation_start()`, `resolution_start()`, `nothing_to_install()`, `download_start()`, `install_summary()`.
+- **`DiagnosticCollector`**: Injected via `logger.diagnostics`. Collect-then-render pattern.
+
+### Rule: No direct _rich_* in commands
+Command functions must NOT call `_rich_info()`, `_rich_error()`, etc. directly. Use `logger.progress()`, `logger.error()`, etc. instead. The _rich_* helpers are internal to CommandLogger.
+
+Exception: Rich tables and panels for display (not lifecycle logging) may use `console.print()` directly.
+
 ## Anti-patterns
 
 1. **Warning for non-actionable state** — If the user can't do anything about it, use `_rich_info` or defer to `--verbose`, not `_rich_warning`.
@@ -160,3 +175,9 @@
 5. **Inconsistent symbols** — Always use `STATUS_SYMBOLS` dict with `symbol=` param, not inline characters.
 6. **Walls of text** — Use Rich tables for structured data, panels for grouped content.
    Break up long output with visual hierarchy (indentation, `└─` tree connectors).
+
+7. **Calling `_rich_info("Installing...")` directly in a command** — Use `logger.start("Installing...")` instead. The `_rich_*` helpers are internal to `CommandLogger`.
+
+8. **Checking `if verbose:` manually** — Use `logger.verbose_detail("...")` which handles the check internally.
+
+9. **Checking `if dry_run:` manually** — Use `logger.should_execute` or `logger.dry_run_notice("...")` instead.
diff --git a/.github/skills/python-architecture/SKILL.md b/.github/skills/python-architecture/SKILL.md
new file mode 100644
index 00000000..244c5abe
--- /dev/null
+++ b/.github/skills/python-architecture/SKILL.md
@@ -0,0 +1,25 @@
+---
+name: python-architecture
+description: >
+  Activate when creating new modules, refactoring class hierarchies, introducing
+  design patterns, or making changes spanning 3+ files in the APM CLI codebase.
+---
+
+# Python Architecture Skill
+
+[Python architect persona](../../agents/python-architect.agent.md)
+
+## When to activate
+
+- Creating new Python modules or packages under `src/apm_cli/`
+- Refactoring class hierarchies or introducing base classes
+- Changes that touch 3+ files with shared logic patterns
+- Introducing new design patterns (Strategy, Observer, etc.)
+- Cross-cutting concerns (logging, auth, error handling)
+- Performance-sensitive paths (parallel downloads, large manifests)
+
+## Key rules
+
+- Follow existing patterns (BaseIntegrator, CommandLogger, AuthResolver) before inventing new ones
+- Prefer composition over deep inheritance
+- Push shared logic into base classes, not duplicated across siblings
diff --git a/src/apm_cli/adapters/client/copilot.py b/src/apm_cli/adapters/client/copilot.py
index 64147f39..734ace6c 100644
--- a/src/apm_cli/adapters/client/copilot.py
+++ b/src/apm_cli/adapters/client/copilot.py
@@ -12,6 +12,7 @@
 from ...registry.client import SimpleRegistryClient
 from ...registry.integration import RegistryIntegration
 from ...core.docker_args import DockerArgsProcessor
+from ...core.token_manager import GitHubTokenManager
 from ...utils.github_host import is_github_hostname
 
 
@@ -199,8 +200,10 @@ def _format_server_config(self, server_info, env_overrides=None, runtime_vars=No
         is_github_server = self._is_github_server(server_name, remote.get("url", ""))
 
         if is_github_server:
-            # Check for GitHub Personal Access Token
-            github_token = os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN")
+            # Use centralized token manager (copilot chain: GITHUB_COPILOT_PAT → GITHUB_TOKEN → GITHUB_APM_PAT),
+            # falling back to GITHUB_PERSONAL_ACCESS_TOKEN for Copilot CLI compat.
+            _tm = GitHubTokenManager()
+            github_token = _tm.get_token_for_purpose('copilot') or os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN")
             if github_token:
                 config["headers"] = {
                     "Authorization": f"Bearer {github_token}"
diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py
index cde1ff48..88a67ab9 100644
--- a/src/apm_cli/commands/install.py
+++ b/src/apm_cli/commands/install.py
@@ -245,57 +245,72 @@ def _validate_package_exists(package):
             )
             return result.returncode == 0
 
-        # For GitHub.com, use standard approach (public repos don't need auth)
-        package_url = f"{dep_ref.to_github_url()}.git"
+        # For GitHub.com, use AuthResolver with unauth-first fallback
+        from apm_cli.core.auth import AuthResolver
 
-        # For regular packages, use git ls-remote
-        with tempfile.TemporaryDirectory() as temp_dir:
-            try:
-
-                # Try cloning with minimal fetch
-                cmd = [
-                    "git",
-                    "ls-remote",
-                    "--heads",
-                    "--exit-code",
-                    package_url,
-                ]
-                result = subprocess.run(
-                    cmd, capture_output=True, text=True, timeout=30  # 30 second timeout
-                )
+        auth_resolver = AuthResolver()
+        host = dep_ref.host or default_host()
+        org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None
 
-                return result.returncode == 0
+        def _ls_remote(token, git_env):
+            """Try git ls-remote with optional auth."""
+            if token:
+                url = f"https://x-access-token:{token}@{host}/{dep_ref.repo_url}.git"
+            else:
+                url = f"{dep_ref.to_github_url()}.git"
+            result = subprocess.run(
+                ['git', 'ls-remote', '--heads', '--exit-code', url],
+                capture_output=True, text=True, timeout=30,
+                env=git_env,
+            )
+            if result.returncode != 0:
+                raise RuntimeError(f"git ls-remote failed: {result.stderr}")
+            return True
 
-            except subprocess.TimeoutExpired:
-                return False
-            except Exception:
-                return False
+        try:
+            return auth_resolver.try_with_fallback(
+                host, _ls_remote,
+                org=org,
+                unauth_first=True,
+            )
+        except Exception:
+            return False
 
     except Exception:
         # If parsing fails, assume it's a regular GitHub package
-        package_url = (
-            f"https://{package}.git"
-            if is_valid_fqdn(package)
-            else f"https://{default_host()}/{package}.git"
-        )
-        with tempfile.TemporaryDirectory() as temp_dir:
-            try:
-                cmd = [
-                    "git",
-                    "ls-remote",
-                    "--heads",
-                    "--exit-code",
-                    package_url,
-                ]
+        from apm_cli.core.auth import AuthResolver
 
-                result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
+        auth_resolver = AuthResolver()
+        host = default_host()
+        org = package.split('/')[0] if '/' in package else None
 
-                return result.returncode == 0
+        if is_valid_fqdn(package):
+            base_url = f"https://{package}.git"
+        else:
+            base_url = f"https://{host}/{package}.git"
 
-            except subprocess.TimeoutExpired:
-                return False
-            except Exception:
-                return False
+        def _ls_remote_fallback(token, git_env):
+            if token and not is_valid_fqdn(package):
+                url = f"https://x-access-token:{token}@{host}/{package}.git"
+            else:
+                url = base_url
+            result = subprocess.run(
+                ['git', 'ls-remote', '--heads', '--exit-code', url],
+                capture_output=True, text=True, timeout=30,
+                env=git_env,
+            )
+            if result.returncode != 0:
+                raise RuntimeError(f"git ls-remote failed: {result.stderr}")
+            return True
+
+        try:
+            return auth_resolver.try_with_fallback(
+                host, _ls_remote_fallback,
+                org=org,
+                unauth_first=True,
+            )
+        except Exception:
+            return False
 
 
 # ---------------------------------------------------------------------------
diff --git a/src/apm_cli/core/__init__.py b/src/apm_cli/core/__init__.py
index 76f85a4b..77d3d637 100644
--- a/src/apm_cli/core/__init__.py
+++ b/src/apm_cli/core/__init__.py
@@ -1 +1,5 @@
 """Core package."""
+
+from apm_cli.core.auth import AuthContext, AuthResolver, HostInfo
+
+__all__ = ["AuthContext", "AuthResolver", "HostInfo"]
diff --git a/src/apm_cli/core/auth.py b/src/apm_cli/core/auth.py
new file mode 100644
index 00000000..cca3a430
--- /dev/null
+++ b/src/apm_cli/core/auth.py
@@ -0,0 +1,359 @@
+"""Centralized authentication resolution for APM CLI.
+
+Every APM operation that touches a remote host MUST use AuthResolver.
+Resolution is per-(host, org) pair, thread-safe, and cached per-process.
+
+Usage::
+
+    resolver = AuthResolver()
+    ctx = resolver.resolve("github.com", org="microsoft")
+    # ctx.token, ctx.source, ctx.token_type, ctx.host_info, ctx.git_env
+
+For dependencies::
+
+    ctx = resolver.resolve_for_dep(dep_ref)
+
+For operations with automatic auth/unauth fallback::
+
+    result = resolver.try_with_fallback(
+        "github.com", lambda token, env: download(token, env),
+        org="microsoft",
+    )
+"""
+
+from __future__ import annotations
+
+import os
+import sys
+import threading
+from dataclasses import dataclass, field
+from typing import TYPE_CHECKING, Callable, Optional, TypeVar
+
+from apm_cli.core.token_manager import GitHubTokenManager
+from apm_cli.utils.github_host import (
+    default_host,
+    is_azure_devops_hostname,
+    is_github_hostname,
+    is_valid_fqdn,
+)
+
+if TYPE_CHECKING:
+    from apm_cli.models.dependency.reference import DependencyReference
+
+T = TypeVar("T")
+
+
+# ---------------------------------------------------------------------------
+# Data classes
+# ---------------------------------------------------------------------------
+
+@dataclass(frozen=True)
+class HostInfo:
+    """Immutable description of a remote Git host."""
+
+    host: str
+    kind: str  # "github" | "ghe_cloud" | "ghes" | "ado" | "generic"
+    has_public_repos: bool
+    api_base: str
+
+
+@dataclass
+class AuthContext:
+    """Resolved authentication for a single (host, org) pair.
+
+    Treat as immutable after construction — fields are never mutated.
+    Not frozen because ``git_env`` is a dict (unhashable).
+    """
+
+    token: Optional[str]
+    source: str  # e.g. "GITHUB_APM_PAT_ORGNAME", "GITHUB_TOKEN", "none"
+    token_type: str  # "fine-grained", "classic", "emu", "ado", "artifactory", "unknown"
+    host_info: HostInfo
+    git_env: dict = field(compare=False, repr=False)
+
+
+# ---------------------------------------------------------------------------
+# AuthResolver
+# ---------------------------------------------------------------------------
+
+class AuthResolver:
+    """Single source of truth for auth resolution.
+
+    Every APM operation that touches a remote host MUST use this class.
+    Resolution is per-(host, org) pair, thread-safe, cached per-process.
+    """
+
+    def __init__(self, token_manager: Optional[GitHubTokenManager] = None):
+        self._token_manager = token_manager or GitHubTokenManager()
+        self._cache: dict[tuple, AuthContext] = {}
+        self._lock = threading.Lock()
+
+    # -- host classification ------------------------------------------------
+
+    @staticmethod
+    def classify_host(host: str) -> HostInfo:
+        """Return a ``HostInfo`` describing *host*."""
+        h = host.lower()
+
+        if h == "github.com":
+            return HostInfo(
+                host=host,
+                kind="github",
+                has_public_repos=True,
+                api_base="https://api.github.com",
+            )
+
+        if h.endswith(".ghe.com"):
+            return HostInfo(
+                host=host,
+                kind="ghe_cloud",
+                has_public_repos=False,
+                api_base=f"https://{host}/api/v3",
+            )
+
+        if is_azure_devops_hostname(host):
+            return HostInfo(
+                host=host,
+                kind="ado",
+                has_public_repos=True,
+                api_base="https://dev.azure.com",
+            )
+
+        # GHES: GITHUB_HOST is set to a non-github.com, non-ghe.com FQDN
+        ghes_host = os.environ.get("GITHUB_HOST", "").lower()
+        if ghes_host and ghes_host == h and ghes_host != "github.com" and not ghes_host.endswith(".ghe.com"):
+            if is_valid_fqdn(ghes_host):
+                return HostInfo(
+                    host=host,
+                    kind="ghes",
+                    has_public_repos=True,
+                    api_base=f"https://{host}/api/v3",
+                )
+
+        # Generic FQDN (GitLab, Bitbucket, self-hosted, etc.)
+        return HostInfo(
+            host=host,
+            kind="generic",
+            has_public_repos=True,
+            api_base=f"https://{host}/api/v3",
+        )
+
+    # -- token type detection -----------------------------------------------
+
+    @staticmethod
+    def detect_token_type(token: str) -> str:
+        """Classify a token string by its prefix."""
+        if token.startswith("github_pat_"):
+            return "fine-grained"
+        if token.startswith("ghp_"):
+            return "classic"
+        if token.startswith("ghu_"):
+            return "emu"
+        if token.startswith(("gho_", "ghs_", "ghr_")):
+            return "classic"
+        return "unknown"
+
+    # -- core resolution ----------------------------------------------------
+
+    def resolve(self, host: str, org: Optional[str] = None) -> AuthContext:
+        """Resolve auth for *(host, org)*. Cached & thread-safe."""
+        key = (host, org)
+        with self._lock:
+            if key in self._cache:
+                return self._cache[key]
+
+        host_info = self.classify_host(host)
+        token, source = self._resolve_token(host_info, org)
+        token_type = self.detect_token_type(token) if token else "unknown"
+        git_env = self._build_git_env(token)
+
+        ctx = AuthContext(
+            token=token,
+            source=source,
+            token_type=token_type,
+            host_info=host_info,
+            git_env=git_env,
+        )
+
+        with self._lock:
+            self._cache[key] = ctx
+        return ctx
+
+    def resolve_for_dep(self, dep_ref: "DependencyReference") -> AuthContext:
+        """Resolve auth from a ``DependencyReference``."""
+        host = dep_ref.host or default_host()
+        org: Optional[str] = None
+        if dep_ref.repo_url:
+            parts = dep_ref.repo_url.split("/")
+            if parts:
+                org = parts[0]
+        return self.resolve(host, org)
+
+    # -- fallback strategy --------------------------------------------------
+
+    def try_with_fallback(
+        self,
+        host: str,
+        operation: Callable[..., T],
+        *,
+        org: Optional[str] = None,
+        unauth_first: bool = False,
+        verbose_callback: Optional[Callable[[str], None]] = None,
+    ) -> T:
+        """Execute *operation* with automatic auth/unauth fallback.
+
+        Parameters
+        ----------
+        host:
+            Target git host.
+        operation:
+            ``operation(token, git_env) -> T`` — the work to do.
+        org:
+            Optional organisation for per-org token lookup.
+        unauth_first:
+            If *True*, try unauthenticated first (saves rate limits, EMU-safe).
+        verbose_callback:
+            Called with a human-readable step description at each attempt.
+        """
+        auth_ctx = self.resolve(host, org)
+        host_info = auth_ctx.host_info
+        git_env = auth_ctx.git_env
+
+        def _log(msg: str) -> None:
+            if verbose_callback:
+                verbose_callback(msg)
+
+        # Hosts that never have public repos → auth-only, no fallback
+        if host_info.kind in ("ghe_cloud", "ado"):
+            _log(f"Auth-only attempt for {host_info.kind} host {host}")
+            return operation(auth_ctx.token, git_env)
+
+        if unauth_first:
+            # Validation path: save rate limits, EMU-safe
+            try:
+                _log(f"Trying unauthenticated access to {host}")
+                return operation(None, git_env)
+            except Exception:
+                if auth_ctx.token:
+                    _log(f"Unauthenticated failed, retrying with token (source: {auth_ctx.source})")
+                    return operation(auth_ctx.token, git_env)
+                raise
+        else:
+            # Download path: auth-first for higher rate limits
+            if auth_ctx.token:
+                try:
+                    _log(f"Trying authenticated access to {host} (source: {auth_ctx.source})")
+                    return operation(auth_ctx.token, git_env)
+                except Exception:
+                    if host_info.has_public_repos:
+                        _log("Authenticated failed, retrying without token")
+                        return operation(None, git_env)
+                    raise
+            else:
+                _log(f"No token available, trying unauthenticated access to {host}")
+                return operation(None, git_env)
+
+    # -- error context ------------------------------------------------------
+
+    def build_error_context(
+        self, host: str, operation: str, org: Optional[str] = None
+    ) -> str:
+        """Build an actionable error message for auth failures."""
+        auth_ctx = self.resolve(host, org)
+        lines: list[str] = [f"Authentication failed for {operation} on {host}."]
+
+        if auth_ctx.token:
+            lines.append(f"Token was provided (source: {auth_ctx.source}, type: {auth_ctx.token_type}).")
+            if auth_ctx.token_type == "emu":
+                lines.append(
+                    "EMU tokens are scoped to your enterprise and cannot "
+                    "access public github.com repos."
+                )
+            lines.append(
+                "If your organization uses SAML SSO, you may need to "
+                "authorize your token at https://github.com/settings/tokens"
+            )
+        else:
+            lines.append("No token available.")
+            lines.append(
+                "Set GITHUB_APM_PAT or GITHUB_TOKEN, or run 'gh auth login'."
+            )
+
+        if org:
+            lines.append(
+                f"If packages span multiple organizations, set per-org tokens: "
+                f"GITHUB_APM_PAT_{_org_to_env_suffix(org)}"
+            )
+
+        lines.append("Run with --verbose for detailed auth diagnostics.")
+        return "\n".join(lines)
+
+    # -- internals ----------------------------------------------------------
+
+    def _resolve_token(
+        self, host_info: HostInfo, org: Optional[str]
+    ) -> tuple[Optional[str], str]:
+        """Walk the token resolution chain. Returns (token, source).
+
+        Global env vars (``GITHUB_APM_PAT``, ``GITHUB_TOKEN``, ``GH_TOKEN``)
+        are only checked for the default host and ADO. Non-default hosts
+        (GHES, GHE Cloud, generic) resolve via per-org env vars or git
+        credential helpers — leaking a github.com PAT to an enterprise
+        server would be a security risk and would fail auth anyway.
+        """
+        # 1. Per-org env var (any host)
+        if org:
+            env_name = f"GITHUB_APM_PAT_{_org_to_env_suffix(org)}"
+            token = os.environ.get(env_name)
+            if token:
+                return token, env_name
+
+        # 2. Global env var chain — only for default host or ADO
+        _is_default = host_info.host.lower() == default_host().lower()
+        purpose = self._purpose_for_host(host_info)
+        if _is_default or host_info.kind == "ado":
+            token = self._token_manager.get_token_for_purpose(purpose)
+            if token:
+                source = self._identify_env_source(purpose)
+                return token, source
+
+        # 3. Git credential helper (not for ADO — uses its own PAT)
+        if host_info.kind not in ("ado",):
+            credential = self._token_manager.resolve_credential_from_git(host_info.host)
+            if credential:
+                return credential, "git-credential-fill"
+
+        return None, "none"
+
+    @staticmethod
+    def _purpose_for_host(host_info: HostInfo) -> str:
+        if host_info.kind == "ado":
+            return "ado_modules"
+        return "modules"
+
+    def _identify_env_source(self, purpose: str) -> str:
+        """Return the name of the first env var that matched for *purpose*."""
+        for var in self._token_manager.TOKEN_PRECEDENCE.get(purpose, []):
+            if os.environ.get(var):
+                return var
+        return "env"
+
+    @staticmethod
+    def _build_git_env(token: Optional[str] = None) -> dict:
+        """Pre-built env dict for subprocess git calls."""
+        env = os.environ.copy()
+        env["GIT_TERMINAL_PROMPT"] = "0"
+        # On Windows, GIT_ASKPASS='' can cause issues; use 'echo' instead
+        env["GIT_ASKPASS"] = "" if sys.platform != "win32" else "echo"
+        if token:
+            env["GIT_TOKEN"] = token
+        return env
+
+
+# ---------------------------------------------------------------------------
+# Helpers
+# ---------------------------------------------------------------------------
+
+def _org_to_env_suffix(org: str) -> str:
+    """Convert an org name to an env-var suffix (upper-case, hyphens → underscores)."""
+    return org.upper().replace("-", "_")
diff --git a/src/apm_cli/core/command_logger.py b/src/apm_cli/core/command_logger.py
new file mode 100644
index 00000000..438d539a
--- /dev/null
+++ b/src/apm_cli/core/command_logger.py
@@ -0,0 +1,264 @@
+"""Command logger infrastructure for structured CLI output.
+
+Provides CommandLogger (base for all commands) and InstallLogger
+(install-specific phases). All methods delegate to _rich_* helpers
+from apm_cli.utils.console — no new output primitives.
+""" + +from dataclasses import dataclass + +from apm_cli.utils.console import ( + _rich_echo, + _rich_error, + _rich_info, + _rich_success, + _rich_warning, +) + + +@dataclass +class _ValidationOutcome: + """Result of package validation before install.""" + + valid: list # List of (canonical_name, already_present: bool) tuples + invalid: list # List of (package_name, reason: str) tuples + + @property + def all_failed(self) -> bool: + return len(self.valid) == 0 and len(self.invalid) > 0 + + @property + def has_failures(self) -> bool: + return len(self.invalid) > 0 + + @property + def new_packages(self) -> list: + """Packages that are valid and NOT already present.""" + return [(name, present) for name, present in self.valid if not present] + + +class CommandLogger: + """Base context-aware logger for all CLI commands. + + Provides a standard lifecycle: start → progress → complete/error → summary. + All methods delegate to existing _rich_* helpers from apm_cli.utils.console. + No new output primitives — this is a semantic wrapper. 
+ + Usage: + logger = CommandLogger("compile", verbose=True, dry_run=False) + logger.start("Compiling agent manifests...") + logger.progress("Processing 3 files...") + logger.success("Compiled 3 manifests") + logger.render_summary() + """ + + def __init__(self, command: str, verbose: bool = False, dry_run: bool = False): + self.command = command + self.verbose = verbose + self.dry_run = dry_run + self._diagnostics = None # Lazy init + + @property + def diagnostics(self): + """Lazy-init DiagnosticCollector.""" + if self._diagnostics is None: + from apm_cli.utils.diagnostics import DiagnosticCollector + + self._diagnostics = DiagnosticCollector(verbose=self.verbose) + return self._diagnostics + + # --- Common lifecycle --- + + def start(self, message: str, symbol: str = "running"): + """Log start of an operation.""" + _rich_info(message, symbol=symbol) + + def progress(self, message: str, symbol: str = "info"): + """Log progress during an operation.""" + _rich_info(message, symbol=symbol) + + def success(self, message: str, symbol: str = "sparkles"): + """Log successful completion.""" + _rich_success(message, symbol=symbol) + + def warning(self, message: str, symbol: str = "warning"): + """Log a warning.""" + _rich_warning(message, symbol=symbol) + + def error(self, message: str, symbol: str = "error"): + """Log an error.""" + _rich_error(message, symbol=symbol) + + def verbose_detail(self, message: str): + """Log a detail only when verbose mode is enabled.""" + if self.verbose: + _rich_echo(message, color="dim") + + # --- Dry-run awareness --- + + def dry_run_notice(self, what_would_happen: str): + """Log what would happen in dry-run mode.""" + _rich_info(f"[dry-run] {what_would_happen}", symbol="info") + + @property + def should_execute(self) -> bool: + """Return False if in dry-run mode.""" + return not self.dry_run + + # --- Auth diagnostics (available to all commands) --- + + def auth_step(self, step: str, success: bool, detail: str = ""): + """Log an auth 
resolution step (verbose only).""" + if self.verbose: + status = "✓" if success else "✗" + msg = f" auth: {status} {step}" + if detail: + msg += f" ({detail})" + _rich_echo(msg, color="dim") + + def auth_resolved(self, ctx): + """Log the resolved auth context (verbose only). + + Args: + ctx: AuthContext instance (imported lazily to avoid circular deps) + """ + if self.verbose: + source = getattr(ctx, "source", "unknown") + token_type = getattr(ctx, "token_type", "unknown") + has_token = getattr(ctx, "token", None) is not None + if has_token: + _rich_echo( + f" auth: resolved via {source} (type: {token_type})", color="dim" + ) + else: + _rich_echo(" auth: no credentials available", color="dim") + + # --- Summary --- + + def render_summary(self): + """Render diagnostic summary if any diagnostics were collected.""" + if self._diagnostics and self._diagnostics.has_diagnostics: + self._diagnostics.render_summary() + + +class InstallLogger(CommandLogger): + """Install-specific logger with validation, resolution, and download phases. + + Knows whether this is a partial install (specific packages requested) or + full install (all deps from apm.yml). Adjusts messages accordingly. 
+ """ + + def __init__( + self, verbose: bool = False, dry_run: bool = False, partial: bool = False + ): + super().__init__("install", verbose=verbose, dry_run=dry_run) + self.partial = partial # True when specific packages are passed to `apm install` + + # --- Validation phase --- + + def validation_start(self, count: int): + """Log start of package validation.""" + noun = "package" if count == 1 else "packages" + _rich_info(f"Validating {count} {noun}...", symbol="gear") + + def validation_pass(self, canonical: str, already_present: bool): + """Log a package that passed validation.""" + if already_present: + _rich_echo(f" ✓ {canonical} (already in apm.yml)", color="dim") + else: + _rich_success(f" ✓ {canonical}") + + def validation_fail(self, package: str, reason: str): + """Log a package that failed validation.""" + _rich_error(f" ✗ {package} — {reason}") + + def validation_summary(self, outcome: _ValidationOutcome): + """Log validation summary and decide whether to continue. + + Returns True if install should continue, False if all packages failed. + """ + if outcome.all_failed: + _rich_error("All packages failed validation. Nothing to install.") + return False + + if outcome.has_failures: + failed_count = len(outcome.invalid) + noun = "package" if failed_count == 1 else "packages" + _rich_warning( + f"{failed_count} {noun} failed validation and will be skipped." 
+ ) + + return True + + # --- Resolution phase --- + + def resolution_start(self, to_install_count: int, lockfile_count: int): + """Log start of dependency resolution.""" + if self.partial: + noun = "package" if to_install_count == 1 else "packages" + _rich_info( + f"Installing {to_install_count} new {noun}...", symbol="running" + ) + if lockfile_count > 0 and self.verbose: + _rich_echo( + f" ({lockfile_count} existing dependencies in lockfile)", + color="dim", + ) + else: + _rich_info("Installing dependencies from apm.yml...", symbol="running") + if lockfile_count > 0: + _rich_info( + f"Using apm.lock.yaml ({lockfile_count} locked dependencies)" + ) + + def nothing_to_install(self): + """Log when there's nothing to install — context-aware message.""" + if self.partial: + _rich_info("Requested packages are already installed.", symbol="check") + else: + _rich_success("All dependencies are up to date.", symbol="check") + + # --- Download phase --- + + def download_start(self, dep_name: str, cached: bool): + """Log start of a package download.""" + if cached: + self.verbose_detail(f" Using cached: {dep_name}") + elif self.verbose: + _rich_info(f" Downloading: {dep_name}", symbol="download") + + def download_complete(self, dep_name: str, ref_suffix: str = ""): + """Log completion of a package download.""" + msg = f" ✓ {dep_name}" + if ref_suffix: + msg += f" ({ref_suffix})" + _rich_echo(msg, color="green") + + def download_failed(self, dep_name: str, error: str): + """Log a download failure.""" + _rich_error(f" ✗ {dep_name} — {error}") + + # --- Install summary --- + + def install_summary(self, apm_count: int, mcp_count: int, errors: int = 0): + """Log final install summary.""" + parts = [] + if apm_count > 0: + noun = "dependency" if apm_count == 1 else "dependencies" + parts.append(f"{apm_count} APM {noun}") + if mcp_count > 0: + noun = "server" if mcp_count == 1 else "servers" + parts.append(f"{mcp_count} MCP {noun}") + + if parts: + summary = " and ".join(parts) + 
if errors > 0: + _rich_warning( + f"Installed {summary} with {errors} error(s).", symbol="warning" + ) + else: + _rich_success(f"Installed {summary}.", symbol="sparkles") + elif errors > 0: + _rich_error( + f"Installation failed with {errors} error(s).", symbol="error" + ) diff --git a/src/apm_cli/deps/github_downloader.py b/src/apm_cli/deps/github_downloader.py index 32cb3d1b..86841559 100644 --- a/src/apm_cli/deps/github_downloader.py +++ b/src/apm_cli/deps/github_downloader.py @@ -18,7 +18,7 @@ from git import Repo, RemoteProgress from git.exc import GitCommandError, InvalidGitRepositoryError -from ..core.token_manager import GitHubTokenManager +from ..core.auth import AuthResolver from ..models.apm_package import ( DependencyReference, PackageInfo, @@ -173,9 +173,11 @@ def _get_op_name(self, op_code): class GitHubPackageDownloader: """Downloads and validates APM packages from GitHub repositories.""" - def __init__(self): + def __init__(self, auth_resolver=None): """Initialize the GitHub package downloader.""" - self.token_manager = GitHubTokenManager() + from apm_cli.core.auth import AuthResolver + self.auth_resolver = auth_resolver or AuthResolver() + self.token_manager = self.auth_resolver._token_manager # Backward compat self.git_env = self._setup_git_environment() def _setup_git_environment(self) -> Dict[str, Any]: @@ -184,31 +186,8 @@ def _setup_git_environment(self) -> Dict[str, Any]: Returns: Dict containing environment variables for Git operations """ - # Use centralized token management env = self.token_manager.setup_environment() - - # Get tokens for modules (APM package access) - # GitHub: GITHUB_APM_PAT -> GITHUB_TOKEN -> GH_TOKEN -> git credential helpers - self.github_token = self.token_manager.get_token_with_credential_fallback( - 'modules', default_host(), env - ) - self.has_github_token = self.github_token is not None - self._github_token_from_credential_fill = ( - self.has_github_token - and self.token_manager.get_token_for_purpose('modules', 
env) is None - ) - - # Azure DevOps: ADO_APM_PAT - self.ado_token = self.token_manager.get_token_for_purpose('ado_modules', env) - self.has_ado_token = self.ado_token is not None - - # JFrog Artifactory: ARTIFACTORY_APM_TOKEN - self.artifactory_token = self.token_manager.get_token_for_purpose('artifactory_modules', env) - self.has_artifactory_token = self.artifactory_token is not None - _debug(f"Token setup: has_github_token={self.has_github_token}, has_ado_token={self.has_ado_token}, has_artifactory_token={self.has_artifactory_token}" - f"{', source=credential_helper' if self._github_token_from_credential_fill else ''}") - # Configure Git security settings env['GIT_TERMINAL_PROMPT'] = '0' env['GIT_ASKPASS'] = 'echo' # Prevent interactive credential prompts @@ -222,6 +201,28 @@ def _setup_git_environment(self) -> Dict[str, Any]: env['GIT_CONFIG_GLOBAL'] = empty_cfg else: env['GIT_CONFIG_GLOBAL'] = '/dev/null' + + # Resolve default host tokens via AuthResolver (backward compat properties) + default_ctx = self.auth_resolver.resolve(default_host()) + self._default_github_ctx = default_ctx + self.github_token = default_ctx.token + self.has_github_token = default_ctx.token is not None + self._github_token_from_credential_fill = ( + self.has_github_token + and self.token_manager.get_token_for_purpose('modules', env) is None + ) + + # Azure DevOps + ado_ctx = self.auth_resolver.resolve("dev.azure.com") + self.ado_token = ado_ctx.token + self.has_ado_token = ado_ctx.token is not None + + # JFrog Artifactory (not host-based, uses dedicated env var) + self.artifactory_token = self.token_manager.get_token_for_purpose('artifactory_modules', env) + self.has_artifactory_token = self.artifactory_token is not None + + _debug(f"Token setup: has_github_token={self.has_github_token}, has_ado_token={self.has_ado_token}, has_artifactory_token={self.has_artifactory_token}" + f"{', source=credential_helper' if self._github_token_from_credential_fill else ''}") return env @@ -486,7 +487,7 
@@ def _sanitize_git_error(self, error_message: str) -> str: return sanitized - def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: DependencyReference = None) -> str: + def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: DependencyReference = None, token: Optional[str] = None) -> str: """Build the appropriate repository URL for cloning. Supports both GitHub and Azure DevOps URL formats: @@ -497,6 +498,7 @@ def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: Depende repo_ref: Repository reference in format "owner/repo" or "org/project/repo" for ADO use_ssh: Whether to use SSH URL for git operations dep_ref: Optional DependencyReference for ADO-specific URL building + token: Optional per-dependency token override Returns: str: Repository URL suitable for git clone operations @@ -510,6 +512,10 @@ def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: Depende # Check if this is Azure DevOps (either via dep_ref or host detection) is_ado = (dep_ref and dep_ref.is_azure_devops()) or is_azure_devops_hostname(host) + # Use provided token or fall back to instance default + github_token = token if token is not None else self.github_token + ado_token = token if (token is not None and is_ado) else self.ado_token + _debug(f"_build_repo_url: host={host}, is_ado={is_ado}, dep_ref={'present' if dep_ref else 'None'}, " f"ado_org={dep_ref.ado_organization if dep_ref else None}") @@ -517,12 +523,12 @@ def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: Depende # Use Azure DevOps URL builders with ADO-specific token if use_ssh: return build_ado_ssh_url(dep_ref.ado_organization, dep_ref.ado_project, dep_ref.ado_repo) - elif self.ado_token: + elif ado_token: return build_ado_https_clone_url( dep_ref.ado_organization, dep_ref.ado_project, dep_ref.ado_repo, - token=self.ado_token, + token=ado_token, host=host ) else: @@ -537,9 +543,9 @@ def _build_repo_url(self, repo_ref: str, use_ssh: 
bool = False, dep_ref: Depende is_github = is_github_hostname(host) if use_ssh: return build_ssh_url(host, repo_ref) - elif is_github and self.github_token: + elif is_github and github_token: # Only send GitHub tokens to GitHub hosts - return build_https_clone_url(host, repo_ref, token=self.github_token) + return build_https_clone_url(host, repo_ref, token=github_token) else: # Generic hosts: plain HTTPS, let git credential helpers handle auth return build_https_clone_url(host, repo_ref, token=None) @@ -576,8 +582,17 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r is_github = True is_generic = not is_ado and not is_github - # Tokens are only valid for their matching host type - has_token = self.ado_token if is_ado else (self.github_token if is_github else None) + # Resolve per-dependency token via AuthResolver. + # Only use resolved token for GitHub/ADO hosts — generic hosts (GitLab, + # Bitbucket, etc.) delegate auth to git credential helpers. + if dep_ref and not is_generic: + dep_ctx = self.auth_resolver.resolve_for_dep(dep_ref) + dep_token = dep_ctx.token + elif is_generic: + dep_token = None + else: + dep_token = self.github_token # fallback + has_token = dep_token _debug(f"_clone_with_fallback: repo={repo_url_base}, is_ado={is_ado}, is_generic={is_generic}, has_token={has_token is not None}") @@ -594,7 +609,7 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r # Method 1: Try authenticated HTTPS if token is available (GitHub/ADO only) if has_token: try: - auth_url = self._build_repo_url(repo_url_base, use_ssh=False, dep_ref=dep_ref) + auth_url = self._build_repo_url(repo_url_base, use_ssh=False, dep_ref=dep_ref, token=dep_token) _debug(f"Attempting clone with authenticated HTTPS (URL sanitized)") return Repo.clone_from(auth_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) except GitCommandError as e: @@ -620,7 +635,8 @@ def _clone_with_fallback(self, repo_url_base: 
str, target_path: Path, progress_r error_msg = f"Failed to clone repository {repo_url_base} using all available methods. " configured_host = os.environ.get("GITHUB_HOST", "") if is_ado and not self.has_ado_token: - error_msg += "For private Azure DevOps repositories, set ADO_APM_PAT environment variable." + host = dep_host or "dev.azure.com" + error_msg += self.auth_resolver.build_error_context(host, "clone", org=dep_ref.ado_organization if dep_ref else None) elif is_generic: host_name = dep_host or "the target host" error_msg += ( @@ -638,8 +654,9 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r f"use the full hostname in apm.yml: {suggested}" ) elif not self.has_github_token: - error_msg += "For private repositories, set GITHUB_APM_PAT or GITHUB_TOKEN environment variable, " \ - "or ensure SSH keys are configured." + host = dep_host or default_host() + org = dep_ref.repo_url.split('/')[0] if dep_ref and dep_ref.repo_url else None + error_msg += self.auth_resolver.build_error_context(host, "clone", org=org) else: error_msg += "Please check repository access permissions and authentication setup." @@ -759,11 +776,9 @@ def resolve_git_reference(self, repo_ref: Union[str, "DependencyReference"]) -> # Check if this might be a private repository access issue if "Authentication failed" in str(e) or "remote: Repository not found" in str(e): error_msg = f"Failed to clone repository {dep_ref.repo_url}. " - if not self.has_github_token: - error_msg += "This might be a private repository that requires authentication. " \ - "Please set GITHUB_APM_PAT or GITHUB_TOKEN environment variable." - else: - error_msg += "Authentication failed. Please check your GitHub token permissions." 
+ host = dep_ref.host or default_host() + org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url else None + error_msg += self.auth_resolver.build_error_context(host, "resolve reference", org=org) raise RuntimeError(error_msg) else: sanitized_error = self._sanitize_git_error(str(e)) @@ -891,7 +906,7 @@ def _download_ado_file(self, dep_ref: DependencyReference, file_path: str, ref: elif e.response.status_code == 401 or e.response.status_code == 403: error_msg = f"Authentication failed for Azure DevOps {dep_ref.repo_url}. " if not self.ado_token: - error_msg += "Please set ADO_APM_PAT with an Azure DevOps PAT with Code (Read) scope." + error_msg += self.auth_resolver.build_error_context(host, "download", org=dep_ref.ado_organization if dep_ref else None) else: error_msg += "Please check your Azure DevOps PAT permissions." raise RuntimeError(error_msg) @@ -937,11 +952,20 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re # Parse owner/repo from repo_url owner, repo = dep_ref.repo_url.split('/', 1) + # Resolve token via AuthResolver for CDN fast-path decision + org = None + if dep_ref and dep_ref.repo_url: + parts = dep_ref.repo_url.split('/') + if parts: + org = parts[0] + file_ctx = self.auth_resolver.resolve(host, org) + token = file_ctx.token + # --- CDN fast-path for github.com without a token --- # raw.githubusercontent.com is served from GitHub's CDN and is not # subject to the REST API rate limit (60 req/h unauthenticated). # Only available for github.com — GHES/GHE-DR have no equivalent. 
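The CDN fast-path decision above reduces to a three-way URL choice. The sketch below is a hypothetical standalone helper (`pick_download_url` does not exist in the patch); it only illustrates the branching that `_download_github_file` performs: unauthenticated github.com reads go to the raw CDN (not subject to the 60 req/h REST limit), authenticated github.com reads use `api.github.com`, and GHES-style hosts use the `/api/v3` prefix as in the surrounding diff.

```python
def pick_download_url(host, owner, repo, ref, path, token):
    """Choose the download URL for a single file, mirroring the
    fast-path logic: raw CDN only for unauthenticated github.com."""
    if host.lower() == "github.com":
        if not token:
            # CDN fast-path — avoids the unauthenticated REST rate limit.
            return f"https://raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}"
        return f"https://api.github.com/repos/{owner}/{repo}/contents/{path}?ref={ref}"
    # GHES / GHE-DR have no raw CDN equivalent; always hit the API.
    return f"https://{host}/api/v3/repos/{owner}/{repo}/contents/{path}?ref={ref}"
```

A private-repo download with a resolved token therefore never touches the CDN, while a public github.com read without credentials never spends API quota.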
- if host.lower() == "github.com" and not self.github_token: + if host.lower() == "github.com" and not token: content = self._try_raw_download(owner, repo, ref, file_path) if content is not None: return content @@ -964,15 +988,6 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re else: api_url = f"https://{host}/api/v3/repos/{owner}/{repo}/contents/{file_path}?ref={ref}" - # Resolve the best available token for this host. - # self.github_token is pre-resolved for the default host during __init__; - # for non-default hosts, query credential fill for that specific host - # (env vars like GITHUB_APM_PAT are intended for the default host). - if host.lower() == default_host().lower(): - token = self.github_token - else: - token = self.token_manager.resolve_credential_from_git(host) - # Set up authentication headers headers = { 'Accept': 'application/vnd.github.v3.raw' # Returns raw content directly @@ -1031,7 +1046,7 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re if not token: error_msg += ( "Unauthenticated requests are limited to 60/hour (shared per IP). " - "Set GITHUB_APM_PAT, GITHUB_TOKEN, or GH_TOKEN to increase the limit to 5,000/hour." + + self.auth_resolver.build_error_context(host, "API request (rate limited)", org=owner) ) else: error_msg += ( @@ -1054,11 +1069,7 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re pass # Fall through to the original error error_msg = f"Authentication failed for {dep_ref.repo_url} (file: {file_path}, ref: {ref}). " if not token: - error_msg += ( - "This might be a private repository. " - "Set GITHUB_APM_PAT, GITHUB_TOKEN, or GH_TOKEN, or run 'gh auth login' " - "so APM can discover your credentials automatically." 
- ) + error_msg += self.auth_resolver.build_error_context(host, "download", org=owner) elif token and not host.lower().endswith(".ghe.com"): error_msg += ( "Both authenticated and unauthenticated access were attempted. " @@ -1900,11 +1911,9 @@ def download_package( # Check if this might be a private repository access issue if "Authentication failed" in str(e) or "remote: Repository not found" in str(e): error_msg = f"Failed to clone repository {dep_ref.repo_url}. " - if not self.has_github_token: - error_msg += "This might be a private repository that requires authentication. " \ - "Please set GITHUB_APM_PAT or GITHUB_TOKEN environment variable." - else: - error_msg += "Authentication failed. Please check your GitHub token permissions." + host = dep_ref.host or default_host() + org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url else None + error_msg += self.auth_resolver.build_error_context(host, "clone", org=org) raise RuntimeError(error_msg) else: sanitized_error = self._sanitize_git_error(str(e)) diff --git a/src/apm_cli/registry/operations.py b/src/apm_cli/registry/operations.py index 5fc9fa2d..68089122 100644 --- a/src/apm_cli/registry/operations.py +++ b/src/apm_cli/registry/operations.py @@ -7,6 +7,7 @@ import requests +from ..core.token_manager import GitHubTokenManager from .client import SimpleRegistryClient logger = logging.getLogger(__name__) @@ -329,9 +330,10 @@ def _prompt_for_environment_variables(self, required_vars: Dict[str, Dict]) -> D if var_name == 'GITHUB_DYNAMIC_TOOLSETS': env_vars[var_name] = '1' # Enable dynamic toolsets for GitHub MCP server elif 'token' in var_name.lower() or 'key' in var_name.lower(): - # For tokens/keys, try environment defaults with fallback chain - # Priority: GITHUB_APM_PAT (APM modules) > GITHUB_TOKEN (user tokens) - env_vars[var_name] = os.getenv('GITHUB_APM_PAT') or os.getenv('GITHUB_TOKEN', '') + # Use centralized token manager for consistent precedence + # (GITHUB_APM_PAT → GITHUB_TOKEN → GH_TOKEN) + _tm = 
GitHubTokenManager() + env_vars[var_name] = _tm.get_token_for_purpose('modules') or '' else: # For other variables, use empty string or reasonable default env_vars[var_name] = '' diff --git a/src/apm_cli/utils/diagnostics.py b/src/apm_cli/utils/diagnostics.py index e06b1771..82868061 100644 --- a/src/apm_cli/utils/diagnostics.py +++ b/src/apm_cli/utils/diagnostics.py @@ -24,10 +24,12 @@ CATEGORY_WARNING = "warning" CATEGORY_ERROR = "error" CATEGORY_SECURITY = "security" +CATEGORY_AUTH = "auth" CATEGORY_INFO = "info" _CATEGORY_ORDER = [ CATEGORY_SECURITY, + CATEGORY_AUTH, CATEGORY_COLLISION, CATEGORY_OVERWRITE, CATEGORY_WARNING, @@ -142,6 +144,18 @@ def info(self, message: str, package: str = "", detail: str = "") -> None: ) ) + def auth(self, message: str, package: str = "", detail: str = "") -> None: + """Record an authentication diagnostic (credential resolution, fallback, EMU detection).""" + with self._lock: + self._diagnostics.append( + Diagnostic( + message=message, + category=CATEGORY_AUTH, + package=package, + detail=detail, + ) + ) + # ------------------------------------------------------------------ # Query helpers # ------------------------------------------------------------------ @@ -160,6 +174,11 @@ def security_count(self) -> int: """Return number of security findings.""" return sum(1 for d in self._diagnostics if d.category == CATEGORY_SECURITY) + @property + def auth_count(self) -> int: + """Return number of auth diagnostics.""" + return sum(1 for d in self._diagnostics if d.category == CATEGORY_AUTH) + @property def has_critical_security(self) -> bool: """Return True if any critical-severity security finding exists.""" @@ -210,6 +229,8 @@ def render_summary(self) -> None: if cat == CATEGORY_SECURITY: self._render_security_group(items) + elif cat == CATEGORY_AUTH: + self._render_auth_group(items) elif cat == CATEGORY_COLLISION: self._render_collision_group(items) elif cat == CATEGORY_OVERWRITE: @@ -271,6 +292,19 @@ def _render_security_group(self, 
items: List[Diagnostic]) -> None: f" [i] {len(info)} file(s) contain unusual characters" ) + def _render_auth_group(self, items: List[Diagnostic]) -> None: + """Render auth diagnostics group.""" + count = len(items) + noun = "issue" if count == 1 else "issues" + _rich_warning(f" [!] {count} authentication {noun}") + for d in items: + pkg_prefix = f"[{d.package}] " if d.package else "" + _rich_echo(f" └─ {pkg_prefix}{d.message}", color="yellow") + if d.detail and self.verbose: + _rich_echo(f" {d.detail}", color="dim") + if not self.verbose: + _rich_info(" Run with --verbose for auth resolution details") + def _render_collision_group(self, items: List[Diagnostic]) -> None: count = len(items) noun = "file" if count == 1 else "files" diff --git a/tests/test_github_downloader_token_precedence.py b/tests/test_github_downloader_token_precedence.py index 40c7416e..dba9c79d 100644 --- a/tests/test_github_downloader_token_precedence.py +++ b/tests/test_github_downloader_token_precedence.py @@ -4,8 +4,8 @@ from unittest.mock import patch import pytest -from src.apm_cli.deps.github_downloader import GitHubPackageDownloader -from src.apm_cli.core.token_manager import GitHubTokenManager +from apm_cli.deps.github_downloader import GitHubPackageDownloader +from apm_cli.core.token_manager import GitHubTokenManager from apm_cli.utils import github_host diff --git a/tests/unit/test_auth.py b/tests/unit/test_auth.py new file mode 100644 index 00000000..5c2dbc98 --- /dev/null +++ b/tests/unit/test_auth.py @@ -0,0 +1,359 @@ +"""Unit tests for AuthResolver, HostInfo, and AuthContext.""" + +import os +from unittest.mock import patch + +import pytest + +from apm_cli.core.auth import AuthResolver, HostInfo, AuthContext +from apm_cli.core.token_manager import GitHubTokenManager + + +# --------------------------------------------------------------------------- +# TestClassifyHost +# --------------------------------------------------------------------------- + +class TestClassifyHost: + def 
test_github_com(self): + hi = AuthResolver.classify_host("github.com") + assert hi.kind == "github" + assert hi.has_public_repos is True + assert hi.api_base == "https://api.github.com" + + def test_ghe_cloud(self): + hi = AuthResolver.classify_host("contoso.ghe.com") + assert hi.kind == "ghe_cloud" + assert hi.has_public_repos is False + assert hi.api_base == "https://contoso.ghe.com/api/v3" + + def test_ado(self): + hi = AuthResolver.classify_host("dev.azure.com") + assert hi.kind == "ado" + + def test_visualstudio(self): + hi = AuthResolver.classify_host("myorg.visualstudio.com") + assert hi.kind == "ado" + + def test_ghes_via_env(self): + """GITHUB_HOST set to a custom FQDN → GHES.""" + with patch.dict(os.environ, {"GITHUB_HOST": "github.mycompany.com"}): + hi = AuthResolver.classify_host("github.mycompany.com") + assert hi.kind == "ghes" + + def test_generic_fqdn(self): + hi = AuthResolver.classify_host("gitlab.com") + assert hi.kind == "generic" + + def test_case_insensitive(self): + hi = AuthResolver.classify_host("GitHub.COM") + assert hi.kind == "github" + + +# --------------------------------------------------------------------------- +# TestDetectTokenType +# --------------------------------------------------------------------------- + +class TestDetectTokenType: + def test_fine_grained(self): + assert AuthResolver.detect_token_type("github_pat_abc123") == "fine-grained" + + def test_classic(self): + assert AuthResolver.detect_token_type("ghp_abc123") == "classic" + + def test_emu(self): + assert AuthResolver.detect_token_type("ghu_abc123") == "emu" + + def test_oauth(self): + assert AuthResolver.detect_token_type("gho_abc123") == "classic" + + def test_server_to_server(self): + assert AuthResolver.detect_token_type("ghs_abc123") == "classic" + + def test_refresh(self): + assert AuthResolver.detect_token_type("ghr_abc123") == "classic" + + def test_unknown(self): + assert AuthResolver.detect_token_type("some-random-token") == "unknown" + + +# 
--------------------------------------------------------------------------- +# TestResolve +# --------------------------------------------------------------------------- + +class TestResolve: + def test_per_org_env_var(self): + """GITHUB_APM_PAT_MICROSOFT takes precedence for org 'microsoft'.""" + with patch.dict(os.environ, { + "GITHUB_APM_PAT_MICROSOFT": "org-specific-token", + "GITHUB_APM_PAT": "global-token", + }, clear=False): + resolver = AuthResolver() + ctx = resolver.resolve("github.com", org="microsoft") + assert ctx.token == "org-specific-token" + assert ctx.source == "GITHUB_APM_PAT_MICROSOFT" + + def test_per_org_with_hyphens(self): + """Org name with hyphens → underscores in env var.""" + with patch.dict(os.environ, { + "GITHUB_APM_PAT_CONTOSO_MICROSOFT": "emu-token", + }, clear=False): + resolver = AuthResolver() + ctx = resolver.resolve("github.com", org="contoso-microsoft") + assert ctx.token == "emu-token" + assert ctx.source == "GITHUB_APM_PAT_CONTOSO_MICROSOFT" + + def test_falls_back_to_global(self): + """No per-org var → falls back to GITHUB_APM_PAT.""" + with patch.dict(os.environ, { + "GITHUB_APM_PAT": "global-token", + }, clear=True): + resolver = AuthResolver() + ctx = resolver.resolve("github.com", org="unknown-org") + assert ctx.token == "global-token" + assert ctx.source == "GITHUB_APM_PAT" + + def test_no_token_returns_none(self): + """No tokens at all → token is None.""" + with patch.dict(os.environ, {}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + ctx = resolver.resolve("github.com") + assert ctx.token is None + assert ctx.source == "none" + + def test_caching(self): + """Second call returns cached result.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + ctx1 = 
resolver.resolve("github.com", org="microsoft") + ctx2 = resolver.resolve("github.com", org="microsoft") + assert ctx1 is ctx2 + + def test_different_orgs_different_cache(self): + """Different orgs get different cache entries.""" + with patch.dict(os.environ, { + "GITHUB_APM_PAT_ORG_A": "token-a", + "GITHUB_APM_PAT_ORG_B": "token-b", + }, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + ctx_a = resolver.resolve("github.com", org="org-a") + ctx_b = resolver.resolve("github.com", org="org-b") + assert ctx_a.token == "token-a" + assert ctx_b.token == "token-b" + + def test_ado_token(self): + """ADO host resolves ADO_APM_PAT.""" + with patch.dict(os.environ, {"ADO_APM_PAT": "ado-token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + ctx = resolver.resolve("dev.azure.com") + assert ctx.token == "ado-token" + + def test_credential_fallback(self): + """Falls back to git credential helper when no env vars.""" + with patch.dict(os.environ, {}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value="cred-token" + ): + resolver = AuthResolver() + ctx = resolver.resolve("github.com") + assert ctx.token == "cred-token" + assert ctx.source == "git-credential-fill" + + def test_git_env_has_lockdown(self): + """Resolved context has git security env vars.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + ctx = resolver.resolve("github.com") + assert ctx.git_env.get("GIT_TERMINAL_PROMPT") == "0" + + +# --------------------------------------------------------------------------- +# TestTryWithFallback +# --------------------------------------------------------------------------- + +class TestTryWithFallback: + 
def test_unauth_first_succeeds(self): + """Unauth-first: if unauth works, auth is never tried.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + return "success" + + result = resolver.try_with_fallback("github.com", op, unauth_first=True) + assert result == "success" + assert calls == [None] + + def test_unauth_first_falls_back_to_auth(self): + """Unauth-first: if unauth fails, retries with token.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + if token is None: + raise RuntimeError("Unauthorized") + return "success" + + result = resolver.try_with_fallback("github.com", op, unauth_first=True) + assert result == "success" + assert calls == [None, "token"] + + def test_ghe_cloud_auth_only(self): + """*.ghe.com: auth-only, no unauth fallback. 
Uses git credential (not global env).""" + with patch.dict(os.environ, {}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value="ghe-cred" + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + return "success" + + result = resolver.try_with_fallback( + "contoso.ghe.com", op, unauth_first=True + ) + assert result == "success" + # GHE Cloud has no public repos → unauth skipped, auth called once + assert calls == ["ghe-cred"] + + def test_auth_first_succeeds(self): + """Auth-first (default): auth works, unauth not tried.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + return "success" + + result = resolver.try_with_fallback("github.com", op) + assert result == "success" + assert calls == ["token"] + + def test_auth_first_falls_back_to_unauth(self): + """Auth-first: if auth fails on public host, retries unauthenticated.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + if token is not None: + raise RuntimeError("Token expired") + return "success" + + result = resolver.try_with_fallback("github.com", op) + assert result == "success" + assert calls == ["token", None] + + def test_no_token_tries_unauth(self): + """No token available: tries unauthenticated directly.""" + with patch.dict(os.environ, {}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + return "success" + + result = resolver.try_with_fallback("github.com", op) 
+ assert result == "success" + assert calls == [None] + + def test_verbose_callback(self): + """verbose_callback is called at each step.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + messages = [] + + def op(token, env): + return "ok" + + resolver.try_with_fallback( + "github.com", op, verbose_callback=messages.append + ) + assert len(messages) > 0 + + +# --------------------------------------------------------------------------- +# TestBuildErrorContext +# --------------------------------------------------------------------------- + +class TestBuildErrorContext: + def test_no_token_message(self): + with patch.dict(os.environ, {}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + msg = resolver.build_error_context("github.com", "clone") + assert "GITHUB_APM_PAT" in msg + assert "--verbose" in msg + + def test_emu_detection(self): + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghu_emu_token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + msg = resolver.build_error_context("github.com", "clone") + assert "EMU" in msg + + def test_multi_org_hint(self): + with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + msg = resolver.build_error_context( + "github.com", "clone", org="microsoft" + ) + assert "GITHUB_APM_PAT_MICROSOFT" in msg + + def test_token_present_shows_source(self): + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_tok"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + msg = 
resolver.build_error_context("github.com", "clone") + assert "GITHUB_APM_PAT" in msg + assert "SAML SSO" in msg diff --git a/tests/unit/test_auth_scoping.py b/tests/unit/test_auth_scoping.py index c7a38cf2..dc065642 100644 --- a/tests/unit/test_auth_scoping.py +++ b/tests/unit/test_auth_scoping.py @@ -135,7 +135,24 @@ def _run_clone(self, dl, dep, succeed_on=1): else: effects.append(GitCommandError("clone", "failed")) - with patch('apm_cli.deps.github_downloader.Repo') as MockRepo: + # Reconstruct the env matching construction so per-dep resolution + # via AuthResolver sees the same tokens the downloader was built with. + env_vars = {} + if dl.github_token: + env_vars["GITHUB_APM_PAT"] = dl.github_token + if dl.ado_token: + env_vars["ADO_APM_PAT"] = dl.ado_token + + # Clear the resolver cache so resolve_for_dep re-resolves with the + # controlled env rather than returning stale entries. + dl.auth_resolver._cache.clear() + + with patch.dict(os.environ, env_vars, clear=True), \ + patch( + "apm_cli.core.token_manager.GitHubTokenManager.resolve_credential_from_git", + return_value=None, + ), \ + patch('apm_cli.deps.github_downloader.Repo') as MockRepo: MockRepo.clone_from.side_effect = effects target = Path(tempfile.mkdtemp()) try: @@ -221,7 +238,13 @@ def test_generic_host_error_message_mentions_credential_helpers(self): dl = _make_downloader(github_token="ghp_TESTTOKEN") dep = _dep("https://gitlab.com/acme/rules.git") - with patch('apm_cli.deps.github_downloader.Repo') as MockRepo: + dl.auth_resolver._cache.clear() + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_TESTTOKEN"}, clear=True), \ + patch( + "apm_cli.core.token_manager.GitHubTokenManager.resolve_credential_from_git", + return_value=None, + ), \ + patch('apm_cli.deps.github_downloader.Repo') as MockRepo: MockRepo.clone_from.side_effect = GitCommandError("clone", "failed") target = Path(tempfile.mkdtemp()) try: diff --git a/tests/unit/test_command_logger.py b/tests/unit/test_command_logger.py new file 
mode 100644 index 00000000..3b7bb600 --- /dev/null +++ b/tests/unit/test_command_logger.py @@ -0,0 +1,305 @@ +"""Unit tests for CommandLogger, InstallLogger, and _ValidationOutcome.""" + +from unittest.mock import MagicMock, patch + +from apm_cli.core.command_logger import CommandLogger, InstallLogger, _ValidationOutcome + + +class TestValidationOutcome: + def test_all_failed(self): + outcome = _ValidationOutcome(valid=[], invalid=[("pkg", "not found")]) + assert outcome.all_failed is True + assert outcome.has_failures is True + + def test_partial_failure(self): + outcome = _ValidationOutcome( + valid=[("pkg1", False)], + invalid=[("pkg2", "not found")], + ) + assert outcome.all_failed is False + assert outcome.has_failures is True + + def test_all_valid(self): + outcome = _ValidationOutcome( + valid=[("pkg1", False), ("pkg2", True)], + invalid=[], + ) + assert outcome.all_failed is False + assert outcome.has_failures is False + + def test_new_packages(self): + outcome = _ValidationOutcome( + valid=[("pkg1", False), ("pkg2", True), ("pkg3", False)], + invalid=[], + ) + new = outcome.new_packages + assert len(new) == 2 + assert ("pkg1", False) in new + assert ("pkg3", False) in new + + def test_empty(self): + outcome = _ValidationOutcome(valid=[], invalid=[]) + assert outcome.all_failed is False + assert outcome.has_failures is False + + +class TestCommandLogger: + @patch("apm_cli.core.command_logger._rich_info") + def test_start(self, mock_info): + logger = CommandLogger("test") + logger.start("Starting operation...") + mock_info.assert_called_once_with("Starting operation...", symbol="running") + + @patch("apm_cli.core.command_logger._rich_success") + def test_success(self, mock_success): + logger = CommandLogger("test") + logger.success("Done!") + mock_success.assert_called_once_with("Done!", symbol="sparkles") + + @patch("apm_cli.core.command_logger._rich_error") + def test_error(self, mock_error): + logger = CommandLogger("test") + logger.error("Failed!") + 
mock_error.assert_called_once_with("Failed!", symbol="error") + + @patch("apm_cli.core.command_logger._rich_warning") + def test_warning(self, mock_warning): + logger = CommandLogger("test") + logger.warning("Careful!") + mock_warning.assert_called_once_with("Careful!", symbol="warning") + + @patch("apm_cli.core.command_logger._rich_echo") + def test_verbose_detail_when_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=True) + logger.verbose_detail("Some detail") + mock_echo.assert_called_once_with("Some detail", color="dim") + + @patch("apm_cli.core.command_logger._rich_echo") + def test_verbose_detail_when_not_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=False) + logger.verbose_detail("Some detail") + mock_echo.assert_not_called() + + def test_should_execute_default(self): + logger = CommandLogger("test") + assert logger.should_execute is True + + def test_should_execute_dry_run(self): + logger = CommandLogger("test", dry_run=True) + assert logger.should_execute is False + + def test_diagnostics_lazy_init(self): + logger = CommandLogger("test") + assert logger._diagnostics is None + diag = logger.diagnostics + assert diag is not None + assert logger.diagnostics is diag # Same instance + + def test_diagnostics_verbose_propagated(self): + logger = CommandLogger("test", verbose=True) + assert logger.diagnostics.verbose is True + + @patch("apm_cli.core.command_logger._rich_echo") + def test_auth_step_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=True) + logger.auth_step("Trying GITHUB_APM_PAT", success=True, detail="found") + mock_echo.assert_called_once() + call_args = mock_echo.call_args[0][0] + assert "✓" in call_args + assert "GITHUB_APM_PAT" in call_args + + @patch("apm_cli.core.command_logger._rich_echo") + def test_auth_step_not_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=False) + logger.auth_step("Trying GITHUB_APM_PAT", success=True) + mock_echo.assert_not_called() + + 
@patch("apm_cli.core.command_logger._rich_echo") + def test_auth_resolved_with_token(self, mock_echo): + logger = CommandLogger("test", verbose=True) + mock_ctx = MagicMock() + mock_ctx.source = "GITHUB_APM_PAT" + mock_ctx.token_type = "fine-grained" + mock_ctx.token = "some-token" + logger.auth_resolved(mock_ctx) + mock_echo.assert_called_once() + assert "GITHUB_APM_PAT" in mock_echo.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_echo") + def test_auth_resolved_no_token(self, mock_echo): + logger = CommandLogger("test", verbose=True) + mock_ctx = MagicMock() + mock_ctx.token = None + logger.auth_resolved(mock_ctx) + mock_echo.assert_called_once() + assert "no credentials" in mock_echo.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_echo") + def test_auth_resolved_not_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=False) + mock_ctx = MagicMock() + mock_ctx.token = "tok" + logger.auth_resolved(mock_ctx) + mock_echo.assert_not_called() + + def test_render_summary_no_diagnostics(self): + """render_summary with no diagnostics should not crash.""" + logger = CommandLogger("test") + logger.render_summary() # No-op, no diagnostics + + @patch("apm_cli.core.command_logger._rich_info") + def test_progress(self, mock_info): + logger = CommandLogger("test") + logger.progress("Processing 3 files...") + mock_info.assert_called_once_with("Processing 3 files...", symbol="info") + + @patch("apm_cli.core.command_logger._rich_info") + def test_dry_run_notice(self, mock_info): + logger = CommandLogger("test", dry_run=True) + logger.dry_run_notice("Would compile 3 files") + mock_info.assert_called_once_with( + "[dry-run] Would compile 3 files", symbol="info" + ) + + @patch("apm_cli.core.command_logger._rich_echo") + def test_auth_step_failure(self, mock_echo): + logger = CommandLogger("test", verbose=True) + logger.auth_step("Trying gh CLI", success=False) + mock_echo.assert_called_once() + assert "✗" in mock_echo.call_args[0][0] + + 
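The `TestCommandLogger` cases above pin down three behaviors: `verbose_detail` emits only when `verbose=True`, `should_execute` is the inverse of `dry_run`, and `diagnostics` is created lazily exactly once. A minimal self-contained sketch of that contract — illustrative only, not APM's actual `CommandLogger` (names and internals here are stand-ins):

```python
class SketchLogger:
    """Illustrative stand-in for the contract exercised above; not APM's code."""

    def __init__(self, command, verbose=False, dry_run=False):
        self.command = command
        self.verbose = verbose
        self.dry_run = dry_run
        self._diagnostics = None
        self.emitted = []  # stands in for rich console output

    @property
    def should_execute(self):
        # Dry-run mode reports what would happen but performs no work.
        return not self.dry_run

    def verbose_detail(self, message):
        # Detail lines are suppressed entirely outside verbose mode.
        if self.verbose:
            self.emitted.append(message)

    @property
    def diagnostics(self):
        # Lazily create one collector per logger and reuse it.
        if self._diagnostics is None:
            self._diagnostics = []
        return self._diagnostics
```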
+class TestInstallLogger: + def test_partial_flag(self): + logger = InstallLogger(partial=True) + assert logger.partial is True + assert logger.command == "install" + + @patch("apm_cli.core.command_logger._rich_info") + def test_validation_start(self, mock_info): + logger = InstallLogger() + logger.validation_start(3) + mock_info.assert_called_once_with("Validating 3 packages...", symbol="gear") + + @patch("apm_cli.core.command_logger._rich_info") + def test_validation_start_singular(self, mock_info): + logger = InstallLogger() + logger.validation_start(1) + mock_info.assert_called_once_with("Validating 1 package...", symbol="gear") + + @patch("apm_cli.core.command_logger._rich_success") + def test_validation_pass_new(self, mock_success): + logger = InstallLogger() + logger.validation_pass("microsoft/repo", already_present=False) + mock_success.assert_called_once() + + @patch("apm_cli.core.command_logger._rich_echo") + def test_validation_pass_existing(self, mock_echo): + logger = InstallLogger() + logger.validation_pass("microsoft/repo", already_present=True) + assert "already in apm.yml" in mock_echo.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_error") + def test_validation_fail(self, mock_error): + logger = InstallLogger() + logger.validation_fail("bad/pkg", "not accessible") + assert "bad/pkg" in mock_error.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_error") + def test_validation_summary_all_failed(self, mock_error): + logger = InstallLogger() + outcome = _ValidationOutcome(valid=[], invalid=[("pkg", "reason")]) + result = logger.validation_summary(outcome) + assert result is False + mock_error.assert_called() + + @patch("apm_cli.core.command_logger._rich_warning") + def test_validation_summary_partial_failure(self, mock_warning): + logger = InstallLogger() + outcome = _ValidationOutcome( + valid=[("pkg1", False)], + invalid=[("pkg2", "reason")], + ) + result = logger.validation_summary(outcome) + assert result is True + 
mock_warning.assert_called() + + def test_validation_summary_all_valid(self): + logger = InstallLogger() + outcome = _ValidationOutcome(valid=[("pkg", False)], invalid=[]) + result = logger.validation_summary(outcome) + assert result is True + + @patch("apm_cli.core.command_logger._rich_info") + def test_resolution_start_partial(self, mock_info): + logger = InstallLogger(partial=True) + logger.resolution_start(to_install_count=1, lockfile_count=4) + assert "1 new package" in mock_info.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_info") + def test_resolution_start_full(self, mock_info): + logger = InstallLogger(partial=False) + logger.resolution_start(to_install_count=4, lockfile_count=4) + first_call = mock_info.call_args_list[0][0][0] + assert "apm.yml" in first_call + # Second call shows lockfile info + second_call = mock_info.call_args_list[1][0][0] + assert "4 locked dependencies" in second_call + + @patch("apm_cli.core.command_logger._rich_info") + def test_nothing_to_install_partial(self, mock_info): + logger = InstallLogger(partial=True) + logger.nothing_to_install() + assert "already installed" in mock_info.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_success") + def test_nothing_to_install_full(self, mock_success): + logger = InstallLogger(partial=False) + logger.nothing_to_install() + assert "up to date" in mock_success.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_success") + def test_install_summary_apm_only(self, mock_success): + logger = InstallLogger() + logger.install_summary(apm_count=3, mcp_count=0) + assert "3 APM dependencies" in mock_success.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_success") + def test_install_summary_both(self, mock_success): + logger = InstallLogger() + logger.install_summary(apm_count=2, mcp_count=1) + call_msg = mock_success.call_args[0][0] + assert "APM" in call_msg + assert "MCP" in call_msg + + @patch("apm_cli.core.command_logger._rich_warning") + def 
test_install_summary_with_errors(self, mock_warning): + logger = InstallLogger() + logger.install_summary(apm_count=2, mcp_count=0, errors=1) + assert "error" in mock_warning.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_error") + def test_install_summary_all_errors(self, mock_error): + logger = InstallLogger() + logger.install_summary(apm_count=0, mcp_count=0, errors=3) + assert "3 error" in mock_error.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_error") + def test_download_failed(self, mock_error): + logger = InstallLogger() + logger.download_failed("pkg/repo", "timeout") + assert "pkg/repo" in mock_error.call_args[0][0] + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete(self, mock_echo): + logger = InstallLogger() + logger.download_complete("pkg/repo", ref_suffix="v1.0") + call_msg = mock_echo.call_args[0][0] + assert "pkg/repo" in call_msg + assert "v1.0" in call_msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_no_ref(self, mock_echo): + logger = InstallLogger() + logger.download_complete("pkg/repo") + assert "pkg/repo" in mock_echo.call_args[0][0] From fdc7d087a1762cbed61fdc19e4e5e6967667171e Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 00:20:13 +0100 Subject: [PATCH 02/40] feat: InstallLogger wiring, auth docs, CHANGELOG, test updates Install UX overhaul (3a): - Wire InstallLogger into install.py with semantic lifecycle methods - Short-circuit on total validation failure (no more misleading counts) - Context-aware messages: partial vs full, new vs locked dependencies - Demote noise (lockfile info, MCP transitive, orphan cleanup) to verbose Documentation (5a, 5b): - Rewrite authentication.md: resolution chain, token lookup table, per-org setup, EMU/GHE Cloud/GHES sections, troubleshooting - CHANGELOG entry under [Unreleased] for #393 Test updates (4c): - 11 new CATEGORY_AUTH tests in test_diagnostics.py - Fix tuple unpacking in 
test_canonicalization, test_dev_dependencies, test_generic_git_url_install (validation now returns outcome) - Update error message assertion in test_install_command All 2839 tests pass. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- CHANGELOG.md | 17 + .../docs/getting-started/authentication.md | 217 ++++------- .../content/docs/guides/private-packages.md | 2 +- src/apm_cli/commands/install.py | 359 +++++++++++------- .../test_generic_git_url_install.py | 16 +- tests/unit/test_canonicalization.py | 30 +- tests/unit/test_dev_dependencies.py | 4 +- tests/unit/test_diagnostics.py | 124 ++++++ tests/unit/test_install_command.py | 2 +- 9 files changed, 473 insertions(+), 298 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 566b3460..2e8f7954 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,6 +8,23 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] +### Added + +- `AuthResolver` — centralized per-(host, org) token resolution with host classification (github.com, `*.ghe.com`, GHES, ADO), per-org env vars (`GITHUB_APM_PAT_{ORG}`), EMU token detection, and try-with-fallback strategy (#393) +- `CommandLogger` base class with semantic lifecycle methods (start/progress/success/error), verbose/dry-run support, and lazy `DiagnosticCollector`; `InstallLogger` subclass for validation/resolution/download phases (#393) +- `CATEGORY_AUTH` in `DiagnosticCollector` for auth diagnostic grouping (#393) + +### Changed + +- `_validate_package_exists()` tries unauthenticated first, falls back to `AuthResolver` — fixes EMU org installs (#393) +- `github_downloader.py` uses per-dependency token resolution via `AuthResolver` in clone and download paths (#393) +- `copilot.py` and `operations.py` use `AuthResolver` instead of direct `os.getenv` bypasses (#393) +- 7 hardcoded auth error messages replaced with actionable, context-aware messages (#393) + +### Security + +- Global env vars (`GITHUB_APM_PAT`) no 
longer leak to non-default hosts; enterprise hosts resolve via per-org env vars or git credentials only (#393) + ## [0.8.3] - 2026-03-20 ### Added diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index 559c757a..b7d553c3 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -4,215 +4,146 @@ sidebar: order: 4 --- -APM works without any tokens for public packages. Authentication is only needed for private repositories and enterprise hosts. +APM works without tokens for public packages on github.com. Authentication is needed for private repositories, enterprise hosts (`*.ghe.com`, GHES), and Azure DevOps. -## How APM Authenticates +## How APM resolves authentication -APM resolves dependencies either via `git clone` (for full packages) or the GitHub API (for individual files). Authentication depends on the host: +APM resolves tokens per `(host, org)` pair. For each dependency, it walks a resolution chain until it finds a token: -| Host | Token variable | How it's used | -|------|---------------|---------------| -| GitHub.com / GitHub Enterprise (`*.ghe.com`) | `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` | Injected into the HTTPS URL as `x-access-token` | -| Azure DevOps | `ADO_APM_PAT` | Injected into the HTTPS URL as the password | -| JFrog Artifactory | `ARTIFACTORY_APM_TOKEN` | Bearer token in HTTP `Authorization` header | -| Any other git host (including GitHub Enterprise on custom domains) | — | Delegated to **git credential helpers** or SSH keys | +1. **Per-org env var** — `GITHUB_APM_PAT_{ORG}` (checked for any host) +2. **Global env vars** — `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` (default host only) +3. 
**Git credential helper** — `git credential fill` (any host except ADO) -When APM has a token for a recognized host (GitHub.com, GitHub Enterprise under `*.ghe.com`, or Azure DevOps), it injects it directly and disables interactive prompts. When no token is available, or the host is treated as generic (including GitHub Enterprise on custom domains), APM relaxes the git environment so your existing credential helpers — `gh auth`, macOS Keychain, Windows Credential Manager, `git-credential-store`, etc. — can provide credentials transparently. +If nothing matches, APM attempts unauthenticated access (works for public repos on github.com). -For single-file downloads from GitHub (which use the GitHub API rather than `git clone`), APM also queries `git credential fill` as a last-resort fallback when no token environment variable is set. This means credentials stored by `gh auth login` or your OS keychain work for both folder-level and file-level dependencies. +Results are cached per-process — the same `(host, org)` pair is resolved once. -### Object-style `git:` references +### Security constraint -The `git:` object form in `apm.yml` lets you reference any git URL explicitly — HTTPS, SSH, or any host: +Global env vars (`GITHUB_APM_PAT`, `GITHUB_TOKEN`, `GH_TOKEN`) only apply to the default host (github.com unless `GITHUB_HOST` is set). Non-default hosts resolve via per-org env vars or git credentials. APM never sends a github.com token to an enterprise host. 
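The chain above can be sketched as a small function. This is an illustrative reduction under the documented rules, not APM's actual `AuthResolver` — the `git credential fill` step and per-process caching are elided, and the function name is hypothetical:

```python
import os

def resolve_token(host, org=None, default_host="github.com"):
    """Illustrative reduction of the documented chain; not APM's AuthResolver."""
    # 1. Per-org env var — consulted for any host.
    if org:
        token = os.environ.get("GITHUB_APM_PAT_" + org.upper().replace("-", "_"))
        if token:
            return token, "per-org env var"
    # 2. Global env vars — default host only, so a github.com token
    #    never leaks to GHES or *.ghe.com hosts.
    if host == default_host:
        for var in ("GITHUB_APM_PAT", "GITHUB_TOKEN", "GH_TOKEN"):
            token = os.environ.get(var)
            if token:
                return token, var
    # 3. `git credential fill` would be queried here (elided in this sketch).
    # 4. No match: fall through to unauthenticated access.
    return None, "unauthenticated"
```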
-```yaml -dependencies: - apm: - - git: https://gitlab.com/acme/coding-standards.git - path: instructions/security - ref: v2.0 - - git: git@bitbucket.org:team/rules.git - path: prompts/review.prompt.md -``` +## Token lookup -Authentication for these URLs follows the same rules: APM uses `GITHUB_APM_PAT` / `ADO_APM_PAT` for recognized hosts (GitHub.com and GitHub Enterprise under `*.ghe.com`, Azure DevOps), and falls back to your git credential helpers or SSH keys for everything else (including GitHub Enterprise on custom domains). If your GitLab, Bitbucket, GitHub Enterprise, or self-hosted git server is already configured in `~/.gitconfig` or your SSH agent, APM will work without any additional setup. +| Priority | Variable | Scope | Notes | +|----------|----------|-------|-------| +| 1 | `GITHUB_APM_PAT_{ORG}` | Per-org, any host | Org name uppercased, hyphens → underscores | +| 2 | `GITHUB_APM_PAT` | Default host only | github.com unless `GITHUB_HOST` overrides | +| 3 | `GITHUB_TOKEN` | Default host only | Shared with GitHub Actions | +| 4 | `GH_TOKEN` | Default host only | Set by `gh auth login` | +| 5 | `git credential fill` | Per-host | System credential manager, `gh auth`, OS keychain | -## Token Reference +For Azure DevOps, the only token source is `ADO_APM_PAT`. -### GITHUB_APM_PAT +For JFrog Artifactory, use `ARTIFACTORY_APM_TOKEN`. -```bash -export GITHUB_APM_PAT=github_pat_finegrained_token_here -``` +For runtime features (`GITHUB_COPILOT_PAT`), see [Agent Workflows](../../guides/agent-workflows/). 
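The per-org variable name in row 1 is derived mechanically from the org name. A one-liner capturing the rule from the Notes column (the helper name is illustrative, not part of APM's API):

```python
def per_org_var(org: str) -> str:
    # Uppercase the org and map hyphens to underscores, per the table above.
    return "GITHUB_APM_PAT_" + org.upper().replace("-", "_")

# e.g. per_org_var("contoso-microsoft") == "GITHUB_APM_PAT_CONTOSO_MICROSOFT"
```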
-- **Scope**: Private repositories on GitHub.com and GitHub Enterprise instances under `*.ghe.com` -- **Type**: [Fine-grained PAT](https://github.com/settings/personal-access-tokens/new) (org or user-scoped) -- **Permissions**: Repository read access -- **Fallback**: `GITHUB_TOKEN` (e.g., in GitHub Actions), then `GH_TOKEN` (used by the GitHub CLI) +## Multi-org setup -### ADO_APM_PAT +When your manifest pulls from multiple GitHub organizations, use per-org env vars: ```bash -export ADO_APM_PAT=your_ado_pat +export GITHUB_APM_PAT_CONTOSO=ghp_token_for_contoso +export GITHUB_APM_PAT_FABRIKAM=ghp_token_for_fabrikam ``` -- **Scope**: Private repositories on Azure DevOps -- **Type**: PAT created at `https://dev.azure.com/{org}/_usersSettings/tokens` -- **Permissions**: Code (Read) +The org name comes from the dependency reference — `contoso/my-package` checks `GITHUB_APM_PAT_CONTOSO`. Naming rules: -### GITHUB_COPILOT_PAT +- Uppercase the org name +- Replace hyphens with underscores +- `contoso-microsoft` → `GITHUB_APM_PAT_CONTOSO_MICROSOFT` -```bash -export GITHUB_COPILOT_PAT=ghp_copilot_token -``` +Per-org tokens take priority over global tokens. Use this when different orgs require different PATs (e.g., separate SSO authorizations). -- **Scope**: Runtime features (see [Agent Workflows](../../guides/agent-workflows/)) -- **Fallback**: `GITHUB_APM_PAT`, then `GITHUB_TOKEN` (e.g., in GitHub Actions) +## Enterprise (EMU / GHE Cloud) -### GITHUB_HOST +GHE Cloud hosts (`*.ghe.com`) are always auth-required — APM never attempts unauthenticated access. 
Set a per-org token: ```bash -export GITHUB_HOST=github.company.com +export GITHUB_APM_PAT_MYENTERPRISE=ghp_enterprise_token +apm install myenterprise.ghe.com/platform/standards ``` -- **Purpose**: Set default host for bare package names (e.g., `owner/repo`) -- **Default**: `github.com` -- **Note**: Azure DevOps has no equivalent — always use FQDN syntax - -## Common Setup Scenarios +### EMU tokens -#### Public Packages (No Setup) +Enterprise Managed User tokens (`ghu_` prefix) are scoped to the enterprise. They cannot access public repos on github.com. If your manifest mixes enterprise and public packages, use separate tokens: ```bash -apm install microsoft/apm-sample-package +export GITHUB_APM_PAT_MYENTERPRISE=ghu_emu_token # *.ghe.com only +export GITHUB_APM_PAT=ghp_public_token # github.com ``` -#### Private GitHub Packages +## GitHub Enterprise Server (GHES) -```bash -export GITHUB_APM_PAT=ghp_org_token -apm install your-org/private-package -``` - -#### Private Azure DevOps Packages - -```bash -export ADO_APM_PAT=your_ado_pat -apm install dev.azure.com/org/project/repo -``` - -#### GitHub Enterprise +Set `GITHUB_HOST` to your GHES instance. Bare package names resolve against this host: ```bash export GITHUB_HOST=github.company.com -export GITHUB_APM_PAT=ghp_enterprise_token -apm install team/package # → github.company.com/team/package +export GITHUB_APM_PAT_MYORG=ghp_ghes_token +apm install myorg/internal-package # → github.company.com/myorg/internal-package ``` -> When `GITHUB_HOST` is set, **all** bare package names resolve against that host. Use full hostnames for packages on other servers: -> ```yaml -> dependencies: -> apm: -> - team/internal-package # → GITHUB_HOST -> - github.com/public/open-source-package # → github.com -> ``` - -#### GitLab, Bitbucket, or Self-Hosted Git - -No APM-specific token is needed. 
Configure access using your standard git setup: +Use full hostnames for packages on other hosts: ```yaml -# SSH — if your key is in the SSH agent, it just works -- git: git@gitlab.com:acme/standards.git - -# HTTPS — relies on git credential helpers -- git: https://gitlab.com/acme/standards.git +dependencies: + apm: + - team/internal-package # → GITHUB_HOST + - github.com/public/open-source-package # → github.com ``` -To configure HTTPS credentials for a generic host, use any standard git credential helper: +Global env vars apply to whichever host `GITHUB_HOST` points to. Alternatively, skip env vars and configure `git credential fill` for your GHES host. + +## Azure DevOps ```bash -# gh CLI (GitHub-compatible forges) -gh auth login - -# Git credential store (any host) -git credential approve < **Note:** Artifactory downloads use zip archives, so `apm.lock` will not contain commit SHAs for Artifactory-sourced packages. diff --git a/docs/src/content/docs/guides/private-packages.md b/docs/src/content/docs/guides/private-packages.md index ae20ad59..696ffabd 100644 --- a/docs/src/content/docs/guides/private-packages.md +++ b/docs/src/content/docs/guides/private-packages.md @@ -36,7 +36,7 @@ dependencies: - your-org/my-private-package#v1.0.0 ``` -For GitLab, Bitbucket, or self-hosted git servers, use the `git:` object form and rely on your [existing git credentials](../../getting-started/authentication/#object-style-git-references): +For GitLab, Bitbucket, or self-hosted git servers, use the [`git:` object form](../dependencies/) and rely on your [existing git credentials](../../getting-started/authentication/): ```yaml dependencies: diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 88a67ab9..dbfa3c99 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -18,6 +18,7 @@ ) from ..drift import build_download_ref, detect_orphans, detect_ref_change from ..models.results import InstallResult +from 
..core.command_logger import InstallLogger, _ValidationOutcome from ..utils.console import _rich_error, _rich_info, _rich_success, _rich_warning from ..utils.diagnostics import DiagnosticCollector from ..utils.github_host import default_host, is_valid_fqdn @@ -25,7 +26,6 @@ from ._helpers import ( _create_minimal_apm_yml, _get_default_config, - _load_apm_config, _rich_blank_line, _update_gitignore_for_apm_modules, ) @@ -56,7 +56,7 @@ # --------------------------------------------------------------------------- -def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): +def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, logger=None): """Validate packages exist and can be accessed, then add to apm.yml dependencies section. Implements normalize-on-write: any input form (HTTPS URL, SSH URL, FQDN, shorthand) @@ -67,6 +67,10 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): packages: Package specifiers to validate and add. dry_run: If True, only show what would be added. dev: If True, write to devDependencies instead of dependencies. + logger: InstallLogger for structured output. + + Returns: + Tuple of (validated_packages list, _ValidationOutcome). 
""" import subprocess import tempfile @@ -81,7 +85,10 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): with open(apm_yml_path, "r") as f: data = yaml.safe_load(f) or {} except Exception as e: - _rich_error(f"Failed to read {APM_YML_FILENAME}: {e}") + if logger: + logger.error(f"Failed to read {APM_YML_FILENAME}: {e}") + else: + _rich_error(f"Failed to read {APM_YML_FILENAME}: {e}") sys.exit(1) # Ensure dependencies structure exists @@ -109,12 +116,23 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): continue # First, validate all packages - _rich_info(f"Validating {len(packages)} package(s)...") + valid_outcomes = [] # (canonical, already_present) tuples + invalid_outcomes = [] # (package, reason) tuples + + if logger: + logger.validation_start(len(packages)) + else: + _rich_info(f"Validating {len(packages)} package(s)...") for package in packages: # Validate package format (should be owner/repo, a git URL, or a local path) if "/" not in package and not DependencyReference.is_local_path(package): - _rich_error(f"Invalid package format: {package}. Use 'owner/repo' format.") + reason = "invalid format — use 'owner/repo'" + invalid_outcomes.append((package, reason)) + if logger: + logger.validation_fail(package, reason) + else: + _rich_error(f"Invalid package format: {package}. 
Use 'owner/repo' format.") continue # Canonicalize input @@ -123,7 +141,12 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): canonical = dep_ref.to_canonical() identity = dep_ref.get_identity() except ValueError as e: - _rich_error(f"Invalid package: {package} — {e}") + reason = str(e) + invalid_outcomes.append((package, reason)) + if logger: + logger.validation_fail(package, reason) + else: + _rich_error(f"Invalid package: {package} — {e}") continue # Check if package is already in dependencies (by identity) @@ -131,36 +154,64 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): # Validate package exists and is accessible if _validate_package_exists(package): - if already_in_deps: + valid_outcomes.append((canonical, already_in_deps)) + if logger: + logger.validation_pass(canonical, already_present=already_in_deps) + elif already_in_deps: _rich_info( f"✓ {canonical} - already in apm.yml, ensuring installation..." ) else: + _rich_info(f"✓ {canonical} - accessible") + + if not already_in_deps: validated_packages.append(canonical) existing_identities.add(identity) # prevent duplicates within batch - _rich_info(f"✓ {canonical} - accessible") else: - _rich_error(f"✗ {package} - not accessible or doesn't exist") + reason = "not accessible or doesn't exist (check auth or repo name)" + invalid_outcomes.append((package, reason)) + if logger: + logger.validation_fail(package, reason) + else: + _rich_error(f"✗ {package} - not accessible or doesn't exist") + + outcome = _ValidationOutcome(valid=valid_outcomes, invalid=invalid_outcomes) + + # Let the logger emit a summary and decide whether to continue + if logger: + should_continue = logger.validation_summary(outcome) + if not should_continue: + return [], outcome if not validated_packages: if dry_run: - _rich_warning("No new packages to add") + _rich_warning("No new packages to add") if not logger else None # If all packages already exist in apm.yml, that's OK - 
we'll reinstall them - return [] + return [], outcome if dry_run: - _rich_info( - f"Dry run: Would add {len(validated_packages)} package(s) to apm.yml:" - ) - for pkg in validated_packages: - _rich_info(f" + {pkg}") - return validated_packages + if logger: + logger.progress( + f"Dry run: Would add {len(validated_packages)} package(s) to apm.yml" + ) + for pkg in validated_packages: + logger.verbose_detail(f" + {pkg}") + else: + _rich_info( + f"Dry run: Would add {len(validated_packages)} package(s) to apm.yml:" + ) + for pkg in validated_packages: + _rich_info(f" + {pkg}") + return validated_packages, outcome # Add validated packages to dependencies (already canonical) dep_label = "devDependencies" if dev else "apm.yml" for package in validated_packages: current_deps.append(package) - _rich_info(f"Added {package} to {dep_label}") + if logger: + logger.verbose_detail(f"Added {package} to {dep_label}") + else: + _rich_info(f"Added {package} to {dep_label}") # Update dependencies data[dep_section]["apm"] = current_deps @@ -169,12 +220,18 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False): try: with open(apm_yml_path, "w") as f: yaml.safe_dump(data, f, default_flow_style=False, sort_keys=False) - _rich_success(f"Updated {APM_YML_FILENAME} with {len(validated_packages)} new package(s)") + if logger: + logger.success(f"Updated {APM_YML_FILENAME} with {len(validated_packages)} new package(s)") + else: + _rich_success(f"Updated {APM_YML_FILENAME} with {len(validated_packages)} new package(s)") except Exception as e: - _rich_error(f"Failed to write {APM_YML_FILENAME}: {e}") + if logger: + logger.error(f"Failed to write {APM_YML_FILENAME}: {e}") + else: + _rich_error(f"Failed to write {APM_YML_FILENAME}: {e}") sys.exit(1) - return validated_packages + return validated_packages, outcome def _validate_package_exists(package): @@ -377,6 +434,10 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo apm install --dry-run 
# Show what would be installed """ try: + # Create structured logger for install output + is_partial = bool(packages) + logger = InstallLogger(verbose=verbose, dry_run=dry_run, partial=is_partial) + # Check if apm.yml exists apm_yml_exists = Path(APM_YML_FILENAME).exists() @@ -386,30 +447,36 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo project_name = Path.cwd().name config = _get_default_config(project_name) _create_minimal_apm_yml(config) - _rich_success(f"Created {APM_YML_FILENAME}", symbol="sparkles") + logger.success(f"Created {APM_YML_FILENAME}") # Error when NO apm.yml AND NO packages if not apm_yml_exists and not packages: - _rich_error(f"No {APM_YML_FILENAME} found") - _rich_info("💡 Run 'apm init' to create one, or:") - _rich_info(" apm install to auto-create + install") + logger.error(f"No {APM_YML_FILENAME} found") + logger.progress("Run 'apm init' to create one, or:") + logger.progress(" apm install to auto-create + install") sys.exit(1) # If packages are specified, validate and add them to apm.yml first if packages: - validated_packages = _validate_and_add_packages_to_apm_yml( - packages, dry_run, dev=dev + validated_packages, outcome = _validate_and_add_packages_to_apm_yml( + packages, dry_run, dev=dev, logger=logger, ) + # Short-circuit: all packages failed validation — nothing to install + if outcome.all_failed: + return # Note: Empty validated_packages is OK if packages are already in apm.yml # We'll proceed with installation from apm.yml to ensure everything is synced - _rich_info("Installing dependencies from apm.yml...") + logger.resolution_start( + to_install_count=len(packages) if packages else 0, + lockfile_count=0, # Refined later inside _install_apm_dependencies + ) # Parse apm.yml to get both APM and MCP dependencies try: apm_package = APMPackage.from_apm_yml(Path(APM_YML_FILENAME)) except Exception as e: - _rich_error(f"Failed to parse {APM_YML_FILENAME}: {e}") + logger.error(f"Failed to parse 
{APM_YML_FILENAME}: {e}") sys.exit(1) # Get APM and MCP dependencies @@ -430,25 +497,25 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo # Show what will be installed if dry run if dry_run: - _rich_info("Dry run mode - showing what would be installed:") + logger.progress("Dry run mode - showing what would be installed:") if should_install_apm and apm_deps: - _rich_info(f"APM dependencies ({len(apm_deps)}):") + logger.progress(f"APM dependencies ({len(apm_deps)}):") for dep in apm_deps: action = "update" if update else "install" - _rich_info( - f" - {dep.repo_url}#{dep.reference or 'main'} → {action}" + logger.progress( + f" - {dep.repo_url}#{dep.reference or 'main'} -> {action}" ) if should_install_mcp and mcp_deps: - _rich_info(f"MCP dependencies ({len(mcp_deps)}):") + logger.progress(f"MCP dependencies ({len(mcp_deps)}):") for dep in mcp_deps: - _rich_info(f" - {dep}") + logger.progress(f" - {dep}") if not apm_deps and not dev_apm_deps and not mcp_deps: - _rich_warning("No dependencies found in apm.yml") + logger.warning("No dependencies found in apm.yml") - _rich_success("Dry run complete - no changes made") + logger.success("Dry run complete - no changes made") return # Install APM dependencies first (if requested) @@ -474,8 +541,8 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo apm_diagnostics = None if should_install_apm and has_any_apm_deps: if not APM_DEPS_AVAILABLE: - _rich_error("APM dependency system not available") - _rich_info(f"Import error: {_APM_IMPORT_ERROR}") + logger.error("APM dependency system not available") + logger.progress(f"Import error: {_APM_IMPORT_ERROR}") sys.exit(1) try: @@ -485,16 +552,17 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo install_result = _install_apm_dependencies( apm_package, update, verbose, only_pkgs, force=force, parallel_downloads=parallel_downloads, + logger=logger, ) apm_count = 
install_result.installed_count prompt_count = install_result.prompts_integrated agent_count = install_result.agents_integrated apm_diagnostics = install_result.diagnostics except Exception as e: - _rich_error(f"Failed to install APM dependencies: {e}") + logger.error(f"Failed to install APM dependencies: {e}") sys.exit(1) elif should_install_apm and not has_any_apm_deps: - _rich_info("No APM dependencies found in apm.yml") + logger.verbose_detail("No APM dependencies found in apm.yml") # When --update is used, package files on disk may have changed. # Clear the parse cache so transitive MCP collection reads fresh data. @@ -508,7 +576,7 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo lock_path = get_lockfile_path(Path.cwd()) transitive_mcp = MCPIntegrator.collect_transitive(apm_modules_path, lock_path, trust_transitive_mcp) if transitive_mcp: - _rich_info(f"Collected {len(transitive_mcp)} transitive MCP dependency(ies)") + logger.verbose_detail(f"Collected {len(transitive_mcp)} transitive MCP dependency(ies)") mcp_deps = MCPIntegrator.deduplicate(mcp_deps + transitive_mcp) # Continue with MCP installation (existing logic) @@ -534,27 +602,25 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo if old_mcp_servers: MCPIntegrator.remove_stale(old_mcp_servers, runtime, exclude) MCPIntegrator.update_lockfile(builtins.set(), mcp_configs={}) - _rich_warning("No MCP dependencies found in apm.yml") + logger.verbose_detail("No MCP dependencies found in apm.yml") elif not should_install_mcp and old_mcp_servers: # --only=apm: APM install regenerated the lockfile and dropped # mcp_servers. Restore the previous set so it is not lost. 
MCPIntegrator.update_lockfile(old_mcp_servers, mcp_configs=old_mcp_configs) - # Show beautiful post-install summary + # Show diagnostics and final install summary if apm_diagnostics and apm_diagnostics.has_diagnostics: apm_diagnostics.render_summary() else: _rich_blank_line() - if install_mode == InstallMode.ALL: - # Load apm.yml config for summary - apm_config = _load_apm_config() - _show_install_summary( - apm_count, prompt_count, agent_count, mcp_count, apm_config - ) - elif install_mode == InstallMode.APM: - _rich_success(f"Installed {apm_count} APM dependencies") - elif install_mode == InstallMode.MCP: - _rich_success(f"Configured {mcp_count} MCP servers") + + error_count = 0 + if apm_diagnostics: + try: + error_count = int(apm_diagnostics.error_count) + except (TypeError, ValueError): + error_count = 0 + logger.install_summary(apm_count=apm_count, mcp_count=mcp_count, errors=error_count) # Hard-fail when critical security findings blocked any package. # Consistent with apm unpack which also hard-fails on critical. @@ -881,6 +947,7 @@ def _install_apm_dependencies( only_packages: "builtins.list" = None, force: bool = False, parallel_downloads: int = 4, + logger: "InstallLogger" = None, ): """Install APM package dependencies. 
@@ -891,6 +958,7 @@ def _install_apm_dependencies( only_packages: If provided, only install these specific packages (not all from apm.yml) force: Whether to overwrite locally-authored files on collision parallel_downloads: Max concurrent downloads (0 disables parallelism) + logger: InstallLogger for structured output """ if not APM_DEPS_AVAILABLE: raise RuntimeError("APM dependency system not available") @@ -901,18 +969,21 @@ def _install_apm_dependencies( if not all_apm_deps: return InstallResult() - _rich_info(f"Installing APM dependencies ({len(all_apm_deps)})...") - project_root = Path.cwd() # T5: Check for existing lockfile - use locked versions for reproducible installs from apm_cli.deps.lockfile import LockFile, get_lockfile_path lockfile_path = get_lockfile_path(project_root) existing_lockfile = None + lockfile_count = 0 if lockfile_path.exists() and not update_refs: existing_lockfile = LockFile.read(lockfile_path) if existing_lockfile and existing_lockfile.dependencies: - _rich_info(f"Using apm.lock.yaml ({len(existing_lockfile.dependencies)} locked dependencies)") + lockfile_count = len(existing_lockfile.dependencies) + if logger: + logger.verbose_detail(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") + else: + _rich_info(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") apm_modules_dir = project_root / APM_MODULES_DIR apm_modules_dir.mkdir(exist_ok=True) @@ -968,7 +1039,9 @@ def download_callback(dep_ref, modules_dir): return install_path except Exception as e: # Log but don't fail - allow resolution to continue - if verbose: + if logger: + logger.verbose_detail(f" Failed to resolve transitive dep {dep_ref.repo_url}: {e}") + elif verbose: _rich_error(f" └─ Failed to resolve transitive dep {dep_ref.repo_url}: {e}") return None @@ -983,10 +1056,16 @@ def download_callback(dep_ref, modules_dir): # Check for circular dependencies if dependency_graph.circular_dependencies: - _rich_error("Circular dependencies detected:") + if 
logger: + logger.error("Circular dependencies detected:") + else: + _rich_error("Circular dependencies detected:") for circular in dependency_graph.circular_dependencies: - cycle_path = " → ".join(circular.cycle_path) - _rich_error(f" {cycle_path}") + cycle_path = " -> ".join(circular.cycle_path) + if logger: + logger.error(f" {cycle_path}") + else: + _rich_error(f" {cycle_path}") raise RuntimeError("Cannot install packages with circular dependencies") # Get flattened dependencies for installation @@ -1034,7 +1113,10 @@ def _collect_descendants(node, visited=None): ] if not deps_to_install: - _rich_info("No APM dependencies to install", symbol="check") + if logger: + logger.nothing_to_install() + else: + _rich_info("No APM dependencies to install", symbol="check") return InstallResult() # ------------------------------------------------------------------ @@ -1071,9 +1153,14 @@ def _collect_descendants(node, visited=None): claude_dir = project_root / CLAUDE_DIR if not github_dir.exists() and not claude_dir.exists(): github_dir.mkdir(parents=True, exist_ok=True) - _rich_info( - "Created .github/ as standard skills root (.github/skills/) and to enable VSCode/Copilot integration" - ) + if logger: + logger.verbose_detail( + "Created .github/ as standard skills root (.github/skills/) and to enable VSCode/Copilot integration" + ) + else: + _rich_info( + "Created .github/ as standard skills root (.github/skills/) and to enable VSCode/Copilot integration" + ) detected_target, detection_reason = detect_target( project_root=project_root, @@ -1257,7 +1344,10 @@ def _collect_descendants(node, visited=None): continue installed_count += 1 - _rich_success(f"✓ {dep_ref.local_path} (local)") + if logger: + logger.download_complete(dep_ref.local_path, ref_suffix="local") + else: + _rich_success(f"✓ {dep_ref.local_path} (local)") # Build minimal PackageInfo for integration from apm_cli.models.apm_package import ( @@ -1416,10 +1506,16 @@ def _collect_descendants(node, visited=None): if 
skip_download and _dep_locked_chk and _dep_locked_chk.content_hash: from ..utils.content_hash import verify_package_hash if not verify_package_hash(install_path, _dep_locked_chk.content_hash): - _rich_warning( - f"Content hash mismatch for " - f"{dep_ref.get_unique_key()} — re-downloading" - ) + if logger: + logger.warning( + f"Content hash mismatch for " + f"{dep_ref.get_unique_key()} -- re-downloading" + ) + else: + _rich_warning( + f"Content hash mismatch for " + f"{dep_ref.get_unique_key()} — re-downloading" + ) safe_rmtree(install_path, apm_modules_dir) skip_download = False @@ -1437,7 +1533,10 @@ def _collect_descendants(node, visited=None): ref_str = f"#{short_sha}" elif dep_ref.reference: ref_str = f"#{dep_ref.reference}" - _rich_info(f"✓ {display_name}{ref_str} (cached)") + if logger: + logger.download_complete(display_name, ref_suffix=f"{ref_str} (cached)" if ref_str else "cached") + else: + _rich_info(f"✓ {display_name}{ref_str} (cached)") installed_count += 1 if not dep_ref.reference: unpinned_count += 1 @@ -1604,7 +1703,10 @@ def _collect_descendants(node, visited=None): # Show resolved ref alongside package name for visibility resolved = getattr(package_info, 'resolved_reference', None) ref_suffix = f"#{resolved}" if resolved else "" - _rich_success(f"✓ {display_name}{ref_suffix}") + if logger: + logger.download_complete(display_name, ref_suffix=ref_suffix) + else: + _rich_success(f"✓ {display_name}{ref_suffix}") # Track unpinned deps for aggregated diagnostic if not dep_ref.reference: @@ -1632,20 +1734,17 @@ def _collect_descendants(node, visited=None): from apm_cli.models.apm_package import PackageType package_type = package_info.package_type - if package_type == PackageType.CLAUDE_SKILL: - _rich_info( - f" └─ Package type: Skill (SKILL.md detected)" - ) - elif package_type == PackageType.MARKETPLACE_PLUGIN: - _rich_info( - f" └─ Package type: Marketplace Plugin (plugin.json detected)" - ) - elif package_type == PackageType.HYBRID: - _rich_info( - 
f" └─ Package type: Hybrid (apm.yml + SKILL.md)" - ) - elif package_type == PackageType.APM_PACKAGE: - _rich_info(f" └─ Package type: APM Package (apm.yml)") + _type_label = { + PackageType.CLAUDE_SKILL: "Skill (SKILL.md detected)", + PackageType.MARKETPLACE_PLUGIN: "Marketplace Plugin (plugin.json detected)", + PackageType.HYBRID: "Hybrid (apm.yml + SKILL.md)", + PackageType.APM_PACKAGE: "APM Package (apm.yml)", + }.get(package_type) + if _type_label: + if logger: + logger.verbose_detail(f" Package type: {_type_label}") + else: + _rich_info(f" └─ Package type: {_type_label}") # Auto-integrate prompts and agents if enabled # Pre-deploy security gate @@ -1734,18 +1833,29 @@ def _collect_descendants(node, visited=None): _deleted_orphan_paths.append(_target) _removed_orphan_count += 1 except Exception as _orphan_err: - _rich_warning( - f" └─ Could not remove orphaned path {_orphan_path}: {_orphan_err}" - ) + if logger: + logger.verbose_detail( + f" Could not remove orphaned path {_orphan_path}: {_orphan_err}" + ) + else: + _rich_warning( + f" └─ Could not remove orphaned path {_orphan_path}: {_orphan_err}" + ) _failed_orphan_count += 1 # Clean up empty parent directories left after file removal if _deleted_orphan_paths: BaseIntegrator.cleanup_empty_parents(_deleted_orphan_paths, project_root) if _removed_orphan_count > 0: - _rich_info( - f"Removed {_removed_orphan_count} file(s) from packages " - "no longer in apm.yml" - ) + if logger: + logger.verbose_detail( + f"Removed {_removed_orphan_count} file(s) from packages " + "no longer in apm.yml" + ) + else: + _rich_info( + f"Removed {_removed_orphan_count} file(s) from packages " + "no longer in apm.yml" + ) # Generate apm.lock for reproducible installs (T4: lockfile generation) if installed_packages: @@ -1794,27 +1904,44 @@ def _collect_descendants(node, visited=None): lockfile = existing lockfile.save(lockfile_path) - _rich_info(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") + if logger: + 
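The hunk above replaces a four-branch `if/elif` chain with a `dict.get` table lookup. The pattern in isolation — `PackageType` here is a stand-in enum for illustration, not the real `apm_cli.models.apm_package` class:

```python
from enum import Enum, auto
from typing import Optional


class PackageType(Enum):
    # Stand-in for apm_cli.models.apm_package.PackageType
    CLAUDE_SKILL = auto()
    MARKETPLACE_PLUGIN = auto()
    HYBRID = auto()
    APM_PACKAGE = auto()


_TYPE_LABELS = {
    PackageType.CLAUDE_SKILL: "Skill (SKILL.md detected)",
    PackageType.MARKETPLACE_PLUGIN: "Marketplace Plugin (plugin.json detected)",
    PackageType.HYBRID: "Hybrid (apm.yml + SKILL.md)",
    PackageType.APM_PACKAGE: "APM Package (apm.yml)",
}


def describe(package_type) -> Optional[str]:
    # dict.get returns None for unknown values, mirroring the diff's
    # "emit nothing when there is no label" behaviour.
    return _TYPE_LABELS.get(package_type)
```

Unknown package types silently produce no output, which matches what the old `elif` chain did when no branch matched.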
logger.verbose_detail(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") + else: + _rich_info(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") except Exception as e: - _rich_warning(f"Could not generate apm.lock.yaml: {e}") + if logger: + logger.warning(f"Could not generate apm.lock.yaml: {e}") + else: + _rich_warning(f"Could not generate apm.lock.yaml: {e}") - # Show link resolution stats if any were resolved + # Show integration stats (verbose-only when logger is available) if total_links_resolved > 0: - _rich_info(f"✓ Resolved {total_links_resolved} context file links") + if logger: + logger.verbose_detail(f"Resolved {total_links_resolved} context file links") + else: + _rich_info(f"✓ Resolved {total_links_resolved} context file links") - # Show Claude commands stats if any were integrated if total_commands_integrated > 0: - _rich_info(f"✓ Integrated {total_commands_integrated} command(s)") + if logger: + logger.verbose_detail(f"Integrated {total_commands_integrated} command(s)") + else: + _rich_info(f"✓ Integrated {total_commands_integrated} command(s)") - # Show hooks stats if any were integrated if total_hooks_integrated > 0: - _rich_info(f"✓ Integrated {total_hooks_integrated} hook(s)") + if logger: + logger.verbose_detail(f"Integrated {total_hooks_integrated} hook(s)") + else: + _rich_info(f"✓ Integrated {total_hooks_integrated} hook(s)") - # Show instructions stats if any were integrated if total_instructions_integrated > 0: - _rich_info(f"✓ Integrated {total_instructions_integrated} instruction(s)") + if logger: + logger.verbose_detail(f"Integrated {total_instructions_integrated} instruction(s)") + else: + _rich_info(f"✓ Integrated {total_instructions_integrated} instruction(s)") - _rich_success(f"Installed {installed_count} APM dependencies") + # Summary is now emitted by the caller via logger.install_summary() + if not logger: + _rich_success(f"Installed {installed_count} APM dependencies") if 
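The `if logger: … else: _rich_*` fallback appears a dozen times in this hunk alone. One way the repetition could be collapsed — a hypothetical helper sketched here for illustration, not part of the patch:

```python
def emit(logger, level: str, message: str, fallback) -> None:
    """Route a message to the structured logger when one was injected,
    otherwise fall back to the legacy _rich_* console helper."""
    if logger is not None:
        # e.g. level="verbose_detail" calls logger.verbose_detail(message)
        getattr(logger, level)(message)
    else:
        fallback(message)
```

A call site like the lockfile message above would then read `emit(logger, "verbose_detail", msg, _rich_info)`; the trade-off is that `getattr` dispatch hides the logger method from static analysis, which may be why the patch keeps the branches explicit.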
unpinned_count: noun = "dependency has" if unpinned_count == 1 else "dependencies have" @@ -1829,29 +1956,5 @@ def _collect_descendants(node, visited=None): raise RuntimeError(f"Failed to resolve APM dependencies: {e}") -# --------------------------------------------------------------------------- -# Summary -# --------------------------------------------------------------------------- -def _show_install_summary( - apm_count: int, prompt_count: int, agent_count: int, mcp_count: int, apm_config -): - """Show post-install summary. - - Args: - apm_count: Number of APM packages installed - prompt_count: Number of prompts integrated - agent_count: Number of agents integrated - mcp_count: Number of MCP servers configured - apm_config: The apm.yml configuration dict - """ - parts = [] - if apm_count > 0: - parts.append(f"{apm_count} APM package(s)") - if mcp_count > 0: - parts.append(f"{mcp_count} MCP server(s)") - if parts: - _rich_success(f"Installation complete: {', '.join(parts)}") - else: - _rich_success("Installation complete") diff --git a/tests/integration/test_generic_git_url_install.py b/tests/integration/test_generic_git_url_install.py index a90002a2..072640f5 100644 --- a/tests/integration/test_generic_git_url_install.py +++ b/tests/integration/test_generic_git_url_install.py @@ -209,11 +209,11 @@ def test_install_https_url_stores_canonical(self): with patch("apm_cli.commands.install._validate_package_exists", return_value=True): from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["https://github.com/microsoft/apm-sample-package.git"] ) - assert result == ["microsoft/apm-sample-package"] + assert validated == ["microsoft/apm-sample-package"] data = yaml.safe_load(self.apm_yml_path.read_text()) assert "microsoft/apm-sample-package" in data["dependencies"]["apm"] # Verify raw URL is NOT stored @@ -230,11 +230,11 @@ def 
test_install_ssh_url_stores_canonical(self): with patch("apm_cli.commands.install._validate_package_exists", return_value=True): from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["git@github.com:microsoft/apm-sample-package.git"] ) - assert result == ["microsoft/apm-sample-package"] + assert validated == ["microsoft/apm-sample-package"] data = yaml.safe_load(self.apm_yml_path.read_text()) assert "microsoft/apm-sample-package" in data["dependencies"]["apm"] @@ -249,11 +249,11 @@ def test_no_duplicate_when_already_in_canonical_form(self): with patch("apm_cli.commands.install._validate_package_exists", return_value=True): from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["microsoft/apm-sample-package"] ) - assert result == [] + assert validated == [] data = yaml.safe_load(self.apm_yml_path.read_text()) assert data["dependencies"]["apm"].count("microsoft/apm-sample-package") == 1 @@ -264,11 +264,11 @@ def test_no_duplicate_when_url_matches_existing_canonical(self): with patch("apm_cli.commands.install._validate_package_exists", return_value=True): from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["https://github.com/microsoft/apm-sample-package.git"] ) - assert result == [] + assert validated == [] data = yaml.safe_load(self.apm_yml_path.read_text()) # Should still be exactly 1 entry apm_deps = data["dependencies"]["apm"] diff --git a/tests/unit/test_canonicalization.py b/tests/unit/test_canonicalization.py index 5ef4e285..1fcfa5f8 100644 --- a/tests/unit/test_canonicalization.py +++ b/tests/unit/test_canonicalization.py @@ -271,11 +271,11 @@ def 
test_https_url_stored_as_shorthand(self, mock_success, mock_info, mock_valid monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["https://github.com/microsoft/apm-sample-package.git"] ) - assert result == ["microsoft/apm-sample-package"] + assert validated == ["microsoft/apm-sample-package"] data = yaml.safe_load(apm_yml.read_text()) assert "microsoft/apm-sample-package" in data["dependencies"]["apm"] @@ -290,11 +290,11 @@ def test_ssh_url_stored_as_shorthand(self, mock_success, mock_info, mock_validat monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["git@github.com:microsoft/apm-sample-package.git"] ) - assert result == ["microsoft/apm-sample-package"] + assert validated == ["microsoft/apm-sample-package"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) @patch("apm_cli.commands.install._rich_info") @@ -307,11 +307,11 @@ def test_fqdn_github_stored_as_shorthand(self, mock_success, mock_info, mock_val monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["github.com/microsoft/apm-sample-package"] ) - assert result == ["microsoft/apm-sample-package"] + assert validated == ["microsoft/apm-sample-package"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) @patch("apm_cli.commands.install._rich_info") @@ -324,11 +324,11 @@ def test_gitlab_url_preserves_host(self, mock_success, mock_info, mock_validate, monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = 
_validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["https://gitlab.com/acme/standards.git"] ) - assert result == ["gitlab.com/acme/standards"] + assert validated == ["gitlab.com/acme/standards"] data = yaml.safe_load(apm_yml.read_text()) assert "gitlab.com/acme/standards" in data["dependencies"]["apm"] @@ -346,12 +346,12 @@ def test_duplicate_detection_different_forms(self, mock_warn, mock_info, mock_va monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( ["https://github.com/microsoft/apm-sample-package.git"] ) # Should return empty — package already exists - assert result == [] + assert validated == [] data = yaml.safe_load(apm_yml.read_text()) # No duplicate added assert data["dependencies"]["apm"].count("microsoft/apm-sample-package") == 1 @@ -367,13 +367,13 @@ def test_batch_dedup(self, mock_success, mock_info, mock_validate, tmp_path, mon monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml([ + validated, _outcome = _validate_and_add_packages_to_apm_yml([ "microsoft/apm-sample-package", "https://github.com/microsoft/apm-sample-package.git", ]) - assert len(result) == 1 - assert result[0] == "microsoft/apm-sample-package" + assert len(validated) == 1 + assert validated[0] == "microsoft/apm-sample-package" @patch("apm_cli.commands.install._validate_package_exists", return_value=True) @patch("apm_cli.commands.install._rich_info") @@ -386,11 +386,11 @@ def test_ref_preserved_in_canonical(self, mock_success, mock_info, mock_validate monkeypatch.chdir(tmp_path) from apm_cli.commands.install import _validate_and_add_packages_to_apm_yml - result = _validate_and_add_packages_to_apm_yml( + validated, _outcome = _validate_and_add_packages_to_apm_yml( 
["https://github.com/microsoft/apm-sample-package.git#v1.0.0"] ) - assert result == ["microsoft/apm-sample-package#v1.0.0"] + assert validated == ["microsoft/apm-sample-package#v1.0.0"] # ── Uninstall identity matching ───────────────────────────────────────────── diff --git a/tests/unit/test_dev_dependencies.py b/tests/unit/test_dev_dependencies.py index 825e9b6d..4e4de703 100644 --- a/tests/unit/test_dev_dependencies.py +++ b/tests/unit/test_dev_dependencies.py @@ -412,8 +412,8 @@ def test_dev_creates_dev_dependencies_section(self, mock_validate, tmp_path): ) mock_validate.return_value = True - result = _validate_and_add_packages_to_apm_yml(["org/dev-pkg"], dev=True) - assert "org/dev-pkg" in result + validated, _outcome = _validate_and_add_packages_to_apm_yml(["org/dev-pkg"], dev=True) + assert "org/dev-pkg" in validated with open(apm_yml) as f: data = yaml.safe_load(f) diff --git a/tests/unit/test_diagnostics.py b/tests/unit/test_diagnostics.py index 314e8d62..92889614 100644 --- a/tests/unit/test_diagnostics.py +++ b/tests/unit/test_diagnostics.py @@ -7,6 +7,7 @@ import pytest from apm_cli.utils.diagnostics import ( + CATEGORY_AUTH, CATEGORY_COLLISION, CATEGORY_ERROR, CATEGORY_INFO, @@ -430,3 +431,126 @@ def test_info_unpinned_deps_plural(self): " [i] 3 dependencies have no pinned version " "-- pin with #tag or #sha to prevent drift" ) + + +# ── Auth category ─────────────────────────────────────────────────── + + +class TestAuthCategory: + def test_auth_adds_diagnostic(self): + dc = DiagnosticCollector() + dc.auth("EMU token detected — fallback to unauthenticated", package="pkg-a") + assert dc.has_diagnostics is True + assert len(dc._diagnostics) == 1 + assert dc._diagnostics[0].category == CATEGORY_AUTH + assert dc._diagnostics[0].message == "EMU token detected — fallback to unauthenticated" + assert dc._diagnostics[0].package == "pkg-a" + + def test_auth_with_detail(self): + dc = DiagnosticCollector() + dc.auth("credential fallback", package="pkg-b", 
detail="tried GITHUB_APM_PAT first") + d = dc._diagnostics[0] + assert d.detail == "tried GITHUB_APM_PAT first" + + def test_auth_count_zero_when_empty(self): + dc = DiagnosticCollector() + dc.warn("unrelated") + assert dc.auth_count == 0 + + def test_auth_count_returns_correct_count(self): + dc = DiagnosticCollector() + dc.auth("issue 1") + dc.auth("issue 2") + dc.warn("not auth") + assert dc.auth_count == 2 + + @patch(f"{_MOCK_BASE}._get_console", return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_render_singular( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector() + dc.auth("token expired", package="pkg-x") + dc.render_summary() + warning_texts = [str(c) for c in mock_warning.call_args_list] + assert any("1 authentication issue" in t for t in warning_texts) + + @patch(f"{_MOCK_BASE}._get_console", return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_render_plural( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector() + dc.auth("issue 1", package="p1") + dc.auth("issue 2", package="p2") + dc.render_summary() + warning_texts = [str(c) for c in mock_warning.call_args_list] + assert any("2 authentication issues" in t for t in warning_texts) + + @patch(f"{_MOCK_BASE}._get_console", return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_render_shows_package_and_message( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector() + dc.auth("EMU token fallback", package="my-pkg") + dc.render_summary() + echo_texts = [str(c) for c in mock_echo.call_args_list] + assert any("my-pkg" in t and "EMU token fallback" in t for t in echo_texts) + + @patch(f"{_MOCK_BASE}._get_console", 
return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_verbose_renders_detail( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector(verbose=True) + dc.auth("fallback used", package="pkg", detail="GITHUB_APM_PAT → unauthenticated") + dc.render_summary() + echo_texts = [str(c) for c in mock_echo.call_args_list] + assert any("GITHUB_APM_PAT" in t for t in echo_texts) + + @patch(f"{_MOCK_BASE}._get_console", return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_non_verbose_shows_hint( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector(verbose=False) + dc.auth("credential issue", detail="secret detail") + dc.render_summary() + info_texts = [str(c) for c in mock_info.call_args_list] + assert any("--verbose" in t for t in info_texts) + # detail should NOT appear in non-verbose mode + echo_texts = [str(c) for c in mock_echo.call_args_list] + assert not any("secret detail" in t for t in echo_texts) + + @patch(f"{_MOCK_BASE}._get_console", return_value=None) + @patch(f"{_MOCK_BASE}._rich_echo") + @patch(f"{_MOCK_BASE}._rich_warning") + @patch(f"{_MOCK_BASE}._rich_info") + def test_auth_renders_before_collision( + self, mock_info, mock_warning, mock_echo, mock_console + ): + dc = DiagnosticCollector() + dc.skip("collision.md", package="p1") + dc.auth("auth issue", package="p2") + call_order = [] + + with patch(f"{_MOCK_BASE}._get_console", return_value=None), \ + patch(f"{_MOCK_BASE}._rich_echo"), \ + patch(f"{_MOCK_BASE}._rich_warning", side_effect=lambda *a, **k: call_order.append(str(a))), \ + patch(f"{_MOCK_BASE}._rich_info"): + dc.render_summary() + + auth_idx = next(i for i, t in enumerate(call_order) if "authentication" in t) + coll_idx = next(i for i, t in enumerate(call_order) if "skipped" in t) + assert 
auth_idx < coll_idx, "auth should render before collision" diff --git a/tests/unit/test_install_command.py b/tests/unit/test_install_command.py index 5696e204..6500aa71 100644 --- a/tests/unit/test_install_command.py +++ b/tests/unit/test_install_command.py @@ -218,7 +218,7 @@ def test_install_invalid_package_format_with_no_apm_yml(self, mock_validate): # Should create apm.yml but fail to add invalid package assert Path("apm.yml").exists() - assert "Invalid package format" in result.output + assert "invalid format" in result.output @patch("apm_cli.commands.install._validate_package_exists") @patch("apm_cli.commands.install.APM_DEPS_AVAILABLE", True) From 6ef25114b12f5b7351f81e95d94358e4e72d330a Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 00:47:22 +0100 Subject: [PATCH 03/40] feat: CommandLogger wiring across all commands and support modules MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Phase 3 — Logging migration (3b-3g): - compile: CommandLogger in cli.py, _log() helper in agents_compiler.py - deps: CommandLogger in cli.py + _utils.py (logger=None backward compat) - audit: CommandLogger, pass logger to helpers, verbose_detail for scan stats - mcp/run/init: CommandLogger with --verbose support - config/list/update/pack/runtime/prune/uninstall: CommandLogger wiring - Support modules: logger=None param in skill/prompt/instruction/mcp integrators, vscode adapter, drift module Phase 4 — Integration tests (4d, 4e): - test_auth_resolver.py: 26 integration tests across 8 test classes - Scripts verified — no auth error pattern updates needed All 2839 tests pass, 126 skipped. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/adapters/client/vscode.py | 52 ++- src/apm_cli/commands/audit.py | 97 +++--- src/apm_cli/commands/compile/cli.py | 114 +++---- src/apm_cli/commands/config.py | 29 +- src/apm_cli/commands/deps/_utils.py | 39 ++- src/apm_cli/commands/deps/cli.py | 64 ++-- src/apm_cli/commands/init.py | 44 ++- src/apm_cli/commands/list_cmd.py | 15 +- src/apm_cli/commands/mcp.py | 32 +- src/apm_cli/commands/pack.py | 43 +-- src/apm_cli/commands/prune.py | 33 +- src/apm_cli/commands/run.py | 60 ++-- src/apm_cli/commands/runtime.py | 36 +-- src/apm_cli/commands/uninstall/cli.py | 31 +- src/apm_cli/commands/update.py | 45 +-- src/apm_cli/compilation/agents_compiler.py | 37 ++- src/apm_cli/drift.py | 4 + .../integration/instruction_integrator.py | 2 + src/apm_cli/integration/mcp_integrator.py | 299 ++++++++++++----- src/apm_cli/integration/prompt_integrator.py | 3 +- src/apm_cli/integration/skill_integrator.py | 33 +- tests/integration/test_auth_resolver.py | 302 ++++++++++++++++++ tests/unit/test_audit_command.py | 17 +- tests/unit/test_unpacker.py | 2 +- 24 files changed, 978 insertions(+), 455 deletions(-) create mode 100644 tests/integration/test_auth_resolver.py diff --git a/src/apm_cli/adapters/client/vscode.py b/src/apm_cli/adapters/client/vscode.py index 6932f86d..4dd17042 100644 --- a/src/apm_cli/adapters/client/vscode.py +++ b/src/apm_cli/adapters/client/vscode.py @@ -32,7 +32,7 @@ def __init__(self, registry_url=None): self.registry_client = SimpleRegistryClient(registry_url) self.registry_integration = RegistryIntegration(registry_url) - def get_config_path(self): + def get_config_path(self, logger=None): """Get the path to the VSCode MCP configuration file in the repository. 
Returns: @@ -50,11 +50,14 @@ def get_config_path(self): if not vscode_dir.exists(): vscode_dir.mkdir(parents=True, exist_ok=True) except Exception as e: - print(f"Warning: Could not create .vscode directory: {e}") + if logger: + logger.warning(f"Could not create .vscode directory: {e}") + else: + print(f"Warning: Could not create .vscode directory: {e}") return str(mcp_config_path) - def update_config(self, new_config): + def update_config(self, new_config, logger=None): """Update the VSCode MCP configuration with new values. Args: @@ -63,7 +66,7 @@ def update_config(self, new_config): Returns: bool: True if successful, False otherwise. """ - config_path = self.get_config_path() + config_path = self.get_config_path(logger=logger) try: # Write the updated config @@ -72,16 +75,19 @@ def update_config(self, new_config): return True except Exception as e: - print(f"Error updating VSCode MCP configuration: {e}") + if logger: + logger.error(f"Error updating VSCode MCP configuration: {e}") + else: + print(f"Error updating VSCode MCP configuration: {e}") return False - def get_current_config(self): + def get_current_config(self, logger=None): """Get the current VSCode MCP configuration. Returns: dict: Current VSCode MCP configuration from the local .vscode/mcp.json file. 
""" - config_path = self.get_config_path() + config_path = self.get_config_path(logger=logger) try: try: @@ -90,10 +96,13 @@ def get_current_config(self): except (FileNotFoundError, json.JSONDecodeError): return {} except Exception as e: - print(f"Error reading VSCode MCP configuration: {e}") + if logger: + logger.error(f"Error reading VSCode MCP configuration: {e}") + else: + print(f"Error reading VSCode MCP configuration: {e}") return {} - def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_overrides=None, server_info_cache=None, runtime_vars=None): + def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_overrides=None, server_info_cache=None, runtime_vars=None, logger=None): """Configure an MCP server in VS Code mcp.json file. This method updates the .vscode/mcp.json file to add or update @@ -105,6 +114,7 @@ def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_o enabled (bool, optional): Whether to enable the server. Defaults to True. env_overrides (dict, optional): Environment variable overrides. Defaults to None. server_info_cache (dict, optional): Pre-fetched server info to avoid duplicate registry calls. + logger: Optional CommandLogger for structured output. Returns: bool: True if successful, False otherwise. @@ -113,7 +123,10 @@ def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_o ValueError: If server is not found in registry. 
""" if not server_url: - print("Error: server_url cannot be empty") + if logger: + logger.error("server_url cannot be empty") + else: + print("Error: server_url cannot be empty") return False try: @@ -133,14 +146,17 @@ def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_o server_config, input_vars = self._format_server_config(server_info) if not server_config: - print(f"Unable to configure server: {server_url}") + if logger: + logger.error(f"Unable to configure server: {server_url}") + else: + print(f"Unable to configure server: {server_url}") return False # Use provided server name or fallback to server_url config_key = server_name or server_url # Get current config - current_config = self.get_current_config() + current_config = self.get_current_config(logger=logger) # Ensure servers and inputs sections exist if "servers" not in current_config: @@ -159,17 +175,23 @@ def configure_mcp_server(self, server_url, server_name=None, enabled=True, env_o existing_input_ids.add(var.get("id")) # Update the configuration - result = self.update_config(current_config) + result = self.update_config(current_config, logger=logger) if result: - print(f"Successfully configured MCP server '{config_key}' for VS Code") + if logger: + logger.verbose_detail(f"Configured MCP server '{config_key}' for VS Code") + else: + print(f"Successfully configured MCP server '{config_key}' for VS Code") return result except ValueError: # Re-raise ValueError for registry errors raise except Exception as e: - print(f"Error configuring MCP server: {e}") + if logger: + logger.error(f"Error configuring MCP server: {e}") + else: + print(f"Error configuring MCP server: {e}") return False def _format_server_config(self, server_info): diff --git a/src/apm_cli/commands/audit.py b/src/apm_cli/commands/audit.py index fe2c7f16..a0dd38e1 100644 --- a/src/apm_cli/commands/audit.py +++ b/src/apm_cli/commands/audit.py @@ -20,13 +20,10 @@ from ..deps.lockfile import LockFile, 
get_lockfile_path from ..integration.base_integrator import BaseIntegrator from ..security.content_scanner import ContentScanner, ScanFinding +from ..core.command_logger import CommandLogger from ..utils.console import ( _get_console, _rich_echo, - _rich_error, - _rich_info, - _rich_success, - _rich_warning, STATUS_SYMBOLS, ) @@ -109,16 +106,16 @@ def _scan_lockfile_packages( return all_findings, files_scanned -def _scan_single_file(file_path: Path) -> Tuple[Dict[str, List[ScanFinding]], int]: +def _scan_single_file(file_path: Path, logger) -> Tuple[Dict[str, List[ScanFinding]], int]: """Scan a single arbitrary file. Returns (findings_by_file, files_scanned). """ if not file_path.exists(): - _rich_error(f"File not found: {file_path}") + logger.error(f"File not found: {file_path}") sys.exit(1) if file_path.is_dir(): - _rich_error(f"Path is a directory, not a file: {file_path}") + logger.error(f"Path is a directory, not a file: {file_path}") sys.exit(1) findings = ContentScanner.scan_file(file_path) @@ -219,6 +216,7 @@ def _render_findings_table( def _render_summary( findings_by_file: Dict[str, List[ScanFinding]], files_scanned: int, + logger, ) -> None: """Render a summary panel with counts.""" all_findings: List[ScanFinding] = [] @@ -233,40 +231,38 @@ def _render_summary( _rich_echo("") if critical > 0: - _rich_echo( - f"{STATUS_SYMBOLS['error']} {critical} critical finding(s) in " - f"{affected} file(s) — hidden characters detected", - color="red", - bold=True, + logger.error( + f"{critical} critical finding(s) in " + f"{affected} file(s) — hidden characters detected" ) - _rich_info(" These characters may embed invisible instructions") - _rich_info(" Review file contents, then run 'apm audit --strip' to remove") + logger.progress(" These characters may embed invisible instructions") + logger.progress(" Review file contents, then run 'apm audit --strip' to remove") elif warning > 0: - _rich_warning( - f"{STATUS_SYMBOLS['warning']} {warning} warning(s) in " + 
logger.warning( + f"{warning} warning(s) in " f"{affected} file(s) — hidden characters detected" ) - _rich_info(" Run 'apm audit --strip' to remove hidden characters") + logger.progress(" Run 'apm audit --strip' to remove hidden characters") elif info > 0: - _rich_info( - f"{STATUS_SYMBOLS['info']} {info} info-level finding(s) in " + logger.progress( + f"{info} info-level finding(s) in " f"{affected} file(s) — unusual characters (use --verbose to see)" ) else: - _rich_success( - f"{STATUS_SYMBOLS['success']} {files_scanned} file(s) scanned — " - f"no issues found" + logger.success( + f"{files_scanned} file(s) scanned — no issues found" ) if info > 0 and (critical > 0 or warning > 0): - _rich_info(f" Plus {info} info-level finding(s) (use --verbose to see)") + logger.progress(f" Plus {info} info-level finding(s) (use --verbose to see)") - _rich_echo(f" {files_scanned} file(s) scanned", color="dim") + logger.verbose_detail(f" {files_scanned} file(s) scanned") def _apply_strip( findings_by_file: Dict[str, List[ScanFinding]], project_root: Path, + logger, ) -> int: """Strip dangerous and suspicious characters from affected files. @@ -284,7 +280,7 @@ def _apply_strip( try: abs_path.resolve().relative_to(project_root.resolve()) except ValueError: - _rich_warning(f" Skipping {rel_path}: outside project root") + logger.warning(f" Skipping {rel_path}: outside project root") continue if not abs_path.exists(): @@ -296,15 +292,16 @@ def _apply_strip( if cleaned != original: abs_path.write_text(cleaned, encoding="utf-8") modified += 1 - _rich_info(f" {STATUS_SYMBOLS['check']} Cleaned: {rel_path}") + logger.progress(f" Cleaned: {rel_path}", symbol="check") except (OSError, UnicodeDecodeError) as exc: - _rich_warning(f" Could not clean {rel_path}: {exc}") + logger.warning(f" Could not clean {rel_path}: {exc}") return modified def _preview_strip( findings_by_file: Dict[str, List[ScanFinding]], + logger, ) -> int: """Preview what --strip would remove without modifying files. 
@@ -322,11 +319,11 @@ def _preview_strip( affected += 1 if affected == 0: - _rich_info("Nothing to clean — no strippable characters found") + logger.progress("Nothing to clean — no strippable characters found") return 0 _rich_echo("") - _rich_info(f"Dry run — the following would be removed by --strip:", symbol="search") + logger.progress("Dry run — the following would be removed by --strip:", symbol="search") _rich_echo("") if console: @@ -371,8 +368,8 @@ def _preview_strip( _rich_echo(f" {rel_path}: {len(strippable)} character(s)", color="white") _rich_echo("") - _rich_info(f"{affected} file(s) would be modified") - _rich_info("Run 'apm audit --strip' to apply") + logger.progress(f"{affected} file(s) would be modified") + logger.progress("Run 'apm audit --strip' to apply") return affected @@ -445,6 +442,8 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu apm audit -f json -o out.json # JSON report to file """ # Resolve effective format (auto-detect from extension when needed) + logger = CommandLogger("audit", verbose=verbose) + effective_format = output_format if output_path and effective_format == "text": from ..security.audit_report import detect_format_from_extension @@ -453,7 +452,7 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu # --format json/sarif/markdown is incompatible with --strip / --dry-run if effective_format != "text" and (strip or dry_run): - _rich_error( + logger.error( f"--format {effective_format} cannot be combined with --strip or --dry-run" ) sys.exit(1) @@ -462,21 +461,21 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu if file_path: # -- File mode: scan a single arbitrary file -- - findings_by_file, files_scanned = _scan_single_file(Path(file_path)) + findings_by_file, files_scanned = _scan_single_file(Path(file_path), logger) else: # -- Package mode: scan from lockfile -- lockfile_path = get_lockfile_path(project_root) if not 
lockfile_path.exists(): - _rich_info( + logger.progress( "No apm.lock.yaml found — nothing to scan. " "Use --file to scan a specific file." ) sys.exit(0) if package: - _rich_info(f"Scanning package: {package}") + logger.progress(f"Scanning package: {package}") else: - _rich_info("Scanning all installed packages...") + logger.start("Scanning all installed packages...") findings_by_file, files_scanned = _scan_lockfile_packages( project_root, package_filter=package, @@ -484,33 +483,31 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu if files_scanned == 0: if package: - _rich_warning( + logger.warning( f"Package '{package}' not found in apm.lock.yaml " f"or has no deployed files" ) else: - _rich_info("No deployed files found in apm.lock.yaml") + logger.progress("No deployed files found in apm.lock.yaml") sys.exit(0) # -- Warn if --dry-run used without --strip -- if dry_run and not strip: - _rich_info("--dry-run only works with --strip (e.g. apm audit --strip --dry-run)") + logger.progress("--dry-run only works with --strip (e.g. 
apm audit --strip --dry-run)") # -- Strip mode -- if strip: if not findings_by_file: - _rich_info("Nothing to clean — no hidden characters found") + logger.progress("Nothing to clean — no hidden characters found") sys.exit(0) if dry_run: - _preview_strip(findings_by_file) + _preview_strip(findings_by_file, logger) sys.exit(0) - modified = _apply_strip(findings_by_file, project_root) + modified = _apply_strip(findings_by_file, project_root, logger) if modified > 0: - _rich_success( - f"{STATUS_SYMBOLS['success']} Cleaned {modified} file(s)" - ) + logger.success(f"Cleaned {modified} file(s)") else: - _rich_info("Nothing to clean — no strippable characters found") + logger.progress("Nothing to clean — no strippable characters found") sys.exit(0) # -- Display findings -- @@ -523,14 +520,14 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu if effective_format == "text": if output_path: - _rich_error( + logger.error( "Text format does not support --output. " "Use --format json, sarif, or markdown to write to a file." 
) sys.exit(1) if findings_by_file: _render_findings_table(findings_by_file, verbose=verbose) - _render_summary(findings_by_file, files_scanned) + _render_summary(findings_by_file, files_scanned, logger) elif effective_format == "markdown": from ..security.audit_report import findings_to_markdown @@ -538,7 +535,7 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu if output_path: Path(output_path).parent.mkdir(parents=True, exist_ok=True) Path(output_path).write_text(md_report, encoding="utf-8") - _rich_success(f"Audit report written to {output_path}") + logger.success(f"Audit report written to {output_path}") else: click.echo(md_report) else: @@ -562,7 +559,7 @@ def audit(ctx, package, file_path, strip, verbose, dry_run, output_format, outpu if output_path: write_report(report, Path(output_path)) - _rich_success(f"Audit report written to {output_path}") + logger.success(f"Audit report written to {output_path}") else: click.echo(serialize_report(report)) diff --git a/src/apm_cli/commands/compile/cli.py b/src/apm_cli/commands/compile/cli.py index 2dd61d9b..10e63941 100644 --- a/src/apm_cli/commands/compile/cli.py +++ b/src/apm_cli/commands/compile/cli.py @@ -7,6 +7,7 @@ from ...constants import AGENTS_MD_FILENAME, APM_DIR, APM_MODULES_DIR, APM_YML_FILENAME from ...compilation import AgentsCompiler, CompilationConfig +from ...core.command_logger import CommandLogger from ...primitives.discovery import discover_primitives from ...utils.console import ( STATUS_SYMBOLS, @@ -250,14 +251,16 @@ def compile( * --local-only: Ignore dependencies, compile only local .apm/ primitives * --clean: Remove orphaned AGENTS.md files that are no longer generated """ + logger = CommandLogger("compile", verbose=verbose, dry_run=dry_run) + try: # Check if this is an APM project first from pathlib import Path if not Path(APM_YML_FILENAME).exists(): - _rich_error("[x] Not an APM project - no apm.yml found") - _rich_info(" To initialize an APM project, run:") - 
_rich_info(" apm init") + logger.error("Not an APM project - no apm.yml found") + logger.progress(" To initialize an APM project, run:") + logger.progress(" apm init") sys.exit(1) # Check if there are any instruction files to compile @@ -287,49 +290,49 @@ def compile( ) if has_empty_apm: - _rich_error("[x] No instruction files found in .apm/ directory") - _rich_info(" To add instructions, create files like:") - _rich_info(" .apm/instructions/coding-standards.instructions.md") - _rich_info(" .apm/chatmodes/backend-engineer.chatmode.md") + logger.error("No instruction files found in .apm/ directory") + logger.progress(" To add instructions, create files like:") + logger.progress(" .apm/instructions/coding-standards.instructions.md") + logger.progress(" .apm/chatmodes/backend-engineer.chatmode.md") else: - _rich_error("[x] No APM content found to compile") - _rich_info(" To get started:") - _rich_info(" 1. Install APM dependencies: apm install /") - _rich_info( + logger.error("No APM content found to compile") + logger.progress(" To get started:") + logger.progress(" 1. Install APM dependencies: apm install /") + logger.progress( " 2. Or create local instructions: mkdir -p .apm/instructions" ) - _rich_info(" 3. Then create .instructions.md or .chatmode.md files") + logger.progress(" 3. 
Then create .instructions.md or .chatmode.md files") if not dry_run: # Don't exit on dry-run to allow testing sys.exit(1) # Validation-only mode if validate: - _rich_info("Validating APM context...", symbol="gear") + logger.start("Validating APM context...", symbol="gear") compiler = AgentsCompiler(".") try: primitives = discover_primitives(".") except Exception as e: - _rich_error(f"Failed to discover primitives: {e}") - _rich_info(f" Error details: {type(e).__name__}") + logger.error(f"Failed to discover primitives: {e}") + logger.progress(f" Error details: {type(e).__name__}") sys.exit(1) validation_errors = compiler.validate_primitives(primitives) if validation_errors: _display_validation_errors(validation_errors) - _rich_error(f"Validation failed with {len(validation_errors)} errors") + logger.error(f"Validation failed with {len(validation_errors)} errors") sys.exit(1) - _rich_success("All primitives validated successfully!", symbol="sparkles") - _rich_info(f"Validated {primitives.count()} primitives:") - _rich_info(f" * {len(primitives.chatmodes)} chatmodes") - _rich_info(f" * {len(primitives.instructions)} instructions") - _rich_info(f" * {len(primitives.contexts)} contexts") + logger.success("All primitives validated successfully!") + logger.progress(f"Validated {primitives.count()} primitives:") + logger.progress(f" * {len(primitives.chatmodes)} chatmodes") + logger.progress(f" * {len(primitives.instructions)} instructions") + logger.progress(f" * {len(primitives.contexts)} contexts") # Show MCP dependency validation count try: from ...models.apm_package import APMPackage apm_pkg = APMPackage.from_apm_yml(Path(APM_YML_FILENAME)) mcp_count = len(apm_pkg.get_mcp_dependencies()) if mcp_count > 0: - _rich_info(f" * {mcp_count} MCP dependencies") + logger.progress(f" * {mcp_count} MCP dependencies") except Exception: pass return @@ -339,7 +342,7 @@ def compile( _watch_mode(output, chatmode, no_links, dry_run) return - _rich_info("Starting context 
compilation...", symbol="cogs") + logger.start("Starting context compilation...", symbol="cogs") # Auto-detect target if not explicitly provided from ...core.target_detection import detect_target, get_target_description @@ -383,38 +386,36 @@ def compile( if config.strategy == "distributed" and not single_agents: # Show target-aware message with detection reason if detected_target == "minimal": - _rich_info(f"Compiling for AGENTS.md only ({detection_reason})") - _rich_info( + logger.progress(f"Compiling for AGENTS.md only ({detection_reason})") + logger.progress( " Create .github/ or .claude/ folder for full integration", symbol="light_bulb", ) elif detected_target == "vscode" or detected_target == "agents": - _rich_info( + logger.progress( f"Compiling for AGENTS.md (VSCode/Copilot) - {detection_reason}" ) elif detected_target == "claude": - _rich_info( + logger.progress( f"Compiling for CLAUDE.md (Claude Code) - {detection_reason}" ) else: # "all" - _rich_info(f"Compiling for AGENTS.md + CLAUDE.md - {detection_reason}") + logger.progress(f"Compiling for AGENTS.md + CLAUDE.md - {detection_reason}") if dry_run: - _rich_info( - "Dry run mode: showing placement without writing files", - symbol="eye", + logger.dry_run_notice( + "showing placement without writing files" ) if verbose: - _rich_info( - "Verbose mode: showing source attribution and optimizer analysis", - symbol="magnifying_glass", + logger.verbose_detail( + "Verbose mode: showing source attribution and optimizer analysis" ) else: - _rich_info("Using single-file compilation (legacy mode)", symbol="page") + logger.progress("Using single-file compilation (legacy mode)", symbol="page") # Perform compilation compiler = AgentsCompiler(".") - result = compiler.compile(config) + result = compiler.compile(config, logger=logger) compile_has_critical = result.has_critical_security if result.success: @@ -427,7 +428,7 @@ def compile( pass else: # Success message for actual compilation - _rich_success("Compilation 
completed successfully!", symbol="check") + logger.success("Compilation completed successfully!", symbol="check") else: # Traditional single-file compilation - keep existing logic @@ -488,30 +489,29 @@ def compile( if verdict.has_critical: compile_has_critical = True if actionable: - _rich_warning( + logger.warning( f"Compiled output contains {actionable} hidden character(s) " - f"— run 'apm audit --file {output_path}' to inspect" + f"-- run 'apm audit --file {output_path}' to inspect" ) try: _atomic_write(output_path, final_content) except OSError as e: - _rich_error(f"Failed to write final AGENTS.md: {e}") + logger.error(f"Failed to write final AGENTS.md: {e}") sys.exit(1) else: - _rich_info( + logger.progress( "No changes detected; preserving existing AGENTS.md for idempotency" ) # Report success at the top if dry_run: - _rich_success( + logger.success( "Context compilation completed successfully (dry run)", symbol="check", ) else: - _rich_success( + logger.success( f"Context compiled successfully to {output_path}", - symbol="sparkles", ) stats = ( @@ -538,16 +538,16 @@ def compile( if config.strategy != "distributed" or single_agents: # Only show warnings for single-file mode (backward compatibility) if result.warnings: - _rich_warning( + logger.warning( f"Compilation completed with {len(result.warnings)} warnings:" ) for warning in result.warnings: - click.echo(f" [!] {warning}") + _rich_echo(f" [!] {warning}", color="yellow") if result.errors: - _rich_error(f"Compilation failed with {len(result.errors)} errors:") + logger.error(f"Compilation failed with {len(result.errors)} errors:") for error in result.errors: - click.echo(f" [x] {error}") + _rich_echo(f" [x] {error}", color="red") sys.exit(1) # Check for orphaned packages after successful compilation @@ -555,28 +555,28 @@ def compile( orphaned_packages = _check_orphaned_packages() if orphaned_packages: _rich_blank_line() - _rich_warning( - f"[!] 
Found {len(orphaned_packages)} orphaned package(s) that were included in compilation:" + logger.warning( + f"Found {len(orphaned_packages)} orphaned package(s) that were included in compilation:" ) for pkg in orphaned_packages: - _rich_info(f" * {pkg}") - _rich_info(" Run 'apm prune' to remove orphaned packages") + logger.progress(f" * {pkg}") + logger.progress(" Run 'apm prune' to remove orphaned packages") except Exception: pass # Continue if orphan check fails # Hard-fail when critical security findings were detected in compiled # output. Consistent with apm install and apm unpack behavior. if compile_has_critical: - _rich_error( + logger.error( "Compiled output contains critical hidden characters" - " — run 'apm audit' to inspect, 'apm audit --strip' to clean" + " -- run 'apm audit' to inspect, 'apm audit --strip' to clean" ) sys.exit(1) except ImportError as e: - _rich_error(f"Compilation module not available: {e}") - _rich_info("This might be a development environment issue.") + logger.error(f"Compilation module not available: {e}") + logger.progress("This might be a development environment issue.") sys.exit(1) except Exception as e: - _rich_error(f"Error during compilation: {e}") + logger.error(f"Error during compilation: {e}") sys.exit(1) diff --git a/src/apm_cli/commands/config.py b/src/apm_cli/commands/config.py index 93429a4b..7794ce20 100644 --- a/src/apm_cli/commands/config.py +++ b/src/apm_cli/commands/config.py @@ -7,7 +7,7 @@ import click from ..constants import APM_YML_FILENAME -from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_success +from ..core.command_logger import CommandLogger from ..version import get_version from ._helpers import HIGHLIGHT, RESET, _get_console, _load_apm_config @@ -21,6 +21,7 @@ def config(ctx): """Configure APM CLI settings.""" # If no subcommand, show current configuration if ctx.invoked_subcommand is None: + logger = CommandLogger("config") try: # Lazy import rich table from rich.table import Table 
# type: ignore @@ -87,7 +88,7 @@ def config(ctx): except (ImportError, NameError): # Fallback display - _rich_info("Current APM Configuration:") + logger.progress("Current APM Configuration:") if Path(APM_YML_FILENAME).exists(): apm_config = _load_apm_config() @@ -99,7 +100,7 @@ def config(ctx): f" MCP Dependencies: {len(apm_config.get('dependencies', {}).get('mcp', []))}" ) else: - _rich_info("Not in an APM project directory") + logger.progress("Not in an APM project directory") click.echo(f"\n{HIGHLIGHT}Global:{RESET}") click.echo(f" APM CLI Version: {get_version()}") @@ -117,20 +118,21 @@ def set(key, value): """ from ..config import set_auto_integrate + logger = CommandLogger("config set") if key == "auto-integrate": if value.lower() in ["true", "1", "yes"]: set_auto_integrate(True) - _rich_success("Auto-integration enabled") + logger.success("Auto-integration enabled") elif value.lower() in ["false", "0", "no"]: set_auto_integrate(False) - _rich_success("Auto-integration disabled") + logger.success("Auto-integration disabled") else: - _rich_error(f"Invalid value '{value}'. Use 'true' or 'false'.") + logger.error(f"Invalid value '{value}'. Use 'true' or 'false'.") sys.exit(1) else: - _rich_error(f"Unknown configuration key: '{key}'") - _rich_info("Valid keys: auto-integrate") - _rich_info( + logger.error(f"Unknown configuration key: '{key}'") + logger.progress("Valid keys: auto-integrate") + logger.progress( "This error may indicate a bug in command routing. Please report this issue." 
         )
         sys.exit(1)
@@ -147,21 +149,22 @@ def get(key):
     """
     from ..config import get_config, get_auto_integrate

+    logger = CommandLogger("config get")
     if key:
         if key == "auto-integrate":
             value = get_auto_integrate()
             click.echo(f"auto-integrate: {value}")
         else:
-            _rich_error(f"Unknown configuration key: '{key}'")
-            _rich_info("Valid keys: auto-integrate")
-            _rich_info(
+            logger.error(f"Unknown configuration key: '{key}'")
+            logger.progress("Valid keys: auto-integrate")
+            logger.progress(
                 "This error may indicate a bug in command routing. Please report this issue."
             )
             sys.exit(1)
     else:
         # Show all config
         config_data = get_config()
-        _rich_info("APM Configuration:")
+        logger.progress("APM Configuration:")
         for k, v in config_data.items():
             # Map internal keys to user-friendly names
             if k == "auto_integrate":
diff --git a/src/apm_cli/commands/deps/_utils.py b/src/apm_cli/commands/deps/_utils.py
index bf0d59a1..8ab24f89 100644
--- a/src/apm_cli/commands/deps/_utils.py
+++ b/src/apm_cli/commands/deps/_utils.py
@@ -6,7 +6,6 @@
 from ...constants import APM_DIR, APM_MODULES_DIR, APM_YML_FILENAME, SKILL_MD_FILENAME
 from ...models.apm_package import APMPackage
 from ...deps.github_downloader import GitHubPackageDownloader
-from ...utils.console import _rich_error, _rich_info, _rich_success, _rich_warning


 def _is_nested_under_package(candidate: Path, apm_modules_path: Path) -> bool:
@@ -201,8 +200,12 @@ def _get_detailed_package_info(package_path: Path) -> Dict[str, Any]:
     }


-def _update_single_package(package_name: str, project_deps: List, apm_modules_path: Path):
+def _update_single_package(package_name: str, project_deps: List, apm_modules_path: Path, logger=None):
     """Update a specific package."""
+    if logger is None:
+        from ...core.command_logger import CommandLogger
+        logger = CommandLogger("deps-update")
+
     # Find the dependency reference for this package
     target_dep = None
     for dep in project_deps:
@@ -211,7 +214,7 @@ def _update_single_package(package_name: str, project_deps: List, apm_modules_pa
             break

     if not target_dep:
-        _rich_error(f"Package '{package_name}' not found in apm.yml dependencies")
+        logger.error(f"Package '{package_name}' not found in apm.yml dependencies")
         return

     # Find the installed package directory using namespaced structure
@@ -233,30 +236,34 @@ def _update_single_package(package_name: str, project_deps: List, apm_modules_pa
         package_dir = apm_modules_path / package_name

     if not package_dir.exists():
-        _rich_error(f"Package '{package_name}' not installed in apm_modules/")
-        _rich_info(f"Run 'apm install' to install it first")
+        logger.error(f"Package '{package_name}' not installed in apm_modules/")
+        logger.progress(f"Run 'apm install' to install it first")
         return

     try:
         downloader = GitHubPackageDownloader()
-        _rich_info(f"Updating {target_dep.repo_url}...")
+        logger.progress(f"Updating {target_dep.repo_url}...")

         # Download latest version
         package_info = downloader.download_package(target_dep, package_dir)
-        _rich_success(f"[+] Updated {target_dep.repo_url}")
+        logger.success(f"Updated {target_dep.repo_url}")

     except Exception as e:
-        _rich_error(f"Failed to update {package_name}: {e}")
+        logger.error(f"Failed to update {package_name}: {e}")


-def _update_all_packages(project_deps: List, apm_modules_path: Path):
+def _update_all_packages(project_deps: List, apm_modules_path: Path, logger=None):
     """Update all packages."""
+    if logger is None:
+        from ...core.command_logger import CommandLogger
+        logger = CommandLogger("deps-update")
+
     if not project_deps:
-        _rich_info("No APM dependencies to update")
+        logger.progress("No APM dependencies to update")
         return

-    _rich_info(f"Updating {len(project_deps)} APM dependencies...")
+    logger.start(f"Updating {len(project_deps)} APM dependencies...")

     downloader = GitHubPackageDownloader()
     updated_count = 0
@@ -280,17 +287,17 @@ def _update_all_packages(project_deps: List, apm_modules_path: Path):
         package_dir = apm_modules_path / dep.repo_url

         if not package_dir.exists():
-            _rich_warning(f"[!] {dep.repo_url} not installed - skipping")
+            logger.warning(f"{dep.repo_url} not installed - skipping")
             continue

         try:
-            _rich_info(f" Updating {dep.repo_url}...")
+            logger.verbose_detail(f" Updating {dep.repo_url}...")
             package_info = downloader.download_package(dep, package_dir)
             updated_count += 1
-            _rich_success(f" [+] {dep.repo_url}")
+            logger.success(f" {dep.repo_url}")
         except Exception as e:
-            _rich_error(f" [x] Failed to update {dep.repo_url}: {e}")
+            logger.error(f" Failed to update {dep.repo_url}: {e}")
             continue

-    _rich_success(f"Updated {updated_count} of {len(project_deps)} packages")
+    logger.success(f"Updated {updated_count} of {len(project_deps)} packages")
diff --git a/src/apm_cli/commands/deps/cli.py b/src/apm_cli/commands/deps/cli.py
index a14631a5..2a3b6524 100644
--- a/src/apm_cli/commands/deps/cli.py
+++ b/src/apm_cli/commands/deps/cli.py
@@ -9,7 +9,7 @@
 # Import existing APM components
 from ...constants import APM_DIR, APM_MODULES_DIR, APM_YML_FILENAME, SKILL_MD_FILENAME
 from ...models.apm_package import APMPackage, ValidationResult, validate_apm_package
-from ...utils.console import _rich_success, _rich_error, _rich_info, _rich_warning
+from ...core.command_logger import CommandLogger

 # Import APM dependency system components (with fallback)
 from ...deps.github_downloader import GitHubPackageDownloader
@@ -37,6 +37,8 @@ def deps():
 @deps.command(name="list", help="List installed APM dependencies")
 def list_packages():
     """Show all installed APM dependencies with context files and agent workflows."""
+    logger = CommandLogger("deps-list")
+
     try:
         # Import Rich components with fallback
         from rich.table import Table
@@ -55,12 +57,8 @@ def list_packages():

         # Check if apm_modules exists
         if not apm_modules_path.exists():
-            if has_rich:
-                console.print(" No APM dependencies installed yet", style="cyan")
-                console.print("Run 'apm install' to install dependencies from apm.yml", style="dim")
-            else:
-                click.echo(" No APM dependencies installed yet")
-                click.echo("Run 'apm install' to install dependencies from apm.yml")
+            logger.progress("No APM dependencies installed yet")
+            logger.verbose_detail("Run 'apm install' to install dependencies from apm.yml")
             return

         # Load project dependencies to check for orphaned packages
@@ -167,13 +165,10 @@ def list_packages():
                     'is_orphaned': is_orphaned
                 })
             except Exception as e:
-                click.echo(f"[!] Warning: Failed to read package {org_repo_name}: {e}")
+                logger.warning(f"Failed to read package {org_repo_name}: {e}")

         if not installed_packages:
-            if has_rich:
-                console.print(" apm_modules/ directory exists but contains no valid packages", style="cyan")
-            else:
-                click.echo(" apm_modules/ directory exists but contains no valid packages")
+            logger.progress("apm_modules/ directory exists but contains no valid packages")
             return

         # Display packages in table format
@@ -235,13 +230,15 @@ def list_packages():
             click.echo("\n Run 'apm prune' to remove orphaned packages")

     except Exception as e:
-        _rich_error(f"Error listing dependencies: {e}")
+        logger.error(f"Error listing dependencies: {e}")
         sys.exit(1)


 @deps.command(help="Show dependency tree structure")
 def tree():
     """Display dependencies in hierarchical tree format using lockfile."""
+    logger = CommandLogger("deps-tree")
+
     try:
         # Import Rich components with fallback
         from rich.tree import Tree
@@ -395,24 +392,26 @@ def _add_children(parent_branch, parent_repo_url, depth=0):
         click.echo("+-- No dependencies installed")

     except Exception as e:
-        _rich_error(f"Error showing dependency tree: {e}")
+        logger.error(f"Error showing dependency tree: {e}")
         sys.exit(1)


 @deps.command(help="Remove all APM dependencies")
 def clean():
     """Remove entire apm_modules/ directory."""
+    logger = CommandLogger("deps-clean")
+
     project_root = Path(".")
     apm_modules_path = project_root / APM_MODULES_DIR

     if not apm_modules_path.exists():
-        _rich_info("No apm_modules/ directory found - already clean")
+        logger.progress("No apm_modules/ directory found - already clean")
         return

     # Show what will be removed
     package_count = len([d for d in apm_modules_path.iterdir() if d.is_dir()])
-    _rich_warning(f"This will remove the entire apm_modules/ directory ({package_count} packages)")
+    logger.warning(f"This will remove the entire apm_modules/ directory ({package_count} packages)")

     # Confirmation prompt
     try:
@@ -422,14 +421,14 @@ def clean():
         confirm = click.confirm("Continue?")

     if not confirm:
-        _rich_info("Operation cancelled")
+        logger.progress("Operation cancelled")
         return

     try:
         shutil.rmtree(apm_modules_path)
-        _rich_success("Successfully removed apm_modules/ directory")
+        logger.success("Successfully removed apm_modules/ directory")
     except Exception as e:
-        _rich_error(f"Error removing apm_modules/: {e}")
+        logger.error(f"Error removing apm_modules/: {e}")
         sys.exit(1)


@@ -437,50 +436,53 @@ def clean():
 @click.argument('package', required=False)
 def update(package: Optional[str]):
     """Update specific package or all if no package specified."""
-
+    logger = CommandLogger("deps-update")
+
     project_root = Path(".")
     apm_modules_path = project_root / APM_MODULES_DIR

     if not apm_modules_path.exists():
-        _rich_info("No apm_modules/ directory found - no packages to update")
+        logger.progress("No apm_modules/ directory found - no packages to update")
         return

     # Get project dependencies to validate updates
     try:
         apm_yml_path = project_root / APM_YML_FILENAME
         if not apm_yml_path.exists():
-            _rich_error(f"No {APM_YML_FILENAME} found in current directory")
+            logger.error(f"No {APM_YML_FILENAME} found in current directory")
             return

         project_package = APMPackage.from_apm_yml(apm_yml_path)
         project_deps = project_package.get_apm_dependencies()

         if not project_deps:
-            _rich_info("No APM dependencies defined in apm.yml")
+            logger.progress("No APM dependencies defined in apm.yml")
             return

     except Exception as e:
-        _rich_error(f"Error reading {APM_YML_FILENAME}: {e}")
+        logger.error(f"Error reading {APM_YML_FILENAME}: {e}")
         return

     if package:
         # Update specific package
-        _update_single_package(package, project_deps, apm_modules_path)
+        _update_single_package(package, project_deps, apm_modules_path, logger=logger)
     else:
         # Update all packages
-        _update_all_packages(project_deps, apm_modules_path)
+        _update_all_packages(project_deps, apm_modules_path, logger=logger)


 @deps.command(help="Show detailed package information")
 @click.argument('package', required=True)
 def info(package: str):
     """Show detailed information about a specific package including context files and workflows."""
+    logger = CommandLogger("deps-info")
+
     project_root = Path(".")
     apm_modules_path = project_root / APM_MODULES_DIR

     if not apm_modules_path.exists():
-        _rich_error("No apm_modules/ directory found")
-        _rich_info("Run 'apm install' to install dependencies first")
+        logger.error("No apm_modules/ directory found")
+        logger.progress("Run 'apm install' to install dependencies first")
         sys.exit(1)

     # Find the package directory - handle org/repo and deep sub-path structures
@@ -504,8 +506,8 @@ def info(package: str):
             break

     if not package_path:
-        _rich_error(f"Package '{package}' not found in apm_modules/")
-        _rich_info("Available packages:")
+        logger.error(f"Package '{package}' not found in apm_modules/")
+        logger.progress("Available packages:")

         for org_dir in apm_modules_path.iterdir():
             if org_dir.is_dir() and not org_dir.name.startswith('.'):
@@ -591,5 +593,5 @@ def info(package: str):
             click.echo(f" * {package_info['hooks']} hook file(s)")

     except Exception as e:
-        _rich_error(f"Error reading package information: {e}")
+        logger.error(f"Error reading package information: {e}")
         sys.exit(1)
diff --git a/src/apm_cli/commands/init.py b/src/apm_cli/commands/init.py
index e0394f76..5d032d12 100644
--- a/src/apm_cli/commands/init.py
+++ b/src/apm_cli/commands/init.py
@@ -7,14 +7,10 @@
 import click

 from ..constants import APM_YML_FILENAME
+from ..core.command_logger import CommandLogger
 from ..utils.console import (
     _create_files_table,
-    _rich_echo,
-    _rich_error,
-    _rich_info,
     _rich_panel,
-    _rich_success,
-    _rich_warning,
 )
 from ._helpers import (
     INFO,
@@ -37,13 +33,15 @@
 @click.option(
     "--plugin", is_flag=True, help="Initialize as plugin author (creates plugin.json + apm.yml)"
 )
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def init(ctx, project_name, yes, plugin):
+def init(ctx, project_name, yes, plugin, verbose):
     """Initialize a new APM project (like npm init).

     Creates a minimal apm.yml with auto-detected metadata.
     With --plugin, also creates plugin.json for plugin authors.
     """
+    logger = CommandLogger("init", verbose=verbose)
     try:
         # Handle explicit current directory
         if project_name == ".":
@@ -54,7 +52,7 @@ def init(ctx, project_name, yes, plugin):
             project_dir = Path(project_name)
             project_dir.mkdir(exist_ok=True)
             os.chdir(project_dir)
-            _rich_info(f"Created project directory: {project_name}", symbol="folder")
+            logger.progress(f"Created project directory: {project_name}", symbol="folder")
             final_project_name = project_name
         else:
             project_dir = Path.cwd()
@@ -62,7 +60,7 @@ def init(ctx, project_name, yes, plugin):

         # Validate plugin name early
         if plugin and not _validate_plugin_name(final_project_name):
-            _rich_error(
+            logger.error(
                 f"Invalid plugin name '{final_project_name}'. "
                 "Must be kebab-case (lowercase letters, numbers, hyphens), "
                 "start with a letter, and be at most 64 characters."
@@ -74,7 +72,7 @@ def init(ctx, project_name, yes, plugin):

         # Handle existing apm.yml in brownfield projects
         if apm_yml_exists:
-            _rich_warning("apm.yml already exists")
+            logger.warning("apm.yml already exists")

             if not yes:
                 Confirm = _lazy_confirm()
@@ -87,14 +85,14 @@ def init(ctx, project_name, yes, plugin):
                     confirm = click.confirm("Continue and overwrite?")

                 if not confirm:
-                    _rich_info("Initialization cancelled.")
+                    logger.progress("Initialization cancelled.")
                     return
             else:
-                _rich_info("--yes specified, overwriting apm.yml...")
+                logger.progress("--yes specified, overwriting apm.yml...")

         # Get project configuration (interactive mode or defaults)
         if not yes:
-            config = _interactive_project_setup(final_project_name)
+            config = _interactive_project_setup(final_project_name, logger)
         else:
             # Use auto-detected defaults
             config = _get_default_config(final_project_name)
@@ -103,7 +101,7 @@ def init(ctx, project_name, yes, plugin):
         if plugin and yes:
             config["version"] = "0.1.0"

-        _rich_success(f"Initializing APM project: {config['name']}", symbol="rocket")
+        logger.start(f"Initializing APM project: {config['name']}", symbol="running")

         # Create apm.yml (with devDependencies for plugin mode)
         _create_minimal_apm_yml(config, plugin=plugin)
@@ -112,7 +110,7 @@ def init(ctx, project_name, yes, plugin):
         if plugin:
             _create_plugin_json(config)

-        _rich_success("APM project initialized successfully!", symbol="sparkles")
+        logger.success("APM project initialized successfully!")

         # Display created file info
         try:
@@ -126,10 +124,10 @@ def init(ctx, project_name, yes, plugin):
             table = _create_files_table(files_data, title="Created Files")
             console.print(table)
         except (ImportError, NameError):
-            _rich_info("Created:")
-            _rich_echo(" * apm.yml - Project configuration", style="muted")
+            logger.progress("Created:")
+            click.echo(" * apm.yml - Project configuration")
             if plugin:
-                _rich_echo(" * plugin.json - Plugin metadata", style="muted")
+                click.echo(" * plugin.json - Plugin metadata")
         _rich_blank_line()
@@ -154,16 +152,16 @@ def init(ctx, project_name, yes, plugin):
                 style="cyan",
             )
         except (ImportError, NameError):
-            _rich_info("Next steps:")
+            logger.progress("Next steps:")
             for step in next_steps:
                 click.echo(f" * {step}")

     except Exception as e:
-        _rich_error(f"Error initializing project: {e}")
+        logger.error(f"Error initializing project: {e}")
         sys.exit(1)


-def _interactive_project_setup(default_name):
+def _interactive_project_setup(default_name, logger):
     """Interactive setup for new APM projects with auto-detection."""
     from ._helpers import _auto_detect_author, _auto_detect_description
@@ -200,8 +198,8 @@ def _interactive_project_setup(default_name):

     except (ImportError, NameError):
         # Fallback to click prompts
-        _rich_info("Setting up your APM project...")
-        _rich_info("Press ^C at any time to quit.")
+        logger.progress("Setting up your APM project...")
+        logger.progress("Press ^C at any time to quit.")

         name = click.prompt("Project name", default=default_name).strip()
         version = click.prompt("Version", default="1.0.0").strip()
@@ -215,7 +213,7 @@ def _interactive_project_setup(default_name):
         click.echo(f" author: {author}")

         if not click.confirm("\nIs this OK?", default=True):
-            _rich_info("Aborted.")
+            logger.progress("Aborted.")
             sys.exit(0)

         return {
diff --git a/src/apm_cli/commands/list_cmd.py b/src/apm_cli/commands/list_cmd.py
index 55465ad8..cdad2a81 100644
--- a/src/apm_cli/commands/list_cmd.py
+++ b/src/apm_cli/commands/list_cmd.py
@@ -5,13 +5,11 @@

 import click

+from ..core.command_logger import CommandLogger
 from ..utils.console import (
     STATUS_SYMBOLS,
     _rich_echo,
-    _rich_error,
-    _rich_info,
     _rich_panel,
-    _rich_warning,
 )
 from ._helpers import HIGHLIGHT, RESET, _get_console, _list_available_scripts

@@ -23,11 +21,12 @@
 @click.pass_context
 def list(ctx):
     """List all available scripts from apm.yml."""
+    logger = CommandLogger("list")
     try:
         scripts = _list_available_scripts()

         if not scripts:
-            _rich_warning("No scripts found.")
+            logger.warning("No scripts found.")

             # Show helpful example in a panel
             example_content = """scripts:
@@ -41,7 +40,7 @@ def list(ctx):
                     style="blue",
                 )
             except (ImportError, NameError):
-                _rich_info(" Add scripts to your apm.yml file:")
+                logger.progress("Add scripts to your apm.yml file:")
                 click.echo("scripts:")
                 click.echo(' start: "codex run main.prompt.md"')
                 click.echo(' fast: "llm prompt main.prompt.md -m github/gpt-4o-mini"')
@@ -78,7 +77,7 @@ def list(ctx):

             except Exception:
                 # Fallback to simple output
-                _rich_info("Available scripts:")
+                logger.progress("Available scripts:")
                 for name, command in scripts.items():
                     icon = STATUS_SYMBOLS["default"] if name == default_script else " "
                     click.echo(f" {icon} {HIGHLIGHT}{name}{RESET}: {command}")
@@ -88,7 +87,7 @@ def list(ctx):
                 )
         else:
             # Fallback to simple output
-            _rich_info("Available scripts:")
+            logger.progress("Available scripts:")
             for name, command in scripts.items():
                 icon = STATUS_SYMBOLS["default"] if name == default_script else " "
                 click.echo(f" {icon} {HIGHLIGHT}{name}{RESET}: {command}")
@@ -98,5 +97,5 @@ def list(ctx):
             )

     except Exception as e:
-        _rich_error(f"Error listing scripts: {e}")
+        logger.error(f"Error listing scripts: {e}")
         sys.exit(1)
diff --git a/src/apm_cli/commands/mcp.py b/src/apm_cli/commands/mcp.py
index 26a18f00..bac184d1 100644
--- a/src/apm_cli/commands/mcp.py
+++ b/src/apm_cli/commands/mcp.py
@@ -5,7 +5,7 @@

 import click

-from ..utils.console import _rich_error, _rich_info, _rich_success, _rich_warning
+from ..core.command_logger import CommandLogger
 from ._helpers import _get_console

 # Restore builtin since a subcommand is named ``list``
@@ -21,9 +21,11 @@ def mcp():
 @mcp.command(help="Search MCP servers in registry")
 @click.argument("query", required=True)
 @click.option("--limit", default=10, show_default=True, help="Number of results to show")
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def search(ctx, query, limit):
+def search(ctx, query, limit, verbose):
     """Search for MCP servers in the registry."""
+    logger = CommandLogger("mcp-search", verbose=verbose)
     try:
         from ..registry.integration import RegistryIntegration
@@ -33,9 +35,9 @@ def search(ctx, query, limit):
         console = _get_console()
         if not console:
             # Fallback for non-rich environments
-            click.echo(f"Searching for: {query}")
+            logger.progress(f"Searching for: {query}", symbol="search")
             if not servers:
-                click.echo("No servers found")
+                logger.warning("No servers found")
                 return
             for server in servers:
                 click.echo(f" {server.get('name', 'Unknown')}")
@@ -98,15 +100,17 @@ def search(ctx, query, limit):
             )

     except Exception as e:
-        _rich_error(f"Error searching registry: {e}")
+        logger.error(f"Error searching registry: {e}")
         sys.exit(1)


 @mcp.command(help="Show detailed MCP server information")
 @click.argument("server_name", required=True)
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def show(ctx, server_name):
+def show(ctx, server_name, verbose):
     """Show detailed information about an MCP server."""
+    logger = CommandLogger("mcp-show", verbose=verbose)
     try:
         from ..registry.integration import RegistryIntegration
@@ -115,7 +119,7 @@ def show(ctx, server_name):
         console = _get_console()
         if not console:
             # Fallback for non-rich environments
-            click.echo(f"Getting details for: {server_name}")
+            logger.progress(f"Getting details for: {server_name}", symbol="search")
             try:
                 server_info = registry.get_package_info(server_name)
                 click.echo(f"Name: {server_info.get('name', 'Unknown')}")
@@ -126,7 +130,7 @@ def show(ctx, server_name):
                     f"Repository: {server_info.get('repository', {}).get('url', 'Unknown')}"
                 )
             except ValueError:
-                click.echo(f"Server '{server_name}' not found")
+                logger.error(f"Server '{server_name}' not found")
                 sys.exit(1)
             return
@@ -283,15 +287,17 @@ def show(ctx, server_name):
             console.print(install_table)

     except Exception as e:
-        _rich_error(f"Error getting server details: {e}")
+        logger.error(f"Error getting server details: {e}")
         sys.exit(1)


 @mcp.command(help="List all available MCP servers")
 @click.option("--limit", default=20, help="Number of results to show")
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def list(ctx, limit):
+def list(ctx, limit, verbose):
     """List all available MCP servers in the registry."""
+    logger = CommandLogger("mcp-list", verbose=verbose)
     try:
         from ..registry.integration import RegistryIntegration
@@ -300,10 +306,10 @@ def list(ctx, limit):
         console = _get_console()
         if not console:
             # Fallback for non-rich environments
-            click.echo("Fetching available MCP servers...")
+            logger.progress("Fetching available MCP servers...", symbol="search")
             servers = registry.list_available_packages()[:limit]
             if not servers:
-                click.echo("No servers found")
+                logger.warning("No servers found")
                 return
             for server in servers:
                 click.echo(f" {server.get('name', 'Unknown')}")
@@ -369,5 +375,5 @@ def list(ctx, limit):
             )

     except Exception as e:
-        _rich_error(f"Error listing servers: {e}")
+        logger.error(f"Error listing servers: {e}")
         sys.exit(1)
diff --git a/src/apm_cli/commands/pack.py b/src/apm_cli/commands/pack.py
index ca5e6ab6..f36a58db 100644
--- a/src/apm_cli/commands/pack.py
+++ b/src/apm_cli/commands/pack.py
@@ -7,7 +7,8 @@

 from ..bundle.packer import pack_bundle
 from ..bundle.unpacker import unpack_bundle
-from ..utils.console import _rich_echo, _rich_success, _rich_error, _rich_info, _rich_warning
+from ..core.command_logger import CommandLogger
+from ..utils.console import _rich_echo


 @click.command(name="pack", help="Create a self-contained bundle from installed dependencies")
@@ -38,6 +39,7 @@
 @click.pass_context
 def pack_cmd(ctx, fmt, target, archive, output, dry_run, force):
     """Create a self-contained APM bundle."""
+    logger = CommandLogger("pack", dry_run=dry_run)
     try:
         result = pack_bundle(
             project_root=Path("."),
@@ -50,28 +52,28 @@ def pack_cmd(ctx, fmt, target, archive, output, dry_run, force):
         )

         if dry_run:
-            _rich_info("Dry run -- no files written")
+            logger.dry_run_notice("No files written")
             if result.files:
-                _rich_info(f"Would pack {len(result.files)} file(s):")
+                logger.progress(f"Would pack {len(result.files)} file(s):")
                 for f in result.files:
                     click.echo(f" {f}")
             else:
-                _rich_warning("No files to pack")
+                logger.warning("No files to pack")
             return

         if not result.files:
-            _rich_warning("No deployed files found -- empty bundle created")
+            logger.warning("No deployed files found -- empty bundle created")
         else:
-            _rich_success(f"Packed {len(result.files)} file(s) -> {result.bundle_path}")
+            logger.success(f"Packed {len(result.files)} file(s) -> {result.bundle_path}")

         if fmt == "plugin":
-            _rich_info(
-                "Plugin bundle ready — contains plugin.json and "
-                "plugin-native directories (agents/, skills/, commands/, …). "
+            logger.progress(
+                "Plugin bundle ready -- contains plugin.json and "
+                "plugin-native directories (agents/, skills/, commands/, ...). "
                 "No APM-specific files included."
             )

     except (FileNotFoundError, ValueError) as exc:
-        _rich_error(str(exc))
+        logger.error(str(exc))
         sys.exit(1)
@@ -90,8 +92,9 @@ def pack_cmd(ctx, fmt, target, archive, output, dry_run, force):
 @click.pass_context
 def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force):
     """Extract an APM bundle into the project."""
+    logger = CommandLogger("unpack", dry_run=dry_run)
     try:
-        _rich_info(f"Unpacking {bundle_path} → {output}")
+        logger.start(f"Unpacking {bundle_path} -> {output}")

         result = unpack_bundle(
             bundle_path=Path(bundle_path),
@@ -102,37 +105,37 @@ def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force):
         )

         if dry_run:
-            _rich_info("Dry run -- no files written")
+            logger.dry_run_notice("No files written")
             if result.files:
-                _rich_info(f"Would unpack {len(result.files)} file(s):")
+                logger.progress(f"Would unpack {len(result.files)} file(s):")
                 _log_unpack_file_list(result)
             else:
-                _rich_warning("No files in bundle")
+                logger.warning("No files in bundle")
             return

         if not result.files:
-            _rich_warning("No files were unpacked")
+            logger.warning("No files were unpacked")
         else:
             _log_unpack_file_list(result)
             if result.skipped_count > 0:
-                _rich_warning(
+                logger.warning(
                     f" {result.skipped_count} file(s) skipped (missing from bundle)"
                 )
             if result.security_critical > 0:
-                _rich_warning(
+                logger.warning(
                     f" Deployed with --force despite {result.security_critical} "
                     f"critical hidden-character finding(s)"
                 )
             elif result.security_warnings > 0:
-                _rich_warning(
+                logger.warning(
                     f" {result.security_warnings} hidden-character warning(s) "
                     f"-- run 'apm audit' to inspect"
                 )
             verified_msg = " (verified)" if result.verified else ""
-            _rich_success(f"Unpacked {len(result.files)} file(s){verified_msg}")
+            logger.success(f"Unpacked {len(result.files)} file(s){verified_msg}")

     except (FileNotFoundError, ValueError) as exc:
-        _rich_error(str(exc))
+        logger.error(str(exc))
         sys.exit(1)
diff --git a/src/apm_cli/commands/prune.py b/src/apm_cli/commands/prune.py
index 96a1e3fa..d5203282 100644
--- a/src/apm_cli/commands/prune.py
+++ b/src/apm_cli/commands/prune.py
@@ -7,7 +7,7 @@
 import click

 from ..constants import APM_LOCK_FILENAME, APM_MODULES_DIR, APM_YML_FILENAME
-from ..utils.console import _rich_error, _rich_info, _rich_success, _rich_warning
+from ..core.command_logger import CommandLogger
 from ..utils.path_security import PathTraversalError, safe_rmtree
 from ._helpers import _build_expected_install_paths, _scan_installed_packages

@@ -31,19 +31,20 @@ def prune(ctx, dry_run):
         apm prune # Remove orphaned packages
         apm prune --dry-run # Show what would be removed
     """
+    logger = CommandLogger("prune", dry_run=dry_run)
     try:
         # Check if apm.yml exists
         if not Path(APM_YML_FILENAME).exists():
-            _rich_error("No apm.yml found. Run 'apm init' first.")
+            logger.error("No apm.yml found. Run 'apm init' first.")
             sys.exit(1)

         # Check if apm_modules exists
         apm_modules_dir = Path(APM_MODULES_DIR)
         if not apm_modules_dir.exists():
-            _rich_info("No apm_modules/ directory found. Nothing to prune.")
+            logger.progress("No apm_modules/ directory found. Nothing to prune.")
             return

-        _rich_info("Analyzing installed packages vs apm.yml...")
+        logger.start("Analyzing installed packages vs apm.yml...")

         # Build expected vs installed using shared helpers
         try:
@@ -52,26 +53,26 @@ def prune(ctx, dry_run):
             lockfile = LockFile.read(get_lockfile_path(Path.cwd()))
             expected_installed = _build_expected_install_paths(declared_deps, lockfile, apm_modules_dir)
         except Exception as e:
-            _rich_error(f"Failed to parse {APM_YML_FILENAME}: {e}")
+            logger.error(f"Failed to parse {APM_YML_FILENAME}: {e}")
             sys.exit(1)

         installed_packages = _scan_installed_packages(apm_modules_dir)
         orphaned_packages = [p for p in installed_packages if p not in expected_installed]

         if not orphaned_packages:
-            _rich_success("No orphaned packages found. apm_modules/ is clean.")
+            logger.success("No orphaned packages found. apm_modules/ is clean.", symbol="check")
             return

         # Show what will be removed
-        _rich_info(f"Found {len(orphaned_packages)} orphaned package(s):")
+        logger.progress(f"Found {len(orphaned_packages)} orphaned package(s):")
         for pkg_name in orphaned_packages:
             if dry_run:
-                _rich_info(f" - {pkg_name} (would be removed)")
+                logger.progress(f" - {pkg_name} (would be removed)")
             else:
-                _rich_info(f" - {pkg_name}")
+                logger.progress(f" - {pkg_name}")

         if dry_run:
-            _rich_success("Dry run complete - no changes made")
+            logger.success("Dry run complete - no changes made")
             return

         # Remove orphaned packages
@@ -83,12 +84,12 @@ def prune(ctx, dry_run):
             pkg_path = apm_modules_dir.joinpath(*path_parts)
             try:
                 safe_rmtree(pkg_path, apm_modules_dir)
-                _rich_info(f"+ Removed {org_repo_name}")
+                logger.progress(f"+ Removed {org_repo_name}")
                 removed_count += 1
                 pruned_keys.append(org_repo_name)
                 deleted_pkg_paths.append(pkg_path)
             except Exception as e:
-                _rich_error(f"x Failed to remove {org_repo_name}: {e}")
+                logger.error(f"x Failed to remove {org_repo_name}: {e}")

         # Batch parent cleanup -- single bottom-up pass
         from ..integration.base_integrator import BaseIntegrator
@@ -127,7 +128,7 @@ def prune(ctx, dry_run):
         # Batch parent cleanup -- single bottom-up pass
         BaseIntegrator.cleanup_empty_parents(deleted_targets, stop_at=project_root)
         if deployed_cleaned > 0:
-            _rich_info(f"+ Cleaned {deployed_cleaned} deployed integration file(s)")
+            logger.progress(f"+ Cleaned {deployed_cleaned} deployed integration file(s)")

         # Write updated lockfile (or remove if empty)
         try:
             if lockfile.dependencies:
@@ -139,10 +140,10 @@ def prune(ctx, dry_run):

         # Final summary
         if removed_count > 0:
-            _rich_success(f"Pruned {removed_count} orphaned package(s)")
+            logger.success(f"Pruned {removed_count} orphaned package(s)")
         else:
-            _rich_warning("No packages were removed")
+            logger.warning("No packages were removed")

     except Exception as e:
-        _rich_error(f"Error pruning packages: {e}")
+        logger.error(f"Error pruning packages: {e}")
         sys.exit(1)
diff --git a/src/apm_cli/commands/run.py b/src/apm_cli/commands/run.py
index efb87a25..7c646927 100644
--- a/src/apm_cli/commands/run.py
+++ b/src/apm_cli/commands/run.py
@@ -5,7 +5,8 @@

 import click

-from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_panel, _rich_success, _rich_warning
+from ..core.command_logger import CommandLogger
+from ..utils.console import _rich_panel
 from ._helpers import (
     HIGHLIGHT,
     RESET,
@@ -19,18 +20,20 @@
 @click.command(help="Run a script with parameters")
 @click.argument("script_name", required=False)
 @click.option("--param", "-p", multiple=True, help="Parameter in format name=value")
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def run(ctx, script_name, param):
+def run(ctx, script_name, param, verbose):
     """Run a script from apm.yml (uses 'start' script if no name specified)."""
+    logger = CommandLogger("run", verbose=verbose)
     try:
         # If no script name specified, use 'start' script
         if not script_name:
             script_name = _get_default_script()
             if not script_name:
-                _rich_error(
+                logger.error(
                     "No script specified and no 'start' script defined in apm.yml"
                 )
-                _rich_info("Available scripts:")
+                logger.progress("Available scripts:")
                 scripts = _list_available_scripts()

                 console = _get_console()
@@ -62,7 +65,7 @@ def run(ctx, script_name, param):
             if "=" in p:
                 param_name, value = p.split("=", 1)
                 params[param_name] = value
-                _rich_echo(f" - {param_name}: {value}", style="muted")
+                logger.verbose_detail(f" - {param_name}: {value}")

         # Import and use script runner
         try:
@@ -72,42 +75,44 @@ def run(ctx, script_name, param):
             success = script_runner.run_script(script_name, params)

             if not success:
-                _rich_error("Script execution failed")
+                logger.error("Script execution failed")
                 sys.exit(1)

             _rich_blank_line()
-            _rich_success("Script executed successfully!", symbol="sparkles")
+            logger.success("Script executed successfully!")

         except ImportError as ie:
-            _rich_warning("Script runner not available yet")
-            _rich_info(f"Import error: {ie}")
-            _rich_info(f"Would run script: {script_name} with params {params}")
+            logger.warning("Script runner not available yet")
+            logger.verbose_detail(f"Import error: {ie}")
+            logger.verbose_detail(f"Would run script: {script_name} with params {params}")
         except Exception as ee:
-            _rich_error(f"Script execution error: {ee}")
+            logger.error(f"Script execution error: {ee}")
             sys.exit(1)

     except Exception as e:
-        _rich_error(f"Error running script: {e}")
+        logger.error(f"Error running script: {e}")
         sys.exit(1)


 @click.command(help="Preview a script's compiled prompt files")
 @click.argument("script_name", required=False)
 @click.option("--param", "-p", multiple=True, help="Parameter in format name=value")
+@click.option("--verbose", "-v", is_flag=True, help="Show detailed output")
 @click.pass_context
-def preview(ctx, script_name, param):
+def preview(ctx, script_name, param, verbose):
     """Preview compiled prompt files for a script."""
+    logger = CommandLogger("preview", verbose=verbose)
     try:
         # If no script name specified, use 'start' script
         if not script_name:
             script_name = _get_default_script()
             if not script_name:
-                _rich_error(
+                logger.error(
                     "No script specified and no 'start' script defined in apm.yml"
                 )
                 sys.exit(1)

-        _rich_info(f"Previewing script: {script_name}", symbol="info")
+        logger.start(f"Previewing script: {script_name}")

         # Parse parameters
         params = {}
@@ -115,7 +120,7 @@ def preview(ctx, script_name, param):
             if "=" in p:
                 param_name, value = p.split("=", 1)
                 params[param_name] = value
-                _rich_echo(f" - {param_name}: {value}", style="muted")
+                logger.verbose_detail(f" - {param_name}: {value}")

         # Import and use script runner for preview
         try:
@@ -126,7 +131,7 @@ def preview(ctx, script_name, param):
             # Get the script command
             scripts = script_runner.list_scripts()
             if script_name not in scripts:
-                _rich_error(f"Script '{script_name}' not found")
+                logger.error(f"Script '{script_name}' not
found") sys.exit(1) command = scripts[script_name] @@ -150,8 +155,8 @@ def preview(ctx, script_name, param): title="> Command (no prompt compilation)", style="yellow", ) - _rich_warning( - f"No .prompt.md files found in command. APM only compiles files ending with '.prompt.md'" + logger.warning( + "No .prompt.md files found in command. APM only compiles files ending with '.prompt.md'" ) # Show compiled files if any .prompt.md files were processed @@ -179,7 +184,7 @@ def preview(ctx, script_name, param): except (ImportError, NameError): # Fallback display - _rich_info("Original command:") + logger.progress("Original command:") click.echo(f" {command}") compiled_command, compiled_prompt_files = ( @@ -187,10 +192,10 @@ def preview(ctx, script_name, param): ) if compiled_prompt_files: - _rich_info("Compiled command:") + logger.progress("Compiled command:") click.echo(f" {compiled_command}") - _rich_info("Compiled prompt files:") + logger.progress("Compiled prompt files:") for prompt_file in compiled_prompt_files: output_name = ( Path(prompt_file).stem.replace(".prompt", "") + ".txt" @@ -198,21 +203,20 @@ def preview(ctx, script_name, param): compiled_path = Path(".apm/compiled") / output_name click.echo(f" - {compiled_path}") else: - _rich_warning("Command (no prompt compilation):") + logger.warning("Command (no prompt compilation):") click.echo(f" {compiled_command}") - _rich_info( + logger.progress( "APM only compiles files ending with '.prompt.md' extension." ) _rich_blank_line() - _rich_success( + logger.success( f"Preview complete! 
Use 'apm run {script_name}' to execute.", - symbol="sparkles", ) except ImportError: - _rich_warning("Script runner not available yet") + logger.warning("Script runner not available yet") except Exception as e: - _rich_error(f"Error previewing script: {e}") + logger.error(f"Error previewing script: {e}") sys.exit(1) diff --git a/src/apm_cli/commands/runtime.py b/src/apm_cli/commands/runtime.py index 22bbadc8..fb2218fd 100644 --- a/src/apm_cli/commands/runtime.py +++ b/src/apm_cli/commands/runtime.py @@ -5,12 +5,10 @@ import click +from ..core.command_logger import CommandLogger from ..utils.console import ( STATUS_SYMBOLS, - _rich_error, - _rich_info, _rich_panel, - _rich_success, ) from ._helpers import HIGHLIGHT, RESET, _get_console @@ -34,8 +32,9 @@ def runtime(): ) def setup(runtime_name, version, vanilla): """Set up an AI runtime with APM-managed installation.""" + logger = CommandLogger("runtime setup") try: - _rich_info(f"Setting up {runtime_name} runtime...") + logger.start(f"Setting up {runtime_name} runtime...") from ..runtime.manager import RuntimeManager @@ -45,16 +44,17 @@ def setup(runtime_name, version, vanilla): if not success: sys.exit(1) else: - _rich_success(f"{runtime_name} runtime setup complete!", symbol="sparkles") + logger.success(f"{runtime_name} runtime setup complete!") except Exception as e: - _rich_error(f"Error setting up runtime: {e}") + logger.error(f"Error setting up runtime: {e}") sys.exit(1) @runtime.command(help="List available and installed runtimes") def list(): """List all available runtimes and their installation status.""" + logger = CommandLogger("runtime list") try: from ..runtime.manager import RuntimeManager @@ -99,7 +99,7 @@ def list(): except (ImportError, NameError): # Fallback to simple output - _rich_info("Available Runtimes:") + logger.progress("Available Runtimes:") click.echo() for name, info in runtimes.items(): @@ -118,7 +118,7 @@ def list(): click.echo() except Exception as e: - _rich_error(f"Error listing 
runtimes: {e}") + logger.error(f"Error listing runtimes: {e}") sys.exit(1) @@ -127,8 +127,9 @@ def list(): @click.confirmation_option(prompt="Are you sure you want to remove this runtime?", help="Confirm the action without prompting") def remove(runtime_name): """Remove an installed runtime from APM management.""" + logger = CommandLogger("runtime remove") try: - _rich_info(f"Removing {runtime_name} runtime...") + logger.start(f"Removing {runtime_name} runtime...") from ..runtime.manager import RuntimeManager @@ -138,18 +139,17 @@ def remove(runtime_name): if not success: sys.exit(1) else: - _rich_success( - f"{runtime_name} runtime removed successfully!", symbol="sparkles" - ) + logger.success(f"{runtime_name} runtime removed successfully!") except Exception as e: - _rich_error(f"Error removing runtime: {e}") + logger.error(f"Error removing runtime: {e}") sys.exit(1) @runtime.command(help="Check which runtime will be used") def status(): """Show which runtime APM will use for execution.""" + logger = CommandLogger("runtime status") try: from ..runtime.manager import RuntimeManager @@ -170,19 +170,19 @@ def status(): except (ImportError, NameError): # Fallback display - _rich_info("Runtime Status:") + logger.progress("Runtime Status:") click.echo() click.echo(f"Preference order: {' -> '.join(preference)}") if available_runtime: - _rich_success(f"Active runtime: {available_runtime}") + logger.success(f"Active runtime: {available_runtime}") else: - _rich_error("No runtimes available") - _rich_info( + logger.error("No runtimes available") + logger.progress( "Run 'apm runtime setup copilot' to install the primary runtime" ) except Exception as e: - _rich_error(f"Error checking runtime status: {e}") + logger.error(f"Error checking runtime status: {e}") sys.exit(1) diff --git a/src/apm_cli/commands/uninstall/cli.py b/src/apm_cli/commands/uninstall/cli.py index 15ab4482..28ca343e 100644 --- a/src/apm_cli/commands/uninstall/cli.py +++ 
b/src/apm_cli/commands/uninstall/cli.py @@ -7,7 +7,7 @@ import click from ...constants import APM_MODULES_DIR, APM_YML_FILENAME -from ...utils.console import _rich_error, _rich_info, _rich_success, _rich_warning +from ...core.command_logger import CommandLogger from ...deps.lockfile import LockFile from ...models.apm_package import APMPackage, DependencyReference @@ -41,17 +41,18 @@ def uninstall(ctx, packages, dry_run): apm uninstall org/pkg1 org/pkg2 # Remove multiple packages apm uninstall acme/my-package --dry-run # Show what would be removed """ + logger = CommandLogger("uninstall", dry_run=dry_run) try: # Check if apm.yml exists if not Path(APM_YML_FILENAME).exists(): - _rich_error(f"No {APM_YML_FILENAME} found. Run 'apm init' first.") + logger.error(f"No {APM_YML_FILENAME} found. Run 'apm init' first.") sys.exit(1) if not packages: - _rich_error("No packages specified. Specify packages to uninstall.") + logger.error("No packages specified. Specify packages to uninstall.") sys.exit(1) - _rich_info(f"Uninstalling {len(packages)} package(s)...") + logger.start(f"Uninstalling {len(packages)} package(s)...") # Read current apm.yml import yaml @@ -61,7 +62,7 @@ def uninstall(ctx, packages, dry_run): with open(apm_yml_path, "r") as f: data = yaml.safe_load(f) or {} except Exception as e: - _rich_error(f"Failed to read {APM_YML_FILENAME}: {e}") + logger.error(f"Failed to read {APM_YML_FILENAME}: {e}") sys.exit(1) if "dependencies" not in data: @@ -74,7 +75,7 @@ def uninstall(ctx, packages, dry_run): # Step 1: Validate packages packages_to_remove, packages_not_found = _validate_uninstall_packages(packages, current_deps) if not packages_to_remove: - _rich_warning("No packages found in apm.yml to remove") + logger.warning("No packages found in apm.yml to remove") return # Step 2: Dry run @@ -85,14 +86,14 @@ def uninstall(ctx, packages, dry_run): # Step 3: Remove from apm.yml for package in packages_to_remove: current_deps.remove(package) - _rich_info(f"Removed 
{package} from apm.yml") + logger.progress(f"Removed {package} from apm.yml") data["dependencies"]["apm"] = current_deps try: with open(apm_yml_path, "w") as f: yaml.safe_dump(data, f, default_flow_style=False, sort_keys=False) - _rich_success(f"Updated {APM_YML_FILENAME} (removed {len(packages_to_remove)} package(s))") + logger.success(f"Updated {APM_YML_FILENAME} (removed {len(packages_to_remove)} package(s))") except Exception as e: - _rich_error(f"Failed to write {APM_YML_FILENAME}: {e}") + logger.error(f"Failed to write {APM_YML_FILENAME}: {e}") sys.exit(1) # Step 4: Load lockfile and capture pre-uninstall MCP state @@ -151,7 +152,7 @@ def uninstall(ctx, packages, dry_run): else: lockfile_path.unlink(missing_ok=True) except Exception: - _rich_warning("Failed to update lockfile — it may be out of sync with uninstalled packages.") + logger.warning("Failed to update lockfile -- it may be out of sync with uninstalled packages.") # Step 9: Sync integrations cleaned = {"prompts": 0, "agents": 0, "skills": 0, "commands": 0, "hooks": 0, "instructions": 0} @@ -164,24 +165,24 @@ def uninstall(ctx, packages, dry_run): for label, count in cleaned.items(): if count > 0: - _rich_info(f"\u2713 Cleaned up {count} integrated {label}") + logger.progress(f"\u2713 Cleaned up {count} integrated {label}") # Step 10: MCP cleanup try: apm_package = APMPackage.from_apm_yml(Path(APM_YML_FILENAME)) _cleanup_stale_mcp(apm_package, lockfile, lockfile_path, _pre_uninstall_mcp_servers) except Exception: - _rich_warning("MCP cleanup during uninstall failed") + logger.warning("MCP cleanup during uninstall failed") # Final summary summary_lines = [f"Removed {len(packages_to_remove)} package(s) from apm.yml"] if removed_from_modules > 0: summary_lines.append(f"Removed {removed_from_modules} package(s) from apm_modules/") - _rich_success("Uninstall complete: " + ", ".join(summary_lines)) + logger.success("Uninstall complete: " + ", ".join(summary_lines)) if packages_not_found: - 
_rich_warning(f"Note: {len(packages_not_found)} package(s) were not found in apm.yml") + logger.warning(f"Note: {len(packages_not_found)} package(s) were not found in apm.yml") except Exception as e: - _rich_error(f"Error uninstalling packages: {e}") + logger.error(f"Error uninstalling packages: {e}") sys.exit(1) diff --git a/src/apm_cli/commands/update.py b/src/apm_cli/commands/update.py index 7bfcf632..569c88b3 100644 --- a/src/apm_cli/commands/update.py +++ b/src/apm_cli/commands/update.py @@ -6,7 +6,7 @@ import click -from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_success, _rich_warning +from ..core.command_logger import CommandLogger from ..version import get_version @@ -64,19 +64,20 @@ def update(check): import subprocess import tempfile + logger = CommandLogger("update") current_version = get_version() # Skip check for development versions if current_version == "unknown": - _rich_warning( + logger.warning( "Cannot determine current version. Running in development mode?" 
) if not check: - _rich_info("To update, reinstall from the repository.") + logger.progress("To update, reinstall from the repository.") return - _rich_info(f"Current version: {current_version}", symbol="info") - _rich_info("Checking for updates...", symbol="running") + logger.progress(f"Current version: {current_version}") + logger.start("Checking for updates...") # Check for latest version from ..utils.version_checker import get_latest_version_from_github @@ -84,28 +85,28 @@ def update(check): latest_version = get_latest_version_from_github() if not latest_version: - _rich_error("Unable to fetch latest version from GitHub") - _rich_info("Please check your internet connection or try again later") + logger.error("Unable to fetch latest version from GitHub") + logger.progress("Please check your internet connection or try again later") sys.exit(1) from ..utils.version_checker import is_newer_version if not is_newer_version(current_version, latest_version): - _rich_success( + logger.success( f"You're already on the latest version: {current_version}", symbol="check", ) return - _rich_info(f"Latest version available: {latest_version}", symbol="sparkles") + logger.progress(f"Latest version available: {latest_version}") if check: - _rich_warning(f"Update available: {current_version} -> {latest_version}") - _rich_info("Run 'apm update' (without --check) to install", symbol="info") + logger.warning(f"Update available: {current_version} -> {latest_version}") + logger.progress("Run 'apm update' (without --check) to install") return # Proceed with update - _rich_info("Downloading and installing update...", symbol="running") + logger.start("Downloading and installing update...") # Download install script to temp file try: @@ -126,7 +127,7 @@ def update(check): os.chmod(temp_script, 0o755) # Run install script - _rich_info("Running installer...", symbol="gear") + logger.progress("Running installer...") # Note: We don't capture output so the
installer can prompt when needed. result = subprocess.run(_get_installer_run_command(temp_script), check=False) @@ -139,28 +140,28 @@ def update(check): pass if result.returncode == 0: - _rich_success( + logger.success( f"Successfully updated to version {latest_version}!", - symbol="sparkles", ) - _rich_info( + logger.progress( "Please restart your terminal or run 'apm --version' to verify" ) else: - _rich_error("Installation failed - see output above for details") + logger.error("Installation failed - see output above for details") sys.exit(1) except ImportError: - _rich_error("'requests' library not available") - _rich_info("Please update manually using:") + logger.error("'requests' library not available") + logger.progress("Please update manually using:") click.echo(f" {_get_manual_update_command()}") sys.exit(1) except Exception as e: - _rich_error(f"Update failed: {e}") - _rich_info("Please update manually using:") + logger.error(f"Update failed: {e}") + logger.progress("Please update manually using:") click.echo(f" {_get_manual_update_command()}") sys.exit(1) except Exception as e: - _rich_error(f"Error during update: {e}") + _logger = CommandLogger("update") + _logger.error(f"Error during update: {e}") sys.exit(1) diff --git a/src/apm_cli/compilation/agents_compiler.py b/src/apm_cli/compilation/agents_compiler.py index 3a7e053b..d452d4d9 100644 --- a/src/apm_cli/compilation/agents_compiler.py +++ b/src/apm_cli/compilation/agents_compiler.py @@ -156,8 +156,14 @@ def __init__(self, base_dir: str = "."): self.base_dir = Path(base_dir) self.warnings: List[str] = [] self.errors: List[str] = [] + self._logger = None + + def _log(self, method: str, message: str, **kwargs): + """Delegate to logger if available, else no-op.""" + if self._logger: + getattr(self._logger, method)(message, **kwargs) - def compile(self, config: CompilationConfig, primitives: Optional[PrimitiveCollection] = None) -> CompilationResult: + def compile(self, config: CompilationConfig, 
primitives: Optional[PrimitiveCollection] = None, logger=None) -> CompilationResult: """Compile AGENTS.md and/or CLAUDE.md based on target configuration. Routes compilation to appropriate targets based on config.target: @@ -174,6 +180,7 @@ def compile(self, config: CompilationConfig, primitives: Optional[PrimitiveColle """ self.warnings.clear() self.errors.clear() + self._logger = logger try: # Use provided primitives or discover them (with dependency support) @@ -273,7 +280,7 @@ def _compile_distributed(self, config: CompilationConfig, primitives: PrimitiveC output = distributed_compiler.output_formatter.format_default(compilation_results) # Display the professional output - print(output) + self._log("progress", output) if not distributed_result.success: self.warnings.extend(distributed_result.warnings) @@ -519,7 +526,7 @@ def _compile_claude_md(self, config: CompilationConfig, primitives: PrimitiveCol output = formatter.format_dry_run(formatter_results) else: output = formatter.format_default(formatter_results) - print(output) + self._log("progress", output) # Generate summary content for result object summary_lines = [ @@ -774,8 +781,8 @@ def _display_placement_preview(self, distributed_result) -> None: Args: distributed_result: Result from distributed compilation. 
""" - print("Distributed AGENTS.md Placement Preview:") - print() + self._log("progress", "Distributed AGENTS.md Placement Preview:") + self._log("progress", "") for placement in distributed_result.placements: try: @@ -783,13 +790,13 @@ def _display_placement_preview(self, distributed_result) -> None: except ValueError: # Fallback for path resolution issues rel_path = placement.agents_path - print(f"{rel_path}") - print(f" Instructions: {len(placement.instructions)}") - print(f" Patterns: {', '.join(sorted(placement.coverage_patterns))}") + self._log("verbose_detail", f"{rel_path}") + self._log("verbose_detail", f" Instructions: {len(placement.instructions)}") + self._log("verbose_detail", f" Patterns: {', '.join(sorted(placement.coverage_patterns))}") if placement.source_attribution: sources = set(placement.source_attribution.values()) - print(f" Sources: {', '.join(sorted(sources))}") - print() + self._log("verbose_detail", f" Sources: {', '.join(sorted(sources))}") + self._log("verbose_detail", "") def _display_trace_info(self, distributed_result, primitives: PrimitiveCollection) -> None: """Display detailed trace information for --trace mode. @@ -798,15 +805,15 @@ def _display_trace_info(self, distributed_result, primitives: PrimitiveCollectio distributed_result: Result from distributed compilation. primitives (PrimitiveCollection): Full primitive collection. 
""" - print("Distributed Compilation Trace:") - print() + self._log("progress", "Distributed Compilation Trace:") + self._log("progress", "") for placement in distributed_result.placements: try: rel_path = placement.agents_path.relative_to(self.base_dir.resolve()) except ValueError: rel_path = placement.agents_path - print(f"{rel_path}") + self._log("verbose_detail", f"{rel_path}") for instruction in placement.instructions: source = getattr(instruction, 'source', 'local') @@ -815,8 +822,8 @@ def _display_trace_info(self, distributed_result, primitives: PrimitiveCollectio except ValueError: inst_path = instruction.file_path - print(f" * {instruction.apply_to or 'no pattern'} <- {source} {inst_path}") - print() + self._log("verbose_detail", f" * {instruction.apply_to or 'no pattern'} <- {source} {inst_path}") + self._log("verbose_detail", "") def _generate_placement_summary(self, distributed_result) -> str: """Generate a text summary of placement results. diff --git a/src/apm_cli/drift.py b/src/apm_cli/drift.py index 292a9ca5..1e3a1574 100644 --- a/src/apm_cli/drift.py +++ b/src/apm_cli/drift.py @@ -62,6 +62,7 @@ def detect_ref_change( locked_dep: "Optional[LockedDependency]", *, update_refs: bool = False, + logger=None, ) -> bool: """Return ``True`` when the manifest ref differs from the locked resolved_ref. @@ -102,6 +103,7 @@ def detect_orphans( intended_dep_keys: builtins.set, *, only_packages: builtins.list, + logger=None, ) -> builtins.set: """Return the set of deployed file paths whose owning package left the manifest. @@ -138,6 +140,7 @@ def detect_orphans( def detect_config_drift( current_configs: Dict[str, dict], stored_configs: Dict[str, dict], + logger=None, ) -> builtins.set: """Return names of entries whose current config differs from the stored baseline. @@ -172,6 +175,7 @@ def build_download_ref( *, update_refs: bool, ref_changed: bool, + logger=None, ) -> "DependencyReference": """Build the dependency reference passed to the package downloader. 
diff --git a/src/apm_cli/integration/instruction_integrator.py b/src/apm_cli/integration/instruction_integrator.py index c5c4c6ad..fb1166f4 100644 --- a/src/apm_cli/integration/instruction_integrator.py +++ b/src/apm_cli/integration/instruction_integrator.py @@ -49,6 +49,7 @@ def integrate_package_instructions( force: bool = False, managed_files: Optional[Set[str]] = None, diagnostics=None, + logger=None, ) -> IntegrationResult: """Integrate all instructions from a package into .github/instructions/. @@ -182,6 +183,7 @@ def integrate_package_instructions_cursor( force: bool = False, managed_files: Optional[Set[str]] = None, diagnostics=None, + logger=None, ) -> IntegrationResult: """Integrate instructions as Cursor Rules into ``.cursor/rules/``. diff --git a/src/apm_cli/integration/mcp_integrator.py b/src/apm_cli/integration/mcp_integrator.py index e2b9cbac..43de1e52 100644 --- a/src/apm_cli/integration/mcp_integrator.py +++ b/src/apm_cli/integration/mcp_integrator.py @@ -28,7 +28,7 @@ _rich_warning, ) -logger = logging.getLogger(__name__) +_log = logging.getLogger(__name__) def _is_vscode_available() -> bool: @@ -60,6 +60,7 @@ def collect_transitive( apm_modules_dir: Path, lock_path: Optional[Path] = None, trust_private: bool = False, + logger=None, ) -> list: """Collect MCP dependencies from resolved APM packages listed in apm.lock. 
@@ -113,25 +114,44 @@ def collect_transitive( for dep in mcp: if hasattr(dep, "is_self_defined") and dep.is_self_defined: if is_direct: - _rich_info( - f"Trusting direct dependency MCP '{dep.name}' " - f"from '{pkg.name}'" - ) + if logger: + logger.verbose_detail( + f"Trusting direct dependency MCP '{dep.name}' " + f"from '{pkg.name}'" + ) + else: + _rich_info( + f"Trusting direct dependency MCP '{dep.name}' " + f"from '{pkg.name}'" + ) elif trust_private: - _rich_info( - f"Trusting self-defined MCP server '{dep.name}' " - f"from transitive package '{pkg.name}' (--trust-transitive-mcp)" - ) + if logger: + logger.verbose_detail( + f"Trusting self-defined MCP server '{dep.name}' " + f"from transitive package '{pkg.name}' (--trust-transitive-mcp)" + ) + else: + _rich_info( + f"Trusting self-defined MCP server '{dep.name}' " + f"from transitive package '{pkg.name}' (--trust-transitive-mcp)" + ) else: - _rich_warning( - f"Transitive package '{pkg.name}' declares self-defined " - f"MCP server '{dep.name}' (registry: false). " - f"Re-declare it in your apm.yml or use --trust-transitive-mcp." - ) + if logger: + logger.warning( + f"Transitive package '{pkg.name}' declares self-defined " + f"MCP server '{dep.name}' (registry: false). " + f"Re-declare it in your apm.yml or use --trust-transitive-mcp." + ) + else: + _rich_warning( + f"Transitive package '{pkg.name}' declares self-defined " + f"MCP server '{dep.name}' (registry: false). " + f"Re-declare it in your apm.yml or use --trust-transitive-mcp." + ) continue collected.append(dep) except Exception: - logger.debug( + _log.debug( "Skipping package at %s: failed to parse apm.yml", apm_yml_path, exc_info=True, @@ -423,6 +443,7 @@ def remove_stale( stale_names: builtins.set, runtime: str = None, exclude: str = None, + logger=None, ) -> None: """Remove MCP server entries that are no longer required by any dependency. 
@@ -469,11 +490,16 @@ def remove_stale( _json.dumps(config, indent=2), encoding="utf-8" ) for name in removed: - _rich_info( - f"+ Removed stale MCP server '{name}' from .vscode/mcp.json" - ) + if logger: + logger.progress( + f"Removed stale MCP server '{name}' from .vscode/mcp.json" + ) + else: + _rich_info( + f"+ Removed stale MCP server '{name}' from .vscode/mcp.json" + ) except Exception: - logger.debug( + _log.debug( "Failed to clean stale MCP servers from .vscode/mcp.json", exc_info=True, ) @@ -499,7 +525,7 @@ def remove_stale( f"+ Removed stale MCP server '{name}' from Copilot CLI config" ) except Exception: - logger.debug( + _log.debug( "Failed to clean stale MCP servers from Copilot CLI config", exc_info=True, ) @@ -523,7 +549,7 @@ def remove_stale( f"+ Removed stale MCP server '{name}' from Codex CLI config" ) except Exception: - logger.debug( + _log.debug( "Failed to clean stale MCP servers from Codex CLI config", exc_info=True, ) @@ -549,7 +575,7 @@ def remove_stale( f"+ Removed stale MCP server '{name}' from .cursor/mcp.json" ) except Exception: - logger.debug( + _log.debug( "Failed to clean stale MCP servers from .cursor/mcp.json", exc_info=True, ) @@ -571,11 +597,16 @@ def remove_stale( _json.dumps(config, indent=2), encoding="utf-8" ) for name in removed: - _rich_info( - f"+ Removed stale MCP server '{name}' from opencode.json" - ) + if logger: + logger.progress( + f"Removed stale MCP server '{name}' from opencode.json" + ) + else: + _rich_info( + f"+ Removed stale MCP server '{name}' from opencode.json" + ) except Exception: - logger.debug( + _log.debug( "Failed to clean stale MCP servers from opencode.json", exc_info=True, ) @@ -615,7 +646,7 @@ def update_lockfile( lockfile.mcp_configs = mcp_configs lockfile.save(lock_path) except Exception: - logger.debug( + _log.debug( "Failed to update MCP servers in lockfile at %s", lock_path, exc_info=True, @@ -686,6 +717,7 @@ def _install_for_runtime( shared_env_vars: dict = None, server_info_cache: dict = 
None, shared_runtime_vars: dict = None, + logger=None, ) -> bool: """Install MCP dependencies for a specific runtime. @@ -700,7 +732,10 @@ def _install_for_runtime( all_ok = True for dep in mcp_deps: - click.echo(f" Installing {dep}...") + if logger: + logger.verbose_detail(f" Installing {dep}...") + else: + click.echo(f" Installing {dep}...") try: result = install_package( runtime, @@ -710,32 +745,49 @@ def _install_for_runtime( shared_runtime_vars=shared_runtime_vars, ) if result["failed"]: - click.echo(f" x Failed to install {dep}") + if logger: + logger.error(f" Failed to install {dep}") + else: + click.echo(f" x Failed to install {dep}") all_ok = False except Exception as install_error: - logger.debug( + _log.debug( "Failed to install MCP dep %s for runtime %s", dep, runtime, exc_info=True, ) - click.echo(f" x Failed to install {dep}: {install_error}") + if logger: + logger.error(f" Failed to install {dep}: {install_error}") + else: + click.echo(f" x Failed to install {dep}: {install_error}") all_ok = False return all_ok except ImportError as e: - _rich_warning(f"Core operations not available for runtime {runtime}: {e}") - _rich_info(f"Dependencies for {runtime}: {', '.join(mcp_deps)}") + if logger: + logger.warning(f"Core operations not available for runtime {runtime}: {e}") + logger.progress(f"Dependencies for {runtime}: {', '.join(mcp_deps)}") + else: + _rich_warning(f"Core operations not available for runtime {runtime}: {e}") + _rich_info(f"Dependencies for {runtime}: {', '.join(mcp_deps)}") return False except ValueError as e: - _rich_warning(f"Runtime {runtime} not supported: {e}") - _rich_info("Supported runtimes: vscode, copilot, codex, cursor, opencode, llm") + if logger: + logger.warning(f"Runtime {runtime} not supported: {e}") + logger.progress("Supported runtimes: vscode, copilot, codex, cursor, opencode, llm") + else: + _rich_warning(f"Runtime {runtime} not supported: {e}") + _rich_info("Supported runtimes: vscode, copilot, codex, cursor, 
opencode, llm") return False except Exception as e: - logger.debug( + _log.debug( "Unexpected error installing for runtime %s", runtime, exc_info=True ) - _rich_error(f"Error installing for runtime {runtime}: {e}") + if logger: + logger.error(f"Error installing for runtime {runtime}: {e}") + else: + _rich_error(f"Error installing for runtime {runtime}: {e}") return False # ------------------------------------------------------------------ @@ -750,6 +802,7 @@ def install( verbose: bool = False, apm_config: dict = None, stored_mcp_configs: dict = None, + logger=None, ) -> int: """Install MCP dependencies. @@ -769,7 +822,10 @@ def install( Number of MCP servers newly configured or updated. """ if not mcp_deps: - _rich_warning("No MCP dependencies found in apm.yml") + if logger: + logger.warning("No MCP dependencies found in apm.yml") + else: + _rich_warning("No MCP dependencies found in apm.yml") return 0 # Split into registry-resolved and self-defined deps @@ -812,15 +868,24 @@ def install( header.append(")", style="cyan") console.print(header) except Exception: - _rich_info(f"Installing MCP dependencies ({len(mcp_deps)})...") + if logger: + logger.progress(f"Installing MCP dependencies ({len(mcp_deps)})...") + else: + _rich_info(f"Installing MCP dependencies ({len(mcp_deps)})...") else: - _rich_info(f"Installing MCP dependencies ({len(mcp_deps)})...") + if logger: + logger.progress(f"Installing MCP dependencies ({len(mcp_deps)})...") + else: + _rich_info(f"Installing MCP dependencies ({len(mcp_deps)})...") # Runtime detection and multi-runtime installation if runtime: # Single runtime mode target_runtimes = [runtime] - _rich_info(f"Targeting specific runtime: {runtime}") + if logger: + logger.progress(f"Targeting specific runtime: {runtime}") + else: + _rich_info(f"Targeting specific runtime: {runtime}") else: if apm_config is None: # Lazy load -- only when the caller doesn't provide it @@ -906,6 +971,17 @@ def install( f"(available + used in scripts)" ) 
console.print("|") + elif logger: + logger.verbose_detail( + f"Installed runtimes: {', '.join(installed_runtimes)}" + ) + logger.verbose_detail( + f"Script runtimes: {', '.join(script_runtimes)}" + ) + if target_runtimes: + logger.verbose_detail( + f"Target runtimes: {', '.join(target_runtimes)}" + ) else: _rich_info( f"Installed runtimes: {', '.join(installed_runtimes)}" @@ -919,23 +995,41 @@ def install( ) if not target_runtimes: - _rich_warning( - "Scripts reference runtimes that are not installed" - ) - _rich_info( - "Install missing runtimes with: apm runtime setup " - ) + if logger: + logger.warning( + "Scripts reference runtimes that are not installed" + ) + logger.progress( + "Install missing runtimes with: apm runtime setup " + ) + else: + _rich_warning( + "Scripts reference runtimes that are not installed" + ) + _rich_info( + "Install missing runtimes with: apm runtime setup " + ) else: target_runtimes = installed_runtimes if target_runtimes: if verbose: - _rich_info( - f"No scripts detected, using all installed runtimes: " - f"{', '.join(target_runtimes)}" - ) + if logger: + logger.verbose_detail( + f"No scripts detected, using all installed runtimes: " + f"{', '.join(target_runtimes)}" + ) + else: + _rich_info( + f"No scripts detected, using all installed runtimes: " + f"{', '.join(target_runtimes)}" + ) else: - _rich_warning("No MCP-compatible runtimes installed") - _rich_info("Install a runtime with: apm runtime setup copilot") + if logger: + logger.warning("No MCP-compatible runtimes installed") + logger.progress("Install a runtime with: apm runtime setup copilot") + else: + _rich_warning("No MCP-compatible runtimes installed") + _rich_info("Install a runtime with: apm runtime setup copilot") # Apply exclusions if exclude: @@ -943,16 +1037,25 @@ def install( # All runtimes excluded -- nothing to configure if not target_runtimes and installed_runtimes: - _rich_warning( - f"All installed runtimes excluded (--exclude {exclude}), " - "skipping MCP 
configuration" - ) + if logger: + logger.warning( + f"All installed runtimes excluded (--exclude {exclude}), " + "skipping MCP configuration" + ) + else: + _rich_warning( + f"All installed runtimes excluded (--exclude {exclude}), " + "skipping MCP configuration" + ) return 0 # Fall back to VS Code only if no runtimes are installed at all if not target_runtimes and not installed_runtimes: target_runtimes = ["vscode"] - _rich_info("No runtimes installed, using VS Code as fallback") + if logger: + logger.progress("No runtimes installed, using VS Code as fallback") + else: + _rich_info("No runtimes installed, using VS Code as fallback") # Use the new registry operations module for better server detection configured_count = 0 @@ -966,20 +1069,33 @@ def install( # Early validation: check all servers exist in registry (fail-fast) if verbose: - _rich_info( - f"Validating {len(registry_deps)} registry servers..." - ) + if logger: + logger.verbose_detail( + f"Validating {len(registry_deps)} registry servers..." + ) + else: + _rich_info( + f"Validating {len(registry_deps)} registry servers..." 
+ ) valid_servers, invalid_servers = operations.validate_servers_exist( registry_dep_names ) if invalid_servers: - _rich_error( - f"Server(s) not found in registry: {', '.join(invalid_servers)}" - ) - _rich_info( - "Run 'apm mcp search ' to find available servers" - ) + if logger: + logger.error( + f"Server(s) not found in registry: {', '.join(invalid_servers)}" + ) + logger.progress( + "Run 'apm mcp search ' to find available servers" + ) + else: + _rich_error( + f"Server(s) not found in registry: {', '.join(invalid_servers)}" + ) + _rich_info( + "Run 'apm mcp search ' to find available servers" + ) raise RuntimeError( f"Cannot install {len(invalid_servers)} missing server(s)" ) @@ -1022,6 +1138,10 @@ def install( f"| [green]+[/green] {dep} " f"[dim](already configured)[/dim]" ) + elif logger: + logger.success( + "All registry MCP servers already configured" + ) else: _rich_success( "All registry MCP servers already configured" @@ -1034,6 +1154,11 @@ def install( f"| [green]+[/green] {dep} " f"[dim](already configured)[/dim]" ) + elif logger: + logger.verbose_detail( + "Already configured registry MCP servers: " + f"{', '.join(already_configured_servers)}" + ) elif verbose: _rich_info( "Already configured registry MCP servers: " @@ -1042,9 +1167,14 @@ def install( # Batch fetch server info once if verbose: - _rich_info( - f"Installing {len(servers_to_install)} servers..." - ) + if logger: + logger.verbose_detail( + f"Installing {len(servers_to_install)} servers..." + ) + else: + _rich_info( + f"Installing {len(servers_to_install)} servers..." 
+            ) server_info_cache = operations.batch_fetch_server_info( servers_to_install ) @@ -1087,13 +1217,17 @@ def install( any_ok = False for rt in target_runtimes: if verbose: - _rich_info(f"Configuring {rt}...") + if logger: + logger.verbose_detail(f"Configuring {rt}...") + else: + _rich_info(f"Configuring {rt}...") if MCPIntegrator._install_for_runtime( rt, [dep], shared_env_vars, server_info_cache, shared_runtime_vars, + logger=logger, ): any_ok = True @@ -1115,10 +1249,16 @@ def install( ) except ImportError: - _rich_warning("Registry operations not available") - _rich_error( - "Cannot validate MCP servers without registry operations" - ) + if logger: + logger.warning("Registry operations not available") + logger.error( + "Cannot validate MCP servers without registry operations" + ) + else: + _rich_warning("Registry operations not available") + _rich_error( + "Cannot validate MCP servers without registry operations" + ) raise RuntimeError( "Registry operations module required for MCP installation" ) @@ -1163,6 +1303,9 @@ def install( f"| [green]+[/green] {name} " f"[dim](already configured)[/dim]" ) + elif logger: + for name in already_configured_self_defined: + logger.verbose_detail(f"{name} already configured, skipping") elif verbose: for name in already_configured_self_defined: _rich_info(f"{name} already configured, skipping") @@ -1197,12 +1340,16 @@ def install( any_ok = False for rt in target_runtimes: if verbose: - _rich_info(f"Configuring {dep.name} for {rt}...") + if logger: + logger.verbose_detail(f"Configuring {dep.name} for {rt}...") + else: + _rich_info(f"Configuring {dep.name} for {rt}...") if MCPIntegrator._install_for_runtime( rt, [dep.name], self_defined_env, self_defined_cache, + logger=logger, ): any_ok = True diff --git a/src/apm_cli/integration/prompt_integrator.py b/src/apm_cli/integration/prompt_integrator.py index a4440f38..0d33ff58 100644 --- a/src/apm_cli/integration/prompt_integrator.py +++ b/src/apm_cli/integration/prompt_integrator.py
@@ -68,7 +68,8 @@ def get_target_filename(self, source_file: Path, package_name: str) -> str: def integrate_package_prompts(self, package_info, project_root: Path, force: bool = False, managed_files: set = None, - diagnostics=None) -> IntegrationResult: + diagnostics=None, + logger=None) -> IntegrationResult: """Integrate all prompts from a package into .github/prompts/. Deploys with clean filenames. Skips files that exist locally and diff --git a/src/apm_cli/integration/skill_integrator.py b/src/apm_cli/integration/skill_integrator.py index cf52665f..4335a46d 100644 --- a/src/apm_cli/integration/skill_integrator.py +++ b/src/apm_cli/integration/skill_integrator.py @@ -453,7 +453,7 @@ def _dircmp_equal(dcmp) -> bool: return True @staticmethod - def _promote_sub_skills(sub_skills_dir: Path, target_skills_root: Path, parent_name: str, *, warn: bool = True, owned_by: dict[str, str] | None = None, diagnostics=None, managed_files=None, force: bool = False, project_root: Path | None = None) -> tuple[int, list[Path]]: + def _promote_sub_skills(sub_skills_dir: Path, target_skills_root: Path, parent_name: str, *, warn: bool = True, owned_by: dict[str, str] | None = None, diagnostics=None, managed_files=None, force: bool = False, project_root: Path | None = None, logger=None) -> tuple[int, list[Path]]: """Promote sub-skills from .apm/skills/ to top-level skill entries. Args: @@ -515,24 +515,32 @@ def _promote_sub_skills(sub_skills_dir: Path, target_skills_root: Path, parent_n diagnostics.skip( rel_path, package=parent_name ) + elif logger: + logger.warning( + f"Skipping skill '{sub_name}' -- local skill exists (not managed by APM). " + f"Use 'apm install --force' to overwrite." + ) else: try: from apm_cli.utils.console import _rich_warning _rich_warning( - f"Skipping skill '{sub_name}' — local skill exists (not managed by APM). " + f"Skipping skill '{sub_name}' -- local skill exists (not managed by APM). " f"Use 'apm install --force' to overwrite."
) except ImportError: pass continue # SKIP — protect user content - # Cross-package overwrite with different content if warn and not is_self_overwrite: if diagnostics is not None: diagnostics.overwrite( path=rel_path, package=parent_name, - detail=f"Skill '{sub_name}' replaced — previously from another package", + detail=f"Skill '{sub_name}' replaced -- previously from another package", + ) + elif logger: + logger.warning( + f"Sub-skill '{sub_name}' from '{parent_name}' overwrites existing skill at {rel_path}" ) else: try: @@ -572,7 +580,7 @@ def _build_skill_ownership_map(project_root: Path) -> dict[str, str]: def _promote_sub_skills_standalone( self, package_info, project_root: Path, diagnostics=None, - managed_files=None, force: bool = False, + managed_files=None, force: bool = False, logger=None, ) -> tuple[int, list[Path]]: """Promote sub-skills from a package that is NOT itself a skill. @@ -634,6 +642,7 @@ def _promote_sub_skills_standalone( def _integrate_native_skill( self, package_info, project_root: Path, source_skill_md: Path, diagnostics=None, managed_files=None, force: bool = False, + logger=None, ) -> SkillIntegrationResult: """Copy a native Skill (with existing SKILL.md) to .github/skills/ and optionally .claude/skills/ and .cursor/skills/. 
@@ -685,6 +694,10 @@ def _integrate_native_skill( f"Skill name '{raw_skill_name}' normalized to '{skill_name}' ({error_msg})", package=raw_skill_name, ) + elif logger: + logger.warning( + f"Skill name '{raw_skill_name}' normalized to '{skill_name}' ({error_msg})" + ) else: try: from apm_cli.utils.console import _rich_warning @@ -725,7 +738,7 @@ def _integrate_native_skill( sub_skills_dir = package_path / ".apm" / "skills" github_skills_root = project_root / ".github" / "skills" owned_by = self._build_skill_ownership_map(project_root) - sub_skills_count, sub_deployed = self._promote_sub_skills(sub_skills_dir, github_skills_root, skill_name, warn=True, owned_by=owned_by, diagnostics=diagnostics, managed_files=managed_files, force=force, project_root=project_root) + sub_skills_count, sub_deployed = self._promote_sub_skills(sub_skills_dir, github_skills_root, skill_name, warn=True, owned_by=owned_by, diagnostics=diagnostics, managed_files=managed_files, force=force, project_root=project_root, logger=logger) all_target_paths.extend(sub_deployed) # === T7: Copy to .claude/skills/ (secondary - compatibility) === @@ -775,7 +788,7 @@ def _integrate_native_skill( target_paths=all_target_paths ) - def integrate_package_skill(self, package_info, project_root: Path, diagnostics=None, managed_files=None, force: bool = False) -> SkillIntegrationResult: + def integrate_package_skill(self, package_info, project_root: Path, diagnostics=None, managed_files=None, force: bool = False, logger=None) -> SkillIntegrationResult: """Integrate a package's skill into .github/skills/ directory. Copies native skills (packages with SKILL.md at root) to .github/skills/ @@ -798,7 +811,7 @@ def integrate_package_skill(self, package_info, project_root: Path, diagnostics= # Even non-skill packages may ship sub-skills under .apm/skills/. # Promote them so Copilot can discover them independently. 
sub_skills_count, sub_deployed = self._promote_sub_skills_standalone( - package_info, project_root, diagnostics=diagnostics, managed_files=managed_files, force=force + package_info, project_root, diagnostics=diagnostics, managed_files=managed_files, force=force, logger=logger ) return SkillIntegrationResult( skill_created=False, @@ -831,12 +844,12 @@ def integrate_package_skill(self, package_info, project_root: Path, diagnostics= # Check if this is a native Skill (already has SKILL.md at root) source_skill_md = package_path / "SKILL.md" if source_skill_md.exists(): - return self._integrate_native_skill(package_info, project_root, source_skill_md, diagnostics=diagnostics, managed_files=managed_files, force=force) + return self._integrate_native_skill(package_info, project_root, source_skill_md, diagnostics=diagnostics, managed_files=managed_files, force=force, logger=logger) # No SKILL.md at root -- not a skill package. # Still promote any sub-skills shipped under .apm/skills/. sub_skills_count, sub_deployed = self._promote_sub_skills_standalone( - package_info, project_root, diagnostics=diagnostics, managed_files=managed_files, force=force + package_info, project_root, diagnostics=diagnostics, managed_files=managed_files, force=force, logger=logger ) return SkillIntegrationResult( skill_created=False, diff --git a/tests/integration/test_auth_resolver.py b/tests/integration/test_auth_resolver.py new file mode 100644 index 00000000..4e795a87 --- /dev/null +++ b/tests/integration/test_auth_resolver.py @@ -0,0 +1,302 @@ +""" +Integration tests for AuthResolver. + +These tests exercise the resolver end-to-end — classify_host, token resolution, +caching, try_with_fallback, and build_error_context — using real env-var +manipulation rather than deep mocking. + +No network access is required; all tests control the environment via +``unittest.mock.patch.dict(os.environ, ...)``.
+""" + +import os +from unittest.mock import patch + +import pytest + +from apm_cli.core.auth import AuthResolver, HostInfo +from apm_cli.core.token_manager import GitHubTokenManager + +# --------------------------------------------------------------------------- +# Gate: only run when APM_E2E_TESTS=1 +# --------------------------------------------------------------------------- + +E2E_MODE = os.environ.get("APM_E2E_TESTS", "").lower() in ("1", "true", "yes") + +pytestmark = [ + pytest.mark.integration, + pytest.mark.skipif(not E2E_MODE, reason="Integration tests require APM_E2E_TESTS=1"), +] + +# Shared helper: suppress git-credential-fill so env vars are the only source. +_NO_GIT_CRED = patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None +) + + +# --------------------------------------------------------------------------- +# 1. Clean env → no token +# --------------------------------------------------------------------------- + +class TestAuthResolverNoEnv: + def test_auth_resolver_no_env_resolves_none(self): + """With a completely clean environment the resolver returns no token.""" + with patch.dict(os.environ, {}, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("github.com") + + assert ctx.token is None + assert ctx.source == "none" + assert ctx.token_type == "unknown" + assert ctx.host_info.kind == "github" + + +# --------------------------------------------------------------------------- +# 2. 
GITHUB_APM_PAT is picked up +# --------------------------------------------------------------------------- + +class TestGlobalPat: + def test_auth_resolver_respects_github_apm_pat(self): + """GITHUB_APM_PAT is the primary env var for module access.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_global123"}, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("github.com") + + assert ctx.token == "ghp_global123" + assert ctx.source == "GITHUB_APM_PAT" + assert ctx.token_type == "classic" + + +# --------------------------------------------------------------------------- +# 3. Per-org override takes precedence +# --------------------------------------------------------------------------- + +class TestPerOrgOverride: + def test_auth_resolver_per_org_override(self): + """GITHUB_APM_PAT_{ORG} beats GITHUB_APM_PAT.""" + env = { + "GITHUB_APM_PAT": "ghp_global", + "GITHUB_APM_PAT_CONTOSO": "github_pat_contoso_specific", + } + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("github.com", org="contoso") + + assert ctx.token == "github_pat_contoso_specific" + assert ctx.source == "GITHUB_APM_PAT_CONTOSO" + assert ctx.token_type == "fine-grained" + + def test_per_org_hyphen_normalisation(self): + """Org names with hyphens are converted to underscores in the env var.""" + env = {"GITHUB_APM_PAT_MY_ORG": "ghp_hyphens"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("github.com", org="my-org") + + assert ctx.token == "ghp_hyphens" + assert ctx.source == "GITHUB_APM_PAT_MY_ORG" + + +# --------------------------------------------------------------------------- +# 4. 
GHE Cloud skips global env vars +# --------------------------------------------------------------------------- + +class TestGheCloudSkipsGlobal: + def test_auth_resolver_ghe_cloud_skips_global(self): + """*.ghe.com hosts must NOT pick up GITHUB_APM_PAT (security boundary).""" + env = {"GITHUB_APM_PAT": "ghp_should_not_leak"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("contoso.ghe.com") + + assert ctx.token is None, ( + "Global GITHUB_APM_PAT must not leak to GHE Cloud hosts" + ) + assert ctx.source == "none" + assert ctx.host_info.kind == "ghe_cloud" + assert ctx.host_info.has_public_repos is False + + def test_ghe_cloud_per_org_still_works(self): + """Per-org tokens work even on GHE Cloud hosts.""" + env = { + "GITHUB_APM_PAT": "ghp_should_not_leak", + "GITHUB_APM_PAT_ENTERPRISE_TEAM": "ghp_enterprise", + } + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx = resolver.resolve("contoso.ghe.com", org="enterprise-team") + + assert ctx.token == "ghp_enterprise" + assert ctx.source == "GITHUB_APM_PAT_ENTERPRISE_TEAM" + + +# --------------------------------------------------------------------------- +# 5. 
Cache consistency +# --------------------------------------------------------------------------- + +class TestCacheConsistency: + def test_auth_resolver_cache_consistency(self): + """Same (host, org) always returns the same object (identity check).""" + env = {"GITHUB_APM_PAT": "ghp_cached"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx1 = resolver.resolve("github.com", org="microsoft") + ctx2 = resolver.resolve("github.com", org="microsoft") + + assert ctx1 is ctx2, "Cached result must be the same object" + + def test_different_keys_are_independent(self): + """Different (host, org) pairs produce independent cache entries.""" + env = { + "GITHUB_APM_PAT_ALPHA": "ghp_alpha", + "GITHUB_APM_PAT_BETA": "ghp_beta", + } + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + ctx_a = resolver.resolve("github.com", org="alpha") + ctx_b = resolver.resolve("github.com", org="beta") + + assert ctx_a is not ctx_b + assert ctx_a.token == "ghp_alpha" + assert ctx_b.token == "ghp_beta" + + +# --------------------------------------------------------------------------- +# 6. 
try_with_fallback: unauth-first for public repos +# --------------------------------------------------------------------------- + +class TestTryWithFallbackUnauthFirst: + def test_try_with_fallback_unauth_first_public(self): + """unauth_first=True on github.com: unauthenticated call succeeds, + token is never used.""" + env = {"GITHUB_APM_PAT": "ghp_not_needed"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + calls: list = [] + + def op(token, git_env): + calls.append(token) + return "ok" + + result = resolver.try_with_fallback( + "github.com", op, org="microsoft", unauth_first=True + ) + + assert result == "ok" + assert calls == [None], "unauth_first should try None first" + + def test_unauth_first_falls_back_on_failure(self): + """If unauth fails and a token exists, retry with token.""" + env = {"GITHUB_APM_PAT": "ghp_fallback"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + calls: list = [] + + def op(token, git_env): + calls.append(token) + if token is None: + raise RuntimeError("rate-limited") + return "ok" + + result = resolver.try_with_fallback( + "github.com", op, org="microsoft", unauth_first=True + ) + + assert result == "ok" + assert calls == [None, "ghp_fallback"] + + def test_ghe_cloud_never_tries_unauth(self): + """GHE Cloud hosts skip the unauth attempt entirely.""" + env = {"GITHUB_APM_PAT_CORP": "ghp_corp"} + with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + calls: list = [] + + def op(token, git_env): + calls.append(token) + return "ok" + + result = resolver.try_with_fallback( + "corp.ghe.com", op, org="corp", unauth_first=True + ) + + assert result == "ok" + assert calls == ["ghp_corp"], ( + "GHE Cloud must use auth-only path" + ) + + +# --------------------------------------------------------------------------- +# 7. 
classify_host variants +# --------------------------------------------------------------------------- + +class TestClassifyHostVariants: + """End-to-end classification of various host strings.""" + + @pytest.mark.parametrize( + "host, expected_kind, expected_public", + [ + ("github.com", "github", True), + ("GitHub.COM", "github", True), + ("GITHUB.com", "github", True), + ("contoso.ghe.com", "ghe_cloud", False), + ("ACME.GHE.COM", "ghe_cloud", False), + ("dev.azure.com", "ado", True), + ("myorg.visualstudio.com", "ado", True), + ("gitlab.com", "generic", True), + ("bitbucket.org", "generic", True), + ("git.internal.corp", "generic", True), + ], + ) + def test_classify_host_variants(self, host, expected_kind, expected_public): + # Clear GITHUB_HOST so GHES detection doesn't interfere + with patch.dict(os.environ, {}, clear=True): + hi = AuthResolver.classify_host(host) + assert hi.kind == expected_kind, f"{host} → expected {expected_kind}, got {hi.kind}" + assert hi.has_public_repos is expected_public + + def test_ghes_via_github_host_env(self): + """GITHUB_HOST pointing at a custom FQDN triggers GHES classification.""" + with patch.dict(os.environ, {"GITHUB_HOST": "github.mycompany.com"}, clear=True): + hi = AuthResolver.classify_host("github.mycompany.com") + assert hi.kind == "ghes" + assert hi.has_public_repos is True + assert "api/v3" in hi.api_base + + def test_api_base_values(self): + """Verify API base URLs for each host kind.""" + with patch.dict(os.environ, {}, clear=True): + assert AuthResolver.classify_host("github.com").api_base == "https://api.github.com" + assert AuthResolver.classify_host("acme.ghe.com").api_base == "https://acme.ghe.com/api/v3" + assert AuthResolver.classify_host("dev.azure.com").api_base == "https://dev.azure.com" + + +# --------------------------------------------------------------------------- +# 8. 
build_error_context integration +# --------------------------------------------------------------------------- + +class TestBuildErrorContextIntegration: + """Verify error messages are actionable under realistic conditions.""" + + def test_no_token_suggests_env_vars(self): + with patch.dict(os.environ, {}, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + msg = resolver.build_error_context("github.com", "install") + + assert "GITHUB_APM_PAT" in msg + assert "--verbose" in msg + + def test_emu_token_warns(self): + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghu_emu_abc"}, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + msg = resolver.build_error_context("github.com", "clone") + + assert "EMU" in msg + assert "enterprise" in msg.lower() + + def test_org_hint_included(self): + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_tok"}, clear=True), _NO_GIT_CRED: + resolver = AuthResolver() + msg = resolver.build_error_context("github.com", "clone", org="contoso") + + assert "GITHUB_APM_PAT_CONTOSO" in msg diff --git a/tests/unit/test_audit_command.py b/tests/unit/test_audit_command.py index d3d4ce9c..810a8aa7 100644 --- a/tests/unit/test_audit_command.py +++ b/tests/unit/test_audit_command.py @@ -7,11 +7,14 @@ from click.testing import CliRunner from apm_cli.commands.audit import audit, _scan_single_file, _apply_strip, _preview_strip +from apm_cli.core.command_logger import CommandLogger from apm_cli.security.content_scanner import ContentScanner # ── Fixtures ──────────────────────────────────────────────────────── +_logger = CommandLogger("audit", verbose=False) + @pytest.fixture def runner(): @@ -457,12 +460,12 @@ class TestScanSingleFile: """Direct tests for the _scan_single_file helper.""" def test_returns_findings_and_count(self, clean_file): - findings, count = _scan_single_file(clean_file) + findings, count = _scan_single_file(clean_file, _logger) assert findings == {} assert count == 1 def test_findings_keyed_by_path(self, warning_file): -
findings, count = _scan_single_file(warning_file) + findings, count = _scan_single_file(warning_file, _logger) assert count == 1 assert len(findings) == 1 key = list(findings.keys())[0] @@ -476,13 +479,13 @@ class TestApplyStrip: """Direct tests for the _apply_strip helper.""" def test_returns_count_of_modified(self, warning_file): - findings, _ = _scan_single_file(warning_file) - modified = _apply_strip(findings, warning_file.parent) + findings, _ = _scan_single_file(warning_file, _logger) + modified = _apply_strip(findings, warning_file.parent, _logger) assert modified == 1 def test_modifies_critical_only_files(self, critical_file): - findings, _ = _scan_single_file(critical_file) - modified = _apply_strip(findings, critical_file.parent) + findings, _ = _scan_single_file(critical_file, _logger) + modified = _apply_strip(findings, critical_file.parent, _logger) # File has only critical findings → should be modified (dangerous chars stripped) assert modified == 1 content = critical_file.read_text(encoding="utf-8") @@ -500,5 +503,5 @@ def test_rejects_path_outside_root(self, tmp_path): project = tmp_path / "project" project.mkdir() - modified = _apply_strip(findings_by_file, project) + modified = _apply_strip(findings_by_file, project, _logger) assert modified == 0 diff --git a/tests/unit/test_unpacker.py b/tests/unit/test_unpacker.py index 6bb83d56..5a465887 100644 --- a/tests/unit/test_unpacker.py +++ b/tests/unit/test_unpacker.py @@ -417,7 +417,7 @@ def test_unpack_cmd_dry_run_logs_files(self, tmp_path): os.chdir(original_dir) assert result.exit_code == 0 - assert "Dry run" in result.output + assert "dry-run" in result.output assert "Would unpack 1 file(s)" in result.output assert ".github/agents/a.md" in result.output From c355d02177ac852476b457999752b5deca6d3013 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 01:05:51 +0100 Subject: [PATCH 04/40] fix: EMU docs accuracy, credential timeout (subsumes #389) MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Fix docs conflating EMU with *.ghe.com — EMU orgs can exist on github.com too; *.ghe.com is specifically GHE Cloud Data Residency - Increase git credential fill timeout: 5s → 60s default (configurable via APM_GIT_CREDENTIAL_TIMEOUT, max 180s) — fixes silent auth failures on Windows when credential helper shows interactive account picker - Add 7 timeout tests - CHANGELOG entry for credential timeout fix Credit: credential timeout fix based on investigation by @frblondin in #389. Co-authored-by: Thomas Caudal Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- CHANGELOG.md | 4 +++ .../docs/getting-started/authentication.md | 28 +++++++++++------ src/apm_cli/core/token_manager.py | 23 +++++++++++++- tests/test_token_manager.py | 31 +++++++++++++++++++ 4 files changed, 75 insertions(+), 11 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 2e8f7954..bdf9641b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,6 +25,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Global env vars (`GITHUB_APM_PAT`) no longer leak to non-default hosts; enterprise hosts resolve via per-org env vars or git credentials only (#393) +### Fixed + +- `git credential fill` timeout increased from 5s to 60s (configurable via `APM_GIT_CREDENTIAL_TIMEOUT`, max 180s) — fixes silent auth failures on Windows when credential helper shows interactive dialogs (#393) + ## [0.8.3] - 2026-03-20 ### Added diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index b7d553c3..01dea3ef 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -55,22 +55,24 @@ The org name comes from the dependency reference — `contoso/my-package` checks Per-org tokens take priority over global tokens.
Use this when different orgs require different PATs (e.g., separate SSO authorizations). -## Enterprise (EMU / GHE Cloud) +## Enterprise Managed Users (EMU) -GHE Cloud hosts (`*.ghe.com`) are always auth-required — APM never attempts unauthenticated access. Set a per-org token: +EMU orgs can live on **github.com** (e.g., `contoso-microsoft`) or on **GHE Cloud Data Residency** (`*.ghe.com`). EMU tokens (`ghu_` prefix) are enterprise-scoped and cannot access public repos on github.com. + +If your manifest mixes enterprise and public packages, use separate tokens: ```bash -export GITHUB_APM_PAT_MYENTERPRISE=ghp_enterprise_token -apm install myenterprise.ghe.com/platform/standards +export GITHUB_APM_PAT_CONTOSO_MICROSOFT=ghu_emu_token # EMU org (any host) +export GITHUB_APM_PAT=ghp_public_token # public github.com repos ``` -### EMU tokens +### GHE Cloud Data Residency (`*.ghe.com`) -Enterprise Managed User tokens (`ghu_` prefix) are scoped to the enterprise. They cannot access public repos on github.com. If your manifest mixes enterprise and public packages, use separate tokens: +`*.ghe.com` hosts are always auth-required — there are no public repos. APM skips the unauthenticated attempt entirely for these hosts: ```bash -export GITHUB_APM_PAT_MYENTERPRISE=ghu_emu_token # *.ghe.com only -export GITHUB_APM_PAT=ghp_public_token # github.com +export GITHUB_APM_PAT_MYENTERPRISE=ghp_enterprise_token +apm install myenterprise.ghe.com/platform/standards ``` ## GitHub Enterprise Server (GHES) @@ -126,7 +128,7 @@ Authorize your PAT for SSO at [github.com/settings/tokens](https://github.com/se ### EMU token can't access public repos -EMU tokens (`ghu_` prefix) are enterprise-scoped. Use a standard PAT for public github.com repos alongside the EMU token for `*.ghe.com` — see [Enterprise (EMU / GHE Cloud)](#enterprise-emu--ghe-cloud) above. +EMU tokens (`ghu_` prefix) are enterprise-scoped and cannot access public github.com repos. 
Use a standard PAT for public repos alongside your EMU token — see [Enterprise Managed Users (EMU)](#enterprise-managed-users-emu) above. ### Diagnosing auth failures @@ -140,7 +142,13 @@ The output shows which env var matched (or `none`), the detected token type (`fi ### Git credential helper not found -APM calls `git credential fill` as a fallback. Ensure a credential helper is configured: +APM calls `git credential fill` as a fallback (60s timeout). If your credential helper needs more time (e.g., Windows account picker), set `APM_GIT_CREDENTIAL_TIMEOUT` (seconds, max 180): + +```bash +export APM_GIT_CREDENTIAL_TIMEOUT=120 +``` + +Ensure a credential helper is configured: ```bash git config credential.helper # check current helper diff --git a/src/apm_cli/core/token_manager.py b/src/apm_cli/core/token_manager.py index 4da53b79..d70235fa 100644 --- a/src/apm_cli/core/token_manager.py +++ b/src/apm_cli/core/token_manager.py @@ -69,6 +69,27 @@ def _is_valid_credential_token(token: str) -> bool: return False return True + # `git credential fill` may invoke OS credential helpers that show + # interactive dialogs (e.g. Windows Credential Manager account picker). + # The 60s default prevents false negatives on slow helpers. + DEFAULT_CREDENTIAL_TIMEOUT = 60 + MAX_CREDENTIAL_TIMEOUT = 180 + + @classmethod + def _get_credential_timeout(cls) -> int: + """Return timeout (seconds) for ``git credential fill``. + + Configurable via ``APM_GIT_CREDENTIAL_TIMEOUT`` (1–180). + """ + raw = os.environ.get("APM_GIT_CREDENTIAL_TIMEOUT", "").strip() + if not raw: + return cls.DEFAULT_CREDENTIAL_TIMEOUT + try: + val = int(raw) + except ValueError: + return cls.DEFAULT_CREDENTIAL_TIMEOUT + return max(1, min(val, cls.MAX_CREDENTIAL_TIMEOUT)) + @staticmethod def resolve_credential_from_git(host: str) -> Optional[str]: """Resolve a credential from the git credential store.
@@ -89,7 +110,7 @@ def resolve_credential_from_git(host: str) -> Optional[str]: input=f"protocol=https\nhost={host}\n\n", capture_output=True, text=True, - timeout=5, + timeout=GitHubTokenManager._get_credential_timeout(), env={**os.environ, 'GIT_TERMINAL_PROMPT': '0', 'GIT_ASKPASS': ''}, ) if result.returncode != 0: diff --git a/tests/test_token_manager.py b/tests/test_token_manager.py index 3b21a2e2..1ddb02cb 100644 --- a/tests/test_token_manager.py +++ b/tests/test_token_manager.py @@ -205,6 +205,37 @@ def test_accepts_valid_gho_token(self): assert token == 'gho_abc123def456' +class TestCredentialTimeout: + """Tests for configurable git credential fill timeout.""" + + def test_default_timeout_is_60(self): + with patch.dict(os.environ, {}, clear=True): + assert GitHubTokenManager._get_credential_timeout() == 60 + + def test_env_override(self): + with patch.dict(os.environ, {'APM_GIT_CREDENTIAL_TIMEOUT': '42'}): + assert GitHubTokenManager._get_credential_timeout() == 42 + + def test_clamps_to_max(self): + with patch.dict(os.environ, {'APM_GIT_CREDENTIAL_TIMEOUT': '999'}): + assert GitHubTokenManager._get_credential_timeout() == 180 + + def test_clamps_to_min(self): + with patch.dict(os.environ, {'APM_GIT_CREDENTIAL_TIMEOUT': '0'}): + assert GitHubTokenManager._get_credential_timeout() == 1 + + def test_invalid_value_falls_back(self): + with patch.dict(os.environ, {'APM_GIT_CREDENTIAL_TIMEOUT': 'abc'}): + assert GitHubTokenManager._get_credential_timeout() == 60 + + def test_timeout_used_in_subprocess(self): + mock_result = MagicMock(returncode=0, stdout="password=tok\n") + with patch.dict(os.environ, {'APM_GIT_CREDENTIAL_TIMEOUT': '90'}, clear=True), \ + patch('subprocess.run', return_value=mock_result) as mock_run: + GitHubTokenManager.resolve_credential_from_git('github.com') + assert mock_run.call_args.kwargs['timeout'] == 90 + + class TestIsValidCredentialToken: """Test _is_valid_credential_token validation.""" From 6ddaea4b0f8205715622d7544a644319a3bc0b48
Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 01:14:43 +0100 Subject: [PATCH 05/40] =?UTF-8?q?fix:=20correct=20token=20prefix=20mapping?= =?UTF-8?q?=20=E2=80=94=20EMU=20uses=20standard=20PAT=20prefixes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ghu_ is OAuth user-to-server (e.g. gh auth login), NOT EMU. EMU users get regular ghp_ (classic) or github_pat_ (fine-grained) tokens — there is no prefix that identifies EMU. EMU is a property of the account, not the token format. Changes: - detect_token_type: ghu_ → 'oauth', gho_ → 'oauth', ghs_/ghr_ → 'github-app' (was all 'classic') - build_error_context: replace token_type=='emu' check with host-based heuristics (*.ghe.com → enterprise msg, github.com → SAML/EMU mention) - Auth docs: remove ghu_ as EMU prefix, clarify EMU uses standard PATs - Agent persona: correct token prefix reference table - Tests: update detect_token_type + build_error_context assertions Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/agents/auth-expert.agent.md | 6 +-- .../docs/getting-started/authentication.md | 10 ++-- src/apm_cli/core/auth.py | 51 ++++++++++++++----- tests/integration/test_auth_resolver.py | 8 +-- tests/unit/test_auth.py | 35 +++++++++---- 5 files changed, 75 insertions(+), 35 deletions(-) diff --git a/.github/agents/auth-expert.agent.md b/.github/agents/auth-expert.agent.md index e517f99e..0e963346 100644 --- a/.github/agents/auth-expert.agent.md +++ b/.github/agents/auth-expert.agent.md @@ -13,8 +13,8 @@ You are an expert on Git hosting authentication across GitHub.com, GitHub Enterp ## Core Knowledge -- **Token types**: Fine-grained PATs (`github_pat_`), classic PATs (`ghp_`), EMU tokens (`ghu_`), OAuth tokens (`gho_`), server tokens (`ghs_`) -- **GitHub EMU constraints**: Enterprise-scoped, cannot access public github.com, `ghu_` prefix +- **Token prefixes**: Fine-grained PATs (`github_pat_`), classic PATs (`ghp_`), OAuth 
user-to-server (`ghu_` — e.g. `gh auth login`), OAuth app (`gho_`), GitHub App install (`ghs_`), GitHub App refresh (`ghr_`) +- **EMU (Enterprise Managed Users)**: Use standard PAT prefixes (`ghp_`, `github_pat_`). There is NO special prefix for EMU — it's a property of the account, not the token. EMU tokens are enterprise-scoped and cannot access public github.com repos. EMU orgs can exist on github.com or *.ghe.com. - **Host classification**: github.com (public), *.ghe.com (no public repos), GHES (`GITHUB_HOST`), ADO - **Git credential helpers**: macOS Keychain, Windows Credential Manager, `gh auth`, `git credential fill` - **Rate limiting**: 60/hr unauthenticated, 5000/hr authenticated, primary (403) vs secondary (429) @@ -38,7 +38,7 @@ When reviewing or writing auth code: ## Common Pitfalls -- EMU PATs on public github.com repos → will fail silently +- EMU PATs on public github.com repos → will fail silently (you cannot detect EMU from prefix) - `git credential fill` only resolves per-host, not per-org - `_build_repo_url` must accept token param, not use instance var - Windows: `GIT_ASKPASS` must be `'echo'` not empty string diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index 01dea3ef..49768f60 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -57,13 +57,13 @@ Per-org tokens take priority over global tokens. Use this when different orgs re ## Enterprise Managed Users (EMU) -EMU orgs can live on **github.com** (e.g., `contoso-microsoft`) or on **GHE Cloud Data Residency** (`*.ghe.com`). EMU tokens (`ghu_` prefix) are enterprise-scoped and cannot access public repos on github.com. +EMU orgs can live on **github.com** (e.g., `contoso-microsoft`) or on **GHE Cloud Data Residency** (`*.ghe.com`). EMU tokens are standard PATs (`ghp_` classic or `github_pat_` fine-grained) — there is no special prefix. 
They are scoped to the enterprise and cannot access public repos on github.com. If your manifest mixes enterprise and public packages, use separate tokens: ```bash -export GITHUB_APM_PAT_CONTOSO_MICROSOFT=ghu_emu_token # EMU org (any host) -export GITHUB_APM_PAT=ghp_public_token # public github.com repos +export GITHUB_APM_PAT_CONTOSO_MICROSOFT=github_pat_enterprise_token # EMU org (any host) +export GITHUB_APM_PAT=ghp_public_token # public github.com repos ``` ### GHE Cloud Data Residency (`*.ghe.com`) @@ -128,7 +128,7 @@ Authorize your PAT for SSO at [github.com/settings/tokens](https://github.com/se ### EMU token can't access public repos -EMU tokens (`ghu_` prefix) are enterprise-scoped and cannot access public github.com repos. Use a standard PAT for public repos alongside your EMU token — see [Enterprise Managed Users (EMU)](#enterprise-managed-users-emu) above. +EMU PATs use standard prefixes (`ghp_`, `github_pat_`) — there is no EMU-specific prefix. They are enterprise-scoped and cannot access public github.com repos. Use a standard PAT for public repos alongside your EMU PAT — see [Enterprise Managed Users (EMU)](#enterprise-managed-users-emu) above. ### Diagnosing auth failures @@ -138,7 +138,7 @@ Run with `--verbose` to see the full resolution chain: apm install --verbose your-org/package ``` -The output shows which env var matched (or `none`), the detected token type (`fine-grained`, `classic`, `emu`), and the host classification (`github`, `ghe_cloud`, `ghes`, `ado`, `generic`). +The output shows which env var matched (or `none`), the detected token type (`fine-grained`, `classic`, `oauth`, `github-app`), and the host classification (`github`, `ghe_cloud`, `ghes`, `ado`, `generic`). ### Git credential helper not found diff --git a/src/apm_cli/core/auth.py b/src/apm_cli/core/auth.py index cca3a430..6baf02c6 100644 --- a/src/apm_cli/core/auth.py +++ b/src/apm_cli/core/auth.py @@ -67,7 +67,7 @@ class AuthContext: token: Optional[str] source: str # e.g. 
"GITHUB_APM_PAT_ORGNAME", "GITHUB_TOKEN", "none" - token_type: str # "fine-grained", "classic", "emu", "ado", "artifactory", "unknown" + token_type: str # "fine-grained", "classic", "oauth", "github-app", "unknown" host_info: HostInfo git_env: dict = field(compare=False, repr=False) @@ -142,15 +142,33 @@ def classify_host(host: str) -> HostInfo: @staticmethod def detect_token_type(token: str) -> str: - """Classify a token string by its prefix.""" + """Classify a token string by its prefix. + + Note: EMU (Enterprise Managed Users) tokens use standard PAT + prefixes (``ghp_`` or ``github_pat_``). There is no prefix that + identifies a token as EMU-scoped — that's a property of the + account, not the token format. + + Prefix reference (docs.github.com): + - ``github_pat_`` → fine-grained PAT + - ``ghp_`` → classic PAT + - ``ghu_`` → OAuth user-to-server (e.g. ``gh auth login``) + - ``gho_`` → OAuth app token + - ``ghs_`` → GitHub App installation (server-to-server) + - ``ghr_`` → GitHub App refresh token + """ if token.startswith("github_pat_"): return "fine-grained" if token.startswith("ghp_"): return "classic" if token.startswith("ghu_"): - return "emu" - if token.startswith(("gho_", "ghs_", "ghr_")): - return "classic" + return "oauth" + if token.startswith("gho_"): + return "oauth" + if token.startswith("ghs_"): + return "github-app" + if token.startswith("ghr_"): + return "github-app" return "unknown" # -- core resolution ---------------------------------------------------- @@ -264,15 +282,24 @@ def build_error_context( if auth_ctx.token: lines.append(f"Token was provided (source: {auth_ctx.source}, type: {auth_ctx.token_type}).") - if auth_ctx.token_type == "emu": + host_info = self.classify_host(host) + if host_info.kind == "ghe_cloud": lines.append( - "EMU tokens are scoped to your enterprise and cannot " - "access public github.com repos." + "GHE Cloud Data Residency hosts (*.ghe.com) require " + "enterprise-scoped tokens. 
Ensure your PAT is authorized " + "for this enterprise." + ) + elif host.lower() == "github.com": + lines.append( + "If your organization uses SAML SSO or is an EMU org, " + "ensure your PAT is authorized at " + "https://github.com/settings/tokens" + ) + else: + lines.append( + "If your organization uses SAML SSO, you may need to " + "authorize your token at https://github.com/settings/tokens" ) - lines.append( - "If your organization uses SAML SSO, you may need to " - "authorize your token at https://github.com/settings/tokens" - ) else: lines.append("No token available.") lines.append( diff --git a/tests/integration/test_auth_resolver.py b/tests/integration/test_auth_resolver.py index 4e795a87..12777d7f 100644 --- a/tests/integration/test_auth_resolver.py +++ b/tests/integration/test_auth_resolver.py @@ -286,13 +286,13 @@ def test_no_token_suggests_env_vars(self): assert "GITHUB_APM_PAT" in msg assert "--verbose" in msg - def test_emu_token_warns(self): - with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghu_emu_abc"}, clear=True), _NO_GIT_CRED: + def test_github_com_error_mentions_emu_sso(self): + """github.com errors should mention EMU/SSO as possible causes.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_some_token"}, clear=True), _NO_GIT_CRED: resolver = AuthResolver() msg = resolver.build_error_context("github.com", "clone") - assert "EMU" in msg - assert "enterprise" in msg.lower() + assert "EMU" in msg or "SAML" in msg def test_org_hint_included(self): with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_tok"}, clear=True), _NO_GIT_CRED: diff --git a/tests/unit/test_auth.py b/tests/unit/test_auth.py index 5c2dbc98..c76a2479 100644 --- a/tests/unit/test_auth.py +++ b/tests/unit/test_auth.py @@ -60,17 +60,17 @@ def test_fine_grained(self): def test_classic(self): assert AuthResolver.detect_token_type("ghp_abc123") == "classic" - def test_emu(self): - assert AuthResolver.detect_token_type("ghu_abc123") == "emu" + def test_oauth_user(self): + assert 
AuthResolver.detect_token_type("ghu_abc123") == "oauth" - def test_oauth(self): - assert AuthResolver.detect_token_type("gho_abc123") == "classic" + def test_oauth_app(self): + assert AuthResolver.detect_token_type("gho_abc123") == "oauth" - def test_server_to_server(self): - assert AuthResolver.detect_token_type("ghs_abc123") == "classic" + def test_github_app_install(self): + assert AuthResolver.detect_token_type("ghs_abc123") == "github-app" - def test_refresh(self): - assert AuthResolver.detect_token_type("ghr_abc123") == "classic" + def test_github_app_refresh(self): + assert AuthResolver.detect_token_type("ghr_abc123") == "github-app" def test_unknown(self): assert AuthResolver.detect_token_type("some-random-token") == "unknown" @@ -328,14 +328,27 @@ def test_no_token_message(self): assert "GITHUB_APM_PAT" in msg assert "--verbose" in msg - def test_emu_detection(self): - with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghu_emu_token"}, clear=True): + def test_ghe_cloud_error_context(self): + """*.ghe.com errors mention enterprise-scoped tokens.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT_CONTOSO": "token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value=None + ): + resolver = AuthResolver() + msg = resolver.build_error_context( + "contoso.ghe.com", "clone", org="contoso" + ) + assert "enterprise" in msg.lower() + + def test_github_com_error_mentions_emu(self): + """github.com errors mention EMU/SSO possibility.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "ghp_token"}, clear=True): with patch.object( GitHubTokenManager, "resolve_credential_from_git", return_value=None ): resolver = AuthResolver() msg = resolver.build_error_context("github.com", "clone") - assert "EMU" in msg + assert "EMU" in msg or "SAML" in msg def test_multi_org_hint(self): with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): From 37978ac1c4690adcded37d0072e6082d52effbc9 Mon Sep 17 00:00:00 2001 From: 
danielmeppiel Date: Sat, 21 Mar 2026 01:45:09 +0100 Subject: [PATCH 06/40] fix: remove host-gating on global env vars, add credential-fill fallback MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Global env vars (GITHUB_APM_PAT, GITHUB_TOKEN, GH_TOKEN) now apply to all hosts — not just the default host. HTTPS is the real transport security boundary; host-gating was unnecessary and forced users into awkward per-org token setups. When a global token doesn't work for a particular host (e.g. github.com PAT on *.ghe.com), try_with_fallback() retries with git credential fill before giving up. This preserves seamless behavior for gh auth login users with multiple hosts. Changes: - auth.py: remove _is_default gate in _resolve_token(), add _try_credential_fallback() in try_with_fallback() - authentication.md: remove Security constraint section, simplify token table, fix EMU example, add HTTPS note - 5 new tests, 3 updated tests (2851 passing) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/agents/auth-expert.agent.md | 2 +- .../docs/getting-started/authentication.md | 21 ++-- src/apm_cli/core/auth.py | 67 ++++++++---- tests/integration/test_auth_resolver.py | 42 ++++++-- tests/test_github_downloader.py | 12 ++- tests/unit/test_auth.py | 102 +++++++++++++++--- 6 files changed, 188 insertions(+), 58 deletions(-) diff --git a/.github/agents/auth-expert.agent.md b/.github/agents/auth-expert.agent.md index 0e963346..1e074c34 100644 --- a/.github/agents/auth-expert.agent.md +++ b/.github/agents/auth-expert.agent.md @@ -32,7 +32,7 @@ When reviewing or writing auth code: 1. **Every remote operation** must go through AuthResolver — no direct `os.getenv()` for tokens 2. **Per-dep resolution**: Use `resolve_for_dep(dep_ref)`, never `self.github_token` instance vars -3. **Host awareness**: *.ghe.com = auth-only, github.com = fallback chain, ADO = auth-only +3. 
**Host awareness**: Global env vars are checked for all hosts (no host-gating). `try_with_fallback()` retries with `git credential fill` if the token is rejected. HTTPS is the transport security boundary. *.ghe.com and ADO always require auth (no unauthenticated fallback). 4. **Error messages**: Always use `build_error_context()` — never hardcode env var names 5. **Thread safety**: AuthContext is resolved before `executor.submit()`, passed per-worker diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index 49768f60..f9194e0d 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -11,25 +11,23 @@ APM works without tokens for public packages on github.com. Authentication is ne APM resolves tokens per `(host, org)` pair. For each dependency, it walks a resolution chain until it finds a token: 1. **Per-org env var** — `GITHUB_APM_PAT_{ORG}` (checked for any host) -2. **Global env vars** — `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` (default host only) +2. **Global env vars** — `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` (any host) 3. **Git credential helper** — `git credential fill` (any host except ADO) -If nothing matches, APM attempts unauthenticated access (works for public repos on github.com). +If the global token doesn't work for the target host, APM automatically retries with git credential helpers. If nothing matches, APM attempts unauthenticated access (works for public repos on github.com). Results are cached per-process — the same `(host, org)` pair is resolved once. -### Security constraint - -Global env vars (`GITHUB_APM_PAT`, `GITHUB_TOKEN`, `GH_TOKEN`) only apply to the default host (github.com unless `GITHUB_HOST` is set). Non-default hosts resolve via per-org env vars or git credentials. APM never sends a github.com token to an enterprise host. +All token-bearing requests use HTTPS. 
Tokens are never sent over unencrypted connections. ## Token lookup | Priority | Variable | Scope | Notes | |----------|----------|-------|-------| | 1 | `GITHUB_APM_PAT_{ORG}` | Per-org, any host | Org name uppercased, hyphens → underscores | -| 2 | `GITHUB_APM_PAT` | Default host only | github.com unless `GITHUB_HOST` overrides | -| 3 | `GITHUB_TOKEN` | Default host only | Shared with GitHub Actions | -| 4 | `GH_TOKEN` | Default host only | Set by `gh auth login` | +| 2 | `GITHUB_APM_PAT` | Any host | Falls back to git credential helpers if rejected | +| 3 | `GITHUB_TOKEN` | Any host | Shared with GitHub Actions | +| 4 | `GH_TOKEN` | Any host | Set by `gh auth login` | | 5 | `git credential fill` | Per-host | System credential manager, `gh auth`, OS keychain | For Azure DevOps, the only token source is `ADO_APM_PAT`. @@ -62,10 +60,11 @@ EMU orgs can live on **github.com** (e.g., `contoso-microsoft`) or on **GHE Clou If your manifest mixes enterprise and public packages, use separate tokens: ```bash -export GITHUB_APM_PAT_CONTOSO_MICROSOFT=github_pat_enterprise_token # EMU org (any host) -export GITHUB_APM_PAT=ghp_public_token # public github.com repos +export GITHUB_APM_PAT_CONTOSO_MICROSOFT=github_pat_enterprise_token # EMU org ``` +Public repos on github.com work without authentication. Set `GITHUB_APM_PAT` only if you need to access private repos or avoid rate limits. + ### GHE Cloud Data Residency (`*.ghe.com`) `*.ghe.com` hosts are always auth-required — there are no public repos. APM skips the unauthenticated attempt entirely for these hosts: @@ -94,7 +93,7 @@ dependencies: - github.com/public/open-source-package # → github.com ``` -Global env vars apply to whichever host `GITHUB_HOST` points to. Alternatively, skip env vars and configure `git credential fill` for your GHES host. +Setting `GITHUB_HOST` makes bare package names (without explicit host) resolve against your GHES instance. 
Alternatively, skip env vars and configure `git credential fill` for your GHES host. ## Azure DevOps diff --git a/src/apm_cli/core/auth.py b/src/apm_cli/core/auth.py index 6baf02c6..38509692 100644 --- a/src/apm_cli/core/auth.py +++ b/src/apm_cli/core/auth.py @@ -3,6 +3,11 @@ Every APM operation that touches a remote host MUST use AuthResolver. Resolution is per-(host, org) pair, thread-safe, and cached per-process. +All token-bearing requests use HTTPS — that is the transport security +boundary. Global env vars are tried for every host; if the token is +wrong for the target host, ``try_with_fallback`` retries with git +credential helpers automatically. + Usage:: resolver = AuthResolver() @@ -232,6 +237,10 @@ def try_with_fallback( If *True*, try unauthenticated first (saves rate limits, EMU-safe). verbose_callback: Called with a human-readable step description at each attempt. + + When the resolved token comes from a global env var and fails + (e.g. a github.com PAT tried on ``*.ghe.com``), the method + retries with ``git credential fill`` before giving up. 
""" auth_ctx = self.resolve(host, org) host_info = auth_ctx.host_info @@ -241,10 +250,23 @@ def _log(msg: str) -> None: if verbose_callback: verbose_callback(msg) - # Hosts that never have public repos → auth-only, no fallback + def _try_credential_fallback(exc: Exception) -> T: + """Retry with git-credential-fill when an env-var token fails.""" + if auth_ctx.source in ("git-credential-fill", "none"): + raise exc + _log(f"Token from {auth_ctx.source} failed, trying git credential fill for {host}") + cred = self._token_manager.resolve_credential_from_git(host) + if cred: + return operation(cred, self._build_git_env(cred)) + raise exc + + # Hosts that never have public repos → auth-only if host_info.kind in ("ghe_cloud", "ado"): _log(f"Auth-only attempt for {host_info.kind} host {host}") - return operation(auth_ctx.token, git_env) + try: + return operation(auth_ctx.token, git_env) + except Exception as exc: + return _try_credential_fallback(exc) if unauth_first: # Validation path: save rate limits, EMU-safe @@ -254,7 +276,10 @@ def _log(msg: str) -> None: except Exception: if auth_ctx.token: _log(f"Unauthenticated failed, retrying with token (source: {auth_ctx.source})") - return operation(auth_ctx.token, git_env) + try: + return operation(auth_ctx.token, git_env) + except Exception as exc: + return _try_credential_fallback(exc) raise else: # Download path: auth-first for higher rate limits @@ -262,11 +287,14 @@ def _log(msg: str) -> None: try: _log(f"Trying authenticated access to {host} (source: {auth_ctx.source})") return operation(auth_ctx.token, git_env) - except Exception: + except Exception as exc: if host_info.has_public_repos: _log("Authenticated failed, retrying without token") - return operation(None, git_env) - raise + try: + return operation(None, git_env) + except Exception: + return _try_credential_fallback(exc) + return _try_credential_fallback(exc) else: _log(f"No token available, trying unauthenticated access to {host}") return operation(None, 
git_env) @@ -322,11 +350,16 @@ def _resolve_token( ) -> tuple[Optional[str], str]: """Walk the token resolution chain. Returns (token, source). - Global env vars (``GITHUB_APM_PAT``, ``GITHUB_TOKEN``, ``GH_TOKEN``) - are only checked for the default host and ADO. Non-default hosts - (GHES, GHE Cloud, generic) resolve via per-org env vars or git - credential helpers — leaking a github.com PAT to an enterprise - server would be a security risk and would fail auth anyway. + Resolution order: + 1. Per-org env var ``GITHUB_APM_PAT_{ORG}`` (any host) + 2. Global env vars ``GITHUB_APM_PAT`` → ``GITHUB_TOKEN`` → ``GH_TOKEN`` + (any host — if the token is wrong for the target host, + ``try_with_fallback`` retries with git credentials) + 3. Git credential helper (any host except ADO) + + All token-bearing requests use HTTPS, which is the transport + security boundary. Host-gating global env vars is unnecessary + and creates DX friction for multi-host setups. """ # 1. Per-org env var (any host) if org: @@ -335,14 +368,12 @@ def _resolve_token( if token: return token, env_name - # 2. Global env var chain — only for default host or ADO - _is_default = host_info.host.lower() == default_host().lower() + # 2. Global env var chain (any host) purpose = self._purpose_for_host(host_info) - if _is_default or host_info.kind == "ado": - token = self._token_manager.get_token_for_purpose(purpose) - if token: - source = self._identify_env_source(purpose) - return token, source + token = self._token_manager.get_token_for_purpose(purpose) + if token: + source = self._identify_env_source(purpose) + return token, source # 3. 
Git credential helper (not for ADO — uses its own PAT) if host_info.kind not in ("ado",): diff --git a/tests/integration/test_auth_resolver.py b/tests/integration/test_auth_resolver.py index 12777d7f..de63e514 100644 --- a/tests/integration/test_auth_resolver.py +++ b/tests/integration/test_auth_resolver.py @@ -98,21 +98,21 @@ def test_per_org_hyphen_normalisation(self): # --------------------------------------------------------------------------- -# 4. GHE Cloud skips global env vars +# 4. GHE Cloud uses global env vars # --------------------------------------------------------------------------- -class TestGheCloudSkipsGlobal: - def test_auth_resolver_ghe_cloud_skips_global(self): - """*.ghe.com hosts must NOT pick up GITHUB_APM_PAT (security boundary).""" +class TestGheCloudGlobalVars: + def test_auth_resolver_ghe_cloud_uses_global(self): + """*.ghe.com hosts pick up GITHUB_APM_PAT (global vars apply to all hosts).""" env = {"GITHUB_APM_PAT": "ghp_should_not_leak"} with patch.dict(os.environ, env, clear=True), _NO_GIT_CRED: resolver = AuthResolver() ctx = resolver.resolve("contoso.ghe.com") - assert ctx.token is None, ( - "Global GITHUB_APM_PAT must not leak to GHE Cloud hosts" + assert ctx.token == "ghp_should_not_leak", ( + "Global GITHUB_APM_PAT should be returned for GHE Cloud hosts" ) - assert ctx.source == "none" + assert ctx.source == "GITHUB_APM_PAT" assert ctx.host_info.kind == "ghe_cloud" assert ctx.host_info.has_public_repos is False @@ -129,6 +129,34 @@ def test_ghe_cloud_per_org_still_works(self): assert ctx.token == "ghp_enterprise" assert ctx.source == "GITHUB_APM_PAT_ENTERPRISE_TEAM" + def test_ghe_cloud_global_var_with_credential_fallback_in_try_with_fallback(self): + """When a global env-var token fails on GHE Cloud, try_with_fallback + retries via git credential fill before giving up.""" + env = {"GITHUB_APM_PAT": "wrong-global-token"} + with patch.dict(os.environ, env, clear=True), \ + patch.object( + GitHubTokenManager, + 
"resolve_credential_from_git", + return_value="correct-ghe-cred", + ): + resolver = AuthResolver() + calls: list = [] + + def op(token, git_env): + calls.append(token) + if token == "wrong-global-token": + raise RuntimeError("auth failed") + return "ok" + + result = resolver.try_with_fallback( + "contoso.ghe.com", op, org="contoso" + ) + + assert result == "ok" + assert calls == ["wrong-global-token", "correct-ghe-cred"], ( + "Should try global token first, then fall back to git credential fill" + ) + # --------------------------------------------------------------------------- # 5. Cache consistency diff --git a/tests/test_github_downloader.py b/tests/test_github_downloader.py index dd1b9886..ec941ddf 100644 --- a/tests/test_github_downloader.py +++ b/tests/test_github_downloader.py @@ -1334,15 +1334,14 @@ def test_credential_fill_for_non_default_host(self): actual_headers = mock_get.call_args[1].get('headers') or mock_get.call_args[0][1] assert actual_headers.get('Authorization') == 'token enterprise-token' - def test_non_default_host_ignores_default_host_token(self): - """When default host has a token, non-default host should use its own credential, not the default.""" + def test_non_default_host_uses_global_token(self): + """Global env vars (GITHUB_APM_PAT) are now tried for all hosts, not just the default.""" with patch.dict(os.environ, {'GITHUB_APM_PAT': 'default-host-pat'}, clear=True), \ patch( 'apm_cli.core.token_manager.GitHubTokenManager.resolve_credential_from_git', ) as mock_cred: mock_cred.return_value = 'enterprise-cred' downloader = GitHubPackageDownloader() - # Default host token from env assert downloader.github_token == 'default-host-pat' dep_ref = DependencyReference( @@ -1360,8 +1359,11 @@ def test_non_default_host_ignores_default_host_token(self): assert result == b'enterprise content' actual_headers = mock_get.call_args[1].get('headers') or mock_get.call_args[0][1] - # Must use the enterprise credential, NOT the default-host PAT - assert 
actual_headers.get('Authorization') == 'token enterprise-cred' + # Global PAT is now used for non-default hosts too + assert actual_headers.get('Authorization') == 'token default-host-pat' + + # Credential fill is not reached because the global env var is found first + mock_cred.assert_not_called() def test_error_message_mentions_gh_auth_login(self): """Error message should mention 'gh auth login' when no token is available.""" diff --git a/tests/unit/test_auth.py b/tests/unit/test_auth.py index c76a2479..9daa10d0 100644 --- a/tests/unit/test_auth.py +++ b/tests/unit/test_auth.py @@ -170,6 +170,26 @@ def test_credential_fallback(self): assert ctx.token == "cred-token" assert ctx.source == "git-credential-fill" + def test_global_var_resolves_for_non_default_host(self): + """GITHUB_APM_PAT resolves for *.ghe.com (any host, not just default).""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "global-token"}, clear=True): + resolver = AuthResolver() + ctx = resolver.resolve("contoso.ghe.com") + assert ctx.token == "global-token" + assert ctx.source == "GITHUB_APM_PAT" + + def test_global_var_resolves_for_ghes_host(self): + """GITHUB_APM_PAT resolves for a GHES host set via GITHUB_HOST.""" + with patch.dict(os.environ, { + "GITHUB_HOST": "github.mycompany.com", + "GITHUB_APM_PAT": "global-token", + }, clear=True): + resolver = AuthResolver() + ctx = resolver.resolve("github.mycompany.com") + assert ctx.token == "global-token" + assert ctx.source == "GITHUB_APM_PAT" + assert ctx.host_info.kind == "ghes" + def test_git_env_has_lockdown(self): """Resolved context has git security env vars.""" with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): @@ -223,24 +243,21 @@ def op(token, env): assert calls == [None, "token"] def test_ghe_cloud_auth_only(self): - """*.ghe.com: auth-only, no unauth fallback. 
Uses git credential (not global env).""" - with patch.dict(os.environ, {}, clear=True): - with patch.object( - GitHubTokenManager, "resolve_credential_from_git", return_value="ghe-cred" - ): - resolver = AuthResolver() - calls = [] + """*.ghe.com: auth-only, no unauth fallback. Uses global env var.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "global-token"}, clear=True): + resolver = AuthResolver() + calls = [] - def op(token, env): - calls.append(token) - return "success" + def op(token, env): + calls.append(token) + return "success" - result = resolver.try_with_fallback( - "contoso.ghe.com", op, unauth_first=True - ) - assert result == "success" - # GHE Cloud has no public repos → unauth skipped, auth called once - assert calls == ["ghe-cred"] + result = resolver.try_with_fallback( + "contoso.ghe.com", op, unauth_first=True + ) + assert result == "success" + # GHE Cloud has no public repos → unauth skipped, auth called once + assert calls == ["global-token"] def test_auth_first_succeeds(self): """Auth-first (default): auth works, unauth not tried.""" @@ -295,6 +312,59 @@ def op(token, env): assert result == "success" assert calls == [None] + def test_credential_fallback_when_env_token_fails(self): + """Env token fails on auth-only host → retries with git credential fill.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "wrong-token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value="correct-cred" + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + if token == "wrong-token": + raise RuntimeError("Bad credentials") + return "success" + + result = resolver.try_with_fallback("contoso.ghe.com", op) + assert result == "success" + assert calls == ["wrong-token", "correct-cred"] + + def test_no_credential_fallback_when_source_is_credential(self): + """When token already came from git-credential-fill, no retry on failure.""" + with patch.dict(os.environ, {}, 
clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value="cred-token" + ): + resolver = AuthResolver() + + def op(token, env): + raise RuntimeError("Bad credentials") + + with pytest.raises(RuntimeError, match="Bad credentials"): + resolver.try_with_fallback("contoso.ghe.com", op) + + def test_credential_fallback_on_auth_first_path(self): + """Auth-first on public host: auth fails, unauth fails → credential fill kicks in.""" + with patch.dict(os.environ, {"GITHUB_APM_PAT": "wrong-token"}, clear=True): + with patch.object( + GitHubTokenManager, "resolve_credential_from_git", return_value="correct-cred" + ): + resolver = AuthResolver() + calls = [] + + def op(token, env): + calls.append(token) + if token in ("wrong-token", None): + raise RuntimeError("Failed") + return "success" + + result = resolver.try_with_fallback("github.com", op) + assert result == "success" + # auth-first → unauth fallback → credential fill + assert calls == ["wrong-token", None, "correct-cred"] + def test_verbose_callback(self): """verbose_callback is called at each step.""" with patch.dict(os.environ, {"GITHUB_APM_PAT": "token"}, clear=True): From efa13c189a460e4bccf832558c95d542589b813c Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 01:56:52 +0100 Subject: [PATCH 07/40] fix: use validated count in resolution_start, not raw input count resolution_start() was using len(packages) (user-supplied arguments) instead of len(validated_packages) (packages that passed validation). This caused misleading 'Installing N new packages' when some packages failed validation or were already in apm.yml. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index dbfa3c99..ba102d6e 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -468,7 +468,7 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo # We'll proceed with installation from apm.yml to ensure everything is synced logger.resolution_start( - to_install_count=len(packages) if packages else 0, + to_install_count=len(validated_packages) if packages else 0, lockfile_count=0, # Refined later inside _install_apm_dependencies ) From ee57015280e196090d3ec63f7639a9d83fb8ec0a Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:33:00 +0100 Subject: [PATCH 08/40] fix: wire verbose auth logging through validation _validate_package_exists was referencing _rich_echo which was not imported, causing a silent NameError caught by the outer try/except. Added _rich_echo import and wired verbose_callback through both try_with_fallback call sites so --verbose shows the full auth resolution chain during package validation. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- apm.yml | 8 ++++++++ src/apm_cli/commands/install.py | 10 +++++++--- 2 files changed, 15 insertions(+), 3 deletions(-) create mode 100644 apm.yml diff --git a/apm.yml b/apm.yml new file mode 100644 index 00000000..42530e05 --- /dev/null +++ b/apm.yml @@ -0,0 +1,8 @@ +name: awd-cli +version: 1.0.0 +description: APM project for awd-cli +author: danielmeppiel +dependencies: + apm: [] + mcp: [] +scripts: {} diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index ba102d6e..98b8e647 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -19,7 +19,7 @@ from ..drift import build_download_ref, detect_orphans, detect_ref_change from ..models.results import InstallResult from ..core.command_logger import InstallLogger, _ValidationOutcome -from ..utils.console import _rich_error, _rich_info, _rich_success, _rich_warning +from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_success, _rich_warning from ..utils.diagnostics import DiagnosticCollector from ..utils.github_host import default_host, is_valid_fqdn from ..utils.path_security import safe_rmtree @@ -153,7 +153,7 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo already_in_deps = identity in existing_identities # Validate package exists and is accessible - if _validate_package_exists(package): + if _validate_package_exists(package, verbose=bool(logger and logger.verbose)): valid_outcomes.append((canonical, already_in_deps)) if logger: logger.validation_pass(canonical, already_present=already_in_deps) @@ -234,12 +234,14 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo return validated_packages, outcome -def _validate_package_exists(package): +def _validate_package_exists(package, verbose=False): """Validate that a package exists and is accessible on GitHub, Azure DevOps, or locally.""" import os import 
subprocess import tempfile + verbose_log = (lambda msg: _rich_echo(f" {msg}", color="dim")) if verbose else None + try: # Parse the package to check if it's a virtual package or ADO from apm_cli.models.apm_package import DependencyReference @@ -329,6 +331,7 @@ def _ls_remote(token, git_env): host, _ls_remote, org=org, unauth_first=True, + verbose_callback=verbose_log, ) except Exception: return False @@ -365,6 +368,7 @@ def _ls_remote_fallback(token, git_env): host, _ls_remote_fallback, org=org, unauth_first=True, + verbose_callback=verbose_log, ) except Exception: return False From 985f25569f55968c572824831c27cf0b205bd942 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:33:08 +0100 Subject: [PATCH 09/40] chore: remove accidental apm.yml from test run Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- apm.yml | 8 -------- 1 file changed, 8 deletions(-) delete mode 100644 apm.yml diff --git a/apm.yml b/apm.yml deleted file mode 100644 index 42530e05..00000000 --- a/apm.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: awd-cli -version: 1.0.0 -description: APM project for awd-cli -author: danielmeppiel -dependencies: - apm: [] - mcp: [] -scripts: {} From 67fd18dbf9b06df08c389058c35388c3f647df49 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:36:51 +0100 Subject: [PATCH 10/40] fix: show git error details and auth context in verbose validation MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Verbose output now shows: - Auth resolution summary (host, org, token source, token type) - Sanitized git stderr on each failed attempt (no token leaked) - Full fallback chain visibility: unauth → PAT → credential fill This gives users actionable diagnostics when auth fails. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 98b8e647..da4de87f 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -311,6 +311,10 @@ def _validate_package_exists(package, verbose=False): host = dep_ref.host or default_host() org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None + if verbose_log: + ctx = auth_resolver.resolve(host, org=org) + verbose_log(f"Auth resolved: host={host}, org={org}, source={ctx.source}, type={ctx.token_type}") + def _ls_remote(token, git_env): """Try git ls-remote with optional auth.""" if token: @@ -323,6 +327,10 @@ def _ls_remote(token, git_env): env=git_env, ) if result.returncode != 0: + # Log sanitized error (never log token-bearing URLs) + stderr_clean = result.stderr.strip().split('\n')[-1] if result.stderr else "unknown error" + if verbose_log: + verbose_log(f"git ls-remote rc={result.returncode}: {stderr_clean}") raise RuntimeError(f"git ls-remote failed: {result.stderr}") return True @@ -360,6 +368,9 @@ def _ls_remote_fallback(token, git_env): env=git_env, ) if result.returncode != 0: + stderr_clean = result.stderr.strip().split('\n')[-1] if result.stderr else "unknown error" + if verbose_log: + verbose_log(f"git ls-remote rc={result.returncode}: {stderr_clean}") raise RuntimeError(f"git ls-remote failed: {result.stderr}") return True From 7a73a67ca5aa3a81603000e2285e888316d0acea Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:40:51 +0100 Subject: [PATCH 11/40] fix: use Bearer auth for git ls-remote instead of x-access-token URL MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Fine-grained PATs (github_pat_) get 403 when using the x-access-token:{token}@host URL format because GitHub rejects Basic 
auth for these tokens. Switch validation to use: git -c http.extraHeader='Authorization: Bearer {token}' ls-remote This works with ALL token types (fine-grained, classic, OAuth, GitHub App). The x-access-token URL format only works reliably with classic PATs (ghp_) and GitHub App installation tokens (ghs_). Note: the broader clone path (_build_repo_url / build_https_clone_url) still uses x-access-token — tracked for a follow-up fix. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index da4de87f..99f8e335 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -317,12 +317,13 @@ def _validate_package_exists(package, verbose=False): def _ls_remote(token, git_env): """Try git ls-remote with optional auth.""" + url = f"https://{host}/{dep_ref.repo_url}.git" + cmd = ['git'] if token: - url = f"https://x-access-token:{token}@{host}/{dep_ref.repo_url}.git" - else: - url = f"{dep_ref.to_github_url()}.git" + cmd += ['-c', f'http.extraHeader=Authorization: Bearer {token}'] + cmd += ['ls-remote', '--heads', '--exit-code', url] result = subprocess.run( - ['git', 'ls-remote', '--heads', '--exit-code', url], + cmd, capture_output=True, text=True, timeout=30, env=git_env, ) @@ -358,12 +359,13 @@ def _ls_remote(token, git_env): base_url = f"https://{host}/{package}.git" def _ls_remote_fallback(token, git_env): - if token and not is_valid_fqdn(package): - url = f"https://x-access-token:{token}@{host}/{package}.git" - else: - url = base_url + url = base_url + cmd = ['git'] + if token: + cmd += ['-c', f'http.extraHeader=Authorization: Bearer {token}'] + cmd += ['ls-remote', '--heads', '--exit-code', url] result = subprocess.run( - ['git', 'ls-remote', '--heads', '--exit-code', url], + cmd, capture_output=True, text=True, timeout=30, env=git_env, ) 
From e9bc2dacc026d8914e428598b0962ea4fba4cb2b Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:48:25 +0100 Subject: [PATCH 12/40] fix: switch validation from git ls-remote to GitHub API git ls-remote with http.extraHeader has a fatal flaw: the Bearer header persists across credential-helper retries, preventing git from using credentials from 'gh auth' or OS keychain when the env-var token fails. Switch to GitHub REST API (GET /repos/{owner}/{repo}) which: - Works with ALL token types (fine-grained, classic, OAuth, App) - Returns clean HTTP status codes (200=ok, 404=no access, 401=bad token) - No credential-helper interference - Faster than spawning git subprocess The git ls-remote approach remains in the ADO/GHES path where the API endpoint may differ. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 97 +++++++++++++++++++-------------- 1 file changed, 57 insertions(+), 40 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 99f8e335..98910c76 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -310,34 +310,47 @@ def _validate_package_exists(package, verbose=False): auth_resolver = AuthResolver() host = dep_ref.host or default_host() org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None + host_info = auth_resolver.classify_host(host) if verbose_log: ctx = auth_resolver.resolve(host, org=org) verbose_log(f"Auth resolved: host={host}, org={org}, source={ctx.source}, type={ctx.token_type}") - def _ls_remote(token, git_env): - """Try git ls-remote with optional auth.""" - url = f"https://{host}/{dep_ref.repo_url}.git" - cmd = ['git'] + def _check_repo(token, git_env): + """Check repo accessibility via GitHub API (or git ls-remote for non-GitHub).""" + import urllib.request + import urllib.error + + api_base = host_info.api_base + api_url = 
f"{api_base}/repos/{dep_ref.repo_url}" + headers = { + "Accept": "application/vnd.github+json", + "User-Agent": "apm-cli", + } if token: - cmd += ['-c', f'http.extraHeader=Authorization: Bearer {token}'] - cmd += ['ls-remote', '--heads', '--exit-code', url] - result = subprocess.run( - cmd, - capture_output=True, text=True, timeout=30, - env=git_env, - ) - if result.returncode != 0: - # Log sanitized error (never log token-bearing URLs) - stderr_clean = result.stderr.strip().split('\n')[-1] if result.stderr else "unknown error" + headers["Authorization"] = f"Bearer {token}" + + req = urllib.request.Request(api_url, headers=headers) + try: + resp = urllib.request.urlopen(req, timeout=15) if verbose_log: - verbose_log(f"git ls-remote rc={result.returncode}: {stderr_clean}") - raise RuntimeError(f"git ls-remote failed: {result.stderr}") - return True + verbose_log(f"API {api_url} → {resp.status}") + return True + except urllib.error.HTTPError as e: + if verbose_log: + verbose_log(f"API {api_url} → {e.code} {e.reason}") + if e.code == 404 and token: + # 404 with token could mean no access — raise to trigger fallback + raise RuntimeError(f"API returned {e.code}") + raise RuntimeError(f"API returned {e.code}: {e.reason}") + except Exception as e: + if verbose_log: + verbose_log(f"API request failed: {e}") + raise try: return auth_resolver.try_with_fallback( - host, _ls_remote, + host, _check_repo, org=org, unauth_first=True, verbose_callback=verbose_log, @@ -352,33 +365,37 @@ def _ls_remote(token, git_env): auth_resolver = AuthResolver() host = default_host() org = package.split('/')[0] if '/' in package else None - - if is_valid_fqdn(package): - base_url = f"https://{package}.git" - else: - base_url = f"https://{host}/{package}.git" - - def _ls_remote_fallback(token, git_env): - url = base_url - cmd = ['git'] + repo_path = package # owner/repo format + + def _check_repo_fallback(token, git_env): + import urllib.request + import urllib.error + + host_info = 
auth_resolver.classify_host(host) + api_url = f"{host_info.api_base}/repos/{repo_path}" + headers = { + "Accept": "application/vnd.github+json", + "User-Agent": "apm-cli", + } if token: - cmd += ['-c', f'http.extraHeader=Authorization: Bearer {token}'] - cmd += ['ls-remote', '--heads', '--exit-code', url] - result = subprocess.run( - cmd, - capture_output=True, text=True, timeout=30, - env=git_env, - ) - if result.returncode != 0: - stderr_clean = result.stderr.strip().split('\n')[-1] if result.stderr else "unknown error" + headers["Authorization"] = f"Bearer {token}" + + req = urllib.request.Request(api_url, headers=headers) + try: + resp = urllib.request.urlopen(req, timeout=15) + return True + except urllib.error.HTTPError as e: + if verbose_log: + verbose_log(f"API fallback → {e.code} {e.reason}") + raise RuntimeError(f"API returned {e.code}") + except Exception as e: if verbose_log: - verbose_log(f"git ls-remote rc={result.returncode}: {stderr_clean}") - raise RuntimeError(f"git ls-remote failed: {result.stderr}") - return True + verbose_log(f"API fallback failed: {e}") + raise try: return auth_resolver.try_with_fallback( - host, _ls_remote_fallback, + host, _check_repo_fallback, org=org, unauth_first=True, verbose_callback=verbose_log, From f8ec65ce9dda7f2ec06e280fe1f32314a50e4bd2 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 07:58:21 +0100 Subject: [PATCH 13/40] docs: add fine-grained PAT scoping guidance to auth docs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Fine-grained PATs are scoped to a single resource owner (user or org). A user-scoped PAT cannot access org repos — even for internal repos where the user is a member. Document required permissions (Metadata Read + Contents Read), resource owner setup, and alternatives. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../docs/getting-started/authentication.md | 26 +++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index f9194e0d..f54c5236 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -53,10 +53,32 @@ The org name comes from the dependency reference — `contoso/my-package` checks Per-org tokens take priority over global tokens. Use this when different orgs require different PATs (e.g., separate SSO authorizations). +## Fine-grained PAT setup + +Fine-grained PATs (`github_pat_`) are scoped to a **single resource owner** — either a user account or an organization. A user-scoped fine-grained PAT **cannot** access repos owned by an organization, even if you are a member of that org. + +To access org packages, create the PAT with the **org** as the resource owner at [github.com/settings/personal-access-tokens/new](https://github.com/settings/personal-access-tokens/new). + +Required permissions: + +| Permission | Level | Purpose | +|------------|-------|---------| +| **Metadata** | Read | Validation and discovery | +| **Contents** | Read | Downloading package files | + +Set **Repository access** to "All repositories" or select the specific repos your manifest references. + +**Alternatives that skip scoping entirely:** + +- `gh auth login` — produces an OAuth token that inherits your full org membership. Easiest zero-config path. +- Classic PATs (`ghp_`) — inherit the user's membership across all orgs. GitHub is deprecating these in favor of fine-grained PATs. + ## Enterprise Managed Users (EMU) EMU orgs can live on **github.com** (e.g., `contoso-microsoft`) or on **GHE Cloud Data Residency** (`*.ghe.com`). 
EMU tokens are standard PATs (`ghp_` classic or `github_pat_` fine-grained) — there is no special prefix. They are scoped to the enterprise and cannot access public repos on github.com. +Fine-grained PATs for EMU orgs **must** use the EMU org as the resource owner — a user-scoped fine-grained PAT will not work. See [Fine-grained PAT setup](#fine-grained-pat-setup). + If your manifest mixes enterprise and public packages, use separate tokens: ```bash @@ -129,6 +151,10 @@ Authorize your PAT for SSO at [github.com/settings/tokens](https://github.com/se EMU PATs use standard prefixes (`ghp_`, `github_pat_`) — there is no EMU-specific prefix. They are enterprise-scoped and cannot access public github.com repos. Use a standard PAT for public repos alongside your EMU PAT — see [Enterprise Managed Users (EMU)](#enterprise-managed-users-emu) above. +### Fine-grained PAT can't access org repos + +Fine-grained PATs are scoped to one resource owner. If you created the PAT under your **user account**, it cannot access repos owned by an organization — even if you are an org member. Recreate the PAT with the **org** as the resource owner. Classic PATs (`ghp_`) and `gh auth login` OAuth tokens do not have this limitation. See [Fine-grained PAT setup](#fine-grained-pat-setup). 
+ ### Diagnosing auth failures Run with `--verbose` to see the full resolution chain: From 869767ff39e94cdd7fe91b8dd354466c21c1f2f3 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 08:45:16 +0100 Subject: [PATCH 14/40] fix: improve auth error UX and logging compliance MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Wire build_error_context() into validation failure path so verbose mode shows actionable auth guidance (env vars, token setup) - Add '--verbose for auth details' hint when validation fails without verbose mode - Add ADO guard to _try_credential_fallback() preventing credential fill for Azure DevOps hosts (consistent with _resolve_token policy) - Add verbose logging to git ls-remote validation path (ADO/GHE/GHES) matching the GitHub API path's diagnostic output - Add '--verbose for detailed diagnostics' hint to top-level install error handlers - Fix traffic-light violations: orphan removal failure → error (was warning), 'no new packages' → info (was warning), hash mismatch re-download → info/progress (was warning, auto-recovery) - Add 3 new tests for validation failure reason messages Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 51 +++++++++++++++++++--- src/apm_cli/core/auth.py | 2 + tests/unit/test_install_command.py | 70 ++++++++++++++++++++++++++++++ 3 files changed, 116 insertions(+), 7 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 98910c76..9034492c 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -153,7 +153,8 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo already_in_deps = identity in existing_identities # Validate package exists and is accessible - if _validate_package_exists(package, verbose=bool(logger and logger.verbose)): + verbose = bool(logger and logger.verbose) + if 
_validate_package_exists(package, verbose=verbose): valid_outcomes.append((canonical, already_in_deps)) if logger: logger.validation_pass(canonical, already_present=already_in_deps) @@ -168,12 +169,14 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo validated_packages.append(canonical) existing_identities.add(identity) # prevent duplicates within batch else: - reason = "not accessible or doesn't exist (check auth or repo name)" + reason = "not accessible or doesn't exist" + if not verbose: + reason += " — run with --verbose for auth details" invalid_outcomes.append((package, reason)) if logger: logger.validation_fail(package, reason) else: - _rich_error(f"✗ {package} - not accessible or doesn't exist") + _rich_error(f"✗ {package} — {reason}") outcome = _ValidationOutcome(valid=valid_outcomes, invalid=invalid_outcomes) @@ -185,7 +188,7 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo if not validated_packages: if dry_run: - _rich_warning("No new packages to add") if not logger else None + _rich_info("No new packages to add") if not logger else None # If all packages already exist in apm.yml, that's OK - we'll reinstall them return [], outcome @@ -294,6 +297,9 @@ def _validate_package_exists(package, verbose=False): else: validate_env = {**os.environ, **downloader.git_env} + if verbose_log: + verbose_log(f"Trying git ls-remote for {dep_ref.host}") + cmd = ["git", "ls-remote", "--heads", "--exit-code", package_url] result = subprocess.run( cmd, @@ -302,6 +308,19 @@ def _validate_package_exists(package, verbose=False): timeout=30, env=validate_env, ) + + if verbose_log: + if result.returncode == 0: + verbose_log(f"git ls-remote rc=0 for {package}") + else: + # Sanitize stderr to avoid leaking tokens + stderr_snippet = (result.stderr or "").strip()[:200] + for env_var in ("GIT_ASKPASS", "GIT_CONFIG_GLOBAL"): + stderr_snippet = stderr_snippet.replace( + validate_env.get(env_var, ""), "***" + ) + 
verbose_log(f"git ls-remote rc={result.returncode}: {stderr_snippet}") + return result.returncode == 0 # For GitHub.com, use AuthResolver with unauth-first fallback @@ -356,6 +375,13 @@ def _check_repo(token, git_env): verbose_callback=verbose_log, ) except Exception: + if verbose_log: + try: + ctx = auth_resolver.build_error_context(host, f"accessing {package}", org=org) + for line in ctx.splitlines(): + verbose_log(line) + except Exception: + pass return False except Exception: @@ -401,6 +427,13 @@ def _check_repo_fallback(token, git_env): verbose_callback=verbose_log, ) except Exception: + if verbose_log: + try: + ctx = auth_resolver.build_error_context(host, f"accessing {package}", org=org) + for line in ctx.splitlines(): + verbose_log(line) + except Exception: + pass return False @@ -594,6 +627,8 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo apm_diagnostics = install_result.diagnostics except Exception as e: logger.error(f"Failed to install APM dependencies: {e}") + if not verbose: + logger.progress("Run with --verbose for detailed diagnostics") sys.exit(1) elif should_install_apm and not has_any_apm_deps: logger.verbose_detail("No APM dependencies found in apm.yml") @@ -664,6 +699,8 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo except Exception as e: _rich_error(f"Error installing dependencies: {e}") + if not verbose: + _rich_info("Run with --verbose for detailed diagnostics") sys.exit(1) @@ -1541,12 +1578,12 @@ def _collect_descendants(node, visited=None): from ..utils.content_hash import verify_package_hash if not verify_package_hash(install_path, _dep_locked_chk.content_hash): if logger: - logger.warning( + logger.progress( f"Content hash mismatch for " f"{dep_ref.get_unique_key()} -- re-downloading" ) else: - _rich_warning( + _rich_info( f"Content hash mismatch for " f"{dep_ref.get_unique_key()} — re-downloading" ) @@ -1872,7 +1909,7 @@ def _collect_descendants(node, 
visited=None): f" Could not remove orphaned path {_orphan_path}: {_orphan_err}" ) else: - _rich_warning( + _rich_error( f" └─ Could not remove orphaned path {_orphan_path}: {_orphan_err}" ) _failed_orphan_count += 1 diff --git a/src/apm_cli/core/auth.py b/src/apm_cli/core/auth.py index 38509692..cd1797c9 100644 --- a/src/apm_cli/core/auth.py +++ b/src/apm_cli/core/auth.py @@ -254,6 +254,8 @@ def _try_credential_fallback(exc: Exception) -> T: """Retry with git-credential-fill when an env-var token fails.""" if auth_ctx.source in ("git-credential-fill", "none"): raise exc + if host_info.kind == "ado": + raise exc _log(f"Token from {auth_ctx.source} failed, trying git credential fill for {host}") cred = self._token_manager.resolve_credential_from_git(host) if cred: diff --git a/tests/unit/test_install_command.py b/tests/unit/test_install_command.py index 6500aa71..4f517e24 100644 --- a/tests/unit/test_install_command.py +++ b/tests/unit/test_install_command.py @@ -243,3 +243,73 @@ def test_install_dry_run_with_no_apm_yml_shows_what_would_be_created( assert "Would add" in result.output or "Dry run" in result.output # apm.yml should still be created (for dry-run to work) assert Path("apm.yml").exists() + + +class TestValidationFailureReasonMessages: + """Test that validation failure reasons include actionable auth guidance.""" + + def setup_method(self): + self.runner = CliRunner() + try: + self.original_dir = os.getcwd() + except FileNotFoundError: + self.original_dir = str(Path(__file__).parent.parent.parent) + os.chdir(self.original_dir) + + def teardown_method(self): + try: + os.chdir(self.original_dir) + except (FileNotFoundError, OSError): + os.chdir(str(Path(__file__).parent.parent.parent)) + + @contextlib.contextmanager + def _chdir_tmp(self): + with tempfile.TemporaryDirectory() as tmp_dir: + try: + os.chdir(tmp_dir) + yield Path(tmp_dir) + finally: + os.chdir(self.original_dir) + + @patch("apm_cli.commands.install._validate_package_exists", return_value=False) 
+ def test_validation_failure_without_verbose_includes_verbose_hint(self, mock_validate): + """When validation fails without --verbose, reason should suggest --verbose.""" + with self._chdir_tmp(): + # Create apm.yml so we exercise the validation path + Path("apm.yml").write_text("name: test\ndependencies:\n apm: []\n mcp: []\n") + result = self.runner.invoke(cli, ["install", "owner/repo"]) + # Normalize terminal line-wrapping before checking + output = " ".join(result.output.split()) + assert "run with --verbose for auth details" in output + + @patch("apm_cli.commands.install._validate_package_exists", return_value=False) + def test_validation_failure_with_verbose_omits_verbose_hint(self, mock_validate): + """When validation fails with --verbose, reason should NOT suggest --verbose.""" + with self._chdir_tmp(): + Path("apm.yml").write_text("name: test\ndependencies:\n apm: []\n mcp: []\n") + result = self.runner.invoke(cli, ["install", "owner/repo", "--verbose"]) + assert "not accessible or doesn't exist" in result.output + assert "run with --verbose for auth details" not in result.output + + @patch("apm_cli.core.token_manager.GitHubTokenManager.resolve_credential_from_git", return_value=None) + @patch("urllib.request.urlopen") + def test_verbose_validation_failure_calls_build_error_context(self, mock_urlopen, _mock_cred): + """When GitHub validation fails in verbose mode, build_error_context should be invoked.""" + import urllib.error + mock_urlopen.side_effect = urllib.error.HTTPError( + url="https://api.github.com/repos/owner/repo", code=404, + msg="Not Found", hdrs={}, fp=None, + ) + + with patch.object( + __import__("apm_cli.core.auth", fromlist=["AuthResolver"]).AuthResolver, + "build_error_context", + return_value="Authentication failed for accessing owner/repo on github.com.\nNo token available.", + ) as mock_build_ctx: + from apm_cli.commands.install import _validate_package_exists + result = _validate_package_exists("owner/repo", verbose=True) + assert 
result is False + mock_build_ctx.assert_called_once() + call_args = mock_build_ctx.call_args + assert "github.com" in call_args[0][0] # host + assert "owner/repo" in call_args[0][1] # operation From c3777923c13f4d36150eb64c349dc02fda21fa87 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 11:06:21 +0100 Subject: [PATCH 15/40] feat: add auth+logging acceptance tests, fix moderate UX violations MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Auth acceptance: - scripts/test-auth-acceptance.sh: 10 P0 auth scenarios covering public/private repos, token priority, fallback chain, error paths. Runnable locally (set tokens + run) or via CI workflow_dispatch. - .github/workflows/auth-acceptance.yml: manual trigger with inputs for test repos, uses auth-acceptance environment for secrets. Logging acceptance: - tests/acceptance/test_logging_acceptance.py: 16 tests verifying install output contract (validation, errors, dry-run, verbose, symbol consistency, diagnostics ordering) — fully mocked, no network needed. 
Moderate UX fixes: - Route hash mismatch, orphan removal, and lockfile errors through DiagnosticCollector for deferred summary rendering - Fix compile/cli.py color bypasses: _rich_echo(color=yellow/red) replaced with logger.warning()/logger.error() Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/workflows/auth-acceptance.yml | 54 ++ scripts/test-auth-acceptance.sh | 569 ++++++++++++++++++ src/apm_cli/commands/compile/cli.py | 4 +- src/apm_cli/commands/install.py | 31 +- tests/acceptance/__init__.py | 0 tests/acceptance/test_logging_acceptance.py | 621 ++++++++++++++++++++ 6 files changed, 1261 insertions(+), 18 deletions(-) create mode 100644 .github/workflows/auth-acceptance.yml create mode 100755 scripts/test-auth-acceptance.sh create mode 100644 tests/acceptance/__init__.py create mode 100644 tests/acceptance/test_logging_acceptance.py diff --git a/.github/workflows/auth-acceptance.yml b/.github/workflows/auth-acceptance.yml new file mode 100644 index 00000000..c12eb2bc --- /dev/null +++ b/.github/workflows/auth-acceptance.yml @@ -0,0 +1,54 @@ +name: Auth Acceptance Tests + +on: + workflow_dispatch: + inputs: + public_repo: + description: 'Public test repo (owner/repo)' + default: 'microsoft/apm-sample-package' + private_repo: + description: 'Private test repo (owner/repo, optional)' + required: false + emu_repo: + description: 'EMU internal test repo (owner/repo, optional)' + required: false + +env: + PYTHON_VERSION: '3.12' + +permissions: + contents: read + +jobs: + auth-tests: + runs-on: ubuntu-latest + environment: auth-acceptance # configure PAT secrets in this environment + permissions: + contents: read + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: ${{ env.PYTHON_VERSION }} + + - name: Install uv + uses: astral-sh/setup-uv@v4 + + - name: Install dependencies + run: uv sync + + - name: Install APM in dev mode + run: uv run pip install -e . 
+ + - name: Run auth acceptance tests + env: + APM_BINARY: .venv/bin/apm + AUTH_TEST_PUBLIC_REPO: ${{ inputs.public_repo }} + AUTH_TEST_PRIVATE_REPO: ${{ inputs.private_repo }} + AUTH_TEST_EMU_REPO: ${{ inputs.emu_repo }} + GITHUB_APM_PAT: ${{ secrets.AUTH_TEST_GITHUB_APM_PAT }} + GITHUB_TOKEN: ${{ secrets.AUTH_TEST_GITHUB_TOKEN }} + run: ./scripts/test-auth-acceptance.sh diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh new file mode 100755 index 00000000..75ba9551 --- /dev/null +++ b/scripts/test-auth-acceptance.sh @@ -0,0 +1,569 @@ +#!/usr/bin/env bash +# ============================================================================= +# APM Auth Acceptance Tests +# ============================================================================= +# +# Tests the auth resolution chain across token sources, host types, and repo +# visibilities. Covers P0 scenarios from the auth acceptance matrix. +# +# LOCAL USAGE: +# # 1. Set required tokens: +# export APM_BINARY="/path/to/apm" # or uses 'apm' from PATH +# export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" +# export AUTH_TEST_PRIVATE_REPO="your-org/private-repo" # optional +# export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional +# export GITHUB_APM_PAT="ghp_..." # or github_pat_... +# export GITHUB_APM_PAT_YOURORG="github_pat_..." # for per-org test +# +# # 2. Run: +# ./scripts/test-auth-acceptance.sh +# +# CI USAGE (GitHub Actions): +# Triggered via workflow_dispatch. Secrets injected as env vars. 
+# See .github/workflows/auth-acceptance.yml +# +# ============================================================================= + +set -uo pipefail + +# --------------------------------------------------------------------------- +# Colors & symbols +# --------------------------------------------------------------------------- +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +DIM='\033[2m' +BOLD='\033[1m' +NC='\033[0m' + +# --------------------------------------------------------------------------- +# Counters +# --------------------------------------------------------------------------- +TESTS_PASSED=0 +TESTS_FAILED=0 +TESTS_SKIPPED=0 +RESULTS=() # array of "STATUS scenario_name" + +# --------------------------------------------------------------------------- +# Config +# --------------------------------------------------------------------------- +APM_BINARY="${APM_BINARY:-apm}" +AUTH_TEST_PUBLIC_REPO="${AUTH_TEST_PUBLIC_REPO:-microsoft/apm-sample-package}" +AUTH_TEST_PRIVATE_REPO="${AUTH_TEST_PRIVATE_REPO:-}" +AUTH_TEST_EMU_REPO="${AUTH_TEST_EMU_REPO:-}" + +# Stash original env so we can restore between tests +_ORIG_GITHUB_APM_PAT="${GITHUB_APM_PAT:-}" +_ORIG_GITHUB_TOKEN="${GITHUB_TOKEN:-}" +_ORIG_GH_TOKEN="${GH_TOKEN:-}" + +# --------------------------------------------------------------------------- +# Temp dir & cleanup +# --------------------------------------------------------------------------- +WORK_DIR="$(mktemp -d)" +trap 'rm -rf "$WORK_DIR"' EXIT + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + +log_header() { + echo "" + echo -e "${BOLD}${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" + echo -e "${BOLD}${BLUE} $1${NC}" + echo -e "${BOLD}${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" +} + +log_scenario() { + echo "" + echo -e "${BOLD}🧪 Scenario: $1${NC}" +} + 
+record_pass() {
+ local name="$1"
+ TESTS_PASSED=$((TESTS_PASSED + 1))
+ RESULTS+=("PASS $name")
+ echo -e " ${GREEN}✅ PASS${NC} — $name"
+}
+
+record_fail() {
+ local name="$1"
+ local detail="${2:-}"
+ TESTS_FAILED=$((TESTS_FAILED + 1))
+ RESULTS+=("FAIL $name")
+ echo -e " ${RED}❌ FAIL${NC} — $name"
+ if [[ -n "$detail" ]]; then
+ echo -e " ${DIM} $detail${NC}"
+ fi
+}
+
+record_skip() {
+ local name="$1"
+ local reason="${2:-missing env var}"
+ TESTS_SKIPPED=$((TESTS_SKIPPED + 1))
+ RESULTS+=("SKIP $name")
+ echo -e " ${YELLOW}⏭️ SKIP${NC} — $name ($reason)"
+}
+
+# Prepare a minimal apm.yml in a fresh temp directory and echo the path.
+# Usage: test_dir=$(setup_test_dir "owner/repo")
+setup_test_dir() {
+ local package="$1"
+ local dir
+ dir="$(mktemp -d "$WORK_DIR/test-XXXXXX")"
+ cat > "$dir/apm.yml" <<EOF
+name: auth-acceptance-test
+dependencies:
+  apm:
+    - $package
+EOF
+ echo "$dir"
+}
+
+# Run apm install in a fresh test dir; sets APM_OUTPUT and APM_EXIT.
+# Usage: run_apm_install <package> [extra_args...]
+run_apm_install() {
+ local package="$1"; shift
+ local dir
+ dir="$(setup_test_dir "$package")"
+
+ APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install "$@" 2>&1)" && APM_EXIT=0 || APM_EXIT=$?
+}
+
+# Assert that $APM_OUTPUT contains a pattern (extended grep).
+assert_output_contains() {
+ local pattern="$1"
+ local msg="${2:-output should contain '$pattern'}"
+ if echo "$APM_OUTPUT" | grep -qiE "$pattern"; then
+ return 0
+ else
+ record_fail "$msg" "pattern not found: $pattern"
+ return 1
+ fi
+}
+
+# Assert that $APM_OUTPUT does NOT contain a pattern.
+assert_output_not_contains() {
+ local pattern="$1"
+ local msg="${2:-output should not contain '$pattern'}"
+ if echo "$APM_OUTPUT" | grep -qiE "$pattern"; then
+ record_fail "$msg" "unexpected pattern found: $pattern"
+ return 1
+ else
+ return 0
+ fi
+}
+
+# Assert exit code.
+assert_exit_code() {
+ local expected="$1"
+ local msg="${2:-exit code should be $expected}"
+ if [[ "$APM_EXIT" -eq "$expected" ]]; then
+ return 0
+ else
+ record_fail "$msg" "expected exit=$expected, got exit=$APM_EXIT"
+ return 1
+ fi
+}
+
+# Unset all auth env vars to guarantee a clean slate.
+unset_all_auth() { + unset GITHUB_APM_PAT 2>/dev/null || true + unset GITHUB_TOKEN 2>/dev/null || true + unset GH_TOKEN 2>/dev/null || true + # Unset any per-org vars that may have been set + while IFS='=' read -r name _; do + if [[ "$name" == GITHUB_APM_PAT_* ]]; then + unset "$name" 2>/dev/null || true + fi + done < <(env) +} + +# Restore original auth env vars. +restore_auth() { + unset_all_auth + [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + [[ -n "$_ORIG_GITHUB_TOKEN" ]] && export GITHUB_TOKEN="$_ORIG_GITHUB_TOKEN" + [[ -n "$_ORIG_GH_TOKEN" ]] && export GH_TOKEN="$_ORIG_GH_TOKEN" +} + +# Derive the org-env-suffix from an owner/repo string. +# "my-org/repo" → "MY_ORG" +org_env_suffix() { + local owner="${1%%/*}" + echo "$owner" | tr '[:lower:]-' '[:upper:]_' +} + +# --------------------------------------------------------------------------- +# Scenario 1: Public repo, no auth +# --------------------------------------------------------------------------- +test_scenario_1_public_no_auth() { + local name="Public repo, no auth" + log_scenario "$name" + unset_all_auth + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + assert_output_contains "unauthenticated" "$name — shows unauthenticated access" || ok=false + assert_output_not_contains "source=GITHUB_APM_PAT" "$name — no PAT source shown" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 2: Public repo, PAT set (rate-limit behavior) +# --------------------------------------------------------------------------- +test_scenario_2_public_with_pat() { + local name="Public repo, PAT set" + log_scenario "$name" + + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + unset_all_auth + export 
GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + + run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + # Public repos try unauthenticated first to save rate limits + assert_output_contains "unauthenticated" "$name — tries unauth first" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 3: Private repo, global PAT +# --------------------------------------------------------------------------- +test_scenario_3_private_global_pat() { + local name="Private repo, global PAT" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + unset_all_auth + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + # Verbose should show the auth fallback chain + assert_output_contains "source=GITHUB_APM_PAT" "$name — shows PAT source" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 4: Private repo, per-org PAT +# --------------------------------------------------------------------------- +test_scenario_4_private_per_org_pat() { + local name="Private repo, per-org PAT" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + + local org_suffix + org_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO")" + local per_org_var="GITHUB_APM_PAT_${org_suffix}" + local per_org_val="${!per_org_var:-}" + + if [[ -z "$per_org_val" ]] && [[ -n "$_ORIG_GITHUB_APM_PAT" ]]; then + # Fall back to the global PAT for testing the per-org path + 
per_org_val="$_ORIG_GITHUB_APM_PAT" + fi + if [[ -z "$per_org_val" ]]; then + record_skip "$name" "$per_org_var not set" + return + fi + + unset_all_auth + export "$per_org_var=$per_org_val" + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + assert_output_contains "source=GITHUB_APM_PAT_${org_suffix}" "$name — shows per-org source" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 5: Token priority (per-org > global) +# --------------------------------------------------------------------------- +test_scenario_5_token_priority() { + local name="Token priority: per-org > global" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + local org_suffix + org_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO")" + local per_org_var="GITHUB_APM_PAT_${org_suffix}" + + unset_all_auth + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + export "$per_org_var=$_ORIG_GITHUB_APM_PAT" + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + # Per-org should win over global + assert_output_contains "source=GITHUB_APM_PAT_${org_suffix}" "$name — per-org wins" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 6: GITHUB_TOKEN fallback +# --------------------------------------------------------------------------- +test_scenario_6_github_token_fallback() { + local name="GITHUB_TOKEN fallback" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + if [[ -z 
"$_ORIG_GITHUB_TOKEN" ]] && [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_TOKEN and GITHUB_APM_PAT not set" + return + fi + + local token="${_ORIG_GITHUB_TOKEN:-$_ORIG_GITHUB_APM_PAT}" + + unset_all_auth + export GITHUB_TOKEN="$token" + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name — succeeds" || ok=false + assert_output_contains "source=GITHUB_TOKEN" "$name — shows GITHUB_TOKEN source" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 7: Invalid token, graceful failure +# --------------------------------------------------------------------------- +test_scenario_7_invalid_token() { + local name="Invalid token, graceful failure" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + + unset_all_auth + export GITHUB_APM_PAT="ghp_invalidtoken1234567890abcdefghijklmn" + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 1 "$name — fails with exit 1" || ok=false + # Should not crash or produce a traceback + assert_output_not_contains "Traceback" "$name — no Python traceback" || ok=false + + $ok && record_pass "$name" + unset GCM_INTERACTIVE + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 8: Nonexistent repo +# --------------------------------------------------------------------------- +test_scenario_8_nonexistent_repo() { + local name="Nonexistent repo" + log_scenario "$name" + + unset_all_auth + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "owner/this-repo-does-not-exist-12345" --verbose + + local ok=true + assert_exit_code 1 "$name — fails with exit 1" || ok=false + assert_output_contains "not accessible or 
doesn't exist" "$name — clear error message" || ok=false + + $ok && record_pass "$name" + unset GCM_INTERACTIVE + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 9: No auth, private repo +# --------------------------------------------------------------------------- +test_scenario_9_no_auth_private_repo() { + local name="No auth, private repo" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + + unset_all_auth + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" + + local ok=true + assert_exit_code 1 "$name — fails" || ok=false + assert_output_contains "not accessible|--verbose|GITHUB_APM_PAT|GITHUB_TOKEN|auth" \ + "$name — suggests auth guidance" || ok=false + + $ok && record_pass "$name" + unset GCM_INTERACTIVE + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 10: Verbose vs non-verbose output contract +# --------------------------------------------------------------------------- +test_scenario_10_verbose_contract() { + local name="Verbose vs non-verbose output contract" + log_scenario "$name" + + unset_all_auth + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + # Non-verbose run + run_apm_install "owner/this-repo-does-not-exist-12345" + local non_verbose_output="$APM_OUTPUT" + local non_verbose_exit="$APM_EXIT" + + # Verbose run + run_apm_install "owner/this-repo-does-not-exist-12345" --verbose + local verbose_output="$APM_OUTPUT" + local verbose_exit="$APM_EXIT" + + local ok=true + + # Both should fail + APM_EXIT="$non_verbose_exit" + assert_exit_code 1 "$name — non-verbose fails" || ok=false + APM_EXIT="$verbose_exit" + assert_exit_code 1 "$name — verbose fails" || ok=false + + # Non-verbose: should NOT expose auth resolution details + APM_OUTPUT="$non_verbose_output" + 
assert_output_not_contains "Auth resolved:" "$name — non-verbose hides auth details" || ok=false + assert_output_contains "--verbose" "$name — non-verbose hints at --verbose" || ok=false + + # Verbose: should show auth diagnostic info + APM_OUTPUT="$verbose_output" + # Verbose output should contain auth-related diagnostic lines + if echo "$verbose_output" | grep -qiE "Auth resolved|unauthenticated|API .* →"; then + : # ok + else + record_fail "$name — verbose shows auth steps" "no auth diagnostic lines found" + ok=false + fi + + $ok && record_pass "$name" + unset GCM_INTERACTIVE + restore_auth +} + +# --------------------------------------------------------------------------- +# Run all scenarios +# --------------------------------------------------------------------------- + +log_header "APM Auth Acceptance Tests" +echo "" +echo -e "${DIM}Binary: $APM_BINARY${NC}" +echo -e "${DIM}Public repo: $AUTH_TEST_PUBLIC_REPO${NC}" +echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}" +echo -e "${DIM}EMU repo: ${AUTH_TEST_EMU_REPO:-}${NC}" +echo "" + +test_scenario_1_public_no_auth +test_scenario_2_public_with_pat +test_scenario_3_private_global_pat +test_scenario_4_private_per_org_pat +test_scenario_5_token_priority +test_scenario_6_github_token_fallback +test_scenario_7_invalid_token +test_scenario_8_nonexistent_repo +test_scenario_9_no_auth_private_repo +test_scenario_10_verbose_contract + +# --------------------------------------------------------------------------- +# Summary +# --------------------------------------------------------------------------- +TOTAL=$((TESTS_PASSED + TESTS_FAILED + TESTS_SKIPPED)) + +log_header "Summary" +echo "" +printf " %-8s %s\n" "Total:" "$TOTAL" +printf " ${GREEN}%-8s %s${NC}\n" "Passed:" "$TESTS_PASSED" +printf " ${RED}%-8s %s${NC}\n" "Failed:" "$TESTS_FAILED" +printf " ${YELLOW}%-8s %s${NC}\n" "Skipped:" "$TESTS_SKIPPED" +echo "" +echo -e "${DIM}──────────────────────────────────────────────────${NC}" + +for entry in 
"${RESULTS[@]}"; do + status="${entry%% *}" + scenario="${entry#* }" + case "$status" in + PASS) echo -e " ${GREEN}✅${NC} $scenario" ;; + FAIL) echo -e " ${RED}❌${NC} $scenario" ;; + SKIP) echo -e " ${YELLOW}⏭️${NC} $scenario" ;; + esac +done + +echo -e "${DIM}──────────────────────────────────────────────────${NC}" +echo "" + +if [[ "$TESTS_FAILED" -gt 0 ]]; then + echo -e "${RED}${BOLD}Auth acceptance tests FAILED${NC}" + exit 1 +fi + +echo -e "${GREEN}${BOLD}Auth acceptance tests PASSED${NC}" +exit 0 diff --git a/src/apm_cli/commands/compile/cli.py b/src/apm_cli/commands/compile/cli.py index 10e63941..a972de27 100644 --- a/src/apm_cli/commands/compile/cli.py +++ b/src/apm_cli/commands/compile/cli.py @@ -542,12 +542,12 @@ def compile( f"Compilation completed with {len(result.warnings)} warnings:" ) for warning in result.warnings: - _rich_echo(f" [!] {warning}", color="yellow") + logger.warning(f" {warning}") if result.errors: logger.error(f"Compilation failed with {len(result.errors)} errors:") for error in result.errors: - _rich_echo(f" [x] {error}", color="red") + logger.error(f" {error}") sys.exit(1) # Check for orphaned packages after successful compilation diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 9034492c..657f7a64 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -1577,16 +1577,15 @@ def _collect_descendants(node, visited=None): if skip_download and _dep_locked_chk and _dep_locked_chk.content_hash: from ..utils.content_hash import verify_package_hash if not verify_package_hash(install_path, _dep_locked_chk.content_hash): + _hash_msg = ( + f"Content hash mismatch for " + f"{dep_ref.get_unique_key()} -- re-downloading" + ) + diagnostics.warn(_hash_msg, package=dep_ref.get_unique_key()) if logger: - logger.progress( - f"Content hash mismatch for " - f"{dep_ref.get_unique_key()} -- re-downloading" - ) + logger.progress(_hash_msg) else: - _rich_info( - f"Content hash mismatch for " - 
f"{dep_ref.get_unique_key()} — re-downloading" - ) + _rich_info(_hash_msg) safe_rmtree(install_path, apm_modules_dir) skip_download = False @@ -1904,14 +1903,12 @@ def _collect_descendants(node, visited=None): _deleted_orphan_paths.append(_target) _removed_orphan_count += 1 except Exception as _orphan_err: + _orphan_msg = f"Could not remove orphaned path {_orphan_path}: {_orphan_err}" + diagnostics.error(_orphan_msg) if logger: - logger.verbose_detail( - f" Could not remove orphaned path {_orphan_path}: {_orphan_err}" - ) + logger.verbose_detail(f" {_orphan_msg}") else: - _rich_error( - f" └─ Could not remove orphaned path {_orphan_path}: {_orphan_err}" - ) + _rich_error(f" └─ {_orphan_msg}") _failed_orphan_count += 1 # Clean up empty parent directories left after file removal if _deleted_orphan_paths: @@ -1980,10 +1977,12 @@ def _collect_descendants(node, visited=None): else: _rich_info(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") except Exception as e: + _lock_msg = f"Could not generate apm.lock.yaml: {e}" + diagnostics.error(_lock_msg) if logger: - logger.warning(f"Could not generate apm.lock.yaml: {e}") + logger.warning(_lock_msg) else: - _rich_warning(f"Could not generate apm.lock.yaml: {e}") + _rich_warning(_lock_msg) # Show integration stats (verbose-only when logger is available) if total_links_resolved > 0: diff --git a/tests/acceptance/__init__.py b/tests/acceptance/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/tests/acceptance/test_logging_acceptance.py b/tests/acceptance/test_logging_acceptance.py new file mode 100644 index 00000000..e636fb33 --- /dev/null +++ b/tests/acceptance/test_logging_acceptance.py @@ -0,0 +1,621 @@ +"""Acceptance tests for APM CLI logging UX contract. + +These tests verify the exact output contract for install command logging. +They use Click's CliRunner with mocked network calls — NO real tokens or +network access needed. 
+ +Each test validates output format, symbols, and message content against the +acceptance plan. +""" + +import contextlib +import os +import tempfile +from pathlib import Path +from unittest.mock import MagicMock, patch + +import pytest +import yaml +from click.testing import CliRunner + +from apm_cli.cli import cli +from apm_cli.models.results import InstallResult +from apm_cli.utils.console import STATUS_SYMBOLS + + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + + +class _InstallAcceptanceBase: + """Shared fixtures for install logging acceptance tests.""" + + def setup_method(self): + self.runner = CliRunner() + try: + self.original_dir = os.getcwd() + except FileNotFoundError: + self.original_dir = str(Path(__file__).parent.parent.parent) + os.chdir(self.original_dir) + + def teardown_method(self): + try: + os.chdir(self.original_dir) + except (FileNotFoundError, OSError): + repo_root = Path(__file__).parent.parent.parent + os.chdir(str(repo_root)) + + @contextlib.contextmanager + def _chdir_tmp(self): + with tempfile.TemporaryDirectory() as tmp_dir: + try: + os.chdir(tmp_dir) + yield Path(tmp_dir) + finally: + os.chdir(self.original_dir) + + @staticmethod + def _write_apm_yml(tmp: Path, deps=None, mcp_deps=None): + """Write a minimal apm.yml.""" + data = { + "name": "test-project", + "dependencies": { + "apm": deps or [], + "mcp": mcp_deps or [], + }, + } + (tmp / "apm.yml").write_text(yaml.safe_dump(data, sort_keys=False)) + + @staticmethod + def _make_install_result(**kwargs): + """Build an InstallResult with sensible defaults.""" + defaults = dict( + installed_count=0, + prompts_integrated=0, + agents_integrated=0, + diagnostics=MagicMock( + has_diagnostics=False, + has_critical_security=False, + error_count=0, + ), + ) + defaults.update(kwargs) + return InstallResult(**defaults) + + # Common patch targets + _VALIDATE = 
"apm_cli.commands.install._validate_package_exists" + _INSTALL_APM = "apm_cli.commands.install._install_apm_dependencies" + _APM_PKG = "apm_cli.commands.install.APMPackage" + _DEPS_AVAIL = "apm_cli.commands.install.APM_DEPS_AVAILABLE" + _MIGRATE_LOCK = "apm_cli.commands.install.migrate_lockfile_if_needed" + _LOCKFILE_READ = "apm_cli.commands.install.LockFile.read" + _GET_LOCKPATH = "apm_cli.commands.install.get_lockfile_path" + + +# --------------------------------------------------------------------------- +# I1: Single public package, happy path +# --------------------------------------------------------------------------- + + +class TestI1SinglePublicPackageHappyPath(_InstallAcceptanceBase): + """I1: Single public package installs successfully.""" + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_happy_path_output( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + + mock_install.return_value = self._make_install_result(installed_count=1) + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + out = result.output + assert result.exit_code == 0, f"Exit {result.exit_code}: {out}" + + # Validation phase + assert "Validating 1 package" in out + assert "✓ owner/repo" 
in out + + # Installation phase + assert "Installing" in out + + # Summary — 1 APM dependency + assert "1 APM dependency" in out or "Installed 1 APM" in out + + +# --------------------------------------------------------------------------- +# I4: Package fails validation +# --------------------------------------------------------------------------- + + +class TestI4PackageFailsValidation(_InstallAcceptanceBase): + """I4: Package fails validation — appropriate error output.""" + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_not_accessible_message(self, mock_validate): + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/nonexistent"]) + + out = result.output + assert "not accessible or doesn't exist" in out + assert "✗" in out + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_verbose_hint_when_not_verbose(self, mock_validate): + """Non-verbose mode shows --verbose hint.""" + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/nonexistent"]) + + assert "--verbose" in result.output + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_no_verbose_hint_when_verbose(self, mock_validate): + """Verbose mode should NOT repeat the --verbose hint in the validation reason.""" + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke( + cli, ["install", "--verbose", "owner/nonexistent"] + ) + + # The validation failure reason should NOT contain the verbose hint + # when already in verbose mode. 
+ lines_with_cross = [l for l in result.output.splitlines() if "✗" in l] + for line in lines_with_cross: + assert "run with --verbose" not in line.lower(), ( + f"Redundant --verbose hint found in verbose mode: {line}" + ) + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_all_failed_summary(self, mock_validate): + """When all packages fail, summary says 'Nothing to install'.""" + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/nonexistent"]) + + assert "All packages failed validation" in result.output or "Nothing to install" in result.output + + +# --------------------------------------------------------------------------- +# I5: Package already installed +# --------------------------------------------------------------------------- + + +class TestI5PackageAlreadyInstalled(_InstallAcceptanceBase): + """I5: Package already in apm.yml.""" + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_already_installed_message( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + mock_install.return_value = self._make_install_result(installed_count=1) + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + # Pre-populate apm.yml WITH the package already listed + 
self._write_apm_yml(tmp, deps=["owner/repo"]) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + out = result.output + assert "already in apm.yml" in out + + +# --------------------------------------------------------------------------- +# I6: Mixed valid + invalid packages +# --------------------------------------------------------------------------- + + +class TestI6MixedValidInvalid(_InstallAcceptanceBase): + """I6: First package validates, second doesn't.""" + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_mixed_shows_check_and_cross( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + # First package valid, second invalid + mock_validate.side_effect = [True, False] + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="good/pkg", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + mock_install.return_value = self._make_install_result(installed_count=1) + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke( + cli, ["install", "good/pkg", "bad/missing"] + ) + + out = result.output + assert result.exit_code == 0, f"Exit {result.exit_code}: {out}" + + # Check mark for good package, cross for bad + assert "✓" in out, "Expected ✓ for valid package" + assert "✗" in out, "Expected ✗ for invalid package" + + # Continues to install the valid one + assert "1" in out and "failed validation" in out + + +# 
--------------------------------------------------------------------------- +# I7: Full manifest install, up to date +# --------------------------------------------------------------------------- + + +class TestI7ManifestUpToDate(_InstallAcceptanceBase): + """I7: No packages arg, deps up to date.""" + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + def test_up_to_date_or_no_deps( + self, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + mock_install.return_value = self._make_install_result() + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp, deps=["owner/cached-pkg"]) + result = self.runner.invoke(cli, ["install"]) + + out = result.output + # Should indicate nothing new was done, or summary with 0 + assert result.exit_code == 0, f"Exit {result.exit_code}: {out}" + + +# --------------------------------------------------------------------------- +# Logging rules: Traffic-light, non-verbose, verbose, dry-run, symbols +# --------------------------------------------------------------------------- + + +class TestLoggingRules(_InstallAcceptanceBase): + """Verify logging traffic-light rules and verbosity contracts.""" + + # --- Non-verbose contract --- + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_non_verbose_no_auth_details(self, mock_validate): + """Non-verbose output must NOT contain auth debug details.""" + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + 
self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + out = result.output + assert "Auth resolved" not in out + assert "API" not in out + assert "git ls-remote" not in out + + @patch(_InstallAcceptanceBase._VALIDATE) + def test_non_verbose_has_verbose_hint(self, mock_validate): + """Non-verbose failure should suggest --verbose.""" + mock_validate.return_value = False + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + assert "--verbose" in result.output + + # --- Dry-run contract --- + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_dry_run_shows_dry_run_label( + self, + mock_validate, + mock_apm_pkg, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + """--dry-run output must say 'dry run' or 'Dry run'.""" + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke( + cli, ["install", "--dry-run", "owner/repo"] + ) + + out = result.output.lower() + assert "dry run" in out or "dry-run" in out, ( + f"Expected dry-run label in output:\n{result.output}" + ) + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + 
@patch(_InstallAcceptanceBase._VALIDATE) + def test_dry_run_no_file_changes( + self, + mock_validate, + mock_apm_pkg, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + """--dry-run must not write to apm.yml beyond the initial package addition.""" + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + original = (tmp / "apm.yml").read_text() + + result = self.runner.invoke( + cli, ["install", "--dry-run", "owner/repo"] + ) + + # apm.yml should be unchanged (dry-run skips writing) + final = (tmp / "apm.yml").read_text() + assert original == final, "Dry-run modified apm.yml" + + # --- Symbol consistency --- + + def test_status_symbols_are_ascii_brackets(self): + """All STATUS_SYMBOLS must be ASCII bracket format [x].""" + bracket_pattern = {"[*]", "[>]", "[i]", "[!]", "[x]", "[+]", "[#]"} + for key, sym in STATUS_SYMBOLS.items(): + assert sym in bracket_pattern, ( + f"STATUS_SYMBOLS['{key}'] = '{sym}' is not a valid bracket symbol" + ) + + +# --------------------------------------------------------------------------- +# Error paths +# --------------------------------------------------------------------------- + + +class TestErrorPaths(_InstallAcceptanceBase): + """Verify error output patterns and --verbose hints.""" + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def 
test_install_error_verbose_hint( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + """When _install_apm_dependencies raises, non-verbose shows hint.""" + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + + mock_install.side_effect = RuntimeError("download timed out") + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + out = result.output + assert result.exit_code == 1 + assert "Run with --verbose" in out + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_install_error_no_hint_when_verbose( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + """When --verbose is active, don't show the --verbose hint.""" + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + + mock_install.side_effect = RuntimeError("download timed out") + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke( + cli, 
["install", "--verbose", "owner/repo"] + ) + + out = result.output + assert result.exit_code == 1 + assert "Run with --verbose" not in out + + @patch(_InstallAcceptanceBase._GET_LOCKPATH) + @patch(_InstallAcceptanceBase._LOCKFILE_READ) + @patch(_InstallAcceptanceBase._MIGRATE_LOCK) + @patch(_InstallAcceptanceBase._INSTALL_APM) + @patch(_InstallAcceptanceBase._APM_PKG) + @patch(_InstallAcceptanceBase._DEPS_AVAIL, True) + @patch(_InstallAcceptanceBase._VALIDATE) + def test_diagnostics_render_before_summary( + self, + mock_validate, + mock_apm_pkg, + mock_install, + mock_migrate, + mock_lock_read, + mock_lock_path, + ): + """Diagnostics section must appear before final install summary.""" + mock_validate.return_value = True + + pkg = MagicMock() + pkg.get_apm_dependencies.return_value = [ + MagicMock(repo_url="owner/repo", reference="main") + ] + pkg.get_mcp_dependencies.return_value = [] + pkg.get_dev_apm_dependencies.return_value = [] + mock_apm_pkg.from_apm_yml.return_value = pkg + + # Build a real DiagnosticCollector with some content + from apm_cli.utils.diagnostics import DiagnosticCollector + + diag = DiagnosticCollector() + diag.warn("test-pkg", "some warning") + + mock_install.return_value = self._make_install_result( + installed_count=1, + diagnostics=diag, + ) + mock_lock_read.return_value = None + mock_lock_path.return_value = Path("apm.lock.yaml") + + with self._chdir_tmp() as tmp: + self._write_apm_yml(tmp) + result = self.runner.invoke(cli, ["install", "owner/repo"]) + + out = result.output + assert result.exit_code == 0, f"Exit {result.exit_code}: {out}" + + # Diagnostics separator appears before summary + diag_pos = out.find("Diagnostics") + summary_pos = out.find("Installed") + if diag_pos != -1 and summary_pos != -1: + assert diag_pos < summary_pos, ( + "Diagnostics should render BEFORE the install summary" + ) From 2752a8657b608ccc369b77453ac86f1417d9c759 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 11:11:06 +0100 Subject: 
[PATCH 16/40] feat: expand auth acceptance to 18 scenarios covering all sources Add scenarios for: GH_TOKEN fallback, EMU internal repos, credential helper only (gh auth), token type detection, mixed manifest (public + private), ADO with/without PAT, fine-grained PAT wrong resource owner. Update workflow with GHE/ADO inputs and secrets. Fix emoji symbols to ASCII-safe [+]/[x]/[-] matching CLI convention. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/workflows/auth-acceptance.yml | 10 + scripts/test-auth-acceptance.sh | 363 ++++++++++++++++++++++++-- 2 files changed, 348 insertions(+), 25 deletions(-) diff --git a/.github/workflows/auth-acceptance.yml b/.github/workflows/auth-acceptance.yml index c12eb2bc..eb431a5f 100644 --- a/.github/workflows/auth-acceptance.yml +++ b/.github/workflows/auth-acceptance.yml @@ -12,6 +12,12 @@ on: emu_repo: description: 'EMU internal test repo (owner/repo, optional)' required: false + ghe_repo: + description: 'GHE Cloud test repo (org/repo@host, optional)' + required: false + ado_repo: + description: 'Azure DevOps test repo (org/project/_git/repo, optional)' + required: false env: PYTHON_VERSION: '3.12' @@ -49,6 +55,10 @@ jobs: AUTH_TEST_PUBLIC_REPO: ${{ inputs.public_repo }} AUTH_TEST_PRIVATE_REPO: ${{ inputs.private_repo }} AUTH_TEST_EMU_REPO: ${{ inputs.emu_repo }} + AUTH_TEST_GHE_REPO: ${{ inputs.ghe_repo }} + AUTH_TEST_ADO_REPO: ${{ inputs.ado_repo }} GITHUB_APM_PAT: ${{ secrets.AUTH_TEST_GITHUB_APM_PAT }} GITHUB_TOKEN: ${{ secrets.AUTH_TEST_GITHUB_TOKEN }} + GH_TOKEN: ${{ secrets.AUTH_TEST_GH_TOKEN }} + ADO_APM_PAT: ${{ secrets.AUTH_TEST_ADO_APM_PAT }} run: ./scripts/test-auth-acceptance.sh diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh index 75ba9551..2e006920 100755 --- a/scripts/test-auth-acceptance.sh +++ b/scripts/test-auth-acceptance.sh @@ -3,22 +3,45 @@ # APM Auth Acceptance Tests # 
============================================================================= # -# Tests the auth resolution chain across token sources, host types, and repo -# visibilities. Covers P0 scenarios from the auth acceptance matrix. +# Tests the auth resolution chain across ALL token sources, host types, repo +# visibilities, and token types from the acceptance matrix: # -# LOCAL USAGE: -# # 1. Set required tokens: -# export APM_BINARY="/path/to/apm" # or uses 'apm' from PATH -# export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" -# export AUTH_TEST_PRIVATE_REPO="your-org/private-repo" # optional -# export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional -# export GITHUB_APM_PAT="ghp_..." # or github_pat_... -# export GITHUB_APM_PAT_YOURORG="github_pat_..." # for per-org test +# Sources: GITHUB_APM_PAT_{ORG}, GITHUB_APM_PAT, GITHUB_TOKEN, GH_TOKEN, +# ADO_APM_PAT, git-credential-fill, unauthenticated +# Hosts: github.com, *.ghe.com, custom GHES, Azure DevOps +# Repos: public, private, internal (EMU) +# Tokens: fine-grained (github_pat_), classic (ghp_), OAuth (ghu_) # -# # 2. Run: +# --------------------------------------------------------------------------- +# LOCAL USAGE +# --------------------------------------------------------------------------- +# +# # 1. Required — binary path: +# export APM_BINARY="/path/to/dist/apm/apm" +# +# # 2. Test repos (public has a default; others optional): +# export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" # default +# export AUTH_TEST_PRIVATE_REPO="your-org/private-repo" # optional +# export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional +# export AUTH_TEST_GHE_REPO="org/repo@ghe-host.ghe.com" # optional +# export AUTH_TEST_ADO_REPO="org/project/_git/repo" # optional +# +# # 3. Tokens — set as many as you want to test: +# export GITHUB_APM_PAT="github_pat_..." # fine-grained PAT +# export GITHUB_APM_PAT_YOURORG="github_pat_..." # per-org PAT +# export GITHUB_TOKEN="ghp_..." 
# classic PAT fallback +# export GH_TOKEN="ghu_..." # gh-cli OAuth fallback +# export ADO_APM_PAT="ado-pat-here" # Azure DevOps PAT +# +# # 4. Run: # ./scripts/test-auth-acceptance.sh # -# CI USAGE (GitHub Actions): +# Scenarios that need missing env vars auto-skip with SKIP status. +# +# --------------------------------------------------------------------------- +# CI USAGE (GitHub Actions) +# --------------------------------------------------------------------------- +# # Triggered via workflow_dispatch. Secrets injected as env vars. # See .github/workflows/auth-acceptance.yml # @@ -27,7 +50,7 @@ set -uo pipefail # --------------------------------------------------------------------------- -# Colors & symbols +# Colors & symbols (ASCII-safe, no emojis) # --------------------------------------------------------------------------- RED='\033[0;31m' GREEN='\033[0;32m' @@ -37,6 +60,11 @@ DIM='\033[2m' BOLD='\033[1m' NC='\033[0m' +SYM_PASS="[+]" +SYM_FAIL="[x]" +SYM_SKIP="[-]" +SYM_TEST="[>]" + # --------------------------------------------------------------------------- # Counters # --------------------------------------------------------------------------- @@ -52,11 +80,14 @@ APM_BINARY="${APM_BINARY:-apm}" AUTH_TEST_PUBLIC_REPO="${AUTH_TEST_PUBLIC_REPO:-microsoft/apm-sample-package}" AUTH_TEST_PRIVATE_REPO="${AUTH_TEST_PRIVATE_REPO:-}" AUTH_TEST_EMU_REPO="${AUTH_TEST_EMU_REPO:-}" +AUTH_TEST_GHE_REPO="${AUTH_TEST_GHE_REPO:-}" # format: org/repo@host +AUTH_TEST_ADO_REPO="${AUTH_TEST_ADO_REPO:-}" # Azure DevOps repo # Stash original env so we can restore between tests _ORIG_GITHUB_APM_PAT="${GITHUB_APM_PAT:-}" _ORIG_GITHUB_TOKEN="${GITHUB_TOKEN:-}" _ORIG_GH_TOKEN="${GH_TOKEN:-}" +_ORIG_ADO_APM_PAT="${ADO_APM_PAT:-}" # --------------------------------------------------------------------------- # Temp dir & cleanup @@ -77,14 +108,14 @@ log_header() { log_scenario() { echo "" - echo -e "${BOLD}🧪 Scenario: $1${NC}" + echo -e "${BOLD}${SYM_TEST} Scenario: $1${NC}" } 
record_pass() { local name="$1" TESTS_PASSED=$((TESTS_PASSED + 1)) RESULTS+=("PASS $name") - echo -e " ${GREEN}✅ PASS${NC} — $name" + echo -e " ${GREEN}${SYM_PASS} PASS${NC} -- $name" } record_fail() { @@ -92,7 +123,7 @@ record_fail() { local detail="${2:-}" TESTS_FAILED=$((TESTS_FAILED + 1)) RESULTS+=("FAIL $name") - echo -e " ${RED}❌ FAIL${NC} — $name" + echo -e " ${RED}${SYM_FAIL} FAIL${NC} -- $name" if [[ -n "$detail" ]]; then echo -e " ${DIM} $detail${NC}" fi @@ -103,7 +134,7 @@ record_skip() { local reason="${2:-missing env var}" TESTS_SKIPPED=$((TESTS_SKIPPED + 1)) RESULTS+=("SKIP $name") - echo -e " ${YELLOW}⏭️ SKIP${NC} — $name ($reason)" + echo -e " ${YELLOW}${SYM_SKIP} SKIP${NC} -- $name ($reason)" } # Prepare a minimal apm.yml in a fresh temp directory and echo the path. @@ -191,6 +222,7 @@ restore_auth() { [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" [[ -n "$_ORIG_GITHUB_TOKEN" ]] && export GITHUB_TOKEN="$_ORIG_GITHUB_TOKEN" [[ -n "$_ORIG_GH_TOKEN" ]] && export GH_TOKEN="$_ORIG_GH_TOKEN" + [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" } # Derive the org-env-suffix from an owner/repo string. 
@@ -486,14 +518,14 @@ test_scenario_10_verbose_contract() { # Both should fail APM_EXIT="$non_verbose_exit" - assert_exit_code 1 "$name — non-verbose fails" || ok=false + assert_exit_code 1 "$name -- non-verbose fails" || ok=false APM_EXIT="$verbose_exit" - assert_exit_code 1 "$name — verbose fails" || ok=false + assert_exit_code 1 "$name -- verbose fails" || ok=false # Non-verbose: should NOT expose auth resolution details APM_OUTPUT="$non_verbose_output" - assert_output_not_contains "Auth resolved:" "$name — non-verbose hides auth details" || ok=false - assert_output_contains "--verbose" "$name — non-verbose hints at --verbose" || ok=false + assert_output_not_contains "Auth resolved:" "$name -- non-verbose hides auth details" || ok=false + assert_output_contains "--verbose" "$name -- non-verbose hints at --verbose" || ok=false # Verbose: should show auth diagnostic info APM_OUTPUT="$verbose_output" @@ -501,7 +533,7 @@ test_scenario_10_verbose_contract() { if echo "$verbose_output" | grep -qiE "Auth resolved|unauthenticated|API .* →"; then : # ok else - record_fail "$name — verbose shows auth steps" "no auth diagnostic lines found" + record_fail "$name -- verbose shows auth steps" "no auth diagnostic lines found" ok=false fi @@ -510,6 +542,274 @@ test_scenario_10_verbose_contract() { restore_auth } +# --------------------------------------------------------------------------- +# Scenario 11: GH_TOKEN fallback (lowest priority global env var) +# --------------------------------------------------------------------------- +test_scenario_11_gh_token_fallback() { + local name="GH_TOKEN fallback (lowest priority)" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + + # Use GH_TOKEN or fall back to any available token for the test + local token="${_ORIG_GH_TOKEN:-${_ORIG_GITHUB_TOKEN:-${_ORIG_GITHUB_APM_PAT:-}}}" + if [[ -z "$token" ]]; then + record_skip "$name" "No token available 
(need GH_TOKEN, GITHUB_TOKEN, or GITHUB_APM_PAT)" + return + fi + + unset_all_auth + export GH_TOKEN="$token" + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + assert_output_contains "source=GH_TOKEN" "$name -- shows GH_TOKEN source" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 12: EMU internal repo with org-scoped fine-grained PAT +# --------------------------------------------------------------------------- +test_scenario_12_emu_internal_repo() { + local name="EMU internal repo" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_EMU_REPO" ]]; then + record_skip "$name" "AUTH_TEST_EMU_REPO not set" + return + fi + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + unset_all_auth + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + + run_apm_install "$AUTH_TEST_EMU_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + # Should show the auth chain: unauth fails, then token succeeds + assert_output_contains "retrying with token|source=GITHUB_APM_PAT" \ + "$name -- token used for EMU repo" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 13: Credential helper only (no env vars) +# --------------------------------------------------------------------------- +test_scenario_13_credential_helper_only() { + local name="Credential helper only (gh auth / keychain)" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + + # Verify gh auth is available + if ! command -v gh &>/dev/null || ! 
gh auth status &>/dev/null; then + record_skip "$name" "gh CLI not authenticated (run 'gh auth login' first)" + return + fi + + unset_all_auth + # Leave GIT_TERMINAL_PROMPT unset so credential helpers CAN run + + run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + assert_output_contains "credential" "$name -- credential fill used" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 14: Token type detection (fine-grained vs classic) +# --------------------------------------------------------------------------- +test_scenario_14_token_type_detection() { + local name="Token type detection in verbose output" + log_scenario "$name" + + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + unset_all_auth + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + + run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + # Should show type= in auth resolved line + assert_output_contains "type=(fine-grained|classic|oauth)" \ + "$name -- token type detected and shown" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 15: Mixed manifest (public + private in same apm.yml) +# --------------------------------------------------------------------------- +test_scenario_15_mixed_manifest() { + local name="Mixed manifest: public + private" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" + return + fi + if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then + record_skip "$name" "GITHUB_APM_PAT not set" + return + fi + + # Create apm.yml with BOTH public and private deps + local dir + dir="$(mktemp -d "$WORK_DIR/test-XXXXXX")" + cat > 
"$dir/apm.yml" <&1)" && APM_EXIT=0 || APM_EXIT=$? + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + # Both should be installed + assert_output_contains "Installed.*2|2.*dependenc" \ + "$name -- both deps installed" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 16: ADO repo with ADO_APM_PAT +# --------------------------------------------------------------------------- +test_scenario_16_ado_repo() { + local name="ADO repo with ADO_APM_PAT" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_ADO_REPO" ]]; then + record_skip "$name" "AUTH_TEST_ADO_REPO not set" + return + fi + if [[ -z "$_ORIG_ADO_APM_PAT" ]]; then + record_skip "$name" "ADO_APM_PAT not set" + return + fi + + unset_all_auth + export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" + + run_apm_install "$AUTH_TEST_ADO_REPO" --verbose + + local ok=true + assert_exit_code 0 "$name -- succeeds" || ok=false + + $ok && record_pass "$name" + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 17: ADO repo without ADO_APM_PAT (should fail, no credential fill) +# --------------------------------------------------------------------------- +test_scenario_17_ado_no_pat() { + local name="ADO repo without ADO_APM_PAT (no credential fill)" + log_scenario "$name" + + if [[ -z "$AUTH_TEST_ADO_REPO" ]]; then + record_skip "$name" "AUTH_TEST_ADO_REPO not set" + return + fi + + unset_all_auth + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "$AUTH_TEST_ADO_REPO" --verbose + + local ok=true + assert_exit_code 1 "$name -- fails without ADO PAT" || ok=false + assert_output_contains "not accessible" "$name -- clear error message" || ok=false + # Should NOT attempt credential fill for ADO + assert_output_not_contains "credential fill" \ + "$name -- no credential fill for ADO" || ok=false + + $ok && record_pass "$name" + unset 
GCM_INTERACTIVE + restore_auth +} + +# --------------------------------------------------------------------------- +# Scenario 18: Fine-grained PAT with wrong resource owner (user-scoped for org repo) +# --------------------------------------------------------------------------- +test_scenario_18_fine_grained_wrong_owner() { + local name="Fine-grained PAT wrong resource owner" + log_scenario "$name" + + # This test requires a fine-grained PAT that is user-scoped (not org-scoped) + # trying to access an org repo — it should fail with 404 + if [[ -z "$AUTH_TEST_EMU_REPO" ]]; then + record_skip "$name" "AUTH_TEST_EMU_REPO not set (need org repo to test)" + return + fi + + # Check if GITHUB_APM_PAT is fine-grained + if [[ "$_ORIG_GITHUB_APM_PAT" != github_pat_* ]]; then + record_skip "$name" "GITHUB_APM_PAT is not a fine-grained PAT (github_pat_*)" + return + fi + + # This scenario only works if the PAT is user-scoped, not org-scoped. + # We can't programmatically detect this, so we try and check the outcome. + # If it succeeds, the PAT has org scope — skip. 
+ unset_all_auth + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never + + run_apm_install "$AUTH_TEST_EMU_REPO" --verbose + + if [[ "$APM_EXIT" -eq 0 ]]; then + record_skip "$name" "PAT has org scope (test needs user-scoped fine-grained PAT)" + else + local ok=true + assert_output_contains "not accessible" "$name -- fails with 404" || ok=false + assert_output_not_contains "Traceback" "$name -- no Python traceback" || ok=false + $ok && record_pass "$name" + fi + + unset GCM_INTERACTIVE + restore_auth +} + # --------------------------------------------------------------------------- # Run all scenarios # --------------------------------------------------------------------------- @@ -520,8 +820,11 @@ echo -e "${DIM}Binary: $APM_BINARY${NC}" echo -e "${DIM}Public repo: $AUTH_TEST_PUBLIC_REPO${NC}" echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}" echo -e "${DIM}EMU repo: ${AUTH_TEST_EMU_REPO:-}${NC}" +echo -e "${DIM}GHE repo: ${AUTH_TEST_GHE_REPO:-}${NC}" +echo -e "${DIM}ADO repo: ${AUTH_TEST_ADO_REPO:-}${NC}" echo "" +# --- P0 core scenarios --- test_scenario_1_public_no_auth test_scenario_2_public_with_pat test_scenario_3_private_global_pat @@ -533,6 +836,16 @@ test_scenario_8_nonexistent_repo test_scenario_9_no_auth_private_repo test_scenario_10_verbose_contract +# --- Extended coverage --- +test_scenario_11_gh_token_fallback +test_scenario_12_emu_internal_repo +test_scenario_13_credential_helper_only +test_scenario_14_token_type_detection +test_scenario_15_mixed_manifest +test_scenario_16_ado_repo +test_scenario_17_ado_no_pat +test_scenario_18_fine_grained_wrong_owner + # --------------------------------------------------------------------------- # Summary # --------------------------------------------------------------------------- @@ -551,9 +864,9 @@ for entry in "${RESULTS[@]}"; do status="${entry%% *}" scenario="${entry#* }" case "$status" in - PASS) echo -e " ${GREEN}✅${NC} $scenario" ;; - 
FAIL) echo -e " ${RED}❌${NC} $scenario" ;; - SKIP) echo -e " ${YELLOW}⏭️${NC} $scenario" ;; + PASS) echo -e " ${GREEN}${SYM_PASS}${NC} $scenario" ;; + FAIL) echo -e " ${RED}${SYM_FAIL}${NC} $scenario" ;; + SKIP) echo -e " ${YELLOW}${SYM_SKIP}${NC} $scenario" ;; esac done From 61f0e8872d2354cd3a181cf49e4ceb97572b9c44 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 11:17:47 +0100 Subject: [PATCH 17/40] refactor: rewrite auth acceptance as world-class E2E test suite Complete rewrite with: - Top-level scenario matrix documenting all 18 scenarios across 4 dimensions (sources A1-A7, hosts H1/H2/H4, repos V1-V3, tokens T1-T5) - Per-function docblock explaining the auth dimension, expected behavior, and WHY the assertion matters - Full local usage guide covering ALL env vars: GITHUB_APM_PAT, GITHUB_APM_PAT_{ORG}, GITHUB_TOKEN, GH_TOKEN, ADO_APM_PAT - All repo types: public, private, EMU internal, ADO - ASCII-safe symbols matching CLI conventions - Auto-skip with clear reason when required env vars are missing - Startup banner showing which tokens and repos are configured Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- scripts/test-auth-acceptance.sh | 1200 +++++++++++++++---------------- 1 file changed, 596 insertions(+), 604 deletions(-) diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh index 2e006920..60ccb751 100755 --- a/scripts/test-auth-acceptance.sh +++ b/scripts/test-auth-acceptance.sh @@ -3,46 +3,102 @@ # APM Auth Acceptance Tests # ============================================================================= # -# Tests the auth resolution chain across ALL token sources, host types, repo -# visibilities, and token types from the acceptance matrix: +# Comprehensive auth E2E test suite covering every dimension of APM's +# authentication resolution chain. Designed to run against a REAL binary +# with REAL tokens and REAL repos — no mocks. 
# -# Sources: GITHUB_APM_PAT_{ORG}, GITHUB_APM_PAT, GITHUB_TOKEN, GH_TOKEN, -# ADO_APM_PAT, git-credential-fill, unauthenticated -# Hosts: github.com, *.ghe.com, custom GHES, Azure DevOps -# Repos: public, private, internal (EMU) -# Tokens: fine-grained (github_pat_), classic (ghp_), OAuth (ghu_) +# ============================================================================= +# SCENARIO MATRIX +# ============================================================================= # -# --------------------------------------------------------------------------- +# Dimension 1: Token Sources (resolution priority order) +# A1 GITHUB_APM_PAT_{ORG} Per-org PAT (highest priority) +# A2 GITHUB_APM_PAT Global APM PAT +# A3 GITHUB_TOKEN GitHub token (fallback) +# A4 GH_TOKEN GH CLI token (lowest env var) +# A5 git credential fill Credential helper (gh auth, keychain) +# A6 (none) Unauthenticated +# A7 ADO_APM_PAT Azure DevOps PAT +# +# Dimension 2: Token Types +# T1 github_pat_* Fine-grained PAT (org-scoped) +# T2 ghp_* Classic PAT +# T3 ghu_* OAuth (gh auth login) +# T5 (invalid) Expired/wrong token +# +# Dimension 3: Host Types +# H1 github.com Public GitHub (unauth-first validation) +# H2 *.ghe.com GHE Cloud (auth-only, no public repos) +# H4 dev.azure.com Azure DevOps (ADO_APM_PAT only, no cred fill) +# +# Dimension 4: Repo Visibility +# V1 Public Works unauthenticated on github.com +# V2 Private Requires auth with repo access +# V3 Internal (EMU) Requires org-scoped fine-grained PAT +# +# ============================================================================= +# SCENARIOS +# ============================================================================= +# +# # | Name | Source | Host | Repo | Key Assertion +# ----|-------------------------------|--------|------|------|--------------------------- +# 1 | Public, no auth | A6 | H1 | V1 | Unauth succeeds +# 2 | Public, PAT set | A2 | H1 | V1 | Unauth-first (rate-limit) +# 3 | Private, GITHUB_APM_PAT | A2 | H1 | V2 | Token 
fallback after 404 +# 4 | Private, per-org PAT | A1 | H1 | V2 | Per-org source shown +# 5 | Priority: per-org > global | A1+A2 | H1 | V2 | Per-org wins +# 6 | Fallback: GITHUB_TOKEN | A3 | H1 | V2 | GITHUB_TOKEN source shown +# 7 | Fallback: GH_TOKEN | A4 | H1 | V2 | GH_TOKEN source shown +# 8 | Credential helper only | A5 | H1 | V2 | credential fill used +# 9 | EMU internal repo | A2 | H1 | V3 | Token needed for internal +# 10 | Mixed manifest: pub + priv | A2 | H1 | V1+2 | Both deps installed +# 11 | Token type detection | A2 | H1 | V1 | type=fine-grained|classic +# 12 | ADO repo with ADO_APM_PAT | A7 | H4 | V2 | ADO PAT used +# 13 | ADO no PAT (no cred fill) | -- | H4 | V2 | Fails, no cred fill +# 14 | Invalid token, graceful fail | A2(bad)| H1 | V2 | No crash, actionable msg +# 15 | Nonexistent repo | A6 | H1 | -- | Clear error message +# 16 | No auth, private repo | A6 | H1 | V2 | Suggests auth guidance +# 17 | Fine-grained wrong owner | A2 | H1 | V3 | Fails, no crash +# 18 | Verbose output contract | -- | H1 | -- | Auth details only w/ flag +# +# ============================================================================= # LOCAL USAGE -# --------------------------------------------------------------------------- +# ============================================================================= # -# # 1. Required — binary path: +# # 1. Build binary (from repo root): +# uv run pyinstaller build/apm.spec --distpath dist --workpath build/tmp --noconfirm +# +# # 2. Set binary path: # export APM_BINARY="/path/to/dist/apm/apm" # -# # 2. Test repos (public has a default; others optional): -# export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" # default -# export AUTH_TEST_PRIVATE_REPO="your-org/private-repo" # optional -# export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional -# export AUTH_TEST_GHE_REPO="org/repo@ghe-host.ghe.com" # optional -# export AUTH_TEST_ADO_REPO="org/project/_git/repo" # optional +# # 3. 
Set test repos (only PUBLIC_REPO has a default): +# export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" # default +# export AUTH_TEST_PRIVATE_REPO="your-org/your-private-repo" # optional +# export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional +# export AUTH_TEST_ADO_REPO="org/project/_git/repo" # optional # -# # 3. Tokens — set as many as you want to test: -# export GITHUB_APM_PAT="github_pat_..." # fine-grained PAT -# export GITHUB_APM_PAT_YOURORG="github_pat_..." # per-org PAT -# export GITHUB_TOKEN="ghp_..." # classic PAT fallback -# export GH_TOKEN="ghu_..." # gh-cli OAuth fallback -# export ADO_APM_PAT="ado-pat-here" # Azure DevOps PAT +# # 4. Set ALL tokens you want to test (missing = scenarios skip): +# export GITHUB_APM_PAT="github_pat_..." # fine-grained, org-scoped +# export GITHUB_APM_PAT_MYORG="github_pat_..." # per-org PAT (MYORG = uppercase org) +# export GITHUB_TOKEN="ghp_..." # classic PAT fallback +# export GH_TOKEN="$(gh auth token 2>/dev/null)" # OAuth from gh CLI +# export ADO_APM_PAT="ado-pat-here" # Azure DevOps PAT # -# # 4. Run: +# # 5. Run: # ./scripts/test-auth-acceptance.sh # -# Scenarios that need missing env vars auto-skip with SKIP status. +# Scenarios auto-SKIP when their required env vars or repos are missing. +# A minimal run (no tokens) still tests scenarios 1, 15, 18. # -# --------------------------------------------------------------------------- +# ============================================================================= # CI USAGE (GitHub Actions) -# --------------------------------------------------------------------------- +# ============================================================================= +# +# Triggered via workflow_dispatch. Configure secrets in the +# 'auth-acceptance' environment: +# AUTH_TEST_GITHUB_APM_PAT, AUTH_TEST_GITHUB_TOKEN, +# AUTH_TEST_GH_TOKEN, AUTH_TEST_ADO_APM_PAT # -# Triggered via workflow_dispatch. Secrets injected as env vars. 
# See .github/workflows/auth-acceptance.yml # # ============================================================================= @@ -50,7 +106,7 @@ set -uo pipefail # --------------------------------------------------------------------------- -# Colors & symbols (ASCII-safe, no emojis) +# Logging (matches existing scripts/test-integration.sh style) # --------------------------------------------------------------------------- RED='\033[0;31m' GREEN='\033[0;32m' @@ -60,35 +116,45 @@ DIM='\033[2m' BOLD='\033[1m' NC='\033[0m' -SYM_PASS="[+]" -SYM_FAIL="[x]" -SYM_SKIP="[-]" -SYM_TEST="[>]" +log_info() { echo -e "${BLUE}[i] $1${NC}"; } +log_success() { echo -e "${GREEN}[+] $1${NC}"; } +log_error() { echo -e "${RED}[x] $1${NC}"; } +log_test() { echo -e "${BOLD}[>] $1${NC}"; } +log_dim() { echo -e "${DIM} $1${NC}"; } # --------------------------------------------------------------------------- -# Counters +# Counters & state # --------------------------------------------------------------------------- TESTS_PASSED=0 TESTS_FAILED=0 TESTS_SKIPPED=0 -RESULTS=() # array of "STATUS scenario_name" +RESULTS=() # --------------------------------------------------------------------------- -# Config +# Config — repos and binary # --------------------------------------------------------------------------- APM_BINARY="${APM_BINARY:-apm}" AUTH_TEST_PUBLIC_REPO="${AUTH_TEST_PUBLIC_REPO:-microsoft/apm-sample-package}" AUTH_TEST_PRIVATE_REPO="${AUTH_TEST_PRIVATE_REPO:-}" AUTH_TEST_EMU_REPO="${AUTH_TEST_EMU_REPO:-}" -AUTH_TEST_GHE_REPO="${AUTH_TEST_GHE_REPO:-}" # format: org/repo@host -AUTH_TEST_ADO_REPO="${AUTH_TEST_ADO_REPO:-}" # Azure DevOps repo +AUTH_TEST_ADO_REPO="${AUTH_TEST_ADO_REPO:-}" -# Stash original env so we can restore between tests +# --------------------------------------------------------------------------- +# Config — stash ALL original tokens (restored between tests) +# --------------------------------------------------------------------------- 
_ORIG_GITHUB_APM_PAT="${GITHUB_APM_PAT:-}" _ORIG_GITHUB_TOKEN="${GITHUB_TOKEN:-}" _ORIG_GH_TOKEN="${GH_TOKEN:-}" _ORIG_ADO_APM_PAT="${ADO_APM_PAT:-}" +# Detect any per-org PATs already set (GITHUB_APM_PAT_*) +declare -A _ORIG_PER_ORG_PATS +while IFS='=' read -r name val; do + if [[ "$name" == GITHUB_APM_PAT_* && "$name" != "GITHUB_APM_PAT" ]]; then + _ORIG_PER_ORG_PATS["$name"]="$val" + fi +done < <(env) + # --------------------------------------------------------------------------- # Temp dir & cleanup # --------------------------------------------------------------------------- @@ -99,270 +165,247 @@ trap 'rm -rf "$WORK_DIR"' EXIT # Helpers # --------------------------------------------------------------------------- -log_header() { - echo "" - echo -e "${BOLD}${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" - echo -e "${BOLD}${BLUE} $1${NC}" - echo -e "${BOLD}${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" -} - -log_scenario() { - echo "" - echo -e "${BOLD}${SYM_TEST} Scenario: $1${NC}" -} - -record_pass() { - local name="$1" - TESTS_PASSED=$((TESTS_PASSED + 1)) - RESULTS+=("PASS $name") - echo -e " ${GREEN}${SYM_PASS} PASS${NC} -- $name" +# Unset ALL auth env vars for a clean test slate. +unset_all_auth() { + unset GITHUB_APM_PAT 2>/dev/null || true + unset GITHUB_TOKEN 2>/dev/null || true + unset GH_TOKEN 2>/dev/null || true + unset ADO_APM_PAT 2>/dev/null || true + # Unset any GITHUB_APM_PAT_* per-org vars + while IFS='=' read -r name _; do + if [[ "$name" == GITHUB_APM_PAT_* ]]; then + unset "$name" 2>/dev/null || true + fi + done < <(env) + # Block interactive credential prompts + export GIT_TERMINAL_PROMPT=0 + export GCM_INTERACTIVE=never } -record_fail() { - local name="$1" - local detail="${2:-}" - TESTS_FAILED=$((TESTS_FAILED + 1)) - RESULTS+=("FAIL $name") - echo -e " ${RED}${SYM_FAIL} FAIL${NC} -- $name" - if [[ -n "$detail" ]]; then - echo -e " ${DIM} $detail${NC}" - fi +# Restore original token env vars. 
+restore_auth() { + unset_all_auth + unset GIT_TERMINAL_PROMPT GCM_INTERACTIVE 2>/dev/null || true + [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + [[ -n "$_ORIG_GITHUB_TOKEN" ]] && export GITHUB_TOKEN="$_ORIG_GITHUB_TOKEN" + [[ -n "$_ORIG_GH_TOKEN" ]] && export GH_TOKEN="$_ORIG_GH_TOKEN" + [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" + for name in "${!_ORIG_PER_ORG_PATS[@]}"; do + export "$name=${_ORIG_PER_ORG_PATS[$name]}" + done } -record_skip() { - local name="$1" - local reason="${2:-missing env var}" - TESTS_SKIPPED=$((TESTS_SKIPPED + 1)) - RESULTS+=("SKIP $name") - echo -e " ${YELLOW}${SYM_SKIP} SKIP${NC} -- $name ($reason)" +# Derive org env suffix: "my-org/repo" -> "MY_ORG" +org_env_suffix() { + local owner="${1%%/*}" + echo "$owner" | tr '[:lower:]-' '[:upper:]_' } -# Prepare a minimal apm.yml in a fresh temp directory and echo the path. -# Usage: test_dir=$(setup_test_dir "owner/repo") +# Create a temp dir with minimal apm.yml containing given deps. +# Usage: dir=$(setup_test_dir "owner/repo" ["owner2/repo2" ...]) setup_test_dir() { - local package="$1" local dir dir="$(mktemp -d "$WORK_DIR/test-XXXXXX")" - cat > "$dir/apm.yml" < "$dir/apm.yml" echo "$dir" } -# Run apm install, capturing combined stdout+stderr. -# Returns exit code. Output is stored in $APM_OUTPUT. -# Usage: run_apm_install [extra_args...] -run_apm_install() { +# Run apm install in an isolated temp dir. Sets APM_OUTPUT and APM_EXIT. +# Usage: run_install [extra_args...] +# or: run_install_manifest [extra_args...] (for pre-built dirs) +run_install() { local package="$1"; shift local dir dir="$(setup_test_dir "$package")" - APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install "$@" 2>&1)" && APM_EXIT=0 || APM_EXIT=$? } -# Assert that $APM_OUTPUT contains a pattern (extended grep). 
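The `org_env_suffix` helper added above maps a repo's owner to the suffix used in per-org PAT variable names. A standalone sketch of that same `tr` transform, runnable outside the harness (the example repo names are illustrative):

```shell
#!/usr/bin/env bash
# Derive a per-org env-var suffix from an "owner/repo" string:
# keep only the owner (strip everything from the first "/"), then
# uppercase it and map "-" to "_" so it is a valid env-var fragment.
org_env_suffix() {
  local owner="${1%%/*}"                    # "my-org/repo" -> "my-org"
  echo "$owner" | tr '[:lower:]-' '[:upper:]_'
}

org_env_suffix "my-org/repo"                   # MY_ORG
org_env_suffix "microsoft/apm-sample-package"  # MICROSOFT
```

With this convention, a token for `my-org` repos lives in `GITHUB_APM_PAT_MY_ORG`, which is exactly the variable name the per-org scenarios below export.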
-assert_output_contains() { - local pattern="$1" - local msg="${2:-output should contain '$pattern'}" - if echo "$APM_OUTPUT" | grep -qiE "$pattern"; then - return 0 - else - record_fail "$msg" "pattern not found: $pattern" - return 1 - fi +run_install_manifest() { + local dir="$1"; shift + APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install "$@" 2>&1)" && APM_EXIT=0 || APM_EXIT=$? } -# Assert that $APM_OUTPUT does NOT contain a pattern. -assert_output_not_contains() { - local pattern="$1" - local msg="${2:-output should not contain '$pattern'}" - if echo "$APM_OUTPUT" | grep -qiE "$pattern"; then - record_fail "$msg" "unexpected pattern found: $pattern" - return 1 - else - return 0 +# Assertions — set $SCENARIO_OK=false on failure +assert_exit() { + local expected="$1" msg="$2" + if [[ "$APM_EXIT" -ne "$expected" ]]; then + log_error " FAIL: $msg (expected exit=$expected, got=$APM_EXIT)" + SCENARIO_OK=false; return 1 fi } -# Assert exit code. -assert_exit_code() { - local expected="$1" - local msg="${2:-exit code should be $expected}" - if [[ "$APM_EXIT" -eq "$expected" ]]; then - return 0 - else - record_fail "$msg" "expected exit=$expected, got exit=$APM_EXIT" - return 1 +assert_contains() { + local pattern="$1" msg="$2" + if ! echo "$APM_OUTPUT" | grep -qiE "$pattern"; then + log_error " FAIL: $msg" + log_dim "pattern not found: $pattern" + SCENARIO_OK=false; return 1 fi } -# Unset all auth env vars to guarantee a clean slate. 
-unset_all_auth() { - unset GITHUB_APM_PAT 2>/dev/null || true - unset GITHUB_TOKEN 2>/dev/null || true - unset GH_TOKEN 2>/dev/null || true - # Unset any per-org vars that may have been set - while IFS='=' read -r name _; do - if [[ "$name" == GITHUB_APM_PAT_* ]]; then - unset "$name" 2>/dev/null || true - fi - done < <(env) +assert_not_contains() { + local pattern="$1" msg="$2" + if echo "$APM_OUTPUT" | grep -qiE "$pattern"; then + log_error " FAIL: $msg" + log_dim "unexpected pattern: $pattern" + SCENARIO_OK=false; return 1 + fi } -# Restore original auth env vars. -restore_auth() { - unset_all_auth - [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" - [[ -n "$_ORIG_GITHUB_TOKEN" ]] && export GITHUB_TOKEN="$_ORIG_GITHUB_TOKEN" - [[ -n "$_ORIG_GH_TOKEN" ]] && export GH_TOKEN="$_ORIG_GH_TOKEN" - [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" +# Record test result +record_pass() { TESTS_PASSED=$((TESTS_PASSED+1)); RESULTS+=("PASS $1"); log_success "PASS: $1"; } +record_fail() { TESTS_FAILED=$((TESTS_FAILED+1)); RESULTS+=("FAIL $1"); log_error "FAIL: $1"; } +record_skip() { TESTS_SKIPPED=$((TESTS_SKIPPED+1)); RESULTS+=("SKIP $1"); echo -e " ${YELLOW}[-] SKIP: $1${NC} ($2)"; } + +# Check if a required env var is set; skip scenario if not +require_env() { + local var_name="$1" scenario_name="$2" + local val="${!var_name:-}" + if [[ -z "$val" ]]; then + record_skip "$scenario_name" "$var_name not set" + return 1 + fi } -# Derive the org-env-suffix from an owner/repo string. 
-# "my-org/repo" → "MY_ORG" -org_env_suffix() { - local owner="${1%%/*}" - echo "$owner" | tr '[:lower:]-' '[:upper:]_' +require_repo() { + local var_name="$1" scenario_name="$2" + local val="${!var_name:-}" + if [[ -z "$val" ]]; then + record_skip "$scenario_name" "$var_name not set" + return 1 + fi } -# --------------------------------------------------------------------------- -# Scenario 1: Public repo, no auth -# --------------------------------------------------------------------------- -test_scenario_1_public_no_auth() { - local name="Public repo, no auth" - log_scenario "$name" +# ========================================================================== +# SCENARIO 1: Public repo, no auth (A6, H1, V1) +# -------------------------------------------------------------------------- +# Validates that public repos work with zero tokens. The unauth-first +# validation path should succeed on the first API attempt (200). +# No token source should appear in output. +# ========================================================================== +test_01_public_no_auth() { + local name="01: Public repo, no auth [A6,H1,V1]" + log_test "$name" unset_all_auth - export GIT_TERMINAL_PROMPT=0 - export GCM_INTERACTIVE=never + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + run_install "$AUTH_TEST_PUBLIC_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - assert_output_contains "unauthenticated" "$name — shows unauthenticated access" || ok=false - assert_output_not_contains "source=GITHUB_APM_PAT" "$name — no PAT source shown" || ok=false + assert_exit 0 "install succeeds" + assert_contains "unauthenticated" "tries unauthenticated access" + assert_not_contains "source=GITHUB_APM_PAT" "no PAT source in output" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 2: Public repo, PAT 
set (rate-limit behavior) -# --------------------------------------------------------------------------- -test_scenario_2_public_with_pat() { - local name="Public repo, PAT set" - log_scenario "$name" - - if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi - +# ========================================================================== +# SCENARIO 2: Public repo, global PAT set (A2, H1, V1) +# -------------------------------------------------------------------------- +# Public repos should still validate unauthenticated FIRST to save API +# rate limits, even when a PAT is available. The PAT should only be used +# for the download phase (higher rate limits for git clone). +# ========================================================================== +test_02_public_with_pat() { + local name="02: Public repo, PAT set [A2,H1,V1]" + log_test "$name" + require_env _ORIG_GITHUB_APM_PAT "$name" || return unset_all_auth export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + run_install "$AUTH_TEST_PUBLIC_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - # Public repos try unauthenticated first to save rate limits - assert_output_contains "unauthenticated" "$name — tries unauth first" || ok=false + assert_exit 0 "install succeeds" + assert_contains "unauthenticated" "tries unauth first (rate-limit safe)" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 3: Private repo, global PAT -# --------------------------------------------------------------------------- -test_scenario_3_private_global_pat() { - local name="Private repo, global PAT" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - if [[ 
-z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi - +# ========================================================================== +# SCENARIO 3: Private repo, GITHUB_APM_PAT (A2, H1, V2) +# -------------------------------------------------------------------------- +# Unauth validation returns 404 for private repos. AuthResolver retries +# with GITHUB_APM_PAT. Verbose output must show the fallback chain: +# "Trying unauthenticated" -> 404 -> "retrying with token (source: GITHUB_APM_PAT)" +# ========================================================================== +test_03_private_global_pat() { + local name="03: Private repo, GITHUB_APM_PAT [A2,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + require_env _ORIG_GITHUB_APM_PAT "$name" || return unset_all_auth export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - # Verbose should show the auth fallback chain - assert_output_contains "source=GITHUB_APM_PAT" "$name — shows PAT source" || ok=false + assert_exit 0 "install succeeds" + assert_contains "source=GITHUB_APM_PAT" "shows GITHUB_APM_PAT as source" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 4: Private repo, per-org PAT -# --------------------------------------------------------------------------- -test_scenario_4_private_per_org_pat() { - local name="Private repo, per-org PAT" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi +# ========================================================================== +# SCENARIO 4: Private repo, per-org PAT (A1, H1, V2) +# 
-------------------------------------------------------------------------- +# Per-org PATs (GITHUB_APM_PAT_{ORG}) have highest priority. When set, +# they shadow the global GITHUB_APM_PAT. Verbose must show +# source=GITHUB_APM_PAT_{ORG} +# The org suffix is derived from the repo owner: my-org -> MY_ORG +# ========================================================================== +test_04_private_per_org_pat() { + local name="04: Private repo, per-org PAT [A1,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return local org_suffix org_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO")" local per_org_var="GITHUB_APM_PAT_${org_suffix}" - local per_org_val="${!per_org_var:-}" - if [[ -z "$per_org_val" ]] && [[ -n "$_ORIG_GITHUB_APM_PAT" ]]; then - # Fall back to the global PAT for testing the per-org path - per_org_val="$_ORIG_GITHUB_APM_PAT" - fi + # Use the per-org var if already set, else use global PAT for testing + local per_org_val="${!per_org_var:-${_ORIG_GITHUB_APM_PAT:-}}" if [[ -z "$per_org_val" ]]; then - record_skip "$name" "$per_org_var not set" + record_skip "$name" "$per_org_var and GITHUB_APM_PAT both unset" return fi unset_all_auth export "$per_org_var=$per_org_val" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - assert_output_contains "source=GITHUB_APM_PAT_${org_suffix}" "$name — shows per-org source" || ok=false + assert_exit 0 "install succeeds" + assert_contains "source=GITHUB_APM_PAT_${org_suffix}" "per-org source shown" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 5: Token priority (per-org > global) -# --------------------------------------------------------------------------- -test_scenario_5_token_priority() { - local 
name="Token priority: per-org > global" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi +# ========================================================================== +# SCENARIO 5: Token priority — per-org > global (A1+A2, H1, V2) +# -------------------------------------------------------------------------- +# When BOTH per-org and global PATs are set, per-org must win. +# Verbose output should show source=GITHUB_APM_PAT_{ORG}, not +# source=GITHUB_APM_PAT. +# ========================================================================== +test_05_token_priority() { + local name="05: Priority: per-org > global [A1+A2,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + require_env _ORIG_GITHUB_APM_PAT "$name" || return local org_suffix org_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO")" @@ -371,506 +414,455 @@ test_scenario_5_token_priority() { unset_all_auth export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" export "$per_org_var=$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - # Per-org should win over global - assert_output_contains "source=GITHUB_APM_PAT_${org_suffix}" "$name — per-org wins" || ok=false + assert_exit 0 "install succeeds" + assert_contains "source=GITHUB_APM_PAT_${org_suffix}" "per-org wins over global" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 6: GITHUB_TOKEN fallback -# --------------------------------------------------------------------------- -test_scenario_6_github_token_fallback() { - local 
name="GITHUB_TOKEN fallback" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - if [[ -z "$_ORIG_GITHUB_TOKEN" ]] && [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_TOKEN and GITHUB_APM_PAT not set" +# ========================================================================== +# SCENARIO 6: GITHUB_TOKEN fallback (A3, H1, V2) +# -------------------------------------------------------------------------- +# When GITHUB_APM_PAT is unset but GITHUB_TOKEN is set, the resolver +# falls through: A1(skip) -> A2(skip) -> A3(GITHUB_TOKEN) -> use it. +# Verbose must show source=GITHUB_TOKEN. +# ========================================================================== +test_06_github_token_fallback() { + local name="06: GITHUB_TOKEN fallback [A3,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + + local token="${_ORIG_GITHUB_TOKEN:-${_ORIG_GITHUB_APM_PAT:-}}" + if [[ -z "$token" ]]; then + record_skip "$name" "GITHUB_TOKEN and GITHUB_APM_PAT both unset" return fi - local token="${_ORIG_GITHUB_TOKEN:-$_ORIG_GITHUB_APM_PAT}" - unset_all_auth export GITHUB_TOKEN="$token" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 0 "$name — succeeds" || ok=false - assert_output_contains "source=GITHUB_TOKEN" "$name — shows GITHUB_TOKEN source" || ok=false + assert_exit 0 "install succeeds" + assert_contains "source=GITHUB_TOKEN" "GITHUB_TOKEN source shown" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 7: Invalid token, graceful failure -# --------------------------------------------------------------------------- -test_scenario_7_invalid_token() { - local name="Invalid token, graceful failure" 
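Scenarios 3 through 7 exercise the documented env-var precedence: per-org `GITHUB_APM_PAT_{ORG}` wins over `GITHUB_APM_PAT`, which wins over `GITHUB_TOKEN`, which wins over `GH_TOKEN`. A minimal sketch of that resolution order — the function name is illustrative, not APM's actual AuthResolver implementation:

```shell
#!/usr/bin/env bash
# Resolve a token for "owner/repo" using the precedence the scenarios
# assert on: per-org PAT, then global PAT, then GITHUB_TOKEN, then
# GH_TOKEN. Prints "source=NAME" plus the token; returns 1 when no
# env var is set (the point where credential helpers would take over).
resolve_token() {
  local owner="${1%%/*}"
  local suffix
  suffix="$(echo "$owner" | tr '[:lower:]-' '[:upper:]_')"
  local var
  for var in "GITHUB_APM_PAT_${suffix}" GITHUB_APM_PAT GITHUB_TOKEN GH_TOKEN; do
    if [[ -n "${!var:-}" ]]; then
      echo "source=$var token=${!var}"
      return 0
    fi
  done
  return 1   # fall through to git credential fill / unauthenticated
}
```

For example, with both `GITHUB_APM_PAT_MY_ORG` and `GITHUB_APM_PAT` exported, `resolve_token my-org/repo` reports the per-org source while `resolve_token other/repo` falls back to the global PAT — the shadowing behavior scenario 5 verifies.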
- log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" +# ========================================================================== +# SCENARIO 7: GH_TOKEN fallback — lowest priority env var (A4, H1, V2) +# -------------------------------------------------------------------------- +# GH_TOKEN is the last env var in the chain. Only used when A1-A3 are unset. +# Verbose must show source=GH_TOKEN. +# ========================================================================== +test_07_gh_token_fallback() { + local name="07: GH_TOKEN fallback [A4,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + + local token="${_ORIG_GH_TOKEN:-${_ORIG_GITHUB_APM_PAT:-}}" + if [[ -z "$token" ]]; then + record_skip "$name" "GH_TOKEN and GITHUB_APM_PAT both unset" return fi unset_all_auth - export GITHUB_APM_PAT="ghp_invalidtoken1234567890abcdefghijklmn" - export GIT_TERMINAL_PROMPT=0 - export GCM_INTERACTIVE=never - - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose - - local ok=true - assert_exit_code 1 "$name — fails with exit 1" || ok=false - # Should not crash or produce a traceback - assert_output_not_contains "Traceback" "$name — no Python traceback" || ok=false - - $ok && record_pass "$name" - unset GCM_INTERACTIVE - restore_auth -} - -# --------------------------------------------------------------------------- -# Scenario 8: Nonexistent repo -# --------------------------------------------------------------------------- -test_scenario_8_nonexistent_repo() { - local name="Nonexistent repo" - log_scenario "$name" - - unset_all_auth - export GIT_TERMINAL_PROMPT=0 - export GCM_INTERACTIVE=never + export GH_TOKEN="$token" + SCENARIO_OK=true - run_apm_install "owner/this-repo-does-not-exist-12345" --verbose + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 1 "$name — fails with exit 1" || ok=false - assert_output_contains "not accessible or 
doesn't exist" "$name — clear error message" || ok=false + assert_exit 0 "install succeeds" + assert_contains "source=GH_TOKEN" "GH_TOKEN source shown" - $ok && record_pass "$name" - unset GCM_INTERACTIVE + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 9: No auth, private repo -# --------------------------------------------------------------------------- -test_scenario_9_no_auth_private_repo() { - local name="No auth, private repo" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" +# ========================================================================== +# SCENARIO 8: Credential helper only — no env vars (A5, H1, V2) +# -------------------------------------------------------------------------- +# All env vars unset. The resolver exhausts A1-A4, then falls back to +# git credential fill (gh auth, macOS Keychain, Windows Credential Manager). +# Requires gh auth login or equivalent. Verbose should show "credential". +# ========================================================================== +test_08_credential_helper_only() { + local name="08: Credential helper only [A5,H1,V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + + if ! command -v gh &>/dev/null || ! 
gh auth status &>/dev/null 2>&1; then + record_skip "$name" "gh CLI not authenticated (run 'gh auth login')" return fi unset_all_auth - export GIT_TERMINAL_PROMPT=0 - export GCM_INTERACTIVE=never + # ALLOW credential prompts for this test (undo the block from unset_all_auth) + unset GIT_TERMINAL_PROMPT GCM_INTERACTIVE 2>/dev/null || true + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" + run_install "$AUTH_TEST_PRIVATE_REPO" --verbose - local ok=true - assert_exit_code 1 "$name — fails" || ok=false - assert_output_contains "not accessible|--verbose|GITHUB_APM_PAT|GITHUB_TOKEN|auth" \ - "$name — suggests auth guidance" || ok=false + assert_exit 0 "install succeeds" + assert_contains "credential" "credential fill path used" - $ok && record_pass "$name" - unset GCM_INTERACTIVE + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 10: Verbose vs non-verbose output contract -# --------------------------------------------------------------------------- -test_scenario_10_verbose_contract() { - local name="Verbose vs non-verbose output contract" - log_scenario "$name" - +# ========================================================================== +# SCENARIO 9: EMU internal repo (A2, H1, V3) +# -------------------------------------------------------------------------- +# EMU (Enterprise Managed Users) internal repos are not public. They require +# an org-scoped fine-grained PAT (resource owner = org, not user). +# Unauth returns 404, token must succeed. 
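The credential-helper path that scenario 8 exercises goes through `git credential fill`, which reads a `key=value` description on stdin (terminated by a blank line) and asks the configured helpers — gh, macOS Keychain, Windows Credential Manager — for a matching credential. A sketch of building that request and parsing the response; the helper names here are illustrative, and the real invocation is commented out because it needs a configured helper:

```shell
#!/usr/bin/env bash
# Build the stdin description "git credential fill" expects for a host.
credential_request() {
  printf 'protocol=https\nhost=%s\n\n' "$1"
}

# Extract the password (token) line from a credential-helper response.
# GitHub tokens never contain "=", so splitting on the first "=" is safe.
credential_password() {
  awk -F= '$1 == "password" { print $2 }'
}

# Real call (requires e.g. "gh auth login" beforehand):
#   credential_request github.com | git credential fill | credential_password
```

This is also why `unset_all_auth` exports `GIT_TERMINAL_PROMPT=0` and `GCM_INTERACTIVE=never` for the other scenarios: without them, `git credential fill` can block on an interactive prompt instead of failing fast.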
+# ========================================================================== +test_09_emu_internal_repo() { + local name="09: EMU internal repo [A2,H1,V3]" + log_test "$name" + require_repo AUTH_TEST_EMU_REPO "$name" || return + require_env _ORIG_GITHUB_APM_PAT "$name" || return unset_all_auth - export GIT_TERMINAL_PROMPT=0 - export GCM_INTERACTIVE=never - - # Non-verbose run - run_apm_install "owner/this-repo-does-not-exist-12345" - local non_verbose_output="$APM_OUTPUT" - local non_verbose_exit="$APM_EXIT" + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - # Verbose run - run_apm_install "owner/this-repo-does-not-exist-12345" --verbose - local verbose_output="$APM_OUTPUT" - local verbose_exit="$APM_EXIT" + run_install "$AUTH_TEST_EMU_REPO" --verbose - local ok=true + assert_exit 0 "install succeeds" + assert_contains "retrying with token|source=GITHUB_APM_PAT" "token used for EMU repo" - # Both should fail - APM_EXIT="$non_verbose_exit" - assert_exit_code 1 "$name -- non-verbose fails" || ok=false - APM_EXIT="$verbose_exit" - assert_exit_code 1 "$name -- verbose fails" || ok=false - - # Non-verbose: should NOT expose auth resolution details - APM_OUTPUT="$non_verbose_output" - assert_output_not_contains "Auth resolved:" "$name -- non-verbose hides auth details" || ok=false - assert_output_contains "--verbose" "$name -- non-verbose hints at --verbose" || ok=false - - # Verbose: should show auth diagnostic info - APM_OUTPUT="$verbose_output" - # Verbose output should contain auth-related diagnostic lines - if echo "$verbose_output" | grep -qiE "Auth resolved|unauthenticated|API .* →"; then - : # ok - else - record_fail "$name -- verbose shows auth steps" "no auth diagnostic lines found" - ok=false - fi - - $ok && record_pass "$name" - unset GCM_INTERACTIVE + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 11: GH_TOKEN fallback 
(lowest priority global env var) -# --------------------------------------------------------------------------- -test_scenario_11_gh_token_fallback() { - local name="GH_TOKEN fallback (lowest priority)" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - - # Use GH_TOKEN or fall back to any available token for the test - local token="${_ORIG_GH_TOKEN:-${_ORIG_GITHUB_TOKEN:-${_ORIG_GITHUB_APM_PAT:-}}}" - if [[ -z "$token" ]]; then - record_skip "$name" "No token available (need GH_TOKEN, GITHUB_TOKEN, or GITHUB_APM_PAT)" - return - fi - +# ========================================================================== +# SCENARIO 10: Mixed manifest — public + private (A2, H1, V1+V2) +# -------------------------------------------------------------------------- +# A single apm.yml with BOTH public and private deps. The resolver must +# handle each independently: public validates unauthenticated, private +# requires token. Both should install successfully. 
+# ========================================================================== +test_10_mixed_manifest() { + local name="10: Mixed manifest: public + private [A2,H1,V1+V2]" + log_test "$name" + require_repo AUTH_TEST_PRIVATE_REPO "$name" || return + require_env _ORIG_GITHUB_APM_PAT "$name" || return unset_all_auth - export GH_TOKEN="$token" + export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + local dir + dir="$(setup_test_dir "$AUTH_TEST_PUBLIC_REPO" "$AUTH_TEST_PRIVATE_REPO")" + run_install_manifest "$dir" --verbose - local ok=true - assert_exit_code 0 "$name -- succeeds" || ok=false - assert_output_contains "source=GH_TOKEN" "$name -- shows GH_TOKEN source" || ok=false + assert_exit 0 "install succeeds" + # Both deps should appear in output + assert_contains "Installed.*2|2.*dependenc|Installed.*APM" "both deps installed" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 12: EMU internal repo with org-scoped fine-grained PAT -# --------------------------------------------------------------------------- -test_scenario_12_emu_internal_repo() { - local name="EMU internal repo" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_EMU_REPO" ]]; then - record_skip "$name" "AUTH_TEST_EMU_REPO not set" - return - fi - if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi - +# ========================================================================== +# SCENARIO 11: Token type detection in verbose (A2, H1, V1) +# -------------------------------------------------------------------------- +# Verbose output must include type= in the "Auth resolved" line, correctly +# identifying the token type: fine-grained, classic, oauth, etc. 
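The `type=` field scenario 11 greps for comes from prefix-based classification (the commit message's `detect_token_type()`). GitHub's token prefixes make this cheap to compute locally. A hypothetical sketch — the function name mirrors the commit's, but this is not the actual implementation, only the well-known prefix table:

```shell
#!/usr/bin/env bash
# Classify a GitHub token by its documented prefix:
#   github_pat_ -> fine-grained PAT
#   ghp_        -> classic PAT
#   gho_        -> OAuth app token
# Anything else is reported as "unknown" rather than guessed.
detect_token_type() {
  case "$1" in
    github_pat_*) echo "fine-grained" ;;
    ghp_*)        echo "classic" ;;
    gho_*)        echo "oauth" ;;
    *)            echo "unknown" ;;
  esac
}
```

The four labels line up with the alternation the scenario asserts on: `type=(fine-grained|classic|oauth|unknown)`.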
+# ========================================================================== +test_11_token_type_detection() { + local name="11: Token type detection [A2,H1,V1]" + log_test "$name" + require_env _ORIG_GITHUB_APM_PAT "$name" || return unset_all_auth export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_EMU_REPO" --verbose + run_install "$AUTH_TEST_PUBLIC_REPO" --verbose - local ok=true - assert_exit_code 0 "$name -- succeeds" || ok=false - # Should show the auth chain: unauth fails, then token succeeds - assert_output_contains "retrying with token|source=GITHUB_APM_PAT" \ - "$name -- token used for EMU repo" || ok=false + assert_exit 0 "install succeeds" + assert_contains "type=(fine-grained|classic|oauth|unknown)" "token type detected" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 13: Credential helper only (no env vars) -# --------------------------------------------------------------------------- -test_scenario_13_credential_helper_only() { - local name="Credential helper only (gh auth / keychain)" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - - # Verify gh auth is available - if ! command -v gh &>/dev/null || ! gh auth status &>/dev/null; then - record_skip "$name" "gh CLI not authenticated (run 'gh auth login' first)" - return - fi - +# ========================================================================== +# SCENARIO 12: ADO repo with ADO_APM_PAT (A7, H4, V2) +# -------------------------------------------------------------------------- +# Azure DevOps uses a completely separate auth path: ADO_APM_PAT env var. +# No GitHub env vars apply. No credential fill fallback (ADO excluded). +# Git ls-remote with Basic auth (base64 :PAT). 
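The scenario 12 comment above notes that ADO validation runs `git ls-remote` with Basic auth built from `base64(":PAT")`. A sketch of constructing that header — Azure DevOps accepts a PAT as HTTP Basic auth with an empty username, hence the leading colon; the repo URL in the usage note is illustrative:

```shell
#!/usr/bin/env bash
# Build the Authorization header Azure DevOps expects for a PAT:
# Basic auth with an empty username, i.e. base64 of ":PAT".
ado_auth_header() {
  local pat="$1"
  printf 'Authorization: Basic %s' "$(printf ':%s' "$pat" | base64)"
}

# Usage with git (header passed per invocation, never stored on disk):
#   git -c http.extraHeader="$(ado_auth_header "$ADO_APM_PAT")" \
#     ls-remote https://dev.azure.com/org/project/_git/repo
```

Passing the header via `-c http.extraHeader` keeps the PAT out of the remote URL and out of `.git/config`, which matters when the temp dirs from failed scenarios outlive a run.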
+# ========================================================================== +test_12_ado_repo() { + local name="12: ADO repo with ADO_APM_PAT [A7,H4,V2]" + log_test "$name" + require_repo AUTH_TEST_ADO_REPO "$name" || return + require_env _ORIG_ADO_APM_PAT "$name" || return unset_all_auth - # Leave GIT_TERMINAL_PROMPT unset so credential helpers CAN run + export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PRIVATE_REPO" --verbose + run_install "$AUTH_TEST_ADO_REPO" --verbose - local ok=true - assert_exit_code 0 "$name -- succeeds" || ok=false - assert_output_contains "credential" "$name -- credential fill used" || ok=false + assert_exit 0 "install succeeds" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 14: Token type detection (fine-grained vs classic) -# --------------------------------------------------------------------------- -test_scenario_14_token_type_detection() { - local name="Token type detection in verbose output" - log_scenario "$name" - - if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi - +# ========================================================================== +# SCENARIO 13: ADO without PAT — no credential fill (H4) +# -------------------------------------------------------------------------- +# ADO is explicitly excluded from git credential fill. Without ADO_APM_PAT, +# the operation must fail cleanly. Output must NOT mention "credential fill". 
+# ========================================================================== +test_13_ado_no_pat() { + local name="13: ADO no PAT, no credential fill [H4]" + log_test "$name" + require_repo AUTH_TEST_ADO_REPO "$name" || return unset_all_auth - export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" + SCENARIO_OK=true - run_apm_install "$AUTH_TEST_PUBLIC_REPO" --verbose + run_install "$AUTH_TEST_ADO_REPO" --verbose - local ok=true - assert_exit_code 0 "$name -- succeeds" || ok=false - # Should show type= in auth resolved line - assert_output_contains "type=(fine-grained|classic|oauth)" \ - "$name -- token type detected and shown" || ok=false + assert_exit 1 "fails without ADO PAT" + assert_contains "not accessible" "clear error message" + assert_not_contains "credential fill" "no credential fill for ADO" - $ok && record_pass "$name" + $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth } -# --------------------------------------------------------------------------- -# Scenario 15: Mixed manifest (public + private in same apm.yml) -# --------------------------------------------------------------------------- -test_scenario_15_mixed_manifest() { - local name="Mixed manifest: public + private" - log_scenario "$name" - - if [[ -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - record_skip "$name" "AUTH_TEST_PRIVATE_REPO not set" - return - fi - if [[ -z "$_ORIG_GITHUB_APM_PAT" ]]; then - record_skip "$name" "GITHUB_APM_PAT not set" - return - fi - - # Create apm.yml with BOTH public and private deps - local dir - dir="$(mktemp -d "$WORK_DIR/test-XXXXXX")" - cat > "$dir/apm.yml" < credential fill) +# and produce an actionable error. 
+# ==========================================================================
+test_14_invalid_token() {
+    local name="14: Invalid token, graceful failure [A2-bad,H1,V2]"
+    log_test "$name"
+    require_repo AUTH_TEST_PRIVATE_REPO "$name" || return
 
     unset_all_auth
-    export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT"
+    export GITHUB_APM_PAT="ghp_invalidtoken1234567890abcdefghijklmn"
+    SCENARIO_OK=true
 
-    APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install --verbose 2>&1)" && APM_EXIT=0 || APM_EXIT=$?
+    run_install "$AUTH_TEST_PRIVATE_REPO" --verbose
 
-    local ok=true
-    assert_exit_code 0 "$name -- succeeds" || ok=false
-    # Both should be installed
-    assert_output_contains "Installed.*2|2.*dependenc" \
-        "$name -- both deps installed" || ok=false
+    assert_exit 1 "fails with exit 1"
+    assert_not_contains "Traceback" "no Python traceback"
 
-    $ok && record_pass "$name"
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
     restore_auth
 }
 
-# ---------------------------------------------------------------------------
-# Scenario 16: ADO repo with ADO_APM_PAT
-# ---------------------------------------------------------------------------
-test_scenario_16_ado_repo() {
-    local name="ADO repo with ADO_APM_PAT"
-    log_scenario "$name"
-
-    if [[ -z "$AUTH_TEST_ADO_REPO" ]]; then
-        record_skip "$name" "AUTH_TEST_ADO_REPO not set"
-        return
-    fi
-    if [[ -z "$_ORIG_ADO_APM_PAT" ]]; then
-        record_skip "$name" "ADO_APM_PAT not set"
-        return
-    fi
-
+# ==========================================================================
+# SCENARIO 15: Nonexistent repo (A6, H1)
+# --------------------------------------------------------------------------
+# A repo that doesn't exist should produce a clear, non-confusing message:
+#     "not accessible or doesn't exist"
+# No auth noise since there's nothing to authenticate against.
+# ==========================================================================
+test_15_nonexistent_repo() {
+    local name="15: Nonexistent repo [A6,H1]"
+    log_test "$name"
 
     unset_all_auth
-    export ADO_APM_PAT="$_ORIG_ADO_APM_PAT"
+    SCENARIO_OK=true
 
-    run_apm_install "$AUTH_TEST_ADO_REPO" --verbose
+    run_install "owner/this-repo-does-not-exist-12345" --verbose
 
-    local ok=true
-    assert_exit_code 0 "$name -- succeeds" || ok=false
+    assert_exit 1 "fails with exit 1"
+    assert_contains "not accessible or doesn.t exist" "clear error message"
 
-    $ok && record_pass "$name"
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
     restore_auth
 }
 
-# ---------------------------------------------------------------------------
-# Scenario 17: ADO repo without ADO_APM_PAT (should fail, no credential fill)
-# ---------------------------------------------------------------------------
-test_scenario_17_ado_no_pat() {
-    local name="ADO repo without ADO_APM_PAT (no credential fill)"
-    log_scenario "$name"
-
-    if [[ -z "$AUTH_TEST_ADO_REPO" ]]; then
-        record_skip "$name" "AUTH_TEST_ADO_REPO not set"
-        return
-    fi
-
+# ==========================================================================
+# SCENARIO 16: No auth, private repo (A6, H1, V2)
+# --------------------------------------------------------------------------
+# Private repo with zero tokens and credential helpers blocked.
+# Must fail with actionable guidance: suggest setting env vars or
+# running with --verbose for diagnostics.
+# ==========================================================================
+test_16_no_auth_private_repo() {
+    local name="16: No auth, private repo [A6,H1,V2]"
+    log_test "$name"
+    require_repo AUTH_TEST_PRIVATE_REPO "$name" || return
 
     unset_all_auth
-    export GIT_TERMINAL_PROMPT=0
-    export GCM_INTERACTIVE=never
+    SCENARIO_OK=true
 
-    run_apm_install "$AUTH_TEST_ADO_REPO" --verbose
+    run_install "$AUTH_TEST_PRIVATE_REPO"
 
-    local ok=true
-    assert_exit_code 1 "$name -- fails without ADO PAT" || ok=false
-    assert_output_contains "not accessible" "$name -- clear error message" || ok=false
-    # Should NOT attempt credential fill for ADO
-    assert_output_not_contains "credential fill" \
-        "$name -- no credential fill for ADO" || ok=false
+    assert_exit 1 "fails"
+    assert_contains "not accessible|--verbose|GITHUB_APM_PAT|auth" "suggests auth guidance"
 
-    $ok && record_pass "$name"
-    unset GCM_INTERACTIVE
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
     restore_auth
 }
 
-# ---------------------------------------------------------------------------
-# Scenario 18: Fine-grained PAT with wrong resource owner (user-scoped for org repo)
-# ---------------------------------------------------------------------------
-test_scenario_18_fine_grained_wrong_owner() {
-    local name="Fine-grained PAT wrong resource owner"
-    log_scenario "$name"
-
-    # This test requires a fine-grained PAT that is user-scoped (not org-scoped)
-    # trying to access an org repo — it should fail with 404
-    if [[ -z "$AUTH_TEST_EMU_REPO" ]]; then
-        record_skip "$name" "AUTH_TEST_EMU_REPO not set (need org repo to test)"
-        return
-    fi
+# ==========================================================================
+# SCENARIO 17: Fine-grained PAT, wrong resource owner (A2, H1, V3)
+# --------------------------------------------------------------------------
+# A user-scoped fine-grained PAT (github_pat_*) CANNOT access org repos,
+# even internal ones. Must fail without crash. This is a common gotcha
+# for EMU users who create user-scoped PATs instead of org-scoped.
+# Auto-skips if the PAT actually has org scope (succeeds).
+# ==========================================================================
+test_17_fine_grained_wrong_owner() {
+    local name="17: Fine-grained PAT wrong owner [A2,H1,V3]"
+    log_test "$name"
+    require_repo AUTH_TEST_EMU_REPO "$name" || return
+    require_env _ORIG_GITHUB_APM_PAT "$name" || return
 
-    # Check if GITHUB_APM_PAT is fine-grained
     if [[ "$_ORIG_GITHUB_APM_PAT" != github_pat_* ]]; then
-        record_skip "$name" "GITHUB_APM_PAT is not a fine-grained PAT (github_pat_*)"
+        record_skip "$name" "GITHUB_APM_PAT is not fine-grained (github_pat_*)"
         return
     fi
 
-    # This scenario only works if the PAT is user-scoped, not org-scoped.
-    # We can't programmatically detect this, so we try and check the outcome.
-    # If it succeeds, the PAT has org scope — skip.
     unset_all_auth
     export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT"
-    export GIT_TERMINAL_PROMPT=0
-    export GCM_INTERACTIVE=never
+    SCENARIO_OK=true
 
-    run_apm_install "$AUTH_TEST_EMU_REPO" --verbose
+    run_install "$AUTH_TEST_EMU_REPO" --verbose
 
     if [[ "$APM_EXIT" -eq 0 ]]; then
-        record_skip "$name" "PAT has org scope (test needs user-scoped fine-grained PAT)"
+        # PAT has org scope — can't test wrong-owner with this token
+        record_skip "$name" "PAT has org scope (need user-scoped PAT to test)"
     else
-        local ok=true
-        assert_output_contains "not accessible" "$name -- fails with 404" || ok=false
-        assert_output_not_contains "Traceback" "$name -- no Python traceback" || ok=false
-        $ok && record_pass "$name"
+        assert_not_contains "Traceback" "no Python traceback"
+        $SCENARIO_OK && record_pass "$name" || record_fail "$name"
     fi
+    restore_auth
+}
+
+# ==========================================================================
+# SCENARIO 18: Verbose vs non-verbose output contract
+# --------------------------------------------------------------------------
+# The core UX contract: auth diagnostics are INVISIBLE without --verbose.
+# Run the SAME failing operation twice (with and without --verbose) and
+# verify:
+#   Non-verbose: NO "Auth resolved:", HAS "--verbose" hint
+#   Verbose:     HAS auth diagnostic lines (Auth resolved, API, unauthenticated)
+# ==========================================================================
+test_18_verbose_contract() {
+    local name="18: Verbose output contract"
+    log_test "$name"
+    unset_all_auth
+    SCENARIO_OK=true
 
-    unset GCM_INTERACTIVE
+    # Non-verbose run
+    run_install "owner/this-repo-does-not-exist-12345"
+    local nv_output="$APM_OUTPUT" nv_exit="$APM_EXIT"
+
+    # Verbose run
+    run_install "owner/this-repo-does-not-exist-12345" --verbose
+    local v_output="$APM_OUTPUT" v_exit="$APM_EXIT"
+
+    # Both should fail
+    APM_EXIT="$nv_exit"
+    assert_exit 1 "non-verbose fails"
+    APM_EXIT="$v_exit"
+    assert_exit 1 "verbose fails"
+
+    # Non-verbose: auth details hidden, --verbose hint shown
+    APM_OUTPUT="$nv_output"
+    assert_not_contains "Auth resolved:" "non-verbose hides auth details"
+    assert_contains "--verbose" "non-verbose hints at --verbose"
+
+    # Verbose: auth details shown
+    APM_OUTPUT="$v_output"
+    if ! echo "$v_output" | grep -qiE "Auth resolved|unauthenticated|API .* →|API .* ->"; then
+        log_error "  FAIL: verbose output missing auth diagnostic lines"
+        SCENARIO_OK=false
+    fi
+
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
     restore_auth
 }
 
-# ---------------------------------------------------------------------------
-# Run all scenarios
-# ---------------------------------------------------------------------------
+# ==========================================================================
+# RUN ALL SCENARIOS
+# ==========================================================================
 
-log_header "APM Auth Acceptance Tests"
 echo ""
-echo -e "${DIM}Binary: $APM_BINARY${NC}"
-echo -e "${DIM}Public repo: $AUTH_TEST_PUBLIC_REPO${NC}"
-echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}"
-echo -e "${DIM}EMU repo: ${AUTH_TEST_EMU_REPO:-}${NC}"
-echo -e "${DIM}GHE repo: ${AUTH_TEST_GHE_REPO:-}${NC}"
-echo -e "${DIM}ADO repo: ${AUTH_TEST_ADO_REPO:-}${NC}"
+echo -e "${BOLD}${BLUE}================================================================${NC}"
+echo -e "${BOLD}${BLUE}  APM Auth Acceptance Tests${NC}"
+echo -e "${BOLD}${BLUE}================================================================${NC}"
+echo ""
+echo -e "${DIM}Binary:       ${APM_BINARY}${NC}"
+echo -e "${DIM}Public repo:  ${AUTH_TEST_PUBLIC_REPO}${NC}"
+echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}"
+echo -e "${DIM}EMU repo:     ${AUTH_TEST_EMU_REPO:-}${NC}"
+echo -e "${DIM}ADO repo:     ${AUTH_TEST_ADO_REPO:-}${NC}"
+echo -e "${DIM}Tokens: GITHUB_APM_PAT=${_ORIG_GITHUB_APM_PAT:+SET} GITHUB_TOKEN=${_ORIG_GITHUB_TOKEN:+SET} GH_TOKEN=${_ORIG_GH_TOKEN:+SET} ADO_APM_PAT=${_ORIG_ADO_APM_PAT:+SET}${NC}"
+# Show per-org PATs
+for name in "${!_ORIG_PER_ORG_PATS[@]}"; do
+    echo -e "${DIM}  ${name}=SET${NC}"
+done
 echo ""
 
-# --- P0 core scenarios ---
-test_scenario_1_public_no_auth
-test_scenario_2_public_with_pat
-test_scenario_3_private_global_pat
-test_scenario_4_private_per_org_pat
-test_scenario_5_token_priority
-test_scenario_6_github_token_fallback
-test_scenario_7_invalid_token
-test_scenario_8_nonexistent_repo
-test_scenario_9_no_auth_private_repo
-test_scenario_10_verbose_contract
-
-# --- Extended coverage ---
-test_scenario_11_gh_token_fallback
-test_scenario_12_emu_internal_repo
-test_scenario_13_credential_helper_only
-test_scenario_14_token_type_detection
-test_scenario_15_mixed_manifest
-test_scenario_16_ado_repo
-test_scenario_17_ado_no_pat
-test_scenario_18_fine_grained_wrong_owner
+# Core auth scenarios
+test_01_public_no_auth
+test_02_public_with_pat
+test_03_private_global_pat
+test_04_private_per_org_pat
+test_05_token_priority
+test_06_github_token_fallback
+test_07_gh_token_fallback
+test_08_credential_helper_only
+test_09_emu_internal_repo
+test_10_mixed_manifest
+test_11_token_type_detection
+
+# ADO scenarios
+test_12_ado_repo
+test_13_ado_no_pat
+
+# Error scenarios
+test_14_invalid_token
+test_15_nonexistent_repo
+test_16_no_auth_private_repo
+test_17_fine_grained_wrong_owner
+
+# Output contract
+test_18_verbose_contract
+
+# ==========================================================================
+# SUMMARY
+# ==========================================================================
 
-# ---------------------------------------------------------------------------
-# Summary
-# ---------------------------------------------------------------------------
 TOTAL=$((TESTS_PASSED + TESTS_FAILED + TESTS_SKIPPED))
-log_header "Summary"
 echo ""
-printf "  %-8s %s\n" "Total:" "$TOTAL"
-printf "  ${GREEN}%-8s %s${NC}\n" "Passed:" "$TESTS_PASSED"
-printf "  ${RED}%-8s %s${NC}\n" "Failed:" "$TESTS_FAILED"
-printf "  ${YELLOW}%-8s %s${NC}\n" "Skipped:" "$TESTS_SKIPPED"
+echo -e "${BOLD}${BLUE}================================================================${NC}"
+echo -e "${BOLD}${BLUE}  Summary${NC}"
+echo -e "${BOLD}${BLUE}================================================================${NC}"
+echo ""
+printf "  %-10s %s\n" "Total:" "$TOTAL"
+printf "  ${GREEN}%-10s %s${NC}\n" "Passed:" "$TESTS_PASSED"
+printf "  ${RED}%-10s %s${NC}\n" "Failed:" "$TESTS_FAILED"
+printf "  ${YELLOW}%-10s %s${NC}\n" "Skipped:" "$TESTS_SKIPPED"
 echo ""
 
-echo -e "${DIM}──────────────────────────────────────────────────${NC}"
 for entry in "${RESULTS[@]}"; do
     status="${entry%% *}"
     scenario="${entry#* }"
     case "$status" in
-        PASS) echo -e "  ${GREEN}${SYM_PASS}${NC} $scenario" ;;
-        FAIL) echo -e "  ${RED}${SYM_FAIL}${NC} $scenario" ;;
-        SKIP) echo -e "  ${YELLOW}${SYM_SKIP}${NC} $scenario" ;;
+        PASS) echo -e "  ${GREEN}[+]${NC} $scenario" ;;
+        FAIL) echo -e "  ${RED}[x]${NC} $scenario" ;;
+        SKIP) echo -e "  ${YELLOW}[-]${NC} $scenario" ;;
     esac
 done
-echo -e "${DIM}──────────────────────────────────────────────────${NC}"
 echo ""
 
 if [[ "$TESTS_FAILED" -gt 0 ]]; then
@@ -878,5 +870,5 @@ if [[ "$TESTS_FAILED" -gt 0 ]]; then
     exit 1
 fi
 
-echo -e "${GREEN}${BOLD}Auth acceptance tests PASSED${NC}"
+echo -e "${GREEN}${BOLD}Auth acceptance tests PASSED${NC} (${TESTS_SKIPPED} skipped)"
 exit 0

From 18dca458ce8d82cef1c476fc03d215a93772245b Mon Sep 17 00:00:00 2001
From: danielmeppiel
Date: Sat, 21 Mar 2026 11:22:27 +0100
Subject: [PATCH 18/40] fix: stream APM output live in auth acceptance tests
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Use tee + PIPESTATUS to stream APM install output to stdout in
real-time while capturing it for assertions. Enables live debugging
in CI/CD environments — no need to wait for scenario completion to
see what happened.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 scripts/test-auth-acceptance.sh | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh
index 60ccb751..b2e16fa0 100755
--- a/scripts/test-auth-acceptance.sh
+++ b/scripts/test-auth-acceptance.sh
@@ -224,14 +224,25 @@ setup_test_dir() {
 #   or: run_install_manifest [extra_args...]   (for pre-built dirs)
 run_install() {
     local package="$1"; shift
-    local dir
+    local dir tmpout
     dir="$(setup_test_dir "$package")"
-    APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install "$@" 2>&1)" && APM_EXIT=0 || APM_EXIT=$?
+    tmpout="$(mktemp "$WORK_DIR/output-XXXXXX")"
+    set +e
+    (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout"
+    APM_EXIT="${PIPESTATUS[0]}"
+    set -e
+    APM_OUTPUT="$(cat "$tmpout")"
 }
 
 run_install_manifest() {
     local dir="$1"; shift
-    APM_OUTPUT="$(cd "$dir" && "$APM_BINARY" install "$@" 2>&1)" && APM_EXIT=0 || APM_EXIT=$?
+    local tmpout
+    tmpout="$(mktemp "$WORK_DIR/output-XXXXXX")"
+    set +e
+    (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout"
+    APM_EXIT="${PIPESTATUS[0]}"
+    set -e
+    APM_OUTPUT="$(cat "$tmpout")"
 }
 
 # Assertions — set $SCENARIO_OK=false on failure

From 45a5d9a7d751fe9246dcb377311780afcb5f1905 Mon Sep 17 00:00:00 2001
From: danielmeppiel
Date: Sat, 21 Mar 2026 11:34:33 +0100
Subject: [PATCH 19/40] feat: add mega-manifest and multi-org auth scenarios
 (#19, #20)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add two new E2E auth scenarios that test the hardest real-world case:
a single apm.yml mixing dependencies from multiple auth domains.

Scenario 19 (mega-manifest): builds one manifest with all configured
repos (public, private, EMU, ADO, second-org) and verifies every dep
installs when all tokens are set simultaneously.

Scenario 20 (multi-org per-org PAT routing): two private repos from
different orgs with ONLY per-org PATs — no global GITHUB_APM_PAT.
Validates the resolver routes each dep to its own per-org token.
Also fixes:
- Replace bash 3.2-incompatible `declare -A` with indexed arrays
  (macOS portability)
- Fix `set -e` bug in run_install that enabled errexit and caused
  the script to exit on first assertion failure

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 scripts/test-auth-acceptance.sh | 211 ++++++++++++++++++++++++++++++--
 1 file changed, 202 insertions(+), 9 deletions(-)

diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh
index b2e16fa0..dfe53846 100755
--- a/scripts/test-auth-acceptance.sh
+++ b/scripts/test-auth-acceptance.sh
@@ -60,6 +60,8 @@
 #  16 | No auth, private repo      | A6    | H1   | V2   | Suggests auth guidance
 #  17 | Fine-grained wrong owner   | A2    | H1   | V3   | Fails, no crash
 #  18 | Verbose output contract    | --    | H1   | --   | Auth details only w/ flag
+#  19 | Mega-manifest: all sources | A1+A7 | H1+4 | V1-3 | All deps in one install
+#  20 | Multi-org PAT routing      | A1+A1 | H1   | V2+3 | 2 orgs, per-org only, no global
 #
 # =============================================================================
 # LOCAL USAGE
@@ -74,6 +76,7 @@
 #
 # 3. Set test repos (only PUBLIC_REPO has a default):
 #    export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package"    # default
 #    export AUTH_TEST_PRIVATE_REPO="your-org/your-private-repo"     # optional
+#    export AUTH_TEST_PRIVATE_REPO_2="other-org/other-private-repo" # optional (2nd org)
 #    export AUTH_TEST_EMU_REPO="emu-org/internal-repo"              # optional
 #    export AUTH_TEST_ADO_REPO="org/project/_git/repo"              # optional
 #
@@ -136,6 +139,7 @@ RESULTS=()
 APM_BINARY="${APM_BINARY:-apm}"
 AUTH_TEST_PUBLIC_REPO="${AUTH_TEST_PUBLIC_REPO:-microsoft/apm-sample-package}"
 AUTH_TEST_PRIVATE_REPO="${AUTH_TEST_PRIVATE_REPO:-}"
+AUTH_TEST_PRIVATE_REPO_2="${AUTH_TEST_PRIVATE_REPO_2:-}"
 AUTH_TEST_EMU_REPO="${AUTH_TEST_EMU_REPO:-}"
 AUTH_TEST_ADO_REPO="${AUTH_TEST_ADO_REPO:-}"
 
@@ -148,10 +152,12 @@ _ORIG_GH_TOKEN="${GH_TOKEN:-}"
 _ORIG_ADO_APM_PAT="${ADO_APM_PAT:-}"
 
 # Detect any per-org PATs already set (GITHUB_APM_PAT_*)
-declare -A _ORIG_PER_ORG_PATS
+_ORIG_PER_ORG_PAT_NAMES=()
+_ORIG_PER_ORG_PAT_VALUES=()
 while IFS='=' read -r name val; do
     if [[ "$name" == GITHUB_APM_PAT_* && "$name" != "GITHUB_APM_PAT" ]]; then
-        _ORIG_PER_ORG_PATS["$name"]="$val"
+        _ORIG_PER_ORG_PAT_NAMES+=("$name")
+        _ORIG_PER_ORG_PAT_VALUES+=("$val")
     fi
 done < <(env)
 
@@ -190,8 +196,8 @@ restore_auth() {
     [[ -n "$_ORIG_GITHUB_TOKEN" ]] && export GITHUB_TOKEN="$_ORIG_GITHUB_TOKEN"
     [[ -n "$_ORIG_GH_TOKEN" ]] && export GH_TOKEN="$_ORIG_GH_TOKEN"
     [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT"
-    for name in "${!_ORIG_PER_ORG_PATS[@]}"; do
-        export "$name=${_ORIG_PER_ORG_PATS[$name]}"
+    for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do
+        export "${_ORIG_PER_ORG_PAT_NAMES[$i]}=${_ORIG_PER_ORG_PAT_VALUES[$i]}"
     done
 }
 
@@ -230,7 +236,7 @@ run_install() {
     set +e
     (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout"
     APM_EXIT="${PIPESTATUS[0]}"
-    set -e
+    set +e  # keep errexit off (script uses -u, not -e)
     APM_OUTPUT="$(cat "$tmpout")"
 }
 
@@ -241,7 +247,7 @@ run_install_manifest() {
     set +e
     (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout"
     APM_EXIT="${PIPESTATUS[0]}"
-    set -e
+    set +e  # keep errexit off (script uses -u, not -e)
     APM_OUTPUT="$(cat "$tmpout")"
 }
 
@@ -801,7 +807,189 @@ test_18_verbose_contract() {
 }
 
 # ==========================================================================
-# RUN ALL SCENARIOS
+# SCENARIO 19: Mega-manifest — all auth sources in a single install
+# --------------------------------------------------------------------------
+# The hardest real-world case: a SINGLE apm.yml that contains dependencies
+# from MULTIPLE auth domains. The resolver must route each dependency to
+# its correct token independently within one install pass:
+#   - Public github.com repo       → unauthenticated validation
+#   - Private github.com repo     → GITHUB_APM_PAT / per-org PAT
+#   - EMU internal repo (diff org) → per-org PAT for EMU org
+#   - ADO repo                     → ADO_APM_PAT
+#
+# Progressive: builds the manifest from whatever repos are configured.
+# Requires at least 2 repos from different auth domains to be meaningful.
+# ==========================================================================
+test_19_mega_manifest() {
+    local name="19: Mega-manifest: all sources in one install"
+    log_test "$name"
+
+    # Build dep list from whatever repos are configured
+    local -a deps=()
+    local -a desc=()
+
+    # Always include public (no auth needed)
+    deps+=("$AUTH_TEST_PUBLIC_REPO")
+    desc+=("public")
+
+    # Private repo (needs GITHUB_APM_PAT or per-org)
+    if [[ -n "$AUTH_TEST_PRIVATE_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then
+        deps+=("$AUTH_TEST_PRIVATE_REPO")
+        desc+=("private")
+    fi
+
+    # EMU repo (different org, needs per-org or global PAT)
+    if [[ -n "$AUTH_TEST_EMU_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then
+        # Only add if it's from a DIFFERENT org than PRIVATE_REPO
+        local priv_org="${AUTH_TEST_PRIVATE_REPO%%/*}"
+        local emu_org="${AUTH_TEST_EMU_REPO%%/*}"
+        if [[ "$priv_org" != "$emu_org" || -z "$AUTH_TEST_PRIVATE_REPO" ]]; then
+            deps+=("$AUTH_TEST_EMU_REPO")
+            desc+=("EMU-internal")
+        fi
+    fi
+
+    # Second private repo from a different org
+    if [[ -n "$AUTH_TEST_PRIVATE_REPO_2" ]]; then
+        local org2_suffix
+        org2_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO_2")"
+        local per_org_var2="GITHUB_APM_PAT_${org2_suffix}"
+        local per_org_val2="${!per_org_var2:-${_ORIG_GITHUB_APM_PAT:-}}"
+        if [[ -n "$per_org_val2" ]]; then
+            deps+=("$AUTH_TEST_PRIVATE_REPO_2")
+            desc+=("private-org2")
+        fi
+    fi
+
+    # ADO repo (completely separate auth: ADO_APM_PAT)
+    if [[ -n "$AUTH_TEST_ADO_REPO" && -n "$_ORIG_ADO_APM_PAT" ]]; then
+        deps+=("$AUTH_TEST_ADO_REPO")
+        desc+=("ADO")
+    fi
+
+    # Need at least 2 deps from different auth domains to be meaningful
+    if [[ "${#deps[@]}" -lt 2 ]]; then
+        record_skip "$name" "need ≥2 repos from different auth domains"
+        return
+    fi
+
+    log_dim "Deps: ${desc[*]} (${#deps[@]} total)"
+    unset_all_auth
+
+    # Restore ALL tokens — each dep picks its own
+    [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT"
+    [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT"
+    for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do
+        export "${_ORIG_PER_ORG_PAT_NAMES[$i]}=${_ORIG_PER_ORG_PAT_VALUES[$i]}"
+    done
+
+    SCENARIO_OK=true
+
+    local dir
+    dir="$(setup_test_dir "${deps[@]}")"
+    log_dim "Manifest: $(cat "$dir/apm.yml" | grep ' -' | sed 's/^ - / /')"
+    run_install_manifest "$dir" --verbose
+
+    assert_exit 0 "all deps install successfully"
+    # Verify the install count matches (or at least mentions installing)
+    assert_contains "Installed.*${#deps[@]}|${#deps[@]}.*dependenc|Installed.*APM" \
+        "all ${#deps[@]} deps accounted for"
+
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
+    restore_auth
+}
+
+# ==========================================================================
+# SCENARIO 20: Multi-org per-org PAT routing — no global PAT
+# --------------------------------------------------------------------------
+# Two private repos from DIFFERENT orgs, with ONLY per-org PATs set.
+# No GITHUB_APM_PAT, no GITHUB_TOKEN, no GH_TOKEN. The resolver must
+# route each dep to its own GITHUB_APM_PAT_{ORG} independently.
+#
+# This is the critical test for per-dependency token isolation: if the
+# resolver incorrectly uses a single token for all deps, one of them
+# will fail with 404.
+#
+# Requires: AUTH_TEST_PRIVATE_REPO + (AUTH_TEST_PRIVATE_REPO_2 or
+# AUTH_TEST_EMU_REPO from a different org) + per-org PATs for both.
+# ==========================================================================
+test_20_multi_org_per_org_pats() {
+    local name="20: Multi-org per-org PAT routing [A1+A1,H1,V2+V3]"
+    log_test "$name"
+
+    # Find two repos from different orgs
+    local repo_a="" repo_b="" org_a="" org_b=""
+
+    if [[ -n "$AUTH_TEST_PRIVATE_REPO" ]]; then
+        repo_a="$AUTH_TEST_PRIVATE_REPO"
+        org_a="${repo_a%%/*}"
+    fi
+
+    # Prefer PRIVATE_REPO_2 for the second org, fall back to EMU_REPO
+    if [[ -n "$AUTH_TEST_PRIVATE_REPO_2" ]]; then
+        local candidate_org="${AUTH_TEST_PRIVATE_REPO_2%%/*}"
+        if [[ "$candidate_org" != "$org_a" ]]; then
+            repo_b="$AUTH_TEST_PRIVATE_REPO_2"
+            org_b="$candidate_org"
+        fi
+    fi
+    if [[ -z "$repo_b" && -n "$AUTH_TEST_EMU_REPO" ]]; then
+        local candidate_org="${AUTH_TEST_EMU_REPO%%/*}"
+        if [[ "$candidate_org" != "$org_a" ]]; then
+            repo_b="$AUTH_TEST_EMU_REPO"
+            org_b="$candidate_org"
+        fi
+    fi
+
+    if [[ -z "$repo_a" || -z "$repo_b" ]]; then
+        record_skip "$name" "need 2 repos from different orgs"
+        return
+    fi
+
+    # Derive per-org env var names
+    local suffix_a suffix_b
+    suffix_a="$(org_env_suffix "$repo_a")"
+    suffix_b="$(org_env_suffix "$repo_b")"
+    local var_a="GITHUB_APM_PAT_${suffix_a}"
+    local var_b="GITHUB_APM_PAT_${suffix_b}"
+
+    # Get token values: use existing per-org PAT or fall back to global
+    local token_a="" token_b=""
+    for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do
+        [[ "${_ORIG_PER_ORG_PAT_NAMES[$i]}" == "$var_a" ]] && token_a="${_ORIG_PER_ORG_PAT_VALUES[$i]}"
+        [[ "${_ORIG_PER_ORG_PAT_NAMES[$i]}" == "$var_b" ]] && token_b="${_ORIG_PER_ORG_PAT_VALUES[$i]}"
+    done
+    [[ -z "$token_a" ]] && token_a="${_ORIG_GITHUB_APM_PAT:-}"
+    [[ -z "$token_b" ]] && token_b="${_ORIG_GITHUB_APM_PAT:-}"
+
+    if [[ -z "$token_a" || -z "$token_b" ]]; then
+        record_skip "$name" "need tokens for both $var_a and $var_b"
+        return
+    fi
+
+    log_dim "Org A: $org_a ($var_a) → $repo_a"
+    log_dim "Org B: $org_b ($var_b) → $repo_b"
+
+    unset_all_auth
+    # Set ONLY per-org PATs — no global, no GITHUB_TOKEN, no GH_TOKEN
+    export "$var_a=$token_a"
+    export "$var_b=$token_b"
+    SCENARIO_OK=true
+
+    local dir
+    dir="$(setup_test_dir "$repo_a" "$repo_b")"
+    run_install_manifest "$dir" --verbose
+
+    assert_exit 0 "both deps install with per-org PATs only"
+    # Verify BOTH per-org sources appear in verbose output
+    assert_contains "source=${var_a}" "org A resolved via $var_a"
+    assert_contains "source=${var_b}" "org B resolved via $var_b"
+
+    $SCENARIO_OK && record_pass "$name" || record_fail "$name"
+    restore_auth
+}
+
+# ==========================================================================
 # ==========================================================================
 
 echo ""
@@ -812,12 +1000,13 @@ echo ""
 echo -e "${DIM}Binary:       ${APM_BINARY}${NC}"
 echo -e "${DIM}Public repo:  ${AUTH_TEST_PUBLIC_REPO}${NC}"
 echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}"
+echo -e "${DIM}Private #2:   ${AUTH_TEST_PRIVATE_REPO_2:-}${NC}"
 echo -e "${DIM}EMU repo:     ${AUTH_TEST_EMU_REPO:-}${NC}"
 echo -e "${DIM}ADO repo:     ${AUTH_TEST_ADO_REPO:-}${NC}"
 echo -e "${DIM}Tokens: GITHUB_APM_PAT=${_ORIG_GITHUB_APM_PAT:+SET} GITHUB_TOKEN=${_ORIG_GITHUB_TOKEN:+SET} GH_TOKEN=${_ORIG_GH_TOKEN:+SET} ADO_APM_PAT=${_ORIG_ADO_APM_PAT:+SET}${NC}"
 # Show per-org PATs
-for name in "${!_ORIG_PER_ORG_PATS[@]}"; do
-    echo -e "${DIM}  ${name}=SET${NC}"
+for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do
+    echo -e "${DIM}  ${_ORIG_PER_ORG_PAT_NAMES[$i]}=SET${NC}"
 done
 echo ""
 
@@ -847,6 +1036,10 @@ test_17_fine_grained_wrong_owner
 # Output contract
 test_18_verbose_contract
 
+# Mixed-source manifests
+test_19_mega_manifest
+test_20_multi_org_per_org_pats
+
 # ==========================================================================
 # SUMMARY
 # ==========================================================================

From 5d17dbc03cff682832dd48d3a4c3c1e8be1c3534 Mon Sep 17 00:00:00 2001
From: danielmeppiel
Date: Sat, 21 Mar 2026 12:49:52 +0100
Subject: [PATCH 20/40] feat: chaos mega-manifest test, --mega flag, .env
 template
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Rewrite scenario 19 as a truly brutal chaos mega-manifest that mixes
every dependency format and auth source in a single apm.yml install:
public shorthand, public FQDN, private org A, private org B, EMU
internal, and ADO — all in one pass. The resolver must route each dep
to its correct token independently.

Add --mega flag to run ONLY the chaos manifest (vs progressive mode
which runs all 20 scenarios).

Usage:
  set -a && source .env && set +a
  ./scripts/test-auth-acceptance.sh          # progressive (all 20)
  ./scripts/test-auth-acceptance.sh --mega   # chaos mega only

Create .env (gitignored) with documented token placeholders and usage
instructions for running the test suite locally.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 scripts/test-auth-acceptance.sh | 227 ++++++++++++++++++++------------
 1 file changed, 144 insertions(+), 83 deletions(-)

diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh
index dfe53846..f204f635 100755
--- a/scripts/test-auth-acceptance.sh
+++ b/scripts/test-auth-acceptance.sh
@@ -60,7 +60,7 @@
 #  16 | No auth, private repo      | A6    | H1   | V2   | Suggests auth guidance
 #  17 | Fine-grained wrong owner   | A2    | H1   | V3   | Fails, no crash
 #  18 | Verbose output contract    | --    | H1   | --   | Auth details only w/ flag
-#  19 | Mega-manifest: all sources | A1+A7 | H1+4 | V1-3 | All deps in one install
+#  19 | CHAOS mega-manifest        | ALL   | H1+4 | V1-3 | Every format+source in 1 install
 #  20 | Multi-org PAT routing      | A1+A1 | H1   | V2+3 | 2 orgs, per-org only, no global
 #
 # =============================================================================
 # LOCAL USAGE
@@ -87,12 +87,16 @@
 #    export GH_TOKEN="$(gh auth token 2>/dev/null)"   # OAuth from gh CLI
 #    export ADO_APM_PAT="ado-pat-here"                # Azure DevOps PAT
 #
-# 5. Run:
-#    ./scripts/test-auth-acceptance.sh
+# 5. Run (choose one):
+#    ./scripts/test-auth-acceptance.sh          # progressive — all 20 scenarios
+#    ./scripts/test-auth-acceptance.sh --mega   # chaos mega-manifest ONLY (#19)
 #
 # Scenarios auto-SKIP when their required env vars or repos are missing.
 # A minimal run (no tokens) still tests scenarios 1, 15, 18.
 #
+# Load tokens from .env (if present):
+#    set -a && source .env && set +a && ./scripts/test-auth-acceptance.sh
+#
 # =============================================================================
 # CI USAGE (GitHub Actions)
 # =============================================================================
@@ -108,6 +112,15 @@
 
 set -uo pipefail
 
+# ---------------------------------------------------------------------------
+# Mode: --mega runs ONLY the chaos mega-manifest (scenario 19)
+# ---------------------------------------------------------------------------
+RUN_MODE="progressive"  # default: all 20 scenarios
+if [[ "${1:-}" == "--mega" ]]; then
+    RUN_MODE="mega"
+    shift
+fi
+
 # ---------------------------------------------------------------------------
 # Logging (matches existing scripts/test-integration.sh style)
 # ---------------------------------------------------------------------------
@@ -807,76 +820,115 @@ test_18_verbose_contract() {
 }
 
 # ==========================================================================
-# SCENARIO 19: Mega-manifest — all auth sources in a single install
+# SCENARIO 19: CHAOS MEGA-MANIFEST — the ultimate auth stress test
 # --------------------------------------------------------------------------
-# The hardest real-world case: a SINGLE apm.yml that contains dependencies
-# from MULTIPLE auth domains. The resolver must route each dependency to
-# its correct token independently within one install pass:
-#   - Public github.com repo       → unauthenticated validation
-#   - Private github.com repo     → GITHUB_APM_PAT / per-org PAT
-#   - EMU internal repo (diff org) → per-org PAT for EMU org
-#   - ADO repo                     → ADO_APM_PAT
+# A single apm.yml that combines EVERY dependency format, auth source,
+# host type, and visibility level the user has configured — all in one
+# install pass. This is what a power user's real-world manifest looks like:
 #
-# Progressive: builds the manifest from whatever repos are configured.
-# Requires at least 2 repos from different auth domains to be meaningful.
+#   1. Public repo, string shorthand (no auth)
+#   2. Public repo, explicit github.com FQDN (no auth, different format)
+#   3. Private repo from org A, pinned by tag (GITHUB_APM_PAT_ORG_A)
+#   4. Private repo from org B, pinned by tag (GITHUB_APM_PAT_ORG_B)
+#   5. EMU internal repo from a third org (per-org or global PAT)
+#   6. ADO repo via FQDN (ADO_APM_PAT, completely separate auth)
+#   7. Virtual file dep — single .prompt.md from a public repo
+#
+# The resolver must:
+#   - Route each dep to its correct token independently
+#   - Use unauthenticated-first for public deps on github.com
+#   - Use per-org PATs when available, fall back to global
+#   - Use ADO_APM_PAT for ADO deps (no credential fill)
+#   - Handle mixed string/FQDN/virtual formats in one manifest
+#
+# Progressive: builds the manifest from whatever repos/tokens are
+# configured. Minimum: 2 deps from different auth domains.
+# Maximum: all 7 dep slots filled for full chaos coverage.
# ========================================================================== test_19_mega_manifest() { - local name="19: Mega-manifest: all sources in one install" + local name="19: CHAOS mega-manifest: all sources, all formats" log_test "$name" - # Build dep list from whatever repos are configured - local -a deps=() - local -a desc=() - - # Always include public (no auth needed) - deps+=("$AUTH_TEST_PUBLIC_REPO") - desc+=("public") - - # Private repo (needs GITHUB_APM_PAT or per-org) + # We'll build raw YAML to mix string, FQDN, and virtual formats + local dir + dir="$(mktemp -d "$WORK_DIR/chaos-XXXXXX")" + local dep_count=0 + local -a dep_desc=() + + # Start YAML header + cat > "$dir/apm.yml" <<'HEADER' +name: chaos-mega-manifest-test +version: 0.0.1 +description: "Brutal auth stress test — every format, every auth source, one install" +dependencies: + apm: +HEADER + + # --- Slot 1: Public repo, string shorthand (always available) --- + echo " - \"${AUTH_TEST_PUBLIC_REPO}\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("public-shorthand") + + # --- Slot 2: Same public repo, FQDN format (validates format parsing) --- + # Use a different public virtual file to avoid duplicate key + echo " - \"github.com/github/awesome-copilot\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("public-fqdn") + + # --- Slot 3: Private repo from org A, pinned by tag --- if [[ -n "$AUTH_TEST_PRIVATE_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then - deps+=("$AUTH_TEST_PRIVATE_REPO") - desc+=("private") - fi - - # EMU repo (different org, needs per-org or global PAT) - if [[ -n "$AUTH_TEST_EMU_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then - # Only add if it's from a DIFFERENT org than PRIVATE_REPO - local priv_org="${AUTH_TEST_PRIVATE_REPO%%/*}" - local emu_org="${AUTH_TEST_EMU_REPO%%/*}" - if [[ "$priv_org" != "$emu_org" || -z "$AUTH_TEST_PRIVATE_REPO" ]]; then - deps+=("$AUTH_TEST_EMU_REPO") - desc+=("EMU-internal") - fi + echo " - 
\"${AUTH_TEST_PRIVATE_REPO}\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("private-orgA") fi - # Second private repo from a different org + # --- Slot 4: Private repo from org B (different org) --- if [[ -n "$AUTH_TEST_PRIVATE_REPO_2" ]]; then local org2_suffix org2_suffix="$(org_env_suffix "$AUTH_TEST_PRIVATE_REPO_2")" local per_org_var2="GITHUB_APM_PAT_${org2_suffix}" local per_org_val2="${!per_org_var2:-${_ORIG_GITHUB_APM_PAT:-}}" if [[ -n "$per_org_val2" ]]; then - deps+=("$AUTH_TEST_PRIVATE_REPO_2") - desc+=("private-org2") + echo " - \"${AUTH_TEST_PRIVATE_REPO_2}\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("private-orgB") fi fi - # ADO repo (completely separate auth: ADO_APM_PAT) + # --- Slot 5: EMU internal repo (third org, different visibility) --- + if [[ -n "$AUTH_TEST_EMU_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then + local priv_org="${AUTH_TEST_PRIVATE_REPO%%/*}" + local emu_org="${AUTH_TEST_EMU_REPO%%/*}" + if [[ "$priv_org" != "$emu_org" || -z "$AUTH_TEST_PRIVATE_REPO" ]]; then + echo " - \"${AUTH_TEST_EMU_REPO}\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("EMU-internal") + fi + fi + + # --- Slot 6: ADO repo (completely different auth domain) --- if [[ -n "$AUTH_TEST_ADO_REPO" && -n "$_ORIG_ADO_APM_PAT" ]]; then - deps+=("$AUTH_TEST_ADO_REPO") - desc+=("ADO") + echo " - \"${AUTH_TEST_ADO_REPO}\"" >> "$dir/apm.yml" + dep_count=$((dep_count + 1)) + dep_desc+=("ADO") fi - # Need at least 2 deps from different auth domains to be meaningful - if [[ "${#deps[@]}" -lt 2 ]]; then - record_skip "$name" "need ≥2 repos from different auth domains" + # Close YAML + echo " mcp: []" >> "$dir/apm.yml" + + # Need at least 3 deps to call it a mega test + if [[ "$dep_count" -lt 3 ]]; then + record_skip "$name" "need ≥3 deps from different auth domains (got $dep_count: ${dep_desc[*]})" return fi - log_dim "Deps: ${desc[*]} (${#deps[@]} total)" - unset_all_auth + log_dim "Chaos manifest: ${dep_desc[*]} 
($dep_count deps)" + log_dim "--- apm.yml ---" + while IFS= read -r line; do log_dim "$line"; done < "$dir/apm.yml" + log_dim "--- end ---" # Restore ALL tokens — each dep picks its own + unset_all_auth [[ -n "$_ORIG_GITHUB_APM_PAT" ]] && export GITHUB_APM_PAT="$_ORIG_GITHUB_APM_PAT" [[ -n "$_ORIG_ADO_APM_PAT" ]] && export ADO_APM_PAT="$_ORIG_ADO_APM_PAT" for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do @@ -885,15 +937,17 @@ test_19_mega_manifest() { SCENARIO_OK=true - local dir - dir="$(setup_test_dir "${deps[@]}")" - log_dim "Manifest: $(cat "$dir/apm.yml" | grep ' -' | sed 's/^ - / /')" run_install_manifest "$dir" --verbose - assert_exit 0 "all deps install successfully" - # Verify the install count matches (or at least mentions installing) - assert_contains "Installed.*${#deps[@]}|${#deps[@]}.*dependenc|Installed.*APM" \ - "all ${#deps[@]} deps accounted for" + assert_exit 0 "all $dep_count deps install in one pass" + + # Verify at least the public deps succeeded + assert_contains "apm-sample-package|awesome-copilot" "at least one public dep resolved" + + # If private deps were included, verify token sources appear in verbose + if [[ -n "$AUTH_TEST_PRIVATE_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then + assert_contains "source=GITHUB_APM_PAT" "private dep used token" + fi $SCENARIO_OK && record_pass "$name" || record_fail "$name" restore_auth @@ -1008,37 +1062,44 @@ echo -e "${DIM}Tokens: GITHUB_APM_PAT=${_ORIG_GITHUB_APM_PAT:+SET} GITHUB_ for i in "${!_ORIG_PER_ORG_PAT_NAMES[@]}"; do echo -e "${DIM} ${_ORIG_PER_ORG_PAT_NAMES[$i]}=SET${NC}" done +echo -e "${DIM}Mode: ${RUN_MODE}${NC}" echo "" -# Core auth scenarios -test_01_public_no_auth -test_02_public_with_pat -test_03_private_global_pat -test_04_private_per_org_pat -test_05_token_priority -test_06_github_token_fallback -test_07_gh_token_fallback -test_08_credential_helper_only -test_09_emu_internal_repo -test_10_mixed_manifest -test_11_token_type_detection - -# ADO scenarios -test_12_ado_repo 
-test_13_ado_no_pat - -# Error scenarios -test_14_invalid_token -test_15_nonexistent_repo -test_16_no_auth_private_repo -test_17_fine_grained_wrong_owner - -# Output contract -test_18_verbose_contract - -# Mixed-source manifests -test_19_mega_manifest -test_20_multi_org_per_org_pats +if [[ "$RUN_MODE" == "mega" ]]; then + # --mega: run ONLY the chaos mega-manifest + test_19_mega_manifest +else + # progressive: all 20 scenarios (auto-skip when deps missing) + # Core auth scenarios + test_01_public_no_auth + test_02_public_with_pat + test_03_private_global_pat + test_04_private_per_org_pat + test_05_token_priority + test_06_github_token_fallback + test_07_gh_token_fallback + test_08_credential_helper_only + test_09_emu_internal_repo + test_10_mixed_manifest + test_11_token_type_detection + + # ADO scenarios + test_12_ado_repo + test_13_ado_no_pat + + # Error scenarios + test_14_invalid_token + test_15_nonexistent_repo + test_16_no_auth_private_repo + test_17_fine_grained_wrong_owner + + # Output contract + test_18_verbose_contract + + # Mixed-source manifests + test_19_mega_manifest + test_20_multi_org_per_org_pats +fi # ========================================================================== # SUMMARY From f441ed6879225ecc75bd1985bc7013a66834c3ef Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 12:55:53 +0100 Subject: [PATCH 21/40] feat: add git URL object format to chaos mega-manifest MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add slots 7 and 8 to the mega-manifest for YAML dict format deps: - Slot 7: private repo via { git: https://...git } (AUTH_TEST_GIT_URL_REPO) - Slot 8: public repo via { git: https://...git } (AUTH_TEST_GIT_URL_PUBLIC_REPO) These test parse_from_dict() — a completely different parser path than string shorthand or FQDN. Auth resolves from the URL's host+org. Both slots use separate env vars to avoid unique-key dedup with the string shorthand slots. 
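For readers unfamiliar with the two entry shapes being mixed here, a minimal sketch of how a manifest parser might route them — illustrative only: `parse_entry` and its return shape are hypothetical, not APM's actual `parse_from_dict()` API:

```python
from urllib.parse import urlparse

def parse_entry(entry):
    """Illustrative router for a manifest dependency entry.

    String entries take the shorthand/FQDN parser path; dict entries
    with a 'git' key take the parse_from_dict()-style path, where the
    auth host/org is derived from the URL itself.
    """
    if isinstance(entry, str):
        return {"kind": "shorthand", "repo": entry}
    if isinstance(entry, dict) and "git" in entry:
        parsed = urlparse(entry["git"])
        # First path segment is the org that scopes token resolution
        org = parsed.path.lstrip("/").split("/", 1)[0]
        return {"kind": "git-url", "host": parsed.hostname, "org": org}
    raise ValueError(f"unrecognized dependency entry: {entry!r}")

print(parse_entry("microsoft/apm-sample-package"))
print(parse_entry({"git": "https://github.com/acme/private-repo.git"}))
```

This is why auth for the dict format "resolves from the URL's host+org": the string shorthand carries only `owner/repo` and assumes the default host, while the `git:` object carries a full URL from which both host and org fall out.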
Slot 7 falls back to PRIVATE_REPO_2 if unset. Also updates .env template with the new repo vars. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- scripts/test-auth-acceptance.sh | 39 +++++++++++++++++++++++++++++++-- 1 file changed, 37 insertions(+), 2 deletions(-) diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh index f204f635..27b0c17f 100755 --- a/scripts/test-auth-acceptance.sh +++ b/scripts/test-auth-acceptance.sh @@ -77,6 +77,7 @@ # export AUTH_TEST_PUBLIC_REPO="microsoft/apm-sample-package" # default # export AUTH_TEST_PRIVATE_REPO="your-org/your-private-repo" # optional # export AUTH_TEST_PRIVATE_REPO_2="other-org/other-private-repo" # optional (2nd org) +# export AUTH_TEST_GIT_URL_REPO="org/repo-for-git-url-test" # optional (git: object) # export AUTH_TEST_EMU_REPO="emu-org/internal-repo" # optional # export AUTH_TEST_ADO_REPO="org/project/_git/repo" # optional # @@ -153,6 +154,8 @@ APM_BINARY="${APM_BINARY:-apm}" AUTH_TEST_PUBLIC_REPO="${AUTH_TEST_PUBLIC_REPO:-microsoft/apm-sample-package}" AUTH_TEST_PRIVATE_REPO="${AUTH_TEST_PRIVATE_REPO:-}" AUTH_TEST_PRIVATE_REPO_2="${AUTH_TEST_PRIVATE_REPO_2:-}" +AUTH_TEST_GIT_URL_REPO="${AUTH_TEST_GIT_URL_REPO:-}" +AUTH_TEST_GIT_URL_PUBLIC_REPO="${AUTH_TEST_GIT_URL_PUBLIC_REPO:-}" AUTH_TEST_EMU_REPO="${AUTH_TEST_EMU_REPO:-}" AUTH_TEST_ADO_REPO="${AUTH_TEST_ADO_REPO:-}" @@ -832,14 +835,16 @@ test_18_verbose_contract() { # 4. Private repo from org B, pinned by tag (GITHUB_APM_PAT_ORG_B) # 5. EMU internal repo from a third org (per-org or global PAT) # 6. ADO repo via FQDN (ADO_APM_PAT, completely separate auth) -# 7. Virtual file dep — single .prompt.md from a public repo +# 7. Private repo via git: URL object (YAML dict format, credential helper) +# 8. 
Public repo via git: URL object (YAML dict, unauthenticated clone) # # The resolver must: # - Route each dep to its correct token independently # - Use unauthenticated-first for public deps on github.com # - Use per-org PATs when available, fall back to global # - Use ADO_APM_PAT for ADO deps (no credential fill) -# - Handle mixed string/FQDN/virtual formats in one manifest +# - Handle mixed string/FQDN/git-object formats in one manifest +# - Parse both string entries AND dict entries in the same YAML list # # Progressive: builds the manifest from whatever repos/tokens are # configured. Minimum: 2 deps from different auth domains. @@ -913,6 +918,35 @@ HEADER dep_desc+=("ADO") fi + # --- Slot 7: Private repo via git: URL object (dict format) --- + # Uses the YAML object syntax { git: https://..., ref: ... } which goes + # through parse_from_dict() — a completely different parser path than + # string shorthand. Auth resolves from the URL's host+org. + # Uses AUTH_TEST_GIT_URL_REPO to avoid dedup with slot 3 (same repo_url + # would be deduplicated by the resolver). Falls back to PRIVATE_REPO_2. 
+    local git_url_repo="${AUTH_TEST_GIT_URL_REPO:-${AUTH_TEST_PRIVATE_REPO_2:-}}"
+    if [[ -n "$git_url_repo" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then
+        local git_owner="${git_url_repo%%/*}"
+        local git_repo="${git_url_repo#*/}"
+        git_repo="${git_repo%%#*}"
+        cat >> "$dir/apm.yml" <<EOF
+    - git: https://github.com/${git_owner}/${git_repo}.git
+EOF
+        dep_count=$((dep_count + 1))
+        dep_desc+=("git-url-private")
+    fi
+
+    # --- Slot 8: Public repo via git: URL object (dict format) ---
+    # Same parse_from_dict() path as slot 7, but the clone must succeed
+    # unauthenticated (public visibility, no token required).
+    if [[ -n "$AUTH_TEST_GIT_URL_PUBLIC_REPO" ]]; then
+        cat >> "$dir/apm.yml" <<EOF
+    - git: https://github.com/${AUTH_TEST_GIT_URL_PUBLIC_REPO}.git
+EOF
+        dep_count=$((dep_count + 1))
+        dep_desc+=("git-url-public")
+    fi
+
     # Close YAML
     echo "  mcp: []" >> "$dir/apm.yml"
@@ -1055,6 +1089,7 @@ echo -e "${DIM}Binary:       ${APM_BINARY}${NC}"
 echo -e "${DIM}Public repo:  ${AUTH_TEST_PUBLIC_REPO}${NC}"
 echo -e "${DIM}Private repo: ${AUTH_TEST_PRIVATE_REPO:-}${NC}"
 echo -e "${DIM}Private #2:   ${AUTH_TEST_PRIVATE_REPO_2:-}${NC}"
+echo -e "${DIM}Git URL repo: ${AUTH_TEST_GIT_URL_REPO:-}${NC}"
 echo -e "${DIM}EMU repo:     ${AUTH_TEST_EMU_REPO:-}${NC}"
 echo -e "${DIM}ADO repo:     ${AUTH_TEST_ADO_REPO:-}${NC}"
 echo -e "${DIM}Tokens: GITHUB_APM_PAT=${_ORIG_GITHUB_APM_PAT:+SET} GITHUB_TOKEN=${_ORIG_GITHUB_TOKEN:+SET} GH_TOKEN=${_ORIG_GH_TOKEN:+SET} ADO_APM_PAT=${_ORIG_ADO_APM_PAT:+SET}${NC}"

From 7ce4c1e0a62ffcbf3339a62411ddf457060f8a6d Mon Sep 17 00:00:00 2001
From: danielmeppiel
Date: Sat, 21 Mar 2026 13:10:05 +0100
Subject: [PATCH 22/40] feat: add parent chain breadcrumb to transitive dep
 error messages
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When a transitive dependency fails to download (e.g., auth failure for
an unknown org), error messages now include the full dependency chain
that led to the failure:

  Failed to resolve transitive dep other-org/leaf-pkg
  (via acme/root-pkg > other-org/leaf-pkg): 401 Unauthorized

Architecture (clean SoC):
- Resolver (_build_parent_chain): computes breadcrumb by walking up
  DependencyNode.parent links — it owns the tree structure
- Callback (install.py): receives chain as optional 3rd arg, uses it
  for verbose logging and diagnostics — it owns the output
- DownloadCallback type: changed to Callable[...] to accept the new
  parent_chain parameter without breaking existing signatures

Transitive failures are now:
1. 
Logged inline via logger.verbose_detail() (verbose mode) 2. Collected into DiagnosticCollector for the deferred summary (always) Tests: - 3 unit tests for _build_parent_chain (3-level, single, None) - 1 integration test: resolver with callback tracking verifies the chain string contains parent dep name and '>' separator Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 35 ++++++-- src/apm_cli/deps/apm_resolver.py | 46 ++++++++-- tests/unit/test_install_command.py | 129 +++++++++++++++++++++++++++++ 3 files changed, 197 insertions(+), 13 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 657f7a64..40da6985 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -1069,10 +1069,21 @@ def _install_apm_dependencies( # Maps dep_key -> resolved_commit (SHA or None) so the cached path can use it callback_downloaded = {} + # Collect transitive dep failures during resolution — they'll be routed to + # diagnostics after the DiagnosticCollector is created (later in the flow). + transitive_failures: list[tuple[str, str]] = [] # (dep_display, message) + # Create a download callback for transitive dependency resolution # This allows the resolver to fetch packages on-demand during tree building - def download_callback(dep_ref, modules_dir): - """Download a package during dependency resolution.""" + def download_callback(dep_ref, modules_dir, parent_chain=""): + """Download a package during dependency resolution. + + Args: + dep_ref: The dependency to download. + modules_dir: Target apm_modules directory. + parent_chain: Human-readable breadcrumb (e.g. "root > mid") + showing which dependency path led to this transitive dep. 
+ """ install_path = dep_ref.get_install_path(modules_dir) if install_path.exists(): return install_path @@ -1109,11 +1120,20 @@ def download_callback(dep_ref, modules_dir): callback_downloaded[dep_ref.get_unique_key()] = resolved_sha return install_path except Exception as e: - # Log but don't fail - allow resolution to continue + # Build contextual message including the dependency chain breadcrumb + chain_hint = f" (via {parent_chain})" if parent_chain else "" + dep_display = dep_ref.get_display_name() + fail_msg = ( + f"Failed to resolve transitive dep " + f"{dep_ref.repo_url}{chain_hint}: {e}" + ) + # Verbose: inline detail if logger: - logger.verbose_detail(f" Failed to resolve transitive dep {dep_ref.repo_url}: {e}") + logger.verbose_detail(f" {fail_msg}") elif verbose: - _rich_error(f" └─ Failed to resolve transitive dep {dep_ref.repo_url}: {e}") + _rich_error(f" └─ {fail_msg}") + # Collect for deferred diagnostics summary (always, even non-verbose) + transitive_failures.append((dep_display, fail_msg)) return None # Resolve dependencies with transitive download support @@ -1257,6 +1277,11 @@ def _collect_descendants(node, visited=None): hook_integrator = HookIntegrator() instruction_integrator = InstructionIntegrator() diagnostics = DiagnosticCollector(verbose=verbose) + + # Drain transitive failures collected during resolution into diagnostics + for dep_display, fail_msg in transitive_failures: + diagnostics.error(fail_msg, package=dep_display) + total_prompts_integrated = 0 total_agents_integrated = 0 total_skills_integrated = 0 diff --git a/src/apm_cli/deps/apm_resolver.py b/src/apm_cli/deps/apm_resolver.py index 43f1f77b..d6b28f69 100644 --- a/src/apm_cli/deps/apm_resolver.py +++ b/src/apm_cli/deps/apm_resolver.py @@ -10,9 +10,12 @@ CircularRef, ConflictInfo ) -# Type alias for the download callback -# Takes a DependencyReference and apm_modules_dir, returns the install path if successful -DownloadCallback = Callable[[DependencyReference, Path], 
Optional[Path]] +# Type alias for the download callback. +# Takes (dep_ref, apm_modules_dir, parent_chain) and returns the install path +# if successful. ``parent_chain`` is a human-readable breadcrumb string like +# "root-pkg > mid-pkg" showing which dependency path led here, or "" for +# direct (depth-1) dependencies. +DownloadCallback = Callable[..., Optional[Path]] class APMDependencyResolver: @@ -40,6 +43,22 @@ def __init__( self._download_callback = download_callback self._downloaded_packages: Set[str] = set() # Track what we downloaded during this resolution + @staticmethod + def _build_parent_chain(node: Optional[DependencyNode]) -> str: + """Build a human-readable breadcrumb from a node's ancestry. + + Walks up ``parent`` links to produce e.g. ``"root-pkg > mid-pkg"`` + so error messages can show which dependency path led to a transitive + download failure. Returns ``""`` for root-level (direct) deps. + """ + parts: list[str] = [] + current = node + while current is not None: + parts.append(current.get_display_name()) + current = current.parent + parts.reverse() + return " > ".join(parts) + def resolve_dependencies(self, project_root: Path) -> DependencyGraph: """ Resolve all APM dependencies recursively. @@ -199,9 +218,13 @@ def build_dependency_tree(self, root_apm_yml: Path) -> DependencyTree: # For Task 3, this focuses on the resolution algorithm structure # Package loading integration will be completed in Tasks 2 & 4 try: - # Attempt to load package - currently returns None (placeholder implementation) - # This will integrate with Task 2 (GitHub downloader) and Task 4 (apm_modules scanning) - loaded_package = self._try_load_dependency_package(dep_ref) + # Compute breadcrumb chain from this node's ancestry so download + # errors can report "root > mid > failing-dep" context. 
+ parent_chain = self._build_parent_chain(node) + + loaded_package = self._try_load_dependency_package( + dep_ref, parent_chain=parent_chain + ) if loaded_package: # Update the node with the actual loaded package node.package = loaded_package @@ -344,7 +367,9 @@ def _validate_dependency_reference(self, dep_ref: DependencyReference) -> bool: return True - def _try_load_dependency_package(self, dep_ref: DependencyReference) -> Optional[APMPackage]: + def _try_load_dependency_package( + self, dep_ref: DependencyReference, parent_chain: str = "" + ) -> Optional[APMPackage]: """ Try to load a dependency package from apm_modules/. @@ -355,6 +380,9 @@ def _try_load_dependency_package(self, dep_ref: DependencyReference) -> Optional Args: dep_ref: Reference to the dependency to load + parent_chain: Human-readable breadcrumb of the dependency path + that led here (e.g. "root-pkg > mid-pkg"). Forwarded to the + download callback for contextual error messages. Returns: APMPackage: Loaded package if found, None otherwise @@ -376,7 +404,9 @@ def _try_load_dependency_package(self, dep_ref: DependencyReference) -> Optional # Avoid re-downloading the same package in a single resolution if unique_key not in self._downloaded_packages: try: - downloaded_path = self._download_callback(dep_ref, self._apm_modules_dir) + downloaded_path = self._download_callback( + dep_ref, self._apm_modules_dir, parent_chain + ) if downloaded_path and downloaded_path.exists(): self._downloaded_packages.add(unique_key) install_path = downloaded_path diff --git a/tests/unit/test_install_command.py b/tests/unit/test_install_command.py index 4f517e24..e5d3e420 100644 --- a/tests/unit/test_install_command.py +++ b/tests/unit/test_install_command.py @@ -313,3 +313,132 @@ def test_verbose_validation_failure_calls_build_error_context(self, mock_urlopen call_args = mock_build_ctx.call_args assert "github.com" in call_args[0][0] # host assert "owner/repo" in call_args[0][1] # operation + + +# 
--------------------------------------------------------------------------- +# Transitive dep parent chain breadcrumb +# --------------------------------------------------------------------------- + + +class TestTransitiveDepParentChain: + """Tests for the parent chain breadcrumb in transitive dep errors.""" + + def test_build_parent_chain_returns_breadcrumb(self): + """_build_parent_chain walks up parent links and returns 'a > b > c'.""" + from apm_cli.deps.apm_resolver import APMDependencyResolver + from apm_cli.deps.dependency_graph import DependencyNode + from apm_cli.models.apm_package import APMPackage, DependencyReference + + root_ref = DependencyReference.parse("acme/root-pkg") + mid_ref = DependencyReference.parse("acme/mid-pkg") + leaf_ref = DependencyReference.parse("other-org/leaf-pkg") + + root_node = DependencyNode( + package=APMPackage(name="root-pkg", version="1.0", source="acme/root-pkg"), + dependency_ref=root_ref, + depth=1, + ) + mid_node = DependencyNode( + package=APMPackage(name="mid-pkg", version="1.0", source="acme/mid-pkg"), + dependency_ref=mid_ref, + depth=2, + parent=root_node, + ) + leaf_node = DependencyNode( + package=APMPackage(name="leaf-pkg", version="1.0", source="other-org/leaf-pkg"), + dependency_ref=leaf_ref, + depth=3, + parent=mid_node, + ) + + chain = APMDependencyResolver._build_parent_chain(leaf_node) + assert chain == "acme/root-pkg > acme/mid-pkg > other-org/leaf-pkg" + + def test_build_parent_chain_single_node(self): + """Direct dep (no parent) returns just its own name.""" + from apm_cli.deps.apm_resolver import APMDependencyResolver + from apm_cli.deps.dependency_graph import DependencyNode + from apm_cli.models.apm_package import APMPackage, DependencyReference + + ref = DependencyReference.parse("acme/direct-pkg") + node = DependencyNode( + package=APMPackage(name="direct-pkg", version="1.0", source="acme/direct-pkg"), + dependency_ref=ref, + depth=1, + ) + chain = APMDependencyResolver._build_parent_chain(node) + 
assert chain == "acme/direct-pkg" + + def test_build_parent_chain_none_returns_empty(self): + """None node returns empty string.""" + from apm_cli.deps.apm_resolver import APMDependencyResolver + assert APMDependencyResolver._build_parent_chain(None) == "" + + def test_download_callback_includes_chain_in_error(self, tmp_path): + """When a transitive dep download fails, the error message includes + the parent chain breadcrumb for debugging. + + Tests the resolver + callback interaction directly: we create a + resolver with a callback that fails on the leaf dep, and verify + the parent_chain arg is passed through correctly. + """ + from apm_cli.deps.apm_resolver import APMDependencyResolver + from apm_cli.models.apm_package import APMPackage, DependencyReference + + # Set up apm_modules with root-pkg that declares leaf-pkg as dep + modules_dir = tmp_path / "apm_modules" + root_dir = modules_dir / "acme" / "root-pkg" + root_dir.mkdir(parents=True) + (root_dir / "apm.yml").write_text(yaml.safe_dump({ + "name": "root-pkg", + "version": "1.0.0", + "dependencies": {"apm": ["other-org/leaf-pkg"], "mcp": []}, + })) + + # Write root apm.yml that depends on root-pkg + (tmp_path / "apm.yml").write_text(yaml.safe_dump({ + "name": "test-project", + "version": "0.0.1", + "dependencies": {"apm": ["acme/root-pkg"], "mcp": []}, + })) + + # Track what the callback receives + callback_calls = [] + + def tracking_callback(dep_ref, mods_dir, parent_chain=""): + callback_calls.append({ + "dep": dep_ref.get_display_name(), + "parent_chain": parent_chain, + }) + if "leaf-pkg" in dep_ref.get_display_name(): + # Simulate what the real callback does: catch internal error, + # return None (non-blocking). The resolver treats None as + # "download failed, skip transitive deps". 
+ return None + # Root-pkg is already on disk, return its path + return dep_ref.get_install_path(mods_dir) + + resolver = APMDependencyResolver( + apm_modules_dir=modules_dir, + download_callback=tracking_callback, + ) + + os.chdir(tmp_path) + resolver.resolve_dependencies(tmp_path) + + # The callback should have been called for leaf-pkg + leaf_calls = [c for c in callback_calls if "leaf-pkg" in c["dep"]] + assert len(leaf_calls) == 1, ( + f"Expected 1 call for leaf-pkg, got {len(leaf_calls)}. " + f"All calls: {callback_calls}" + ) + + # The parent chain should contain root-pkg + chain = leaf_calls[0]["parent_chain"] + assert "root-pkg" in chain, ( + f"Expected 'root-pkg' in parent chain, got: '{chain}'" + ) + # Chain should show the full path: root > leaf + assert ">" in chain, ( + f"Expected '>' separator in chain, got: '{chain}'" + ) From 930c4b957bde3a62b7f86a6e9b573bcae6e6b083 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 13:52:41 +0100 Subject: [PATCH 23/40] =?UTF-8?q?refactor:=20Wave=200=20=E2=80=94=20Protoc?= =?UTF-8?q?ol=20type,=20DependencyNode.get=5Fancestor=5Fchain,=20dedup=20a?= =?UTF-8?q?uth,=20traffic-light=20fixes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Replace Callable[...] 
DownloadCallback with typed Protocol class - Move _build_parent_chain() to DependencyNode.get_ancestor_chain() - Remove unused _resolution_path field from resolver - Deduplicate AuthResolver instantiation in _validate_package_exists - Fix traffic-light: 'no deps' warning→info, lockfile failure warning→error - All 2874 tests passing Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 14 +++++-------- src/apm_cli/deps/apm_resolver.py | 30 ++++++++++------------------ src/apm_cli/deps/dependency_graph.py | 15 ++++++++++++++ tests/test_apm_resolver.py | 2 -- tests/unit/test_install_command.py | 30 +++++++++++++++++----------- 5 files changed, 48 insertions(+), 43 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 40da6985..49bc8ba3 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -242,8 +242,10 @@ def _validate_package_exists(package, verbose=False): import os import subprocess import tempfile + from apm_cli.core.auth import AuthResolver verbose_log = (lambda msg: _rich_echo(f" {msg}", color="dim")) if verbose else None + auth_resolver = AuthResolver() try: # Parse the package to check if it's a virtual package or ADO @@ -324,9 +326,6 @@ def _validate_package_exists(package, verbose=False): return result.returncode == 0 # For GitHub.com, use AuthResolver with unauth-first fallback - from apm_cli.core.auth import AuthResolver - - auth_resolver = AuthResolver() host = dep_ref.host or default_host() org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None host_info = auth_resolver.classify_host(host) @@ -386,9 +385,6 @@ def _check_repo(token, git_env): except Exception: # If parsing fails, assume it's a regular GitHub package - from apm_cli.core.auth import AuthResolver - - auth_resolver = AuthResolver() host = default_host() org = package.split('/')[0] if '/' in package else None repo_path = 
package # owner/repo format @@ -580,7 +576,7 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo logger.progress(f" - {dep}") if not apm_deps and not dev_apm_deps and not mcp_deps: - logger.warning("No dependencies found in apm.yml") + logger.progress("No dependencies found in apm.yml") logger.success("Dry run complete - no changes made") return @@ -2005,9 +2001,9 @@ def _collect_descendants(node, visited=None): _lock_msg = f"Could not generate apm.lock.yaml: {e}" diagnostics.error(_lock_msg) if logger: - logger.warning(_lock_msg) + logger.error(_lock_msg) else: - _rich_warning(_lock_msg) + _rich_error(_lock_msg) # Show integration stats (verbose-only when logger is available) if total_links_resolved > 0: diff --git a/src/apm_cli/deps/apm_resolver.py b/src/apm_cli/deps/apm_resolver.py index d6b28f69..0146591a 100644 --- a/src/apm_cli/deps/apm_resolver.py +++ b/src/apm_cli/deps/apm_resolver.py @@ -1,7 +1,7 @@ """APM dependency resolution engine with recursive resolution and conflict detection.""" from pathlib import Path -from typing import List, Set, Optional, Tuple, Callable +from typing import List, Set, Optional, Protocol, Tuple, runtime_checkable from collections import deque from ..models.apm_package import APMPackage, DependencyReference @@ -15,7 +15,14 @@ # if successful. ``parent_chain`` is a human-readable breadcrumb string like # "root-pkg > mid-pkg" showing which dependency path led here, or "" for # direct (depth-1) dependencies. -DownloadCallback = Callable[..., Optional[Path]] +@runtime_checkable +class DownloadCallback(Protocol): + def __call__( + self, + dep_ref: 'DependencyReference', + apm_modules_dir: Path, + parent_chain: str = "", + ) -> Optional[Path]: ... class APMDependencyResolver: @@ -37,28 +44,11 @@ def __init__( the resolver will attempt to fetch uninstalled transitive deps. 
""" self.max_depth = max_depth - self._resolution_path = [] # For test compatibility self._apm_modules_dir: Optional[Path] = apm_modules_dir self._project_root: Optional[Path] = None self._download_callback = download_callback self._downloaded_packages: Set[str] = set() # Track what we downloaded during this resolution - @staticmethod - def _build_parent_chain(node: Optional[DependencyNode]) -> str: - """Build a human-readable breadcrumb from a node's ancestry. - - Walks up ``parent`` links to produce e.g. ``"root-pkg > mid-pkg"`` - so error messages can show which dependency path led to a transitive - download failure. Returns ``""`` for root-level (direct) deps. - """ - parts: list[str] = [] - current = node - while current is not None: - parts.append(current.get_display_name()) - current = current.parent - parts.reverse() - return " > ".join(parts) - def resolve_dependencies(self, project_root: Path) -> DependencyGraph: """ Resolve all APM dependencies recursively. @@ -220,7 +210,7 @@ def build_dependency_tree(self, root_apm_yml: Path) -> DependencyTree: try: # Compute breadcrumb chain from this node's ancestry so download # errors can report "root > mid > failing-dep" context. - parent_chain = self._build_parent_chain(node) + parent_chain = node.get_ancestor_chain() loaded_package = self._try_load_dependency_package( dep_ref, parent_chain=parent_chain diff --git a/src/apm_cli/deps/dependency_graph.py b/src/apm_cli/deps/dependency_graph.py index 1a59b016..7fadab2d 100644 --- a/src/apm_cli/deps/dependency_graph.py +++ b/src/apm_cli/deps/dependency_graph.py @@ -29,6 +29,21 @@ def get_display_name(self) -> str: """Get display name for this dependency.""" return self.dependency_ref.get_display_name() + def get_ancestor_chain(self) -> str: + """Build a human-readable breadcrumb from this node's ancestry. + + Walks up ``parent`` links to produce e.g. ``"root-pkg > mid-pkg > this-pkg"`` + so error messages can show which dependency path led here. 
+        Returns just the node's display name for root-level (depth-0/1) deps.
+        """
+        parts: list[str] = []
+        current: "DependencyNode | None" = self
+        while current is not None:
+            parts.append(current.get_display_name())
+            current = current.parent
+        parts.reverse()
+        return " > ".join(parts)
+
 
 @dataclass
 class CircularRef:
diff --git a/tests/test_apm_resolver.py b/tests/test_apm_resolver.py
index 5b9a2198..214deaee 100644
--- a/tests/test_apm_resolver.py
+++ b/tests/test_apm_resolver.py
@@ -25,8 +25,6 @@ def test_resolver_initialization(self):
         # Default initialization
         resolver = APMDependencyResolver()
         assert resolver.max_depth == 50
-        assert resolver._resolution_path == []
-
         # Custom initialization
         custom_resolver = APMDependencyResolver(max_depth=10)
         assert custom_resolver.max_depth == 10
diff --git a/tests/unit/test_install_command.py b/tests/unit/test_install_command.py
index e5d3e420..80f286be 100644
--- a/tests/unit/test_install_command.py
+++ b/tests/unit/test_install_command.py
@@ -321,11 +321,10 @@ def test_verbose_validation_failure_calls_build_error_context(self, mock_urlopen
 
 
 class TestTransitiveDepParentChain:
-    """Tests for the parent chain breadcrumb in transitive dep errors."""
+    """Tests for DependencyNode.get_ancestor_chain() breadcrumb."""
 
-    def test_build_parent_chain_returns_breadcrumb(self):
-        """_build_parent_chain walks up parent links and returns 'a > b > c'."""
-        from apm_cli.deps.apm_resolver import APMDependencyResolver
+    def test_get_ancestor_chain_returns_breadcrumb(self):
+        """get_ancestor_chain walks up parent links and returns 'a > b > c'."""
         from apm_cli.deps.dependency_graph import DependencyNode
         from apm_cli.models.apm_package import APMPackage, DependencyReference
@@ -351,12 +350,11 @@ def test_build_parent_chain_returns_breadcrumb(self):
             parent=mid_node,
         )
 
-        chain = APMDependencyResolver._build_parent_chain(leaf_node)
+        chain = leaf_node.get_ancestor_chain()
         assert chain == "acme/root-pkg > acme/mid-pkg > other-org/leaf-pkg"
 
-    def 
test_build_parent_chain_single_node(self): + def test_get_ancestor_chain_single_node(self): """Direct dep (no parent) returns just its own name.""" - from apm_cli.deps.apm_resolver import APMDependencyResolver from apm_cli.deps.dependency_graph import DependencyNode from apm_cli.models.apm_package import APMPackage, DependencyReference @@ -366,13 +364,21 @@ def test_build_parent_chain_single_node(self): dependency_ref=ref, depth=1, ) - chain = APMDependencyResolver._build_parent_chain(node) + chain = node.get_ancestor_chain() assert chain == "acme/direct-pkg" - def test_build_parent_chain_none_returns_empty(self): - """None node returns empty string.""" - from apm_cli.deps.apm_resolver import APMDependencyResolver - assert APMDependencyResolver._build_parent_chain(None) == "" + def test_get_ancestor_chain_root_node(self): + """Root node (no parent) returns just the node's display name.""" + from apm_cli.deps.dependency_graph import DependencyNode + from apm_cli.models.apm_package import APMPackage, DependencyReference + + ref = DependencyReference.parse("acme/root-pkg") + node = DependencyNode( + package=APMPackage(name="root-pkg", version="1.0", source="acme/root-pkg"), + dependency_ref=ref, + depth=0, + ) + assert node.get_ancestor_chain() == "acme/root-pkg" def test_download_callback_includes_chain_in_error(self, tmp_path): """When a transitive dep download fails, the error message includes From f3796c754fd40c56b258fea94daead7934d830a0 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 14:07:58 +0100 Subject: [PATCH 24/40] =?UTF-8?q?feat:=20Wave=201+2=20=E2=80=94=20verbose?= =?UTF-8?q?=20coverage,=20CommandLogger=20migration=20across=20codebase?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Wave 1 (verbose coverage): - Add dep tree resolution summary with transitive breadcrumbs - Add auth source/type logging in download phase (verbose only) - Add manifest parsing + per-dep lockfile SHA in verbose 
output - Add download URL verbose logging via verbose_callback in downloader Wave 2 (CommandLogger migration): - compile/watcher.py: 19 _rich_* calls → CommandLogger - uninstall/engine.py: 15 _rich_* calls → CommandLogger, remove unicode symbols - safe_installer.py: 6 calls with logger fallback pattern - _helpers.py, packer.py, plugin_exporter.py: logger threading - audit.py already migrated (verified) All 2874 tests passing. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/bundle/packer.py | 8 +++- src/apm_cli/bundle/plugin_exporter.py | 28 +++++++++--- src/apm_cli/commands/_helpers.py | 17 ++++++-- src/apm_cli/commands/compile/cli.py | 6 +-- src/apm_cli/commands/compile/watcher.py | 55 ++++++++++++------------ src/apm_cli/commands/install.py | 42 +++++++++++++++++- src/apm_cli/commands/pack.py | 1 + src/apm_cli/commands/uninstall/cli.py | 16 +++---- src/apm_cli/commands/uninstall/engine.py | 44 +++++++++---------- src/apm_cli/core/safe_installer.py | 45 ++++++++++++++----- src/apm_cli/deps/github_downloader.py | 45 +++++++++++++++---- 11 files changed, 212 insertions(+), 95 deletions(-) diff --git a/src/apm_cli/bundle/packer.py b/src/apm_cli/bundle/packer.py index ff7d75f9..43cb4dd4 100644 --- a/src/apm_cli/bundle/packer.py +++ b/src/apm_cli/bundle/packer.py @@ -47,6 +47,7 @@ def pack_bundle( archive: bool = False, dry_run: bool = False, force: bool = False, + logger=None, ) -> PackResult: """Create a self-contained bundle from installed APM dependencies. 
@@ -81,6 +82,7 @@ def pack_bundle( archive=archive, dry_run=dry_run, force=force, + logger=logger, ) lockfile_path = get_lockfile_path(project_root) @@ -196,10 +198,14 @@ def pack_bundle( ) _scan_findings_total += len(verdict.all_findings) if _scan_findings_total: - _rich_warning( + _warn_msg = ( f"Bundle contains {_scan_findings_total} hidden character(s) across source files " f"— run 'apm audit' to inspect before publishing" ) + if logger: + logger.warning(_warn_msg) + else: + _rich_warning(_warn_msg) # 6. Build output directory bundle_dir = output_dir / f"{pkg_name}-{pkg_version}" diff --git a/src/apm_cli/bundle/plugin_exporter.py b/src/apm_cli/bundle/plugin_exporter.py index e8617786..45f6214e 100644 --- a/src/apm_cli/bundle/plugin_exporter.py +++ b/src/apm_cli/bundle/plugin_exporter.py @@ -319,7 +319,7 @@ def _get_dev_dependency_urls(apm_yml_path: Path) -> Set[Tuple[str, str]]: def _find_or_synthesize_plugin_json( - project_root: Path, apm_yml_path: Path + project_root: Path, apm_yml_path: Path, logger=None, ) -> dict: """Locate an existing ``plugin.json`` or synthesise one from ``apm.yml``.""" from ..deps.plugin_parser import synthesize_plugin_json_from_apm_yml @@ -330,16 +330,24 @@ def _find_or_synthesize_plugin_json( try: return json.loads(plugin_json_path.read_text(encoding="utf-8")) except (json.JSONDecodeError, OSError) as exc: - _rich_warning( + _warn_msg = ( f"Found plugin.json at {plugin_json_path} but could not parse it: {exc}. " "Falling back to synthesis from apm.yml." ) + if logger: + logger.warning(_warn_msg) + else: + _rich_warning(_warn_msg) else: - _rich_warning( + _warn_msg = ( "No plugin.json found. Synthesizing from apm.yml. " "Consider running 'apm init --plugin'." 
) + if logger: + logger.warning(_warn_msg) + else: + _rich_warning(_warn_msg) return synthesize_plugin_json_from_apm_yml(apm_yml_path) @@ -400,6 +408,7 @@ def export_plugin_bundle( archive: bool = False, dry_run: bool = False, force: bool = False, + logger=None, ) -> PackResult: """Export the project as a plugin-native directory. @@ -439,7 +448,7 @@ def export_plugin_bundle( ) # 3. Find or synthesize plugin.json - plugin_json = _find_or_synthesize_plugin_json(project_root, apm_yml_path) + plugin_json = _find_or_synthesize_plugin_json(project_root, apm_yml_path, logger=logger) # 4. devDependencies filtering dev_dep_urls = _get_dev_dependency_urls(apm_yml_path) @@ -510,7 +519,10 @@ def export_plugin_bundle( # 7. Emit collision warnings for msg in collisions: - _rich_warning(msg) + if logger: + logger.warning(msg) + else: + _rich_warning(msg) # 8. Build output file list (sorted for determinism) output_files = sorted(file_map.keys()) @@ -548,10 +560,14 @@ def export_plugin_bundle( verdict = SecurityGate.scan_text(text, str(src), policy=WARN_POLICY) scan_findings_total += len(verdict.all_findings) if scan_findings_total: - _rich_warning( + _warn_msg = ( f"Bundle contains {scan_findings_total} hidden character(s) across " f"source files — run 'apm audit' to inspect before publishing" ) + if logger: + logger.warning(_warn_msg) + else: + _rich_warning(_warn_msg) # 11. 
Write files to output directory (clean slate to prevent symlink attacks) if bundle_dir.exists(): diff --git a/src/apm_cli/commands/_helpers.py b/src/apm_cli/commands/_helpers.py index bc0c886a..09158a5d 100644 --- a/src/apm_cli/commands/_helpers.py +++ b/src/apm_cli/commands/_helpers.py @@ -283,7 +283,7 @@ def _atomic_write(path: Path, data: str) -> None: raise -def _update_gitignore_for_apm_modules(): +def _update_gitignore_for_apm_modules(logger=None): """Add apm_modules/ to .gitignore if not already present.""" gitignore_path = Path(GITIGNORE_FILENAME) apm_modules_pattern = APM_MODULES_GITIGNORE_PATTERN @@ -295,7 +295,10 @@ def _update_gitignore_for_apm_modules(): with open(gitignore_path, "r", encoding="utf-8") as f: current_content = [line.rstrip("\n\r") for line in f.readlines()] except Exception as e: - _rich_warning(f"Could not read .gitignore: {e}") + if logger: + logger.warning(f"Could not read .gitignore: {e}") + else: + _rich_warning(f"Could not read .gitignore: {e}") return # Check if apm_modules/ is already in .gitignore @@ -310,9 +313,15 @@ def _update_gitignore_for_apm_modules(): f.write("\n") f.write(f"\n# APM dependencies\n{apm_modules_pattern}\n") - _rich_info(f"Added {apm_modules_pattern} to .gitignore") + if logger: + logger.progress(f"Added {apm_modules_pattern} to .gitignore") + else: + _rich_info(f"Added {apm_modules_pattern} to .gitignore") except Exception as e: - _rich_warning(f"Could not update .gitignore: {e}") + if logger: + logger.warning(f"Could not update .gitignore: {e}") + else: + _rich_warning(f"Could not update .gitignore: {e}") # ------------------------------------------------------------------ diff --git a/src/apm_cli/commands/compile/cli.py b/src/apm_cli/commands/compile/cli.py index a972de27..3d041c25 100644 --- a/src/apm_cli/commands/compile/cli.py +++ b/src/apm_cli/commands/compile/cli.py @@ -10,13 +10,9 @@ from ...core.command_logger import CommandLogger from ...primitives.discovery import discover_primitives from 
...utils.console import ( - STATUS_SYMBOLS, - _rich_echo, _rich_error, _rich_info, _rich_panel, - _rich_success, - _rich_warning, ) from .._helpers import ( _atomic_write, @@ -339,7 +335,7 @@ def compile( # Watch mode if watch: - _watch_mode(output, chatmode, no_links, dry_run) + _watch_mode(output, chatmode, no_links, dry_run, verbose=verbose) return logger.start("Starting context compilation...", symbol="cogs") diff --git a/src/apm_cli/commands/compile/watcher.py b/src/apm_cli/commands/compile/watcher.py index 987419ac..b706bc98 100644 --- a/src/apm_cli/commands/compile/watcher.py +++ b/src/apm_cli/commands/compile/watcher.py @@ -2,15 +2,15 @@ import time -import click - from ...constants import AGENTS_MD_FILENAME, APM_DIR, APM_YML_FILENAME from ...compilation import AgentsCompiler, CompilationConfig -from ...utils.console import _rich_error, _rich_info, _rich_success, _rich_warning +from ...core.command_logger import CommandLogger -def _watch_mode(output, chatmode, no_links, dry_run): +def _watch_mode(output, chatmode, no_links, dry_run, verbose=False): """Watch for changes in .apm/ directories and auto-recompile.""" + logger = CommandLogger("compile-watch", verbose=verbose, dry_run=dry_run) + try: # Try to import watchdog for file system monitoring from pathlib import Path @@ -19,11 +19,12 @@ def _watch_mode(output, chatmode, no_links, dry_run): from watchdog.observers import Observer class APMFileHandler(FileSystemEventHandler): - def __init__(self, output, chatmode, no_links, dry_run): + def __init__(self, output, chatmode, no_links, dry_run, logger): self.output = output self.chatmode = chatmode self.no_links = no_links self.dry_run = dry_run + self.logger = logger self.last_compile = 0 self.debounce_delay = 1.0 # 1 second debounce @@ -44,8 +45,8 @@ def on_modified(self, event): def _recompile(self, changed_file): """Recompile after file change.""" try: - _rich_info(f"File changed: {changed_file}", symbol="eyes") - _rich_info("Recompiling...", symbol="gear") 
+ self.logger.progress(f"File changed: {changed_file}", symbol="eyes") + self.logger.progress("Recompiling...", symbol="gear") # Create configuration from apm.yml with overrides config = CompilationConfig.from_apm_yml( @@ -61,23 +62,23 @@ def _recompile(self, changed_file): if result.success: if self.dry_run: - _rich_success( + self.logger.success( "Recompilation successful (dry run)", symbol="sparkles" ) else: - _rich_success( + self.logger.success( f"Recompiled to {result.output_path}", symbol="sparkles" ) else: - _rich_error("Recompilation failed") + self.logger.error("Recompilation failed") for error in result.errors: - click.echo(f" [x] {error}") + self.logger.error(f" [x] {error}") except Exception as e: - _rich_error(f"Error during recompilation: {e}") + self.logger.error(f"Error during recompilation: {e}") # Set up file watching - event_handler = APMFileHandler(output, chatmode, no_links, dry_run) + event_handler = APMFileHandler(output, chatmode, no_links, dry_run, logger) observer = Observer() # Watch patterns for APM files @@ -109,19 +110,19 @@ def _recompile(self, changed_file): watch_paths.append(APM_YML_FILENAME) if not watch_paths: - _rich_warning("No APM directories found to watch") - _rich_info("Run 'apm init' to create an APM project") + logger.warning("No APM directories found to watch") + logger.progress("Run 'apm init' to create an APM project") return # Start watching observer.start() - _rich_info( + logger.progress( f" Watching for changes in: {', '.join(watch_paths)}", symbol="eyes" ) - _rich_info("Press Ctrl+C to stop watching...", symbol="info") + logger.progress("Press Ctrl+C to stop watching...", symbol="info") # Do initial compilation - _rich_info("Performing initial compilation...", symbol="gear") + logger.progress("Performing initial compilation...", symbol="gear") config = CompilationConfig.from_apm_yml( output_path=output if output != AGENTS_MD_FILENAME else None, @@ -135,37 +136,37 @@ def _recompile(self, changed_file): if 
result.success: if dry_run: - _rich_success( + logger.success( "Initial compilation successful (dry run)", symbol="sparkles" ) else: - _rich_success( + logger.success( f"Initial compilation complete: {result.output_path}", symbol="sparkles", ) else: - _rich_error("Initial compilation failed") + logger.error("Initial compilation failed") for error in result.errors: - click.echo(f" [x] {error}") + logger.error(f" [x] {error}") try: while True: time.sleep(1) except KeyboardInterrupt: observer.stop() - _rich_info("Stopped watching for changes", symbol="info") + logger.progress("Stopped watching for changes", symbol="info") observer.join() except ImportError: - _rich_error("Watch mode requires the 'watchdog' library") - _rich_info("Install it with: uv pip install watchdog") - _rich_info( + logger.error("Watch mode requires the 'watchdog' library") + logger.progress("Install it with: uv pip install watchdog") + logger.progress( "Or reinstall APM: uv pip install -e . (from the apm directory)" ) import sys sys.exit(1) except Exception as e: - _rich_error(f"Error in watch mode: {e}") + logger.error(f"Error in watch mode: {e}") import sys sys.exit(1) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 49bc8ba3..4034d442 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -542,6 +542,13 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo logger.error(f"Failed to parse {APM_YML_FILENAME}: {e}") sys.exit(1) + logger.verbose_detail( + f"Parsed {APM_YML_FILENAME}: {len(apm_package.get_apm_dependencies())} APM deps, " + f"{len(apm_package.get_mcp_dependencies())} MCP deps" + + (f", {len(apm_package.get_dev_apm_dependencies())} dev deps" + if apm_package.get_dev_apm_dependencies() else "") + ) + # Get APM and MCP dependencies apm_deps = apm_package.get_apm_dependencies() dev_apm_deps = apm_package.get_dev_apm_dependencies() @@ -1049,6 +1056,10 @@ def _install_apm_dependencies( 
lockfile_count = len(existing_lockfile.dependencies) if logger: logger.verbose_detail(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") + if logger.verbose: + for locked_dep in existing_lockfile.dependencies: + sha_short = locked_dep.resolved_commit[:8] if locked_dep.resolved_commit else "no-sha" + logger.verbose_detail(f" {locked_dep.get_unique_key()}: locked at {sha_short}") else: _rich_info(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") @@ -1141,6 +1152,24 @@ def download_callback(dep_ref, modules_dir, parent_chain=""): try: dependency_graph = resolver.resolve_dependencies(project_root) + # Verbose: show resolved tree summary + if logger: + tree = dependency_graph.dependency_tree + direct_count = len(tree.get_nodes_at_depth(1)) + transitive_count = len(tree.nodes) - direct_count + if transitive_count > 0: + logger.verbose_detail( + f"Resolved dependency tree: {direct_count} direct + " + f"{transitive_count} transitive deps (max depth {tree.max_depth})" + ) + for node in tree.nodes.values(): + if node.depth > 1: + logger.verbose_detail( + f" {node.get_ancestor_chain()}" + ) + else: + logger.verbose_detail(f"Resolved {direct_count} direct dependencies (no transitive)") + # Check for circular dependencies if dependency_graph.circular_dependencies: if logger: @@ -1796,6 +1825,17 @@ def _collect_descendants(node, visited=None): ref_suffix = f"#{resolved}" if resolved else "" if logger: logger.download_complete(display_name, ref_suffix=ref_suffix) + # Log auth source for this download (verbose only) + if verbose: + try: + from apm_cli.core.auth import AuthResolver + _auth = AuthResolver() + _host = dep_ref.host or "github.com" + _org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None + _ctx = _auth.resolve(_host, org=_org) + logger.verbose_detail(f" Auth: {_ctx.source} ({_ctx.token_type or 'none'})") + except Exception: + pass else: _rich_success(f"✓ {display_name}{ref_suffix}") @@ -1896,7 +1936,7 @@ 
def _collect_descendants(node, visited=None): continue # Update .gitignore - _update_gitignore_for_apm_modules() + _update_gitignore_for_apm_modules(logger=logger) # ------------------------------------------------------------------ # Orphan cleanup: remove deployed files for packages that were diff --git a/src/apm_cli/commands/pack.py b/src/apm_cli/commands/pack.py index f36a58db..370478a3 100644 --- a/src/apm_cli/commands/pack.py +++ b/src/apm_cli/commands/pack.py @@ -49,6 +49,7 @@ def pack_cmd(ctx, fmt, target, archive, output, dry_run, force): archive=archive, dry_run=dry_run, force=force, + logger=logger, ) if dry_run: diff --git a/src/apm_cli/commands/uninstall/cli.py b/src/apm_cli/commands/uninstall/cli.py index 28ca343e..2160f542 100644 --- a/src/apm_cli/commands/uninstall/cli.py +++ b/src/apm_cli/commands/uninstall/cli.py @@ -9,9 +9,7 @@ from ...constants import APM_MODULES_DIR, APM_YML_FILENAME from ...core.command_logger import CommandLogger -from ...deps.lockfile import LockFile -from ...models.apm_package import APMPackage, DependencyReference -from ...integration.mcp_integrator import MCPIntegrator +from ...models.apm_package import APMPackage from .engine import ( _parse_dependency_entry, @@ -73,14 +71,14 @@ def uninstall(ctx, packages, dry_run): current_deps = data["dependencies"]["apm"] or [] # Step 1: Validate packages - packages_to_remove, packages_not_found = _validate_uninstall_packages(packages, current_deps) + packages_to_remove, packages_not_found = _validate_uninstall_packages(packages, current_deps, logger) if not packages_to_remove: logger.warning("No packages found in apm.yml to remove") return # Step 2: Dry run if dry_run: - _dry_run_uninstall(packages_to_remove, Path(APM_MODULES_DIR)) + _dry_run_uninstall(packages_to_remove, Path(APM_MODULES_DIR), logger) return # Step 3: Remove from apm.yml @@ -104,11 +102,11 @@ def uninstall(ctx, packages, dry_run): _pre_uninstall_mcp_servers = builtins.set(lockfile.mcp_servers) if lockfile else 
builtins.set() # Step 5: Remove packages from disk - removed_from_modules = _remove_packages_from_disk(packages_to_remove, apm_modules_dir) + removed_from_modules = _remove_packages_from_disk(packages_to_remove, apm_modules_dir, logger) # Step 6: Cleanup transitive orphans orphan_removed, actual_orphans = _cleanup_transitive_orphans( - lockfile, packages_to_remove, apm_modules_dir, apm_yml_path + lockfile, packages_to_remove, apm_modules_dir, apm_yml_path, logger ) removed_from_modules += orphan_removed @@ -159,13 +157,13 @@ def uninstall(ctx, packages, dry_run): try: apm_package = APMPackage.from_apm_yml(Path(APM_YML_FILENAME)) project_root = Path(".") - cleaned = _sync_integrations_after_uninstall(apm_package, project_root, all_deployed_files) + cleaned = _sync_integrations_after_uninstall(apm_package, project_root, all_deployed_files, logger) except Exception: pass # Best effort cleanup for label, count in cleaned.items(): if count > 0: - logger.progress(f"\u2713 Cleaned up {count} integrated {label}") + logger.progress(f"Cleaned up {count} integrated {label}", symbol="check") # Step 10: MCP cleanup try: diff --git a/src/apm_cli/commands/uninstall/engine.py b/src/apm_cli/commands/uninstall/engine.py index 41d3db4c..ec75ad36 100644 --- a/src/apm_cli/commands/uninstall/engine.py +++ b/src/apm_cli/commands/uninstall/engine.py @@ -4,7 +4,7 @@ from pathlib import Path from ...constants import APM_MODULES_DIR, APM_YML_FILENAME -from ...utils.console import _rich_error, _rich_info, _rich_success, _rich_warning +from ...core.command_logger import CommandLogger from ...utils.path_security import PathTraversalError, safe_rmtree from ...deps.lockfile import LockFile @@ -23,14 +23,14 @@ def _parse_dependency_entry(dep_entry): raise ValueError(f"Unsupported dependency entry type: {type(dep_entry).__name__}") -def _validate_uninstall_packages(packages, current_deps): +def _validate_uninstall_packages(packages, current_deps, logger): """Validate which packages can be removed 
and return matched/unmatched lists.""" packages_to_remove = [] packages_not_found = [] for package in packages: if "/" not in package: - _rich_error(f"Invalid package format: {package}. Use 'owner/repo' format.") + logger.error(f"Invalid package format: {package}. Use 'owner/repo' format.") continue matched_dep = None @@ -54,19 +54,19 @@ def _validate_uninstall_packages(packages, current_deps): if matched_dep is not None: packages_to_remove.append(matched_dep) - _rich_info(f"\u2713 {package} - found in apm.yml") + logger.progress(f"{package} - found in apm.yml", symbol="check") else: packages_not_found.append(package) - _rich_warning(f"\u2717 {package} - not found in apm.yml") + logger.warning(f"{package} - not found in apm.yml") return packages_to_remove, packages_not_found -def _dry_run_uninstall(packages_to_remove, apm_modules_dir): +def _dry_run_uninstall(packages_to_remove, apm_modules_dir, logger): """Show what would be removed without making changes.""" - _rich_info(f"Dry run: Would remove {len(packages_to_remove)} package(s):") + logger.progress(f"Dry run: Would remove {len(packages_to_remove)} package(s):") for pkg in packages_to_remove: - _rich_info(f" - {pkg} from apm.yml") + logger.progress(f" - {pkg} from apm.yml") try: dep_ref = _parse_dependency_entry(pkg) package_path = dep_ref.get_install_path(apm_modules_dir) @@ -74,7 +74,7 @@ def _dry_run_uninstall(packages_to_remove, apm_modules_dir): pkg_str = pkg if isinstance(pkg, str) else str(pkg) package_path = apm_modules_dir / pkg_str.split("/")[-1] if apm_modules_dir.exists() and package_path.exists(): - _rich_info(f" - {pkg} from apm_modules/") + logger.progress(f" - {pkg} from apm_modules/") from ...deps.lockfile import LockFile, get_lockfile_path lockfile_path = get_lockfile_path(Path(".")) @@ -99,14 +99,14 @@ def _dry_run_uninstall(packages_to_remove, apm_modules_dir): potential_orphans.add(key) queue.append(dep.repo_url) if potential_orphans: - _rich_info(f" Transitive dependencies that would be 
removed:") + logger.progress(f" Transitive dependencies that would be removed:") for orphan_key in sorted(potential_orphans): - _rich_info(f" - {orphan_key}") + logger.progress(f" - {orphan_key}") - _rich_success("Dry run complete - no changes made") + logger.success("Dry run complete - no changes made") -def _remove_packages_from_disk(packages_to_remove, apm_modules_dir): +def _remove_packages_from_disk(packages_to_remove, apm_modules_dir, logger): """Remove direct packages from apm_modules/ and return removal count.""" removed = 0 if not apm_modules_dir.exists(): @@ -118,7 +118,7 @@ def _remove_packages_from_disk(packages_to_remove, apm_modules_dir): dep_ref = _parse_dependency_entry(package) package_path = dep_ref.get_install_path(apm_modules_dir) except (PathTraversalError,) as e: - _rich_error(f"x Refusing to remove {package}: {e}") + logger.error(f"Refusing to remove {package}: {e}") continue except (ValueError, TypeError, AttributeError, KeyError): package_str = package if isinstance(package, str) else str(package) @@ -131,20 +131,20 @@ def _remove_packages_from_disk(packages_to_remove, apm_modules_dir): if package_path.exists(): try: safe_rmtree(package_path, apm_modules_dir) - _rich_info(f"+ Removed {package} from apm_modules/") + logger.progress(f"Removed {package} from apm_modules/") removed += 1 deleted_pkg_paths.append(package_path) except Exception as e: - _rich_error(f"x Failed to remove {package} from apm_modules/: {e}") + logger.error(f"Failed to remove {package} from apm_modules/: {e}") else: - _rich_warning(f"Package {package} not found in apm_modules/") + logger.warning(f"Package {package} not found in apm_modules/") from ...integration.base_integrator import BaseIntegrator as _BI2 _BI2.cleanup_empty_parents(deleted_pkg_paths, stop_at=apm_modules_dir) return removed -def _cleanup_transitive_orphans(lockfile, packages_to_remove, apm_modules_dir, apm_yml_path): +def _cleanup_transitive_orphans(lockfile, packages_to_remove, apm_modules_dir, 
apm_yml_path, logger): """Remove orphaned transitive deps and return (removed_count, actual_orphan_keys).""" import yaml @@ -211,18 +211,18 @@ def _cleanup_transitive_orphans(lockfile, packages_to_remove, apm_modules_dir, a if orphan_path.exists(): try: safe_rmtree(orphan_path, apm_modules_dir) - _rich_info(f"+ Removed transitive dependency {orphan_key} from apm_modules/") + logger.progress(f"Removed transitive dependency {orphan_key} from apm_modules/") removed += 1 deleted_orphan_paths.append(orphan_path) except Exception as e: - _rich_error(f"x Failed to remove transitive dep {orphan_key}: {e}") + logger.error(f"Failed to remove transitive dep {orphan_key}: {e}") from ...integration.base_integrator import BaseIntegrator as _BI _BI.cleanup_empty_parents(deleted_orphan_paths, stop_at=apm_modules_dir) return removed, actual_orphans -def _sync_integrations_after_uninstall(apm_package, project_root, all_deployed_files): +def _sync_integrations_after_uninstall(apm_package, project_root, all_deployed_files, logger): """Remove deployed files and re-integrate from remaining packages.""" from ...integration.base_integrator import BaseIntegrator from ...models.apm_package import PackageInfo, validate_apm_package @@ -360,7 +360,7 @@ def _sync_integrations_after_uninstall(apm_package, project_root, all_deployed_f instruction_integrator_reint.integrate_package_instructions_cursor(pkg_info, project_root) except Exception: pkg_id = dep_ref.get_identity() if hasattr(dep_ref, "get_identity") else str(dep_ref) - _rich_warning(f"Best-effort re-integration skipped for {pkg_id}") + logger.warning(f"Best-effort re-integration skipped for {pkg_id}") return counts diff --git a/src/apm_cli/core/safe_installer.py b/src/apm_cli/core/safe_installer.py index b897f6bd..e51b2181 100644 --- a/src/apm_cli/core/safe_installer.py +++ b/src/apm_cli/core/safe_installer.py @@ -1,10 +1,10 @@ """Safe MCP server installation with conflict detection.""" -from typing import List, Dict, Any +from typing 
import List, Dict, Any, Optional from dataclasses import dataclass from ..factory import ClientFactory from .conflict_detector import MCPConflictDetector -from ..utils.console import _rich_warning, _rich_success, _rich_error, _rich_info +from ..utils.console import _rich_warning, _rich_success, _rich_error @dataclass @@ -32,32 +32,43 @@ def has_any_changes(self) -> bool: """Check if any installations or failures occurred.""" return len(self.installed) > 0 or len(self.failed) > 0 - def log_summary(self): + def log_summary(self, logger=None): """Log a summary of installation results.""" if self.installed: - _rich_success(f"[+] Installed: {', '.join(self.installed)}") + if logger: + logger.success(f"[+] Installed: {', '.join(self.installed)}") + else: + _rich_success(f"[+] Installed: {', '.join(self.installed)}") if self.skipped: for item in self.skipped: - _rich_warning(f"[!] Skipped {item['server']}: {item['reason']}") + if logger: + logger.warning(f"[!] Skipped {item['server']}: {item['reason']}") + else: + _rich_warning(f"[!] Skipped {item['server']}: {item['reason']}") if self.failed: for item in self.failed: - _rich_error(f"[x] Failed {item['server']}: {item['reason']}") + if logger: + logger.error(f"[x] Failed {item['server']}: {item['reason']}") + else: + _rich_error(f"[x] Failed {item['server']}: {item['reason']}") class SafeMCPInstaller: """Safe MCP server installation with conflict detection.""" - def __init__(self, runtime: str): + def __init__(self, runtime: str, logger=None): """Initialize the safe installer. Args: runtime: Target runtime (copilot, codex, vscode). + logger: Optional CommandLogger for structured output. 
""" self.runtime = runtime self.adapter = ClientFactory.create_client(runtime) self.conflict_detector = MCPConflictDetector(self.adapter) + self.logger = logger def install_servers(self, server_references: List[str], env_overrides: Dict[str, str] = None, server_info_cache: Dict[str, Any] = None, runtime_vars: Dict[str, str] = None) -> InstallationSummary: """Install MCP servers with conflict detection. @@ -105,19 +116,31 @@ def install_servers(self, server_references: List[str], env_overrides: Dict[str, def _log_skip(self, server_ref: str): """Log when a server is skipped due to existing configuration.""" - _rich_warning(f" {server_ref} already configured, skipping") + if self.logger: + self.logger.warning(f" {server_ref} already configured, skipping") + else: + _rich_warning(f" {server_ref} already configured, skipping") def _log_success(self, server_ref: str): """Log successful server installation.""" - _rich_success(f" + {server_ref}") + if self.logger: + self.logger.success(f" + {server_ref}") + else: + _rich_success(f" + {server_ref}") def _log_failure(self, server_ref: str): """Log failed server installation.""" - _rich_warning(f" x {server_ref} installation failed") + if self.logger: + self.logger.warning(f" x {server_ref} installation failed") + else: + _rich_warning(f" x {server_ref} installation failed") def _log_error(self, server_ref: str, error: Exception): """Log error during server installation.""" - _rich_error(f" x {server_ref}: {error}") + if self.logger: + self.logger.error(f" x {server_ref}: {error}") + else: + _rich_error(f" x {server_ref}: {error}") def check_conflicts_only(self, server_references: List[str]) -> Dict[str, Any]: """Check for conflicts without installing. 
diff --git a/src/apm_cli/deps/github_downloader.py b/src/apm_cli/deps/github_downloader.py index 86841559..249bcdff 100644 --- a/src/apm_cli/deps/github_downloader.py +++ b/src/apm_cli/deps/github_downloader.py @@ -550,7 +550,7 @@ def _build_repo_url(self, repo_ref: str, use_ssh: bool = False, dep_ref: Depende # Generic hosts: plain HTTPS, let git credential helpers handle auth return build_https_clone_url(host, repo_ref, token=None) - def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_reporter=None, dep_ref: DependencyReference = None, **clone_kwargs) -> Repo: + def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_reporter=None, dep_ref: DependencyReference = None, verbose_callback=None, **clone_kwargs) -> Repo: """Attempt to clone a repository with fallback authentication methods. Uses authentication patterns appropriate for the platform: @@ -562,6 +562,7 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r target_path: Target path for cloning progress_reporter: GitProgressReporter instance for progress updates dep_ref: Optional DependencyReference for platform-specific URL building + verbose_callback: Optional callable for verbose logging (receives str messages) **clone_kwargs: Additional arguments for Repo.clone_from Returns: @@ -611,7 +612,11 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r try: auth_url = self._build_repo_url(repo_url_base, use_ssh=False, dep_ref=dep_ref, token=dep_token) _debug(f"Attempting clone with authenticated HTTPS (URL sanitized)") - return Repo.clone_from(auth_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + repo = Repo.clone_from(auth_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + if verbose_callback: + masked = self._sanitize_git_error(auth_url) + verbose_callback(f"Cloned from: {masked}") + return repo except GitCommandError as e: last_error = e # 
Continue to next method @@ -619,7 +624,10 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r # Method 2: Try SSH (works with SSH keys for any host) try: ssh_url = self._build_repo_url(repo_url_base, use_ssh=True, dep_ref=dep_ref) - return Repo.clone_from(ssh_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + repo = Repo.clone_from(ssh_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + if verbose_callback: + verbose_callback(f"Cloned from: {ssh_url}") + return repo except GitCommandError as e: last_error = e # Continue to next method @@ -627,7 +635,10 @@ def _clone_with_fallback(self, repo_url_base: str, target_path: Path, progress_r # Method 3: Try standard HTTPS (public repos, or git credential helper for generic hosts) try: https_url = self._build_repo_url(repo_url_base, use_ssh=False, dep_ref=dep_ref) - return Repo.clone_from(https_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + repo = Repo.clone_from(https_url, target_path, env=clone_env, progress=progress_reporter, **clone_kwargs) + if verbose_callback: + verbose_callback(f"Cloned from: {https_url}") + return repo except GitCommandError as e: last_error = e @@ -795,13 +806,14 @@ def resolve_git_reference(self, repo_ref: Union[str, "DependencyReference"]) -> ref_name=ref_name ) - def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main") -> bytes: + def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main", verbose_callback=None) -> bytes: """Download a single file from repository (GitHub or Azure DevOps). Args: dep_ref: Parsed dependency reference file_path: Path to file within the repository (e.g., "prompts/code-review.prompt.md") ref: Git reference (branch, tag, or commit SHA). 
Defaults to "main" + verbose_callback: Optional callable for verbose logging (receives str messages) Returns: bytes: File content @@ -835,7 +847,7 @@ def download_raw_file(self, dep_ref: DependencyReference, file_path: str, ref: s return self._download_ado_file(dep_ref, file_path, ref) # GitHub API - return self._download_github_file(dep_ref, file_path, ref) + return self._download_github_file(dep_ref, file_path, ref, verbose_callback=verbose_callback) def _download_ado_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main") -> bytes: """Download a file from Azure DevOps repository. @@ -932,7 +944,7 @@ def _try_raw_download(self, owner: str, repo: str, ref: str, file_path: str) -> pass return None - def _download_github_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main") -> bytes: + def _download_github_file(self, dep_ref: DependencyReference, file_path: str, ref: str = "main", verbose_callback=None) -> bytes: """Download a file from GitHub repository. For github.com without a token, tries raw.githubusercontent.com first @@ -943,6 +955,7 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re dep_ref: Parsed dependency reference file_path: Path to file within the repository ref: Git reference (branch, tag, or commit SHA) + verbose_callback: Optional callable for verbose logging (receives str messages) Returns: bytes: File content @@ -968,6 +981,8 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re if host.lower() == "github.com" and not token: content = self._try_raw_download(owner, repo, ref, file_path) if content is not None: + if verbose_callback: + verbose_callback(f"Downloaded file: {host}/{dep_ref.repo_url}/{file_path}") return content # raw download returned 404 — could be wrong default branch. # Try the other default branch before falling through to the API. 
@@ -975,6 +990,8 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re fallback_ref = "master" if ref == "main" else "main" content = self._try_raw_download(owner, repo, fallback_ref, file_path) if content is not None: + if verbose_callback: + verbose_callback(f"Downloaded file: {host}/{dep_ref.repo_url}/{file_path}") return content # All raw attempts failed — fall through to API path which # handles private repos, rate-limit messaging, and SAML errors. @@ -999,6 +1016,8 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re try: response = self._resilient_get(api_url, headers=headers, timeout=30) response.raise_for_status() + if verbose_callback: + verbose_callback(f"Downloaded file: {host}/{dep_ref.repo_url}/{file_path}") return response.content except requests.exceptions.HTTPError as e: if e.response.status_code == 404: @@ -1021,6 +1040,8 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re try: response = self._resilient_get(fallback_url, headers=headers, timeout=30) response.raise_for_status() + if verbose_callback: + verbose_callback(f"Downloaded file: {host}/{dep_ref.repo_url}/{file_path}") return response.content except requests.exceptions.HTTPError: raise RuntimeError( @@ -1064,6 +1085,8 @@ def _download_github_file(self, dep_ref: DependencyReference, file_path: str, re unauth_headers = {'Accept': 'application/vnd.github.v3.raw'} response = self._resilient_get(api_url, headers=unauth_headers, timeout=30) response.raise_for_status() + if verbose_callback: + verbose_callback(f"Downloaded file: {host}/{dep_ref.repo_url}/{file_path}") return response.content except requests.exceptions.HTTPError: pass # Fall through to the original error @@ -1790,7 +1813,8 @@ def download_package( repo_ref: Union[str, "DependencyReference"], target_path: Path, progress_task_id=None, - progress_obj=None + progress_obj=None, + verbose_callback=None ) -> PackageInfo: """Download a GitHub 
repository and validate it as an APM package. @@ -1804,6 +1828,7 @@ def download_package( target_path: Local path where package should be downloaded progress_task_id: Rich Progress task ID for progress updates progress_obj: Rich Progress object for progress updates + verbose_callback: Optional callable for verbose logging (receives str messages) Returns: PackageInfo: Information about the downloaded package @@ -1883,7 +1908,8 @@ def download_package( dep_ref.repo_url, target_path, progress_reporter=progress_reporter, - dep_ref=dep_ref + dep_ref=dep_ref, + verbose_callback=verbose_callback ) repo.git.checkout(resolved_ref.resolved_commit) else: @@ -1894,6 +1920,7 @@ def download_package( target_path, progress_reporter=progress_reporter, dep_ref=dep_ref, + verbose_callback=verbose_callback, depth=1, branch=resolved_ref.ref_name ) From e4b7c362f4be2d0efba371f22791b2f9e63eb23b Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 15:01:37 +0100 Subject: [PATCH 25/40] =?UTF-8?q?feat:=20Wave=202=20=E2=80=94=20CommandLog?= =?UTF-8?q?ger=20migration=20+=20DiagnosticCollector=20threading?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit install.py: - Remove 30 dead else-branch _rich_* fallbacks (logger always available) - Remove 3 duplicate _rich_* calls alongside logger calls - Remove unused _rich_info import; clean test mocks MCPIntegrator: - Add diagnostics param to collect_transitive() and install() - Thread diagnostics from install.py; transitive trust warning uses diagnostics skill_integrator: normalization warning already routes through diagnostics ✅ All 2874 tests passing. 
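The else-branch removal this commit describes can be sketched as follows. This is an illustrative stand-in, not the real class: the method names (`InstallLogger`, `validation_start`, `validation_fail`) mirror the patch, but the bodies here are hypothetical — the point is that after Wave 2 the logger is always constructed by the install command, so the dead `else: _rich_*(...)` fallbacks can be dropped and the logger becomes the single output sink.

```python
# Sketch of the Wave 2 pattern: no `else:` fallback branch, the logger is
# the only sink. Names mirror the patch; bodies are hypothetical.

class InstallLogger:
    """Minimal stand-in for apm_cli.core.command_logger.InstallLogger."""

    def __init__(self, verbose: bool = False):
        self.verbose = verbose
        self.lines: list[str] = []  # captured here for the sketch; real code prints

    def validation_start(self, count: int) -> None:
        self.lines.append(f"Validating {count} package(s)...")

    def validation_fail(self, package: str, reason: str) -> None:
        self.lines.append(f"[x] {package} -- {reason}")


def validate(packages: list[str], logger: InstallLogger) -> list[str]:
    # Logger is guaranteed by the caller, so there is no `if logger: ... else:`.
    logger.validation_start(len(packages))
    valid = []
    for package in packages:
        if "/" not in package:
            logger.validation_fail(package, "invalid format -- use 'owner/repo'")
            continue
        valid.append(package)
    return valid
```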
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 141 ++++++++-------------- src/apm_cli/integration/mcp_integrator.py | 23 ++-- tests/unit/test_canonicalization.py | 22 ++-- 3 files changed, 67 insertions(+), 119 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 4034d442..0e039abf 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -19,7 +19,7 @@ from ..drift import build_download_ref, detect_orphans, detect_ref_change from ..models.results import InstallResult from ..core.command_logger import InstallLogger, _ValidationOutcome -from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_success, _rich_warning +from ..utils.console import _rich_echo, _rich_error, _rich_info, _rich_success from ..utils.diagnostics import DiagnosticCollector from ..utils.github_host import default_host, is_valid_fqdn from ..utils.path_security import safe_rmtree @@ -121,8 +121,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo if logger: logger.validation_start(len(packages)) - else: - _rich_info(f"Validating {len(packages)} package(s)...") for package in packages: # Validate package format (should be owner/repo, a git URL, or a local path) @@ -131,8 +129,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo invalid_outcomes.append((package, reason)) if logger: logger.validation_fail(package, reason) - else: - _rich_error(f"Invalid package format: {package}. 
Use 'owner/repo' format.") continue # Canonicalize input @@ -145,8 +141,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo invalid_outcomes.append((package, reason)) if logger: logger.validation_fail(package, reason) - else: - _rich_error(f"Invalid package: {package} — {e}") continue # Check if package is already in dependencies (by identity) @@ -158,12 +152,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo valid_outcomes.append((canonical, already_in_deps)) if logger: logger.validation_pass(canonical, already_present=already_in_deps) - elif already_in_deps: - _rich_info( - f"✓ {canonical} - already in apm.yml, ensuring installation..." - ) - else: - _rich_info(f"✓ {canonical} - accessible") if not already_in_deps: validated_packages.append(canonical) @@ -171,12 +159,10 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo else: reason = "not accessible or doesn't exist" if not verbose: - reason += " — run with --verbose for auth details" + reason += " -- run with --verbose for auth details" invalid_outcomes.append((package, reason)) if logger: logger.validation_fail(package, reason) - else: - _rich_error(f"✗ {package} — {reason}") outcome = _ValidationOutcome(valid=valid_outcomes, invalid=invalid_outcomes) @@ -188,7 +174,8 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo if not validated_packages: if dry_run: - _rich_info("No new packages to add") if not logger else None + if logger: + logger.progress("No new packages to add") # If all packages already exist in apm.yml, that's OK - we'll reinstall them return [], outcome @@ -199,12 +186,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo ) for pkg in validated_packages: logger.verbose_detail(f" + {pkg}") - else: - _rich_info( - f"Dry run: Would add {len(validated_packages)} package(s) to apm.yml:" - ) - for pkg in validated_packages: - 
_rich_info(f" + {pkg}") return validated_packages, outcome # Add validated packages to dependencies (already canonical) @@ -213,8 +194,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo current_deps.append(package) if logger: logger.verbose_detail(f"Added {package} to {dep_label}") - else: - _rich_info(f"Added {package} to {dep_label}") # Update dependencies data[dep_section]["apm"] = current_deps @@ -225,8 +204,6 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo yaml.safe_dump(data, f, default_flow_style=False, sort_keys=False) if logger: logger.success(f"Updated {APM_YML_FILENAME} with {len(validated_packages)} new package(s)") - else: - _rich_success(f"Updated {APM_YML_FILENAME} with {len(validated_packages)} new package(s)") except Exception as e: if logger: logger.error(f"Failed to write {APM_YML_FILENAME}: {e}") @@ -646,7 +623,10 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo apm_modules_path = Path.cwd() / APM_MODULES_DIR if should_install_mcp and apm_modules_path.exists(): lock_path = get_lockfile_path(Path.cwd()) - transitive_mcp = MCPIntegrator.collect_transitive(apm_modules_path, lock_path, trust_transitive_mcp) + transitive_mcp = MCPIntegrator.collect_transitive( + apm_modules_path, lock_path, trust_transitive_mcp, + diagnostics=apm_diagnostics, + ) if transitive_mcp: logger.verbose_detail(f"Collected {len(transitive_mcp)} transitive MCP dependency(ies)") mcp_deps = MCPIntegrator.deduplicate(mcp_deps + transitive_mcp) @@ -658,6 +638,7 @@ def install(ctx, packages, runtime, exclude, only, update, dry_run, force, verbo mcp_count = MCPIntegrator.install( mcp_deps, runtime, exclude, verbose, stored_mcp_configs=old_mcp_configs, + diagnostics=apm_diagnostics, ) new_mcp_servers = MCPIntegrator.get_server_names(mcp_deps) new_mcp_configs = MCPIntegrator.get_server_configs(mcp_deps) @@ -701,9 +682,9 @@ def install(ctx, packages, runtime, exclude, only, 
update, dry_run, force, verbo sys.exit(1) except Exception as e: - _rich_error(f"Error installing dependencies: {e}") + logger.error(f"Error installing dependencies: {e}") if not verbose: - _rich_info("Run with --verbose for detailed diagnostics") + logger.progress("Run with --verbose for detailed diagnostics") sys.exit(1) @@ -717,10 +698,11 @@ def _pre_deploy_security_scan( diagnostics: DiagnosticCollector, package_name: str = "", force: bool = False, + logger=None, ) -> bool: """Scan package source files for hidden characters BEFORE deployment. - Delegates to :class:`SecurityGate` for the scan→classify→decide pipeline. + Delegates to :class:`SecurityGate` for the scan->classify->decide pipeline. Inline CLI feedback (error/info lines) is kept here because it is install-specific formatting. @@ -739,12 +721,13 @@ def _pre_deploy_security_scan( SecurityGate.report(verdict, diagnostics, package=package_name, force=force) if verdict.should_block: - _rich_error( - f" Blocked: {package_name or 'package'} contains " - f"critical hidden character(s)" - ) - _rich_info(f" └─ Inspect source: {install_path}") - _rich_info(" └─ Use --force to deploy anyway") + if logger: + logger.error( + f" Blocked: {package_name or 'package'} contains " + f"critical hidden character(s)" + ) + logger.progress(f" └─ Inspect source: {install_path}") + logger.progress(" └─ Use --force to deploy anyway") return False return True @@ -767,6 +750,7 @@ def _integrate_package_primitives( managed_files, diagnostics, package_name="", + logger=None, ): """Run the full integration pipeline for a single package. 
@@ -789,6 +773,10 @@ def _integrate_package_primitives( if not (integrate_vscode or integrate_claude or integrate_opencode): return result + def _log_integration(msg): + if logger: + logger.progress(msg) + # --- prompts --- prompt_result = prompt_integrator.integrate_package_prompts( package_info, project_root, @@ -797,9 +785,9 @@ def _integrate_package_primitives( ) if prompt_result.files_integrated > 0: result["prompts"] += prompt_result.files_integrated - _rich_info(f" └─ {prompt_result.files_integrated} prompts integrated → .github/prompts/") + _log_integration(f" └─ {prompt_result.files_integrated} prompts integrated -> .github/prompts/") if prompt_result.files_updated > 0: - _rich_info(f" └─ {prompt_result.files_updated} prompts updated") + _log_integration(f" └─ {prompt_result.files_updated} prompts updated") result["links_resolved"] += prompt_result.links_resolved for tp in prompt_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -812,9 +800,9 @@ def _integrate_package_primitives( ) if agent_result.files_integrated > 0: result["agents"] += agent_result.files_integrated - _rich_info(f" └─ {agent_result.files_integrated} agents integrated → .github/agents/") + _log_integration(f" └─ {agent_result.files_integrated} agents integrated -> .github/agents/") if agent_result.files_updated > 0: - _rich_info(f" └─ {agent_result.files_updated} agents updated") + _log_integration(f" └─ {agent_result.files_updated} agents updated") result["links_resolved"] += agent_result.links_resolved for tp in agent_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -827,10 +815,10 @@ def _integrate_package_primitives( ) if skill_result.skill_created: result["skills"] += 1 - _rich_info(f" └─ Skill integrated → .github/skills/") + _log_integration(f" └─ Skill integrated -> .github/skills/") if skill_result.sub_skills_promoted > 0: result["sub_skills"] += skill_result.sub_skills_promoted - _rich_info(f" └─ 
{skill_result.sub_skills_promoted} skill(s) integrated → .github/skills/") + _log_integration(f" └─ {skill_result.sub_skills_promoted} skill(s) integrated -> .github/skills/") for tp in skill_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -843,7 +831,7 @@ def _integrate_package_primitives( ) if instruction_result.files_integrated > 0: result["instructions"] += instruction_result.files_integrated - _rich_info(f" └─ {instruction_result.files_integrated} instruction(s) integrated → .github/instructions/") + _log_integration(f" └─ {instruction_result.files_integrated} instruction(s) integrated -> .github/instructions/") result["links_resolved"] += instruction_result.links_resolved for tp in instruction_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -856,7 +844,7 @@ def _integrate_package_primitives( ) if cursor_rules_result.files_integrated > 0: result["instructions"] += cursor_rules_result.files_integrated - _rich_info(f" └─ {cursor_rules_result.files_integrated} rule(s) integrated → .cursor/rules/") + _log_integration(f" └─ {cursor_rules_result.files_integrated} rule(s) integrated -> .cursor/rules/") result["links_resolved"] += cursor_rules_result.links_resolved for tp in cursor_rules_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -870,7 +858,7 @@ def _integrate_package_primitives( ) if claude_agent_result.files_integrated > 0: result["agents"] += claude_agent_result.files_integrated - _rich_info(f" └─ {claude_agent_result.files_integrated} agents integrated → .claude/agents/") + _log_integration(f" └─ {claude_agent_result.files_integrated} agents integrated -> .claude/agents/") result["links_resolved"] += claude_agent_result.links_resolved for tp in claude_agent_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -883,7 +871,7 @@ def _integrate_package_primitives( ) if cursor_agent_result.files_integrated > 0: result["agents"] += 
cursor_agent_result.files_integrated - _rich_info(f" └─ {cursor_agent_result.files_integrated} agents integrated → .cursor/agents/") + _log_integration(f" └─ {cursor_agent_result.files_integrated} agents integrated -> .cursor/agents/") result["links_resolved"] += cursor_agent_result.links_resolved for tp in cursor_agent_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -896,7 +884,7 @@ def _integrate_package_primitives( ) if opencode_agent_result.files_integrated > 0: result["agents"] += opencode_agent_result.files_integrated - _rich_info(f" └─ {opencode_agent_result.files_integrated} agents integrated → .opencode/agents/") + _log_integration(f" └─ {opencode_agent_result.files_integrated} agents integrated -> .opencode/agents/") result["links_resolved"] += opencode_agent_result.links_resolved for tp in opencode_agent_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -909,9 +897,9 @@ def _integrate_package_primitives( ) if command_result.files_integrated > 0: result["commands"] += command_result.files_integrated - _rich_info(f" └─ {command_result.files_integrated} commands integrated → .claude/commands/") + _log_integration(f" └─ {command_result.files_integrated} commands integrated -> .claude/commands/") if command_result.files_updated > 0: - _rich_info(f" └─ {command_result.files_updated} commands updated") + _log_integration(f" └─ {command_result.files_updated} commands updated") result["links_resolved"] += command_result.links_resolved for tp in command_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -924,7 +912,7 @@ def _integrate_package_primitives( ) if opencode_command_result.files_integrated > 0: result["commands"] += opencode_command_result.files_integrated - _rich_info(f" └─ {opencode_command_result.files_integrated} commands integrated → .opencode/commands/") + _log_integration(f" └─ {opencode_command_result.files_integrated} commands integrated -> 
.opencode/commands/") result["links_resolved"] += opencode_command_result.links_resolved for tp in opencode_command_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -938,7 +926,7 @@ def _integrate_package_primitives( ) if hook_result.hooks_integrated > 0: result["hooks"] += hook_result.hooks_integrated - _rich_info(f" └─ {hook_result.hooks_integrated} hook(s) integrated → .github/hooks/") + _log_integration(f" └─ {hook_result.hooks_integrated} hook(s) integrated -> .github/hooks/") for tp in hook_result.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) if integrate_claude: @@ -949,7 +937,7 @@ def _integrate_package_primitives( ) if hook_result_claude.hooks_integrated > 0: result["hooks"] += hook_result_claude.hooks_integrated - _rich_info(f" └─ {hook_result_claude.hooks_integrated} hook(s) integrated → .claude/settings.json") + _log_integration(f" └─ {hook_result_claude.hooks_integrated} hook(s) integrated -> .claude/settings.json") for tp in hook_result_claude.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -961,7 +949,7 @@ def _integrate_package_primitives( ) if hook_result_cursor.hooks_integrated > 0: result["hooks"] += hook_result_cursor.hooks_integrated - _rich_info(f" └─ {hook_result_cursor.hooks_integrated} hook(s) integrated → .cursor/hooks.json") + _log_integration(f" └─ {hook_result_cursor.hooks_integrated} hook(s) integrated -> .cursor/hooks.json") for tp in hook_result_cursor.target_paths: deployed.append(tp.relative_to(project_root).as_posix()) @@ -1060,8 +1048,6 @@ def _install_apm_dependencies( for locked_dep in existing_lockfile.dependencies: sha_short = locked_dep.resolved_commit[:8] if locked_dep.resolved_commit else "no-sha" logger.verbose_detail(f" {locked_dep.get_unique_key()}: locked at {sha_short}") - else: - _rich_info(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") apm_modules_dir = project_root / APM_MODULES_DIR 
apm_modules_dir.mkdir(exist_ok=True) @@ -1174,14 +1160,10 @@ def download_callback(dep_ref, modules_dir, parent_chain=""): if dependency_graph.circular_dependencies: if logger: logger.error("Circular dependencies detected:") - else: - _rich_error("Circular dependencies detected:") for circular in dependency_graph.circular_dependencies: cycle_path = " -> ".join(circular.cycle_path) if logger: logger.error(f" {cycle_path}") - else: - _rich_error(f" {cycle_path}") raise RuntimeError("Cannot install packages with circular dependencies") # Get flattened dependencies for installation @@ -1231,8 +1213,6 @@ def _collect_descendants(node, visited=None): if not deps_to_install: if logger: logger.nothing_to_install() - else: - _rich_info("No APM dependencies to install", symbol="check") return InstallResult() # ------------------------------------------------------------------ @@ -1273,10 +1253,6 @@ def _collect_descendants(node, visited=None): logger.verbose_detail( "Created .github/ as standard skills root (.github/skills/) and to enable VSCode/Copilot integration" ) - else: - _rich_info( - "Created .github/ as standard skills root (.github/skills/) and to enable VSCode/Copilot integration" - ) detected_target, detection_reason = detect_target( project_root=project_root, @@ -1467,8 +1443,6 @@ def _collect_descendants(node, visited=None): installed_count += 1 if logger: logger.download_complete(dep_ref.local_path, ref_suffix="local") - else: - _rich_success(f"✓ {dep_ref.local_path} (local)") # Build minimal PackageInfo for integration from apm_cli.models.apm_package import ( @@ -1540,6 +1514,7 @@ def _collect_descendants(node, visited=None): if not _pre_deploy_security_scan( install_path, diagnostics, package_name=dep_key, force=force, + logger=logger, ): package_deployed_files[dep_key] = [] continue @@ -1559,6 +1534,7 @@ def _collect_descendants(node, visited=None): managed_files=managed_files, diagnostics=diagnostics, package_name=dep_key, + logger=logger, ) 
total_prompts_integrated += int_result["prompts"] total_agents_integrated += int_result["agents"] @@ -1634,8 +1610,6 @@ def _collect_descendants(node, visited=None): diagnostics.warn(_hash_msg, package=dep_ref.get_unique_key()) if logger: logger.progress(_hash_msg) - else: - _rich_info(_hash_msg) safe_rmtree(install_path, apm_modules_dir) skip_download = False @@ -1655,8 +1629,6 @@ def _collect_descendants(node, visited=None): ref_str = f"#{dep_ref.reference}" if logger: logger.download_complete(display_name, ref_suffix=f"{ref_str} (cached)" if ref_str else "cached") - else: - _rich_info(f"✓ {display_name}{ref_str} (cached)") installed_count += 1 if not dep_ref.reference: unpinned_count += 1 @@ -1740,6 +1712,7 @@ def _collect_descendants(node, visited=None): if not _pre_deploy_security_scan( install_path, diagnostics, package_name=dep_key, force=force, + logger=logger, ): package_deployed_files[dep_key] = [] continue @@ -1759,6 +1732,7 @@ def _collect_descendants(node, visited=None): managed_files=managed_files, diagnostics=diagnostics, package_name=dep_key, + logger=logger, ) total_prompts_integrated += int_result["prompts"] total_agents_integrated += int_result["agents"] @@ -1874,14 +1848,13 @@ def _collect_descendants(node, visited=None): if _type_label: if logger: logger.verbose_detail(f" Package type: {_type_label}") - else: - _rich_info(f" └─ Package type: {_type_label}") # Auto-integrate prompts and agents if enabled # Pre-deploy security gate if not _pre_deploy_security_scan( package_info.install_path, diagnostics, package_name=dep_ref.get_unique_key(), force=force, + logger=logger, ): package_deployed_files[dep_ref.get_unique_key()] = [] continue @@ -1903,6 +1876,7 @@ def _collect_descendants(node, visited=None): managed_files=managed_files, diagnostics=diagnostics, package_name=dep_ref.get_unique_key(), + logger=logger, ) total_prompts_integrated += int_result["prompts"] total_agents_integrated += int_result["agents"] @@ -1968,8 +1942,6 @@ def 
_collect_descendants(node, visited=None): diagnostics.error(_orphan_msg) if logger: logger.verbose_detail(f" {_orphan_msg}") - else: - _rich_error(f" └─ {_orphan_msg}") _failed_orphan_count += 1 # Clean up empty parent directories left after file removal if _deleted_orphan_paths: @@ -1980,11 +1952,6 @@ def _collect_descendants(node, visited=None): f"Removed {_removed_orphan_count} file(s) from packages " "no longer in apm.yml" ) - else: - _rich_info( - f"Removed {_removed_orphan_count} file(s) from packages " - "no longer in apm.yml" - ) # Generate apm.lock for reproducible installs (T4: lockfile generation) if installed_packages: @@ -2035,40 +2002,28 @@ def _collect_descendants(node, visited=None): lockfile.save(lockfile_path) if logger: logger.verbose_detail(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") - else: - _rich_info(f"Generated apm.lock.yaml with {len(lockfile.dependencies)} dependencies") except Exception as e: _lock_msg = f"Could not generate apm.lock.yaml: {e}" diagnostics.error(_lock_msg) if logger: logger.error(_lock_msg) - else: - _rich_error(_lock_msg) # Show integration stats (verbose-only when logger is available) if total_links_resolved > 0: if logger: logger.verbose_detail(f"Resolved {total_links_resolved} context file links") - else: - _rich_info(f"✓ Resolved {total_links_resolved} context file links") if total_commands_integrated > 0: if logger: logger.verbose_detail(f"Integrated {total_commands_integrated} command(s)") - else: - _rich_info(f"✓ Integrated {total_commands_integrated} command(s)") if total_hooks_integrated > 0: if logger: logger.verbose_detail(f"Integrated {total_hooks_integrated} hook(s)") - else: - _rich_info(f"✓ Integrated {total_hooks_integrated} hook(s)") if total_instructions_integrated > 0: if logger: logger.verbose_detail(f"Integrated {total_instructions_integrated} instruction(s)") - else: - _rich_info(f"✓ Integrated {total_instructions_integrated} instruction(s)") # Summary is now emitted 
by the caller via logger.install_summary() if not logger: diff --git a/src/apm_cli/integration/mcp_integrator.py b/src/apm_cli/integration/mcp_integrator.py index 43de1e52..3ea5e067 100644 --- a/src/apm_cli/integration/mcp_integrator.py +++ b/src/apm_cli/integration/mcp_integrator.py @@ -61,6 +61,7 @@ def collect_transitive( lock_path: Optional[Path] = None, trust_private: bool = False, logger=None, + diagnostics=None, ) -> list: """Collect MCP dependencies from resolved APM packages listed in apm.lock. @@ -136,18 +137,17 @@ def collect_transitive( f"from transitive package '{pkg.name}' (--trust-transitive-mcp)" ) else: - if logger: - logger.warning( - f"Transitive package '{pkg.name}' declares self-defined " - f"MCP server '{dep.name}' (registry: false). " - f"Re-declare it in your apm.yml or use --trust-transitive-mcp." - ) + _trust_msg = ( + f"Transitive package '{pkg.name}' declares self-defined " + f"MCP server '{dep.name}' (registry: false). " + f"Re-declare it in your apm.yml or use --trust-transitive-mcp." + ) + if diagnostics: + diagnostics.warn(_trust_msg) + elif logger: + logger.warning(_trust_msg) else: - _rich_warning( - f"Transitive package '{pkg.name}' declares self-defined " - f"MCP server '{dep.name}' (registry: false). " - f"Re-declare it in your apm.yml or use --trust-transitive-mcp." - ) + _rich_warning(_trust_msg) continue collected.append(dep) except Exception: @@ -803,6 +803,7 @@ def install( apm_config: dict = None, stored_mcp_configs: dict = None, logger=None, + diagnostics=None, ) -> int: """Install MCP dependencies. 
diff --git a/tests/unit/test_canonicalization.py b/tests/unit/test_canonicalization.py index 1fcfa5f8..75a8b81e 100644 --- a/tests/unit/test_canonicalization.py +++ b/tests/unit/test_canonicalization.py @@ -261,9 +261,8 @@ class TestNormalizeOnWrite: """Test that _validate_and_add_packages_to_apm_yml canonicalizes inputs.""" @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_https_url_stored_as_shorthand(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_https_url_stored_as_shorthand(self, mock_success, mock_validate, tmp_path, monkeypatch): """HTTPS GitHub URL is stored as owner/repo in apm.yml.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -280,9 +279,8 @@ def test_https_url_stored_as_shorthand(self, mock_success, mock_info, mock_valid assert "microsoft/apm-sample-package" in data["dependencies"]["apm"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_ssh_url_stored_as_shorthand(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_ssh_url_stored_as_shorthand(self, mock_success, mock_validate, tmp_path, monkeypatch): """SSH GitHub URL is stored as owner/repo in apm.yml.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -297,9 +295,8 @@ def test_ssh_url_stored_as_shorthand(self, mock_success, mock_info, mock_validat assert validated == ["microsoft/apm-sample-package"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_fqdn_github_stored_as_shorthand(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_fqdn_github_stored_as_shorthand(self, mock_success, mock_validate, tmp_path, 
monkeypatch): """FQDN github.com/owner/repo is stored as owner/repo.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -314,9 +311,8 @@ def test_fqdn_github_stored_as_shorthand(self, mock_success, mock_info, mock_val assert validated == ["microsoft/apm-sample-package"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_gitlab_url_preserves_host(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_gitlab_url_preserves_host(self, mock_success, mock_validate, tmp_path, monkeypatch): """GitLab URL preserves the host in canonical form.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -333,9 +329,7 @@ def test_gitlab_url_preserves_host(self, mock_success, mock_info, mock_validate, assert "gitlab.com/acme/standards" in data["dependencies"]["apm"] @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") - @patch("apm_cli.commands.install._rich_warning") - def test_duplicate_detection_different_forms(self, mock_warn, mock_info, mock_validate, tmp_path, monkeypatch): + def test_duplicate_detection_different_forms(self, mock_validate, tmp_path, monkeypatch): """Installing the same package in different forms doesn't create duplicates.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -357,9 +351,8 @@ def test_duplicate_detection_different_forms(self, mock_warn, mock_info, mock_va assert data["dependencies"]["apm"].count("microsoft/apm-sample-package") == 1 @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_batch_dedup(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_batch_dedup(self, mock_success, mock_validate, tmp_path, monkeypatch): """Installing the same package twice in one batch 
only adds once.""" import yaml apm_yml = tmp_path / "apm.yml" @@ -376,9 +369,8 @@ def test_batch_dedup(self, mock_success, mock_info, mock_validate, tmp_path, mon assert validated[0] == "microsoft/apm-sample-package" @patch("apm_cli.commands.install._validate_package_exists", return_value=True) - @patch("apm_cli.commands.install._rich_info") @patch("apm_cli.commands.install._rich_success") - def test_ref_preserved_in_canonical(self, mock_success, mock_info, mock_validate, tmp_path, monkeypatch): + def test_ref_preserved_in_canonical(self, mock_success, mock_validate, tmp_path, monkeypatch): """Reference is preserved in the canonical form.""" import yaml apm_yml = tmp_path / "apm.yml" From e2e0970c8ead92c7affa589912a089d28bb16052 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 15:11:50 +0100 Subject: [PATCH 26/40] =?UTF-8?q?refactor:=20Wave=203=20=E2=80=94=20replac?= =?UTF-8?q?e=20all=20unicode=20symbols=20with=20ASCII=20STATUS=5FSYMBOLS?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - command_logger.py: ✓→[+], ✗→[x], —→-- - diagnostics.py: ⚠→[!], ✗→[x], —→-- - install.py: ✓→[+], →→->, —→-- - Updated test assertions to match new ASCII symbols All 2874 tests passing. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 10 +++++----- src/apm_cli/core/command_logger.py | 12 ++++++------ src/apm_cli/utils/diagnostics.py | 10 +++++----- tests/acceptance/test_logging_acceptance.py | 10 +++++----- tests/unit/test_command_logger.py | 4 ++-- 5 files changed, 23 insertions(+), 23 deletions(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 0e039abf..83aed5d1 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -125,7 +125,7 @@ def _validate_and_add_packages_to_apm_yml(packages, dry_run=False, dev=False, lo for package in packages: # Validate package format (should be owner/repo, a git URL, or a local path) if "/" not in package and not DependencyReference.is_local_path(package): - reason = "invalid format — use 'owner/repo'" + reason = "invalid format -- use 'owner/repo'" invalid_outcomes.append((package, reason)) if logger: logger.validation_fail(package, reason) @@ -329,11 +329,11 @@ def _check_repo(token, git_env): try: resp = urllib.request.urlopen(req, timeout=15) if verbose_log: - verbose_log(f"API {api_url} → {resp.status}") + verbose_log(f"API {api_url} -> {resp.status}") return True except urllib.error.HTTPError as e: if verbose_log: - verbose_log(f"API {api_url} → {e.code} {e.reason}") + verbose_log(f"API {api_url} -> {e.code} {e.reason}") if e.code == 404 and token: # 404 with token could mean no access — raise to trigger fallback raise RuntimeError(f"API returned {e.code}") @@ -385,7 +385,7 @@ def _check_repo_fallback(token, git_env): return True except urllib.error.HTTPError as e: if verbose_log: - verbose_log(f"API fallback → {e.code} {e.reason}") + verbose_log(f"API fallback -> {e.code} {e.reason}") raise RuntimeError(f"API returned {e.code}") except Exception as e: if verbose_log: @@ -1811,7 +1811,7 @@ def _collect_descendants(node, visited=None): except Exception: pass else: - _rich_success(f"✓ 
{display_name}{ref_suffix}") + _rich_success(f"[+] {display_name}{ref_suffix}") # Track unpinned deps for aggregated diagnostic if not dep_ref.reference: diff --git a/src/apm_cli/core/command_logger.py b/src/apm_cli/core/command_logger.py index 438d539a..0a3b5e7c 100644 --- a/src/apm_cli/core/command_logger.py +++ b/src/apm_cli/core/command_logger.py @@ -110,7 +110,7 @@ def should_execute(self) -> bool: def auth_step(self, step: str, success: bool, detail: str = ""): """Log an auth resolution step (verbose only).""" if self.verbose: - status = "✓" if success else "✗" + status = "[+]" if success else "[x]" msg = f" auth: {status} {step}" if detail: msg += f" ({detail})" @@ -164,13 +164,13 @@ def validation_start(self, count: int): def validation_pass(self, canonical: str, already_present: bool): """Log a package that passed validation.""" if already_present: - _rich_echo(f" ✓ {canonical} (already in apm.yml)", color="dim") + _rich_echo(f" [+] {canonical} (already in apm.yml)", color="dim") else: - _rich_success(f" ✓ {canonical}") + _rich_success(f" [+] {canonical}") def validation_fail(self, package: str, reason: str): """Log a package that failed validation.""" - _rich_error(f" ✗ {package} — {reason}") + _rich_error(f" [x] {package} -- {reason}") def validation_summary(self, outcome: _ValidationOutcome): """Log validation summary and decide whether to continue. 
@@ -229,14 +229,14 @@ def download_start(self, dep_name: str, cached: bool): def download_complete(self, dep_name: str, ref_suffix: str = ""): """Log completion of a package download.""" - msg = f" ✓ {dep_name}" + msg = f" [+] {dep_name}" if ref_suffix: msg += f" ({ref_suffix})" _rich_echo(msg, color="green") def download_failed(self, dep_name: str, error: str): """Log a download failure.""" - _rich_error(f" ✗ {dep_name} — {error}") + _rich_error(f" [x] {dep_name} -- {error}") # --- Install summary --- diff --git a/src/apm_cli/utils/diagnostics.py b/src/apm_cli/utils/diagnostics.py index 82868061..24895450 100644 --- a/src/apm_cli/utils/diagnostics.py +++ b/src/apm_cli/utils/diagnostics.py @@ -309,7 +309,7 @@ def _render_collision_group(self, items: List[Diagnostic]) -> None: count = len(items) noun = "file" if count == 1 else "files" _rich_warning( - f" ⚠ {count} {noun} skipped — local files exist, not managed by APM" + f" [!] {count} {noun} skipped -- local files exist, not managed by APM" ) _rich_info(" Use 'apm install --force' to overwrite") if not self.verbose: @@ -327,7 +327,7 @@ def _render_overwrite_group(self, items: List[Diagnostic]) -> None: count = len(items) noun = "skill" if count == 1 else "skills" _rich_warning( - f" ⚠ {count} {noun} replaced by a different package (last installed wins)" + f" [!] {count} {noun} replaced by a different package (last installed wins)" ) if not self.verbose: _rich_info(" Run with --verbose to see details") @@ -344,16 +344,16 @@ def _render_overwrite_group(self, items: List[Diagnostic]) -> None: def _render_warning_group(self, items: List[Diagnostic]) -> None: for d in items: pkg_prefix = f"[{d.package}] " if d.package else "" - _rich_warning(f" ⚠ {pkg_prefix}{d.message}") + _rich_warning(f" [!] 
{pkg_prefix}{d.message}") if d.detail and self.verbose: _rich_echo(f" └─ {d.detail}", color="dim") def _render_error_group(self, items: List[Diagnostic]) -> None: count = len(items) noun = "package" if count == 1 else "packages" - _rich_echo(f" ✗ {count} {noun} failed:", color="red") + _rich_echo(f" [x] {count} {noun} failed:", color="red") for d in items: - pkg_prefix = f"{d.package} — " if d.package else "" + pkg_prefix = f"{d.package} -- " if d.package else "" _rich_echo(f" └─ {pkg_prefix}{d.message}", color="red") if d.detail and self.verbose: _rich_echo(f" {d.detail}", color="dim") diff --git a/tests/acceptance/test_logging_acceptance.py b/tests/acceptance/test_logging_acceptance.py index e636fb33..7beb521d 100644 --- a/tests/acceptance/test_logging_acceptance.py +++ b/tests/acceptance/test_logging_acceptance.py @@ -140,7 +140,7 @@ def test_happy_path_output( # Validation phase assert "Validating 1 package" in out - assert "✓ owner/repo" in out + assert "[+] owner/repo" in out # Installation phase assert "Installing" in out @@ -167,7 +167,7 @@ def test_not_accessible_message(self, mock_validate): out = result.output assert "not accessible or doesn't exist" in out - assert "✗" in out + assert "[x]" in out @patch(_InstallAcceptanceBase._VALIDATE) def test_verbose_hint_when_not_verbose(self, mock_validate): @@ -193,7 +193,7 @@ def test_no_verbose_hint_when_verbose(self, mock_validate): # The validation failure reason should NOT contain the verbose hint # when already in verbose mode. 
- lines_with_cross = [l for l in result.output.splitlines() if "✗" in l] + lines_with_cross = [l for l in result.output.splitlines() if "[x]" in l] for line in lines_with_cross: assert "run with --verbose" not in line.lower(), ( f"Redundant --verbose hint found in verbose mode: {line}" @@ -305,8 +305,8 @@ def test_mixed_shows_check_and_cross( assert result.exit_code == 0, f"Exit {result.exit_code}: {out}" # Check mark for good package, cross for bad - assert "✓" in out, "Expected ✓ for valid package" - assert "✗" in out, "Expected ✗ for invalid package" + assert "[+]" in out, "Expected [+] for valid package" + assert "[x]" in out, "Expected [x] for invalid package" # Continues to install the valid one assert "1" in out and "failed validation" in out diff --git a/tests/unit/test_command_logger.py b/tests/unit/test_command_logger.py index 3b7bb600..8d655b94 100644 --- a/tests/unit/test_command_logger.py +++ b/tests/unit/test_command_logger.py @@ -105,7 +105,7 @@ def test_auth_step_verbose(self, mock_echo): logger.auth_step("Trying GITHUB_APM_PAT", success=True, detail="found") mock_echo.assert_called_once() call_args = mock_echo.call_args[0][0] - assert "✓" in call_args + assert "[+]" in call_args assert "GITHUB_APM_PAT" in call_args @patch("apm_cli.core.command_logger._rich_echo") @@ -166,7 +166,7 @@ def test_auth_step_failure(self, mock_echo): logger = CommandLogger("test", verbose=True) logger.auth_step("Trying gh CLI", success=False) mock_echo.assert_called_once() - assert "✗" in mock_echo.call_args[0][0] + assert "[x]" in mock_echo.call_args[0][0] class TestInstallLogger: From 49068f23998d1af8b2113a453015c221537cc2ea Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 15:12:23 +0100 Subject: [PATCH 27/40] docs: update CHANGELOG for auth+logging architecture overhaul (#393) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- CHANGELOG.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/CHANGELOG.md 
b/CHANGELOG.md index 70a8a3d2..16b70f97 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added - Documented `${input:...}` variable support in `headers` and `env` MCP server fields, with runtime support matrix and examples (#343) +- Parent chain breadcrumb in transitive dependency error messages — failures now show "root-pkg > mid-pkg > failing-dep" (#393) +- Verbose output coverage: dependency tree resolution summary, auth source/type per download, manifest parsing details, per-dep lockfile SHA, download URL (#393) +- `DownloadCallback` Protocol type for type-safe resolver callbacks (#393) +- `DependencyNode.get_ancestor_chain()` method for human-readable dependency ancestry (#393) +- `diagnostics` parameter threaded through `MCPIntegrator.install()` for deferred warning summaries (#393) +- Chaos mega-manifest auth acceptance test (`--mega` flag) covering 8 auth scenarios in a single install (#393) + +### Changed + +- All CLI output now uses ASCII symbols (`[+]`, `[x]`, `[!]`) instead of Unicode characters (`✓`, `✗`, `⚠`) (#393) +- Migrated `_rich_*` calls to `CommandLogger` across install, compile, uninstall, audit, pack, and bundle modules (#393) +- "No dependencies found" downgraded from warning to info (non-actionable state) (#393) +- Lockfile generation failure upgraded from warning to error (actual failure) (#393) +- Deduplicated `AuthResolver` instantiation in package validation (#393) ## [0.8.3] - 2026-03-20 From 420e97014555b5f86e8f87d3d8d9010960483ba1 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 16:30:53 +0100 Subject: [PATCH 28/40] docs: add Agentic SDLC Practitioner Handbook (WIP) Comprehensive handbook for AI Engineers using GitHub Copilot CLI to ship large multi-concern changes via agent fleets. 
- Three-actor model: User (strategist), Harness (Copilot CLI), Agents (fleet) - Full meta-process: Audit, Plan, Wave Execution, Validate, Ship - Repository instrumentation guide (agents, skills, instructions) - Wave execution model with dependency mapping and parallelization - Test ring pipeline (unit, acceptance, integration) - Escalation protocol and feedback loop for primitive improvement - Autonomous CI/CD patterns for drift detection - Live Dashboard POC via Copilot CLI Hooks (Appendix D) - Prompt examples written from the user's perspective Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- ...agentic-sdlc-for-practitioners-handbook.md | 1306 +++++++++++++++++ 1 file changed, 1306 insertions(+) create mode 100644 WIP/agentic-sdlc-for-practitioners-handbook.md diff --git a/WIP/agentic-sdlc-for-practitioners-handbook.md b/WIP/agentic-sdlc-for-practitioners-handbook.md new file mode 100644 index 00000000..bf096cd1 --- /dev/null +++ b/WIP/agentic-sdlc-for-practitioners-handbook.md @@ -0,0 +1,1306 @@ +# Agentic SDLC for Practitioners + +**A handbook for AI Engineers who ship large chunks of software by orchestrating agent fleets — without writing a single line of classic code.** + +--- + +## Who This Is For + +You are an AI Native Developer. Your job is to think about the overall plan and orchestration, and to capture them in markdown files that compose together — agents, skills, instructions, plugin bundles. You do not write production code. You might not even type — you might speak to terminals running Copilot CLI using voice input. You review specs, not diffs. + +This handbook codifies battle-tested patterns for using GitHub Copilot CLI as your **harness** — the orchestration engine that translates your strategic intent into parallel agent work, tracks progress internally, validates results, and ships.
+ +The patterns were forged on a real PR: **70 files changed, +5,886 / -1,030 lines, 30 commits, 2,874 tests green** — an auth + logging architecture overhaul touching five cross-cutting concerns. One human. Two AI teams. Four waves. Zero regressions. Roughly 90 minutes of wall-clock time for what would be 2-3 days of manual work. + +This is not theory. Every section is immediately actionable. + +--- + +## Table of Contents + +1. [The Three Actors](#1-the-three-actors) +2. [The Thesis](#2-the-thesis) +3. [Repository Instrumentation: Markdown Is Your Codebase](#3-repository-instrumentation-markdown-is-your-codebase) +4. [The Meta-Process](#4-the-meta-process) +5. [The Planning Discipline](#5-the-planning-discipline) +6. [Team Topology](#6-team-topology) +7. [Wave Execution](#7-wave-execution) +8. [Checkpoint Discipline](#8-checkpoint-discipline) +9. [The Test Ring Pipeline](#9-the-test-ring-pipeline) +10. [Escalation Protocol](#10-escalation-protocol) +11. [The Feedback Loop](#11-the-feedback-loop) +12. [Autonomous CI/CD](#12-autonomous-cicd) +13. [Anti-Patterns](#13-anti-patterns) +14. [Scaling Characteristics](#14-scaling-characteristics) +15. [Example Scenarios](#15-example-scenarios) + +--- + +## 1. The Three Actors + +Every interaction in the agentic SDLC involves three distinct actors. Keeping them separate is critical to understanding the system. + +### You (the AI Native Developer) + +You are the strategist. You think in plans, not in code. You: + +- **Iterate on specs** — your primary creative output is the plan (scope, teams, waves, principles) +- **Commission audits** — you tell the harness what to investigate +- **Validate results** — you skim specs, spot-check outputs, approve or reject +- **Make strategic calls** — scope decisions, trade-offs, escalation handling +- **Extract lessons** — when something fails, you improve the agent primitives + +You parallelize *planning tasks*, not coding tasks. 
You might have multiple Copilot CLI sessions running — one exploring architecture, one drafting a logging plan, one reviewing a security audit — all converging into a single spec. + +You might work from a laptop, a phone (GitHub App on iPhone), or by speaking to terminals using voice input (e.g., Handy for macOS). The interface is natural language. The output is markdown. + +### The Harness (GitHub Copilot CLI) + +The harness is your orchestration engine. When you describe what you want, the harness: + +- **Translates your intent into agent dispatches** — it decides which tools to call, how to parallelize, when to checkpoint +- **Tracks state internally** — the harness maintains task lists, dependency graphs, and progress tracking in its own session database (SQL tables, session state). You see the plan; the harness manages the machinery. +- **Runs the test pipeline** — after each wave, the harness executes test suites and reports results +- **Manages agent lifecycle** — dispatching, monitoring, reading results, handling failures +- **Activates skills automatically** — when code patterns match, the harness loads the relevant skill rules + +You interact with the harness in natural language. You say "dispatch the architecture team for Wave 0" and the harness translates that into parallel agent launches, SQL state updates, file edits, and test runs. The harness's internal mechanics (tool calls, SQL queries, session state management) are implementation details — visible if you want to inspect them, but not something you need to manage. + +### The Agents (the Fleet) + +Agents are specialized AI engineers dispatched by the harness. 
Each agent: + +- Has a **persona** defined by an agent file (`.github/agents/*.agent.md`) +- Follows **skill rules** activated by the code it touches (`.github/skills/*/SKILL.md`) +- Operates in a **stateless context** — every dispatch starts fresh, with only the prompt as context +- **Writes code, runs tests, reports back** — then terminates + +Agents don't know about each other. They don't know about the wave graph. They don't manage state. They receive surgical instructions and execute them. The harness coordinates everything. + +``` +You (strategist) + │ + │ natural language prompts + ▼ +Harness (Copilot CLI) + │ + │ parallel dispatches with precise instructions + ▼ +Agents (the fleet) + │ + │ code changes, test results, findings + ▼ +Harness (validates, checkpoints, reports back to you) +``` + +--- + +## 2. The Thesis + +Traditional software development scales linearly with humans. Agentic development scales with orchestration quality. + +**You do not parallelize coding tasks. You parallelize planning tasks.** Your creative energy goes into the spec — the plan, the team composition, the wave structure, the principles. Once the spec is right, execution is mechanical: the harness dispatches agents, agents write code, tests validate, you merge. + +**Your spec always carries the definition of the Agent Team who will implement it.** The plan isn't just "what to build" — it's "who builds it, in what order, with what constraints." The team composition (architect, logging expert, auth specialist) is part of the spec, not an afterthought. 
+ +**If you are not confident in your pipeline, you haven't engineered it correctly.** Confidence comes from: + +- Agent primitives (personas, skills, instructions) that encode your project's patterns +- Test rings that catch regressions at every checkpoint +- Escalation protocols that surface genuine decisions to you and handle everything else autonomously +- Feedback loops that harden the system after every failure + +**Green CI/CD means click merge and don't look back.** If your test rings, code review agents, and security scans pass — that's the signal. You don't re-read the code. You trust the pipeline you engineered. + +**An AI Engineer extracts lessons from failure and improves the Agent Primitives.** When an agent makes a mistake, you don't fix the code — you fix the agent's persona, the skill rules, or the instructions that led to the mistake. The system gets better with every iteration. You code in markdown. + +--- + +## 3. Repository Instrumentation: Markdown Is Your Codebase + +Before the agentic SDLC works at scale, your repository needs three instrumentation layers. These are one-time investments that pay dividends on every future change. They are all markdown files. + +### Layer 1: Agent Personas (`.github/agents/*.agent.md`) + +Agent files define *who* your AI engineers are. Each file creates a specialist with domain knowledge, calibrated judgment, and a consistent voice. + +```yaml +# .github/agents/python-architect.agent.md +--- +name: python-architect +description: >- + Expert on Python design patterns, modularization, and scalable architecture. + Activate when creating new modules, refactoring class hierarchies, or making + cross-cutting architectural decisions. +model: claude-opus-4.6 +--- + +# Python Architect + +You are an expert Python architect specializing in CLI tool design. 
+ +## Design Philosophy +- Speed and simplicity over complexity +- Solid foundation, iterate +- Pay only for what you touch + +## Patterns You Enforce +- BaseIntegrator for all file-level integrators +- CommandLogger for all CLI output +- AuthResolver for all credential access +``` + +**Design principles for agent personas:** + +| Principle | Why | Example | +|-----------|-----|---------| +| Domain-specific knowledge | Generic agents make generic mistakes | Auth expert knows EMU tokens use standard prefixes | +| Opinionated defaults | Reduces decisions per task | "Always use `logger.progress()`, never `_rich_info()`" | +| Named patterns | Agents can reference by name | "Follow the BaseIntegrator pattern" | +| Anti-patterns section | Prevent known mistakes | "Never instantiate AuthResolver per-request" | + +**Recommended personas for a typical project:** + +``` +.github/agents/ +├── python-architect.agent.md # Structure, patterns, SoC +├── cli-logging-expert.agent.md # Output UX, CommandLogger +├── auth-expert.agent.md # Token management, credential flows +├── doc-writer.agent.md # Documentation consistency +└── security-reviewer.agent.md # Injection, traversal, leaks +``` + +### Layer 2: Skills (`.github/skills/*/SKILL.md`) + +Skills are *when-to-activate* rules paired with *how-to-do-it* guidelines. The harness fires them automatically when it detects matching code patterns. + +```yaml +# .github/skills/cli-logging-ux/SKILL.md +--- +name: cli-logging-ux +description: > + Activate whenever code touches console helpers, DiagnosticCollector, + STATUS_SYMBOLS, CommandLogger, or any user-facing terminal output. +--- + +## Decision Framework + +### 1. The "So What?" Test +Every warning must answer: what should the user do about this? + +### 2. 
The Traffic Light Rule +| Color | Helper | Meaning | +|--------|------------------|--------------------| +| Green | _rich_success() | Completed | +| Yellow | _rich_warning() | User action needed | +| Red | _rich_error() | Cannot continue | +| Blue | _rich_info() | Status update | + +### 3. The Newspaper Test +Can the user scan output like headlines? +``` + +**Skills vs. Agents**: Agents are *who* (persona + model). Skills are *what* (rules + patterns). A skill references an agent persona for its voice but provides the domain-specific rules the agent follows. + +### Layer 3: Instructions (`.github/instructions/*.instructions.md`) + +Instructions are file-pattern-scoped rules that the harness applies automatically when code in matching paths is edited. + +```yaml +# .github/instructions/integrators.instructions.md +--- +applyTo: "src/app/integration/**" +description: "Architecture rules for file-level integrators" +--- + +# Integrator Architecture + +## Required structure +Every integrator MUST extend BaseIntegrator and return IntegrationResult. + +## Base-class methods — use, don't reimplement +| Operation | Use | Never | +|--------------------|------------------------------|--------------------------| +| Collision detection| self.check_collision() | Custom existence checks | +| File discovery | self.find_files_by_glob() | Ad-hoc os.walk | +``` + +**The three layers form a cascade:** + +``` +Instructions (auto-scoped by file path) + └─ Skills (auto-activated by code patterns) + └─ Agents (dispatched by harness for specific tasks) +``` + +These markdown files *are* your codebase as an AI Engineer. When an agent makes a mistake, you don't fix the generated code — you fix the agent persona, the skill rules, or the instruction file that led to the mistake. + +--- + +## 4. 
The Meta-Process + +Every large change goes through these phases, in order: + +``` +AUDIT ──→ PLAN ──→ WAVE[0..N] ──→ VALIDATE ──→ SHIP + ↑ │ + └── ADAPT (on escalation only) +``` + +### Phase: AUDIT + +**Your action**: Tell the harness to dispatch expert agents to analyze the codebase from different angles. + +**What you say**: "Dispatch the architect and the logging expert to audit the auth and logging code. I want severity-ranked findings with file:line citations." + +**What happens**: The harness launches 2-4 parallel explore agents, each with a distinct audit lens (architecture, logging/UX, security, performance). They produce ranked findings with `CRITICAL / HIGH / MODERATE / LOW` severity, exact file:line references, and remediation guidance. + +**Key rule**: Audits are *read-only*. The agents explore, they don't modify. + +### Phase: PLAN + +**Your action**: Review audit findings. Decide scope. Define teams. Approve the wave structure. + +**What you say**: "Include all findings in scope. Use two teams: architecture led by the python-architect, logging led by the cli-logging-expert. Organize into waves." + +**What happens**: The harness synthesizes audit reports into a plan (`plan.md`) with scope, findings, wave breakdown, and team assignments. Internally, it tracks tasks and dependencies in its session database. You see the plan; the harness manages the execution graph. + +**Key rule**: No implementation starts until you approve the plan. This is the single most important gate. Take your time here — this is where your leverage is highest. + +### Phase: WAVE EXECUTION + +**Your action**: Approve each wave. Monitor progress. Intervene only on escalation. + +**What you say**: "Execute Wave 0" or "Deploy the fleet" (if you trust the plan enough to run all waves). + +**What happens**: The harness dispatches parallel agents for each wave, grouped by file to avoid conflicts. 
It tracks which tasks are in progress, waits for agent completions, runs the test suite, and reports results. Between waves, it checkpoints: commit, update task status, verify no regressions. + +### Phase: VALIDATE + +**Your action**: Review the final state. Spot-check critical changes. + +**What happens**: The harness runs the full test suite, acceptance tests, and optionally integration/E2E tests. It produces a summary of what changed, what passed, and any diagnostics. + +### Phase: SHIP + +**Your action**: Approve the push. Update changelog if the harness hasn't already. + +**What happens**: Commit, changelog, push. If CI is green, merge. Don't look back. + +--- + +## 5. The Planning Discipline + +Planning is where you have the most leverage. A mediocre plan with perfect execution produces mediocre software. A great plan with imperfect execution produces great software — because the test rings catch the imperfections. + +### The Spec Carries the Team + +Your plan isn't just "what to build." It includes: + +1. **Scope**: What's in, what's out, what's follow-up +2. **Agent Team**: Which personas implement which concerns +3. **Wave Graph**: Dependency-ordered execution batches +4. **Principles**: Priority-ordered values that anchor every decision +5. **Constraints**: What NOT to change (critical for surgical edits) + +Example plan structure: + +```markdown +## Scope +Auth resolver dedup, verbose coverage gaps, CommandLogger migration, unicode cleanup. +Out of scope: New auth providers, CLI help text changes. + +## Teams +- Architecture: python-architect leads. Owns: type safety, SoC, dead code. +- Logging/UX: cli-logging-expert leads. Owns: verbose coverage, CommandLogger, symbols. 
+ +## Waves +Wave 0 (foundation): Protocol types, method moves, dedup — fully parallel +Wave 1 (core): Verbose coverage — depends on Wave 0 APIs +Wave 2 (migration): CommandLogger migration — depends on Wave 1 patterns +Wave 3 (polish): Unicode cleanup — depends on Wave 2 completeness + +## Principles (priority order) +1. SECURITY — no token leaks, no path traversal +2. CORRECTNESS — tests pass, behavior preserved +3. UX — world-class developer experience in every message +4. KISS — simplest correct solution +5. SHIP SPEED — favor shipping over perfection +``` + +### Red Teaming and Panel Discussions + +For critical plans, have the agent team iterate through adversarial review before execution: + +1. **Architect reviews the logging plan** — "Does this create coupling between modules?" +2. **Logging expert reviews the architecture plan** — "Does this break verbose output contracts?" +3. **Security reviewer scans both** — "Does any change expose tokens in logs?" + +The agents iterate until reaching consensus. The release manager agent (or a business-owner persona) has the last word on trade-offs. You review the consensus, not the individual arguments. + +### Code Review + Security as Parallel Gates + +After the agent team produces code, two more agents run in parallel: + +- **Code Reviewer**: Surfaces only bugs, logic errors, and security vulnerabilities. Never comments on style. +- **Security Scanner**: Checks for token leaks, path traversal, injection, unsafe operations. + +If either finds issues, the work returns to the agent team in a loop. You're only pulled in if the loop doesn't converge (escalation — see §10). + +--- + +## 6. Team Topology + +Structure your AI teams by concern, not by file. Each team has a lead persona (the agent file) and members (agents dispatched by the harness following the relevant skill). 
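The concern-to-team mapping described above can be made concrete as data rather than prose. A minimal sketch — the `Team` structure, `team_for` helper, and all names here are illustrative assumptions, not an APM or Copilot CLI API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Team:
    """One concern-scoped team: a lead persona plus the skill its agents follow."""
    concern: str
    lead_agent: str        # would resolve to .github/agents/<lead_agent>.agent.md
    skill: str             # would resolve to .github/skills/<skill>/SKILL.md
    owns: tuple[str, ...] = ()


TEAMS = [
    Team("architecture", "python-architect", "python-architecture",
         owns=("type safety", "pattern compliance", "dead code removal")),
    Team("logging-ux", "cli-logging-expert", "cli-logging-ux",
         owns=("verbose coverage", "CommandLogger migration", "symbol cleanup")),
]


def team_for(concern: str) -> Team:
    """Look up which team owns a given concern area."""
    for team in TEAMS:
        if concern in team.owns:
            return team
    raise KeyError(f"no team owns concern: {concern}")


print(team_for("verbose coverage").lead_agent)  # cli-logging-expert
```

Encoding the topology this way keeps the spec's "who builds what" answerable by lookup instead of by re-reading the plan.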
+ +### Reference: Two-Team Structure + +For most cross-cutting changes, two teams cover the space: + +``` +┌─────────────────────────────┐ ┌─────────────────────────────┐ +│ ARCHITECTURE TEAM │ │ DOMAIN EXPERT TEAM │ +│ │ │ │ +│ Lead: python-architect │ │ Lead: cli-logging-expert │ +│ Skill: python-architecture│ │ Skill: cli-logging-ux │ +│ │ │ │ +│ Owns: │ │ Owns: │ +│ - Type safety │ │ - Verbose coverage │ +│ - Pattern compliance │ │ - CommandLogger migration │ +│ - SoC violations │ │ - Traffic-light fixes │ +│ - Dead code removal │ │ - Unicode cleanup │ +│ - DiagnosticCollector │ │ - Actionability audit │ +│ routing │ │ │ +└─────────────────────────────┘ └─────────────────────────────┘ +``` + +### Scaling Up + +For larger changes, split further: + +| Team | Lead Agent | Skill | Owns | +|------|-----------|-------|------| +| Architecture | python-architect | python-architecture | Patterns, types, SoC | +| Logging/UX | cli-logging-expert | cli-logging-ux | Output, verbose, symbols | +| Auth | auth-expert | auth | Tokens, credentials, hosts | +| Security | security-reviewer | (inline) | Scanning, traversal, leaks | +| Docs | doc-writer | (inline) | Guides, reference, changelog | + +### Scaling Down + +For focused changes (single concern, < 20 files): + +| Approach | When | +|----------|------| +| Solo expert | One agent with the relevant skill, single wave | +| Audit + fix | One explore agent to audit, one general-purpose to fix | + +--- + +## 7. Wave Execution + +Waves are the core execution unit. Each wave is a batch of tasks with no unmet dependencies, dispatched as parallel agents grouped by file, followed by a checkpoint. 
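The "no unmet dependencies" rule is just topological layering over the task graph. A minimal sketch of how such batching could work — `plan_waves` and the task names are illustrative assumptions, not the harness's real internals:

```python
def plan_waves(tasks: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into waves: each wave holds only tasks whose
    dependencies completed in an earlier wave."""
    remaining = {t: set(deps) for t, deps in tasks.items()}
    waves: list[list[str]] = []
    done: set[str] = set()
    while remaining:
        # A task is ready when every dependency is already done.
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves


# Example mirroring the wave graph above: foundation work unblocks later waves.
deps = {
    "protocol-types": set(),
    "dedup-auth": set(),
    "verbose-coverage": {"protocol-types"},
    "logger-migration": {"verbose-coverage"},
    "unicode-cleanup": {"logger-migration"},
}
print(plan_waves(deps))
# [['dedup-auth', 'protocol-types'], ['verbose-coverage'], ['logger-migration'], ['unicode-cleanup']]
```

Note that the foundation tasks land together in wave 0 precisely because they share no dependencies — which is why they can be dispatched fully in parallel.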
+ +### Wave Structure + +``` +Wave 0: FOUNDATION ← No dependencies, fully parallel +Wave 1: CORE CHANGES ← Depends on Wave 0 outputs +Wave 2: MIGRATION ← Depends on Wave 1 patterns being stable +Wave 3: POLISH ← Depends on Wave 2 being complete +Wave 4: VALIDATE ← Final gate +``` + +### Rules for Wave Design + +**Rule 1: One file, one agent per wave.** + +The harness edits files using string matching. If two parallel agents edit the same file, the second agent's edits will fail because the first agent changed the file content. + +``` +# GOOD: Each agent owns distinct files +Agent A: apm_resolver.py, dependency_graph.py +Agent B: install.py (all sections) + +# BAD: Two agents on the same file +Agent B: install.py (lines 240-440) +Agent C: install.py (lines 580-2100) ← CONFLICT +``` + +**Rule 2: Foundation before migration.** + +Type changes, protocol definitions, and method moves go in Wave 0. Code that *uses* those new APIs goes in Wave 1+. + +**Rule 3: Small waves ship faster than large waves.** + +A wave with 2-3 agents completes in 3-5 minutes. A wave with 8 agents takes 8-10 minutes (longest agent dominates). Prefer more smaller waves. + +**Rule 4: Every wave ends with green tests.** + +Non-negotiable. The harness runs the full test suite after each wave and commits only if all tests pass. + +### How the Harness Executes a Wave + +When you say "execute Wave 0", the harness: + +1. Identifies which tasks in the plan are ready (no unfinished dependencies) +2. Groups tasks by file ownership to avoid conflicts +3. Dispatches parallel agents with precise instructions (files, line numbers, code patterns, constraints) +4. Waits for all agents to complete +5. Runs the full test suite +6. If green: commits with a wave-specific message, marks tasks as done +7. If red: reports failures to you for triage + +You see the results. The harness manages the dispatching, state tracking, and checkpointing internally. + +--- + +## 8. 
Checkpoint Discipline + +A checkpoint is the pause point between waves. It serves four purposes: + +### 1. Validation Gate + +The harness runs the full test suite after every wave. This is non-negotiable — no wave is considered complete until tests are green. + +### 2. Spot-Check + +You review a sample of agent changes. Focus on: + +- **Boundary conditions**: Did the agent handle the edge case you specified? +- **Pattern compliance**: Did the agent follow established patterns, or invent new ones? +- **Scope discipline**: Did the agent change only what was asked? + +Quick checks you can ask for: + +``` +"Show me the diff for install.py" +"How many _rich_info calls remain in the codebase?" +"Did the agent add tests for the new code path?" +``` + +### 3. Commit Boundary + +Every wave gets its own commit. This enables: +- Bisection if a later wave introduces a regression +- Reverting a single wave without affecting others +- Clean PR history for reviewers + +### 4. Process Adaptation (Rare) + +At a checkpoint, you may adapt the remaining plan if: +- An agent discovered a blocker not in the original audit +- A task turned out to be larger than expected +- Two tasks created an unexpected conflict + +**Rule**: Adaptation is conservative. Add tasks, split tasks, reorder waves. Never skip validation. + +--- + +## 9. The Test Ring Pipeline + +Tests are the safety net that makes the entire system trustworthy. Without them, you cannot confidently merge. With them, green CI/CD means click merge and don't look back. + +### Ring 1: Unit Tests (Every Wave) + +Fast, deterministic, run after every wave. These catch regressions in logic, type errors, and broken interfaces. + +**Coverage principle**: When modifying existing code, add tests for the code paths you touch, on top of tests for new functionality. + +### Ring 2: Acceptance Tests (After Final Wave) + +Scenario-based tests that verify end-to-end behavior from the user's perspective. 
Mocked external dependencies, but real command invocations and output verification. + +### Ring 3: Integration / E2E Tests (Pre-Ship) + +Real-world tests against actual infrastructure. These require credentials, network access, and real repositories. They validate the exact binary/package that ships. + +### Test Ring Policy + +| Ring | When | Blocks | Flake Policy | +|------|------|--------|--------------| +| Unit | Every wave | Next wave | Zero tolerance | +| Acceptance | Final wave | Ship | Zero tolerance | +| Integration | Pre-ship | Ship | Re-run once, then investigate | + +### The Confidence Argument + +If your test rings are comprehensive and passing, you don't need to read every line of agent-generated code. The tests *are* the specification. If the tests pass, the code meets the spec. If you're not confident in this, it means your test coverage isn't good enough — fix the tests, not the process. + +--- + +## 10. Escalation Protocol + +The agentic process runs autonomously within the plan. Escalation happens when the harness or an agent encounters something outside the plan's scope. + +### Escalation Levels + +| Level | Trigger | Who Handles | Action | +|-------|---------|-------------|--------| +| **L0: Self-heal** | Agent hits a test failure it can debug | Agent (via harness retry) | Fix and continue | +| **L1: Harness** | Agent reports a blocker or unexpected finding | Harness adapts plan | Re-dispatch with refined prompt | +| **L2: You decide** | Trade-off between competing principles | You | Decide, document rationale | +| **L3: Scope change** | Finding requires work outside the current PR | You + stakeholders | Create follow-up issue | + +### When the Harness Escalates to You + +The harness brings you in (L2) only when: + +1. **Principle conflict**: "KISS says skip this, but security says we must fix it." +2. **Scope explosion**: "Fixing this properly requires changing 15 more files." +3. 
**Breaking change**: "This fix changes CLI output that users depend on." +4. **Ambiguity**: "The audit found two valid approaches; both have trade-offs." + +Everything else the harness handles autonomously. If an agent fails, the harness retries with a refined prompt. If a test fails, the harness debugs it. If a task is larger than expected, the harness splits it. + +### The Anchoring Principle + +Every decision — yours or the harness's — is anchored on project principles, in priority order: + +``` +1. SECURITY — No token leaks, no path traversal, no injection +2. CORRECTNESS — Tests pass, behavior preserved, edge cases handled +3. UX — World-class developer experience in every message +4. KISS — Simplest solution that's correct and secure +5. SHIP SPEED — Favor shipping over perfection +``` + +When principles conflict, higher-priority wins. Document the trade-off in the commit message. + +--- + +## 11. The Feedback Loop + +When something goes wrong — an agent makes a mistake, a test ring misses a bug, a pattern drifts — you don't fix the symptom. You fix the system. + +### The Primitive Improvement Cycle + +``` +Failure observed + │ + ▼ +Root cause: which primitive failed? + │ + ├─ Agent persona too generic? → Add domain knowledge to .agent.md + ├─ Skill rules incomplete? → Add anti-pattern to SKILL.md + ├─ Instructions missing? → Add file-pattern rule to .instructions.md + ├─ Test coverage gap? → Add acceptance test for the scenario + └─ Harness prompt too vague? 
→ Refine your prompt template +``` + +**Examples from the reference case:** + +| Failure | Root Cause | Primitive Fix | +|---------|-----------|---------------| +| Agent used `_rich_info()` directly instead of `logger.progress()` | Skill didn't explicitly ban direct calls | Added "Rule: No direct `_rich_*` in commands" to cli-logging-ux SKILL.md | +| Agent invented a new collision detection pattern | Instructions didn't list all base-class methods | Added full "use, don't reimplement" table to integrators.instructions.md | +| Agent claimed success but file wasn't persisted | Harness trusted agent self-report | Added spot-check step to checkpoint protocol | +| Unicode symbols weren't consistent | No single source of truth for symbols | Created STATUS_SYMBOLS dict, added to skill rules | + +This is how you "code" as an AI Engineer. Every failure makes the markdown primitives better. Every improvement makes future agents more reliable. The system compounds. + +--- + +## 12. Autonomous CI/CD + +The agentic SDLC doesn't stop when you merge. Autonomous GitHub Agentic Workflows run on a schedule to catch drift, gaps, and issues that accumulate over time. + +### Scheduled Agentic Workflows + +| Workflow | Schedule | What It Does | +|----------|----------|-------------| +| Drift detection | Daily | Compares code patterns against instruction rules, flags violations | +| Dependency audit | Weekly | Scans for outdated deps, security advisories, license issues | +| Test coverage check | On PR | Verifies new code has adequate test coverage | +| Documentation sync | On PR | Checks if code changes require doc updates | + +### The Autonomous Fix Loop + +For low-risk issues (formatting, dependency bumps, doc sync), the workflow can: + +1. Create a branch +2. Dispatch an agent to fix the issue +3. Run the test ring pipeline +4. Open a PR with the fix +5. 
If CI is green, auto-merge (or notify you for approval) + +For higher-risk issues (pattern violations, security findings), the workflow opens an issue with findings and waits for you to plan the fix using the standard AUDIT → PLAN → WAVE flow. + +### Why This Matters + +Without autonomous workflows, entropy wins. Patterns drift. Dependencies rot. Documentation goes stale. The scheduled workflows are your immune system — they detect problems before they compound. + +--- + +## 13. Anti-Patterns + +### The Solo Hero + +Dispatching one massive agent to do everything. It will lose context, make inconsistent decisions, and produce unreviewed code. + +**Fix**: Split into focused agents with clear scope boundaries. One file, one agent per wave. + +### The Context Bomb + +Giving an agent the entire codebase as context. Agents work best with *precise* instructions: exact files, line numbers, before/after patterns. + +**Fix**: Have the harness read the relevant files first, then give agents surgical instructions. + +### The Trust Fall + +Accepting agent output without validation. Agents can miss edge cases, introduce subtle bugs, claim success when tests actually fail, or report edits that weren't persisted. + +**Fix**: The test ring pipeline catches most issues. Spot-check critical changes at checkpoints. Always verify file state matches what the agent reported. + +### The Scope Creep Agent + +An agent told to "fix logging in install.py" decides to also refactor imports, add type hints to unrelated functions, and reorganize the file. + +**Fix**: Include explicit "Do NOT modify" rules in the plan. Be specific about scope boundaries. + +### Same-File Parallel Edits + +Two agents editing the same file simultaneously. The second agent's changes won't apply because the first agent changed the file. + +**Fix**: One file, one agent per wave. Group related changes to the same file into a single agent's task. 
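A wave plan can be mechanically checked for this anti-pattern before dispatch. A minimal, hypothetical validator (not part of APM or the harness):

```python
# Flag any file assigned to more than one agent within a single wave.
from collections import defaultdict

def file_conflicts(wave):
    """wave: {agent_name: [file_paths]} -> {file: [agents]} for each conflict."""
    owners = defaultdict(list)
    for agent, files in wave.items():
        for path in files:
            owners[path].append(agent)
    return {path: agents for path, agents in owners.items() if len(agents) > 1}

wave = {
    "agent-a": ["apm_resolver.py", "dependency_graph.py"],
    "agent-b": ["install.py"],
    "agent-c": ["install.py"],  # conflict: two agents on install.py
}
print(file_conflicts(wave))  # {'install.py': ['agent-b', 'agent-c']}
```

An empty result means the wave is safe to dispatch in parallel; any non-empty result means the conflicting tasks should be merged into one agent.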
+ +### Skipping Checkpoints + +"Wave 1 worked, Wave 2 probably works too, let me just commit both." Then Wave 3 fails and you can't bisect. + +**Fix**: Test after every wave. Commit after every wave. The 2-minute cost saves hours of debugging. + +### Not Fixing the Primitives + +An agent keeps making the same mistake across sessions. You keep correcting it manually. + +**Fix**: Find the root primitive (agent persona, skill, instruction) and add the missing rule. The system should learn, not repeat. + +--- + +## 14. Scaling Characteristics + +### What Scales Linearly + +| Dimension | How It Scales | +|-----------|---------------| +| Files per wave | +1 agent per non-overlapping file group | +| Concerns per change | +1 team per concern | +| Test count | Run time increases, but the test ring pipeline structure is fixed | + +### What Doesn't Scale + +| Dimension | Bottleneck | Mitigation | +|-----------|-----------|------------| +| Same-file changes | Sequential within file | Group into fewer, larger agents | +| Cross-file dependencies | Wave serialization | Minimize cross-file APIs in Wave 0 | +| Your attention | Review bandwidth | Trust the test ring; spot-check, don't audit every line | + +### Observed Performance (Reference Case) + +``` +Concern scope: 5 cross-cutting concerns +Files changed: 70 +Lines changed: +5,886 / -1,030 +Commits: 30 +Tests: 2,874 passing +Agents dispatched: 15 (across 4 waves + 2 audits) +Agent failures: 2 (1 connection error, 1 incomplete — both recovered) +Your interventions: 3 (scope decision, agent recovery, test fix) +Wall-clock time: ~90 minutes (including audit, plan, all waves) +Regressions: 0 +``` + +### The Safety Argument + +Agentic development is *safer* than manual development for large changes because: + +1. **Forced decomposition**: You must plan before coding. Most bugs come from insufficient planning. +2. **Parallel review**: Multiple specialized agents catch different classes of bugs. +3. 
**Mandatory test gates**: Every wave runs the full suite. No "I'll test later." +4. **Scope discipline**: Agents do exactly what they're told. No "while I'm here" changes. +5. **Audit trail**: Wave commits + plan.md = full provenance. + +The pattern doesn't eliminate bugs. It eliminates the *categories* of bugs that come from cognitive overload, inconsistency across files, and deferred testing. + +--- + +## 15. Example Scenarios + +### Scenario A: Auth + Logging Overhaul (the reference case) + +**Scope**: 70 files, 5 concerns (auth, logging, migration, unicode, testing) + +| Phase | What You Did | What the Harness Did | Duration | +|-------|-------------|---------------------|----------| +| Audit | "Dispatch architect + logging expert" | 2 parallel explore agents | 3 min | +| Plan | Reviewed findings, set scope, approved waves | Created plan.md, tracked 19 tasks internally | 5 min | +| Wave 0 | "Execute Wave 0" | 2 parallel agents (resolver + install) | 5 min | +| Wave 1+2 | "Deploy the fleet" | 5 parallel agents (verbose + migration) | 8 min | +| Wave 2b | Recovered a stuck agent manually | 2 parallel agents + harness retry | 7 min | +| Wave 3 | "Polish wave" | 1 agent (unicode cleanup) | 4 min | +| Validate | Spot-checked install.py, reviewed CHANGELOG | Full suite, commit, push | 2 min | + +**Total wall-clock**: ~35 minutes for what would be 2-3 days of manual work. + +### Scenario B: New Module Addition + +**Scope**: Add a new `apm bundle` command with export functionality. + +``` +Audit: 1 explore agent to assess existing patterns +Plan: You define module structure + team +Wave 0: Architecture team designs module skeleton (1 agent) +Wave 1: Implement core module (1 agent) +Wave 2: Wire into CLI + add tests (2 agents: CLI wiring + test writing) +Wave 3: Documentation (1 doc-writer agent) +``` + +### Scenario C: Cross-Cutting Refactor + +**Scope**: Replace all direct `os.getenv()` calls with a centralized config system. 
+ +``` +Audit: 1 explore agent to find all os.getenv() call sites +Plan: Group by module, design config class +Wave 0: Create config module + tests (1 agent) +Wave 1: Migrate each module in parallel (5 agents, one per module) +Wave 2: Remove old imports, verify no direct calls remain (1 agent) +``` + +### Scenario D: Security Hardening + +**Scope**: Add path traversal protection across all file operations. + +``` +Audit: Security expert + architecture expert (parallel) +Wave 0: Create path_security.py utility (1 agent) +Wave 1: Replace shutil.rmtree with safe_rmtree everywhere (3 agents by module) +Wave 2: Add ensure_path_within() to all user-derived paths (3 agents) +Wave 3: Security-focused test suite (1 agent) +``` + +--- + +## Appendix A: Repository Setup Checklist + +``` +□ .github/agents/ — At least: architect, domain-expert, doc-writer +□ .github/skills/ — One skill per cross-cutting concern +□ .github/instructions/ — File-pattern rules for key directories +□ .github/copilot-instructions.md — Project-wide conventions +□ Test suite — Fast unit tests (< 3 min), acceptance tests +□ CHANGELOG.md — Keep a Changelog format +□ CI pipeline — PR tests, post-merge validation +``` + +## Appendix B: What You Say to the Harness (Prompt Examples) + +These are examples of what *you* type (or speak) to Copilot CLI at each phase. The harness translates these into agent dispatches, tool calls, and state management internally. + +### Audit Prompt + +``` +Dispatch the python-architect and cli-logging-expert to audit the auth and +logging code. For each finding, I want severity (CRITICAL/HIGH/MODERATE/LOW), +file:line, current behavior, expected behavior, and remediation. + +Focus on: pattern violations, type safety, verbose coverage gaps, +traffic-light compliance. Do NOT suggest changes to the test infrastructure. +``` + +### Planning Prompt + +``` +Synthesize both audit reports into a plan. Include ALL findings in scope — +nothing deferred. 
Use two teams: architecture led by python-architect, +logging led by cli-logging-expert. Organize into waves by dependency. +Every wave must end with green tests. +``` + +### Wave Execution Prompt + +``` +Execute Wave 0. The foundation tasks have no dependencies — dispatch them +in parallel. Group by file to avoid conflicts. +``` + +Or, if you trust the plan fully: + +``` +Deploy the fleet in autopilot. Execute all waves sequentially. Stop and +escalate only if tests fail or an agent reports a blocker. +``` + +### Spot-Check Prompt + +``` +Show me the diff for install.py since the last commit. How many +_rich_info calls remain? Did any agent touch files outside their scope? +``` + +### Recovery Prompt (when an agent gets stuck) + +``` +The wave2-install-logger agent seems stuck. Take over its remaining tasks +manually. The agent was supposed to migrate 58 _rich_* calls in install.py +to use CommandLogger. Check what it completed and finish the rest. +``` + +## Appendix C: Harness Internals + +This section documents how the harness (Copilot CLI) manages state internally. You don't need to manage these details, but understanding them helps you debug issues and make better prompts. + +### Task Tracking + +The harness maintains a SQL database in its session state with: + +- **`todos` table**: Task ID, title, description, status (pending/in_progress/done/blocked) +- **`todo_deps` table**: Dependency edges between tasks + +When you say "execute Wave 0", the harness queries for tasks with no unfinished dependencies, marks them as in_progress, dispatches agents, and marks them done when tests pass. + +### Session State + +The harness maintains: +- `plan.md` — human-readable plan (this is what you review and approve) +- Checkpoints — snapshots after each wave with history, decisions, and file lists +- Session database — SQL tables for task tracking (internal to the harness) + +### Agent Dispatch + +When the harness dispatches a `general-purpose` agent, it includes: + +1. 
**Role statement**: "You are a [role] on the [team] team." +2. **Context**: What was done in previous waves that this depends on. +3. **Precise instructions**: Exact files, line numbers, old-to-new patterns. +4. **Rules**: What NOT to change. +5. **Verification commands**: Test commands to run before reporting done. + +### Skill Activation + +Skills activate automatically when the harness detects matching code patterns. They can also be activated explicitly when you mention a relevant concern. + +### Why This Matters + +Understanding the harness internals helps you: +- **Debug stuck waves**: "Check the task status — is something blocked?" +- **Refine prompts**: "The agent needs more precise file:line instructions" +- **Recover from failures**: "The agent said it finished but the file wasn't updated — check the harness state" + + +## Appendix D: Live Dashboard POC via Copilot CLI Hooks + +### The Vision + +The agentic SDLC described in this handbook is powerful but invisible — everything happens inside terminal scrollback. What if you could *see* it? A live browser dashboard showing: + +- The wave dependency graph with real-time status (pending → running → done) +- Agent cards spawning and completing with duration timers +- File edits streaming in as they happen +- Test rings lighting up green/red after each checkpoint +- The todo board updating as SQL queries fire + +This appendix describes a **proof-of-concept** using [Copilot CLI Hooks](https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/use-hooks) to intercept every tool call and stream it to a live web UI — turning the agentic process into something you can watch, share on a screen, or use as an observability layer. 
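To make the idea concrete before diving into the architecture: the dashboard is essentially a reducer over the event stream. A toy sketch, assuming agent events carry a `detail.name` field (the real hook scripts below report an `agent_id` on completion, which a real reducer would have to correlate):

```python
# Replay agent_dispatch / agent_complete events from a JSONL stream
# and report which agents are still running.
import json

def running_agents(jsonl_lines):
    live = set()
    for line in jsonl_lines:
        ev = json.loads(line)
        name = (ev.get("detail") or {}).get("name")
        if ev["type"] == "agent_dispatch":
            live.add(name)
        elif ev["type"] == "agent_complete":
            live.discard(name)
    return live

events = [
    '{"type": "agent_dispatch", "detail": {"name": "wave2-compile"}}',
    '{"type": "agent_dispatch", "detail": {"name": "wave2-uninstall"}}',
    '{"type": "agent_complete", "detail": {"name": "wave2-uninstall"}}',
]
print(sorted(running_agents(events)))  # ['wave2-compile']
```

A dispatched agent with no matching completion is exactly the "stuck agent, timer keeps counting" signal the dashboard is designed to surface.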
+ +### Architecture + +``` +┌──────────────────────────────────────────────────────────────────┐ +│ Copilot CLI Session │ +│ │ +│ preToolUse ──→ ┌──────────────┐ │ +│ postToolUse ──→│ Hook Scripts │──→ JSONL event log │ +│ sessionStart ─→│ (.github/ │──→ HTTP push to dashboard │ +│ sessionEnd ───→│ hooks/) │ │ +│ └──────────────┘ │ +└──────────────────────────────────────────────────────────────────┘ + │ │ + ▼ ▼ + .hooks/events.jsonl http://localhost:3391 + │ + ▼ + ┌────────────────────┐ + │ Browser Dashboard │ + │ │ + │ ┌──────────────┐ │ + │ │ Wave Graph │ │ + │ │ o-o-o-*-o │ │ + │ └──────────────┘ │ + │ ┌──────────────┐ │ + │ │ Agent Fleet │ │ + │ │ G Y G W │ │ + │ └──────────────┘ │ + │ ┌──────────────┐ │ + │ │ Test Ring │ │ + │ │ Unit + Acc + │ │ + │ └──────────────┘ │ + │ ┌──────────────┐ │ + │ │ Todo Board │ │ + │ │ 12/19 done │ │ + │ └──────────────┘ │ + └────────────────────┘ +``` + +### Hook Event Model + +Every Copilot CLI tool call passes through `preToolUse` and `postToolUse` hooks. 
The tool name and arguments tell us *exactly* what the orchestrator is doing: + +| Tool Name | Args Pattern | Dashboard Event | +|-----------|-------------|----------------| +| `task` | `agent_type`, `name`, `mode` | Agent spawned — show card with spinner | +| `read_agent` | `agent_id` | Agent result read — update card with result | +| `sql` | `INSERT INTO todos` | New todo — add to board | +| `sql` | `UPDATE todos SET status` | Todo status change — move card | +| `bash` | command contains `pytest` | Test run — show ring with spinner | +| `bash` | command contains `git commit` | Checkpoint — mark wave complete | +| `edit` / `create` | `path` | File change — flash in activity feed | +| `report_intent` | `intent` | Phase change — update header | +| `skill` | `skill` name | Skill activated — show badge | + +### hooks.json + +```json +{ + "version": 1, + "hooks": { + "sessionStart": [ + { + "type": "command", + "bash": ".github/hooks/dashboard-start.sh", + "timeoutSec": 5 + } + ], + "preToolUse": [ + { + "type": "command", + "bash": ".github/hooks/dashboard-event.sh", + "timeoutSec": 3 + } + ], + "postToolUse": [ + { + "type": "command", + "bash": ".github/hooks/dashboard-event.sh", + "timeoutSec": 3 + } + ], + "sessionEnd": [ + { + "type": "command", + "bash": ".github/hooks/dashboard-stop.sh", + "timeoutSec": 5 + } + ] + } +} +``` + +### Event Collector Script + +`.github/hooks/dashboard-event.sh` — runs on every tool call, must be fast (< 100ms): + +```bash +#!/bin/bash +INPUT=$(cat) +TOOL_NAME=$(echo "$INPUT" | jq -r '.toolName // empty') +TIMESTAMP=$(echo "$INPUT" | jq -r '.timestamp') +RESULT_TYPE=$(echo "$INPUT" | jq -r '.toolResult.resultType // empty') + +EVENT_LOG="${CWD:-.}/.hooks/events.jsonl" +WS_PORT="${DASHBOARD_PORT:-3391}" + +# Phase: "pre" if no result, "post" if result present +if [ -n "$RESULT_TYPE" ]; then PHASE="post"; else PHASE="pre"; fi + +EVENT_TYPE="tool" +DETAIL="" + +case "$TOOL_NAME" in + task) + TOOL_ARGS=$(echo "$INPUT" | jq -r 
'.toolArgs // empty') + AGENT_NAME=$(echo "$TOOL_ARGS" | jq -r '.name // empty') + AGENT_TYPE=$(echo "$TOOL_ARGS" | jq -r '.agent_type // empty') + AGENT_MODE=$(echo "$TOOL_ARGS" | jq -r '.mode // "sync"') + AGENT_DESC=$(echo "$TOOL_ARGS" | jq -r '.description // empty') + if [ "$PHASE" = "pre" ]; then + EVENT_TYPE="agent_dispatch" + DETAIL=$(jq -nc --arg n "$AGENT_NAME" --arg t "$AGENT_TYPE" \ + --arg m "$AGENT_MODE" --arg d "$AGENT_DESC" \ + '{name:$n, type:$t, mode:$m, description:$d}') + fi ;; + + read_agent) + if [ "$PHASE" = "post" ]; then + AGENT_ID=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.agent_id // empty') + SUMMARY=$(echo "$INPUT" | jq -r '.toolResult.textResultForLlm // ""' | head -c 200) + EVENT_TYPE="agent_complete" + DETAIL=$(jq -nc --arg id "$AGENT_ID" --arg s "$SUMMARY" '{agent_id:$id, summary:$s}') + fi ;; + + sql) + QUERY=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.query // empty') + if echo "$QUERY" | grep -qi "UPDATE todos SET status"; then + EVENT_TYPE="todo_update" + STATUS=$(echo "$QUERY" | grep -oP "status\s*=\s*'\K[^']+") + DETAIL=$(jq -nc --arg s "$STATUS" '{status:$s}') + elif echo "$QUERY" | grep -qi "INSERT INTO todos"; then + EVENT_TYPE="todo_create" + fi ;; + + bash) + COMMAND=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.command // empty') + if echo "$COMMAND" | grep -q "pytest"; then + [ "$PHASE" = "pre" ] && EVENT_TYPE="test_run_start" + if [ "$PHASE" = "post" ]; then + EVENT_TYPE="test_run_complete" + RESULT_TEXT=$(echo "$INPUT" | jq -r '.toolResult.textResultForLlm // ""') + PASSED=$(echo "$RESULT_TEXT" | grep -oP '\d+ passed' | head -1) + FAILED=$(echo "$RESULT_TEXT" | grep -oP '\d+ failed' | head -1) + DETAIL=$(jq -nc --arg p "$PASSED" --arg f "$FAILED" '{passed:$p, failed:$f}') + fi + elif echo "$COMMAND" | grep -q "git commit"; then + EVENT_TYPE="checkpoint_commit" + fi ;; + + edit|create) + [ "$PHASE" = "pre" ] && { + FILE_PATH=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.path // 
empty') + EVENT_TYPE="file_change" + DETAIL=$(jq -nc --arg p "$FILE_PATH" --arg op "$TOOL_NAME" '{path:$p, operation:$op}') + } ;; + + report_intent) + INTENT=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.intent // empty') + EVENT_TYPE="intent_change" + DETAIL=$(jq -nc --arg i "$INTENT" '{intent:$i}') ;; + + skill) + SKILL=$(echo "$INPUT" | jq -r '.toolArgs // empty' | jq -r '.skill // empty') + EVENT_TYPE="skill_activated" + DETAIL=$(jq -nc --arg s "$SKILL" '{skill:$s}') ;; +esac + +# Emit JSONL event +EVENT=$(jq -nc --arg type "$EVENT_TYPE" --arg phase "$PHASE" \ + --arg tool "$TOOL_NAME" --arg ts "$TIMESTAMP" --arg result "$RESULT_TYPE" \ + --argjson detail "${DETAIL:-null}" \ + '{type:$type, phase:$phase, tool:$tool, timestamp:$ts, result:$result, detail:$detail}') + +echo "$EVENT" >> "$EVENT_LOG" + +# Push to dashboard (non-blocking, fire-and-forget) +curl -s -X POST "http://localhost:$WS_PORT/event" \ + -H "Content-Type: application/json" -d "$EVENT" 2>/dev/null & +``` + +### Dashboard Server + +`.github/hooks/dashboard-start.sh`: + +```bash +#!/bin/bash +INPUT=$(cat) +CWD=$(echo "$INPUT" | jq -r '.cwd') +PORT="${DASHBOARD_PORT:-3391}" + +mkdir -p "$CWD/.hooks" +: > "$CWD/.hooks/events.jsonl" + +if command -v node &>/dev/null; then + node "$CWD/.github/hooks/dashboard-server.mjs" "$PORT" "$CWD/.hooks/events.jsonl" & + echo $! 
> "$CWD/.hooks/dashboard.pid" + echo "Dashboard: http://localhost:$PORT" >&2 +fi +``` + +`.github/hooks/dashboard-server.mjs` — minimal SSE server: + +```javascript +import { createServer } from 'http'; +import { readFileSync, watchFile } from 'fs'; + +const PORT = parseInt(process.argv[2] || '3391'); +const EVENTS_FILE = process.argv[3] || '.hooks/events.jsonl'; +const clients = new Set(); +let lastLineCount = 0; + +watchFile(EVENTS_FILE, { interval: 200 }, () => { + try { + const lines = readFileSync(EVENTS_FILE, 'utf8').trim().split('\n'); + const newLines = lines.slice(lastLineCount); + lastLineCount = lines.length; + for (const line of newLines) { + if (!line) continue; + for (const client of clients) client.write(`data: ${line}\n\n`); + } + } catch {} +}); + +createServer((req, res) => { + if (req.url === '/events') { + res.writeHead(200, { + 'Content-Type': 'text/event-stream', + 'Cache-Control': 'no-cache', + 'Access-Control-Allow-Origin': '*', + }); + clients.add(res); + req.on('close', () => clients.delete(res)); + return; + } + if (req.method === 'POST' && req.url === '/event') { + let body = ''; + req.on('data', c => body += c); + req.on('end', () => { + for (const client of clients) client.write(`data: ${body}\n\n`); + res.writeHead(200).end('ok'); + }); + return; + } + // Serve inline dashboard HTML + res.writeHead(200, { 'Content-Type': 'text/html' }); + res.end(DASHBOARD_HTML); +}).listen(PORT, () => console.log(`Dashboard: http://localhost:${PORT}`)); + +const DASHBOARD_HTML = ``; +``` + +### Dashboard UI Layout + +The dashboard renders four panels connected by SSE: + +``` ++-----------------------------------------------------------+ +| Agentic SDLC Dashboard Phase: Executing Wave 2 | ++------------------------+----------------------------------+ +| WAVE GRAPH | AGENT FLEET | +| | | +| Wave 0 [========] + | +-----------------------------+ | +| Wave 1 [========] + | | wave2-compile-logger | | +| Wave 2 [==== ] * | | general-purpose 3m 22s | | +| 
Wave 3 [ ] | | Status: running | | +| Wave 4 [ ] | +-----------------------------+ | +| | +-----------------------------+ | +| | | wave2-uninstall-logger | | +| | | general-purpose 4m 10s | | +| | | Status: + complete | | +| | +-----------------------------+ | ++------------------------+----------------------------------+ +| TODO BOARD | TEST RING & FILE ACTIVITY | +| | | +| Done (14): | Ring 1: Unit 2874 + 103s | +| + a0-1 Protocol | Ring 2: Accept 39 + 2.1s | +| + a0-2 Ancestor | Ring 3: E2E pending | +| + l0-1 TrafficLight | --- | +| ... | Recent files: | +| | * install.py (edit) | +| In Progress (3): | * watcher.py (edit) | +| * l2-2 compile/ | * engine.py (edit) | +| * l2-3 uninstall/ | | +| | Last commit: | +| Pending (2): | 930c4b9 Wave 0 -- Protocol... | +| o l3-1 unicode | | +| o l3-2 arrows | | ++------------------------+----------------------------------+ +``` + +**UI behaviors by event type:** + +| Event | UI Response | +|-------|------------| +| `intent_change` | Update phase header | +| `agent_dispatch` | Add agent card with spinning timer | +| `agent_complete` | Stop timer, flash green/red | +| `todo_create` | Add cards to Pending column | +| `todo_update` | Animate card between columns | +| `test_run_start` | Show spinner on test ring | +| `test_run_complete` | Flash green/red with counts | +| `file_change` | Flash path in activity feed | +| `checkpoint_commit` | Mark wave complete, advance indicator | +| `skill_activated` | Show badge on current phase | + +### SSE Client (dashboard core) + +```javascript +const events = new EventSource('/events'); + +events.onmessage = (e) => { + const ev = JSON.parse(e.data); + switch (ev.type) { + case 'intent_change': updatePhase(ev.detail.intent); break; + case 'agent_dispatch': addAgentCard(ev.detail); break; + case 'agent_complete': completeAgent(ev.detail); break; + case 'todo_create': refreshTodoBoard(); break; + case 'todo_update': moveTodos(ev.detail.status); break; + case 'test_run_start': 
startTestSpinner(); break; + case 'test_run_complete': completeTestRing(ev.detail); break; + case 'file_change': flashFile(ev.detail); break; + case 'checkpoint_commit': advanceWave(ev.detail); break; + case 'skill_activated': showSkillBadge(ev.detail.skill); break; + } +}; +``` + +### Running the POC + +```bash +# 1. Ensure hooks and server scripts are in place +ls .github/hooks/hooks.json dashboard-event.sh dashboard-start.sh dashboard-server.mjs + +# 2. Make scripts executable +chmod +x .github/hooks/*.sh + +# 3. Start a Copilot CLI session (dashboard launches via sessionStart hook) +copilot # opens http://localhost:3391 automatically + +# 4. Open the dashboard +open http://localhost:3391 + +# 5. Work normally — every tool call streams to the dashboard live +``` + +### What This Enables + +**For practitioners:** Watch your agent fleet work live. See the dependency graph resolve. Catch stuck agents (timer keeps counting) before wasting minutes. + +**For team demos:** Share the dashboard URL on screen. Make the agentic process tangible — not a black box of terminal output. + +**For CI/CD observability:** Stream events to Datadog/Grafana. Track agent dispatch counts, test pass rates, wall-clock time per wave. Detect regressions in the process itself. + +**For research:** The JSONL log is a complete trace of every AI decision. Analyze retries, tool frequency, wave parallelism. Compare sessions to find convergence patterns. + +### Extension: Interactive Control Plane + +The `preToolUse` hook can return `{"permissionDecision": "deny"}` — enabling **human-in-the-loop via the dashboard**: + +1. Agent dispatches `git push` via `bash` +2. `preToolUse` fires, sends event to dashboard +3. Dashboard shows a modal: "Agent wants to push to remote. Allow?" +4. User clicks Allow/Deny in the browser +5. 
Hook returns the decision to Copilot CLI + +```bash +# Interactive hook with HTTP callback +#!/bin/bash +INPUT=$(cat) +if echo "$INPUT" | jq -r '.toolArgs' | grep -q "git push"; then + DECISION=$(curl -s "http://localhost:3391/approve?tool=bash" --max-time 30) + echo "$DECISION" # {"permissionDecision":"allow"} or "deny" +fi +``` + +This turns the dashboard from a read-only observer into a **control plane for agentic development** — the human watches the process and intervenes at decision points without leaving the browser. + +### Limitations and Future Work + +| Limitation | Reason | Future Path | +|-----------|--------|-------------| +| 3s hook timeout | Must stay fast | Async queue with batch flush | +| No prompt injection | `userPromptSubmitted` output is ignored | Future: prompt modification support | +| File-based event log | Simple but not durable | SQLite or Redis for production | +| Single-session view | One dashboard per session | Multi-session picker with history | +| No auth on dashboard | localhost-only for dev | Basic auth for shared/remote use | + +--- + +*This handbook is a living document. The patterns evolve as Copilot CLI evolves. 
The principles don't.* From 3970d72b5c161bde9d98c8f7224cfd12e6a55e67 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 23:29:59 +0100 Subject: [PATCH 29/40] =?UTF-8?q?fix:=20auth=20test=20harness=20=E2=80=94?= =?UTF-8?q?=20pipe=20/dev/null=20to=20stdin,=20match=20both=20verbose=20fo?= =?UTF-8?q?rmats?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Pipe < /dev/null to apm install in run_install/run_install_manifest to prevent MCP env prompts from blocking in non-interactive test runs - Update mega test assertion to match both validation-phase (source=X) and install-phase (Auth: X) verbose formats Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- scripts/test-auth-acceptance.sh | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/scripts/test-auth-acceptance.sh b/scripts/test-auth-acceptance.sh index 27b0c17f..478a2821 100755 --- a/scripts/test-auth-acceptance.sh +++ b/scripts/test-auth-acceptance.sh @@ -250,7 +250,7 @@ run_install() { dir="$(setup_test_dir "$package")" tmpout="$(mktemp "$WORK_DIR/output-XXXXXX")" set +e - (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout" + (cd "$dir" && "$APM_BINARY" install "$@") < /dev/null 2>&1 | tee "$tmpout" APM_EXIT="${PIPESTATUS[0]}" set +e # keep errexit off (script uses -u, not -e) APM_OUTPUT="$(cat "$tmpout")" @@ -261,7 +261,7 @@ run_install_manifest() { local tmpout tmpout="$(mktemp "$WORK_DIR/output-XXXXXX")" set +e - (cd "$dir" && "$APM_BINARY" install "$@") 2>&1 | tee "$tmpout" + (cd "$dir" && "$APM_BINARY" install "$@") < /dev/null 2>&1 | tee "$tmpout" APM_EXIT="${PIPESTATUS[0]}" set +e # keep errexit off (script uses -u, not -e) APM_OUTPUT="$(cat "$tmpout")" @@ -980,7 +980,7 @@ EOF # If private deps were included, verify token sources appear in verbose if [[ -n "$AUTH_TEST_PRIVATE_REPO" && -n "$_ORIG_GITHUB_APM_PAT" ]]; then - assert_contains "source=GITHUB_APM_PAT" "private dep used token" + 
assert_contains "source=GITHUB_APM_PAT|Auth: GITHUB_APM_PAT" "private dep used token" fi $SCENARIO_OK && record_pass "$name" || record_fail "$name" From 093ed4af3e16bc6e4f82ee68d8dcfc2fb51562fb Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 23:31:33 +0100 Subject: [PATCH 30/40] feat: auth-acceptance workflow supports mega test and all inputs - Add mode input (all/mega) to select test mode - Add missing inputs: private_repo_2, git_url_repo, git_url_public_repo - Wire per-org PAT secrets: MCAPS_MICROSOFT, DEVEXPGBB - ADO repo description now shows FQDN format requirement Secrets to configure in 'auth-acceptance' environment: AUTH_TEST_GITHUB_APM_PAT, AUTH_TEST_GITHUB_TOKEN, AUTH_TEST_GH_TOKEN, AUTH_TEST_ADO_APM_PAT, AUTH_TEST_GITHUB_APM_PAT_MCAPS_MICROSOFT, AUTH_TEST_GITHUB_APM_PAT_DEVEXPGBB Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/workflows/auth-acceptance.yml | 30 +++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) diff --git a/.github/workflows/auth-acceptance.yml b/.github/workflows/auth-acceptance.yml index eb431a5f..52045b99 100644 --- a/.github/workflows/auth-acceptance.yml +++ b/.github/workflows/auth-acceptance.yml @@ -3,12 +3,22 @@ name: Auth Acceptance Tests on: workflow_dispatch: inputs: + mode: + description: 'Test mode' + type: choice + options: + - all + - mega + default: 'all' public_repo: description: 'Public test repo (owner/repo)' default: 'microsoft/apm-sample-package' private_repo: description: 'Private test repo (owner/repo, optional)' required: false + private_repo_2: + description: 'Private test repo from 2nd org (owner/repo, optional)' + required: false emu_repo: description: 'EMU internal test repo (owner/repo, optional)' required: false @@ -16,7 +26,13 @@ on: description: 'GHE Cloud test repo (org/repo@host, optional)' required: false ado_repo: - description: 'Azure DevOps test repo (org/project/_git/repo, optional)' + description: 'Azure DevOps test repo 
(dev.azure.com/org/project/_git/repo, optional)' + required: false + git_url_repo: + description: 'Private repo for git: URL object format (owner/repo, optional)' + required: false + git_url_public_repo: + description: 'Public repo for git: URL object format (owner/repo, optional)' required: false env: @@ -54,11 +70,21 @@ jobs: APM_BINARY: .venv/bin/apm AUTH_TEST_PUBLIC_REPO: ${{ inputs.public_repo }} AUTH_TEST_PRIVATE_REPO: ${{ inputs.private_repo }} + AUTH_TEST_PRIVATE_REPO_2: ${{ inputs.private_repo_2 }} AUTH_TEST_EMU_REPO: ${{ inputs.emu_repo }} AUTH_TEST_GHE_REPO: ${{ inputs.ghe_repo }} AUTH_TEST_ADO_REPO: ${{ inputs.ado_repo }} + AUTH_TEST_GIT_URL_REPO: ${{ inputs.git_url_repo }} + AUTH_TEST_GIT_URL_PUBLIC_REPO: ${{ inputs.git_url_public_repo }} GITHUB_APM_PAT: ${{ secrets.AUTH_TEST_GITHUB_APM_PAT }} GITHUB_TOKEN: ${{ secrets.AUTH_TEST_GITHUB_TOKEN }} GH_TOKEN: ${{ secrets.AUTH_TEST_GH_TOKEN }} ADO_APM_PAT: ${{ secrets.AUTH_TEST_ADO_APM_PAT }} - run: ./scripts/test-auth-acceptance.sh + GITHUB_APM_PAT_MCAPS_MICROSOFT: ${{ secrets.AUTH_TEST_GITHUB_APM_PAT_MCAPS_MICROSOFT }} + GITHUB_APM_PAT_DEVEXPGBB: ${{ secrets.AUTH_TEST_GITHUB_APM_PAT_DEVEXPGBB }} + run: | + if [ "${{ inputs.mode }}" = "mega" ]; then + ./scripts/test-auth-acceptance.sh --mega + else + ./scripts/test-auth-acceptance.sh + fi From 8cceba0970059e5c2f7ba992f09d97b892528466 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 23:36:21 +0100 Subject: [PATCH 31/40] fix: resolve CodeQL incomplete URL substring sanitization alert Use exact equality for host assertion and endswith() for operation instead of 'in' substring check that CodeQL flagged as incomplete URL sanitization (false positive in test code, but clean fix). 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- tests/unit/test_install_command.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/unit/test_install_command.py b/tests/unit/test_install_command.py index 80f286be..4d65946e 100644 --- a/tests/unit/test_install_command.py +++ b/tests/unit/test_install_command.py @@ -311,8 +311,8 @@ def test_verbose_validation_failure_calls_build_error_context(self, mock_urlopen assert result is False mock_build_ctx.assert_called_once() call_args = mock_build_ctx.call_args - assert "github.com" in call_args[0][0] # host - assert "owner/repo" in call_args[0][1] # operation + assert call_args[0][0] == "github.com" # host + assert call_args[0][1].endswith("owner/repo") # operation # --------------------------------------------------------------------------- From f9a63bbe1931121b8fb6d9a2110e437b529f54dc Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sat, 21 Mar 2026 23:57:26 +0100 Subject: [PATCH 32/40] docs: expand CI/CD private deps section with per-org and multi-platform patterns Clarifies the GITHUB_ prefix secret naming workaround, adds multi-org (GITHUB_APM_PAT_{ORG}) and ADO examples, links to auth guide. Related: microsoft/apm-action#18 Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- docs/src/content/docs/integrations/ci-cd.md | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/docs/src/content/docs/integrations/ci-cd.md b/docs/src/content/docs/integrations/ci-cd.md index b10131d6..9d654f4b 100644 --- a/docs/src/content/docs/integrations/ci-cd.md +++ b/docs/src/content/docs/integrations/ci-cd.md @@ -33,7 +33,8 @@ jobs: ### Private Dependencies -For private repositories, pass a GitHub token: +For private repositories, set `GITHUB_APM_PAT` via the workflow `env:` block. 
+GitHub Actions forbids secrets named with a `GITHUB_` prefix, so use any name you like for the secret: ```yaml - name: Install APM packages @@ -42,6 +43,19 @@ For private repositories, pass a GitHub token: GITHUB_APM_PAT: ${{ secrets.APM_PAT }} ``` +For multi-org setups, add per-org tokens (`GITHUB_APM_PAT_{ORG}` — org name uppercased, hyphens replaced with underscores): + +```yaml + - name: Install APM packages + uses: microsoft/apm-action@v1 + env: + GITHUB_APM_PAT: ${{ secrets.APM_PAT }} + GITHUB_APM_PAT_CONTOSO: ${{ secrets.APM_PAT_CONTOSO }} + ADO_APM_PAT: ${{ secrets.ADO_PAT }} +``` + +See the [Authentication guide](../../getting-started/authentication/) for the full token priority chain. + ### Verify Compiled Output (Optional) If your project uses `apm compile` to target tools like Cursor, Codex, or Gemini, add a check to ensure compiled output stays in sync: From d420d3fa230937cc336bc4e677d2b12556d6d005 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 00:08:09 +0100 Subject: [PATCH 33/40] fix: address PR #394 review comments - auth.py: normalize cache key (host.lower(), org.lower()) to prevent case-sensitive duplication; gate per-org GITHUB_APM_PAT_{ORG} to GitHub-like hosts only (not ADO) - command_logger.py: replace inline Unicode glyphs with symbol= param; remove leading spaces from validation messages (symbol prefix handles it) - dependency_graph.py: use Optional[] instead of '|' union syntax - watcher.py: pass logger to compile(); remove inline [x] prefixes - token_manager.py: platform-aware GIT_ASKPASS (empty on Unix, echo on Win) - safe_installer.py: remove inline status prefixes when using logger - operations.py: map token var names to appropriate purposes (ado, copilot) - apm_resolver.py: fix DownloadCallback docstring to match actual behavior - ci-cd.md: trim private deps section, link to auth guide instead of bloat Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- 
docs/src/content/docs/integrations/ci-cd.md | 16 +--------------- src/apm_cli/commands/compile/watcher.py | 4 ++-- src/apm_cli/core/auth.py | 6 +++--- src/apm_cli/core/command_logger.py | 11 +++++------ src/apm_cli/core/safe_installer.py | 4 ++-- src/apm_cli/core/token_manager.py | 4 +++- src/apm_cli/deps/apm_resolver.py | 4 ++-- src/apm_cli/deps/dependency_graph.py | 2 +- src/apm_cli/registry/operations.py | 10 +++++++--- tests/unit/test_command_logger.py | 8 ++++---- 10 files changed, 30 insertions(+), 39 deletions(-) diff --git a/docs/src/content/docs/integrations/ci-cd.md b/docs/src/content/docs/integrations/ci-cd.md index 9d654f4b..2aee8e7d 100644 --- a/docs/src/content/docs/integrations/ci-cd.md +++ b/docs/src/content/docs/integrations/ci-cd.md @@ -33,8 +33,7 @@ jobs: ### Private Dependencies -For private repositories, set `GITHUB_APM_PAT` via the workflow `env:` block. -GitHub Actions forbids secrets named with a `GITHUB_` prefix, so use any name you like for the secret: +For private repositories, pass a token via the workflow `env:` block. See the [Authentication guide](../../getting-started/authentication/) for all supported tokens and priority rules. ```yaml - name: Install APM packages @@ -43,19 +42,6 @@ GitHub Actions forbids secrets named with a `GITHUB_` prefix, so use any name yo GITHUB_APM_PAT: ${{ secrets.APM_PAT }} ``` -For multi-org setups, add per-org tokens (`GITHUB_APM_PAT_{ORG}` — org name uppercased, hyphens replaced with underscores): - -```yaml - - name: Install APM packages - uses: microsoft/apm-action@v1 - env: - GITHUB_APM_PAT: ${{ secrets.APM_PAT }} - GITHUB_APM_PAT_CONTOSO: ${{ secrets.APM_PAT_CONTOSO }} - ADO_APM_PAT: ${{ secrets.ADO_PAT }} -``` - -See the [Authentication guide](../../getting-started/authentication/) for the full token priority chain. 
- ### Verify Compiled Output (Optional) If your project uses `apm compile` to target tools like Cursor, Codex, or Gemini, add a check to ensure compiled output stays in sync: diff --git a/src/apm_cli/commands/compile/watcher.py b/src/apm_cli/commands/compile/watcher.py index b706bc98..3f5ef3ea 100644 --- a/src/apm_cli/commands/compile/watcher.py +++ b/src/apm_cli/commands/compile/watcher.py @@ -58,7 +58,7 @@ def _recompile(self, changed_file): # Create compiler and compile compiler = AgentsCompiler(".") - result = compiler.compile(config) + result = compiler.compile(config, logger=self.logger) if result.success: if self.dry_run: @@ -72,7 +72,7 @@ def _recompile(self, changed_file): else: self.logger.error("Recompilation failed") for error in result.errors: - self.logger.error(f" [x] {error}") + self.logger.error(f" {error}") except Exception as e: self.logger.error(f"Error during recompilation: {e}") diff --git a/src/apm_cli/core/auth.py b/src/apm_cli/core/auth.py index cd1797c9..e1e56ad7 100644 --- a/src/apm_cli/core/auth.py +++ b/src/apm_cli/core/auth.py @@ -180,7 +180,7 @@ def detect_token_type(token: str) -> str: def resolve(self, host: str, org: Optional[str] = None) -> AuthContext: """Resolve auth for *(host, org)*. Cached & thread-safe.""" - key = (host, org) + key = (host.lower() if host else host, org.lower() if org else org) with self._lock: if key in self._cache: return self._cache[key] @@ -363,8 +363,8 @@ def _resolve_token( security boundary. Host-gating global env vars is unnecessary and creates DX friction for multi-host setups. """ - # 1. Per-org env var (any host) - if org: + # 1. 
Per-org env var (GitHub-like hosts only — ADO uses ADO_APM_PAT) + if org and host_info.kind not in ("ado",): env_name = f"GITHUB_APM_PAT_{_org_to_env_suffix(org)}" token = os.environ.get(env_name) if token: diff --git a/src/apm_cli/core/command_logger.py b/src/apm_cli/core/command_logger.py index 0a3b5e7c..70dee35f 100644 --- a/src/apm_cli/core/command_logger.py +++ b/src/apm_cli/core/command_logger.py @@ -110,11 +110,10 @@ def should_execute(self) -> bool: def auth_step(self, step: str, success: bool, detail: str = ""): """Log an auth resolution step (verbose only).""" if self.verbose: - status = "[+]" if success else "[x]" - msg = f" auth: {status} {step}" + msg = f" auth: {step}" if detail: msg += f" ({detail})" - _rich_echo(msg, color="dim") + _rich_echo(msg, color="dim", symbol="check" if success else "error") def auth_resolved(self, ctx): """Log the resolved auth context (verbose only). @@ -164,13 +163,13 @@ def validation_start(self, count: int): def validation_pass(self, canonical: str, already_present: bool): """Log a package that passed validation.""" if already_present: - _rich_echo(f" [+] {canonical} (already in apm.yml)", color="dim") + _rich_echo(f"{canonical} (already in apm.yml)", color="dim", symbol="check") else: - _rich_success(f" [+] {canonical}") + _rich_success(canonical, symbol="check") def validation_fail(self, package: str, reason: str): """Log a package that failed validation.""" - _rich_error(f" [x] {package} -- {reason}") + _rich_error(f"{package} -- {reason}", symbol="error") def validation_summary(self, outcome: _ValidationOutcome): """Log validation summary and decide whether to continue. diff --git a/src/apm_cli/core/safe_installer.py b/src/apm_cli/core/safe_installer.py index e51b2181..6ce93af4 100644 --- a/src/apm_cli/core/safe_installer.py +++ b/src/apm_cli/core/safe_installer.py @@ -43,14 +43,14 @@ def log_summary(self, logger=None): if self.skipped: for item in self.skipped: if logger: - logger.warning(f"[!] 
Skipped {item['server']}: {item['reason']}") + logger.warning(f"Skipped {item['server']}: {item['reason']}") else: _rich_warning(f"[!] Skipped {item['server']}: {item['reason']}") if self.failed: for item in self.failed: if logger: - logger.error(f"[x] Failed {item['server']}: {item['reason']}") + logger.error(f"Failed {item['server']}: {item['reason']}") else: _rich_error(f"[x] Failed {item['server']}: {item['reason']}") diff --git a/src/apm_cli/core/token_manager.py b/src/apm_cli/core/token_manager.py index d70235fa..1bc152ac 100644 --- a/src/apm_cli/core/token_manager.py +++ b/src/apm_cli/core/token_manager.py @@ -20,6 +20,7 @@ import os import subprocess +import sys from typing import Dict, Optional, Tuple @@ -111,7 +112,8 @@ def resolve_credential_from_git(host: str) -> Optional[str]: capture_output=True, text=True, timeout=GitHubTokenManager._get_credential_timeout(), - env={**os.environ, 'GIT_TERMINAL_PROMPT': '0', 'GIT_ASKPASS': ''}, + env={**os.environ, 'GIT_TERMINAL_PROMPT': '0', + 'GIT_ASKPASS': '' if sys.platform != 'win32' else 'echo'}, ) if result.returncode != 0: return None diff --git a/src/apm_cli/deps/apm_resolver.py b/src/apm_cli/deps/apm_resolver.py index 0146591a..69aa48fa 100644 --- a/src/apm_cli/deps/apm_resolver.py +++ b/src/apm_cli/deps/apm_resolver.py @@ -13,8 +13,8 @@ # Type alias for the download callback. # Takes (dep_ref, apm_modules_dir, parent_chain) and returns the install path # if successful. ``parent_chain`` is a human-readable breadcrumb string like -# "root-pkg > mid-pkg" showing which dependency path led here, or "" for -# direct (depth-1) dependencies. +# "root-pkg > mid-pkg > this-pkg" showing the full dependency path including +# the current node, or just the node's display name for direct (depth-1) deps. 
@runtime_checkable class DownloadCallback(Protocol): def __call__( diff --git a/src/apm_cli/deps/dependency_graph.py b/src/apm_cli/deps/dependency_graph.py index 7fadab2d..cd5cfa7d 100644 --- a/src/apm_cli/deps/dependency_graph.py +++ b/src/apm_cli/deps/dependency_graph.py @@ -37,7 +37,7 @@ def get_ancestor_chain(self) -> str: Returns just the node's display name for root-level (depth-0/1) deps. """ parts: list[str] = [] - current: 'DependencyNode' | None = self + current: Optional['DependencyNode'] = self while current is not None: parts.append(current.get_display_name()) current = current.parent diff --git a/src/apm_cli/registry/operations.py b/src/apm_cli/registry/operations.py index 68089122..0ced867c 100644 --- a/src/apm_cli/registry/operations.py +++ b/src/apm_cli/registry/operations.py @@ -330,10 +330,14 @@ def _prompt_for_environment_variables(self, required_vars: Dict[str, Dict]) -> D if var_name == 'GITHUB_DYNAMIC_TOOLSETS': env_vars[var_name] = '1' # Enable dynamic toolsets for GitHub MCP server elif 'token' in var_name.lower() or 'key' in var_name.lower(): - # Use centralized token manager for consistent precedence - # (GITHUB_APM_PAT → GITHUB_TOKEN → GH_TOKEN) + # Map known token vars to appropriate purposes _tm = GitHubTokenManager() - env_vars[var_name] = _tm.get_token_for_purpose('modules') or '' + if 'ado' in var_name.lower(): + env_vars[var_name] = _tm.get_token_for_purpose('ado_modules') or '' + elif 'copilot' in var_name.lower(): + env_vars[var_name] = _tm.get_token_for_purpose('copilot') or '' + else: + env_vars[var_name] = _tm.get_token_for_purpose('modules') or '' else: # For other variables, use empty string or reasonable default env_vars[var_name] = '' diff --git a/tests/unit/test_command_logger.py b/tests/unit/test_command_logger.py index 8d655b94..c3cd8c11 100644 --- a/tests/unit/test_command_logger.py +++ b/tests/unit/test_command_logger.py @@ -104,9 +104,9 @@ def test_auth_step_verbose(self, mock_echo): logger = CommandLogger("test", 
verbose=True) logger.auth_step("Trying GITHUB_APM_PAT", success=True, detail="found") mock_echo.assert_called_once() - call_args = mock_echo.call_args[0][0] - assert "[+]" in call_args - assert "GITHUB_APM_PAT" in call_args + call_args = mock_echo.call_args + assert "GITHUB_APM_PAT" in call_args[0][0] + assert call_args[1].get("symbol") == "check" @patch("apm_cli.core.command_logger._rich_echo") def test_auth_step_not_verbose(self, mock_echo): @@ -166,7 +166,7 @@ def test_auth_step_failure(self, mock_echo): logger = CommandLogger("test", verbose=True) logger.auth_step("Trying gh CLI", success=False) mock_echo.assert_called_once() - assert "[x]" in mock_echo.call_args[0][0] + assert mock_echo.call_args[1].get("symbol") == "error" class TestInstallLogger: From b458627259afc7c7c33d09b5cf83d1799e01bee4 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 00:16:53 +0100 Subject: [PATCH 34/40] fix: bundle lockfile includes non-target deployed_files causing unpack verification failure When packing with --target vscode, the bundle only contains .github/ files but the lockfile still listed .claude/ deployed_files. Unpack verification then failed because those files were 'missing from the bundle'. Fix: enrich_lockfile_for_pack() now filters each dependency's deployed_files to match the pack target. Moved _TARGET_PREFIXES and _filter_files_by_target to lockfile_enrichment (single source of truth), packer imports from there. Fixes microsoft/apm-action test-restore-artifact CI failure with APM 0.8.3. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/bundle/lockfile_enrichment.py | 37 ++++++++++++++++++- src/apm_cli/bundle/packer.py | 19 +--------- tests/unit/test_lockfile_enrichment.py | 45 +++++++++++++++++++++++ 3 files changed, 82 insertions(+), 19 deletions(-) diff --git a/src/apm_cli/bundle/lockfile_enrichment.py b/src/apm_cli/bundle/lockfile_enrichment.py index ded692c9..df8f3225 100644 --- a/src/apm_cli/bundle/lockfile_enrichment.py +++ b/src/apm_cli/bundle/lockfile_enrichment.py @@ -1,10 +1,28 @@ """Lockfile enrichment for pack-time metadata.""" from datetime import datetime, timezone +from typing import List from ..deps.lockfile import LockFile +# Must stay in sync with packer._TARGET_PREFIXES +_TARGET_PREFIXES = { + "copilot": [".github/"], + "vscode": [".github/"], + "claude": [".claude/"], + "cursor": [".cursor/"], + "opencode": [".opencode/"], + "all": [".github/", ".claude/", ".cursor/", ".opencode/"], +} + + +def _filter_files_by_target(deployed_files: List[str], target: str) -> List[str]: + """Filter deployed file paths by target prefix.""" + prefixes = _TARGET_PREFIXES.get(target, _TARGET_PREFIXES["all"]) + return [f for f in deployed_files if any(f.startswith(p) for p in prefixes)] + + def enrich_lockfile_for_pack( lockfile: LockFile, fmt: str, @@ -12,6 +30,10 @@ def enrich_lockfile_for_pack( ) -> str: """Create an enriched copy of the lockfile YAML with a ``pack:`` section. + Filters each dependency's ``deployed_files`` to only include paths + matching the pack *target*, so the bundle lockfile is consistent with + the files actually shipped in the bundle. + Does NOT mutate the original *lockfile* object -- serialises a copy and prepends the pack metadata. @@ -38,4 +60,17 @@ def enrich_lockfile_for_pack( sort_keys=False, ) - return pack_section + lockfile.to_yaml() + # Build a filtered lockfile YAML: each dep's deployed_files is narrowed + # to only the paths matching the pack target. 
+ data = yaml.safe_load(lockfile.to_yaml()) + if data and "dependencies" in data: + for dep in data["dependencies"]: + if "deployed_files" in dep: + dep["deployed_files"] = _filter_files_by_target( + dep["deployed_files"], target + ) + + lockfile_yaml = yaml.dump( + data, default_flow_style=False, sort_keys=False, allow_unicode=True + ) + return pack_section + lockfile_yaml diff --git a/src/apm_cli/bundle/packer.py b/src/apm_cli/bundle/packer.py index 43cb4dd4..1ddaaeb5 100644 --- a/src/apm_cli/bundle/packer.py +++ b/src/apm_cli/bundle/packer.py @@ -10,18 +10,7 @@ from ..deps.lockfile import LockFile, get_lockfile_path, migrate_lockfile_if_needed from ..models.apm_package import APMPackage from ..core.target_detection import detect_target -from .lockfile_enrichment import enrich_lockfile_for_pack - - -# Target prefix mapping ("copilot" and "vscode" both map to .github/) -_TARGET_PREFIXES = { - "copilot": [".github/"], - "vscode": [".github/"], - "claude": [".claude/"], - "cursor": [".cursor/"], - "opencode": [".opencode/"], - "all": [".github/", ".claude/", ".cursor/", ".opencode/"], -} +from .lockfile_enrichment import enrich_lockfile_for_pack, _TARGET_PREFIXES, _filter_files_by_target @dataclass @@ -33,12 +22,6 @@ class PackResult: lockfile_enriched: bool = False -def _filter_files_by_target(deployed_files: List[str], target: str) -> List[str]: - """Filter deployed file paths by target prefix.""" - prefixes = _TARGET_PREFIXES.get(target, _TARGET_PREFIXES["all"]) - return [f for f in deployed_files if any(f.startswith(p) for p in prefixes)] - - def pack_bundle( project_root: Path, output_dir: Path, diff --git a/tests/unit/test_lockfile_enrichment.py b/tests/unit/test_lockfile_enrichment.py index 7a33d389..75617785 100644 --- a/tests/unit/test_lockfile_enrichment.py +++ b/tests/unit/test_lockfile_enrichment.py @@ -54,3 +54,48 @@ def test_does_not_mutate_original(self): enrich_lockfile_for_pack(lf, fmt="apm", target="all") assert lf.to_yaml() == original_yaml + + 
def test_filters_deployed_files_by_target(self): + """Pack with --target vscode should exclude .claude/ files from lockfile.""" + lf = LockFile() + dep = LockedDependency( + repo_url="owner/repo", + resolved_commit="abc123", + version="1.0.0", + deployed_files=[ + ".github/agents/a.md", + ".github/skills/s1", + ".claude/commands/c.md", + ".claude/skills/review", + ], + ) + lf.add_dependency(dep) + + result = enrich_lockfile_for_pack(lf, fmt="apm", target="vscode") + parsed = yaml.safe_load(result) + + deployed = parsed["dependencies"][0]["deployed_files"] + assert ".github/agents/a.md" in deployed + assert ".github/skills/s1" in deployed + assert ".claude/commands/c.md" not in deployed + assert ".claude/skills/review" not in deployed + + def test_filters_deployed_files_target_all_keeps_everything(self): + """Pack with --target all should keep all deployed files.""" + lf = LockFile() + dep = LockedDependency( + repo_url="owner/repo", + resolved_commit="abc123", + version="1.0.0", + deployed_files=[ + ".github/agents/a.md", + ".claude/commands/c.md", + ], + ) + lf.add_dependency(dep) + + result = enrich_lockfile_for_pack(lf, fmt="apm", target="all") + parsed = yaml.safe_load(result) + + deployed = parsed["dependencies"][0]["deployed_files"] + assert len(deployed) == 2 From cdcd61950a7135b124eba947506d3b67e0a216d3 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 00:42:19 +0100 Subject: [PATCH 35/40] fix: verbose lockfile iteration used dict keys instead of values `existing_lockfile.dependencies` is a Dict[str, LockedDependency]. Iterating it yields string keys, not LockedDependency objects. Use `get_all_dependencies()` to iterate values. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- src/apm_cli/commands/install.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index 83aed5d1..ce677bdb 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -1045,7 +1045,7 @@ def _install_apm_dependencies( if logger: logger.verbose_detail(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") if logger.verbose: - for locked_dep in existing_lockfile.dependencies: + for locked_dep in existing_lockfile.get_all_dependencies(): sha_short = locked_dep.resolved_commit[:8] if locked_dep.resolved_commit else "no-sha" logger.verbose_detail(f" {locked_dep.get_unique_key()}: locked at {sha_short}") From eb2a6fffe504020c22ee457a162e7d2d5af24f85 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 01:12:31 +0100 Subject: [PATCH 36/40] =?UTF-8?q?feat:=20verbose=20logging=20UX=20overhaul?= =?UTF-8?q?=20=E2=80=94=20CommandLogger=20SoC=20architecture?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Extend CommandLogger with tree_item(), package_inline_warning() - Extend InstallLogger with lockfile_entry(), package_auth(), package_type_info(), and refactored download_complete(ref=, sha=, cached=) - Add count_for_package() to DiagnosticCollector for inline verbose hints - Fix double hash/parens in ref display (#tag @sha format) - Fix 'locked at no-sha' — show 'pinned to ref' or omit unpinned - Normalize verbose sub-item indentation to 4-space - Remove [i] symbol from integration tree lines (use tree_item) - Show per-package skip/error counts inline in verbose mode - Create AuthResolver once per install, not per-package - Add --verbose to uninstall, pack, unpack commands - Remove redundant 'files scanned' in audit verbose - Fix SoC: _log_unpack_file_list uses logger not _rich_echo - Update CLI logging skill with full architecture 
documentation Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/skills/cli-logging-ux/SKILL.md | 173 +++++++++++++++++++++-- src/apm_cli/commands/audit.py | 2 - src/apm_cli/commands/install.py | 97 +++++++++---- src/apm_cli/commands/pack.py | 28 ++-- src/apm_cli/commands/uninstall/cli.py | 6 +- src/apm_cli/commands/uninstall/engine.py | 2 + src/apm_cli/core/command_logger.py | 71 +++++++++- src/apm_cli/utils/diagnostics.py | 9 ++ 8 files changed, 335 insertions(+), 53 deletions(-) diff --git a/.github/skills/cli-logging-ux/SKILL.md b/.github/skills/cli-logging-ux/SKILL.md index edaa0a64..3b07adb7 100644 --- a/.github/skills/cli-logging-ux/SKILL.md +++ b/.github/skills/cli-logging-ux/SKILL.md @@ -151,16 +151,163 @@ if SkillIntegrator._dirs_equal(source, target): ## CommandLogger Architecture -All CLI commands must use `CommandLogger` (or a subclass) for output: +APM is a large and growing CLI with 10+ commands, 8+ integrators, and dozens of output sites. The logging architecture enforces **Separation of Concerns**: commands declare *what* happened; the logger decides *how* to render it. This keeps output consistent, testable, and evolvable without shotgun surgery across command files. -- **`CommandLogger`** (`src/apm_cli/core/command_logger.py`): Base for all commands. Provides `start()`, `progress()`, `success()`, `error()`, `warning()`, `verbose_detail()`, `dry_run_notice()`, `auth_step()`, `render_summary()`. -- **`InstallLogger(CommandLogger)`**: Install-specific with `validation_start()`, `resolution_start()`, `nothing_to_install()`, `download_start()`, `install_summary()`. -- **`DiagnosticCollector`**: Injected via `logger.diagnostics`. Collect-then-render pattern. +### The three layers -### Rule: No direct _rich_* in commands -Command functions must NOT call `_rich_info()`, `_rich_error()`, etc. directly. Use `logger.progress()`, `logger.error()`, etc. instead. The _rich_* helpers are internal to CommandLogger. 
+``` +┌─────────────────────────────────────────────────────┐ +│ Command layer (install.py, pack.py, audit.py …) │ +│ Calls: logger.success(), logger.tree_item(), … │ +│ NEVER calls: _rich_*, click.echo(), print() │ +├─────────────────────────────────────────────────────┤ +│ Logger layer (command_logger.py) │ +│ CommandLogger ← InstallLogger, future subclasses │ +│ Owns: verbose gating, symbol choice, indentation │ +│ Delegates to: _rich_* helpers │ +├─────────────────────────────────────────────────────┤ +│ Rendering layer (console.py) │ +│ _rich_echo, _rich_success, _rich_error, … │ +│ Owns: Rich/colorama fallback, color, STATUS_SYMBOLS │ +└─────────────────────────────────────────────────────┘ +``` + +Changes to output style (colors, symbols, indentation) happen in the **logger or rendering layer only** — command code is untouched. New output patterns (e.g. a tree sub-item, a package metadata line) become new logger methods, not ad-hoc format strings in commands. + +### Base class: `CommandLogger` + +`src/apm_cli/core/command_logger.py` — base for all commands. 
+ +| Method | Purpose | When to use | +|--------|---------|-------------| +| `start(msg, symbol=)` | Operation start | Beginning of a command | +| `progress(msg, symbol=)` | Status update with `[i]` prefix | Mid-operation phase changes | +| `success(msg, symbol=)` | Green success | Operation completed | +| `warning(msg, symbol=)` | Yellow warning | User action needed | +| `error(msg, symbol=)` | Red error | Operation failed | +| `verbose_detail(msg)` | Dim text, verbose-only | Internal details (paths, hashes) | +| `tree_item(msg)` | Green text, no symbol prefix | `└─` sub-items under a package | +| `package_inline_warning(msg)` | Yellow text, verbose-only | Per-package diagnostic hints | +| `dry_run_notice(msg)` | `[dry-run]` prefix | Dry-run explanation | +| `auth_step(step, success, detail)` | Auth resolution step | Verbose auth tracing | +| `render_summary()` | Render DiagnosticCollector | End of command | + +### Subclass: `InstallLogger(CommandLogger)` + +Install-specific phases. Commands that don't need these use `CommandLogger` directly. 
+ +| Method | Purpose | Output | +|--------|---------|--------| +| `validation_start(count)` | Start validation | `[*] Validating N package(s)...` | +| `validation_pass(name, present)` | Package OK | `[+] name` or `name (already in apm.yml)` | +| `validation_fail(name, reason)` | Package bad | `[x] name -- reason` | +| `resolution_start(count, lockfile)` | Start resolution | Context-aware install/update message | +| `download_complete(name, ref=, sha=, cached=)` | Package installed | `[+] name #tag @sha` or `(cached)` | +| `download_failed(name, error)` | Download error | `[x] name -- error` | +| `lockfile_entry(key, ref=, sha=)` | Lockfile verbose line | `key: locked at sha` / `pinned to ref` / omitted | +| `package_auth(source, token_type=)` | Auth source verbose | `Auth: source (type)` | +| `package_type_info(label)` | Package type verbose | `Package type: label` | +| `install_summary(apm, mcp, errors)` | Final summary | `Installed N APM dependencies.` | + +### When to add a new logger method + +If a command needs a new output pattern (new indentation level, new semantic meaning, new verbose gate), **add a method to CommandLogger or a subclass**. Signs you need a new method: + +- You're writing `_rich_echo(f" Something: {value}", color="dim")` in a command file +- You're checking `if logger.verbose:` before calling `_rich_echo` in a command +- You're formatting a string with specific indentation that other commands might reuse +- Multiple commands emit the same kind of line (e.g., file lists, auth info) + +### Rule: No direct `_rich_*` in commands + +Command functions must NOT call `_rich_info()`, `_rich_error()`, etc. directly. Use `logger.progress()`, `logger.error()`, etc. instead. The `_rich_*` helpers are **internal** to the logger and rendering layers. + +**Exception:** Rich tables and panels for display (not lifecycle logging) may use `console.print()` directly — these are data presentation, not status reporting. 
+ +### Rule: Every command gets a `CommandLogger` + +Every Click command function must instantiate a `CommandLogger` (or subclass) and pass it to helpers: + +```python +@cli.command() +@click.option("--verbose", "-v", is_flag=True) +@click.option("--dry-run", is_flag=True) +def my_command(verbose, dry_run): + logger = CommandLogger("my-command", verbose=verbose, dry_run=dry_run) + logger.start("Starting operation...") + _do_work(logger=logger) + logger.render_summary() +``` + +### Rule: Verbose gating lives in the logger -Exception: Rich tables and panels for display (not lifecycle logging) may use `console.print()` directly. +Never check `if verbose:` in command code. Use methods that gate internally: + +```python +# Bad — manual verbose check in command +if verbose: + _rich_echo(f" Auth: {source}", color="dim") + +# Good — logger handles the gate +logger.package_auth(source, token_type) # No-ops when not verbose +logger.verbose_detail(f" Path: {path}") # No-ops when not verbose +``` + +### DiagnosticCollector integration + +Access via `logger.diagnostics` (lazy-initialized). The collector owns the collect-then-render lifecycle: + +```python +# During operation — collect +diagnostics.skip(file, package=pkg_name) # Collision +diagnostics.overwrite(file, package=pkg_name) # Cross-package replacement +diagnostics.error(msg, package=pkg_name) # Failure +diagnostics.auth(msg, package=pkg_name) # Auth issue + +# Query during operation (e.g., for inline verbose hints) +count = diagnostics.count_for_package(pkg_name, category="collision") +if count > 0: + logger.package_inline_warning(f" [!] 
{count} files skipped") + +# After operation — render grouped summary +logger.render_summary() # Delegates to diagnostics.render_summary() +``` + +### Visual hierarchy contract + +Multi-package operations follow this tree structure: + +``` + [+] package-name #v1.0 @b0cbd3df # download_complete + Auth: git-credential-fill (oauth) # package_auth (verbose) + Package type: Skill (SKILL.md detected) # package_type_info (verbose) + └─ 3 skill(s) integrated -> .github/skills/ # tree_item + └─ 1 prompt integrated -> .github/prompts/ # tree_item + [!] 2 files skipped (local files exist) # package_inline_warning (verbose) + [+] another-package (cached) # download_complete + +── Diagnostics ── # render_summary + [!] 2 files skipped -- local files exist # Grouped by category + Use 'apm install --force' to overwrite + +[*] Installed 2 APM dependencies. # install_summary +``` + +Key rules: +- `[+]` package lines are the top-level anchors (green, no indent beyond 2-space) +- Verbose metadata (Auth, Package type) uses 4-space indent, dim color +- Tree items (`└─`) use 4-space indent, green color, no symbol prefix +- Inline warnings use 4-space indent, yellow color, verbose-only +- Diagnostics summary appears AFTER all packages, not inline (except verbose hints) + +### Scaling guidance + +As the CLI grows, this architecture scales by: +- **New commands**: Instantiate `CommandLogger`, use existing methods. Add subclass only if the command has distinct phases (like `InstallLogger`). +- **New output patterns**: Add methods to `CommandLogger`. Every command benefits. +- **New integrators**: Accept `diagnostics=` param, push to collector. No direct output. +- **Theme changes**: Modify rendering layer (`console.py`). Zero command changes. +- **Testing**: Mock `CommandLogger` in tests to assert semantic calls without parsing output strings. ## Anti-patterns @@ -176,8 +323,14 @@ Exception: Rich tables and panels for display (not lifecycle logging) may use `c 6. 
**Walls of text** — Use Rich tables for structured data, panels for grouped content. Break up long output with visual hierarchy (indentation, `└─` tree connectors).
 
-7. **Calling `_rich_info("Installing...")` directly in a command** — Use `logger.start("Installing...")` instead. The `_rich_*` helpers are internal to `CommandLogger`.
+7. **Direct `_rich_*` calls in commands** — Use `logger.start()`, `logger.progress()`, `logger.tree_item()`, etc. The `_rich_*` helpers are internal to `CommandLogger` and `console.py`. Adding a `_rich_echo` call in a command file is a SoC violation.
+
+8. **Manual `if verbose:` checks** — Use `logger.verbose_detail()`, `logger.package_auth()`, or other verbose-gated methods. The logger owns the gate.
+
+9. **Manual `if dry_run:` checks** — Use `logger.should_execute` or `logger.dry_run_notice()`.
+
+10. **Format strings for indentation in commands** — Don't write `f"    Auth: {source}"` in command code. Use `logger.package_auth(source)`, which owns the indent level. When a new indentation pattern is needed, add a method to `CommandLogger`.
 
-8. **Checking `if verbose:` manually** — Use `logger.verbose_detail("...")` which handles the check internally.
+11. **Re-creating shared objects per iteration** — Expensive objects like `AuthResolver` should be created once before loops and reused per-package. The logger and diagnostics collector are already singletons per command invocation.
 
-9. **Checking `if dry_run:` manually** — Use `logger.should_execute` or `logger.dry_run_notice("...")` instead.
+12. **Using `logger.progress()` for tree sub-items** — `progress()` adds a `[i]` symbol prefix. Tree continuation lines (`└─`) should use `logger.tree_item()`, which renders with no symbol.
diff --git a/src/apm_cli/commands/audit.py b/src/apm_cli/commands/audit.py index a0dd38e1..dfe35c00 100644 --- a/src/apm_cli/commands/audit.py +++ b/src/apm_cli/commands/audit.py @@ -256,8 +256,6 @@ def _render_summary( if info > 0 and (critical > 0 or warning > 0): logger.progress(f" Plus {info} info-level finding(s) (use --verbose to see)") - logger.verbose_detail(f" {files_scanned} file(s) scanned") - def _apply_strip( findings_by_file: Dict[str, List[ScanFinding]], diff --git a/src/apm_cli/commands/install.py b/src/apm_cli/commands/install.py index ce677bdb..ed545485 100644 --- a/src/apm_cli/commands/install.py +++ b/src/apm_cli/commands/install.py @@ -775,7 +775,7 @@ def _integrate_package_primitives( def _log_integration(msg): if logger: - logger.progress(msg) + logger.tree_item(msg) # --- prompts --- prompt_result = prompt_integrator.integrate_package_prompts( @@ -1046,8 +1046,9 @@ def _install_apm_dependencies( logger.verbose_detail(f"Using apm.lock.yaml ({lockfile_count} locked dependencies)") if logger.verbose: for locked_dep in existing_lockfile.get_all_dependencies(): - sha_short = locked_dep.resolved_commit[:8] if locked_dep.resolved_commit else "no-sha" - logger.verbose_detail(f" {locked_dep.get_unique_key()}: locked at {sha_short}") + _sha = locked_dep.resolved_commit[:8] if locked_dep.resolved_commit else "" + _ref = locked_dep.resolved_ref if hasattr(locked_dep, 'resolved_ref') and locked_dep.resolved_ref else "" + logger.lockfile_entry(locked_dep.get_unique_key(), ref=_ref, sha=_sha) apm_modules_dir = project_root / APM_MODULES_DIR apm_modules_dir.mkdir(exist_ok=True) @@ -1410,6 +1411,14 @@ def _collect_descendants(node, visited=None): _pre_downloaded_keys = builtins.set(_pre_download_results.keys()) # Create progress display for sequential integration + _auth_resolver = None + if verbose: + try: + from apm_cli.core.auth import AuthResolver + _auth_resolver = AuthResolver() + except Exception: + pass + with Progress( SpinnerColumn(), 
TextColumn("[cyan]{task.description}[/cyan]"), @@ -1552,6 +1561,17 @@ def _collect_descendants(node, visited=None): ) package_deployed_files[dep_key] = dep_deployed_files + + # In verbose mode, show inline skip/error count for this package + if logger and logger.verbose: + _skip_count = diagnostics.count_for_package(dep_key, "collision") + _err_count = diagnostics.count_for_package(dep_key, "error") + if _skip_count > 0: + noun = "file" if _skip_count == 1 else "files" + logger.package_inline_warning(f" [!] {_skip_count} {noun} skipped (local files exist)") + if _err_count > 0: + noun = "error" if _err_count == 1 else "errors" + logger.package_inline_warning(f" [!] {_err_count} integration {noun}") continue # npm-like behavior: Branches always fetch latest, only tags/commits use cache @@ -1618,17 +1638,12 @@ def _collect_descendants(node, visited=None): str(dep_ref) if dep_ref.is_virtual else dep_ref.repo_url ) # Show resolved ref from lockfile for consistency with fresh installs - ref_str = "" + _ref = dep_ref.reference or "" + _sha = "" if _dep_locked_chk and _dep_locked_chk.resolved_commit and _dep_locked_chk.resolved_commit != "cached": - short_sha = _dep_locked_chk.resolved_commit[:8] - if dep_ref.reference: - ref_str = f"#{dep_ref.reference} ({short_sha})" - else: - ref_str = f"#{short_sha}" - elif dep_ref.reference: - ref_str = f"#{dep_ref.reference}" + _sha = _dep_locked_chk.resolved_commit[:8] if logger: - logger.download_complete(display_name, ref_suffix=f"{ref_str} (cached)" if ref_str else "cached") + logger.download_complete(display_name, ref=_ref, sha=_sha, cached=True) installed_count += 1 if not dep_ref.reference: unpinned_count += 1 @@ -1750,6 +1765,17 @@ def _collect_descendants(node, visited=None): package=dep_key, ) + # In verbose mode, show inline skip/error count for this package + if logger and logger.verbose: + _skip_count = diagnostics.count_for_package(dep_key, "collision") + _err_count = diagnostics.count_for_package(dep_key, "error") + 
if _skip_count > 0: + noun = "file" if _skip_count == 1 else "files" + logger.package_inline_warning(f" [!] {_skip_count} {noun} skipped (local files exist)") + if _err_count > 0: + noun = "error" if _err_count == 1 else "errors" + logger.package_inline_warning(f" [!] {_err_count} integration {noun}") + continue # Download the package with progress feedback @@ -1796,22 +1822,34 @@ def _collect_descendants(node, visited=None): # Show resolved ref alongside package name for visibility resolved = getattr(package_info, 'resolved_reference', None) - ref_suffix = f"#{resolved}" if resolved else "" if logger: - logger.download_complete(display_name, ref_suffix=ref_suffix) + _ref = "" + _sha = "" + if resolved: + _ref = resolved.ref_name if resolved.ref_name else "" + _sha = resolved.resolved_commit[:8] if resolved.resolved_commit else "" + logger.download_complete(display_name, ref=_ref, sha=_sha) # Log auth source for this download (verbose only) - if verbose: + if _auth_resolver: try: - from apm_cli.core.auth import AuthResolver - _auth = AuthResolver() _host = dep_ref.host or "github.com" _org = dep_ref.repo_url.split('/')[0] if dep_ref.repo_url and '/' in dep_ref.repo_url else None - _ctx = _auth.resolve(_host, org=_org) - logger.verbose_detail(f" Auth: {_ctx.source} ({_ctx.token_type or 'none'})") + _ctx = _auth_resolver.resolve(_host, org=_org) + logger.package_auth(_ctx.source, _ctx.token_type or "none") except Exception: pass else: - _rich_success(f"[+] {display_name}{ref_suffix}") + _ref_suffix = "" + if resolved: + _r = resolved.ref_name if resolved.ref_name else "" + _s = resolved.resolved_commit[:8] if resolved.resolved_commit else "" + if _r and _s: + _ref_suffix = f" #{_r} @{_s}" + elif _r: + _ref_suffix = f" #{_r}" + elif _s: + _ref_suffix = f" @{_s}" + _rich_success(f"[+] {display_name}{_ref_suffix}") # Track unpinned deps for aggregated diagnostic if not dep_ref.reference: @@ -1835,7 +1873,7 @@ def _collect_descendants(node, visited=None): 
package_types[dep_ref.get_unique_key()] = package_info.package_type.value # Show package type in verbose mode - if verbose and hasattr(package_info, "package_type"): + if hasattr(package_info, "package_type"): from apm_cli.models.apm_package import PackageType package_type = package_info.package_type @@ -1845,9 +1883,8 @@ def _collect_descendants(node, visited=None): PackageType.HYBRID: "Hybrid (apm.yml + SKILL.md)", PackageType.APM_PACKAGE: "APM Package (apm.yml)", }.get(package_type) - if _type_label: - if logger: - logger.verbose_detail(f" Package type: {_type_label}") + if _type_label and logger: + logger.package_type_info(_type_label) # Auto-integrate prompts and agents if enabled # Pre-deploy security gate @@ -1895,6 +1932,18 @@ def _collect_descendants(node, visited=None): package=dep_ref.get_unique_key(), ) + # In verbose mode, show inline skip/error count for this package + if logger and logger.verbose: + pkg_key = dep_ref.get_unique_key() + _skip_count = diagnostics.count_for_package(pkg_key, "collision") + _err_count = diagnostics.count_for_package(pkg_key, "error") + if _skip_count > 0: + noun = "file" if _skip_count == 1 else "files" + logger.package_inline_warning(f" [!] {_skip_count} {noun} skipped (local files exist)") + if _err_count > 0: + noun = "error" if _err_count == 1 else "errors" + logger.package_inline_warning(f" [!] 
{_err_count} integration {noun}") + except Exception as e: display_name = ( str(dep_ref) if dep_ref.is_virtual else dep_ref.repo_url diff --git a/src/apm_cli/commands/pack.py b/src/apm_cli/commands/pack.py index 370478a3..be89dcef 100644 --- a/src/apm_cli/commands/pack.py +++ b/src/apm_cli/commands/pack.py @@ -8,7 +8,6 @@ from ..bundle.packer import pack_bundle from ..bundle.unpacker import unpack_bundle from ..core.command_logger import CommandLogger -from ..utils.console import _rich_echo @click.command(name="pack", help="Create a self-contained bundle from installed dependencies") @@ -36,10 +35,11 @@ ) @click.option("--dry-run", is_flag=True, default=False, help="Show what would be packed without writing.") @click.option("--force", is_flag=True, default=False, help="On collision, last writer wins.") +@click.option("--verbose", "-v", is_flag=True, help="Show detailed packing information") @click.pass_context -def pack_cmd(ctx, fmt, target, archive, output, dry_run, force): +def pack_cmd(ctx, fmt, target, archive, output, dry_run, force, verbose): """Create a self-contained APM bundle.""" - logger = CommandLogger("pack", dry_run=dry_run) + logger = CommandLogger("pack", verbose=verbose, dry_run=dry_run) try: result = pack_bundle( project_root=Path("."), @@ -57,7 +57,7 @@ def pack_cmd(ctx, fmt, target, archive, output, dry_run, force): if result.files: logger.progress(f"Would pack {len(result.files)} file(s):") for f in result.files: - click.echo(f" {f}") + logger.tree_item(f" └─ {f}") else: logger.warning("No files to pack") return @@ -66,6 +66,8 @@ def pack_cmd(ctx, fmt, target, archive, output, dry_run, force): logger.warning("No deployed files found -- empty bundle created") else: logger.success(f"Packed {len(result.files)} file(s) -> {result.bundle_path}") + for f in result.files: + logger.verbose_detail(f" └─ {f}") if fmt == "plugin": logger.progress( "Plugin bundle ready -- contains plugin.json and " @@ -90,10 +92,11 @@ def pack_cmd(ctx, fmt, target, 
archive, output, dry_run, force): @click.option("--skip-verify", is_flag=True, default=False, help="Skip bundle completeness check.") @click.option("--dry-run", is_flag=True, default=False, help="Show what would be unpacked without writing.") @click.option("--force", is_flag=True, default=False, help="Deploy despite critical hidden-character findings.") +@click.option("--verbose", "-v", is_flag=True, help="Show detailed unpacking information") @click.pass_context -def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force): +def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force, verbose): """Extract an APM bundle into the project.""" - logger = CommandLogger("unpack", dry_run=dry_run) + logger = CommandLogger("unpack", verbose=verbose, dry_run=dry_run) try: logger.start(f"Unpacking {bundle_path} -> {output}") @@ -109,7 +112,7 @@ def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force): logger.dry_run_notice("No files written") if result.files: logger.progress(f"Would unpack {len(result.files)} file(s):") - _log_unpack_file_list(result) + _log_unpack_file_list(result, logger) else: logger.warning("No files in bundle") return @@ -117,7 +120,7 @@ def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force): if not result.files: logger.warning("No files were unpacked") else: - _log_unpack_file_list(result) + _log_unpack_file_list(result, logger) if result.skipped_count > 0: logger.warning( f" {result.skipped_count} file(s) skipped (missing from bundle)" @@ -140,14 +143,13 @@ def unpack_cmd(ctx, bundle_path, output, skip_verify, dry_run, force): sys.exit(1) -def _log_unpack_file_list(result): +def _log_unpack_file_list(result, logger): """Log unpacked files grouped by dependency, using tree-style output.""" if result.dependency_files: for dep_name, dep_files in result.dependency_files.items(): - _rich_echo(f" {dep_name}", color="cyan") + logger.progress(f" {dep_name}") for f in dep_files: - _rich_echo(f" └─ {f}", 
color="white") + logger.tree_item(f" └─ {f}") else: - # Fallback: flat file list (no dependency info) for f in result.files: - _rich_echo(f" └─ {f}", color="white") + logger.tree_item(f" └─ {f}") diff --git a/src/apm_cli/commands/uninstall/cli.py b/src/apm_cli/commands/uninstall/cli.py index 2160f542..0cca0523 100644 --- a/src/apm_cli/commands/uninstall/cli.py +++ b/src/apm_cli/commands/uninstall/cli.py @@ -27,8 +27,9 @@ @click.option( "--dry-run", is_flag=True, help="Show what would be removed without removing" ) +@click.option("--verbose", "-v", is_flag=True, help="Show detailed removal information") @click.pass_context -def uninstall(ctx, packages, dry_run): +def uninstall(ctx, packages, dry_run, verbose): """Remove APM packages from apm.yml and apm_modules (like npm uninstall). This command removes packages from both the apm.yml dependencies list @@ -39,7 +40,7 @@ def uninstall(ctx, packages, dry_run): apm uninstall org/pkg1 org/pkg2 # Remove multiple packages apm uninstall acme/my-package --dry-run # Show what would be removed """ - logger = CommandLogger("uninstall", dry_run=dry_run) + logger = CommandLogger("uninstall", verbose=verbose, dry_run=dry_run) try: # Check if apm.yml exists if not Path(APM_YML_FILENAME).exists(): @@ -164,6 +165,7 @@ def uninstall(ctx, packages, dry_run): for label, count in cleaned.items(): if count > 0: logger.progress(f"Cleaned up {count} integrated {label}", symbol="check") + logger.verbose_detail(f" Removed {count} deployed {label} file(s)") # Step 10: MCP cleanup try: diff --git a/src/apm_cli/commands/uninstall/engine.py b/src/apm_cli/commands/uninstall/engine.py index ec75ad36..a12aed3c 100644 --- a/src/apm_cli/commands/uninstall/engine.py +++ b/src/apm_cli/commands/uninstall/engine.py @@ -132,6 +132,7 @@ def _remove_packages_from_disk(packages_to_remove, apm_modules_dir, logger): try: safe_rmtree(package_path, apm_modules_dir) logger.progress(f"Removed {package} from apm_modules/") + logger.verbose_detail(f" Path: 
{package_path.relative_to(apm_modules_dir)}") removed += 1 deleted_pkg_paths.append(package_path) except Exception as e: @@ -212,6 +213,7 @@ def _cleanup_transitive_orphans(lockfile, packages_to_remove, apm_modules_dir, a try: safe_rmtree(orphan_path, apm_modules_dir) logger.progress(f"Removed transitive dependency {orphan_key} from apm_modules/") + logger.verbose_detail(f" Path: {orphan_path.relative_to(apm_modules_dir)}") removed += 1 deleted_orphan_paths.append(orphan_path) except Exception as e: diff --git a/src/apm_cli/core/command_logger.py b/src/apm_cli/core/command_logger.py index 70dee35f..3ff1436b 100644 --- a/src/apm_cli/core/command_logger.py +++ b/src/apm_cli/core/command_logger.py @@ -94,6 +94,23 @@ def verbose_detail(self, message: str): if self.verbose: _rich_echo(message, color="dim") + def tree_item(self, message: str): + """Log a tree sub-item (└─ line) under a package block. + + Renders green text with no symbol prefix — these are visual + continuation lines, not standalone status messages. + """ + _rich_echo(message, color="green") + + def package_inline_warning(self, message: str): + """Log an inline warning under a package block (verbose only). + + Use for per-package diagnostic hints shown inline during install, + supplementing the deferred DiagnosticCollector summary. + """ + if self.verbose: + _rich_echo(message, color="yellow") + # --- Dry-run awareness --- def dry_run_notice(self, what_would_happen: str): @@ -226,17 +243,67 @@ def download_start(self, dep_name: str, cached: bool): elif self.verbose: _rich_info(f" Downloading: {dep_name}", symbol="download") - def download_complete(self, dep_name: str, ref_suffix: str = ""): - """Log completion of a package download.""" + def download_complete( + self, dep_name: str, ref: str = "", sha: str = "", cached: bool = False, + # Legacy compat: if callers pass ref_suffix= we handle it + ref_suffix: str = "", + ): + """Log completion of a package download. 
+ + Args: + dep_name: Package display name (repo_url or virtual path). + ref: Git reference (tag name, branch) if any. + sha: Short commit SHA (8 chars) if any. + cached: Whether this was a cache hit. + ref_suffix: DEPRECATED — legacy callers still pass this. + """ msg = f" [+] {dep_name}" if ref_suffix: + # Legacy path — pass-through until all callers are migrated msg += f" ({ref_suffix})" + else: + if ref and sha: + msg += f" #{ref} @{sha}" + elif ref: + msg += f" #{ref}" + elif sha: + msg += f" @{sha}" + if cached: + msg += " (cached)" _rich_echo(msg, color="green") def download_failed(self, dep_name: str, error: str): """Log a download failure.""" _rich_error(f" [x] {dep_name} -- {error}") + # --- Verbose sub-item methods (install-specific) --- + + def lockfile_entry(self, key: str, ref: str = "", sha: str = ""): + """Log a lockfile entry in verbose mode. + + Omits the line entirely for unpinned deps (no ref, no sha). + """ + if not self.verbose: + return + if sha: + _rich_echo(f" {key}: locked at {sha}", color="dim") + elif ref: + _rich_echo(f" {key}: pinned to {ref}", color="dim") + # Unpinned → omit entirely (nothing useful to show) + + def package_auth(self, source: str, token_type: str = ""): + """Log auth source for a package (verbose only). 4-space indent.""" + if not self.verbose: + return + type_str = f" ({token_type})" if token_type else "" + _rich_echo(f" Auth: {source}{type_str}", color="dim") + + def package_type_info(self, type_label: str): + """Log detected package type (verbose only). 
4-space indent.""" + if not self.verbose: + return + _rich_echo(f" Package type: {type_label}", color="dim") + # --- Install summary --- def install_summary(self, apm_count: int, mcp_count: int, errors: int = 0): diff --git a/src/apm_cli/utils/diagnostics.py b/src/apm_cli/utils/diagnostics.py index 24895450..04aaa785 100644 --- a/src/apm_cli/utils/diagnostics.py +++ b/src/apm_cli/utils/diagnostics.py @@ -194,6 +194,15 @@ def by_category(self) -> Dict[str, List[Diagnostic]]: groups.setdefault(d.category, []).append(d) return groups + def count_for_package(self, package: str, category: str = "") -> int: + """Count diagnostics for a specific package, optionally filtered by category.""" + with self._lock: + return sum( + 1 + for d in self._diagnostics + if d.package == package and (not category or d.category == category) + ) + # ------------------------------------------------------------------ # Rendering # ------------------------------------------------------------------ From 12cf9d8a26f4fbbead25765ee02c95498e67e45c Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 01:15:40 +0100 Subject: [PATCH 37/40] test: add tests for verbose logging UX methods - 20 new CommandLogger/InstallLogger tests: tree_item, download_complete, lockfile_entry, package_auth, package_type_info, package_inline_warning - 3 new DiagnosticCollector tests: count_for_package - Verbose flag acceptance test for uninstall command Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- tests/unit/test_command_logger.py | 155 ++++++++++++++++++++++++++++++ tests/unit/test_diagnostics.py | 23 +++++ 2 files changed, 178 insertions(+) diff --git a/tests/unit/test_command_logger.py b/tests/unit/test_command_logger.py index c3cd8c11..3939f9c3 100644 --- a/tests/unit/test_command_logger.py +++ b/tests/unit/test_command_logger.py @@ -303,3 +303,158 @@ def test_download_complete_no_ref(self, mock_echo): logger = InstallLogger() logger.download_complete("pkg/repo") assert 
"pkg/repo" in mock_echo.call_args[0][0] + + # --- tree_item --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_tree_item_calls_rich_echo_green_no_symbol(self, mock_echo): + logger = CommandLogger("test") + logger.tree_item(" └─ .github/copilot-instructions.md") + mock_echo.assert_called_once_with( + " └─ .github/copilot-instructions.md", color="green" + ) + + @patch("apm_cli.core.command_logger._rich_echo") + def test_tree_item_renders_regardless_of_verbose(self, mock_echo): + """tree_item always renders — it is not gated on verbose.""" + logger_quiet = CommandLogger("test", verbose=False) + logger_verbose = CommandLogger("test", verbose=True) + + logger_quiet.tree_item("line1") + logger_verbose.tree_item("line2") + + assert mock_echo.call_count == 2 + + # --- package_inline_warning --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_inline_warning_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=True) + logger.package_inline_warning(" ⚠ path collision on file.md") + mock_echo.assert_called_once_with( + " ⚠ path collision on file.md", color="yellow" + ) + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_inline_warning_not_verbose(self, mock_echo): + logger = CommandLogger("test", verbose=False) + logger.package_inline_warning(" ⚠ path collision on file.md") + mock_echo.assert_not_called() + + # --- download_complete (structured ref/sha/cached) --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_ref_and_sha(self, mock_echo): + logger = InstallLogger() + logger.download_complete("owner/repo", ref="v1.0", sha="abc12345") + msg = mock_echo.call_args[0][0] + assert "#v1.0" in msg + assert "@abc12345" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_cached_no_ref(self, mock_echo): + logger = InstallLogger() + logger.download_complete("owner/repo", ref="", sha="", cached=True) + msg = mock_echo.call_args[0][0] + assert 
"(cached)" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_ref_sha_and_cached(self, mock_echo): + logger = InstallLogger() + logger.download_complete("owner/repo", ref="v1.0", sha="abc12345", cached=True) + msg = mock_echo.call_args[0][0] + assert "#v1.0" in msg + assert "(cached)" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_legacy_ref_suffix(self, mock_echo): + logger = InstallLogger() + logger.download_complete("owner/repo", ref_suffix="old-style") + msg = mock_echo.call_args[0][0] + assert "(old-style)" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_download_complete_no_args(self, mock_echo): + logger = InstallLogger() + logger.download_complete("owner/repo") + msg = mock_echo.call_args[0][0] + assert "owner/repo" in msg + assert "#" not in msg + assert "@" not in msg + assert "(cached)" not in msg + + # --- lockfile_entry --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_lockfile_entry_sha_verbose(self, mock_echo): + logger = InstallLogger(verbose=True) + logger.lockfile_entry("owner/repo", sha="abc12345") + msg = mock_echo.call_args[0][0] + assert "locked at abc12345" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_lockfile_entry_ref_verbose(self, mock_echo): + logger = InstallLogger(verbose=True) + logger.lockfile_entry("owner/repo", ref="main") + msg = mock_echo.call_args[0][0] + assert "pinned to main" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_lockfile_entry_no_ref_no_sha_verbose(self, mock_echo): + """Unpinned deps omit the line entirely.""" + logger = InstallLogger(verbose=True) + logger.lockfile_entry("owner/repo") + mock_echo.assert_not_called() + + @patch("apm_cli.core.command_logger._rich_echo") + def test_lockfile_entry_not_verbose(self, mock_echo): + """All lockfile_entry calls are suppressed when not verbose.""" + logger = InstallLogger(verbose=False) + 
logger.lockfile_entry("owner/repo", sha="abc12345") + logger.lockfile_entry("owner/repo", ref="main") + logger.lockfile_entry("owner/repo") + mock_echo.assert_not_called() + + # --- package_auth --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_auth_verbose(self, mock_echo): + logger = InstallLogger(verbose=True) + logger.package_auth("GITHUB_TOKEN", token_type="fine-grained") + msg = mock_echo.call_args[0][0] + assert "Auth: GITHUB_TOKEN" in msg + assert "(fine-grained)" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_auth_not_verbose(self, mock_echo): + logger = InstallLogger(verbose=False) + logger.package_auth("GITHUB_TOKEN", token_type="fine-grained") + mock_echo.assert_not_called() + + # --- package_type_info --- + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_type_info_verbose(self, mock_echo): + logger = InstallLogger(verbose=True) + logger.package_type_info("GitHub repository (rules-only)") + msg = mock_echo.call_args[0][0] + assert "Package type: GitHub repository (rules-only)" in msg + + @patch("apm_cli.core.command_logger._rich_echo") + def test_package_type_info_not_verbose(self, mock_echo): + logger = InstallLogger(verbose=False) + logger.package_type_info("GitHub repository (rules-only)") + mock_echo.assert_not_called() + + +class TestVerboseFlagAcceptance: + """Verify CLI commands accept --verbose without crashing on unknown option.""" + + def test_uninstall_accepts_verbose_flag(self): + from click.testing import CliRunner + from apm_cli.commands.uninstall.cli import uninstall + + runner = CliRunner() + result = runner.invoke(uninstall, ["some-package", "--verbose"]) + # exit code 2 = click UsageError (unknown option) — must not happen + assert result.exit_code != 2 diff --git a/tests/unit/test_diagnostics.py b/tests/unit/test_diagnostics.py index 92889614..5e4d9acd 100644 --- a/tests/unit/test_diagnostics.py +++ b/tests/unit/test_diagnostics.py @@ -156,6 +156,29 @@ def 
test_by_category_preserves_insertion_order(self): collisions = dc.by_category()[CATEGORY_COLLISION] assert [d.message for d in collisions] == ["first", "second", "third"] + # ── count_for_package ─────────────────────────────────────────── + + def test_count_for_package_filtered_by_category(self): + dc = DiagnosticCollector() + dc.skip("a.md", package="pkg1") + dc.skip("b.md", package="pkg1") + dc.error("fail", package="pkg1") + dc.warn("w", package="pkg2") + assert dc.count_for_package("pkg1", CATEGORY_COLLISION) == 2 + + def test_count_for_package_all_categories(self): + dc = DiagnosticCollector() + dc.skip("a.md", package="pkg1") + dc.error("fail", package="pkg1") + dc.warn("w", package="pkg1") + dc.warn("other", package="pkg2") + assert dc.count_for_package("pkg1") == 3 + + def test_count_for_package_nonexistent(self): + dc = DiagnosticCollector() + dc.skip("a.md", package="pkg1") + assert dc.count_for_package("nonexistent") == 0 + # ── DiagnosticCollector — rendering ───────────────────────────────── From ee20f51eccd60081be3ea6a929796a407c25f861 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 01:23:26 +0100 Subject: [PATCH 38/40] perf: optimize test execution for agent workflows - Add pytest-xdist for parallel test execution (-n auto) - Update copilot instructions with targeted test guidance - Update CONTRIBUTING.md with fast/full/targeted test commands - Add root tests/conftest.py documenting test structure Agent test runs go from ~128s (full suite serial) to ~11s (unit suite parallel), a ~12x improvement. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/copilot-instructions.md | 6 +++++- CONTRIBUTING.md | 22 +++++++++++++++++----- pyproject.toml | 3 ++- tests/conftest.py | 13 +++++++++++++ uv.lock | 24 ++++++++++++++++++++++++ 5 files changed, 61 insertions(+), 7 deletions(-) create mode 100644 tests/conftest.py diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 2bc4e2fa..1d7b0a1d 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -2,7 +2,11 @@ - Use `uv sync` to create the virtual environment and install all dependencies automatically. - Use `uv run ` to run commands in the uv-managed environment. - For development dependencies, use `uv sync --extra dev`. -- Unit tests are run with pytest, but remember you must activate the virtual environment first as described above. +- **Running tests**: Use pytest via `uv run`. Prefer targeted test runs during development: + - **Targeted (fastest, use during iteration):** `uv run pytest tests/unit/path/to/relevant_test.py -x` + - **Unit suite (default validation):** `uv run pytest tests/unit tests/test_console.py -x` (~2,400 tests, matches CI) + - **Full suite (only before final commit):** `uv run pytest` + - When modifying a specific module, run only its corresponding test file(s) first. Run the full unit suite once as final validation before considering your work done. - **Test coverage principle**: When modifying existing code, add tests for the code paths you touch, on top of tests for the new functionality. - **Development Workflow**: To run APM from source while working in other directories: - Install in development mode: `cd /path/to/awd-cli && uv run pip install -e .` diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 503c5c3a..95207fb2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -35,7 +35,7 @@ Enhancement suggestions are welcome! Please: 1. Fork the repository. 2. 
Create a new branch for your feature/fix: `git checkout -b feature/your-feature-name` or `git checkout -b fix/issue-description`. 3. Make your changes. -4. Run tests: `uv run pytest` +4. Run tests: `uv run pytest tests/unit tests/test_console.py -x` 5. Ensure your code follows our coding style (we use Black and isort). 6. Commit your changes with a descriptive message. 7. Push to your fork. @@ -74,12 +74,24 @@ uv sync --extra dev ## Testing -We use pytest for testing. After completing the setup above, run the test suite with: +We use pytest for testing with `pytest-xdist` for parallel execution. After completing the setup above: ```bash -uv run pytest -q +# Run the unit test suite (recommended — matches CI, fast) +uv run pytest tests/unit tests/test_console.py -x + +# Run a specific test file (fastest, use during development) +uv run pytest tests/unit/path/to/relevant_test.py -x + +# Run the full test suite (includes integration & acceptance tests) +uv run pytest + +# Run with verbose output +uv run pytest tests/unit -x -v ``` +Tests run in parallel automatically (`-n auto` is configured in `pyproject.toml`). To force serial execution, add `-n0`. 
+ If you don't have `uv` available, you can use a standard Python venv and pip: ```bash @@ -91,8 +103,8 @@ source .venv/bin/activate pip install -U pip pip install -e .[dev] -# run tests -pytest -q +# run unit tests +pytest tests/unit tests/test_console.py -x ``` ## Coding Style diff --git a/pyproject.toml b/pyproject.toml index f6f12545..49e627f1 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -41,6 +41,7 @@ dependencies = [ dev = [ "pytest>=7.0.0", "pytest-cov>=4.0.0", + "pytest-xdist>=3.0.0", "black>=26.3.1; python_version>='3.10'", "isort>=5.0.0", "mypy>=1.0.0", @@ -70,7 +71,7 @@ warn_return_any = true warn_unused_configs = true [tool.pytest.ini_options] -addopts = "-m 'not benchmark'" +addopts = "-m 'not benchmark' -n auto" markers = [ "integration: marks tests as integration tests that may require network access", "slow: marks tests as slow running tests", diff --git a/tests/conftest.py b/tests/conftest.py new file mode 100644 index 00000000..b5bd9bb8 --- /dev/null +++ b/tests/conftest.py @@ -0,0 +1,13 @@ +# Root conftest.py — shared pytest configuration +# +# Test directory structure: +# tests/unit/ — Fast isolated unit tests (default CI scope) +# tests/integration/ — E2E tests requiring network / external services +# tests/acceptance/ — Acceptance criteria tests +# tests/benchmarks/ — Performance benchmarks (excluded by default) +# tests/test_*.py — Root-level tests (mixed unit/integration) +# +# Quick reference: +# uv run pytest tests/unit tests/test_console.py -x # CI-equivalent fast run +# uv run pytest # Full suite +# uv run pytest -m benchmark # Benchmarks only diff --git a/uv.lock b/uv.lock index 28e5007a..5ad4ef36 100644 --- a/uv.lock +++ b/uv.lock @@ -207,6 +207,7 @@ dev = [ { name = "mypy" }, { name = "pytest" }, { name = "pytest-cov" }, + { name = "pytest-xdist" }, ] [package.metadata] @@ -222,6 +223,7 @@ requires-dist = [ { name = "pyinstaller", marker = "extra == 'build'", specifier = ">=6.0.0" }, { name = "pytest", marker = "extra == 
'dev'", specifier = ">=7.0.0" }, { name = "pytest-cov", marker = "extra == 'dev'", specifier = ">=4.0.0" }, + { name = "pytest-xdist", marker = "extra == 'dev'", specifier = ">=3.0.0" }, { name = "python-frontmatter", specifier = ">=1.0.0" }, { name = "pyyaml", specifier = ">=6.0.0" }, { name = "requests", specifier = ">=2.28.0" }, @@ -548,6 +550,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/36/f4/c6e662dade71f56cd2f3735141b265c3c79293c109549c1e6933b0651ffc/exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10", size = 16674 }, ] +[[package]] +name = "execnet" +version = "2.1.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/bf/89/780e11f9588d9e7128a3f87788354c7946a9cbb1401ad38a48c4db9a4f07/execnet-2.1.2.tar.gz", hash = "sha256:63d83bfdd9a23e35b9c6a3261412324f964c2ec8dcd8d3c6916ee9373e0befcd", size = 166622 } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ab/84/02fc1827e8cdded4aa65baef11296a9bbe595c474f0d6d758af082d849fd/execnet-2.1.2-py3-none-any.whl", hash = "sha256:67fba928dd5a544b783f6056f449e5e3931a5c378b128bc18501f7ea79e296ec", size = 40708 }, +] + [[package]] name = "frozenlist" version = "1.7.0" @@ -1404,6 +1415,19 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424 }, ] +[[package]] +name = "pytest-xdist" +version = "3.8.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "execnet" }, + { name = "pytest" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/78/b4/439b179d1ff526791eb921115fca8e44e596a13efeda518b9d845a619450/pytest_xdist-3.8.0.tar.gz", hash = "sha256:7e578125ec9bc6050861aa93f2d59f1d8d085595d6551c2c90b6f4fad8d3a9f1", size = 88069 } 
+wheels = [ + { url = "https://files.pythonhosted.org/packages/ca/31/d4e37e9e550c2b92a9cbc2e4d0b7420a27224968580b5a447f420847c975/pytest_xdist-3.8.0-py3-none-any.whl", hash = "sha256:202ca578cfeb7370784a8c33d6d05bc6e13b4f25b5053c30a152269fd10f0b88", size = 46396 }, +] + [[package]] name = "python-dateutil" version = "2.9.0.post0" From 5279a9c01819b9a787b96c8208b752f22d189277 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 01:28:13 +0100 Subject: [PATCH 39/40] docs: update authentication page with flow diagram and accuracy fixes MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Add mermaid auth flow diagram (resolution chain visualization) - Fix per-org scope: 'any host' → 'GitHub-like hosts — not ADO' - Clarify rate limits: validation=unauth-first, downloads=auth-first - Add configuration variables table (APM_GIT_CREDENTIAL_TIMEOUT, GITHUB_HOST) - Add package source behavior matrix (per-source auth + fallback) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../docs/getting-started/authentication.md | 46 +++++++++++++++++-- 1 file changed, 43 insertions(+), 3 deletions(-) diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index f54c5236..8b65eef1 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -10,7 +10,7 @@ APM works without tokens for public packages on github.com. Authentication is ne APM resolves tokens per `(host, org)` pair. For each dependency, it walks a resolution chain until it finds a token: -1. **Per-org env var** — `GITHUB_APM_PAT_{ORG}` (checked for any host) +1. **Per-org env var** — `GITHUB_APM_PAT_{ORG}` (GitHub-like hosts — not ADO) 2. **Global env vars** — `GITHUB_APM_PAT` → `GITHUB_TOKEN` → `GH_TOKEN` (any host) 3. 
**Git credential helper** — `git credential fill` (any host except ADO) @@ -20,11 +20,34 @@ Results are cached per-process — the same `(host, org)` pair is resolved once. All token-bearing requests use HTTPS. Tokens are never sent over unencrypted connections. +```mermaid +flowchart TD + A[Dependency Reference] --> B{Per-org env var?} + B -->|GITHUB_APM_PAT_ORG| C[Use per-org token] + B -->|Not set| D{Global env var?} + D -->|GITHUB_APM_PAT / GITHUB_TOKEN / GH_TOKEN| E[Use global token] + D -->|Not set| F{Git credential fill?} + F -->|Found| G[Use credential] + F -->|Not found| H[No token] + + E --> I{try_with_fallback} + C --> I + G --> I + H --> I + + I -->|Token works| J[Success] + I -->|Token fails| K{Credential-fill fallback} + K -->|Found credential| J + K -->|No credential| L{Host has public repos?} + L -->|Yes| M[Try unauthenticated] + L -->|No| N[Auth error with actionable message] +``` + ## Token lookup | Priority | Variable | Scope | Notes | |----------|----------|-------|-------| -| 1 | `GITHUB_APM_PAT_{ORG}` | Per-org, any host | Org name uppercased, hyphens → underscores | +| 1 | `GITHUB_APM_PAT_{ORG}` | Per-org, GitHub-like hosts | Org name uppercased, hyphens → underscores | | 2 | `GITHUB_APM_PAT` | Any host | Falls back to git credential helpers if rejected | | 3 | `GITHUB_TOKEN` | Any host | Shared with GitHub Actions | | 4 | `GH_TOKEN` | Any host | Set by `gh auth login` | @@ -36,6 +59,13 @@ For JFrog Artifactory, use `ARTIFACTORY_APM_TOKEN`. For runtime features (`GITHUB_COPILOT_PAT`), see [Agent Workflows](../../guides/agent-workflows/). 
+### Configuration variables + +| Variable | Purpose | +|----------|---------| +| `APM_GIT_CREDENTIAL_TIMEOUT` | Timeout in seconds for `git credential fill` (default: 60, max: 180) | +| `GITHUB_HOST` | Default host for bare package names (e.g., GHES hostname) | + ## Multi-org setup When your manifest pulls from multiple GitHub organizations, use per-org env vars: @@ -133,11 +163,21 @@ apm install mycompany.visualstudio.com/org/project/repo # legacy URL Create the PAT at `https://dev.azure.com/{org}/_usersSettings/tokens` with **Code (Read)** permission. +## Package source behavior + +| Package source | Host | Auth behavior | Fallback | +|---|---|---|---| +| `org/repo` (bare) | `default_host()` | Global env vars → credential fill | Unauth for public repos | +| `github.com/org/repo` | github.com | Global env vars → credential fill | Unauth for public repos | +| `contoso.ghe.com/org/repo` | *.ghe.com | Global env vars → credential fill | Auth-only (no public repos) | +| GHES via `GITHUB_HOST` | ghes.company.com | Global env vars → credential fill | Unauth for public repos | +| `dev.azure.com/org/proj/repo` | ADO | `ADO_APM_PAT` only | Auth-only | + ## Troubleshooting ### Rate limits on github.com -APM tries unauthenticated access first for public repos to conserve rate limits. If you hit limits, set any token: +APM tries unauthenticated access first for public repos to conserve rate limits during validation (e.g., checking if a repo exists). For downloads, authenticated requests are preferred — with unauthenticated fallback for public repos on github.com. If you hit rate limits, set any token: ```bash export GITHUB_TOKEN=ghp_any_valid_token From 997dbf40709482bd2a0dd8332dc76553964ed4d6 Mon Sep 17 00:00:00 2001 From: danielmeppiel Date: Sun, 22 Mar 2026 01:32:39 +0100 Subject: [PATCH 40/40] docs: move auth diagram from top to troubleshooting section Users doing quick setup see the simple 3-step resolution list. 
Users debugging auth failures find the flowchart where they need it. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .../docs/getting-started/authentication.md | 48 ++++++++++--------- 1 file changed, 25 insertions(+), 23 deletions(-) diff --git a/docs/src/content/docs/getting-started/authentication.md b/docs/src/content/docs/getting-started/authentication.md index 8b65eef1..ec84f511 100644 --- a/docs/src/content/docs/getting-started/authentication.md +++ b/docs/src/content/docs/getting-started/authentication.md @@ -20,29 +20,6 @@ Results are cached per-process — the same `(host, org)` pair is resolved once. All token-bearing requests use HTTPS. Tokens are never sent over unencrypted connections. -```mermaid -flowchart TD - A[Dependency Reference] --> B{Per-org env var?} - B -->|GITHUB_APM_PAT_ORG| C[Use per-org token] - B -->|Not set| D{Global env var?} - D -->|GITHUB_APM_PAT / GITHUB_TOKEN / GH_TOKEN| E[Use global token] - D -->|Not set| F{Git credential fill?} - F -->|Found| G[Use credential] - F -->|Not found| H[No token] - - E --> I{try_with_fallback} - C --> I - G --> I - H --> I - - I -->|Token works| J[Success] - I -->|Token fails| K{Credential-fill fallback} - K -->|Found credential| J - K -->|No credential| L{Host has public repos?} - L -->|Yes| M[Try unauthenticated] - L -->|No| N[Auth error with actionable message] -``` - ## Token lookup | Priority | Variable | Scope | Notes | @@ -205,6 +182,31 @@ apm install --verbose your-org/package The output shows which env var matched (or `none`), the detected token type (`fine-grained`, `classic`, `oauth`, `github-app`), and the host classification (`github`, `ghe_cloud`, `ghes`, `ado`, `generic`). 
+The full resolution and fallback flow: + +```mermaid +flowchart TD + A[Dependency Reference] --> B{Per-org env var?} + B -->|GITHUB_APM_PAT_ORG| C[Use per-org token] + B -->|Not set| D{Global env var?} + D -->|GITHUB_APM_PAT / GITHUB_TOKEN / GH_TOKEN| E[Use global token] + D -->|Not set| F{Git credential fill?} + F -->|Found| G[Use credential] + F -->|Not found| H[No token] + + E --> I{try_with_fallback} + C --> I + G --> I + H --> I + + I -->|Token works| J[Success] + I -->|Token fails| K{Credential-fill fallback} + K -->|Found credential| J + K -->|No credential| L{Host has public repos?} + L -->|Yes| M[Try unauthenticated] + L -->|No| N[Auth error with actionable message] +``` + ### Git credential helper not found APM calls `git credential fill` as a fallback (60s timeout). If your credential helper needs more time (e.g., Windows account picker), set `APM_GIT_CREDENTIAL_TIMEOUT` (seconds, max 180):