Feat/claude code marketplace plugin compat v2 #2964
Closed
Zetkolink wants to merge 52 commits into tailcallhq:main from
Conversation
- Full gRPC server implementing ForgeService proto (7 MVP + 5 additional methods)
- Ollama nomic-embed-text for local embeddings (768-dim)
- Qdrant for vector storage and ANN search
- SQLite for metadata (api_keys, workspaces, file_refs)
- Line-aware file chunking with configurable sizes
- Bearer token authentication
- Docker Compose for Qdrant + Ollama
- SHA-256 hash compatibility with Forge CLI incremental sync
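The line-aware chunking mentioned above can be sketched roughly as follows. This is a minimal illustration, not the server's actual implementation: the `Chunk` type, `chunk_lines` name, and fixed chunk size are assumptions for the example.

```rust
/// A chunk of a file carrying its line range, so a search hit can be
/// mapped back to a source location.
#[derive(Debug, PartialEq)]
struct Chunk {
    start_line: usize, // 1-based, inclusive
    end_line: usize,   // 1-based, inclusive
    text: String,
}

/// Split file content into chunks of at most `max_lines` lines,
/// never breaking a line in half. Assumes `max_lines >= 1`.
fn chunk_lines(content: &str, max_lines: usize) -> Vec<Chunk> {
    assert!(max_lines >= 1);
    let lines: Vec<&str> = content.lines().collect();
    lines
        .chunks(max_lines)
        .enumerate()
        .map(|(i, window)| Chunk {
            start_line: i * max_lines + 1,
            end_line: i * max_lines + window.len(),
            text: window.join("\n"),
        })
        .collect()
}

fn main() {
    let chunks = chunk_lines("a\nb\nc\nd\ne", 2);
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[2].start_line, 5);
    println!("{} chunks", chunks.len());
}
```

Keeping chunk boundaries on line boundaries is what lets search results point at exact line ranges in the original file.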
- Remove npm_release and homebrew_release jobs (no corresponding repos)
- Remove POSTHOG_API_SECRET and OPENROUTER_API_KEY (not needed)
- Add install script pointing to Zetkolink/forgecode releases

Install: curl -fsSL https://raw.githubusercontent.com/Zetkolink/forgecode/main/scripts/install.sh | sh
- Check for new versions at github.com/Zetkolink/forgecode/releases
- Download install script from our repo instead of forgecode.dev
…/forgecode
- CI: homebrew/npm repos → Zetkolink/*
- HTTP Referer header → github.com/Zetkolink/forgecode
- JSON schema URL → raw.githubusercontent.com
- Billing link → self-hosted message
- Docs links → GitHub repo anchors
- Install commands → our install.sh
- Nix homepage → GitHub repo
- README badges → Zetkolink/forgecode
- noreply@forgecode.dev → noreply@users.noreply.github.com (git_app.rs, AGENTS.md, SKILL.md)
- benchmarks/evals/*.yml: all git clone URLs → Zetkolink/forgecode
Bugs fixed:
- #1: Unified timestamp format — all tables use strftime('%s','now') (unix seconds)
- #2: delete_file_refs and delete_workspace wrapped in transactions
- tailcallhq#19: Removed chrono_now(), use SQLite DEFAULT instead

Security:
- tailcallhq#15: All workspace methods verify ownership (authenticate_and_verify_owner)
- Added db.verify_workspace_owner() method

Code quality:
- tailcallhq#7: ForgeServiceImpl fields private, added new() constructor
- tailcallhq#8: IntoStatus trait eliminates 15+ duplicate .map_err() calls
- tailcallhq#9: Removed unused lock_conn helper, kept consistent pattern
- tailcallhq#11: Documented ends_with filter limitation in qdrant.rs
- tailcallhq#10: reqwest Client with 120s timeout + 10s connect_timeout
- tailcallhq#18: Added extract_workspace_id() helper
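The IntoStatus pattern from tailcallhq#8 can be sketched like this. `Status` here is a simplified stand-in for tonic::Status, and the blanket impl is an assumption about the shape of the trait; the point is that handlers write `.map_err(IntoStatus::into_status)` once instead of repeating a conversion closure at every call site.

```rust
/// Stand-in for tonic::Status in this sketch.
#[derive(Debug, PartialEq)]
struct Status {
    code: u32,
    message: String,
}

impl Status {
    fn internal(msg: impl Into<String>) -> Self {
        // 13 is the gRPC INTERNAL code.
        Status { code: 13, message: msg.into() }
    }
}

/// One conversion trait instead of 15+ duplicated .map_err() closures.
trait IntoStatus {
    fn into_status(self) -> Status;
}

// Blanket impl: any displayable error becomes an INTERNAL status.
impl<E: std::fmt::Display> IntoStatus for E {
    fn into_status(self) -> Status {
        Status::internal(self.to_string())
    }
}

fn main() {
    let result: Result<(), std::num::ParseIntError> = "x".parse::<i32>().map(|_| ());
    let status = result.map_err(IntoStatus::into_status).unwrap_err();
    assert_eq!(status.code, 13);
    println!("{}", status.message);
}
```

A real implementation would likely map different error types to different gRPC codes rather than folding everything into INTERNAL.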
Performance:
- #5: embed_batch sub-batching (max 20 texts per Ollama request)
- tailcallhq#6: delete_by_file_paths batching (max 100 paths per Qdrant filter)

Security:
- tailcallhq#13: Rate limiting on CreateApiKey (max 1 key/second)
- tailcallhq#14: API keys stored as SHA-256 hashes in SQLite

Quality:
- tailcallhq#17: Pinned all Cargo.toml dependency versions
- tailcallhq#16: Added compute_hash compatibility tests (2 new tests)
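The sub-batching in #5 is just slicing the input so no single request exceeds the cap. A minimal sketch (the `sub_batches` name is illustrative, not the actual API):

```rust
/// Split a list of texts into request-sized batches so no single
/// embedding call exceeds `max_per_request` items (20 in the commit above).
/// Assumes `max_per_request >= 1` (slice::chunks panics on 0).
fn sub_batches<T: Clone>(items: &[T], max_per_request: usize) -> Vec<Vec<T>> {
    items.chunks(max_per_request).map(|c| c.to_vec()).collect()
}

fn main() {
    let texts: Vec<String> = (0..45).map(|i| format!("text {i}")).collect();
    let batches = sub_batches(&texts, 20);
    // 45 texts -> batches of 20, 20, 5
    assert_eq!(batches.len(), 3);
    assert_eq!(batches[2].len(), 5);
}
```

The same shape applies to tailcallhq#6, with paths instead of texts and a cap of 100 per Qdrant filter.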
…e version
- qdrant.rs: u32::try_from() instead of unchecked 'as u32' cast
- server.rs: search limit clamped to min(limit, top_k).max(1)
- Dockerfile: rust:1.79 → rust:1.85
Covers architecture, prerequisites, quick start, configuration, connection setup, how indexing and search work, and Docker deployment.
- Remove Co-Authored-By rule from AGENTS.md
- Remove GIT_COMMITTER_NAME/EMAIL override from git_app.rs
- Remove Co-Authored-By from resolve-conflicts SKILL.md template
- New docker_release job builds server image on each release
- Publishes to ghcr.io/zetkolink/forgecode/workspace-server:{version}
- Also tags as :latest
- Uses GitHub Actions cache for faster builds
- docker-compose.yml now pulls from GHCR by default
- Main docker-compose.yml: full stack (server + qdrant + ollama), restart policy
- docker-compose.external-ollama.yml: override for external Ollama instance
- Usage comments at the top of each file
- server/scripts/start.sh: one-command setup & launch (Qdrant + Server)
- plans/: README workspace server section plan
github.repository preserves original case of the GitHub username (Zetkolink), but Docker tags require lowercase. Use bash parameter expansion to lowercase GITHUB_REPOSITORY before building image tags.
The alias 'bug' was listed in both 'type: bug' and 'type: fix' labels, causing github-label-sync to fail with 404 when it tried to rename the same label twice.
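The failure mode above (one alias claimed by two labels) is straightforward to detect up front. A hypothetical pre-sync check, sketched in Rust rather than the label-sync tooling itself:

```rust
use std::collections::HashMap;

/// Find aliases that appear under more than one label — the situation that
/// made github-label-sync try to rename the same label twice and 404.
fn duplicate_aliases(labels: &[(&str, Vec<&str>)]) -> Vec<String> {
    let mut seen: HashMap<&str, &str> = HashMap::new();
    let mut dups = Vec::new();
    for (label, aliases) in labels.iter() {
        for alias in aliases.iter() {
            // insert returns the previous owner of the alias, if any
            if let Some(prev) = seen.insert(*alias, *label) {
                dups.push(format!("'{}' listed under both '{}' and '{}'", alias, prev, label));
            }
        }
    }
    dups
}

fn main() {
    let labels = vec![
        ("type: bug", vec!["bug"]),
        ("type: fix", vec!["bug", "fix"]),
    ];
    let dups = duplicate_aliases(&labels);
    assert_eq!(dups.len(), 1);
    println!("{}", dups[0]);
}
```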
Dependencies (icu_properties_data, qdrant-client) require newer Rust. Align Dockerfile with rust-toolchain.toml (1.92).
When sync fails for specific files, the error message now lists each failed file with a short reason instead of just a count. Also switched docker-compose to use external Ollama at 192.168.31.129 and removed the Ollama container.
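The per-file error reporting described above amounts to formatting a list of (path, reason) pairs instead of a count. A sketch with assumed names (`format_sync_failures` is illustrative):

```rust
/// Render sync failures as one line per file with a short reason,
/// rather than an opaque count.
fn format_sync_failures(failures: &[(&str, &str)]) -> String {
    if failures.is_empty() {
        return "sync completed".to_string();
    }
    let mut out = format!("{} file(s) failed to sync:", failures.len());
    for (path, reason) in failures {
        out.push_str(&format!("\n  {path}: {reason}"));
    }
    out
}

fn main() {
    let msg = format_sync_failures(&[
        ("src/a.rs", "embedding timeout"),
        ("src/b.rs", "file too large"),
    ]);
    assert!(msg.contains("src/a.rs: embedding timeout"));
    println!("{msg}");
}
```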
- Add release_docker_job to forge_ci (builds workspace-server image with lowercase repo tags)
- Remove npm_release and homebrew_release from release workflow (not needed for fork)
- Remove POSTHOG_API_SECRET from build job
- Remove OPENROUTER_API_KEY from CI workflow
- Add packages:write permission for Docker push
- All workflow YAML files are now fully auto-generated by forge_ci tests
arduino/setup-protoc@v3 uses Node.js 20 which is deprecated on GitHub Actions (forced to Node.js 24 from June 2026). No v4 exists. Replace with direct apt-get install protobuf-compiler.
autofix.ci app is not installed on the fork. Replace with direct git commit and push of lint/fmt fixes.
Release build for aarch64-apple-darwin failed because apt-get doesn't exist on macOS runners. Now detects the OS and uses the appropriate package manager.
…s.json Upstream commit 233394c accidentally deleted the 'command' field from the config-model entry. This caused JSON parse failure, breaking 'list command' and making all shell-plugin commands (including :muse, :sage, :agent) return 'Command not found'.
…ners Windows runners use PowerShell by default, which can't parse bash syntax. Add explicit shell:bash and choco install fallback for Windows.
The streaming writer restarted the spinner after every newline, but its Drop impl never stopped it. This left indicatif's ProgressBar running in the background, and when it was eventually cleaned up, finish_and_clear() would erase content lines instead of the spinner line.

Changes:
- Add spinner.stop() to StreamDirectWriter::Drop
- Remove resume_spinner() call from StreamDirectWriter::write() — spinner lifecycle is managed by the UI layer (ToolCallEnd handler), not the low-level writer
- Add is_active() to SpinnerManager for observability
- Add 3 tests: spinner inactive after finish, spinner inactive after drop, content preserved after finish
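The shape of the fix can be modeled in miniature: the spinner is shared state, and the writer's Drop impl now stops it so nothing outlives the stream. The types below are simplified stand-ins for the real SpinnerManager and StreamDirectWriter (which wrap indicatif's ProgressBar), not the actual code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// Simplified stand-in: the spinner is just a shared active flag here.
#[derive(Clone)]
struct SpinnerManager(Arc<AtomicBool>);

impl SpinnerManager {
    fn new() -> Self { SpinnerManager(Arc::new(AtomicBool::new(false))) }
    fn start(&self) { self.0.store(true, Ordering::SeqCst); }
    fn stop(&self) { self.0.store(false, Ordering::SeqCst); }
    fn is_active(&self) -> bool { self.0.load(Ordering::SeqCst) }
}

struct StreamDirectWriter {
    spinner: SpinnerManager,
}

impl Drop for StreamDirectWriter {
    fn drop(&mut self) {
        // The bug: this call was missing, so the spinner kept running after
        // the writer went away, and finish_and_clear() later erased content.
        self.spinner.stop();
    }
}

fn main() {
    let spinner = SpinnerManager::new();
    spinner.start();
    {
        let _writer = StreamDirectWriter { spinner: spinner.clone() };
    } // writer dropped here
    assert!(!spinner.is_active());
}
```

This mirrors the new "spinner inactive after drop" test listed above.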
…ext entries

Images are now stored as part of TextMessage (via images: Vec<Image> field) instead of as standalone ContextMessage::Image entries. This means images are automatically removed when their parent message is compacted/evicted, fixing the bug where images from previous turns persisted in context forever and confused the LLM.

Changes:
- Add images field to TextMessage with add_image(), has_images() helpers
- Update all 5 providers to serialize TextMessage.images as multimodal content blocks (OpenAI, Anthropic, Google, Bedrock, OpenAI Responses)
- ImageHandling transformer attaches extracted images to user messages
- TransformToolCalls preserves images when converting tool results
- Compaction summary notes image attachments as [N image(s) attached]
- Backward compat: merge_standalone_images() migrates old format
- 9 new tests covering scoping, migration, compaction, builder API
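The scoping idea is simple to show in miniature: when images live on the message, evicting the message takes them along. The sketch below uses the field and helper names from the commit but a heavily simplified Image type; it is an illustration, not the real forge_domain code.

```rust
#[derive(Clone, Debug)]
struct Image {
    url: String,
}

#[derive(Default, Debug)]
struct TextMessage {
    content: String,
    images: Vec<Image>, // scoped to this message, not the whole context
}

impl TextMessage {
    fn new(content: impl Into<String>) -> Self {
        TextMessage { content: content.into(), images: Vec::new() }
    }
    fn add_image(mut self, image: Image) -> Self {
        self.images.push(image);
        self
    }
    fn has_images(&self) -> bool {
        !self.images.is_empty()
    }
}

fn main() {
    let mut context = vec![
        TextMessage::new("look at this").add_image(Image { url: "data:image/png;base64,...".into() }),
        TextMessage::new("later turn"),
    ];
    // Compacting/evicting the first message removes its image automatically;
    // no standalone image entry is left behind to pollute the context.
    context.remove(0);
    assert!(context.iter().all(|m| !m.has_images()));
}
```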
- Replace 16+ TextMessage struct literals with TextMessage::new() + builder
- Use struct update syntax in TransformToolCalls (clone + nullify vs 10-field copy)
- Add #[deprecated] on ContextMessage::Image with #[allow(deprecated)] at all usage sites
- Fix critical bug: add images field to TextMessageRecord for persistence
- Call merge_standalone_images() when loading conversations from storage
- Eliminate image.clone() in attach_image_to_last_user_message (use move)
- Eliminate image.clone() in merge_standalone_images (take ownership first)
- Add 3 new tests: round-trip serialization, add_base64_url, TextMessageRecord
- Replace last struct literal TextMessage in conversation_repo.rs test with TextMessage::new() + builder pattern (strip_option setters)
- Eliminate image.clone() in transform_tool_calls.rs — use find_map() + move instead of .any() + clone
- Add doc comment on OpenAI Responses provider explaining that images on Assistant messages are intentionally not serialized
Add #[allow(dead_code)] with doc comment to SharedSpinner::is_active() explaining why it's kept public despite being test-only: consistency with SpinnerManager::is_active() and future debugging use.
fix: stop spinner on StreamDirectWriter drop to prevent content erasure
feat: scope images to parent messages to prevent context pollution
Implements Phase 0 of the Claude Code plugins integration plan. Migrates skill discovery from static system-prompt injection to a per-turn lifecycle hook that mirrors Claude Code's system-reminder attachment pattern.

Changes:
- Add MessagePhase::SystemReminder variant for tagging injected user-role reminder messages
- Introduce ContextMessage::system_reminder() helper
- Create SkillListingHandler lifecycle hook (on_request) that builds a budget-aware skill catalog and injects it as a system-reminder on the first turn, plus deltas on subsequent turns
- Create SkillCacheInvalidator lifecycle hook (on_toolcall_end) that invalidates the cached skill list when skill files are created, patched, or removed mid-session
- Add SkillFetchService::invalidate_cache() trait method
- Replace OnceCell with RwLock in ForgeSkillFetch to enable cache invalidation
- Add context sync in orch after the request hook so mutations performed by hooks in &mut Conversation are visible to the current LLM turn
- Port formatCommandsWithinBudget from Claude Code with 1% context budget and per-agent delta caching
- Remove static skill-instructions partial from forge.md so every agent (including Sage and Muse) now receives skill listings through the unified dynamic channel
- Stop loading skills in system_prompt.rs to avoid double-load
- Migrate DoomLoopDetector and PendingTodosHandler to use ContextMessage::system_reminder() helper
- Update skill_fetch tool description to explicitly reference the system-reminder delivery mechanism

Fixes two pre-existing bugs:
1. Sage and Muse agents were blind to skills (forge.md was the only template including the partial)
2. Skills created mid-session via the create-skill workflow were not discoverable until the session was restarted

Tests:
- 36 new unit tests covering budget formatter, delta caching, phase tagging, path matcher, and cache invalidation
- All 2253 workspace tests still passing
- Snapshots updated for skill_fetch description change
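The budget-aware catalog idea (in the spirit of the ported formatCommandsWithinBudget) can be sketched as follows. The function name, signature, and the omission notice are assumptions for illustration; the real formatter also handles per-agent delta caching.

```rust
/// Include skill entries until a character budget (e.g. 1% of the context
/// window) is exhausted, then note how many were omitted.
fn format_skills_within_budget(skills: &[(&str, &str)], budget_chars: usize) -> String {
    let mut out = String::new();
    let mut included = 0;
    for (name, description) in skills {
        let line = format!("- {name}: {description}\n");
        if out.len() + line.len() > budget_chars {
            break; // stop before overflowing the budget
        }
        out.push_str(&line);
        included += 1;
    }
    if included < skills.len() {
        out.push_str(&format!("({} more skills omitted for budget)\n", skills.len() - included));
    }
    out
}

fn main() {
    let skills = [
        ("deploy", "release helper"),
        ("review", "PR review"),
        ("bench", "run benchmarks"),
    ];
    let full = format_skills_within_budget(&skills, 10_000);
    assert!(full.contains("bench"));
    let tight = format_skills_within_budget(&skills, 25);
    assert!(tight.contains("omitted"));
    println!("{full}");
}
```

In the real hook the formatted catalog is wrapped in a system-reminder message and injected on the first turn, with only deltas on later turns.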
…tion
Adds four end-to-end tests in `orch_spec` covering Phase 0 behavior at the
full orchestration-loop level:
- `test_skill_listing_reminder_is_injected_for_forge_agent`: default agent
receives a single <system_reminder> on the first turn containing the
skill name and a pointer to skill_fetch.
- `test_skill_listing_reminder_is_injected_for_sage_agent`: regression
test for the original bug — before Phase 0 only `forge.md` statically
rendered the skills partial, so Sage/Muse were blind to available
skills. Now all agents receive the catalog via the lifecycle hook.
- `test_skill_listing_reminder_noop_when_no_skills_available`: no
reminder is injected when the mock service returns an empty list,
ensuring the handler is a true no-op on fresh installs with no
plugins.
- `test_skill_listing_reminder_delta_across_two_turns`: simulates a new
skill being created mid-session (as the `create-skill` workflow does)
and verifies the second turn surfaces it to the LLM.
Test infrastructure changes:
- `MockSkillList` wrapper (shared `Arc<Mutex<Vec<Skill>>>`) added to
`orch_setup` so individual tests can mutate the skill list at runtime
to exercise discovery scenarios.
- `TestContext::mock_skills` field wired through to the `Runner`'s
`SkillFetchService` impl; previously `list_skills()` was hardcoded to
return an empty vec.
- `Runner::run()` now composes `DoomLoopDetector.and(SkillListingHandler)`
on the `on_request` slot (via `EventHandleExt::and`), mirroring the
production wire-up in `ForgeApp::chat`.
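The `MockSkillList` wrapper above is essentially a shared, mutable list behind an `Arc<Mutex<…>>`. A self-contained sketch (with `Skill` simplified to a name, and `list_skills` standing in for the `SkillFetchService` method it backs):

```rust
use std::sync::{Arc, Mutex};

/// Tests hold a clone of the Arc and mutate the list at runtime to
/// simulate skills appearing mid-session.
#[derive(Clone, Default)]
struct MockSkillList(Arc<Mutex<Vec<String>>>);

impl MockSkillList {
    fn push(&self, skill: impl Into<String>) {
        self.0.lock().unwrap().push(skill.into());
    }
    /// What the Runner's SkillFetchService impl would return.
    fn list_skills(&self) -> Vec<String> {
        self.0.lock().unwrap().clone()
    }
}

fn main() {
    let skills = MockSkillList::default();
    assert!(skills.list_skills().is_empty()); // turn 1: no skills yet
    skills.push("create-skill");              // created mid-session
    assert_eq!(skills.list_skills().len(), 1); // turn 2 surfaces the delta
}
```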
Also picks up formatting-only fixes from `cargo fmt --all` on
`hooks/skill_listing.rs` (comment wrapping) and a trailing-whitespace
fix in `forge_main/src/info.rs` (unrelated doc comment).
All 2322 workspace tests pass, `cargo clippy --all-targets -- -D warnings`
is clean.
Phase 0 of the Claude Code plugins integration — see
plans/2026-04-09-claude-code-plugins-v4/01-phase-0-skill-listing.md.
- Mark SystemContext.skills as #[deprecated] to signal its retired
status to any readers; use ..Default::default() in system_prompt.rs
to avoid deprecation warnings at the only real construction site.
- Add Handlebars deprecation comment ({{!-- --}}) to the legacy
forge-partial-skill-instructions.md template. The notice is not
rendered into the system prompt; it only warns maintainers and
users of custom agent templates that still include this partial.
The partial is now a no-op because SystemContext.skills is always
empty in the new delivery model.
- Trim hooks::mod re-exports from skill_listing to only the two
items actually used by app.rs (SkillListingHandler and
SkillCacheInvalidator), removing the blanket #[allow(unused_imports)]
attribute. Internal helpers (SkillListing, build_skill_reminder,
format_skills_within_budget, DEFAULT_*) remain crate-reachable via
their module path but are no longer part of the hooks module
surface.
- Narrow SkillListing::new to #[cfg(test)] since it is only used by
unit tests; production code uses the From<&Skill> impl.
All 36 skill_listing unit tests, 6 skill service tests, and 27
orch_spec integration tests continue to pass.
…minder feat(skills): deliver skill listing via <system_reminder> attachment
…ructure (#4)

* feat(plugins): Phase 1 plugin manifest discovery and loader infrastructure

Phase 1 of the Claude Code plugins integration plan. Adds the data shapes, filesystem discovery, and service plumbing needed to surface plugin-contributed components to downstream subsystems in later phases.

Core types (crates/forge_domain/src/plugin.rs):
- PluginManifest with optional name, version, description, author, homepage, repository, license, keywords, dependencies, hooks, commands/agents/skills paths, and mcpServers. Unknown fields are silently dropped to match Claude Code's permissive parser.
- PluginAuthor with untagged enum accepting either a bare string or a structured {name, email, url} object.
- PluginComponentPath supporting single-string or string-array forms.
- PluginHooksManifestField placeholder supporting path/inline/array variants (full hook execution lands in Phase 3).
- PluginHooksConfig opaque container for the hooks.json body.
- PluginSource enum with Global, Project, and Builtin variants.
- LoadedPlugin runtime DTO with resolved absolute paths, effective name, enabled state, MCP servers, and hooks config.
- PluginRepository trait defining load_plugins.
- 13 unit tests covering manifest parsing, author shorthand vs detailed, component path single/multiple, hooks field variants, unknown-fields tolerance, and malformed-JSON error propagation.

Environment paths (crates/forge_domain/src/env.rs):
- plugin_path: ~/forge/plugins/
- plugin_cwd_path: ./.forge/plugins/
- 3 tests verifying both paths and their independence.

Config integration (crates/forge_config/src/config.rs):
- PluginSetting struct with enabled flag.
- ForgeConfig.plugins: Option<BTreeMap<String, PluginSetting>> for per-plugin enable/disable overrides in ~/forge/.forge.toml.

Discovery implementation (crates/forge_repo/src/plugin.rs):
- ForgePluginRepository scanning global ~/forge/plugins/ and project-local ./.forge/plugins/ directories in parallel.
- Manifest probe order: .forge-plugin/plugin.json (Forge-native marker, preferred), .claude-plugin/plugin.json (1:1 Claude Code compat), plugin.json (bare legacy).
- Component auto-detection: commands/, agents/, skills/ subdirectories when the manifest omits explicit paths.
- MCP server merging: inline manifest.mcp_servers + sibling .mcp.json, inline winning on conflict.
- Project > Global precedence: project-local copies shadow global ones.
- Enable/disable overrides from .forge.toml applied post-discovery.
- Per-plugin errors logged via tracing::warn; top-level scan failures still propagate. 2 dedup tests included.

Service layer (crates/forge_app/src/services.rs):
- New PluginLoader trait with list_plugins + invalidate_cache methods, mirroring the SkillFetchService shape.
- Services::PluginLoader associated type with blanket impl pass-through so any type implementing Services automatically exposes plugin loading.

Memoised loader (crates/forge_services/src/tool_services/plugin_loader.rs):
- ForgePluginLoader<R: PluginRepository> caching results in RwLock<Option<Arc<Vec<LoadedPlugin>>>>.
- Double-checked locking fast path for cache hits, write-lock slow path for cold loads.
- invalidate_cache drops the cached snapshot so the next list_plugins call re-scans the filesystem. Used by upcoming :plugin reload.
- 4 unit tests covering cold read, cache hit, invalidation, and mid-session new-plugin surfacing.

Wiring (crates/forge_services/src/forge_services.rs, forge_repo):
- ForgeRepo now implements PluginRepository via ForgePluginRepository.
- ForgeServices gains the F: PluginRepository bound, constructs ForgePluginLoader::new(infra.clone()), and exposes it through type PluginLoader = ForgePluginLoader<F>.

Testing:
- 1687 tests across forge_domain, forge_config, forge_repo, forge_services, and forge_app pass on this branch.
- forge_domain::plugin: 13 unit tests
- forge_domain::env: 3 new plugin-path tests
- forge_repo::plugin: 2 dedup tests
- forge_services::tool_services::plugin_loader: 4 memoisation tests

Deferred to later phases (documented in plans/2026-04-09-claude-code-plugins-v4):
- Phase 2: loading skills/commands/agents from plugins into existing discovery pipelines
- Phase 3: hook execution runtime (PluginHooksConfig becomes typed)
- Phase 4: MCP server registration from plugins
- Phase 9: :plugin CLI commands

* [autofix.ci] apply automated fixes

* feat(plugins): Phase 2-10 implementation (partial) and plumbing

Implements the bulk of the Claude Code plugin compatibility plan (plans/2026-04-09-claude-code-plugins-v4/). See audit report in plans/2026-04-09-claude-code-plugins-v4-audit-remaining-work-v1.md for detailed ground-truth status.

Fully implemented:
- Phase 2: Skills/commands/agents unified loading with SkillSource, CommandSource, AgentSource provenance tracking. Extended Skill frontmatter fields (when_to_use, allowed_tools, disable_model_invocation, user_invocable). list_invocable_commands aggregation.
- Phase 3: Hook runtime infrastructure (9 submodules: matcher, env, shell, http, prompt, agent, config_loader, executor). PluginHookHandler dispatcher with 25 EventHandle impls. HookExecutorInfra and HookConfigLoaderService traits in Services.
- Phase 4: T1-Core 10 lifecycle events with fire sites in orch.rs and compaction.rs. SessionStart with transcript creation and initial_user_message injection. Full consume logic for PreToolUse (block/Deny/updated_input/additional_contexts), Stop (prevent_continuation reentry), and StopFailure via try/catch wrapper.
- Phase 10: Plugin MCP servers with {plugin}:{server} namespacing and FORGE_PLUGIN_ROOT env injection. 7 unit tests.
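The double-checked locking in the memoised loader described above can be sketched synchronously (the real ForgePluginLoader uses an async RwLock and returns LoadedPlugin values; the scan counter and String plugins here are purely for the demo):

```rust
use std::sync::{Arc, RwLock};

struct PluginLoader {
    cache: RwLock<Option<Arc<Vec<String>>>>,
    scans: RwLock<usize>, // counts "filesystem scans", demo only
}

impl PluginLoader {
    fn new() -> Self {
        PluginLoader { cache: RwLock::new(None), scans: RwLock::new(0) }
    }

    fn list_plugins(&self) -> Arc<Vec<String>> {
        // Fast path: read lock only.
        if let Some(cached) = self.cache.read().unwrap().as_ref() {
            return Arc::clone(cached);
        }
        // Slow path: re-check under the write lock, since another thread
        // may have populated the cache between the two lock acquisitions.
        let mut guard = self.cache.write().unwrap();
        if let Some(cached) = guard.as_ref() {
            return Arc::clone(cached);
        }
        *self.scans.write().unwrap() += 1;
        let loaded = Arc::new(vec!["example-plugin".to_string()]); // stands in for a scan
        *guard = Some(Arc::clone(&loaded));
        loaded
    }

    /// Drop the snapshot so the next list_plugins re-scans.
    fn invalidate_cache(&self) {
        *self.cache.write().unwrap() = None;
    }
}

fn main() {
    let loader = PluginLoader::new();
    loader.list_plugins();
    loader.list_plugins(); // cache hit, no second scan
    assert_eq!(*loader.scans.read().unwrap(), 1);
    loader.invalidate_cache();
    loader.list_plugins(); // re-scan after invalidation
    assert_eq!(*loader.scans.read().unwrap(), 2);
}
```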
Partially implemented (plumbing only — payloads + Hook slots + dispatcher impls but no fire sites or subsystems):
- Phase 6: Notification, Setup, ConfigChange, InstructionsLoaded
- Phase 7: Subagent, Permission, Cwd, File, Worktree events
- Phase 8: Elicitation, ElicitationResult hooks
- Phase 6C: ConfigWatcher skeleton with internal-write suppression

Almost done:
- Phase 9: Plugin CLI (list/enable/disable/info/reload). Missing install subcommand and trust prompt.

Not started (tracked in audit plan):
- Phase 5: Legacy LifecycleEvent migration and deprecation
- Phase 11: Integration tests, compat testing, benchmarks, docs

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* feat(plugins): Wave A — Phase 9.5 :plugin install subcommand

Adds a complete '/plugin install <path>' flow with trust prompt, manifest validation, recursive directory copy, and config registration.

Model changes (crates/forge_main/src/model.rs):
- Add PluginSubcommand::Install { path: PathBuf } variant
- Parse '/plugin install <path>' with support for multi-word paths and relative paths; path tokens after 'install' are joined with single spaces so paths with whitespace are reconstructable
- Reject '/plugin install' with missing path argument with a usage hint
- Update '/plugin' usage message to list 'install' among subcommands
- 5 new parser tests covering absolute, relative, multi-word paths, missing-arg error, and the updated unknown-subcommand hint

UI changes (crates/forge_main/src/ui.rs):
- Add on_plugin_install handler that:
  1. Canonicalizes and validates source path (exists, is directory)
  2. Locates manifest at .forge-plugin/plugin.json, .claude-plugin/plugin.json, or plugin.json (Claude Code fallback order)
  3. Parses manifest via PluginManifest::from_json
  4. Uses manifest name as install directory under env.plugin_path()
  5. Prompts for overwrite if target directory already exists
  6. Shows manifest summary (version, description, author, source) and trust warning before asking user to confirm install
  7. Copies directory recursively, excluding .git, node_modules, target, and other VCS/build artifacts
  8. Registers plugin in config as disabled via set_plugin_enabled
  9. Reloads plugin list and prints success message
- Add copy_dir_recursive and should_exclude_dir helpers
- Add format_install_location helper for printing install target
- Replace non-existent ForgeSelect::confirm with ForgeWidget::confirm (the ForgeSelect re-export was removed in a prior refactor; the README in forge_select was stale)

Phase 5 (legacy LifecycleEvent migration) was verified as already implemented in the prior Phase 2-10 commit (PendingTodosHandler migrated End->Stop, TracingHandler retained on legacy events as designed, CompactionHandler fires PreCompact/PostCompact internally, legacy variants marked #[doc(hidden)]).

Phase 9.7 (subcommand-aware autocomplete and dynamic plugin name completion after enable/disable/info) is deferred — tracked in plans/2026-04-09-claude-code-plugins-v4-audit-remaining-work-v1.md.

Tests: 280 forge_main, 640 forge_app, 718 forge_domain, 290 forge_services, 281 forge_repo — 0 failures, 0 regressions.

Refs: plans/2026-04-09-claude-code-plugins-v4/10-phase-9-plugin-cli.md

* [autofix.ci] apply automated fixes

* feat(plugins): Wave B — Phase 6A Notification + Phase 6B Setup

Wire up the Notification and Setup hook dispatch paths with production fire sites and CLI flags.

Phase 6A — Notification hook:
- Add ForgeNotificationService implementation in crates/forge_app/src/lifecycle_fires.rs. It sits in forge_app (not forge_services) because PluginHookHandler is a private module in forge_app::hooks with no public re-export — moving the impl here avoids widening forge_app's public surface.
- Expose notification_service() on the API trait (crates/forge_api/src/api.rs) returning Arc<dyn NotificationService>.
ForgeAPI::notification_service constructs a fresh ForgeNotificationService on each call (cheap, no state beyond Arc<Services>).
- Wire Arc<dyn NotificationService> as a field on the UI struct (crates/forge_main/src/ui.rs). Initialized in UI::init via api.notification_service().
- Fire NotificationKind::IdlePrompt at UI::prompt() — the single async chokepoint before Console::prompt blocks on Reedline::read_line. Every idle wait-for-input cycle emits the hook.
- Fire NotificationKind::AuthSuccess at finalize_provider_activation when a provider login succeeds. Runs both model-preselected and interactive branches (common terminal Ok(()) fire point).
- Terminal BEL emission is best-effort: io::stdout().is_terminal() + !TERM_PROGRAM=vscode guard.
- Notification hook dispatch uses the direct EventHandle<EventData<NotificationPayload>>::handle pattern (mirroring CompactionHandler). A scratch Conversation is built per-call; blocking_error is drained and discarded per the services.rs:538-540 doc (observability only).

Phase 6B — Setup hook + CLI flags:
- Add --init, --init-only, --maintenance top-level flags on the Cli struct (crates/forge_main/src/cli.rs). Updates is_interactive() to return false when init_only is set (so the REPL predicate stays correct).
- Add fire_setup_hook(trigger: SetupTrigger) async method to the API trait, implemented on ForgeAPI as a thin delegate to lifecycle_fires::fire_setup_hook.
- Fire site in UI::run_inner() after init_conversation, before the REPL loop. Routing:
  --init / --init-only → SetupTrigger::Init
  --maintenance → SetupTrigger::Maintenance
  (other) → no fire
- Per Claude Code semantics (hooksConfigManager.ts:175), Setup blocking errors are intentionally ignored. The fire site drains conversation.hook_result and discards the result. Hook dispatch failures are logged via tracing::warn but never propagate.
- --init-only triggers an early return from run_inner after the Setup fire, so the REPL is skipped entirely (CI / batch provisioning use case).
Tests:
- 5 new CLI parser tests in cli.rs (test_cli_parses_init_flag, _init_only_flag, _maintenance_flag, _default_has_no_setup_flags, _init_only_is_not_interactive).
- Pre-existing dispatcher tests remain green:
  test_dispatch_setup_matches_trigger_string ✅
  test_dispatch_notification_matches_notification_type ✅
- Workspace: 2627 passed; 0 failed; 16 ignored.

Known gaps (deferred to Wave G):
- ForgeNotificationService does not have unit tests covering the hook fire path — the orch_spec harness cannot be used because Notifications fire from outside the orchestrator. A probe-installed Hook with mock Services is needed.
- should_beep() env var detection is not unit-tested due to a test-parallel env var race. Verified via manual inspection.

Refs: plans/2026-04-09-claude-code-plugins-v4/07-phase-6-t2-infrastructure.md (lines 23-146, sub-phases 6A + 6B)

* [autofix.ci] apply automated fixes

* feat(plugins): Wave C — Phase 6C ConfigWatcher with notify-debouncer-full

Completes Phase 6C by wiring up a real filesystem watcher for Forge's configuration surface, then connecting it to the ConfigChange plugin hook via the existing lifecycle_fires dispatch path.

## Part 1 — notify-debouncer-full event loop

Expanded crates/forge_services/src/config_watcher.rs from the 280-line skeleton committed in the Phase 2-10 snapshot to a 730-line production implementation:
- Real debouncer install via notify_debouncer_full::new_debouncer with a 1-second debounce window (matches Claude Code's awaitWriteFinish.stabilityThreshold: 1000).
- 5-second internal-write suppression via the recent_internal_writes map: every write Forge itself performs is tagged via mark_internal_write first, and the debouncer callback skips any event whose timestamp falls within the suppression window.
- 1.7-second atomic-save grace period: on a Remove event the path is stashed in pending_unlinks and a short-lived std::thread waits the grace window.
If a matching Create arrives first the pair is collapsed into a single Modify-equivalent event; otherwise the delayed delete fires.
- 1.5-second per-path dispatch cooldown via the last_fired map to collapse the multi-event bursts notify-debouncer-full still emits for an atomic save on macOS FSEvents (Remove + Create + Modify + Modify).
- Path canonicalization via canonicalize_for_lookup to resolve the macOS /var -> /private/var symlink, so the internal-write and pending-unlink maps are keyed consistently with paths emitted by notify-debouncer-full itself. Falls back to the raw path when the target does not exist (delete / pre-create).
- ConfigWatcher::classify_path maps an absolute path back into a forge_domain::ConfigSource variant using Forge's directory layout.
- ConfigWatcher::new takes Vec<(PathBuf, RecursiveMode)> plus an Fn(ConfigChange) + Send + Sync + 'static callback. Unreadable or missing paths are logged at debug and skipped, so a non-existent plugin directory on first startup does not abort the whole watcher.
- Re-exports notify_debouncer_full::notify::RecursiveMode.
- 11 unit tests pass 3x in a row under --test-threads=1, including atomic save collapse, delete fire after grace, internal-write suppression, and cooldown collapse.

## Part 2 — wire into ForgeAPI + ConfigChange hook fire

Architectural correction from the spec: forge_services already depends on forge_app (Cargo.toml), so forge_app cannot import ConfigWatcher directly. The watcher handle lives in forge_api, which is the single crate that depends on both forge_app (for fire_config_change_hook) and forge_services (for ConfigWatcher itself).

New public API:
- crates/forge_app/src/lifecycle_fires.rs: fire_config_change_hook(services, source, file_path) async free function, re-exported from crates/forge_app/src/lib.rs alongside fire_setup_hook.
Mirrors the Wave B Setup hook dispatch pattern exactly — builds a scratch Conversation, constructs EventData<ConfigChangePayload> via with_context, wraps in LifecycleEvent::ConfigChange, dispatches via PluginHookHandler::new(services).handle, drains blocking_error, logs dispatch failures via tracing::warn but NEVER propagates.
- crates/forge_api/src/config_watcher_handle.rs: new 150-line module with a ConfigWatcherHandle wrapper type. Stores Arc<ConfigWatcher> internally, exposes mark_internal_write delegating to the inner watcher, and provides a ConfigWatcherHandle::spawn(services, watch_paths) constructor that:
  1. Captures tokio::runtime::Handle::try_current() (returns a no-op stub if no runtime — needed for test harnesses that construct ForgeAPI outside of tokio::main).
  2. Builds a sync callback closure that, per ConfigChange event, spawns an async task calling fire_config_change_hook on the captured runtime handle and services clone.
  3. Calls ConfigWatcher::new(watch_paths, callback) and wraps the result in Arc.
- crates/forge_api/src/lib.rs: re-exports ConfigWatcherHandle at the crate root.

ForgeAPI integration (crates/forge_api/src/forge_api.rs):
- New field _config_watcher: Option<ConfigWatcherHandle> on the ForgeAPI struct (underscored to signal it is kept alive purely for its Drop impl, which tears down the debouncer thread and all installed notify watchers).
- ForgeAPI::init builds watch_paths from the environment:
  (env.base_path, RecursiveMode::NonRecursive) // .forge.toml
  (env.plugin_path(), RecursiveMode::Recursive) // installed plugins
- Calls ConfigWatcherHandle::spawn(services.clone(), watch_paths).ok() so a spawn failure logs warn but still allows ForgeAPI to be constructed (the rest of Forge does not depend on the watcher being alive).

Internal-write suppression:
- The API trait (crates/forge_api/src/api.rs) gains mark_config_write(&self, path: &Path) with a default no-op implementation.
ForgeAPI impls it by delegating to self._config_watcher.as_ref().map(|w| w.mark_internal_write(path)).
- set_plugin_enabled (forge_api.rs) is wired: it resolves the config path via ConfigReader::config_path() and calls mark_config_write before fc.write()?, so toggling a plugin via /plugin enable|disable no longer round-trips through the ConfigChange hook system.

Second call site left as TODO:
- crates/forge_infra/src/env.rs:148 inside update_environment still writes to .forge.toml without going through mark_config_write. A proper fix would add a callback slot on ForgeInfra (OnceLock<Arc<dyn Fn(&Path)>>) settable from ForgeAPI::init after the watcher handle exists, but that restructure was out of scope for this iteration per the spec's fallback guidance. Marked with a TODO(wave-c-part-2-env-rs) comment.

Tests:
- cargo test --workspace -> 2630 passed; 0 failed; 16 ignored. (+3 vs Wave B baseline of 2627 — new ConfigWatcherHandle constructor tests and config path resolution tests.)
- cargo build --workspace clean (dead_code warnings in hook_runtime persist as expected until Waves D-F consume the runtime).

Known gaps deferred to Wave G:
- update_environment fc.write() call site not wired (see TODO).
- ForgeNotificationService and ConfigWatcherHandle hook-fire-path integration tests (need probe Hook + mock Services harness).
- No real end-to-end test of a ConfigChange firing into an actual plugin hook handler — tracked as Phase 11.1 fixture work.

Refs: plans/2026-04-09-claude-code-plugins-v4/07-phase-6-t2-infrastructure.md (sub-phase 6C, lines 148-287)

* [autofix.ci] apply automated fixes

* feat(plugins): Wave D Pass 1 — Phase 6D InstructionsLoaded minimal fire

Implements the session-start slice of Phase 6D: classify and load the 3-file AGENTS.md chain into typed LoadedInstructions, fire the InstructionsLoaded plugin hook once per file at session start, and preserve backwards compatibility with the existing system prompt builder, which still consumes Vec<String>.
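The internal-write suppression from the Wave C commit above boils down to tagging Forge's own writes before they happen and checking event timestamps against a window. A simplified sketch (names echo the commit; the real implementation also canonicalizes paths and lives inside the debouncer callback):

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::time::{Duration, Instant};

struct InternalWriteTracker {
    window: Duration, // 5 seconds in the commit above
    recent_internal_writes: HashMap<PathBuf, Instant>,
}

impl InternalWriteTracker {
    fn new(window: Duration) -> Self {
        InternalWriteTracker { window, recent_internal_writes: HashMap::new() }
    }

    /// Called BEFORE Forge writes a config file itself.
    fn mark_internal_write(&mut self, path: &Path) {
        self.recent_internal_writes.insert(path.to_path_buf(), Instant::now());
    }

    /// Is this filesystem event just our own write echoing back?
    fn should_suppress(&self, path: &Path) -> bool {
        self.recent_internal_writes
            .get(path)
            .is_some_and(|t| t.elapsed() < self.window)
    }
}

fn main() {
    let mut tracker = InternalWriteTracker::new(Duration::from_secs(5));
    let config = Path::new("/tmp/.forge.toml");
    tracker.mark_internal_write(config);
    assert!(tracker.should_suppress(config));                        // our own write
    assert!(!tracker.should_suppress(Path::new("/tmp/other.toml"))); // external edit
}
```

This is why set_plugin_enabled calls mark_config_write before fc.write(): the subsequent filesystem event is recognized as internal and never reaches the ConfigChange hook.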
Per the plan's risk mitigation (plans/2026-04-09-claude-code-plugins-v4/07-phase-6-t2-infrastructure.md:343), this is Pass 1 of a two-pass rollout. Pass 2 (nested traversal, conditional path-glob rules, @include resolver, post-compact reload, Managed / Local memory types, ~/forge/rules/*.md) is deferred and tracked in plans/2026-04-09-claude-code-plugins-v4-audit-remaining-work-v1.md.

## Memory types consolidated in forge_domain::memory

Prior work had placed MemoryType and InstructionsLoadReason directly in hook_payloads.rs as part of the InstructionsLoadedPayload struct. This commit introduces a canonical crates/forge_domain/src/memory.rs module that now owns those enums plus the new LoadedInstructions and InstructionsFrontmatter types:

- MemoryType { User, Project, Local, Managed } — matches Claude Code's CLAUDE_MD_MEMORY_TYPES wire vocabulary. Pass 1 only emits User and Project; Local / Managed are placeholders for Pass 2.
- InstructionsLoadReason { SessionStart, NestedTraversal, PathGlobMatch, Include, Compact } — only SessionStart is constructed by Pass 1.
- InstructionsFrontmatter { paths: Option<Vec<String>>, include: Option<Vec<String>> } — parsed but not acted on in Pass 1.
- LoadedInstructions — classified file with file_path, memory_type, load_reason, content (frontmatter-stripped), frontmatter, globs, trigger_file_path, parent_file_path.

Both enums gained a &self-based as_wire_str() accessor (previously self-by-value at some call sites) matching Claude Code's wire format exactly — plugins that filter on memory_type / load_reason work unchanged. hook_payloads.rs now imports these types from crate::memory. The InstructionsLoadedPayload struct is byte-for-byte unchanged — its fields still hold the typed enums directly (not strings), and the existing PluginHookHandler dispatcher at hooks/plugin.rs:834 already calls .as_wire_str() when building the HookInputPayload, so the matcher string fed to plugin hooks remains 'session_start'.
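The &self-based accessors can be sketched as below. The variant sets come from the text; 'session_start' is the only wire string the text confirms — the remaining strings are assumed snake_case renderings, not verified against the real module:

```rust
// Sketch of the as_wire_str() accessors; variant names from the commit text,
// wire strings other than "session_start" are assumptions.
#[derive(Debug, Clone, Copy)]
enum MemoryType {
    User,
    Project,
    Local,
    Managed,
}

impl MemoryType {
    fn as_wire_str(&self) -> &'static str {
        match self {
            MemoryType::User => "user",
            MemoryType::Project => "project",
            MemoryType::Local => "local",
            MemoryType::Managed => "managed",
        }
    }
}

#[derive(Debug, Clone, Copy)]
enum InstructionsLoadReason {
    SessionStart,
    NestedTraversal,
    PathGlobMatch,
    Include,
    Compact,
}

impl InstructionsLoadReason {
    fn as_wire_str(&self) -> &'static str {
        match self {
            InstructionsLoadReason::SessionStart => "session_start",
            InstructionsLoadReason::NestedTraversal => "nested_traversal",
            InstructionsLoadReason::PathGlobMatch => "path_glob_match",
            InstructionsLoadReason::Include => "include",
            InstructionsLoadReason::Compact => "compact",
        }
    }
}
```

Taking &self (rather than self by value) lets call sites invoke the accessor on borrowed payload fields without a copy or clone.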
## ForgeCustomInstructionsService evolved

crates/forge_services/src/instructions.rs grew from 90 to 488 lines.

Kept semantics:
- Still discovers the same 3 paths in the same priority order:
  1. Base path — global ~/forge/AGENTS.md
  2. Git root AGENTS.md (when inside a repo)
  3. Current working directory AGENTS.md
- Still caches with tokio::sync::OnceCell
- Still silently ignores read errors

New semantics:
- classify_path maps each discovered absolute path to a MemoryType:

      base_path prefix -> MemoryType::User
      git_root prefix  -> MemoryType::Project
      cwd path         -> MemoryType::Project (cwd treated as project scope in Pass 1)
      fallback         -> MemoryType::Project

- parse_file reads the file and runs gray_matter (already a workspace dependency via forge_services/command.rs) to extract YAML frontmatter. Files without frontmatter get frontmatter: None and the full raw content. Files with malformed frontmatter log tracing::debug and fall back to None + full content — never a hard failure.
- Frontmatter.paths is surfaced as LoadedInstructions.globs so the Pass 2 PathGlobMatch implementation doesn't need to re-read the file.
- Cache type changed from OnceCell<Vec<String>> to OnceCell<Vec<LoadedInstructions>>.

## CustomInstructionsService trait extended

crates/forge_app/src/services.rs:282 added get_custom_instructions_detailed() -> Vec<LoadedInstructions> as the new primary method. The legacy get_custom_instructions() -> Vec<String> got a default implementation that delegates to the detailed variant and projects .content, so system_prompt.rs:91 (which injects custom rules into the system prompt) continues to work unchanged. The blanket impl<I: Services> at services.rs:1051 forwards both methods.

## New fire_instructions_loaded_hook

crates/forge_app/src/lifecycle_fires.rs gained a new free async function following the exact fire_setup_hook / fire_config_change_hook pattern:
1. Resolves the default agent via AgentRegistry
2. Builds a scratch Conversation via Conversation::new(ConversationId::generate())
3. Constructs InstructionsLoadedPayload directly from the LoadedInstructions — memory_type and load_reason are passed as the typed enums (the payload takes them natively)
4. Wraps in EventData::with_context(payload, &env, None, PermissionMode::default())
5. Wraps in LifecycleEvent::InstructionsLoaded
6. Dispatches via PluginHookHandler::new(services).handle
7. Drains blocking_error and discards — InstructionsLoaded is observability-only per Claude Code semantics
8. Logs dispatch failures via tracing::warn but never propagates

Re-exported from crates/forge_app/src/lib.rs alongside fire_setup_hook and fire_config_change_hook.

## Fire site in ForgeApp::chat

crates/forge_app/src/app.rs:79 replaced the single call to services.get_custom_instructions() with:
1. services.get_custom_instructions_detailed().await — returns typed LoadedInstructions
2. Projects .content into the existing custom_instructions: Vec<String> local so the system prompt builder path is untouched
3. Iterates the loaded files and fires fire_instructions_loaded_hook once per file

Each fire carries load_reason: SessionStart and no trigger / parent paths. Plugins that match on HookEventName::InstructionsLoaded with matcher 'session_start' see the 3 classified files.
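The classify-then-project flow can be sketched as follows; `classify_path` mirrors the prefix rules listed under "New semantics", but the free-function shape, argument list, and the trimmed-down `LoadedInstructions` are assumptions for illustration:

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum MemoryType {
    User,
    Project,
}

// Simplified stand-in for the real LoadedInstructions type.
struct LoadedInstructions {
    content: String,
    memory_type: MemoryType,
}

// Prefix-based classification: base_path -> User, git_root -> Project,
// everything else (including cwd files) -> Project in Pass 1.
fn classify_path(path: &Path, base_path: &Path, git_root: Option<&Path>) -> MemoryType {
    if path.starts_with(base_path) {
        MemoryType::User
    } else if git_root.map_or(false, |root| path.starts_with(root)) {
        MemoryType::Project
    } else {
        // cwd path and fallback are both project scope in Pass 1.
        MemoryType::Project
    }
}

// Projection that keeps the legacy Vec<String> system-prompt path working.
fn project_content(loaded: &[LoadedInstructions]) -> Vec<String> {
    loaded.iter().map(|l| l.content.clone()).collect()
}
```

The projection is why the legacy `get_custom_instructions()` can be a default method: it is just the detailed call with `.content` mapped out.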
## Tests

7 new tests total, 2 wire-format tests moved from hook_payloads.rs to the new memory.rs location (removing duplication):
- crates/forge_domain/src/memory.rs:
  - test_memory_type_as_wire_str_all_variants — covers all 4 variants
  - test_instructions_load_reason_as_wire_str_all_variants — covers all 5 variants
- crates/forge_services/src/instructions.rs (new self-contained MockInfra implementing EnvironmentInfra + FileReaderInfra + CommandInfra):
  - test_loads_base_agents_md_as_user_memory — only ~/forge/AGENTS.md exists -> 1 LoadedInstructions with MemoryType::User + SessionStart
  - test_loads_project_agents_md_from_git_root — git_root reported + file exists -> MemoryType::Project
  - test_parses_frontmatter_with_paths — '---\npaths:\n - "*.py"\n ---\nbody' -> frontmatter parsed, globs = Some(['*.py']), content = 'body'
  - test_file_without_frontmatter_has_none_frontmatter — plain file -> None frontmatter, None globs, full content
  - test_missing_file_returns_empty — no AGENTS.md anywhere -> empty Vec

Workspace: 2635 passed; 0 failed; 16 ignored. (+5 net vs Wave C baseline of 2630: +7 new tests - 2 moved tests.)
## Pass 2 — DEFERRED

Explicitly NOT touched by this commit (enum variants exist as placeholders, never constructed by Pass 1):
- MemoryType::Managed — /etc/forge/AGENTS.md admin policy path
- MemoryType::Local — <repo>/AGENTS.local.md gitignored per-checkout rules
- InstructionsLoadReason::NestedTraversal — walk from cwd to file dir, load nested AGENTS.md on first touch
- InstructionsLoadReason::PathGlobMatch — activate conditional rules (frontmatter paths:) on matching file access
- InstructionsLoadReason::Include — parse and recursively resolve '@include path/to/other.md' directives with cycle detection
- InstructionsLoadReason::Compact — re-fire for session-start-loaded files after compaction discards context
- ~/forge/rules/*.md directory loading

Refs: plans/2026-04-09-claude-code-plugins-v4/07-phase-6-t2-infrastructure.md (sub-phase 6D, lines 220-315)

* [autofix.ci] apply automated fixes

* feat(plugins): Wave E-1a — Phase 7A Subagent fire sites (SubagentStart/Stop)

Implements the Phase 7A slice of the plugin compatibility plan: fires SubagentStart and SubagentStop lifecycle events from AgentExecutor::execute with correct payload construction, consume logic for additional_contexts and blocking_error, and error-path coverage for both the chat-start failure and mid-stream drain failure branches. All payload types (SubagentStart/StopPayload), LifecycleEvent variants, Hook slots, EventHandle impls, and on_subagent_start/on_subagent_stop wiring from Phase 4 plumbing are reused unchanged.

## PluginHookHandler plumbed through AgentExecutor

- crates/forge_app/src/agent_executor.rs: added a plugin_handler: PluginHookHandler<S> field to AgentExecutor. The handle is cloned from the single instance constructed in ForgeApp::new (reused by CompactionHandler) so we do not create a second dispatcher.
- crates/forge_app/src/tool_registry.rs: ToolRegistry::new now accepts the plugin_handler and threads it through to AgentExecutor::new.
- crates/forge_app/src/app.rs + agent.rs: ForgeApp::chat passes the existing plugin_handler clone into ToolRegistry::new alongside CompactionHandler.

## SubagentStart fire site

crates/forge_app/src/agent_executor.rs:91-192 — after the subagent Conversation is built and upserted but BEFORE ForgeApp::chat runs:
1. Generate a stable subagent_id via ConversationId::generate() (avoids adding a direct uuid dependency; ConversationId wraps Uuid::new_v4 and produces a byte-identical 36-char v4 string).
2. Resolve the child Agent via services.get_agent(&agent_id); fall back to the first entry from services.get_agents(), then to a stub Agent::new(agent_id, ProviderId::FORGE, ModelId::new('')) if the registry is empty. Mirrors fire_setup_hook's resolution pattern.
3. Build EventData<SubagentStartPayload>::with_context(child_agent, model_id, session_id=conversation.id.into_string(), transcript_path=env.transcript_path(&session_id), cwd=env.cwd, payload={ agent_id: subagent_id, agent_type: agent_id.as_str() }).
4. Reset conversation.hook_result and dispatch directly through self.plugin_handler.handle(&mut conversation, &event).await. Bypasses the parent Orchestrator.hook because AgentExecutor runs outside any orchestrator's run loop.
5. Consume blocking_error: if a plugin blocks the subagent, return Ok(ToolOutput::text('Subagent X blocked by plugin hook: MSG')) without ever calling ForgeApp::chat.
6. Consume additional_contexts: each hook-returned context string is wrapped in <system_reminder> tags via Element and prepended to the effective_task string passed to ChatRequest::new(Event::new(effective_task), conversation.id). The inner orchestrator then picks these up as part of the UserPromptSubmit payload.

## SubagentStop fire site — happy + error paths

crates/forge_app/src/agent_executor.rs:203-346 — SubagentStop fires via an in-function free async fn fire_subagent_stop() extracted to side-step the borrow checker across the stream drain. Three branches:
1. **Chat start failure** (lines 252-271): if ForgeApp::chat(...).await returns an Err, fire SubagentStop with last_assistant_message: None and stop_hook_active: false BEFORE propagating the error.
2. **Mid-stream drain failure** (lines 273-298): if response_stream.next() yields an Err inside the while loop, fire SubagentStop with any partial output collected so far, then propagate.
3. **Happy path** (lines 300-342): after the stream closes cleanly, fire SubagentStop with last_assistant_message: Some(output) if output is non-empty, else None.

Drained blocking_error is discarded per Claude Code semantics — SubagentStop is observability-only.

## subagent_id threading — DEFERRED (Task 7)

The plan's 7A.4 requirement to thread the subagent UUID into HookInputBase.agent_id for the inner Orchestrator's own events (UserPromptSubmit, PreToolUse, etc.) was skipped per the explicit Wave E-1a fallback guidance. The change requires adding a field to EventData, a setter, a new helper on Orchestrator::plugin_hook_context, plumbing through ChatRequest or Conversation, and matching override logic in PluginHookHandler::build_hook_input. This is deferred to a later wave. Plugins that need to distinguish main-vs-subagent contexts can still filter on the explicit SubagentStart/SubagentStop events that fire correctly at the executor boundary. The inner orchestrator's own fires carry the main conversation's agent_id for now. TODO(wave-e-1a-task-7-subagent-threading) marker left at crates/forge_app/src/orch.rs:67-86 with a pointer to plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md:97-102.

## Tests — 6 new

**Dispatcher-level** (crates/forge_app/src/hooks/plugin.rs:1452-1568):
- test_dispatch_subagent_start_accumulates_additional_contexts_across_hooks — multi-hook accumulation of additional_contexts that the executor drains.
- test_dispatch_subagent_start_respects_once_semantics — mirror of the PreToolUse once test on the new SubagentStart surface.
**Payload construction** (crates/forge_app/src/agent_executor.rs:380-506, new #[cfg(test)] mod tests):
- test_subagent_start_payload_field_wiring_from_agent_id
- test_subagent_stop_payload_field_wiring_happy_path
- test_subagent_stop_payload_last_assistant_message_is_none_on_empty_output
- test_event_data_with_context_threads_subagent_payload

Full AgentExecutor::execute integration harness (with probe-Hook mock Services) deferred to Wave G per TODO(wave-e-1a-full-executor-tests) at crates/forge_app/src/agent_executor.rs:369-379.

## Context injection fallback — Task 4 simplification

Rather than injecting ContextMessage::system_reminder into Conversation.context before upsert_conversation (which would collide with SystemPrompt::add_system_message inside the inner ForgeApp::chat), the implementation prepends each <system_reminder>-wrapped context string to the task text before building ChatRequest. The inner orchestrator picks these up as part of the UserPromptSubmit payload, which is functionally equivalent from the plugin's perspective. TODO(wave-e-1a-subagent-context-injection) marker at crates/forge_app/src/agent_executor.rs:168-178 tracks the cleaner Pass 2 / Wave G approach.

## Tests — 0 regressions

- cargo build --workspace: clean
- cargo test -p forge_app: 646 passed, 0 failed
- cargo test --workspace: 2641 passed, 0 failed, 16 ignored (+6 net vs Wave D Pass 1 baseline of 2635)

All existing tests still green:
- 10 create_policy_for_operation tests in forge_services
- 4 dispatcher tests (test_dispatch_subagent_start_matches_agent_type, etc.)
- 3 wire-format tests in hook_io.rs
- 8 payload serde tests in hook_payloads.rs

Refs: plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md (Sub-Phase 7A, lines 20-109)

* [autofix.ci] apply automated fixes

* feat(plugins): Wave E-1b — Phase 7B Permission fire sites + aggregate extensions

Implements the Phase 7B slice of the plugin compatibility plan: fires PermissionRequest and PermissionDenied lifecycle events from ToolRegistry::check_tool_permission, with a new HookSpecificOutput wire variant and three new AggregatedHookResult consume fields.

Reference: plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md (Sub-Phase 7B, lines 110-250)

## Wire format: HookSpecificOutput::PermissionRequest

Adds a 5th variant to the HookSpecificOutput enum in crates/forge_domain/src/hook_io.rs:316 to surface the permission-specific fields plugins can return on a PermissionRequest hook:
- permissionDecision (Allow / Deny / Ask) — reuses the PermissionDecision type
- permissionDecisionReason
- updatedInput — last-write-wins override of the tool arguments
- updatedPermissions — last-write-wins plugin-provided permission scopes
- interrupt — latches to true to request interactive session interrupt
- retry — latches to true to re-fire the permission request once

New deserialization test:
- test_hook_output_sync_parses_permission_request_specific_output (crates/forge_domain/src/hook_io.rs)

## AggregatedHookResult: 3 new consume fields + merge logic

Extends crates/forge_domain/src/hook_result.rs:33-63 with three new public fields:

    pub updated_permissions: Option<serde_json::Value>, // last-write-wins
    pub interrupt: bool,                                // latch to true
    pub retry: bool,                                    // latch to true

The module header doc (hook_result.rs:1-28) is updated to document the merge policy for each new field.
AggregatedHookResult::merge now handles HookSpecificOutput::PermissionRequest:
- permission_behavior: first-wins across all hooks (mirrors PreToolUse)
- updated_input: last-write-wins
- updated_permissions: last-write-wins
- interrupt: latches to true via OR
- retry: latches to true via OR

Conversation::reset_hook_result (crates/forge_domain/src/conversation.rs:12) is updated to also clear the three new fields between events.

New merge tests (crates/forge_domain/src/hook_result.rs):
- test_merge_permission_request_first_wins_on_decision
- test_merge_permission_request_last_wins_on_updated_permissions
- test_merge_permission_request_latches_interrupt_to_true
- test_merge_permission_request_latches_retry_to_true
- test_aggregated_default_has_false_interrupt_and_retry
- test_reset_clears_updated_permissions_interrupt_and_retry

## Fire sites in ToolRegistry::check_tool_permission

crates/forge_app/src/tool_registry.rs:97-312 — PermissionRequest and PermissionDenied fire via a scratch Conversation pattern rather than threading the live orchestrator conversation through the AgentService call path (agent.rs:65-90 isn't reachable from here). Scratch conversations are discarded after each fire; all plugin-consume state is actioned synchronously within check_tool_permission itself.

The PermissionRequest fire happens BEFORE the call to services.check_operation_permission:
1. Extract tool_name from ToolCatalog and tool_input from its serialized 'arguments' field.
2. fire_permission_request builds PermissionRequestPayload with an empty permission_suggestions Vec (real suggestion logic is TODO — see hook_payloads.rs:476-479).
3. Dispatch loops up to 2 attempts if the plugin sets retry: true, per plan line 185. The retry flag is OR-latched across hooks, so one re-fire is all that's ever needed.
4. Consume logic after dispatch:
   - interrupt == true -> anyhow::bail!('session interrupted by plugin hook') propagates up to the orchestrator's error handling
   - updated_permissions.is_some() -> tracing::info log + TODO marker for persistence (plan line 180)
   - blocking_error.is_some() -> fire PermissionDenied with the plugin's reason, return Ok(true)
   - permission_behavior == Allow -> skip services.check_operation_permission, return Ok(false)
   - permission_behavior == Deny -> fire PermissionDenied, return Ok(true)
   - permission_behavior == Ask | None -> fall through to the policy service as normal

PermissionDenied fires in TWO spots:
1. When a plugin hook auto-denies via blocking_error or a Deny decision
2. When services.check_operation_permission returns !decision.allowed

PermissionDenied is observability-only per plan line 220. The scratch conversation.hook_result is drained and discarded after dispatch; dispatch errors are logged via tracing::warn but never propagated.

New helpers (all private on the ToolRegistry impl):
- fire_permission_request -> Option<AggregatedHookResult> (returns None when no agent is available, e.g. very early init)
- fire_permission_denied -> ()
- build_hook_dispatch_base -> Option<(Agent, Conversation, session_id, transcript_path, cwd)>, with the same agent-resolution fallback chain that Wave E-1a established in agent_executor.rs (active agent -> first registered agent -> None)

## NO signature changes on check_tool_permission

The plan's Task 4 (updating orch.rs to pass the conversation through) turned out to be unnecessary: using scratch conversations means check_tool_permission's public signature stays

    async fn check_tool_permission(&self, tool_input: &ToolCatalog, context: &ToolCallContext) -> anyhow::Result<bool>

so the call site at tool_registry.rs:400 inside call_inner needs no edits. This is a strictly additive change — nothing outside tool_registry.rs, hook_io.rs, hook_result.rs, and conversation.rs needs to compile against a new API.
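The consume ladder reduces to the following sketch, with simplified stand-ins for AggregatedHookResult and PermissionDecision (the real types carry more fields, and the real outcomes are anyhow results rather than an enum):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Decision {
    Allow,
    Deny,
    Ask,
}

// Simplified stand-in for AggregatedHookResult after dispatch.
#[derive(Default)]
struct HookResult {
    interrupt: bool,
    blocking_error: Option<String>,
    permission_behavior: Option<Decision>,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Interrupted, // real code bails with anyhow and propagates
    Denied,      // fires PermissionDenied, returns Ok(true)
    Allowed,     // skips the policy service, returns Ok(false)
    FallThrough, // Ask or no decision: policy service decides
}

// Order matters: interrupt outranks blocking_error, which outranks the decision.
fn consume(result: &HookResult) -> Outcome {
    if result.interrupt {
        return Outcome::Interrupted;
    }
    if result.blocking_error.is_some() {
        return Outcome::Denied;
    }
    match result.permission_behavior {
        Some(Decision::Allow) => Outcome::Allowed,
        Some(Decision::Deny) => Outcome::Denied,
        Some(Decision::Ask) | None => Outcome::FallThrough,
    }
}
```

Keeping the ladder in one function makes the precedence auditable: a hook that both interrupts and allows still interrupts.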
## Dispatcher-level tests

Three new tests in crates/forge_app/src/hooks/plugin.rs (added in a separate sub-session after the first agent hit its tool-call budget before reaching this task). They live in a nested 'wave_e1b_permission' module to use the planned test names without colliding with pre-existing parent-level tests of the same leaf names:

1. test_dispatch_permission_request_matches_tool_name (passing)
   - Single hook matcher 'Bash' on PermissionRequest
   - Asserts the executor was invoked once for the right event name
2. test_dispatch_permission_request_consumes_permission_decision_first_wins (passing)
   - Two matching hooks, first returns Allow, second returns Deny
   - Asserts permission_behavior == Some(Allow) (first-wins holds)
   - Asserts both new latch fields default to false when no hook sets them
3. test_dispatch_permission_denied_observability_only (#[ignore]d)
   - Intentionally tests the ideal observability-only contract for PermissionDenied, which the current merge logic does NOT honor. The dispatcher impl for EventData<PermissionDeniedPayload> shares the same AggregatedHookResult::merge path as PermissionRequest, so a PermissionDenied hook that returns HookSpecificOutput::PermissionRequest DOES currently leak permission_behavior and updated_input into the aggregate.
   - The test asserts the INTENDED behavior (permission_behavior == None, updated_input == None) and is #[ignore]d pending a fix.
   - A detailed TODO(wave-e-1b) comment documents two remediation options: (a) gate the PermissionRequest variant in AggregatedHookResult::merge on the current event type, or (b) post-process in the EventHandle<EventData<PermissionDeniedPayload>> impl to clear non-observability fields. The test will start passing automatically once either fix lands — no body edits needed.
To support those tests without disturbing the pre-existing 'dispatch' surface, added dispatch_with_canned_results as a sibling method on ExplicitDispatcher that folds caller-supplied HookExecResult values into the aggregate instead of the hardcoded StubExecutor::canned_success() stub. Strictly additive.

## TODO markers left behind (tracked in audit plan)

- TODO(wave-e-1b-tool-registry-integration-tests) tool_registry.rs:92 — ToolRegistry lacks a mock-Services test harness, so the plugin consume paths here are covered only by dispatcher-level tests. Full integration suite deferred.
- TODO(wave-e-1b) plugin.rs:2031 — PermissionDenied merge leak.
- TODO at tool_registry.rs:148 — updated_permissions persistence.
- TODO at tool_registry.rs:164 — richer reason extraction.
- TODO at tool_registry.rs:282 — thread the real tool_use_id through the policy path.

## Test results

- cargo build --workspace: clean (only pre-existing hook_runtime dead_code warnings)
- cargo test -p forge_domain: 724 passed, 0 failed (+6 vs baseline)
- cargo test -p forge_app: 647 passed, 0 failed (+1 vs baseline)
- cargo test --workspace: 2649 passed, 0 failed, 17 ignored

Delta vs Wave E-1a baseline of 2641:
- +6 hook_io/hook_result/conversation tests (Tasks 1-2)
- +2 dispatcher tests (Tasks A + B)
- +1 ignored dispatcher test (Task C pending merge fix)
- = +9 total, +8 passing

* [autofix.ci] apply automated fixes

* feat(plugins): Wave E-2a — Phase 7C FileChangedWatcher + fs_watcher_core extraction

Implements the runtime side of Phase 7C FileChanged events: extracts shared filesystem-watcher helpers from ConfigWatcher into a new fs_watcher_core module, builds FileChangedWatcher and its handle on top of them, and wires the spawn site in ForgeAPI::init. All hook plumbing for Phase 7C (payloads, LifecycleEvent variants, Hook slots, PluginHookHandler dispatcher impls, app.rs wiring, dispatcher-level tests) was already in place from Phase 4 plumbing.
This commit is strictly runtime wiring — the dispatcher impls at crates/forge_app/src/hooks/plugin.rs:733-785 (CwdChanged + FileChanged) and their tests at lines 1731-1806 remain untouched.

## fs_watcher_core — extracted shared helpers

New module crates/forge_services/src/fs_watcher_core.rs hosts the pieces of the filesystem-watcher state machine that are reusable across ConfigWatcher and FileChangedWatcher:

- Four timing constants made pub(crate):

      INTERNAL_WRITE_WINDOW = 5s   (Claude Code internal-write grace)
      ATOMIC_SAVE_GRACE     = 1.7s (Claude Code changeDetector 1.7s)
      DEBOUNCE_TIMEOUT      = 1s   (awaitWriteFinish.stabilityThreshold)
      DISPATCH_COOLDOWN     = 1.5s (per-path burst collapse)

- canonicalize_for_lookup(path) — macOS /var -> /private/var symlink resolver, keyed consistently with paths emitted by notify-debouncer-full. Falls back to the raw path when the target does not exist (delete / pre-create).
- is_internal_write_sync — pure helper that checks whether a path is inside its internal-write grace window by consulting a shared HashMap<PathBuf, Instant> guard.
- pub(crate) re-export of notify_debouncer_full::notify::RecursiveMode so both watchers can depend on fs_watcher_core rather than the underlying crate directly.

ConfigWatcher was updated to import these from fs_watcher_core rather than defining them locally. The local timing constants, the private canonicalize_for_lookup helper, and the is_internal_write_sync helper were all removed (-65 net lines). The ConfigWatcher struct, its public API, and the three-map state machine in handle_event remain intact — only the shared atoms moved. All 11 existing ConfigWatcher tests at crates/forge_services/src/config_watcher.rs:472-716 still pass byte-for-byte. The refactor is pure extraction with no behavior change.

## FileChangedWatcher — the new watcher

New module crates/forge_services/src/file_changed_watcher.rs (~400 lines) implements the runtime watcher for Phase 7C FileChanged events.
Public API:

    pub struct FileChange {
        pub file_path: PathBuf,
        pub event: FileChangeEvent, // re-exported from forge_domain
    }

    pub struct FileChangedWatcher { ... }

    impl FileChangedWatcher {
        pub fn new<F>(
            watch_paths: Vec<(PathBuf, RecursiveMode)>,
            callback: F,
        ) -> anyhow::Result<Self>
        where
            F: Fn(FileChange) + Send + Sync + 'static;

        pub async fn mark_internal_write(&self, path: impl Into<PathBuf>);
        pub async fn is_internal_write(&self, path: &Path) -> bool;
    }

Mirrors ConfigWatcher's three-map state machine exactly:
- recent_internal_writes: Arc<Mutex<HashMap<PathBuf, Instant>>> for self-write suppression.
- pending_unlinks: Arc<Mutex<HashMap<PathBuf, Instant>>> for atomic-save grace (Remove -> delayed dispatch -> Create collapses the pair into a single Modify-equivalent).
- last_fired: Arc<Mutex<HashMap<PathBuf, Instant>>> for per-path DISPATCH_COOLDOWN burst collapse.

Event classification maps notify EventKind to FileChangeEvent:

    EventKind::Create(_) -> FileChangeEvent::Add    (unless collapsing a pending Remove, in which case -> Change)
    EventKind::Modify(_) -> FileChangeEvent::Change
    EventKind::Remove(_) -> FileChangeEvent::Unlink (after ATOMIC_SAVE_GRACE, if not collapsed)
    EventKind::Access(_) -> ignored (not a mutation)
    EventKind::Any|Other -> ignored

Non-existent watch paths at construction time are logged at tracing::debug and skipped — the watcher stays alive and usable for the remaining paths, so startup does not abort when a dynamic path hasn't been created yet.

Seven unit tests using tempfile::TempDir and polling loops with Instant deadlines to tolerate macOS FSEvents latency:
- test_file_changed_watcher_detects_add
- test_file_changed_watcher_detects_modify
- test_file_changed_watcher_detects_delete_after_grace
- test_file_changed_watcher_collapses_atomic_save
- test_file_changed_watcher_suppresses_internal_write
- test_file_changed_watcher_cooldown_collapses_burst
- test_file_changed_watcher_skips_missing_paths

All 7 pass without #[ignore].
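The classification table can be sketched with a simplified stand-in for notify's EventKind (the real enum nests create/modify/remove detail variants that this sketch flattens, and the grace-period timing is elided):

```rust
// Simplified stand-in for notify's EventKind.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NotifyKind {
    Create,
    Modify,
    Remove,
    Access,
    Other,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum FileChangeEvent {
    Add,
    Change,
    Unlink,
}

// `collapsing_pending_remove` models the atomic-save case: a Create that
// pairs with a recent Remove is reported as a single Change.
fn classify(kind: NotifyKind, collapsing_pending_remove: bool) -> Option<FileChangeEvent> {
    match kind {
        NotifyKind::Create if collapsing_pending_remove => Some(FileChangeEvent::Change),
        NotifyKind::Create => Some(FileChangeEvent::Add),
        NotifyKind::Modify => Some(FileChangeEvent::Change),
        // In the real watcher this fires only after ATOMIC_SAVE_GRACE expires.
        NotifyKind::Remove => Some(FileChangeEvent::Unlink),
        // Access is not a mutation; Any/Other kinds are ignored.
        NotifyKind::Access | NotifyKind::Other => None,
    }
}
```

Returning `Option` keeps the ignore cases explicit: `None` means no hook fires for that kind.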
## fire_file_changed_hook — new forge_app dispatch helper

crates/forge_app/src/lifecycle_fires.rs gained fire_file_changed_hook as a free async function following the exact fire_config_change_hook pattern:

    pub async fn fire_file_changed_hook<S: Services>(
        services: Arc<S>,
        file_path: PathBuf,
        event: FileChangeEvent,
    ) -> anyhow::Result<()>

1. Builds a scratch Conversation via Conversation::new(ConversationId::generate()).
2. Resolves the active agent via services.get_active_agent_id() with fallback to the first registered agent; returns Ok(()) early if no agent can be resolved (very early init).
3. Constructs FileChangedPayload { file_path, event }.
4. Wraps in EventData::with_context(agent, model_id, session_id, transcript_path, cwd, payload).
5. Dispatches via PluginHookHandler::new(services).handle on the EventHandle<EventData<FileChangedPayload>> impl at crates/forge_app/src/hooks/plugin.rs:757-785.
6. Drains and discards scratch.hook_result — FileChanged is observability-only for now. Dynamic watch_paths extension from hook output is Wave E-2b scope and will consume the result.
7. Logs dispatch failures via tracing::warn but NEVER propagates.

Re-exported from crates/forge_app/src/lib.rs alongside the existing fire_setup_hook, fire_config_change_hook, and fire_instructions_loaded_hook helpers.

## FileChangedWatcherHandle — the long-lived wrapper

New module crates/forge_api/src/file_changed_watcher_handle.rs mirrors ConfigWatcherHandle exactly:

    #[derive(Clone)]
    pub struct FileChangedWatcherHandle {
        inner: Option<Arc<FileChangedWatcher>>,
    }

    impl FileChangedWatcherHandle {
        pub fn spawn<S: Services + 'static>(
            services: Arc<S>,
            watch_paths: Vec<(PathBuf, RecursiveMode)>,
        ) -> anyhow::Result<Self>;

        pub fn mark_internal_write(&self, path: &Path);
    }

Runtime capture pattern (from ConfigWatcherHandle):
1. Captures tokio::runtime::Handle::try_current() at construction time.
If Err, logs warn and returns Ok(Self { inner: None }) so unit tests (which may construct ForgeAPI outside a runtime) work.
2. Inside the debouncer callback (which fires from a non-tokio thread), calls runtime.spawn(async move { fire_file_changed_hook(services_clone, change.file_path, change.event).await }) to schedule dispatch back on the main runtime.
3. Dispatch failures inside the spawned task are logged via tracing::warn but never propagate.

mark_internal_write is a sync wrapper around FileChangedWatcher::mark_internal_write, provided for symmetry with ConfigWatcherHandle. It is currently a no-op entry point because Forge does not yet write to files it watches via this watcher — reserved for future CwdChanged work.

Re-exported from crates/forge_api/src/lib.rs alongside ConfigWatcherHandle.

## ForgeAPI::init — spawn wiring

crates/forge_api/src/forge_api.rs gained a new field _file_changed_watcher: Option<FileChangedWatcherHandle> parallel to _config_watcher (underscore-prefixed because the handle is kept alive purely for its inner Arc<FileChangedWatcher>'s Drop impl, which stops the notify thread and releases all installed watches).

ForgeAPI::init now resolves FileChanged watch paths from the merged hook config and spawns a FileChangedWatcherHandle after the ConfigWatcherHandle::spawn call. The resolution logic:
1. resolve_file_changed_watch_paths(services) is a private async helper that:
   - Loads the merged hook config via HookConfigLoaderService::load
   - Iterates config.hooks.get(&HookEventName::FileChanged) to find all configured FileChanged matchers
   - Splits each matcher string on '|' to support pipe-separated paths like '.envrc|.env' (Phase 7C plan line 194)
   - Resolves each alternative relative to env.cwd
   - Filters to paths that actually exist on disk (missing paths log at debug and are skipped)
   - Deduplicates and returns as Vec<(PathBuf, RecursiveMode)> with RecursiveMode::NonRecursive (file-level watches)
2. The resolution runs inside a tokio runtime check — if we are on a multi-threaded runtime, block_in_place + block_on pulls the config load synchronously into ForgeAPI::init without deadlocking. If we are on a current-thread runtime or no runtime at all, the watcher is deferred with a TODO(wave-e-2a-async-init) marker. This matches the constraint that ForgeAPI::init is currently a sync constructor.
3. If file_changed_watch_paths is empty (no hooks configured for FileChanged), no watcher is spawned at all — zero runtime cost.
4. If FileChangedWatcherHandle::spawn fails, the error is logged via tracing::warn and ForgeAPI still constructs successfully (the rest of Forge does not depend on the watcher being alive).

## Known gaps / follow-ups

Explicitly OUT of scope for Wave E-2a:
- CwdChanged fire site in the Shell tool path. Tracked as Wave E-2a-cwd with known architectural blockers: Environment::cwd is immutable, ForgeShell is stateless, and subprocess isolation prevents a cd in the child from affecting the parent cwd. Needs either command-string parsing for a 'cd' prefix (fragile) or appending '; pwd' for capture (invasive). Will require a small architecture doc before implementation.
- Dynamic watch_paths from hook output (Phase 7C.4 plan lines 228-233). When a FileChanged hook returns watch_paths in hook_specific_output, those paths should be added to the watcher set at runtime and the watcher restarted with the expanded set. This is Wave E-2b scope.
- Phase 7D worktree tools (EnterWorktreeTool, ExitWorktreeTool, worktree_manager refactor, WorktreeCreate/WorktreeRemove fire sites). Wave E-2c scope.
- Phase 8 MCP elicitation compliance. Wave F scope.
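The split/resolve/filter/dedupe steps of the resolution helper can be sketched as below; the function name echoes the text but the signature is simplified, and the existence check is injected as a closure so the sketch stays self-contained rather than touching the real filesystem:

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

// Sketch of the matcher resolution: split pipe-separated alternatives,
// resolve against cwd, keep only existing paths, deduplicate.
fn resolve_watch_paths(
    matchers: &[&str],
    cwd: &Path,
    exists: impl Fn(&Path) -> bool,
) -> Vec<PathBuf> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for matcher in matchers {
        for alt in matcher.split('|') {
            // join() resolves relative entries against cwd and lets
            // absolute entries pass through unchanged.
            let resolved = cwd.join(alt);
            if exists(&resolved) && seen.insert(resolved.clone()) {
                out.push(resolved);
            }
        }
    }
    out
}
```

Injecting `exists` mirrors the "missing paths are skipped" rule while keeping the helper deterministic under test.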
## Test results

cargo build --workspace: clean (only pre-existing hook_runtime dead_code warnings; no new warnings)
cargo test -p forge_services --lib config_watcher: 11 passed, 0 failed (all existing ConfigWatcher tests green after the fs_watcher_core extraction)
cargo test -p forge_services --lib file_changed_watcher: 7 passed, 0 failed (all new tests green, no #[ignore])
cargo test -p forge_app: 648 passed, 0 failed, 1 ignored (+1 vs Wave E-1b baseline 647)
cargo test -p forge_domain: 724 passed, 0 failed (no change)
cargo test -p forge_services: 296 passed, 0 failed (+6 vs 290 baseline after adding the fs_watcher_core and file_changed_watcher modules)
cargo test --workspace: 2657+ passed, 0 failed, 17 ignored (+8 net vs Wave E-1b baseline of 2649)

Refs: plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md (Sub-Phase 7C, lines 187-253)

* [autofix.ci] apply automated fixes

* feat(plugins): Wave E-2b — Phase 7C dynamic watch_paths from SessionStart hooks

Wires up dynamic watch_paths extension: when a SessionStart hook returns watch_paths in its aggregated result, those paths are now added to the running FileChangedWatcher at runtime so subsequent filesystem changes under those paths fire FileChanged hooks. This completes the Phase 7C FileChanged flow begun in Wave E-2a. The dispatcher plumbing (payloads, LifecycleEvent variants, Hook slots, PluginHookHandler impls) was already in place from Phase 4 plumbing — this commit is strictly runtime wiring.

Reference: plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md (Sub-Phase 7C.4, lines 227-237)

## Architecture: OnceLock late-binding

The orchestrator (forge_app::orch) fires SessionStart and can consume AggregatedHookResult.watch_paths, but it cannot reach the FileChangedWatcherHandle stored in ForgeAPI because the dependency direction is forge_api -> forge_app -> forge_services.
Rather than restructuring the graph or threading a handle through every layer, this commit uses a module-local OnceLock pattern for late binding:

1. crates/forge_app/src/lifecycle_fires.rs defines:
   pub trait FileChangedWatcherOps: Send + Sync { fn add_paths(&self, watch_paths: Vec<(PathBuf, RecursiveMode)>); }
   static FILE_CHANGED_WATCHER_OPS: OnceLock<Arc<dyn FileChangedWatcherOps>> = OnceLock::new();
   pub fn install_file_changed_watcher_ops(ops: Arc<dyn FileChangedWatcherOps>);
   pub fn add_file_changed_watch_paths(watch_paths: Vec<(PathBuf, RecursiveMode)>);
2. crates/forge_api/src/file_changed_watcher_handle.rs implements FileChangedWatcherOps for FileChangedWatcherHandle, forwarding add_paths to the inner Arc<FileChangedWatcher>.
3. ForgeAPI::init calls install_file_changed_watcher_ops(Arc::new(handle.clone())) immediately after FileChangedWatcherHandle::spawn succeeds. The registration happens once per process lifetime.
4. orch.rs::run_inner at the SessionStart fire site consumes hook_result.watch_paths, resolves each entry against cwd (relative paths are joined with the session cwd; absolute paths pass through unchanged), and calls add_file_changed_watch_paths to forward the list to the registered handle.

The OnceLock approach touches 8 files total — well under the 15-file rule-of-thumb that would justify a wider Services trait refactor.

## FileChangedWatcher::add_paths

crates/forge_services/src/file_changed_watcher.rs:

- The private _debouncer: Option<Debouncer<...>> field was replaced with debouncer: Arc<Mutex<Option<Debouncer<...>>>> (underscore dropped because the field is now actively used at runtime).
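The OnceLock late binding in steps 1-4 above can be sketched with std-only Rust. This is a hedged sketch, not the actual forge_app code: RecordingWatcher is a hypothetical stand-in for FileChangedWatcherHandle, and the local WatchMode enum replaces notify's RecursiveMode.

```rust
use std::path::PathBuf;
use std::sync::{Arc, Mutex, OnceLock};

// Stand-in for notify's RecursiveMode, which is not pulled in here.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum WatchMode {
    NonRecursive,
}

// Trait and late-binding slot, mirroring the shape described above.
trait FileChangedWatcherOps: Send + Sync {
    fn add_paths(&self, watch_paths: Vec<(PathBuf, WatchMode)>);
}

static FILE_CHANGED_WATCHER_OPS: OnceLock<Arc<dyn FileChangedWatcherOps>> = OnceLock::new();

// Called once from the API layer after the watcher handle is spawned.
fn install_file_changed_watcher_ops(ops: Arc<dyn FileChangedWatcherOps>) {
    // A second install is ignored; registration happens once per process.
    let _ = FILE_CHANGED_WATCHER_OPS.set(ops);
}

// Called from the orchestrator; silently a no-op if nothing is registered yet.
fn add_file_changed_watch_paths(watch_paths: Vec<(PathBuf, WatchMode)>) {
    if let Some(ops) = FILE_CHANGED_WATCHER_OPS.get() {
        ops.add_paths(watch_paths);
    }
}

// Minimal recording implementation standing in for FileChangedWatcherHandle.
struct RecordingWatcher {
    seen: Mutex<Vec<PathBuf>>,
}

impl FileChangedWatcherOps for RecordingWatcher {
    fn add_paths(&self, watch_paths: Vec<(PathBuf, WatchMode)>) {
        self.seen
            .lock()
            .unwrap()
            .extend(watch_paths.into_iter().map(|(p, _)| p));
    }
}
```

The key property this preserves is that neither side holds a compile-time dependency on the other's concrete type: the consumer only sees the trait object, and an unregistered slot degrades to a no-op rather than an error.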
- New public method pub fn add_paths(&self, watch_paths: Vec<(PathBuf, RecursiveMode)>) locks the mutex briefly, extracts the debouncer via MutexGuard::as_mut() (Debouncer::watch() takes &mut self, verified against ~/.cargo/registry/.../notify-debouncer-full-0.5.0/src/lib.rs:574-582), and installs each path with the same path-exists-tolerance semantics as the constructor: missing paths log at tracing::debug and are skipped, the watcher stays alive and usable.
- Errors are never propagated out of add_paths. The callback contract matches Claude Code's chokePointBehavior: observability-only at the filesystem layer, the SessionStart fire site only needs to know that the request was submitted.

Two new unit tests:

- test_file_changed_watcher_add_paths_installs_runtime_watcher:778 — constructs a watcher with empty watch_paths, calls add_paths, writes a file under the newly watched path, asserts the user callback fires within the DISPATCH_COOLDOWN window.
- test_file_changed_watcher_add_paths_tolerates_missing_paths:832 — calls add_paths twice, once with a nonexistent path, once with a valid path. Asserts no panic, no error, and the valid-path dispatch still fires correctly.

## parse_file_changed_matcher — extracted shared helper

crates/forge_api/src/file_changed_watcher_handle.rs:

New pub(crate) function parse_file_changed_matcher(matcher: &str, base_cwd: &Path) -> Vec<(PathBuf, RecursiveMode)> implements the pipe-separated matcher parsing previously inlined in the startup resolver at forge_api.rs:204:

- Splits on '|' to support alternatives like '.envrc|.env'
- Trims whitespace on each alternative
- Drops empty / whitespace-only alternatives
- Resolves absolute paths as-is
- Resolves relative paths against base_cwd via Path::join
- Returns Vec<(PathBuf, RecursiveMode::NonRecursive)> — all FileChanged hooks use NonRecursive mode because hooks.json matchers are file-level, not directory-level.

Existence filtering is intentionally NOT applied here — the caller decides.
The startup resolver filters out missing paths (because startup paths should exist when the watcher boots), but the runtime consumer in orch.rs does NOT filter because SessionStart hooks may create files shortly after the watcher is installed.

Four new tests (3 required + 1 bonus edge case):

- test_parse_file_changed_matcher_single_path_relative_resolves_to_cwd:230
- test_parse_file_changed_matcher_pipe_separated_splits_all_alternatives:241
- test_parse_file_changed_matcher_absolute_path_not_resolved:262
- test_parse_file_changed_matcher_empty_and_whitespace_alternatives_dropped:276 (bonus)

resolve_file_changed_watch_paths at forge_api.rs:204 now delegates to parse_file_changed_matcher and then applies the existence filter inline, cutting the original inline parser down to a map + filter chain.

## Orchestrator consume site

crates/forge_app/src/orch.rs:

- New imports at the top:
  use forge_app::{FileChangedWatcherOps, add_file_changed_watch_paths, install_file_changed_watcher_ops};
  use notify_debouncer_full::notify::RecursiveMode;
- At the SessionStart fire site (around line 550, after the consume logic for initial_user_message), consumes hook_result.watch_paths:
  if !hook_result.watch_paths.is_empty() {
      let resolved = hook_result
          .watch_paths
          .iter()
          .map(|p| {
              if p.is_absolute() {
                  (p.clone(), RecursiveMode::NonRecursive)
              } else {
                  (cwd.join(p), RecursiveMode::NonRecursive)
              }
          })
          .collect();
      tracing::debug!(count = resolved.len(), "adding runtime watch paths from SessionStart");
      add_file_changed_watch_paths(resolved);
  }
- The call is fire-and-forget. Errors inside add_file_changed_watch_paths are swallowed by the watcher layer.

## ForgeAPI::init registration

crates/forge_api/src/forge_api.rs:

Now always spawns FileChangedWatcherHandle even when the initial watch_paths list is empty — previously the watcher was skipped entirely on empty startup paths.
Empty startup paths are a valid state (no hook config declares FileChanged matchers yet), and we still need the handle to be registered so runtime watch_paths from SessionStart hooks can be added.

After FileChangedWatcherHandle::spawn succeeds, registers the handle:
forge_app::install_file_changed_watcher_ops(Arc::new(handle.clone()));

## forge_app re-exports and dependency

crates/forge_app/Cargo.toml added notify-debouncer-full = '0.5' so forge_app can reference RecursiveMode in the FileChangedWatcherOps trait signature and in the orchestrator consume site. This is the only new crate-level dependency in Wave E-2b.

crates/forge_app/src/lib.rs re-exports:
pub use lifecycle_fires::FileChangedWatcherOps;
pub use lifecycle_fires::install_file_changed_watcher_ops;
pub use lifecycle_fires::add_file_changed_watch_paths;

## Test results

cargo build --workspace: clean, 10 warnings (all pre-existing dead_code in forge_services::hook_runtime; 0 new).
cargo test -p forge_services --lib file_changed_watcher: 9 passed, 0 failed (7 existing + 2 new)
cargo test -p forge_services --lib: 298 passed, 0 failed, 1 ignored
cargo test -p forge_api: 4 passed, 0 failed (all new parser tests)
cargo test --workspace: 2662 passed, 0 failed, 17 ignored

Delta vs Wave E-2a baseline of 2657: +5 net. Breakdown:
+2 file_changed_watcher::add_paths tests
+4 parse_file_changed_matcher tests
-1 pre-existing flake not counted (forge_main::info::test_format_path_for_display_no_home — reproduces on baseline de48157b when Faker generates /home/user/project prefix; unrelated to this commit)

## Rules compliance

1. hook_payloads.rs, hook.rs, hooks/plugin.rs untouched (Phase 4 plumbing reused as-is)
2. CwdChanged fire site untouched (deferred as Wave E-2a-cwd)
3. No dependency graph restructure (OnceLock fallback used)
4. All 7 pre-existing file_changed_watcher tests pass byte-for-byte
5. Debouncer::watch() signature verified as &mut self; MutexGuard::as_mut() used in add_paths
6. tracing::debug! for path-add successes and skips, no error propagation from add_paths
7. No new TODO markers introduced

* [autofix.ci] apply automated fixes

* feat(plugins): Wave E-2c-i — Phase 7D minimal WorktreeCreate fire site

Implements the minimal Phase 7D slice: extract git worktree manipulation into a reusable forge_services module and add a WorktreeCreate hook fire site at the --worktree CLI flag path in crates/forge_main/src/sandbox.rs. Plugins can now veto worktree creation via blocking_error or hand back a custom path via worktreePath.

EXPLICITLY OUT OF SCOPE (deferred to Wave E-2c-ii):

- EnterWorktreeTool / ExitWorktreeTool Tool catalog variants
- Runtime cwd switching
- Memory/cache invalidation on worktree enter/exit
- WorktreeRemove fire site

Reference: plans/2026-04-09-claude-code-plugins-v4/08-phase-7-t3-intermediate.md (Sub-Phase 7D, 7D.1/7D.5/7D.6)

## New crate module: forge_services::worktree_manager

New file crates/forge_services/src/worktree_manager.rs (191 lines) is a pure extraction of Sandbox::create from the original sandbox.rs:18-138. The extraction rules were strict:

- Takes name: &str instead of &self.dir
- Returns WorktreeCreationResult { path: PathBuf, created: bool } where created is true on fresh creation and false when reusing an existing worktree. This distinction was previously a runtime-only println! decision buried in Sandbox::create.
- NO stdout side effects. The original TitleFormat::info('Worktree [Created]') / info('Worktree [Reused]') c…
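The WorktreeCreationResult shape and the create-vs-reuse decision can be sketched in std-only Rust. This is a simplified sketch: create_or_reuse and worktrees_root are hypothetical names, and plain directory creation stands in for the git worktree commands the real module runs.

```rust
use std::path::{Path, PathBuf};

// Result type as described above: path plus a created/reused flag.
#[derive(Debug, PartialEq, Eq)]
struct WorktreeCreationResult {
    path: PathBuf,
    created: bool,
}

// Sketch of the decision: an existing directory is reused (created = false),
// otherwise a fresh worktree is made (the real code invokes git here).
fn create_or_reuse(worktrees_root: &Path, name: &str) -> std::io::Result<WorktreeCreationResult> {
    let path = worktrees_root.join(name);
    if path.is_dir() {
        Ok(WorktreeCreationResult { path, created: false })
    } else {
        std::fs::create_dir_all(&path)?;
        Ok(WorktreeCreationResult { path, created: true })
    }
}
```

Returning the flag instead of printing lets the caller (the sandbox.rs fire site) decide how to report "Created" vs "Reused", which is what makes the extraction stdout-free.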
- Add MarketplaceManifest deserialization type for marketplace.json
- Add marketplace-aware scanning to scan_root (resolves nested plugins via marketplace.json source field)
- Handle cache/ and marketplaces/ container directory layouts for Claude Code plugin discovery
- Detect marketplace indirection during forge plugin install
- Count MCP servers from .mcp.json sidecar in trust prompt
- Copy only effective plugin root during install (not entire repo)
- Add CLAUDE_PLUGIN_ROOT env var alias for hook and MCP subprocesses
- Add CLAUDE_PROJECT_DIR and CLAUDE_SESSION_ID env var aliases
- Add modes count to trust prompt, /plugin info, and /plugin list
- Add marketplace plugin test fixture and comprehensive tests
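The source-field indirection above can be sketched minimally. This is a hedged sketch: MarketplacePluginEntry is a hypothetical in-memory form (the real type deserializes marketplace.json), and only the source field comes from the description above.

```rust
use std::path::{Path, PathBuf};

// Hypothetical in-memory form of a marketplace.json plugin entry.
struct MarketplacePluginEntry {
    source: String,
}

// Resolve the effective plugin root nested under a marketplace directory.
// Relative sources join onto the marketplace root; the installer then copies
// only this directory rather than the entire repo.
fn effective_plugin_root(marketplace_dir: &Path, entry: &MarketplacePluginEntry) -> PathBuf {
    let src = Path::new(&entry.source);
    if src.is_absolute() {
        src.to_path_buf()
    } else {
        marketplace_dir.join(src)
    }
}
```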
- test_format_path_for_display_no_home: explicitly set home to None instead of relying on Faker-generated random value
- test_hook_exit_before_prompt_response_does_not_hang: increase elapsed threshold from 3s to 4s to account for parallel test load
No description provided.