- Expand PROMPT_EXPLORATION_TOOLS with knowledge management instructions
- Add knowledge and trajectory search tools to subagent documentation
- Enhance EXTRACTION_PROMPT with structured overview and title generation
- Increase minimum message threshold from 4 to 10 for trajectory processing
- Add TrajectoryMeta struct to capture overview and auto-generated titles
- Refactor extract_memos to extract_memos_and_meta for dual extraction
- Support dynamic title updates for auto-generated trajectory titles
- Add title hint when current title is auto-generated
Import memories module and create enriched memory entries from subagent execution results with appropriate tags and metadata. This allows subagent tasks to be persisted and retrieved for future context.
Update create_knowledge tool description to mention "Use it if you need to remember something" for better user understanding.
Implement automatic context enrichment in AGENT chat mode by injecting relevant knowledge and trajectories before the user message. This includes:
- New knowledge_enrichment module with signal-based heuristics to determine when enrichment should occur (first message, error keywords, file refs, etc.), sketched below
- Enhanced memories_search to support separate top_n for knowledge vs trajectories
- Score field added to MemoRecord for relevance filtering
- Pre-stream messages passed through restream for UI display
- Updated tool descriptions to mention trajectory search capability
- Removed standalone search_trajectories tool (now integrated into knowledge)

Refs #123
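A minimal sketch of the kind of signal-based heuristic described above, assuming a hypothetical `should_enrich` helper and keyword list that are not the actual `knowledge_enrichment` API:

```rust
/// Hypothetical heuristic: enrich on the first message, on error-looking text,
/// or when the user appears to reference a file. Names and signals are illustrative.
fn should_enrich(is_first_message: bool, user_text: &str) -> bool {
    const ERROR_KEYWORDS: [&str; 4] = ["error", "panic", "exception", "traceback"];
    if is_first_message {
        return true;
    }
    let lower = user_text.to_lowercase();
    let mentions_error = ERROR_KEYWORDS.iter().any(|k| lower.contains(k));
    // crude file-reference signal: any token with a path separator or a dotted extension
    let mentions_file = user_text
        .split_whitespace()
        .any(|t| t.contains('/') || (t.contains('.') && !t.ends_with('.')));
    mentions_error || mentions_file
}

fn main() {
    assert!(should_enrich(true, ""));
    assert!(should_enrich(false, "why does src/main.rs panic here?"));
    assert!(!should_enrich(false, "sounds good to me"));
}
```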
Enhance tool output messages by appending knowledge base save confirmation when enriched memories are successfully created. This provides users with visibility into where their tool results are being persisted.

Changes:
- Capture memory save path from memories_add_enriched result
- Append formatted memory note to final tool output messages
- Applied to deep_research, strategic_planning, and subagent tools

Refs #123
Trajectories tools
…ed error handling

Replace EventSource-based chat subscription with fetch-based streaming for better control and error handling. Refactor chat commands to use UUID v4 for request IDs and add comprehensive protocol validation tests. Add queue_size field to thread runtime state.

- Migrate subscribeToChatEvents from EventSource to fetch API
- Implement sequence number validation with gap detection
- Add reconnection logic with configurable delays
- Refactor sendChatCommand with improved error handling
- Add comprehensive SSE protocol tests
- Add chat validation tests
…nt logic

Refactor message handling in `run_llm_generation` to improve preamble injection and knowledge enrichment. Move knowledge enrichment after preamble setup and add session synchronization for enriched context files. Simplify system prompt logic in `prepend_the_right_system_prompt_and_maybe_more_initial_messages` by checking for cd_instruction presence upfront.
Add new subchat_update event type to handle subchat creation and file attachment tracking. Implement subchat bridge in tool execution to emit updates when subchats are spawned, and update tool calls with subchat IDs and attached files in the reducer.
Extract message serialization into a variable for clarity and move title generation logic from handle_v1_trajectories_save into save_trajectory_snapshot to reduce code duplication and improve separation of concerns.
…l support

Remove CodeCompletionReplaceScratchpad and CodeCompletionReplacePassthroughScratchpad implementations along with their dependencies (comments_parser module). Simplify known_models.json to only include actively supported models (FIM-PSM/FIM-SPM based completions). Update scratchpad factory to only support FIM-PSM and FIM-SPM modes, removing REPLACE and REPLACE_PASSTHROUGH variants. This reduces codebase complexity and focuses on maintained completion strategies.
Extract LLM stream handling logic from generation.rs into a new stream_core module to enable reuse across chat and subchat implementations. Introduce StreamCollector trait for flexible result handling, ChoiceFinal struct for accumulating choice results, and StreamRunParams for parametrizing stream runs. This refactoring reduces code duplication and improves maintainability by centralizing stream processing logic.
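As a rough illustration of the shapes involved (not the actual definitions in `stream_core`, which carry more fields for tool calls, usage, and finish handling), the collector trait could look like this:

```rust
#[derive(Default, Debug)]
struct ChoiceFinal {
    content: String,
    finish_reason: Option<String>,
}

#[allow(dead_code)]
struct StreamRunParams {
    temperature: f32,
    max_new_tokens: usize,
}

trait StreamCollector {
    /// Called for every streamed delta; returning false aborts the run.
    fn on_delta(&mut self, choice_index: usize, delta: &str) -> bool;
    /// Called once per choice when the stream is finished.
    fn on_final(&mut self, choice_index: usize, final_choice: ChoiceFinal);
}

struct PlainTextCollector {
    text: String,
}

impl StreamCollector for PlainTextCollector {
    fn on_delta(&mut self, _choice_index: usize, delta: &str) -> bool {
        self.text.push_str(delta);
        true
    }
    fn on_final(&mut self, _choice_index: usize, final_choice: ChoiceFinal) {
        println!("finish_reason = {:?}", final_choice.finish_reason);
    }
}

fn main() {
    let mut collector = PlainTextCollector { text: String::new() };
    for delta in ["Hel", "lo"] {
        collector.on_delta(0, delta);
    }
    collector.on_final(0, ChoiceFinal { content: collector.text.clone(), finish_reason: Some("stop".into()) });
}
```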
- Add CaptureBuffer for smart output truncation with head/tail strategy (sketched below)
- Add RowLimiter for database query result limiting (100 rows, 200 chars/cell)
- Add pp_guidance module with standardized truncation messages
- Update integrations (cmdline, docker, mysql, postgres, shell) to use new limiters
- Add OutputFilter.skip flag and parse_output_filter_args for dynamic filtering
- Improve error messages with actionable hints (⚠️ emoji + 💡 suggestions)
- Update tool error messages (search, tree, ast_definition, regex_search, web, etc.)
- Add deduplication and merging of context files in pp_tool_results
- Fix edge cases in pp_context_files and pp_utils
- Improve shell command streaming with CaptureBuffer instead of Vec
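The head/tail idea behind CaptureBuffer, as a standalone sketch (the function name, limits, and wording of the truncation note are illustrative, not the actual pp_guidance messages):

```rust
/// Keep the first and last chunks of a long output and summarize what was dropped.
fn truncate_head_tail(lines: &[String], keep_head: usize, keep_tail: usize) -> Vec<String> {
    if lines.len() <= keep_head + keep_tail {
        return lines.to_vec();
    }
    let skipped = lines.len() - keep_head - keep_tail;
    let mut out = Vec::with_capacity(keep_head + keep_tail + 1);
    out.extend_from_slice(&lines[..keep_head]);
    out.push(format!("⚠️ {skipped} lines skipped 💡 narrow the command or add a filter to see them"));
    out.extend_from_slice(&lines[lines.len() - keep_tail..]);
    out
}

fn main() {
    let lines: Vec<String> = (1..=1000).map(|i| format!("row {i}")).collect();
    let shown = truncate_head_tail(&lines, 20, 20);
    assert_eq!(shown.len(), 41); // 20 head + 1 note + 20 tail
}
```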
- Replace CaptureBuffer Tail strategy with HeadAndTail for better context
- Add empty string handling in path trimming for edge cases
- Improve error messages with ⚠️ emoji and 💡 actionable hints
- Enhance str_replace validation with empty string checks
- Update tool error messages (create_textdoc, update_textdoc, mv, rm, cat, regex_search, trajectory_context)
- Fix scope path separator handling in resolve_scope and create_scope_filter
- Improve cat() image limiting message and line range validation
- Add #[allow(dead_code)] annotations to unused RowLimiter methods
- Simplify mv() error handling and remove cross-device fallback logic
Extract common path and argument parsing logic into reusable helpers (parse_path_for_create, parse_path_for_update, parse_string_arg, parse_bool_arg) to reduce duplication across create_textdoc, update_textdoc, update_textdoc_by_lines, and update_textdoc_regex tools. Add edit_result_summary helper for consistent operation feedback with emoji and line count deltas. Update tool execution signatures to return summary string as fourth element in result tuple.
Extract common path and argument parsing logic into reusable helpers (parse_path_for_create, parse_path_for_update, parse_string_arg, parse_bool_arg) to reduce duplication across create_textdoc, update_textdoc, update_textdoc_by_lines, and update_textdoc_regex tools. Add edit_result_summary helper for consistent operation feedback with emoji and line count deltas. Update tool execution signatures to return summary string as fourth element in result tuple. Introduce new tools: update_textdoc_anchored (anchor-based editing), apply_patch (unified diff), and undo_textdoc (session undo history). Add undo_history module to track file edits with bounded memory. Enhance error messages with emoji hints and improve line ending handling.
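A sketch of what such shared helpers might look like, assuming serde_json argument values; the helper names match the commits above, but the exact signatures and the emoji wording of the summary are guesses:

```rust
use serde_json::Value;

/// Illustrative only: the real helpers likely return richer errors and handle more types.
fn parse_string_arg(args: &Value, name: &str, required: bool) -> Result<Option<String>, String> {
    match args.get(name) {
        Some(Value::String(s)) => Ok(Some(s.clone())),
        Some(other) => Err(format!("argument `{name}` must be a string, got {other}")),
        None if required => Err(format!("argument `{name}` is required")),
        None => Ok(None),
    }
}

/// Consistent operation feedback with a line-count delta, as described above.
fn edit_result_summary(path: &str, lines_added: usize, lines_removed: usize) -> String {
    format!("✅ edited {path}: +{lines_added} -{lines_removed} lines")
}

fn main() -> Result<(), String> {
    let args: Value = serde_json::json!({ "path": "src/lib.rs", "dry_run": true });
    let path = parse_string_arg(&args, "path", true)?.expect("required arg present");
    println!("{}", edit_result_summary(&path, 12, 3));
    Ok(())
}
```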
Collect and render diff messages that follow assistant messages directly in the assistant rendering block, before usage info. This improves the message flow by keeping diffs visually associated with their assistant context rather than processing them as separate top-level messages. Also simplify undo_textdoc response to send only diff chunks without the summary wrapper.
- Add `update_textdoc_anchored`, `apply_patch`, and `undo_textdoc` to PATCH_LIKE_FUNCTIONS constant for better patch operation detection
- Improve ChatContent handling in history_limit.rs to support both ContextFiles variant and SimpleText parsing for robustness
- Make string truncation UTF-8 safe across multiple files using char_indices to prevent panics on multibyte character boundaries (sketched below)
- Improve postprocess_tool_results to collect and report context file notes separately as structured feedback
- Add null safety check in ToolConfirmation.tsx for lastAssistantMessage to prevent runtime errors
- Improve trajectory message extraction with proper UTF-8 aware truncation in vdb_trajectory_splitter.rs
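The char_indices-based truncation mentioned above, in isolation (the helper name is illustrative):

```rust
/// Truncate to at most `max_bytes` without slicing inside a multibyte character.
fn truncate_utf8(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // last char boundary that is <= max_bytes
    let cut = s
        .char_indices()
        .map(|(i, _)| i)
        .take_while(|&i| i <= max_bytes)
        .last()
        .unwrap_or(0);
    &s[..cut]
}

fn main() {
    let s = "préfix with multibyte characters";
    // a naive &s[..3] would panic: byte 3 falls inside the two-byte 'é'
    let t = truncate_utf8(s, 3);
    assert_eq!(t, "pr");
    assert!(s.is_char_boundary(t.len()));
}
```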
…et allocation

- Refactor postprocess_tool_results to process text messages before context files and implement dynamic budget reallocation based on actual usage
- Change ToolBudget calculation: tokens_for_code now receives full budget total, tokens_for_text uses 30% ratio instead of 20%
- Update postprocess_context_file_results return type to include tokens_used for accurate budget tracking across processing stages
- Simplify context file notes collection: move notes into result vector instead of prepending to text messages
- Update test expectations to reflect new budget allocation ratios
…et allocation

- Process text messages before context files in postprocess_tool_results
- Implement dynamic budget reallocation based on actual token usage
- Update ToolBudget: tokens_for_code receives full budget, tokens_for_text uses 30% (sketched below)
- Modify postprocess_context_file_results to return tokens_used for tracking
- Simplify context file notes: move into result vector instead of prepending
- Update test expectations for new budget allocation ratios
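A rough sketch of the split described above, with an illustrative struct; only the process-text-first order, the 30% ratio, and the reallocation idea come from the commits:

```rust
/// Illustrative ToolBudget: text is post-processed first with a 30% share,
/// and whatever it does not consume flows back into the code/context-file budget.
struct ToolBudget {
    total: usize,
}

impl ToolBudget {
    fn tokens_for_text(&self) -> usize {
        (self.total as f64 * 0.30) as usize
    }
    fn tokens_for_code(&self, tokens_used_by_text: usize) -> usize {
        self.total.saturating_sub(tokens_used_by_text)
    }
}

fn main() {
    let budget = ToolBudget { total: 10_000 };
    let text_cap = budget.tokens_for_text();
    assert_eq!(text_cap, 3_000);
    let text_used = text_cap.min(1_200); // text messages only needed 1200 tokens
    assert_eq!(budget.tokens_for_code(text_used), 8_800);
}
```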
Replace array-building approach with early returns for clearer control flow when combining reasoning content and thinking blocks.
…orage

Enable searching and loading knowledge bases and trajectories across multiple workspace directories instead of just the first one. This allows users to organize their projects in multiple locations while maintaining a unified knowledge and trajectory system.

Changes:
- Add get_all_*_dirs() functions to retrieve all workspace directories (sketched below)
- Update search/load functions to iterate across all directories
- Simplify knowledge enrichment return type and error handling
- Use ContextFiles format for knowledge context messages
- Support trajectory discovery across all workspace roots
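The multi-root lookup could be as simple as the sketch below; the `.refact/knowledge` subpath and the function name are assumptions, not the actual layout:

```rust
use std::path::PathBuf;

/// Every workspace root contributes its own knowledge directory,
/// instead of only the first root being consulted.
fn get_all_knowledge_dirs(workspace_roots: &[PathBuf]) -> Vec<PathBuf> {
    workspace_roots
        .iter()
        .map(|root| root.join(".refact").join("knowledge")) // assumed subpath
        .filter(|dir| dir.is_dir())
        .collect()
}

fn main() {
    let roots = vec![PathBuf::from("/work/project-a"), PathBuf::from("/work/project-b")];
    for dir in get_all_knowledge_dirs(&roots) {
        println!("searching {}", dir.display());
    }
}
```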
…ent support

Add new tools for searching and retrieving past conversations:
- search_trajectories: find relevant past conversations by query
- get_trajectory_context: retrieve full context from a specific trajectory

Enhance context file display to support knowledge enrichment:
- Add isEnrichment prop to ContextFiles component
- Categorize enrichment files (memories, trajectories, related)
- Display relevance scores and improved formatting

Update knowledge tool to focus on semantic search without trajectory mixing. Integrate trajectory vectorization into save pipeline for search indexing. Update configuration to use get_trajectory_context instead of trajectory_context. Improve UI for memory and trajectory display with better formatting and icons.
Exclude context files and tool results marked with "knowledge_enrichment" tool_call_id from compression stages 1 and 5 to maintain enriched context quality. Also clarify tool result filtering logic.
- Add file size and line count metadata to tree output
- Implement binary file and hidden directory filtering
- Add folder truncation for large directories (configurable max_files)
- Improve tree formatting with extension summaries
- Rename print_files_tree_with_budget to tree_for_tools for clarity
- Update UI to support project context variant in ContextFiles
- Add SystemPrompt component for displaying system messages
- Refactor at_tree.rs to use TreeNode with metadata fields
- Update tool_tree.rs to pass max_files parameter
- Improve git status filtering to exclude hidden files
- Add PROJECT_CONTEXT_MARKER constant for project context messages
- Simplify ToolConfirmation logic and fix array access pattern
- Remove AgentCapabilities UI, ToolUseSwitch, and hardcoded agent toggles
- Add chat-modes API endpoint returning modes with tools, UI tags, thread defaults
- Implement project customization editor (CRUD for modes/subagents/toolbox/code-lens)
- Replace ChatMode enum with mode_id string system using yaml_configs/defaults/modes/*.yaml
- Add subagent phases: gather_files_phase with configurable prompts/tools/retry logic
- Bootstrap .refact/ directories with default configs on project init
- Update tool selection to use get_tools_for_mode(registry, mode_id, model_id)
- Add ModeSelect UI component replacing ToolUseSwitch
- Registry cache with project_root invalidation
- Config subagents as first-class tools from yaml configs
- New defaults: code_review, deep_research, toolbox commands (shorter/bugs/etc), code lens
BREAKING CHANGE: Replaces legacy tool_use enum ("quick"/"explore"/"agent") with configurable modes
Add comprehensive customization editor supporting:
- Global (~/.config/refact/) and local (.refact/) config scopes
- Form-based editors for modes, subagents, toolbox_commands, code_lens
- Live YAML/JSON bidirectional editing with validation
- Default config bootstrapping with checksum validation
- Atomic file writes and cross-scope task/trajectory storage
- Text file attachments with inline code block rendering

Additional improvements:
- UnifiedSendButton with streaming/queued/resend states
- ChatSettingsDropdown combining model/context/reasoning controls
- ModeSelect popover with auto-scroll and post-message locking
- StreamingTokenCounter simplified to output-only display
- Task/trajectory storage now searches global+project directories
- ChatMode serde aliases (agent/explore/configure → lowercase)
- Add safe type guards (safeMessageArray, safeToolConfirmRules, etc.) and parsing utilities (parseIntSafe, parseFloatSafe) to configUtils
- Extract inline validation logic to shared utilities across forms
- Add validateConfigId with pattern matching for secure IDs
- Add 400+ line test suite covering all config utilities and serialization
- Improve editor stability with JSON.stringify() dependency tracking
- Add sorting to registry responses for consistent UI ordering
- Add project_root detection flag to customization API
- Fix early return handling in chat_modes endpoint
- Update ToolConfirmRule field naming (match_pattern → match) with backward compatibility
- Add extensive Rust tests for serialization, overrides, and matching logic

Closes #customization-safety
Apply consistent formatting across Customization components and related files by breaking long prop lists, imports, and complex expressions onto separate lines for improved readability. Includes ChatContent, SubagentForm, ModeForm, configUtils, and various other UI components.
… safety

- Add react-hooks/exhaustive-deps disable comments in RulesTableEditor and MessageListEditor to document intentional dependency omission for deep value comparison
- Extract selection range values with explicit unknown typing before narrowing in safeSelectionRange for better type safety

These changes address ESLint warnings and improve code robustness.
… commands

Add thread settings to automatically approve editing tools (like apply_patch) and dangerous commands (like shell rm -rf). Include Redux actions, reducer cases, selectors, middleware sync, type updates, and test fixtures. Also enable apply_patch tool by default and add tool_name to confirmation reasons for better UI display. Introduce OpenAI agent mode YAML.

Fixes #TODO
Backend:
- Add auto_approve_editing_tools and auto_approve_dangerous_commands to ThreadParams
- Add mode defaults support for new flags in ModeThreadDefaults
- Implement tool confirmation partitioning based on auto-approve flags
- Fix incremental tool decisions with accepted_tool_ids accumulation
- Scope tool execution to paused_message_index
- Clear all pause bookkeeping when pause is cleared via any path
- Apply mode defaults on new session creation
- Add /v1/project-information endpoints with path traversal protection
- Standardize mode casing to lowercase

Frontend:
- Add ChatInputTopControls with Project Info button and auto-approve toggles
- Add ProjectInformationDialog with per-section config and token estimation
- Show command details in tool confirmation UI for dangerous tools
- Fix "Allow for This Chat" race condition by awaiting set_params
- Convert preview endpoint to mutation to avoid cache explosion
- Handle new flags in reducer snapshot/thread_updated events

Mode defaults:
- agent mode: auto_approve_editing_tools=true, auto_approve_dangerous_commands=false
- other modes: both flags default to false
Add comprehensive task progress system for multi-step agent workflows:

**Backend:**
- New `tasks_set` tool with validation (max 100 tasks, unique IDs, length limits), sketched below
- Rust implementation with TaskItem/TaskStatus types
- Auto-approval configuration in agent.yaml
- Integrated into tools list

**Frontend:**
- TaskProgressWidget with collapsible UI, progress bar, status icons
- Redux state management (task_widget_expanded, selectors)
- deriveTasksFromMessages logic to parse from tool calls
- Comprehensive taskDerivation.test.ts (100+ tests)
- Real-time selectors for current tasks, progress, hasTasks

**Agent Instructions:**
- Detailed task tracking guidelines in agent.yaml
- Proper usage patterns for multi-step workflows

Supports pending/in_progress/completed/failed states with visual feedback.
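A sketch of the validation rules listed above; the 100-task cap comes from the commit, while the title length limit and exact field names here are illustrative:

```rust
use std::collections::HashSet;

#[derive(Debug, Clone, Copy)]
#[allow(dead_code)]
enum TaskStatus { Pending, InProgress, Completed, Failed }

struct TaskItem {
    id: String,
    title: String,
    #[allow(dead_code)]
    status: TaskStatus,
}

fn validate_tasks(tasks: &[TaskItem]) -> Result<(), String> {
    if tasks.len() > 100 {
        return Err("at most 100 tasks are allowed".to_string());
    }
    let mut seen = HashSet::new();
    for t in tasks {
        if !seen.insert(t.id.as_str()) {
            return Err(format!("duplicate task id `{}`", t.id));
        }
        if t.title.chars().count() > 200 { // illustrative length limit
            return Err(format!("task `{}` title is too long", t.id));
        }
    }
    Ok(())
}

fn main() {
    let tasks = vec![
        TaskItem { id: "1".into(), title: "collect context".into(), status: TaskStatus::Pending },
        TaskItem { id: "2".into(), title: "apply patch".into(), status: TaskStatus::InProgress },
    ];
    assert!(validate_tasks(&tasks).is_ok());
}
```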
- Add ESLint disable comment for control regex in sanitizeText
- Remove optional chaining from guaranteed function.name accesses
- Remove nullish coalescing from non-nullable arguments
- Add explicit Promise<void> types and await wrappers
- Convert query/mutation args to explicit undefined type
- Wrap async handlers in void expressions
- Memoize blocks array and simplify section mapping
Introduce `allow_parallel` field to ToolDesc and ToolConfig with security-aware YAML overrides (can only disable, not enable unsafe tools).

Key improvements:
- Per-tool mutex registry enables true parallelism
- Barrier scheduling: parallel tools batch together, non-parallel act as barriers (sketched below)
- Bounded concurrency (32 max) in regex_search prevents I/O overload
- Comprehensive test suite for scheduling, serialization, security policies
- Updated all tools with appropriate parallel settings (readers=parallel, mutators=sequential)
- GUI support for configuring parallelism in subagents

Maintains result order and aggregates corrections across batches.
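The barrier batching can be sketched as follows; this is illustrative, and the real scheduler additionally handles the per-tool mutexes and the 32-way concurrency cap:

```rust
struct ToolCall {
    #[allow(dead_code)]
    name: String,
    allow_parallel: bool,
}

/// Consecutive allow_parallel tools are grouped into one batch; any
/// non-parallel tool becomes its own batch and acts as a barrier.
fn batch_tool_calls(calls: Vec<ToolCall>) -> Vec<Vec<ToolCall>> {
    let mut batches: Vec<Vec<ToolCall>> = Vec::new();
    for call in calls {
        match batches.last_mut() {
            Some(last) if call.allow_parallel && last.iter().all(|c| c.allow_parallel) => {
                last.push(call);
            }
            _ => batches.push(vec![call]),
        }
    }
    batches
}

fn main() {
    let calls = vec![
        ToolCall { name: "cat".into(), allow_parallel: true },
        ToolCall { name: "regex_search".into(), allow_parallel: true },
        ToolCall { name: "apply_patch".into(), allow_parallel: false },
        ToolCall { name: "tree".into(), allow_parallel: true },
    ];
    let sizes: Vec<usize> = batch_tool_calls(calls).iter().map(|b| b.len()).collect();
    assert_eq!(sizes, vec![2, 1, 1]); // readers batch, mutator runs alone, then next batch
}
```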
- Replace Card with Box and ThinkingButton/ContextCapButton/SendButtonWithDropdown with UnifiedSendButton, ModeSelect, ChatSettingsDropdown
- Add context indicator with StreamingTokenCounter, UsageCounter, TrajectoryButton
- Integrate text file handling with AttachmentsPreview and reset logic
- Add thread mode management with useChatActions and setThreadMode dispatch
- Reorganize button layout and remove legacy components (TokensPreview, FileList, CapsSelect)
- Update submit logic to support text files alongside images
Introduce @file, @web, @tree, @search and other commands that render as interactive chips in chat input and message history. Chips support file opening, web links, and line ranges (e.g. @file main.rs:10-20; sketched below). Simplify ChatControls by moving checkboxes to ChatInputTopControls. Replace PlainText hover with collapsible UI. Add full @-command parser with tests and types.

Includes:
- TextAreaWithChips overlay rendering
- Line range parsing/formatting
- Smart filename deduplication in chips
- Code fence awareness
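The actual chip parser lives in the TypeScript front end; to keep this document's examples in one language, here is the line-range idea as a small Rust sketch (the function name is illustrative):

```rust
/// Parse "path:START-END" into a path plus an optional inclusive line range.
fn parse_file_ref(arg: &str) -> (String, Option<(u32, u32)>) {
    if let Some((path, range)) = arg.rsplit_once(':') {
        if let Some((a, b)) = range.split_once('-') {
            if let (Ok(start), Ok(end)) = (a.parse::<u32>(), b.parse::<u32>()) {
                return (path.to_string(), Some((start, end)));
            }
        }
    }
    (arg.to_string(), None)
}

fn main() {
    assert_eq!(parse_file_ref("main.rs:10-20"), ("main.rs".to_string(), Some((10, 20))));
    assert_eq!(parse_file_ref("src/lib.rs"), ("src/lib.rs".to_string(), None));
}
```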
…text

Replace separate AttachmentsPreview/FilesPreview with UnifiedAttachmentsTray using new AttachmentTile component with file type colors and copy-to-clipboard.

Introduce .refact/project_information.yaml config with per-section limits, per-file overrides, and token-aware truncation. Respects mode/thread settings.

Add task progress tracking (tasks_total/done/failed) to trajectories/history. Enhance TaskProgressWidget with StatusDot animations and staggered task list.

Add thread_defaults.auto_approve_(editing_tools|dangerous_commands) to modes. Replace ChatInputTopControls switches with icon-only HoverCard buttons. Improve command preview parsing and at-command chip logic.
- Add UI buttons (copy, branch, delete) to user/assistant messages on hover
- Implement branch chat feature: creates new chat copying messages up to selected point
- Implement delete message with cascading tool response removal
- Add backend support: BranchFromChat/RestoreMessages commands, thread metadata (parent/root)
- Integrate with existing chat queue and session management

Includes minor ScrollToBottomButton positioning fix.
- Remove react-syntax-highlighter/CodeBlock.tsx and highlightjs.css
- Add ShikiCodeBlock.tsx with useShiki hook for on-demand syntax highlighting
- Improve copy button UX with hover reveal and consistent styling
- Add task progress tracking (tasks_total/done/failed) to trajectory metadata
- Enhance user message rendering with simplified text/image separation
- Improve RetryForm UX with better keyboard handling and click-outside cancel
- Update UI components to use consistent ghost button styling
- Add safer link handling in Markdown with protocol validation
…stem

Remove legacy customization_compiled_in.yaml and customization_loader.rs, replacing with new customization_registry system using project configs.

Key changes:
- Extract hardcoded prompts to chat/prompt_snippets.rs
- Move subagent/toolbox configs to yaml_configs/defaults/
- Update all consumers (config_chat, project_summary_chat, subchat.rs, main.rs print_customization) to use registry APIs
- Add bootstrap for required subagent configs
- Improve registry validation with comprehensive tests

GUI improvements:
- New ToolCard system for consistent tool result rendering
- Specialized components for read/list/search/web/knowledge/edit tools
- Hover-based message footers (copy/branch/delete/usage)
- Unified icon button styling across components
Replace logical OR initialization with nullish coalescing operator (??) and update type definitions to allow undefined values for cleaner, more precise handling of optional tool context and diffs.
Filter undefined values from contextFilesByToolId and diffsByToolId in ChatContent to ensure clean Record<string, T[]> props. Improve Shiki hook type safety with explicit Highlighter annotations and proper promise handling with void return.

style: consistent unknown error typing
- Remove PromptSelect component and ChatControls (no longer shown after first message)
- Enhance ReasoningContent with collapsible UI, thinking duration tracking, and animations
- Add streaming state to AssistantInput and message rendering logic
- Update ContextFileList and ReadTool CSS to remove redundant backgrounds
- Improve message container structure for better streaming UX

Remove unused system prompt selector in favor of improved model reasoning display.
Replace backslashes with forward slashes in relative paths to ensure consistent path formatting across platforms.
… handling

Remove comprehensive git cleanup tests and object deletion code that risked corrupting active repositories by deleting reachable objects.

feat(checkpoints): add restore mode selector and improve session locking
- Add "files only" vs "files + messages" restore option in UI
- Extract checkpoint data before acquiring session lock to prevent deadlocks
- Pass chat_id/mode explicitly to frontend/backend checkpoint APIs
- Extract helper functions find_latest_checkpoint() and create_checkpoint_async()

fix(git/checkpoints): improve nested repo handling and typo fixes
- Better workdir handling when resetting nested repo indexes
- Fix "flatened" → "flattened" typos
- Remove unused git_operations_abort_flag access

feat(ui/retry): add model selector dropdown to RetryForm

style(main): ensure rayon thread pool has minimum 1 thread
Introduce new interaction tools for agent workflows:
- task_done: mark tasks complete, auto-save to knowledge base, notify IDE
- ask_questions: interactive questions (yes/no, select, free text), pauses generation

Add UI components, session states (waiting_user_input, completed), IDE event bus integration, and notification sidebar events. Update modes to support new tools. Remove deprecated default_customization.yaml and agentic field from ToolDesc. Update status indicators and chat rendering to hide auto-generated QA messages.
…ts channel

- Add handling for `ToolStepOutcome::Stop` in chat tests (treat as terminal state; sketched below)
- Initialize `notification_events_tx` channel in GlobalContext with capacity 256

BREAKING CHANGE: Introduces new `ToolStepOutcome::Stop` variant which terminates tool execution loops (previously unhandled).
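A sketch of how a terminal Stop outcome ends the tool loop; only the `Stop` variant comes from the change above, the other variants and the loop itself are illustrative:

```rust
enum ToolStepOutcome {
    Continue,
    WaitingUser,
    Stop,
}

fn run_tool_loop(outcomes: Vec<ToolStepOutcome>) -> &'static str {
    for outcome in outcomes {
        match outcome {
            ToolStepOutcome::Continue => continue,
            ToolStepOutcome::WaitingUser => return "paused",
            ToolStepOutcome::Stop => return "stopped", // terminal state
        }
    }
    "finished"
}

fn main() {
    assert_eq!(run_tool_loop(vec![ToolStepOutcome::Continue, ToolStepOutcome::Stop]), "stopped");
    assert_eq!(run_tool_loop(vec![ToolStepOutcome::WaitingUser]), "paused");
}
```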
…virtuoso

Replace custom ScrollAreaWithAnchor with VirtualizedChatList powered by react-virtuoso for better performance with long chat histories.

Key changes:
- Introduce buildDisplayItems() to create typed display items from messages
- Add VirtualizedChatList component with proper initial scroll behavior
- Implement useCollapsibleState hook for controlled collapsible states
- Memoize AssistantInput, UserInput, ContextFiles, and GroupedDiffs
- Add virtuoso-specific CSS styling
- Reset collapsible state when switching chats

Improves scroll performance and rendering efficiency for large message lists.
- Remove border-radius and background from resultContent across multiple tool cards
- Add consistent left padding to content containers
- Move file list display from meta prop to inline summary in ReadTool
- Add styling for inline file arguments in ReadTool
- Centralize scrollbar styles in shared/scrollbar.module.css and compose across tool cards
- Extract shared iconButton with variants (active, danger, stop, send, queue, priority)
- Add design tokens (z-index scale, motion timing, disabled opacity)
- Replace Framer Motion animations with lightweight CSS grid transitions using useDelayedUnmount hook
- Update tool result content styling with consistent padding and scrollbar composition
- Refactor z-index values to use CSS variables
…points

- Remove `forward_to_hf_endpoint.rs` and deprecate HF endpoint style
- Add pluggable LLM wire adapters (OpenAI Chat, OpenAI Responses, Anthropic)
- Introduce canonical `LlmRequest`/`LlmResponse`/`LlmStreamDelta` types
- Add `WireFormat` enum to model capabilities with provider defaults
- Migrate chat preparation/generation to new adapter system
- Add new conversation modes: ask, plan, review, debug, learn, shell, past_work, quick_agent
- Improve streaming parsing, logging, and error handling
- Enhance tool call safety with pending call verification

BREAKING CHANGE: HuggingFace endpoint style removed - use OpenAI-compatible endpoints instead
- Remove `forward_to_hf_endpoint.rs` and deprecate HF endpoint style
- Add pluggable LLM wire adapters (OpenAI Chat, OpenAI Responses, Anthropic)
- Introduce canonical `LlmRequest`/`LlmResponse`/`LlmStreamDelta` types
- Add `WireFormat` enum to model capabilities with provider defaults (sketched below)
- Migrate chat preparation/generation to new adapter system
- Add new conversation modes: ask, plan, review, debug, learn, shell, past_work, quick_agent
- Improve streaming parsing, logging, and error handling
- Enhance tool call safety with pending call verification
- Add `CacheControl`, `ResponseFormat` support and usage token parsing
- Add xAI Responses provider config

BREAKING CHANGE: HuggingFace endpoint style removed - use OpenAI-compatible endpoints instead
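One way to picture the WireFormat dispatch; the enum variants follow the commit, while the trait, method, and endpoint mapping are illustrative rather than the crate's actual adapter API:

```rust
enum WireFormat {
    OpenAiChat,
    OpenAiResponses,
    Anthropic,
}

trait WireAdapter {
    /// Which provider endpoint this adapter speaks to.
    fn endpoint_path(&self) -> &'static str;
}

struct OpenAiChatAdapter;
struct OpenAiResponsesAdapter;
struct AnthropicAdapter;

impl WireAdapter for OpenAiChatAdapter {
    fn endpoint_path(&self) -> &'static str { "/v1/chat/completions" }
}
impl WireAdapter for OpenAiResponsesAdapter {
    fn endpoint_path(&self) -> &'static str { "/v1/responses" }
}
impl WireAdapter for AnthropicAdapter {
    fn endpoint_path(&self) -> &'static str { "/v1/messages" }
}

fn adapter_for(format: &WireFormat) -> Box<dyn WireAdapter> {
    match format {
        WireFormat::OpenAiChat => Box::new(OpenAiChatAdapter),
        WireFormat::OpenAiResponses => Box::new(OpenAiResponsesAdapter),
        WireFormat::Anthropic => Box::new(AnthropicAdapter),
    }
}

fn main() {
    for f in [WireFormat::OpenAiChat, WireFormat::OpenAiResponses, WireFormat::Anthropic] {
        println!("{}", adapter_for(&f).endpoint_path());
    }
}
```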
- Remove scope parameter from search_files_with_regex, use files_to_search slice directly
- Add limits: MAX_PATH_MATCHES_TO_LIST (25), MAX_PATH_MATCHES_TO_ATTACH (10), PATH_MATCH_PREVIEW_LINES (30)
- Limit path matches listing and attachment to prevent context bloat
- Update recursive call to pass files_in_scope instead of scope string

refactor(memories): add source_chat_id tracking and exclusion logic
- Add source_chat_id field to MemoRecord and EnrichmentParams
- Track root chat ID when creating enriched memories across tools
- Exclude current chat's knowledge from memory search using source_chat_id
- Move root resolution to function start, pass through fallback

refactor(postprocessing): improve tool budget handling
- Cap total tool budget at MAX_TOOL_BUDGET (16384) for large contexts
- Add MAX_PER_FILE_BUDGET (2048) and better budget distribution logic
- Improve skip_pp file handling with clearer budget warnings
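The two caps above can be read as constants bounding a per-call token budget; in the sketch below, the one-third context share is an assumption, only MAX_TOOL_BUDGET and MAX_PER_FILE_BUDGET come from the commit:

```rust
const MAX_TOOL_BUDGET: usize = 16_384;
const MAX_PER_FILE_BUDGET: usize = 2_048;

/// Total tool budget is a slice of the context window, clamped by MAX_TOOL_BUDGET;
/// each attached file gets an equal share, clamped by MAX_PER_FILE_BUDGET.
fn tool_budget(context_window: usize, n_files: usize) -> (usize, usize) {
    let total = (context_window / 3).min(MAX_TOOL_BUDGET); // one-third split is illustrative
    let per_file = if n_files == 0 {
        0
    } else {
        (total / n_files).min(MAX_PER_FILE_BUDGET)
    };
    (total, per_file)
}

fn main() {
    assert_eq!(tool_budget(200_000, 10), (16_384, 1_638));
    assert_eq!(tool_budget(24_000, 2), (8_000, 2_048));
}
```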
… & mode transitions Remove legacy openai_convert.rs (676 lines) and subchat HTTP endpoints as message conversion is now handled directly in LLM adapters (OpenAI Chat, Anthropic, Refact). Add per-thread sampling parameters (temperature, frequency_penalty, max_tokens, parallel_tool_calls, reasoning_effort) with GUI controls and backend persistence. Add mode transition system: - New `mode_transition` subagent extracts context from existing chats - `/trajectory/mode-transition/apply` endpoint creates linked child chats - GUI mode selector now supports switching/restarting with context preservation - New `canonical_mode_id()`/`normalize_mode_id()`/`is_agentic_mode_id()` utilities Update provider configs with model-specific defaults (default_max_tokens, default_frequency_penalty) and comprehensive chat model definitions. Other: - Extract LLM logging/sanitization utilities - Add `eof_is_done` model capability - Preserve citations in chat sanitization - Mode badges with consistent colors in UI - Update mode YAML files to schema_version: 4 with enhanced `ask_questions()` guidance and subagent delegation patterns BREAKING CHANGE: Legacy ChatMode enum removed - all modes now use string IDs. Update any hardcoded enum usage to use canonical_mode_id().