Releases: Mage212/supercoder

v0.3.0

21 Apr 20:47

Native API Tool Calling: Migrated from text-based streaming parsing to native OpenAI-compatible tools parameter. More reliable, no format-dependent parsing. Streaming mode still available via --stream.
Interactive Session Picker: /continue now shows an arrow-key navigable session list with relative timestamps ("5m ago") and message counts. No more typing numbers.
Visual Session History: Restored sessions render full conversation history with the same styling as live output — tool calls interleaved with results, reasoning blocks, markdown responses.
Message Display Types: Each message now stores its role (user_input, thinking, response, tool_call, tool_result, error) for accurate session restoration. Backward compatible with old sessions.
Fuzzy Matching for Edits: Three-tier edit matching — exact → whitespace-normalized → fuzzy (SequenceMatcher) — so local/weak models with formatting inconsistencies still apply edits correctly.
Lean Prompt Mode: Optional 75% shorter system prompts for weak/local models via lean: true in model profile config.
Parser Hardening: Fixed 4 Qwen3.5-4B failure modes — missing closing tags, single-quoted JSON, extra characters, and hallucinated tool names.
Live Generation Progress: Spinner shows token count, phase label ("response" / "tool call"), and elapsed time during generation. Works even when providers buffer output.
Streaming Abort: Double-ESC now works during active API streaming, with checkpoint rollback on abort.
Inline Auto-suggest: Gray-text suggestions for slash commands while typing. Enter to accept.
Command Approval Menu: Instant key-press approval ([y]/[a]/[n]) for shell commands instead of typing.
Interactive Setup Wizard: On first launch, a guided TUI wizard configures provider, model, context size, and API key.
Full Traceback Logging: All exceptions logged with complete Python tracebacks to ~/.supercoder/logs/ (JSONL format).
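The native tool-calling migration can be illustrated with a minimal sketch of an OpenAI-compatible `tools` entry. The `read_file` tool name and its schema are illustrative assumptions, not supercoder's actual tool definitions:

```python
def make_tool(name: str, description: str, parameters: dict) -> dict:
    """Wrap a JSON-Schema parameter spec in the Chat Completions tool format."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }

# Hypothetical tool; the real project's tool names may differ.
read_file_tool = make_tool(
    "read_file",
    "Read a file from the workspace.",
    {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
)
# A request would carry tools=[read_file_tool, ...]; the model then returns
# structured tool_calls objects instead of text that must be parsed.
```

Because the provider returns structured tool calls, there is no format-dependent parsing of streamed text, which is what the release note means by "more reliable".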
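The three-tier edit matching might look like the following sketch, assuming a helper that locates an edit target inside a source file (the function name, windowing strategy, and 0.85 threshold are illustrative assumptions):

```python
import re
from difflib import SequenceMatcher

def find_edit_target(source: str, target: str, threshold: float = 0.85):
    """Return the (start, end) span of `target` in `source` using three tiers:
    exact -> whitespace-normalized -> fuzzy (SequenceMatcher)."""
    # Tier 1: exact substring match.
    idx = source.find(target)
    if idx != -1:
        return idx, idx + len(target)

    norm = lambda s: re.sub(r"\s+", " ", s).strip()
    norm_target = norm(target)
    lines = source.splitlines(keepends=True)
    n = target.count("\n") + 1  # compare line windows the size of the target

    # Tier 2: match with runs of whitespace collapsed (indentation drift).
    for i in range(len(lines) - n + 1):
        window = "".join(lines[i:i + n])
        if norm(window) == norm_target:
            start = sum(len(l) for l in lines[:i])
            return start, start + len(window)

    # Tier 3: fuzzy — best-scoring window above a similarity threshold.
    best_score, best_span = 0.0, None
    for i in range(len(lines) - n + 1):
        window = "".join(lines[i:i + n])
        score = SequenceMatcher(None, norm(window), norm_target).ratio()
        if score > best_score:
            start = sum(len(l) for l in lines[:i])
            best_score, best_span = score, (start, start + len(window))
    return best_span if best_score >= threshold else None
```

The tiered fallback is what lets weak or local models that mangle indentation or spacing still land their edits, while an unrelated target correctly returns no match.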
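The JSONL traceback logging could be sketched as below; the `errors.jsonl` filename and the field names are assumptions, only the `~/.supercoder/logs/` directory comes from the release notes:

```python
import json
import traceback
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path.home() / ".supercoder" / "logs"  # directory from the release notes

def log_exception(exc: BaseException, log_dir: Path = LOG_DIR) -> dict:
    """Append one JSON object per exception (with full traceback) to a JSONL file."""
    log_dir.mkdir(parents=True, exist_ok=True)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "type": type(exc).__name__,
        "message": str(exc),
        "traceback": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
    }
    with open(log_dir / "errors.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

One JSON object per line keeps the log greppable and streamable without loading the whole file.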

v0.2.6

21 Jan 11:58

Release v0.2.6: GLM-4 support

v0.2.5

07 Jan 19:55

Fix local model support and Qwen parser

v0.2.4

20 Dec 12:25

Release v0.2.4: Atomic writes, Checkpoints, and double-ESC interruption

v0.2.3

20 Dec 09:17

Implement Ask mode with /ask and /code commands

  • Add agent_modes.py with AgentMode enum (CODE/ASK)
  • Add mode-aware tool filtering for ask mode
  • Add /ask and /code REPL commands
  • Fix tool calling prompts (fileName, filepath)
  • Enhance CODE mode prompt with explicit tool instructions
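The AgentMode enum and mode-aware filtering might be sketched as follows; the tool names in the registry are hypothetical, only the CODE/ASK modes come from the release notes:

```python
from enum import Enum

class AgentMode(Enum):
    CODE = "code"  # full tool access
    ASK = "ask"    # read-only: no file edits or shell commands

# Hypothetical tool registry; names are illustrative, not the project's actual tools.
ALL_TOOLS = {"read_file", "write_file", "edit_file", "run_shell", "search"}
READ_ONLY_TOOLS = {"read_file", "search"}

def tools_for_mode(mode: AgentMode) -> set:
    """Filter the tool set so ASK mode cannot modify the workspace."""
    return set(ALL_TOOLS) if mode is AgentMode.CODE else ALL_TOOLS & READ_ONLY_TOOLS
```

Filtering the tool list before it is sent to the model, rather than rejecting calls afterwards, means an ASK-mode model never even sees mutating tools.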

v0.2.2

19 Dec 13:52

  • Enhanced chat interface with styled user/assistant messages
  • EOF-based input handling for interactive commands to prevent hangs
  • Custom HTTP headers with tool name and repository link
  • New tool_calling_type config parameter for model-specific tool calling
  • Updated README with version badges and What's New section

v0.2.1

18 Dec 11:40

Add Session Persistence logic

v0.2.0 - Python Migration Release

18 Dec 10:31

Pre-release

First stable Python release after the migration from Scala.
Full feature parity with the Scala version, plus enhanced LLM support.