Conversation
Pull request overview
This pull request introduces the applywithllm command to Shepherd, enabling automated code changes using Large Language Models (OpenAI GPT and Groq). The feature supports both single-file modifications and multi-repository migrations with LLM-powered code generation.
Key Changes:
- New LLM service integration supporting OpenAI and Groq providers
- New `applywithllm` command with single-file and repo-based modes
- Configuration schema updates to support LLM settings in migration specs
- Comprehensive documentation including quickstart guide and reference docs
Reviewed changes
Copilot reviewed 14 out of 16 changed files in this pull request and generated 28 comments.
| File | Description |
|---|---|
| src/services/llm.ts | Core LLM service with OpenAI and Groq provider implementations |
| src/services/llm.test.ts | Unit tests for LLM service functionality |
| src/commands/applywithllm.ts | Main command implementation for LLM-based code modifications |
| src/commands/applywithllm.test.ts | Test suite for applywithllm command |
| src/cli.ts | CLI registration for the new applywithllm command |
| src/util/migration-spec.ts | Extended migration spec interface to support LLM configuration |
| src/migration-context.ts | Added LLM config to migration context interface |
| package.json | Added groq-sdk dependency |
| package-lock.json | Dependency lockfile updates for groq-sdk and related packages |
| docs/applywithllm.md | Comprehensive reference documentation for the command |
| APPLYWITHLLM_QUICKSTART.md | Quick start guide with examples and best practices |
| README.md | Updated with applywithllm command overview |
| examples/apply-code-with-llm/shepherd.yml | Example migration configuration using LLM |
| openai_response.json | Sample OpenAI API response for testing/demonstration |
| llm_response.json | Sample LLM response structure for testing/demonstration |
APPLYWITHLLM_QUICKSTART.md (outdated)

```bash
# Export the API key (add to .bashrc or .zshrc for persistence)
export GROQ_API_KEY="sk-your-openai-api-key-here"
```
The documentation says to set GROQ_API_KEY, but the example value in the comment is "sk-your-openai-api-key-here". Groq API keys start with "gsk_", not "sk-". This is misleading and should be corrected to use the proper Groq key format, e.g. "gsk_...".
```diff
-export GROQ_API_KEY="sk-your-openai-api-key-here"
+export GROQ_API_KEY="gsk_your-groq-api-key-here"
```
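To make this kind of mismatch easier to catch, a small shell guard like the following (hypothetical, not part of the PR) could warn when the exported key doesn't carry Groq's `gsk_` prefix:

```shell
# Hypothetical guard: Groq keys start with "gsk_"; warn if the exported key doesn't.
if [ "${GROQ_API_KEY#gsk_}" = "$GROQ_API_KEY" ]; then
  echo "warning: GROQ_API_KEY does not start with gsk_ (is this an OpenAI key?)" >&2
fi
```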
> ## Troubleshooting
>
> ### "GROQ_API_KEY environment variable is not set"
>
> ```bash
> # Solution: Export your API key
> export GROQ_API_KEY="sk-your-key"
> ```
>
> ### "Diff validation failed"
>
> The LLM may have generated an invalid diff. Try:
>
> 1. Use `--dry-run` first to see the error
> 2. Refine your prompt to be more specific
> 3. Use a simpler, more targeted prompt
> 4. Check if the prompt format is correct (with `@files`)
>
> ### "File not found: src/example.ts"
>
> Ensure:
>
> - File paths are relative to repository root
> - Spell file names correctly
> - Files are actually committed (not untracked)
>
> ### "LLM API error"
The documentation section describes Groq configuration but references OpenAI API keys and GPT models throughout. This entire section appears to have copy-paste errors where GROQ was substituted for OPENAI without updating the key format ("sk-" prefix is for OpenAI, not Groq which uses "gsk_") or model names (GPT models are OpenAI, not Groq).
@copilot open a new pull request to apply changes based on this feedback
```typescript
// Save response to file for debugging
const responseFile = path.join(process.cwd(), 'openai_response.json');
await fs.writeFile(responseFile, JSON.stringify(data, null, 2), 'utf-8');
console.log(`OpenAI response saved to ${responseFile}`);
```
The OpenAIProvider unconditionally writes debug files to the current working directory. This should only happen when DEBUG_LLM is enabled, or should be removed entirely as it can clutter the filesystem and may expose sensitive data in production environments.
```diff
-// Save response to file for debugging
-const responseFile = path.join(process.cwd(), 'openai_response.json');
-await fs.writeFile(responseFile, JSON.stringify(data, null, 2), 'utf-8');
-console.log(`OpenAI response saved to ${responseFile}`);
+// Save response to file for debugging (only when DEBUG_LLM is enabled)
+if (process.env.DEBUG_LLM === 'true') {
+  const responseFile = path.join(process.cwd(), 'openai_response.json');
+  await fs.writeFile(responseFile, JSON.stringify(data, null, 2), 'utf-8');
+  console.log(`OpenAI response saved to ${responseFile}`);
+}
```
```typescript
// Handle plain text response - remove markdown code fences if present
let modifiedContent = content.trim();
// Remove markdown code fences if present
```
The comment on line 83 is redundant as it duplicates the comment on line 81. This redundancy should be removed for clarity.
```diff
-// Remove markdown code fences if present
```
```typescript
// Handle plain text response - remove markdown code fences if present
let modifiedContent = content.trim();
// Remove markdown code fences if present
```
The comment on line 205 is redundant as it duplicates the comment on line 203. This redundancy should be removed for clarity.
```diff
-// Remove markdown code fences if present
```
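For context, the fence-stripping step these duplicated comments describe could be factored into a small helper; the function name and regex below are illustrative, not taken from the PR:

```typescript
// Illustrative helper: strip a markdown code fence wrapping an entire LLM reply.
// If no wrapping fence is found, the trimmed content is returned unchanged.
function stripCodeFences(content: string): string {
  const trimmed = content.trim();
  // Matches ```lang\n...\n``` around the whole string; the language tag is optional.
  const fenced = trimmed.match(/^```[a-zA-Z]*\n([\s\S]*?)\n```$/);
  return fenced ? fenced[1] : trimmed;
}
```

With a helper like this, a single call site replaces both the trim and the fence check, so there is only one comment to maintain.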
```typescript
});

it('should return provider with provided API key', () => {
  const provider = getLLMProvider('test-key');
```
The test calls getLLMProvider with an API key, but the implementation now checks environment variables (OPENAI_API_KEY, GROQ_API_KEY) and ignores the apiKey parameter unless it's used as OPENAI_API_KEY. This test may not behave as expected since neither environment variable is set, and will likely throw an error rather than returning a provider.
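The behavior the comment describes — environment variables taking precedence over the `apiKey` argument — can be sketched as a simplified stand-in (not the PR's actual implementation):

```typescript
// Simplified stand-in for the selection logic described above: the provider is
// chosen from environment variables, and the apiKey parameter is ignored.
type Provider = { name: string; apiKey: string };

function getLLMProvider(_apiKey?: string): Provider {
  if (process.env.OPENAI_API_KEY) {
    return { name: 'openai', apiKey: process.env.OPENAI_API_KEY };
  }
  if (process.env.GROQ_API_KEY) {
    return { name: 'groq', apiKey: process.env.GROQ_API_KEY };
  }
  // With neither variable set, the call throws instead of returning a provider,
  // which is why a test passing only an argument would fail.
  throw new Error('No LLM API key found in environment');
}
```

A test written against this behavior would set and restore the relevant environment variable rather than relying on the `apiKey` argument.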
```typescript
// Save LLM response to shepherd directory
const shepherdResponsePath = path.join(process.cwd(), 'llm_response.json');
await fs.writeFile(shepherdResponsePath, JSON.stringify(llmResponse, null, 2), 'utf-8');
repoLogs.push(`LLM response saved to ${shepherdResponsePath}`);
```
The applywithllm command unconditionally saves LLM responses to the current working directory. This can clutter the filesystem and potentially expose sensitive data. This should be conditional on DEBUG_LLM environment variable or removed entirely.
```diff
-// Save LLM response to shepherd directory
-const shepherdResponsePath = path.join(process.cwd(), 'llm_response.json');
-await fs.writeFile(shepherdResponsePath, JSON.stringify(llmResponse, null, 2), 'utf-8');
-repoLogs.push(`LLM response saved to ${shepherdResponsePath}`);
+if (process.env.DEBUG_LLM === 'true') {
+  // Save LLM response to shepherd directory (debug only)
+  const shepherdResponsePath = path.join(process.cwd(), 'llm_response.json');
+  await fs.writeFile(shepherdResponsePath, JSON.stringify(llmResponse, null, 2), 'utf-8');
+  repoLogs.push(`LLM response saved to ${shepherdResponsePath}`);
+}
```
```diff
@@ -0,0 +1,101 @@
+import { getLLMProvider, readFilesForContext, GroqProvider } from './llm';
+import fs from 'fs-extra';
+import path from 'path';
```
Unused import `path`.
```diff
-import path from 'path';
```
```typescript
const mockFsPathExists = fs.pathExists as jest.MockedFunction<typeof fs.pathExists>;
```
Unused variable `mockFsPathExists`.
```diff
-const mockFsPathExists = fs.pathExists as jest.MockedFunction<typeof fs.pathExists>;
```
@kavitha186 I've opened a new pull request, #1068, to work on those changes. Once the pull request is ready, I'll request review from you. |
Force-pushed from cba6174 to 6f79941 (Compare).
This pull request introduces comprehensive support for the new `applywithllm` command in Shepherd, enabling automated code changes using Large Language Models (LLMs) such as OpenAI's GPT and Groq. It adds documentation, usage examples, configuration guidelines, and updates the CLI and package dependencies to support this feature. The changes make it easy for users to leverage LLMs for both single-file edits and multi-repository migrations, with robust error handling and best practices guidance.

Major features and documentation:

- Added a detailed quick start guide (`APPLYWITHLLM_QUICKSTART.md`) covering installation, configuration, usage examples, troubleshooting, and best practices for the `applywithllm` command.
- Introduced a comprehensive reference documentation file (`docs/applywithllm.md`) describing command modes, environment variables, prompt formats, error handling, and implementation details.

Codebase and configuration updates:

- Registered the `applywithllm` command in the CLI by importing it in `src/cli.ts`.
- Added the `groq-sdk` package as a dependency in `package.json` to enable Groq LLM integration.

Examples and test data:

- Added `examples/apply-code-with-llm/shepherd.yml` to demonstrate enabling and customizing `applywithllm`.
- Added sample response files (`llm_response.json`, `openai_response.json`) for demonstration and testing.

Screen.Recording.2026-01-03.at.9.37.34.PM.mov