```text
# Conflicts:
#	README.md
#	action.yml
```

* feat: add deepseek and gemini
* fix: action
* fix: merge conflicts
Walkthrough

This pull request updates the GitHub Action configuration by replacing deprecated OpenAI parameters with new keys for Gemini and adding support for DeepSeek. The changes introduce new optional inputs in the action metadata and README, update environment variable usage, and expand the main summarization logic to conditionally invoke new functions for DeepSeek and Gemini. Additionally, minor updates to .gitignore and requirements.txt improve file management and dependency tracking.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant M as Main Script
    participant DS as deepseek_summary
    participant GM as gemini_summary
    M->>M: Read environment variables
    alt DEEPSEEK_KEY provided
        M->>DS: Call deepseek_summary(issues, prompt, DEEPSEEK_KEY, DEEPSEEK_MODEL)
        DS-->>M: Return DeepSeek summary
    end
    alt GEMINI_KEY provided
        M->>GM: Call gemini_summary(issues, prompt, GEMINI_KEY, GEMINI_MODEL)
        GM-->>M: Return Gemini summary
    end
```
Release Notes

This release includes new features, refactoring, and general improvements.
Actionable comments posted: 2
🧹 Nitpick comments (3)
src/deepseek_summary.py (1)
4-8: Move API configuration to a separate config file. The API endpoint and headers should be moved to a configuration file for better maintainability.

Create a new file `src/config.py`:

```python
DEEPSEEK_API_CONFIG = {
    "url": "https://api.deepseek.com/chat/completions",
    "headers": {
        "Content-Type": "application/json"
    }
}
```

Then update the function to use this config.
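For illustration, a hypothetical `build_request` helper shows how `deepseek_summary` could consume the shared config. The helper name and the OpenAI-style `messages` payload shape are assumptions for this sketch, not the repository's actual code:

```python
# Assumed contents of src/config.py, per the suggestion above
DEEPSEEK_API_CONFIG = {
    "url": "https://api.deepseek.com/chat/completions",
    "headers": {"Content-Type": "application/json"},
}

def build_request(issues, prompt, key, model="deepseek-chat"):
    # Merge the shared static headers with the per-call auth token
    # (build_request is a hypothetical helper for illustration)
    headers = {**DEEPSEEK_API_CONFIG["headers"], "Authorization": f"Bearer {key}"}
    data = {
        "model": model,
        "messages": [{"role": "user", "content": f"{prompt} {issues}"}],
    }
    return DEEPSEEK_API_CONFIG["url"], headers, data
```

Keeping the endpoint and static headers in one place means a future base-URL or header change touches only `config.py`.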
src/main.py (1)
43-53: Refactor provider selection for better maintainability. The current implementation has repeated patterns that could be refactored into a more maintainable structure.

Consider refactoring to a provider factory pattern:

```python
PROVIDERS = [
    {'key': 'ANTHROPIC_KEY', 'model': 'ANTHROPIC_MODEL', 'func': claude_summary},
    {'key': 'OPENAI_KEY', 'model': 'OPENAI_MODEL', 'org': 'OPENAI_ORG', 'func': openai_summary},
    {'key': 'DEEPSEEK_KEY', 'model': 'DEEPSEEK_MODEL', 'func': deepseek_summary},
    {'key': 'GEMINI_KEY', 'model': 'GEMINI_MODEL', 'func': gemini_summary}
]

def get_provider():
    for provider in PROVIDERS:
        key = os.environ.get(provider['key'])
        if not is_empty(key):
            model = os.environ.get(provider['model'])
            if 'org' in provider:
                org = os.environ.get(provider['org'])
                if not is_empty(org):
                    return lambda issues, prompt: provider['func'](
                        issues, prompt, key, org, model
                    )
            else:
                return lambda issues, prompt: provider['func'](
                    issues, prompt, key, model
                )
    return None
```

.github/workflows/ci.yml (1)
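A self-contained sketch of how `main` could consume such a factory. Here `is_empty` mirrors the assumed semantics of the repository's helper, and `gemini_summary` is a hypothetical stub rather than the real implementation:

```python
import os

def is_empty(value):
    # Assumed semantics of the repository's is_empty helper
    return value is None or value.strip() == ""

def gemini_summary(issues, prompt, key, model):
    # Hypothetical stub standing in for src/gemini_summary.py
    return f"[{model}] summary of {len(issues)} issues"

PROVIDERS = [
    {"key": "GEMINI_KEY", "model": "GEMINI_MODEL", "func": gemini_summary},
]

def get_provider():
    # Return a summarizer bound to the first provider whose key is set
    for provider in PROVIDERS:
        key = os.environ.get(provider["key"])
        if not is_empty(key):
            model = os.environ.get(provider["model"])
            return lambda issues, prompt: provider["func"](issues, prompt, key, model)
    return None

os.environ["GEMINI_KEY"] = "test-key"
os.environ["GEMINI_MODEL"] = "gemini-2.0-flash"

summarize = get_provider()
if summarize is None:
    raise SystemExit("No AI provider key configured.")
print(summarize(["bug #1", "bug #2"], "Summarize these issues:"))
# prints "[gemini-2.0-flash] summary of 2 issues"
```

Because `get_provider` returns as soon as a match is found, the closed-over `key` and `model` are safe to capture in the lambda despite the loop.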
28-31: Add comments explaining provider configuration. Consider adding comments to explain why certain providers are commented out and what steps are needed to enable them.
Apply this diff to improve documentation:
```diff
-          #deepseekKey: ${{ secrets.DEEPSEEK_KEY }}
-          geminiKey: ${{ secrets.GEMINI_KEY }}
-          #openAiKey: ${{ secrets.OPENAI_KEY }}
-          #openAiOrg: ${{ secrets.OPENAI_ORG }}
+          # DeepSeek integration (uncomment and configure DEEPSEEK_KEY secret to enable)
+          #deepseekKey: ${{ secrets.DEEPSEEK_KEY }}
+          # Gemini integration (requires GEMINI_KEY secret)
+          geminiKey: ${{ secrets.GEMINI_KEY }}
+          # OpenAI integration (legacy, commented out in favor of Gemini)
+          #openAiKey: ${{ secrets.OPENAI_KEY }}
+          #openAiOrg: ${{ secrets.OPENAI_ORG }}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .github/workflows/ci.yml (1 hunks)
- .gitignore (1 hunks)
- README.md (2 hunks)
- action.yml (3 hunks)
- requirements.txt (1 hunks)
- src/deepseek_summary.py (1 hunks)
- src/gemini_summary.py (1 hunks)
- src/main.py (3 hunks)
✅ Files skipped from review due to trivial changes (1)
- requirements.txt
🔇 Additional comments (7)
.github/workflows/ci.yml (1)
28-29: Verify GitHub secrets configuration. Ensure that the `GEMINI_KEY` secret is configured in the GitHub repository settings. If you plan to enable DeepSeek in the future, you'll need to configure the `DEEPSEEK_KEY` secret as well.

.gitignore (1)
106-108: New `.env*` and `TODO.md` Entries Added
The addition of `.env*` ensures that all files beginning with `.env` (e.g., `.env.development`, `.env.production`) are ignored, which is a good practice for keeping sensitive configurations out of version control. The exclusion of `TODO.md` prevents accidental commits of temporary or project planning files.

action.yml (3)
24-30: Addition of New DeepSeek Input Parameters
The new inputs `deepseekKey` and `deepseekModel` have been added with clear descriptions and a default model value of `"deepseek-chat"`. This update aligns well with the PR objective of adding support for the DeepSeek provider.
31-37: Addition of New Gemini Input Parameters
The parameters `geminiKey` and `geminiModel` are introduced with appropriate descriptions and a default value of `"gemini-2.0-flash"`. This extension further enhances the action's compatibility with additional AI models.
126-129: Propagation of New Environment Variables
The environment variable mappings for `DEEPSEEK_KEY`, `DEEPSEEK_MODEL`, `GEMINI_KEY`, and `GEMINI_MODEL` in the run step are correctly added. Ensure that the downstream logic (e.g., in `src/main.py`) properly handles these new variables to invoke the respective DeepSeek and Gemini functionality when their keys are provided.

README.md (2)
32-35: Documentation Update – New Input Parameters
The README has been updated to include detailed descriptions for the new inputs: `deepseekKey`, `deepseekModel`, `geminiKey`, and `geminiModel`. The default values and operational details are clearly documented, which helps users understand how to configure the GitHub Action for the new providers.
124-126: Updated TODO List Reflecting Provider Support Completion
The TODO section now shows that support for DeepSeek and Gemini is complete (via the checked items), aligning documentation with the current feature set. Confirm that this update fits with your project’s roadmap and communication strategy.
```python
def gemini_summary(issues, prompt, key, model="gemini-2.0-flash"):
    client = genai.Client(api_key=key)

    prompt = f"{prompt} {issues}"
    response = client.models.generate_content(
        model=model, contents=prompt
    )

    if not response or not response.text:
        raise ValueError("Summary is null or empty.")

    return response.text
```
🛠️ Refactor suggestion
Add error handling and input validation.
The function needs additional error handling and input validation:
- Add validation for empty/null input parameters.
- Add try-catch block for API errors.
- Consider reusing the client instance for better performance.
Apply this diff to improve error handling and validation:
```diff
 def gemini_summary(issues, prompt, key, model="gemini-2.0-flash"):
+    if not issues or not prompt or not key:
+        raise ValueError("Required parameters (issues, prompt, key) cannot be empty.")
+
     client = genai.Client(api_key=key)
     prompt = f"{prompt} {issues}"
-    response = client.models.generate_content(
-        model=model, contents=prompt
-    )
+    try:
+        response = client.models.generate_content(
+            model=model, contents=prompt
+        )
+    except Exception as e:
+        raise ValueError(f"Failed to generate content: {str(e)}")
     if not response or not response.text:
         raise ValueError("Summary is null or empty.")
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def gemini_summary(issues, prompt, key, model="gemini-2.0-flash"):
    if not issues or not prompt or not key:
        raise ValueError("Required parameters (issues, prompt, key) cannot be empty.")

    client = genai.Client(api_key=key)
    prompt = f"{prompt} {issues}"
    try:
        response = client.models.generate_content(
            model=model, contents=prompt
        )
    except Exception as e:
        raise ValueError(f"Failed to generate content: {str(e)}")

    if not response or not response.text:
        raise ValueError("Summary is null or empty.")

    return response.text
```
```python
    response = requests.post(url, headers=headers, json=data)

    if response.status_code == 200:
        result = response.json()
        if not result.choices:
            raise ValueError("No response choices available")
        if not result.choices[0]:
            raise ValueError("First choice is null")
        if not result.choices[0].message:
            raise ValueError("Message is null")

        summary = result.choices[0].message.content
        if not summary:
            raise ValueError("Summary is null or empty.")

        return summary
    else:
        raise ValueError("Request failed with status code: " + str(response.status_code))
```
Fix response parsing and add request error handling.
- The response parsing has incorrect attribute access.
- Missing try-catch for request errors.
Apply this diff to fix the issues:
```diff
-    response = requests.post(url, headers=headers, json=data)
+    try:
+        response = requests.post(url, headers=headers, json=data)
+    except requests.exceptions.RequestException as e:
+        raise ValueError(f"Request failed: {str(e)}")
     if response.status_code == 200:
         result = response.json()
-        if not result.choices:
+        if not result.get('choices'):
             raise ValueError("No response choices available")
-        if not result.choices[0]:
+        if not result['choices'][0]:
             raise ValueError("First choice is null")
-        if not result.choices[0].message:
+        if not result['choices'][0].get('message'):
             raise ValueError("Message is null")
-        summary = result.choices[0].message.content
+        summary = result['choices'][0]['message']['content']
```

📝 Committable suggestion
```python
    try:
        response = requests.post(url, headers=headers, json=data)
    except requests.exceptions.RequestException as e:
        raise ValueError(f"Request failed: {str(e)}")

    if response.status_code == 200:
        result = response.json()
        if not result.get('choices'):
            raise ValueError("No response choices available")
        if not result['choices'][0]:
            raise ValueError("First choice is null")
        if not result['choices'][0].get('message'):
            raise ValueError("Message is null")

        summary = result['choices'][0]['message']['content']
        if not summary:
            raise ValueError("Summary is null or empty.")

        return summary
    else:
        raise ValueError("Request failed with status code: " + str(response.status_code))
```
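To see the corrected dictionary-style access in isolation, here is a small sketch run against an illustrative payload in the OpenAI-compatible shape that DeepSeek's chat endpoint returns; `extract_summary` and the sample values are hypothetical:

```python
def extract_summary(result):
    # Parse a decoded JSON response using dict access, as in the suggested fix
    if not result.get('choices'):
        raise ValueError("No response choices available")
    message = result['choices'][0].get('message')
    if not message:
        raise ValueError("Message is null")
    summary = message.get('content')
    if not summary:
        raise ValueError("Summary is null or empty.")
    return summary

# Illustrative payload (values are made up)
sample = {"choices": [{"message": {"role": "assistant", "content": "Two issues summarized."}}]}
print(extract_summary(sample))  # prints "Two issues summarized."
```

Attribute access like `result.choices` would raise `AttributeError` on a plain `dict` from `response.json()`, which is why the `.get()`/subscript form is needed.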