
feat: release updates #7

Merged
nejcm merged 7 commits into main from develop
Feb 19, 2025

Conversation


@nejcm nejcm commented Feb 19, 2025

Summary by CodeRabbit

  • New Features

    • Expanded release note generation with additional AI summarization options, letting users opt into the enhanced providers.
  • Documentation

    • Updated configuration guidance and setup instructions to reflect the new optional inputs for the supplemental summarization services.
  • Chores

    • Streamlined configuration settings and dependency management, and refined ignore rules for environment and temporary files.


coderabbitai bot commented Feb 19, 2025

Walkthrough

This pull request updates the GitHub Action configuration by replacing deprecated OpenAI parameters with new keys for Gemini and adding support for DeepSeek. The changes introduce new optional inputs in the action metadata and README, update environment variable usage, and expand the main summarization logic to conditionally invoke new functions for DeepSeek and Gemini. Additionally, minor updates to .gitignore and requirements.txt improve file management and dependency tracking.

Changes

  • .github/workflows/ci.yml, README.md, action.yml: Updated GitHub Action configuration: commented out OpenAI keys; added new optional inputs for DeepSeek (deepseekKey, deepseekModel) and Gemini (geminiKey, geminiModel); README updated to document these changes.
  • src/deepseek_summary.py, src/gemini_summary.py, src/main.py: Introduced new functions deepseek_summary and gemini_summary for interacting with their respective APIs, and updated the main script to conditionally invoke these based on environment variable checks.
  • .gitignore, requirements.txt: Added new ignore entries (.env*, TODO.md) to .gitignore and introduced the dependency google-genai==1.2.0 in requirements.txt.

Sequence Diagram(s)

sequenceDiagram
    participant M as Main Script
    participant DS as deepseek_summary
    participant GM as gemini_summary

    M->>M: Read environment variables
    alt DEEPSEEK_KEY provided
        M->>DS: Call deepseek_summary(issues, prompt, DEEPSEEK_KEY, DEEPSEEK_MODEL)
        DS-->>M: Return DeepSeek summary
    end
    alt GEMINI_KEY provided
        M->>GM: Call gemini_summary(issues, prompt, GEMINI_KEY, GEMINI_MODEL)
        GM-->>M: Return Gemini summary
    end
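The dispatch shown in the diagram can be sketched in Python. This is an illustrative stub, not the actual src/main.py: the real deepseek_summary and gemini_summary call external APIs, so stand-ins are used here, and the summarize helper and its env-dict parameter are hypothetical names added for testability.

```python
# Stand-in stubs: the real functions POST to the DeepSeek/Gemini APIs.
def deepseek_summary(issues, prompt, key, model):
    return f"deepseek:{model}"

def gemini_summary(issues, prompt, key, model):
    return f"gemini:{model}"

def summarize(issues, prompt, env):
    """Dispatch to whichever provider has a key set, mirroring the diagram."""
    if env.get("DEEPSEEK_KEY"):
        return deepseek_summary(issues, prompt, env["DEEPSEEK_KEY"],
                                env.get("DEEPSEEK_MODEL", "deepseek-chat"))
    if env.get("GEMINI_KEY"):
        return gemini_summary(issues, prompt, env["GEMINI_KEY"],
                              env.get("GEMINI_MODEL", "gemini-2.0-flash"))
    return None  # no provider configured
```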

Poem

I'm a rabbit with a code-filled cheer,
Hopping through workflows, keys now clear.
DeepSeek and Gemini join the play,
In my tidy code garden, bugs scurry away.
With every new line, my hops grow bright—
A spring of logic in the moonlit night! 🐰💻



nejcm commented Feb 19, 2025

Release Notes

This release includes new features, refactoring, and general improvements.

New Features:

  • Added DeepSeek and Gemini Integration: Introduced support for DeepSeek and Gemini models. (feat: add llms #6)

Refactoring:

  • Enabled OpenAI: Refactored the codebase to enable the use of OpenAI models.

Improvements:

  • Updated PR Commit Fetching: Improved the process of fetching commit information from Pull Requests.

Other:

  • General merge commits and chore tasks to keep branches synchronized.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
src/deepseek_summary.py (1)

4-8: Move API configuration to a separate config file.

The API endpoint and headers should be moved to a configuration file for better maintainability.

Create a new file src/config.py:

DEEPSEEK_API_CONFIG = {
    "url": "https://api.deepseek.com/chat/completions",
    "headers": {
        "Content-Type": "application/json"
    }
}

Then update the function to use this config.
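As a sketch of that update (build_request and the payload shape are illustrative assumptions, not the PR's actual source), the function could assemble its request from the shared config:

```python
DEEPSEEK_API_CONFIG = {
    "url": "https://api.deepseek.com/chat/completions",
    "headers": {"Content-Type": "application/json"},
}

def build_request(issues, prompt, key, model="deepseek-chat"):
    """Assemble the POST arguments from the shared config (no network call)."""
    headers = dict(DEEPSEEK_API_CONFIG["headers"])  # copy; keep the shared config untouched
    headers["Authorization"] = f"Bearer {key}"
    data = {
        "model": model,
        "messages": [{"role": "user", "content": f"{prompt} {issues}"}],
    }
    return DEEPSEEK_API_CONFIG["url"], headers, data
```

The actual request would then be requests.post(url, headers=headers, json=data).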

src/main.py (1)

43-53: Refactor provider selection for better maintainability.

The current implementation has repeated patterns that could be refactored into a more maintainable structure.

Consider refactoring to a provider factory pattern:

PROVIDERS = [
    {
        'key': 'ANTHROPIC_KEY',
        'model': 'ANTHROPIC_MODEL',
        'func': claude_summary
    },
    {
        'key': 'OPENAI_KEY',
        'model': 'OPENAI_MODEL',
        'org': 'OPENAI_ORG',
        'func': openai_summary
    },
    {
        'key': 'DEEPSEEK_KEY',
        'model': 'DEEPSEEK_MODEL',
        'func': deepseek_summary
    },
    {
        'key': 'GEMINI_KEY',
        'model': 'GEMINI_MODEL',
        'func': gemini_summary
    }
]

def get_provider():
    for provider in PROVIDERS:
        key = os.environ.get(provider['key'])
        if not is_empty(key):
            model = os.environ.get(provider['model'])
            if 'org' in provider:
                org = os.environ.get(provider['org'])
                if not is_empty(org):
                    return lambda issues, prompt: provider['func'](
                        issues, prompt, key, org, model
                    )
            else:
                return lambda issues, prompt: provider['func'](
                    issues, prompt, key, model
                )
    return None

.github/workflows/ci.yml (1)

28-31: Add comments explaining provider configuration.

Consider adding comments to explain why certain providers are commented out and what steps are needed to enable them.

Apply this diff to improve documentation:

-          #deepseekKey: ${{ secrets.DEEPSEEK_KEY }}
-          geminiKey: ${{ secrets.GEMINI_KEY }}
-          #openAiKey: ${{ secrets.OPENAI_KEY }}
-          #openAiOrg: ${{ secrets.OPENAI_ORG }}
+          # DeepSeek integration (uncomment and configure DEEPSEEK_KEY secret to enable)
+          #deepseekKey: ${{ secrets.DEEPSEEK_KEY }}
+          # Gemini integration (requires GEMINI_KEY secret)
+          geminiKey: ${{ secrets.GEMINI_KEY }}
+          # OpenAI integration (legacy, commented out in favor of Gemini)
+          #openAiKey: ${{ secrets.OPENAI_KEY }}
+          #openAiOrg: ${{ secrets.OPENAI_ORG }}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ceb97ec and 05e89cd.

📒 Files selected for processing (8)
  • .github/workflows/ci.yml (1 hunks)
  • .gitignore (1 hunks)
  • README.md (2 hunks)
  • action.yml (3 hunks)
  • requirements.txt (1 hunks)
  • src/deepseek_summary.py (1 hunks)
  • src/gemini_summary.py (1 hunks)
  • src/main.py (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • requirements.txt
🔇 Additional comments (7)
.github/workflows/ci.yml (1)

28-29: Verify GitHub secrets configuration.

Ensure that the GEMINI_KEY secret is configured in the GitHub repository settings. If you plan to enable DeepSeek in the future, you'll need to configure the DEEPSEEK_KEY secret as well.

.gitignore (1)

106-108: New .env* and TODO.md Entries Added
The addition of .env* ensures that all files beginning with .env (e.g., .env.development, .env.production) are ignored, which is a good practice for keeping sensitive configurations out of version control. The exclusion of TODO.md prevents accidental commits of temporary or project planning files.

action.yml (3)

24-30: Addition of New DeepSeek Input Parameters
The new inputs deepseekKey and deepseekModel have been added with clear descriptions and a default model value of "deepseek-chat". This update aligns well with the PR objective of adding support for the DeepSeek provider.


31-37: Addition of New Gemini Input Parameters
The parameters geminiKey and geminiModel are introduced with appropriate descriptions and a default value of "gemini-2.0-flash". This extension further enhances the action’s compatibility with additional AI models.


126-129: Propagation of New Environment Variables
The environment variable mappings for DEEPSEEK_KEY, DEEPSEEK_MODEL, GEMINI_KEY, and GEMINI_MODEL in the run step are correctly added. Ensure that the downstream logic (e.g., in src/main.py) properly handles these new variables to invoke the respective DeepSeek and Gemini functionality when their keys are provided.

README.md (2)

32-35: Documentation Update – New Input Parameters
The README has been updated to include detailed descriptions for the new inputs: deepseekKey, deepseekModel, geminiKey, and geminiModel. The default values and operational details are clearly documented, which helps users understand how to configure the GitHub Action for the new providers.


124-126: Updated TODO List Reflecting Provider Support Completion
The TODO section now shows that support for DeepSeek and Gemini is complete (via the checked items), aligning documentation with the current feature set. Confirm that this update fits with your project’s roadmap and communication strategy.

Comment on lines +3 to +14
def gemini_summary(issues, prompt, key, model="gemini-2.0-flash"):
    client = genai.Client(api_key=key)

    prompt = f"{prompt} {issues}"
    response = client.models.generate_content(
        model=model, contents=prompt
    )

    if not response or not response.text:
        raise ValueError("Summary is null or empty.")

    return response.text

🛠️ Refactor suggestion

Add error handling and input validation.

The function needs additional error handling and input validation:

  1. Add validation for empty/null input parameters.
  2. Add try-catch block for API errors.
  3. Consider reusing the client instance for better performance.

Apply this diff to improve error handling and validation:

 def gemini_summary(issues, prompt, key, model="gemini-2.0-flash"):
+    if not issues or not prompt or not key:
+        raise ValueError("Required parameters (issues, prompt, key) cannot be empty.")
+
     client = genai.Client(api_key=key)
     
     prompt = f"{prompt} {issues}"
-    response = client.models.generate_content(
-        model=model, contents=prompt
-    )
+    try:
+        response = client.models.generate_content(
+            model=model, contents=prompt
+        )
+    except Exception as e:
+        raise ValueError(f"Failed to generate content: {str(e)}")
  
     if not response or not response.text:
         raise ValueError("Summary is null or empty.")

Comment on lines +19 to +36
    response = requests.post(url, headers=headers, json=data)

    if response.status_code == 200:
        result = response.json()
        if not result.choices:
            raise ValueError("No response choices available")
        if not result.choices[0]:
            raise ValueError("First choice is null")
        if not result.choices[0].message:
            raise ValueError("Message is null")

        summary = result.choices[0].message.content
        if not summary:
            raise ValueError("Summary is null or empty.")

        return summary
    else:
        raise ValueError("Request failed with status code: " + str(response.status_code))

⚠️ Potential issue

Fix response parsing and add request error handling.

  1. The response parsing has incorrect attribute access.
  2. Missing try-catch for request errors.

Apply this diff to fix the issues:

-    response = requests.post(url, headers=headers, json=data)
+    try:
+        response = requests.post(url, headers=headers, json=data)
+    except requests.exceptions.RequestException as e:
+        raise ValueError(f"Request failed: {str(e)}")

     if response.status_code == 200:
         result = response.json()
-        if not result.choices:
+        if not result.get('choices'):
             raise ValueError("No response choices available")
-        if not result.choices[0]:
+        if not result['choices'][0]:
             raise ValueError("First choice is null")
-        if not result.choices[0].message:
+        if not result['choices'][0].get('message'):
             raise ValueError("Message is null")

-        summary = result.choices[0].message.content
+        summary = result['choices'][0]['message']['content']
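The dict-based access from the fix above can be factored into a small defensive helper. This is a sketch; extract_summary is an illustrative name, not part of the PR:

```python
def extract_summary(result):
    """Pull choices[0].message.content out of a chat-completions-style JSON
    payload using dict access (response.json() returns plain dicts, not objects)."""
    choices = result.get("choices")
    if not choices:
        raise ValueError("No response choices available")
    first = choices[0]
    if not first:
        raise ValueError("First choice is null")
    message = first.get("message")
    if not message:
        raise ValueError("Message is null")
    summary = message.get("content")
    if not summary:
        raise ValueError("Summary is null or empty.")
    return summary
```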

@nejcm nejcm merged commit ebf0e54 into main Feb 19, 2025
3 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Mar 10, 2025