
Add factorial function#6

Closed
Sekiro4321 wants to merge 5 commits into main from feature/new-function

Conversation

@Sekiro4321
Owner

Testing AI test generation

@github-actions

🤖 AI Code Review

As an expert code reviewer, I've thoroughly examined the provided code diff. My review focuses on security vulnerabilities, bug risks, performance issues, and best practice violations.

Here's my feedback:

File: .github/workflows/test-gen.yml

  1. Security Vulnerability: Command Injection via Unquoted Output

    • Severity: HIGH
    • Description: The Generate tests step constructs a shell command using output from a previous step (steps.changed-files.outputs.files) without proper quoting. If a malicious actor could commit a file with a name containing shell metacharacters (e.g., payload.py; rm -rf /;), these characters would be interpreted by the shell, leading to arbitrary command execution on the GitHub Actions runner.
    • Suggested Fix: Pass the output through an environment variable instead of interpolating `${{ }}` directly into the `run` script (interpolation is performed before the shell runs, so even a quoted `echo "${{ ... }}"` remains injectable), then use xargs to hand the file list to the Python script as individual arguments. This ensures each file path is treated as a distinct argument, even if it contains spaces or special characters.
      # In 'Generate tests' step:
      # ...
      env:
        CHANGED_FILES: ${{ steps.changed-files.outputs.files }}
      run: |
        printf '%s\n' "$CHANGED_FILES" | xargs python scripts/generate_tests.py
  2. Security Vulnerability: Arbitrary Code Execution from PRs with Write Permissions

    • Severity: HIGH
    • Description: This workflow runs on pull_request and has contents: write permissions. It executes scripts/generate_tests.py from the PR's branch. If a malicious PR modifies scripts/generate_tests.py (alongside a Python file that triggers the workflow), the modified, potentially malicious script will be executed with write permissions, allowing it to modify the repository, exfiltrate secrets, or perform other harmful actions.
    • Suggested Fix: This is a fundamental risk when automating code modifications on PRs.
      1. Implement CODEOWNERS: Add a CODEOWNERS file to your repository that requires review for changes to the scripts/ directory and other sensitive parts of the repository. This ensures that any malicious changes to scripts/generate_tests.py are reviewed by a human before they are merged, thus preventing the workflow from executing them.
      2. Restrict git push to trusted sources/contexts: If the repository is public and allows forks, consider adding an if condition to the Commit generated tests step to only run if github.event.pull_request.head.repo.full_name == github.repository or if the author is a trusted member. However, even for same-repo PRs, internal threats exist.
      3. Alternative: Instead of pushing directly, have the bot comment on the PR with the proposed test changes, allowing a human reviewer to verify and manually apply them. This eliminates the contents: write permission for the bot in this context. Given the current structure, CODEOWNERS is the most direct mitigation.
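      A minimal sketch of the second mitigation, restricting the push step to same-repo branches (the step name and command body follow the review; the `if` guard skips PRs opened from forks):

      ```yaml
      # Hypothetical guard: only push when the PR branch lives in this
      # repository, not in a fork.
      - name: Commit generated tests
        if: github.event.pull_request.head.repo.full_name == github.repository
        run: |
          git add tests/
          git commit -m "Auto-generate tests for new code [skip ci]"
          git push
      ```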
  3. Bug Risk: Duplicate Git Commit/Push Commands

    • Severity: MEDIUM
    • Description: The Commit generated tests step contains a duplicate set of git add tests/, git commit, and git push commands at the very end. This redundancy can lead to unnecessary command execution or potential errors if the first push succeeds and the second attempts to push nothing or identical commits.
    • Suggested Fix: Remove the duplicated lines at the end of the Commit generated tests step.
      # ... (inside Commit generated tests step)
            git commit -m "Auto-generate tests for new code [skip ci]"
            git push
      # Remove the following three lines:
      #   git add tests/
      #   git commit -m "Auto-generate tests for updated Python files [skip ci]"
      #   git push
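      After removing the duplicate, the step can also be made idempotent so a re-run with no new tests does not fail on an empty commit; a sketch (the `git diff --cached --quiet` guard is an addition, not from the original workflow):

      ```yaml
      - name: Commit generated tests
        run: |
          git add tests/
          # Only commit and push if staging actually changed something.
          if ! git diff --cached --quiet; then
            git commit -m "Auto-generate tests for new code [skip ci]"
            git push
          fi
      ```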
  4. Best Practice Violation: No Caching for Python Dependencies

    • Severity: LOW
    • Description: The workflow installs Python dependencies using pip install -r requirements.txt on every run without utilizing caching. This can increase workflow execution time, especially for projects with many dependencies.
    • Suggested Fix: Enable pip caching through the actions/setup-python action.
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip' # Add this line to enable pip caching
          cache-dependency-path: 'requirements.txt' # Specify the dependency file for cache key invalidation

File: app.py

  1. Bug Risk: Potential for Recursion Depth Exceeded

    • Severity: MEDIUM
    • Description: The factorial function uses a recursive implementation. For very large integer inputs, this could exceed Python's recursion limit (1000 by default; see sys.getrecursionlimit()), raising a RecursionError.
    • Suggested Fix: Consider using an iterative approach for the factorial function to avoid recursion depth limits, or add a check for extremely large inputs.
      def factorial(n: int) -> int:
          """Calculate the factorial of a number."""
          if n < 0:
              raise ValueError("Factorial is not defined for negative numbers.")
          if n == 0:
              return 1
          result = 1
          for i in range(1, n + 1):
              result *= i
          return result
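      Alternatively, the standard library already ships an iterative implementation; if the function exists only for this computation, `math.factorial` can replace it outright (it likewise raises `ValueError` for negative inputs):

      ```python
      import math

      # math.factorial is implemented iteratively in C, so it is not
      # subject to Python's recursion limit.
      print(math.factorial(5))   # → 120
      print(math.factorial(0))   # → 1
      ```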
  2. Best Practice Violation: No Newline at End of File

    • Severity: LOW
    • Description: The file app.py is missing a newline character at its end. This is a common best practice (POSIX standard for text files) and can prevent issues with some tools and version control systems.
    • Suggested Fix: Add a newline character at the end of the file.

File: scripts/dangerous.py

  1. Security Vulnerability: Critical Command Injection via subprocess.call(shell=True)

    • Severity: HIGH
    • Description: The run_command function takes user_input and directly executes it using subprocess.call(user_input, shell=True). This is an extremely dangerous vulnerability as it allows any user_input to be executed as a shell command, potentially leading to complete system compromise (e.g., rm -rf /, data exfiltration).
    • Suggested Fix: Remove this function entirely unless there is an absolute, unavoidable, and highly controlled need for it. If command execution is truly necessary, never use shell=True with untrusted input. Instead, pass the command and its arguments as a list to subprocess.run (recommended) or subprocess.call, with shell=False (the default).
      import subprocess
      import sys
      
      def run_command_safe(command_and_args: list):
          """Safely runs a command with its arguments as a list."""
          try:
              # 'check=True' raises an error for non-zero exit codes.
              # 'capture_output=True' captures stdout/stderr.
              # 'text=True' decodes output as text.
              result = subprocess.run(command_and_args, check=True, capture_output=True, text=True)
              print(f"Command executed successfully. Output:\n{result.stdout}")
          except subprocess.CalledProcessError as e:
              print(f"Command failed with error: {e}", file=sys.stderr)
              print(f"Stdout:\n{e.stdout}", file=sys.stderr)
              print(f"Stderr:\n{e.stderr}", file=sys.stderr)
          except FileNotFoundError:
              print(f"Error: Command '{command_and_args[0]}' not found.", file=sys.stderr)
      
      # To use: run_command_safe(["ls", "-l", "/tmp"])
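      If the command genuinely arrives as a single string (e.g. from a config file), `shlex.split` can tokenize it into an argument list before handing it to a helper like `run_command_safe` above; the command string here is only an illustration:

      ```python
      import shlex

      # shlex.split tokenizes shell-like syntax without invoking a shell,
      # so quoting is honored but no metacharacter is ever executed.
      args = shlex.split("grep -n 'TODO' app.py")
      print(args)   # → ['grep', '-n', 'TODO', 'app.py']
      ```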
  2. Security Vulnerability: Hardcoded API Key

    • Severity: HIGH
    • Description: The API_KEY is hardcoded directly within the script. This exposes a sensitive credential, making it vulnerable to compromise if the code is committed to a public repository, accessed by unauthorized individuals, or ends up in logs.
    • Suggested Fix: Remove the hardcoded API_KEY. Store API keys securely using environment variables or a dedicated secret management service. Access the key at runtime using os.getenv().
      import os
      # ... other imports
      
      # Remove this line:
      # API_KEY = "sk-live-abc123def456"
      
      # Access where needed, e.g.:
      # api_key = os.getenv("GEMINI_API_KEY")
      # if not api_key:
      #     raise ValueError("GEMINI_API_KEY environment variable not set.")
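      The commented sketch above can be wrapped in a small helper (the function name `get_api_key` is ours, not from the codebase):

      ```python
      import os

      def get_api_key(var_name: str = "GEMINI_API_KEY") -> str:
          """Read an API key from the environment, failing loudly if absent."""
          key = os.getenv(var_name)
          if not key:
              raise ValueError(f"{var_name} environment variable not set.")
          return key
      ```

      In GitHub Actions the value would typically come from a repository secret, e.g. `env: GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}`.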

File: scripts/generate_tests.py

  1. Bug Risk: Missing Imports for Generated Tests

    • Severity: HIGH
    • Description: The test_generated.py file created by the script will contain pytest functions but lacks the necessary import statements to access the original functions being tested (e.g., app.reverse_string). This will cause NameError exceptions when the tests are run, making them non-functional.
    • Suggested Fix: The script must dynamically generate and include the correct import statements at the beginning of the test file. A good strategy is to create a separate test file for each modified source file, making imports simpler and test organization clearer.
      # Modified 'main' function to generate per-file tests with imports:
      # ...
      import os
      import sys
      import ast
      import google.generativeai as genai
      from google.api_core.exceptions import GoogleAPIError
      
      def main():
          genai.configure(api_key=os.getenv("GEMINI_API_KEY")) # Configure API once
      
          changed_files = sys.argv[1:] if len(sys.argv) > 1 else []
      
          if not changed_files:
              print("No changed Python files provided for Test Generation.")
              return
      
          for file_path in changed_files:
              if not file_path.endswith('.py'):
                  continue
              if file_path.startswith('tests/'):
                  continue
      
              print(f"Analyzing: {file_path}")
              functions = extract_functions(file_path)
      
              if not functions:
                  print(f"No functions found in {file_path} to generate tests for.")
                  continue
      
              tests_for_this_file = []
              # Determine the module name for importing. Assumes project root is on sys.path.
              module_name = os.path.splitext(file_path)[0].replace(os.sep, '.')
              functions_to_import = []
      
              for func in functions:
                  if func['name'].startswith('_'):
                      continue # Skip private functions
      
                  print(f"Generating tests for function: {func['name']} in {file_path}")
                  tests = generate_tests_for_functions(func) # Pass configured client if needed
                  if tests:
                      tests_for_this_file.append(f"# Tests for {func['name']} from {file_path}\n{tests}")
                      functions_to_import.append(func['name'])
      
              if tests_for_this_file:
                  os.makedirs('tests', exist_ok=True)
                  # Create a unique test file name, e.g., 'test_app_py.py'
                  test_file_name = f"test_{os.path.basename(file_path).replace('.', '_')}.py"
                  test_file_path = os.path.join('tests', test_file_name)
      
                  with open(test_file_path, 'w') as f:
                      f.write("import pytest\n")
                      # Add the import statement for the module
                      if functions_to_import:
                          f.write(f"from {module_name} import {', '.join(sorted(functions_to_import))}\n")
                      f.write("\n\n")
                      f.write("\n\n".join(tests_for_this_file))
      
                  print(f"Generated tests written to: {test_file_path}")
              else:
                  print(f"No tests generated for {file_path}")
      
      # Ensure generate_tests_for_functions uses the globally configured API:
      def generate_tests_for_functions(func_info):
          # ... prompt construction ...
          try:
              model = genai.GenerativeModel("gemini-2.5-flash")
              response = model.generate_content(contents=prompt)
              return response.text
          except GoogleAPIError as e:
              print(f"Error calling Gemini API for function {func_info['name']}: {e}", file=sys.stderr)
              return f"# Error: Could not generate tests for {func_info['name']} due to API error: {e}"
          except Exception as e:
              print(f"An unexpected error occurred: {e}", file=sys.stderr)
              return f"# Error: Could not generate tests for {func_info['name']} due to an unexpected error: {e}"
      
      # ... (rest of the file) ...
  2. Bug Risk: Overwriting test_generated.py on Subsequent Runs

    • Severity: MEDIUM
    • Description: The script currently writes all generated tests to a single file, tests/test_generated.py, using mode='w'. This means that if the workflow runs multiple times (e.g., for different PRs or for changes in different files within the same PR), previous generated tests will be overwritten, leading to an incomplete or incorrect test suite.
    • Suggested Fix: As demonstrated in the previous fix, generate a distinct test file for each source file that was changed. This provides better isolation, easier review, and prevents accidental loss of generated tests. For example, tests/test_app_py.py, tests/test_another_module_py.py.
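      The per-file naming scheme can be sketched as a small helper (the function name `test_file_for` is ours):

      ```python
      import os

      def test_file_for(source_path: str) -> str:
          """Map a source file to its per-file generated-test path,
          e.g. 'app.py' -> 'tests/test_app_py.py'."""
          base = os.path.basename(source_path).replace('.', '_')
          return f"tests/test_{base}.py"

      print(test_file_for("app.py"))   # → tests/test_app_py.py
      ```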
  3. Performance Issue: Repeated Gemini API Client Initialization

    • Severity: LOW
    • Description: The genai.Client() is initialized inside generate_tests_for_functions, meaning a new client object is created for every function. While the google.generativeai library often relies on global configuration (genai.configure()), if a client object is used, repeated instantiation can introduce minor overhead.
    • Suggested Fix: If genai.Client() is strictly needed for the API calls (often genai.configure and then genai.GenerativeModel is sufficient), initialize it once in the main function and pass it as an argument to generate_tests_for_functions.
      (Note: The suggested fix for missing imports already integrates genai.configure(api_key=...) in main and uses genai.GenerativeModel directly, addressing this performance concern more effectively.)
  4. Best Practice Violation: Lack of Error Handling for API Calls

    • Severity: LOW
    • Description: The generate_tests_for_functions function directly calls the Gemini API without explicit error handling (e.g., for network issues, invalid API key, or API-specific errors). This can lead to unhandled exceptions and workflow failures.
    • Suggested Fix: Wrap the API call in a try-except block to gracefully handle potential API errors and provide informative feedback.
      (Note: The suggested fix for missing imports already includes try-except GoogleAPIError in generate_tests_for_functions.)
  5. Best Practice Violation: No Newline at End of File

    • Severity: LOW
    • Description: The file scripts/generate_tests.py is missing a newline character at its end.
    • Suggested Fix: Add a newline character at the end of the file.

SEVERITY_SUMMARY: CRITICAL


Powered by Gemini AI

@github-actions github-actions bot added the ai-review: critical AI review found critical issues label Feb 12, 2026
@Sekiro4321 Sekiro4321 closed this Feb 12, 2026