fix auto merge problem #109

Merged

github-actions[bot] merged 32 commits into container from main
Mar 13, 2026
Conversation

@DaTiC0 (Owner) commented Mar 13, 2026

Summary

  • What changed?
  • Why was it needed?

Validation

  • Local checks passed (make test)
  • Runtime check done where relevant (make health)
  • CI checks are green

AI-Assisted Review (if applicable)

  • I used AI assistance for parts of this change
  • I manually reviewed all AI-generated code
  • I verified no secrets/credentials were added

Risk & Rollback

  • Risk level: Low / Medium / High
  • Rollback plan (if needed):

Summary by Sourcery

Simplify application startup by assuming all core extensions are available, centralizing configuration and Firebase initialization, and improving logging and database setup.

New Features:

  • Add a Flask CLI command to initialize the database schema and use a shared helper for DB initialization.
  • Introduce additional authentication, device, and action handler tests to cover signup/login, smart home endpoints, and action_devices behavior.
  • Support an overridable service account template file path while defaulting to a bundled template in the project root.

Bug Fixes:

  • Ensure tests and the TestingConfig use an in-memory SQLite database via SQLALCHEMY_DATABASE_URI, avoiding accidental use of the runtime database file.
  • Prevent duplicate or failing Firebase initialization by checking for an existing app and requiring configuration before initializing.
  • Validate smarthome POST requests and return a 400 error for malformed payloads instead of failing unpredictably.
  • Sanitize device IDs used in Firebase paths to avoid path traversal issues in query and execute operations.
  • Update OAuth token and grant expiry handling to be timezone-safe and compatible with modern datetime usage.
  • Handle JWT return types from google-auth robustly in ReportState to avoid decode errors.
  • Ensure login-required device listing and smart home APIs behave consistently regardless of optional module availability.
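The device-ID sanitization mentioned above could look like the following sketch. The helper name and allowed-character set are assumptions for illustration, not the exact code in `action_devices.py`:

```python
import re

# Hypothetical helper; the real pattern in action_devices.py may differ.
_DEVICE_ID_RE = re.compile(r'^[A-Za-z0-9_-]+$')

def sanitize_device_id(device_id):
    """Return the ID only if it is safe to embed in a Firebase RTDB path."""
    if isinstance(device_id, str) and _DEVICE_ID_RE.fullmatch(device_id):
        return device_id
    return None  # callers fall back to a safe default such as {'online': False}
```

With a guard like this, `rquery('../users/secret')` would return the offline default instead of reading an arbitrary database path.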

Enhancements:

  • Replace print-based diagnostics across the app, action_devices, OAuth, notifications, ReportState, and service account generation with structured logging.
  • Modernize app configuration by deriving the config object from APP_ENV/FLASK_ENV and loading Flask config via from_object.
  • Refine MQTT, database, OAuth, and login initialization order and error handling for more robust startup behavior.
  • Update database initialization to use SQLAlchemy session APIs compatible with newer Flask-SQLAlchemy versions.
  • Clarify README service account guidance to describe the bundled template and optional override environment variable.
  • Upgrade gunicorn, firebase_admin, PyMySQL, and related dependencies and add newer security-related libraries like cryptography and recent requests/google-auth versions.

CI:

  • Upgrade GitHub Actions workflows to use newer checkout and setup-python action versions and keep auxiliary workflows in sync.

Documentation:

  • Update README instructions for Google service account configuration to reflect the default template file and optional override mechanism.

Tests:

  • Expand test coverage for root and health endpoints independent of feature flags, authentication flows, smart home endpoints, and action_devices logic, including error paths.
  • Ensure test setup isolates DB state, creates and cleans up test users, and confirms error handling and login requirements for protected routes.

dependabot bot and others added 30 commits March 9, 2026 08:56
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](actions/checkout@v2...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
- Update runtime.txt to python-3.12.2
- Update non-breaking dependencies:
  - gunicorn 20.1.0 → 22.0.0 (security patches)
  - firebase-admin 3.2.1 → 6.5.0
  - pymysql 1.0.2 → 1.1.1
  - python-dotenv 0.15.0 → 1.0.0
  - Add google-auth>=2.22.0, requests>=2.31.0, cryptography>=41.0.0
- Fix Python 3.12 deprecations:
  - Replace datetime.utcnow() with datetime.now(timezone.utc) in my_oauth.py
  - Remove .decode('utf-8') from jwt.encode() in ReportState.py (google-auth 2.x returns str)
- Replace all print() statements with logging module:
  - action_devices.py: 16 print() → logger calls
  - app.py: 12 print() → logger calls
  - notifications.py: 5 print() → logger calls
  - generate_service_account_file.py: 4 print() → logger calls
  - ReportState.py: 3 print() → logger calls

Tests: All 6 tests pass ✓
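The `jwt.encode()` return-type change (google-auth 2.x returns `str` where 1.x returned `bytes`) can be handled version-proof with a small normalizer; the function name here is illustrative, not the actual ReportState.py code:

```python
def ensure_text(token):
    """Normalize a JWT to str regardless of the google-auth version.

    google-auth < 2.0 returned bytes from JWT encoding; 2.x returns str,
    so calling .decode('utf-8') unconditionally would raise AttributeError."""
    if isinstance(token, bytes):
        return token.decode('utf-8')
    return token
```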
Replace all 11 print() statements with structured logger.debug() calls:
- get_current_user(): log user lookup results
- load_client(): log client ID and lookup results
- load_grant(): log grant request
- save_grant(): log grant creation
- load_token(): log token request
- save_token(): log token creation and existing tokens

Maintains consistency with logging infrastructure added in earlier Phase 1 commits.
All tests passing (10/10).
…unction for schema setup and update CLI command
In `action_devices.py`, replaced the insecure `import random` and
`random.randint` with the cryptographically strong `import secrets` and
`secrets.randbelow` when generating the Request ID for state reports.
This ensures the predictability of request IDs is eliminated, preventing
potential spoofing or replay vulnerabilities related to ID generation.

Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Fix invalid YAML indentation in python-app.yml workflow
…/checkout-6

Bump actions/checkout from 2 to 6
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.5.0 to 6.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](actions/setup-python@v4.5.0...v6.2.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
…setup-python-6.2.0

Bump actions/setup-python from 4.5.0 to 6.2.0
In `action_devices.py`, replaced the insecure `import random` and
`random.randint` with the cryptographically strong `import secrets` and
`secrets.randbelow` when generating the Request ID for state reports.
This ensures the predictability of request IDs is eliminated, preventing
potential spoofing or replay vulnerabilities related to ID generation.

Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Update the random number generation logic based on PR feedback.
Changed `10**20 - 10**19 + 1` to the simplified equivalent
`9 * 10**19 + 1` to make the code easier to understand at a glance.

Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
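Assuming the original call was the inclusive `random.randint(10**19, 10**20)` (the span `9 * 10**19 + 1` is consistent with that, since `randint(a, b)` is equivalent to `randbelow(b - a + 1) + a`), the `secrets` replacement looks like:

```python
import secrets

LOW = 10**19
SPAN = 9 * 10**19 + 1  # == 10**20 - 10**19 + 1 values in the inclusive range

def make_request_id():
    """Cryptographically strong request ID, unpredictable unlike random.randint."""
    return str(LOW + secrets.randbelow(SPAN))
```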
Moved `import secrets` to the top of `action_devices.py` to comply with
PEP 8 style guidelines, improving readability and making dependencies
clear at a glance.

Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
…92052650724018031

🔒 Fix insecure random number generator for Request ID
Bumps [gunicorn](https://github.com/benoitc/gunicorn) from 22.0.0 to 25.1.0.
- [Release notes](https://github.com/benoitc/gunicorn/releases)
- [Commits](benoitc/gunicorn@22.0.0...25.1.0)

---
updated-dependencies:
- dependency-name: gunicorn
  dependency-version: 25.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
sourcery-ai bot (Contributor) commented Mar 13, 2026

Reviewer's Guide

Refactors app initialization to be deterministic and test-friendly, strengthens Firebase and device handling safety, expands test coverage for auth and device flows, modernizes logging and dependencies, and updates CI workflows to stabilize automated merges and Dependabot PR handling.

Sequence diagram for the updated smarthome intent handling

sequenceDiagram
    actor GoogleAssistant
    participant FlaskApp as FlaskApp
    participant Routes as routes_smarthome
    participant ActionDevices as action_devices
    participant MQTT as MQTTBroker
    participant Firebase as FirebaseRTDB

    GoogleAssistant->>FlaskApp: POST /smarthome (JSON)
    FlaskApp->>Routes: route request
    Routes->>Routes: validate JSON (requestId, inputs)
    alt invalid request
        Routes-->>FlaskApp: 400 Invalid request format
        FlaskApp-->>GoogleAssistant: 400 error JSON
    else valid request
        Routes->>ActionDevices: actions(req)
        loop for each intent in inputs
            alt intent == SYNC
                ActionDevices->>ActionDevices: onSync()
                alt Firebase available and initialized
                    ActionDevices->>Firebase: rsync() list devices
                    Firebase-->>ActionDevices: devices data
                else Firebase unavailable
                    ActionDevices->>ActionDevices: use MOCK_DEVICES
                end
            else intent == QUERY
                ActionDevices->>ActionDevices: onQuery(req)
                ActionDevices->>Firebase: rquery(deviceId)
                Firebase-->>ActionDevices: device state or {online: False}
            else intent == EXECUTE
                ActionDevices->>ActionDevices: onExecute(req)
                loop per device command
                    ActionDevices->>ActionDevices: commands(payload, deviceId, execCommand, params)
                    ActionDevices->>Firebase: rexecute(deviceId, params)
                    Firebase-->>ActionDevices: updated states
                    ActionDevices->>MQTT: publish notification
                    MQTT-->>ActionDevices: ack or error
                end
            else intent == DISCONNECT
                ActionDevices->>ActionDevices: clear payload
            else unknown intent
                ActionDevices->>ActionDevices: log warning and ignore
            end
        end
        ActionDevices-->>Routes: payload
        Routes->>Routes: build response {requestId, payload}
        Routes-->>FlaskApp: 200 JSON
        FlaskApp-->>GoogleAssistant: 200 JSON
    end

File-Level Changes

Change Details Files
Refactor Flask app initialization to use config objects, deterministic extension setup, and safe Firebase init, removing FULL_FEATURES/FIREBASE_AVAILABLE flags.
  • Introduce _get_config_object to pick config class via APP_ENV/FLASK_ENV and load it with app.config.from_object
  • Replace ad‑hoc config defaults and secret generation with config-driven values, logging current ENV and agent user ID
  • Always register route and auth blueprints, and initialize MQTT, SQLAlchemy, OAuth, and LoginManager in a fixed order with error handling around MQTT
  • Add _init_firebase to safely initialize Firebase only when SERVICE_ACCOUNT_DATA and DATABASEURL are configured and avoid double initialization via get_app
  • Replace before_first_request DB creation with an explicit init_database_schema helper and a Flask CLI command init-db, and call it in main before app.run
app.py
Simplify app behavior assumptions in tests and add coverage for auth, device endpoints, and action_devices helpers using an in-memory DB.
  • Force SQLALCHEMY_DATABASE_URI to sqlite:///:memory: and APP_ENV=development in tests to isolate from runtime configuration
  • Remove conditionals around FULL_FEATURES and treat root and /health endpoints as always present HTML/JSON responses
  • Add AuthTests to exercise signup and login flows and clean up the created user between tests
  • Add DeviceEndpointTests to assert /devices requires login and /smarthome accepts SYNC requests and returns payload
  • Add ActionDevicesUnitTests to exercise onSync, onQuery, onExecute error behavior and request_sync(false) semantics within app context
tests/test_app.py
Harden action_devices and related modules with logging, input validation, and better randomness for report_state.
  • Replace print debugging with structured logging via module-level loggers and log levels appropriate to context (debug, warning, error)
  • Add deviceId sanitization in rquery and rexecute to prevent path traversal, returning safe defaults on invalid IDs
  • Improve error handling in rstate, rsync, onSync/onQuery/onExecute/commands/actions/request_sync/report_state to avoid noisy prints and to return stable fallback payloads
  • Use secrets.randbelow to generate large random request IDs in report_state instead of random.randint for better randomness properties
action_devices.py
ReportState.py
Modernize OAuth and notification handling with logging and timezone-aware expiry timestamps.
  • Add logging to get_current_user, OAuth client/grant/token getters/setters, and remove print statements to avoid noisy STDOUT in production and CI
  • Change OAuth expiry calculations from datetime.utcnow to timezone-aware datetime.now(timezone.utc) before stripping tzinfo for DB compatibility
  • Add logging-based MQTT callbacks for message, publish, subscribe, and topic handlers instead of print calls
my_oauth.py
notifications.py
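The timezone-safe expiry change can be sketched as follows. Keeping the value naive-UTC (an assumption about the DB columns) preserves comparisons with existing stored rows:

```python
from datetime import datetime, timedelta, timezone

def naive_utc_expiry(seconds):
    """Expiry timestamp as naive UTC.

    datetime.utcnow() is deprecated since Python 3.12; building an aware UTC
    timestamp and stripping tzinfo keeps the stored value naive-UTC for
    compatibility with timezone-naive datetime columns."""
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    return now + timedelta(seconds=seconds)
```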
Adjust service account generation and configuration to use a repo-local template and clarify README docs.
  • Change generate_service_account_file.generate_file to always load service_account_file.json from the project root (overridable via SERVICE_ACCOUNT_FILE in config logic) and switch to logger-based diagnostics
  • Update README to explain that the repo ships service_account_file.json as a template and that SERVICE_ACCOUNT_FILE is optional for overriding its path
  • Tighten TestingConfig to use SQLALCHEMY_DATABASE_URI in-memory sqlite and disable SQLALCHEMY_TRACK_MODIFICATIONS
generate_service_account_file.py
README.md
config.py
Update dependencies and CI workflows to current versions and ensure tests run consistently in GitHub Actions.
  • Bump gunicorn, firebase_admin, PyMySQL, python-dotenv, and add pinned google-auth, requests, and cryptography versions for compatibility with newer Firebase and Google APIs
  • Upgrade GitHub Actions workflow versions (checkout/setup-python) across python-app, path-safety, and sync-main-to-container workflows
  • Keep python-app workflow behavior the same while updating action versions, so lint and pytest still run as before
requirements.txt
.github/workflows/python-app.yml
.github/workflows/path-safety.yml
.github/workflows/sync-main-to-container.yml
Wire devices/smarthome routes to stricter request handling and logging and add minimal Dependabot PR comments.
  • Add logging to /devices and /smarthome endpoints, including validation of smarthome request structure and early 400 response for malformed JSON
  • Introduce small markdown files to request Dependabot rebases for specific PR IDs to help stabilize automated merge flows
routes.py
.github/pull_requests/100_comment.md
pull_requests/99/comment.md
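The early-400 validation described above can be reduced to a pure predicate (the name and exact shape checks are assumptions); the route would return a `400 Invalid request format` response whenever it is false:

```python
def is_valid_smarthome_request(req):
    """Minimal shape check for a Google Smart Home intent payload."""
    return (
        isinstance(req, dict)
        and isinstance(req.get('requestId'), str)
        and isinstance(req.get('inputs'), list)
    )
```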


@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant improvements to the application's stability, maintainability, and security. It primarily focuses on integrating a robust logging system, updating core dependencies to their latest versions, and refactoring the main application entry point for clearer structure and initialization. Additionally, security measures have been enhanced through input sanitization and better random number generation, while the testing suite has been expanded to cover new areas. These changes collectively aim to modernize the codebase and ensure a more reliable and secure application environment.

Highlights

  • Logging Integration: Replaced direct 'print' statements with structured 'logging' across multiple Python files ('ReportState.py', 'action_devices.py', 'app.py', 'generate_service_account_file.py', 'my_oauth.py', 'notifications.py', 'routes.py') for better observability and error handling.
  • Dependency Updates & Python Version Upgrade: Upgraded several Python packages in 'requirements.txt', including 'gunicorn', 'firebase_admin', 'PyMySQL', 'requests', 'cryptography', 'google-auth', and 'python-dotenv', and updated the target Python runtime to '3.12.2' in 'runtime.txt'.
  • Codebase Refactoring & Cleanup: Streamlined 'app.py' by removing the 'FULL_FEATURES' flag, centralizing Flask app configuration, and introducing dedicated functions for Firebase and database initialization, leading to a cleaner and more modular application structure.
  • Security Enhancements: Implemented input sanitization for 'deviceId' in 'action_devices.py' to prevent potential path traversal vulnerabilities and switched to 'secrets.randbelow' for cryptographically strong random number generation for 'requestId'.
  • Improved Testing Infrastructure: Expanded 'tests/test_app.py' with new test classes for authentication and device endpoints, and removed conditional logic related to the deprecated 'FULL_FEATURES' flag, enhancing test coverage and reliability.
  • Google Service Account Handling Simplification: Updated 'README.md' and 'generate_service_account_file.py' to simplify the handling of Google Service Account files, defaulting to a template file and removing 'SERVICE_ACCOUNT_FILE' from required environment variables.
  • OAuth2 Timezone Awareness: Adjusted 'datetime.utcnow()' to 'datetime.now(timezone.utc)' in 'my_oauth.py' for more robust timezone handling in OAuth2 token and grant expiration calculations.
Changelog
  • .github/pull_requests/100_comment.md
    • Added a Dependabot rebase command.
  • README.md
    • Removed 'SERVICE_ACCOUNT_FILE' from the list of required environment variables.
    • Updated the description for Google Service Account setup to reference a default template file and clarify environment variable overrides.
  • ReportState.py
    • Imported the 'logging' module.
    • Replaced 'print' statements with 'logger.info' and 'logger.error' calls.
    • Modified JWT decoding to handle both string and bytes results from 'generate_jwt' for 'google-auth' compatibility.
  • action_devices.py
    • Imported 'logging' and 'secrets' modules.
    • Replaced 'print' statements with 'logger.debug', 'logger.info', 'logger.warning', and 'logger.error' calls.
    • Added input sanitization checks for 'deviceId' in 'rquery' and 'rexecute' functions.
    • Replaced 'random.randint' with 'secrets.randbelow' for 'requestId' generation in 'report_state'.
  • app.py
    • Removed 'dotenv' import and related conditional loading.
    • Removed 'FULL_FEATURES' flag and all associated conditional logic.
    • Introduced '_get_config_object' function to dynamically load Flask configuration based on 'APP_ENV' or 'FLASK_ENV'.
    • Moved Firebase initialization into a dedicated '_init_firebase' function, ensuring it's only initialized once and with proper configuration.
    • Removed fallback defaults for 'AGENT_USER_ID', 'API_KEY', and 'DATABASEURL' as they are now handled by config classes.
    • Removed 'SECRET_KEY' generation warning.
    • Streamlined extension initialization (MQTT, DB, OAuth, LoginManager).
    • Moved 'db.create_all()' into a new 'init_database_schema' function and exposed it as a Flask CLI command 'init-db'.
    • Removed fallback routes ('/', '/health', '/devices', '/smarthome', '/sync') that were conditionally registered when 'FULL_FEATURES' was false.
    • Updated 'load_user' to use 'db.session.get(User, int(user_id))' for SQLAlchemy 2.0 compatibility.
    • Updated 'app.run' to use 'debug' from app config.
  • config.py
    • Updated 'TestingConfig' to set 'TESTING = True', 'SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'', and 'SQLALCHEMY_TRACK_MODIFICATIONS = False'.
    • Changed 'DATABASE_URI' to 'SQLALCHEMY_DATABASE_URI'.
  • generate_service_account_file.py
    • Imported 'logging' module.
    • Replaced 'print' statements with 'logger.debug' and 'logger.warning' calls.
    • Modified 's_file' to default to 'service_account_file.json' in the base directory, rather than relying on an environment variable.
  • my_oauth.py
    • Imported 'logging' and 'timezone' from 'datetime'.
    • Replaced 'print' statements with 'logger.debug' calls.
    • Updated 'datetime.utcnow()' to 'datetime.now(timezone.utc).replace(tzinfo=None)' for timezone-aware expiration calculations.
  • notifications.py
    • Imported 'logging' module.
    • Replaced 'print' statements with 'logger.debug' calls.
  • pull_requests/99/comment.md
    • Added a Dependabot rebase command.
  • requirements.txt
    • Updated 'gunicorn' from '20.1.0' to '25.1.0'.
    • Updated 'firebase_admin' from '3.2.1' to '6.5.0'.
    • Added 'google-auth>=2.22.0'.
    • Updated 'PyMySQL' from '1.0.2' to '1.1.1'.
    • Added 'requests>=2.31.0'.
    • Added 'cryptography>=41.0.0'.
    • Updated 'python-dotenv' from '0.15.0' to '1.0.0'.
  • routes.py
    • Imported 'logging' module.
    • Added 'logger.debug' calls for various route activities.
    • Added input validation for the 'smarthome' endpoint, returning a 400 error for malformed requests.
  • runtime.txt
    • Updated Python version from 'python-3.8.9' to 'python-3.12.2'.
  • tests/test_app.py
    • Added 'pyright: reportAttributeAccessIssue=false' comment.
    • Imported 'os', 'db', 'User', and 'select'.
    • Set 'SQLALCHEMY_DATABASE_URI' and 'APP_ENV' environment variables for testing.
    • Removed 'FULL_FEATURES' import and all conditional logic related to it.
    • Removed the 'test_root_endpoint_content_contract' JSON assertion, as the root endpoint now renders HTML.
    • Removed 'skipTest' for 'test_health_endpoint_contract'.
    • Added 'AuthTests' class with 'setUp', 'tearDown', and 'test_signup_and_login' methods.
    • Added 'DeviceEndpointTests' class with 'setUp', 'test_devices_requires_login', and 'test_smarthome_sync' methods.
    • Expanded 'ActionDevicesUnitTests' with a 'setUp' method and more comprehensive assertions in 'test_onSync_and_helpers'.
Ignored Files
  • Ignored by pattern: .github/workflows/** (3)
    • .github/workflows/path-safety.yml
    • .github/workflows/python-app.yml
    • .github/workflows/sync-main-to-container.yml
Activity
  • No human activity has been recorded on this pull request since its creation.

@github-actions github-actions bot merged commit aa6bb39 into container Mar 13, 2026
5 of 7 checks passed

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant and valuable refactoring of the application. Key improvements include upgrading to Python 3.12, replacing print statements with structured logging, centralizing application configuration, enhancing security by sanitizing device IDs and using the secrets module, and greatly expanding test coverage. The application startup and database initialization logic have been modernized and made more robust.

My review has identified one issue where the implementation for loading the service account file in generate_service_account_file.py does not match the updated documentation in the README.md. A code suggestion has been provided to fix this discrepancy.

Overall, this is an excellent set of changes that significantly improves the quality, maintainability, and security of the codebase.


```diff
 def generate_file():
-    s_file = environ.get('SERVICE_ACCOUNT_FILE')
+    s_file = path.join(basedir, 'service_account_file.json')
```


Severity: medium

The implementation for getting the service account file path does not align with the updated documentation in README.md. The README.md states that the SERVICE_ACCOUNT_FILE environment variable can be used as an optional override for the default path, but the current implementation hardcodes the path, ignoring the environment variable.

To match the documentation and restore the intended flexibility, the code should attempt to use the environment variable and fall back to the default path if it's not set.

Suggested change

```diff
-s_file = path.join(basedir, 'service_account_file.json')
+s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
```


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 6 issues and left some high-level feedback:

  • In app.py, uploaded_file still reads app.config['UPLOAD_FOLDER'] but the previous hardcoded default has been removed; consider either ensuring all configs define UPLOAD_FOLDER or keeping a safe fallback to avoid runtime KeyError in environments with minimal config.
  • generate_service_account_file.generate_file() now hardcodes service_account_file.json and ignores the SERVICE_ACCOUNT_FILE env var mentioned in the README and used previously; consider restoring the ability to override the path via an environment variable to avoid breaking existing deployments.
  • The __main__ block in app.py no longer honors FLASK_RUN_HOST or a configurable port and just calls app.run(debug=...); if you still need flexible hosting in non-gunicorn usage, consider reintroducing host/port configuration from environment or config.
## Individual Comments

### Comment 1
<location path="notifications.py" line_range="15-16" />
<code_context>
 @mqtt.on_message()
 def handle_messages(_client, _userdata, message):
-    print(f'Received message on topic {message.topic}: {message.payload.decode()}')
+    logger.debug('Received message on topic %s: %s', message.topic, message.payload.decode())
     if message == 'hi':
-        print('== THIS IS NOT JOKE NO HI HERE ==')
+        logger.debug('== THIS IS NOT JOKE NO HI HERE ==')
</code_context>
<issue_to_address>
**issue (bug_risk):** The `message == 'hi'` check will never be true because `message` is an object, not the payload.

In `handle_messages`, `message` is the Paho/Flask-MQTT message object, so `if message == 'hi':` will never match. To react to a specific payload, compare against `message.payload.decode()`, e.g.:

```python
payload = message.payload.decode()
if payload == 'hi':
    logger.debug('== THIS IS NOT JOKE NO HI HERE ==')
```

Otherwise this branch is effectively dead code.
</issue_to_address>

### Comment 2
<location path="generate_service_account_file.py" line_range="14-17" />
<code_context>


 def generate_file():
-    s_file = environ.get('SERVICE_ACCOUNT_FILE')
+    s_file = path.join(basedir, 'service_account_file.json')
     with open(s_file) as json_file:
         data = json.load(json_file)
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Hardcoding the service account file path removes configurability and may break existing setups.

The previous behavior let deployments configure the JSON path via `SERVICE_ACCOUNT_FILE`; now it is fixed to `service_account_file.json` under `basedir`, which can break existing environments and complicate using multiple credentials locally. Please retain the env-based override, for example:

```python
s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
```

Suggested implementation:

```python
def generate_file():
    s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
    with open(s_file) as json_file:

```

If `environ` is not already imported in this file, add:

```python
from os import environ
```

near the existing imports (where `path` is imported) to ensure this works.
</issue_to_address>

### Comment 3
<location path="app.py" line_range="118-123" />
<code_context>
-
-    host = os.environ.get('FLASK_RUN_HOST', '127.0.0.1')
-    app.run(host=host, port=5000, debug=False)
+    init_database_schema(app)
+    app.run(debug=app.config.get('DEBUG', False))
</code_context>
<issue_to_address>
**question (bug_risk):** The `app.run` host/port behavior changed and no longer respects `FLASK_RUN_HOST`.

This means `app.run` will now always use `127.0.0.1:5000`, which can break setups that relied on `FLASK_RUN_HOST` or other host/port configs (e.g., containers needing external access). If that configurability is still required, please pass host/port explicitly from config or environment.
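A minimal sketch of one way to restore that configurability, keeping the old `FLASK_RUN_HOST` default and adding a `FLASK_RUN_PORT` override (`FLASK_RUN_PORT` is a hypothetical variable name, not something the original code used; rename to match your deployment):

```python
import os

# Resolve host/port from the environment, falling back to the old defaults
# when the variables are unset.
host = os.environ.get('FLASK_RUN_HOST', '127.0.0.1')
port = int(os.environ.get('FLASK_RUN_PORT', '5000'))

# Then, in the __main__ block:
# app.run(host=host, port=port, debug=app.config.get('DEBUG', False))
```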
</issue_to_address>

### Comment 4
<location path="my_oauth.py" line_range="44" />
<code_context>
-    print("save grant")
-    expires = datetime.utcnow() + timedelta(seconds=100)
+    logger.debug("save grant")
+    expires = datetime.now(timezone.utc).replace(tzinfo=None) + timedelta(seconds=100)
     grant = Grant(
         client_id=client_id,
</code_context>
<issue_to_address>
**nitpick (bug_risk):** Datetime handling could be simplified and clarified to avoid timezone confusion.

`expires` is now built via `datetime.now(timezone.utc).replace(tzinfo=None)`, which creates a naive datetime that is implicitly UTC. To avoid subtle timezone assumptions, either use `datetime.utcnow()` consistently if the DB expects naive UTC, or keep `expires` timezone-aware end-to-end and configure the DB/models accordingly so it’s clear the stored times are UTC.
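For illustration, the two consistent options side by side (a sketch; which one fits depends on whether the `Grant`/`Token` datetime columns are declared timezone-aware):

```python
from datetime import datetime, timedelta, timezone

# Option A: naive UTC throughout, mirroring the current code
# (matches a plain DateTime column that stores no tzinfo).
expires_naive = datetime.now(timezone.utc).replace(tzinfo=None) + timedelta(seconds=100)

# Option B: timezone-aware end-to-end
# (requires the columns to be declared with timezone support, e.g.
# SQLAlchemy's DateTime(timezone=True)).
expires_aware = datetime.now(timezone.utc) + timedelta(seconds=100)
```

Whichever option is chosen, applying it uniformly across grants and tokens avoids comparing naive and aware datetimes later.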
</issue_to_address>

### Comment 5
<location path="tests/test_app.py" line_range="89-109" />
<code_context>
         self.assertFalse(allowed_file('.hidden'))
+
+
+class AuthTests(unittest.TestCase):
+    def setUp(self):
+        flask_app.config.update(TESTING=True)
+        with flask_app.app_context():
+            db.create_all()
+        self.client = flask_app.test_client()
+
+    def tearDown(self):
+        # remove any test user that was created
+        with flask_app.app_context():
+            result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none()
+            if result:
+                db.session.delete(result)
+                db.session.commit()
+
+    def test_signup_and_login(self):
+        resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
+        self.assertIn(resp.status_code, (302, 200))
</code_context>
<issue_to_address>
**suggestion (testing):** Add negative-path auth tests (invalid login, duplicate signup, missing fields) to better cover auth flows

The new `AuthTests` currently only cover the happy-path. Please add a few failure-case tests, e.g.:
- wrong password returns the expected status and error/redirect
- non-existent user login is handled correctly
- duplicate signup with the same email follows the designed behavior
- missing required fields (e.g. no `password`) are validated as expected
This will help ensure error handling still works after the refactor.

```suggestion
class AuthTests(unittest.TestCase):
    def setUp(self):
        flask_app.config.update(TESTING=True)
        with flask_app.app_context():
            db.create_all()
        self.client = flask_app.test_client()

    def tearDown(self):
        # remove any test user that was created
        with flask_app.app_context():
            result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none()
            if result:
                db.session.delete(result)
                db.session.commit()

    def test_signup_and_login(self):
        resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        self.assertIn(resp.status_code, (302, 200))
        resp2 = self.client.post('/login', data={'email': 'a@b.com', 'password': 'pass'}, follow_redirects=True)
        self.assertEqual(resp2.status_code, 200)
        self.assertIn('Profile', resp2.get_data(as_text=True))

    def test_login_wrong_password(self):
        # first create a valid user
        self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        # attempt login with wrong password
        resp = self.client.post(
            '/login',
            data={'email': 'a@b.com', 'password': 'wrong'},
            follow_redirects=True,
        )
        # should not error and should not land on the profile
        self.assertIn(resp.status_code, (200, 302, 400, 401))
        self.assertNotIn('Profile', resp.get_data(as_text=True))

    def test_login_nonexistent_user(self):
        resp = self.client.post(
            '/login',
            data={'email': 'nonexistent@example.com', 'password': 'pass'},
            follow_redirects=True,
        )
        # app should handle gracefully and not log user in
        self.assertIn(resp.status_code, (200, 302, 400, 401))
        self.assertNotIn('Profile', resp.get_data(as_text=True))

    def test_duplicate_signup(self):
        first = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        self.assertIn(first.status_code, (302, 200))
        # second signup with same email should be treated as an error or non-successful signup
        second = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'}, follow_redirects=True)
        self.assertIn(second.status_code, (200, 302, 400, 409))
        # ensure we are not automatically logged in to a profile page on duplicate signup
        self.assertNotIn('Profile', second.get_data(as_text=True))

    def test_signup_missing_password(self):
        # missing required password field
        resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A'}, follow_redirects=True)
        # should not crash and should not log the user in
        self.assertIn(resp.status_code, (200, 400))
        self.assertNotIn('Profile', resp.get_data(as_text=True))
```
</issue_to_address>

### Comment 6
<location path="tests/test_app.py" line_range="121-126" />
<code_context>
+        resp = self.client.get('/devices')
+        self.assertEqual(resp.status_code, 302)
+
+    def test_smarthome_sync(self):
+        payload = {'requestId': '1', 'inputs': [{'intent': 'action.devices.SYNC', 'payload': {}}]}
+        resp = self.client.post('/smarthome', json=payload)
+        self.assertEqual(resp.status_code, 200)
+        data = resp.get_json()
+        self.assertIn('payload', data)
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for invalid and alternative /smarthome payloads to cover new validation logic

Since `/smarthome` now validates the request (e.g. missing `requestId`/`inputs` returns 400 with `{'error': 'Invalid request format'}`), it would be good to add tests that:
- send an empty body or omit `requestId`/`inputs` and assert the 400 + error payload
- cover other intents (QUERY, EXECUTE), checking both normal responses and error cases like `deviceNotFound`.
This will exercise the new validation and response shapes more thoroughly.

Suggested implementation:

```python
    def test_devices_requires_login(self):
        resp = self.client.get('/devices')
        self.assertEqual(resp.status_code, 302)

    def test_smarthome_sync(self):
        payload = {
            'requestId': '1',
            'inputs': [
                {
                    'intent': 'action.devices.SYNC',
                    'payload': {}
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)

    def test_smarthome_invalid_missing_request_id(self):
        payload = {
            # 'requestId' intentionally omitted
            'inputs': [
                {
                    'intent': 'action.devices.SYNC',
                    'payload': {}
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_invalid_missing_inputs(self):
        payload = {
            'requestId': '1',
            # 'inputs' intentionally omitted
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_invalid_empty_body(self):
        # Send an empty JSON object to trigger validation error
        resp = self.client.post('/smarthome', json={})
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_query_intent(self):
        payload = {
            'requestId': '2',
            'inputs': [
                {
                    'intent': 'action.devices.QUERY',
                    'payload': {
                        # Adapt device IDs to match your implementation if needed
                        'devices': [{'id': 'device-1'}],
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # At minimum, ensure the response echoes a payload for QUERY requests
        # Adjust these assertions to match the concrete response shape.
        self.assertIn('devices', data['payload'])

    def test_smarthome_query_device_not_found(self):
        payload = {
            'requestId': '3',
            'inputs': [
                {
                    'intent': 'action.devices.QUERY',
                    'payload': {
                        'devices': [{'id': 'non-existent-device'}],
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Check that an error for deviceNotFound is surfaced in the payload.
        # Depending on implementation this might be per-device or top-level.
        self.assertIn('errorCode', data['payload'])
        self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')

    def test_smarthome_execute_intent(self):
        payload = {
            'requestId': '4',
            'inputs': [
                {
                    'intent': 'action.devices.EXECUTE',
                    'payload': {
                        'commands': [
                            {
                                'devices': [{'id': 'device-1'}],
                                'execution': [
                                    {
                                        'command': 'action.devices.commands.OnOff',
                                        'params': {'on': True},
                                    }
                                ],
                            }
                        ]
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Ensure execute returns commands payload; adjust shape to match implementation.
        self.assertIn('commands', data['payload'])

    def test_smarthome_execute_device_not_found(self):
        payload = {
            'requestId': '5',
            'inputs': [
                {
                    'intent': 'action.devices.EXECUTE',
                    'payload': {
                        'commands': [
                            {
                                'devices': [{'id': 'non-existent-device'}],
                                'execution': [
                                    {
                                        'command': 'action.devices.commands.OnOff',
                                        'params': {'on': True},
                                    }
                                ],
                            }
                        ]
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Check that a deviceNotFound error is surfaced for EXECUTE
        self.assertIn('errorCode', data['payload'])
        self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')

```

These tests assume:
1. `/smarthome` returns `{'error': 'Invalid request format'}` on 400 for malformed bodies exactly as in your comment.
2. QUERY responses include `data['payload']['devices']`.
3. EXECUTE responses include `data['payload']['commands']`.
4. `deviceNotFound` errors are exposed as `data['payload']['errorCode'] == 'deviceNotFound'`.

If your actual `/smarthome` handler uses a different response structure (e.g. per-device `status`/`errorCode`, or different keys), adjust the corresponding assertions (`assertIn`/`assertEqual`) to match the real payload shape while keeping the same overall coverage: invalid format (400 + error JSON) and both success and deviceNotFound cases for QUERY and EXECUTE.
</issue_to_address>

