fix auto merge problem #109
Conversation
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 6.

- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](actions/checkout@v2...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
- Update runtime.txt to python-3.12.2
- Update non-breaking dependencies:
  - gunicorn 20.1.0 → 22.0.0 (security patches)
  - firebase-admin 3.2.1 → 6.5.0
  - pymysql 1.0.2 → 1.1.1
  - python-dotenv 0.15.0 → 1.0.0
- Add google-auth>=2.22.0, requests>=2.31.0, cryptography>=41.0.0
- Fix Python 3.12 deprecations:
  - Replace datetime.utcnow() with datetime.now(timezone.utc) in my_oauth.py
  - Remove .decode('utf-8') from jwt.encode() in ReportState.py (google-auth 2.x returns str)
- Replace all print() statements with logging module:
  - action_devices.py: 16 print() → logger calls
  - app.py: 12 print() → logger calls
  - notifications.py: 5 print() → logger calls
  - generate_service_account_file.py: 4 print() → logger calls
  - ReportState.py: 3 print() → logger calls
Tests: All 6 tests pass ✓
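The `datetime.utcnow()` replacement listed above follows the standard Python 3.12 migration pattern; a minimal sketch (illustrative only, not the project's exact code):

```python
from datetime import datetime, timezone

# Python 3.12 deprecates the naive datetime.utcnow();
# the timezone-aware replacement:
aware = datetime.now(timezone.utc)

# If downstream code (e.g. the DB layer) expects naive UTC values,
# strip tzinfo explicitly so the UTC convention stays visible at the call site.
naive_utc = aware.replace(tzinfo=None)

assert aware.tzinfo is timezone.utc
assert naive_utc.tzinfo is None
```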
Replace all 11 print() statements with structured logger.debug() calls:

- get_current_user(): log user lookup results
- load_client(): log client ID and lookup results
- load_grant(): log grant request
- save_grant(): log grant creation
- load_token(): log token request
- save_token(): log token creation and existing tokens

Maintains consistency with the logging infrastructure added in earlier Phase 1 commits. All tests passing (10/10).
…ask-Login integration
…r optional features
…ogging; remove feature flags
…andling and test isolation
…d MQTT initialization error handling
…unction for schema setup and update CLI command
Upgrade/phase-1-python-312
In `action_devices.py`, replaced the insecure `import random` and `random.randint` with the cryptographically strong `import secrets` and `secrets.randbelow` when generating the Request ID for state reports. This ensures the predictability of request IDs is eliminated, preventing potential spoofing or replay vulnerabilities related to ID generation. Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Fix invalid YAML indentation in python-app.yml workflow
…/checkout-6 Bump actions/checkout from 2 to 6
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.5.0 to 6.2.0.

- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](actions/setup-python@v4.5.0...v6.2.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
…setup-python-6.2.0 Bump actions/setup-python from 4.5.0 to 6.2.0
Update the random number generation logic based on PR feedback. Changed `10**20 - 10**19 + 1` to the simplified equivalent `9 * 10**19 + 1` to make the code easier to understand at a glance. Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
Moved `import secrets` to the top of `action_devices.py` to comply with PEP 8 style guidelines, improving readability and making dependencies clear at a glance. Co-authored-by: DaTiC0 <13198638+DaTiC0@users.noreply.github.com>
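The change described in these commits can be sketched as follows; the exact offset used to build the 20-digit range is an assumption for illustration, not taken from the diff:

```python
import secrets

# The two range sizes discussed in the PR feedback are equivalent:
assert 10**20 - 10**19 + 1 == 9 * 10**19 + 1

# Hypothetical sketch: draw a request ID uniformly from [10**19, 10**20]
# with a CSPRNG (secrets.randbelow), replacing the predictable random.randint.
request_id = 10**19 + secrets.randbelow(9 * 10**19 + 1)
assert 10**19 <= request_id <= 10**20
```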
…92052650724018031 🔒 Fix insecure random number generator for Request ID
Bumps [gunicorn](https://github.com/benoitc/gunicorn) from 22.0.0 to 25.1.0.

- [Release notes](https://github.com/benoitc/gunicorn/releases)
- [Commits](benoitc/gunicorn@22.0.0...25.1.0)

---
updated-dependencies:
- dependency-name: gunicorn
  dependency-version: 25.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Bump gunicorn from 22.0.0 to 25.1.0
Reviewer's Guide

Refactors app initialization to be deterministic and test-friendly, strengthens Firebase and device handling safety, expands test coverage for auth and device flows, modernizes logging and dependencies, and updates CI workflows to stabilize automated merges and Dependabot PR handling.

Sequence diagram for the updated smarthome intent handling:

```mermaid
sequenceDiagram
    actor GoogleAssistant
    participant FlaskApp as FlaskApp
    participant Routes as routes_smarthome
    participant ActionDevices as action_devices
    participant MQTT as MQTTBroker
    participant Firebase as FirebaseRTDB
    GoogleAssistant->>FlaskApp: POST /smarthome (JSON)
    FlaskApp->>Routes: route request
    Routes->>Routes: validate JSON (requestId, inputs)
    alt invalid request
        Routes-->>FlaskApp: 400 Invalid request format
        FlaskApp-->>GoogleAssistant: 400 error JSON
    else valid request
        Routes->>ActionDevices: actions(req)
        loop for each intent in inputs
            alt intent == SYNC
                ActionDevices->>ActionDevices: onSync()
                alt Firebase available and initialized
                    ActionDevices->>Firebase: rsync() list devices
                    Firebase-->>ActionDevices: devices data
                else Firebase unavailable
                    ActionDevices->>ActionDevices: use MOCK_DEVICES
                end
            else intent == QUERY
                ActionDevices->>ActionDevices: onQuery(req)
                ActionDevices->>Firebase: rquery(deviceId)
                Firebase-->>ActionDevices: device state or {online: False}
            else intent == EXECUTE
                ActionDevices->>ActionDevices: onExecute(req)
                loop per device command
                    ActionDevices->>ActionDevices: commands(payload, deviceId, execCommand, params)
                    ActionDevices->>Firebase: rexecute(deviceId, params)
                    Firebase-->>ActionDevices: updated states
                    ActionDevices->>MQTT: publish notification
                    MQTT-->>ActionDevices: ack or error
                end
            else intent == DISCONNECT
                ActionDevices->>ActionDevices: clear payload
            else unknown intent
                ActionDevices->>ActionDevices: log warning and ignore
            end
        end
        ActionDevices-->>Routes: payload
        Routes->>Routes: build response {requestId, payload}
        Routes-->>FlaskApp: 200 JSON
        FlaskApp-->>GoogleAssistant: 200 JSON
    end
```
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces significant improvements to the application's stability, maintainability, and security. It primarily focuses on integrating a robust logging system, updating core dependencies to their latest versions, and refactoring the main application entry point for a clearer structure and initialization. Additionally, security measures have been enhanced through input sanitization and better random number generation, while the testing suite has been expanded to cover new areas. These changes collectively aim to modernize the codebase and ensure a more reliable and secure application environment.
Code Review
This pull request introduces a significant and valuable refactoring of the application. Key improvements include upgrading to Python 3.12, replacing print statements with structured logging, centralizing application configuration, enhancing security by sanitizing device IDs and using the secrets module, and greatly expanding test coverage. The application startup and database initialization logic have been modernized and made more robust.
My review has identified one issue where the implementation for loading the service account file in generate_service_account_file.py does not match the updated documentation in the README.md. A code suggestion has been provided to fix this discrepancy.
Overall, this is an excellent set of changes that significantly improves the quality, maintainability, and security of the codebase.
```diff
 def generate_file():
-    s_file = environ.get('SERVICE_ACCOUNT_FILE')
+    s_file = path.join(basedir, 'service_account_file.json')
```
The implementation for getting the service account file path does not align with the updated documentation in README.md. The README.md states that the SERVICE_ACCOUNT_FILE environment variable can be used as an optional override for the default path, but the current implementation hardcodes the path, ignoring the environment variable.
To match the documentation and restore the intended flexibility, the code should attempt to use the environment variable and fall back to the default path if it's not set.
```diff
-    s_file = path.join(basedir, 'service_account_file.json')
+    s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
```
Hey - I've found 6 issues, and left some high level feedback:
- In `app.py`, `uploaded_file` still reads `app.config['UPLOAD_FOLDER']` but the previous hardcoded default has been removed; consider either ensuring all configs define `UPLOAD_FOLDER` or keeping a safe fallback to avoid a runtime `KeyError` in environments with minimal config.
- `generate_service_account_file.generate_file()` now hardcodes `service_account_file.json` and ignores the `SERVICE_ACCOUNT_FILE` env var mentioned in the README and used previously; consider restoring the ability to override the path via an environment variable to avoid breaking existing deployments.
- The `__main__` block in `app.py` no longer honors `FLASK_RUN_HOST` or a configurable port and just calls `app.run(debug=...)`; if you still need flexible hosting in non-gunicorn usage, consider reintroducing host/port configuration from environment or config.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `app.py`, `uploaded_file` still reads `app.config['UPLOAD_FOLDER']` but the previous hardcoded default has been removed; consider either ensuring all configs define `UPLOAD_FOLDER` or keeping a safe fallback to avoid runtime `KeyError` in environments with minimal config.
- `generate_service_account_file.generate_file()` now hardcodes `service_account_file.json` and ignores the `SERVICE_ACCOUNT_FILE` env var mentioned in the README and used previously; consider restoring the ability to override the path via an environment variable to avoid breaking existing deployments.
- The `__main__` block in `app.py` no longer honors `FLASK_RUN_HOST` or a configurable port and just calls `app.run(debug=...)`; if you still need flexible hosting in non-gunicorn usage, consider reintroducing host/port configuration from environment or config.
## Individual Comments
### Comment 1
<location path="notifications.py" line_range="15-16" />
<code_context>
@mqtt.on_message()
def handle_messages(_client, _userdata, message):
- print(f'Received message on topic {message.topic}: {message.payload.decode()}')
+ logger.debug('Received message on topic %s: %s', message.topic, message.payload.decode())
if message == 'hi':
- print('== THIS IS NOT JOKE NO HI HERE ==')
+ logger.debug('== THIS IS NOT JOKE NO HI HERE ==')
</code_context>
<issue_to_address>
**issue (bug_risk):** The `message == 'hi'` check will never be true because `message` is an object, not the payload.
In `handle_messages`, `message` is the Paho/Flask-MQTT message object, so `if message == 'hi':` will never match. To react to a specific payload, compare against `message.payload.decode()`, e.g.:
```python
payload = message.payload.decode()
if payload == 'hi':
logger.debug('== THIS IS NOT JOKE NO HI HERE ==')
```
Otherwise this branch is effectively dead code.
</issue_to_address>
### Comment 2
<location path="generate_service_account_file.py" line_range="14-17" />
<code_context>
def generate_file():
- s_file = environ.get('SERVICE_ACCOUNT_FILE')
+ s_file = path.join(basedir, 'service_account_file.json')
with open(s_file) as json_file:
data = json.load(json_file)
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Hardcoding the service account file path removes configurability and may break existing setups.
The previous behavior let deployments configure the JSON path via `SERVICE_ACCOUNT_FILE`; now it is fixed to `service_account_file.json` under `basedir`, which can break existing environments and complicate using multiple credentials locally. Please retain the env-based override, for example:
```python
s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
```
Suggested implementation:
```python
def generate_file():
s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
with open(s_file) as json_file:
```
If `environ` is not already imported in this file, add:
```python
from os import environ
```
near the existing imports (where `path` is imported) to ensure this works.
</issue_to_address>
### Comment 3
<location path="app.py" line_range="118-123" />
<code_context>
-
- host = os.environ.get('FLASK_RUN_HOST', '127.0.0.1')
- app.run(host=host, port=5000, debug=False)
+ init_database_schema(app)
+ app.run(debug=app.config.get('DEBUG', False))
</code_context>
<issue_to_address>
**question (bug_risk):** The `app.run` host/port behavior changed and no longer respects `FLASK_RUN_HOST`.
This means `app.run` will now always use `127.0.0.1:5000`, which can break setups that relied on `FLASK_RUN_HOST` or other host/port configs (e.g., containers needing external access). If that configurability is still required, please pass host/port explicitly from config or environment.
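One way to restore that flexibility (a sketch; the `FLASK_RUN_PORT` variable name and defaults are assumptions, not taken from the diff):

```python
import os

# Read host/port from the environment with safe defaults,
# then pass them to app.run() explicitly in the __main__ block.
host = os.environ.get('FLASK_RUN_HOST', '127.0.0.1')
port = int(os.environ.get('FLASK_RUN_PORT', '5000'))
# app.run(host=host, port=port, debug=app.config.get('DEBUG', False))
```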
</issue_to_address>
### Comment 4
<location path="my_oauth.py" line_range="44" />
<code_context>
- print("save grant")
- expires = datetime.utcnow() + timedelta(seconds=100)
+ logger.debug("save grant")
+ expires = datetime.now(timezone.utc).replace(tzinfo=None) + timedelta(seconds=100)
grant = Grant(
client_id=client_id,
</code_context>
<issue_to_address>
**nitpick (bug_risk):** Datetime handling could be simplified and clarified to avoid timezone confusion.
`expires` is now built via `datetime.now(timezone.utc).replace(tzinfo=None)`, which creates a naive datetime that is implicitly UTC. To avoid subtle timezone assumptions, either use `datetime.utcnow()` consistently if the DB expects naive UTC, or keep `expires` timezone-aware end-to-end and configure the DB/models accordingly so it’s clear the stored times are UTC.
</issue_to_address>
### Comment 5
<location path="tests/test_app.py" line_range="89-109" />
<code_context>
self.assertFalse(allowed_file('.hidden'))
+
+
+class AuthTests(unittest.TestCase):
+ def setUp(self):
+ flask_app.config.update(TESTING=True)
+ with flask_app.app_context():
+ db.create_all()
+ self.client = flask_app.test_client()
+
+ def tearDown(self):
+ # remove any test user that was created
+ with flask_app.app_context():
+ result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none()
+ if result:
+ db.session.delete(result)
+ db.session.commit()
+
+ def test_signup_and_login(self):
+ resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
+ self.assertIn(resp.status_code, (302, 200))
</code_context>
<issue_to_address>
**suggestion (testing):** Add negative-path auth tests (invalid login, duplicate signup, missing fields) to better cover auth flows
The new `AuthTests` currently only cover the happy-path. Please add a few failure-case tests, e.g.:
- wrong password returns the expected status and error/redirect
- non-existent user login is handled correctly
- duplicate signup with the same email follows the designed behavior
- missing required fields (e.g. no `password`) are validated as expected
This will help ensure error handling still works after the refactor.
```suggestion
class AuthTests(unittest.TestCase):
def setUp(self):
flask_app.config.update(TESTING=True)
with flask_app.app_context():
db.create_all()
self.client = flask_app.test_client()
def tearDown(self):
# remove any test user that was created
with flask_app.app_context():
result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none()
if result:
db.session.delete(result)
db.session.commit()
def test_signup_and_login(self):
resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
self.assertIn(resp.status_code, (302, 200))
resp2 = self.client.post('/login', data={'email': 'a@b.com', 'password': 'pass'}, follow_redirects=True)
self.assertEqual(resp2.status_code, 200)
self.assertIn('Profile', resp2.get_data(as_text=True))
def test_login_wrong_password(self):
# first create a valid user
self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
# attempt login with wrong password
resp = self.client.post(
'/login',
data={'email': 'a@b.com', 'password': 'wrong'},
follow_redirects=True,
)
# should not error and should not land on the profile
self.assertIn(resp.status_code, (200, 302, 400, 401))
self.assertNotIn('Profile', resp.get_data(as_text=True))
def test_login_nonexistent_user(self):
resp = self.client.post(
'/login',
data={'email': 'nonexistent@example.com', 'password': 'pass'},
follow_redirects=True,
)
# app should handle gracefully and not log user in
self.assertIn(resp.status_code, (200, 302, 400, 401))
self.assertNotIn('Profile', resp.get_data(as_text=True))
def test_duplicate_signup(self):
first = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
self.assertIn(first.status_code, (302, 200))
# second signup with same email should be treated as an error or non-successful signup
second = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'}, follow_redirects=True)
self.assertIn(second.status_code, (200, 302, 400, 409))
# ensure we are not automatically logged in to a profile page on duplicate signup
self.assertNotIn('Profile', second.get_data(as_text=True))
def test_signup_missing_password(self):
# missing required password field
resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A'}, follow_redirects=True)
# should not crash and should not log the user in
self.assertIn(resp.status_code, (200, 400))
self.assertNotIn('Profile', resp.get_data(as_text=True))
```
</issue_to_address>
### Comment 6
<location path="tests/test_app.py" line_range="121-126" />
<code_context>
+ resp = self.client.get('/devices')
+ self.assertEqual(resp.status_code, 302)
+
+ def test_smarthome_sync(self):
+ payload = {'requestId': '1', 'inputs': [{'intent': 'action.devices.SYNC', 'payload': {}}]}
+ resp = self.client.post('/smarthome', json=payload)
+ self.assertEqual(resp.status_code, 200)
+ data = resp.get_json()
+ self.assertIn('payload', data)
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for invalid and alternative /smarthome payloads to cover new validation logic
Since `/smarthome` now validates the request (e.g. missing `requestId`/`inputs` returns 400 with `{'error': 'Invalid request format'}`), it would be good to add tests that:
- send an empty body or omit `requestId`/`inputs` and assert the 400 + error payload
- cover other intents (QUERY, EXECUTE), checking both normal responses and error cases like `deviceNotFound`.
This will exercise the new validation and response shapes more thoroughly.
Suggested implementation:
```python
def test_devices_requires_login(self):
resp = self.client.get('/devices')
self.assertEqual(resp.status_code, 302)
def test_smarthome_sync(self):
payload = {
'requestId': '1',
'inputs': [
{
'intent': 'action.devices.SYNC',
'payload': {}
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 200)
data = resp.get_json()
self.assertIn('payload', data)
def test_smarthome_invalid_missing_request_id(self):
payload = {
# 'requestId' intentionally omitted
'inputs': [
{
'intent': 'action.devices.SYNC',
'payload': {}
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 400)
data = resp.get_json()
self.assertEqual(data, {'error': 'Invalid request format'})
def test_smarthome_invalid_missing_inputs(self):
payload = {
'requestId': '1',
# 'inputs' intentionally omitted
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 400)
data = resp.get_json()
self.assertEqual(data, {'error': 'Invalid request format'})
def test_smarthome_invalid_empty_body(self):
# Send an empty JSON object to trigger validation error
resp = self.client.post('/smarthome', json={})
self.assertEqual(resp.status_code, 400)
data = resp.get_json()
self.assertEqual(data, {'error': 'Invalid request format'})
def test_smarthome_query_intent(self):
payload = {
'requestId': '2',
'inputs': [
{
'intent': 'action.devices.QUERY',
'payload': {
# Adapt device IDs to match your implementation if needed
'devices': [{'id': 'device-1'}],
},
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 200)
data = resp.get_json()
self.assertIn('payload', data)
# At minimum, ensure the response echoes a payload for QUERY requests
# Adjust these assertions to match the concrete response shape.
self.assertIn('devices', data['payload'])
def test_smarthome_query_device_not_found(self):
payload = {
'requestId': '3',
'inputs': [
{
'intent': 'action.devices.QUERY',
'payload': {
'devices': [{'id': 'non-existent-device'}],
},
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 200)
data = resp.get_json()
self.assertIn('payload', data)
# Check that an error for deviceNotFound is surfaced in the payload.
# Depending on implementation this might be per-device or top-level.
self.assertIn('errorCode', data['payload'])
self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')
def test_smarthome_execute_intent(self):
payload = {
'requestId': '4',
'inputs': [
{
'intent': 'action.devices.EXECUTE',
'payload': {
'commands': [
{
'devices': [{'id': 'device-1'}],
'execution': [
{
'command': 'action.devices.commands.OnOff',
'params': {'on': True},
}
],
}
]
},
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 200)
data = resp.get_json()
self.assertIn('payload', data)
# Ensure execute returns commands payload; adjust shape to match implementation.
self.assertIn('commands', data['payload'])
def test_smarthome_execute_device_not_found(self):
payload = {
'requestId': '5',
'inputs': [
{
'intent': 'action.devices.EXECUTE',
'payload': {
'commands': [
{
'devices': [{'id': 'non-existent-device'}],
'execution': [
{
'command': 'action.devices.commands.OnOff',
'params': {'on': True},
}
],
}
]
},
}
],
}
resp = self.client.post('/smarthome', json=payload)
self.assertEqual(resp.status_code, 200)
data = resp.get_json()
self.assertIn('payload', data)
# Check that a deviceNotFound error is surfaced for EXECUTE
self.assertIn('errorCode', data['payload'])
self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')
```
These tests assume:
1. `/smarthome` returns `{'error': 'Invalid request format'}` on 400 for malformed bodies exactly as in your comment.
2. QUERY responses include `data['payload']['devices']`.
3. EXECUTE responses include `data['payload']['commands']`.
4. `deviceNotFound` errors are exposed as `data['payload']['errorCode'] == 'deviceNotFound'`.
If your actual `/smarthome` handler uses a different response structure (e.g. per-device `status`/`errorCode`, or different keys), adjust the corresponding assertions (`assertIn`/`assertEqual`) to match the real payload shape while keeping the same overall coverage: invalid format (400 + error JSON) and both success and deviceNotFound cases for QUERY and EXECUTE.
</issue_to_address>
| logger.debug('Received message on topic %s: %s', message.topic, message.payload.decode()) | ||
| if message == 'hi': |
There was a problem hiding this comment.
issue (bug_risk): The message == 'hi' check will never be true because message is an object, not the payload.
In handle_messages, message is the Paho/Flask-MQTT message object, so if message == 'hi': will never match. To react to a specific payload, compare against message.payload.decode(), e.g.:
payload = message.payload.decode()
if payload == 'hi':
logger.debug('== THIS IS NOT JOKE NO HI HERE ==')Otherwise this branch is effectively dead code.
| s_file = environ.get('SERVICE_ACCOUNT_FILE') | ||
| s_file = path.join(basedir, 'service_account_file.json') | ||
| with open(s_file) as json_file: | ||
| data = json.load(json_file) | ||
| data.update({ |
There was a problem hiding this comment.
suggestion (bug_risk): Hardcoding the service account file path removes configurability and may break existing setups.
The previous behavior let deployments configure the JSON path via SERVICE_ACCOUNT_FILE; now it is fixed to service_account_file.json under basedir, which can break existing environments and complicate using multiple credentials locally. Please retain the env-based override, for example:
s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))Suggested implementation:
def generate_file():
s_file = environ.get('SERVICE_ACCOUNT_FILE', path.join(basedir, 'service_account_file.json'))
with open(s_file) as json_file:If environ is not already imported in this file, add:
from os import environnear the existing imports (where path is imported) to ensure this works.
| init_database_schema(app) | ||
|
|
||
| @app.route('/health') | ||
| def health(): | ||
| return jsonify({ | ||
| 'status': 'degraded', | ||
| 'service': 'smart-google', | ||
| 'mqtt_connected': False, | ||
| }), 503 | ||
|
|
||
| try: | ||
| from action_devices import onSync, actions, request_sync, report_state | ||
|
|
||
| @app.route('/devices') | ||
| def devices(): | ||
| try: | ||
| return onSync() | ||
| except Exception as e: | ||
| return {'error': str(e)}, 500 | ||
|
|
||
| @app.route('/smarthome', methods=['POST']) | ||
| def smarthome(): | ||
| try: | ||
| req_data = request.get_json() | ||
| result = { | ||
| 'requestId': req_data.get('requestId', 'unknown'), | ||
| 'payload': actions(req_data), | ||
| } | ||
| return jsonify(result) | ||
| except Exception as e: | ||
| return {'error': str(e)}, 500 | ||
|
|
||
| @app.route('/sync') | ||
| def sync(): | ||
| try: | ||
| success = request_sync(app.config['API_KEY'], app.config['AGENT_USER_ID']) | ||
| state_result = report_state() | ||
| return {'sync_requested': True, 'success': success, 'state_report': state_result} | ||
| except Exception as e: | ||
| return {'error': str(e)}, 500 | ||
|
|
||
| except ImportError as e: | ||
| print(f"Could not import action_devices: {e}") | ||
|
|
||
| if FULL_FEATURES: | ||
| try: | ||
| @app.before_first_request | ||
| def create_db_command(): | ||
| """Search for tables and if there is no data create new tables.""" | ||
| print('DB Engine: ' + app.config.get('SQLALCHEMY_DATABASE_URI', 'sqlite').split(':')[0]) | ||
| db.create_all(app=app) | ||
| print('Initialized the database.') | ||
| except Exception as e: | ||
| print(f"Could not set up database initialization: {e}") | ||
|
|
||
| if __name__ == '__main__': | ||
| print("Starting Smart-Google Flask Application") | ||
| print(f"Full features: {FULL_FEATURES}") | ||
|
|
||
| if FULL_FEATURES: | ||
| try: | ||
| db.create_all(app=app) | ||
| except Exception: | ||
| pass | ||
|
|
||
| host = os.environ.get('FLASK_RUN_HOST', '127.0.0.1') | ||
| app.run(host=host, port=5000, debug=False) | ||
| init_database_schema(app) | ||
| app.run(debug=app.config.get('DEBUG', False)) |
There was a problem hiding this comment.
question (bug_risk): The app.run host/port behavior changed and no longer respects FLASK_RUN_HOST.
This means app.run will now always use 127.0.0.1:5000, which can break setups that relied on FLASK_RUN_HOST or other host/port configs (e.g., containers needing external access). If that configurability is still required, please pass host/port explicitly from config or environment.
| print("save grant") | ||
| expires = datetime.utcnow() + timedelta(seconds=100) | ||
| logger.debug("save grant") | ||
| expires = datetime.now(timezone.utc).replace(tzinfo=None) + timedelta(seconds=100) |
There was a problem hiding this comment.
nitpick (bug_risk): Datetime handling could be simplified and clarified to avoid timezone confusion.
expires is now built via datetime.now(timezone.utc).replace(tzinfo=None), which creates a naive datetime that is implicitly UTC. To avoid subtle timezone assumptions, either use datetime.utcnow() consistently if the DB expects naive UTC, or keep expires timezone-aware end-to-end and configure the DB/models accordingly so it’s clear the stored times are UTC.
| class AuthTests(unittest.TestCase): | ||
| def setUp(self): | ||
| flask_app.config.update(TESTING=True) | ||
| with flask_app.app_context(): | ||
| db.create_all() | ||
| self.client = flask_app.test_client() | ||
|
|
||
| def tearDown(self): | ||
| # remove any test user that was created | ||
| with flask_app.app_context(): | ||
| result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none() | ||
| if result: | ||
| db.session.delete(result) | ||
| db.session.commit() | ||
|
|
||
| def test_signup_and_login(self): | ||
| resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'}) | ||
| self.assertIn(resp.status_code, (302, 200)) | ||
| resp2 = self.client.post('/login', data={'email': 'a@b.com', 'password': 'pass'}, follow_redirects=True) | ||
| self.assertEqual(resp2.status_code, 200) | ||
| self.assertIn('Profile', resp2.get_data(as_text=True)) |
There was a problem hiding this comment.
suggestion (testing): Add negative-path auth tests (invalid login, duplicate signup, missing fields) to better cover auth flows
The new AuthTests currently only cover the happy-path. Please add a few failure-case tests, e.g.:
- wrong password returns the expected status and error/redirect
- non-existent user login is handled correctly
- duplicate signup with the same email follows the designed behavior
- missing required fields (e.g. no
password) are validated as expected
This will help ensure error handling still works after the refactor.
Suggested change:

```python
class AuthTests(unittest.TestCase):
    def setUp(self):
        flask_app.config.update(TESTING=True)
        with flask_app.app_context():
            db.create_all()
        self.client = flask_app.test_client()

    def tearDown(self):
        # remove any test user that was created
        with flask_app.app_context():
            result = db.session.execute(select(User).filter_by(email='a@b.com')).scalar_one_or_none()
            if result:
                db.session.delete(result)
                db.session.commit()

    def test_signup_and_login(self):
        resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        self.assertIn(resp.status_code, (302, 200))
        resp2 = self.client.post('/login', data={'email': 'a@b.com', 'password': 'pass'}, follow_redirects=True)
        self.assertEqual(resp2.status_code, 200)
        self.assertIn('Profile', resp2.get_data(as_text=True))

    def test_login_wrong_password(self):
        # first create a valid user
        self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        # attempt login with wrong password
        resp = self.client.post(
            '/login',
            data={'email': 'a@b.com', 'password': 'wrong'},
            follow_redirects=True,
        )
        # should not error and should not land on the profile
        self.assertIn(resp.status_code, (200, 302, 400, 401))
        self.assertNotIn('Profile', resp.get_data(as_text=True))

    def test_login_nonexistent_user(self):
        resp = self.client.post(
            '/login',
            data={'email': 'nonexistent@example.com', 'password': 'pass'},
            follow_redirects=True,
        )
        # app should handle gracefully and not log user in
        self.assertIn(resp.status_code, (200, 302, 400, 401))
        self.assertNotIn('Profile', resp.get_data(as_text=True))

    def test_duplicate_signup(self):
        first = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'})
        self.assertIn(first.status_code, (302, 200))
        # second signup with same email should be treated as an error or non-successful signup
        second = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A', 'password': 'pass'}, follow_redirects=True)
        self.assertIn(second.status_code, (200, 302, 400, 409))
        # ensure we are not automatically logged in to a profile page on duplicate signup
        self.assertNotIn('Profile', second.get_data(as_text=True))

    def test_signup_missing_password(self):
        # missing required password field
        resp = self.client.post('/signup', data={'email': 'a@b.com', 'name': 'A'}, follow_redirects=True)
        # should not crash and should not log the user in
        self.assertIn(resp.status_code, (200, 400))
        self.assertNotIn('Profile', resp.get_data(as_text=True))
```
```python
    def test_smarthome_sync(self):
        payload = {'requestId': '1', 'inputs': [{'intent': 'action.devices.SYNC', 'payload': {}}]}
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
```
suggestion (testing): Add tests for invalid and alternative /smarthome payloads to cover new validation logic

Since /smarthome now validates the request (e.g. missing requestId/inputs returns 400 with {'error': 'Invalid request format'}), it would be good to add tests that:

- send an empty body or omit requestId/inputs and assert the 400 status plus error payload
- cover the other intents (QUERY, EXECUTE), checking both normal responses and error cases like deviceNotFound

This will exercise the new validation and response shapes more thoroughly.
Suggested implementation:
```python
    def test_devices_requires_login(self):
        resp = self.client.get('/devices')
        self.assertEqual(resp.status_code, 302)

    def test_smarthome_sync(self):
        payload = {
            'requestId': '1',
            'inputs': [
                {
                    'intent': 'action.devices.SYNC',
                    'payload': {},
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)

    def test_smarthome_invalid_missing_request_id(self):
        payload = {
            # 'requestId' intentionally omitted
            'inputs': [
                {
                    'intent': 'action.devices.SYNC',
                    'payload': {},
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_invalid_missing_inputs(self):
        payload = {
            'requestId': '1',
            # 'inputs' intentionally omitted
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_invalid_empty_body(self):
        # Send an empty JSON object to trigger validation error
        resp = self.client.post('/smarthome', json={})
        self.assertEqual(resp.status_code, 400)
        data = resp.get_json()
        self.assertEqual(data, {'error': 'Invalid request format'})

    def test_smarthome_query_intent(self):
        payload = {
            'requestId': '2',
            'inputs': [
                {
                    'intent': 'action.devices.QUERY',
                    'payload': {
                        # Adapt device IDs to match your implementation if needed
                        'devices': [{'id': 'device-1'}],
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # At minimum, ensure the response echoes a payload for QUERY requests.
        # Adjust these assertions to match the concrete response shape.
        self.assertIn('devices', data['payload'])

    def test_smarthome_query_device_not_found(self):
        payload = {
            'requestId': '3',
            'inputs': [
                {
                    'intent': 'action.devices.QUERY',
                    'payload': {
                        'devices': [{'id': 'non-existent-device'}],
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Check that an error for deviceNotFound is surfaced in the payload.
        # Depending on implementation this might be per-device or top-level.
        self.assertIn('errorCode', data['payload'])
        self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')

    def test_smarthome_execute_intent(self):
        payload = {
            'requestId': '4',
            'inputs': [
                {
                    'intent': 'action.devices.EXECUTE',
                    'payload': {
                        'commands': [
                            {
                                'devices': [{'id': 'device-1'}],
                                'execution': [
                                    {
                                        'command': 'action.devices.commands.OnOff',
                                        'params': {'on': True},
                                    }
                                ],
                            }
                        ]
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Ensure execute returns a commands payload; adjust shape to match implementation.
        self.assertIn('commands', data['payload'])

    def test_smarthome_execute_device_not_found(self):
        payload = {
            'requestId': '5',
            'inputs': [
                {
                    'intent': 'action.devices.EXECUTE',
                    'payload': {
                        'commands': [
                            {
                                'devices': [{'id': 'non-existent-device'}],
                                'execution': [
                                    {
                                        'command': 'action.devices.commands.OnOff',
                                        'params': {'on': True},
                                    }
                                ],
                            }
                        ]
                    },
                }
            ],
        }
        resp = self.client.post('/smarthome', json=payload)
        self.assertEqual(resp.status_code, 200)
        data = resp.get_json()
        self.assertIn('payload', data)
        # Check that a deviceNotFound error is surfaced for EXECUTE
        self.assertIn('errorCode', data['payload'])
        self.assertEqual(data['payload']['errorCode'], 'deviceNotFound')
```

These tests assume:

- /smarthome returns {'error': 'Invalid request format'} with a 400 status for malformed bodies, exactly as in your comment.
- QUERY responses include data['payload']['devices'].
- EXECUTE responses include data['payload']['commands'].
- deviceNotFound errors are exposed as data['payload']['errorCode'] == 'deviceNotFound'.

If your actual /smarthome handler uses a different response structure (e.g. per-device status/errorCode, or different keys), adjust the corresponding assertions (assertIn/assertEqual) to match the real payload shape while keeping the same overall coverage: invalid format (400 + error JSON) and both success and deviceNotFound cases for QUERY and EXECUTE.
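If the handler does report errors per device rather than at the top level, one way to keep the tests tolerant of either shape is a small helper. This is a hypothetical sketch, not part of the suggested diff; the key names (`payload`, `devices`, `errorCode`) are assumptions about the response shape and should be aligned with the real handler.

```python
def assert_device_not_found(data: dict, device_id: str) -> None:
    """Accept either error shape for a deviceNotFound response (sketch).

    Assumes the response nests results under data['payload'] and that
    per-device results are keyed by device ID under 'devices'.
    """
    payload = data['payload']
    if 'errorCode' in payload:
        # Top-level error shape
        assert payload['errorCode'] == 'deviceNotFound'
    else:
        # Per-device shape, e.g. QUERY-style responses keyed by device ID
        assert payload['devices'][device_id]['errorCode'] == 'deviceNotFound'
```

A test would then call `assert_device_not_found(resp.get_json(), 'non-existent-device')` instead of hard-coding one of the two shapes.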
Summary by Sourcery
Simplify application startup by assuming all core extensions are available, centralizing configuration and Firebase initialization, and improving logging and database setup.