Conversation
Merge commit message:

```text
# Conflicts:
#	poetry.lock
#	pyproject.toml
```
📝 Walkthrough

This PR migrates configuration to Pydantic BaseSettings with snake_case fields, adds persistent user settings (DB + JSON fallback), introduces an asyncio AutostartManager, updates route handlers to use UserSettings, and adjusts event logging to use a centralized node_database connection.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Server
    participant AutostartManager
    participant Settings as Settings
    participant PodOrc as ExternalService
    Server->>AutostartManager: update()
    AutostartManager->>Settings: load persistent autostart config
    Settings-->>AutostartManager: autostart.enabled, interval
    alt autostart enabled
        AutostartManager->>AutostartManager: start _run_autostart loop
        loop every interval
            AutostartManager->>PodOrc: probe/start request
            PodOrc-->>AutostartManager: status/response
            AutostartManager->>Server: initiate analysis start (if needed)
        end
    end
    Server->>AutostartManager: stop()
    AutostartManager->>AutostartManager: cancel background task
```
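The lifecycle above can be sketched as a minimal asyncio task manager. This is an illustration only, not the project's actual class: the name `_run_autostart` mirrors the diagram, and the pod-orchestrator probe is stubbed out as a counter.

```python
import asyncio


class AutostartManager:
    """Minimal sketch of the start/stop lifecycle in the diagram above."""

    def __init__(self) -> None:
        self._task: asyncio.Task | None = None
        self.probes = 0  # stub standing in for requests to the pod orchestrator

    async def _run_autostart(self, interval: float) -> None:
        while True:
            self.probes += 1  # real code would probe/start the external service here
            await asyncio.sleep(interval)

    async def update(self, enabled: bool, interval: float) -> None:
        if enabled and self._task is None:
            self._task = asyncio.create_task(self._run_autostart(interval))
        elif not enabled and self._task is not None:
            await self.stop()

    async def stop(self) -> None:
        if self._task is not None:
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass  # cancellation is the expected shutdown path
            self._task = None


async def demo() -> int:
    manager = AutostartManager()
    await manager.update(enabled=True, interval=0.01)
    await asyncio.sleep(0.05)
    await manager.stop()
    return manager.probes
```

Cancelling the task and suppressing `CancelledError` on shutdown matches the `stop()` step in the diagram.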
```mermaid
sequenceDiagram
    participant Client
    participant UserSettingsModule as user_settings
    participant DB as PostgresDB
    participant JSONFile as CachedJSON
    Client->>UserSettingsModule: load_persistent_settings()
    UserSettingsModule->>DB: attempt _load_from_database()
    alt DB available
        DB-->>UserSettingsModule: config dict
    else DB unavailable
        UserSettingsModule->>JSONFile: _load_from_json()
        JSONFile-->>UserSettingsModule: config dict
    end
    UserSettingsModule-->>Client: UserSettings instance
    Client->>UserSettingsModule: update_settings(new_values)
    UserSettingsModule->>DB: _save_to_database(updated_dict)
    DB-->>UserSettingsModule: success/failed
    UserSettingsModule->>JSONFile: _save_to_json(updated_dict, db_saved)
    JSONFile-->>UserSettingsModule: written
    UserSettingsModule-->>Client: updated UserSettings
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 9
🧹 Nitpick comments (7)
hub_adapter/routers/podorc.py (1)
**48-48**: Prefer a single shared settings instance for decorator `service_url` values.

These lines repeatedly call `get_settings()` during module import. Consider initializing one module-level settings object and reusing it for all decorator arguments.

Refactor pattern:

```diff
 from hub_adapter.dependencies import compile_analysis_pod_data, get_settings
 @@
 logger = logging.getLogger(__name__)
+settings = get_settings()
 @@
-    service_url=get_settings().podorc_service_url,
+    service_url=settings.podorc_service_url,
```

Also applies to: 75-75, 91-91, 108-108, 124-124, 142-142, 158-158, 176-176, 192-192, 209-209, 225-225, 242-242, 258-258, 275-275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/routers/podorc.py` at line 48, The module repeatedly calls get_settings() inline for decorator arguments (e.g., service_url=get_settings().podorc_service_url) causing redundant calls at import; create a single module-level settings variable (e.g., settings = get_settings()) at top of hub_adapter/routers/podorc.py and update all decorator usages (those with service_url=...) to reference settings.podorc_service_url (and any other settings properties) so every decorator (lines referencing service_url in the file) reuses the same settings instance.

hub_adapter/database.py (1)
**35-35**: Use `logger.exception` inside the exception handler.

On Line 35, switch to `logger.exception(...)` to preserve traceback context automatically.

Proposed tweak:

```diff
-        logger.error(f"Unable to connect to database: {db_err}")
+        logger.exception("Unable to connect to database")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/database.py` at line 35, Replace the logger.error call in the exception handler with logger.exception to preserve the traceback: inside the block that currently does logger.error(f"Unable to connect to database: {db_err}"), call logger.exception("Unable to connect to database", exc_info=True) or simply logger.exception(f"Unable to connect to database: {db_err}") so the logger (logger) records the full exception context for the database connection error (db_err).

hub_adapter/autostart.py (1)
**463-471**: Avoid unconditional restart when autostart is already enabled.

Line 463 currently restarts the task on every `update()` call, even if the interval is unchanged. This can interrupt a running cycle unnecessarily.

♻️ Suggested refactor:

```diff
 class AutostartManager:
 @@
     def __init__(self):
         self._task: asyncio.Task | None = None
         self._enabled = False
+        self._interval: int | None = None
 @@
         if user_enabled and not self._enabled:
 @@
             self._task = asyncio.create_task(self._run_autostart(interval))
             self._enabled = True
+            self._interval = interval
 @@
         elif not user_enabled and self._enabled:
 @@
             self._task = None
             self._enabled = False
+            self._interval = None
         elif user_enabled and self._enabled:
-            # Interval might have changed, restart if needed
-            if self._task and not self._task.done():
-                self._task.cancel()
-                with suppress(asyncio.CancelledError):
-                    await self._task
-
-            logger.info(f"Restarting autostart with new interval {interval}s")
-            self._task = asyncio.create_task(self._run_autostart(interval))
+            if interval != self._interval:
+                if self._task and not self._task.done():
+                    self._task.cancel()
+                    with suppress(asyncio.CancelledError):
+                        await self._task
+                logger.info(f"Restarting autostart with new interval {interval}s")
+                self._task = asyncio.create_task(self._run_autostart(interval))
+            self._interval = interval
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/autostart.py` around lines 463 - 471, The current update logic in update()/autostart handling restarts the autostart task unconditionally when user_enabled and self._enabled are true; change it so the task is only restarted when the effective interval changed. In the block where you compare user_enabled and self._enabled, compute the new interval and compare it to the existing interval stored on the instance (e.g., self._interval or whatever field holds the active interval); only cancel/await and recreate self._task via asyncio.create_task(self._run_autostart(interval)) if the interval differs (otherwise leave the running self._task untouched), preserving the existing cancellation/await logic and the logger message to indicate when a restart actually occurs.

hub_adapter/user_settings.py (4)
**125-134**: Use `logging.exception` for automatic traceback inclusion.

♻️ Proposed fix:

```diff
     except Exception as file_err:
-        logger.error(f"Failed to save settings to JSON file: {file_err}")
+        logger.exception("Failed to save settings to JSON file")
         raise
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/user_settings.py` around lines 125 - 134, In _save_to_json, replace the logger.error call inside the except block with logger.exception so the traceback is captured automatically; keep the descriptive message (referencing SETTINGS_PATH/save failure) and re-raise the exception as before. This updates function _save_to_json and uses the existing logger and file_err context to log full traceback for easier debugging.
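To illustrate the `logger.error` vs `logger.exception` nitpicks in this file: `logger.exception` logs at ERROR level and automatically attaches the active traceback, so the error no longer needs to be interpolated by hand. A standalone sketch (the failing save is simulated):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("settings")


def save_settings() -> None:
    raise OSError("disk full")  # simulated file-save failure


try:
    save_settings()
except OSError:
    # Unlike logger.error(f"...: {err}"), this records the full traceback.
    logger.exception("Failed to save settings to JSON file")
```

`logger.exception(...)` is only valid inside an `except` block, where there is an active exception to capture.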
**172-191**: Add type annotation to `new_settings` parameter.

The parameter lacks a type annotation, though the docstring specifies `dict[str, Any]`.

♻️ Proposed fix:

```diff
-def update_settings(new_settings) -> UserSettings:
+def update_settings(new_settings: dict[str, Any]) -> UserSettings:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/user_settings.py` around lines 172 - 191, Update the update_settings signature to add the missing type annotation for new_settings (use dict[str, Any]) and ensure Any is imported from typing; update the function declaration of update_settings(new_settings) -> UserSettings to update_settings(new_settings: dict[str, Any]) -> UserSettings, and add or adjust the import statement (from typing import Any) at the top of the file if not present; keep the rest of the logic that uses load_persistent_settings, _deep_merge, UserSettings, and save_persistent_settings unchanged.
**89-101**: Improve exception handling and logging.

Static analysis flags valid concerns here:

- Catching bare `Exception` can mask unexpected errors
- `logging.exception` automatically includes the traceback

♻️ Proposed fix:

```diff
 def _load_from_json() -> dict:
     """Load settings from JSON file."""
     if not SETTINGS_PATH.exists():
         return {}
     try:
-        saved_settings = json.loads(SETTINGS_PATH.read_text())
-        logger.info("Loaded settings from JSON file backup")
-        return saved_settings
-    except Exception as file_err:
-        logger.error(f"Failed to load from JSON file: {file_err}")
+        return json.loads(SETTINGS_PATH.read_text())
+    except (json.JSONDecodeError, OSError) as file_err:
+        logger.exception(f"Failed to load from JSON file: {file_err}")
         return {}
+    else:
+        logger.info("Loaded settings from JSON file backup")
```

Note: The `else` block with logging after `return` won't execute, so either keep the log inside `try` or remove it entirely if the info-level message isn't needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/user_settings.py` around lines 89 - 101, In _load_from_json(), stop catching a bare Exception and instead handle specific errors: catch json.JSONDecodeError for invalid JSON and OSError (or subclasses) for file/read issues; use logger.exception(...) when logging these failures so the traceback is recorded; keep the existing logger.info("Loaded settings from JSON file backup") inside the try block (or remove it if you prefer), and return {} on error as before. Reference: function _load_from_json, constant SETTINGS_PATH, and the current logger usage in that function.
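The narrower handler suggested above can be sketched as a standalone function (illustrative, not the project's exact `_load_from_json`):

```python
import json
from pathlib import Path


def load_json_settings(path: Path) -> dict:
    """Only catch the failures a JSON file read can actually produce."""
    if not path.exists():
        return {}
    try:
        return json.loads(path.read_text())
    except (json.JSONDecodeError, OSError):
        # A bare `except Exception` here would also swallow programming
        # errors (NameError, KeyError, ...) introduced by later refactors.
        return {}
```

`json.JSONDecodeError` covers malformed content and `OSError` covers read failures (permissions, I/O errors); anything else propagates, which is usually what you want.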
**25-48**: Consider strengthening the type signature for better type safety.

The decorator claims to preserve the return type `T`, but `fallback_value` is typed as `Any`. This works in practice since callers provide matching types, but a generic approach would make this type-safe.

♻️ Optional: Type-safe fallback signature:

```diff
-def with_db_fallback(fallback_value: Any = None, log_message: str = "Database operation failed"):
+def with_db_fallback(fallback_value: T = None, log_message: str = "Database operation failed") -> Callable[[Callable[..., T]], Callable[..., T]]:
```

Alternatively, you could use `@overload` to express the relationship between `fallback_value` and the decorated function's return type.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hub_adapter/user_settings.py` around lines 25 - 48, The decorator with_db_fallback currently declares a generic return type T but accepts fallback_value: Any, weakening type safety; fix by introducing a return-type type variable (e.g., R or Ret) and change fallback_value's type to Optional[Ret] (or Ret | None) and update the inner Callable/Wrapper signatures to use Ret so the decorated function's return type and fallback_value are tied together; alternatively add `@overload` signatures on with_db_fallback to express the relationship between the provided fallback_value and the decorated function's return type—update references to with_db_fallback, decorator, wrapper, fallback_value and T accordingly.
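A sketch of the TypeVar-based alternative discussed above. The names mirror the review comment but the body is illustrative; the logging call is stubbed as a comment.

```python
from collections.abc import Callable
from functools import wraps
from typing import TypeVar

T = TypeVar("T")


def with_db_fallback(
    fallback_value: T,
    log_message: str = "Database operation failed",
) -> Callable[[Callable[..., T]], Callable[..., T]]:
    """Tie the fallback's type to the decorated function's return type."""

    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        @wraps(func)
        def wrapper(*args, **kwargs) -> T:
            try:
                return func(*args, **kwargs)
            except Exception:
                # real code would call logger.exception(log_message) here
                return fallback_value

        return wrapper

    return decorator


@with_db_fallback(fallback_value={})
def read_settings() -> dict:
    raise ConnectionError("database unavailable")  # simulated outage
```

With this shape, a type checker flags `@with_db_fallback(fallback_value=0)` applied to a function returning `dict`, which the `Any`-typed version silently accepts.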
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@hub_adapter/auth.py`:
- Around line 167-169: The call to _get_internal_token inside
_add_internal_token_if_missing is missing the required settings argument and
currently relies on Depends in the function signature which isn't injected in
normal calls; update the call to pass the settings explicitly (e.g., await
_get_internal_token(oidc_config, get_settings())) or refactor
_get_internal_token to accept Settings directly (remove Depends(...) from its
signature) and then pass the Settings from _add_internal_token_if_missing (using
get_settings()) so the function is invoked with both oidc_config and settings.
In `@hub_adapter/core.py`:
- Around line 276-279: The err_msg assignment currently drops the second
f-string (so the HTTP status fragment is lost); update the construction of
err_msg (used in the DecodingError path) to produce a single combined
string—e.g., merge both parts into one f-string or explicitly concatenate
them—so that the HTTP status value (status.HTTP_500_INTERNAL_SERVER_ERROR) is
included alongside method.upper() and microsvc_path in err_msg.
In `@hub_adapter/database.py`:
- Line 42: The module currently calls connect_to_db() at import time via the
node_database = connect_to_db() statement which prevents retries if Postgres is
down; remove the top-level call and replace it with a lazily-initialized
accessor (e.g., get_node_database() or ensure_node_database_connected()) that
calls connect_to_db() on first use and implements retry/backoff or surfaces
errors to callers; update any code that referenced the module-level
node_database to call the accessor instead so DB init happens at runtime and can
retry or fail gracefully.
In `@hub_adapter/event_logging.py`:
- Around line 155-160: The get_event_logger() startup path only catches
pw.PeeweeException but psycopg2.DatabaseError can also be raised when calling
node_database.connect(...) or creating EventLogger; update the exception handler
around the connection and EventLogger construction in get_event_logger to catch
both pw.PeeweeException and DatabaseError (imported DatabaseError) and, on
either exception, log the error and set logging_enabled/return a disabled
EventLogger so event logging is gracefully turned off rather than crashing the
app.
In `@hub_adapter/routers/kong.py`:
- Around line 328-333: The handler signatures use mutable list literals as
default values for parameters (methods, protocols, ds_type and in
create_datastore_and_project_with_link), which can lead to shared-state bugs;
change these to safe immutable defaults or use Body(default_factory=...) or None
+ set a new list inside the function. Specifically, for the parameters named
methods and protocols (and the analogous list parameter in
create_datastore_and_project_with_link), replace the list literal defaults with
either tuples (e.g. ("GET",)) or use Body(default_factory=lambda: ["GET"]) or
accept None and assign a fresh list at runtime, and ensure DataStoreType default
remains an immutable value; update the Annotated/Body calls around methods,
protocols and create_datastore_and_project_with_link accordingly.
In `@hub_adapter/routers/meta.py`:
- Line 140: The code constructs microsvc_path by calling get_settings()
directly; update it to use the injected settings instance instead to respect
dependency injection and test overrides—replace the call to get_settings() in
the microsvc_path assignment with the existing settings variable (keep using
analysis_id) so microsvc_path =
f"{settings.podorc_service_url}/po/delete/{analysis_id}" within the same scope
where settings is available.
In `@hub_adapter/routers/node.py`:
- Around line 33-49: The current handler calls
update_settings(node_settings.model_dump()) which serializes defaults and can
overwrite stored values; change this to pass only explicitly provided fields by
calling node_settings.model_dump(exclude_unset=True) before calling
update_settings, and replace the unreliable presence check if "autostart" in
node_settings with a check against the model_fields_set (e.g. "autostart" in
node_settings.model_fields_set) so autostart_manager.update() is triggered only
when the autostart field was actually supplied; keep references to
update_settings, node_settings, and autostart_manager.update when making these
edits.
In `@hub_adapter/user_settings.py`:
- Around line 165-169: The docstring in save_persistent_settings is malformed;
replace the current incorrect text with a concise, accurate description of the
function and its parameters and behavior: state that
save_persistent_settings(settings: UserSettings) serializes the UserSettings
(excluding None fields) to a dict, persists it via _save_to_database, then
writes JSON via _save_to_json, and mention what the function returns or that it
returns None; update the docstring above the save_persistent_settings definition
accordingly and keep it a proper triple-quoted string.
In `@tests/test_user_settings.py`:
- Line 87: Remove the unused "result" variable assignments in
tests/test_user_settings.py where you call functions like failing_function()
(e.g., lines showing "result = failing_function()") — these variables are never
read and trigger Ruff F841; simply replace the assignments with direct calls
(e.g., call failing_function() without assigning) or, if the return value is
needed for assertion, use it accordingly; update all occurrences mentioned
(including the other instances noted) to eliminate the unused "result" bindings.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (28)
- .pre-commit-config.yaml
- hub_adapter/auth.py
- hub_adapter/autostart.py
- hub_adapter/conf.py
- hub_adapter/core.py
- hub_adapter/database.py
- hub_adapter/dependencies.py
- hub_adapter/event_logging.py
- hub_adapter/models/node.py
- hub_adapter/oidc.py
- hub_adapter/routers/auth.py
- hub_adapter/routers/events.py
- hub_adapter/routers/health.py
- hub_adapter/routers/kong.py
- hub_adapter/routers/meta.py
- hub_adapter/routers/node.py
- hub_adapter/routers/podorc.py
- hub_adapter/routers/storage.py
- hub_adapter/server.py
- hub_adapter/user_settings.py
- hub_adapter/utils.py
- pyproject.toml
- tests/conftest.py
- tests/router_tests/test_kong.py
- tests/test_auth.py
- tests/test_dependencies.py
- tests/test_oidc.py
- tests/test_user_settings.py
💤 Files with no reviewable changes (1)
- hub_adapter/models/node.py
```diff
     if logging_enabled:
         logger.info(f"Event logging enabled, connecting to database at {settings.POSTGRES_EVENT_HOST}")
         required = {
             "database": settings.POSTGRES_EVENT_DB,
             "user": settings.POSTGRES_EVENT_USER,
             "password": settings.POSTGRES_EVENT_PASSWORD,
             "host": settings.POSTGRES_EVENT_HOST,
             "port": settings.POSTGRES_EVENT_PORT,
         }

         if not all(required.values()):
             redacted = {**required, "password": "***"}
             raise ValueError(f"Unable to connect to database due to incomplete configuration settings: {redacted}")

         event_db = pw.PostgresqlDatabase(**required)

         # Test connection
-        with bind_to(event_db):
-            event_db.connect(reuse_if_open=True)
+        with bind_to(node_database):
+            node_database.connect(reuse_if_open=True)

-        event_logger = EventLogger(event_db, enabled=logging_enabled)
+        event_logger = EventLogger(node_database, enabled=logging_enabled)
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== setup_event_logging call sites =="
rg -n -C2 '\bsetup_event_logging\s*\('
echo
echo "== node_database definition and usage =="
db_file="$(fd '^database\.py$' hub_adapter | head -n1 || true)"
if [ -n "${db_file:-}" ]; then
  rg -n -C3 '\bnode_database\b' "$db_file" hub_adapter/event_logging.py
else
  echo "No hub_adapter/database.py found via fd"
fi
```

Repository: PrivateAIM/node-hub-api-adapter

🏁 Script executed:

```shell
sed -n '170,195p' hub_adapter/event_logging.py
```

🏁 Script executed:

```shell
echo "== get_event_logger call sites =="
rg -n -C2 '\bget_event_logger\s*\('
echo
echo "== Full setup_event_logging function =="
sed -n '146,162p' hub_adapter/event_logging.py
```

🏁 Script executed:

```shell
sed -n '60,75p' hub_adapter/server.py
```

🏁 Script executed:

```shell
cat -n hub_adapter/database.py | head -50
```
Harden exception handling in get_event_logger() to catch all database-related exceptions.
The exception handler currently only catches pw.PeeweeException, but DatabaseError from psycopg2 is imported and could be raised during connection. An unhandled DatabaseError during app startup (in the lifespan context at server.py:66) would fail service boot. The exception handler should also catch DatabaseError to gracefully disable event logging.
Recommended fix
Expand the exception handler in get_event_logger() to catch both exception types:
```diff
     if (not event_logger or not event_logger.event_db) and node_database:
         try:
             setup_event_logging()
-        except pw.PeeweeException as db_err:
+        except (pw.PeeweeException, DatabaseError) as db_err:
             logger.warning(str(db_err).strip())
             logger.warning("Event logging disabled due to database configuration or connection error")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hub_adapter/event_logging.py` around lines 155 - 160, The get_event_logger()
startup path only catches pw.PeeweeException but psycopg2.DatabaseError can also
be raised when calling node_database.connect(...) or creating EventLogger;
update the exception handler around the connection and EventLogger construction
in get_event_logger to catch both pw.PeeweeException and DatabaseError (imported
DatabaseError) and, on either exception, log the error and set
logging_enabled/return a disabled EventLogger so event logging is gracefully
turned off rather than crashing the app.
🧹 Nitpick comments (1)
pyproject.toml (1)
**22-22**: Consider adding an upper major bound for `pydantic-settings` to prevent future breaking changes.

On Line 22, `>=2.13.1` is unbounded and could pull incompatible major versions once released. While version 3.x does not currently exist on PyPI, adding `<3` is a preventative best practice for safer dependency management.

Suggested dependency constraint update:

```diff
-    "pydantic-settings>=2.13.1",
+    "pydantic-settings>=2.13.1,<3",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` at line 22, The dependency entry for "pydantic-settings>=2.13.1" in pyproject.toml is unbounded on major version; change the constraint to include an upper major bound (e.g., "<3") so the requirement becomes a range that prevents automatically pulling future breaking major releases—update the "pydantic-settings>=2.13.1" entry to use a bounded constraint like "pydantic-settings>=2.13.1,<3" to ensure safer dependency management.
Summary by CodeRabbit

- New Features
- Refactor
- Tests