
feat(auth): add modular signup endpoint with language enum and standardized errors #37

Merged
aniebietafia merged 4 commits into main from feat/signup on Mar 15, 2026

Conversation

aniebietafia (Contributor) commented Mar 15, 2026

feat(auth): add modular signup endpoint with language enum and standardized errors

  • add /api/v1/auth/signup route and v1 router registration
  • add SupportedLanguage enum and schema normalization for user payloads
  • implement DB session dependency and user CRUD create/get by email
  • enforce duplicate email conflict with EMAIL_ALREADY_REGISTERED
  • add signup integration tests and schema validation tests
  • migrate model/schema timestamps away from datetime.utcnow deprecations
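The enum and payload-normalization bullets above can be sketched in isolation. This is a minimal stdlib sketch, assuming hypothetical helper names (`normalize_email`, `strip_full_name`) and illustrative language codes; the PR's actual Pydantic validators are not reproduced here.

```python
from enum import Enum
from typing import Optional


class SupportedLanguage(str, Enum):
    # Language codes assumed for illustration; the PR's actual
    # enum members are not shown in this thread.
    ENGLISH = "en"
    FRENCH = "fr"
    SPANISH = "es"


def normalize_email(email: str) -> str:
    """Trim and lowercase an email, as the signup schema normalization does."""
    return email.strip().lower()


def strip_full_name(value: Optional[str]) -> Optional[str]:
    """Trim a full name; collapse empty or whitespace-only input to None."""
    if value is None:
        return None
    stripped = value.strip()
    return stripped or None
```

The same trimming rule appears in both `UserBase` and `UserUpdate` in the review below, which is why a shared helper like this is suggested there.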

Summary by CodeRabbit

  • New Features

    • Public signup API added with signup response payload and duplicate-email conflict handling
    • Multi-language preferences (speaking/listening) added for users
    • API v1 router mounted so v1 endpoints are exposed
  • Improvements

    • Timestamps are now stored/returned as timezone-aware UTC values
    • Email and full-name inputs normalized/trimmed automatically
  • Tests

    • New integration and unit tests covering signup, validation, and schema behavior
  • Documentation

    • Added plan for a Mailgun-backed email service integration via message queue

feat(auth): add modular signup endpoint with language enum and standardized errors

- add /api/v1/auth/signup route and v1 router registration
- add SupportedLanguage enum and schema normalization for user payloads
- implement DB session dependency and user CRUD create/get by email
- enforce duplicate email conflict with EMAIL_ALREADY_REGISTERED
- add signup integration tests and schema validation tests
- migrate model/schema timestamps away from datetime.utcnow deprecations

Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
coderabbitai bot commented Mar 15, 2026

Warning

Rate limit exceeded

@aniebietafia has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 25 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b7eff667-241f-4e85-8fa1-0e70a29affb1

📥 Commits

Reviewing files that changed from the base of the PR and between 1d2d841 and cd60bf6.

📒 Files selected for processing (2)
  • app/main.py
  • tests/test_auth/test_auth_signup.py
📝 Walkthrough

Adds a user signup feature (CRUD, API endpoint, schemas, tests), introduces timezone-aware datetimes across models/migrations/Kafka schemas, adds DB session/engine utilities, tightens CI coverage threshold, and adds documentation for an email service.

Changes

  • CI (.github/workflows/ci.yml): increases pytest coverage threshold from 60% to 80%.
  • Database migrations & model timestamps (alembic/versions/11781e907181_initial_migration.py, app/models/user.py): makes user datetime columns timezone-aware (DateTime(timezone=True)), adds deleted_at, adds utc_now default/onupdate.
  • DB engine & session utilities (app/db/session.py, app/db/__init__.py): adds SessionLocal, get_engine (with driver fallback), get_db generator; re-exports get_engine instead of engine.
  • API routing (app/api/v1/api.py, app/api/v1/__init__.py, app/main.py): introduces api_router, includes auth router, and mounts v1 router on app with configured prefix.
  • Auth endpoint (app/api/v1/endpoints/auth.py): adds POST /auth/signup endpoint using UserCreate, returns SignupResponse, depends on DB session and create_user CRUD.
  • CRUD user logic (app/crud/user.py, app/crud/__init__.py): adds hash_password, get_user_by_email, create_user with uniqueness check and ConflictException; re-exports functions at package level.
  • Schemas & validation (app/schemas/user.py, app/schemas/auth.py): adds SupportedLanguage enum, validators (email normalization, full_name trimming), uses ConfigDict/from_attributes; adds SignupResponse.
  • Kafka schemas (app/kafka/schemas.py): defaults for event timestamps changed to timezone-aware UTC (datetime.now(UTC)).
  • Tests (tests/test_auth/test_auth_signup.py, tests/test_auth/test_schemas_user.py): adds integration tests for signup (success, duplicate email, invalid language) with in-memory DB and a unit test for UserResponse model_validate mapping.
  • Docs (docs/email_service.md): adds Mailgun-backed email service design and integration plan using Kafka.
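The timestamp migration listed above replaces the deprecated datetime.utcnow() with timezone-aware values. A minimal sketch of a utc_now helper, assuming the walkthrough's naming; the repository's exact implementation is not shown in this thread:

```python
from datetime import datetime, timezone


def utc_now() -> datetime:
    """Timezone-aware replacement for the deprecated datetime.utcnow()."""
    return datetime.now(timezone.utc)
```

A function like this can then be passed as `default`/`onupdate` for the model's DateTime(timezone=True) columns, so every stored timestamp carries an explicit UTC offset.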

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant API as FastAPI (api_router/auth)
  participant CRUD as CRUD Layer (create_user)
  participant DB as Database (Session/Engine)

  Client->>API: POST /api/v1/auth/signup (UserCreate)
  API->>CRUD: create_user(db, user_in)
  CRUD->>DB: INSERT user (hashed password, languages, timestamps)
  DB-->>CRUD: return persisted User
  CRUD-->>API: return User
  API-->>Client: 201 SignupResponse (public fields)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

size/L

Poem

🐰 A tiny hop, a user born today,
Timezones tidy, languages in play,
From client to DB the carrots flow,
Tests nod approval — off we go! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 5.56%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title clearly describes the main changes (adding a signup endpoint with language enum support and standardized error handling), which aligns with the primary objectives of the PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@aniebietafia aniebietafia linked an issue Mar 15, 2026 that may be closed by this pull request
10 tasks
coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (5)
app/schemas/auth.py (1)

4-5: Consider an explicit signup response schema to keep the API contract stable.

Using inheritance is fine now, but it can unintentionally expand /signup output if UserResponse grows later. An explicit schema here would decouple the auth contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/schemas/auth.py` around lines 4 - 5, SignupResponse currently inherits
from UserResponse which can accidentally expand the /signup output when
UserResponse changes; replace the inheritance with an explicit
dataclass/pydantic model (e.g., define SignupResponse fields explicitly) that
lists only the intended public fields for the signup endpoint, update any
references to SignupResponse accordingly, and ensure any schema export/tests use
the new explicit SignupResponse instead of relying on UserResponse propagation.
app/schemas/user.py (1)

46-52: Consider extracting the duplicated strip_full_name validator.

The same validator logic is duplicated in UserBase (lines 27-33) and UserUpdate (lines 46-52).

♻️ Optional: Extract to a reusable function
+def _strip_or_none(value: str | None) -> str | None:
+    if value is None:
+        return None
+    stripped = value.strip()
+    return stripped or None
+
+
 class UserBase(BaseModel):
     # ...
     `@field_validator`("full_name", mode="before")
     `@classmethod`
     def strip_full_name(cls, value: str | None) -> str | None:
-        if value is None:
-            return None
-        stripped_value = value.strip()
-        return stripped_value or None
+        return _strip_or_none(value)

Apply the same pattern to UserUpdate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/schemas/user.py` around lines 46 - 52, The strip_full_name validator
logic is duplicated between UserBase and UserUpdate; extract the
trimming/null-coercion logic into a shared helper (e.g., a function named
strip_optional_str or normalize_optional_str) and have both class-level
validators (strip_full_name in UserBase and UserUpdate) call that helper (keep
the `@field_validator`("full_name", mode="before") decorator and simply delegate
to the new function), so you remove duplication while preserving behavior and
return typing.
app/models/user.py (1)

37-38: Consider adding a CHECK constraint or using a DB-level enum for language columns.

The language columns store plain strings while the schema layer uses SupportedLanguage enum. This creates a gap where invalid language codes could be inserted directly via SQL or migrations.

💡 Optional: Add database-level validation
# Option 1: Use SQLAlchemy Enum type
from sqlalchemy import Enum as SAEnum

speaking_language: Mapped[str] = mapped_column(
    SAEnum("en", "fr", "de", "es", "it", "pt", name="supported_language"),
    default="en"
)

# Option 2: Add a CHECK constraint via __table_args__

This is optional since the application layer already enforces valid values through Pydantic validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/models/user.py` around lines 37 - 38, The speaking_language and
listening_language columns are plain strings while the app uses
SupportedLanguage, so add DB-level validation: replace the mapped_column types
for speaking_language and listening_language to use a SQLAlchemy Enum (SAEnum)
constrained to SupportedLanguage values, or alternatively add a __table_args__
with a CheckConstraint that restricts those columns to the SupportedLanguage
set; update imports accordingly and ensure the default remains "en" while
preserving the attribute names speaking_language and listening_language.
tests/test_auth/test_schemas_user.py (1)

21-24: Consider adding assertions for remaining fields.

The test validates id, email, and language fields but doesn't assert on full_name, is_active, is_verified, or created_at.

💡 Optional: Add comprehensive assertions
     assert result.id == 123
     assert result.email == "person@example.com"
     assert result.speaking_language == SupportedLanguage.ENGLISH
     assert result.listening_language == SupportedLanguage.FRENCH
+    assert result.full_name == "Test Person"
+    assert result.is_active is True
+    assert result.is_verified is False
+    assert result.created_at == source.created_at
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_auth/test_schemas_user.py` around lines 21 - 24, Add assertions
for the remaining User schema fields: verify result.full_name equals the
expected name from the test fixture (e.g., "Person Name"), assert
result.is_active and result.is_verified match the fixture booleans, and assert
result.created_at is present and is a datetime (or matches the expected
timestamp). Update the test that references result and SupportedLanguage to
include these checks so the schema test verifies full_name, is_active,
is_verified, and created_at in addition to id, email, and language fields.
tests/test_auth/test_auth_signup.py (1)

40-43: Use pop() instead of clear() for safer cleanup.

Using clear() removes all dependency overrides, which could interfere with other fixtures or tests that set their own overrides. Removing only the specific override this fixture added is safer.

Proposed fix
     app.dependency_overrides[get_db] = _override_get_db
     with TestClient(app) as test_client:
         yield test_client
-    app.dependency_overrides.clear()
+    app.dependency_overrides.pop(get_db, None)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_auth/test_auth_signup.py` around lines 40 - 43, The fixture
currently calls app.dependency_overrides.clear() which indiscriminately removes
all overrides; instead remove only the override you set for get_db by popping it
after the TestClient context (use app.dependency_overrides.pop(get_db, None) or
check and del app.dependency_overrides[get_db]) so only the override added for
_override_get_db is removed and other tests' overrides remain intact; update the
cleanup in the block that currently contains app.dependency_overrides.clear() to
pop the get_db entry.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/crud/user.py`:
- Around line 30-49: The current read-before-write uniqueness check using
get_user_by_email is racy: wrap the db.commit() that persists the new User
(created in this block with email=user_in.email.lower() and
hashed_password=hash_password(...)) in a try/except that catches
sqlalchemy.exc.IntegrityError (or the DB driver's IntegrityError), calls
db.rollback(), and raises ConflictException(code="EMAIL_ALREADY_REGISTERED",
message="An account with this email already exists.") when the IntegrityError
indicates a duplicate-email constraint; re-raise other exceptions after
rollback. Ensure db.add(db_user) remains before the try and db.refresh/db_user
only run after a successful commit.

In `@app/db/session.py`:
- Line 10: The module currently creates a synchronous DB engine at import time
with create_engine(DATABASE_URL) (variable engine), which fails in CI because
only the async driver (asyncpg) is installed; either install a sync Postgres
driver (add psycopg2-binary or psycopg[binary] to requirements) or, preferably,
defer engine creation: remove the top-level engine instantiation and provide a
lazy factory such as get_engine() or create_engine_if_needed(DATABASE_URL) that
constructs the engine on first use (or accept an injected engine) so tests can
override DATABASE_URL or install drivers before the engine is created.

In `@docs/email_service.md`:
- Line 32: Update the Mailgun SDK reference text: replace the non-standard
package name string "mailgun2" with the correct PyPI package name "mailgun" and
adjust the install instruction to "pip install mailgun" in the documentation
line that currently reads '*   **Mailgun SDK**: Use the `mailgun2` library...';
ensure the sentence mentions the official package name "mailgun" and/or direct
httpx calls to the Mailgun /messages endpoint.

In `@tests/test_auth/test_auth_signup.py`:
- Around line 28-32: The teardown block currently closes the DB session and
drops tables but does not dispose the SQLAlchemy engine, which can leak
connections; after calling Base.metadata.drop_all(bind=engine) in the fixture
teardown (the try/finally that yields db and then calls db.close() and
Base.metadata.drop_all(bind=engine)), add a call to engine.dispose() to release
connection resources held by the engine.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 102b21b3-8505-4e74-82f2-6a3157ec3dfe

📥 Commits

Reviewing files that changed from the base of the PR and between ced48fd and 695f1fa.

📒 Files selected for processing (18)
  • .github/workflows/ci.yml
  • alembic/versions/11781e907181_initial_migration.py
  • app/api/v1/__init__.py
  • app/api/v1/api.py
  • app/api/v1/endpoints/auth.py
  • app/crud/__init__.py
  • app/crud/user.py
  • app/db/__init__.py
  • app/db/session.py
  • app/kafka/schemas.py
  • app/main.py
  • app/models/user.py
  • app/schemas/auth.py
  • app/schemas/user.py
  • docs/email_service.md
  • tests/test_auth/__init__.py
  • tests/test_auth/test_auth_signup.py
  • tests/test_auth/test_schemas_user.py

Comment on lines +30 to +49
existing_user = get_user_by_email(db, user_in.email)
if existing_user:
    raise ConflictException(
        code="EMAIL_ALREADY_REGISTERED",
        message="An account with this email already exists.",
    )

db_user = User(
    email=user_in.email.lower(),
    hashed_password=hash_password(user_in.password),
    full_name=user_in.full_name,
    speaking_language=user_in.speaking_language.value,
    listening_language=user_in.listening_language.value,
    is_active=True,
    is_verified=False,
)
db.add(db_user)
db.commit()
db.refresh(db_user)
return db_user

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== create_user transaction/error handling ==="
rg -nP 'def create_user|db\.commit\(|db\.rollback\(|IntegrityError|ConflictException' app/crud/user.py -C 3

echo "=== email uniqueness constraints ==="
rg -nP 'email.*unique=True|ix_users_email|UniqueConstraint' app/models/user.py alembic/versions/11781e907181_initial_migration.py -C 2

Repository: Brints/FluentMeet

Length of output: 1885


Handle unique-email race condition at commit time with proper error handling.

The application-level read-before-write check is vulnerable to a race condition under concurrency. Two concurrent requests can both pass the uniqueness check on line 31 and attempt to insert the same email, causing the second request to fail at the database constraint (unique index on email exists in migrations). Without error handling around db.commit(), this surfaces as an unhandled IntegrityError instead of your intended ConflictException.

Wrap db.commit() in a try-except block to catch IntegrityError and map it to ConflictException, with rollback on any exception:

Suggested fix
 from typing import cast
 
 import bcrypt
 from passlib.context import CryptContext
+from sqlalchemy.exc import IntegrityError
 from sqlalchemy import select
 from sqlalchemy.orm import Session
@@
 def create_user(db: Session, user_in: UserCreate) -> User:
@@
     db.add(db_user)
-    db.commit()
+    try:
+        db.commit()
+    except IntegrityError as exc:
+        db.rollback()
+        raise ConflictException(
+            code="EMAIL_ALREADY_REGISTERED",
+            message="An account with this email already exists.",
+        ) from exc
+    except Exception:
+        db.rollback()
+        raise
     db.refresh(db_user)
     return db_user

8. Integration tests verify the full flow: producer publishes → consumer dispatches → Mailgun API is called.

**Proposed Technical Details**
* **Mailgun SDK**: Use the `mailgun2` library (already in `requirements.txt`) or direct `httpx` calls to the Mailgun `/messages` endpoint.
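The direct-HTTP alternative mentioned above amounts to an authenticated POST to Mailgun's /messages endpoint. A hedged stdlib sketch that only builds the request without sending it; the domain, API key, and helper name are illustrative placeholders, not values from this repository:

```python
import urllib.request
from base64 import b64encode
from urllib.parse import urlencode


def build_mailgun_request(domain: str, api_key: str, sender: str,
                          to: str, subject: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a POST to Mailgun's /messages endpoint."""
    url = f"https://api.mailgun.net/v3/{domain}/messages"
    body = urlencode({"from": sender, "to": to,
                      "subject": subject, "text": text}).encode()
    # Mailgun uses HTTP Basic auth with the literal username "api".
    auth = b64encode(f"api:{api_key}".encode()).decode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Basic {auth}"},
    )
```

An `httpx` version would pass the same fields via `auth=("api", api_key)` and `data=...`; the consumer worker would call this once per dequeued email event.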

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Python Mailgun SDK package name PyPI

💡 Result:

The official Mailgun Python SDK on PyPI is published as the package `mailgun`.

Install with:

pip install mailgun



Update the Mailgun library reference to use the correct package name.

Line 32 references mailgun2, but the official Mailgun Python SDK on PyPI is published as mailgun. Update the reference to use mailgun (installable via pip install mailgun) instead of the non-standard mailgun2 package name.


coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/db/session.py`:
- Around line 22-28: The current except block silently falls back to SQLite when
a PostgreSQL DBAPI is missing (checking DATABASE_URL and creating cached_engine
with DEFAULT_SQLITE_URL); change this so missing drivers fail fast in non-test
environments: detect an explicit test-only flag (e.g., a TESTING or IS_TEST_ENV
environment/config variable) and only perform the SQLite fallback when that flag
is true, otherwise re-raise the ModuleNotFoundError or raise a clear
RuntimeError with context including DATABASE_URL and exc.name; update the code
paths around the create_engine/cached_engine handling in session.py to reference
DATABASE_URL, DEFAULT_SQLITE_URL, and the new TESTING flag so deployments never
silently switch to SQLite.
- Around line 18-32: The engine initialization performs an unsynchronized
check-then-set on _ENGINE_STATE which can race when two threads call
get_engine() concurrently; add a module-level threading.Lock (e.g., _ENGINE_LOCK
= Lock()) and use it to serialize the first-use initialization: perform a fast
check for _ENGINE_STATE["engine"], if missing acquire _ENGINE_LOCK, re-check
_ENGINE_STATE["engine"] (double-checked locking), then create the engine with
create_engine(...), call SessionLocal.configure(bind=...), and set
_ENGINE_STATE["engine"] before releasing the lock so only one engine/pool is
created.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cd5e6dcc-1087-43b6-9692-54f411eca2ef

📥 Commits

Reviewing files that changed from the base of the PR and between 695f1fa and 1d2d841.

📒 Files selected for processing (2)
  • app/db/__init__.py
  • app/db/session.py

Comment on lines +18 to +32
cached_engine = _ENGINE_STATE.get("engine")
if cached_engine is None:
    try:
        cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
    except ModuleNotFoundError as exc:
        # CI/test environments may not install PostgreSQL DBAPI drivers.
        if DATABASE_URL.startswith("postgresql") and exc.name in {
            "psycopg2",
            "psycopg",
        }:
            cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
        else:
            raise
    SessionLocal.configure(bind=cached_engine)
    _ENGINE_STATE["engine"] = cached_engine

⚠️ Potential issue | 🟠 Major

Make engine initialization thread-safe.

get_engine() does an unsynchronized check-then-set on _ENGINE_STATE. Two concurrent first requests can both create an engine and race SessionLocal.configure(...), leaving multiple pools alive and startup binding nondeterministic.

🧵 One way to serialize first-use initialization
 def get_engine() -> Engine:
     cached_engine = _ENGINE_STATE.get("engine")
     if cached_engine is None:
-        try:
-            cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
-        except ModuleNotFoundError as exc:
-            # CI/test environments may not install PostgreSQL DBAPI drivers.
-            if DATABASE_URL.startswith("postgresql") and exc.name in {
-                "psycopg2",
-                "psycopg",
-            }:
-                cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
-            else:
-                raise
-        SessionLocal.configure(bind=cached_engine)
-        _ENGINE_STATE["engine"] = cached_engine
+        with _ENGINE_LOCK:
+            cached_engine = _ENGINE_STATE.get("engine")
+            if cached_engine is None:
+                try:
+                    cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
+                except ModuleNotFoundError as exc:
+                    # CI/test environments may not install PostgreSQL DBAPI drivers.
+                    if DATABASE_URL.startswith("postgresql") and exc.name in {
+                        "psycopg2",
+                        "psycopg",
+                    }:
+                        cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
+                    else:
+                        raise
+                SessionLocal.configure(bind=cached_engine)
+                _ENGINE_STATE["engine"] = cached_engine
     return cached_engine

Add a module lock next to _ENGINE_STATE:

from threading import Lock

_ENGINE_LOCK = Lock()

Comment on lines +22 to +28
except ModuleNotFoundError as exc:
# CI/test environments may not install PostgreSQL DBAPI drivers.
if DATABASE_URL.startswith("postgresql") and exc.name in {
"psycopg2",
"psycopg",
}:
cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)

⚠️ Potential issue | 🔴 Critical

Fail fast outside tests instead of silently switching to SQLite.

This fallback is not actually scoped to CI/tests. Any runtime with a PostgreSQL URL and a missing driver will start writing to sqlite:///./fluentmeet.db instead of failing, which can hide a broken deployment and point the service at the wrong database. Please gate this behind an explicit test-only setting or remove the fallback in non-test environments.

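The gating this comment asks for can be sketched as a small decision helper. The `TESTING` flag, function name, and structure here are assumptions for illustration, not the repository's actual code:

```python
import os

DEFAULT_SQLITE_URL = "sqlite:///./fluentmeet.db"


def resolve_engine_url(database_url: str, driver_available: bool) -> str:
    """Pick the URL to bind; fall back to SQLite only under an explicit test flag."""
    if driver_available:
        return database_url
    if os.getenv("TESTING") == "1" and database_url.startswith("postgresql"):
        # Test-only escape hatch: CI may not install a PostgreSQL DBAPI.
        return DEFAULT_SQLITE_URL
    raise RuntimeError(
        f"No DBAPI driver available for {database_url}; refusing to fall back."
    )
```

With a guard like this, a production deployment that is missing psycopg fails at startup with a clear error instead of silently writing to a local SQLite file.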

@aniebietafia aniebietafia merged commit 33d46b4 into main Mar 15, 2026
9 checks passed


Development

Successfully merging this pull request may close these issues.

Authentication & User Management

1 participant