feat(auth): add modular signup endpoint with language enum and standa…#37
aniebietafia merged 4 commits into main from
Conversation
…rdized errors

- add /api/v1/auth/signup route and v1 router registration
- add SupportedLanguage enum and schema normalization for user payloads
- implement DB session dependency and user CRUD create/get by email
- enforce duplicate email conflict with EMAIL_ALREADY_REGISTERED
- add signup integration tests and schema validation tests
- migrate model/schema timestamps away from datetime.utcnow deprecations

Signed-off-by: aniebietafia <aniebietafia87@gmail.com>
📝 Walkthrough

Adds a user signup feature (CRUD, API endpoint, schemas, tests), introduces timezone-aware datetimes across models/migrations/Kafka schemas, adds DB session/engine utilities, tightens the CI coverage threshold, and adds documentation for an email service.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant API as FastAPI (api_router/auth)
    participant CRUD as CRUD Layer (create_user)
    participant DB as Database (Session/Engine)
    Client->>API: POST /api/v1/auth/signup (UserCreate)
    API->>CRUD: create_user(db, user_in)
    CRUD->>DB: INSERT user (hashed password, languages, timestamps)
    DB-->>CRUD: return persisted User
    CRUD-->>API: return User
    API-->>Client: 201 SignupResponse (public fields)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (5)
app/schemas/auth.py (1)
4-5: Consider an explicit signup response schema to keep the API contract stable.

Using inheritance is fine now, but it can unintentionally expand the `/signup` output if `UserResponse` grows later. An explicit schema here would decouple the auth contract.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/schemas/auth.py` around lines 4 - 5, SignupResponse currently inherits from UserResponse which can accidentally expand the /signup output when UserResponse changes; replace the inheritance with an explicit dataclass/pydantic model (e.g., define SignupResponse fields explicitly) that lists only the intended public fields for the signup endpoint, update any references to SignupResponse accordingly, and ensure any schema export/tests use the new explicit SignupResponse instead of relying on UserResponse propagation.

app/schemas/user.py (1)
46-52: Consider extracting the duplicated strip_full_name validator.

The same validator logic is duplicated in `UserBase` (lines 27-33) and `UserUpdate` (lines 46-52).

♻️ Optional: Extract to a reusable function

```diff
+def _strip_or_none(value: str | None) -> str | None:
+    if value is None:
+        return None
+    stripped = value.strip()
+    return stripped or None
+
+
 class UserBase(BaseModel):
     # ...
     @field_validator("full_name", mode="before")
     @classmethod
     def strip_full_name(cls, value: str | None) -> str | None:
-        if value is None:
-            return None
-        stripped_value = value.strip()
-        return stripped_value or None
+        return _strip_or_none(value)
```

Apply the same pattern to `UserUpdate`.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/schemas/user.py` around lines 46 - 52, The strip_full_name validator logic is duplicated between UserBase and UserUpdate; extract the trimming/null-coercion logic into a shared helper (e.g., a function named strip_optional_str or normalize_optional_str) and have both class-level validators (strip_full_name in UserBase and UserUpdate) call that helper (keep the `@field_validator`("full_name", mode="before") decorator and simply delegate to the new function), so you remove duplication while preserving behavior and return typing.

app/models/user.py (1)
37-38: Consider adding a CHECK constraint or using a DB-level enum for language columns.

The language columns store plain strings while the schema layer uses the `SupportedLanguage` enum. This creates a gap where invalid language codes could be inserted directly via SQL or migrations.

💡 Optional: Add database-level validation

```python
# Option 1: Use the SQLAlchemy Enum type
from sqlalchemy import Enum as SAEnum

speaking_language: Mapped[str] = mapped_column(
    SAEnum("en", "fr", "de", "es", "it", "pt", name="supported_language"),
    default="en",
)

# Option 2: Add a CHECK constraint via __table_args__
```

This is optional since the application layer already enforces valid values through Pydantic validation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/models/user.py` around lines 37 - 38, The speaking_language and listening_language columns are plain strings while the app uses SupportedLanguage, so add DB-level validation: replace the mapped_column types for speaking_language and listening_language to use a SQLAlchemy Enum (SAEnum) constrained to SupportedLanguage values, or alternatively add a __table_args__ with a CheckConstraint that restricts those columns to the SupportedLanguage set; update imports accordingly and ensure the default remains "en" while preserving the attribute names speaking_language and listening_language.

tests/test_auth/test_schemas_user.py (1)
21-24: Consider adding assertions for remaining fields.

The test validates `id`, `email`, and the language fields, but not `full_name`, `is_active`, `is_verified`, or `created_at`.

💡 Optional: Add comprehensive assertions

```diff
 assert result.id == 123
 assert result.email == "person@example.com"
 assert result.speaking_language == SupportedLanguage.ENGLISH
 assert result.listening_language == SupportedLanguage.FRENCH
+assert result.full_name == "Test Person"
+assert result.is_active is True
+assert result.is_verified is False
+assert result.created_at == source.created_at
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_auth/test_schemas_user.py` around lines 21 - 24, Add assertions for the remaining User schema fields: verify result.full_name equals the expected name from the test fixture (e.g., "Person Name"), assert result.is_active and result.is_verified match the fixture booleans, and assert result.created_at is present and is a datetime (or matches the expected timestamp). Update the test that references result and SupportedLanguage to include these checks so the schema test verifies full_name, is_active, is_verified, and created_at in addition to id, email, and language fields.

tests/test_auth/test_auth_signup.py (1)
40-43: Use pop() instead of clear() for safer cleanup.

Using `clear()` removes all dependency overrides, which could interfere with other fixtures or tests that set their own overrides. Removing only the specific override this fixture added is safer.

Proposed fix

```diff
 app.dependency_overrides[get_db] = _override_get_db
 with TestClient(app) as test_client:
     yield test_client
-app.dependency_overrides.clear()
+app.dependency_overrides.pop(get_db, None)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_auth/test_auth_signup.py` around lines 40 - 43, The fixture currently calls app.dependency_overrides.clear() which indiscriminately removes all overrides; instead remove only the override you set for get_db by popping it after the TestClient context (use app.dependency_overrides.pop(get_db, None) or check and del app.dependency_overrides[get_db]) so only the override added for _override_get_db is removed and other tests' overrides remain intact; update the cleanup in the block that currently contains app.dependency_overrides.clear() to pop the get_db entry.
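The difference is plain dict behavior (FastAPI's `app.dependency_overrides` is an ordinary dict keyed by dependency callables), so it can be sketched without FastAPI. The `get_current_user` name and override helpers below are hypothetical stand-ins:

```python
def get_db():  # stand-in for the real dependency callable
    ...


def get_current_user():  # a second dependency another fixture might override
    ...


def run_fixture(dependency_overrides: dict) -> None:
    """Mimics the test fixture: install one override, then clean up."""

    def _override_get_db():
        ...

    dependency_overrides[get_db] = _override_get_db
    try:
        pass  # the fixture would yield its TestClient here
    finally:
        # pop() removes only this fixture's entry; clear() would also
        # wipe overrides installed by other fixtures or tests.
        dependency_overrides.pop(get_db, None)
```

After `run_fixture` returns, an override installed elsewhere (e.g. for `get_current_user`) is still in place, which `clear()` would have destroyed.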
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/crud/user.py`:
- Around line 30-49: The current read-before-write uniqueness check using
get_user_by_email is racy: wrap the db.commit() that persists the new User
(created in this block with email=user_in.email.lower() and
hashed_password=hash_password(...)) in a try/except that catches
sqlalchemy.exc.IntegrityError (or the DB driver's IntegrityError), calls
db.rollback(), and raises ConflictException(code="EMAIL_ALREADY_REGISTERED",
message="An account with this email already exists.") when the IntegrityError
indicates a duplicate-email constraint; re-raise other exceptions after
rollback. Ensure db.add(db_user) remains before the try and db.refresh/db_user
only run after a successful commit.
In `@app/db/session.py`:
- Line 10: The module currently creates a synchronous DB engine at import time
with create_engine(DATABASE_URL) (variable engine), which fails in CI because
only the async driver (asyncpg) is installed; either install a sync Postgres
driver (add psycopg2-binary or psycopg[binary] to requirements) or, preferably,
defer engine creation: remove the top-level engine instantiation and provide a
lazy factory such as get_engine() or create_engine_if_needed(DATABASE_URL) that
constructs the engine on first use (or accept an injected engine) so tests can
override DATABASE_URL or install drivers before the engine is created.
In `@docs/email_service.md`:
- Line 32: Update the Mailgun SDK reference text: replace the non-standard
package name string "mailgun2" with the correct PyPI package name "mailgun" and
adjust the install instruction to "pip install mailgun" in the documentation
line that currently reads '* **Mailgun SDK**: Use the `mailgun2` library...';
ensure the sentence mentions the official package name "mailgun" and/or direct
httpx calls to the Mailgun /messages endpoint.
In `@tests/test_auth/test_auth_signup.py`:
- Around line 28-32: The teardown block currently closes the DB session and
drops tables but does not dispose the SQLAlchemy engine, which can leak
connections; after calling Base.metadata.drop_all(bind=engine) in the fixture
teardown (the try/finally that yields db and then calls db.close() and
Base.metadata.drop_all(bind=engine)), add a call to engine.dispose() to release
connection resources held by the engine.
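The teardown ordering described in that prompt can be exercised against an in-memory SQLite engine. This is a sketch of the fixture's lifecycle, not the project's actual fixture:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker


def run_fixture_lifecycle() -> int:
    """close() the session first, then dispose() the engine's pool."""
    engine = create_engine("sqlite://")
    SessionLocal = sessionmaker(bind=engine)
    db = SessionLocal()
    try:
        value = db.execute(text("SELECT 1")).scalar_one()
    finally:
        db.close()
        # Base.metadata.drop_all(bind=engine) would run here in the real fixture
        engine.dispose()  # releases pooled connections held by the engine
    return value
```

Without the `engine.dispose()` call, each test that builds its own engine leaves a connection pool alive until garbage collection.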
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 102b21b3-8505-4e74-82f2-6a3157ec3dfe
📒 Files selected for processing (18)
.github/workflows/ci.yml
alembic/versions/11781e907181_initial_migration.py
app/api/v1/__init__.py
app/api/v1/api.py
app/api/v1/endpoints/auth.py
app/crud/__init__.py
app/crud/user.py
app/db/__init__.py
app/db/session.py
app/kafka/schemas.py
app/main.py
app/models/user.py
app/schemas/auth.py
app/schemas/user.py
docs/email_service.md
tests/test_auth/__init__.py
tests/test_auth/test_auth_signup.py
tests/test_auth/test_schemas_user.py
```python
existing_user = get_user_by_email(db, user_in.email)
if existing_user:
    raise ConflictException(
        code="EMAIL_ALREADY_REGISTERED",
        message="An account with this email already exists.",
    )

db_user = User(
    email=user_in.email.lower(),
    hashed_password=hash_password(user_in.password),
    full_name=user_in.full_name,
    speaking_language=user_in.speaking_language.value,
    listening_language=user_in.listening_language.value,
    is_active=True,
    is_verified=False,
)
db.add(db_user)
db.commit()
db.refresh(db_user)
return db_user
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== create_user transaction/error handling ==="
rg -nP 'def create_user|db\.commit\(|db\.rollback\(|IntegrityError|ConflictException' app/crud/user.py -C 3
echo "=== email uniqueness constraints ==="
rg -nP 'email.*unique=True|ix_users_email|UniqueConstraint' app/models/user.py alembic/versions/11781e907181_initial_migration.py -C 2
```

Repository: Brints/FluentMeet

Length of output: 1885
Handle unique-email race condition at commit time with proper error handling.
The application-level read-before-write check is vulnerable to a race condition under concurrency. Two concurrent requests can both pass the uniqueness check on line 31 and attempt to insert the same email, causing the second request to fail at the database constraint (unique index on email exists in migrations). Without error handling around db.commit(), this surfaces as an unhandled IntegrityError instead of your intended ConflictException.
Wrap db.commit() in a try-except block to catch IntegrityError and map it to ConflictException, with rollback on any exception:
Suggested fix

```diff
 from typing import cast
 import bcrypt
 from passlib.context import CryptContext
+from sqlalchemy.exc import IntegrityError
 from sqlalchemy import select
 from sqlalchemy.orm import Session
@@
 def create_user(db: Session, user_in: UserCreate) -> User:
@@
     db.add(db_user)
-    db.commit()
+    try:
+        db.commit()
+    except IntegrityError as exc:
+        db.rollback()
+        raise ConflictException(
+            code="EMAIL_ALREADY_REGISTERED",
+            message="An account with this email already exists.",
+        ) from exc
+    except Exception:
+        db.rollback()
+        raise
     db.refresh(db_user)
     return db_user
```
return db_user🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/crud/user.py` around lines 30 - 49, The current read-before-write
uniqueness check using get_user_by_email is racy: wrap the db.commit() that
persists the new User (created in this block with email=user_in.email.lower()
and hashed_password=hash_password(...)) in a try/except that catches
sqlalchemy.exc.IntegrityError (or the DB driver's IntegrityError), calls
db.rollback(), and raises ConflictException(code="EMAIL_ALREADY_REGISTERED",
message="An account with this email already exists.") when the IntegrityError
indicates a duplicate-email constraint; re-raise other exceptions after
rollback. Ensure db.add(db_user) remains before the try and db.refresh/db_user
only run after a successful commit.
```markdown
8. Integration tests verify the full flow: producer publishes → consumer dispatches → Mailgun API is called.

**Proposed Technical Details**

* **Mailgun SDK**: Use the `mailgun2` library (already in `requirements.txt`) or direct `httpx` calls to the Mailgun `/messages` endpoint.
```
🧩 Analysis chain

🌐 Web query:

> Python Mailgun SDK package name PyPI

💡 Result:

The official Mailgun Python SDK on PyPI is published as the package `mailgun` (pypi.org).

Install with:

```shell
pip install mailgun
```
Update the Mailgun library reference to use the correct package name.
Line 32 references mailgun2, but the official Mailgun Python SDK on PyPI is published as mailgun. Update the reference to use mailgun (installable via pip install mailgun) instead of the non-standard mailgun2 package name.
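For the direct-`httpx` route, the request shape can be sketched as a pure helper. The domain, key, and addresses below are placeholders; the `/v3/{domain}/messages` path and the Basic-auth username `api` follow Mailgun's HTTP API documentation:

```python
def mailgun_message_request(
    domain: str, api_key: str, sender: str, to: str, subject: str, text: str
) -> dict:
    """Assemble the pieces of a Mailgun /messages call for an HTTP client."""
    return {
        "url": f"https://api.mailgun.net/v3/{domain}/messages",
        # Mailgun authenticates with HTTP Basic auth, username "api".
        "auth": ("api", api_key),
        "data": {"from": sender, "to": to, "subject": subject, "text": text},
    }
```

The result plugs straight into a client call such as `httpx.post(req["url"], auth=req["auth"], data=req["data"])`, keeping the request-building logic trivially unit-testable.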
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/db/session.py`:
- Around line 22-28: The current except block silently falls back to SQLite when
a PostgreSQL DBAPI is missing (checking DATABASE_URL and creating cached_engine
with DEFAULT_SQLITE_URL); change this so missing drivers fail fast in non-test
environments: detect an explicit test-only flag (e.g., a TESTING or IS_TEST_ENV
environment/config variable) and only perform the SQLite fallback when that flag
is true, otherwise re-raise the ModuleNotFoundError or raise a clear
RuntimeError with context including DATABASE_URL and exc.name; update the code
paths around the create_engine/cached_engine handling in session.py to reference
DATABASE_URL, DEFAULT_SQLITE_URL, and the new TESTING flag so deployments never
silently switch to SQLite.
- Around line 18-32: The engine initialization performs an unsynchronized
check-then-set on _ENGINE_STATE which can race when two threads call
get_engine() concurrently; add a module-level threading.Lock (e.g., _ENGINE_LOCK
= Lock()) and use it to serialize the first-use initialization: perform a fast
check for _ENGINE_STATE["engine"], if missing acquire _ENGINE_LOCK, re-check
_ENGINE_STATE["engine"] (double-checked locking), then create the engine with
create_engine(...), call SessionLocal.configure(bind=...), and set
_ENGINE_STATE["engine"] before releasing the lock so only one engine/pool is
created.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cd5e6dcc-1087-43b6-9692-54f411eca2ef
📒 Files selected for processing (2)
app/db/__init__.py
app/db/session.py
```python
cached_engine = _ENGINE_STATE.get("engine")
if cached_engine is None:
    try:
        cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
    except ModuleNotFoundError as exc:
        # CI/test environments may not install PostgreSQL DBAPI drivers.
        if DATABASE_URL.startswith("postgresql") and exc.name in {
            "psycopg2",
            "psycopg",
        }:
            cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
        else:
            raise
    SessionLocal.configure(bind=cached_engine)
    _ENGINE_STATE["engine"] = cached_engine
```
Make engine initialization thread-safe.
get_engine() does an unsynchronized check-then-set on _ENGINE_STATE. Two concurrent first requests can both create an engine and race SessionLocal.configure(...), leaving multiple pools alive and startup binding nondeterministic.
🧵 One way to serialize first-use initialization

```diff
 def get_engine() -> Engine:
     cached_engine = _ENGINE_STATE.get("engine")
     if cached_engine is None:
-        try:
-            cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
-        except ModuleNotFoundError as exc:
-            # CI/test environments may not install PostgreSQL DBAPI drivers.
-            if DATABASE_URL.startswith("postgresql") and exc.name in {
-                "psycopg2",
-                "psycopg",
-            }:
-                cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
-            else:
-                raise
-        SessionLocal.configure(bind=cached_engine)
-        _ENGINE_STATE["engine"] = cached_engine
+        with _ENGINE_LOCK:
+            cached_engine = _ENGINE_STATE.get("engine")
+            if cached_engine is None:
+                try:
+                    cached_engine = create_engine(DATABASE_URL, pool_pre_ping=True)
+                except ModuleNotFoundError as exc:
+                    # CI/test environments may not install PostgreSQL DBAPI drivers.
+                    if DATABASE_URL.startswith("postgresql") and exc.name in {
+                        "psycopg2",
+                        "psycopg",
+                    }:
+                        cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
+                    else:
+                        raise
+                SessionLocal.configure(bind=cached_engine)
+                _ENGINE_STATE["engine"] = cached_engine
     return cached_engine
```

Add a module lock next to `_ENGINE_STATE`:
```python
from threading import Lock

_ENGINE_LOCK = Lock()
```
```python
except ModuleNotFoundError as exc:
    # CI/test environments may not install PostgreSQL DBAPI drivers.
    if DATABASE_URL.startswith("postgresql") and exc.name in {
        "psycopg2",
        "psycopg",
    }:
        cached_engine = create_engine(DEFAULT_SQLITE_URL, pool_pre_ping=True)
```
Fail fast outside tests instead of silently switching to SQLite.
This fallback is not actually scoped to CI/tests. Any runtime with a PostgreSQL URL and a missing driver will start writing to sqlite:///./fluentmeet.db instead of failing, which can hide a broken deployment and point the service at the wrong database. Please gate this behind an explicit test-only setting or remove the fallback in non-test environments.
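The gating can be sketched as a pure decision function. The `TESTING` flag name is illustrative (the prompt above also suggests `IS_TEST_ENV`), as is the exact `DEFAULT_SQLITE_URL` value:

```python
import os

DEFAULT_SQLITE_URL = "sqlite:///./fluentmeet.db"


def resolve_database_url(database_url: str, driver_missing: bool) -> str:
    """Pick the effective DB URL; fall back to SQLite only under TESTING=1."""
    if not driver_missing:
        return database_url
    if os.environ.get("TESTING") == "1":
        # Explicitly opted-in test environments may use the SQLite fallback.
        return DEFAULT_SQLITE_URL
    raise RuntimeError(
        f"No DBAPI driver available for {database_url!r}; install "
        "psycopg/psycopg2 instead of silently falling back to SQLite."
    )
```

A deployment with a PostgreSQL URL but no driver now crashes at startup with an actionable message instead of quietly writing to a local SQLite file.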