CIDA is an AI text platform with one core capability:
- AI-generated text detection
It also includes admin analytics, async report generation, rate limiting, and Turnstile verification.
CIDA now runs detector scoring through a Hugging Face Space endpoint.
- Model: `desklib/ai-text-detector-v1.01`
- Provider: Hugging Face Space (`aaravmaloo/ai-content-detector`)
- Task: AI-likelihood scoring (returns `ai_probability` in `[0, 1]`)
- Runtime: the API calls the Space's Gradio API (default `/gradio_api/call/detect_ai_content`) with a `{"data": ["<text>"]}` payload and maps the returned class probabilities directly.
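The runtime call above can be sketched with the standard two-step Gradio call protocol (a POST returns an `event_id`, then a GET streams the result as server-sent events). This is a minimal illustration, not the project's actual client; treating the first output element as `ai_probability` is an assumption based on the task description above.

```python
import json
import urllib.request

SPACE_URL = ("https://aaravmaloo-ai-content-detector.hf.space"
             "/gradio_api/call/detect_ai_content")

def build_payload(text: str) -> bytes:
    # Gradio's call API expects the inputs under a "data" key.
    return json.dumps({"data": [text]}).encode("utf-8")

def parse_event_stream(body: str) -> list:
    # The GET step returns server-sent events; the final "data:" line
    # carries the JSON-encoded list of outputs.
    data_lines = [ln for ln in body.splitlines() if ln.startswith("data:")]
    return json.loads(data_lines[-1][len("data:"):])

def score_text(text: str, timeout: float = 20.0) -> float:
    # Step 1: POST the payload; the Space replies with an event_id.
    # A private Space would also need an Authorization: Bearer header
    # (HF_SPACE_API_TOKEN).
    req = urllib.request.Request(
        SPACE_URL, data=build_payload(text),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        event_id = json.load(resp)["event_id"]
    # Step 2: GET the streamed result for that event.
    with urllib.request.urlopen(f"{SPACE_URL}/{event_id}", timeout=timeout) as resp:
        outputs = parse_event_stream(resp.read().decode("utf-8"))
    return float(outputs[0])  # assumed: first output is ai_probability
```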
- User submits text or a file from `apps/web`.
- Frontend calls `/v1/*` endpoints.
- `services/api`:
  - validates Turnstile,
  - enforces Redis sliding-window rate limits,
  - runs detector inference,
  - stores events in Postgres.
- Report jobs are queued in Redis.
- `services/worker` renders and stores JSON/PDF reports.
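The sliding-window rate limit in the flow above can be illustrated in pure Python. In CIDA the per-client hit timestamps would live in a Redis sorted set (`ZREMRANGEBYSCORE`/`ZADD`/`ZCARD`); this list-based sketch only approximates that and is not the project's actual implementation.

```python
def sliding_window_allow(hits: list, now: float, limit: int, window_s: float) -> bool:
    """Pure sliding-window check. `hits` is a mutable list of past request
    timestamps for one client; in Redis this would be a sorted set."""
    cutoff = now - window_s
    hits[:] = [t for t in hits if t > cutoff]  # ZREMRANGEBYSCORE analogue: drop expired hits
    hits.append(now)                           # ZADD analogue: record this request
    return len(hits) <= limit                  # ZCARD analogue: count hits in the window
```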
If Space inference is unavailable or response parsing fails, the API returns an inference error; no local fallback AI score is substituted.
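That no-fallback policy can be sketched as a small parsing guard (a hypothetical helper, not the project's code): any missing, non-numeric, or out-of-range probability becomes an explicit inference error rather than a substitute score.

```python
def map_space_result(outputs) -> dict:
    # Hypothetical response-mapping step: malformed output is surfaced
    # as an inference error, never replaced with a fallback score.
    try:
        p = float(outputs[0])
        if not 0.0 <= p <= 1.0:
            raise ValueError("probability out of range")
        return {"ok": True, "ai_probability": p}
    except (TypeError, ValueError, IndexError):
        return {"ok": False, "error": "inference_error"}
```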
- `apps/web`: Next.js frontend
- `services/api`: FastAPI backend (detector)
- `services/worker`: async report worker
- `packages/shared-schemas`: shared OpenAPI/schema package
- `infra/docker`: Dockerfiles + compose
- `infra/railway`: Railway service mapping
- API: FastAPI (`services/api`)
- Worker: ARQ (`services/worker`)
- Database: PostgreSQL
- Queue/rate-limit state: Redis
Required:

- `DATABASE_URL`
- `REDIS_URL`
- `JWT_SECRET`
- `ADMIN_USER`
- `ADMIN_PASS`
- `CORS_ALLOWED_ORIGINS`

Model/runtime:

- `HF_SPACE_PREDICT_URL` (default `https://aaravmaloo-ai-content-detector.hf.space/gradio_api/call/detect_ai_content`)
- `HF_SPACE_API_TOKEN` (optional)
- `HF_SPACE_TIMEOUT_SECONDS` (default `20`)
- `HF_SPACE_MAX_INPUT_CHARS` (default `12000`)
- `HF_SPACE_MODEL_VERSION` (default `desklib/ai-text-detector-v1.01`)

Optional:

- `TURNSTILE_SECRET`
- `SENTRY_DSN`
- `R2_ENDPOINT`, `R2_BUCKET`, `R2_REGION`, `R2_ACCESS_KEY`, `R2_SECRET_KEY`
- `REPORT_LOCAL_DIR`
- `PUBLIC_BASE_URL`
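As an illustration of how the `HF_SPACE_*` variables and their documented defaults might be consumed, here is a sketch of a settings loader; it is an assumption about shape, not the project's actual settings module.

```python
import os
from dataclasses import dataclass
from typing import Optional

# Default mirrors the documented HF_SPACE_PREDICT_URL value.
DEFAULT_PREDICT_URL = ("https://aaravmaloo-ai-content-detector.hf.space"
                       "/gradio_api/call/detect_ai_content")

@dataclass
class SpaceSettings:
    predict_url: str
    timeout_seconds: float
    max_input_chars: int
    model_version: str
    api_token: Optional[str]

def load_space_settings(env=os.environ) -> SpaceSettings:
    # Defaults mirror the documented HF_SPACE_* values above.
    return SpaceSettings(
        predict_url=env.get("HF_SPACE_PREDICT_URL", DEFAULT_PREDICT_URL),
        timeout_seconds=float(env.get("HF_SPACE_TIMEOUT_SECONDS", "20")),
        max_input_chars=int(env.get("HF_SPACE_MAX_INPUT_CHARS", "12000")),
        model_version=env.get("HF_SPACE_MODEL_VERSION",
                              "desklib/ai-text-detector-v1.01"),
        api_token=env.get("HF_SPACE_API_TOKEN"),  # optional; for private Spaces
    )
```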
Required:

- `DATABASE_URL`
- `REDIS_URL`

Optional:

- `REPORT_LOCAL_DIR`
- `R2_ENDPOINT`, `R2_BUCKET`, `R2_REGION`, `R2_ACCESS_KEY`, `R2_SECRET_KEY`
- `PUBLIC_BASE_URL`
Required:

- `NEXT_PUBLIC_API_BASE_URL`
From repo root:

```shell
docker compose -f infra/docker/docker-compose.yml up --build
```

Services:

- Web: http://localhost:3000
- API: http://localhost:8000
- Postgres: localhost:5432
- Redis: localhost:6379
Install web deps from repo root:

```shell
npm install
```

Run API:

```shell
cd services/api
python -m venv .venv
. .venv/Scripts/Activate.ps1
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```

Run worker:

```shell
cd services/worker
python -m venv .venv
. .venv/Scripts/Activate.ps1
pip install -r requirements.txt
arq app.worker.WorkerSettingsConfig
```

Run web:

```shell
npm run dev:web
```

- API Docker image no longer expects ONNX artifacts.
- Detector scoring is done via HF Space API calls.
- If your Space is private, configure `HF_SPACE_API_TOKEN` in deployment secrets.
```shell
curl -i https://<api-domain>/healthz
curl -i https://<api-domain>/readyz
```

- Detector inference provider: Hugging Face Space
- Detector model: Desklib `desklib/ai-text-detector-v1.01`
- Detector output is probabilistic, not proof of authorship.
- Keep human review in high-stakes decisions.
- Disclose machine-generated scoring to end users.
- Avoid storing sensitive text unless needed and consented.
Add or update your project license/contribution policy before external distribution.