What it does: Calls the Object Detection API to analyze uploaded images and returns normalized boxes.
Prompt:
Add/ensure a `/detections/{id}` endpoint. If it doesn't exist, create a minimal version that reads the uploaded image from storage and calls Azure Computer Vision Object Detection (use the v3.2 analyze API) using env vars `VISION_ENDPOINT` and `VISION_KEY`. Normalize the response into `{ boxes: [{ label, x, y, w, h, score }] }`. On any failure (missing key, API error), return `{ "boxes": [] }` and a non-200 error code with a helpful message.
Acceptance Criteria:
- Endpoint exists and runs for everyone.
- On success, returns real detection results.
- On failure, degrades gracefully with `{ "boxes": [] }` and logs a readable error.
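The normalization step can be sketched as a pure function. This assumes the documented v3.2 analyze response shape (an `objects` array with a pixel-space `rectangle` plus `object` and `confidence`, and image size under `metadata`); the function name is illustrative, not the lab's required API:

```python
def normalize_detections(payload: dict) -> dict:
    """Normalize an Azure CV v3.2 analyze response into { boxes: [...] }.

    Pixel coordinates from "rectangle" are divided by the image size in
    "metadata" so the returned boxes are 0..1 relative, which makes the
    frontend overlay independent of the rendered image size.
    """
    meta = payload.get("metadata", {})
    width, height = meta.get("width"), meta.get("height")
    boxes = []
    for obj in payload.get("objects", []):
        rect = obj.get("rectangle", {})
        if not rect or not width or not height:
            continue  # skip malformed entries rather than failing the request
        boxes.append({
            "label": obj.get("object", "unknown"),
            "x": rect["x"] / width,
            "y": rect["y"] / height,
            "w": rect["w"] / width,
            "h": rect["h"] / height,
            "score": obj.get("confidence", 0.0),
        })
    return {"boxes": boxes}
```

On any parse failure the endpoint can still fall back to `{"boxes": []}` as required by the acceptance criteria.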
Implementation Summary:
- Created `/api/detections/{id}` endpoint in `app/api/detection.py`
- Added Azure Computer Vision API integration using the v3.2 analyze endpoint
- Configured `VISION_ENDPOINT` and `VISION_KEY` environment variables
- Implemented graceful error handling for missing credentials, API failures, and network issues
- Returns normalized bounding boxes in the format `{label, x, y, w, h, score}`
- Added comprehensive test suite with 8 test cases
- All tests passing; the endpoint is functional and properly handles all error scenarios
Prompt:
In `UploadPage.jsx`, after a successful upload, call `GET /detections/{image_id}`. Render the returned `boxes` as overlays on the uploaded image. Create a `DetectionOverlay` React component that takes `{ imageUrl, boxes }` and draws bounding boxes with labels.
Acceptance Criteria:
- Upload an image → frontend shows the image.
- Bounding boxes with labels overlay correctly in the right positions.
- Empty result: the image shows without overlays.
What it does: Builds Docker images for backend and frontend and runs them with docker-compose.
Prompt:
Create a `Dockerfile` for the backend (FastAPI + uvicorn) and one for the frontend (Vite build → static server, or served by the backend). Add a `docker-compose.yml` wiring ports and env vars. Command: `docker compose up --build`. Document environment variables in a root `README.md`.
Acceptance Criteria:
- `docker compose up` starts both services locally.
- Frontend reachable; backend health endpoint returns OK.
- Upload → detection → report works locally with env set.
Implementation Summary:
- Created optimized `backend/Dockerfile` using the official UV installation method
- Created production-ready `frontend/Dockerfile` with a multi-stage build (Node.js + Nginx)
- Added `docker-compose.yml` with proper service orchestration, health checks, and networking
- Documented comprehensive setup in the root `README.md` with an environment variables reference
- Used Docker best practices: non-root users, layer caching, health checks, and security
- Backend optimizations: UV package manager, bytecode compilation, cache mounts
- Frontend optimizations: production build, custom nginx config, proper routing support
What it does: Publishes images to ACR and deploys to Azure Container Apps with all env vars.
Prompt:
Generate an az CLI script to: (1) create the ACR if needed; (2) build and push both images; (3) deploy backend and frontend container apps; (4) set env vars `VISION_ENDPOINT`, `VISION_KEY`, `AOAI_ENDPOINT`, `AOAI_KEY`, `SEARCH_ENDPOINT`, `SEARCH_KEY`, and the frontend's `VITE_API_BASE_URL`. Output the public URLs.
Acceptance Criteria:
- Public URL works for frontend.
- End-to-end upload → detection → report with RAG works on Azure.
By the end of this module, participants will have extended the FastAPI app from Day 1 with new endpoints that use Azure OpenAI (AOAI) to evaluate resumes against job roles. This sets up the backend service that we’ll later connect to the Power Platform in Module 4.
Objective Add AOAI client + config to the existing FastAPI project (no new app). Use env-based settings, a service API key header, and keep the surface small (one helper to call chat completions).
Prompt
You are editing our existing FastAPI app.
- Add configuration using pydantic-settings with: `API_TOKEN`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_DEPLOYMENT`.
- Add a lightweight AOAI chat helper using the OpenAI 1.x SDK configured for Azure (base_url: `{endpoint}/openai/deployments/{deployment}`).
- Add a dependency that enforces header `x-api-key == API_TOKEN`, returning 401 otherwise.
- Keep code clean, testable, and importable (e.g., `from app.ai import aoai_chat`).
- Don't create a new FastAPI app; extend the current one.
Acceptance Criteria
- Running the server with correct envs succeeds; missing/wrong `x-api-key` → 401.
- `aoai_chat(messages, response_format=None)` is importable and calls Azure OpenAI.
- No duplicate FastAPI instances; reuses the Day-1 app object.
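A minimal sketch of what the helper in `app/ai.py` could look like, assuming the OpenAI 1.x SDK's `AzureOpenAI` client. The injectable `client` parameter, the default deployment name, and the API version are illustrative choices made here for offline testability, not the lab's required signature:

```python
def aoai_chat(messages, response_format=None, client=None, deployment="gpt-4o"):
    """Call Azure OpenAI chat completions and return the message content.

    `client` is injectable so the helper can be unit-tested with a fake
    instead of real Azure credentials.
    """
    if client is None:
        # Deferred import so tests that inject a fake never touch the SDK.
        import os
        from openai import AzureOpenAI
        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-02-01",  # illustrative GA version
        )
    kwargs = {"model": deployment, "messages": messages}
    if response_format is not None:
        kwargs["response_format"] = response_format
    resp = client.chat.completions.create(**kwargs)
    return resp.choices[0].message.content
```

With this shape, `from app.ai import aoai_chat` works and the endpoint code never constructs a client itself.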
Implementation Summary:
- Extended `app/config.py` with Azure OpenAI configuration: `API_TOKEN`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_DEPLOYMENT`
- Created `app/ai.py` module with an `aoai_chat()` function using the OpenAI 1.x SDK configured for Azure
- Added `verify_api_key()` FastAPI dependency for `x-api-key` header authentication
- Added the OpenAI dependency to `pyproject.toml` and installed via `uv sync`
- Authentication tested: missing header returns 422, wrong key returns 401, correct key returns 200
- Azure OpenAI integration functional with real API calls
- All existing endpoints remain functional
New API Endpoints:
- `GET /api/aoai/test`: test endpoint to verify Azure OpenAI authentication (requires `x-api-key` header)
- `POST /api/aoai/chat`: test endpoint for Azure OpenAI chat functionality (requires `x-api-key` header; accepts an array of messages)
- `POST /score_candidate`: score candidate resumes against job roles with auto-rubric generation (requires `x-api-key` header)
Objective / Description Create a pure function that derives a role rubric when none is supplied. This is internal (called by the scoring endpoint). Keep text concise and structured.
Prompt
Add a function `synthesize_rubric(role_title: str | None, category: str | None, seniority: str | None, must: list[str] | None, nice: list[str] | None) -> str`.
- Use a system prompt: “You create concise, unambiguous hiring rubrics…” (mission, must-haves, nice-to-haves, anti-requirements, 5–7 evaluation axes with 1/3/5 anchors).
- User prompt should insert role_title/category, optional seniority, must/nice lists; target ~350 words.
- Temperature 0.2.
- Return the rubric string.
- Ready to be imported by the endpoint.
Acceptance Criteria
- Calling `synthesize_rubric("Data Engineer", None, "Mid", ["SQL","ETL"], ["Airflow"])` returns a non-empty rubric containing labeled sections and bullet points.
- Handles `role_title=None` and falls back to `category`.
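The deterministic part of `synthesize_rubric` (prompt assembly and the `category` fallback) can be sketched separately from the AOAI call so it is testable offline. `build_rubric_messages` is a hypothetical helper name, and the system prompt below is abbreviated from the task description:

```python
def build_rubric_messages(role_title, category, seniority, must, nice):
    """Assemble the chat messages for rubric synthesis.

    Falls back to `category` when `role_title` is None, per the
    acceptance criteria; raises if neither is available.
    """
    if role_title is None and category is None:
        raise ValueError("need role_title or category")
    role = role_title or category  # documented fallback
    parts = [f"Role: {role}"]
    if seniority:
        parts.append(f"Seniority: {seniority}")
    if must:
        parts.append("Must-have: " + ", ".join(must))
    if nice:
        parts.append("Nice-to-have: " + ", ".join(nice))
    parts.append("Target length: ~350 words.")
    system = ("You create concise, unambiguous hiring rubrics: mission, "
              "must-haves, nice-to-haves, anti-requirements, and 5-7 "
              "evaluation axes with 1/3/5 anchors.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": "\n".join(parts)}]
```

The real function would pass these messages to the chat helper with temperature 0.2 and return the model's text.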
Objective
Add a single endpoint that (a) validates inputs, (b) derives rubric if missing, (c) calls scorer prompt, (d) returns strict JSON:
{ score_0_100, strengths[], risks[], explanation, role_profile_used }.
Prompt
In our existing FastAPI app, add `POST /score_candidate`.
- Request model fields: `resume_text` (required, min 50 chars); optional `role_profile`, `role_title`, `category`, `seniority`, `must_have[]`, `nice_to_have[]`.
- If neither `role_profile` nor (`role_title` or `category`) is provided → return 400.
- If `role_profile` is missing, call `synthesize_rubric(...)` to generate it.
- Scoring call: system prompt "You are a rigorous hiring panel…"; user prompt includes the rubric and the verbatim resume.
- Ask the model for JSON using response_format `{"type":"json_object"}` with keys: `score_0_100` (int), `strengths` (3–6 strings, cite evidence), `risks` (3–6 strings, cite evidence), `explanation` (string).
- Clamp score to [0,100].
- Return a pydantic `ScoreResponse` including the final rubric in `role_profile_used`.
- Protect with the `x-api-key` dependency.
Acceptance Criteria
- `curl` happy path (auto-derive rubric): `curl -s -X POST http://localhost:8000/score_candidate -H "Content-Type: application/json" -H "x-api-key: $API_TOKEN" -d '{ "role_title":"Data Engineer", "seniority":"Mid", "resume_text":"<paste Resume_str here>", "must_have":["SQL","ETL","Data modeling"], "nice_to_have":["Airflow","Spark"] }'`
  Returns HTTP 200 with valid JSON and keys: `score_0_100` (0–100), `strengths` (>=3), `risks` (>=3), `explanation` (non-empty), `role_profile_used` (non-empty).
- Error path: missing role context returns HTTP 400 with a clear message.
- Wrong API key returns 401.
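The 400-vs-valid decision above can be sketched as a pure function. The helper name and the `(status, message)` return shape are illustrative; in the real route this maps onto Pydantic validation plus an `HTTPException`:

```python
def validate_score_request(resume_text, role_profile=None,
                           role_title=None, category=None):
    """Return (status_code, error_message); (200, None) means valid."""
    if not resume_text or len(resume_text) < 50:
        # Pydantic would reject this at the model layer with a 422.
        return 422, "resume_text must be at least 50 characters"
    if role_profile is None and role_title is None and category is None:
        # The endpoint's own rule: some role context is mandatory.
        return 400, "provide role_profile, or role_title/category"
    return 200, None
```

Keeping this as a standalone function makes the error paths in the acceptance criteria trivially unit-testable.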
Implementation Summary:
- Created `POST /score_candidate` endpoint in `app/api/scoring.py`
- Added Pydantic models `ScoreRequest` and `ScoreResponse` with proper validation
- Implemented input validation: `resume_text` minimum 50 characters, role context requirements
- Auto-generates a rubric using `synthesize_rubric()` when `role_profile` is missing
- Integrated Azure OpenAI with structured JSON response format for consistent scoring
- Added score clamping to the [0,100] range and proper error handling
- Protected with `x-api-key` authentication using the existing `verify_api_key` dependency
- All test scenarios passed: happy path (200), missing role context (400), wrong API key (401)
Objective / Description
Add a simple LRU cache keyed by (role_title|category, seniority, must[], nice[]) to avoid re-prompting rubric synthesis during the lab/demo.
Prompt
Add a tiny in-memory LRU cache (e.g., `functools.lru_cache` or a bounded dict) for `synthesize_rubric(...)`.
- Build a stable key from normalized inputs (lowercased strings; sorted lists for must/nice).
- On a cache hit, skip the AOAI rubric call.
- Add minimal logging: "rubric cache hit/miss".
Acceptance Criteria
- Repeating the same request twice shows a cache hit log on the second call.
- Second response is materially identical and faster (observable locally).
- Code path is transparent to the endpoint (no behavior change).
Objective / Description Add pragmatic logging and basic safeguards without bloating the code: input length checks, hashed candidate token for correlation, and prompt/response size guards.
Prompt
Enhance the endpoint with:
- Truncate `resume_text` to a max token/char budget (log the truncation).
- Compute a stable, non-PII correlation id: `sha256(first 512 chars of resume_text)`; log it with every request.
- Log timing for the rubric and scoring calls, and whether the cache was used.
- If the model returns malformed JSON, retry once; if still bad, return 502 with a clear error.
- Ensure we never log the full resume—only lengths and the hash.
Acceptance Criteria
- Logs include: correlation id, input lengths, cache hit/miss, AOAI call durations.
- Malformed model output path returns 502 with message “Upstream parsing error” (or similar) after one retry.
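A stdlib-only sketch of these guardrails: char-budget truncation, the sha256 correlation id, score clamping, and the retry-once JSON parse. The `call_model` argument stands in for the real AOAI scoring call, and `MAX_CHARS` is an illustrative budget, not a value from the lab:

```python
import hashlib
import json

MAX_CHARS = 20_000  # illustrative char budget for resume_text

def correlation_id(resume_text: str) -> str:
    """Stable, non-PII id: sha256 of the first 512 chars of the resume."""
    return hashlib.sha256(resume_text[:512].encode("utf-8")).hexdigest()

def truncate_resume(resume_text: str) -> str:
    return resume_text[:MAX_CHARS]

def score_with_retry(call_model):
    """Parse model output as JSON; retry once, then signal a 502."""
    for attempt in (1, 2):
        try:
            data = json.loads(call_model())
            # Clamp to [0,100] so a wild model value never escapes.
            data["score_0_100"] = max(0, min(100, int(data["score_0_100"])))
            return data
        except (ValueError, KeyError, TypeError):
            if attempt == 2:
                return {"error": 502, "detail": "Upstream parsing error"}
```

Logs would then carry only the correlation id and input lengths, never the resume text itself.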
Objective / Description Provide ready-to-use cURL scripts and confirm CORS & response shape are Power Apps Custom Connector friendly.
Prompt
Add:
- `scripts/example_score.sh` with two cURL examples (auto-derive rubric by `role_title`; and explicit `role_profile`).
- Enable permissive CORS for the lab origin (we'll replace this later).
- Add a brief README section documenting the request/response JSON for the Custom Connector (keys, types).
- Confirm the response-time target (< 3s with cached rubric; < 7s cold) in the README troubleshooting section.
Acceptance Criteria
- `scripts/example_score.sh` runs successfully (200) with valid JSON output.
- CORS preflight succeeds for `POST /score_candidate`.
- README shows the exact request/response schema that matches the connector definition you'll create in Module 4.
Objective
Create a local SQLite database and models for roles and candidates, including columns to store scoring outputs.
Prompt
"In the `backend/` FastAPI app, add SQLite persistence using SQLModel or SQLAlchemy.
- DB file: `app.db` under `backend/`.
- Models:
  - Role(id PK, title, department, role_profile TEXT).
  - Candidate(id PK, name, raw_category, resume_text TEXT, applied_role_id FK→Role, fit_score INT NULL, strengths TEXT NULL, risks TEXT NULL, explanation TEXT NULL, updated_at).
- Create `database.py` with the engine, `SessionLocal`, and an `init_db()` that creates the tables.
- Wire a `get_db()` dependency into FastAPI.
- Call `init_db()` at startup."
Acceptance Criteria
- `app.db` is created on first run.
- Tables `role` and `candidate` exist.
- Server boots with no migration errors.
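The lab uses SQLModel, but the resulting schema can be sketched with the stdlib `sqlite3` module to show the two tables and the foreign key. Column names mirror the prompt above; the SQL types are indicative of what SQLModel would generate, not an exact dump:

```python
import sqlite3

SCHEMA = """
CREATE TABLE role (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    department TEXT,
    role_profile TEXT
);
CREATE TABLE candidate (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    raw_category TEXT,
    resume_text TEXT,
    applied_role_id INTEGER REFERENCES role(id),
    fit_score INTEGER,
    strengths TEXT,
    risks TEXT,
    explanation TEXT,
    updated_at TEXT
);
"""

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the tables; mirrors what init_db() does at app startup."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Running this against `backend/app.db` instead of `:memory:` reproduces the first acceptance criterion.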
Implementation Summary:
- Added `sqlmodel>=0.0.14` dependency to `pyproject.toml` for type-safe database operations
- Created database models in `app/models/database.py`:
  - Role: `id` (PK), `title`, `department`, `role_profile` (TEXT), with a relationship to candidates
  - Candidate: `id` (PK), `name`, `raw_category`, `resume_text` (TEXT), `applied_role_id` (FK→Role), `fit_score` (INT NULL), `strengths` (TEXT NULL), `risks` (TEXT NULL), `explanation` (TEXT NULL), `updated_at` (DATETIME)
- Created `app/database.py` with the SQLite engine, an `init_db()` function, and a `get_db()` FastAPI dependency
- Added a startup event handler in `app/main.py` to automatically call `init_db()` on server start
- Database file `app.db` (12KB) created successfully with proper table schemas and foreign key relationships
- All components tested: imports successful, server boots without errors, health endpoint responsive
Objective Quickly load a small subset of the open-resume dataset into SQLite. Keep it fast and deterministic.
Prompt
"Add a `scripts/seed.py` that:
- Reads `data/candidates.csv` with columns: id, name, Resume_str, Category.
- Inserts/upserts into Candidate: `name`, `resume_text=Resume_str`, `raw_category=Category`.
- Also inserts 3 sample Roles with realistic `role_profile` JDs (e.g., 'Information-Technology', 'Accountant', 'HR Generalist', 'Sales').
- Provides a `make seed` or `uv run scripts/seed.py` command in the README.
- If rows already exist, skips duplicates."
Acceptance Criteria
- Running `uv run scripts/seed.py` completes without error.
- `SELECT COUNT(*) FROM candidate` returns the expected row count.
- At least 3 roles are present.
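The seeding logic, including the duplicate skip, can be sketched with stdlib `csv` and `sqlite3`. Here `INSERT OR IGNORE` on the primary key implements "skip duplicates"; in the lab this would target `app.db` and read `data/candidates.csv` rather than an in-memory string:

```python
import csv
import io
import sqlite3

def seed_candidates(conn: sqlite3.Connection, csv_text: str) -> int:
    """Insert candidate rows, skipping ids that already exist.

    Returns the number of rows actually inserted, so a rerun on the
    same data reports 0 (deterministic, idempotent seeding).
    """
    conn.execute("""CREATE TABLE IF NOT EXISTS candidate (
        id INTEGER PRIMARY KEY, name TEXT,
        resume_text TEXT, raw_category TEXT)""")
    inserted = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        before = conn.total_changes
        conn.execute(
            "INSERT OR IGNORE INTO candidate (id, name, resume_text, raw_category)"
            " VALUES (?, ?, ?, ?)",
            (row["id"], row["name"], row["Resume_str"], row["Category"]))
        inserted += conn.total_changes - before  # 0 when the id already existed
    conn.commit()
    return inserted
```

The 3 sample roles would be inserted the same way, keyed on title.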
Objective
Expose POST /score/{candidate_id} that takes either role_id or raw role_profile, calls AOAI (from Module 3), saves results into SQLite, and returns the payload.
Prompt
"In FastAPI, implement:
- `POST /score/{candidate_id}` with request body `{ role_id?: number, role_profile?: string }`.
- Resolve `role_profile` (if `role_id` is given, load from the DB; else use the provided string).
- Call the existing AOAI scoring helper (model name, endpoint, key from the `.env`), returning `{ score_0_100, strengths[], risks[], explanation }`.
- Persist to Candidate: `fit_score`, `strengths` (join lines), `risks` (join lines), `explanation`, `updated_at`.
- Return the saved result.
- Add basic error handling if the candidate/role is not found."
Acceptance Criteria
- `POST /score/{id}` with a valid role updates the candidate row.
- Response JSON includes `fit_score` and `explanation`.
- Refreshing the candidate shows the persisted values.
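The role-resolution step can be sketched as a pure function. The `roles` dict stands in for a DB lookup by `role_id`, and the `(status, value)` tuple maps onto an `HTTPException` in the real route; both are illustrative choices:

```python
def resolve_role_profile(roles: dict, role_id=None, role_profile=None):
    """Return (status, role_profile_or_error) for POST /score/{id}."""
    if role_id is None and role_profile is None:
        return 400, "provide role_id or role_profile"
    if role_id is not None:
        if role_id not in roles:
            return 404, f"role {role_id} not found"
        return 200, roles[role_id]   # DB-stored JD wins when role_id is given
    return 200, role_profile          # else use the caller-supplied string
```

Factoring this out keeps the endpoint body focused on the AOAI call and persistence.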
Implementation Summary:
- Implemented `POST /score/{candidate_id}` in `app/api/scoring.py`.
- Accepts `{ role_id?: number, role_profile?: string }` in the request body and validates mutual exclusivity.
- Resolves `role_profile` from the DB if `role_id` is given, or uses the provided string.
- Calls the AOAI scoring helper, receiving `{ score_0_100, strengths[], risks[], explanation }`.
- Persists results to Candidate: `fit_score`, `strengths` (joined), `risks` (joined), `explanation`, `updated_at`.
- Returns the saved result in the response.
- Handles errors for missing candidate/role, invalid input, and AOAI failures.
- Fully tested: valid/invalid candidate/role, both/neither fields, API key, and persistence.
### Task 2.3.10 — Read endpoints for UI (roles, candidates, detail)
Objective Provide minimal read APIs the React app can consume.
Prompt
"Add REST endpoints:
- `GET /roles` → list of `{id,title,department}`.
- `GET /candidates?search=&category=&role_id=&page=&page_size=` → returns a paginated list with fields `{id,name,raw_category,fit_score}`; filter by `search` (in name OR resume_text), `raw_category`, and optional `applied_role_id`.
- `GET /candidates/{id}` → returns the full record including `resume_text` and the scoring fields.
Return JSON only, no ORM objects. Include CORS for `http://localhost:5173`."
Acceptance Criteria
- Curling the endpoints returns JSON with expected fields.
- Server supports simple search and pagination.
- CORS works for local Vite dev server.
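The `/candidates` query semantics can be sketched in memory: search across name OR resume_text, exact category/role filters, then page slicing. The real endpoint expresses the same logic as a SQL query; the function and key names here are illustrative:

```python
def list_candidates(rows, search=None, category=None, role_id=None,
                    page=1, page_size=10):
    """Filter and paginate candidate dicts the way GET /candidates does."""
    def matches(r):
        if search and search.lower() not in (
                r["name"] + " " + r["resume_text"]).lower():
            return False                      # search spans name OR resume
        if category and r["raw_category"] != category:
            return False                      # exact category match
        if role_id is not None and r.get("applied_role_id") != role_id:
            return False                      # exact role match
        return True

    filtered = [r for r in rows if matches(r)]
    start = (page - 1) * page_size
    items = [{k: r.get(k) for k in ("id", "name", "raw_category", "fit_score")}
             for r in filtered[start:start + page_size]]
    return {"items": items, "total": len(filtered),
            "page": page, "page_size": page_size}
```

Returning `total` alongside the page lets the React list render pagination controls without a second request.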
Implementation Summary:
- Added response models: `RoleResponse`, `CandidateListItem`, `CandidatesResponse`, `CandidateDetailResponse`
- Implemented `GET /roles` returning all roles with id, title, department
- Implemented `GET /candidates` with full filtering and pagination:
  - Search filter: name OR resume_text contains the search term
  - Category filter: exact match on raw_category
  - Role ID filter: exact match on applied_role_id
  - Pagination: page, page_size with total count and metadata
- Implemented `GET /candidates/{id}` returning full candidate details including resume and scoring
- All endpoints return clean JSON (no ORM objects)
- CORS already configured for `http://localhost:5173`
- Comprehensive testing: all filters, pagination, error cases, and CORS preflight verified
Objective Create a single page that lists roles and candidates, lets a user score one, and shows results.
Prompt
"In `frontend/` (Vite React):
- Add `.env` with `VITE_API_BASE=http://localhost:8000`.
- Create components:
  - `RoleSelect` → dropdown fetching `/roles`.
  - `CandidateList` → list from `/candidates`, with a search box.
  - `ScorePanel` → shows the selected candidate's latest `fit_score`, strengths/risks, and explanation.
- 'Score Candidate' button: calls `POST /score/{id}` with the selected `role_id`, then refreshes the candidate detail.
- Show loading states; disable the button while scoring.
- Keep styling minimal (plain CSS; Tailwind optional).
- Use `fetch` or `axios` with the base URL from env."
Acceptance Criteria
- Roles load into a dropdown.
- Candidates list renders; search filters it.
- Clicking Score Candidate returns a score and updates the panel.
- Refreshing the page keeps the score (persisted in SQLite).
Implementation Summary:
- Added `.env` configuration with `VITE_API_BASE=http://localhost:8000` (already configured)
- Extended the API library in `src/lib/api.js` with scoring endpoints: `getRoles()`, `getCandidates()`, `getCandidate()`, `scoreCandidate()`
- Created `RoleSelect` component: dropdown fetching `/api/roles` with loading states and error handling
- Created `CandidateList` component: paginated list from `/api/candidates` with search functionality and selection
- Created `ScorePanel` component: displays candidate details, fit_score, strengths, risks, and explanation
- Created `ScoringPage` main page combining all components with a Score Candidate button
- Implemented loading states: button disabled while scoring, spinner animation, form validation
- Added error handling and success messages throughout the UI
- Updated the backend scoring endpoints to use the `/api` prefix for consistency with other endpoints
- Added routing in `App.jsx` with a navigation button on the homepage
- All components use proper React patterns: hooks, callbacks, error boundaries
- Comprehensive testing: roles dropdown loads, candidate search/filtering works, scoring updates persist
Objective Make it zero-friction to start/verify locally for the 30-minute slot.
Prompt
"Update the root README with:
- Prereqs: Python 3.12+, `uv`, Node 22+.
- Backend: `cd backend && uv sync && cp .env.example .env && uv run scripts/seed.py && uv run app.py` (or `uvicorn app:app --reload`).
- Frontend: `cd frontend && npm i && npm run dev`.
- Verify endpoints with curl and visit `localhost:5173`.
- Troubleshooting notes for CORS and missing env vars."
Acceptance Criteria
- A fresh clone can be run locally end-to-end in <5 minutes.
- README includes exact commands and expected outputs.
- Demo path: select role → select candidate → score → see results.
- Pre-prepare `data/candidates.csv` (10–50 rows).
- Keep AOAI creds ready in `.env`.
- Have the Vite app scaffolded; participants mostly paste fetch logic + simple JSX.
- If time is tight: hardcode 3 roles in seeding and skip pagination.