- Clone the repo:
```
git clone https://github.com/bskqd/sd_solutions_test_task.git
```
- Go into the project directory:
```
cd sd_solutions_test_task
```
- Create a `.env` file with the variables provided in `.env.example`
- Make sure that ports `8000`, `9000`, and `9001` aren't already occupied by other processes on your PC
- Run docker compose:
```
docker-compose up -d --build
```
I created a FastAPI application with 3 endpoints, each responsible for orchestrating one of 3 main agents:
- Question Generation: Dynamically generates interview questions.
- Response Evaluation: Evaluates candidate responses.
- Evaluation Validation: Validates the evaluation and logs the completed session.
All completed interview sessions are logged and stored in MinIO.
- Python Framework: As was mentioned in the task, I chose `FastAPI` as the Python framework to have fully `async` API handlers.
- Shared Context: I chose `Redis` for the shared context, a convenient in-memory DB for such purposes.
- LLM: I chose the `OpenAI` LLM, as it was quite easy to understand and use given that I don't have much experience with any LLMs.
- Storage: Integrated MinIO to provide a scalable, S3-compatible storage backend for file uploads, as was asked in the task.
- Candidate ID Generation: Candidate IDs are derived from a hash of the candidate's job title, first name, and last name. This allows easy cleanup of incomplete interview sessions: a repeat attempt for the same candidate overwrites the existing data in the shared context. UUID-like IDs would not allow this functionality.
- Simplified Infrastructure: All logs and full session data are stored in MinIO, avoiding the need for additional infrastructure like DynamoDB.
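The deterministic candidate ID scheme described above could be sketched like this. It is only a sketch: the actual hash algorithm, field order, and ID length in the repo may differ.

```python
import hashlib


def candidate_id(job_title: str, first_name: str, last_name: str) -> str:
    """Derive a deterministic ID from the candidate's details.

    The same candidate always maps to the same ID, so restarting an
    interview overwrites any stale session data in the shared context
    instead of leaving orphaned entries behind.
    """
    key = f"{job_title}:{first_name}:{last_name}".lower()
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]
```

A random UUID would produce a new key on every attempt, so abandoned sessions would accumulate; a content-derived hash makes the write idempotent per candidate.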
The biggest challenge was working with the LLM, as I don't have much experience with one. I researched existing solutions and frameworks, focusing on OpenAI's documentation. I tried different prompts to ensure the generated questions, evaluations, and validations met the task requirements, and adjusted output formats to make the data more structured and easier to work with.
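One common way to get structured output from an LLM is to prompt for a strict JSON shape and validate the reply before using it. The prompt and field names below are illustrative assumptions, not the exact prompts used in the repo.

```python
import json

# Illustrative system prompt asking the model for a strict JSON shape;
# not the actual prompt used in the repo.
EVALUATION_PROMPT = (
    "Evaluate the candidate's answer. Respond ONLY with a JSON object "
    'of the form {"score": <integer 1-10>, "feedback": "<string>"}.'
)


def parse_evaluation(raw_reply: str) -> dict:
    """Parse and sanity-check the model's JSON reply.

    Raises ValueError if the reply does not match the expected shape,
    so malformed model output fails loudly instead of corrupting the
    session data.
    """
    data = json.loads(raw_reply)
    score = data.get("score")
    if not isinstance(score, int) or not 1 <= score <= 10:
        raise ValueError("score must be an integer between 1 and 10")
    if not isinstance(data.get("feedback"), str):
        raise ValueError("feedback must be a string")
    return data
```

With the OpenAI Python client (v1.x), passing `response_format={"type": "json_object"}` to `client.chat.completions.create(...)` additionally instructs the model to emit valid JSON, which makes a parser like the one above much more reliable.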