SkillPilot is an AI-powered career learning platform that helps users plan their learning journey and prepare for technical interviews using a multi-agent architecture.
The platform analyzes a user’s goal, current skills, and learning preferences to:
- Generate structured learning roadmaps
- Generate practice interview questions using web search and an LLM
- Provide contextual answers using a knowledge-based chat system (RAG)
Instead of relying on a single LLM call, SkillPilot uses graph-based multi-step workflows where different AI agents handle different responsibilities.
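To illustrate the idea, here is a minimal, hypothetical sketch of the graph pattern in TypeScript (the actual project uses LangGraph; the `State`, node names, and runner below are simplified stand-ins): each step is a node that reads the shared state and returns a partial update, and the runner walks the edges.

```typescript
// Minimal sketch of a graph-based, multi-step workflow (hypothetical;
// the real project uses LangGraph). Each node reads the shared state
// and returns a partial update, mirroring the node contract.
type State = { topic: string; steps: string[] };
type WorkflowNode = (state: State) => Partial<State>;

const nodes: Record<string, WorkflowNode> = {
  analyze: (s) => ({ steps: [...s.steps, `analyzed ${s.topic}`] }),
  generate: (s) => ({ steps: [...s.steps, "generated plan"] }),
};

// A linear edge list for simplicity; real graphs can branch and loop.
const edges = ["analyze", "generate"];

function runGraph(initial: State): State {
  return edges.reduce(
    (state, name) => ({ ...state, ...nodes[name](state) }),
    initial,
  );
}

const result = runGraph({ topic: "React", steps: [] });
// result.steps records each node's contribution in order.
```

Because every node only returns a partial update, new responsibilities can be added as new nodes without touching existing ones.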
- React
- HTML, Tailwind CSS
- TypeScript
- Node.js
- REST APIs
- JWT authentication
- LangGraph
- LangChain
- Retrieval-Augmented Generation (RAG)
- LLM APIs (Groq, Cohere)
- Web Search (Tavily)
- MongoDB
- Vector Database
- Zod
The platform consists of three main AI agents:
An AI-powered learning roadmap generator agent built using a graph-based workflow architecture with LangGraph and LangChain.
The agent uses a multi-step, state-driven pipeline (rather than a single LLM call) to analyze a user's goal, current skills, and available time; detect skill gaps; and generate a structured day-wise learning plan. Human-in-the-loop options then let the user save, regenerate, or discard the result.
This project implements an AI workflow agent that performs the following steps:
- Collect user learning data
- Discover target skills required for the goal
- Detect missing skills
- Generate a structured learning roadmap using an LLM
- Ask for user confirmation (save / regenerate / discard)
The agent uses:
- Graph-based workflow execution
- Typed state validation using Zod
- Thread-based memory using a checkpointer
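The gap-detection and plan-generation steps above can be sketched as two plain functions over a typed state (all names here are hypothetical; the real agent runs these as LangGraph nodes, validates state with Zod, and asks an LLM for the day-wise plan):

```typescript
// Illustrative sketch of the roadmap agent's core logic (hypothetical
// names; the real agent uses LangGraph nodes, Zod validation, and an
// LLM for plan generation).
interface RoadmapState {
  goal: string;
  currentSkills: string[];
  targetSkills: string[];
  missingSkills: string[];
  plan: { day: number; skill: string }[];
}

// Node 1: detect which target skills the user still lacks.
function detectGaps(s: RoadmapState): RoadmapState {
  const have = new Set(s.currentSkills.map((x) => x.toLowerCase()));
  return {
    ...s,
    missingSkills: s.targetSkills.filter((x) => !have.has(x.toLowerCase())),
  };
}

// Node 2: spread the missing skills over the available days.
// (The real agent asks an LLM for a richer day-wise plan.)
function buildPlan(s: RoadmapState, days: number): RoadmapState {
  const plan = s.missingSkills.map((skill, i) => ({
    day: (i % days) + 1,
    skill,
  }));
  return { ...s, plan };
}

const state = buildPlan(
  detectGaps({
    goal: "Frontend developer",
    currentSkills: ["HTML", "CSS"],
    targetSkills: ["HTML", "CSS", "TypeScript", "React"],
    missingSkills: [],
    plan: [],
  }),
  14,
);
// state.missingSkills → ["TypeScript", "React"]
```

Keeping every node a pure `State → State` function is what makes checkpointing (and the save / regenerate / discard loop) straightforward: the workflow can be paused or resumed at any node boundary.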
This project implements an AI-powered Question Generation Agent that combines web-extracted questions and AI-generated questions to produce a high-quality final question list for a given topic.
The agent follows a structured workflow: query building, web search, validation, extraction, AI generation, and merging.
The agent pipeline works as follows:
- The workflow begins when a topic or input context is provided.
- The agent constructs an optimized search query from the input topic to retrieve relevant data from the web.
- The agent performs a web search using a search API (e.g. Tavily).
- The agent checks whether the retrieved results are sufficient; if not, it refines the query and repeats the web search step.
- The agent extracts relevant or frequently asked questions from the collected web content.
- The agent generates additional questions using an LLM to improve coverage and diversity.
- Web-extracted and AI-generated questions are merged, and a final structured list is prepared.
- The final question set is returned as the output.
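The sufficiency check and the final merge step can be sketched as follows (hypothetical helper names; the web search and LLM calls themselves are stubbed out):

```typescript
// Sketch of the question agent's validation and merge steps
// (hypothetical; search/LLM calls are omitted).

// The real agent applies richer validation; here we simply require
// a minimum number of search hits before moving on.
function isSufficient(results: string[]): boolean {
  return results.length >= 3;
}

// Merge web-extracted and AI-generated questions, dropping
// case-insensitive duplicates while preserving order.
function mergeQuestions(webQs: string[], aiQs: string[]): string[] {
  const seen = new Set<string>();
  const merged: string[] = [];
  for (const q of [...webQs, ...aiQs]) {
    const key = q.trim().toLowerCase();
    if (!seen.has(key)) {
      seen.add(key);
      merged.push(q.trim());
    }
  }
  return merged;
}

const finalList = mergeQuestions(
  ["What is a closure?", "Explain the event loop"],
  ["what is a closure?", "What is hoisting?"],
);
// finalList → ["What is a closure?", "Explain the event loop", "What is hoisting?"]
```

Web-extracted questions are kept first in the merge so that frequently asked, real-world questions take priority over AI-generated fillers.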
Doc-Prep allows users to upload their own documents (PDFs, text, etc.), which are then processed by cleaning and splitting the content into smaller chunks and converting them into embeddings stored in a vector database. When a user asks a query, the system retrieves the most relevant parts of the uploaded documents and provides context-aware answers.
This agent follows the Retrieval-Augmented Generation pipeline, where external knowledge is retrieved before response generation.
Core Components
- Document Loader
- Text Chunking Module
- Embedding Generator
- Vector Database
- Query Processor
- LLM Response Generator
Flow
- Documents are loaded and split into chunks.
- Embeddings are generated and stored in a vector database.
- A user query is converted into an embedding.
- Similar chunks are retrieved using semantic search.
- The retrieved context is passed to the LLM.
- The LLM generates the final response.
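The retrieval half of this flow can be sketched end to end with a toy bag-of-words "embedding" standing in for the real embedding model (the actual system uses Cohere embeddings and a vector database; every function name below is a hypothetical illustration):

```typescript
// Toy RAG retrieval sketch (hypothetical; the real system uses a
// Cohere embedding model and a vector database instead of the
// bag-of-words vectors and in-memory search shown here).

// Split a document into fixed-size word chunks.
function chunk(text: string, size: number): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += size) {
    chunks.push(words.slice(i, i + size).join(" "));
  }
  return chunks;
}

// Toy "embedding": a word-frequency map (real systems use dense vectors).
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec.set(w, (vec.get(w) ?? 0) + 1);
  }
  return vec;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [w, v] of a) dot += v * (b.get(w) ?? 0);
  const norm = (m: Map<string, number>) =>
    Math.sqrt([...m.values()].reduce((s, v) => s + v * v, 0));
  return dot === 0 ? 0 : dot / (norm(a) * norm(b));
}

// Retrieve the most relevant chunk for a query.
function retrieve(chunks: string[], query: string): string {
  const qVec = embed(query);
  return chunks
    .map((c) => ({ c, score: cosine(embed(c), qVec) }))
    .sort((x, y) => y.score - x.score)[0].c;
}

const doc =
  "React renders UI components. MongoDB stores documents in collections.";
const pieces = chunk(doc, 4);
const best = retrieve(pieces, "Where does MongoDB store documents?");
// best is the chunk mentioning MongoDB.
```

Only the retrieved chunk (not the whole document) is passed to the LLM, which is what keeps answers grounded in the user's own uploads.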


