
# RAG Chatbot

A Retrieval-Augmented Generation (RAG) chatbot built with FastAPI, Streamlit, FAISS, and Ollama.
This chatbot can read PDFs, create embeddings, and answer user queries with contextual memory.
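Indexing a PDF typically starts by splitting its text into overlapping chunks before embedding. A minimal sketch of such a chunker (a hypothetical helper for illustration; the repo's actual logic lives in `vector_store.py`):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding.

    The overlap keeps context that straddles a chunk boundary
    retrievable from at least one chunk.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Each chunk is then embedded and stored in the FAISS index for later lookup.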


## Features

- Upload PDF documents and build a vector index
- FAISS-based semantic search for relevant chunks
- Conversational memory (remembers chat history)
- Powered by a local Ollama LLM (`llama3.2:3b` by default)
- Frontend with Streamlit, backend with FastAPI
- Supports summarization, Q&A, and multi-turn conversations
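The FAISS lookup at the core of retrieval is a nearest-neighbor search over chunk embeddings. A dependency-free sketch of the same idea (a toy stand-in for a FAISS flat index, using squared L2 distance; not the repo's code):

```python
def top_k(query: list[float], vectors: list[list[float]], k: int = 2) -> list[int]:
    """Return indices of the k vectors closest to the query (squared L2),
    mimicking what a flat FAISS index does over chunk embeddings."""
    def sq_dist(v: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(query, v))
    return sorted(range(len(vectors)), key=lambda i: sq_dist(vectors[i]))[:k]
```

The indices returned map back to text chunks, which are then stuffed into the LLM prompt as context.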

## Project Structure

```
RAG_Chatbot/
│── backend/
│   ├── config.py
│   ├── rag_qa.py
│   ├── server.py
│   ├── vector_store.py
│   ├── faiss_index/
│   └── venv/            (ignored by git)
│
│── frontend.py          # Streamlit frontend
│── requirements.txt
│── .gitignore
```


## 🛠️ Installation

### 1. Clone the Repository

```shell
git clone https://github.com/yashas2604/RAG_Chatbot.git
cd RAG_Chatbot
```

### 2. Set Up a Virtual Environment

```shell
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r ../requirements.txt   # dependencies are listed at the repo root
```

## Running the App

### Start the Backend

In one terminal:

```shell
cd backend
source venv/bin/activate
uvicorn server:app --reload --port 8000
```

The backend will be running on 👉 http://127.0.0.1:8000

### Start the Frontend

In another terminal:

```shell
cd RAG_Chatbot
source backend/venv/bin/activate
streamlit run frontend.py
```

The frontend will be running on 👉 http://localhost:8501
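Once both services are running, each user turn is answered with recent chat history as context. A toy sketch of how such conversational memory might be folded into a prompt (a hypothetical helper, not the repo's actual implementation):

```python
def build_prompt(history: list[tuple[str, str]], question: str, max_turns: int = 5) -> str:
    """Assemble an LLM prompt from the most recent chat turns plus the
    new question; trimming to max_turns keeps the context window bounded."""
    lines = [f"{role}: {text}" for role, text in history[-max_turns:]]
    lines.append(f"user: {question}")
    return "\n".join(lines)
```

In the real app, the retrieved FAISS chunks would also be prepended to this prompt before it is sent to the Ollama model.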
