This is a sophisticated, personalized chatbot application powered by a FastAPI backend and a modern, responsive frontend. It features Retrieval-Augmented Generation (RAG) for knowledge-based responses and ultra-low latency inference via Groq LPU and Llama 3.
## Features

- Modern UI: Clean, mobile-friendly, responsive chat interface built with vanilla HTML/CSS.
- RAG-based Intelligence: Combines local document retrieval with LLM response generation.
- FastAPI Backend: Async Python backend for low-latency interactions.
- High Performance: Powered by Groq's LPU and Meta's Llama 3.
- Modular Design: Easy to extend and maintain with separate layers for parsing, prompting, and vector DB.
## Tech Stack

| Layer | Technologies |
|---|---|
| Frontend | HTML, CSS, JavaScript |
| Backend | Python, FastAPI, Uvicorn, Pydantic, Groq (Llama 3), PyMuPDF, ChromaDB |
| Tools | uv (Python package manager), make, Streamlit |
## Project Structure

```
personalized-chatbot/
├── backend/
│   ├── src/                 # FastAPI app, routing and orchestration logic
│   ├── database/            # Vector store (ChromaDB) integration
│   ├── parser/              # Document parsing logic
│   ├── prompts/             # Prompt templates
│   ├── data/                # ChromaDB persistent storage
│   ├── .env.example         # Environment config sample
│   ├── load_documents.py    # Load files into the vector store
│   ├── pyproject.toml       # Backend deps (uv format)
│   ├── uv.lock              # Lockfile
│   └── requirements.txt     # Backend deps (pip format)
├── frontend/
│   ├── index.html           # Main UI
│   ├── chatbot.js           # Core chat logic
│   ├── script.js            # Additional behavior
│   ├── chatbot.css          # Chat styles
│   └── styles.css           # General styles
├── .gitignore
├── .python-version
├── Makefile
├── .env                     # API keys and environment values
└── README.md
```
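The `database/` and `prompts/` layers implement the retrieve-then-prompt flow at the heart of RAG. As a toy stand-in (the real project embeds documents with ChromaDB; plain keyword overlap substitutes for vector search here, and the prompt template is hypothetical), the flow looks roughly like:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query (toy stand-in
    for ChromaDB's embedding-based similarity search)."""
    q = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )[:k]


def build_prompt(query: str, context: str) -> str:
    """Fill a (hypothetical) prompt template with retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "KP Oli is a Nepali politician.",
    "FastAPI is a Python web framework.",
]
question = "Where does KP Oli live?"
ctx = retrieve(question, docs)[0]   # most relevant document
prompt = build_prompt(question, ctx)  # sent to the LLM in the real app
```

In the actual backend, the retrieved context and filled prompt would be passed to Groq's Llama 3 endpoint for the final answer.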
## Prerequisites

- uv (Astral) - Fast Python package manager
- make utility (available by default on Linux/macOS)
To install uv:

Linux/macOS:

```bash
curl -Ls https://astral.sh/uv/install.sh | sh
```

Windows:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

## Setup

| Step | With make | Without make |
|---|---|---|
| Clone Repository | `git clone https://github.com/devrahulbanjara/personalized-chatbot`<br>`cd personalized-chatbot` | Same as left |
| Install Dependencies | `make install` | `cd backend && uv sync` |
| Load Documents | `make run-loaddoc` | `cd backend && source .venv/bin/activate && streamlit run load_documents.py` |
| Run Backend Server | `make run-backend` | `cd backend && source .venv/bin/activate && python -m src.main` |
| Open Frontend | Open `frontend/index.html` in browser | Same as left |
Note: Loading documents into the knowledge base is a mandatory step before using the chatbot.
## API Usage

Once the backend is running:

```bash
curl -X 'POST' \
  'http://localhost:8000/api/chat' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "Where does KP Oli live?",
    "session_id": "some_random_string"
  }'
```

## Make Commands

| Command | Description |
|---|---|
| `make install` | Install all backend dependencies using uv |
| `make run-backend` | Start FastAPI backend |
| `make run-loaddoc` | Load documents (starts backend and Streamlit loader) |
| `make clean` | Remove virtual environment and reset state |
| `make help` | Show all make commands |
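The `/api/chat` endpoint shown in the curl example can also be called from Python. A minimal stdlib-only client sketch (the endpoint URL and field names come from the curl example; the reply is assumed to be JSON, so adjust parsing to the backend's actual schema):

```python
import json
from urllib import request


def build_payload(query: str, session_id: str) -> bytes:
    """Encode the JSON body expected by /api/chat."""
    return json.dumps({"query": query, "session_id": session_id}).encode()


def ask(query: str, session_id: str,
        url: str = "http://localhost:8000/api/chat") -> dict:
    """POST a chat message to the backend and return the parsed JSON reply."""
    req = request.Request(
        url,
        data=build_payload(query, session_id),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the backend running, `ask("Where does KP Oli live?", "some_random_string")` sends the same request as the curl example above.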
## Contributing

- Fork the repository
- Create a feature branch: `git checkout -b feature/my-feature`
- Commit your changes: `git commit -m "Add my feature"`
- Push to the branch: `git push origin feature/my-feature`
- Open a pull request
## License

This project is licensed under the GNU General Public License.