Epis is an extensible assistant designed to help you learn anything.
- Extensible LLM Provider Support: Easily swap or extend large language model backends (currently supports Ollama).
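To make the "swap or extend backends" idea concrete, here is a minimal sketch of what a pluggable provider abstraction could look like in Rust. The trait and type names (`LlmProvider`, `EchoProvider`) are illustrative assumptions, not Epis's actual API:

```rust
// Hypothetical sketch of a pluggable LLM backend abstraction.
// Names are illustrative only; Epis's real provider API may differ.

/// A text-generation backend (e.g. Ollama).
trait LlmProvider {
    fn generate(&self, prompt: &str) -> String;
}

/// A stand-in provider used here purely for illustration.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn generate(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

fn main() {
    // Swapping backends means swapping the boxed implementation;
    // the rest of the application only sees the trait.
    let provider: Box<dyn LlmProvider> = Box::new(EchoProvider);
    println!("{}", provider.generate("hello"));
}
```

With this shape, adding a new backend is a matter of implementing the trait for a new type.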
- Clone the repository:

  ```shell
  git clone https://github.com/mkermani144/epis
  cd epis/epis
  ```

- Build and run:

  ```shell
  cargo run
  ```

- Follow the prompts to start a conversation.
The easiest way to get started is using the provided DevContainer setup:
- Install VS Code and the Dev Containers extension
- Clone and open the repository:
  ```shell
  git clone https://github.com/mkermani144/epis
  code epis/epis
  ```
- Open in Container: When prompted, click "Reopen in Container"
- Create a `.env` file with the following content:

  ```
  PROVIDER=ollama
  GENERATION_MODEL=some-gen-model
  EMBEDDING_MODEL=some-embedding-model
  DATABASE_URL=postgresql://epis_user:epis_password@postgres:5432/epis_db
  RUST_LOG=info
  ```
The DevContainer includes:
- Rust development environment
- PostgreSQL with pgvector extension
- All necessary dependencies pre-installed
For a manual setup, you will need:

- Rust (edition 2024)
- PostgreSQL with pgvector extension
- Ollama running locally (for LLM features)
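Since the app depends on a locally running Ollama server, a simple reachability check before starting can save debugging time. The sketch below is illustrative (the helper is not part of Epis); it assumes Ollama's default port, 11434:

```rust
// Hedged sketch: check that a local Ollama server is reachable before
// starting. 11434 is Ollama's default port; `is_reachable` is illustrative.
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

fn is_reachable(addr: &str) -> bool {
    addr.parse::<SocketAddr>()
        .ok()
        .and_then(|a| TcpStream::connect_timeout(&a, Duration::from_secs(1)).ok())
        .is_some()
}

fn main() {
    if is_reachable("127.0.0.1:11434") {
        println!("Ollama appears to be running");
    } else {
        println!("Ollama not reachable; start it with `ollama serve`");
    }
}
```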
- Install PostgreSQL with pgvector:

  ```shell
  # Using Docker
  docker run -d --name epis-postgres \
    -e POSTGRES_DB=epis_db \
    -e POSTGRES_USER=epis_user \
    -e POSTGRES_PASSWORD=epis_password \
    -p 5432:5432 \
    pgvector/pgvector:pg16
  ```
- Initialize the database:

  ```shell
  docker exec -i epis-postgres psql -U epis_user -d epis_db < init-db.sql
  ```
- Set environment variables:

  ```shell
  export DATABASE_URL="postgresql://epis_user:epis_password@localhost:5432/epis_db"
  export PROVIDER="ollama"
  export GENERATION_MODEL="some-gen-model"
  export EMBEDDING_MODEL="some-embedding-model"
  export RUST_LOG="info"
  ```
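As a sketch of how the variables above might be consumed at startup, here is a minimal configuration loader using only the standard library. The `Config` struct and `require` helper are hypothetical; only the variable names (`PROVIDER`, `DATABASE_URL`) come from the README:

```rust
// Illustrative sketch: reading Epis-style configuration from the
// environment. The Config struct and helper are hypothetical; the
// variable names match the README.
use std::env;

#[derive(Debug)]
struct Config {
    provider: String,
    database_url: String,
}

/// Read a required environment variable, failing if it is unset.
fn require(name: &str) -> Result<String, env::VarError> {
    env::var(name)
}

fn load_config() -> Result<Config, env::VarError> {
    Ok(Config {
        provider: require("PROVIDER")?,
        database_url: require("DATABASE_URL")?,
    })
}

fn main() {
    match load_config() {
        Ok(cfg) => println!("starting with provider {}", cfg.provider),
        Err(e) => eprintln!("missing configuration: {e}"),
    }
}
```

Failing fast on missing variables keeps misconfiguration errors close to their cause.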
```shell
# Format code
cargo fmt

# Run linter
cargo clippy

# Run tests
cargo test

# Build
cargo build

# Run the application
cargo run
```