A personal watch-time tracker for streaming services. It monitors and displays screen time across Netflix, YouTube TV, Amazon Video, Hulu, and other streaming platforms.
- 📊 Dashboard View: Monthly watch time totals per service
- 📈 Detailed Analytics: Historical trends, shows watched, episode breakdowns
- 🔄 Automated Scraping: Daily collection of viewing history
- 🎯 Single User: Simple setup, no authentication required
- Backend: Go with SQLite database
- Frontend: React with Tailwind CSS
- Infrastructure: Docker Compose
- Scraping: chromedp (headless Chrome)
- Docker and Docker Compose
- Or: Go 1.21+, Node.js 18+, Chrome/Chromium
- Copy the example config:

  ```bash
  cp config.example.yaml config.yaml
  ```

- Export authentication cookies for your streaming services:

  ```bash
  # Using Make (recommended)
  make refresh-cookies SERVICE=netflix
  make refresh-cookies SERVICE=youtube_tv
  make refresh-cookies SERVICE=amazon_video
  make refresh-cookies SERVICE=hulu

  # Or directly
  ./backend/export-cookies --service netflix
  ./backend/export-cookies --service youtube_tv
  ./backend/export-cookies --service amazon_video
  ./backend/export-cookies --service hulu
  ```

  The tool will:

  - Open a browser window for you to log in
  - Automatically extract and validate cookies
  - Update `config.yaml` with the correct configuration

  See Cookie Export Tool Documentation for details.

- (Optional) Configure scraping schedule (default: daily at 3 AM)
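As an illustration only — the key names below are assumptions, so check `config.example.yaml` for the actual schema — a schedule entry might look like:

```yaml
# Hypothetical sketch; verify key names against config.example.yaml
scraper:
  schedule: "0 3 * * *"  # cron expression: daily at 3 AM
```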
The project includes a Makefile for common tasks:

```bash
# Show all available commands
make help

# Build the backend server
make build

# Build all binaries (server + export-cookies)
make build-all

# Run the server locally
make run

# Cookie management
make refresh-cookies SERVICE=netflix       # Refresh Netflix cookies
make refresh-cookies SERVICE=youtube_tv    # Refresh YouTube TV cookies
make refresh-cookies SERVICE=amazon_video  # Refresh Amazon Video cookies
make refresh-cookies SERVICE=hulu          # Refresh Hulu cookies

# Docker commands
make docker-build  # Build Docker images
make docker-up     # Start Docker containers
make docker-down   # Stop Docker containers
make docker-logs   # Show backend logs

# Clean build artifacts
make clean
```

Run with Docker:

```bash
make docker-up
# or
docker-compose up -d
```

Access the app at http://localhost:3000
Backend:

```bash
make run
# or manually:
cd backend
go mod download
go run cmd/server/main.go
```

Frontend:

```bash
cd frontend
npm install
npm start
```

To install the backend on a remote server for cron-based scraping:
```bash
# Using defaults (mediaserver host, /opt/streamtime for binaries, /usr/local/etc/streamtime for config/db)
make install-remote

# Or override paths
export REMOTE_HOST=user@your-server.com
export REMOTE_BIN_DIR=/opt/streamtime
export REMOTE_ETC_DIR=/usr/local/etc/streamtime
make install-remote
```

This will:

- Build both `server` and `export-cookies` binaries
- Create remote directories if needed
- Copy binaries to `/opt/streamtime`
- Copy `config.yaml` to `/usr/local/etc/streamtime`
On the remote host, run:

```bash
# Using the installed paths
/opt/streamtime/server --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db

# Or with custom paths
/opt/streamtime/server --config /path/to/config.yaml --database /path/to/streamtime.db
```

Command-line flags:

Server (`/opt/streamtime/server`):

- `--config <path>` - Path to config.yaml file (default: `./config.yaml`)
- `--database <path>` - Path to database file; overrides config setting (optional)

Scraper (`/opt/streamtime/scraper`):

- `--config <path>` - Path to config.yaml file (default: `./config.yaml`)
- `--database <path>` - Path to database file; overrides config setting (optional)
- `--service <name>` - Specific service to scrape (Netflix, YouTube TV, Amazon Video, Hulu). If omitted, runs all enabled services.
To run the scraper daily at 6am, add to crontab (`crontab -e`):

```bash
# Run StreamTime scraper daily at 6am
0 6 * * * /opt/streamtime/scraper --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1
```

Or for a specific service:

```bash
# Run only YouTube TV scraper daily at 6am
0 6 * * * /opt/streamtime/scraper --service youtube_tv --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1
```

Amazon cookies are session-bound and don't work when exported to a different browser context. Instead, use a persistent Chrome profile on the remote server with Xvfb (a virtual display) and VNC for manual login.
```bash
# SSH to your remote server
ssh remote-hostname

# Install required packages
sudo apt-get update
sudo apt-get install -y chromium-browser xvfb x11vnc openbox

# Create persistent Chrome profile directory
mkdir -p ~/.config/chromium-amazon
```

The project includes a systemd service that starts Xvfb, openbox, and x11vnc automatically on boot:
```bash
# Sync files to remote server (includes VNC scripts)
make install-remote

# Enable and start the VNC display service
make setup-vnc-remote
```

This installs:

- `/opt/streamtime/start-vnc-display.sh` - Idempotent startup script
- `/etc/systemd/system/streamtime-vnc.service` - Systemd service
Useful commands:

```bash
# Check service status
ssh remote-hostname "systemctl status streamtime-vnc"

# Manually start/stop
ssh remote-hostname "sudo systemctl start streamtime-vnc"
ssh remote-hostname "sudo systemctl stop streamtime-vnc"

# Run script directly (safe to run multiple times)
ssh remote-hostname "/opt/streamtime/start-vnc-display.sh"

# Verify processes are running
ssh remote-hostname "pgrep -a Xvfb; pgrep -a openbox; pgrep -a x11vnc"
```

When you need to log in (first time or when session expires):
- Connect to your server via VNC client (e.g., RealVNC) at `remote-hostname:5900`

- In a terminal on the remote server, launch Chromium:

  ```bash
  DISPLAY=:99 chromium-browser --no-sandbox --user-data-dir=/home/username/.config/chromium-amazon https://www.amazon.com/gp/video/settings/watch-history &
  ```

- The browser window will appear in your VNC viewer

- Log into Amazon, complete any 2FA/puzzles

- Verify you see your watch history, then close the browser

The session is now saved in the profile directory.
With the VNC display service running:

```bash
# Run Amazon scraper with the virtual display
DISPLAY=:99 /opt/streamtime/scraper --service 'Amazon Video' --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db
```

The VNC display service starts automatically on boot, so cron just needs to run the scraper:

```bash
# Run Amazon scraper daily at 6am
0 6 * * * DISPLAY=:99 /opt/streamtime/scraper --service 'Amazon Video' --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1
```

In your `config.yaml`, set the `user_data_dir` for Amazon:

```yaml
services:
  amazon_video:
    enabled: true
    user_data_dir: "/home/username/.config/chromium-amazon"
```

When `user_data_dir` is set, the scraper uses the persistent profile instead of cookies.
Persistent browser sessions typically last weeks to months. Re-login via VNC when:
- The scraper reports being redirected to the login page
- After major Amazon security updates
- If you clear the profile directory
Cookies typically expire after 30-90 days. To maintain your scrapers:
Refresh expired cookies:

```bash
# Using Make (recommended)
make refresh-cookies SERVICE=netflix
make refresh-cookies SERVICE=youtube_tv
make refresh-cookies SERVICE=amazon_video
make refresh-cookies SERVICE=hulu

# Or directly
./backend/export-cookies --service netflix
./backend/export-cookies --service youtube_tv
./backend/export-cookies --service amazon_video
./backend/export-cookies --service hulu
```

Check if cookies are valid:

```bash
./backend/export-cookies --service netflix --validate
./backend/export-cookies --service youtube_tv --validate
./backend/export-cookies --service amazon_video --validate
./backend/export-cookies --service hulu --validate
```

The tool opens a browser for you to log in, then updates your config automatically.
- `GET /api/services` - List all services with current month totals
- `GET /api/services/:id/history` - Get detailed watch history
- `POST /api/scrape/:service` - Manually trigger scraping
- `GET /api/health` - Health check
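As a sketch of consuming the API from Go, a minimal client could decode the `/api/services` response like this. Note the JSON field names (`id`, `name`, `month_minutes`) and the sample payload are assumptions for illustration, not a documented schema — inspect the real response before relying on them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// service mirrors one plausible /api/services entry. The field names
// here are assumptions, not the documented response shape.
type service struct {
	ID           int    `json:"id"`
	Name         string `json:"name"`
	MonthMinutes int    `json:"month_minutes"`
}

// parseServices decodes a /api/services response body.
func parseServices(body []byte) ([]service, error) {
	var out []service
	err := json.Unmarshal(body, &out)
	return out, err
}

func main() {
	// Sample payload standing in for GET http://localhost:3000/api/services
	body := []byte(`[{"id":1,"name":"Netflix","month_minutes":1250}]`)

	services, err := parseServices(body)
	if err != nil {
		panic(err)
	}
	for _, s := range services {
		fmt.Printf("%s: %.1f hours this month\n", s.Name, float64(s.MonthMinutes)/60)
	}
}
```

In a real client you would replace the sample payload with the body of an `http.Get` against the running server.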
See IMPLEMENTATION_PLAN.md for detailed development stages and technical decisions.