StreamTime

Personal streaming service watch time tracker. Monitors and displays watch time across Netflix, YouTube TV, Amazon Video, Hulu, and other streaming platforms.

Features

  • 📊 Dashboard View: Monthly watch time totals per service
  • 📈 Detailed Analytics: Historical trends, shows watched, episode breakdowns
  • 🔄 Automated Scraping: Daily collection of viewing history
  • 🎯 Single User: Simple setup, no authentication required

Tech Stack

  • Backend: Go with SQLite database
  • Frontend: React with Tailwind CSS
  • Infrastructure: Docker Compose
  • Scraping: chromedp (headless Chrome)
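
These pieces fit together as two containers: a Go API that owns the SQLite database and runs the scrapers, plus the React UI. The compose sketch below is illustrative only (service names, ports, and volume paths are assumptions); the repository's own docker-compose.yml is the authoritative definition.

services:
  backend:
    build: ./backend              # Go API + chromedp scrapers
    volumes:
      - ./data:/data              # keep the SQLite database on the host
      - ./config.yaml:/app/config.yaml
  frontend:
    build: ./frontend             # React + Tailwind dashboard
    ports:
      - "3000:3000"               # matches http://localhost:3000 below
    depends_on:
      - backend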

Setup

Prerequisites

  • Docker and Docker Compose
  • Or: Go 1.21+, Node.js 18+, Chrome/Chromium

Configuration

  1. Copy the example config:

    cp config.example.yaml config.yaml
  2. Export authentication cookies for your streaming services:

    # Using Make (recommended)
    make refresh-cookies SERVICE=netflix
    make refresh-cookies SERVICE=youtube_tv
    make refresh-cookies SERVICE=amazon_video
    make refresh-cookies SERVICE=hulu
    
    # Or directly
    ./backend/export-cookies --service netflix
    ./backend/export-cookies --service youtube_tv
    ./backend/export-cookies --service amazon_video
    ./backend/export-cookies --service hulu

    The tool will:

    • Open a browser window for you to log in
    • Automatically extract and validate cookies
    • Update config.yaml with the resulting cookie settings

    See Cookie Export Tool Documentation for details.

  3. (Optional) Configure scraping schedule (default: daily at 3 AM)
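
    If you do adjust the schedule, it lives in config.yaml. The snippet below is only a sketch with assumed key names; the actual keys are defined in config.example.yaml.

    # Illustrative sketch only; check config.example.yaml for the real key names
    scraper:
      schedule: "0 3 * * *"   # cron expression: daily at 3 AM (the documented default)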

Building and Running

Using Make (Recommended)

The project includes a Makefile for common tasks:

# Show all available commands
make help

# Build the backend server
make build

# Build all binaries (server + export-cookies)
make build-all

# Run the server locally
make run

# Cookie management
make refresh-cookies SERVICE=netflix      # Refresh Netflix cookies
make refresh-cookies SERVICE=youtube_tv   # Refresh YouTube TV cookies
make refresh-cookies SERVICE=amazon_video # Refresh Amazon Video cookies
make refresh-cookies SERVICE=hulu         # Refresh Hulu cookies

# Docker commands
make docker-build    # Build Docker images
make docker-up       # Start Docker containers
make docker-down     # Stop Docker containers
make docker-logs     # Show backend logs

# Clean build artifacts
make clean

Running with Docker

make docker-up
# or
docker-compose up -d

Access the app at http://localhost:3000

Running Locally

Backend:

make run
# or manually:
cd backend
go mod download
go run cmd/server/main.go

Frontend:

cd frontend
npm install
npm start

Remote Installation

To install the backend on a remote server for cron-based scraping:

# Using defaults (mediaserver host, /opt/streamtime for binaries, /usr/local/etc/streamtime for config/db)
make install-remote

# Or override paths
export REMOTE_HOST=user@your-server.com
export REMOTE_BIN_DIR=/opt/streamtime
export REMOTE_ETC_DIR=/usr/local/etc/streamtime
make install-remote

This will:

  • Build both server and export-cookies binaries
  • Create remote directories if needed
  • Copy binaries to /opt/streamtime
  • Copy config.yaml to /usr/local/etc/streamtime

On the remote host, run:

# Using the installed paths
/opt/streamtime/server --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db

# Or with custom paths
/opt/streamtime/server --config /path/to/config.yaml --database /path/to/streamtime.db

Command-line flags:

Server (/opt/streamtime/server):

  • --config <path> - Path to config.yaml file (default: ./config.yaml)
  • --database <path> - Path to database file, overrides config setting (optional)

Scraper (/opt/streamtime/scraper):

  • --config <path> - Path to config.yaml file (default: ./config.yaml)
  • --database <path> - Path to database file, overrides config setting (optional)
  • --service <name> - Specific service to scrape (Netflix, YouTube TV, Amazon Video, Hulu). If omitted, runs all enabled services.
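
As a rough illustration of how these flags relate (this is a sketch, not the project's actual cmd/ code), a Go entry point handling them might look like:

// Illustrative flag handling; the real entry points in this repo may differ.
package main

import (
    "flag"
    "log"
)

func main() {
    configPath := flag.String("config", "./config.yaml", "path to config.yaml")
    dbPath := flag.String("database", "", "database file; overrides the config value when non-empty")
    service := flag.String("service", "", "single service to scrape; empty means all enabled services")
    flag.Parse()

    log.Printf("config=%s database=%s service=%q", *configPath, *dbPath, *service)
    // Next steps: load config, open the database (preferring --database over the
    // config setting), then scrape either the named service or every enabled one.
}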

Cron Setup

To run the scraper daily at 6am, add to crontab (crontab -e):

# Run StreamTime scraper daily at 6am
0 6 * * * /opt/streamtime/scraper --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1

Or for a specific service:

# Run only YouTube TV scraper daily at 6am
0 6 * * * /opt/streamtime/scraper --service youtube_tv --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1

Amazon Video: Persistent Session Setup

Amazon cookies are session-bound and don't work when exported to a different browser context. Instead, use a persistent Chrome profile on the remote server with Xvfb (virtual display) and VNC for manual login.

One-Time Server Setup

# SSH to your remote server
ssh remote-hostname

# Install required packages
sudo apt-get update
sudo apt-get install -y chromium-browser xvfb x11vnc openbox

# Create persistent Chrome profile directory
mkdir -p ~/.config/chromium-amazon

Deploy VNC Display Service

The project includes a systemd service that starts Xvfb, openbox, and x11vnc automatically on boot:

# Sync files to remote server (includes VNC scripts)
make install-remote

# Enable and start the VNC display service
make setup-vnc-remote

This installs:

  • /opt/streamtime/start-vnc-display.sh - Idempotent startup script
  • /etc/systemd/system/streamtime-vnc.service - Systemd service

Useful commands:

# Check service status
ssh remote-hostname "systemctl status streamtime-vnc"

# Manually start/stop
ssh remote-hostname "sudo systemctl start streamtime-vnc"
ssh remote-hostname "sudo systemctl stop streamtime-vnc"

# Run script directly (safe to run multiple times)
ssh remote-hostname "/opt/streamtime/start-vnc-display.sh"

# Verify processes are running
ssh remote-hostname "pgrep -a Xvfb; pgrep -a openbox; pgrep -a x11vnc"

Manual Login via VNC

When you need to log in (the first time, or whenever the session expires):

  1. Connect to your server via VNC client (e.g., RealVNC) at remote-hostname:5900

  2. In a terminal on the remote server, launch Chromium:

    DISPLAY=:99 chromium-browser --no-sandbox --user-data-dir=/home/username/.config/chromium-amazon https://www.amazon.com/gp/video/settings/watch-history &
  3. The browser window will appear in your VNC viewer

  4. Log into Amazon and complete any 2FA prompts or CAPTCHA-style puzzles

  5. Verify you see your watch history, then close the browser

The session is now saved in the profile directory.

Running the Scraper

With the VNC display service running:

# Run Amazon scraper with the virtual display
DISPLAY=:99 /opt/streamtime/scraper --service 'Amazon Video' --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db

Cron Setup for Amazon

The VNC display service starts automatically on boot, so cron just needs to run the scraper:

# Run Amazon scraper daily at 6am
0 6 * * * DISPLAY=:99 /opt/streamtime/scraper --service 'Amazon Video' --config /usr/local/etc/streamtime/config.yaml --database /usr/local/etc/streamtime/streamtime.db >> /var/log/streamtime.log 2>&1

Config.yaml Setup

In your config.yaml, set the user_data_dir for Amazon:

services:
  amazon_video:
    enabled: true
    user_data_dir: "/home/username/.config/chromium-amazon"

When user_data_dir is set, the scraper uses the persistent profile instead of cookies.
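
For context, this maps onto chromedp's exec-allocator options. The sketch below shows the general technique of pointing chromedp at a persistent profile; it is illustrative only, not the project's actual scraper code.

// Sketch: drive Chrome with a persistent profile via chromedp (illustrative only).
package main

import (
    "context"
    "log"

    "github.com/chromedp/chromedp"
)

func main() {
    opts := append(chromedp.DefaultExecAllocatorOptions[:],
        chromedp.UserDataDir("/home/username/.config/chromium-amazon"), // reuse the logged-in profile
        chromedp.NoSandbox,               // mirrors --no-sandbox in the manual login command
        chromedp.Flag("headless", false), // render into the Xvfb display (DISPLAY=:99)
    )
    allocCtx, cancelAlloc := chromedp.NewExecAllocator(context.Background(), opts...)
    defer cancelAlloc()
    ctx, cancelCtx := chromedp.NewContext(allocCtx)
    defer cancelCtx()

    var title string
    err := chromedp.Run(ctx,
        chromedp.Navigate("https://www.amazon.com/gp/video/settings/watch-history"),
        chromedp.Title(&title),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Println("page title:", title) // a sign-in title here means the session has expired
}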

Session Lifespan

Persistent browser sessions typically last for weeks to months. Log in again via VNC when:

  • The scraper reports being redirected to the login page
  • After major Amazon security updates
  • If you clear the profile directory

Cookie Maintenance

Cookies typically expire after 30-90 days. To maintain your scrapers:

Refresh expired cookies:

# Using Make (recommended)
make refresh-cookies SERVICE=netflix
make refresh-cookies SERVICE=youtube_tv
make refresh-cookies SERVICE=amazon_video
make refresh-cookies SERVICE=hulu

# Or directly
./backend/export-cookies --service netflix
./backend/export-cookies --service youtube_tv
./backend/export-cookies --service amazon_video
./backend/export-cookies --service hulu

Check if cookies are valid:

./backend/export-cookies --service netflix --validate
./backend/export-cookies --service youtube_tv --validate
./backend/export-cookies --service amazon_video --validate
./backend/export-cookies --service hulu --validate

The tool opens a browser window for you to log in, then automatically updates your config.

API Endpoints

  • GET /api/services - List all services with current month totals
  • GET /api/services/:id/history - Get detailed watch history
  • POST /api/scrape/:service - Manually trigger scraping
  • GET /api/health - Health check
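
For example, with the stack running (these examples assume the API is reachable at the same http://localhost:3000 address used above, and that the :service segment takes the service names used elsewhere in this README; adjust for your setup):

# List services with current-month totals
curl http://localhost:3000/api/services

# Detailed history for the service with id 1
curl http://localhost:3000/api/services/1/history

# Manually trigger a Netflix scrape
curl -X POST http://localhost:3000/api/scrape/netflix

# Health check
curl http://localhost:3000/api/health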

Important Notes

⚠️ For Personal Use Only: This application uses web scraping, which may violate streaming services' Terms of Service. Use at your own risk.

⚠️ Security: Store credentials securely. The config file contains sensitive information.

⚠️ 2FA: Services with two-factor authentication may require OAuth or manual session management.

Development

See IMPLEMENTATION_PLAN.md for detailed development stages and technical decisions.
