FluxLens - Predictive Delivery OS

FluxLens is an engineering delivery operating system that forecasts coordination breakdowns 30-90 days in advance using neural forecasting models on top of your existing toolchain (Jira, Slack, GitHub).

🔬 Research & Methodology

FluxLens is built on rigorous academic principles, moving beyond simple rule-based heuristics to true predictive modeling.

👉 Read the Academic Paper

  • Research Question: "Can neural forecasting models detect coordination breakdowns earlier than rule-based heuristics?"
  • Methodology: Multimodal metadata analysis (Interaction Graphs + Work Item Aging).
  • Experiment: Validated "Lead Time to Detection" superiority (+11 days) on historical launch data.

🚀 New Production-Ready Features

✨ Dashboard Polish

  • Signature FluxLens health gauge with glowing needle above 80%
  • Quick-view header nav with live search, notifications, and command palette
  • Expanded overview layout with enriched forecast, signals, and team heatmaps

✅ Authentication & User Accounts

  • NextAuth.js v5 integration with credentials and Google OAuth
  • Sign-in page at /auth/signin
  • Demo credentials: demo@fluxlens.ai / demo123
  • Protected routes with middleware
  • Session management with JWT

✅ Persistent User Preferences

  • Zustand store with localStorage persistence
  • Saves filters, theme, dashboard layout, and notification settings
  • API endpoints at /api/preferences for backend sync
  • Survives page refreshes

✅ API Layer with Mock/Real Data Swap

  • RESTful endpoint: GET /api/flux/data
  • Easy toggle between mock and real data
  • Query parameters: department, startDate, endDate, mock
  • Documentation at /docs/integration
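
As a sketch of how a client might call this endpoint, the helper below assembles the documented query parameters (department, startDate, endDate, mock) into a request URL. The function name and option shape are illustrative, not part of the FluxLens API:

```typescript
// Illustrative client helper for GET /api/flux/data.
// Parameter names come from the docs above; everything else is an assumption.
export function buildFluxDataUrl(
  opts: {
    department?: string;
    startDate?: string; // ISO date, e.g. "2025-01-01"
    endDate?: string;
    mock?: boolean;     // toggle between mock and real data
  },
  base = "/api/flux/data"
): string {
  const params = new URLSearchParams();
  if (opts.department) params.set("department", opts.department);
  if (opts.startDate) params.set("startDate", opts.startDate);
  if (opts.endDate) params.set("endDate", opts.endDate);
  if (opts.mock !== undefined) params.set("mock", String(opts.mock));
  const qs = params.toString();
  return qs ? `${base}?${qs}` : base;
}
```

For example, `buildFluxDataUrl({ department: "eng", mock: false })` yields `/api/flux/data?department=eng&mock=false`.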

✅ Error Boundaries & Loading States

  • React Error Boundary with graceful fallbacks
  • Skeleton loaders for all major components
  • Toast notifications via Sonner
  • Development error details, production-friendly messages

✅ Documentation & Marketing Pages

  • /about - Company mission and technology
  • /pricing - Three-tier pricing model
  • /docs/integration - API integration guide
  • Professional layout with dark theme

✅ Ops Agent (beta)

  • In-app action chips in the assistant to send weekly reports, trigger syncs, export snapshots, and run full integration syncs
  • New actions: rebuild snapshots from events (Supabase service role required), export CSV/PDF with download links
  • Backend endpoint /api/agent/execute with rate limits, audit logging (agent_runs table), and dry-run support
  • Enable via AGENT_ENABLED=true; set AGENT_DEFAULT_DRY_RUN=false to execute actions (flip to true if you want simulation-only).
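
The flag semantics above can be sketched as a small parser. The env var names are from this README; the function and its default-to-dry-run behavior are assumptions about how a safe implementation might read them:

```typescript
// Illustrative parsing of the Ops Agent flags.
// AGENT_ENABLED and AGENT_DEFAULT_DRY_RUN are documented; the shape is assumed.
export type AgentMode = { enabled: boolean; dryRun: boolean };

export function resolveAgentMode(env: Record<string, string | undefined>): AgentMode {
  const enabled = env.AGENT_ENABLED === "true";
  // Default to dry-run unless explicitly disabled, so a misconfigured
  // deployment simulates actions instead of executing them.
  const dryRun = env.AGENT_DEFAULT_DRY_RUN !== "false";
  return { enabled, dryRun };
}
```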

🏁 Quick Start

For detailed setup instructions, see SETUP.md

TL;DR (local)

npm install
cp .env.example .env.local
npx tsx scripts/checkSupabaseReady.ts  # optional sanity check
npm run dx:doctor                      # quick env and connectivity doctor
npm run dev

Prerequisites

  • Node.js 18+
  • npm or yarn
  • OpenSSL (for generating secrets)

Quick Installation

# Install dependencies
npm install

# Copy environment variables
cp .env.example .env.local

# Generate secure NextAuth secret
openssl rand -base64 32
# Copy output to NEXTAUTH_SECRET in .env.local

# Start development server
npm run dev

Open http://localhost:3000 to view the app.

Default Login

demo@fluxlens.ai / demo123

Important: These are demo credentials for development only.

πŸ“ Project Structure

fluxlens-ai/
├── src/
│   ├── app/                      # Next.js App Router pages
│   │   ├── about/                # About page
│   │   ├── api/                  # API routes
│   │   │   ├── auth/             # NextAuth endpoints
│   │   │   ├── flux/data/        # Flux data API
│   │   │   └── preferences/      # User preferences API
│   │   ├── auth/signin/          # Sign-in page
│   │   ├── dashboard/            # Protected dashboard
│   │   ├── docs/integration/     # Integration docs
│   │   └── pricing/              # Pricing page
│   ├── components/               # React components
│   │   ├── ErrorBoundary.tsx     # Error handling
│   │   ├── FluxDashboard.tsx     # Main dashboard
│   │   └── LoadingSkeletons.tsx  # Loading states
│   ├── store/                    # Zustand state management
│   │   └── userPreferencesStore.ts
│   └── auth.ts                   # NextAuth configuration
├── .env.local                    # Environment variables
└── WARP.md                       # AI assistant context

🔧 Configuration

Environment Variables

# AI Provider (defaults to OpenRouter + Llama 3.3)
NEXT_PUBLIC_AI_PROVIDER=openrouter
OPENROUTER_API_KEY=your_openrouter_key
OPENROUTER_DEFAULT_MODEL=meta-llama/llama-3.3-8b-instruct:free
AI_SIMPLE_MODEL=meta-llama/llama-3.3-8b-instruct:free
AI_COMPLEX_MODEL=meta-llama/llama-3.3-70b-instruct:free
NEXT_DISABLE_FONT_DOWNLOADS=1
NEXT_BINARY_CACHE_DIR=.next/cache/swc
NEXT_FORCE_WASM=1
PLAYWRIGHT_TEST_BASE_URL=http://127.0.0.1:3100

# Supabase (optional until live data pipeline is ready)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=public-anon-key
SUPABASE_SERVICE_ROLE_KEY=service-role-key

# Optional OpenAI fallback
OPENAI_API_KEY=
OPENAI_DEFAULT_MODEL=gpt-4o-mini

# NextAuth
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=your-secret-key

# Google OAuth (optional)
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret

# Integration OAuth (Slack, Jira, Linear)
# Get these from your OAuth app configurations
SLACK_CLIENT_ID=your_slack_client_id
SLACK_CLIENT_SECRET=your_slack_client_secret
JIRA_CLIENT_ID=your_jira_client_id
JIRA_CLIENT_SECRET=your_jira_client_secret
LINEAR_CLIENT_ID=your_linear_client_id
LINEAR_CLIENT_SECRET=your_linear_client_secret

# Feature Flags
NEXT_PUBLIC_ENABLE_AI_FEATURES=true
NEXT_PUBLIC_ENABLE_EXPORT=true
NEXT_PUBLIC_USE_REAL_DATA=false

Supabase Setup

  1. Install and authenticate the Supabase CLI.

  2. Configure SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY in .env.local.

  3. Apply the migrations (remote db seed --file is not supported in Supabase CLI v2.58):

    # If the CLI profile isn't linked, use the DB password from Supabase > Settings > Database
    # and project ref from SUPABASE_URL.
    SUPABASE_DB_PASSWORD=your_db_password \
    supabase db push --db-url "postgresql://postgres.<project_ref>:${SUPABASE_DB_PASSWORD}@aws-1-us-east-1.pooler.supabase.com:5432/postgres"
  4. Populate Supabase with a fresh mock snapshot (safe to rerun any time):

    # Requires tsx (`npm install -g tsx`) or run with `npx tsx`
    FLUXLENS_ORG_ID=demo-org FLUXLENS_RESET=true npx tsx scripts/ingestSupabaseSnapshots.ts
    • Set FLUXLENS_ORG_ID to target a different tenant.
    • Omit FLUXLENS_RESET or set it to false to append without clearing prior rows.
  5. Validate connectivity (optional but recommended) before flipping real-data mode:

    npx tsx scripts/checkSupabaseReady.ts
    • Confirms env variables are present and PostgREST responds.
    • Exits non-zero if NEXT_PUBLIC_USE_REAL_DATA is true but Supabase env is missing.
  6. When ready for real integrations, schedule the ingestion script (or a variant that calls your production data sources) via cron, Supabase Edge Functions, or your job runner of choice. For a local cron (requires tsx on your PATH), drop something like:

    0 2 * * * cd /path/to/fluxlens-ai && \
      FLUXLENS_ORG_ID=demo-org FLUXLENS_RESET=true \
      npx tsx scripts/ingestSupabaseSnapshots.ts >> logs/ingest.log 2>&1

    In CI, the new workflow at .github/workflows/nightly-supabase-ingest.yml runs the same command nightly at 02:00 UTC. Define the SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY repository secrets (and override FLUXLENS_ORG_ID if needed) so the job can authenticate.
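
The connectivity check in step 5 can be summarized as a pure validation function. This is a minimal sketch of the behavior scripts/checkSupabaseReady.ts is described as having; the function name and return shape are assumptions:

```typescript
// Sketch of the real-data readiness check: returns a list of problems,
// empty when the environment is consistent.
export function checkSupabaseEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  const required = ["SUPABASE_URL", "SUPABASE_ANON_KEY", "SUPABASE_SERVICE_ROLE_KEY"];
  const missing = required.filter((key) => !env[key]);
  // Mirrors the documented rule: real-data mode with missing Supabase env
  // should fail (the actual script exits non-zero).
  if (env.NEXT_PUBLIC_USE_REAL_DATA === "true" && missing.length > 0) {
    errors.push(`Real-data mode is on but Supabase env is missing: ${missing.join(", ")}`);
  }
  return errors;
}
```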

Switching from Mock to Real Data

  1. Seed Supabase with scripts/ingestSupabaseSnapshots.ts (see above) so /api/flux/data has a snapshot to read.
  2. Set NEXT_PUBLIC_USE_REAL_DATA=true in .env.local and restart the dev server.
  3. Provide SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY so the repository can reach PostgREST.
  4. Hit any API with ?mock=false (e.g. /api/flux/data?mock=false) to verify the real snapshot is returned before mock fallbacks, or run SESSION_COOKIE="next-auth.session-token=..." ./scripts/smokeRealData.sh to exercise the key routes in one go.

Nightly Supabase Snapshot

  • Workflow: .github/workflows/nightly-supabase-ingest.yml
  • Schedule: 0 2 * * * (daily at 02:00 UTC, plus manual workflow_dispatch)
  • Secrets required: SUPABASE_URL, SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY
  • What it does: checks out the repo, installs dependencies, and runs npx tsx scripts/ingestSupabaseSnapshots.ts with FLUXLENS_ORG_ID=demo-org and FLUXLENS_RESET=true. Update these env vars if you need a different tenant or incremental ingest.

Slack Snapshot Ingestion (Prototype)

Use scripts/ingestSlackSnapshot.ts to copy a batch of Slack messages into Supabase for demos or local testing:

# Defaults to data/slackSnapshot.json; pass a path as argv[2] for custom payloads
FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
npx tsx scripts/ingestSlackSnapshot.ts data/slackSnapshot.json
  • Input format lives in data/slackSnapshot.json
  • Each message is persisted as an events row with type="slack_signal" plus severity metadata derived from urgency
  • Audit logging via /audit_log is best-effort; failures do not block ingestion
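
The urgency-to-severity derivation mentioned above might look like the mapping below. This is hypothetical: the urgency labels and thresholds are assumptions, not the script's actual logic:

```typescript
// Hypothetical mapping from a Slack message's urgency label to the
// severity metadata stored on the events row. Labels here are assumed.
export type Severity = "low" | "medium" | "high";

export function severityFromUrgency(urgency: string): Severity {
  switch (urgency.toLowerCase()) {
    case "urgent":
    case "critical":
      return "high";
    case "elevated":
      return "medium";
    default:
      // Unknown or routine urgency degrades gracefully to low severity.
      return "low";
  }
}
```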

Jira Snapshot Ingestion (Prototype)

Use scripts/ingestJiraSnapshot.ts to ingest Jira issues into Supabase so dashboards show Jira freshness:

FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
npx tsx scripts/ingestJiraSnapshot.ts data/jiraSnapshot.json
  • Defaults to data/jiraSnapshot.json if no path is provided
  • Rebuilds the snapshot after ingest so UI freshness chips update
  • Requires real Supabase env keys and NEXT_PUBLIC_USE_REAL_DATA=true

Jira API Ingestion (Live)

Pull real Jira issues via API (requires a Jira access token and cloud ID):

FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
JIRA_ACCESS_TOKEN=... \
JIRA_CLOUD_ID=... \
JIRA_PROJECT_KEY=OPS \
npx tsx scripts/ingestJiraApi.ts
  • Optional: set JIRA_INGEST_LIMIT (default 50) to cap results.
  • Rebuilds the snapshot after ingest so dashboard freshness chips stay current.
  • Use this for real-data demos instead of the static snapshot script.

Real-Data Smoke Test

Use scripts/smokeRealData.sh to hit the critical API routes with the real-data flag in one go:

chmod +x scripts/smokeRealData.sh
SESSION_COOKIE="next-auth.session-token=..." ./scripts/smokeRealData.sh
  • Requires a valid SESSION_COOKIE (copy from the browser devtools) and optionally BASE_URL (defaults to http://localhost:3000).
  • Each hit prints the first ~400 bytes so you can eyeball the payload without overwhelming the console.
  • In CI, the Real Data Smoke workflow (.github/workflows/real-data-smoke.yml) runs daily if SESSION_COOKIE is set as a GitHub secret and can alert via ALERT_WEBHOOK_URL.
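
The "first ~400 bytes" preview behavior can be sketched as a small truncation helper, so large payloads stay readable in a terminal. The helper name and truncation marker are assumptions, not the script's actual output format:

```typescript
// Sketch of a payload preview: keep at most maxBytes of the response body.
export function previewPayload(body: string, maxBytes = 400): string {
  const bytes = Buffer.from(body, "utf8");
  if (bytes.length <= maxBytes) return body;
  // Decode only the first maxBytes and mark the cut; a multi-byte character
  // straddling the boundary may decode as a replacement character.
  return bytes.subarray(0, maxBytes).toString("utf8") + "...[truncated]";
}
```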

Next Integrations (Slack / Jira)

  1. Supabase targets: extend or reuse tables such as event_record, event_comment, diagnosis, and audit_log. If Slack/Jira payloads demand new columns, add them in a migration alongside mock fallbacks so the API contract stays backward compatible.
  2. Ingestion worker: create a script under scripts/ (e.g. scripts/ingestSlackEvents.ts) that normalizes provider payloads, then calls repo.addEvents, repo.addComments, or new repository helpers. Keep provider adapters pure so you can test them without network calls.
  3. API endpoints: expose /api/integrations/slack or /api/integrations/jira to trigger re-syncs, surface health, or accept signed webhooks. Route handlers should validate provider signatures, enqueue work, and respond quickly so dashboards stay responsive.
  4. Testing: add unit tests that feed recorded Slack/Jira JSON into the worker and assert Supabase insert batches. For end-to-end coverage, add a Playwright spec that mocks the provider endpoint, flips NEXT_PUBLIC_USE_REAL_DATA=true, and verifies the dashboard reflects the ingested artifacts.

πŸ›‘οΈ AI Quota Resilience

  • All OpenRouter calls now attach OPENROUTER_SITE_URL and OPENROUTER_APP_TITLE headers, parse rate-limit headers, and emit structured quota logs. Configure:

    OPENROUTER_SITE_URL=https://your-domain
    OPENROUTER_APP_TITLE="FluxLens AI"
    AI_QUOTA_ALERT_THRESHOLD=10   # optional, defaults to 10 remaining calls
    AI_ALERT_WEBHOOK_URL=https://hooks.slack.com/services/...
  • When remaining quota falls below the threshold (or hits 0), quotaMonitor logs a warning/exhausted event and optionally POSTs to AI_ALERT_WEBHOOK_URL (Slack, email bridge, etc.).

  • Backend endpoints return structured fallback payloads instead of silent 429 errors, for example:

    {
      "error": "AI features are temporarily unavailable due to free quota exhaustion.",
      "fallback": true,
      "provider": "simulated",
      "quotaDetails": { "remaining": 0, "limit": 50, "reset": "2025-10-31T00:00:00Z" }
    }
  • Frontend surfaces the failure with a global quota banner, labels simulated responses (chatbot, simulator, diagnoses, reports), and shows actionable guidance ("retry after reset or upgrade").

  • Offline fallbacks include canned insights (chatbot replies, analyst summaries, diagnoses, reports) so product demos keep working even when the free plan is exhausted.

  • Recent quota events are persisted client-side via useQuotaStore, making it easy for admins/testers to see remaining calls.
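
On the frontend, a type guard over the fallback payload shown above lets callers branch between real and simulated responses. The field names come from the example payload; the guard itself is an assumption, not FluxLens's actual code:

```typescript
// Shape of the structured fallback payload documented above.
export interface QuotaFallback {
  error: string;
  fallback: true;
  provider: string;
  quotaDetails?: { remaining: number; limit: number; reset: string };
}

// Narrow an unknown API response to the fallback shape.
export function isQuotaFallback(payload: unknown): payload is QuotaFallback {
  if (typeof payload !== "object" || payload === null) return false;
  const p = payload as Record<string, unknown>;
  return p.fallback === true && typeof p.error === "string" && typeof p.provider === "string";
}
```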

🧪 Testing

# Run unit tests
npm run test

# Type check
npm run build

DX helpers

  • npm run dx:doctor - validates env keys and Supabase connectivity.
  • Husky pre-push runs lint and targeted vitest --run (set SKIP_PREPUSH=1 to bypass briefly).

🎨 Key Features

Flux Health Score (0-100)

Calculates organizational health based on:

  • Response latency patterns
  • Handoff failure rates
  • Meeting density load
  • Cross-team collaboration index
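
One way the listed factors could combine into a 0-100 score, as an illustration only: the weights and the assumption that each input is pre-normalized to [0, 1] are ours, not FluxLens's actual formula:

```typescript
// Illustrative health score. Inputs are assumed normalized to [0, 1];
// the weights below are placeholders, not the product's real coefficients.
export interface HealthInputs {
  responseLatency: number;    // 0..1, higher is worse
  handoffFailureRate: number; // 0..1, higher is worse
  meetingDensity: number;     // 0..1, higher is worse
  crossTeamIndex: number;     // 0..1, higher is *better*
}

export function fluxHealthScore(m: HealthInputs): number {
  const penalty =
    0.3 * m.responseLatency +
    0.3 * m.handoffFailureRate +
    0.2 * m.meetingDensity +
    0.2 * (1 - m.crossTeamIndex);
  const score = Math.round(100 * (1 - penalty));
  return Math.min(100, Math.max(0, score)); // clamp to 0-100
}
```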

Predictive Forecasting

  • 30/60/90-day forecast windows
  • Linear regression on metric slopes
  • Anomaly detection with 2-sigma deviation
  • Confidence scoring
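
The two primitives named above, a least-squares slope over a metric series and a 2-sigma deviation test, can be sketched as follows. This mirrors the described approach, not FluxLens's actual implementation:

```typescript
// Least-squares slope of a series sampled at x = 0, 1, 2, ...
export function slope(values: number[]): number {
  const n = values.length;
  if (n < 2) return 0;
  const xMean = (n - 1) / 2;
  const yMean = values.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  values.forEach((y, x) => {
    num += (x - xMean) * (y - yMean);
    den += (x - xMean) ** 2;
  });
  return num / den;
}

// True when `candidate` deviates from the series mean by more than
// `sigmas` standard deviations (population std).
export function isAnomaly(values: number[], candidate: number, sigmas = 2): boolean {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance = values.reduce((a, y) => a + (y - mean) ** 2, 0) / values.length;
  return Math.abs(candidate - mean) > sigmas * Math.sqrt(variance);
}
```

For instance, `slope([0, 1, 2, 3])` is 1, and a flat series flags any departure from its constant value as anomalous.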

AI-Powered Insights

  • OpenAI GPT-4 integration
  • Root cause analysis
  • Intervention recommendations
  • Natural language explanations
  • Event diagnosis on predicted risks with cached AI "Why?" analysis

🚢 Deployment

For production deployment, see DEPLOYMENT_CHECKLIST.md.

Quick Deploy to Vercel

# Install Vercel CLI
npm i -g vercel

# Login
vercel login

# Deploy
vercel --prod

Required Environment Variables

Configure these in the Vercel dashboard:

  1. NEXTAUTH_SECRET - Generate: openssl rand -base64 32
  2. NEXTAUTH_URL - Your production domain (e.g., https://fluxlens.ai)
  3. OPENAI_API_KEY - Optional, for AI features
  4. GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET - Optional, for OAuth

See DEPLOYMENT_CHECKLIST.md for complete list.

📊 Architecture

  • Frontend: Next.js 16 (App Router), React 19, TypeScript
  • Styling: Tailwind CSS, Radix UI
  • State: Zustand (global), localStorage (persistence)
  • Auth: NextAuth.js v5
  • Charts: Recharts, D3.js
  • AI: OpenAI GPT-4
  • Deployment: Vercel

πŸ” Security

  • JWT-based session management
  • Protected API routes with auth middleware
  • HTTPS-only cookies in production
  • Environment variable encryption

🔌 Integrations

FluxLens supports OAuth integrations with Slack, Jira, and Linear:

  1. Set up OAuth apps - See docs/INTEGRATIONS.md for detailed instructions
  2. Configure environment variables - Add OAuth credentials to .env.local
  3. Run database migrations - npx prisma migrate dev to create the IntegrationToken table
  4. Connect integrations - Go to Settings → Integrations and click "Connect"

Syncing Data

After connecting integrations, sync data manually or via background job:

# Manual sync via API
curl -X POST http://localhost:3000/api/integrations/sync \
  -H "Cookie: next-auth.session-token=..."

# Or run the background job script
npx tsx scripts/sync-integrations.ts

Set up a cron job to sync automatically:

# Runs every hour
0 * * * * cd /path/to/fluxlens-ai && npx tsx scripts/sync-integrations.ts
Privacy note: integrations ingest metadata only; message content is never analyzed.

📌 Current Goals

  • Keep priorities in sync with CURRENT_GOALS.md (single source of truth)
  • All previous roadmap items are complete; pitch deck is the only carryover task

🤝 Contributing

This is a private project. For questions or support, contact the repository owner.

📄 License

Proprietary - All rights reserved

🎯 Demo

Try the live demo at fluxlens.ai (when deployed)


Built with ❤️ using Next.js 16, React 19, and TypeScript

Additional Notes (Security, DB, Monitoring)

Environment & Secrets Summary

  • Server-only: OPENROUTER_API_KEY, OPENAI_API_KEY, optional default model vars
  • Public flags: NEXT_PUBLIC_AI_PROVIDER, NEXT_PUBLIC_APP_URL
  • OAuth: GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET

See .env.example for the authoritative list.

Production Security

  • CSP in src/middleware/security.ts removes 'unsafe-inline'/'unsafe-eval' in production and restricts connect-src to known domains.
  • AI routes include basic rate limiting.

Repository Layer

  • src/lib/db/repository.ts now prefers Supabase for preferences, audit logs, events, comments, health deltas, and AI diagnoses when the required environment variables are present. It automatically falls back to the JSON files under data/ for local demo mode.

Health Monitoring

  • UptimeRobot: monitor GET /api/health and the root page. Health response includes environment and org context.
  • Alerting webhook: ALERT_WEBHOOK_URL receives failures from scripts/refreshSupabaseSnapshots.ts (ingest + rebuild pipeline) and other ingest jobs.
  • Observability runbook: see docs/MONITORING.md for alerting, dashboards, and troubleshooting steps.

Proof-point packaging

Generate ready-to-share assets for investors/customers after updating docs/PROOF_POINTS.md:

# 1) Refresh markdown for slides + one-pagers
npx tsx scripts/generate_proof_points.ts

# 2) Export PDFs for the deck/data room
npx tsx scripts/export_proof_points_pdf.ts

Outputs land in docs/out/ and are referenced by docs/DATA_ROOM.md.

  • Observability smoke: run pnpm vitest run tests/observability.spec.ts to verify Sentry/PostHog keys/hosts are set before deploys.
  • Weekly report delivery: use npm run report:weekly or hit /api/cron/send-weekly-report (set CRON_SECRET, REPORT_EMAIL_TO, WEEKLY_REPORT_WEBHOOK_URL/DIGEST_SLACK_WEBHOOK_URL).
  • Weekly automations: .github/workflows/weekly-report-cron.yml posts to /api/cron/send-weekly-report using CRON_APP_BASE_URL and CRON_SECRET. It also emits a funnel-health Slack ping (FUNNEL_ALERT_WEBHOOK_URL, falling back to ALERT_WEBHOOK_URL) based on /api/funnel/weekly, authenticated via the x-funnel-secret header (FUNNEL_SECRET or CRON_SECRET).
  • Nightly Supabase refresh: .github/workflows/nightly-rebuild.yml runs npm run supabase:refresh with NEXT_PUBLIC_USE_REAL_DATA=true and Supabase keys; keep ALERT_WEBHOOK_URL wired so ingest failures notify you.