FluxLens is an engineering delivery operating system that forecasts coordination breakdowns 30-90 days in advance using neural forecasting models on top of your existing toolchain (Jira, Slack, GitHub).
FluxLens is built on rigorous academic principles, moving beyond simple rule-based heuristics to true predictive modeling.
- Research Question: "Can neural forecasting models detect coordination breakdowns earlier than rule-based heuristics?"
- Methodology: Multimodal metadata analysis (Interaction Graphs + Work Item Aging).
- Experiment: Validated "Lead Time to Detection" superiority (+11 days) on historical launch data.
- Signature FluxLens health gauge with glowing needle above 80%
- Quick-view header nav with live search, notifications, and command palette
- Expanded overview layout with enriched forecast, signals, and team heatmaps
- NextAuth.js v5 integration with credentials and Google OAuth
- Sign-in page at `/auth/signin`
- Demo credentials: `demo@fluxlens.ai` / `demo123`
- Protected routes with middleware
- Session management with JWT
- Zustand store with localStorage persistence
- Saves filters, theme, dashboard layout, and notification settings
- API endpoints at `/api/preferences` for backend sync
- Survives page refreshes (store sketch below)
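A minimal sketch of how such a persisted store can be wired with Zustand's `persist` middleware. The real implementation lives in `src/store/userPreferencesStore.ts`; the state shape and storage key here are illustrative:

```ts
import { create } from "zustand";
import { persist } from "zustand/middleware";

// Illustrative state shape; the real store also tracks dashboard layout and notification settings
interface PreferencesState {
  theme: "light" | "dark";
  filters: Record<string, string>;
  setTheme: (theme: "light" | "dark") => void;
}

export const usePreferencesStore = create<PreferencesState>()(
  persist(
    (set) => ({
      theme: "dark",
      filters: {},
      setTheme: (theme) => set({ theme }),
    }),
    { name: "fluxlens-preferences" } // localStorage key (hypothetical)
  )
);
```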
- RESTful endpoint: `GET /api/flux/data`
- Easy toggle between mock and real data
- Query parameters: `department`, `startDate`, `endDate`, `mock` (example request below)
- Documentation at `/docs/integration`
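For example, fetching one department's mock data (the query values are placeholders; a session cookie may be required, as on the smoke-test routes):

```bash
curl "http://localhost:3000/api/flux/data?department=platform&startDate=2025-01-01&endDate=2025-03-31&mock=true" \
  -H "Cookie: next-auth.session-token=..."
```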
- React Error Boundary with graceful fallbacks (usage sketch below)
- Skeleton loaders for all major components
- Toast notifications via Sonner
- Development error details, production-friendly messages
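A hypothetical usage sketch, assuming `ErrorBoundary` wraps children and accepts a `fallback` prop; this is a common pattern, but check `src/components/ErrorBoundary.tsx` for the actual exports and props:

```tsx
import ErrorBoundary from "@/components/ErrorBoundary";
import FluxDashboard from "@/components/FluxDashboard";

export default function DashboardPage() {
  return (
    // If FluxDashboard throws during render, the boundary shows the fallback
    // instead of crashing the whole page
    <ErrorBoundary fallback={<p>Something went wrong loading the dashboard.</p>}>
      <FluxDashboard />
    </ErrorBoundary>
  );
}
```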
- `/about` - Company mission and technology
- `/pricing` - Three-tier pricing model
- `/docs/integration` - API integration guide
- Professional layout with dark theme
- In-app action chips in the assistant to send weekly reports, trigger syncs, export snapshots, and run full integration syncs
- New actions: rebuild snapshots from events (Supabase service role required), export CSV/PDF with download links
- Backend endpoint `/api/agent/execute` with rate limits, audit logging (`agent_runs` table), and dry-run support (example request below)
- Enable via `AGENT_ENABLED=true`; set `AGENT_DEFAULT_DRY_RUN=false` to execute actions (flip it to `true` if you want simulation-only)
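A hypothetical dry-run invocation; the action name and payload shape are assumptions for illustration, not the documented contract:

```bash
curl -X POST http://localhost:3000/api/agent/execute \
  -H "Content-Type: application/json" \
  -H "Cookie: next-auth.session-token=..." \
  -d '{"action": "export_snapshot", "dryRun": true}'
```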
For detailed setup instructions, see SETUP.md
```bash
npm install
cp .env.example .env.local
npx tsx scripts/checkSupabaseReady.ts   # optional sanity check
npm run dx:doctor                       # quick env and connectivity doctor
npm run dev
```
- Node.js 18+
- npm or yarn
- OpenSSL (for generating secrets)
```bash
# Install dependencies
npm install

# Copy environment variables
cp .env.example .env.local

# Generate secure NextAuth secret
openssl rand -base64 32
# Copy output to NEXTAUTH_SECRET in .env.local

# Start development server
npm run dev
```
Open http://localhost:3000 to view the app.
- Email: demo@fluxlens.ai
- Password: demo123
Important: These are demo credentials for development only.
```
fluxlens-ai/
├── src/
│   ├── app/                     # Next.js App Router pages
│   │   ├── about/               # About page
│   │   ├── api/                 # API routes
│   │   │   ├── auth/            # NextAuth endpoints
│   │   │   ├── flux/data/       # Flux data API
│   │   │   └── preferences/     # User preferences API
│   │   ├── auth/signin/         # Sign-in page
│   │   ├── dashboard/           # Protected dashboard
│   │   ├── docs/integration/    # Integration docs
│   │   └── pricing/             # Pricing page
│   ├── components/              # React components
│   │   ├── ErrorBoundary.tsx    # Error handling
│   │   ├── FluxDashboard.tsx    # Main dashboard
│   │   └── LoadingSkeletons.tsx # Loading states
│   ├── store/                   # Zustand state management
│   │   └── userPreferencesStore.ts
│   └── auth.ts                  # NextAuth configuration
├── .env.local                   # Environment variables
└── WARP.md                      # AI assistant context
```
```bash
# AI Provider (defaults to OpenRouter + Llama 3.3)
NEXT_PUBLIC_AI_PROVIDER=openrouter
OPENROUTER_API_KEY=your_openrouter_key
OPENROUTER_DEFAULT_MODEL=meta-llama/llama-3.3-8b-instruct:free
AI_SIMPLE_MODEL=meta-llama/llama-3.3-8b-instruct:free
AI_COMPLEX_MODEL=meta-llama/llama-3.3-70b-instruct:free
NEXT_DISABLE_FONT_DOWNLOADS=1
NEXT_BINARY_CACHE_DIR=.next/cache/swc
NEXT_FORCE_WASM=1
PLAYWRIGHT_TEST_BASE_URL=http://127.0.0.1:3100
# Supabase (optional until live data pipeline is ready)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=public-anon-key
SUPABASE_SERVICE_ROLE_KEY=service-role-key
# Optional OpenAI fallback
OPENAI_API_KEY=
OPENAI_DEFAULT_MODEL=gpt-4o-mini
# NextAuth
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=your-secret-key
# Google OAuth (optional)
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret
# Integration OAuth (Slack, Jira, Linear)
# Get these from your OAuth app configurations
SLACK_CLIENT_ID=your_slack_client_id
SLACK_CLIENT_SECRET=your_slack_client_secret
JIRA_CLIENT_ID=your_jira_client_id
JIRA_CLIENT_SECRET=your_jira_client_secret
LINEAR_CLIENT_ID=your_linear_client_id
LINEAR_CLIENT_SECRET=your_linear_client_secret
# Feature Flags
NEXT_PUBLIC_ENABLE_AI_FEATURES=true
NEXT_PUBLIC_ENABLE_EXPORT=true
NEXT_PUBLIC_USE_REAL_DATA=false
```

1. Install and authenticate the Supabase CLI.
2. Configure `SUPABASE_URL`, `SUPABASE_ANON_KEY`, and `SUPABASE_SERVICE_ROLE_KEY` in `.env.local`.
3. Apply the migrations (remote `db seed --file` is not supported in Supabase CLI v2.58):

   ```bash
   # If the CLI profile isn't linked, use the DB password from Supabase > Settings > Database
   # and the project ref from SUPABASE_URL.
   SUPABASE_DB_PASSWORD=your_db_password \
   supabase db push --db-url "postgresql://postgres.<project_ref>:${SUPABASE_DB_PASSWORD}@aws-1-us-east-1.pooler.supabase.com:5432/postgres"
   ```

4. Populate Supabase with a fresh mock snapshot (safe to rerun any time):

   ```bash
   # Requires tsx (`npm install -g tsx`) or run with `npx tsx`
   FLUXLENS_ORG_ID=demo-org FLUXLENS_RESET=true npx tsx scripts/ingestSupabaseSnapshots.ts
   ```

   - Set `FLUXLENS_ORG_ID` to target a different tenant.
   - Omit `FLUXLENS_RESET` or set it to `false` to append without clearing prior rows.

5. Validate connectivity (optional but recommended) before flipping real-data mode:

   ```bash
   npx tsx scripts/checkSupabaseReady.ts
   ```

   - Confirms env variables are present and PostgREST responds.
   - Exits non-zero if `NEXT_PUBLIC_USE_REAL_DATA` is true but Supabase env is missing.

6. When ready for real integrations, schedule the ingestion script (or a variant that calls your production data sources) via cron, Supabase Edge Functions, or your job runner of choice. For a local cron (requires `tsx` on your PATH), drop in something like:

   ```cron
   0 2 * * * cd /path/to/fluxlens-ai && FLUXLENS_ORG_ID=demo-org FLUXLENS_RESET=true npx tsx scripts/ingestSupabaseSnapshots.ts >> logs/ingest.log 2>&1
   ```

In CI, the workflow at `.github/workflows/nightly-supabase-ingest.yml` runs the same command nightly at 02:00 UTC. Define the `SUPABASE_URL`, `SUPABASE_ANON_KEY`, and `SUPABASE_SERVICE_ROLE_KEY` repository secrets (and override `FLUXLENS_ORG_ID` if needed) so the job can authenticate.
- Seed Supabase with `scripts/ingestSupabaseSnapshots.ts` (see above) so `/api/flux/data` has a snapshot to read.
- Set `NEXT_PUBLIC_USE_REAL_DATA=true` in `.env.local` and restart the dev server.
- Provide `SUPABASE_URL`, `SUPABASE_ANON_KEY`, and `SUPABASE_SERVICE_ROLE_KEY` so the repository can reach PostgREST.
- Hit any API with `?mock=false` (e.g. `/api/flux/data?mock=false`) to verify the real snapshot is returned before mock fallbacks, or run `SESSION_COOKIE="next-auth.session-token=..." ./scripts/smokeRealData.sh` to exercise the key routes in one go.
- Workflow: `.github/workflows/nightly-supabase-ingest.yml` (sketched below)
- Schedule: `0 2 * * *` (runs daily, plus manual `workflow_dispatch`)
- Secrets required: `SUPABASE_URL`, `SUPABASE_ANON_KEY`, `SUPABASE_SERVICE_ROLE_KEY`
- What it does: checks out the repo, installs dependencies, and runs `npx tsx scripts/ingestSupabaseSnapshots.ts` with `FLUXLENS_ORG_ID=demo-org` and `FLUXLENS_RESET=true`. Update these env vars if you need a different tenant or an incremental ingest.
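For orientation, a workflow with that shape can look roughly like the sketch below; the checked-in file at `.github/workflows/nightly-supabase-ingest.yml` is authoritative, and the Node setup steps here are assumptions:

```yaml
name: Nightly Supabase Ingest
on:
  schedule:
    - cron: "0 2 * * *"
  workflow_dispatch: {}
jobs:
  ingest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx tsx scripts/ingestSupabaseSnapshots.ts
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_ANON_KEY: ${{ secrets.SUPABASE_ANON_KEY }}
          SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.SUPABASE_SERVICE_ROLE_KEY }}
          FLUXLENS_ORG_ID: demo-org
          FLUXLENS_RESET: "true"
```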
Use `scripts/ingestSlackSnapshot.ts` to copy a batch of Slack messages into Supabase for demos or local testing:
```bash
# Defaults to data/slackSnapshot.json; pass a path as argv[2] for custom payloads
FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
npx tsx scripts/ingestSlackSnapshot.ts data/slackSnapshot.json
```
- Input format lives in `data/slackSnapshot.json` (rough sketch below)
- Each message is persisted as an `events` row with `type="slack_signal"` plus severity metadata derived from `urgency`
- Audit logging via `/audit_log` is best-effort; failures do not block ingestion
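For orientation only, a payload might look roughly like this; apart from `urgency` (which the ingest script reads for severity), the field names are assumptions, and `data/slackSnapshot.json` remains the source of truth:

```json
{
  "messages": [
    {
      "channel": "C01ABC123",
      "user": "U02XYZ789",
      "ts": "1730000000.000100",
      "urgency": "high"
    }
  ]
}
```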
Use `scripts/ingestJiraSnapshot.ts` to ingest Jira issues into Supabase so dashboards show Jira freshness:
```bash
FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
npx tsx scripts/ingestJiraSnapshot.ts data/jiraSnapshot.json
```
- Defaults to `data/jiraSnapshot.json` if no path is provided
- Rebuilds the snapshot after ingest so UI freshness chips update
- Requires real Supabase env keys and `NEXT_PUBLIC_USE_REAL_DATA=true`
Pull real Jira issues via API (requires a Jira access token and cloud ID):
```bash
FLUXLENS_ORG_ID=demo-org \
SUPABASE_URL=... \
SUPABASE_SERVICE_ROLE_KEY=... \
JIRA_ACCESS_TOKEN=... \
JIRA_CLOUD_ID=... \
JIRA_PROJECT_KEY=OPS \
npx tsx scripts/ingestJiraApi.ts
```
- Optional: set `JIRA_INGEST_LIMIT` (default 50) to cap results.
- Rebuilds the snapshot after ingest so dashboard freshness chips stay current.
- Use this for real-data demos instead of the static snapshot script.
Use `scripts/smokeRealData.sh` to hit the critical API routes with the real-data flag in one go:
```bash
chmod +x scripts/smokeRealData.sh
SESSION_COOKIE="next-auth.session-token=..." ./scripts/smokeRealData.sh
```
- Requires a valid `SESSION_COOKIE` (copy it from the browser devtools) and optionally `BASE_URL` (defaults to `http://localhost:3000`).
- Each hit prints the first ~400 bytes so you can eyeball the payload without overwhelming the console.
- In CI, the Real Data Smoke workflow (`.github/workflows/real-data-smoke.yml`) runs daily if `SESSION_COOKIE` is set as a GitHub secret and can alert via `ALERT_WEBHOOK_URL`.
- Supabase targets: extend or reuse tables such as `event_record`, `event_comment`, `diagnosis`, and `audit_log`. If Slack/Jira payloads demand new columns, add them in a migration alongside mock fallbacks so the API contract stays backward compatible.
- Ingestion worker: create a script under `scripts/` (e.g. `scripts/ingestSlackEvents.ts`) that normalizes provider payloads, then calls `repo.addEvents`, `repo.addComments`, or new repository helpers. Keep provider adapters pure so you can test them without network calls (see the sketch after this list).
- API endpoints: expose `/api/integrations/slack` or `/api/integrations/jira` to trigger re-syncs, surface health, or accept signed webhooks. Route handlers should validate provider signatures, enqueue work, and respond quickly so dashboards stay responsive.
- Testing: add unit tests that feed recorded Slack/Jira JSON into the worker and assert the Supabase insert batches. For end-to-end coverage, add a Playwright spec that mocks the provider endpoint, flips `NEXT_PUBLIC_USE_REAL_DATA=true`, and verifies the dashboard reflects the ingested artifacts.
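As a sketch of the "pure adapter" idea under stated assumptions (the `EventRow` shape, message fields, and `repo.addEvents` call are illustrative, not the real types):

```ts
interface RawSlackMessage {
  channel: string;
  user: string;
  ts: string; // Slack epoch timestamp, e.g. "1730000000.000100"
  urgency?: "low" | "medium" | "high";
}

interface EventRow {
  orgId: string;
  type: "slack_signal";
  severity: "info" | "warning" | "critical";
  occurredAt: string; // ISO timestamp
  metadata: Record<string, string>;
}

// Pure function: no network calls, so it can be unit-tested with recorded JSON fixtures
export function normalizeSlackMessages(orgId: string, messages: RawSlackMessage[]): EventRow[] {
  return messages.map((m) => ({
    orgId,
    type: "slack_signal",
    severity: m.urgency === "high" ? "critical" : m.urgency === "medium" ? "warning" : "info",
    occurredAt: new Date(parseFloat(m.ts) * 1000).toISOString(),
    metadata: { channel: m.channel, user: m.user }, // metadata only, no message content
  }));
}

// A worker script would then batch the rows into Supabase, e.g. repo.addEvents(rows)
```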
- All OpenRouter calls now attach `OPENROUTER_SITE_URL` and `OPENROUTER_APP_TITLE` headers, parse rate-limit headers, and emit structured quota logs. Configure:
  ```bash
  OPENROUTER_SITE_URL=https://your-domain
  OPENROUTER_APP_TITLE="FluxLens AI"
  AI_QUOTA_ALERT_THRESHOLD=10   # optional, defaults to 10 remaining calls
  AI_ALERT_WEBHOOK_URL=https://hooks.slack.com/services/...
  ```
- When remaining quota falls below the threshold (or hits `0`), `quotaMonitor` logs a warning/exhausted event and optionally POSTs to `AI_ALERT_WEBHOOK_URL` (Slack, an email bridge, etc.).
- Backend endpoints return structured fallback payloads instead of silent 429 errors, for example (client handling is sketched after this list):
  ```json
  {
    "error": "AI features are temporarily unavailable due to free quota exhaustion.",
    "fallback": true,
    "provider": "simulated",
    "quotaDetails": { "remaining": 0, "limit": 50, "reset": "2025-10-31T00:00:00Z" }
  }
  ```
- Frontend surfaces the failure with a global quota banner, labels simulated responses (chatbot, simulator, diagnoses, reports), and shows actionable guidance ("retry after reset or upgrade").
- Offline fallbacks include canned insights (chatbot replies, analyst summaries, diagnoses, reports) so product demos keep working even when the free plan is exhausted.
- Recent quota events are persisted client-side via `useQuotaStore`, making it easy for admins/testers to see remaining calls.
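A sketch of how a client can branch on that fallback payload; the endpoint path is a placeholder, and `useQuotaStore`'s real API may differ:

```ts
async function askAssistant(prompt: string) {
  const res = await fetch("/api/assistant", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = await res.json();

  if (data.fallback) {
    // Simulated/canned response: label it in the UI and surface the quota banner
    console.warn("AI quota exhausted; showing simulated response", data.quotaDetails);
  }
  return data;
}
```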
```bash
# Run unit tests
npm run test

# Type check
npm run build
```
- `npm run dx:doctor` validates env keys and Supabase connectivity.
- Husky pre-push runs `lint` and a targeted `vitest --run` (set `SKIP_PREPUSH=1` to bypass briefly).
Calculates organizational health based on:
- Response latency patterns
- Handoff failure rates
- Meeting density load
- Cross-team collaboration index
- 30/60/90-day forecast windows
- Linear regression on metric slopes
- Anomaly detection with 2-sigma deviation (both sketched below)
- Confidence scoring
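A rough illustration of the trend math above; a sketch, not the production model:

```ts
// Least-squares slope of a metric sampled at evenly spaced intervals (x = 0, 1, 2, ...)
function slope(values: number[]): number {
  const n = values.length;
  const xMean = (n - 1) / 2;
  const yMean = values.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  values.forEach((y, x) => {
    num += (x - xMean) * (y - yMean);
    den += (x - xMean) ** 2;
  });
  return den === 0 ? 0 : num / den;
}

// Points more than 2 standard deviations from the mean are flagged as anomalies
function anomalies(values: number[]): number[] {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const sigma = Math.sqrt(values.reduce((a, y) => a + (y - mean) ** 2, 0) / values.length);
  return values.filter((y) => Math.abs(y - mean) > 2 * sigma);
}
```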
- OpenAI GPT-4 integration
- Root cause analysis
- Intervention recommendations
- Natural language explanations
- Event diagnosis on predicted risks with cached AI βWhy?β analysis
For production deployment, see:
- DEPLOYMENT.md - Detailed deployment guide
- DEPLOYMENT_CHECKLIST.md - Pre-deployment checklist
```bash
# Install Vercel CLI
npm i -g vercel

# Login
vercel login

# Deploy
vercel --prod
```
Must be configured in the Vercel dashboard:
- `NEXTAUTH_SECRET` - Generate with `openssl rand -base64 32`
- `NEXTAUTH_URL` - Your production domain (e.g., `https://fluxlens.ai`)
- `OPENAI_API_KEY` - Optional, for AI features
- `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET` - Optional, for OAuth
See DEPLOYMENT_CHECKLIST.md for complete list.
- Frontend: Next.js 16 (App Router), React 19, TypeScript
- Styling: Tailwind CSS, Radix UI
- State: Zustand (global), localStorage (persistence)
- Auth: NextAuth.js v5
- Charts: Recharts, D3.js
- AI: OpenAI GPT-4
- Deployment: Vercel
- JWT-based session management
- Protected API routes with auth middleware
- HTTPS-only cookies in production
- Environment variable encryption
FluxLens supports OAuth integrations with Slack, Jira, and Linear:
- Set up OAuth apps - see `docs/INTEGRATIONS.md` for detailed instructions
- Configure environment variables - add OAuth credentials to `.env.local`
- Run database migrations - `npx prisma migrate dev` creates the `IntegrationToken` table
- Connect integrations - go to Settings → Integrations and click "Connect"
After connecting integrations, sync data manually or via background job:
```bash
# Manual sync via API
curl -X POST http://localhost:3000/api/integrations/sync \
  -H "Cookie: next-auth.session-token=..."

# Or run the background job script
npx tsx scripts/sync-integrations.ts
```
Set up a cron job to sync automatically:
```cron
# Runs every hour
0 * * * * cd /path/to/fluxlens-ai && npx tsx scripts/sync-integrations.ts
```
- No message content analysis (metadata only)
- Keep priorities in sync with `CURRENT_GOALS.md` (single source of truth)
- All previous roadmap items are complete; the pitch deck is the only carryover task
This is a private project. For questions or support, contact:
- Email: hello@fluxlens.ai
- Sales: sales@fluxlens.ai
- Support: support@fluxlens.ai
Proprietary - All rights reserved
Try the live demo at fluxlens.ai (when deployed)
Built with ❤️ using Next.js 16, React 19, and TypeScript
- Server-only: `OPENROUTER_API_KEY`, `OPENAI_API_KEY`, optional default model vars
- Public flags: `NEXT_PUBLIC_AI_PROVIDER`, `NEXT_PUBLIC_APP_URL`
- OAuth: `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`

See `.env.example` for the authoritative list.
- CSP in `src/middleware/security.ts` removes `'unsafe-inline'`/`'unsafe-eval'` in production and restricts `connect-src` to known domains.
- AI routes include basic rate limiting.
- `src/lib/db/repository.ts` now prefers Supabase for preferences, audit logs, events, comments, health deltas, and AI diagnoses when the required environment variables are present, and automatically falls back to the JSON files under `data/` for local demo mode (pattern sketched below).
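A compressed sketch of that Supabase-first/JSON-fallback pattern; the function, table, and file names here are illustrative, and `src/lib/db/repository.ts` is the real code:

```ts
import { readFile } from "node:fs/promises";
import { createClient } from "@supabase/supabase-js";

const url = process.env.SUPABASE_URL;
const key = process.env.SUPABASE_SERVICE_ROLE_KEY;

export async function getEvents(orgId: string) {
  if (url && key) {
    // Supabase-first path (PostgREST); table/column names are assumptions
    const supabase = createClient(url, key);
    const { data, error } = await supabase.from("events").select("*").eq("org_id", orgId);
    if (!error && data) return data;
  }
  // Local demo mode: fall back to the JSON snapshots under data/ (file name illustrative)
  return JSON.parse(await readFile("data/events.json", "utf8"));
}
```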
- UptimeRobot: monitor `GET /api/health` and the root page. The health response includes environment and org context.
- Alerting webhook: `ALERT_WEBHOOK_URL` receives failures from `scripts/refreshSupabaseSnapshots.ts` (the ingest + rebuild pipeline) and other ingest jobs.
- Observability runbook: see `docs/MONITORING.md` for alerting, dashboards, and troubleshooting steps.
Generate ready-to-share assets for investors/customers after updating `docs/PROOF_POINTS.md`:
```bash
# 1) Refresh markdown for slides + one-pagers
npx tsx scripts/generate_proof_points.ts

# 2) Export PDFs for the deck/data room
npx tsx scripts/export_proof_points_pdf.ts
```
Outputs land in `docs/out/` and are referenced by `docs/DATA_ROOM.md`.
- Observability smoke: run `pnpm vitest run tests/observability.spec.ts` to verify Sentry/PostHog keys and hosts are set before deploys.
- Weekly report delivery: use `npm run report:weekly` or hit `/api/cron/send-weekly-report` (set `CRON_SECRET`, `REPORT_EMAIL_TO`, and `WEEKLY_REPORT_WEBHOOK_URL`/`DIGEST_SLACK_WEBHOOK_URL`).
- Weekly automations: `.github/workflows/weekly-report-cron.yml` posts to `/api/cron/send-weekly-report` with `CRON_APP_BASE_URL` + `CRON_SECRET`, and also emits a funnel-health Slack ping (`FUNNEL_ALERT_WEBHOOK_URL`, falling back to `ALERT_WEBHOOK_URL`) based on `/api/funnel/weekly`, authenticated via `x-funnel-secret` (`FUNNEL_SECRET` or `CRON_SECRET`).
- Nightly Supabase refresh: `.github/workflows/nightly-rebuild.yml` runs `npm run supabase:refresh` with `NEXT_PUBLIC_USE_REAL_DATA=true` and Supabase keys; keep `ALERT_WEBHOOK_URL` wired so ingest failures notify you.