TradeStrike

Professional traders have access to expensive terminals and fast proprietary scanners that give them an edge in spotting momentum stocks. Retail traders are left with delayed feeds and often lack the visibility and context to interpret stock metrics correctly, which keeps their decisions from being data-driven. TradeStrike aims to level the playing field for retail traders by:

  • Providing near real-time stock anomaly detection to surface momentum opportunities as they happen.
  • Delivering simple, decisive alerts that cut through noise.
  • Offering low-barrier and low-cost usage without compromising notification speed or detection quality.

Architecture

TradeStrike Architecture

Stack Overview

TradeStrike runs as a Docker Compose stack with the following services:

| Service | Description | Ports (host) |
| --- | --- | --- |
| kafka | Apache Kafka 4.x (KRaft mode) with JMX exporter baked in | 9092, 19092, 9102 |
| kafka-exporter | Prometheus Kafka lag exporter (danielqsj/kafka-exporter) | 9308 |
| kafka-topic-init | One-shot container that creates required Kafka topics idempotently | n/a |
| feed-ingestor | Streams Alpaca market data (real or fake) into Kafka | 9200 (Prom metrics) |
| feed-archiver | Writes Kafka bars into TimescaleDB | 9300 (Prom metrics) |
| detection-engine | Evaluates anomalies per ticker window | 9600 (Prom metrics) |
| detection-archiver | Stores detections in TimescaleDB + MinIO | 9700 (Prom metrics) |
| statistics-calculator | Maintains rolling stats and publishes aggregates | 9400 (Prom metrics) |
| statistics-archiver | Persists rolling stats to TimescaleDB | 9500 (Prom metrics) |
| redis | In-memory store for ephemeral stats caches | 6379 |
| redis-exporter | Prometheus exporter for Redis metrics | 9121 |
| postgres | TimescaleDB instance for persistent application state | 5432 |
| postgres-exporter | Prometheus exporter for Timescale metrics | 9187 |
| minio | S3-compatible object storage | 9000, 9001 |
| minio-init-bucket | Initializes MinIO bucket during stack startup | n/a |
| prometheus | Metrics collection | 9090 |
| grafana | Dashboards + alerting | 3000 |
| loki | Centralized log aggregation backend | 3100 |
| promtail | Log shipper forwarding container logs to Loki | 9080 |
| rabbitmq | Message broker with management UI | 5672, 15672, 15692 |

Source code and Dockerfiles live under services/<name>/, while shared monitoring configuration stays in monitoring/.
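To make the data flow concrete, here is a sketch of what a minute bar on the ticker-feed topic could look like as it moves between services. The field names and the JSON-over-Kafka encoding are illustrative assumptions, not the actual wire schema:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class MinuteBar:
    """Illustrative minute-bar payload; the real schema may differ."""

    symbol: str
    open: float
    high: float
    low: float
    close: float
    volume: int
    timestamp: str  # ISO-8601 minute boundary

    def to_json(self) -> bytes:
        # Kafka message values are bytes; encode as UTF-8 JSON.
        return json.dumps(asdict(self)).encode("utf-8")

    @classmethod
    def from_json(cls, raw: bytes) -> "MinuteBar":
        return cls(**json.loads(raw.decode("utf-8")))


bar = MinuteBar("AAPL", 189.2, 189.9, 188.7, 189.5, 120_000, "2024-01-02T14:31:00Z")
assert MinuteBar.from_json(bar.to_json()) == bar  # round-trips cleanly
```

A feed-ingestor-style producer would publish `bar.to_json()` to the minute-bar topic, and consumers such as feed-archiver would decode with `MinuteBar.from_json`.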

Prerequisites

  • Docker and Docker Compose (Compose v2) installed
  • Python 3.10+ (only required for local unit tests / tooling)
  • Alpaca Market Data API credentials

Installation and Running

To get started with the repository, follow these steps:

  1. Clone the repository:

    git clone https://github.com/byue/TradeStrike.git
    cd TradeStrike
  2. Set up Alpaca API keys: create an account on Alpaca and retrieve an API key/secret from https://app.markets/user/profile#manage-accounts, then make ALPACA_API_KEY and ALPACA_API_SECRET available as environment variables.

  3. Update the .env file:

    ALPACA_API_KEY=your-key
    ALPACA_API_SECRET=your-secret
    # optional: change Prometheus metrics port
    # FEED_INGESTOR_METRICS_PORT=9200
    # optional: override Kafka connectivity or topic defaults
    # KAFKA_BOOTSTRAP_SERVERS=kafka:9092
    # KAFKA_MINUTE_BAR_TOPIC=ticker-feed
    # KAFKA_TICKER_DETECTION_TOPIC=ticker-detection
    # DETECTION_ENGINE_VOLUME_Z_THRESHOLD=2.5
    # DETECTION_ENGINE_PRICE_Z_THRESHOLD=2.5
    # KAFKA_MINUTE_BAR_PARTITIONS=3
    # KAFKA_MINUTE_BAR_REPLICATION_FACTOR=1
    

    Docker Compose automatically loads variables from .env. Kafka bootstrap/topic settings are pre-populated there; tweak them as needed.

  4. Start the stack:

    make create-stack

    This builds the custom images (services/kafka, services/feed-ingestor) and starts all services. The feed ingestor will fail fast if Alpaca credentials are missing. If you prefer to use Docker Compose directly, run docker compose up -d --build. The kafka-topic-init one-shot service will provision the ticker-feed topic (or the overrides you supply in .env) after Kafka is reachable.

    Synthetic data option — to launch the stack with the in-process fake Alpaca stream instead of hitting the real API, flip the Make flag (and optionally tweak the interval):

    make create-stack MOCK_ALPACA_CLIENT=true FAKE_ALPACA_INTERVAL_SECONDS=0.5

    Under the hood this sets USE_FAKE_ALPACA_CLIENT=1 (and related overrides) before invoking docker compose up. You can achieve the same by exporting those variables manually if you are starting the stack without Make.

    Prometheus automatically scrapes the feed ingestor on feed-ingestor:9200, so once the service is running you can explore counters such as num_alpaca_responses_total, num_alpaca_events_total, feed_ingestor_queue_enqueued_total, feed_ingestor_queue_dequeued_total, feed_ingestor_events_publish_succeeded_total, feed_ingestor_events_publish_failed_total, and feed_ingestor_stream_errors_total in Grafana (see “Feed Ingestor Metrics” under the TradeStrike Monitoring folder) or directly via curl localhost:9200/metrics.

  5. Access monitoring tools (host ports from the stack overview above):

     • Grafana dashboards: http://localhost:3000
     • Prometheus UI: http://localhost:9090
     • MinIO console: http://localhost:9001
     • RabbitMQ management UI: http://localhost:15672

  6. Tail logs: Promtail ships container logs to Loki. You can browse them through Grafana (Explore tab) or via docker compose logs <service>.
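The DETECTION_ENGINE_VOLUME_Z_THRESHOLD / DETECTION_ENGINE_PRICE_Z_THRESHOLD settings in step 3 suggest a rolling z-score check per ticker window. A minimal sketch of that idea (not the actual detection-engine code) might look like:

```python
import math
from collections import deque


def zscore(history, value):
    """Z-score of `value` against a window of past observations."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = math.sqrt(var)
    return 0.0 if std == 0 else (value - mean) / std


def is_anomalous(history, value, threshold=2.5):
    """Flag a bar whose metric deviates beyond the configured threshold
    (default mirrors the 2.5 shown for the DETECTION_ENGINE_* variables)."""
    return abs(zscore(history, value)) > threshold


# Rolling window of recent per-minute volumes for one ticker.
window = deque([100, 110, 95, 105, 100], maxlen=30)
print(is_anomalous(window, 400))  # → True  (large volume spike)
print(is_anomalous(window, 102))  # → False (within normal range)
```

The real engine maintains such windows per ticker (with the statistics-calculator feeding rolling stats); this sketch only illustrates the thresholding arithmetic.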

Local Development (Python)

Even though services run in containers, a local virtual environment remains useful for unit tests or quick experimentation.

make install                               # create venv + install shared dev dependencies
make install-service SERVICE=feed-ingestor  # install runtime deps for a specific service
make lint                                   # run Ruff lint (read-only)
make lint-fix                               # run formatter + lint autofix for quick cleanup
make format                                 # Ruff formatter only (no lint)
make test                                   # run lint + pytest with 100% coverage gate
make clean                                  # tear down stack + remove venv, caches, coverage artifacts

Each Python microservice keeps its runtime dependencies under services/<name>/requirements.txt. Use make install-service SERVICE=<name> to layer that service's dependencies into your local venv when needed.
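For quick local experimentation without any Alpaca credentials, the synthetic-data mode from step 4 can be approximated with a standalone generator. This sketch is independent of the real fake Alpaca client's implementation; the dict fields are assumptions for illustration:

```python
import random
from datetime import datetime, timedelta, timezone


def fake_minute_bars(symbol, count, start_price=100.0, seed=None):
    """Yield `count` synthetic minute bars as dicts, random-walking the
    price. A stand-in for the in-process fake Alpaca stream."""
    rng = random.Random(seed)
    ts = datetime(2024, 1, 2, 14, 30, tzinfo=timezone.utc)
    price = start_price
    for _ in range(count):
        drift = rng.uniform(-0.5, 0.5)
        open_, close = price, price + drift
        yield {
            "symbol": symbol,
            "open": round(open_, 2),
            "high": round(max(open_, close) + rng.uniform(0, 0.2), 2),
            "low": round(min(open_, close) - rng.uniform(0, 0.2), 2),
            "close": round(close, 2),
            "volume": rng.randint(1_000, 50_000),
            "timestamp": ts.isoformat(),
        }
        price = close
        ts += timedelta(minutes=1)


bars = list(fake_minute_bars("FAKE", 3, seed=42))
```

Passing a fixed `seed` makes runs reproducible, which is handy when unit-testing downstream consumers.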

Optional Git Hooks

Install the bundled pre-commit hooks to automatically run Ruff before each commit:

pip install pre-commit
pre-commit install

This registers the hooks defined in .pre-commit-config.yaml. You can always run them manually with pre-commit run --all-files.
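For reference, a typical Ruff pre-commit configuration looks like the fragment below; the repository's bundled .pre-commit-config.yaml is authoritative, and the `rev` here is a placeholder:

```yaml
# Illustrative only -- defer to the repo's own .pre-commit-config.yaml.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # pin to the release you actually use
    hooks:
      - id: ruff          # linter (optionally with --fix)
      - id: ruff-format   # formatter
```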

Maintenance Commands

  • Stop and remove the entire stack (including named volumes):

    make delete-stack

    ⚠️ This purges Kafka topics, Postgres data, etc.

  • Rebuild a single service image:

    docker compose build feed-ingestor
    docker compose build kafka
  • Apply only monitoring changes:

    docker compose restart prometheus grafana

Contributing

We welcome contributions to this project! If you'd like to contribute, follow these steps:

  1. Fork the repository and clone it to your local machine.
  2. Create a new branch for your feature or bug fix.
  3. Write tests for any new functionality or changes.
  4. Make sure all tests pass by running make test before submitting your pull request.
  5. Submit a pull request with a detailed explanation of your changes.

Please ensure your code follows the existing style and includes sufficient documentation.

License

This project is licensed under the MIT License. See the LICENSE file for more details.

By contributing to this repository, you agree that your contributions will be licensed under the MIT License.
