Professional traders have access to expensive terminals and fast proprietary scanners that give them an edge in spotting momentum stocks. Retail traders are left with delayed feeds and often lack the visibility needed to interpret stock metrics correctly, leaving their trading decisions under-informed. TradeStrike aims to level the playing field for retail traders by:
- Providing near real-time stock anomaly detection to surface momentum opportunities as they happen.
- Delivering simple, decisive alerts that cut through noise.
- Offering low-barrier and low-cost usage without compromising notification speed or detection quality.
TradeStrike runs as a Docker Compose stack with the following services:
| Service | Description | Ports (host) |
|---|---|---|
| kafka | Apache Kafka 4.x (KRaft mode) with JMX exporter baked in | 9092, 19092, 9102 |
| kafka-exporter | Prometheus Kafka lag exporter (danielqsj/kafka-exporter) | 9308 |
| kafka-topic-init | One-shot container that creates required Kafka topics idempotently | n/a |
| feed-ingestor | Streams Alpaca market data (real or fake) into Kafka | 9200 (Prom metrics) |
| feed-archiver | Writes Kafka bars into TimescaleDB | 9300 (Prom metrics) |
| detection-engine | Evaluates anomalies per ticker window | 9600 (Prom metrics) |
| detection-archiver | Stores detections in TimescaleDB + MinIO | 9700 (Prom metrics) |
| statistics-calculator | Maintains rolling stats and publishes aggregates | 9400 (Prom metrics) |
| statistics-archiver | Persists rolling stats to TimescaleDB | 9500 (Prom metrics) |
| redis | In-memory store for ephemeral stats caches | 6379 |
| redis-exporter | Prometheus exporter for Redis metrics | 9121 |
| postgres | TimescaleDB instance for persistent application state | 5432 |
| postgres-exporter | Prometheus exporter for Timescale metrics | 9187 |
| minio | S3-compatible object storage | 9000, 9001 |
| minio-init-bucket | Initializes MinIO bucket during stack startup | n/a |
| prometheus | Metrics collection | 9090 |
| grafana | Dashboards + alerting | 3000 |
| loki | Centralized log aggregation backend | 3100 |
| promtail | Log shipper forwarding container logs to Loki | 9080 |
| rabbitmq | Message broker with management UI | 5672, 15672, 15692 |
Source code and Dockerfiles live under `services/<name>/`, while shared monitoring configuration stays in `monitoring/`.
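With the stack running, a quick way to confirm that the Prometheus-instrumented services and exporters are reachable from the host is to probe the metrics ports listed in the table above. The sketch below is an illustrative helper and not part of the repository; the port map simply mirrors the host mappings from the table:

```python
from urllib.request import urlopen

# Host-side Prometheus metrics ports, taken from the service table above.
METRICS_PORTS = {
    "feed-ingestor": 9200,
    "feed-archiver": 9300,
    "statistics-calculator": 9400,
    "statistics-archiver": 9500,
    "detection-engine": 9600,
    "detection-archiver": 9700,
    "kafka-exporter": 9308,
    "redis-exporter": 9121,
    "postgres-exporter": 9187,
}


def metrics_url(service: str, host: str = "localhost") -> str:
    """Build the scrape URL for a service's Prometheus metrics endpoint."""
    return f"http://{host}:{METRICS_PORTS[service]}/metrics"


def check_endpoint(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False


if __name__ == "__main__":
    for service in METRICS_PORTS:
        status = "up" if check_endpoint(metrics_url(service)) else "down"
        print(f"{service:25s} {status}")
```

Running it against a fully started stack should report every service as `up`; a `down` entry is usually a container that failed health checks or is still starting.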
- Docker and Docker Compose (Compose v2) installed
- Python 3.10+ (only required for local unit tests / tooling)
- Alpaca Market Data API credentials
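The first two prerequisites can be sanity-checked before cloning anything. A minimal sketch (this helper is illustrative and not part of the repo; it does not verify the Compose v2 plugin or Alpaca credentials, which still need a manual check):

```python
import shutil
import sys


def missing_prereqs() -> list[str]:
    """Return human-readable problems; an empty list means good to go."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python 3.10+ is required for local unit tests / tooling")
    for tool in ("docker", "git"):
        if shutil.which(tool) is None:
            problems.append(f"{tool} not found on PATH")
    return problems


if __name__ == "__main__":
    for problem in missing_prereqs():
        print(f"missing prerequisite: {problem}")
```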
To get started with the repository, follow these steps:
-   Clone the repository:

        git clone https://github.com/byue/TradeStrike.git
        cd TradeStrike

-   Set up Alpaca API keys: create an account on Alpaca and retrieve your API key/secret (https://app.markets/user/profile#manage-accounts), then add `ALPACA_API_KEY` and `ALPACA_API_SECRET` to your environment variables.

-   Update the `.env` file:

        ALPACA_API_KEY=your-key
        ALPACA_API_SECRET=your-secret
        # optional: change Prometheus metrics port
        # FEED_INGESTOR_METRICS_PORT=9200
        # optional: override Kafka connectivity or topic defaults
        # KAFKA_BOOTSTRAP_SERVERS=kafka:9092
        # KAFKA_MINUTE_BAR_TOPIC=ticker-feed
        # KAFKA_TICKER_DETECTION_TOPIC=ticker-detection
        # DETECTION_ENGINE_VOLUME_Z_THRESHOLD=2.5
        # DETECTION_ENGINE_PRICE_Z_THRESHOLD=2.5
        # KAFKA_MINUTE_BAR_PARTITIONS=3
        # KAFKA_MINUTE_BAR_REPLICATION_FACTOR=1

    Docker Compose automatically loads variables from `.env`. Kafka bootstrap/topic settings are pre-populated there; tweak them as needed.

-   Start the stack:

        make create-stack

    This builds the custom images (`services/kafka`, `services/feed-ingestor`) and starts all services. The feed ingestor will fail fast if Alpaca credentials are missing. If you prefer to use Docker Compose directly, run `docker compose up -d --build`. The `kafka-topic-init` one-shot service provisions the `ticker-feed` topic (or the overrides you supply in `.env`) once Kafka is reachable.

    Synthetic data option: to launch the stack with the in-process fake Alpaca stream instead of hitting the real API, flip the Make flag (and optionally tweak the interval):

        make create-stack MOCK_ALPACA_CLIENT=true FAKE_ALPACA_INTERVAL_SECONDS=0.5

    Under the hood this sets `USE_FAKE_ALPACA_CLIENT=1` (and related overrides) before invoking `docker compose up`. You can achieve the same effect by exporting those variables manually if you start the stack without Make.

    Prometheus automatically scrapes the feed ingestor on `feed-ingestor:9200`, so once the service is running you can explore counters such as `num_alpaca_responses_total`, `num_alpaca_events_total`, `feed_ingestor_queue_enqueued_total`, `feed_ingestor_queue_dequeued_total`, `feed_ingestor_events_publish_succeeded_total`, `feed_ingestor_events_publish_failed_total`, and `feed_ingestor_stream_errors_total` in Grafana (see "Feed Ingestor Metrics" under the TradeStrike Monitoring folder) or directly via `curl localhost:9200/metrics`.

-   Access monitoring tools:

    - Prometheus: http://localhost:9090
    - Grafana: http://localhost:3000 (default credentials `admin`/`admin`)
    - MinIO: http://localhost:9001 (default credentials `minio_admin`/`minio_admin`)

-   Tail logs: Promtail ships container logs to Loki. You can browse them through Grafana (the Explore tab) or via `docker compose logs <service>`.
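The feed-ingestor counters mentioned above come back in the Prometheus text exposition format. The following sketch shows one way to pull counter values out of a scrape; the sample payload is illustrative, while real output comes from `curl localhost:9200/metrics`:

```python
def parse_counters(metrics_text: str) -> dict[str, float]:
    """Extract metric name -> value pairs from Prometheus text format."""
    counters: dict[str, float] = {}
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        try:
            counters[name] = float(value)
        except ValueError:
            continue  # skip anything that is not a "name value" sample
    return counters


# Illustrative sample of what a scrape might contain.
sample = """\
# HELP num_alpaca_responses_total Responses received from Alpaca
# TYPE num_alpaca_responses_total counter
num_alpaca_responses_total 128.0
feed_ingestor_events_publish_succeeded_total 125.0
feed_ingestor_events_publish_failed_total 3.0
"""
print(parse_counters(sample)["num_alpaca_responses_total"])  # 128.0
```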
Even though services run in containers, a local virtual environment remains useful for unit tests or quick experimentation.
    make install                                  # create venv + install shared dev dependencies
    make install-service SERVICE=feed-ingestor    # install runtime deps for a specific service
    make lint                                     # run Ruff lint (read-only)
    make lint-fix                                 # run formatter + lint autofix for quick cleanup
    make format                                   # Ruff formatter only (no lint)
    make test                                     # run lint + pytest with 100% coverage gate
    make clean                                    # tear down stack + remove venv, caches, coverage artifacts
Each Python microservice keeps its runtime dependencies under `services/<name>/requirements.txt`. Use `make install-service SERVICE=<name>` to layer that service's dependencies into your local venv when needed.
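Because each service's dependencies live in its own `requirements.txt`, it can be handy to enumerate them from a script. A minimal sketch, assuming the `services/<name>/requirements.txt` layout described above (the helper name is hypothetical and not part of the Makefile):

```python
from pathlib import Path


def service_requirements(repo_root="."):
    """Map each service name under services/ to its requirements.txt, if present."""
    return {
        req.parent.name: req
        for req in sorted(Path(repo_root).glob("services/*/requirements.txt"))
    }


if __name__ == "__main__":
    for name, req in service_requirements().items():
        print(f"{name}: {req}")
```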
Install the bundled pre-commit hooks to automatically run Ruff before each commit:
    pip install pre-commit
    pre-commit install
This registers the hooks defined in `.pre-commit-config.yaml`. You can always run them manually with `pre-commit run --all-files`.
-   Stop and remove the entire stack (including named volumes):

        make delete-stack

    ⚠️ This purges Kafka topics, Postgres data, etc.

-   Rebuild a single service image:

        docker compose build feed-ingestor
        docker compose build kafka

-   Apply only monitoring changes:

        docker compose restart prometheus grafana
We welcome contributions to this project! If you'd like to contribute, follow these steps:
- Fork the repository and clone it to your local machine.
- Create a new branch for your feature or bug fix.
- Write tests for any new functionality or changes.
- Make sure all tests pass by running `make test` before submitting your pull request.
- Submit a pull request with a detailed explanation of your changes.
Please ensure your code follows the existing style and includes sufficient documentation.
This project is licensed under the MIT License. See the LICENSE file for more details.
By contributing to this repository, you agree that your contributions will be licensed under the MIT License.
