Merged
34 changes: 34 additions & 0 deletions .env.docker
@@ -0,0 +1,34 @@
# Docker Environment Configuration for RouteMQ
# Copy this file to .env and customize for your deployment

# MQTT Broker Configuration
MQTT_BROKER=test.mosquitto.org
MQTT_PORT=1883
MQTT_USERNAME=
MQTT_PASSWORD=
MQTT_GROUP_NAME=mqtt_framework_group

# MySQL Configuration
ENABLE_MYSQL=true
DB_HOST=mysql
DB_PORT=3306
DB_NAME=mqtt_framework
DB_USER=routemq
DB_PASS=routemq

# Redis Configuration
ENABLE_REDIS=true
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=

# Queue Configuration
QUEUE_CONNECTION=redis

# Timezone
TIMEZONE=Asia/Jakarta

# Logging Configuration
LOG_LEVEL=INFO
LOG_FORMAT=%(asctime)s - %(name)s - %(levelname)s - %(message)s
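For reference, dotenv-style files like the one above follow simple parsing rules: one `KEY=VALUE` pair per line, `#` comments, blank lines ignored. The sketch below is a minimal stdlib illustration of those rules (not RouteMQ's actual loader — in practice a library such as python-dotenv would be used):

```python
import os

def load_env_file(path: str) -> None:
    """Load KEY=VALUE pairs from a dotenv-style file into os.environ.

    Lines starting with '#' and blank lines are skipped; variables
    already present in the environment are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```
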
6 changes: 6 additions & 0 deletions .env.example
@@ -19,6 +19,12 @@ DB_PASS=
# Redis Configuration
ENABLE_REDIS=false

# Queue Configuration
# Queue connection driver: 'redis' or 'database'
# Redis queue is faster but requires ENABLE_REDIS=true
# Database queue is persistent but requires ENABLE_MYSQL=true
QUEUE_CONNECTION=redis

# Logging Configuration
# Enable/disable file logging (true/false)
LOG_TO_FILE=true
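The constraints spelled out in the comments above (`redis` requires `ENABLE_REDIS=true`, `database` requires `ENABLE_MYSQL=true`) could be validated once at startup. The helper below is a hypothetical sketch of that check, not part of RouteMQ's actual API:

```python
import os

def resolve_queue_driver(env=None) -> str:
    """Return the configured queue driver, failing fast on an
    inconsistent combination of QUEUE_CONNECTION and feature flags."""
    env = os.environ if env is None else env
    driver = env.get("QUEUE_CONNECTION", "redis").lower()
    if driver not in ("redis", "database"):
        raise ValueError(f"Unknown QUEUE_CONNECTION: {driver}")
    if driver == "redis" and env.get("ENABLE_REDIS", "false").lower() != "true":
        raise RuntimeError("QUEUE_CONNECTION=redis requires ENABLE_REDIS=true")
    if driver == "database" and env.get("ENABLE_MYSQL", "false").lower() != "true":
        raise RuntimeError("QUEUE_CONNECTION=database requires ENABLE_MYSQL=true")
    return driver
```
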
98 changes: 98 additions & 0 deletions Makefile
@@ -0,0 +1,98 @@
.PHONY: help build up down restart logs ps clean dev queue-work

help: ## Show this help message
	@echo "RouteMQ Docker Commands"
	@echo ""
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

build: ## Build all Docker images
	docker compose build

up: ## Start all services (production)
	docker compose up -d

down: ## Stop all services
	docker compose down

restart: ## Restart all services
	docker compose restart

logs: ## View logs from all services
	docker compose logs -f

logs-app: ## View logs from RouteMQ app
	docker compose logs -f routemq

logs-worker: ## View logs from default queue worker
	docker compose logs -f queue-worker-default

logs-redis: ## View logs from Redis
	docker compose logs -f redis

logs-mysql: ## View logs from MySQL
	docker compose logs -f mysql

ps: ## Show running services
	docker compose ps

stats: ## Show resource usage stats
	docker stats

clean: ## Stop and remove all containers, networks, and volumes
	docker compose down -v

dev: ## Start development environment (Redis + MySQL only)
	docker compose -f docker-compose.dev.yml up -d

dev-full: ## Start development environment (all services)
	docker compose -f docker-compose.dev.yml --profile full up -d

dev-down: ## Stop development environment
	docker compose -f docker-compose.dev.yml down

queue-work: ## Start queue worker on host (for development)
	uv run python main.py --queue-work --queue default

queue-high: ## Start high-priority queue worker on host
	uv run python main.py --queue-work --queue high-priority --sleep 1

queue-emails: ## Start emails queue worker on host
	uv run python main.py --queue-work --queue emails --sleep 5

scale-default: ## Scale default queue workers to 3 instances
	docker compose up -d --scale queue-worker-default=3

scale-emails: ## Scale email queue workers to 2 instances
	docker compose up -d --scale queue-worker-emails=2

shell-app: ## Open shell in RouteMQ app container
	docker compose exec routemq bash

shell-redis: ## Open Redis CLI
	docker compose exec redis redis-cli

shell-mysql: ## Open MySQL CLI
	docker compose exec mysql mysql -uroot -p

backup-mysql: ## Backup MySQL database
	docker compose exec mysql mysqldump -uroot -p${DB_PASS} ${DB_NAME} > backup_mysql_$$(date +%Y%m%d_%H%M%S).sql

backup-redis: ## Backup Redis data
	docker compose exec redis redis-cli SAVE
	docker cp routemq-redis:/data/dump.rdb backup_redis_$$(date +%Y%m%d_%H%M%S).rdb

health: ## Check health of all services
	@echo "Service Health Status:"
	@docker compose ps --format "table {{.Service}}\t{{.Status}}"

install: ## Install dependencies on host
	uv sync

run: ## Run RouteMQ on host
	uv run python main.py --run

tinker: ## Start interactive REPL
	uv run python main.py --tinker

init: ## Initialize new RouteMQ project
	uv run python main.py --init
60 changes: 60 additions & 0 deletions README.md
@@ -26,10 +26,12 @@ uv run python main.py --run
- **Route-based MQTT topic handling** - Define routes using a clean, expressive syntax
- **Middleware support** - Process messages through middleware chains
- **Parameter extraction** - Extract variables from MQTT topics using Laravel-style syntax
- **Background Task Queue** - Laravel-style queue system for async job processing
- **Shared Subscriptions** - Horizontal scaling with worker processes
- **Redis Integration** - Optional Redis support for distributed caching and rate limiting
- **Advanced Rate Limiting** - Multiple rate limiting strategies with Redis backend
- **Optional MySQL integration** - Use with or without a database
- **Docker Support** - Production-ready Docker Compose setup with queue workers
- **Environment-based configuration** - Flexible configuration through .env files

## 📚 Documentation
@@ -43,6 +45,8 @@ uv run python main.py --run
- **[Routing](./docs/routing/README.md)** - Route definition, parameters, and organization
- **[Controllers](./docs/controllers/README.md)** - Creating and organizing business logic
- **[Middleware](./docs/middleware/README.md)** - Request processing and middleware chains
- **[Queue System](./docs/queue/README.md)** - Background task processing and job queues
- **[Docker Deployment](./docs/docker-deployment.md)** - Production deployment with Docker
- **[Redis Integration](./docs/redis/README.md)** - Caching, sessions, and distributed features
- **[Rate Limiting](./docs/rate-limiting/README.md)** - Advanced rate limiting strategies
- **[Examples](./docs/examples/README.md)** - Practical examples and use cases
@@ -58,12 +62,68 @@ RouteMQ/
│   ├── controllers/       # 🎮 Route handlers
│   ├── middleware/        # 🔧 Custom middleware
│   ├── models/            # 🗄️ Database models
│   ├── jobs/              # 📋 Background jobs
│   └── routers/           # 🛣️ Route definitions
├── core/                  # ⚡ Framework core
│   ├── queue/             # 🔄 Queue system
│   ├── job.py             # 📝 Base job class
│   └── ...                # Other core components
├── bootstrap/             # 🌟 Application bootstrap
├── docker-compose.yml     # 🐳 Production Docker setup
└── tests/                 # 🧪 Test files
```

## 🐳 Docker Deployment

RouteMQ includes production-ready Docker Compose configuration with Redis, MySQL, and queue workers:

```bash
# Start all services (app + 3 queue workers + Redis + MySQL)
docker compose up -d

# View logs
docker compose logs -f

# Scale workers
docker compose up -d --scale queue-worker-default=5

# Or use Makefile
make up # Start all services
make logs # View logs
make ps # Show status
```

See [Docker Deployment Guide](./docs/docker-deployment.md) for detailed instructions.

## 📋 Background Task Queue

Process time-consuming tasks asynchronously with the built-in queue system:

```python
# Create a job
from core.job import Job

class SendEmailJob(Job):
    max_tries = 3
    queue = "emails"

    async def handle(self):
        # Send email logic
        pass

# Dispatch the job
from core.queue.queue_manager import dispatch

job = SendEmailJob()
job.to = "user@example.com"
await dispatch(job)

# Run queue worker
python main.py --queue-work --queue emails
```
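The `max_tries` setting above implies a retry loop inside the worker. The following is a minimal in-memory sketch of that behavior, for illustration only — the real RouteMQ worker pulls persisted jobs from Redis or the database:

```python
import asyncio

class FlakyJob:
    """Toy job that fails twice, then succeeds on the third attempt."""
    max_tries = 3

    def __init__(self):
        self.attempts = 0

    async def handle(self):
        self.attempts += 1
        if self.attempts < 3:
            raise RuntimeError("transient failure")

    async def failed(self, exc):
        # Called only after max_tries attempts have all failed
        print(f"permanently failed: {exc}")

async def run_job(job):
    """Retry handle() up to job.max_tries times; call failed() on exhaustion."""
    last_exc = None
    for _ in range(job.max_tries):
        try:
            await job.handle()
            return True
        except Exception as exc:
            last_exc = exc
    await job.failed(last_exc)
    return False
```
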

See the [Queue System Documentation](./docs/queue/README.md) for a complete guide.

## 🤝 Contributing

We welcome contributions! Please see our documentation for development setup and contribution guidelines.
1 change: 1 addition & 0 deletions app/jobs/__init__.py
@@ -0,0 +1 @@
# This file marks the directory as a Python package
73 changes: 73 additions & 0 deletions app/jobs/example_data_processing_job.py
@@ -0,0 +1,73 @@
import asyncio
import logging
from core.job import Job

logger = logging.getLogger("RouteMQ.Jobs.ProcessDataJob")


class ProcessDataJob(Job):
    """
    Example job for processing data in the background.

    This demonstrates a job that processes sensor data from IoT devices.

    Usage:
        from app.jobs.example_data_processing_job import ProcessDataJob
        from core.queue.queue_manager import dispatch

        # Dispatch the job
        job = ProcessDataJob()
        job.device_id = "sensor-001"
        job.sensor_data = {"temperature": 25.5, "humidity": 60}
        await dispatch(job)
    """

    # Configure job properties
    max_tries = 5
    timeout = 120  # Longer timeout for data processing
    retry_after = 5
    queue = "data-processing"

    def __init__(self):
        super().__init__()
        self.device_id = None
        self.sensor_data = None

    async def handle(self) -> None:
        """
        Execute the job - process sensor data.
        """
        logger.info(f"Processing data from device {self.device_id}")
        logger.info(f"Sensor data: {self.sensor_data}")

        # Simulate data processing
        await asyncio.sleep(3)

        # Example: calculate statistics
        if isinstance(self.sensor_data, dict):
            temperature = self.sensor_data.get("temperature")
            humidity = self.sensor_data.get("humidity")

            # Compare against None so a valid reading of 0 is not skipped
            if temperature is not None and temperature > 30:
                logger.warning(f"High temperature detected: {temperature}°C")

            if humidity is not None and humidity > 80:
                logger.warning(f"High humidity detected: {humidity}%")

        # In a real application, you might:
        # - Store processed data in a database
        # - Calculate aggregations and statistics
        # - Trigger alerts if thresholds are exceeded
        # - Send data to analytics services

        logger.info(f"Successfully processed data from device {self.device_id}")

    async def failed(self, exception: Exception) -> None:
        """
        Handle permanent job failure.
        """
        logger.error(
            f"Failed to process data from device {self.device_id} "
            f"after {self.max_tries} attempts"
        )
        logger.error(f"Error: {str(exception)}")
65 changes: 65 additions & 0 deletions app/jobs/example_email_job.py
@@ -0,0 +1,65 @@
import asyncio
import logging
from core.job import Job

logger = logging.getLogger("RouteMQ.Jobs.SendEmailJob")


class SendEmailJob(Job):
    """
    Example job for sending emails in the background.

    Usage:
        from app.jobs.example_email_job import SendEmailJob
        from core.queue.queue_manager import dispatch

        # Dispatch the job
        job = SendEmailJob()
        job.to = "user@example.com"
        job.subject = "Welcome!"
        job.message = "Thank you for signing up."
        await dispatch(job)
    """

    # Configure job properties
    max_tries = 3
    timeout = 30
    retry_after = 10  # Retry after 10 seconds on failure
    queue = "emails"  # Use the 'emails' queue instead of 'default'

    def __init__(self):
        super().__init__()
        self.to = None
        self.subject = None
        self.message = None

    async def handle(self) -> None:
        """
        Execute the job - send an email.
        In a real application, this would use an email service like SendGrid, AWS SES, etc.
        """
        logger.info(f"Sending email to {self.to}")
        logger.info(f"Subject: {self.subject}")
        logger.info(f"Message: {self.message}")

        # Simulate email sending (replace with an actual email service)
        await asyncio.sleep(2)  # Simulate API call delay

        # Uncomment to test job failure and retry
        # if self.attempts == 1:
        #     raise Exception("Simulated email sending failure")

        logger.info(f"Email sent successfully to {self.to}")

    async def failed(self, exception: Exception) -> None:
        """
        Handle permanent job failure.
        This is called when the job exceeds max_tries.
        """
        logger.error(f"Failed to send email to {self.to} after {self.max_tries} attempts")
        logger.error(f"Error: {str(exception)}")

        # In a real application, you might:
        # - Log to a monitoring service
        # - Send an alert to administrators
        # - Store the failure in a database for manual review