TaskQueue Pro 🔄

Enterprise-grade distributed task queue system with scheduling, monitoring, and auto-scaling capabilities.

🌟 Features

Core Features

  • ✅ Distributed Task Processing - Scale workers horizontally across multiple machines
  • ✅ Priority Queues - High/medium/low priority task routing
  • ✅ Task Scheduling - Cron-like scheduling with timezone support
  • ✅ Retry Mechanisms - Exponential backoff, max retries, dead letter queues
  • ✅ Result Backend - Store task results in Redis, PostgreSQL, or S3
  • ✅ Task Chains & Groups - Compose complex workflows
  • ✅ Rate Limiting - Control task execution rate per worker
  • ✅ Auto-scaling - Scale workers based on queue depth
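
For the retry mechanism, exponential backoff typically multiplies a base delay per attempt, capped at a maximum. A sketch of the arithmetic (the function name and defaults here are illustrative, not the library's API):

```python
def backoff_delays(base_delay=60, max_retries=3, factor=2, max_delay=3600):
    """Compute capped exponential backoff delays, one per retry attempt."""
    return [min(base_delay * factor ** attempt, max_delay)
            for attempt in range(max_retries)]

# Three retries with a 60 s base delay: wait 60 s, then 120 s, then 240 s
print(backoff_delays())  # [60, 120, 240]
```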

Backends Supported

Backend      Type              Use Case
Redis        Broker + Result   High performance, low latency
RabbitMQ     Broker            Reliable message delivery
PostgreSQL   Result            Persistent result storage
Amazon SQS   Broker            Cloud-native, serverless
Kafka        Broker            High throughput streaming

Worker Features

  • ✅ Multi-processing - Process multiple tasks concurrently
  • ✅ Auto-reload - Restart workers on code changes
  • ✅ Health Checks - Monitor worker health and restart if needed
  • ✅ Prefetch Multiplier - Control task prefetching
  • ✅ Task Time Limits - Hard and soft timeouts
  • ✅ Worker Pools - ThreadPool, ProcessPool, GreenletPool
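
Assuming the common convention (as in Celery) that a worker reserves concurrency × prefetch_multiplier tasks at once, prefetch sizing works out like this sketch:

```python
def prefetch_count(concurrency, prefetch_multiplier):
    """Tasks a worker reserves from the broker at once (concurrency x multiplier)."""
    return concurrency * prefetch_multiplier

# With 4 processes and a multiplier of 4, a worker holds up to 16 tasks
print(prefetch_count(4, 4))  # 16
```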

Monitoring

  • ✅ Web Dashboard - Real-time task and worker monitoring
  • ✅ Prometheus Metrics - Export metrics for Grafana
  • ✅ Task Tracing - Track task execution flow
  • ✅ Alerts - Get notified on failures
  • ✅ Log Aggregation - Centralized logging

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      TASKQUEUE PRO                              β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”‚
β”‚  β”‚   Client     β”‚    β”‚   Client     β”‚    β”‚   Client     β”‚     β”‚
β”‚  β”‚   (API)      β”‚    β”‚   (CLI)      β”‚    β”‚   (SDK)      β”‚     β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜     β”‚
β”‚         β”‚                    β”‚                    β”‚              β”‚
β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜              β”‚
β”‚                              β–Ό                                  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚                   BROKER (Redis/RabbitMQ)                β”‚   β”‚
β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”       β”‚   β”‚
β”‚  β”‚  β”‚ High Priorityβ”‚ β”‚Med Priority β”‚ β”‚Low Priority β”‚       β”‚   β”‚
β”‚  β”‚  β”‚   Queue      β”‚ β”‚   Queue     β”‚ β”‚   Queue     β”‚       β”‚   β”‚
β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜       β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                             β”‚                                   β”‚
β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”              β”‚
β”‚         β–Ό                   β–Ό                   β–Ό              β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”‚
β”‚  β”‚   Worker 1   β”‚   β”‚   Worker 2   β”‚   β”‚   Worker N   β”‚      β”‚
β”‚  β”‚  (Process)   β”‚   β”‚  (Process)   β”‚   β”‚  (Process)   β”‚      β”‚
β”‚  β”‚              β”‚   β”‚              β”‚   β”‚              β”‚      β”‚
β”‚  β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚   β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚   β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚      β”‚
β”‚  β”‚ β”‚ Prefetch β”‚ β”‚   β”‚ β”‚ Prefetch β”‚ β”‚   β”‚ β”‚ Prefetch β”‚ β”‚      β”‚
β”‚  β”‚ β”‚   = 4    β”‚ β”‚   β”‚ β”‚   = 4    β”‚ β”‚   β”‚ β”‚   = 4    β”‚ β”‚      β”‚
β”‚  β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚   β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚   β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚      β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜      β”‚
β”‚         β”‚                  β”‚                  β”‚                β”‚
β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                β”‚
β”‚                            β–Ό                                   β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚                   RESULT BACKEND                        β”‚   β”‚
β”‚  β”‚         (Redis / PostgreSQL / S3 / MongoDB)            β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                                                                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                              β”‚
                              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      MONITORING DASHBOARD                       β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”‚
β”‚  β”‚ Task Stats   β”‚  β”‚ Worker Stats β”‚  β”‚ Queue Depth  β”‚          β”‚
β”‚  β”‚ Real-time    β”‚  β”‚ Health       β”‚  β”‚ Chart        β”‚          β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”‚
β”‚  β”‚ Success Rate β”‚  β”‚ Latency      β”‚  β”‚ Failed Tasks β”‚          β”‚
β”‚  β”‚ Graph        β”‚  β”‚ Histogram    β”‚  β”‚ Retry Queue  β”‚          β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

🚀 Quick Start

Installation

pip install taskqueue-pro

Basic Usage

from taskqueue import TaskQueue, task

# Create task queue
tq = TaskQueue(broker_url="redis://localhost:6379/0")

# Define a task
@task(queue="default", max_retries=3)
def send_email(to, subject, body):
    """Send email task"""
    import smtplib
    # Email sending logic
    print(f"Sending email to {to}")
    return {"status": "sent", "to": to}

# Enqueue task
result = send_email.delay("user@example.com", "Hello", "World!")
print(f"Task ID: {result.id}")

# Get result
print(result.get(timeout=10))

Running Workers

# Start worker
taskqueue worker --queues default,high --concurrency 4

# Start with auto-reload
taskqueue worker --queues default --autoreload

# Start scheduler
taskqueue scheduler --config scheduler.yml

Scheduling Tasks

from taskqueue import task, crontab

# Schedule task to run every hour
@task(queue="reports")
def generate_daily_report():
    """Generate daily report"""
    print("Generating report...")
    return {"status": "completed"}

# Schedule using cron syntax
generate_daily_report.schedule(crontab(hour="*/1"))

# Or use human-readable syntax
generate_daily_report.every("1 hour")
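
Assuming .every() accepts "<count> <unit>" strings, such a spec might normalize to seconds as in this hypothetical parser (parse_interval is illustrative, not the library's API):

```python
# Seconds per unit; trailing "s" is stripped so "hours" and "hour" both match.
UNITS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

def parse_interval(spec):
    """Convert a human-readable interval like '1 hour' to seconds."""
    count, unit = spec.split()
    return int(count) * UNITS[unit.rstrip("s")]

print(parse_interval("1 hour"))      # 3600
print(parse_interval("30 minutes"))  # 1800
```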

📊 Advanced Features

Task Chains

from taskqueue import chain

# Chain tasks
workflow = chain(
    download_file.s("https://example.com/data.csv"),
    process_data.s(),
    send_notification.s()
)

result = workflow.delay()
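
Conceptually, a chain pipes each task's return value into the next task's first argument, like function composition. A plain-Python analogy with stubs (the *_stub names are illustrative, not the real tasks):

```python
from functools import reduce

def download_file_stub(url):
    return f"contents of {url}"

def process_data_stub(data):
    return data.upper()

def send_notification_stub(result):
    return {"notified": True, "payload": result[:20]}

# Each step receives the previous step's return value, as in a task chain
steps = [download_file_stub, process_data_stub, send_notification_stub]
result = reduce(lambda value, fn: fn(value), steps, "https://example.com/data.csv")
print(result)  # {'notified': True, 'payload': 'CONTENTS OF HTTPS://'}
```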

Task Groups

from taskqueue import group

# Run tasks in parallel
tasks = group([
    send_email.s(f"user{i}@example.com", "Hello", "Body")
    for i in range(10)
])

result = tasks.delay()
results = result.get()  # Get all results
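
A group fans out independent tasks and gathers their results in submission order. As a local analogy (with a stub in place of the real worker-side task), a thread pool behaves the same way:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for the send_email task; the real task runs on a worker.
def send_email_stub(to, subject, body):
    return {"status": "sent", "to": to}

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(send_email_stub, f"user{i}@example.com", "Hello", "Body")
               for i in range(10)]
    # Results come back in submission order, like result.get() on a group
    results = [f.result() for f in futures]

print(results[0])  # {'status': 'sent', 'to': 'user0@example.com'}
```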

Rate Limiting

@task(queue="api", rate_limit="10/m")  # 10 per minute
def call_external_api(endpoint):
    """Rate limited API call"""
    import requests
    return requests.get(endpoint).json()
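
Rate strings like "10/m" are commonly enforced with a token bucket. A minimal sketch of that technique (not taskqueue-pro's actual implementation):

```python
import time

class TokenBucket:
    """Token-bucket limiter: '10/m' corresponds to 10 tokens refilled per 60 s."""

    def __init__(self, rate, per_seconds):
        self.capacity = rate
        self.tokens = float(rate)
        self.refill_per_sec = rate / per_seconds
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, per_seconds=60)      # equivalent of "10/m"
allowed = sum(bucket.allow() for _ in range(20))   # burst of 20 rapid calls
print(allowed)  # 10: the first 10 pass, the rest are throttled
```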

Task Routing

# Route specific tasks to specific queues
@task(queue="gpu", routing_key="ml.*")
def train_model(dataset):
    """ML training task - routed to GPU workers"""
    pass

@task(queue="io", routing_key="file.*")
def process_large_file(filepath):
    """File processing task - routed to IO workers"""
    pass
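
Patterns like "ml.*" suggest glob-style matching of task names to queues. A hypothetical router sketch using fnmatch (ROUTES and route are illustrative, not the library's API):

```python
import fnmatch

# Routing table mapping glob patterns to queue names
ROUTES = {"ml.*": "gpu", "file.*": "io"}

def route(task_name, default="default"):
    """Return the queue for the first pattern matching the task name."""
    for pattern, queue in ROUTES.items():
        if fnmatch.fnmatch(task_name, pattern):
            return queue
    return default

print(route("ml.train_model"))  # gpu
print(route("file.resize"))     # io
print(route("send_email"))      # default
```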

🐳 Docker Deployment

# Clone repository
git clone https://github.com/efealtiparmakoglu/taskqueue-pro.git
cd taskqueue-pro

# Start infrastructure
docker-compose up -d redis postgres

# Start workers
docker-compose up -d worker

# Start API server
docker-compose up -d api

# Access dashboard
open http://localhost:8080

📈 Monitoring

Web Dashboard

# Start dashboard
taskqueue dashboard --port 8080

Prometheus Metrics

from taskqueue.metrics import task_completed, task_failed

# Custom metrics
@task()
def my_task():
    try:
        # Task logic
        task_completed.inc()
    except Exception:
        task_failed.inc()
        raise

Health Checks

# Check worker health
curl http://localhost:8080/health

# Get queue stats
curl http://localhost:8080/api/queues

# Get worker stats
curl http://localhost:8080/api/workers

🧪 Testing

# Test tasks synchronously
result = send_email.run("test@example.com", "Subject", "Body")

# Mock tasks for testing
from taskqueue.testing import task_mock

@task_mock
def test_task():
    result = send_email.delay("test@example.com", "Subject", "Body")
    assert result.get() == {"status": "sent", "to": "test@example.com"}

πŸ“ Configuration

YAML Config

# config/taskqueue.yml
broker:
  url: redis://localhost:6379/0
  
result_backend:
  url: postgresql://user:pass@localhost/taskqueue
  
worker:
  concurrency: 4
  prefetch_multiplier: 4
  task_time_limit: 3600
  max_tasks_per_child: 1000
  
queues:
  - name: default
    priority: 5
  - name: high
    priority: 10
  - name: low
    priority: 1
    
monitoring:
  enabled: true
  port: 8080
  metrics:
    enabled: true
    port: 9090
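
The priority values above determine consumption order: higher-priority queues are drained first. A sketch of that ordering, with a plain dict standing in for the parsed YAML (the actual loader is not shown):

```python
# Hypothetical parsed form of the queues section in config/taskqueue.yml
config = {
    "queues": [
        {"name": "default", "priority": 5},
        {"name": "high", "priority": 10},
        {"name": "low", "priority": 1},
    ],
}

# Workers would poll the highest-priority queue first
ordered = sorted(config["queues"], key=lambda q: q["priority"], reverse=True)
print([q["name"] for q in ordered])  # ['high', 'default', 'low']
```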

πŸ›‘οΈ Production Checklist

  • Use Redis with persistence enabled
  • Configure task result expiration
  • Set up monitoring and alerts
  • Enable task rate limiting
  • Configure dead letter queues
  • Set up log aggregation
  • Enable SSL/TLS for broker connections
  • Configure worker auto-scaling
  • Set up backup for result backend

📚 API Reference

Task Decorator Options

@task(
    queue="default",           # Queue name
    priority=5,                # Task priority (1-10)
    max_retries=3,             # Max retry attempts
    retry_delay=60,            # Initial retry delay (seconds)
    retry_backoff=True,        # Use exponential backoff
    time_limit=3600,           # Hard time limit
    soft_time_limit=3000,      # Soft time limit
    rate_limit="100/h",        # Rate limit
    result_expires=86400,      # Result expiration (seconds)
    bind=True,                 # Pass self as first arg
    ignore_result=False,       # Store the result (True skips storing)
    store_errors_even_if_ignored=True
)

👥 Authors

📄 License

MIT License - see LICENSE file
