Version: 3.10.0 | Status: Production-Ready | Documentation: User Manual | Audit Report
MonitaQC is an industrial computer vision platform for automated quality control and product identification. Forked from PartQC Box Vision Engine, this project is the foundation for a unified quality control system.
MonitaQC combines advanced image processing, object detection, and hardware integration to provide real-time quality inspection capabilities for manufacturing and fulfillment operations.
MonitaQC features a streamlined architecture with only essential services:
- 7 containers covering vision, AI inference, database, caching, and monitoring
- Minimal memory footprint with optimized Redis (auto-tuned)
- Reduced logging overhead (5-10MB max per service)
- Auto-tuned resources based on detected CPU, RAM, and GPU
- Multi-Camera Vision System: Unlimited cameras with auto-detection (USB + IP cameras)
- USB cameras auto-detected via V4L2
- IP cameras via RTSP/HTTP (Hikvision, Dahua, Axis, etc.)
- Mix and match USB and network cameras
- AI-Powered Detection: YOLOv5-based object detection with custom model support
- DataMatrix Recognition: Advanced barcode decoding with multi-stage preprocessing
- OCR Capabilities: Text recognition using EasyOCR
- Object Nesting Detection: Hierarchical parent-child object relationships
- Real-Time Streaming: Live MJPEG video feed on status page
- Hardware Integration: Serial communication, barcode scanners, GPIO control
- Shipment Tracking: Django-based fulfillment management system
- Ejector Control: Automated defect rejection system
- Ejection Procedures: Rule-based rejection with count, area, and color ΔE conditions
- Color Quality Control: CIE L*a*b* color comparison with fixed, previous, and running average reference modes
- Multi-Language Support: 7 languages (EN, FA, AR, DE, TR, JA, ES)
```mermaid
graph LR
    subgraph Hardware
        CAM[USB/IP Cameras]
        SER[Serial PLC/Arduino]
        BAR[Barcode Scanner]
    end
    subgraph Docker Containers
        VE[Vision Engine<br/>:80]
        YOLO1[YOLO Inference<br/>Replica 1]
        YOLO2[YOLO Inference<br/>Replica 2]
        RED[Redis<br/>:6379]
        TS[TimescaleDB<br/>:5432]
        GF[Grafana<br/>:3000]
        PG[PiGallery2<br/>:5000]
    end
    subgraph Storage
        SSD[/mnt/SSD-RESERVE/]
    end
    CAM -->|frames| VE
    SER <-->|commands| VE
    BAR -->|scans| VE
    VE -->|detect| YOLO1
    VE -->|detect| YOLO2
    VE <-->|cache| RED
    VE -->|metrics| TS
    TS -->|query| GF
    VE -->|images| SSD
    SSD -->|browse| PG
```
MonitaQC uses a microservices architecture:
```
MonitaQC/
├── vision_engine/           # Core QC processing engine
│   ├── main.py              # FastAPI app entrypoint
│   ├── config.py            # Configuration & startup
│   ├── routers/             # API route handlers
│   │   ├── ai.py            # AI provider config
│   │   ├── cameras.py       # Camera management
│   │   ├── health.py        # Health checks
│   │   ├── inference.py     # Detection pipeline
│   │   ├── procedures.py    # Ejection procedures & color ref
│   │   ├── states.py        # Camera state management
│   │   ├── timeline.py      # Timeline & history
│   │   └── websocket.py     # Real-time updates
│   ├── services/            # Business logic
│   │   ├── camera.py        # Camera capture
│   │   ├── db.py            # TimescaleDB client
│   │   ├── detection.py     # Detection processing
│   │   ├── pipeline.py      # Pipeline management
│   │   ├── state_machine.py # Multi-phase states
│   │   └── watcher.py       # Serial & capture orchestration
│   └── static/              # Web UI (status page)
├── yolo_inference/          # YOLO AI inference service
├── timescaledb/             # Database init scripts
├── deploy/                  # Packer + launcher + startup scripts
│   ├── pack.sh              # Build an offline deployment archive
│   ├── start.py             # Auto-detect OS/hardware & run compose
│   ├── start.sh             # Linux launcher (calls start.py)
│   └── start.bat            # Windows launcher (calls start.py)
├── setup.sh                 # Interactive installer (sudo bash setup.sh)
└── docker-compose.yml       # Service orchestration
```
- Docker & Docker Compose
- Python 3 (for the startup script)
- NVIDIA GPU + NVIDIA Container Toolkit (for YOLO inference)
- Optional: USB cameras, serial device (Arduino/PLC), barcode scanner
1. Clone the repository:

   ```bash
   git clone http://gitlab.virasad.ir/monitait/monitaqc.git
   cd monitaqc
   ```

2. Load AI weights: train your model at ai-trainer.monitait.com, then place `best.pt` in `volumes/weights/best.pt`.

3. Start the application:

   ```bash
   # Linux
   ./deploy/start.sh

   # Windows
   deploy\start.bat
   ```

   `start.py` will automatically:
   - Detect the OS (Linux → production mode, Windows → dev mode)
   - Detect hardware (CPU cores, RAM, GPUs)
   - Auto-tune YOLO replicas/workers, shared memory, and Redis memory
   - Set `PRIVILEGED=true` on Linux for device access (serial, cameras, barcode scanner)
   - Set `DATA_ROOT=/mnt/SSD-RESERVE` on Linux (`.` on Windows)
   - Write `.env` and run `docker compose up -d`

4. Open the web interface at `http://<server-ip>`. All configuration (cameras, inference, serial, ejector, capture settings, etc.) is done via the web interface and persisted to `.env.prepared_query_data`.
If you prefer to configure manually instead of using `start.py`:

```bash
# Create .env file
cat > .env << EOF
DATA_ROOT=/mnt/SSD-RESERVE
PRIVILEGED=true
YOLO_REPLICAS=2
YOLO_WORKERS=2
SHM_SIZE=2g
REDIS_MAXMEMORY=256
EOF

# Build and run
docker compose up -d
```

For air-gapped servers:

```bash
# On a machine with internet: pack images
./deploy/pack.sh

# On the target server: extract, then run the interactive installer
tar xzf monitaqc-v*-offline.tar.gz
cd monitaqc-v*-offline/project
sudo bash setup.sh
```

`setup.sh` can also be double-clicked in the Ubuntu file manager (right-click → Run as a Program); it re-launches itself in a terminal and prompts for your sudo password.
All settings are configured via the web interface at `http://<server-ip>/status` and saved to `.env.prepared_query_data` for persistence across restarts. Click "Save All Configuration" in the top-right to persist changes.
After starting the application for the first time, configure it in this order:
USB cameras are auto-detected on startup. For IP cameras:
- Go to the Cameras tab
- Enter your network subnet (e.g., `192.168.0`) and click Scan
- Discovered cameras will appear; click Save to add them
- Adjust per-camera settings: FPS, resolution, exposure, gain, brightness, contrast, saturation
- Optionally set ROI (Region of Interest) to crop to a specific area
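Once an IP camera's address is known, its stream URL follows a brand-specific pattern. A minimal sketch of building such URLs (the paths below are common factory defaults and may vary by model and firmware; this is illustrative, not MonitaQC's discovery code):

```python
# Build typical RTSP URLs for common IP camera brands.
# Paths are common factory defaults (an assumption, not MonitaQC code).

def rtsp_url(brand: str, host: str, user: str, password: str, channel: int = 1) -> str:
    """Return a typical RTSP URL for a given camera brand."""
    paths = {
        "hikvision": f"/Streaming/Channels/{channel}01",
        "dahua": f"/cam/realmonitor?channel={channel}&subtype=0",
        "axis": "/axis-media/media.amp",
    }
    return f"rtsp://{user}:{password}@{host}:554{paths[brand]}"

print(rtsp_url("hikvision", "192.168.0.64", "admin", "pass123"))
# The resulting URL can then be opened with OpenCV:
#   cap = cv2.VideoCapture(url); ok, frame = cap.read()
```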
States define how the system captures images using multi-phase lighting:
- Scroll to State Management in the Cameras tab
- Create a state with one or more phases, each defining:
  - Light Mode: `U_ON_B_OFF` (uplight), `B_ON_U_OFF` (backlight), `U_ON_B_ON` (both), `U_OFF_B_OFF` (off)
  - Delay: seconds to wait after setting lights before capturing
  - Cameras: which camera IDs to capture in this phase (comma-separated)
  - Steps: capture every N encoder pulses (`-1` = continuous loop)
  - Analog Threshold: analog sensor trigger value (`-1` = disabled)
- Set the active state for production
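A two-phase state of the kind described above might be represented roughly as follows (the field names are illustrative; MonitaQC's actual schema may differ):

```python
# Hypothetical representation of a two-phase capture state.
# Field names are illustrative, not MonitaQC's actual schema.
state = {
    "name": "two_phase_capture",
    "phases": [
        # Phase 1: uplight on, capture cameras 0 and 1 every 100 pulses
        {"light_mode": "U_ON_B_OFF", "delay": 0.2, "cameras": "0,1",
         "steps": 100, "analog_threshold": -1},
        # Phase 2: backlight on, capture camera 0 only
        {"light_mode": "B_ON_U_OFF", "delay": 0.2, "cameras": "0",
         "steps": 100, "analog_threshold": -1},
    ],
}
print(len(state["phases"]))  # 2
```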
Define what AI model processes the captured images:
Single model:
- Go to the Inference tab
- Select the inference module: `Local YOLO` or `Gradio HuggingFace`
- Set the inference URL (e.g., `http://yolo_inference:4442/v1/object-detection/yolov5s/detect/`)
- Choose the model and set the confidence threshold
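The detection service presumably returns a JSON list of detections, so applying the confidence threshold from the Inference tab amounts to a simple filter. A sketch (the response keys `name`, `confidence`, and the box coordinates are assumptions about the YOLOv5 service, not documented behavior):

```python
# Filter detections by confidence, as the Inference tab's threshold does.
# The detection dict shape below is assumed, not documented.

def filter_detections(detections, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

raw = [
    {"name": "box", "confidence": 0.91, "xmin": 10, "ymin": 5, "xmax": 200, "ymax": 150},
    {"name": "logo", "confidence": 0.32, "xmin": 40, "ymin": 20, "xmax": 80, "ymax": 60},
]
print(filter_detections(raw, 0.5))  # keeps only the "box" detection
```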
Multi-model pipeline:
- Create multiple models (e.g., one YOLO for defect detection, one Gradio for classification)
- Create a pipeline with ordered phases, each referencing a model
- Activate the pipeline — frames will pass through each model in sequence
Upload custom weights:
- Train your model at ai-trainer.monitait.com
- Upload the `.pt` file via the Upload Weights button
- Click Activate Weights to load it on all YOLO replicas
Set up automatic rejection of defective items:
- Go to the Hardware tab → Ejector Configuration
- Enable the ejector toggle
- Set Ejector Offset: encoder counts from camera position to ejector position
- Set Ejector Duration: how long to activate the ejector (seconds)
- Define OK and NG parameters:
- Offset Delay (ms): timing between capture and trigger
- Duration Pulses: output pulse length (16μs units)
- Encoder Factor: scaling multiplier
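The interplay of offset, duration, and encoder factor can be sketched as simple arithmetic (illustrative only; the actual firmware may compute this differently):

```python
# Sketch of the ejector trigger arithmetic implied by the settings above.
# This is an illustration of the concepts, not MonitaQC's implementation.

def ejector_fire_count(capture_count: int, offset: int, encoder_factor: float = 1.0) -> int:
    """Encoder count at which to fire, given the count at capture time."""
    return capture_count + round(offset * encoder_factor)

def duration_seconds(duration_pulses: int) -> float:
    """Duration Pulses are specified in 16 microsecond units."""
    return duration_pulses * 16e-6

print(ejector_fire_count(1000, 250))  # fires at encoder count 1250
print(duration_seconds(62500))        # 62500 pulses x 16 us = 1.0 second
```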
Define rules for when products should be rejected:
- Go to the Process tab → Ejection Procedures
- Click + Add Procedure and give it a name
- Add rules with conditions:
- Count =, >, <: reject based on number of detected objects
- Area >, <, =: reject based on bounding box size (pixels)
- Color ΔE >: reject when color differs from reference (CIE L*a*b*)
- Set logic (ANY = OR, ALL = AND) and optionally restrict to specific cameras
- Click Save Procedures
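The rule logic above can be sketched as follows (the rule encoding is hypothetical, and ΔE is computed with the simple CIE76 formula, while MonitaQC may use a different variant):

```python
import math
import operator

# Sketch of procedure evaluation: count/area/ΔE conditions combined
# with ANY (OR) or ALL (AND). The (kind, op, value) encoding is hypothetical.

_OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}

def delta_e76(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.dist(lab1, lab2)

def should_eject(rules, logic, *, count, areas, lab, lab_ref):
    results = []
    for kind, op, value in rules:
        if kind == "count":
            results.append(_OPS[op](count, value))
        elif kind == "area":
            # area rules fire if any detected bounding box matches
            results.append(any(_OPS[op](a, value) for a in areas))
        elif kind == "delta_e":
            results.append(delta_e76(lab, lab_ref) > value)
    return any(results) if logic == "ANY" else all(results)

# Reject if more than 3 objects OR color drifts more than ΔE 5 from reference
rules = [("count", ">", 3), ("delta_e", ">", 5.0)]
print(should_eject(rules, "ANY", count=2, areas=[1200],
                   lab=(52.0, 10.0, 8.0), lab_ref=(50.0, 10.0, 8.0)))  # False
```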
Connect to Arduino/PLC for hardware control:
- Go to Hardware tab → Serial Port Configuration
- Set Device Path (e.g., `/dev/ttyUSB0`)
- Set Baud Rate (typically `57600`)
- Choose Serial Mode: `new` (recommended) or `legacy`
- Use the Light Controls to verify communication:
  - Both On / U On, B Off / B On, U Off / Both Off
  - Set PWM values (0-255) for fine brightness control
Configure how detected objects are processed:
- Parent Object List: enforce parent-child relationships (e.g., a "box" must contain a "logo")
- DataMatrix Settings: valid character sizes, confidence/overlap thresholds
- Histogram: enable for quality distribution analysis
- Store Annotations: save detection results to TimescaleDB
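The parent-object rule (e.g., a "box" must contain a "logo") reduces to a bounding-box containment check. A minimal sketch, assuming boxes are (xmin, ymin, xmax, ymax) tuples:

```python
# Bounding-box containment check underlying parent-child nesting rules.
# Box format (xmin, ymin, xmax, ymax) is an assumption for illustration.

def contains(parent, child):
    """True if the child box lies entirely inside the parent box."""
    return (parent[0] <= child[0] and parent[1] <= child[1]
            and parent[2] >= child[2] and parent[3] >= child[3])

box = (0, 0, 300, 200)
logo = (50, 40, 120, 90)
print(contains(box, logo))  # True: the logo is nested inside the box
```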
The Dashboard tab shows real-time results:
- Left column: encoder, speed, counters (OK/NG), ejector status, system metrics
- Right column: live timeline of captured images with detection overlays
- Use timeline navigation (First/Prev/Next/Last) to browse history
- Timeline auto-resumes after 30 seconds of inactivity
- Click images to zoom, use Reset to restore default view
Map detected objects to product identifiers:
- Go to Advanced tab → Data File
- Edit the JSON mapping:

  ```json
  [
    {
      "dm": "6263957101037",
      "chars": [["box"], ["logo_en", "logo_fa"], ["product_name"]]
    }
  ]
  ```

- If all specified objects are detected, the system identifies the product by its DataMatrix code
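Under one plausible reading of the example (each inner list in `chars` is an OR-group, and every group must be matched by the detected class names), the lookup could be sketched as:

```python
# Sketch of product identification from the data-file mapping.
# Interpretation of "chars" as AND-of-OR-groups is assumed from the example.

def identify(mapping, detected_classes):
    """Return the DataMatrix code of the first fully matched entry, else None."""
    detected = set(detected_classes)
    for entry in mapping:
        if all(any(c in detected for c in group) for group in entry["chars"]):
            return entry["dm"]
    return None

mapping = [{"dm": "6263957101037",
            "chars": [["box"], ["logo_en", "logo_fa"], ["product_name"]]}]
print(identify(mapping, ["box", "logo_fa", "product_name"]))  # 6263957101037
print(identify(mapping, ["box", "product_name"]))             # None
```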
| Tab | Purpose |
|---|---|
| Dashboard | Real-time monitoring, timeline, counters |
| AI Assistant | Chat with AI about detections and quality |
| Gallery | Browse images with PiGallery2 |
| Charts | Grafana metrics and analytics |
| Hardware | Serial communication, lighting controls, encoder |
| Cameras | Camera config, states, IP discovery |
| Inference | Models, pipelines, weights, confidence |
| Process | Ejector, OK/NG config, detection alerts, image processing |
| Advanced | Timeline, Redis, AI, database, data file, config export/import |
Auto-generated by `start.py`. Override manually if needed:

| Variable | Default (Linux) | Default (Windows) | Description |
|---|---|---|---|
| `DATA_ROOT` | `/mnt/SSD-RESERVE` | `.` | Root path for images and volumes |
| `PRIVILEGED` | `true` | `false` | Docker privileged mode for `/dev` access |
| `YOLO_REPLICAS` | auto-detected | auto-detected | Number of YOLO container replicas |
| `YOLO_WORKERS` | auto-detected | auto-detected | Uvicorn workers per replica |
| `SHM_SIZE` | auto-detected | auto-detected | Shared memory for YOLO containers |
| `REDIS_MAXMEMORY` | auto-detected | auto-detected | Redis max memory (MB) |
| Service | Container | Port | Description |
|---|---|---|---|
| Vision Engine | `monitait_vision_engine` | `80` (→5050) | Core QC engine, web UI, API |
| YOLO Inference | `yolo_inference` (×2) | `4442` | AI object detection (GPU) |
| Redis | `monitait_redis` | `6379` | Cache & message queue |
| TimescaleDB | `monitait_timescaledb` | `5432` | Time-series database |
| Grafana | `monitait_grafana` | `3000` | Metrics visualization |
| PiGallery2 | `monitait_pigallery2` | `5000` | Image gallery browser |
- Raw Images: `${DATA_ROOT}/raw_images/`
- YOLO Weights: `./volumes/weights/`
- TimescaleDB: `${DATA_ROOT}/volumes/timescaledb/`
- Redis: `${DATA_ROOT}/volumes/redis/`
- Grafana: `${DATA_ROOT}/volumes/grafana/`
- Gallery: `${DATA_ROOT}/volumes/pigallery2_*/`
- `GET /` - Status monitoring web interface
- `GET /health` - Health check
- `GET /status` - Configuration page
- `WS /ws/timeline` - Real-time timeline updates
- `GET /api/procedures` - Get ejection procedures
- `POST /api/procedures` - Update ejection procedures
- `GET /api/color-reference/{class}` - Get color reference (L*a*b*)
- `POST /api/color-reference/{class}` - Set fixed color reference
- `GET /api/states` - Get camera states
- `POST /api/states` - Create/update camera state
- Python 3.10 / FastAPI - API framework
- OpenCV 4.7 - Image processing
- YOLOv5 (PyTorch) - Object detection
- pylibdmtx - DataMatrix decoding
- Redis - Message broker & cache
- TimescaleDB (PostgreSQL 15) - Time-series database
- Grafana - Metrics dashboards
MonitaQC is evolving into a unified quality control platform. Planned features:
- Merge with fabric inspection capabilities (from FabriQC)
- Merge with signal counting capabilities (from PartQC Signal Vision Engine)
- Unified admin interface for all QC modes
- Multi-application mode support
- Enhanced API with OpenAPI documentation
- Advanced analytics and reporting
- Cloud synchronization improvements
Submit feature requests and ideas to the project issues backlog.
For issues or questions:
- Email: admin@smartfalcon-ai.com
Active Development - Production deployment ready
Proprietary - Smart Falcon AI (smartfalcon-ai.com)
Note: This project is forked from PartQC Box Counter and serves as the foundation for the unified MonitaQC platform.