YOLOv8-based object detection for construction site safety compliance. Detects 10 classes across PPE and site elements.
| ID | Class | ID | Class |
|---|---|---|---|
| 0 | Hardhat | 5 | Person |
| 1 | Mask | 6 | Safety Cone |
| 2 | NO-Hardhat | 7 | Safety Vest |
| 3 | NO-Mask | 8 | machinery |
| 4 | NO-Safety Vest | 9 | vehicle |
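The `NO-*` classes (IDs 2–4) are what make compliance checks possible: they directly signal missing PPE. A minimal, illustrative helper (not part of `train_detect.py`) that mirrors the table above:

```python
# Class map copied from the table above; the helper itself is a sketch,
# not code shipped with this repo.
CLASSES = {
    0: "Hardhat", 1: "Mask", 2: "NO-Hardhat", 3: "NO-Mask",
    4: "NO-Safety Vest", 5: "Person", 6: "Safety Cone",
    7: "Safety Vest", 8: "machinery", 9: "vehicle",
}

VIOLATION_IDS = {2, 3, 4}  # the NO-* classes indicate missing PPE

def has_violation(detected_ids):
    """Return True if any detected class ID signals a PPE violation."""
    return any(cid in VIOLATION_IDS for cid in detected_ids)
```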
```bash
pip install ultralytics
```

Download the dataset from Kaggle and place it under `archive/css-data/`:
Construction Site Safety Image Dataset (Roboflow) on Kaggle
```
archive/css-data/
├── train/
│   ├── images/
│   └── labels/
├── valid/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/
```
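Ultralytics expects a dataset YAML that points at this layout and names the classes. The exact file used by `train_detect.py` isn't shown here, but a minimal sketch of such a config (file name and paths are assumptions) would look like:

```yaml
# data.yaml — illustrative sketch, assuming the layout above
path: archive/css-data
train: train/images
val: valid/images
test: test/images
names:
  0: Hardhat
  1: Mask
  2: NO-Hardhat
  3: NO-Mask
  4: NO-Safety Vest
  5: Person
  6: Safety Cone
  7: Safety Vest
  8: machinery
  9: vehicle
```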
Pre-trained weights (included):

```
archive/results_yolov8n_100e/kaggle/working/runs/detect/train/weights/best.pt
```
Train from scratch using the YOLOv8n backbone:

```bash
python train_detect.py --mode train
```

Fine-tune starting from the archive's pre-trained weights:

```bash
python train_detect.py --mode train --resume
```

Override epochs and batch size:

```bash
python train_detect.py --mode train --epochs 100 --batch 8
```

Use a different YOLOv8 model variant (e.g. `yolov8s`):

```bash
python train_detect.py --mode train --model yolov8s.pt
```

Training outputs are saved to `runs/train/css_safety/`. After training, the model is automatically evaluated on the test split and mAP metrics are printed.
Detect on an image:

```bash
python train_detect.py --mode detect --source path/to/image.jpg
```

Detect on a folder of images:

```bash
python train_detect.py --mode detect --source path/to/folder/
```

Detect on a video:

```bash
python train_detect.py --mode detect --source path/to/video.mp4
```

Detect with a webcam:

```bash
python train_detect.py --mode detect --source 0
```

Use a specific weights file:

```bash
python train_detect.py --mode detect --source img.jpg --weights runs/train/css_safety/weights/best.pt
```

Display detections in a window while running:

```bash
python train_detect.py --mode detect --source img.jpg --show
```

Annotated outputs are saved to `runs/detect/css_safety/`.
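Raw detections still need post-processing to answer "which person is non-compliant?" A common approach is to pair `NO-*` boxes with `Person` boxes by overlap. This is an illustrative sketch (not part of `train_detect.py`); boxes are assumed to be `(x1, y1, x2, y2)` in pixels:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def flag_persons(persons, violations, thresh=0.3):
    """Return indices of person boxes that overlap any violation box."""
    return [i for i, p in enumerate(persons)
            if any(iou(p, v) >= thresh for v in violations)]
```

The 0.3 threshold is a tunable assumption: PPE boxes sit inside the person box, so moderate overlap is enough to attribute a violation.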
| Argument | Description | Default |
|---|---|---|
| `--mode` | `train` or `detect` (required) | — |
| `--source` | Image/video path, folder, or webcam index | — |
| `--weights` | Weights file for detection (auto-resolved if omitted) | auto |
| `--resume` | Fine-tune from archive weights instead of `yolov8n.pt` | false |
| `--epochs` | Override training epochs | 50 |
| `--batch` | Override batch size | 16 |
| `--model` | YOLOv8 base model (ignored when `--resume` is set) | `yolov8n.pt` |
| `--conf` | Confidence threshold | 0.25 |
| `--iou` | IoU threshold for NMS | 0.45 |
| `--show` | Display detections in a window | false |
When `--weights` is not specified, the script resolves weights in this order:

1. `runs/train/css_safety/weights/best.pt` (from a local training run)
2. `archive/results_yolov8n_100e/.../best.pt` (pre-trained weights from the archive)
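The fallback order above can be sketched in a few lines. This mirrors the described behavior but is illustrative; `resolve_weights` and its error message are assumptions, not the script's actual code:

```python
from pathlib import Path

def resolve_weights(explicit=None):
    """Pick a weights file: explicit path first, then local run, then archive."""
    if explicit:
        return Path(explicit)
    candidates = [
        Path("runs/train/css_safety/weights/best.pt"),  # local training run
        Path("archive/results_yolov8n_100e/kaggle/working/"
             "runs/detect/train/weights/best.pt"),      # bundled archive weights
    ]
    for c in candidates:
        if c.exists():
            return c
    raise FileNotFoundError("no weights found; train first or pass --weights")
```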