LazyLabel

LazyLabel combines Meta's Segment Anything Model (SAM) with comprehensive manual annotation tools to accelerate the creation of pixel-perfect labels for computer vision applications.


Get Started

Full install (with AI segmentation):

pip install lazylabel-gui[include-ai]
lazylabel-gui

Core install (manual annotation only, no PyTorch required):

pip install lazylabel-gui
lazylabel-gui

From source:

git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
pip install -e ".[include-ai]"   # full install
# or: pip install -e .           # core only
lazylabel-gui

Requirements: Python 3.10+, 8GB RAM. Full install needs ~2.5GB additional disk space for model weights.


Core Features

Annotation Tools

| Tool | Description | Details |
|------|-------------|---------|
| AI (SAM) | Point-based segmentation | SAM 1.0 & 2.1, GPU/CPU |
| Box | Bounding box annotations | Hold Shift to erase |
| Polygon | Vertex-level precision | Click to place vertices |

Editing Tools

  • Move Polygon
  • Move Vertex
  • Select: Click to select existing masks for editing, reclassing, or deletion. Hold Shift+Space to erase the overlap of a drawn segment from the selected mask.

Annotation Modes

  • Single View: Fine-tune individual masks with maximum precision
  • Multi View: Annotate up to 2 images simultaneously, ideal for objects in similar positions with slight variations
  • Sequence: Propagate a refined mask across thousands of frames using SAM 2's video predictor

Streaming Mode

For large image sets (1,000–10,000+ images), streaming mode processes the sequence in chunks of 250 frames with bounded memory (~3–4 GB) regardless of total size. Every chunk receives the full set of human-labeled reference images prepended to its batch, so SAM2 always has the complete object vocabulary available. Enable or disable via the Streaming checkbox in the Propagation controls (on by default for sequences over 250 frames).
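The chunking behavior described above can be sketched as a small generator. This is a hypothetical helper mirroring the description, not LazyLabel's actual implementation:

```python
def build_batches(frames, reference_frames, chunk_size=250):
    """Split a frame sequence into fixed-size chunks, prepending the
    human-labeled reference frames to every chunk so the predictor
    always sees the full object vocabulary."""
    for start in range(0, len(frames), chunk_size):
        yield list(reference_frames) + frames[start:start + chunk_size]
```

A 600-frame sequence with 2 reference images would yield three batches of 252, 252, and 102 frames.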

Image Processing

  • FFT filtering: Remove noise and enhance edges
  • Channel thresholding: Isolate objects by color
  • Border cropping: Zero out pixels outside defined regions in saved outputs
  • View adjustments: Brightness, contrast, gamma correction, color saturation
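To illustrate the FFT filtering idea, here is a minimal NumPy low-pass sketch (zeroing high-frequency coefficients suppresses fine-grained noise). This is a generic example, not LazyLabel's actual filter:

```python
import numpy as np

def fft_lowpass(image, keep_fraction=0.1):
    """Keep only the central (low-frequency) FFT coefficients of a
    2-D grayscale image and invert back to the spatial domain."""
    fshift = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = True
    filtered = np.fft.ifft2(np.fft.ifftshift(fshift * mask))
    return np.real(filtered)
```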

Export Formats

Select one or more formats from Settings. All formats can be loaded back into LazyLabel.

NPZ - One-hot encoded mask tensors (.npz)

import numpy as np

data = np.load('image.npz')
mask = data['mask']  # Shape: (height, width, num_classes)

# Each channel represents one class
sky = mask[:, :, 0]
boats = mask[:, :, 1]
cats = mask[:, :, 2]
dogs = mask[:, :, 3]
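Many training pipelines want a single-channel class-index map instead of one-hot channels. A small sketch (assuming pixels with no active channel are background, index 0):

```python
import numpy as np

def onehot_to_class_map(mask):
    """Collapse an (H, W, C) one-hot mask into an (H, W) map of
    class indices 1..C, with 0 where no channel is set."""
    class_map = mask.argmax(axis=-1) + 1
    class_map[mask.sum(axis=-1) == 0] = 0
    return class_map
```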

Standard Formats

| Format | Output File | Description |
|--------|-------------|-------------|
| YOLO Detection | image.txt | Bounding boxes: class_id cx cy w h (normalized) |
| YOLO Segmentation | image_seg.txt | Polygon vertices: class_id x1 y1 x2 y2 ... (normalized) |
| COCO JSON | image_coco.json | Per-image COCO format with polygon segmentation, bounding boxes, and area |
| Pascal VOC | image.xml | XML bounding box annotations |
| CreateML | image_createml.json | Apple CreateML JSON with center-based bounding boxes |
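Since the YOLO coordinates are normalized, consumers typically convert them back to pixels. A hypothetical one-line parser for a detection row (not part of LazyLabel):

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one 'class_id cx cy w h' YOLO detection line (normalized)
    into (class_id, (x_min, y_min, x_max, y_max)) in pixel coordinates."""
    parts = line.split()
    cls = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:])
    return cls, ((cx - w / 2) * img_w, (cy - h / 2) * img_h,
                 (cx + w / 2) * img_w, (cy + h / 2) * img_h)
```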

COCO supercategories: Set a class alias to name.supercategory (e.g. dog.animal) to populate the supercategory field in COCO JSON output.
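The alias convention splits on the first dot; a sketch of how such an alias could be parsed (illustrative only, not LazyLabel's code):

```python
def split_alias(alias):
    """Split a 'name.supercategory' class alias into its parts;
    return (alias, None) if no supercategory is present."""
    name, sep, supercategory = alias.partition(".")
    return (name, supercategory) if sep else (alias, None)
```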


Model Setup

SAM 1.0 models are downloaded automatically on first use.

If the automatic download doesn't work, you can manually download and place the model:

SAM 1.0

SAM 1.0 requires only the model weights file; no additional package installation is needed.

  1. Download sam_vit_h_4b8939.pth from the SAM repository
  2. Place in LazyLabel's models folder:
    • Via pip: <site-packages>/lazylabel/models/ (run python -c "import lazylabel; print(lazylabel.__path__[0])" to find it)
    • From source: src/lazylabel/models/

SAM 2.1 (improved accuracy, required for Sequence mode)

SAM 2.1 requires both the sam2 package and the model weights file, since it relies on config files bundled with the package.

  1. Install SAM 2: pip install git+https://github.com/facebookresearch/sam2.git
  2. Download a model (e.g., sam2.1_hiera_large.pt) from the SAM 2 repository
  3. Place in LazyLabel's models folder:
    • Via pip: <site-packages>/lazylabel/models/ (run python -c "import lazylabel; print(lazylabel.__path__[0])" to find it)
    • From source: src/lazylabel/models/

Select the model from the dropdown in settings.

MobileNetV3 (used by Find Archetypes in Sequence mode)

The MobileNetV3 model (~4MB) is downloaded automatically on first use from torchvision and cached locally for offline use.

If the automatic download doesn't work:

  1. On a machine with internet, generate the weights file:
    python -c "import torch; from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights; m = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1); torch.save(m.state_dict(), 'mobilenetv3_small_tv.pth')"
  2. Copy mobilenetv3_small_tv.pth to LazyLabel's models folder:
    • Via pip: <site-packages>/lazylabel/models/
    • From source: src/lazylabel/models/

Building Windows Executable

Create a standalone Windows executable with bundled models for offline use:

Requirements:

  • Windows (native, not WSL)
  • Python 3.10+
  • PyInstaller: pip install pyinstaller

Build steps:

git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
python build_system/windows/build_windows.py

The executable will be created in dist/LazyLabel/. The entire folder (~7–8 GB) is portable: it can be moved anywhere and runs offline.


Documentation