
HoVer-NeXt Inference

HoVer-NeXt is a fast and efficient nuclei segmentation and classification pipeline.

A variety of data formats are supported, including all OpenSlide-supported formats, .npy numpy array dumps, and common image formats such as JPEG and PNG. If you run into trouble using this repository, please create an issue and we will be happy to help!

For training code, please check the hover-next training repository.

Find the publication here: https://openreview.net/pdf?id=3vmB43oqIO

Quick Start

# 1. Clone the repository
git clone https://github.com/pathology-data-mining/hover_next_inference.git
cd hover_next_inference

# 2. Set up environment (using conda)
conda env create -f environment.yml
conda activate hovernext
pip install torch==2.1.1 torchvision==0.16.1 --index-url https://download.pytorch.org/whl/cu118

# 3. Run inference on a slide
python3 main.py \
    --input "/path-to-wsi/wsi.svs" \
    --output_dir "results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4 \
    --inf_workers 16 \
    --pp_tiling 10 \
    --pp_workers 16

The model weights are automatically downloaded on first use.

Setup

The environments for training and inference are the same, so if you have already set up the environment for training, you can use it for inference as well.

Option 1: Using Conda (Recommended)

conda env create -f environment.yml
conda activate hovernext
pip install torch==2.1.1 torchvision==0.16.1 --index-url https://download.pytorch.org/whl/cu118

Option 2: Install as Python Package

# Install dependencies first
pip install -r requirements.txt
pip install torch==2.1.1 torchvision==0.16.1 --index-url https://download.pytorch.org/whl/cu118

# Install the package
pip install -e .

After installation, you can run inference in any of the following ways:

  • python3 main.py [arguments] from the repository directory
  • hover-next-inference [arguments] from anywhere (after pip install)
  • python3 -m inference [arguments] from anywhere (after pip install)

Option 3: Docker/Singularity Container

Use the prebuilt Docker/Singularity container (see the Docker and Apptainer/Singularity section below).

Model Weights

Weights are hosted on Zenodo. By specifying one of the IDs listed below, weights are automatically downloaded and loaded.

Dataset          ID                            Weights
Lizard-Mitosis   "lizard_convnextv2_large"     Large
                 "lizard_convnextv2_base"      Base
                 "lizard_convnextv2_tiny"      Tiny
PanNuke          "pannuke_convnextv2_tiny_1"   Tiny (Fold 1)
                 "pannuke_convnextv2_tiny_2"   Tiny (Fold 2)
                 "pannuke_convnextv2_tiny_3"   Tiny (Fold 3)

If you download weights manually, unzip them so that the checkpoint folder (e.g. lizard_convnextv2_large) sits in the same directory as main.py.
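For example (a sketch; the zip filename depends on which checkpoint you downloaded from Zenodo):

# unzip next to main.py so the checkpoint folder is found automatically
unzip lizard_convnextv2_large.zip -d .
ls lizard_convnextv2_large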

Usage

Command-line Arguments

Get a full list of available arguments:

python3 main.py --help

Required Arguments

  • --input: Path to input file, glob pattern (e.g., "/path/*.svs"), or text file containing paths
  • --output_dir: Directory where results will be saved
  • --cp: Model checkpoint identifier (see Model Weights) or path to local checkpoint

Common Optional Arguments

  • --tta: Number of test-time augmentation views (default: 4, recommended for robust results)
  • --batch_size: Batch size for inference (default: 64)
  • --inf_workers: Number of workers for inference dataloader (default: 4, set to number of CPU cores)
  • --pp_workers: Number of workers for post-processing (default: 16, set to number of CPU cores)
  • --pp_tiling: Tiling factor for post-processing (default: 8, increase if running out of memory)
  • --save_polygon: Save output as polygons for QuPath (see the example after this list)
  • --only_inference: Only run inference step (useful for GPU/CPU separation on clusters)
  • --keep_raw: Keep raw prediction files (can be large)
  • --metric: Metric to optimize post-processing for: 'f1', 'mpq', or 'pannuke' (default: 'f1')
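For example, to also export QuPath-compatible polygons (a sketch using only the flags documented above):

python3 main.py \
    --input "/path-to-wsi/wsi.svs" \
    --output_dir "results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4 \
    --save_polygon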

For more examples and advanced configurations, see example_config.sh.

Documentation

For detailed API documentation and developer guides, see the docs directory.

WSI Inference

This pipeline uses OpenSlide to read images and therefore supports all formats supported by OpenSlide. If you want to run this pipeline on custom ome.tif files, ensure that the necessary metadata, such as resolution, downsampling, and dimensions, is available. Additionally, .czi is supported via pylibCZIrw. Before running a slide, choose parameters appropriate for your machine (see "Optimizing inference for your machine" below).
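If you are unsure whether a file will work, you can first check it with OpenSlide directly. A minimal sketch (the path is a placeholder):

import openslide

slide = openslide.OpenSlide("/path-to-wsi/wsi.svs")
print(slide.dimensions)         # level-0 (width, height)
print(slide.level_downsamples)  # available downsample factors
# resolution in microns per pixel; None means the metadata is missing
print(slide.properties.get(openslide.PROPERTY_NAME_MPP_X))
slide.close()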

To run a single slide:

python3 main.py \
    --input "/path-to-wsi/wsi.svs" \
    --output_dir "results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4 \
    --inf_workers 16 \
    --pp_tiling 10 \
    --pp_workers 16

To run multiple slides, specify a glob pattern such as "/path-to-folder/*.mrxs" or provide a list of paths as a .txt file.
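For example (hypothetical paths; slides.txt would contain one slide path per line):

# glob pattern (quote it so the shell does not expand it)
python3 main.py --input "/path-to-folder/*.mrxs" --output_dir "results/" --cp "lizard_convnextv2_large"

# list of paths in a text file
python3 main.py --input "slides.txt" --output_dir "results/" --cp "lizard_convnextv2_large"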

Slurm

If you are running on a Slurm cluster, consider separating inference (GPU) and post-processing (CPU) to improve GPU utilization. Use the --only_inference parameter for the first job, then submit a second job with the same parameters but without --only_inference, as sketched below.
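A minimal sketch of this pattern (resource flags and paths are placeholders; adapt them to your cluster):

# GPU job: inference only
jid=$(sbatch --parsable --gres=gpu:1 --wrap \
    "python3 main.py --input '/path-to-wsi/wsi.svs' --output_dir 'results/' \
     --cp 'lizard_convnextv2_large' --only_inference")

# CPU job: same parameters without --only_inference, started once the GPU job succeeds
sbatch --dependency=afterok:$jid --cpus-per-task=16 --wrap \
    "python3 main.py --input '/path-to-wsi/wsi.svs' --output_dir 'results/' \
     --cp 'lizard_convnextv2_large'"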

NPY / Image inference

NPY and image inference work the same way as WSI inference; however, the output is only a ZARR array.

python3 main.py \
    --input "/path-to-file/file.npy" \
    --output_dir "/results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4 \
    --inf_workers 16 \
    --pp_tiling 10 \
    --pp_workers 16

Support for other datatypes is easy to implement. Check the NPYDataloader for reference.
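To inspect the resulting ZARR array in Python, something like the following should work (the output filename is hypothetical; check your --output_dir for the actual name):

import numpy as np
import zarr

# open the prediction array read-only
arr = zarr.open("results/file_output.zarr", mode="r")
print(arr.shape, arr.dtype)
pred = np.asarray(arr)  # load into memory only if it fits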

Optimizing inference for your machine:

  1. Keep the WSI on the local machine or on fast-access network storage.
  2. If you have multiple machines, e.g. CPU-only machines, you can move post-processing to one of them.
  3. '--tta 4' yields robust results at very high speed.
  4. '--inf_workers' should be set to the number of available cores.
  5. '--pp_workers' should be set to the number of available cores minus one, with '--pp_tiling' set to a low number at which the machine does not run out of memory. E.g. on a 16-core machine, '--pp_workers 16 --pp_tiling 8' works well (see the example after this list). If you are running out of memory, increase --pp_tiling.
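Putting these recommendations together, a command for a hypothetical 16-core machine could look like this:

python3 main.py \
    --input "/path-to-wsi/wsi.svs" \
    --output_dir "results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4 \
    --inf_workers 16 \
    --pp_workers 16 \
    --pp_tiling 8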

Using the output files for downstream analysis:

By default, the pipeline produces an instance map, a class lookup with centroids, and a number of .tsv files to load into QuPath. sample_analysis.ipynb shows examples of how to use these files.
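As a starting point, the .tsv files are plain tab-separated tables and can be inspected with pandas (the filename below is hypothetical; see your output directory and sample_analysis.ipynb for the real names):

import pandas as pd

df = pd.read_csv("results/wsi_detections.tsv", sep="\t")
print(df.columns.tolist())
print(df.head())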

Docker and Apptainer/Singularity Container:

Download the Singularity image from Zenodo.

# don't forget to mount your local directory
export APPTAINER_BINDPATH="/storage"
apptainer exec --nv /path-to-container/hover_next.sif \
    python3 /path-to-repo/main.py \
    --input "/path-to-wsi/*.svs" \
    --output_dir "results/" \
	--cp "lizard_convnextv2_large" \
    --tta 4 
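For Docker, an equivalent invocation could look like this (the image tag is a placeholder; use the image you built or pulled):

# mount your data directory into the container
docker run --gpus all -v /storage:/storage hover_next:latest \
    python3 /path-to-repo/main.py \
    --input "/path-to-wsi/*.svs" \
    --output_dir "results/" \
    --cp "lizard_convnextv2_large" \
    --tta 4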

License

This repository is licensed under the GNU General Public License v3.0 (see License Info). If you intend to use this repository for commercial use cases, please check the licenses of all Python packages referenced in the Setup section and described in requirements.txt and environment.yml.

Citation

If you are using this code, please cite:

@inproceedings{baumann2024hover,
  title={HoVer-NeXt: A Fast Nuclei Segmentation and Classification Pipeline for Next Generation Histopathology},
  author={Baumann, Elias and Dislich, Bastian and Rumberger, Josef Lorenz and Nagtegaal, Iris D and Martinez, Maria Rodriguez and Zlobec, Inti},
  booktitle={Medical Imaging with Deep Learning},
  year={2024}
}

and

@inproceedings{rumberger2022panoptic,
  title={Panoptic segmentation with highly imbalanced semantic labels},
  author={Rumberger, Josef Lorenz and Baumann, Elias and Hirsch, Peter and Janowczyk, Andrew and Zlobec, Inti and Kainmueller, Dagmar},
  booktitle={2022 IEEE International Symposium on Biomedical Imaging Challenges (ISBIC)},
  year={2022},
  pages={1-4},
  doi={10.1109/ISBIC56247.2022.9854551}
}
