Zhe Zhu1, Le Wan2, Rui Xu3, Yiheng Zhang4, Honghua Chen5, Zhiyang Dou3, Cheng Lin6, Yuan Liu2†, Mingqiang Wei1†
† Corresponding authors
1 Nanjing University of Aeronautics and Astronautics 2 Hong Kong University of Science and Technology 3 The University of Hong Kong 4 National University of Singapore 5 Lingnan University 6 Macau University of Science and Technology
- Install the required environment

```shell
conda create -n PartSAM python=3.11 -y
conda activate PartSAM
# PyTorch 2.4.1 with CUDA 12.4
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu124
pip install lightning==2.2 h5py yacs trimesh scikit-image loguru boto3
pip install mesh2sdf tetgen pymeshlab plyfile einops libigl polyscope potpourri3d simple_parsing arrgh open3d safetensors
pip install hydra-core omegaconf accelerate timm igraph ninja
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.4.1+cu124.html
apt install libx11-6 libgl1 libxrender1
pip install vtk
```
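Once the packages above are installed, a quick sanity check (a minimal snippet, not part of the repo's scripts) confirms that PyTorch imports and that CUDA is visible:

```python
# Minimal environment check for the install steps above (not a repo script).
# Confirms the PyTorch version and whether CUDA is available.
import torch

print(torch.__version__)           # expect 2.4.1+cu124 with the wheel above
print(torch.cuda.is_available())   # True on a machine with working CUDA drivers
```

If `torch.cuda.is_available()` prints `False`, check your NVIDIA driver version against the CUDA 12.4 wheel before proceeding.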
- Install other third-party modules (torkit3d and apex) following Point-SAM
- Install the pretrained model weights

```shell
pip install -U "huggingface_hub[cli]"
huggingface-cli login
huggingface-cli download Czvvd/PartSAM --local-dir ./pretrained
```
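If you prefer to fetch the weights from Python instead of the CLI, `huggingface_hub` exposes the same download as a function (a sketch; a gated or private repo still requires `huggingface-cli login` or a token beforehand):

```python
# Download the PartSAM weights programmatically,
# equivalent to the `huggingface-cli download` command above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Czvvd/PartSAM",   # same repo as the CLI command
    local_dir="./pretrained",  # same target directory
)
print(local_path)  # prints the local directory containing the weights
```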
- Run evaluation

```shell
# Modify the config file to evaluate your own meshes
python evaluation/eval_everypart.py
```
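The comment above says to modify the config file; the exact schema lives in the repo, but an evaluation config of this kind typically looks like the sketch below. Every key here is a hypothetical placeholder, so check the files under `evaluation/` for the real field names:

```yaml
# Hypothetical evaluation config sketch -- the real keys in this repo may differ.
checkpoint: ./pretrained/model.safetensors  # path to the downloaded weights
mesh_dir: ./data/my_meshes                  # directory containing your own meshes
output_dir: ./results                       # where segmentation results are written
```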
- Release inference code of PartSAM
- Release the pre-trained models
- Release training code and data processing script
Our code is based on these wonderful works:
We thank the authors for their great work!
```bibtex
@article{zhu2025partsam,
  title={PartSAM: A Scalable Promptable Part Segmentation Model Trained on Native 3D Data},
  author={Zhe Zhu and Le Wan and Rui Xu and Yiheng Zhang and Honghua Chen and Zhiyang Dou and Cheng Lin and Yuan Liu and Mingqiang Wei},
  journal={arXiv preprint arXiv:2509.21965},
  year={2025}
}
```