Fur reconstruction from multi-view images using 3D Gaussian Splatting.
Make sure PATH includes <CUDA_DIR>/bin and LD_LIBRARY_PATH includes <CUDA_DIR>/lib64.
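For reference, the environment variables above can be set like this (the CUDA path is a placeholder; substitute your actual CUDA 11.8 installation directory):

```shell
# Example environment setup; adjust CUDA_DIR to your installation.
export CUDA_DIR=/usr/local/cuda-11.8          # placeholder path
export PATH="$CUDA_DIR/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_DIR/lib64:$LD_LIBRARY_PATH"
```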
cd submodules/GaussianHaircut
bash install.sh

This will clone the external dependencies (pytorch3d, simple-knn, glm), create a conda environment, and install all required packages. NeuralHaircut is already included in ext/.
Download the preprocessed data from Google Drive (link) and place it into your desired location, e.g.:
/path/to/data/panda_processed_GH2/walk/
pip install gdown
gdown https://drive.google.com/uc?id=1rIxvQKXVaMZ6Xzx7MAmHfucnbezLSbZt
unzip *.zip
The data directory should contain:
- sdf_grid.npy, min_bound.npy, max_bound.npy -- SDF volume for the penetration loss
- neus_lr.obj -- reconstructed mesh (used for the chamfer loss)
- furless.obj -- furless body mesh (used for strand initialization)
- tan_furless.npy -- precomputed tangent directions
- annotations_furless_reshaped2.json -- body part annotations (controls per-region fur length and gravity)
- eyes.ply -- eye landmarks (used for metric scale estimation)
- images/ -- input images
- masks/ -- segmentation masks
- orientations/ -- orientation maps
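Before launching training, it can be worth confirming the data directory is complete. The helper below is not part of the repository; it is a small sketch that checks for the files and folders listed above:

```shell
# Hypothetical sanity check (not part of the repo): report any missing
# files or directories in a preprocessed data directory.
check_fur_data() {
    root="$1"
    for f in sdf_grid.npy min_bound.npy max_bound.npy neus_lr.obj \
             furless.obj tan_furless.npy \
             annotations_furless_reshaped2.json eyes.ply; do
        [ -e "$root/$f" ] || echo "missing file: $f"
    done
    for d in images masks orientations; do
        [ -d "$root/$d" ] || echo "missing directory: $d"
    done
}

# Usage: check_fur_data /path/to/data/panda_processed_GH2/walk
```

No output means everything expected is in place.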
Edit submodules/GaussianHaircut/simple_run_panda.sh and set your paths:
CUDA_HOME=/path/to/cuda-11.8 # your CUDA installation
DATA_PATH=/path/to/data/panda_processed_GH2/walk/
ENV_PATH=/path/to/conda/env # conda environment name or path

DATA_PATH is passed to the training and export scripts via --data_root, which automatically replaces the DATA_ROOT placeholder in the YAML config, so there is no need to edit the YAML config separately.
The YAML config (src/arguments/metrical_panda_furless_15k_small.yaml) contains animal-specific parameters like per-region fur length, gravity directions, and loss settings. These don't need to change between runs of the same animal.
cd submodules/GaussianHaircut
bash simple_run_panda.sh

The pipeline runs two stages:
- Fur strand reconstruction -- optimizes strand geometry and appearance using orientation, mask, chamfer, SDF, shape consistency, and gravity losses
- Strand export -- exports the reconstructed strands as a .ply point cloud
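As a quick check of the exported strands, you can print the PLY header, which is plain text even in binary PLY files, to see the point count and per-point attributes. The filename below is an example, not the pipeline's actual output name:

```shell
# Print the PLY header (everything up to "end_header") of an exported
# strand point cloud; "strands.ply" is a placeholder filename.
awk '{print} /^end_header/{exit}' strands.ply
```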
Results are saved to the SAVE_EXP_PATH directory. Use TensorBoard to monitor training progress.
NeuralFur/
submodules/
GaussianHaircut/
src/
train_latent_fur.py # main training script
preprocessing/export_fur.py # strand export
scene/ # scene, cameras, gaussian models
gaussian_renderer/ # rendering
utils/ # losses, camera utils, etc.
arguments/ # config and CLI argument parsing
ext/
NeuralHaircut/ # strand prior, texture networks
diff_gaussian_rasterization_hair/ # custom CUDA rasterizer
simple_run_panda.sh # example run script
install.sh # installation script
- Code for processed scenes
- Reconstruction results
- Preprocessing pipeline for new data (deadline: April 12, 2026)
- Detailed guide for using the obtained .ply inside Unreal Engine
The code is distributed for research purposes under CC BY-NC-SA 4.0.
If you find this work useful, please consider citing:
@inproceedings{NeuralFur26,
title = {{NeuralFur}: Animal Fur Reconstruction from Multi-view Images},
award_paper = {Best Paper Runner Up},
booktitle = {Int.~Conf.~on 3D Vision (3DV)},
month = mar,
year = {2026},
author = {Sklyarova, Vanessa and Kabadayi, Berna and Yiannakidis, Anastasios and Becherini, Giorgio and Black, Michael J. and Thies, Justus},
month_numeric = {3}
}