Lukas Radl*1 ·
Felix Windisch*1 ·
Andreas Kurz*1 ·
Thomas Köhler1 ·
Michael Steiner1 ·
Markus Steinberger1,2
1 Graz University of Technology 🇦🇹
2 Huawei Technologies 🇦🇹
CoMe is a method for unbounded mesh extraction using 3D Gaussians. Compared to recent methods, CoMe faithfully balances photometric and geometric losses via a confidence-based framework, enabling fast, detailed mesh extraction. For a visual overview, see our project page.
- April 23, 2026 — Code/Assets release. We updated the paper on arXiv, and fixed a minor bug regarding the normal variance loss; results are not affected.
Find all instructions for running our code here!
Setup
```shell
# Clone the repository
git clone https://github.com/r4dl/CoMe.git
cd CoMe

# Create a conda environment
# default settings: torch > 2.1, cuda 12.1 (tested)
conda env create --file environment.yml
conda activate come

# Install the remaining dependencies
pip install submodules/simple-knn/ --no-build-isolation
pip install submodules/diff-gaussian-rasterization/ --no-build-isolation

# NEW: Custom Fused SSIM Implementation
pip install submodules/decoupled-fused-ssim/ --no-build-isolation

# Fused Implementation from Rahul for backwards compatibility
pip install git+https://github.com/rahul-goel/fused-ssim/ --no-build-isolation
```

To extract meshes, install Tetra-Triangulation, based on Tetra-NeRF:
```shell
cd submodules/tetra-triangulation
cmake . -DCMAKE_POLICY_VERSION_MINIMUM=3.5
# it might be necessary to define the CUDA path for building
# export CPATH=/usr/local/<CUDA_VERSION>/targets/x86_64-linux/include:$CPATH
make
# Note: editable mode is required here
pip install -e . --no-build-isolation
```

We have tested this implementation with Ubuntu 22.04 and CUDA 12.1.
Data
For our evaluation, we used the following datasets:
| Dataset Name | Link | Note |
|---|---|---|
| Tanks & Temples | Download | |
| DTU | Download | |
| Mip-NeRF 360 | Download | |
| ScanNet++-v2 | Download | |
Each link redirects you to a download page!
Our scripts assume all data resides within the `data/` directory.
If your data lies elsewhere, modify `DATA_DIR` in scripts/constants.py#L11.
```
data
├── TNT_GOF
│   ├── Barn
│   └── ...
├── DTU
├── SCN
└── m360
```
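As a minimal sketch, the expected layout can be created programmatically (assuming the default `data/` root; adjust it if you changed `DATA_DIR`):

```python
# Create the expected dataset layout under the default data/ root.
from pathlib import Path

for name in ["TNT_GOF", "DTU", "SCN", "m360"]:
    (Path("data") / name).mkdir(parents=True, exist_ok=True)
```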
Tanks & Temples post-install
Note: For Tanks and Temples, additional care needs to be taken!
First, you need to rename `<SCENE>_COLMAP_SfM.log` to `<SCENE>_traj_path.log` for every scene! Afterwards, visit the download page for TNT. For each scene, download everything and paste it into the corresponding scene folder! Now your setup is good to go!
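The renaming step can be scripted; a minimal sketch (the `data/TNT_GOF` root is an assumption, adjust it to your setup):

```python
# Rename <SCENE>_COLMAP_SfM.log to <SCENE>_traj_path.log in every scene folder.
from pathlib import Path

def rename_tnt_logs(tnt_dir):
    for log in Path(tnt_dir).glob("*/*_COLMAP_SfM.log"):
        log.rename(log.with_name(log.name.replace("_COLMAP_SfM.log", "_traj_path.log")))

rename_tnt_logs("data/TNT_GOF")  # adjust the data root if needed
```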
ScanNet++ post-install
We only tested a small subset of all scenes, see scripts/constants.py#L26. To run our scripts, move these scenes directly into `<DATA_DIR>/SCN`.
DTU post-install
Download both the `SampleSet` and the `Points` from here. See `DTU_GT_DATA` in `scripts/run_dtu.py`.
Scripts
We provide scripts to train, mesh/render and evaluate our method, using the same hyperparameters as reported in the paper/used in the evaluation.
Note: There may be some noise in the final results; for convenience, we provide the point clouds/meshes we used for evaluation in our paper (Tanks and Temples only)!
```shell
# Training, Meshing (Marching Tets) and Evaluation for Tanks & Temples
python scripts/run_tnt.py
# Training, Meshing (Marching Tets) and Evaluation for ScanNet++
python scripts/run_scn.py
# Training, Meshing (TSDF) and Evaluation for DTU
python scripts/run_dtu.py
# Training, Rendering and Evaluation for NVS (Mip-NeRF 360 by default)
python scripts/run_nvs.py
```

Note: To show the results, simply use the corresponding `show_*` script, e.g., `python scripts/show_nvs.py`.
Training
To train our method, use the `train.py` script, as in, e.g., StopThePop. To document rasterizer settings, we use `.json` files, located in the `configs/` directory.

```shell
# SOF default settings
python train.py --splatting_config configs/hierarchical.json -s <path to dataset>
```

See StopThePop or SOF for more details!
The most important new hyperparameters live under `MeshingParams` in `arguments/__init__.py`; non-default values for experiments are in the paper or in `scripts/run_*.py`.
SSIM-decoupled appearance
- `use_vastgaussian_appearance` (default: `false`)
- `use_ssimdecoupled_appearance` (default: `false`, Ours)

Note: Defaults to no appearance embedding (e.g. for Mip-NeRF). Use `--use_ssimdecoupled_appearance` for meshing!
Color confidence
- `color_confidence` (default: `false`)
- `color_confidence_max` (default: `0.075`)
- `color_confidence_from_iter` (default: `500`)

Note: Optional confidence weighting for color; all three flags live in `MeshingParams`. Use `--color_confidence` for meshing!
Variance losses
- `lambda_variance` (default: `0.0`)
- `variance_from_iter` (default: `15000`)
- `lambda_normal_variance` (default: `0.0`)
- `normal_variance_from_iter` (default: `15000`)

Note: Losses are off by default; increase `lambda_*` after `*_from_iter` to use the auxiliary color / normal variance terms. Use `--lambda_variance 0.5 --lambda_normal_variance 0.005` for meshing!
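The gating behaviour of these flags can be illustrated with a small sketch (names and structure are illustrative, not the actual training-loop code):

```python
# Illustrative sketch: the auxiliary variance terms only contribute to the
# total loss once their *_from_iter threshold is reached, scaled by lambda_*.
def total_loss(photometric, color_var, normal_var, iteration,
               lambda_variance=0.5, variance_from_iter=15000,
               lambda_normal_variance=0.005, normal_variance_from_iter=15000):
    loss = photometric
    if iteration >= variance_from_iter:
        loss += lambda_variance * color_var
    if iteration >= normal_variance_from_iter:
        loss += lambda_normal_variance * normal_var
    return loss
```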
Meshing
For synthetic, single-object scenes (such as DTU), we use TSDF fusion, which can be run using

```shell
python extract_mesh_tsdf.py -m <MODEL_PATH>
```

Note: By default, we use a `voxel_size` of `0.002`, but it can be modified via `--voxel_size`.
As a result, you will get the ply-file in `<MODEL_PATH>/test/ours_30000/tsdf.ply`.
Here, we use Fast Marching Tetrahedra (as proposed by SOF), which are run using

```shell
python extract_mesh_tets.py -m <MODEL_PATH>
```

Hint: If you run out-of-memory or obtain overly large meshes, consider adding `--opacity_cutoff_tetra <VAL>`, with `<VAL>` larger than `0.0039` (the default value). This will remove redundant, almost transparent primitives from the initial point set.
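Conceptually, the cutoff prunes the initial point set before tetrahedralization; a hypothetical sketch (not the actual extraction code):

```python
# Hypothetical sketch of the opacity cutoff: Gaussians whose opacity falls
# below the threshold are dropped before building the tetrahedral grid.
def prune_by_opacity(points, opacities, cutoff=0.0039):
    return [p for p, a in zip(points, opacities) if a > cutoff]
```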
As a result, you will get the ply-file in `<MODEL_PATH>/test/ours_30000/mesh_faster_binary_search_7.ply`.
Note: We marginally accelerated the mesh extraction process by no longer obtaining exact opacity values for the first iteration, compared to SOF.
Note: You can (and probably should) inspect these meshes using our mesh viewer (`python mesh_viewer.py <PATH TO PLY FILE>`). See the Visualization & Debugging section below for more details.
Evaluation
All evaluation scripts for meshing are contained in mesh_utils/, whereas the evaluation scripts for novel view synthesis are in the base directory.
Tanks & Temples
To evaluate your meshes for the Tanks & Temples dataset, use
```shell
python mesh_utils/eval_TNT.py \
    --dataset-dir <DATASET> \
    --ply-path <PATH TO MESH> \
    --traj-path <TRAJ PATH LOG FILE> \
    --out-dir <OUT DIR>
```

Note: For the `<TRAJ PATH LOG FILE>`, we used the `<SCENE>_COLMAP_SfM.log` file you get from the TNT_GOF download; see Data for details.
ScanNet++
To evaluate your meshes for the ScanNet++ dataset, use
```shell
python mesh_utils/eval_SCN.py \
    --dataset-dir <DATASET> \
    --ply-path <PATH TO MESH> \
    --out-dir <OUT DIR>
```

Note: By default, we use a $\tau$ of `0.05`; you can change this here.
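For intuition, the F1-score at threshold $\tau$ combines precision (fraction of predicted points within $\tau$ of the ground truth) and recall (fraction of ground-truth points within $\tau$ of the prediction); a brute-force sketch, not the evaluation code itself:

```python
# Brute-force sketch of the F1-score between two point clouds at threshold tau.
def f_score(pred, gt, tau=0.05):
    def nearest(p, cloud):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in cloud)
    precision = sum(nearest(p, gt) < tau for p in pred) / len(pred)
    recall = sum(nearest(q, pred) < tau for q in gt) / len(gt)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
```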
DTU
To evaluate your meshes for the DTU dataset, use
```shell
python mesh_utils/eval_DTU.py \
    --instance_dir <PATH TO SCAN> \
    --input_mesh <PATH TO MESH> \
    --dataset_dir <PATH TO GT DATA> \
    --vis_out_dir <OUT DIR>
```

Note: The `GT DATA` needs to be downloaded separately from this webpage; see Data for details.
Novel View Synthesis (Mip-NeRF 360)
To evaluate novel view synthesis, run
```shell
# render images
python render.py -m <MODEL DIRECTORY> --skip_train
# create metrics
python metrics.py -m <MODEL DIRECTORY>
```

This is the exact same workflow as in `scripts/run_nvs.py`. Alternatively, you can also adapt the `run_nvs.py` script.

Note: By default, we run Mip-NeRF 360 with the default settings; to evaluate a different dataset, modify the script:

```python
# modify these to test a different dataset
scenes = ...
factors = ...
TRAIN_DATA = ...
```
Metrics
These are the results for the latest run, using this codebase!
Note: The numbers may vary slightly per run; this is not the original codebase we used, but a cleaned-up version!
Tanks and Temples
Table: F1-Score evaluation
| Metric | Barn | Caterpillar | Courthouse | Ignatius | Meetingroom | Truck | Average |
|---|---|---|---|---|---|---|---|
| Code (v1) | 0.534 | 0.466 | 0.334 | 0.779 | 0.375 | 0.639 | 0.521 |
| Paper | 0.534 | 0.472 | 0.333 | 0.782 | 0.372 | 0.634 | 0.521 |

Hint: Use the `show_tnt.py` script to quickly get the metrics (after the corresponding `run` script)!
ScanNet++ (small)
Table: F1-Score evaluation
| Metric | 5a269b | 08bbbd | 39f36d | dc263d | ef18cf | fb564c | Average |
|---|---|---|---|---|---|---|---|
| Code (v1) | 0.663 | 0.729 | 0.666 | 0.722 | 0.528 | 0.661 | 0.662 |
| Paper | 0.670 | 0.729 | 0.657 | 0.715 | 0.551 | 0.684 | 0.668 |

Note: We remove the last 4 letters/digits of the scene name for a better layout.
Visualization & Debugging
Our visualization suite is built upon Splatviz, and is fully self-contained within this repository.
To use it, first navigate to the splatviz/ directory.
In it, run either

```shell
# if not already in splatviz
cd splatviz

# to attach to a currently running training session
python run_main.py --mode attach {--port <PORT>}

# to render a trained gaussian point cloud
# <PATH TO A POINT CLOUD FILE> must be a directory
python run_main.py --data_path <PATH TO A POINT CLOUD FILE>
```

With both (yes, both), open the Render tab to check out different debug visualization modes (e.g. Depth/Normal/Transmittance/Confidence), modify rasterizer settings on-the-fly, or just inspect the current scene.
We additionally provide a mesh viewer to inspect triangulated meshes. To run, simply do

```shell
python mesh_viewer.py <PATH TO PLY FILE>
```

By default, normals are displayed. Check out the CLI for more information!
This code has been built on top of SOF, which was built on top of StopThePop, and as such, is primarily licensed under the "Gaussian Splatting License". For more information, we refer to our Notice.
```bibtex
@misc{radl2026come,
    author = {Radl, Lukas and Windisch, Felix and Kurz, Andreas and K{\"o}hler, Thomas and Steiner, Michael and Steinberger, Markus},
    title = {{Confidence-Based Mesh Extraction from 3D Gaussians}},
    year = {2026},
    eprint = {2603.24725},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV},
    url = {https://arxiv.org/abs/2603.24725},
}
```
