# DiffSim

Unconditional and conditional diffusion models for geological facies modeling.
DiffSim provides a unified framework for training and using diffusion models on geological data, supporting:
- 2D facies modeling (channel facies, mud drapes)
- 3D volumetric modeling (3D facies)
- Unconditional generation (generate new samples from noise)
- Conditional generation (conditioned on given well facies)
## Cases

| Case | Description | Type |
|---|---|---|
| case1_geomodeling | 2D channel facies | 2D |
| case2_muddrape | 2D mud drape facies | 2D |
| case3_la3d | 3D facies modeling | 3D |
## Requirements

- Python 3.10+ (tested with 3.12)
- PyTorch 2.0+ with CUDA support (recommended for training)
- CUDA 12.1 (recommended)
## Installation

### Option 1: Using environment.yml (recommended)
```bash
# Create conda environment from file
conda env create -f environment.yml
conda activate diffsim
```

### Option 2: Manual conda setup
```bash
# Create a new conda environment
conda create -n diffsim python=3.12
conda activate diffsim

# Install PyTorch with CUDA 12.1 (adjust for your CUDA version, see https://pytorch.org)
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia

# Install other dependencies
pip install -r requirements.txt
```

### Option 3: Using pip only
```bash
# Install PyTorch first (with CUDA support)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

# Install other dependencies
pip install -r requirements.txt
```

## Verifying the Installation

Run the test script to verify that everything is working:

```bash
python tests/test_models.py
```

You should see all tests pass:
```
=== Testing Imports ===
[PASS] diffsim main package
[PASS] diffusion module
...
ALL TESTS PASSED - Installation verified!
```
## Project Structure

```
DiffSim/
├── diffsim/                  # Main package
│   ├── models/               # UNet architectures
│   │   ├── unet.py           # 2D UNet
│   │   ├── unet3d.py         # 3D UNet
│   │   └── guided_diffusion/ # Conditional models
│   ├── core/                 # Core utilities
│   │   ├── diffusion.py      # Beta schedules, sampling
│   │   ├── network.py        # Conditional network wrapper
│   │   └── utils.py          # Utility functions
│   └── data/                 # Dataset classes
│       ├── dataset.py        # 2D and 3D datasets
│       └── mask.py           # Mask generation
├── configs/                  # Configuration files
├── notebooks/                # Jupyter notebooks for inference
├── scripts/                  # Training scripts
└── checkpoints/              # Model checkpoints (gitignored)
```
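`diffsim/core/diffusion.py` implements the beta schedules and the sampling loop. Purely as an illustration of what a linear beta schedule computes, here is a dependency-free sketch; the endpoint values `1e-4` and `0.02` are the standard DDPM defaults and are an assumption here — the exact values used inside the package may differ.

```python
# Sketch of a linear beta schedule and the cumulative alpha product used in
# DDPM-style diffusion. Endpoints 1e-4/0.02 are assumed defaults, not
# necessarily those hard-coded in diffsim/core/diffusion.py.

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced per-step noise variances beta_1..beta_T."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def alpha_bar(betas):
    """Cumulative product of (1 - beta_t): how much signal survives to step t."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

betas = linear_beta_schedule(1500)  # timesteps=1500, matching the configs
abar = alpha_bar(betas)             # monotonically decreasing toward ~0
```

With 1500 steps, `abar` decays to nearly zero, i.e. the final timestep is almost pure noise, which is what lets sampling start from a Gaussian.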
## Pretrained Models

Download pretrained models from Google Drive and place them in the `checkpoints/` directory.

## Quick Start
```python
import torch
from diffsim import Unet, Diffusion

# Create model
model = Unet(dim=64, channels=1, dim_mults=(1, 2, 4))

# Load checkpoint
model.load_state_dict(torch.load('checkpoints/case1_geomodeling/unconditional.pth'))

# Generate samples
diffusion = Diffusion(timesteps=1500, beta_schedule='linear')
samples = diffusion.sample(model, image_size=64, batch_size=16, channels=1)
```

## Training

Unconditional training:

```bash
python scripts/train_unconditional.py --config configs/case1_geomodeling.json
```

Conditional training:

```bash
python scripts/train_conditional.py --config configs/case1_geomodeling.json
```

## Inference Notebooks

See the notebooks in `notebooks/` for complete inference examples:

- `case1_geomodeling.ipynb` - 2D channel facies
- `case2_muddrape.ipynb` - 2D mud drape facies
- `case3_la3d.ipynb` - 3D facies modeling
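Conditional runs are driven by well-facies masks (see `diffsim/data/mask.py` for the actual generation code). Purely as a hypothetical sketch of the idea — a grid that is 1 at observed well locations and 0 elsewhere, with wells modeled as random vertical columns — not the package's real mask format:

```python
import random

def well_mask(size=64, n_wells=4, seed=0):
    """Toy conditioning mask: vertical 'wells' at random columns are observed
    (1), everything else is unobserved (0). Illustrative only; the real mask
    logic lives in diffsim/data/mask.py and may differ."""
    rng = random.Random(seed)
    cols = rng.sample(range(size), n_wells)
    return [[1 if c in cols else 0 for c in range(size)] for _ in range(size)]

mask = well_mask()  # 64x64 grid with 4 observed well columns
```

During conditional sampling, such a mask marks where the generated facies must honor the given well data.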
## Configuration

Each case has a JSON configuration file in `configs/` with the following structure:

```json
{
  "name": "case1_geomodeling",
  "type": "2d",
  "image_size": 64,
  "unconditional": {
    "timesteps": 1500,
    "channels": 1,
    "dim": 64,
    "dim_mults": [1, 2, 4],
    "beta_schedule": "linear",
    "training": { ... }
  },
  "conditional": {
    "timesteps": 1500,
    "in_channel": 5,
    "out_channel": 1,
    ...
  }
}
```

## Acknowledgments

- The Annotated Diffusion Model by Hugging Face
- Palette: Image-to-Image Diffusion Models by Janspiry
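A config file with the structure shown in the Configuration section can be read with the standard `json` module. The fragment below is an inline, hypothetical excerpt (a real run would load e.g. `configs/case1_geomodeling.json` from disk), and forwarding the values to `Unet(...)` / `Diffusion(...)` mirrors the quick-start example:

```python
import json

# Hypothetical inline config excerpt mirroring the documented structure;
# in practice: cfg = json.load(open('configs/case1_geomodeling.json'))
config_text = '''
{
  "name": "case1_geomodeling",
  "type": "2d",
  "image_size": 64,
  "unconditional": {
    "timesteps": 1500,
    "channels": 1,
    "dim": 64,
    "dim_mults": [1, 2, 4],
    "beta_schedule": "linear"
  }
}
'''
cfg = json.loads(config_text)
uc = cfg["unconditional"]
# These values would be passed to Unet(dim=..., channels=..., dim_mults=...)
# and Diffusion(timesteps=..., beta_schedule=...) as in the quick start.
print(uc["dim"], tuple(uc["dim_mults"]), uc["timesteps"])
```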
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## Citation

If you use this code in your research, please cite:

```bibtex
@inproceedings{xu2024diffsim,
  title={Denoising diffusion model-based subsurface modeling and quantitative interpretation},
  author={Xu, Minghui and Song, Suihong and Mukerji, Tapan},
  booktitle={Fourth International Meeting for Applied Geoscience \& Energy},
  pages={1660--1664},
  year={2024},
  organization={Society of Exploration Geophysicists and American Association of Petroleum Geologists},
  url={https://pubs.geoscienceworld.org/segeab/proceedings/SEGEAB.43/1/1660/693551}
}
```