2 changes: 2 additions & 0 deletions .gitattributes
@@ -0,0 +1,2 @@
+# SCM syntax highlighting & preventing 3-way merges
+pixi.lock merge=binary linguist-language=YAML linguist-generated=true -diff
8 changes: 5 additions & 3 deletions .github/workflows/build_test.yml
@@ -15,9 +15,11 @@ jobs:
         with:
           submodules: "recursive"
           fetch-depth: 0
-      - name: Install Conda environment
-        uses: mamba-org/provision-with-micromamba@main
+      - name: Setup Pixi
+        uses: prefix-dev/setup-pixi@v0.8.1
+        with:
+          pixi-version: latest
       - name: Build
         shell: bash -l {0}
         run: |
-          pdm build
+          pixi run pdm build
102 changes: 67 additions & 35 deletions .github/workflows/build_wheels.yml
@@ -1,69 +1,101 @@
-# Build on every branch push, tag push, and pull request change:
+# Build on every pull request change:
 # From: https://github.com/pypa/cibuildwheel/blob/main/examples/github-deploy.yml
-name: Build wheels
-on: [push, pull_request]
+name: Build and upload to PyPI
+
+on:
+  workflow_dispatch:
+  release:
+    types:
+      - published
 
 jobs:
   build_wheels:
-    name: Build wheel for ${{ matrix.python }}-${{ matrix.buildplat[1] }}
-    runs-on: ${{ matrix.buildplat[0] }}
-    environment: pypi
+    name: Build wheels for ${{ matrix.os }}
+    runs-on: ${{ matrix.runs-on }}
     strategy:
       # Ensure that a wheel builder finishes even if another fails
      fail-fast: false
      matrix:
-        # From NumPy
-        # Github Actions doesn't support pairing matrix values together, let's improvise
-        # https://github.com/github/feedback/discussions/7835#discussioncomment-1769026
-        buildplat:
-          - [ubuntu-20.04, manylinux_x86_64]
-          - [ubuntu-20.04, musllinux_x86_64] # No OpenBlas, no test
-          - [macos-12, macosx_x86_64]
-          # - [windows-2019, win_amd64]
-        python: ["cp38", "cp39","cp310", "cp311"]
+        include:
+          - os: linux-intel
+            runs-on: ubuntu-latest
+          - os: linux-arm
+            runs-on: ubuntu-24.04-arm
+          - os: windows-intel
+            runs-on: windows-latest
+          - os: windows-arm
+            runs-on: windows-11-arm
+          - os: macos-intel
+            # macos-15-intel is the last x86_64 runner
+            runs-on: macos-15-intel
+          - os: macos-arm
+            # macos-14+ (including latest) are ARM64 runners
+            runs-on: macos-latest
+          - os: android-intel
+            runs-on: ubuntu-latest
+            platform: android
+          - os: android-arm
+            # GitHub Actions doesn’t currently support the Android emulator on any ARM
+            # runner. So we build on a non-ARM runner, which will skip the tests.
+            runs-on: ubuntu-latest
+            platform: android
+            archs: arm64_v8a
+          - os: ios
+            runs-on: macos-latest
+            platform: ios
+          - os: pyodide
+            runs-on: ubuntu-latest
+            platform: pyodide
 
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v5
 
       - name: Build wheels
-        uses: pypa/cibuildwheel@v2.12.3
+        uses: pypa/cibuildwheel@v3.3.1
         env:
-          CIBW_BUILD: ${{ matrix.python }}-${{ matrix.buildplat[1] }}
+          CIBW_PLATFORM: ${{ matrix.platform || 'auto' }}
+          CIBW_ARCHS: ${{ matrix.archs || 'auto' }}
         # Can also be configured directly, using `with:`
        # with:
        #   package-dir: .
        #   output-dir: wheelhouse
        #   config-file: "{package}/pyproject.toml"
 
-      - uses: actions/upload-artifact@v3
+      - uses: actions/upload-artifact@v4
         with:
+          name: cibw-wheels-${{ matrix.os }}-${{ strategy.job-index }}
           path: ./wheelhouse/*.whl
 
   build_sdist:
     name: Build source distribution
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v5
 
       - name: Build sdist
-        shell: bash -l {0}
         run: pipx run build --sdist
 
-      - uses: actions/upload-artifact@v3
+      - uses: actions/upload-artifact@v4
         with:
+          name: cibw-sdist
           path: dist/*.tar.gz
 
   upload_pypi:
     needs: [build_wheels, build_sdist]
     runs-on: ubuntu-latest
-    # upload to PyPI on every tag starting with 'v'
-    if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
-    # alternatively, to publish when a GitHub Release is created, use the following rule:
-    # if: github.event_name == 'release' && github.event.action == 'published'
     environment: pypi
     permissions:
       id-token: write
+    if: github.event_name == 'release' && github.event.action == 'published'
+    # or, alternatively, upload to PyPI on every tag starting with 'v' (remove on: release above to use this)
+    # if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
     steps:
-      - uses: actions/download-artifact@v3
+      - uses: actions/download-artifact@v5
         with:
-          # unpacks default artifact into dist/
-          # if `name: artifact` is omitted, the action will create extra parent dir
-          name: artifact
+          # unpacks all CIBW artifacts into dist/
+          pattern: cibw-*
           path: dist
+          merge-multiple: true
 
       - uses: pypa/gh-action-pypi-publish@release/v1
-        with:
-          user: __token__
-          password: ${{ secrets.PYPI_API_TOKEN }}
+        # To test uploads to TestPyPI, uncomment the following:
+        # with:
+        #   repository-url: https://test.pypi.org/legacy/
3 changes: 3 additions & 0 deletions .gitignore
@@ -328,3 +328,6 @@ Temporary Items
 .apdisk
 
 
+# pixi environments
+.pixi/*
+!.pixi/config.toml
136 changes: 131 additions & 5 deletions README.md
@@ -11,7 +11,7 @@ The library consists of thin wrappers to `potlib` under `cpot` and a
 
 This is [on PyPI](https://pypi.org/project/pypotlib), with wheels, so usage is simply:
 
-``` bash
+```bash
 pip install pypotlib
 ```
 
@@ -21,13 +21,14 @@ work with.
 
 ### Local Development
 
-The easiest way is to use the environment file, compatible with `conda`,
-`mamba`, `micromamba` etc.
+The easiest way is to use the `pixi` environment.
 
 ```bash
-micromamba env create -f environment.yml
-micromamba activate rgpotpy
+pixi s
 pdm install
+# For tests
+pixi s -e with-ase
+pytest tests/test_cache.py
 ```
 
 ### Production
@@ -93,6 +94,130 @@ optimizer = BFGS(neb)
 optimizer.run(fmax=0.04)
 ```
 
+## Caching runs
+
+`pypotlib` supports persistent caching via RocksDB. This allows energy and force
+evaluations to be stored and retrieved, significantly speeding up repeated
+calculations on identical configurations.
+
+```python
+import pypotlib.cpot as cpot
+import numpy as np
+
+# 1. Initialize the cache with a directory path
+# This will create a RocksDB database at the specified location.
+cache = cpot.PotentialCache("/tmp/my_pot_cache", create_if_missing=True)
+
+# 2. Create the potential and link the cache
+lj = cpot.LJPot()
+lj.set_cache(cache)
+
+# 3. Use as normal
+pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
+types = [1, 1]
+box = np.eye(3) * 10.0
+
+# First call: computes and stores the result in the DB
+e1, f1 = lj(pos, types, box)
+
+# Second call (same inputs): retrieves the result from the DB (instant)
+e2, f2 = lj(pos, types, box)
+```
+
+### ASE Caching
+
+The ASE calculator provides more sophisticated caching: its internal checks for
+equivalent structures further reduce calls to the underlying compiled code.
+
+```python
+from ase import Atoms
+from pypotlib import cpot
+from pypotlib.ase_adapters import PyPotLibCalc
+
+# Setup potential with cache
+cache = cpot.PotentialCache("ase_cache_db")
+pot = cpot.CuH2Pot()
+pot.set_cache(cache)
+
+# Create calculator and attach it to the atoms
+atoms = Atoms(symbols=["Cu", "H"], positions=[[0, 0, 0], [0.5, 0.5, 0.5]])
+calc = PyPotLibCalc(pot)
+atoms.calc = calc
+
+print(atoms.get_potential_energy())
+print(atoms.get_forces())
+```
+
+### NEB Example with Benchmarking
+
+To really see the power of the cache, we can run an NEB optimization twice. The
+first run performs the calculations and populates the RocksDB database. The
+second run, performing the exact same optimization, hits the cache for every
+step, reducing the computational cost to near zero.
+
+```python
+import time
+import shutil
+from ase import Atoms
+from ase.mep import NEB
+from ase.optimize import BFGS
+from pypotlib import cpot
+from pypotlib.ase_adapters import PyPotLibCalc
+
+# Setup a persistent cache
+cache_path = "/tmp/neb_demo_cache"
+# Clear previous cache to ensure a "cold" start for demonstration
+shutil.rmtree(cache_path, ignore_errors=True)
+cache = cpot.PotentialCache(cache_path, create_if_missing=True)
+
+
+def setup_neb_images():
+    """Helper to create fresh images for the NEB."""
+    atoms_initial = Atoms(symbols=["H", "H"], positions=[(0, 0, 0), (0, 0, 1)])
+    atoms_final = Atoms(symbols=["H", "H"], positions=[(0, 0, 2), (0, 0, 3)])
+
+    images = [atoms_initial]
+    images += [atoms_initial.copy() for _ in range(3)]
+    images += [atoms_final]
+
+    # Attach calculators with the SHARED cache
+    for image in images:
+        pot = cpot.LJPot()
+        pot.set_cache(cache)  # All images share the same DB
+        image.calc = PyPotLibCalc(pot)
+
+    return images
+
+
+# --- Run 1: Cold Cache (Calculates & Writes) ---
+print("Starting Run 1 (Cold Cache)...")
+images_1 = setup_neb_images()
+neb_1 = NEB(images_1)
+neb_1.interpolate(method="idpp")
+opt_1 = BFGS(neb_1)
+
+start_1 = time.time()
+opt_1.run(fmax=0.04)
+duration_1 = time.time() - start_1
+print(f"Run 1 finished in {duration_1:.4f} seconds.")
+
+# --- Run 2: Warm Cache (Reads only) ---
+print("\nStarting Run 2 (Warm Cache)...")
+images_2 = setup_neb_images()  # Re-create identical initial state
+neb_2 = NEB(images_2)
+neb_2.interpolate(method="idpp")
+opt_2 = BFGS(neb_2)
+
+start_2 = time.time()
+opt_2.run(fmax=0.04)
+duration_2 = time.time() - start_2
+print(f"Run 2 finished in {duration_2:.4f} seconds.")
+
+# --- Results ---
+speedup = duration_1 / duration_2 if duration_2 > 0 else 0
+print(f"\nSpeedup factor: {speedup:.1f}x")
+```

 # Contributions
 
@@ -102,4 +227,5 @@ all contributors to follow our [Code of
 Conduct](https://github.com/TheochemUI/pypotlib/blob/main/CODE_OF_CONDUCT.md).
 
 # License
+
 [MIT](https://github.com/TheochemUI/pypotlib/blob/main/LICENSE).
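
Since the caching examples in the README diff above only show hits within a single process, here is a minimal sketch of cross-session reuse, assuming the `PotentialCache` API shown there (persistent RocksDB storage, keyed on identical inputs):

```python
import numpy as np
import pypotlib.cpot as cpot

# Reopen the database written by an earlier run; because the cache is
# persistent, identical inputs should be served from disk, not recomputed.
cache = cpot.PotentialCache("/tmp/my_pot_cache", create_if_missing=True)

lj = cpot.LJPot()
lj.set_cache(cache)

pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
types = [1, 1]
box = np.eye(3) * 10.0

# If a previous process already evaluated these inputs, this is a DB read.
e, f = lj(pos, types, box)
```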
1 change: 1 addition & 0 deletions docs/newsfragments/+a9fe24b8.pypotlib.added.md
@@ -0,0 +1 @@
+feat(pickle): generalize slightly
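
The newsfragment above is terse; as a rough illustration, the kind of usage this change points at might look like the sketch below. This is hypothetical, not the PR's test suite: it assumes potentials such as `cpot.LJPot` survive a `pickle` round-trip, with the call signature taken from the README examples.

```python
import pickle

import numpy as np
import pypotlib.cpot as cpot

pot = cpot.LJPot()
restored = pickle.loads(pickle.dumps(pot))  # round-trip through pickle

# The restored potential should evaluate identically to the original.
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
types = [1, 1]
box = np.eye(3) * 10.0
assert np.isclose(pot(pos, types, box)[0], restored(pos, types, box)[0])
```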
1 change: 1 addition & 0 deletions docs/newsfragments/+c7d8e934.pypotlib.added.md
@@ -0,0 +1 @@
+feat(cache): add a RocksDB integration