
Benchmarking inference layer#33

Open
RishikeshRanade wants to merge 26 commits into NVIDIA:main from RishikeshRanade:benchmarking-inference-layer

Conversation

Collaborator

@RishikeshRanade RishikeshRanade commented Apr 6, 2026

PhysicsNeMo-CFD Pull Request

Description

PhysicsNeMo-CFD evaluation adds a config-driven benchmarking pipeline: registered model wrappers run inference on dataset adapters, built-in metrics (shared with `physicsnemo.cfd.postprocessing_tools`) evaluate predictions against ground truth, and results flow into tabular reports (JSON/CSV/HTML) plus optional PNG report visuals. `workflows/evaluation_examples/` is the primary Hydra workflow (`main.py`, with `conf/config_surface.yaml` and `conf/config_volume.yaml`).
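To make the report stage concrete, here is a minimal sketch of how per-case metric rows could flow into JSON and CSV tables, using only the standard library. All names here (`write_reports`, the row schema, the `reports/` output directory) are illustrative assumptions, not the actual PhysicsNeMo-CFD API.

```python
# Hypothetical sketch of the tabular-report stage: per-case metric rows
# are written once as JSON and once as CSV. Names and schema are illustrative.
import csv
import json
from pathlib import Path


def write_reports(rows: list[dict], out_dir: str) -> None:
    """Write the same per-case metric rows as both JSON and CSV."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # JSON: one array of row objects, human-readable.
    (out / "metrics.json").write_text(json.dumps(rows, indent=2))
    # CSV: column order taken from the first row's keys.
    with open(out / "metrics.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)


rows = [
    {"case_id": "run_001", "model": "demo", "l2_error": 0.012},
    {"case_id": "run_002", "model": "demo", "l2_error": 0.034},
]
write_reports(rows, "reports")
```

An HTML report plugin would follow the same shape, rendering the same rows into a table instead of a flat file.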

Scope of the library

  • Model wrappers: registry-based `CFDModel` implementations.
  • Dataset adapters: a canonical case schema, DrivAerML and Ahmed datasets, extensible via an adapter registry.
  • Automated benchmarking: `run_benchmark` driver, per-case metrics, optional metrics cache, matrix mode (models × datasets), report plugins.
  • Visualization consistent with the original physicsnemo-cfd functionality.
  • Multi-GPU support for processing multiple case IDs.
  • Caching to restart benchmarking from where it left off.
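The registry-based wiring in the list above can be sketched as follows. Everything here is illustrative: `MODEL_REGISTRY`, `register_model`, and the `predict` interface are assumptions for this sketch, not the actual PhysicsNeMo-CFD API.

```python
# Minimal sketch of a registry pattern for model wrappers (dataset adapters
# would use the same mechanism). Names and interfaces are hypothetical.
MODEL_REGISTRY: dict[str, type] = {}


def register_model(name: str):
    """Class decorator that records a model wrapper under a string key."""
    def deco(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return deco


@register_model("identity")
class IdentityModel:
    """Trivial wrapper: returns its input fields unchanged."""
    def predict(self, fields):
        return fields


# A benchmark driver can then instantiate models from config strings,
# which is what makes matrix mode (models x datasets) a pure config concern.
model = MODEL_REGISTRY["identity"]()
```

The benefit of the registry is that adding a new model or dataset requires no changes to the driver, only a new registered class referenced by name from the config.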

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.
  • The CHANGELOG.md is up to date with these changes.

Dependencies

@RishikeshRanade
Collaborator Author

@ram-cherukuri can you please review the readme?

raise ValueError(f"Unsupported point array shape for {name!r}: {data.shape}")


def build_comparison_mesh(
Collaborator


@RishikeshRanade Have we considered using `physicsnemo.mesh` as the default format rather than VTK? It won't work for external models, but for our own models we would be going in this direction, won't we?

@@ -0,0 +1,226 @@
# Benchmarking and Inference
Collaborator


Can we change to: Model evaluation and benchmarking.

Comment thread pyproject.toml
@@ -13,6 +13,8 @@ requires-python = ">=3.10"
license = "Apache-2.0"
dependencies = [
Collaborator


@RishikeshRanade won't physicsnemo be a dependency?

Collaborator Author


Good catch. Will add it.


import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
Collaborator


@RishikeshRanade If we have physicsnemo as a dependency, we will be able to use some of our optimized Warp functions here, as well as the distributed manager. This would carry forward, since we would use the PhysicsInformer module as well as the physicsnemo mesh module.

Collaborator Author


We will work on this.

from physicsnemo.cfd.postprocessing_tools.metrics.physics import (
compute_continuity_residuals,
compute_momentum_residuals,
)
Collaborator


@RishikeshRanade, @ktangsali Shouldn't we be using physicsinformer methods here?

Collaborator


@ram-cherukuri, yes, we could use it. Let's wait for the Sym upstream effort to complete; that will make this integration much smoother.

ram-cherukuri and others added 3 commits April 9, 2026 17:43
Updated the README to clarify the purpose of the workflow and its usage.
Updated section headers and improved clarity of config descriptions.
@@ -0,0 +1,226 @@
# Benchmarking and Inference

This workflow computes **metrics** and produces tabular outputs (JSON/CSV/HTML), optional **PNG visuals**, and optional VTK meshes (`run.save_inference_mesh`, `reports.save_comparison_meshes`), driven by **[Hydra](https://hydra.cc/)** and **OmegaConf** in the same pattern as **`workflows/domino_design_sensitivities/`**.
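The config keys named above suggest a layout roughly like the following. This is a hedged sketch of what `conf/config_surface.yaml` might contain: only `run.save_inference_mesh` and `reports.save_comparison_meshes` come from this README, and every other key and value here is hypothetical.

```yaml
# Hypothetical shape of conf/config_surface.yaml; only the two VTK flags
# below are named in this README, everything else is illustrative.
run:
  save_inference_mesh: false     # also write the raw inference mesh as VTK
reports:
  save_comparison_meshes: false  # write prediction-vs-truth comparison VTK
  formats: [json, csv, html]     # hypothetical: tabular report outputs
```

Since this is a Hydra workflow, any key can be overridden from the command line using Hydra's standard dotted-override syntax, e.g. `python main.py run.save_inference_mesh=true`.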
Collaborator


@RishikeshRanade Let's update this to clarify the following: it's an opinionated workflow, it can be extended as desired, and it uses a config-driven experience. Also, make it a goal-oriented guide: run the benchmark as is, then extend it. I will try to edit the README and we can iterate.
