Conversation
Visualization layer into benchmarking layer
|
@ram-cherukuri can you please review the readme?
```python
raise ValueError(f"Unsupported point array shape for {name!r}: {data.shape}")
```
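For context, the quoted check typically lives in a small normalization helper; a minimal sketch (the function name and accepted shapes are hypothetical, only the error message comes from the diff):

```python
import numpy as np

def normalize_points(name: str, data) -> np.ndarray:
    """Coerce a point array to shape (N, 3), rejecting anything else."""
    data = np.asarray(data)
    if data.ndim == 1 and data.size % 3 == 0:
        # Flat buffer of xyz triples -> (N, 3)
        return data.reshape(-1, 3)
    if data.ndim == 2 and data.shape[1] == 3:
        return data
    raise ValueError(f"Unsupported point array shape for {name!r}: {data.shape}")
```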
```python
def build_comparison_mesh(
```
@RishikeshRanade Have we considered using physicsnemo.mesh as the default format rather than VTK? It won't work for external models, but for our own models we would be going in this direction, won't we?
```diff
@@ -0,0 +1,226 @@
+# Benchmarking and Inference
```
Can we change to: Model evaluation and benchmarking.
```diff
@@ -13,6 +13,8 @@ requires-python = ">=3.10"
license = "Apache-2.0"
dependencies = [
```
@RishikeshRanade won't physicsnemo be a dependency?
Good catch. Will add it.
```python
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
```
@RishikeshRanade If we have physicsnemo as a dependency, we will be able to use some of our optimized warp functions here, as well as the distributed manager. This would continue as we adopt the physicsinformer module as well as the physicsnemo mesh module.
We will work on this.
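For context on the `NearestNeighbors` import quoted above: in this kind of benchmarking pipeline it is typically used to map a predicted field onto ground-truth points. A hedged sketch of that pattern (the function name and signature are hypothetical, not the PR's actual code):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def map_field_to_points(src_points, src_field, dst_points, k=1):
    """Transfer a per-point field from src_points onto dst_points:
    nearest-neighbor copy for k=1, inverse-distance average for k>1."""
    nn = NearestNeighbors(n_neighbors=k).fit(src_points)
    dist, idx = nn.kneighbors(dst_points)
    if k == 1:
        return src_field[idx[:, 0]]
    w = 1.0 / np.maximum(dist, 1e-12)  # avoid divide-by-zero at exact hits
    w /= w.sum(axis=1, keepdims=True)
    vals = src_field[idx]              # (M, k) for scalars, (M, k, C) for vectors
    if vals.ndim == 3:
        w = w[..., None]
    return (vals * w).sum(axis=1)
```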
```python
from physicsnemo.cfd.postprocessing_tools.metrics.physics import (
    compute_continuity_residuals,
    compute_momentum_residuals,
)
```
@RishikeshRanade, @ktangsali Shouldn't we be using physicsinformer methods here?
@ram-cherukuri, yes we could use it. Let's wait for the Sym upstream effort to complete and then it will make this integration much smoother.
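Independent of that integration, the continuity residual such helpers evaluate is, for incompressible flow, the velocity divergence. A generic NumPy sketch on a uniform grid, purely for illustration (this is not the physicsnemo implementation):

```python
import numpy as np

def continuity_residual(u, v, w, dx, dy, dz):
    """Pointwise incompressible continuity residual du/dx + dv/dy + dw/dz
    on a uniform 3D grid, using second-order central differences."""
    dudx = np.gradient(u, dx, axis=0)
    dvdy = np.gradient(v, dy, axis=1)
    dwdz = np.gradient(w, dz, axis=2)
    return dudx + dvdy + dwdz
```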
Updated the README to clarify the purpose of the workflow and its usage.
Updated section headers and improved clarity of config descriptions.
```diff
@@ -0,0 +1,226 @@
+# Benchmarking and Inference
```
This workflow produces **metrics**, tabular outputs (JSON/CSV/HTML), optional **PNG visuals**, and optional VTK outputs (`run.save_inference_mesh`, `reports.save_comparison_meshes`), driven by **[Hydra](https://hydra.cc/)** and **OmegaConf**, following the same pattern as **`workflows/domino_design_sensitivities/`**.
@RishikeshRanade Let's update this to clarify the following: it's an opinionated workflow, it can be extended as desired, and it uses a config-driven experience. More importantly, make it a goal-oriented guide: run the benchmark as-is, then extend it. I will try to edit the readme and we can iterate.
Updated README to streamline configuration instructions and extend workflow customization guidelines.
PhysicsNeMo-CFD Pull Request
Description
This PR adds a config-driven benchmarking pipeline to PhysicsNeMo-CFD evaluation: registered model wrappers run inference on dataset adapters, built-in metrics (shared with `physicsnemo.cfd.postprocessing_tools`) evaluate predictions against ground truth, and results flow into tabular reports (JSON/CSV/HTML) plus optional PNG report visuals. `workflows/evaluation_examples/` is the primary Hydra workflow (`main.py`, `conf/config_surface.yaml` / `config_volume.yaml`).
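The pipeline described above can be sketched as a minimal evaluation loop. Every name here (the registry dicts, the adapter/wrapper callables, the metric signature) is a hypothetical illustration of the config-driven pattern, not the workflow's actual API:

```python
import numpy as np

# Hypothetical registries: the real workflow resolves these from Hydra config.
MODELS = {}    # name -> callable(points) -> predicted field
DATASETS = {}  # name -> callable() -> (points, ground_truth)
METRICS = {
    "l2_rel": lambda p, g: float(
        np.sqrt(((p - g) ** 2).sum() / ((g ** 2).sum() + 1e-12))
    ),
}

def run_benchmark(cfg):
    """Run each configured model on each dataset and tabulate metrics,
    producing rows ready for JSON/CSV/HTML serialization."""
    rows = []
    for model_name in cfg["models"]:
        predict = MODELS[model_name]
        for ds_name in cfg["datasets"]:
            points, truth = DATASETS[ds_name]()
            pred = predict(points)
            row = {"model": model_name, "dataset": ds_name}
            for m in cfg["metrics"]:
                row[m] = METRICS[m](pred, truth)
            rows.append(row)
    return rows
```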
Scope of the library
Checklist
Dependencies