World models serve reinforcement learning, control, and adaptation by predicting future observations from high-dimensional sensor data (e.g., camera images). Their learned latent representations, however, often function as black boxes with no clear connection to the underlying physical state, which makes it difficult to provide strong guarantees based on such world models. This project demonstrates Physically Interpretable World Models (PIWM), a novel architecture that aligns learned latent representations with real-world physical quantities.
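As a rough illustration of what "aligning latents with physical quantities" can mean, the toy sketch below (not the project's actual code; all names and the linear-observation setup are assumptions) fits an encoder so that each latent dimension directly matches a physical state variable, rather than an arbitrary learned feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: observations are noisy linear projections of a
# 2-D physical state (e.g., position and velocity). A PIWM-style latent
# is fit so its coordinates match those physical quantities directly.
A = rng.normal(size=(8, 2))           # observation map: state -> 8-D "pixels"
states = rng.normal(size=(100, 2))    # ground-truth physical states
obs = states @ A.T + 0.01 * rng.normal(size=(100, 8))

# Encoder: least-squares fit from observations back to the physical state.
# The alignment here is supervised, so latent dims are anchored to physics.
W, *_ = np.linalg.lstsq(obs, states, rcond=None)
latents = obs @ W

# By construction, latent[:, 0] tracks position and latent[:, 1] velocity,
# so the representation is interpretable in physical terms.
err = np.abs(latents - states).mean()
print(f"mean absolute alignment error: {err:.4f}")
```

In a real PIWM the encoder would be a neural network and the alignment would be part of the training objective, but the interpretability property is the same: each latent coordinate carries a named physical meaning.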
MrinallU/World-Model-Visualizer