
Situationally-Aware Dynamics Learning


Official repository and resources for the paper "Situationally-Aware Dynamics Learning," published in The International Journal of Robotics Research, 2026.


📖 Abstract

Autonomous robots operating in complex, unstructured environments face significant challenges due to latent, unobserved factors that obscure their understanding of both their internal state and the external world. Addressing this challenge would enable robots to develop a more profound grasp of their operational context.

To tackle this, we propose a novel framework for the online learning of hidden state representations, with which robots can adapt in real time to uncertain and dynamic conditions that would otherwise be ambiguous and lead to suboptimal or erroneous behavior. Our approach is formalized as a Generalized Hidden Parameter Markov Decision Process, which explicitly models the influence of unobserved parameters on both transition dynamics and reward structures.

Our core innovation lies in learning online the joint distribution of state transitions, which serves as an expressive representation of latent ego and environmental factors. This probabilistic approach supports the identification of, and adaptation to, different operational situations, improving robustness and safety. Through a multivariate extension of Bayesian Online Changepoint Detection, our method segments changes in the underlying data-generating process governing the robot's dynamics. The robot's transition model is then informed with a symbolic representation of the current situation, derived from the joint distribution of the latest state transitions, enabling adaptive and context-aware decision-making.
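
To give a flavor of the segmentation step, here is a minimal, illustrative sketch of Bayesian Online Changepoint Detection for a univariate Gaussian stream with known variance. It is not the paper's method (which uses a multivariate extension and the full state-transition distribution); the hazard rate, conjugate prior, and detection rule below are our own simplifying assumptions.

```python
import numpy as np

def bocpd(data, hazard=0.01, mu0=0.0, kappa0=1.0, var=1.0):
    """Illustrative Bayesian Online Changepoint Detection for a 1-D
    Gaussian stream with known variance. Returns detected changepoints."""
    r = np.array([1.0])         # run-length posterior P(r_t | x_{1:t})
    mu = np.array([mu0])        # posterior mean per run-length hypothesis
    kappa = np.array([kappa0])  # pseudo-count per run-length hypothesis
    changepoints, prev_ml = [], 0
    for t, x in enumerate(data):
        # Predictive density of x under every run-length hypothesis
        pred_var = var * (1.0 + 1.0 / kappa)
        pred = np.exp(-0.5 * (x - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = r * pred * (1.0 - hazard)   # the current run continues
        cp = (r * pred * hazard).sum()       # a changepoint resets the run
        r = np.concatenate(([cp], growth))
        r /= r.sum()
        # Conjugate update of the Gaussian mean posterior per hypothesis
        mu = np.concatenate(([mu0], (kappa * mu + x) / (kappa + 1.0)))
        kappa = np.concatenate(([kappa0], kappa + 1.0))
        ml = int(np.argmax(r))               # most likely run length
        if ml < prev_ml:                     # run length collapsed: changepoint
            changepoints.append(t)
        prev_ml = ml
    return changepoints
```

A collapse of the most likely run length signals that the data-generating process has changed, which is the cue the framework uses to re-identify the current situation.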

To demonstrate effectiveness, we validate our approach on an unmanned ground vehicle operating in diverse unstructured terrains, both in simulation and in real-world experiments. We also evaluate a quadrotor in simulation under randomly changing wind conditions. Both setups introduce unmodeled and unmeasured environmental factors that substantially affect robot motion. Extensive experiments in simulation and the real world reveal significant improvements in data efficiency, policy performance, and the emergence of safer, adaptive navigation strategies.


🏷️ Keywords

  • Hidden state representation
  • Symbolic reasoning
  • Representation learning
  • Online learning
  • Model-Based RL (MBRL)

Quick Start

This repository contains a domain-agnostic core and environment-specific examples. For detailed developer documentation, API reference, and a QuadX customization tutorial, see DOCS.md.

Typical Workflow: Training and Evaluation

  1. Set up the environment and verify dependencies:
conda env create -f environment.yml
conda activate situationally_aware_dynamics_learning
cd src
  2. Update the configuration (src/config.yaml):

    • Set replay_buffer_path to point to your offline dataset (e.g., ../data/quadx/replay_buffers/d1_wind_quadx.npz)
    • Set task to match your dataset (e.g., d1_wind_quadx, d2_wind_quadx, d3_wind_quadx)
    • Adjust eval_freq, n_eval_episodes, and other hyperparameters as needed
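
For reference, an illustrative excerpt of how those fields might look in src/config.yaml (the paths and task names follow the examples above; the numeric values are placeholders, and every other key in the shipped file should be left unchanged):

```yaml
replay_buffer_path: ../data/quadx/replay_buffers/d1_wind_quadx.npz  # offline dataset
task: d1_wind_quadx    # must match the chosen dataset
eval_freq: 10          # placeholder value: how often to evaluate
n_eval_episodes: 5     # placeholder value: episodes per evaluation
```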
  3. Train the agent from the offline dataset:

python train_quadx.py

This trains a QuadX situationally-aware agent on the specified dataset. Training outputs include:

  • An ensemble of dynamics models (count set by sa_pets.ensemble_size)
  • Situational awareness module
  • Training logs and evaluation metrics in results/sa_agent/{task}/{seed}/
  4. Evaluate the best models after training:
python evaluate_quadx_models.py

This script:

  • Finds the best-performing epoch for each ensemble model (by validation loss)
  • Loads the final situational awareness module
  • Evaluates the agent on validation targets for n_eval_episodes episodes
  • Saves results and visualizations to the same task directory
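
The per-model selection rule described above can be sketched as follows. Note that `best_epochs` and the `val_losses` layout are hypothetical names for illustration, not the actual API of evaluate_quadx_models.py:

```python
import numpy as np

# Hypothetical sketch of the selection rule: for each ensemble member,
# keep the checkpoint from the epoch with the lowest validation loss.
# `val_losses[m]` holds member m's validation loss at each epoch.
def best_epochs(val_losses):
    return {m: int(np.argmin(losses)) for m, losses in val_losses.items()}

# e.g. member 0 is best at epoch 1, member 1 at epoch 2
example = best_epochs({0: [0.9, 0.4, 0.6], 1: [0.8, 0.7, 0.3]})
```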

Advanced: Manual Agent Training and Evaluation

For custom datasets or workflows outside the QuadX example, use the generic trainer API:

from trainer import train_offline_dataset, evaluate_agent
from your_custom_agent import YourCustomAgent

agent = YourCustomAgent(cfg=cfg, device='cuda')
train_offline_dataset(dataset_path='path/to/data.npz', agent=agent, cfg=cfg, eval_env=eval_env)

See DOCS.md for detailed API documentation and subclassing examples.


📝 Citation

If you find this work or the associated code useful in your research, please cite our paper:

Plain Text:

Alejandro Murillo-González and Lantao Liu. "Situationally-Aware Dynamics Learning." The International Journal of Robotics Research. 2026. doi:10.1177/02783649261431863

BibTeX:

@article{murillo2026situationalawareness,
  author = {Alejandro Murillo-González and Lantao Liu},
  title = {Situationally-Aware Dynamics Learning},
  journal = {The International Journal of Robotics Research},
  volume = {0},
  number = {0},
  pages = {02783649261431863},
  year = {2026},
  doi = {10.1177/02783649261431863},
  URL = {https://doi.org/10.1177/02783649261431863},
  eprint = {https://doi.org/10.1177/02783649261431863}
}
