Robust and Data-efficient Q-learning by Composite Value-estimation

This repository is the official implementation of Robust and Data-efficient Q-learning by Composite Value-estimation.

Abstract: In the past few years, off-policy reinforcement learning methods have shown promising results in their application to robot control. Q-learning based methods, however, still suffer from poor data-efficiency and are susceptible to stochasticity or noise in the immediate reward, which is limiting with regard to real-world applications. We alleviate this problem by proposing two novel off-policy Temporal-Difference formulations: (1) Truncated Q-functions which represent the return for the first n steps of a target-policy rollout with respect to the full action-value and (2) Shifted Q-functions, acting as the farsighted return after this truncated rollout. This decomposition allows us to optimize both parts with their individual learning rates, achieving significant learning speedup and robustness to variance in the reward signal, leading to the Composite Q-learning algorithm. We show the efficacy of Composite Q-learning in the tabular case and furthermore employ Composite Q-learning within TD3. We compare Composite TD3 with TD3 and TD3(Delta), which we introduce as an off-policy variant of TD(Delta). Moreover, we show that Composite TD3 outperforms TD3 as well as TD3(Delta) significantly in terms of data-efficiency in multiple simulated robot tasks and that Composite Q-learning is robust to stochastic immediate rewards.
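To make the decomposition concrete, here is a minimal tabular sketch of the idea as described in the abstract: the full action-value is split into a Truncated Q-function covering the first n rollout steps and a Shifted Q-function covering the farsighted remainder, so that their sum recovers the full return. This is an illustrative toy (a single-state MDP that loops on itself with reward 1, true value 1/(1-gamma)), not the authors' implementation; the exact update rules, target networks, and their interleaving in Composite Q-learning differ.

```python
import numpy as np

# Toy MDP: one state, one action, self-loop with reward 1 per step,
# so the true action-value is 1 / (1 - gamma).
# Sketch of the truncated/shifted decomposition from the abstract;
# NOT the authors' implementation.
gamma, alpha, n = 0.9, 0.5, 3
r = 1.0

q_full = 0.0               # conventional Q-learning estimate
q_trunc = np.zeros(n + 1)  # q_trunc[j]: return of the first j rollout steps
q_shift = np.zeros(n + 1)  # q_shift[j]: value remaining after j steps

for _ in range(2000):
    # Conventional TD update (used here only to ground the shifted recursion).
    q_full += alpha * (r + gamma * q_full - q_full)
    # Truncated Q-functions: q_trunc[j] bootstraps from q_trunc[j-1],
    # so it accumulates only the first j rewards of the rollout (q_trunc[0] = 0).
    for j in range(1, n + 1):
        q_trunc[j] += alpha * (r + gamma * q_trunc[j - 1] - q_trunc[j])
    # Shifted Q-functions: discount the full value forward one step per level,
    # dropping the immediate reward at each level.
    q_shift[0] = q_full
    for j in range(1, n + 1):
        q_shift[j] += alpha * (gamma * q_shift[j - 1] - q_shift[j])

composite = q_trunc[n] + q_shift[n]  # first n steps + farsighted remainder
print(composite)  # converges toward 1 / (1 - gamma) = 10
```

Because the two parts are separate estimators, each can use its own learning rate, which is the source of the speedup and the robustness to reward noise claimed above.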

Running Motivation

python motivation.py

Requirements

pytorch (1.1.0), gym (0.12.1), mujoco_py (2.0.2.2), MuJoCo, GPU with CUDA support

Running Deep Experiments

python SCRIPT.py -e GYM_ENVIRONMENT [-s SEED]

Cite

@article{
	kalweit2022robust,
	title={Robust and Data-efficient Q-learning by Composite Value-estimation},
	author={Gabriel Kalweit and Maria Kalweit and Joschka Boedecker},
	journal={Transactions on Machine Learning Research},
	year={2022},
	url={https://openreview.net/forum?id=ak6Bds2DcI}
}

Contributing

📋 We are glad you are interested in our work! If you would like to contribute, please write an e-mail to kalweitg@cs.uni-freiburg.de
