S-RL Toolbox
Reinforcement Learning (RL) and State Representation Learning (SRL) Toolbox for Robotics
State representation learning (SRL) aims at learning compact representations from raw observations in robotics and control applications. The main goal of such a representation is to improve the sample efficiency of reinforcement learning algorithms by introducing priors on the information relevant for control. This toolbox therefore provides a set of environments, data generators, robotic control tasks, metrics and tools to facilitate iterative state representation learning, the evaluation of the learned representations in reinforcement learning settings, and the visual analysis of their behavior.
Install & Run: All instructions for use are in the README of the GitHub repository.
Additional information: This resource is being used for case studies in the VeriDream project. Beyond reinforcement learning, compact state representations are also relevant for other control approaches such as model predictive control or planning. Approaches to state representation learning fall into four main categories: auto-encoders, learning forward models, learning inverse dynamics, and learning with generic priors on the state characteristics. However, the diversity of applications and methods means the field lacks standard evaluation datasets, metrics, and tasks.
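To make the auto-encoder category concrete, the sketch below trains a minimal linear auto-encoder with plain NumPy and gradient descent: high-dimensional "observations" are compressed to a low-dimensional state and reconstructed, and the reconstruction error is minimized. This is a generic illustration, not the toolbox's own implementation; all sizes and names (`obs_dim`, `state_dim`, the synthetic data) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16-dim raw observations compressed to a 2-dim state.
obs_dim, state_dim = 16, 2

# Linear auto-encoder: encoder weights W_e, decoder weights W_d.
W_e = rng.normal(scale=0.1, size=(obs_dim, state_dim))
W_d = rng.normal(scale=0.1, size=(state_dim, obs_dim))

def encode(obs):
    return obs @ W_e          # observations -> compact state

def decode(state):
    return state @ W_d        # compact state -> reconstructed observation

# Synthetic observations that actually live on a 2-dim manifold,
# so a 2-dim state can in principle capture them fully.
latent = rng.normal(size=(256, state_dim))
mix = rng.normal(size=(state_dim, obs_dim))
observations = latent @ mix

init_loss = np.mean((decode(encode(observations)) - observations) ** 2)

# Train with gradient descent on the mean squared reconstruction error.
lr = 0.01
for _ in range(3000):
    states = encode(observations)
    recon = decode(states)
    err = recon - observations                       # (N, obs_dim)
    loss = np.mean(err ** 2)
    grad_Wd = states.T @ err * (2 / err.size)        # d loss / d W_d
    grad_We = observations.T @ (err @ W_d.T) * (2 / err.size)  # d loss / d W_e
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

print(f"reconstruction MSE: {init_loss:.4f} -> {loss:.4f}")
```

The learned encoder plays the role of the state representation: downstream, a reinforcement learning agent would consume `encode(obs)` instead of the raw observation. The other three categories differ mainly in the training signal (predicting the next state, recovering the action, or enforcing priors such as temporal coherence) rather than in this overall structure.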
Full documentation for the S-RL Toolbox can be found here: https://s-rl-toolbox.readthedocs.io/en/latest/