Deep Reinforcement Learning has been highly successful across a wide range of problems, including robotics. However, particularly in robotic manipulation, the deployment of Reinforcement Learning on real-world systems remains challenging. While traditional robot programming solves robotic tasks through manually devised and programmed motion sequences, trends push towards more generalized robotic applications such as collaborative environments, where it becomes increasingly difficult to anticipate and implement appropriate responses to every possible scenario. For such applications, learning-based solutions provide a promising alternative direction. Still, conventional Reinforcement Learning struggles with uncertainties, which are intrinsic to real-world environments, e.g., sensor noise, estimation errors, or disturbances, but also arise from approximation errors in utilized models, e.g., simulation environments. The state-of-the-art solution for addressing uncertainties in Reinforcement Learning for robotic applications is Domain Randomization. The intrinsic motivation of Domain Randomization is to widen the scope of environments experienced during the learning process through random variation, pushing towards more generalized solutions. However, instead of a random process, the alternative approach of robust Reinforcement Learning proposes a worst-case design, which ensures confrontation with the scenarios that produce the worst outcomes. Despite recent advances in robust Reinforcement Learning providing the tools for integration into Deep Learning architectures, little work discusses modern hierarchical robotic skill learning frameworks from the perspective of robust Reinforcement Learning. Therefore, this work aims to provide a foundation for robust robotic skill learning.
To show how the hierarchical structure of skill learning can be exploited to address different types of uncertainties at different levels of the hierarchy, this work presents three contributions: (i) robust Reinforcement Learning for robotic manipulation tasks under uncertain observations of the environment is discussed and evaluated, (ii) a novel skill embedding framework called Dynamic Adversarial Skill Embeddings is proposed on the basis of robust motor skills to address uncertain robot dynamics, and (iii) the unified robust robotic skill learning concept Robust Dynamic Adversarial Skill Embeddings is proposed, which combines task-level robust Reinforcement Learning and low-level robust motor skills in a single hierarchical structure. Furthermore, a fourth contribution is presented in the form of a semi-autonomous Foosball table. This system was developed as part of this work for continued research into robust Reinforcement Learning, robot learning, and related fields. Contrary to the traditional approach of employing toy examples and then jumping immediately to full-scale robotic systems, Foosball introduces incremental scaling in both complexity and difficulty to bridge the gap between these two extremes. In addition, its inherent joint constraints provide a safer physical environment for the development and validation of novel algorithms and frameworks.