Jakob Hollenstein
Title · Cited by · Year
Pink noise is all you need: Colored noise exploration in deep reinforcement learning
O Eberhard, J Hollenstein, C Pinneri, G Martius
The Eleventh International Conference on Learning Representations, 2022
Cited by 17 · 2022
Continual learning from demonstration of robotics skills
S Auddy, J Hollenstein, M Saveriano, A Rodríguez-Sánchez, J Piater
Robotics and Autonomous Systems 165, 104427, 2023
Cited by 12 · 2023
Action Noise in Off-Policy Deep Reinforcement Learning: Impact on Exploration and Performance
J Hollenstein, S Auddy, M Saveriano, E Renaudo, J Piater
Transactions on Machine Learning Research, 2022
Cited by 10 · 2022
A Visual Intelligence Scheme for Hard Drive Disassembly in Automated Recycling Routines.
E Yildiz, T Brinker, E Renaudo, JJ Hollenstein, S Haller-Seeber, JH Piater, ...
ROBOVIS, 17-27, 2020
Cited by 8 · 2020
Hypernetwork-PPO for Continual Reinforcement Learning
P Schöpf, S Auddy, J Hollenstein, A Rodriguez-Sanchez
Deep Reinforcement Learning Workshop NeurIPS 2022
Cited by 5*
Improving Exploration of Deep Reinforcement Learning using Planning for Policy Search
JJ Hollenstein, E Renaudo, J Piater
Cited by 3 · 2019
Visual Control of Hidden-Semi-Markov-Model based Acoustic Speech Synthesis
J Hollenstein, M Pucher, D Schabus
Auditory-Visual Speech Processing (AVSP) 2013, 2013
Cited by 3 · 2013
How does the type of exploration-noise affect returns and exploration on Reinforcement Learning benchmarks?
J Hollenstein, M Saveriano, S Auddy, E Renaudo, J Piater
Austrian Robotics Workshop, 22-26, 2021
Cited by 2 · 2021
How do Offline Measures for Exploration in Reinforcement Learning behave?
JJ Hollenstein, S Auddy, M Saveriano, E Renaudo, J Piater
arXiv preprint arXiv:2010.15533, 2020
Cited by 2 · 2020
Improving the Exploration of Deep Reinforcement Learning in Continuous Domains using Planning for Policy Search
JJ Hollenstein, E Renaudo, M Saveriano, J Piater
arXiv preprint arXiv:2010.12974, 2020
Cited by 1 · 2020
How Does Explicit Exploration Influence Deep Reinforcement Learning
JJ Hollenstein, E Renaudo, M Saveriano, J Piater
Joint Austrian Computer Vision and Robotics Workshop, 29-30, 2020
Cited by 1 · 2020
Evaluating Planning for Policy Search
JJ Hollenstein, J Piater
1st Workshop on Closing the Reality Gap in Sim2real Transfer for …, 2019
Cited by 1 · 2019
Colored Noise in PPO: Improved Exploration and Performance Through Correlated Action Sampling
J Hollenstein, G Martius, J Piater
Sixteenth European Workshop on Reinforcement Learning, 2023
2023
Differentiable Forward Kinematics for TensorFlow 2
L Mölschl, JJ Hollenstein, J Piater
arXiv preprint arXiv:2301.09954, 2023
2023
An Extended Visual Intelligence Scheme for Disassembly in Automated Recycling Routines
E Yildiz, E Renaudo, J Hollenstein, J Piater, F Wörgötter
Robotics, Computer Vision and Intelligent Systems: First International …, 2022
2022
Pink Noise Is All You Need: Colored Noise Exploration in Deep Reinforcement Learning
O Eberhard, J Hollenstein, C Pinneri, G Martius
Deep Reinforcement Learning Workshop NeurIPS 2022
Can Expressive Posterior Approximations Improve Variational Continual Learning?
S Auddy, J Hollenstein, M Saveriano, A Rodríguez-Sánchez, J Piater