Carlo D'Eramo
Professor of Reinforcement Learning @ University of Würzburg | Group leader @ TU Darmstadt
Verified email at uni-wuerzburg.de · Homepage
Title · Cited by · Year
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
International Conference on Learning Representations (ICLR), 2020
Cited by: 130* (2020)
MushroomRL: Simplifying reinforcement learning research
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
Journal of Machine Learning Research (JMLR) 22, 1-5, 2020
Cited by: 74 (2020)
Estimating the Maximum Expected Value through Gaussian Approximation
C D’Eramo, A Nuara, M Restelli
International Conference on Machine Learning (ICML), 1032-1040, 2016
Cited by: 55 (2016)
Self-Paced Deep Reinforcement Learning
P Klink, C D'Eramo, J Peters, J Pajarinen
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by: 54 (2020)
Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning
AS Morgan, D Nandha, G Chalvatzaki, C D'Eramo, AM Dollar, J Peters
International Conference on Robotics and Automation (ICRA), 2021
Cited by: 51* (2021)
Boosted Fitted Q-Iteration
S Tosatto, M Pirotta, C D'Eramo, M Restelli
International Conference on Machine Learning (ICML), 3434-3443, 2017
Cited by: 48 (2017)
Curriculum reinforcement learning via constrained optimal transport
P Klink, H Yang, C D’Eramo, J Peters, J Pajarinen
International Conference on Machine Learning (ICML), 11341-11358, 2022
Cited by: 34 (2022)
Composable energy policies for reactive motion generation and reinforcement learning
J Urain, A Li, P Liu, C D’Eramo, J Peters
The International Journal of Robotics Research (IJRR), 2023
Cited by: 25 (2023)
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning
P Klink, H Abdulsamad, B Belousov, C D'Eramo, J Peters, J Pajarinen
Journal of Machine Learning Research (JMLR) 22, 1-52, 2021
Cited by: 24 (2021)
Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems
C D'Eramo, A Nuara, M Pirotta, M Restelli
AAAI Conference on Artificial Intelligence, 1840-1846, 2017
Cited by: 24 (2017)
Multi-channel interactive reinforcement learning for sequential tasks
D Koert, M Kircher, V Salikutluk, C D'Eramo, J Peters
Frontiers in Robotics and AI 7, 97, 2020
Cited by: 17 (2020)
Deep reinforcement learning with weighted Q-Learning
A Cini, C D'Eramo, J Peters, C Alippi
arXiv preprint arXiv:2003.09280, 2020
Cited by: 14 (2020)
Exploiting Action-Value Uncertainty to Drive Exploration in Reinforcement Learning
C D’Eramo, A Cini, M Restelli
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
Cited by: 14 (2019)
Convex regularization in Monte-Carlo tree search
TQ Dam, C D’Eramo, J Peters, J Pajarinen
International Conference on Machine Learning (ICML), 2365-2375, 2021
Cited by: 11 (2021)
Long-term visitation value for deep exploration in sparse-reward reinforcement learning
S Parisi, D Tateo, M Hensel, C D’Eramo, J Peters, J Pajarinen
Algorithms 15 (3), 81, 2022
Cited by: 9 (2022)
Boosted Curriculum Reinforcement Learning
P Klink, C D'Eramo, J Peters, J Pajarinen
International Conference on Learning Representations (ICLR), 2022
Cited by: 9 (2022)
Generalized Mean Estimation in Monte-Carlo Tree Search
T Dam, P Klink, C D'Eramo, J Peters, J Pajarinen
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Cited by: 9 (2020)
Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts
A Hendawy, J Peters, C D'Eramo
International Conference on Learning Representations (ICLR), 2024
Cited by: 7 (2024)
Exploration Driven by an Optimistic Bellman Equation
S Tosatto, C D’Eramo, J Pajarinen, M Restelli, J Peters
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
Cited by: 7 (2019)
Gaussian approximation for bias reduction in Q-learning
C D'Eramo, A Cini, A Nuara, M Pirotta, C Alippi, J Peters, M Restelli
Journal of Machine Learning Research (JMLR) 22, 1-51, 2021
Cited by: 6 (2021)
Articles 1–20