Carlo D'Eramo
Postdoctoral researcher @ Technische Universität Darmstadt
Verified email at robot-learning.de - Homepage
Title
Cited by
Year
Estimating the Maximum Expected Value through Gaussian Approximation
C D’Eramo, A Nuara, M Restelli
Proceedings of The 33rd International Conference on Machine Learning, 1032-1040, 2016
Cited by 31 · 2016
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
International Conference on Learning Representations, 2020
Cited by 30 · 2020
Boosted Fitted Q-Iteration
S Tosatto, M Pirotta, C D'Eramo, M Restelli
Proceedings of The 34th International Conference on Machine Learning, 3434-3443, 2017
Cited by 30 · 2017
Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems
C D'Eramo, A Nuara, M Pirotta, M Restelli
AAAI Conference on Artificial Intelligence, 1840-1846, 2017
Cited by 16 · 2017
MushroomRL: Simplifying Reinforcement Learning Research
C D'Eramo, D Tateo, A Bonarini, M Restelli, J Peters
arXiv preprint arXiv:2001.01102, 2020
Cited by 15 · 2020
Self-Paced Deep Reinforcement Learning
P Klink, C D'Eramo, J Peters, J Pajarinen
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 5 · 2020
Exploiting Action-Value Uncertainty to Drive Exploration in Reinforcement Learning
C D’Eramo, A Cini, M Restelli
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
Cited by 4 · 2019
Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning
AS Morgan, D Nandha, G Chalvatzaki, C D'Eramo, AM Dollar, J Peters
arXiv preprint arXiv:2103.13842, 2021
Cited by 3 · 2021
Long-term visitation value for deep exploration in sparse reward reinforcement learning
S Parisi, D Tateo, M Hensel, C D'Eramo, J Peters, J Pajarinen
arXiv preprint arXiv:2001.00119, 2020
Cited by 3 · 2020
Generalized Mean Estimation in Monte-Carlo Tree Search
T Dam, P Klink, C D'Eramo, J Peters, J Pajarinen
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Cited by 2 · 2020
Composable Energy Policies for Reactive Motion Generation and Reinforcement Learning
J Urain, A Li, P Liu, C D'Eramo, J Peters
arXiv preprint arXiv:2105.04962, 2021
Cited by 1 · 2021
Multi-Channel Interactive Reinforcement Learning for Sequential Tasks
D Koert, M Kircher, V Salikutluk, C D'Eramo, J Peters
Frontiers in Robotics and AI 7, 97, 2020
Cited by 1 · 2020
Deep Reinforcement Learning with Weighted Q-Learning
A Cini, C D'Eramo, J Peters, C Alippi
arXiv preprint arXiv:2003.09280, 2020
Cited by 1 · 2020
Exploration Driven by an Optimistic Bellman Equation
S Tosatto, C D’Eramo, J Pajarinen, M Restelli, J Peters
2019 International Joint Conference on Neural Networks (IJCNN), 1-8, 2019
Cited by 1 · 2019
Exploiting structure and uncertainty of Bellman updates in Markov decision processes
D Tateo, C D'Eramo, A Nuara, M Restelli, A Bonarini
Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE …), 2017
Cited by 1 · 2017
Convex Regularization in Monte-Carlo Tree Search
TQ Dam, C D’Eramo, J Peters, J Pajarinen
International Conference on Machine Learning, 2365-2375, 2021
2021
A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning
P Klink, H Abdulsamad, B Belousov, C D'Eramo, J Peters, J Pajarinen
arXiv preprint arXiv:2102.13176, 2021
2021
On the exploitation of uncertainty to improve Bellman updates and exploration in Reinforcement Learning
C D'Eramo
Italy, 2019
2019
On the use of deep Boltzmann machines for road signs classification
C D'Eramo
University of Illinois at Chicago, 2015
2015