Rinu Boney
Doctoral Candidate, Aalto University
Verified email at aalto.fi
Title / Cited by / Year
Recurrent ladder networks
I Prémont-Schwarz, A Ilin, T Hao, A Rasmus, R Boney, H Valpola
Advances in neural information processing systems 30, 2017
Cited by 31, 2017
Semi-supervised few-shot learning with prototypical networks
R Boney, A Ilin
CoRR abs/1711.10856, 2017
Cited by 26, 2017
Regularizing model-based planning with energy-based models
R Boney, J Kannala, A Ilin
Conference on Robot Learning, 182-191, 2020
Cited by 23, 2020
Semi-supervised few-shot learning with MAML
R Boney, A Ilin
Cited by 16, 2018
Recurrent ladder networks
A Ilin, I Prémont-Schwarz, TH Hao, A Rasmus, R Boney, H Valpola
Cited by 11, 2017
Regularizing trajectory optimization with denoising autoencoders
R Boney, N Di Palo, M Berglund, A Ilin, J Kannala, A Rasmus, H Valpola
Advances in Neural Information Processing Systems 32, 2019
Cited by 10, 2019
Semi-supervised and active few-shot learning with prototypical networks
R Boney, A Ilin
arXiv preprint arXiv:1711.10856, 2017
Cited by 9, 2017
Learning to drive small scale cars from scratch
A Viitala, R Boney, J Kannala
Cited by 5*, 2020
RealAnt: An Open-Source Low-Cost Quadruped for Research in Real-World Reinforcement Learning
R Boney, J Sainio, M Kaivola, A Solin, J Kannala
arXiv preprint arXiv:2011.03085, 2020
Cited by 2, 2020
End-to-End Learning of Keypoint Representations for Continuous Control from Images
R Boney, A Ilin, J Kannala
arXiv preprint arXiv:2106.07995, 2021
Cited by 1, 2021
Learning to play imperfect-information games by imitating an oracle planner
R Boney, A Ilin, J Kannala, J Seppänen
IEEE Transactions on Games 14 (2), 262-272, 2021
Cited by 1, 2021
Active one-shot learning with Prototypical Networks.
R Boney, A Ilin
ESANN, 2019
Cited by 1, 2019
Sample-Efficient Methods for Real-World Deep Reinforcement Learning
R Boney
Aalto University, 2022
2022
Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning
Y Zhao, R Boney, A Ilin, J Kannala, J Pajarinen
2021
Fast Adaptation of Neural Networks
R Boney
2018