Ofir Nachum
Google Brain
Verified email at google.com
Title
Cited by
Year
Data-Efficient Hierarchical Reinforcement Learning
O Nachum, S Gu, H Lee, S Levine
Advances in Neural Information Processing Systems, 2018
Cited by 262 · 2018
Learning to remember rare events
Ł Kaiser, O Nachum, A Roy, S Bengio
International Conference on Learning Representations, 2017
Cited by 251 · 2017
Bridging the gap between value and policy based reinforcement learning
O Nachum, M Norouzi, K Xu, D Schuurmans
arXiv preprint arXiv:1702.08892, 2017
Cited by 222 · 2017
MorphNet: Fast & simple resource-constrained structure learning of deep networks
A Gordon, E Eban, O Nachum, B Chen, H Wu, TJ Yang, E Choi
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by 188 · 2018
A Lyapunov-based Approach to Safe Reinforcement Learning
Y Chow, O Nachum, E Duenez-Guzman, M Ghavamzadeh
Advances in Neural Information Processing Systems, 2018
Cited by 140 · 2018
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
D Quillen, E Jang, O Nachum, C Finn, J Ibarz, S Levine
IEEE International Conference on Robotics and Automation, 2018
Cited by 105 · 2018
Trust-PCL: An off-policy trust region method for continuous control
O Nachum, M Norouzi, K Xu, D Schuurmans
International Conference on Learning Representations, 2018
Cited by 68 · 2018
Near-optimal representation learning for hierarchical reinforcement learning
O Nachum, S Gu, H Lee, S Levine
arXiv preprint arXiv:1810.01257, 2018
Cited by 67 · 2018
DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections
O Nachum, Y Chow, B Dai, L Li
arXiv preprint arXiv:1906.04733, 2019
Cited by 64 · 2019
DeepMDP: Learning continuous latent space models for representation learning
C Gelada, S Kumar, J Buckman, O Nachum, MG Bellemare
International Conference on Machine Learning, 2170-2179, 2019
Cited by 61 · 2019
Behavior regularized offline reinforcement learning
Y Wu, G Tucker, O Nachum
arXiv preprint arXiv:1911.11361, 2019
Cited by 54 · 2019
Lyapunov-based safe policy optimization for continuous control
Y Chow, O Nachum, A Faust, E Duenez-Guzman, M Ghavamzadeh
arXiv preprint arXiv:1901.10031, 2019
Cited by 47 · 2019
Identifying and correcting label bias in machine learning
H Jiang, O Nachum
International Conference on Artificial Intelligence and Statistics, 702-712, 2020
Cited by 44 · 2020
D4RL: Datasets for deep data-driven reinforcement learning
J Fu, A Kumar, O Nachum, G Tucker, S Levine
arXiv preprint arXiv:2004.07219, 2020
Cited by 42 · 2020
AlgaeDICE: Policy gradient from arbitrary experience
O Nachum, B Dai, I Kostrikov, Y Chow, L Li, D Schuurmans
arXiv preprint arXiv:1912.02074, 2019
Cited by 30 · 2019
Improving policy gradient by exploring under-appreciated rewards
O Nachum, M Norouzi, D Schuurmans
International Conference on Learning Representations, 2017
Cited by 28 · 2017
Why does hierarchy (sometimes) work so well in reinforcement learning?
O Nachum, H Tang, X Lu, S Gu, H Lee, S Levine
arXiv preprint arXiv:1909.10618, 2019
Cited by 22 · 2019
Path consistency learning in Tsallis entropy regularized MDPs
Y Chow, O Nachum, M Ghavamzadeh
International Conference on Machine Learning, 979-988, 2018
Cited by 21* · 2018
Multi-agent manipulation via locomotion using hierarchical sim2real
O Nachum, M Ahn, H Ponte, S Gu, V Kumar
arXiv preprint arXiv:1908.05224, 2019
Cited by 18 · 2019
The Laplacian in RL: Learning representations with efficient approximations
Y Wu, G Tucker, O Nachum
arXiv preprint arXiv:1810.04586, 2018
Cited by 16 · 2018
Articles 1–20