Nan Jiang
Assistant Professor of Computer Science, UIUC
Verified email at illinois.edu - Homepage
Title · Cited by · Year
Doubly Robust Off-policy Value Evaluation for Reinforcement Learning
N Jiang, L Li
Proceedings of the 33rd International Conference on Machine Learning (ICML-16), 2015
614 · 2015
Contextual Decision Processes with Low Bellman Rank are PAC-Learnable
N Jiang, A Krishnamurthy, A Agarwal, J Langford, RE Schapire
Proceedings of the 34th International Conference on Machine Learning (ICML-17), 2016
342 · 2016
Information-Theoretic Considerations in Batch Reinforcement Learning
J Chen, N Jiang
Proceedings of the 36th International Conference on Machine Learning (ICML …, 2019
250 · 2019
Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches
W Sun, N Jiang, A Krishnamurthy, A Agarwal, J Langford
Conference on Learning Theory, 2019
179 · 2019
Provably efficient RL with Rich Observations via Latent State Decoding
SS Du, A Krishnamurthy, N Jiang, A Agarwal, M Dudík, J Langford
Proceedings of the 36th International Conference on Machine Learning (ICML …, 2019
177 · 2019
Hierarchical Imitation and Reinforcement Learning
HM Le, N Jiang, A Agarwal, M Dudík, Y Yue, H Daumé III
Proceedings of the 35th International Conference on Machine Learning (ICML-18), 2018
164 · 2018
Minimax Weight and Q-Function Learning for Off-Policy Evaluation
M Uehara, J Huang, N Jiang
arXiv preprint arXiv:1910.12809, 2019
140 · 2019
Reinforcement Learning: Theory and Algorithms
A Agarwal, N Jiang, SM Kakade
131 · 2019
The Dependence of Effective Planning Horizon on Model Accuracy
N Jiang, A Kulesza, S Singh, R Lewis
Proceedings of the 2015 International Conference on Autonomous Agents and …, 2015
131 · 2015
Bellman-consistent pessimism for offline reinforcement learning
T Xie, CA Cheng, N Jiang, P Mineiro, A Agarwal
Advances in neural information processing systems 34, 6683-6694, 2021
113 · 2021
Sample complexity of reinforcement learning using linearly combined model ensembles
A Modi, N Jiang, A Tewari, S Singh
International Conference on Artificial Intelligence and Statistics, 2010-2020, 2020
103 · 2020
Empirical study of off-policy policy evaluation for reinforcement learning
C Voloshin, HM Le, N Jiang, Y Yue
arXiv preprint arXiv:1911.06854, 2019
94 · 2019
On Oracle-Efficient PAC Reinforcement Learning with Rich Observations
C Dann, N Jiang, A Krishnamurthy, A Agarwal, J Langford, RE Schapire
Advances in Neural Information Processing Systems, 2018, 2018
92 · 2018
Policy finetuning: Bridging sample-efficient offline and online reinforcement learning
T Xie, N Jiang, H Wang, C Xiong, Y Bai
Advances in neural information processing systems 34, 27395-27407, 2021
74 · 2021
Abstraction Selection in Model-based Reinforcement Learning
N Jiang, A Kulesza, S Singh
Proceedings of the 32nd International Conference on Machine Learning (ICML …, 2015
70 · 2015
Batch value-function approximation with only realizability
T Xie, N Jiang
International Conference on Machine Learning, 11404-11413, 2021
69 · 2021
Provably efficient q-learning with low switching cost
Y Bai, T Xie, N Jiang, YX Wang
Advances in Neural Information Processing Systems, 8004-8013, 2019
69 · 2019
Repeated Inverse Reinforcement Learning
K Amin, N Jiang, S Singh
Advances in Neural Information Processing Systems, 2017, 2017
62 · 2017
Open Problem: The Dependence of Sample Complexity Lower Bounds on Planning Horizon
N Jiang, A Agarwal
Conference On Learning Theory, 3395-3398, 2018
60 · 2018
Q* approximation schemes for batch reinforcement learning: A theoretical comparison
T Xie, N Jiang
Conference on Uncertainty in Artificial Intelligence, 550-559, 2020
54 · 2020
Articles 1–20