| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Tackling the objective inconsistency problem in heterogeneous federated optimization | J Wang, Q Liu, H Liang, G Joshi, HV Poor | Advances in Neural Information Processing Systems, 2020 | 1395 | 2020 |
| Bellman Eluder dimension: New rich classes of RL problems, and sample-efficient algorithms | C Jin, Q Liu, S Miryoosefi | Advances in Neural Information Processing Systems, 2021 | 266 | 2021 |
| A sharp analysis of model-based reinforcement learning with self-play | Q Liu, T Yu, Y Bai, C Jin | International Conference on Machine Learning, 7001-7010, 2021 | 156 | 2021 |
| Linearized ADMM for nonconvex nonsmooth optimization with convergence analysis | Q Liu, X Shen, Y Gu | arXiv preprint arXiv:1705.02502, 2017 | 140 | 2017 |
| V-learning—a simple, efficient, decentralized algorithm for multiagent reinforcement learning | C Jin, Q Liu, Y Wang, T Yu | Mathematics of Operations Research 49 (4), 2295-2322, 2024 | 125* | 2024 |
| When is partially observable reinforcement learning not scary? | Q Liu, A Chung, C Szepesvári, C Jin | Conference on Learning Theory, 5175-5220, 2022 | 106 | 2022 |
| A novel framework for the analysis and design of heterogeneous federated learning | J Wang, Q Liu, H Liang, G Joshi, HV Poor | IEEE Transactions on Signal Processing 69, 5234-5249, 2021 | 90 | 2021 |
| Sample-efficient reinforcement learning of undercomplete POMDPs | C Jin, SM Kakade, A Krishnamurthy, Q Liu | Advances in Neural Information Processing Systems, 2020 | 82 | 2020 |
| The power of exploiter: Provable multi-agent RL in large state spaces | C Jin, Q Liu, T Yu | International Conference on Machine Learning, 10251-10279, 2022 | 71 | 2022 |
| Is RLHF more difficult than standard RL? A theoretical perspective | Y Wang, Q Liu, C Jin | Advances in Neural Information Processing Systems 36, 76006-76032, 2023 | 45* | 2023 |
| Optimistic MLE: A generic model-based algorithm for partially observable sequential decision making | Q Liu, P Netrapalli, C Szepesvári, C Jin | Proceedings of the 55th Annual ACM Symposium on Theory of Computing, 363-376, 2023 | 44 | 2023 |
| Policy optimization for Markov games: Unified framework and faster convergence | R Zhang, Q Liu, H Wang, C Xiong, N Li, Y Bai | Advances in Neural Information Processing Systems 35, 21886-21899, 2022 | 33 | 2022 |
| Breaking the curse of multiagency: Provably efficient decentralized multi-agent RL with function approximation | Y Wang, Q Liu, Y Bai, C Jin | Conference on Learning Theory, 2023 | 31 | 2023 |
| Sample-efficient reinforcement learning of partially observable Markov games | Q Liu, C Szepesvári, C Jin | Advances in Neural Information Processing Systems 35, 18296-18308, 2022 | 31 | 2022 |
| Learning Markov games with adversarial opponents: Efficient algorithms and fundamental limits | Q Liu, Y Wang, C Jin | International Conference on Machine Learning, 14036-14053, 2022 | 23 | 2022 |
| Provable rich observation reinforcement learning with combinatorial latent states | D Misra, Q Liu, C Jin, J Langford | International Conference on Learning Representations, 2021 | 9 | 2021 |
| Rigorous restricted isometry property of low-dimensional subspaces | G Li, Q Liu, Y Gu | Applied and Computational Harmonic Analysis 49 (2), 608-635, 2018 | 9 | 2018 |
| Optimistic natural policy gradient: A simple efficient policy optimization framework for online RL | Q Liu, G Weisz, A György, C Jin, C Szepesvári | Thirty-seventh Conference on Neural Information Processing Systems, 2023 | 8 | 2023 |
| A deep reinforcement learning approach for finding non-exploitable strategies in two-player Atari games | Z Ding, D Su, Q Liu, C Jin | arXiv preprint arXiv:2207.08894, 2022 | 3 | 2022 |
| Context-lumpable stochastic bandits | CW Lee, Q Liu, Y Abbasi-Yadkori, C Jin, T Lattimore, C Szepesvári | Thirty-seventh Conference on Neural Information Processing Systems, 2023 | 2 | 2023 |