LightGBM: A highly efficient gradient boosting decision tree
G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, TY Liu
Advances in Neural Information Processing Systems, 3146-3154, 2017. Cited by 1762.

Asynchronous stochastic gradient descent with delay compensation
S Zheng, Q Meng, T Wang, W Chen, N Yu, ZM Ma, TY Liu
International Conference on Machine Learning, 4120-4129, 2017. Cited by 92.

A communication-efficient parallel algorithm for decision tree
Q Meng, G Ke, T Wang, W Chen, Q Ye, ZM Ma, TY Liu
Advances in Neural Information Processing Systems, 1279-1287, 2016. Cited by 55.

Convergence analysis of distributed stochastic gradient descent with shuffling
Q Meng, W Chen, Y Wang, ZM Ma, TY Liu
Neurocomputing 337, 46-57, 2019. Cited by 26.

Asynchronous stochastic proximal optimization algorithms with variance reduction
Q Meng, W Chen, J Yu, T Wang, ZM Ma, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017. Cited by 17.

Asynchronous Accelerated Stochastic Gradient Descent
Q Meng, W Chen, J Yu, T Wang, Z Ma, TY Liu
IJCAI, 1853-1859, 2016. Cited by 16.

G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space
Q Meng, S Zheng, H Zhang, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:1802.03713, 2018. Cited by 12.

Capacity control of ReLU neural networks by basis-path norm
S Zheng, Q Meng, H Zhang, W Chen, N Yu, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 33, 5925-5932, 2019. Cited by 11.

Reinforcement learning with dynamic Boltzmann softmax updates
L Pan, Q Cai, Q Meng, W Chen, L Huang, TY Liu
arXiv preprint arXiv:1903.05926, 2019. Cited by 5.

Generalization error bounds for optimization algorithms via stability
Q Meng, Y Wang, W Chen, T Wang, ZM Ma, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017. Cited by 5.

Target transfer Q-learning and its convergence analysis
Y Wang, Y Liu, W Chen, ZM Ma, TY Liu
Neurocomputing, 2020. Cited by 4.

Positively scale-invariant flatness of ReLU neural networks
M Yi, Q Meng, W Chen, Z Ma, TY Liu
arXiv preprint arXiv:1903.02237, 2019. Cited by 4.

Differential equations for modeling asynchronous algorithms
L He, Q Meng, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:1805.02991, 2018. Cited by 4.

Interpreting Basis Path Set in Neural Networks
J Zhu, Q Meng, W Chen, Z Ma
arXiv preprint arXiv:1910.09402, 2019. Cited by 1.

Optimizing neural networks in the equivalent class space
Q Meng, W Chen, S Zheng, Q Ye, TY Liu
2018. Cited by 1.

The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
B Wang, Q Meng, W Chen
arXiv preprint arXiv:2012.06244, 2020.

Constructing Basis Path Set by Eliminating Path Dependency
J Zhu, Q Meng, W Chen, Y Wang, Z Ma
arXiv preprint arXiv:2007.00657, 2020.

Dynamic of Stochastic Gradient Descent with State-Dependent Noise
Q Meng, S Gong, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:2006.13719, 2020.

Expressiveness in Deep Reinforcement Learning
X Luo, Q Meng, D He, W Chen, Y Wang, TY Liu
2018.