Qi Meng
Microsoft Research Asia
Verified email at microsoft.com - Homepage
Title
Cited by
Year
LightGBM: A highly efficient gradient boosting decision tree
G Ke, Q Meng, T Finley, T Wang, W Chen, W Ma, Q Ye, TY Liu
Advances in Neural Information Processing Systems, 3146-3154, 2017
Cited by 1762 · 2017
Asynchronous stochastic gradient descent with delay compensation
S Zheng, Q Meng, T Wang, W Chen, N Yu, ZM Ma, TY Liu
International Conference on Machine Learning, 4120-4129, 2017
Cited by 92 · 2017
A communication-efficient parallel algorithm for decision tree
Q Meng, G Ke, T Wang, W Chen, Q Ye, ZM Ma, TY Liu
Advances in Neural Information Processing Systems, 1279-1287, 2016
Cited by 55 · 2016
Convergence analysis of distributed stochastic gradient descent with shuffling
Q Meng, W Chen, Y Wang, ZM Ma, TY Liu
Neurocomputing 337, 46-57, 2019
Cited by 26 · 2019
Asynchronous stochastic proximal optimization algorithms with variance reduction
Q Meng, W Chen, J Yu, T Wang, ZM Ma, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
Cited by 17 · 2017
Asynchronous Accelerated Stochastic Gradient Descent
Q Meng, W Chen, J Yu, T Wang, Z Ma, TY Liu
IJCAI, 1853-1859, 2016
Cited by 16 · 2016
G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space
Q Meng, S Zheng, H Zhang, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:1802.03713, 2018
Cited by 12 · 2018
Capacity control of ReLU neural networks by basis-path norm
S Zheng, Q Meng, H Zhang, W Chen, N Yu, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 33, 5925-5932, 2019
Cited by 11 · 2019
Reinforcement learning with dynamic Boltzmann softmax updates
L Pan, Q Cai, Q Meng, W Chen, L Huang, TY Liu
arXiv preprint arXiv:1903.05926, 2019
Cited by 5 · 2019
Generalization error bounds for optimization algorithms via stability
Q Meng, Y Wang, W Chen, T Wang, ZM Ma, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
Cited by 5 · 2017
Target transfer Q-learning and its convergence analysis
Y Wang, Y Liu, W Chen, ZM Ma, TY Liu
Neurocomputing, 2020
Cited by 4 · 2020
Positively scale-invariant flatness of ReLU neural networks
M Yi, Q Meng, W Chen, Z Ma, TY Liu
arXiv preprint arXiv:1903.02237, 2019
Cited by 4 · 2019
Differential equations for modeling asynchronous algorithms
L He, Q Meng, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:1805.02991, 2018
Cited by 4 · 2018
Interpreting Basis Path Set in Neural Networks
J Zhu, Q Meng, W Chen, Z Ma
arXiv preprint arXiv:1910.09402, 2019
Cited by 1 · 2019
Optimizing neural networks in the equivalent class space
Q Meng, W Chen, S Zheng, Q Ye, TY Liu
Cited by 1 · 2018
The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
B Wang, Q Meng, W Chen
arXiv preprint arXiv:2012.06244, 2020
2020
Constructing Basis Path Set by Eliminating Path Dependency
J Zhu, Q Meng, W Chen, Y Wang, Z Ma
arXiv preprint arXiv:2007.00657, 2020
2020
Dynamic of Stochastic Gradient Descent with State-Dependent Noise
Q Meng, S Gong, W Chen, ZM Ma, TY Liu
arXiv preprint arXiv:2006.13719, 2020
2020
Expressiveness in Deep Reinforcement Learning
X Luo, Q Meng, D He, W Chen, Y Wang, TY Liu
2018
Articles 1–19