Xun Qian
Researcher, Shanghai AI Lab
Verified email at pjlab.org.cn
Title
Cited by
Year
SGD: General analysis and improved rates
RM Gower, N Loizou, X Qian, A Sailanbayev, E Shulgin, P Richtárik
International Conference on Machine Learning (ICML 2019), 5200-5209, 2019
Cited by 477 · 2019
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Z Li, D Kovalev, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2020), 2020
Cited by 163 · 2020
FedNL: Making Newton-type methods applicable to federated learning
M Safaryan, R Islamov, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2022), 2021
Cited by 86 · 2021
Distributed second order methods with fast rates and compressed communication
R Islamov, X Qian, P Richtárik
International Conference on Machine Learning (ICML 2021), 4617-4628, 2021
Cited by 58 · 2021
Error compensated distributed SGD can be accelerated
X Qian, P Richtárik, T Zhang
Advances in Neural Information Processing Systems (NeurIPS 2021) 34, 2021
Cited by 51 · 2021
L-SVRG and L-Katyusha with arbitrary sampling
X Qian, Z Qu, P Richtárik
Journal of Machine Learning Research 22, 1-49, 2021
Cited by 36 · 2021
A model of distributionally robust two-stage stochastic convex programming with linear recourse
B Li, X Qian, J Sun, KL Teo, C Yu
Applied Mathematical Modelling 58, 86-97, 2018
Cited by 36 · 2018
SAGA with arbitrary sampling
X Qian, Z Qu, P Richtárik
International Conference on Machine Learning (ICML 2019), 5190-5199, 2019
Cited by 27 · 2019
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
X Qian, R Islamov, M Safaryan, P Richtárik
International Conference on Artificial Intelligence and Statistics (AISTATS'22), 2022
Cited by 23 · 2022
MISO is making a comeback with better proofs and rates
X Qian, A Sailanbayev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1906.01474, 2019
Cited by 17 · 2019
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
R Islamov, X Qian, S Hanzely, M Safaryan, P Richtárik
arXiv preprint arXiv:2206.03588, 2022
Cited by 11 · 2022
Error compensated loopless SVRG, Quartz, and SDCA for distributed optimization
X Qian, H Dong, P Richtárik, T Zhang
arXiv preprint arXiv:2109.10049, 2021
Cited by 5 · 2021
Error compensated loopless SVRG for distributed optimization
X Qian, H Dong, P Richtárik, T Zhang
OPT2020: 12th Annual Workshop on Optimization for Machine Learning (NeurIPS …, 2020
Cited by 4 · 2020
The convergent generalized central paths for linearly constrained convex programming
X Qian, LZ Liao, J Sun, H Zhu
SIAM Journal on Optimization 28 (2), 1183-1204, 2018
Cited by 3 · 2018
Analysis of some interior point continuous trajectories for convex programming
X Qian, LZ Liao, J Sun
Optimization 66 (4), 589-608, 2017
Cited by 3 · 2017
A strategy of global convergence for the affine scaling algorithm for convex semidefinite programming
X Qian, LZ Liao, J Sun
Mathematical Programming 179 (1), 1-19, 2020
Cited by 2 · 2020
Error compensated proximal SGD and RDA
X Qian, H Dong, P Richtárik, T Zhang
12th Annual Workshop on Optimization for Machine Learning, 2020
Cited by 2 · 2020
Analysis of the primal affine scaling continuous trajectory for convex programming
X Qian, LZ Liao
Pacific Journal of Optimization 14 (2), 261-272, 2018
Cited by 2 · 2018
Generalized Affine Scaling Trajectory Analysis for Linearly Constrained Convex Programming
X Qian, LZ Liao
International Symposium on Neural Networks, 139-147, 2018
Cited by 1 · 2018
Two interior point continuous trajectory models for convex quadratic programming with bound constraints
H Yue, LZ Liao, X Qian
Pacific Journal of Optimization 14 (3), 527-550, 2018
Cited by 1 · 2018
Articles 1–20