Mert Gürbüzbalaban
Title · Cited by · Year
On the convergence rate of incremental aggregated gradient algorithms
M Gurbuzbalaban, A Ozdaglar, PA Parrilo
SIAM Journal on Optimization 27 (2), 1035-1048, 2017
Cited by 109 · 2017
Why random reshuffling beats stochastic gradient descent
M Gürbüzbalaban, A Ozdaglar, PA Parrilo
Mathematical Programming, 1-36, 2019
Cited by 101 · 2019
A tail-index analysis of stochastic gradient noise in deep neural networks
U Simsekli, L Sagun, M Gurbuzbalaban
International Conference on Machine Learning, 5827-5837, 2019
Cited by 65 · 2019
Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets
N Guglielmi, M Gürbüzbalaban, ML Overton
SIAM Journal on Matrix Analysis and Applications 34 (2), 709-737, 2013
Cited by 48 · 2013
A globally convergent incremental Newton method
M Gürbüzbalaban, A Ozdaglar, P Parrilo
Mathematical Programming 151 (1), 283-313, 2015
Cited by 41 · 2015
Global convergence rate of proximal incremental aggregated gradient methods
ND Vanli, M Gurbuzbalaban, A Ozdaglar
SIAM Journal on Optimization 28 (2), 1282-1300, 2018
Cited by 32 · 2018
Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration
X Gao, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1809.04618, 2018
Cited by 31 · 2018
On Nesterov’s nonsmooth Chebyshev–Rosenbrock functions
M Gürbüzbalaban, ML Overton
Nonlinear Analysis: Theory, Methods & Applications 75 (3), 1282-1289, 2012
Cited by 30 · 2012
Surpassing gradient descent provably: A cyclic incremental method with linear convergence rate
A Mokhtari, M Gurbuzbalaban, A Ribeiro
SIAM Journal on Optimization 28 (2), 1420-1447, 2018
Cited by 25 · 2018
A universally optimal multistage accelerated stochastic gradient method
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
arXiv preprint arXiv:1901.08022, 2019
Cited by 23 · 2019
Robust accelerated gradient methods for smooth strongly convex functions
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
SIAM Journal on Optimization 30 (1), 717-751, 2020
Cited by 22 · 2020
Explicit solutions for root optimization of a polynomial family with one affine constraint
VD Blondel, M Gurbuzbalaban, A Megretski, ML Overton
IEEE Transactions on Automatic Control 57 (12), 3078-3089, 2012
Cited by 19 · 2012
Accelerated linear convergence of stochastic momentum methods in Wasserstein distances
B Can, M Gurbuzbalaban, L Zhu
International Conference on Machine Learning, 891-901, 2019
Cited by 18 · 2019
When cyclic coordinate descent outperforms randomized coordinate descent
M Gurbuzbalaban, AE Ozdaglar, PA Parrilo, ND Vanli
Advances in Neural Information Processing Systems (NIPS), 2017
Cited by 17 · 2017
Convergence rate of incremental gradient and Newton methods
M Gürbüzbalaban, A Ozdaglar, P Parrilo
arXiv preprint arXiv:1510.08562, 2015
Cited by 17 · 2015
Breaking reversibility accelerates Langevin dynamics for global non-convex optimization
X Gao, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1812.07725, 2018
Cited by 15 · 2018
A stronger convergence result on the proximal incremental aggregated gradient method
ND Vanli, M Gurbuzbalaban, A Ozdaglar
arXiv preprint arXiv:1611.08022, 2016
Cited by 15 · 2016
First exit time analysis of stochastic gradient descent under heavy-tailed gradient noise
TH Nguyen, U Şimşekli, M Gürbüzbalaban, G Richard
arXiv preprint arXiv:1906.09069, 2019
Cited by 13 · 2019
On the heavy-tailed theory of stochastic gradient descent for deep neural networks
U Şimşekli, M Gürbüzbalaban, TH Nguyen, G Richard, L Sagun
arXiv preprint arXiv:1912.00018, 2019
Cited by 10 · 2019
Fractional underdamped Langevin dynamics: Retargeting SGD with momentum under heavy-tailed gradient noise
U Simsekli, L Zhu, YW Teh, M Gurbuzbalaban
International Conference on Machine Learning, 8970-8980, 2020
92020
Showing articles 1–20