Mert Gürbüzbalaban
Title | Cited by | Year
A tail-index analysis of stochastic gradient noise in deep neural networks
U Simsekli, L Sagun, M Gurbuzbalaban
International Conference on Machine Learning, 5827-5837, 2019
249 | 2019
Why random reshuffling beats stochastic gradient descent
M Gürbüzbalaban, A Ozdaglar, PA Parrilo
Mathematical Programming 186, 49-84, 2021
204 | 2021
On the convergence rate of incremental aggregated gradient algorithms
M Gurbuzbalaban, A Ozdaglar, PA Parrilo
SIAM Journal on Optimization 27 (2), 1035-1048, 2017
169 | 2017
The heavy-tail phenomenon in SGD
M Gurbuzbalaban, U Simsekli, L Zhu
International Conference on Machine Learning, 3964-3975, 2021
120 | 2021
Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration
X Gao, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1809.04618, 2018
91* | 2018
Convergence rate of incremental gradient and incremental Newton methods
M Gurbuzbalaban, A Ozdaglar, PA Parrilo
SIAM Journal on Optimization 29 (4), 2542-2565, 2019
68 | 2019
Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets
N Guglielmi, M Gürbüzbalaban, ML Overton
SIAM Journal on Matrix Analysis and Applications 34 (2), 709-737, 2013
62 | 2013
Robust accelerated gradient methods for smooth strongly convex functions
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
SIAM Journal on Optimization 30 (1), 717-751, 2020
61 | 2020
Global convergence rate of proximal incremental aggregated gradient methods
ND Vanli, M Gurbuzbalaban, A Ozdaglar
SIAM Journal on Optimization 28 (2), 1282-1300, 2018
60 | 2018
A universally optimal multistage accelerated stochastic gradient method
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
Advances in Neural Information Processing Systems 32, 2019
59 | 2019
A globally convergent incremental Newton method
M Gürbüzbalaban, A Ozdaglar, P Parrilo
Mathematical Programming 151 (1), 283-313, 2015
54 | 2015
Fractional underdamped Langevin dynamics: Retargeting SGD with momentum under heavy-tailed gradient noise
U Simsekli, L Zhu, YW Teh, M Gurbuzbalaban
International Conference on Machine Learning, 8970-8980, 2020
53 | 2020
First exit time analysis of stochastic gradient descent under heavy-tailed gradient noise
TH Nguyen, U Simsekli, M Gurbuzbalaban, G Richard
Advances in Neural Information Processing Systems 32, 2019
52 | 2019
Accelerated linear convergence of stochastic momentum methods in Wasserstein distances
B Can, M Gurbuzbalaban, L Zhu
International Conference on Machine Learning, 891-901, 2019
46 | 2019
Surpassing gradient descent provably: A cyclic incremental method with linear convergence rate
A Mokhtari, M Gurbuzbalaban, A Ribeiro
SIAM Journal on Optimization 28 (2), 1420-1447, 2018
43 | 2018
Convergence rates of stochastic gradient descent under infinite noise variance
H Wang, M Gurbuzbalaban, L Zhu, U Simsekli, MA Erdogdu
Advances in Neural Information Processing Systems 34, 18866-18877, 2021
40 | 2021
When cyclic coordinate descent outperforms randomized coordinate descent
M Gurbuzbalaban, A Ozdaglar, PA Parrilo, N Vanli
Advances in Neural Information Processing Systems 30, 2017
39 | 2017
On Nesterov’s nonsmooth Chebyshev–Rosenbrock functions
M Gürbüzbalaban, ML Overton
Nonlinear Analysis: Theory, Methods & Applications 75 (3), 1282-1289, 2012
37 | 2012
A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
M Gürbüzbalaban, A Ruszczyński, L Zhu
Journal of Optimization Theory and Applications 194 (3), 1014-1041, 2022
34* | 2022
Robust distributed accelerated stochastic gradient methods for multi-agent networks
A Fallah, M Gürbüzbalaban, A Ozdaglar, U Şimşekli, L Zhu
Journal of Machine Learning Research 23 (220), 1-96, 2022
30 | 2022