Shiwei Liu
University of Oxford & Eindhoven University of Technology
Verified email at tue.nl - Homepage
Title
Cited by
Year
More convnets in the 2020s: Scaling up kernels beyond 51x51 using sparsity
S Liu, T Chen, X Chen, X Chen, Q Xiao, B Wu, M Pechenizkiy, D Mocanu, ...
ICLR2023, The International Conference on Learning Representations, 2023
103 · 2023
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
NeurIPS2021, Advances in Neural Information Processing Systems, 2021
101 · 2021
Do we actually need dense over-parameterization? in-time over-parameterization in sparse training
S Liu, L Yin, DC Mocanu, M Pechenizkiy
ICML2021, International Conference on Machine Learning, 2021
100 · 2021
Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware
S Liu, DC Mocanu, ARR Matavalam, Y Pei, M Pechenizkiy
Neural Computing and Applications 33, 2589-2604, 2021
86 · 2021
The unreasonable effectiveness of random pruning: Return of the most naive baseline for sparse training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
ICLR2022, The International Conference on Learning Representations, 2022
79 · 2022
Selfish sparse RNN training
S Liu, DC Mocanu, Y Pei, M Pechenizkiy
ICML2021, International Conference on Machine Learning, 2021
51* · 2021
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR2022, The International Conference on Learning Representations, 2021
43 · 2021
Topological Insights into Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
ECML2020, European Conference on Machine Learning, 2020
32 · 2020
Efficient and effective training of sparse recurrent neural networks
S Liu, I Ni’mah, V Menkovski, DC Mocanu, M Pechenizkiy
Neural Computing and Applications 33, 9625-9636, 2021
28 · 2021
Achieving personalized federated learning with sparse local models
T Huang, S Liu, L Shen, F He, W Lin, D Tao
arXiv preprint arXiv:2201.11380, 2022
23 · 2022
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Z Atashgahi, J Pieterse, S Liu, DC Mocanu, R Veldhuis, M Pechenizkiy
Machine Learning Journal (ECML-PKDD 2022 journal track), 2019
20* · 2019
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
S Liu, T Chen, Z Zhang, X Chen, T Huang, A Jaiswal, Z Wang
ICLR2023, The International Conference on Learning Representations, 2023
19 · 2023
Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers
T Chen, Z Zhang, A Jaiswal, S Liu, Z Wang
ICLR2023, The International Conference on Learning Representations, 2023
18 · 2023
Revisiting pruning at initialization through the lens of Ramanujan graph
DNM Hoang, S Liu, R Marculescu, Z Wang
ICLR2023, The International Conference on Learning Representations, 2023
17 · 2023
Ten lessons we have learned in the new "sparseland": A short handbook for sparse neural network researchers
S Liu, Z Wang
arXiv preprint arXiv:2302.02596, 2023
15 · 2023
Dynamic Sparse Network for Time Series Classification: Learning What to “See”
Q Xiao, B Wu, Y Zhang, S Liu, M Pechenizkiy, E Mocanu, DC Mocanu
NeurIPS2022, 36th Annual Conference on Neural Information Processing Systems, 2022
14 · 2022
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
A Jaiswal, S Liu, T Chen, Z Wang
NeurIPS2023, 37th Annual Conference on Neural Information Processing Systems, 2023
12 · 2023
Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
AK Jaiswal, S Liu, T Chen, Y Ding, Z Wang
ICML2023, International Conference on Machine Learning, 2023
10 · 2023
You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
T Huang, T Chen, M Fang, V Menkovski, J Zhao, L Yin, Y Pei, DC Mocanu, ...
LoG 2022, Learning on Graphs Conference (Best Paper Award), 2022
10 · 2022
On improving deep learning generalization with adaptive sparse connectivity
S Liu, DC Mocanu, M Pechenizkiy
ICML 2019 workshop of Understanding and Improving Generalization in Deep …, 2019
9 · 2019
Articles 1–20