Mostafa Dehghani
Research Scientist, Google Brain
Verified email at google.com
Title · Cited by · Year
An image is worth 16x16 words: Transformers for image recognition at scale
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ...
arXiv preprint arXiv:2010.11929, 2020
Cited by 13821 · 2020
ViViT: A video vision transformer
A Arnab*, M Dehghani*, G Heigold, C Sun, M Lučić, C Schmid
arXiv preprint arXiv:2103.15691, 2021
Cited by 799 · 2021
Universal Transformers
M Dehghani, S Gouws, O Vinyals, J Uszkoreit, Ł Kaiser
International Conference on Learning Representations (ICLR), 2019
Cited by 656 · 2019
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1-28, 2022
Cited by 568 · 2022
Neural Ranking Models with Weak Supervision
M Dehghani, H Zamani, A Severyn, J Kamps, WB Croft
The 40th International ACM SIGIR Conference on Research and Development in …, 2017
Cited by 350 · 2017
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
Cited by 253 · 2020
MetNet: A neural weather model for precipitation forecasting
CK Sønderby, L Espeholt, J Heek, M Dehghani, A Oliver, T Salimans, ...
arXiv preprint arXiv:2003.12140, 2020
Cited by 181* · 2020
From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing
H Zamani, M Dehghani, WB Croft, E Learned-Miller, J Kamps
Proceedings of the 27th ACM international conference on information and …, 2018
Cited by 137 · 2018
Learning to Attend, Copy, and Generate for Session-Based Query Suggestion
M Dehghani, S Rothe, E Alfonseca, P Fleury
International Conference on Information and Knowledge Management (CIKM'17), 2017
Cited by 100 · 2017
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks
RK Mahabadi, S Ruder, M Dehghani, J Henderson
arXiv preprint arXiv:2106.04489, 2021
Cited by 89 · 2021
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 62 · 2022
Fidelity-Weighted Learning
M Dehghani, A Mehrjou, S Gouws, J Kamps, B Schölkopf
International Conference on Learning Representations (ICLR 2018), 2018
Cited by 62 · 2018
Exploring the limits of large scale pre-training
S Abnar, M Dehghani, B Neyshabur, H Sedghi
arXiv preprint arXiv:2110.02095, 2021
Cited by 55 · 2021
Words are Malleable: Computing Semantic Shifts in Political and Media Discourse
H Azarbonyad, M Dehghani, K Beelen, A Arkut, M Marx, J Kamps
International Conference on Information and Knowledge Management (CIKM'17), 2017
Cited by 55 · 2017
TokenLearner: What can 8 learned tokens do for images and videos?
MS Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
arXiv preprint arXiv:2106.11297, 2021
Cited by 54 · 2021
Are pre-trained convolutions better than pre-trained transformers?
Y Tay, M Dehghani, J Gupta, D Bahri, V Aribandi, Z Qin, D Metzler
arXiv preprint arXiv:2105.03322, 2021
Cited by 53 · 2021
Learning to Learn from Weak Supervision by Full Supervision
M Dehghani, A Severyn, S Rothe, J Kamps
NIPS 2017 Workshop on Meta-Learning (MetaLearn 2017), 2017
Cited by 53 · 2017
TokenLearner: Adaptive space-time tokenization for videos
M Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
Advances in Neural Information Processing Systems 34, 12786-12797, 2021
Cited by 52 · 2021
Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision
M Dehghani, A Severyn, S Rothe, J Kamps
arXiv preprint arXiv:1711.00313, 2017
Cited by 44 · 2017
Time-aware authorship attribution for short text streams
H Azarbonyad, M Dehghani, M Marx, J Kamps
Proceedings of the 38th International ACM SIGIR Conference on Research and …, 2015
Cited by 43 · 2015
Articles 1–20