Hugo Touvron
Facebook AI Research
Verified email at fb.com
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 11349 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 11020 · 2023
Training data-efficient image transformers & distillation through attention
H Touvron, M Cord, M Douze, F Massa, A Sablayrolles, H Jégou
International Conference on Machine Learning, 10347-10357, 2021
Cited by 7343 · 2021
Emerging properties in self-supervised vision transformers
M Caron, H Touvron, I Misra, H Jégou, J Mairal, P Bojanowski, A Joulin
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 5800 · 2021
The Llama 3 Herd of Models
Meta AI
arXiv preprint arXiv:2407.21783, 2024
Cited by 1640* · 2024
Code Llama: Open foundation models for code
B Roziere, J Gehring, F Gloeckle, S Sootla, I Gat, XE Tan, Y Adi, J Liu, ...
arXiv preprint arXiv:2308.12950, 2023
Cited by 1493 · 2023
Going deeper with image transformers
H Touvron, M Cord, A Sablayrolles, G Synnaeve, H Jégou
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 1161 · 2021
ConViT: Improving vision transformers with soft convolutional inductive biases
S d'Ascoli, H Touvron, M Leavitt, A Morcos, G Biroli, L Sagun
International Conference on Machine Learning, 2021
Cited by 911 · 2021
LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
B Graham, A El-Nouby, H Touvron, P Stock, A Joulin, H Jégou, M Douze
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
Cited by 877* · 2021
ResMLP: Feedforward networks for image classification with data-efficient training
H Touvron, P Bojanowski, M Caron, M Cord, A El-Nouby, E Grave, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Cited by 796* · 2021
Fixing the train-test resolution discrepancy
H Touvron, A Vedaldi, M Douze, H Jégou
Advances in neural information processing systems 32, 2019
Cited by 664 · 2019
XCiT: Cross-Covariance Image Transformers
A El-Nouby, H Touvron, M Caron, P Bojanowski, M Douze, A Joulin, ...
Advances in Neural Information Processing Systems, 2021
Cited by 565* · 2021
ResNet strikes back: An improved training procedure in timm
R Wightman, H Touvron, H Jégou
NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future, 2021
Cited by 525 · 2021
DeiT III: Revenge of the ViT
H Touvron, M Cord, H Jégou
Proceedings of the European conference on computer vision (ECCV), 2022
Cited by 392 · 2022
Llama 2: Open foundation and fine-tuned chat models
H Touvron
arXiv preprint arXiv:2307.09288, 2023
Cited by 226* · 2023
LLaMA: Open and Efficient Foundation Language Models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix
arXiv [cs.CL], 2023
Cited by 182*
Are large-scale datasets necessary for self-supervised pre-training?
A El-Nouby, G Izacard, H Touvron, I Laptev, H Jégou, E Grave
arXiv preprint arXiv:2112.10740, 2021
Cited by 154 · 2021
Three things everyone should know about Vision Transformers
H Touvron, M Cord, A El-Nouby, J Verbeek, H Jégou
Proceedings of the European conference on computer vision (ECCV), 2022
Cited by 124 · 2022
Introducing LLaMA: A foundational, 65-billion-parameter large language model
Meta AI
Meta AI, 2023
Cited by 88 · 2023
Grafit: Learning fine-grained image representations with coarse labels
H Touvron, A Sablayrolles, M Douze, M Cord, H Jégou
Proceedings of the IEEE/CVF international conference on computer vision, 874-884, 2021
Cited by 81 · 2021
Articles 1–20