Phu Mon Htut
AWS AI Labs
Verified email at amazon.com
Title | Cited by | Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 724
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 270
Generalized inner loop meta-learning
E Grefenstette, B Amos, D Yarats, PM Htut, A Molchanov, F Meier, D Kiela, ...
arXiv preprint arXiv:1910.01727, 2019
Cited by 156
Do attention heads in BERT track syntactic dependencies?
PM Htut, J Phang, S Bordia, SR Bowman
arXiv preprint arXiv:1911.12246, 2019
Cited by 130
BBQ: A hand-built bias benchmark for question answering
A Parrish, A Chen, N Nangia, V Padmakumar, J Phang, J Thompson, ...
arXiv preprint arXiv:2110.08193, 2021
Cited by 125
Investigating BERT's knowledge of language: five analysis methods with NPIs
A Warstadt, Y Cao, I Grosu, W Peng, H Blix, Y Nie, A Alsop, S Bordia, ...
arXiv preprint arXiv:1909.02597, 2019
Cited by 125
English intermediate-task training improves zero-shot cross-lingual transfer too
J Phang, I Calixto, PM Htut, Y Pruksachatkun, H Liu, C Vania, K Kann, ...
arXiv preprint arXiv:2005.13013, 2020
Cited by 68
Grammar induction with neural language models: An unusual replication
PM Htut, K Cho, SR Bowman
arXiv preprint arXiv:1808.10000, 2018
Cited by 56
Training a ranking function for open-domain question answering
PM Htut, SR Bowman, K Cho
arXiv preprint arXiv:1804.04264, 2018
Cited by 55
jiant 1.2: A software toolkit for research on general-purpose text understanding models
A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ...
Note: http://jiant.info/, 2019
Cited by 51
jiant: A software toolkit for research on general-purpose text understanding models
Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ...
arXiv preprint arXiv:2003.02249, 2020
Cited by 39
Findings of the IWSLT 2023 evaluation campaign
M Agarwal, S Agarwal, A Anastasopoulos, L Bentivogli, O Bojar, C Borg, ...
Association for Computational Linguistics, 2023
Cited by 25
Comparing test sets with item response theory
C Vania, PM Htut, W Huang, D Mungra, RY Pang, J Phang, H Liu, K Cho, ...
arXiv preprint arXiv:2106.00840, 2021
Cited by 23
(QA)²: Question Answering with Questionable Assumptions
N Kim, PM Htut, SR Bowman, J Petty
arXiv preprint arXiv:2212.10003, 2022
Cited by 16
The unbearable weight of generating artificial errors for grammatical error correction
PM Htut, J Tetreault
arXiv preprint arXiv:1907.08889, 2019
Cited by 13
RAMP: Retrieval and attribute-marking enhanced prompting for attribute-controlled translation
G Sarti, PM Htut, X Niu, B Hsu, A Currey, G Dinu, M Nadejde
arXiv preprint arXiv:2305.17131, 2023
Cited by 5
Clustering Examples in Multi-Dataset Benchmarks with Item Response Theory
P Rodriguez, PM Htut, JP Lalor, J Sedoc
Proceedings of the Third Workshop on Insights from Negative Results in NLP …, 2022
Cited by 5
Inducing constituency trees through neural machine translation
PM Htut, K Cho, SR Bowman
arXiv preprint arXiv:1909.10056, 2019
Cited by 5
(QA)²: Question Answering with Questionable Assumptions
SR Bowman, PM Htut, N Kim
2023
Analyzing and Comparing Natural Language Processing Models and Datasets
PM Htut
New York University, 2022
Articles 1–20