William Merrill
Title · Cited by · Year
CORD-19: The COVID-19 open research dataset
LL Wang, K Lo, Y Chandrasekhar, R Reas, J Yang, D Eide, K Funk, ...
Workshop on NLP for COVID-19, 2020
Cited by 931* · 2020
How language model hallucinations can snowball
M Zhang, O Press, W Merrill, A Liu, NA Smith
arXiv preprint arXiv:2305.13534, 2023
Cited by 123 · 2023
Competency problems: On finding and removing artifacts in language data
M Gardner, W Merrill, J Dodge, ME Peters, A Ross, S Singh, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 78 · 2021
ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension
S Subramanian, W Merrill, T Darrell, M Gardner, S Singh, A Rohrbach
Empirical Methods in Natural Language Processing, 2022
Cited by 70 · 2022
A formal hierarchy of RNN architectures
W Merrill, G Weiss, Y Goldberg, R Schwartz, NA Smith, E Yahav
Association for Computational Linguistics, 2020
Cited by 64 · 2020
Provable limitations of acquiring meaning from ungrounded form: What will future language models understand?
W Merrill, Y Goldberg, R Schwartz, NA Smith
Transactions of the Association for Computational Linguistics 9, 1047-1060, 2021
Cited by 62 · 2021
Saturated transformers are constant-depth threshold circuits
W Merrill, A Sabharwal, NA Smith
Transactions of the Association for Computational Linguistics 10, 843-856, 2022
Cited by 57 · 2022
Sequential neural networks as automata
W Merrill
Deep Learning and Formal Languages (ACL workshop), 2019
Cited by 56 · 2019
Context-free transductions with neural stacks
Y Hao, W Merrill, D Angluin, R Frank, N Amsel, A Benz, S Mendelsohn
BlackboxNLP, 2018
Cited by 34 · 2018
The Parallelism Tradeoff: Limitations of Log-Precision Transformers
W Merrill, A Sabharwal
arXiv preprint arXiv:2207.00729, 2022
Cited by 24* · 2022
Effects of parameter norm growth during transformer training: Inductive bias from gradient descent
W Merrill, V Ramanujan, Y Goldberg, R Schwartz, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 22 · 2021
A tale of two circuits: Grokking as competition of sparse and dense subnetworks
W Merrill, N Tsilivis, A Shukla
arXiv preprint arXiv:2303.11873, 2023
Cited by 17 · 2023
End-to-end graph-based TAG parsing with neural networks
J Kasai, R Frank, P Xu, W Merrill, O Rambow
NAACL, 2018
Cited by 14 · 2018
The Expressive Power of Transformers with Chain of Thought
W Merrill, A Sabharwal
arXiv preprint arXiv:2310.07923, 2023
Cited by 11 · 2023
Entailment Semantics Can Be Extracted from an Ideal Language Model
W Merrill, A Warstadt, T Linzen
CoNLL 2022, 2022
Cited by 11 · 2022
Formal language theory meets modern NLP
W Merrill
arXiv preprint arXiv:2102.10094, 2021
Cited by 11 · 2021
On the linguistic capacity of real-time counter automata
W Merrill
arXiv preprint arXiv:2004.06866, 2020
Cited by 11 · 2020
Transformers as recognizers of formal languages: A survey on expressivity
L Strobl, W Merrill, G Weiss, D Chiang, D Angluin
arXiv preprint arXiv:2311.00208, 2023
Cited by 8 · 2023
A Logic for Expressing Log-Precision Transformers
W Merrill, A Sabharwal
arXiv preprint arXiv:2210.02671, 2022
Cited by 8* · 2022
Finding hierarchical structure in neural stacks using unsupervised parsing
W Merrill, L Khazan, N Amsel, Y Hao, S Mendelsohn, R Frank
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting …, 2019
Cited by 7* · 2019
Articles 1–20