Yichen Jiang
Verified email at cs.unc.edu - Homepage
Title
Cited by
Year
Self-assembling modular networks for interpretable multi-hop reasoning
Y Jiang, M Bansal
arXiv preprint arXiv:1909.05803, 2019
56 · 2019
Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA
Y Jiang, M Bansal
arXiv preprint arXiv:1906.07132, 2019
56 · 2019
Explore, propose, and assemble: An interpretable model for multi-hop reading comprehension
Y Jiang, N Joshi, YC Chen, M Bansal
arXiv preprint arXiv:1906.05210, 2019
35 · 2019
HoVer: A dataset for many-hop fact extraction and claim verification
Y Jiang, S Bordia, Z Zhong, C Dognin, M Singh, M Bansal
arXiv preprint arXiv:2011.03088, 2020
31 · 2020
Closed-book training to improve summarization encoder memory
Y Jiang, M Bansal
arXiv preprint arXiv:1809.04585, 2018
24 · 2018
Enriching transformers with structured tensor-product representations for abstractive summarization
Y Jiang, A Celikyilmaz, P Smolensky, P Soulos, S Rao, H Palangi, ...
arXiv preprint arXiv:2106.01317, 2021
5 · 2021
Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks
Y Jiang, M Bansal
arXiv preprint arXiv:2109.15256, 2021
4 · 2021
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
P Soulos, S Rao, C Smith, E Rosen, A Celikyilmaz, RT McCoy, Y Jiang, ...
arXiv preprint arXiv:2208.06061, 2022
2022
Learning and Analyzing Generation Order for Undirected Sequence Models
Y Jiang, M Bansal
arXiv preprint arXiv:2112.09097, 2021
2021
Augmenting Neural Encoder-Decoder Model for Natural Language Generation Tasks
Y Jiang
2018
Supplementary Material: Closed-Book Training to Improve Summarization Encoder Memory
Y Jiang, M Bansal
Articles 1–11