Tatsunori Hashimoto
Other names: Tatsu Hashimoto, Tatsunori B. Hashimoto
Assistant Professor, Stanford
Verified email at stanford.edu - Homepage
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 2579 · 2021
Stanford alpaca: An instruction-following llama model
R Taori, I Gulrajani, T Zhang, Y Dubois, X Li, C Guestrin, P Liang, ...
Cited by 1457* · 2023
Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization
S Sagawa, PW Koh, TB Hashimoto, P Liang
arXiv preprint arXiv:1911.08731, 2019
Cited by 1361 · 2019
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
arXiv preprint arXiv:2206.07682, 2022
Cited by 1318 · 2022
Fairness without demographics in repeated loss minimization
T Hashimoto, M Srivastava, H Namkoong, P Liang
International Conference on Machine Learning, 1929-1938, 2018
Cited by 572 · 2018
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 567 · 2022
Discovery of directional and nondirectional pioneer transcription factors by modeling DNase profile magnitude and shape
RI Sherwood, T Hashimoto, CW O'Donnell, S Lewis, AA Barkal, ...
Nature biotechnology 32 (2), 171-178, 2014
Cited by 487 · 2014
Diffusion-lm improves controllable text generation
X Li, J Thickstun, I Gulrajani, PS Liang, TB Hashimoto
Advances in Neural Information Processing Systems 35, 4328-4343, 2022
Cited by 397 · 2022
Generating sentences by editing prototypes
K Guu, TB Hashimoto, Y Oren, P Liang
Transactions of the Association for Computational Linguistics 6, 437-450, 2018
Cited by 351 · 2018
Large language models can be strong differentially private learners
X Li, F Tramer, P Liang, T Hashimoto
arXiv preprint arXiv:2110.05679, 2021
Cited by 227 · 2021
Unifying human and statistical evaluation for natural language generation
TB Hashimoto, H Zhang, P Liang
arXiv preprint arXiv:1904.02792, 2019
Cited by 221 · 2019
Alpacaeval: An automatic evaluator of instruction-following models
X Li, T Zhang, Y Dubois, R Taori, I Gulrajani, C Guestrin, P Liang, ...
Cited by 169 · 2023
A retrieve-and-edit framework for predicting structured outputs
TB Hashimoto, K Guu, Y Oren, PS Liang
Advances in Neural Information Processing Systems 31, 2018
Cited by 162 · 2018
Alpacafarm: A simulation framework for methods that learn from human feedback
Y Dubois, CX Li, R Taori, T Zhang, I Gulrajani, J Ba, C Guestrin, PS Liang, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 156 · 2024
Distributionally robust language modeling
Y Oren, S Sagawa, TB Hashimoto, P Liang
arXiv preprint arXiv:1909.02060, 2019
Cited by 156 · 2019
Benchmarking large language models for news summarization
T Zhang, F Ladhak, E Durmus, P Liang, K McKeown, TB Hashimoto
Transactions of the Association for Computational Linguistics 12, 39-57, 2024
Cited by 150 · 2024
The gem benchmark: Natural language generation, its evaluation and metrics
S Gehrmann, T Adewumi, K Aggarwal, PS Ammanamanchi, ...
arXiv preprint arXiv:2102.01672, 2021
Cited by 130 · 2021
Whose opinions do language models reflect?
S Santurkar, E Durmus, F Ladhak, C Lee, P Liang, T Hashimoto
International Conference on Machine Learning, 29971-30004, 2023
Cited by 129 · 2023
Jury learning: Integrating dissenting voices into machine learning models
ML Gordon, MS Lam, JS Park, K Patel, J Hancock, T Hashimoto, ...
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022
Cited by 126 · 2022
The disagreement deconvolution: Bringing machine learning performance metrics in line with reality
ML Gordon, K Zhou, K Patel, T Hashimoto, MS Bernstein
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems …, 2021
Cited by 117 · 2021
Articles 1–20