Di He
Microsoft Research
Verified email at microsoft.com
Dual learning for machine translation
Y Xia, D He, T Qin, L Wang, N Yu, TY Liu, WY Ma
arXiv preprint arXiv:1611.00179, 2016
A theoretical analysis of NDCG ranking measures
Y Wang, L Wang, Y Li, D He, W Chen, TY Liu
Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013) 8, 6, 2013
Multilingual neural machine translation with knowledge distillation
X Tan, Y Ren, D He, T Qin, Z Zhao, TY Liu
arXiv preprint arXiv:1902.10461, 2019
FRAGE: Frequency-agnostic word representation
C Gong, D He, X Tan, T Qin, L Wang, TY Liu
Advances in Neural Information Processing Systems, 1334-1345, 2018
Layer-wise coordination between encoder and decoder for neural machine translation
T He, X Tan, Y Xia, D He, T Qin, Z Chen, TY Liu
Proceedings of the 32nd International Conference on Neural Information …, 2018
Non-autoregressive machine translation with auxiliary regularization
Y Wang, F Tian, D He, T Qin, CX Zhai, TY Liu
AAAI 2019, 2019
Incorporating BERT into neural machine translation
J Zhu, Y Xia, L Wu, D He, T Qin, W Zhou, H Li, TY Liu
arXiv preprint arXiv:2002.06823, 2020
Non-autoregressive neural machine translation with enhanced decoder input
J Guo, X Tan, D He, T Qin, L Xu, TY Liu
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3723-3730, 2019
A game-theoretic machine learning approach for revenue maximization in sponsored search
D He, W Chen, L Wang, TY Liu
arXiv preprint arXiv:1406.0728, 2014
Adversarially robust generalization just requires more unlabeled data
R Zhai, T Cai, D He, C Dan, K He, J Hopcroft, L Wang
arXiv preprint arXiv:1906.00555, 2019
Decoding with Value Networks for Neural Machine Translation
D He, H Lu, Y Xia, T Qin, L Wang, TY Liu
NIPS, 178-187, 2017
Towards binary-valued gates for robust LSTM training
Z Li, D He, F Tian, W Chen, T Qin, L Wang, TY Liu
ICML 2018, 2018
On layer normalization in the transformer architecture
R Xiong, Y Yang, D He, K Zheng, S Zheng, C Xing, H Zhang, Y Lan, ...
International Conference on Machine Learning, 10524-10533, 2020
Hint-based Training for Non-Autoregressive Translation
Z Li, D He, F Tian, T Qin, L Wang, TY Liu
Fast structured decoding for sequence models
Z Sun, Z Li, H Wang, Z Lin, D He, ZH Deng
arXiv preprint arXiv:1910.11555, 2019
Towards a deep and unified understanding of deep neural models in NLP
C Guan, X Wang, Q Zhang, R Chen, D He, X Xie
International conference on machine learning, 2454-2463, 2019
MACER: Attack-free and scalable robust training via maximizing certified radius
R Zhai, C Dan, D He, H Zhang, B Gong, P Ravikumar, CJ Hsieh, L Wang
arXiv preprint arXiv:2001.02378, 2020
Understanding and improving transformer from a multi-particle dynamic system point of view
Y Lu, Z Li, D He, Z Sun, B Dong, T Qin, L Wang, TY Liu
arXiv preprint arXiv:1906.02762, 2019
Dense information flow for neural machine translation
Y Shen, X Tan, D He, T Qin, TY Liu
arXiv preprint arXiv:1806.00722, 2018
Efficient training of BERT by progressively stacking
L Gong, D He, Z Li, T Qin, L Wang, T Liu
International Conference on Machine Learning, 2337-2346, 2019
Articles 1–20