Lawformer: A pre-trained language model for Chinese legal long documents. C Xiao, X Hu, Z Liu, C Tu, M Sun. AI Open 2, 79-84, 2021. Cited by 142.
One model, multiple modalities: A sparsely activated approach for text, sound, image, video and code. Y Dai, D Tang, L Liu, M Tan, C Zhou, J Wang, Z Feng, F Zhang, X Hu, et al. arXiv preprint arXiv:2205.06126, 2022. Cited by 18.
InfiAgent-DABench: Evaluating agents on data analysis tasks. X Hu, Z Zhao, S Wei, Z Chai, G Wang, X Wang, J Su, J Xu, M Zhu, et al. arXiv preprint arXiv:2401.05507, 2024. Cited by 4.
Leveraging print debugging to improve code generation in large language models. X Hu, K Kuang, J Sun, H Yang, F Wu. arXiv preprint arXiv:2401.05319, 2024. Cited by 2.
Structure-based drug design via 3D molecular generative pre-training and sampling. Y Yang, S Ouyang, X Hu, M Dang, M Zheng, H Zhou, L Li. arXiv preprint arXiv:2402.14315, 2024.