Zhihua Wu
Unknown affiliation
Verified email at baidu.com
Title
Cited by
Year
ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation
Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ...
arXiv preprint arXiv:2107.02137, 2021
331 2021
PLATO-XL: Exploring the large-scale pre-training of dialogue generation
S Bao, H He, F Wang, H Wu, H Wang, W Wu, Z Wu, Z Guo, H Lu, X Huang, ...
arXiv preprint arXiv:2109.09519, 2021
58 2021
ERNIE 3.0 Titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation
S Wang, Y Sun, Y Xiang, Z Wu, S Ding, W Gong, S Feng, J Shang, Y Zhao, ...
arXiv preprint arXiv:2112.12731, 2021
55 2021
ASCNet: Self-supervised video representation learning with appearance-speed consistency
D Huang, W Wu, W Hu, X Liu, D He, Z Wu, X Wu, M Tan, E Ding
Proceedings of the IEEE/CVF international conference on computer vision …, 2021
43 2021
ERNIE-ViLG: Unified generative pre-training for bidirectional vision-language generation
H Zhang, W Yin, Y Fang, L Li, B Duan, Z Wu, Y Sun, H Tian, H Wu, ...
arXiv preprint arXiv:2112.15283, 2021
39 2021
HeterPS: Distributed deep learning with reinforcement learning based scheduling in heterogeneous environments
J Liu, Z Wu, D Feng, M Zhang, X Wu, X Yao, D Yu, Y Ma, F Zhao, D Dou
Future Generation Computer Systems 148, 106-117, 2023
29 2023
HelixFold: An efficient implementation of AlphaFold2 using PaddlePaddle
G Wang, X Fang, Z Wu, Y Liu, Y Xue, Y Xiang, D Yu, F Wang, Y Ma
arXiv preprint arXiv:2207.05477, 2022
20 2022
SE-MoE: A scalable and efficient mixture-of-experts distributed training and inference system
L Shen, Z Wu, WB Gong, H Hao, Y Bai, HC Wu, X Wu, J Bian, H Xiong, ...
arXiv preprint arXiv:2205.10034, 2022
19 2022
Boosting distributed training performance of the unpadded BERT model
J Zeng, M Li, Z Wu, J Liu, Y Liu, D Yu, Y Ma
arXiv preprint arXiv:2208.08124, 2022
7 2022
TA-MoE: Topology-aware large scale mixture-of-expert training
C Chen, M Li, Z Wu, D Yu, C Yang
Advances in Neural Information Processing Systems 35, 22173-22186, 2022
5 2022
Nebula-I: A general framework for collaboratively training deep learning models on low-bandwidth cloud clusters
Y Xiang, Z Wu, W Gong, S Ding, X Mo, Y Liu, S Wang, P Liu, Y Hou, L Li, ...
arXiv preprint arXiv:2205.09470, 2022
5 2022
End-to-end adaptive distributed training on PaddlePaddle
Y Ao, Z Wu, D Yu, W Gong, Z Kui, M Zhang, Z Ye, L Shen, Y Ma, T Wu, ...
arXiv preprint arXiv:2112.02752, 2021
5 2021
PipePar: Enabling fast DNN pipeline parallel training in heterogeneous GPU clusters
J Zhang, G Niu, Q Dai, H Li, Z Wu, F Dong, Z Wu
Neurocomputing 555, 126661, 2023
3 2023
Stress-induced endocytosis from chloroplast inner envelope membrane is mediated by CHLOROPLAST VESICULATION but inhibited by GAPC
T Pan, Y Liu, X Hu, P Li, C Lin, Y Tang, W Tang, Y Liu, L Guo, C Kim, ...
Cell Reports 42 (10), 2023
1 2023
Efficient AlphaFold2 Training using Parallel Evoformer and Branch Parallelism
G Wang, Z Wu, X Fang, Y Xiang, Y Liu, D Yu, Y Ma
arXiv preprint arXiv:2211.00235, 2022
1 2022
Method and apparatus of processing information, method and apparatus of recommending information, electronic device, and storage medium
M Cheng, YU Dianhai, L Ma, Z Wu, D Daxiang, W Tang
US Patent App. 17/517,703, 2022
1 2022
Method for distributed training model, relevant apparatus, and computer readable storage medium
X Wu, X Yao, YU Dianhai, Z Wu, Y Ma, T Wu, H Wang
US Patent App. 17/362,674, 2021
1 2021
Addressing Heterogeneity in Federated Learning with Client Selection via Submodular Optimization
J Zhang, J Wang, Y Li, F Xin, F Dong, J Luo, Z Wu
ACM Transactions on Sensor Networks 20 (2), 1-32, 2024
2024
Resource allocation method, resource allocation apparatus, device, medium and computer program product
J Liu, Z Wu, F Danlei, Z Chendi, M Zhang, X Wu, X Yao, D Dou, ...
US Patent App. 17/891,617, 2023
2023
Method and apparatus for distributing network layers in neural network model
J Liu, Z Wu, F Danlei, M Zhang, X Wu, X Yao, MA Beichen, D Dou, ...
US Patent App. 17/991,077, 2023
2023
Articles 1–20