Yu Ding
Senior Research Scientist, Netease, China
Verified email at corp.netease.com
Title · Cited by · Year
Laughter animation synthesis
Y Ding, K Prepin, J Huang, C Pelachaud, T Artières
Proceedings of the 2014 international conference on Autonomous agents and …, 2014
Cited by 40 · 2014
Modeling multimodal behaviors from speech prosody
Y Ding, C Pelachaud, T Artieres
International Conference on Intelligent Virtual Agents, 217-228, 2013
Cited by 24 · 2013
Rhythmic body movements of laughter
R Niewiadomski, M Mancini, Y Ding, C Pelachaud, G Volpe
Proceedings of the 16th international conference on multimodal interaction …, 2014
Cited by 20 · 2014
Laughing with a virtual agent
F Pecune, M Mancini, B Biancardi, G Varni, Y Ding, C Pelachaud, G Volpe, ...
Proceedings of the 2015 International Conference on Autonomous Agents and …, 2015
Cited by 16 · 2015
Speech-driven eyebrow motion synthesis with contextual markovian models
Y Ding, M Radenen, T Artieres, C Pelachaud
2013 IEEE International Conference on Acoustics, Speech and Signal …, 2013
Cited by 16 · 2013
Real-time visual prosody for interactive virtual agents
H Van Welbergen, Y Ding, K Sattler, C Pelachaud, S Kopp
International Conference on Intelligent Virtual Agents, 139-151, 2015
Cited by 10 · 2015
Vers des agents conversationnels animés socio-affectifs
M Ochs, Y Ding, N Fourati, M Chollet, B Ravenet, F Pecune, N Glas, ...
Proceedings of the 25th Conference on l'Interaction Homme-Machine, 69-78, 2013
Cited by 10 · 2013
Laugh when you’re winning
M Mancini, L Ach, E Bantegnie, T Baur, N Berthouze, D Datta, Y Ding, ...
International Summer Workshop on Multimodal Interfaces, 50-79, 2013
Cited by 10 · 2013
Perception of intensity incongruence in synthesized multimodal expressions of laughter
R Niewiadomski, Y Ding, M Mancini, C Pelachaud, G Volpe, A Camurri
2015 International Conference on Affective Computing and Intelligent …, 2015
Cited by 7 · 2015
Upper body animation synthesis for a laughing character
Y Ding, J Huang, N Fourati, T Artieres, C Pelachaud
International Conference on Intelligent Virtual Agents, 164-173, 2014
Cited by 7 · 2014
Lip animation synthesis: a unified framework for speaking and laughing virtual agent.
Y Ding, C Pelachaud
AVSP, 78-83, 2015
Cited by 6 · 2015
Lol—laugh out loud
F Pecune, B Biancardi, Y Ding, C Pelachaud, M Mancini, G Varni, ...
Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015
Cited by 5 · 2015
Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments
J Huang, M Fratarcangeli, Y Ding, C Pelachaud
The Visual Computer 33 (12), 1541-1553, 2017
Cited by 4 · 2017
A multifaceted study on eye contact based speaker identification in three-party conversations
Y Ding, Y Zhang, M Xiao, Z Deng
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
Cited by 4 · 2017
Implementing and evaluating a laughing virtual character
M Mancini, B Biancardi, F Pecune, G Varni, Y Ding, C Pelachaud, G Volpe, ...
ACM Transactions on Internet Technology (TOIT) 17 (1), 1-22, 2017
Cited by 4 · 2017
Perceptual enhancement of emotional mocap head motion: An experimental study
Y Ding, L Shi, Z Deng
2017 Seventh International Conference on Affective Computing and Intelligent …, 2017
Cited by 3 · 2017
Faceswapnet: Landmark guided many-to-many face reenactment
J Zhang, X Zeng, Y Pan, Y Liu, Y Ding, C Fan
arXiv preprint arXiv:1905.11805, 2019
Cited by 2 · 2019
Low-level characterization of expressive head motion through frequency domain analysis
Y Ding, L Shi, Z Deng
IEEE Transactions on Affective Computing, 2018
Cited by 2 · 2018
Learning activity patterns performed with emotion
Q Wang, T Artières, Y Ding
Proceedings of the 3rd International Symposium on Movement and Computing, 1-4, 2016
Cited by 2 · 2016
Vers des Agents Conversationnels Animés dotés d'émotions et d'attitudes sociales
M Ochs, Y Ding, N Fourati, M Chollet, B Ravenet, F Pecune, N Glas, ...
Cited by 2 · 2014
Articles 1–20