Yuanchao Li
Title · Cited by · Year
Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning
Y Li, T Zhao, T Kawahara
INTERSPEECH, 2803-2807, 2019
Cited by 181 · 2019
Fusing ASR outputs in joint training for speech emotion recognition
Y Li, P Bell, C Lai
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 43 · 2022
Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human–robot interaction
Y Li, CT Ishi, K Inoue, S Nakamura, T Kawahara
Advanced Robotics 33 (20), 1030-1041, 2019
Cited by 41 · 2019
Cooperative comfortable-driving at signalized intersections for connected and automated vehicles
X Shen, X Zhang, T Ouyang, Y Li, P Raksincharoensak
IEEE Robotics and Automation Letters 5 (4), 6247-6254, 2020
Cited by 40 · 2020
Mixture density networks-based knock simulator
X Shen, T Ouyang, C Khajorntraidet, Y Li, S Li, J Zhuang
IEEE/ASME Transactions on Mechatronics 27 (1), 159-168, 2021
Cited by 34 · 2021
Exploration of a self-supervised speech model: A study on emotional corpora
Y Li, Y Mohamied, P Bell, C Lai
2022 IEEE Spoken Language Technology Workshop (SLT), 868-875, 2023
Cited by 28 · 2023
Emotion recognition by combining prosody and sentiment analysis for expressing reactive emotion by humanoid robot
Y Li, CT Ishi, N Ward, K Inoue, S Nakamura, K Takanashi, T Kawahara
2017 Asia-Pacific Signal and Information Processing Association Annual …, 2017
Cited by 27 · 2017
Attention-based multimodal fusion for estimating human emotion in real-world HRI
Y Li, T Zhao, X Shen
Companion of the 2020 ACM/IEEE International Conference on Human-Robot …, 2020
Cited by 13 · 2020
Interactional and pragmatics-related prosodic patterns in Mandarin dialog
NG Ward, Y Li, T Zhao, T Kawahara
Speech Prosody, 2016
Cited by 13 · 2016
Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners
Y Li, C Lai, D Lala, K Inoue, T Kawahara
2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2022
Cited by 8 · 2022
Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition
Y Wang*, Y Li*, PP Liang, LP Morency, P Bell, C Lai
arXiv preprint arXiv:2305.13583, 2023
Cited by 7* · 2023
Feeling estimation device, feeling estimation method, and storage medium
Y Li
US Patent 11,107,464, 2021
Cited by 6 · 2021
Towards improving speech emotion recognition for in-vehicle agents: Preliminary results of incorporating sentiment analysis by using early and late fusion methods
Y Li
Proceedings of the 6th International Conference on Human-Agent Interaction …, 2018
Cited by 6 · 2018
ASR and Emotional Speech: A Word-Level Investigation of the Mutual Impact of Speech and Emotion Recognition
Y Li*, Z Zhao*, O Klejch, P Bell, C Lai
INTERSPEECH 2023, 2023
Cited by 5 · 2023
Multimodal Dyadic Impression Recognition via Listener Adaptive Cross-Domain Fusion
Y Li, P Bell, C Lai
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
Cited by 4* · 2023
Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and Ethics
Y Li, C Lai
2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2022
Cited by 3 · 2022
Utterance Behavior of Users While Playing Basketball with a Virtual Teammate.
D Lala, Y Li, T Kawahara
ICAART, 28-38, 2017
Cited by 3 · 2017
I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue
Y Li, K Inoue*, L Tian*, C Fu, CT Ishi, H Ishiguro, T Kawahara, C Lai
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing …, 2023
Cited by 2 · 2023
Semi-supervised learning for multimodal speech and emotion recognition
Y Li
Proceedings of the 2021 International Conference on Multimodal Interaction …, 2021
Cited by 2 · 2021
Information processing apparatus, information processing method, and storage medium
Y Li
US Patent 11,443,759, 2021
Cited by 1 · 2021