
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. Clicking an Open Access link takes you to the article's "best Open Access location", and clicking a citation count opens this same listing for that article. Basic pagination options appear at the bottom of the page.
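The same listing can also be retrieved programmatically: the public OpenAlex API exposes citing works through the `cites:` filter on its works endpoint, with `per-page` and `page` parameters mirroring the pagination options on this page. A minimal sketch (the work ID `W0000000000` below is a placeholder, not the actual OpenAlex ID of the requested article):

```python
# Build an OpenAlex API query URL for the works that cite a given work.
# filter=cites:<id> selects the citing articles; per-page and page
# correspond to the pagination shown at the bottom of this listing.

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the OpenAlex API URL listing works that cite `work_id`."""
    base = "https://api.openalex.org/works"
    return f"{base}?filter=cites:{work_id}&per-page={per_page}&page={page}"

url = citing_works_url("W0000000000")
# Fetching the results would then look like:
#   import requests
#   data = requests.get(url).json()
#   for work in data["results"]:
#       print(work["display_name"], work["cited_by_count"])
```

Each returned work carries fields such as `display_name` and `cited_by_count`, which correspond to the titles and "Times Cited" figures shown below.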
Requested Article:
Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze
Suowei Wu, Zhengyin Du, Weixin Li, et al.
INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (2019), pp. 40-48
Closed Access | Times Cited: 29
Showing 1-25 of 29 citing articles:
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles
Zhongxu Hu, Shanhe Lou, Yang Xing, et al.
IEEE Transactions on Intelligent Vehicles (2022) Vol. 7, Iss. 3, pp. 417-440
Open Access | Times Cited: 105
CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors
Tianyi Zhang, Abdallah El Ali, Chen Wang, et al.
Sensors (2020) Vol. 21, Iss. 1, pp. 52-52
Open Access | Times Cited: 55
A Convolution Bidirectional Long Short-Term Memory Neural Network for Driver Emotion Recognition
Guanglong Du, Zhiyao Wang, BoYu Gao, et al.
IEEE Transactions on Intelligent Transportation Systems (2020) Vol. 22, Iss. 7, pp. 4570-4578
Closed Access | Times Cited: 53
Information Fusion in Attention Networks Using Adaptive and Multi-Level Factorized Bilinear Pooling for Audio-Visual Emotion Recognition
Hengshun Zhou, Jun Du, Yuanyuan Zhang, et al.
IEEE/ACM Transactions on Audio Speech and Language Processing (2021) Vol. 29, pp. 2617-2629
Open Access | Times Cited: 49
Weakly-Supervised Learning for Fine-Grained Emotion Recognition Using Physiological Signals
Tianyi Zhang, Abdallah El Ali, Chen Wang, et al.
IEEE Transactions on Affective Computing (2022) Vol. 14, Iss. 3, pp. 2304-2322
Open Access | Times Cited: 17
Validation and application of the Non-Verbal Behavior Analyzer: An automated tool to assess non-verbal emotional expressions in psychotherapy
Patrick Terhürne, Brian Schwartz, Tobias Baur, et al.
Frontiers in Psychiatry (2022) Vol. 13
Open Access | Times Cited: 13
Incorporating Interpersonal Synchronization Features for Automatic Emotion Recognition from Visual and Audio Data during Communication
Jingyu Quan, Yoshihiro Miyake, Takayuki Nozawa
Sensors (2021) Vol. 21, Iss. 16, pp. 5317-5317
Open Access | Times Cited: 17
Privacy Aware Affective State Recognition From Visual Data
M. Sami Zitouni, Peter Lee, Uichin Lee, et al.
IEEE Access (2022) Vol. 10, pp. 40620-40628
Open Access | Times Cited: 11
A multimodal fusion-based deep learning framework combined with local-global contextual TCNs for continuous emotion recognition from videos
Congbao Shi, Yuanyuan Zhang, Baolin Liu
Applied Intelligence (2024) Vol. 54, Iss. 4, pp. 3040-3057
Closed Access | Times Cited: 2
Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations
Oswald Barral, Sébastien Lallé, Grigorii Guz, et al.
(2020), pp. 163-173
Closed Access | Times Cited: 13
An Emotion and Attention Recognition System to Classify the Level of Engagement to a Video Conversation by Participants in Real Time Using Machine Learning Models and Utilizing a Neural Accelerator Chip
Janith Kodithuwakku, Dilki Dandeniya Arachchi, Jay Rajasekera
Algorithms (2022) Vol. 15, Iss. 5, pp. 150-150
Open Access | Times Cited: 8
Using subjective emotion, facial expression, and gaze direction to evaluate user affective experience and predict preference when playing single-player games
He Zhang, Yin Lu, Hanling Zhang
Ergonomics (2024), pp. 1-21
Closed Access | Times Cited: 1
Evaluating the Influence of Room Illumination on Camera-Based Physiological Measurements for the Assessment of Screen-Based Media
J. T. Williams, Jon Francombe, Damian Murphy
Applied Sciences (2023) Vol. 13, Iss. 14, pp. 8482-8482
Open Access | Times Cited: 3
Quantified Facial Temporal-Expressiveness Dynamics for Affect Analysis
Md Taufeeq Uddin, Shaun Canavan
2022 26th International Conference on Pattern Recognition (ICPR) (2021), pp. 3955-3962
Open Access | Times Cited: 7
Enhancing Multimodal Affect Recognition with Multi-Task Affective Dynamics Modeling
Nathan Henderson, Wookhee Min, Jonathan Rowe, et al.
(2021), pp. 1-8
Closed Access | Times Cited: 5
The Syncretic Effect of Dual-Source Data on Affective Computing in Online Learning Contexts: A Perspective From Convolutional Neural Network With Attention Mechanism
Xuesong Zhai, Jiaqi Xu, Nian‐Shing Chen, et al.
Journal of Educational Computing Research (2022) Vol. 61, Iss. 2, pp. 466-493
Closed Access | Times Cited: 3
Machine Learning Techniques for Emotion Detection Using Eye Gaze Localisation
Shivalika Goyal, Amit Laddi
Advances in psychology, mental health, and behavioral studies (APMHBS) book series (2024), pp. 24-60
Closed Access
Contact-Free Emotion Recognition for Monitoring of Well-Being: Early Prospects and Future Ideas
Gašper Slapničar, Zoja Anžur, Sebastijan Trojer, et al.
Ambient intelligence and smart environments (2024)
Open Access
Temporal Attention and Consistency Measuring for Video Question Answering
Lingyu Zhang, Richard J. Radke
(2020), pp. 510-518
Closed Access | Times Cited: 3
Enhancing Affect Detection in Game-Based Learning Environments with Multimodal Conditional Generative Modeling
Nathan Henderson, Wookhee Min, Jonathan Rowe, et al.
(2020), pp. 134-143
Closed Access | Times Cited: 3
Multimodal Continuous Emotion Recognition using Deep Multi-Task Learning with Correlation Loss
Berkay Köprü, Engin Erzin
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 3
What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention
Halim Acosta, Nathan Henderson, Jonathan Rowe, et al.
(2021), pp. 258-267
Closed Access | Times Cited: 2
Hand-eye Coordination for Textual Difficulty Detection in Text Summarization
Jun Wang, Grace Ngai, Hong Va Leong
(2020), pp. 269-277
Closed Access | Times Cited: 1
Emotion-Driven Interactive Storytelling: Let Me Tell You How to Feel
Oneris Daniel Rico Garcia, Javier Fernandez Fernandez, Rafael Andres Becerra Saldana, et al.
Lecture notes in computer science (2022), pp. 259-274
Closed Access | Times Cited: 1
Emotion-Driven Interactive Storytelling: Let Me Tell You How to Feel
Oneris Rico, Javier Fdez, Olaf Witkowski
(2023)
Open Access