
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to its "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
Requested Article:
Multi-modal fusion network with complementarity and importance for emotion recognition
Shuai Liu, Peng Gao, Yating Li, et al.
Information Sciences (2022) Vol. 619, pp. 679-694
Closed Access | Times Cited: 100
Showing 1-25 of 100 citing articles:
Emotion recognition in EEG signals using deep learning methods: A review
Mahboobeh Jafari, Afshin Shoeibi, Marjane Khodatars, et al.
Computers in Biology and Medicine (2023) Vol. 165, pp. 107450-107450
Open Access | Times Cited: 79
Multimodal Emotion Recognition with Deep Learning: Advancements, challenges, and future directions
Geetha Vijayaraghavan, T. Mala, Das P, et al.
Information Fusion (2023) Vol. 105, pp. 102218-102218
Closed Access | Times Cited: 63
A Review of Key Technologies for Emotion Analysis Using Multimodal Information
Xianxun Zhu, Chaopeng Guo, Heyang Feng, et al.
Cognitive Computation (2024) Vol. 16, Iss. 4, pp. 1504-1530
Closed Access | Times Cited: 22
Using transformers for multimodal emotion recognition: Taxonomies and state of the art review
Samira Hazmoune, Fateh Bougamouza
Engineering Applications of Artificial Intelligence (2024) Vol. 133, pp. 108339-108339
Closed Access | Times Cited: 20
From Neural Networks to Emotional Networks: A Systematic Review of EEG-Based Emotion Recognition in Cognitive Neuroscience and Real-World Applications
Evgenia Gkintoni, Anthimos Aroutzidis, Hera Antonopoulou, et al.
Brain Sciences (2025) Vol. 15, Iss. 3, pp. 220-220
Open Access | Times Cited: 3
A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face
Hailun Lian, Cheng Lu, Sunan Li, et al.
Entropy (2023) Vol. 25, Iss. 10, pp. 1440-1440
Open Access | Times Cited: 34
Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition
Ruiqi Wang, Wonse Jo, Dezhong Zhao, et al.
IEEE Transactions on Cognitive and Developmental Systems (2024) Vol. 16, Iss. 4, pp. 1374-1390
Open Access | Times Cited: 15
Proposing sentiment analysis model based on BERT and XLNet for movie reviews
Mian Muhammad Danyal, Sarwar Shah Khan, Muzammil Khan, et al.
Multimedia Tools and Applications (2024) Vol. 83, Iss. 24, pp. 64315-64339
Closed Access | Times Cited: 13
LCCNN: a Lightweight Customized CNN-Based Distance Education App for COVID-19 Recognition
Jiaji Wang, Suresh Chandra Satapathy, Shuihua Wang, et al.
Mobile Networks and Applications (2023) Vol. 28, Iss. 3, pp. 873-888
Open Access | Times Cited: 19
Disentanglement Translation Network for multimodal sentiment analysis
Ying Zeng, Wenjun Yan, Sijie Mai, et al.
Information Fusion (2023) Vol. 102, pp. 102031-102031
Closed Access | Times Cited: 18
Robust Facial Expression Recognition Using an Evolutionary Algorithm with a Deep Learning Model
A. V. R. Mayuri, Ranjith Kumar Manoharan, S. Neelakandan, et al.
Applied Sciences (2022) Vol. 13, Iss. 1, pp. 468-468
Open Access | Times Cited: 25
A deep cross-modal neural cognitive diagnosis framework for modeling student performance
Lingyun Song, Mengting He, Xuequn Shang, et al.
Expert Systems with Applications (2023) Vol. 230, pp. 120675-120675
Closed Access | Times Cited: 16
Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification
Israa Khalaf Salman Al-Tameemi, Mohammad‐Reza Feizi‐Derakhshi, Saeed Pashazadeh, et al.
Computers, Materials & Continua (2023) Vol. 76, Iss. 2, pp. 2145-2177
Open Access | Times Cited: 14
HiT-MST: Dynamic facial expression recognition with hierarchical transformers and multi-scale spatiotemporal aggregation
Xiaohan Xia, Dongmei Jiang
Information Sciences (2023) Vol. 644, pp. 119301-119301
Closed Access | Times Cited: 11
A multimodal shared network with a cross-modal distribution constraint for continuous emotion recognition
Chiqin Li, Lun Xie, Xingmao Shao, et al.
Engineering Applications of Artificial Intelligence (2024) Vol. 133, pp. 108413-108413
Closed Access | Times Cited: 4
Adversarial alignment and graph fusion via information bottleneck for multimodal emotion recognition in conversations
Yuntao Shou, Tao Meng, Wei Ai, et al.
Information Fusion (2024) Vol. 112, pp. 102590-102590
Closed Access | Times Cited: 4
Transformer-Driven Affective State Recognition from Wearable Physiological Data in Everyday Contexts
Li Fang, Dan Zhang
Sensors (2025) Vol. 25, Iss. 3, pp. 761-761
Open Access
Automated facial expression recognition using exemplar hybrid deep feature generation technique
Mehmet Bayğın, Ilknur Tuncer, Şengül Doğan, et al.
Soft Computing (2023) Vol. 27, Iss. 13, pp. 8721-8737
Closed Access | Times Cited: 10
The Optimization of Advertising Content and Prediction of Consumer Response Rate Based on Generative Adversarial Networks
Changlin Wang, Zhonghua Lu, Z. Y. He
Journal of Organizational and End User Computing (2025) Vol. 37, Iss. 1, pp. 1-30
Open Access
Representation distribution matching and dynamic routing interaction for multimodal sentiment analysis
Zuhe Li, Zhenwei Huang, Xianfei He, et al.
Knowledge-Based Systems (2025), pp. 113376-113376
Closed Access
Modality mixer exploiting complementary information for multi-modal action recognition
Sumin Lee, Sangmin Woo, Muhammad Adi Nugroho, et al.
Computer Vision and Image Understanding (2025), pp. 104358-104358
Closed Access
An enhanced GhostNet model for emotion recognition: leveraging efficient feature extraction and attention mechanisms
Jie Sun, Tongwen Xu, Yao Yao
Frontiers in Psychology (2025) Vol. 15
Open Access
Semantic enhancement and cross-modal interaction fusion for sentiment analysis in social media
Guangyu Mu, Ying Chen, Xiurong Li, et al.
PLoS ONE (2025) Vol. 20, Iss. 4, pp. e0321011-e0321011
Open Access
Research on the Method of Integrating Teaching Resources of Traditional Chinese Medicine Pharmacology Based on Deep Learning Algorithm
Yao Fu, Yumei Li
(2025), pp. 143-159
Closed Access
CCIN-SA: Composite cross modal interaction network with attention enhancement for multimodal sentiment analysis
Li Yang, Junhong Zhong, Wen Teng, et al.
Information Fusion (2025) Vol. 123, pp. 103230-103230
Closed Access