OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Finally, at the bottom of the page you'll find basic pagination options.

Requested Article:

Information Fusion in Attention Networks Using Adaptive and Multi-Level Factorized Bilinear Pooling for Audio-Visual Emotion Recognition
Hengshun Zhou, Jun Du, Yuanyuan Zhang, et al.
IEEE/ACM Transactions on Audio Speech and Language Processing (2021) Vol. 29, pp. 2617-2629
Open Access | Times Cited: 49

Showing 1-25 of 49 citing articles:

Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects
Shiqing Zhang, Yijiao Yang, Chen Chen, et al.
Expert Systems with Applications (2023) Vol. 237, pp. 121692-121692
Closed Access | Times Cited: 81

Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG
Jiahui Pan, Weijie Fang, Zhihang Zhang, et al.
IEEE Open Journal of Engineering in Medicine and Biology (2023) Vol. 5, pp. 396-403
Open Access | Times Cited: 37

Deep learning-based EEG emotion recognition: Current trends and future perspectives
Xiaohu Wang, Yongmei Ren, Ze Luo, et al.
Frontiers in Psychology (2023) Vol. 14
Open Access | Times Cited: 37

TeFNA: Text-centered fusion network with crossmodal attention for multimodal sentiment analysis
Changqin Huang, Junling Zhang, Xuemei Wu, et al.
Knowledge-Based Systems (2023) Vol. 269, pp. 110502-110502
Closed Access | Times Cited: 36

Trusted emotion recognition based on multiple signals captured from video
Junjie Zhang, Kun Zheng, Sarah Mazhar, et al.
Expert Systems with Applications (2023) Vol. 233, pp. 120948-120948
Closed Access | Times Cited: 15

Speech emotion recognition based on bi-directional acoustic–articulatory conversion
H.Q. Li, Xueying Zhang, Shufei Duan, et al.
Knowledge-Based Systems (2024) Vol. 299, pp. 112123-112123
Closed Access | Times Cited: 4

Incongruity-Aware Cross-Modal Attention for Audio-Visual Fusion in Dimensional Emotion Recognition
R. Gnana Praveen, Jahangir Alam
IEEE Journal of Selected Topics in Signal Processing (2024) Vol. 18, Iss. 3, pp. 444-458
Closed Access | Times Cited: 4

LUTBIO: A Comprehensive multimodal biometric database targeting middle-aged and elderly populations for enhanced identity authentication
Rui Yang, Qiuyu Zhang, Lingtao Meng, et al.
Information Fusion (2025), pp. 102945-102945
Closed Access

AVERFormer: End-to-End Audio-Visual Emotion Recognition Transformer Framework with Balanced Modal Contributions
Zijian Sun, Haoran Liu, Haibin Li, et al.
Digital Signal Processing (2025), pp. 105081-105081
Closed Access

Multimodal fusion: a study on speech-text emotion recognition with the integration of deep learning
Yanan Shang, Tianqi Fu
Intelligent Systems with Applications (2024) Vol. 24, pp. 200436-200436
Open Access | Times Cited: 3

Noise-Resistant Multimodal Transformer for Emotion Recognition
Yuanyuan Liu, Haoyu Zhang, Yibing Zhan, et al.
International Journal of Computer Vision (2024)
Closed Access | Times Cited: 3

Deep Cross-Corpus Speech Emotion Recognition: Recent Advances and Perspectives
Shiqing Zhang, Ruixin Liu, Xin Tao, et al.
Frontiers in Neurorobotics (2021) Vol. 15
Open Access | Times Cited: 18

Can We Exploit All Datasets? Multimodal Emotion Recognition Using Cross-Modal Translation
Yeo Chan Yoon
IEEE Access (2022) Vol. 10, pp. 64516-64524
Open Access | Times Cited: 12

Residual multimodal Transformer for expression‐EEG fusion continuous emotion recognition
Xiaofang Jin, J. Xiao, Libiao Jin, et al.
CAAI Transactions on Intelligence Technology (2024) Vol. 9, Iss. 5, pp. 1290-1304
Open Access | Times Cited: 2

A dual transfer learning method based on 3D-CNN and vision transformer for emotion recognition
Zhifen Guo, Jiao Wang, Bin Zhang, et al.
Applied Intelligence (2024) Vol. 55, Iss. 3
Closed Access | Times Cited: 2

Context-Based Adaptive Multimodal Fusion Network for Continuous Frame-Level Sentiment Prediction
Maochun Huang, Chunmei Qing, Junpeng Tan, et al.
IEEE/ACM Transactions on Audio Speech and Language Processing (2023) Vol. 31, pp. 3468-3477
Closed Access | Times Cited: 6

Loss Function Design for DNN-Based Sound Event Localization and Detection on Low-Resource Realistic Data
Qing Wang, Jun Du, Zhaoxu Nian, et al.
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023), pp. 1-5
Closed Access | Times Cited: 4

Improving Multi-Modal Emotion Recognition Using Entropy-Based Fusion and Pruning-Based Network Architecture Optimization
Haotian Wang, Jun Du, Yusheng Dai, et al.
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024), pp. 11766-11770
Closed Access | Times Cited: 1

Semantic-Driven Crossmodal Fusion for Multimodal Sentiment Analysis
Pingshan Liu, Zhaoyang Wang, Fu Jie Huang
International Journal on Semantic Web and Information Systems (2024) Vol. 20, Iss. 1, pp. 1-26
Open Access | Times Cited: 1

A Parallel Multi-Modal Factorized Bilinear Pooling Fusion Method Based on the Semi-Tensor Product for Emotion Recognition
Liu Fen, Jianfeng Chen, Kemeng Li, et al.
Entropy (2022) Vol. 24, Iss. 12, pp. 1836-1836
Open Access | Times Cited: 6

Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023
Haotian Wang, Yuxuan Xi, Hang Chen, et al.
(2023), pp. 9531-9535
Open Access | Times Cited: 2

Branch-Fusion-Net for Multi-Modal Continuous Dimensional Emotion Recognition
Chiqin Li, Lun Xie, Hang Pan
IEEE Signal Processing Letters (2022) Vol. 29, pp. 942-946
Closed Access | Times Cited: 4

A robust and high-precision edge segmentation and refinement method for high-resolution images
Qiming Li, Chengcheng Chen
Mathematical Biosciences & Engineering (2022) Vol. 20, Iss. 1, pp. 1058-1082
Open Access | Times Cited: 3

Acoustic and visual geometry descriptor for multi-modal emotion recognition from videos
Kummari Ramyasree, Ch. Sumanth Kumar
Indonesian Journal of Electrical Engineering and Computer Science (2024) Vol. 33, Iss. 2, pp. 960-960
Open Access

MSAM: Deep Semantic Interaction Network for Visual Question Answering
Fan Wang, Bin Wang, Fuyong Xu, et al.
(2024), pp. 39-56
Closed Access

