OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Video sentiment analysis with bimodal information-augmented multi-head attention
Ting Wu, Junjie Peng, Wenqiang Zhang, et al.
Knowledge-Based Systems (2021) Vol. 235, pp. 107676-107676
Open Access | Times Cited: 80

Showing 26-50 of 80 citing articles:

AIA-Net: Adaptive Interactive Attention Network for Text–Audio Emotion Recognition
Tong Zhang, Shuzhen Li, Bianna Chen, et al.
IEEE Transactions on Cybernetics (2022) Vol. 53, Iss. 12, pp. 7659-7671
Closed Access | Times Cited: 17

Global distilling framework with cognitive gravitation for multimodal emotion recognition
Huihui Li, Haoyang Zhong, Chunlin Xu, et al.
Neurocomputing (2025) Vol. 622, pp. 129306-129306
Closed Access

Enhanced Emotion Recognition Through Dynamic Restrained Adaptive Loss and Extended Multimodal Bottleneck Transformer
Dang-Khanh Nguyen, Eunchae Lim, Soo-Hyung Kim, et al.
Applied Sciences (2025) Vol. 15, Iss. 5, pp. 2862-2862
Open Access

Multimodal sentiment analysis with text-augmented cross-modal feature interaction attention network
Huanxiang Zhang, Junjie Peng, Zesu Cai
Applied Soft Computing (2025), pp. 113078-113078
Closed Access

An effective Multi-Modality Feature Synergy and Feature Enhancer for multimodal intent recognition
Yanfang Xia, Jinmiao Song, Shenwei Tian, et al.
Computers & Electrical Engineering (2025) Vol. 123, pp. 110301-110301
Closed Access

TsAFN: A two-stage adaptive fusion network for multimodal sentiment analysis
Jiaqi Liu, Yong Wang, Jing Yang, et al.
Applied Intelligence (2025) Vol. 55, Iss. 8
Closed Access

A graph convolution-based heterogeneous fusion network for multimodal sentiment analysis
Tong Zhao, Junjie Peng, Yansong Huang, et al.
Applied Intelligence (2023) Vol. 53, Iss. 24, pp. 30455-30468
Closed Access | Times Cited: 9

CMACF: Transformer-based cross-modal attention cross-fusion model for systemic lupus erythematosus diagnosis combining Raman spectroscopy, FTIR spectroscopy, and metabolomics
Xuguang Zhou, Chen Chen, Xiaoyi Lv, et al.
Information Processing & Management (2024) Vol. 61, Iss. 6, pp. 103804-103804
Open Access | Times Cited: 3

3D residual-attention-deep-network-based childhood epilepsy syndrome classification
Yuanmeng Feng, Runze Zheng, Xiaonan Cui, et al.
Knowledge-Based Systems (2022) Vol. 248, pp. 108856-108856
Closed Access | Times Cited: 14

Affective Interaction: Attentive Representation Learning for Multi-Modal Sentiment Classification
Yazhou Zhang, Prayag Tiwari, Lu Rong, et al.
ACM Transactions on Multimedia Computing Communications and Applications (2022) Vol. 18, Iss. 3s, pp. 1-23
Closed Access | Times Cited: 13

Mixture of Attention Variants for Modal Fusion in Multi-Modal Sentiment Analysis
Chao He, Xinghua Zhang, Dongqing Song, et al.
Big Data and Cognitive Computing (2024) Vol. 8, Iss. 2, pp. 14-14
Open Access | Times Cited: 2

Transformer-based adaptive contrastive learning for multimodal sentiment analysis
Yifan Hu, Xi Huang, Xianbing Wang, et al.
Multimedia Tools and Applications (2024)
Closed Access | Times Cited: 2

BVA-Transformer: Image-text multimodal classification and dialogue model architecture based on Blip and visual attention mechanism
Kaiyu Zhang, Fei Wu, Guowei Zhang, et al.
Displays (2024) Vol. 83, pp. 102710-102710
Closed Access | Times Cited: 2

Text-centered cross-sample fusion network for multimodal sentiment analysis
Qionghao Huang, Jili Chen, Changqin Huang, et al.
Multimedia Systems (2024) Vol. 30, Iss. 4
Closed Access | Times Cited: 2

Multimodal Emotion Recognition and Sentiment Analysis Using Masked Attention and Multimodal Interaction
Tatiana G. Voloshina, Olesia Makhnytkina
(2023), pp. 309-317
Closed Access | Times Cited: 5

Target-Oriented Sentiment Classification with Sequential Cross-Modal Semantic Graph
Yu‐Feng Huang, Zhuo Chen, Jiaoyan Chen, et al.
Lecture notes in computer science (2023), pp. 587-599
Closed Access | Times Cited: 5

KianNet: A Violence Detection Model Using an Attention-Based CNN-LSTM Structure
Soheil Vosta, Kin Choong Yow
IEEE Access (2023) Vol. 12, pp. 2198-2209
Open Access | Times Cited: 4

EmotionCast: An Emotion-Driven Intelligent Broadcasting System for Dynamic Camera Switching
Xinyi Zhang, Xinran Ba, Feng Hu, et al.
Sensors (2024) Vol. 24, Iss. 16, pp. 5401-5401
Open Access | Times Cited: 1

Frame-level nonverbal feature enhancement based sentiment analysis
Cangzhi Zheng, Junjie Peng, Lan Wang, et al.
Expert Systems with Applications (2024) Vol. 258, pp. 125148-125148
Closed Access | Times Cited: 1

CTHFNet: contrastive translation and hierarchical fusion network for text–video–audio sentiment analysis
Qiaohong Chen, Shufan Xie, Xianwen Fang, et al.
The Visual Computer (2024)
Closed Access | Times Cited: 1

Semantic-Driven Crossmodal Fusion for Multimodal Sentiment Analysis
Pingshan Liu, Zhaoyang Wang, Fu Jie Huang
International Journal on Semantic Web and Information Systems (2024) Vol. 20, Iss. 1, pp. 1-26
Open Access | Times Cited: 1

TAC-Trimodal Affective Computing: Principles, integration process, affective detection, challenges, and solutions
Hussein Farooq Tayeb Al-Saadawi, Bihter Daş, Resul Daş
Displays (2024) Vol. 83, pp. 102731-102731
Closed Access | Times Cited: 1

A transformer-encoder-based multimodal multi-attention fusion network for sentiment analysis
Cong Liu, Yong Wang, Jing Yang
Applied Intelligence (2024) Vol. 54, Iss. 17-18, pp. 8415-8441
Closed Access | Times Cited: 1

Multi-task learning and mutual information maximization with crossmodal transformer for multimodal sentiment analysis
Shi Yang, Jinglang Cai, Lei Liao
Journal of Intelligent Information Systems (2024)
Closed Access | Times Cited: 1

Learning fine-grained representation with token-level alignment for multimodal sentiment analysis
Xiang Li, Haijun Zhang, Zhi-Qiang Dong, et al.
Expert Systems with Applications (2024), pp. 126274-126274
Closed Access | Times Cited: 1
