OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count will open the citing-articles listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.
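If you'd rather pull this data programmatically, a listing like this one can be reproduced against the public OpenAlex REST API: the works endpoint accepts a cites: filter that returns the articles citing a given work. Below is a minimal Python sketch; the work ID and mailto address are placeholders you would replace with the cited article's OpenAlex ID and your own email, and the field names follow the OpenAlex works schema as I understand it, so verify them against the current API docs.

import requests

# Placeholder OpenAlex work ID for the cited article; replace with the
# real ID (visible in the article's OpenAlex URL, e.g. ".../works/W...").
CITED_WORK_ID = "W0000000000"

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{CITED_WORK_ID}",  # works that cite this article
        "per-page": 25,                      # matches the 25-per-page listing here
        "page": 1,
        "mailto": "you@example.com",         # placeholder; opts into the polite pool
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    # best_oa_location is null for closed-access works.
    oa = work.get("best_oa_location") or {}
    print(
        work["display_name"],
        work["publication_year"],
        f"Times Cited: {work['cited_by_count']}",
        oa.get("landing_page_url", "Closed Access"),
        sep=" | ",
    )

Paging through the remaining results is just a matter of incrementing the page parameter, which is what the pagination links at the bottom of this listing do.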

Requested Article:

TETFN: A text enhanced transformer fusion network for multimodal sentiment analysis
Di Wang, Xutong Guo, Yumin Tian, et al.
Pattern Recognition (2022) Vol. 136, Article 109259
Closed Access | Times Cited: 109

Showing 1-25 of 109 citing articles:

A Review of Key Technologies for Emotion Analysis Using Multimodal Information
Xianxun Zhu, Chaopeng Guo, Heyang Feng, et al.
Cognitive Computation (2024) Vol. 16, Iss. 4, pp. 1504-1530
Closed Access | Times Cited: 22

FrameERC: Framelet Transform Based Multimodal Graph Neural Networks for Emotion Recognition in Conversation
Ming Li, Jiandong Shi, Lu Bai, et al.
Pattern Recognition (2025) Vol. 161, Article 111340
Closed Access | Times Cited: 2

Token-disentangling Mutual Transformer for multimodal emotion recognition
Guanghao Yin, Yuanyuan Liu, Tengfei Liu, et al.
Engineering Applications of Artificial Intelligence (2024) Vol. 133, Article 108348
Closed Access | Times Cited: 10

FDR-MSA: Enhancing multimodal sentiment analysis through feature disentanglement and reconstruction
Yao Fu, Biao Huang, Yujun Wen, et al.
Knowledge-Based Systems (2024) Vol. 297, Article 111965
Open Access | Times Cited: 10

Video multimodal sentiment analysis using cross-modal feature translation and dynamical propagation
Chenquan Gan, Yu Tang, Xiang Fu, et al.
Knowledge-Based Systems (2024) Vol. 299, Article 111982
Closed Access | Times Cited: 9

A vision and language hierarchical alignment for multimodal aspect-based sentiment analysis
Wang Zou, Xia Sun, Qiang Lu, et al.
Pattern Recognition (2025), Article 111369
Closed Access | Times Cited: 1

Evolving techniques in sentiment analysis: a comprehensive review
M. R. Pavan Kumar, Lal Khan, Hsien-Tsung Chang
PeerJ Computer Science (2025) Vol. 11, Article e2592
Open Access | Times Cited: 1

TF-BERT: Tensor-based fusion BERT for multimodal sentiment analysis
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Neural Networks (2025) Vol. 185, Article 107222
Closed Access | Times Cited: 1

Multimodal negative sentiment recognition of online public opinion on public health emergencies based on graph convolutional networks and ensemble learning
Ziming Zeng, Shouqiang Sun, Qingqing Li
Information Processing & Management (2023) Vol. 60, Iss. 4, Article 103378
Closed Access | Times Cited: 21

Coordinated-joint translation fusion framework with sentiment-interactive graph convolutional networks for multimodal sentiment analysis
Qiang Lu, Xia Sun, Zhizezhang Gao, et al.
Information Processing & Management (2023) Vol. 61, Iss. 1, Article 103538
Closed Access | Times Cited: 20

TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
Jiehui Huang, Jun Zhou, Zhenchao Tang, et al.
Knowledge-Based Systems (2023) Vol. 285, Article 111346
Closed Access | Times Cited: 19

A feature-based restoration dynamic interaction network for multimodal sentiment analysis
Yufei Zeng, Zhixin Li, Zhenbin Chen, et al.
Engineering Applications of Artificial Intelligence (2023) Vol. 127, Article 107335
Closed Access | Times Cited: 18

Multi-grained fusion network with self-distillation for aspect-based multimodal sentiment analysis
Juan Yang, Yali Xiao, Xu Du
Knowledge-Based Systems (2024) Vol. 293, Article 111724
Closed Access | Times Cited: 8

EmoComicNet: A multi-task model for comic emotion recognition
Arpita Dutta, Samit Biswas, Amit Das
Pattern Recognition (2024) Vol. 150, Article 110261
Closed Access | Times Cited: 7

Hierarchical denoising representation disentanglement and dual-channel cross-modal-context interaction for multimodal sentiment analysis
Zuhe Li, Zhenwei Huang, Yushan Pan, et al.
Expert Systems with Applications (2024) Vol. 252, Article 124236
Open Access | Times Cited: 7

TCHFN: Multimodal sentiment analysis based on Text-Centric Hierarchical Fusion Network
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Knowledge-Based Systems (2024) Vol. 300, Article 112220
Closed Access | Times Cited: 7

Hybrid cross-modal interaction learning for multimodal sentiment analysis
Yanping Fu, Zhiyuan Zhang, Ruidi Yang, et al.
Neurocomputing (2023) Vol. 571, Article 127201
Closed Access | Times Cited: 16

Multimodal sentiment analysis: A survey
Songning Lai, Xifeng Hu, Haoxuan Xu, et al.
Displays (2023) Vol. 80, Article 102563
Open Access | Times Cited: 15

Similar modality completion-based multimodal sentiment analysis under uncertain missing modalities
Yuhang Sun, Zhizhong Liu, Quan Z. Sheng, et al.
Information Fusion (2024) Vol. 110, Article 102454
Closed Access | Times Cited: 6

Multi-schema prompting powered token-feature woven attention network for short text classification
Zijing Cai, Hua Zhang, Peiqian Zhan, et al.
Pattern Recognition (2024) Vol. 156, Article 110782
Closed Access | Times Cited: 5

Co-space Representation Interaction Network for multimodal sentiment analysis
Hang Shi, Yuanyuan Pu, Zhengpeng Zhao, et al.
Knowledge-Based Systems (2023) Vol. 283, Article 111149
Closed Access | Times Cited: 13

Progressive modality-complement aggregative multitransformer for domain multi-modal neural machine translation
Junjun Guo, Zhenyu Hou, Yantuan Xian, et al.
Pattern Recognition (2024) Vol. 149, Article 110294
Closed Access | Times Cited: 4

Assessing learners’ English public speaking anxiety with multimodal deep learning technologies
Chunping Zheng, Tingting Zhang, Xu Chen, et al.
Computer Assisted Language Learning (2024), pp. 1-29
Closed Access | Times Cited: 4

CCMA: CapsNet for audio–video sentiment analysis using cross-modal attention
Haibin Li, A. Q. Guo, Yaqian Li
The Visual Computer (2024)
Closed Access | Times Cited: 4

AtCAF: Attention-based causality-aware fusion network for multimodal sentiment analysis
Changqin Huang, Jili Chen, Qionghao Huang, et al.
Information Fusion (2024), Article 102725
Closed Access | Times Cited: 4

Page 1 - Next Page
