OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
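
If you prefer to pull these data programmatically, a listing like this one can be reproduced from the public OpenAlex REST API by filtering works on the ID of the cited article. The sketch below is a minimal illustration, not the code behind this page; the work ID is a placeholder you would replace with the actual OpenAlex ID of the AOBERT paper (findable by searching its title or DOI on openalex.org).

import requests

# Placeholder OpenAlex work ID for the requested article (hypothetical;
# replace with the real ID of the cited paper).
WORK_ID = "W0000000000"

# Ask the /works endpoint for everything that cites this work,
# 25 results per page to mirror this listing's pagination.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 1},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(f"Showing {len(data['results'])} of {data['meta']['count']} citing articles")
for work in data["results"]:
    oa = "Open Access" if work["open_access"]["is_oa"] else "Closed Access"
    print(f"{work['display_name']} ({work['publication_year']}) "
          f"| {oa} | Times Cited: {work['cited_by_count']}")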

Requested Article:

AOBERT: All-modalities-in-One BERT for multimodal sentiment analysis
Kyeong-Hun Kim, Sanghyun Park
Information Fusion (2022) Vol. 92, pp. 37-45
Closed Access | Times Cited: 66

Showing 1-25 of 66 citing articles:

Artificial intelligence powered Metaverse: analysis, challenges and future perspectives
Mona Soliman, Eman Ahmed, Ashraf Darwish, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 2
Open Access | Times Cited: 52

A Review of Key Technologies for Emotion Analysis Using Multimodal Information
Xianxun Zhu, Chaopeng Guo, Heyang Feng, et al.
Cognitive Computation (2024) Vol. 16, Iss. 4, pp. 1504-1530
Closed Access | Times Cited: 22

TF-BERT: Tensor-based fusion BERT for multimodal sentiment analysis
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Neural Networks (2025) Vol. 185, pp. 107222-107222
Closed Access | Times Cited: 1

Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
Kun Bu, Yuanchao Liu, Xiaolong Ju
Knowledge-Based Systems (2023) Vol. 283, pp. 111148-111148
Closed Access | Times Cited: 22

Multi-level correlation mining framework with self-supervised label generation for multimodal sentiment analysis
Zuhe Li, Qingbing Guo, Yushan Pan, et al.
Information Fusion (2023) Vol. 99, pp. 101891-101891
Closed Access | Times Cited: 21

Coordinated-joint translation fusion framework with sentiment-interactive graph convolutional networks for multimodal sentiment analysis
Qiang Lu, Xia Sun, Zhizezhang Gao, et al.
Information Processing & Management (2023) Vol. 61, Iss. 1, pp. 103538-103538
Closed Access | Times Cited: 20

TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
Jiehui Huang, Jun Zhou, Zhenchao Tang, et al.
Knowledge-Based Systems (2023) Vol. 285, pp. 111346-111346
Closed Access | Times Cited: 19

Disentanglement Translation Network for multimodal sentiment analysis
Ying Zeng, Wenjun Yan, Sijie Mai, et al.
Information Fusion (2023) Vol. 102, pp. 102031-102031
Closed Access | Times Cited: 18

Hierarchical denoising representation disentanglement and dual-channel cross-modal-context interaction for multimodal sentiment analysis
Zuhe Li, Zhenwei Huang, Yushan Pan, et al.
Expert Systems with Applications (2024) Vol. 252, pp. 124236-124236
Open Access | Times Cited: 7

Corporate financial distress prediction using the risk-related information content of annual reports
Petr Hájek, Michal Munk
Information Processing & Management (2024) Vol. 61, Iss. 5, pp. 103820-103820
Closed Access | Times Cited: 7

TCHFN: Multimodal sentiment analysis based on Text-Centric Hierarchical Fusion Network
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Knowledge-Based Systems (2024) Vol. 300, pp. 112220-112220
Closed Access | Times Cited: 7

Hybrid cross-modal interaction learning for multimodal sentiment analysis
Yanping Fu, Zhiyuan Zhang, Ruidi Yang, et al.
Neurocomputing (2023) Vol. 571, pp. 127201-127201
Closed Access | Times Cited: 16

Similar modality completion-based multimodal sentiment analysis under uncertain missing modalities
Yuhang Sun, Zhizhong Liu, Quan Z. Sheng, et al.
Information Fusion (2024) Vol. 110, pp. 102454-102454
Closed Access | Times Cited: 6

Multimodal consistency-specificity fusion based on information bottleneck for sentiment analysis
Wei Liu, Shenchao Cao, Sun Zhang
Journal of King Saud University - Computer and Information Sciences (2024) Vol. 36, Iss. 2, pp. 101943-101943
Open Access | Times Cited: 5

Spatio-temporal fusion and contrastive learning for urban flow prediction
Xu Zhang, Yongshun Gong, Chengqi Zhang, et al.
Knowledge-Based Systems (2023) Vol. 282, pp. 111104-111104
Closed Access | Times Cited: 12

VLP2MSA: Expanding vision-language pre-training to multimodal sentiment analysis
Guofeng Yi, Cunhang Fan, Kang Zhu, et al.
Knowledge-Based Systems (2023) Vol. 283, pp. 111136-111136
Closed Access | Times Cited: 11

Sentiment analysis of social media comments based on multimodal attention fusion network
Ziyu Liu, Tao Yang, Wen Chen, et al.
Applied Soft Computing (2024) Vol. 164, pp. 112011-112011
Closed Access | Times Cited: 4

AtCAF: Attention-based causality-aware fusion network for multimodal sentiment analysis
Changqin Huang, Jili Chen, Qionghao Huang, et al.
Information Fusion (2024), pp. 102725-102725
Closed Access | Times Cited: 4

MGC: A modal mapping coupling and gate-driven contrastive learning approach for multimodal intent recognition
Mengsheng Wang, Lun Xie, Chiqin Li, et al.
Expert Systems with Applications (2025), pp. 127631-127631
Closed Access

Multi-Task Supervised Alignment Pre-Training for Few-Shot Multimodal Sentiment Analysis
Jianli Yang, Jiuxin Cao, Chengge Duan
Applied Sciences (2025) Vol. 15, Iss. 4, pp. 2095-2095
Open Access

Text-guided deep correlation mining and self-learning feature fusion framework for multimodal sentiment analysis
Minghui Zhu, Xianfei He, Baojun Qiao, et al.
Knowledge-Based Systems (2025) Vol. 315, pp. 113249-113249
Closed Access

ViTASA: New benchmark and methods for Vietnamese targeted aspect sentiment analysis for multiple textual domains
Khanh Quoc Tran, Quang Nhat Huynh, Lê Thi Tu Oanh, et al.
Computer Speech & Language (2025), pp. 101800-101800
Closed Access

An electric vehicle sales hybrid forecasting method based on improved sentiment analysis model and secondary decomposition
Jinpei Liu, Pan Hui, Rui Luo, et al.
Engineering Applications of Artificial Intelligence (2025) Vol. 150, pp. 110561-110561
Closed Access

Manifold knowledge-guided feature fusion network for multimodal sentiment analysis
Xingang Wang, Mengyi Wang, Hai Cui, et al.
Expert Systems with Applications (2025), pp. 127537-127537
Closed Access

Page 1 - Next Page
