OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the work's "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
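
Everything shown on this page can also be retrieved programmatically from the public OpenAlex API. The sketch below (Python, using the requests library) fetches the same citing-articles listing via the API's "cites:" filter, sorted by citation count and paginated 25 per page. Note that the work ID is a placeholder, not the actual OpenAlex ID of the requested article; look it up first via the API's search endpoint.

    import requests

    # Placeholder OpenAlex work ID for the requested article (illustrative only);
    # find the real ID via e.g. https://api.openalex.org/works?search=pre-trained+models
    WORK_ID = "W0000000000"

    # The `cites:` filter returns all works citing the given work; `per-page`
    # and `page` mirror the pagination options at the bottom of this page.
    params = {
        "filter": f"cites:{WORK_ID}",
        "sort": "cited_by_count:desc",
        "per-page": 25,
        "page": 1,
    }

    response = requests.get("https://api.openalex.org/works", params=params, timeout=30)
    response.raise_for_status()
    data = response.json()

    print(f"Total citing works: {data['meta']['count']}")
    for work in data["results"]:
        title = work.get("display_name", "(untitled)")
        cited_by = work.get("cited_by_count", 0)
        is_oa = work.get("open_access", {}).get("is_oa", False)
        print(f"{title} | Times Cited: {cited_by} | Open Access: {is_oa}")

Incrementing the "page" parameter walks through the same pages that the pagination controls below expose.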

Requested Article:

Pre-trained models: Past, present and future
Xu Han, Zhengyan Zhang, Ning Ding, et al.
AI Open (2021) Vol. 2, pp. 225-250
Open Access | Times Cited: 655

Showing 1-25 of 655 citing articles:

Transformers in Time Series: A Survey
Qingsong Wen, Tian Zhou, Chaoli Zhang, et al.
(2023), pp. 6778-6786
Open Access | Times Cited: 505

Parameter-efficient fine-tuning of large-scale pre-trained language models
Ning Ding, Yujia Qin, Guang Yang, et al.
Nature Machine Intelligence (2023) Vol. 5, Iss. 3, pp. 220-235
Open Access | Times Cited: 334

A systematic evaluation of large language models of code
Frank F. Xu, Uri Alon, Graham Neubig, et al.
(2022), pp. 1-10
Open Access | Times Cited: 288

GPT understands, too
Xiao Liu, Yanan Zheng, Zhengxiao Du, et al.
AI Open (2023) Vol. 5, pp. 208-215
Open Access | Times Cited: 252

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
Shengding Hu, Ning Ding, Huadong Wang, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 183

Pre-Trained Language Models and Their Applications
Haifeng Wang, Jiwei Li, Hua Wu, et al.
Engineering (2022) Vol. 25, pp. 51-65
Open Access | Times Cited: 180

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 176

A comprehensive survey on pretrained foundation models: a history from BERT to ChatGPT
Ce Zhou, Qian Li, Chen Li, et al.
International Journal of Machine Learning and Cybernetics (2024)
Closed Access | Times Cited: 132

VLP: A Survey on Vision-language Pre-training
Feilong Chen, Duzhen Zhang, Minglun Han, et al.
Deleted Journal (2023) Vol. 20, Iss. 1, pp. 38-56
Open Access | Times Cited: 131

A survey of GPT-3 family large language models including ChatGPT and GPT-4
Katikapalli Subramanyam Kalyan
Natural Language Processing Journal (2023) Vol. 6, pp. 100048-100048
Open Access | Times Cited: 129

OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding, Shengding Hu, Weilin Zhao, et al.
(2022)
Open Access | Times Cited: 128

A review of ensemble learning and data augmentation models for class imbalanced problems: Combination, implementation and evaluation
Azal Ahmad Khan, Omkar Chaudhari, Rohitash Chandra
Expert Systems with Applications (2023) Vol. 244, pp. 122778-122778
Open Access | Times Cited: 128

Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
Xiao Wang, Guangyao Chen, Guangwu Qian, et al.
Deleted Journal (2023) Vol. 20, Iss. 4, pp. 447-482
Open Access | Times Cited: 96

Applications of transformer-based language models in bioinformatics: a survey
Shuang Zhang, Rui Fan, Yuti Liu, et al.
Bioinformatics Advances (2023) Vol. 3, Iss. 1
Open Access | Times Cited: 95

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, et al.
Research Square (Research Square) (2022)
Open Access | Times Cited: 93

Auditing large language models: a three-layered approach
Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, et al.
AI and Ethics (2023) Vol. 4, Iss. 4, pp. 1085-1115
Open Access | Times Cited: 85

An extensive study on pre-trained models for program understanding and generation
Zhengran Zeng, Hanzhuo Tan, Haotian Zhang, et al.
(2022), pp. 39-51
Closed Access | Times Cited: 83

The effects of artificial intelligence applications in educational settings: Challenges and strategies
Omar Ali, Peter Murray, Mujtaba M. Momin, et al.
Technological Forecasting and Social Change (2023) Vol. 199, pp. 123076-123076
Open Access | Times Cited: 79

Prompt-learning for Fine-grained Entity Typing
Ning Ding, Yulin Chen, Xu Han, et al.
(2022)
Open Access | Times Cited: 77

A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals
Zheni Zeng, Yuan Yao, Zhiyuan Liu, et al.
Nature Communications (2022) Vol. 13, Iss. 1
Open Access | Times Cited: 74

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Laura N. Driscoll, Krishna V. Shenoy, David Sussillo
bioRxiv (Cold Spring Harbor Laboratory) (2022)
Open Access | Times Cited: 74

Reasoning with Language Model Prompting: A Survey
Shuofei Qiao, Yixin Ou, Ningyu Zhang, et al.
(2023)
Open Access | Times Cited: 70

AI literacy and its implications for prompt engineering strategies
Nils Knoth, Antonia Tolzin, Andreas Janson, et al.
Computers and Education Artificial Intelligence (2024) Vol. 6, pp. 100225-100225
Open Access | Times Cited: 67

Foundation and large language models: fundamentals, challenges, opportunities, and social impacts
Devon Myers, Rami Mohawesh, Venkata Ishwarya Chellaboina, et al.
Cluster Computing (2023) Vol. 27, Iss. 1, pp. 1-26
Closed Access | Times Cited: 56

Page 1 - Next Page
