OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title navigates to the article as listed in CrossRef. Clicking an Open Access link navigates to its "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, basic pagination options appear at the bottom of the page.

Requested Article:

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, et al.
(2020)
Open Access | Times Cited: 1690

Showing 51-75 of 1690 citing articles:

Overview of the Ninth Dialog System Technology Challenge: DSTC9
Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D’Haro, et al.
IEEE/ACM Transactions on Audio Speech and Language Processing (2024) Vol. 32, pp. 4066-4076
Open Access | Times Cited: 29

A Survey of AI-Generated Content (AIGC)
Y. Charles Cao, Sherry Li, Yixin Liu, et al.
ACM Computing Surveys (2024)
Open Access | Times Cited: 18

TCM-GPT: Efficient pre-training of large language models for domain adaptation in Traditional Chinese Medicine
Guoxing Yang, Xiaohong Liu, Jian‐Yu Shi, et al.
Computer Methods and Programs in Biomedicine Update (2024) Vol. 6, pp. 100158-100158
Open Access | Times Cited: 17

The rise and potential of large language model based agents: a survey
Zhiheng Xi, Wen-Xiang Chen, Xin Hua Guo, et al.
Science China Information Sciences (2025) Vol. 68, Iss. 2
Closed Access | Times Cited: 16

A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics
Kai He, Rui Mao, Qika Lin, et al.
Information Fusion (2025), pp. 102963-102963
Open Access | Times Cited: 13

Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management
Xiahua Wei, Naveen Kumar, Han Zhang
Information & Management (2025) Vol. 62, Iss. 2, pp. 104103-104103
Closed Access | Times Cited: 6

Deep Learning for Economists
Melissa Dell
Journal of Economic Literature (2025) Vol. 63, Iss. 1, pp. 5-58
Closed Access | Times Cited: 3

Open challenges and opportunities in federated foundation models towards biomedical healthcare
Xingyu Li, Peng Lu, Yu‐Ping Wang, et al.
BioData Mining (2025) Vol. 18, Iss. 1
Open Access | Times Cited: 2

Towards Lifelong Learning of Large Language Models: A Survey
Junhao Zheng, Shengjie Qiu, Chengming Shi, et al.
ACM Computing Surveys (2025)
Open Access | Times Cited: 2

Pretrained Transformers for Text Ranking: BERT and Beyond
Jimmy Lin, Rodrigo Nogueira, Andrew Yates
Synthesis lectures on human language technologies (2021) Vol. 14, Iss. 4, pp. 1-325
Closed Access | Times Cited: 104

DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue
Shikib Mehri, Mihail Eric, Dilek Hakkani‐Tür
arXiv (Cornell University) (2020)
Open Access | Times Cited: 103

CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder, Nadav Oved, Uri Shalit, et al.
Computational Linguistics (2021), pp. 1-54
Open Access | Times Cited: 99

MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare
Shaoxiong Ji, Tianlin Zhang, Luna Ansari, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 99

mT5: A massively multilingual pre-trained text-to-text transformer
Linting Xue, Noah Constant, Adam P. Roberts, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 94

Overview of CONSTRAINT 2021 Shared Tasks: Detecting English COVID-19 Fake News and Hindi Hostile Posts
Parth Patwa, Mohit Bhardwaj, Vineeth Guptha, et al.
Communications in computer and information science (2021), pp. 42-53
Closed Access | Times Cited: 94

CLEVE: Contrastive Pre-training for Event Extraction
Ziqi Wang, Xiaozhi Wang, Xu Han, et al.
(2021)
Open Access | Times Cited: 91

Generative Data Augmentation for Commonsense Reasoning
Yiben Yang, Chaitanya Malaviya, Jared Fernandez, et al.
(2020), pp. 1008-1025
Open Access | Times Cited: 90

Improving and Simplifying Pattern Exploiting Training
Derek Tam, Rakesh R. Menon, Mohit Bansal, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 4980-4991
Open Access | Times Cited: 90

CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review
Dan Hendrycks, Collin Burns, Anya Chen, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 89

Learning to Pre-train Graph Neural Networks
Yuanfu Lu, Xunqiang Jiang, Yuan Fang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 5, pp. 4276-4284
Open Access | Times Cited: 89

Understanding tables with intermediate pre-training
Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller
(2020)
Open Access | Times Cited: 88

CrossNER: Evaluating Cross-Domain Named Entity Recognition
Zihan Liu, Yan Xu, Tiezheng Yu, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 15, pp. 13452-13460
Open Access | Times Cited: 88
