OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location" recorded for that article. Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options. (If you'd rather query this data directly, a short sketch using the OpenAlex API follows.)
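
For readers who want to reproduce a listing like this programmatically, the Python sketch below queries the public OpenAlex works API for articles citing a given work, sorted by citation count. The work ID here is a placeholder, not the actual OpenAlex ID of the requested article; the field names follow the documented OpenAlex work schema.

import requests

# Minimal sketch: fetch one page of citing articles from OpenAlex.
# WORK_ID is a placeholder; look up the real ID at https://openalex.org.
OPENALEX_WORKS = "https://api.openalex.org/works"
WORK_ID = "W0000000000"  # placeholder OpenAlex work ID

def citing_articles(work_id, page=1, per_page=25):
    """Return one page of works that cite `work_id`, most-cited first."""
    params = {
        "filter": f"cites:{work_id}",
        "sort": "cited_by_count:desc",
        "per-page": per_page,
        "page": page,
    }
    resp = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

page1 = citing_articles(WORK_ID)
for work in page1["results"]:
    oa = work.get("best_oa_location") or {}
    print(work["display_name"], "| Times Cited:", work["cited_by_count"],
          "|", oa.get("landing_page_url", "Closed Access"))

The same page parameter drives the pagination options you'll find at the bottom of this page.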

Requested Article:

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, et al.
(2020)
Open Access | Times Cited: 1690

Showing 1-25 of 1690 citing articles:

On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1565

Text Data Augmentation for Deep Learning
Connor Shorten, Taghi M. Khoshgoftaar, Borko Furht
Journal of Big Data (2021) Vol. 8, Iss. 1
Open Access | Times Cited: 1436

mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer
Linting Xue, Noah Constant, Adam P. Roberts, et al.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Open Access | Times Cited: 1262

BERTweet: A pre-trained language model for English Tweets
Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen
(2020)
Open Access | Times Cited: 671

Pre-trained models: Past, present and future
Xu Han, Zhengyan Zhang, Ning Ding, et al.
AI Open (2021) Vol. 2, pp. 225-250
Open Access | Times Cited: 655

SUPERB: Speech Processing Universal PERformance Benchmark
Shu-Wen Yang, Po-Han Chi, Yung-Sung Chuang, et al.
Interspeech 2021 (2021)
Open Access | Times Cited: 464

Pre-trained models for natural language processing: A survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, et al.
Science China Technological Sciences (2020) Vol. 63, Iss. 10, pp. 1872-1897
Closed Access | Times Cited: 439

BERTimbau: Pretrained BERT Models for Brazilian Portuguese
Fábio Souza, Rodrigo Nogueira, Roberto Lotufo
Lecture Notes in Computer Science (2020), pp. 403-417
Closed Access | Times Cited: 437

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, et al.
(2020)
Open Access | Times Cited: 411

AdapterHub: A Framework for Adapting Transformers
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, et al.
(2020)
Open Access | Times Cited: 390

Deep Learning applications for COVID-19
Connor Shorten, Taghi M. Khoshgoftaar, Borko Furht
Journal of Big Data (2021) Vol. 8, Iss. 1
Open Access | Times Cited: 308

scGPT: toward building a foundation model for single-cell multi-omics using generative AI
Haotian Cui, Xiaoming Wang, Hassaan Maan, et al.
Nature Methods (2024) Vol. 21, Iss. 8, pp. 1470-1480
Open Access | Times Cited: 282

Fact or Fiction: Verifying Scientific Claims
David Wadden, Shanchuan Lin, Kyle Lo, et al.
(2020)
Open Access | Times Cited: 273

Measuring Mathematical Problem Solving With the MATH Dataset
Dan Hendrycks, Collin Burns, Saurav Kadavath, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 208

COVIDLies: Detecting COVID-19 Misinformation on Social Media
Tamanna Hossain, Robert L. Logan, Arjuna Ugarte, et al.
(2020)
Open Access | Times Cited: 200

AMMU: A survey of transformer-based biomedical pretrained language models
Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, S. Sangeetha
Journal of Biomedical Informatics (2021) Vol. 126, Article 103982
Open Access | Times Cited: 200

Large pre-trained language models contain human-like biases of what is right and wrong to do
Patrick Schramowski, Cigdem Turan, Nico Andersen, et al.
Nature Machine Intelligence (2022) Vol. 4, Iss. 3, pp. 258-268
Closed Access | Times Cited: 193

Neural Unsupervised Domain Adaptation in NLP—A Survey
Alan Ramponi, Barbara Plank
Proceedings of the 28th International Conference on Computational Linguistics (2020)
Open Access | Times Cited: 192

A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios
Michael A. Hedderich, Lukas Lange, Heike Adel, et al.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Open Access | Times Cited: 191

A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support
Ashish Sharma, Adam S. Miner, David C. Atkins, et al.
(2020)
Open Access | Times Cited: 176

Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze
Transactions of the Association for Computational Linguistics (2021) Vol. 9, pp. 1408-1424
Open Access | Times Cited: 176

Unsupervised Domain Clusters in Pretrained Language Models
Roee Aharoni, Yoav Goldberg
(2020), pp. 7747-7763
Open Access | Times Cited: 171

MatSciBERT: A materials domain language model for text mining and information extraction
Tanishq Gupta, Mohd Zaki, N. M. Anoop Krishnan, et al.
npj Computational Materials (2022) Vol. 8, Iss. 1
Open Access | Times Cited: 169

What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams
Di Jin, Eileen Pan, Nassim Oufattole, et al.
Applied Sciences (2021) Vol. 11, Iss. 14, Article 6421
Open Access | Times Cited: 168

HateBERT: Retraining BERT for Abusive Language Detection in English
Tommaso Caselli, Valerio Basile, Jelena Mitrović, et al.
(2021)
Open Access | Times Cited: 168

Page 1 - Next Page