OpenAlex Citation Counts


OpenAlex, named after the Library of Alexandria, is an open-access bibliographic catalogue of scientific papers, authors, and institutions. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Click an article title to navigate to the article as listed in CrossRef. Click an Open Access link to navigate to the work's "best Open Access location". Click a citation count to open this same listing for that article. Basic pagination options appear at the bottom of the page.
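A listing like this one can also be pulled directly from the OpenAlex API, which exposes citing works through the `cites` filter on the `/works` endpoint and the same `page`/`per-page` pagination shown below. A minimal sketch using only the Python standard library; the work ID in the docstring is a placeholder, not the actual OpenAlex ID of the requested article:

```python
import json
import urllib.parse
import urllib.request

OPENALEX_API = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex query for works citing the given work.

    `work_id` is an OpenAlex work ID, e.g. "W123" (placeholder).
    `page` and `per-page` mirror the pagination at the bottom of this listing;
    sorting by cited_by_count descending matches the order shown here.
    """
    params = {
        "filter": f"cites:{work_id}",
        "sort": "cited_by_count:desc",
        "page": str(page),
        "per-page": str(per_page),
    }
    return f"{OPENALEX_API}?{urllib.parse.urlencode(params)}"

def fetch_citing_works(work_id: str, page: int = 1):
    """Fetch one page of citing works (makes a network call)."""
    with urllib.request.urlopen(citing_works_url(work_id, page)) as resp:
        data = json.load(resp)
    # Each result carries display_name, publication_year, cited_by_count,
    # and open-access details under "open_access" / "best_oa_location".
    return [(w["display_name"], w["cited_by_count"]) for w in data["results"]]
```

The first page of 25 results for a given work ID would come from `fetch_citing_works("W123", page=1)`; increasing `page` walks the same pagination as the "Next Page" control below.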

Requested Article:

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, et al.
(2020)
Open Access | Times Cited: 411

Showing 1-25 of 411 citing articles:

On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1565

A Survey on Evaluation of Large Language Models
Yupeng Chang, Xu Wang, Jindong Wang, et al.
ACM Transactions on Intelligent Systems and Technology (2024) Vol. 15, Iss. 3, pp. 1-45
Open Access | Times Cited: 731

Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts
J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, et al.
(2023), pp. 1-21
Open Access | Times Cited: 418

Holistic Evaluation of Language Models
Rishi Bommasani, Percy Liang, Tony Lee
Annals of the New York Academy of Sciences (2023) Vol. 1525, Iss. 1, pp. 140-146
Open Access | Times Cited: 297

A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, et al.
High-Confidence Computing (2024) Vol. 4, Iss. 2, pp. 100211-100211
Open Access | Times Cited: 243

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities
Mina Lee, Percy Liang, Qian Yang
CHI Conference on Human Factors in Computing Systems (2022)
Open Access | Times Cited: 225

Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Open Access | Times Cited: 211

Large pre-trained language models contain human-like biases of what is right and wrong to do
Patrick Schramowski, Cigdem Turan, Nico Andersen, et al.
Nature Machine Intelligence (2022) Vol. 4, Iss. 3, pp. 258-268
Closed Access | Times Cited: 193

GeDi: Generative Discriminator Guided Sequence Generation
Ben Krause, Akhilesh Gotmare, Bryan McCann, et al.
(2021)
Open Access | Times Cited: 192

Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze
Transactions of the Association for Computational Linguistics (2021) Vol. 9, pp. 1408-1424
Open Access | Times Cited: 176

Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
Jesse Dodge, Maarten Sap, Ana Marasović, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 157

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, et al.
ACM Transactions on Intelligent Systems and Technology (2022) Vol. 14, Iss. 1, pp. 1-59
Open Access | Times Cited: 135

Red Teaming Language Models with Language Models
Ethan Perez, Saffron Huang, Francis Song, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 133

Predictability and Surprise in Large Generative Models
Deep Ganguli, Danny Hernandez, Liane Lovitt, et al.
2022 ACM Conference on Fairness, Accountability, and Transparency (2022), pp. 1747-1764
Open Access | Times Cited: 120

Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, et al.
(2021)
Open Access | Times Cited: 119

Large AI Models in Health Informatics: Applications, Challenges, and the Future
Jianing Qiu, Lin Li, Jiankai Sun, et al.
IEEE Journal of Biomedical and Health Informatics (2023) Vol. 27, Iss. 12, pp. 6074-6087
Open Access | Times Cited: 116

DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Alisa Liu, Maarten Sap, Ximing Lu, et al.
(2021)
Open Access | Times Cited: 115

ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 115

Human-level play in the game of Diplomacy by combining language models with strategic reasoning
Anton Bakhtin, Noam Brown, Emily Dinan, et al.
Science (2022) Vol. 378, Iss. 6624, pp. 1067-1074
Closed Access | Times Cited: 109

Multimodal datasets: misogyny, pornography, and malignant stereotypes
Abeba Birhane, Vinay Uday Prabhu, Emmanuel Kahembwe
arXiv (Cornell University) (2021)
Open Access | Times Cited: 108

Challenges in Automated Debiasing for Toxic Language Detection
Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, et al.
(2021)
Open Access | Times Cited: 105

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, et al.
Computational Linguistics (2024) Vol. 50, Iss. 3, pp. 1097-1179
Open Access | Times Cited: 105

MERLOT RESERVE: Neural Script Knowledge through Vision and Language and Sound
Rowan Zellers, Jiasen Lu, Ximing Lu, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 16354-16366
Open Access | Times Cited: 100

WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, et al.
(2022)
Open Access | Times Cited: 96

Auditing large language models: a three-layered approach
Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, et al.
AI and Ethics (2023) Vol. 4, Iss. 4, pp. 1085-1115
Open Access | Times Cited: 85

