OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Finally, basic pagination options appear at the bottom of the page.

Requested Article:

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, et al.
(2020)
Open Access | Times Cited: 411

Showing 26-50 of 411 citing articles:

Toxicity in chatgpt: Analyzing persona-assigned language models
Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, et al.
(2023)
Open Access | Times Cited: 82

Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models
Patrick Schramowski, Manuel Brack, Björn Deiseroth, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 65

Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI
Mahyar Abbasian, Elahe Khatibi, Iman Azimi, et al.
npj Digital Medicine (2024) Vol. 7, Iss. 1
Open Access | Times Cited: 65

Teaching Small Language Models to Reason
Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adámek, et al.
(2023), pp. 1773-1781
Open Access | Times Cited: 59

Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation
Jizhi Zhang, Keqin Bao, Yang Zhang, et al.
(2023), pp. 993-999
Open Access | Times Cited: 58

Give us the Facts: Enhancing Large Language Models With Knowledge Graphs for Fact-Aware Language Modeling
Linyao Yang, Hongyang Chen, Zhao Li, et al.
IEEE Transactions on Knowledge and Data Engineering (2024) Vol. 36, Iss. 7, pp. 3091-3110
Open Access | Times Cited: 54

A Holistic Approach to Undesired Content Detection in the Real World
Todor Markov, Chong Zhang, Sandhini Agarwal, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 12, pp. 15009-15018
Open Access | Times Cited: 53

Prompting PaLM for Translation: Assessing Strategies and Performance
David Vilar, Markus Freitag, Colin Cherry, et al.
(2023)
Open Access | Times Cited: 50

Auditing of AI: Legal, Ethical and Technical Approaches
Jakob Mökander
Deleted Journal (2023) Vol. 2, Iss. 3
Open Access | Times Cited: 44

A Review of Current Trends, Techniques, and Challenges in Large Language Models (LLMs)
Rajvardhan Patil, Venkat N. Gudivada
Applied Sciences (2024) Vol. 14, Iss. 5, pp. 2074-2074
Open Access | Times Cited: 38

Foundations & Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
Paul Pu Liang, Amir Zadeh, Louis‐Philippe Morency
ACM Computing Surveys (2024) Vol. 56, Iss. 10, pp. 1-42
Open Access | Times Cited: 29

AI Psychometrics: Assessing the Psychological Profiles of Large Language Models Through Psychometric Inventories
Max Pellert, Clemens M. Lechner, Claudia Wagner, et al.
Perspectives on Psychological Science (2024) Vol. 19, Iss. 5, pp. 808-826
Open Access | Times Cited: 28

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
Daniel Kang, Xuechen Li, Ion Stoica, et al.
(2024), pp. 132-143
Open Access | Times Cited: 27

A Survey of Text Classification With Transformers: How Wide? How Large? How Long? How Accurate? How Expensive? How Safe?
John Fields, Kevin Chovanec, Praveen Madiraju
IEEE Access (2024) Vol. 12, pp. 6518-6531
Open Access | Times Cited: 25

MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, et al.
(2024)
Open Access | Times Cited: 25

Visual Adversarial Examples Jailbreak Aligned Large Language Models
Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 19, pp. 21527-21536
Open Access | Times Cited: 25

Addressing 6 challenges in generative AI for digital health: A scoping review
Tara Templin, Monika W. Perez, Sean Sylvia, et al.
PLOS Digital Health (2024) Vol. 3, Iss. 5, pp. e0000503-e0000503
Open Access | Times Cited: 24

Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
Liangming Pan, Michael Saxon, Wenda Xu, et al.
Transactions of the Association for Computational Linguistics (2024) Vol. 12, pp. 484-506
Open Access | Times Cited: 19

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Zichao Lin, Shuyan Guan, Wending Zhang, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 9
Open Access | Times Cited: 19

AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways
Zehang Deng, Yongjian Guo, Changzhou Han, et al.
ACM Computing Surveys (2025)
Open Access | Times Cited: 6

Rethinking machine unlearning for large language models
Sijia Liu, Yuanshun Yao, Jinghan Jia, et al.
Nature Machine Intelligence (2025)
Closed Access | Times Cited: 4

GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
Kang Min Yoo, Dongju Park, Jaewook Kang, et al.
(2021)
Open Access | Times Cited: 102

Intrinsic Bias Metrics Do Not Correlate with Application Bias
Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, et al.
(2021)
Open Access | Times Cited: 83

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Mor Geva, Avi Caciularu, Kevin I‐Kai Wang, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 69

Challenges in Detoxifying Language Models
Johannes Welbl, Amelia Glaese, Jonathan Uesato, et al.
(2021)
Open Access | Times Cited: 66
