OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scholarly papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the work's "best Open Access location". Clicking a citation count will open the citing-articles listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
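If you'd rather script this than click through it, the same listing can be pulled from the OpenAlex API. Below is a minimal Python sketch, assuming the `requests` library; the work ID is a hypothetical placeholder you'd replace with the requested article's actual OpenAlex ID (findable via the API's search, e.g. `https://api.openalex.org/works?search=RealToxicityPrompts`):

```python
import requests

# Hypothetical placeholder: replace with the requested article's real OpenAlex work ID.
WORK_ID = "W0000000000"

# Page 4 at 25 results per page corresponds to the "76-100" slice shown below.
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",   # works that cite the requested article
        "sort": "cited_by_count:desc",  # most-cited first
        "per-page": 25,
        "page": 4,
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    oa = work.get("best_oa_location") or {}  # null when the work is closed access
    print(work["display_name"])
    print(f"  Times Cited: {work['cited_by_count']}")
    print(f"  Best OA location: {oa.get('landing_page_url') or 'Closed Access'}")
```

For paging deeper than 10,000 results OpenAlex recommends cursor paging (`cursor=*`), but plain page numbers suffice for a listing of this size.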

Requested Article:

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Samuel Gehman, Suchin Gururangan, Maarten Sap, et al.
(2020)
Open Access | Times Cited: 411

Showing 76-100 of 411 citing articles:

What’s in the Box? An Analysis of Undesirable Content in the Common Crawl Corpus
Alexandra Sasha Luccioni, Joseph D. Viviano
(2021)
Open Access | Times Cited: 49

Bot-Adversarial Dialogue for Safe Conversational Agents
Jing Xu, Da Ju, Margaret Li, et al.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021), pp. 2950-2968
Open Access | Times Cited: 47

Just Say No: Analyzing the Stance of Neural Dialogue Generation in Offensive Contexts
Ashutosh Baheti, Maarten Sap, Alan Ritter, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 44

ProsocialDialog: A Prosocial Backbone for Conversational Agents
Hyunwoo Kim, Youngjae Yu, Liwei Jiang, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 36

Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Fatemehsadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022), pp. 401-415
Open Access | Times Cited: 34

Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations
Yu Meng, Yunyi Zhang, Jiaxin Huang, et al.
Proceedings of the ACM Web Conference 2022 (2022), pp. 3143-3152
Open Access | Times Cited: 33

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan, Dallas Card, Sarah K. Dreier, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 29

AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation
Chujie Zheng, Sahand Sabour, Jiaxin Wen, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 20

The future of standardised assessment: Validity and trust in algorithms for assessment and scoring
Cesare Aloisi
European Journal of Education (2023) Vol. 58, Iss. 1, pp. 98-110
Closed Access | Times Cited: 18

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation
Chandra Bhagavatula, Jena D. Hwang, Doug Downey, et al.
(2023), pp. 9614-9630
Open Access | Times Cited: 18

A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 17

Can We Edit Factual Knowledge by In-Context Learning?
Ce Zheng, Lei Li, Qingxiu Dong, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 17

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, et al.
(2023)
Open Access | Times Cited: 17

Requirements Satisfiability with In-Context Learning
Sarah Santos, Travis D. Breaux, Thomas Norton, et al.
(2024), pp. 168-179
Open Access | Times Cited: 8

Understanding LLMs: A comprehensive overview from training to inference
Yiheng Liu, Hao He, Tianle Han, et al.
Neurocomputing (2024), Art. no. 129190
Closed Access | Times Cited: 8

Large language model, AI and scientific research: why ChatGPT is only the beginning
Pietro Zangrossi, Massimo Martini, Francesco Guerrini, et al.
Journal of Neurosurgical Sciences (2024) Vol. 68, Iss. 2
Open Access | Times Cited: 7

Theme-Driven Keyphrase Extraction to Analyze Social Media Discourse
William Romano, Omar Sharif, Madhusudan Basak, et al.
Proceedings of the International AAAI Conference on Web and Social Media (2024) Vol. 18, pp. 1315-1327
Open Access | Times Cited: 7

Cultural Bias in Large Language Models: A Comprehensive Analysis and Mitigation Strategies
Z Y Liu
Journal of Transcultural Communication (2024)
Closed Access | Times Cited: 7

Privacy preserving large language models: ChatGPT case study based vision and framework
Imdad Ullah, Najmul Hassan, Sukhpal Singh Gill, et al.
IET Blockchain (2024)
Open Access | Times Cited: 7

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, Jae Sung Park, et al.
(2020), pp. 2810-2829
Open Access | Times Cited: 47

Which *BERT? A Survey Organizing Contextualized Encoders
Patrick Xia, Shijie Wu, Benjamin Van Durme
(2020)
Open Access | Times Cited: 45

Confronting Abusive Language Online: A Survey from the Ethical and Human Rights Perspective
Svetlana Kiritchenko, Isar Nejadgholi, Kathleen Fraser
Journal of Artificial Intelligence Research (2021) Vol. 71, pp. 431-478
Open Access | Times Cited: 37

Alignment of Language Agents
Zachary Kenton, Tom Everitt, Laura Weidinger, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 35

“Nice Try, Kiddo”: Investigating Ad Hominems in Dialogue Responses
Emily Sheng, Kai-Wei Chang, Prem Natarajan, et al.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021), pp. 750-767
Open Access | Times Cited: 33

Sequence Length is a Domain: Length-based Overfitting in Transformer Models
Dušan Variš, Ondřej Bojar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 33
