OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.
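Listings like this one can also be retrieved programmatically from the OpenAlex API, which exposes citing works through the `cites:` filter on the `/works` endpoint. Below is a minimal sketch of building such a query; the work ID `W3159481202` is a placeholder assumption, not a verified identifier for the requested article.

```python
# Sketch: construct an OpenAlex API URL listing works that cite a given work.
# The OpenAlex /works endpoint supports filter=cites:<work_id> plus
# page/per-page parameters for basic pagination (25 results per page here,
# matching the listing on this page).

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the OpenAlex query URL for articles citing `work_id`."""
    base = "https://api.openalex.org/works"
    return f"{base}?filter=cites:{work_id}&page={page}&per-page={per_page}"

# "W3159481202" is a hypothetical placeholder ID for illustration only.
url = citing_works_url("W3159481202")
```

Fetching that URL (e.g. with `requests.get(url).json()`) returns a JSON payload whose `results` array carries each citing work's title, authors, venue, and `cited_by_count`, i.e. the same fields shown in the listing below.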

Requested Article:

StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem, Anna Bethke, Siva Reddy
(2021)
Open Access | Times Cited: 417

Showing 1-25 of 417 citing articles:

On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1553

Language (Technology) is Power: A Critical Survey of “Bias” in NLP
Su Lin Blodgett, Solon Barocas, Hal Daumé, et al.
(2020)
Open Access | Times Cited: 678

A Categorical Archive of ChatGPT Failures
Ali Borji
Research Square (Research Square) (2023)
Open Access | Times Cited: 337

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, et al.
(2020)
Open Access | Times Cited: 307

Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Open Access | Times Cited: 207

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study
Travis Zack, Eric Lehman, Mirac Süzgün, et al.
The Lancet Digital Health (2023) Vol. 6, Iss. 1, pp. e12-e22
Open Access | Times Cited: 197

Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze
Transactions of the Association for Computational Linguistics (2021) Vol. 9, pp. 1408-1424
Open Access | Times Cited: 174

Large language models propagate race-based medicine
Jesutofunmi A. Omiye, Jenna Lester, Simon Spichak, et al.
npj Digital Medicine (2023) Vol. 6, Iss. 1
Open Access | Times Cited: 169

Biases in Large Language Models: Origins, Inventory, and Discussion
Roberto Navigli, Simone Conia, Björn Roß
Journal of Data and Information Quality (2023) Vol. 15, Iss. 2, pp. 1-21
Open Access | Times Cited: 125

Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, et al.
(2021)
Open Access | Times Cited: 119

Gender bias and stereotypes in Large Language Models
Hadas Kotek, Rikker Dockum, David Sun
(2023), pp. 12-24
Open Access | Times Cited: 117

BOLD
Jwala Dhamala, Tony Sun, Varun Kumar, et al.
(2021), pp. 862-872
Open Access | Times Cited: 109

Decoding ChatGPT: A taxonomy of existing research, current challenges, and possible future directions
Shahab Saquib Sohail, Faiza Farhat, Yassine Himeur, et al.
Journal of King Saud University - Computer and Information Sciences (2023) Vol. 35, Iss. 8, pp. 101675-101675
Open Access | Times Cited: 109

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, et al.
Computational Linguistics (2024) Vol. 50, Iss. 3, pp. 1097-1179
Open Access | Times Cited: 101

Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, et al.
(2023), pp. 79-90
Open Access | Times Cited: 94

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 86

Auditing large language models: a three-layered approach
Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, et al.
AI and Ethics (2023) Vol. 4, Iss. 4, pp. 1085-1115
Open Access | Times Cited: 81

On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?
Nouha Dziri, Sivan Milton, Mo Yu, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 80

Prompting meaning: a hermeneutic approach to optimising prompt engineering with ChatGPT
Leah Henrickson, Albert Meroño-Peñuela
AI & Society (2023)
Open Access | Times Cited: 52

Bias of AI-generated content: an examination of news produced by large language models
Xiao Fang, Shangkun Che, Minjia Mao, et al.
Scientific Reports (2024) Vol. 14, Iss. 1
Open Access | Times Cited: 44

A Review of Current Trends, Techniques, and Challenges in Large Language Models (LLMs)
Rajvardhan Patil, Venkat N. Gudivada
Applied Sciences (2024) Vol. 14, Iss. 5, pp. 2074-2074
Open Access | Times Cited: 35

On the Opportunities and Challenges of Foundation Models for GeoAI (Vision Paper)
Gengchen Mai, Weiming Huang, Jin Sun, et al.
ACM Transactions on Spatial Algorithms and Systems (2024) Vol. 10, Iss. 2, pp. 1-46
Open Access | Times Cited: 26
