OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lueken, Goran Glavaš
(2021)
Open Access | Times Cited: 62

Showing 1-25 of 62 citing articles:

A Categorical Archive of ChatGPT Failures
Ali Borji
Research Square (Research Square) (2023)
Open Access | Times Cited: 341

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, et al.
Computational Linguistics (2024) Vol. 50, Iss. 3, pp. 1097-1179
Open Access | Times Cited: 105

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 86

Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts
Yue Guo, Yi Yang, Ahmed Abbasi
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 81

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Zichao Lin, Shuyan Guan, Wending Zhang, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 9
Open Access | Times Cited: 19

Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models
Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 42

Introduction to Large Language Models (LLMs) for dementia care and research
Matthias S. Treder, Sojin Lee, Kamen A. Tsvetanov
Frontiers in Dementia (2024) Vol. 3
Open Access | Times Cited: 9

Measuring Harmful Sentence Completion in Language Models for LGBTQIA+ Individuals
Debora Nozza, Federico Bianchi, Anne Lauscher, et al.
(2022)
Open Access | Times Cited: 31

Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh, Sunayana Sitaram, Monojit Choudhury
(2023)
Open Access | Times Cited: 19

Pipelines for Social Bias Testing of Large Language Models
Debora Nozza, Federico Bianchi, Dirk Hovy
(2022)
Open Access | Times Cited: 23

Unlearning Bias in Language Models by Partitioning Gradients
Charles Yu, Sullam Jeoung, Anish Kasi, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2023)
Open Access | Times Cited: 16

Fairness in Deep Learning: A Survey on Vision and Language Research
Otávio Parraga, Martin D. Móre, Christian Mattjie, et al.
ACM Computing Surveys (2023)
Open Access | Times Cited: 16

Multi2WOZ: A Robust Multilingual Dataset and Conversational Pretraining for Task-Oriented Dialog
Chia-Chien Hung, Anne Lauscher, Ivan Vulić, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022), pp. 3687-3703
Open Access | Times Cited: 20

BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Tianxiang Sun, Junliang He, Xipeng Qiu, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 16

Parameter-efficient Modularised Bias Mitigation via AdapterFusion
Deepak Kumar, Oleg Lesota, George Zerveas, et al.
(2023)
Open Access | Times Cited: 9

DS-TOD: Efficient Domain Specialization for Task-Oriented Dialog
Chia-Chien Hung, Anne Lauscher, Simone Paolo Ponzetto, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022), pp. 891-904
Open Access | Times Cited: 13

FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle, Bettina Berendt
Lecture notes in computer science (2023), pp. 638-654
Closed Access | Times Cited: 8

Plug-and-Play Knowledge Injection for Pre-trained Language Models
Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, et al.
(2023), pp. 10641-10658
Open Access | Times Cited: 8

Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2023)
Open Access | Times Cited: 7

Debiasing large language models: research opportunities*
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan
Journal of the Royal Society of New Zealand (2024) Vol. 55, Iss. 2, pp. 372-395
Open Access | Times Cited: 2

MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He, Mengzhou Xia, Christiane Fellbaum, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022), pp. 9681-9702
Open Access | Times Cited: 11

A Comparative Study on the Impact of Model Compression Techniques on Fairness in Language Models
Krithika Ramesh, Arnav Chavan, Shrey Pandit, et al.
(2023), pp. 15762-15782
Open Access | Times Cited: 6

Visual Comparison of Language Model Adaptation
Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, et al.
IEEE Transactions on Visualization and Computer Graphics (2022), pp. 1-11
Open Access | Times Cited: 8

Bridging Fairness and Environmental Sustainability in Natural Language Processing
Marius Hessenthaler, Emma Strubell, Dirk Hovy, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022), pp. 7817-7836
Open Access | Times Cited: 8

Page 1 - Next Page
