OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location" for that article. Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
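For readers who want this listing programmatically, OpenAlex exposes it through its public API: the `/works` endpoint with a `cites:` filter returns the works citing a given article, and `page`/`per-page` control pagination. A minimal sketch of how such a query URL is built — the work ID `W0000000000` is a placeholder, not this article's real OpenAlex identifier:

```python
# Sketch: constructing the OpenAlex API query behind a "citing articles"
# listing. The work ID used below is a placeholder, not the real ID of
# the requested article.

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the OpenAlex /works URL listing articles that cite `work_id`."""
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&page={page}&per-page={per_page}"
    )

# Page 3 at 25 results per page corresponds to entries 51-75,
# matching the slice of the listing shown on this page.
url = citing_works_url("W0000000000", page=3, per_page=25)
print(url)
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns JSON whose `results` array holds the citing works, each with its own `cited_by_count`.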

Requested Article:

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, et al.
ACM Computing Surveys (2023) Vol. 55, Iss. 13s, pp. 1-42
Open Access | Times Cited: 235

Showing 51-75 of 235 citing articles:

Uncertainty in XAI: Human Perception and Modeling Approaches
Teodor Chiaburu, Frank Haußer, Felix Bießmann
Machine Learning and Knowledge Extraction (2024) Vol. 6, Iss. 2, pp. 1170-1192
Open Access | Times Cited: 4

Deep 3D histology powered by tissue clearing, omics and AI
Ali Ertürk
Nature Methods (2024) Vol. 21, Iss. 7, pp. 1153-1165
Closed Access | Times Cited: 4

An interpretable approach combining Shapley additive explanations and LightGBM based on data augmentation for improving wheat yield estimates
Ying Wang, Pengxin Wang, Kevin Tansey, et al.
Computers and Electronics in Agriculture (2024) Vol. 229, pp. 109758-109758
Closed Access | Times Cited: 4

The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, et al.
Artificial Intelligence and Law (2025)
Open Access

Interpretable machine learning models for COPD ease of breathing estimation
Thomas T. Kok, John Morales, Dirk Deschrijver, et al.
Medical & Biological Engineering & Computing (2025)
Closed Access

Integrating Explainable Artificial Intelligence in Extended Reality Environments: A Systematic Survey
Clara Maathuis, Marina Cidota, Dragoș Datcu, et al.
Mathematics (2025) Vol. 13, Iss. 2, pp. 290-290
Open Access

Interpretable AI for medical image analysis: methods, evaluation, and clinical considerations
Tiago Gonçalves, Anna Hedström, Aurélie Pahud de Mortanges, et al.
Elsevier eBooks (2025), pp. 315-346
Closed Access

Evaluating robustly standardized explainable anomaly detection of implausible variables in cancer data
Philipp Röchner, Franz Rothlauf
Journal of the American Medical Informatics Association (2025)
Closed Access

Towards Visual Analytics for Explainable AI in Industrial Applications
Kostiantyn Kucher, Elmira Zohrevandi, C. A. L. Westin
Analytics (2025) Vol. 4, Iss. 1, pp. 7-7
Open Access

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
Greta Warren, Irina Shklovski, Isabelle Augenstein
(2025), pp. 1-21
Open Access

A Comprehensive User-Centric Method for Evaluating and Comparing XAI Explanations
Saša Brdnik, Sašo Karakatič, Boštjan Šumak
International Journal of Human-Computer Interaction (2025), pp. 1-20
Closed Access

Evaluation and Enhancement of Standard Classifier Performance by Resolving Class Imbalance Issue Using Smote-Variants Over Multiple Medical Datasets
Vinod Kumar, Ravi Kant Kumar, Sunil Kumar Singh
SN Computer Science (2025) Vol. 6, Iss. 3
Closed Access

Influence based explainability of brain tumors segmentation in magnetic resonance imaging
Tommaso Torda, Andrea Ciardiello, Simona Gargiulo, et al.
Progress in Artificial Intelligence (2025)
Open Access

How Do ML Students Explain Their Models and What Can We Learn from This?
Ulrik Franke
Lecture notes in business information processing (2025), pp. 351-365
Closed Access

Interpretable Lung Cancer Risk Prediction Using Ensemble Learning and XAI Based on Lifestyle and Demographic Data
Shahid Mohammad Ganie, Pijush Kanti Dutta Pramanik
Computational Biology and Chemistry (2025), pp. 108438-108438
Closed Access

Automatic Software Vulnerability Detection in Binary Code
Shigang Liu, Lin Li, Xinbo Ban, et al.
Lecture notes in computer science (2025), pp. 148-166
Closed Access

New opportunities and challenges for conservation evidence synthesis from advances in natural language processing
Charlotte H. Chang, Susan C. Cook‐Patton, James T. Erbaugh, et al.
Conservation Biology (2025) Vol. 39, Iss. 2
Open Access

Explainability and vision foundation models: A survey
Rémi Kazmierczak, Eloïse Berthier, Goran Frehse, et al.
Information Fusion (2025), pp. 103184-103184
Closed Access

Individualized lesion-symptom mapping using explainable artificial intelligence for the cognitive impact of white matter hyperintensities
Ryanne Offenberg, Alberto De Luca, Geert Jan Biessels, et al.
NeuroImage Clinical (2025) Vol. 46, pp. 103790-103790
Open Access

Genetic analysis of swimming performance in rainbow trout (Oncorhynchus mykiss) using image traits derived from deep learning
Yuuko Xue, Arjan P. Palstra, R.J.W. Blonk, et al.
Aquaculture (2025), pp. 742607-742607
Closed Access

LP-DIXIT: Evaluating Explanations for Link Predictions on Knowledge Graphs using Large Language Models
Roberto Barile, Claudia d’Amato, Nicola Fanizzi
(2025), pp. 4034-4042
Closed Access

Can simulations aid counterfactual reasoning?
Nathaniel Gan
Synthese (2025) Vol. 205, Iss. 5
Closed Access

Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans
Ashley Suh, Isabelle Hurley, Nora Smith, et al.
(2025), pp. 1-7
Closed Access

Design Principles and Guidelines for LLM Observability: Insights from Developers
Xin Chen, Li Yan, X. Sean Wang
(2025), pp. 1-9
Closed Access
