OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
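For readers who would rather pull such a listing programmatically, the same data is available from the public OpenAlex API via its "cites" filter. Below is a minimal Python sketch (using the requests library); the work ID W0000000000 is a placeholder, not the real identifier of the requested article, which you can look up by searching the title on openalex.org.

import requests

# Placeholder OpenAlex work ID (hypothetical; replace with the real ID
# of the article whose citing works you want to list).
WORK_ID = "W0000000000"

# The "cites" filter returns every work whose reference list includes
# the given work, i.e. its citing articles.
url = "https://api.openalex.org/works"
params = {"filter": f"cites:{WORK_ID}", "per-page": 25}

response = requests.get(url, params=params)
response.raise_for_status()

for work in response.json().get("results", []):
    title = work.get("title")
    year = work.get("publication_year")
    cited = work.get("cited_by_count")
    print(f"{title} ({year}) - times cited: {cited}")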

Requested Article:

Interactive Concept Bottleneck Models
Kushal Chauhan, Rishabh Tiwari, Jan Freyberg, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 5, pp. 5948-5955
Open Access | Times Cited: 16

Showing 16 citing articles:

A Framework for Interpretability in Machine Learning for Medical Imaging
Alan Q. Wang, Batuhan K. Karaman, Heejong Kim, et al.
IEEE Access (2024) Vol. 12, pp. 53277-53292
Open Access | Times Cited: 5

Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models
Ju-Hwan Lee, Dang Thanh Vu, NamKyung Lee, et al.
Applied Sciences (2025) Vol. 15, Iss. 2, pp. 493-493
Open Access

Enhancing deep convolutional neural network models for orange quality classification using MobileNetV2 and data augmentation techniques
Phan Thị Mai Hương, Lam Thanh Hien, Nguyễn Minh Sơn, et al.
Journal of Algorithms & Computational Technology (2025) Vol. 19
Open Access

Towards Interpretable Radiology Report Generation via Concept Bottlenecks Using a Multi-agentic RAG
Hasan Md Tusfiqur Alam, Devansh Srivastav, Md Abdul Kadir, et al.
Lecture notes in computer science (2025), pp. 201-209
Closed Access

Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction
Wei Qian, Chenxu Zhao, Yangyi Li, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 13, pp. 14651-14659
Open Access | Times Cited: 3

On the Concept Trustworthiness in Concept Bottleneck Models
Qihan Huang, Jie Song, Jingwen Hu, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 19, pp. 21161-21168
Open Access | Times Cited: 1

Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery
Sukrut Rao, Sanket Mahajan, Moritz Böhle, et al.
Lecture notes in computer science (2024), pp. 444-461
Closed Access | Times Cited: 1

Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning
Emanuele Marconato, Andrea Passerini, Stefano Teso
Entropy (2023) Vol. 25, Iss. 12, pp. 1574-1574
Open Access | Times Cited: 1

Enhancing Explainability Through Visual Concept Knowledge Distillation on Concept Bottleneck Model
Ju-Hwan Lee, Dang Thanh Vu, NamKyung Lee, et al.
(2024)
Closed Access

Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Model
Ju-Hwan Lee, Dang Thanh Vu, NamKyung Lee, et al.
(2024)
Closed Access

Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models
Nishad Singhi, Jae Myung Kim, Karsten Roth, et al.
Lecture notes in computer science (2024), pp. 422-438
Closed Access

EQ-CBM: A Probabilistic Concept Bottleneck with Energy-Based Models and Quantized Vectors
Sangwon Kim, Dasom Ahn, Byoung Chul Ko, et al.
Lecture notes in computer science (2024), pp. 270-286
Closed Access

An Explicit Concept-Based Approach for Incorporating Expert Rules into Machine Learning Models
Andrei V. Konstantinov, Lev V. Utkin
Lecture notes in networks and systems (2024), pp. 153-162
Closed Access

Selective Concept Models: Permitting Stakeholder Customisation at Test-Time
Matthew L. Barker, Katherine M. Collins, Krishnamurthy Dvijotham, et al.
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (2023) Vol. 11, Iss. 1, pp. 2-13
Open Access

