OpenAlex Citation Counts

OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
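
The catalogue is also queryable programmatically through OpenAlex's free REST API at api.openalex.org. As a minimal sketch (the work ID and contact address below are placeholders, not values taken from this page), here is how you might look up a work and read its citation count in Python:

```python
import requests

# OpenAlex REST API base URL. No API key is needed; passing a
# "mailto" address opts you into the faster "polite pool".
BASE = "https://api.openalex.org"

# Placeholder OpenAlex work ID -- substitute the ID of the work
# you actually care about.
WORK_ID = "W2741809807"

resp = requests.get(f"{BASE}/works/{WORK_ID}",
                    params={"mailto": "you@example.com"})
resp.raise_for_status()
work = resp.json()

print(work["display_name"])
print("Times cited:", work["cited_by_count"])
# OpenAlex also returns a ready-made URL for the works citing this one:
print("Citing works:", work["cited_by_api_url"])
```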

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location" that OpenAlex has recorded for that article. Clicking a citation count opens the citing-article listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
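
Everything on this page maps onto that same API: the listing is a works query filtered by cites:, the Open Access links come from each work's best_oa_location field, and the pagination mirrors the API's page/per-page parameters. A sketch of reproducing a listing like this one, again with a placeholder work ID and contact address:

```python
import requests

BASE = "https://api.openalex.org"
CITED_ID = "W2741809807"  # placeholder: OpenAlex ID of the requested article

page = 1
while True:
    resp = requests.get(
        f"{BASE}/works",
        params={
            "filter": f"cites:{CITED_ID}",  # works whose references include CITED_ID
            "per-page": 25,                 # 25 results per page, as on this page
            "page": page,
            "mailto": "you@example.com",
        },
    )
    resp.raise_for_status()
    data = resp.json()

    for work in data["results"]:
        oa = work.get("best_oa_location") or {}  # may be null for closed works
        access = "Open Access" if work["open_access"]["is_oa"] else "Closed Access"
        print(work["display_name"])
        print(f"  {access} | Times Cited: {work['cited_by_count']}")
        if oa.get("landing_page_url"):
            print(f"  Best OA location: {oa['landing_page_url']}")

    if page * 25 >= data["meta"]["count"]:  # no more pages left
        break
    page += 1
```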

Requested Article:

An extensive study on pre-trained models for program understanding and generation
Zhengran Zeng, Hanzhuo Tan, Haotian Zhang, et al.
(2022), pp. 39-51
Closed Access | Times Cited: 83

Showing 1-25 of 83 citing articles:

Fuzzing deep-learning libraries via automated relational API inference
Yinlin Deng, Chenyuan Yang, Anjiang Wei, et al.
Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2022), pp. 44-56
Open Access | Times Cited: 41

An Empirical Comparison of Pre-Trained Models of Source Code
Changan Niu, Chuanyi Li, Vincent Ng, et al.
(2023), pp. 2136-2148
Open Access | Times Cited: 36

An Empirical Study on Fine-Tuning Large Language Models of Code for Automated Program Repair
Kai Ming Huang, Xiangxin Meng, Jian Zhang, et al.
IEEE/ACM International Conference on Automated Software Engineering (ASE) (2023), pp. 1162-1174
Closed Access | Times Cited: 29

DevGPT: Studying Developer-ChatGPT Conversations
Tao Xiao, Christoph Treude, Hideaki Hata, et al.
(2024), pp. 227-230
Closed Access | Times Cited: 16

BinaryAI: Binary Software Composition Analysis via Intelligent Binary Source Code Matching
Ling Jiang, Junwen An, Huihui Huang, et al.
(2024), pp. 1-13
Open Access | Times Cited: 12

Towards an understanding of large language models in software engineering tasks
Zibin Zheng, Kaiwen Ning, Qingyuan Zhong, et al.
Empirical Software Engineering (2024) Vol. 30, Iss. 2
Closed Access | Times Cited: 11

Evaluating Large Language Models in Class-Level Code Generation
Xueying Du, Mingwei Liu, Kaixin Wang, et al.
(2024), pp. 1-13
Closed Access | Times Cited: 9

Prompt-based Code Completion via Multi-Retrieval Augmented Generation
Hanzhuo Tan, Qi Luo, Ling Jiang, et al.
ACM Transactions on Software Engineering and Methodology (2025)
Closed Access | Times Cited: 1

What Makes Good In-Context Demonstrations for Code Intelligence Tasks with LLMs?
Shuzheng Gao, Xin-Cheng Wen, Cuiyun Gao, et al.
IEEE/ACM International Conference on Automated Software Engineering (ASE) (2023), pp. 761-773
Closed Access | Times Cited: 21

CCT5: A Code-Change-Oriented Pre-trained Model
Bo Lin, Shangwen Wang, Zhongxin Liu, et al.
(2023)
Open Access | Times Cited: 19

Reconciling the contrasting narratives on the environmental impact of large language models
Shaolei Ren, Bill Tomlinson, Rebecca W. Black, et al.
Scientific Reports (2024) Vol. 14, Iss. 1
Open Access | Times Cited: 8

Unveiling Memorization in Code Models
Zhou Yang, Zhipeng Zhao, Chenyu Wang, et al.
(2024), pp. 1-13
Open Access | Times Cited: 7

Greening Large Language Models of Code
Jieke Shi, Zhou Yang, Hong Jin Kang, et al.
(2024) Vol. 32, pp. 142-153
Open Access | Times Cited: 7

Towards Efficient Fine-Tuning of Pre-trained Code Models: An Experimental Study and Beyond
Ensheng Shi, Yanlin Wang, Hongyu Zhang, et al.
(2023), pp. 39-51
Open Access | Times Cited: 15

Do Machines and Humans Focus on Similar Code? Exploring Explainability of Large Language Models in Code Summarization
Jiliang Li, Yifan Zhang, Zachary Karas, et al.
(2024) Vol. 33, pp. 47-51
Open Access | Times Cited: 5

Who evaluates the evaluators? On automatic metrics for assessing AI-based offensive code generators
Pietro Liguori, Cristina Improta, Roberto Natella, et al.
Expert Systems with Applications (2023) Vol. 225, Article 120073
Open Access | Times Cited: 12

Prompt Tuning in Code Intelligence: An Experimental Evaluation
Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, et al.
IEEE Transactions on Software Engineering (2023) Vol. 49, Iss. 11, pp. 4869-4885
Closed Access | Times Cited: 12

How Important Are Good Method Names in Neural Code Generation? A Model Robustness Perspective
Guang Yang, Yu Zhou, Wenhua Yang, et al.
ACM Transactions on Software Engineering and Methodology (2023) Vol. 33, Iss. 3, pp. 1-35
Open Access | Times Cited: 12

Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study
Shuo Liu, Jacky Keung, Zhen Yang, et al.
IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) (2024), pp. 465-476
Open Access | Times Cited: 4

CodeGen4Libs: A Two-Stage Approach for Library-Oriented Code Generation
Mingwei Liu, Tianyong Yang, Yiling Lou, et al.
IEEE/ACM International Conference on Automated Software Engineering (ASE) (2023), pp. 434-445
Closed Access | Times Cited: 11

SMT Solver Validation Empowered by Large Pre-Trained Language Models
Maolin Sun, Yibiao Yang, Yang Wang, et al.
IEEE/ACM International Conference on Automated Software Engineering (ASE) (2023), pp. 1288-1300
Closed Access | Times Cited: 10

An empirical study of best practices for code pre-trained models on software engineering classification tasks
Yu Zhao, Lina Gong, Yaoshen Yu, et al.
Expert Systems with Applications (2025), Article 126762
Closed Access

Design pattern recognition: a study of large language models
Sushant Kumar Pandey, Sivajeet Chand, Jennifer Horkoff, et al.
Empirical Software Engineering (2025) Vol. 30, Iss. 3
Open Access

Page 1 - Next Page
