OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to that article's "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, basic pagination options appear at the bottom of the page.
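The same listing can be reproduced programmatically: OpenAlex exposes a public REST API whose `/works` endpoint accepts a `cites:` filter to return the works citing a given article. A minimal sketch of building such a query is below; the work ID used is a placeholder, not the ID of the article this page was generated for.

```python
# Build an OpenAlex API query listing the works that cite a given article.
# The work ID passed in is a placeholder; substitute the OpenAlex ID of the
# article whose citing works you want.

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the OpenAlex /works URL filtered to works citing `work_id`."""
    base = "https://api.openalex.org/works"
    return f"{base}?filter=cites:{work_id}&page={page}&per-page={per_page}"

# Placeholder ID, page 1, 25 results per page -- matching this listing's layout.
url = citing_works_url("W0000000000")
print(url)
```

Fetching that URL (e.g. with `requests.get(url).json()`) returns a JSON payload whose `results` entries carry fields such as `title`, `publication_year`, and `cited_by_count`, which is where the counts shown on this page come from.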

Requested Article:

Showing 1-25 of 96 citing articles:

Parameter-efficient fine-tuning of large-scale pre-trained language models
Ning Ding, Yujia Qin, Guang Yang, et al.
Nature Machine Intelligence (2023) Vol. 5, Iss. 3, pp. 220-235
Open Access | Times Cited: 334

Is GPT-3 a Good Data Annotator?
Bosheng Ding, Chengwei Qin, Linlin Liu, et al.
(2023)
Open Access | Times Cited: 84

GPT-3-Driven Pedagogical Agents to Train Children’s Curious Question-Asking Skills
Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, et al.
International Journal of Artificial Intelligence in Education (2023) Vol. 34, Iss. 2, pp. 483-518
Closed Access | Times Cited: 61

On the Effectiveness of Parameter-Efficient Fine-Tuning
Zihao Fu, Haoran Yang, Anthony Man-Cho So, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 11, pp. 12799-12807
Open Access | Times Cited: 53

End-Edge-Cloud Collaborative Computing for Deep Learning: A Comprehensive Survey
Yingchao Wang, Chen Yang, Shulin Lan, et al.
IEEE Communications Surveys & Tutorials (2024) Vol. 26, Iss. 4, pp. 2647-2683
Closed Access | Times Cited: 23

MotionGPT: Finetuned LLMs Are General-Purpose Motion Generators
Yaqi Zhang, Di Huang, Bin Liu, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 7, pp. 7368-7376
Open Access | Times Cited: 20

When LLMs meet cybersecurity: a systematic literature review
Jie Zhang, H. Bu, Hui Wen, et al.
Cybersecurity (2025) Vol. 8, Iss. 1
Open Access | Times Cited: 9

Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
Prajjwal Bhargava, Aleksandr Drozd, Anna Rogers
(2021)
Open Access | Times Cited: 63

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning
Yuning Mao, Lambert Mathias, Rui Hou, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 48

Offensive language detection in Tamil YouTube comments by adapters and cross-domain knowledge transfer
Malliga Subramanian, Rahul Ponnusamy, Sean Benhur, et al.
Computer Speech & Language (2022) Vol. 76, pp. 101404-101404
Closed Access | Times Cited: 39

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 32

Exploring Efficient-Tuning Methods in Self-Supervised Speech Models
Zih-Ching Chen, Chin-Lun Fu, Chih-Ying Liu, et al.
2022 IEEE Spoken Language Technology Workshop (SLT) (2023), pp. 1120-1127
Open Access | Times Cited: 21

One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization
Deze Wang, Boxing Chen, Shanshan Li, et al.
(2023), pp. 5-16
Open Access | Times Cited: 19

From Turing to Transformers: A Comprehensive Review and Tutorial on the Evolution and Applications of Generative Transformer Models
Emma Yann Zhang, Adrian David Cheok, Zhigeng Pan, et al.
Sci (2023) Vol. 5, Iss. 4, pp. 46-46
Open Access | Times Cited: 17

IDPG: An Instance-Dependent Prompt Generation Method
Zhuofeng Wu, Sinong Wang, Jiatao Gu, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 27

Dynamic-Superb: Towards a Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark For Speech
Chien‐Yu Huang, Ke-Han Lu, Shih-Heng Wang, et al.
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024)
Open Access | Times Cited: 5

PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment
Amirhossein Dadashzadeh, Shuchao Duan, Alan Whone, et al.
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2024), pp. 42-52
Open Access | Times Cited: 5

Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification
Olesya Razuvayevskaya, Benjamin M. Wu, João Leite, et al.
PLoS ONE (2024) Vol. 19, Iss. 5, pp. e0301738-e0301738
Open Access | Times Cited: 5

ELLE: Efficient Lifelong Pre-training for Emerging Data
Yujia Qin, Jiajie Zhang, Yankai Lin, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022), pp. 2789-2810
Open Access | Times Cited: 22

Exploiting Convolutional Neural Network Adapters for Self-Supervised Speech Models
Zih-Ching Chen, Yu-Shun Sung, Hung-yi Lee
(2023), pp. 1-5
Open Access | Times Cited: 12

MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering
Jingjing Jiang, Nanning Zheng
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023), pp. 24203-24213
Open Access | Times Cited: 12

Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study
Shuo Liu, Jacky Keung, Zhen Yang, et al.
2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) (2024), pp. 465-476
Open Access | Times Cited: 4

STAF-LLM: A scalable and task-adaptive fine-tuning framework for large language models in medical domain
Tianhan Xu, Ling Chen, Zhe Hu, et al.
Expert Systems with Applications (2025), pp. 127582-127582
Closed Access

