OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking the citation count will open this listing for that article. Lastly, you'll find basic pagination options at the bottom of the page.
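If you'd rather retrieve this listing programmatically, the same data is available from the public OpenAlex API via its `cites:` filter. Below is a minimal Python sketch under that assumption; the work ID `W4391234567` is a placeholder, not the actual OpenAlex ID of the MASTERKEY record.

```python
import requests

# Minimal sketch: fetch one page of works that cite a given OpenAlex work.
# NOTE: "W4391234567" is a placeholder ID, not the real MASTERKEY record.
BASE = "https://api.openalex.org/works"

def citing_works(openalex_id: str, page: int = 1, per_page: int = 25):
    """Return one page of works citing the given OpenAlex work ID."""
    resp = requests.get(
        BASE,
        params={
            "filter": f"cites:{openalex_id}",  # works whose references include this ID
            "page": page,
            "per-page": per_page,
        },
    )
    resp.raise_for_status()
    return resp.json()["results"]

# Print title, citation count, and best Open Access location for each citer.
for work in citing_works("W4391234567"):
    oa = work.get("best_oa_location") or {}
    print(work["display_name"], work["cited_by_count"], oa.get("landing_page_url"))
```

Each result object carries the fields shown on this page, so `cited_by_count` corresponds to the "Times Cited" figure and `best_oa_location` to the Open Access link.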

Requested Article:

MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, et al.
(2024)
Open Access | Times Cited: 25

Showing 25 citing articles:

A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, et al.
High-Confidence Computing (2024) Vol. 4, Iss. 2, pp. 100211-100211
Open Access | Times Cited: 243

Equipping Llama with Google Query API for Improved Accuracy and Reduced Hallucination
Young Hwan Bae, Hye Rin Kim, Jae‐Hoon Kim
Research Square (Research Square) (2024)
Open Access | Times Cited: 18

When LLMs meet cybersecurity: a systematic literature review
Jie Zhang, H. Bu, Hui Wen, et al.
Cybersecurity (2025) Vol. 8, Iss. 1
Open Access | Times Cited: 9

Introduction to Large Language Models (LLMs) for dementia care and research
Matthias S. Treder, Sojin Lee, Kamen A. Tsvetanov
Frontiers in Dementia (2024) Vol. 3
Open Access | Times Cited: 9

A Comprehensive Review of Current Trends, Challenges, and Opportunities in Text Data Privacy
Sakib Shahriar, Rozita Dara, Rajen Akalu
Computers & Security (2025), pp. 104358-104358
Open Access | Times Cited: 1

A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 17

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns
Ashfak Md Shibli, Mir Mehedi Ahsan Pritom, Maanak Gupta
(2024), pp. 1-6
Open Access | Times Cited: 5

Discussion Paper: Exploiting LLMs for Scam Automation: A Looming Threat
Gilad Gressel, Rahul Pankajakshan, Yisroel Mirsky
(2024), pp. 20-24
Closed Access | Times Cited: 5

On Large Language Models’ Resilience to Coercive Interrogation
Zhuo Zhang, Guangyu Shen, Guanhong Tao, et al.
IEEE Symposium on Security and Privacy (SP) (2024), pp. 826-844
Closed Access | Times Cited: 5

Large Language Models for Conducting Advanced Text Analytics Information Systems Research
Benjamin Ampel, Chi-Heng Yang, James Lee Hu, et al.
ACM Transactions on Management Information Systems (2024) Vol. 16, Iss. 1, pp. 1-27
Open Access | Times Cited: 4

Deceiving LLM through Compositional Instruction with Hidden Attacks
Shuyu Jiang, Xingshu Chen, Rui Tang
ACM Transactions on Autonomous and Adaptive Systems (2025)
Closed Access

Comprehensive Analysis of Machine Learning and Deep Learning models on Prompt Injection Classification using Natural Language Processing techniques
Bharat A. Jain, Prashant Ashok Pawar, Dhruv Gada, et al.
International Research Journal of Multidisciplinary Technovation (2025), pp. 24-37
Open Access

SpearBot: Leveraging large language models in a generative-critique framework for spear-phishing email generation
Qinglin Qi, Yun Luo, Yijia Xu, et al.
Information Fusion (2025), pp. 103176-103176
Closed Access

Generative AI model privacy: a survey
Yihao Liu, Jinhe Huang, Yanjie Li, et al.
Artificial Intelligence Review (2024) Vol. 58, Iss. 1
Open Access | Times Cited: 3

IntentObfuscator: A Jailbreaking Method via Confusing LLM with Prompts
Shang Shang, Zhongjiang Yao, Yepeng Yao, et al.
Lecture notes in computer science (2024), pp. 146-165
Closed Access | Times Cited: 2

Getting it right: the limits of fine-tuning large language models
Jacob Browning
Ethics and Information Technology (2024) Vol. 26, Iss. 2
Closed Access | Times Cited: 1

LLMs Red Teaming
Dragos Ruiu
(2024), pp. 213-223
Closed Access

An In-depth Analysis of Jailbreaking Through Domain Characterization of LLM Training Sets
Carlos Peláez-González, Andrés Herrera-Poyatos, Francisco Herrera-Triguero
Lecture notes in computer science (2024), pp. 116-127
Closed Access

Invited Paper: Security and Privacy in Large Language and Foundation Models: A Survey on GenAI Attacks
Giuseppe F. Italiano, Alessio Martino, Giorgio Piccardo
Lecture notes in computer science (2024), pp. 1-17
Closed Access

LLM-Sentry: A Model-Agnostic Human-in-the-Loop Framework for Securing Large Language Models
Saquib Irtiza, Khandakar Ashrafi Akbar, Arowa Yasmeen, et al.
(2024), pp. 245-254
Closed Access

Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks
Md Zarif Hossain, Ahmed Imteaj
IEEE International Conference on Big Data (Big Data) (2024), pp. 6250-6259
Closed Access

SwordEcho: A LLM Jailbreaking Optimization Strategy Driven by Reinforcement Learning
Xuehai Tang, W. B. Xiao, Zhongjiang Yao, et al.
(2024), pp. 183-190
Closed Access

Page 1
