
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
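For readers who want the same data programmatically, here is a minimal Python sketch that queries the public OpenAlex REST API (api.openalex.org) for works citing a given article. The work ID shown is a placeholder, not the actual OpenAlex ID of the requested article, and the printed fields simply mirror the columns shown on this page.

import requests

WORK_ID = "W0000000000"  # placeholder; substitute the OpenAlex ID of the requested article

cursor = "*"
while cursor:
    resp = requests.get(
        "https://api.openalex.org/works",
        params={
            "filter": f"cites:{WORK_ID}",  # works that cite the given article
            "per-page": 25,                # page size, mirroring this listing
            "cursor": cursor,              # cursor-based pagination
        },
    )
    resp.raise_for_status()
    data = resp.json()
    for work in data["results"]:
        oa_url = (work.get("open_access") or {}).get("oa_url")
        access = "Open Access" if oa_url else "Closed Access"
        print(f'{work["display_name"]} | {access} | Times Cited: {work["cited_by_count"]}')
    cursor = data["meta"].get("next_cursor")  # None on the last page, which ends the loop

Each result also carries a best_oa_location field, which is what the "best Open Access location" links on this page point to.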
Requested Article:
Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
Daniel Kang, Xuechen Li, Ion Stoica, et al.
(2024), pp. 132-143
Open Access | Times Cited: 25
Showing 25 citing articles:
A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly
Yifan Yao, Jinhao Duan, Kaidi Xu, et al.
High-Confidence Computing (2024) Vol. 4, Iss. 2, pp. 100211-100211
Open Access | Times Cited: 222
A survey of safety and trustworthiness of large language models through the lens of verification and validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 7
Open Access | Times Cited: 31
Embedding Democratic Values into Social Media AIs via Societal Objective Functions
Chenyan Jia, Michelle S. Lam, Minh Triet Chau, et al.
Proceedings of the ACM on Human-Computer Interaction (2024) Vol. 8, Iss. CSCW1, pp. 1-36
Open Access | Times Cited: 9
From Chatbots to Phishbots?: Phishing Scam Generation in Commercial Large Language Models
Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, et al.
IEEE Symposium on Security and Privacy (SP) (2024) Vol. 7, pp. 36-54
Open Access | Times Cited: 8
Cyberattacks Using ChatGPT: Exploring Malicious Content Generation Through Prompt Engineering
Lara Alotaibi, Sumayyah Seher, Nazeeruddin Mohammad
(2024), pp. 1304-1311
Closed Access | Times Cited: 5
On Large Language Models’ Resilience to Coercive Interrogation
Zhuo Zhang, Guangyu Shen, Guanhong Tao, et al.
IEEE Symposium on Security and Privacy (SP) (2024), pp. 826-844
Closed Access | Times Cited: 5
A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models
Aysan Esmradi, Daniel Wankit Yip, Chun Fai Chan
Communications in Computer and Information Science (2024), pp. 76-95
Closed Access | Times Cited: 4
Latent-Space Adversarial Training with Post-Aware Calibration for Defending Large Language Models Against Jailbreak Attacks
Yi Xin, Ting Li, Linlin Wang, et al.
(2025)
Closed Access
Effectiveness of Privacy-preserving Algorithms in LLMs: A Benchmark and Empirical Analysis
J. Sun, Basem Suleiman, Imdad Ullah, et al.
(2025), pp. 5224-5233
Closed Access
A Security Risk Taxonomy for Prompt-Based Interaction With Large Language Models
Erik Derner, Kristina Batistič, Jan Zahálka, et al.
IEEE Access (2024) Vol. 12, pp. 126176-126187
Open Access | Times Cited: 3
IntentObfuscator: A Jailbreaking Method via Confusing LLM with Prompts
Shang Shang, Zhongjiang Yao, Yepeng Yao, et al.
Lecture Notes in Computer Science (2024), pp. 146-165
Closed Access | Times Cited: 2
A survey of emerging applications of large language models for problems in mechanics, product design, and manufacturing
K.B. Mustapha
Advanced Engineering Informatics (2024) Vol. 64, pp. 103066-103066
Closed Access | Times Cited: 2
ChatGPT Knows Your Attacks: Synthesizing Attack Trees Using LLMs
Olga Gadyatskaya, Dalia Papuc
Communications in Computer and Information Science (2023), pp. 245-260
Closed Access | Times Cited: 6
A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models
Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan
IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE) (2023) Vol. 71, pp. 1-5
Open Access | Times Cited: 2
Exploring the Dual Role of LLMs in Cybersecurity: Threats and Defenses
Ciarán Bryce, Alexandros Kalousis, Ilan Leroux, et al.
(2024), pp. 235-242
Closed Access
GalaxyGPT: A Hybrid Framework for Large Language Model Safety
Hange Zhou, Jiabin Zheng, L. Zhang
IEEE Access (2024) Vol. 12, pp. 94436-94451
Open Access
Exploring Advanced Methodologies in Security Evaluation for Large Language Models
Jun Huang, Jiawei Zhang, Qi Wang, et al.
Communications in Computer and Information Science (2024), pp. 135-150
Closed Access
Jailbreak Attacks on Large Language Models and Possible Defenses: Present Status and Future Possibilities
Shameem Ahmed, J. Angel Arul Jothi
(2024), pp. 1-7
Closed Access
Harmful Prompt Classification for Large Language Models
Ojasvi Gupta, Marta Lozano, Abdelsalam Busalim, et al.
(2024), pp. 8-14
Open Access
Invited Paper: Security and Privacy in Large Language and Foundation Models: A Survey on GenAI Attacks
Giuseppe F. Italiano, Alessio Martino, Giorgio Piccardo
Lecture Notes in Computer Science (2024), pp. 1-17
Closed Access
Next-Generation Phishing: How LLM Agents Empower Cyber Attackers
Khalifa Afane, Wenqi Wei, Ying Mao, et al.
IEEE International Conference on Big Data (Big Data) (2024), pp. 2558-2567
Closed Access
SwordEcho: A LLM Jailbreaking Optimization Strategy Driven by Reinforcement Learning
Xuehai Tang, W B Xiao, Zhongjiang Yao, et al.
(2024), pp. 183-190
Closed Access
AI-powered smart toys: interactive friends or surveillance devices?
Valentyna Pavliv, Nima Akbari, Isabel Wagner
(2024), pp. 172-175
Closed Access
Collaborative Filtering of Malicious Information from the MULTimedia Data Using s
P. Manikandan, G. Abirami, Arockia Raj A, et al.
(2023), pp. 1-6
Closed Access