OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the work's "best Open Access location". Clicking a citation count opens this listing for that article. Finally, basic pagination options appear at the bottom of the page.

Requested Article:

Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts
J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, et al.
(2023), pp. 1-21
Open Access | Times Cited: 418

Showing 1-25 of 418 citing articles:

Large language models and the perils of their hallucinations
Răzvan Azamfirei, Sapna R. Kudchadkar, James C. Fackler
Critical Care (2023) Vol. 27, Iss. 1
Open Access | Times Cited: 181

Using large language models in psychology
Dorottya Demszky, Diyi Yang, David S. Yeager, et al.
Nature Reviews Psychology (2023)
Closed Access | Times Cited: 168

ChatGPT improves creative problem-solving performance in university students: An experimental study
Marek Urban, Filip Děchtěrenko, Jiří Lukavský, et al.
Computers & Education (2024) Vol. 215, Article 105031
Open Access | Times Cited: 78

Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT
Paweł Korzyński, Grzegorz Mazurek, Pamela Krzypkowska, et al.
Entrepreneurial Business and Economics Review (2023) Vol. 11, Iss. 3, pp. 25-37
Open Access | Times Cited: 72

AI literacy and its implications for prompt engineering strategies
Nils Knoth, Antonia Tolzin, Andreas Janson, et al.
Computers and Education: Artificial Intelligence (2024) Vol. 6, Article 100225
Open Access | Times Cited: 67

Design Principles for Generative AI Applications
Justin D. Weisz, Jessica He, Michael Müller, et al.
(2024), pp. 1-22
Open Access | Times Cited: 64

Using an LLM to Help With Code Understanding
Daye Nam, Andrew Macvean, Vincent J. Hellendoorn, et al.
(2024), pp. 1-13
Open Access | Times Cited: 60

Automatic Prompt Optimization with “Gradient Descent” and Beam Search
Reid Pryzant, Dan Iter, Jerry Li, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 58

Personality Traits in Large Language Models
Gregory Serapio‐García, Mustafa Safdari, Clément Crepy, et al.
Research Square (Research Square) (2023)
Open Access | Times Cited: 57

The Metacognitive Demands and Opportunities of Generative AI
Lev Tankelevitch, Viktor Kewenig, Auste Simkute, et al.
(2024), pp. 1-24
Open Access | Times Cited: 57

A Design Space for Intelligent and Interactive Writing Assistants
Mina Lee, Katy Ilonka Gero, John Joon Young Chung, et al.
(2024), pp. 1-35
Open Access | Times Cited: 47

Supporting self-directed learning and self-assessment using TeacherGAIA, a generative AI chatbot application: Learning approaches and prompt engineering
Farhan Ali, Doris Choy, Shanti Divaharan, et al.
Learning Research and Practice (2023) Vol. 9, Iss. 2, pp. 135-147
Closed Access | Times Cited: 44

ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing
Ian Arawjo, Chelse Swoopes, Priyan Vaithilingam, et al.
(2024), pp. 1-18
Open Access | Times Cited: 43

First-year students AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education
Jan Delcker, Joana Heil, Dirk Ifenthaler, et al.
International Journal of Educational Technology in Higher Education (2024) Vol. 21, Iss. 1
Open Access | Times Cited: 38

Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs
Hari Subramonyam, Roy Pea, Christopher Pondoc, et al.
(2024), pp. 1-19
Open Access | Times Cited: 35

Homogenization Effects of Large Language Models on Human Creative Ideation
Barrett R. Anderson, Jash Hemant Shah, Max Kreminski
Creativity and Cognition (2024), pp. 413-425
Closed Access | Times Cited: 34

Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data
Jing Wei, Sungdong Kim, Hyunhoon Jung, et al.
Proceedings of the ACM on Human-Computer Interaction (2024) Vol. 8, Iss. CSCW1, pp. 1-35
Open Access | Times Cited: 31

PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement
Zhijie Wang, Yuheng Huang, Da Song, et al.
(2024), pp. 1-21
Open Access | Times Cited: 29

Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences
Shreya Shankar, J.D. Zamfirescu-Pereira, Bjoern Hartmann, et al.
(2024) Vol. 105, pp. 1-14
Closed Access | Times Cited: 29

How to write effective prompts for large language models
Zhicheng Lin
Nature Human Behaviour (2024) Vol. 8, Iss. 4, pp. 611-615
Closed Access | Times Cited: 27

MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, et al.
(2024)
Open Access | Times Cited: 25

Rehearsal: Simulating Conflict to Teach Conflict Resolution
Omar Shaikh, Valentino Chai, Michele J. Gelfand, et al.
(2024), pp. 1-20
Open Access | Times Cited: 24
