OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
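A listing like this can also be retrieved programmatically from the OpenAlex REST API, which exposes citing works through the `cites:` filter on the works endpoint. A minimal sketch, assuming you already know the OpenAlex work ID (the ID below is hypothetical; look up the real one via the API's search endpoint first):

```python
def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex query URL for works that cite `work_id`.

    OpenAlex paginates results with `page` and `per-page` query
    parameters, matching the 25-per-page listing shown on this page.
    """
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&page={page}&per-page={per_page}"
    )

# Hypothetical work ID; fetch the URL with any HTTP client to get JSON.
url = citing_works_url("W4385000000")
print(url)
```

The returned JSON includes each citing work's title, authors, venue, year, open-access status, and its own `cited_by_count`, which is enough to reconstruct every field shown in the listing below.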

Requested Article:

Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study
Yasutaka Yanagita, Daiki Yokokawa, Shun Uchida, et al.
JMIR Formative Research (2023) Vol. 7, pp. e48023-e48023
Open Access | Times Cited: 64

Showing 1-25 of 64 citing articles:

Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: A Systematic Review and Meta-Analysis (Preprint)
Mingxin Liu, Tsuyoshi Okuhara, Xinyi Chang, et al.
Journal of Medical Internet Research (2024) Vol. 26, pp. e60807-e60807
Open Access | Times Cited: 33

Reliability of ChatGPT for performing triage task in the emergency department using the Korean Triage and Acuity Scale
Jae Hyuk Kim, Sun Kyung Kim, Jongmyung Choi, et al.
Digital Health (2024) Vol. 10
Open Access | Times Cited: 27

Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study
Takahiro Nakao, Soichiro Miki, Yuta Nakamura, et al.
JMIR Medical Education (2024) Vol. 10, pp. e54393-e54393
Open Access | Times Cited: 27

GPT-4 Turbo with Vision fails to outperform text-only GPT-4 Turbo in the Japan Diagnostic Radiology Board Examination
Yuichiro Hirano, Shouhei Hanaoka, Takahiro Nakao, et al.
Japanese Journal of Radiology (2024) Vol. 42, Iss. 8, pp. 918-926
Open Access | Times Cited: 25

Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study
Giacomo Rossettini, Lia Rodeghiero, Federica Corradi, et al.
BMC Medical Education (2024) Vol. 24, Iss. 1
Open Access | Times Cited: 23

Evaluation of the Performance of Three Large Language Models in Clinical Decision Support: A Comparative Study Based on Actual Cases
Xueqi Wang, Haiyan Ye, S. H. Zhang, et al.
Journal of Medical Systems (2025) Vol. 49, Iss. 1
Closed Access | Times Cited: 2

Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study
Masao Noda, Takayoshi Ueno, Ryota Koshu, et al.
JMIR Medical Education (2024) Vol. 10, pp. e57054-e57054
Open Access | Times Cited: 15

ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study
Hiroyasu Sato, Katsuhiko Ogasawara
Journal of Educational Evaluation for Health Professions (2024) Vol. 21, pp. 4-4
Open Access | Times Cited: 10

Conformity of ChatGPT recommendations with the AUA/SUFU guideline on postprostatectomy urinary incontinence
Vicktor Bruno Pereira Pinto, Matheus F. de Azevedo, Marcelo Langer Wroclawski, et al.
Neurourology and Urodynamics (2024) Vol. 43, Iss. 4, pp. 935-941
Closed Access | Times Cited: 10

ChatGPT for Tinnitus Information and Support: Response Accuracy and Retest after Three and Six Months
W. Wiktor Jędrzejczak, Piotr H. Skarżyński, Danuta Raj-Koziak, et al.
Brain Sciences (2024) Vol. 14, Iss. 5, pp. 465-465
Open Access | Times Cited: 9

Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom’s Taxonomy
Ambadasu Bharatha, Nkemcho Ojeh, Ahbab Mohammad Fazle Rabbi, et al.
Advances in Medical Education and Practice (2024) Vol. 15, pp. 393-400
Open Access | Times Cited: 8

GPT-4/4V's performance on the Japanese National Medical Licensing Examination
Tomoki Kawahara, Yuki Sumi
Medical Teacher (2024), pp. 1-8
Closed Access | Times Cited: 7

Expert assessment of ChatGPT’s ability to generate illness scripts: an evaluative study
Yasutaka Yanagita, Daiki Yokokawa, Fumitoshi Fukuzawa, et al.
BMC Medical Education (2024) Vol. 24, Iss. 1
Open Access | Times Cited: 7

Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions
Malik Sallam, Khaled Al‐Salahat, Huda Eid, et al.
Advances in Medical Education and Practice (2024) Vol. 15, pp. 857-871
Open Access | Times Cited: 6

Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5, and Humans in Clinical Chemistry Multiple-Choice Questions
Malik Sallam, Khaled Al‐Salahat, Huda Eid, et al.
medRxiv (Cold Spring Harbor Laboratory) (2024)
Open Access | Times Cited: 5

The Evaluation of Generative AI Should Include Repetition to Assess Stability
Lingxuan Zhu, Weiming Mou, Chenglin Hong, et al.
JMIR mhealth and uhealth (2024) Vol. 12, pp. e57978-e57978
Open Access | Times Cited: 5

Evaluating the performance of ChatGPT-3.5 and ChatGPT-4 on the Taiwan plastic surgery board examination
Ching‐Hua Hsieh, Hsiao-Yun Hsieh, Hui‐Ping Lin
Heliyon (2024) Vol. 10, Iss. 14, pp. e34851-e34851
Open Access | Times Cited: 5

Analysis of Responses of GPT-4 V to the Japanese National Clinical Engineer Licensing Examination
Kai Ishida, Naoya Arisaka, Kiyotaka Fujii
Journal of Medical Systems (2024) Vol. 48, Iss. 1
Closed Access | Times Cited: 4

Exploration of the optimal deep learning model for english-Japanese machine translation of medical device adverse event terminology
Ayako Yagahara, Masahito Uesugi, Hideto Yokoi
BMC Medical Informatics and Decision Making (2025) Vol. 25, Iss. 1
Open Access

Page 1 - Next Page
