ASSESSING PATIENT EDUCATION GUIDE GENERATED BY CHATGPT VS GOOGLE GEMINI ON COMMON HEPATOLOGY CONDITIONS: A CROSS-SECTIONAL STUDY
Priyal PATEL, Aamuktha MAREPALLI, Krithika NATHAN, Jahnavi AKKALDEVI, Niraj BALAKRISHNAN
Euroasian Journal of Hepato-Gastroenterology - 2025;15(2):173-177
Department of Internal Medicine, Worcestershire Royal Hospital, Worcester, United Kingdom

Aims and objectives: To compare patient education guides generated by ChatGPT and Google Gemini on hepatitis, cirrhosis, and non-alcoholic fatty liver disease.

Introduction: As artificial intelligence (AI) becomes more integrated into healthcare, assessing the quality of the health information it generates is important. This study evaluates patient information guides produced by ChatGPT and Google Gemini for common hepatology conditions, focusing on accessibility, clarity, and comprehensiveness.

Methodology: Guides from both AI systems were evaluated using Flesch-Kincaid readability tests, Quillbot for similarity scores, and the DISCERN instrument for reliability. A quantitative analysis was conducted on various parameters, including word and sentence counts.

Results: ChatGPT generated significantly more words and sentences than Google Gemini, indicating more extensive content. However, there were no statistically significant differences in average words per sentence, syllable count, grade level, ease score, similarity percentage, or reliability scores, suggesting comparable complexity and consistency between the two models.

Conclusions: The findings underscore the need to refine AI-generated health information to meet diverse patient needs. While AI shows promise in enhancing patient education, continuous evaluation and adaptation are essential to ensure clarity and balance in the information provided. Recommendations include improving content accessibility and reliability for optimal patient engagement.
