GPT-4’s Remarkable Performance in Eye Assessments Comparable to Expert Doctors

Advancements in large language models (LLMs) are raising questions about their potential benefits to society, particularly in the medical field. A recent study conducted by the University of Cambridge’s School of Clinical Medicine found that OpenAI’s GPT-4 demonstrated impressive performance in an ophthalmology assessment, nearly matching that of expert practitioners in the field.

Published in PLOS Digital Health, the study compared GPT-4 with its predecessor GPT-3.5, Google’s PaLM 2, and Meta’s LLaMA by administering 87 multiple-choice questions on ophthalmology. A diverse group of participants, including expert ophthalmologists, trainee ophthalmologists, and junior doctors, took the same exam. The questions covered a range of topics from light sensitivity to lesions and were sourced from a specialized textbook. Because the content was not publicly available, the researchers believe the LLMs were not trained on these specific questions. ChatGPT, powered by GPT-4 or GPT-3.5, was given multiple opportunities to provide accurate responses.

According to the Financial Times, the results of this study highlight the potential of advanced LLMs like GPT-4 to assist medical professionals in diagnostic assessments and decision-making processes. These findings could revolutionize the way healthcare professionals leverage AI technology in their practice.

2024-04-18 08:15:17
Source: www.engadget.com