DR. ChatGPT: A NEW PARTNER IN DIAGNOSTIC ACCURACY
A new study led by researchers at Beth Israel Deaconess Medical Center (BIDMC) in Boston tested OpenAI's ChatGPT-4 against 50 doctors, ranging from residents to senior physicians, on diagnosing real patient cases. The study used a structured methodology for evaluating clinical reasoning, scoring participants on their ability to synthesize patient data, generate differential diagnoses, and justify their conclusions.
The results were striking: ChatGPT alone achieved an impressive 90% diagnostic accuracy, compared to 75% for doctors working independently. Even when doctors had access to the chatbot as an assistive tool, their accuracy improved only slightly, to 76%.
This marginal improvement may be explained by two factors:
- Many doctors held firmly to their initial diagnoses, even when ChatGPT suggested potentially better alternatives.
- Some treated the chatbot like a basic search engine, missing its ability to analyze complex cases holistically.