Research: ChatGPT outperforms medical professionals in disease diagnosis
A recent study suggests that OpenAI's conversational AI ChatGPT-4 outperforms human physicians at identifying illnesses, The New York Times reports.
The study involved fifty healthcare providers, including attending physicians and medical residents, who made diagnoses based on assessments of clinical cases. Overall, ChatGPT-4 achieved a diagnostic accuracy of 90%, while the doctors working on their own averaged 74%.
Notably, giving physicians access to ChatGPT-4 made little difference: those who used the tool in their assessments averaged 76%, only slightly better than the doctors who worked without the chatbot, and still well below the chatbot's score on its own.
