AI chatbots are worse than search engines for medical advice

There is a clear gap between the theoretical medical knowledge of large language models (LLMs) and their practical usefulness for patients, according to a new study from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford. The research, conducted in collaboration with MLCommons and other institutions, involved 1,298 people in the UK.

In the study, one group was asked to use LLMs such as GPT-4o, Llama 3, and Command R to assess health symptoms and suggest courses of action, while a control group relied on their usual methods, such as search engines or their own knowledge.

The results showed that the group using generative AI (genAI) tools performed no better than the control group at assessing the urgency of a condition, and they were worse at identifying the correct medical condition, according to The Register.
