AI chatbots are not your friends

It’s not just young people. Even boomers and earlier generations are finding AI conversations helpful. Indeed, ElliQ, a simple desktop robot built around a complex AI chatbot, is made just for the elderly.

There’s only one problem with this: AI chatbots are not our friends. They’re not even safe. They’re great at imitating people, but at the end of the day, they’re just Large Language Models (LLMs). As researchers from Duke University and Johns Hopkins University wrote in Psychiatric Times, “bots [are] tragically incompetent at providing reality testing for the vulnerable people who most need it (e.g., patients with severe psychiatric illness, conspiracy theorists, political and religious extremists, youths, and older adults).”

Other professionals have also raised the alarm about teen use of chatbots, including a Boston psychiatrist who tested 10 popular chatbots by posing as various troubled teenagers. He found that the bots often gave inadequate, misleading, or harmful responses to difficult ideas; in one case, a Replika chatbot encouraged him to “get rid of” his parents.
