AI supervising AI that is supervising AI: What could possibly go wrong?
“CriticGPT's suggestions are not always accurate, but we find that they can help trainers catch many more problems in model-written answers than they would without AI assistance,” the company said. “Additionally, when people use CriticGPT, the AI augments their skills, resulting in more comprehensive critiques than when people work alone, and fewer hallucinated bugs than when the model works alone.”
And that is the core problem. One of the weaknesses of generative AI is that it is very good at imitating humans without understanding them. It reminds me of a piece I wrote more than a decade ago about engineers building a device to detect true love. (A real product: a Bluetooth bra that unhooks only when it senses true love. Seriously. To be clear, I am not explicitly saying that engineers are as bad at understanding human emotions as genAI is. I'm not denying it, but I'm not explicitly saying it, either.)
Back to genAI oversight: the flawed assumption OpenAI is making is that humans will keep scrutinizing its systems for errors. Humans tend toward laziness, and human IT workers are chronically overworked and under-resourced. The far more likely scenario is that people will increasingly defer to the AI overseer, and that is when the real danger begins.
