Apple’s guidelines for its fresh Siri GenAI solution demonstrate the GenAI predicament

“Although, the concept of security in AI can become hazy,” stated Williamson. “Anything disclosed to an AI may potentially be leaked to others. There’s no indication that Apple took precautions to safeguard this specific template.

[…Keep reading]

Apple’s instructions to its new Siri GenAI offering illustrate the GenAI challenge

“Although, the concept of security in AI can become hazy,” stated Williamson. “Anything disclosed to an AI may potentially be leaked to others. There’s no indication that Apple took precautions to safeguard this specific template. One could assume they did not anticipate end-users to have access to it. Regrettably, Language Model Machines (LLMs) don’t excel in maintaining confidentiality.”

Another expert in the field of AI, Rasa Chief Technology Officer Alan Nichol, praised several of the remarks. “The viewpoint was pretty practical and straightforward,” mentioned Nichol, emphasizing that “a model cannot always identify its errors.”

“Such models generate coherent texts that may coincide with reality,” stated Nichol. “Occasionally, merely by chance, they may happen to be correct. When considering how these models are trained, they are designed to cater to the end-user’s preferences and desires.”

About Author

Subscribe To InfoSec Today News

You have successfully subscribed to the newsletter

There was an error while trying to send your request. Please try again.

World Wide Crypto will use the information you provide on this form to be in touch with you and to provide updates and marketing.