Apple’s guidelines for its new Siri GenAI system illustrate the GenAI predicament
“The concept of security in AI can get hazy, though,” said Williamson. “Anything disclosed to an AI could potentially leak to others. There’s no indication that Apple took precautions to safeguard this particular template. One could assume they did not expect end users to have access to it. Unfortunately, large language models (LLMs) don’t excel at keeping secrets.”
Another AI expert, Rasa Chief Technology Officer Alan Nichol, praised several of the remarks. “The viewpoint was pretty practical and straightforward,” said Nichol, emphasizing that “a model cannot always identify its errors.”
“These models generate coherent text that may happen to coincide with reality,” said Nichol. “Occasionally, purely by chance, they turn out to be correct. Given how these models are trained, they are designed to cater to the end user’s preferences and desires.”
