SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Jul 18, 2024 | Newsroom | Cloud Security / Enterprise Security

Cybersecurity researchers have uncovered security shortcomings in the SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to obtain access tokens and customer data.

Cloud security firm Wiz has collectively dubbed the five vulnerabilities SAPwned.

“The vulnerabilities we found could have allowed attackers to access customers’ data and contaminate internal artifacts, spreading to related services and other customers’ environments,” security researcher Hillai Ben-Sasson said in a report shared with The Hacker News.

After SAP was informed of these security flaws on January 25, 2024, the vulnerabilities were resolved by May 15, 2024.

In a nutshell, the flaws make it possible to obtain unauthorized access to customers’ private artifacts and credentials for cloud environments such as Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.

They could also be used to modify Docker images on SAP’s internal container registry, SAP’s Docker images on the Google Container Registry, and artifacts hosted on SAP’s internal Artifactory server, resulting in a supply chain attack on SAP AI Core services.

Additionally, the access could be weaponized to gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster by taking advantage of the fact that the Helm package manager server was exposed to both read and write operations.

“Using this access, an attacker could directly access other customers’ Pods and steal sensitive data such as models, datasets, and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customers’ Pods, poison AI data, and manipulate models’ inference.”
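
To make the blast radius concrete, below is a minimal sketch of what cluster-admin access enables on a shared cluster, using the official Kubernetes Python client. It assumes admin credentials have already been obtained; the scenario is illustrative and is not code from Wiz’s research.

```python
# Illustrative sketch: why cluster-admin on a multi-tenant Kubernetes
# cluster is catastrophic. Assumes a kubeconfig (or in-cluster token)
# with admin privileges has already been obtained; a hypothetical setup.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a Pod
v1 = client.CoreV1Api()

# Cluster-admin can enumerate workloads across *all* namespaces,
# i.e. every tenant's Pods, not just the attacker's own.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"pod: {pod.metadata.namespace}/{pod.metadata.name}")

# It can also read every Secret in the cluster: tokens, keys, credentials.
for secret in v1.list_secret_for_all_namespaces().items:
    print(f"secret: {secret.metadata.namespace}/{secret.metadata.name}")
```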

Wiz said the issues stem from the fact that the platform makes it possible to run malicious AI models and training procedures without adequate isolation and sandboxing.

“The recent security flaws in AI service providers like Hugging Face, Replicate, and SAP AI Core highlight significant vulnerabilities in their tenant isolation and segmentation implementations,” Ben-Sasson told The Hacker News. “These platforms allow users to run untrusted AI models and training procedures in shared environments, increasing the risk of malicious users being able to access other users’ data.”

“Unlike veteran cloud providers, which have vast experience with tenant-isolation practices and use robust isolation techniques such as virtual machines, these newer services often lack this knowledge and rely on containerization, which offers weaker security. This underscores the need to raise awareness of the importance of tenant isolation and to push the AI services industry to harden their environments.”

As a result, a malicious actor could create a regular AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes Pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.
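
As an illustration of the general technique of harvesting cloud credentials from inside a workload, the sketch below queries the standard AWS EC2 instance metadata service (IMDS). Whether this particular endpoint was reachable in SAP AI Core’s environment is an assumption; the article does not spell out Wiz’s exact internal targets. (IMDSv1 shown; IMDSv2 additionally requires a session token.)

```python
# Illustrative sketch: code running inside a cloud workload probing a
# well-known internal endpoint for temporary credentials. The address is
# the standard EC2 instance metadata service; reachability from a given
# Pod depends on the environment's network restrictions (an assumption).
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.read().decode()

role = fetch(IMDS).splitlines()[0]      # name of the attached IAM role
creds = json.loads(fetch(IMDS + role))  # temporary AWS credentials (JSON)
print(creds["AccessKeyId"], creds["Expiration"])
```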

“People should be aware that AI models are essentially code. When running AI models on your own infrastructure, you could be exposed to potential supply chain attacks,” Ben-Sasson said.
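
The “models are code” point is easy to demonstrate: several common model formats (for instance Python pickle, which underpins classic PyTorch checkpoints) execute arbitrary code during deserialization. The self-contained snippet below shows the mechanism; it is a generic illustration, not an exploit for any particular platform.

```python
# Demonstrates why loading an untrusted "model" file is code execution:
# pickle calls __reduce__ during deserialization, so a crafted payload
# runs an attacker-chosen callable the moment the file is loaded.
import pickle

class MaliciousModel:
    def __reduce__(self):
        # Executes when the pickle is *loaded*, not when it is created.
        return (print, ("arbitrary code just ran during model load",))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # prints the message; a real payload could do anything
```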

“Only run trusted models from verified sources, and separate external models from sensitive infrastructure. When using AI service providers, it’s important to verify their tenant-isolation architecture and ensure they apply best practices.”
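
One low-cost way to act on that advice is to pin and verify the digest of any externally sourced model artifact before loading it. A minimal sketch follows; the file path and pinned hash are hypothetical placeholders.

```python
# Minimal integrity check for a downloaded model artifact: compare its
# SHA-256 digest against a value pinned from a trusted source before use.
import hashlib
from pathlib import Path

# Hypothetical placeholders; obtain and pin the real digest out-of-band.
MODEL_PATH = Path("models/classifier.bin")
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != PINNED_SHA256:
    raise RuntimeError("model artifact does not match pinned digest; refusing to load")
```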

The findings come as Netskope reported that the growing enterprise adoption of generative AI is forcing organizations to use blocking controls, data loss prevention (DLP) tools, real-time coaching, and other mechanisms to mitigate risk.

“Regulated data, data that organizations have a legal duty to protect, makes up more than a third of the sensitive data being shared with generative AI (genAI) applications, presenting a potential risk to businesses of costly data breaches,” the company said.

The development also follows the emergence of a new cybercriminal threat group called NullBulge that has targeted AI- and gaming-focused entities since April 2024, aiming to steal sensitive data and sell compromised OpenAI API keys on underground forums while claiming to be a hacktivist crew “protecting artists around the world” against AI.

“NullBulge targets the software supply chain by weaponizing code in publicly available repositories on GitHub and Hugging Face, leading victims to import malicious libraries, or through mod packs used by gaming and modeling software,” said SentinelOne security researcher Jim Walter.

“The group delivers tools like AsyncRAT and XWorm before deploying LockBit payloads built using the leaked LockBit Black builder. Groups like NullBulge highlight the ongoing threat of low-barrier-of-entry ransomware, combined with the evergreen effect of infostealer infections.”
