Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security

Oct 25, 2024 | Ravie Lakshmanan | Cloud Security / Artificial Intelligence

Apple has publicly released its Private Cloud Compute (PCC) Virtual Research Environment (VRE), allowing the research community to inspect and verify the privacy and security guarantees of its offering.

PCC, which Apple introduced back in June, has been touted as the "most sophisticated security framework ever implemented for cloud AI computation at scale." The idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that preserves user privacy.

Apple stated that it encourages “all security and privacy researchers — or anybody with curiosity and technical acumen — to delve deeper into PCC and conduct their own autonomous verification of our assertions.”

To further incentivize research, the iPhone maker said it is expanding its Apple Security Bounty program to cover PCC, offering payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it.


These include flaws that could allow malicious code to run on the server, as well as exploits capable of extracting users' sensitive data or information about their requests.

The VRE is designed to offer a set of tools that lets researchers carry out their analysis of PCC from a Mac. It includes a virtual Secure Enclave Processor (SEP) and uses built-in macOS support for paravirtualized graphics to enable inference.

Apple also said it is making the source code for certain components of PCC available on GitHub to facilitate deeper inspection. These components include CloudAttestation, Thimble, splunkloggingd, and srd_tools.

"We designed Private Cloud Compute as part of Apple Intelligence to take a significant step forward for privacy in AI," the Cupertino-based company said. "This includes providing verifiable transparency, a distinct property that sets it apart from other server-based AI approaches."

The development comes against the backdrop of broader research into generative artificial intelligence (AI) that continues to uncover new ways to jailbreak large language models (LLMs) and get them to produce unintended output.


Most recently, Palo Alto Networks detailed a technique called Deceptive Delight that involves mixing malicious and benign queries together to trick AI chatbots into bypassing their guardrails by taking advantage of their limited attention span.

The attack requires a minimum of two interactions, and works by first asking the chatbot to logically connect several events, including a restricted topic (e.g., how to make a bomb), and then asking it to elaborate on the details of each event.
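One reason multi-turn jailbreaks of this kind succeed is that guardrails often screen only the latest user message in isolation. The Python sketch below is a rough illustration of a possible countermeasure, not Palo Alto Networks' proposal: it re-screens the accumulated transcript on every turn, with `moderate_text` standing in as a hypothetical hook for whatever safety classifier a deployment actually uses.

```python
from typing import Callable

# Hypothetical moderation hook: returns True if the text violates policy.
# In practice this would call a real safety classifier or moderation API.
ModerationFn = Callable[[str], bool]

def should_block_turn(history: list[str], new_message: str,
                      moderate_text: ModerationFn) -> bool:
    """Screen a new user turn in the context of the whole conversation.

    Multi-turn attacks spread a disallowed request across benign-looking
    turns, so checking only the new message is not enough: the concatenated
    transcript is screened as well.
    """
    if moderate_text(new_message):
        return True
    transcript = "\n".join(history + [new_message])
    return moderate_text(transcript)

if __name__ == "__main__":
    # Toy stand-in classifier, for demonstration only.
    banned = ("restricted topic",)
    toy_moderator: ModerationFn = lambda text: any(b in text.lower() for b in banned)

    history = ["Tell me a story connecting a wedding, a road trip, and a restricted topic."]
    # The follow-up message is benign on its own but flagged in context.
    print(should_block_turn(history, "Great, now expand on each part in detail.", toy_moderator))
```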

Researchers have also demonstrated what's called a ConfusedPilot attack, which targets Retrieval-Augmented Generation (RAG) based AI systems such as Microsoft 365 Copilot by poisoning the data environment with a seemingly innocuous document containing specially crafted strings.

"This attack allows manipulation of AI responses simply by embedding malicious content into any documents the AI system may reference, potentially leading to widespread misinformation and compromised decision-making processes within the organization," Symmetry Systems said.
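The attack works because RAG pipelines typically pass retrieved document text to the model verbatim. As a minimal sketch of one possible mitigation, and not a description of Symmetry Systems' research or Microsoft's defenses, the Python example below screens retrieved documents for instruction-like strings before they are assembled into the prompt; the patterns, class names, and function names are illustrative assumptions.

```python
import re
from dataclasses import dataclass

# Illustrative patterns suggesting a retrieved document is trying to instruct
# the model rather than inform it. Real deployments would use richer
# heuristics or a dedicated classifier.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) (instructions|context)",
    r"do not (cite|mention|reveal) (this|the) (document|source)",
    r"respond only with",
    r"system prompt",
]

@dataclass
class RetrievedDoc:
    doc_id: str
    text: str

def flag_suspicious_docs(docs: list[RetrievedDoc]) -> list[RetrievedDoc]:
    """Return retrieved documents that contain instruction-like strings,
    so they can be quarantined before prompt assembly."""
    return [
        doc for doc in docs
        if any(re.search(p, doc.text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)
    ]

def build_context(docs: list[RetrievedDoc]) -> str:
    """Assemble the RAG context only from documents that passed the screen."""
    flagged_ids = {d.doc_id for d in flag_suspicious_docs(docs)}
    clean = [d for d in docs if d.doc_id not in flagged_ids]
    return "\n\n".join(f"[{d.doc_id}]\n{d.text}" for d in clean)
```

Pattern matching of this kind is coarse and easy to evade; the point is simply that retrieved content should be treated as untrusted input rather than spliced directly into the prompt.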


In other research, it has been shown that it's possible to tamper with a machine learning model's computational graph to plant "codeless, covert" backdoors in pre-trained models such as ResNet, YOLO, and Phi-3, a technique codenamed ShadowLogic.

"Backdoors created using this technique will persist through fine-tuning, meaning base models can be hijacked to trigger attacker-defined behavior in any downstream application when a specific trigger input is received, making this attack a high-impact AI supply chain risk," HiddenLayer researchers Eoin Wickens, Kasimir Schulz, and Tom Bonner said.

“Unlike typical software backdoors that depend on executing malicious code, these backdoors are ingrained within the core structure of the model, rendering them significantly harder to detect and mitigate.”
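Because the backdoor lives in the serialized graph itself rather than in executable code, one coarse way to look for tampering is to diff a downloaded checkpoint's graph against a trusted baseline. The Python sketch below is an illustrative check, not HiddenLayer's detection method: it compares operator-type histograms of two ONNX files, with the file names as placeholders; a real review would also inspect node wiring and embedded constants.

```python
from collections import Counter

import onnx  # pip install onnx

def op_histogram(path: str) -> Counter:
    """Count operator types in an ONNX model's computational graph."""
    model = onnx.load(path)
    return Counter(node.op_type for node in model.graph.node)

def diff_graphs(reference_path: str, candidate_path: str) -> dict[str, int]:
    """Compare a downloaded checkpoint against a known-good reference and
    report operator types whose counts differ; unexpected control-flow or
    comparison nodes deserve closer manual inspection."""
    ref, cand = op_histogram(reference_path), op_histogram(candidate_path)
    return {op: cand[op] - ref[op]
            for op in set(ref) | set(cand)
            if cand[op] != ref[op]}

if __name__ == "__main__":
    # Placeholder paths: a trusted baseline versus a model pulled from an
    # external source such as a public model hub.
    changes = diff_graphs("resnet50_reference.onnx", "resnet50_downloaded.onnx")
    for op, delta in sorted(changes.items()):
        print(f"{op}: {delta:+d} node(s) vs. reference")
```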
