AI Demands Laser Security Focus on Data in Use
AI’s rapid advancement is stirring up both enthusiasm and concern, particularly regarding data security. While traditional security measures have focused on data at rest and in motion, the rise of AI underscores the critical need to protect data in use. AI expands the threat landscape: as AI workloads proliferate, the volume and sensitivity of the data they process increase dramatically. In addition, the data that powers AI solutions is often processed outside of an organization’s datacenter, for example in the cloud, where GPUs are readily available, and the solution may be enriched with sensitive data via retrieval-augmented generation (RAG). The most sensitive, proprietary data may be stored encrypted at rest on premises and encrypted in transit to the cloud, but all of it must be decrypted while in use, creating an attack vector.
Ignoring the need to adjust security accordingly is not an option. The cascading consequences of compromised data in use range from exposing sensitive information to bad actors or competitors to data manipulation that produces poor or malicious responses. And the costs, not just of a breach but of being unable to demonstrate AI readiness and transparency, are high. Some of those costs relate to potential non-compliance with a growing number of regulations that directly and indirectly address AI and data in use. Existing regulations such as GDPR, HIPAA, DORA and, most recently, the EU Cyber Resilience Act (CRA) mandate strict controls over the confidentiality and integrity of personal and sensitive data, and new regulations are emerging that target the use of AI specifically. For example, last year the EU adopted the Artificial Intelligence Act (AIA), which, among other things, bans AI systems posing unacceptable risks, mandates codes of practice and sets transparency requirements. While the act does not explicitly address data in use, its safeguards apply when data is being actively used in AI applications. The AIA rules will roll out throughout 2025, and new government and industry-specific regulations will surely emerge in the near future.

Organizations can avoid some of the risks associated with data in use by hosting everything on-premises, but that requires purchasing expensive and scarce GPUs and is not feasible for most. Especially as the use of agentic AI expands, organizations must do everything they can to create concrete trust boundaries around data, no matter where it is hosted or how it is deployed, processed and delivered.

Confidential Computing: Circles of Trust

Confidential computing is one way organizations can expand data protection to cover data in use. It creates trusted enclaves: secure environments that protect data in use from unauthorized access, even when workloads run in the public cloud. Confidential computing leverages hardware-based trusted execution environments (TEEs) such as AMD SEV, Intel TDX and NVIDIA H100 GPUs to create isolated “circles of trust.” However, this hardware (like much of the technology in this realm) can be complicated to understand and deploy. Organizations can mitigate some of that complexity by leveraging software layers of abstraction that simplify (relatively speaking) secure TEE deployments. For example, Confidential Containers (CoCo) is an open-source cloud-native project that integrates TEEs, standardizing confidential computing at the pod level and reducing its complexity in Kubernetes environments. All of this still requires expertise to implement, but Confidential Containers reduces the chance that unauthorized entities, whether administrators, infrastructure providers, privileged programs or malicious hackers, will be able to access workload data.

Say you are a global bank that wants to use the same chatbot in India, Germany and the U.S. without writing a separate application for each country. RAG enables the chatbot to deliver location-aware, contextualized content by augmenting AI models with region-specific data, but this introduces risks related to data sovereignty and to exposure during processing in the public cloud and by LLMs. However, if the application were run in a TEE-based container, with data decrypted only within the TEE, it would be isolated from those risks and in compliance with, for example, GDPR.
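To make the pod-level model concrete, here is a minimal sketch, using the Kubernetes Python client, of what scheduling such a chatbot workload as a confidential container might look like. The runtime class name, image, namespace and environment variable below are illustrative assumptions; the actual values depend on how CoCo is installed in your cluster and on the underlying TEE hardware.

```python
# Minimal sketch: scheduling a workload as a Confidential Container (CoCo).
# Assumes a cluster where a confidential runtime class has been installed;
# the name "kata-qemu-tdx" is illustrative and varies by installation and
# TEE hardware (for example, SEV-SNP vs. TDX).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="rag-chatbot", namespace="default"),
    spec=client.V1PodSpec(
        # The runtime class is what places the pod inside a TEE-backed VM.
        runtime_class_name="kata-qemu-tdx",
        containers=[
            client.V1Container(
                name="chatbot",
                image="registry.example.com/rag-chatbot:latest",  # hypothetical image
                env=[
                    # Region-specific RAG data source; decrypted only inside the TEE.
                    client.V1EnvVar(name="RAG_REGION", value="eu-de"),
                ],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

From the application’s point of view nothing changes; the runtime class is what moves the pod’s memory and execution into the hardware-protected environment.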
Attestation: Proof of Trust

A key piece of confidential computing is attestation, which verifies trust before code is executed and data is delivered or processed. TEEs can be run without an attestation service, but pairing the cloud provider’s TEE with a remote attestation solution adds third-party validation to the overall solution. Attestation uses cryptographic proof to verify that the confidential environment (servers, clusters or containers) is running within the protected hardware environment, the TEE, and has not been tampered with. Verification then determines whether the requested operation is allowed. This gives organizations confidence and, importantly, evidence that their data remains secure during processing, even in public or shared cloud infrastructure. Open-source projects such as Trustee and Keylime are making attestation more accessible and portable across platforms; a simplified sketch of this verify-then-release flow appears after the conclusion.

The Future and Post-Quantum Computing

As noted, all of this is complex, and although the community is working to simplify the deployment and validation of confidential computing environments, it is only going to get more complex. Quantum computing is waiting in the wings, ready to deliver exponential (literally, not figuratively) speedups for certain problems and to expand the types of problems computers can address. Unfortunately, quantum computing will also pave the way for attacks that are not currently possible, including new attacks on encrypted data at rest and in motion. Organizations should be thinking about post-quantum cryptography and about what they will do when quantum computers become powerful enough to break existing public key cryptography. The software community is working to update cryptographic libraries with post-quantum algorithms, and organizations, especially those that must secure data for extended periods, such as financial and healthcare institutions, will need to be ready to transition to quantum-safe algorithms as soon as possible. In the meantime, confidential computing and attestation can be used now to protect against harvest-now, decrypt-later attacks. Organizations need to keep on top of the latest news and research about quantum computing, just as they track the fast pace of change in AI technologies.

Conclusion

AI’s rise necessitates not just a stronger focus but a laser focus on protecting data in use. Confidential computing and attestation now, in tandem with planning for the adoption of post-quantum cryptographic algorithms and libraries, can provide the means to create secure environments for AI processing and set the stage for a complete AI data security ecosystem.
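As a follow-up to the attestation section above, the sketch below shows, in deliberately simplified form, the verify-then-release control flow that attestation provides: a verifier compares cryptographic evidence about the environment against trusted reference values and only then releases the key needed to decrypt data. The measurement scheme, reference values and key handling are all illustrative assumptions; production systems rely on hardware-signed evidence (for example, TDX or SEV-SNP quotes) and services such as Trustee or Keylime.

```python
# Conceptual sketch of remote attestation: the verifier checks evidence about
# the environment before releasing a data-decryption key. This toy version
# only illustrates the "verify, then release" control flow; it does not
# implement real hardware-signed evidence.
import hashlib
import secrets
from typing import Optional

# Reference values the verifier trusts (e.g., measured boot / image digests).
REFERENCE_MEASUREMENTS = {
    hashlib.sha256(b"approved-confidential-vm-image-v1").hexdigest(),
}

# Key that must never leave the verifier unless attestation succeeds.
DATA_DECRYPTION_KEY = secrets.token_hex(32)


def collect_evidence(workload_image: bytes) -> dict:
    """Stand-in for the TEE producing signed evidence about what is running."""
    return {"measurement": hashlib.sha256(workload_image).hexdigest()}


def verify_and_release(evidence: dict) -> Optional[str]:
    """Release the key only if the measurement matches a trusted reference."""
    if evidence["measurement"] in REFERENCE_MEASUREMENTS:
        return DATA_DECRYPTION_KEY
    return None


if __name__ == "__main__":
    good = collect_evidence(b"approved-confidential-vm-image-v1")
    bad = collect_evidence(b"tampered-image")
    print("trusted workload gets key:", verify_and_release(good) is not None)
    print("tampered workload gets key:", verify_and_release(bad) is not None)
```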
