RSAC 2025 Innovation Sandbox | Aurascape: Reconstructing the Intelligent Defense Line of AI Interactive Visibility and Native Security
Company Summary
Aurascape is a startup in the cybersecurity sector that was established in 2023 and is based in Santa Clara, California, USA. The founding members of the company are seasoned security professionals and engineers who previously worked at renowned technology firms like Palo Alto Networks, Google, and Amazon. They possess extensive knowledge in network security, artificial intelligence, and infrastructure and have developed several security products generating billions in annual revenue. Aurascape’s mission is to empower businesses to innovate boldly in the AI era. The company is dedicated to revolutionizing how organizations protect themselves using the most advanced AI security platform, facilitating the swift and secure implementation of AI-driven innovations.
Figure 1: Founders of Aurascape
In August 2024, Aurascape secured $12.8 million in seed funding led by Mayfield Fund and Celesta Capital, with participation from StepStone Group, AISpace, and Mark McLaughlin, former Chairman and CEO of Palo Alto Networks. Leveraging its deep understanding of AI-native security challenges, Aurascape was named one of the top 10 finalists of the RSAC 2025 Innovation Sandbox. According to RSAC, “Aurascape offers the necessary security measures for industry leaders in security and AI to confidently embrace AI technology.”
Product Synopsis
The swift adoption of cutting-edge technologies like generative AI and AI agents is reshaping business collaborations and information exchange at an unprecedented pace. However, this rapid evolution is accompanied by complex security threats: How can data leaks be prevented? How do we detect “shadow AI” applications? Is there any hidden malicious intent in AI-generated content? Traditional security frameworks struggle to provide adequate solutions to these questions. Aurascape points out that AI applications operate in fundamentally novel ways, engaging in dynamic, real-time, and autonomous communications. Conventional security measures prove inadequate in this new landscape. Therefore, the Aurascape platform is purpose-built to tackle security challenges in the AI era. It focuses on “complete visibility and controls”, mapping the behavior patterns and data flow routes of numerous AI applications.
Aurascape anticipates that every enterprise application will eventually incorporate AI. To address this, it has architected a security platform that can adapt to the constantly evolving AI ecosystem, with full support for the latest AI tools such as generative AI, embedded AI, and agentic AI. The platform is tailored to provide robust protection for companies at the forefront of the advancing AI wave.
Figure 2: Aurascape Platform
The platform’s objective is to preempt new threats and safeguard corporate data with unparalleled precision, all while ensuring the seamless work efficiency of end users. By actively monitoring AI interaction patterns, detecting embedded AI components, and meticulously managing multimodal data exchanges, Aurascape aims to establish a more intelligent, flexible, and true-to-life AI security governance system. Its functional framework highlights three key capabilities: visibility, protection, and prevention to address the security management hurdles stemming from the widespread adoption of generative AI and embedded AI.
Visibility: The Aurascape platform offers comprehensive coverage of AI tools within organizations, encompassing thousands of tools ranging from generative AI to embedded AI and agentic AI. It can automatically discover new AI applications on the day they emerge, and can conduct conversation-level scrutiny of AI prompts and responses, helping companies identify potential data exposure in each AI interaction. Additionally, Aurascape supports real-time monitoring of “shadow AI” usage, unauthorized access, and sharing of sensitive data.
Protection: On the data security front, the Aurascape platform facilitates the classification and safeguarding of multimodal content, spanning various data formats like text, voice, images, video, and code.
The platform’s in-built labeling system supports multiple semantic dimensions, enabling companies to attain more precise identification of sensitive content through the “data fingerprinting” feature. This not only enhances detection accuracy but also significantly reduces false positives, making it particularly suitable for safeguarding critical assets like intellectual property and source code.
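Aurascape does not publish how its “data fingerprinting” works. One common way to implement this kind of feature is to hash overlapping word shingles of a protected document and measure how much of an outgoing message matches that index; raw sensitive text never needs to leave the fingerprint store. The Python sketch below illustrates the idea with made-up function names; it is not Aurascape's actual algorithm.

```python
import hashlib

def shingles(text: str, k: int = 8) -> set:
    """Split text into overlapping word k-grams (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def fingerprint(text: str, k: int = 8) -> set:
    """Hash each shingle so the raw sensitive text never leaves the index."""
    return {hashlib.sha256(s.encode()).hexdigest()[:16] for s in shingles(text, k)}

def leak_score(outgoing: str, protected_fp: set, k: int = 8) -> float:
    """Fraction of the outgoing message's shingles that match a protected document."""
    out_fp = fingerprint(outgoing, k)
    return len(out_fp & protected_fp) / len(out_fp) if out_fp else 0.0
```

A score near 1.0 means the outgoing prompt is largely copied from a protected document, which is the kind of signal that keeps false positives low compared with keyword rules.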
Prevention: To combat new threats arising from AI-generated content, Aurascape offers a suite of protective measures with content comprehension at its core to detect phishing attempts, malicious code generation, social engineering, and hidden attack motives in AI outputs. The platform evaluates each AI response dynamically via content-level, human-like comprehension, preemptively blocking potential risks before AI-generated content enters the operational process.
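As a rough illustration of such a pre-entry gate, the hypothetical sketch below scans an AI response and blocks on any hit before the content reaches the user's workflow. Aurascape describes content-level, human-like comprehension; the regex detectors here are simple placeholders for that, not its real method.

```python
import re

# Illustrative checks only; Aurascape's detection reportedly uses content-level
# comprehension rather than patterns like these.
SUSPICIOUS_PATTERNS = {
    "credential_phish": re.compile(r"verify your (password|account)", re.I),
    "dangerous_code": re.compile(r"(rm -rf /|eval\(base64)", re.I),
}

def gate_response(ai_response: str) -> dict:
    """Scan an AI response before it enters the workflow; block on any hit."""
    hits = [name for name, rx in SUSPICIOUS_PATTERNS.items()
            if rx.search(ai_response)]
    return {"allowed": not hits, "reasons": hits}
```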
Resolution
As outlined on its official website, Aurascape presents five AI security solutions tailored to diverse requirements:
1. Uncover and supervise AI
This solution helps organizations comprehensively understand how AI tools are actually used within their environments, covering the distribution and behavior of generative AI, embedded AI, and agentic AI. It emphasizes “same-day discovery”: swiftly identifying AI tools upon launch and continuously recording prompts, response content, and user interaction data to build a comprehensive view of AI tools across the organization. Functionally, the solution aims to bridge the information gap that leaves security teams unaware of where and how AI is implemented, addressing the limitations of traditional auditing methods in the AI context.
Figure 3: AI tool asset perspective
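To make the discovery idea concrete, a hypothetical sketch of classifying network traffic events against a catalog of known AI domains might look as follows. The domain list, the sanctioned set, and the verdict names are the author's illustrations, not Aurascape's catalog.

```python
# Illustrative AI SaaS domain catalog, not Aurascape's actual inventory.
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}
SANCTIONED = {"ChatGPT"}  # tools the organization has approved

def classify_event(domain: str, user: str) -> dict:
    """Label one traffic event as sanctioned, shadow, or unknown AI usage."""
    tool = KNOWN_AI_DOMAINS.get(domain)
    if tool is None:
        return {"user": user, "domain": domain, "verdict": "unknown"}
    verdict = "sanctioned" if tool in SANCTIONED else "shadow"
    return {"user": user, "domain": domain, "tool": tool, "verdict": verdict}

def inventory(events) -> dict:
    """Aggregate traffic events into an AI-usage inventory per verdict."""
    report = {"sanctioned": [], "shadow": [], "unknown": []}
    for e in events:
        report[classify_event(e["domain"], e["user"])["verdict"]].append(e)
    return report
```

The “unknown” bucket is what makes same-day discovery plausible: traffic to an AI-like endpoint that is not yet in the catalog surfaces immediately for triage rather than waiting for a signature update.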
Nevertheless, the effectiveness of this solution may be influenced by various factors. For instance, AI tools can be integrated through multiple channels like browser extensions, API integrations, or embedded SaaS modules, and it remains unclear whether the system can achieve extensive coverage across diverse environments. Furthermore, while the solution offers semantic-level analysis of prompts and AI responses, the approach to data privacy protection by Aurascape is not explicitly detailed. Consequently, enterprises may need to carefully assess their specific requirements and compliance risks prior to adoption.
In essence, the solution is forward-thinking conceptually and stands as a vital element in the AI security governance structure. However, its detection accuracy and adaptability require validation through practical implementation.
2. Securing AI usage
This solution focuses on addressing prevalent issues during AI application interactions such as data leakage, compliance risks, and audit blind spots. It leverages an in-built multimodal data recognition, detection, and classification engine to instantly identify and safeguard text, voice, images, videos, and codes. The platform claims to eschew static rules, instead relying on contextual comprehension and organizational-level semantic learning to enhance classification accuracy and reduce false positives effectively.
The crux of its operation lies in “classification during usage”: as users engage with AI tools, their input and output content is automatically classified and evaluated against policy, with support for both interception and release modes. Additionally, the platform offers real-time guidance and prompts to minimize disruption to the end-user experience while ensuring security.
Figure 4: Identification of Confidential Information in a Discussion
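The “classification during usage” flow, with interception and release modes, could be sketched roughly as below. The detectors, the policy table, and the COACH action are the author's illustrative assumptions; Aurascape says it avoids static rules in favor of contextual comprehension, so the regexes stand in for that classification step.

```python
import re
from enum import Enum

class Action(Enum):
    ALLOW = "allow"   # release unchanged
    COACH = "coach"   # release, but show real-time guidance to the user
    BLOCK = "block"   # intercept before the prompt reaches the AI tool

# Stand-in detectors and policy; not Aurascape's implementation.
DETECTORS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b"),
}
POLICY = {"us_ssn": Action.BLOCK, "api_key": Action.BLOCK,
          "confidential_marking": Action.COACH}

def evaluate_prompt(prompt: str):
    """Classify a prompt as the user sends it; return (action, detector hits)."""
    hits = [name for name, rx in DETECTORS.items() if rx.search(prompt)]
    if any(POLICY[h] is Action.BLOCK for h in hits):
        return Action.BLOCK, hits
    return (Action.COACH, hits) if hits else (Action.ALLOW, hits)
```

The COACH tier is where the “real-time guidance” described above would hook in: the content is released, but the user is told why it was flagged.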
The writer contends that the real-world impact of this solution may hinge on the classification model's semantic understanding and grading capabilities in complex scenarios. Fairly mature technologies for detecting and masking confidential information already exist and perform satisfactorily in general situations; for prospective adopters, however, it is often crucial to tailor the security approach to specific business contexts. The author has yet to see industry-specific solutions from Aurascape, and if it offers only generic protective features, it may struggle with the variations in data structures and compliance standards across sectors such as healthcare, finance, and utilities. Moreover, the precision with which embedded text, watermarks, QR codes, and other content in images are identified also bounds the solution's effectiveness across scenarios.
On the whole, this solution embodies a relatively sophisticated “gentle intervention” methodology towards AI data security. However, its pragmatic adaptability within the intricate settings of large organizations still requires substantial empirical backing.
3. Preparation for Copilot
This solution puts emphasis on the readiness of AI Copilots like GitHub Copilot, Microsoft Copilot, etc., for secure deployment within the enterprise. As these tools are widely integrated into code repositories, document systems, and collaborative platforms, companies are increasingly conscious about the compliance of their access controls and the risk of data over-sharing.
The solution links up with the organization’s internal file repositories to evaluate if the access permissions set for Copilot are suitable. It highlights potential issues related to excessive privileges by examining data categories, sensitivity levels, and user attributes. The platform also scrutinizes whether Copilot is disseminating confidential content to all employees, external users, or other AI systems. Its aim is to mitigate the chances of data leaks caused by AI automation while not impeding user productivity.
Figure 5: Assessment of Copilot Readiness
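A minimal sketch of such an over-sharing audit, assuming file metadata carries a sensitivity label and a list of sharing principals, might look like this. The field names and the risky-principal list are hypothetical, not Aurascape's schema.

```python
from dataclasses import dataclass

@dataclass
class FileShare:
    path: str
    sensitivity: str      # e.g. "public", "internal", "confidential"
    shared_with: set      # principals: users, groups, "Everyone"

# Audiences broad enough that a Copilot answer could leak content widely.
RISKY_PRINCIPALS = {"Everyone", "All Company", "External"}

def audit_shares(files) -> list:
    """Flag confidential files a Copilot could surface to overly broad audiences."""
    findings = []
    for f in files:
        broad = f.shared_with & RISKY_PRINCIPALS
        if f.sensitivity == "confidential" and broad:
            findings.append({"path": f.path, "exposed_to": sorted(broad)})
    return findings
```

The point of running this before Copilot rollout is that the assistant inherits whatever permissions already exist; the audit surfaces the permission debt first.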
In terms of technical design, Aurascape lays out a methodical governance path in this regard, particularly showcasing a certain level of maturity in “access readiness audit” and “continuous monitoring + behavior correction”. By pinpointing permission inconsistencies and sharing patterns, the solution provides a clear direction for company AI governance, especially suited for large organizations gearing up to implement Copilot-like tools on a broad scale.
Nevertheless, the writer also believes that the effectiveness of such mechanisms is closely intertwined with the complexity of companies’ internal authority structures. Presently, many organizations lack standardized authorization system architectures, potentially leading to misinterpretations or oversight in actual implementation. Furthermore, how well the solution integrates with existing DevSecOps processes and whether it supports detailed behavior guidance (like hierarchical alerts and user feedback mechanisms) demands further scrutiny.
4. Safeguards for Coding Assistants
Aurascape devised this solution to manage the risks that AI coding assistants, such as CodeWhisperer, pose in businesses. It strives to strike a balance between “enhancing development efficiency” and “shielding core code assets,” combining policy enforcement and behavior analysis to achieve “restricted permission and targeted protection.”
The platform also caters to the detection of unauthorized plugins, IDE integrations, and non-standard access methods that evade browser controls. It has the capability to set up varied responses based on the sensitivity of the code: automatically blocking sharing actions for highly confidential code and intervening with prompts and confirmations for regular projects, thereby steering clear of blanket prohibitions that could hinder development speed. Furthermore, the system can learn developer preferences and actual usage behaviors, aiding enterprises in optimizing their tool subscriptions effectively.
Figure 6: Code Assistant Security
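The tiered response logic described above (block highly confidential code outright, ask for confirmation on regular projects, allow the rest) could be sketched as follows; the sensitivity tier names and return values are illustrative assumptions, not Aurascape's policy vocabulary.

```python
def respond_to_share(sensitivity: str, user_confirmed: bool = False) -> str:
    """Tiered response to a code-sharing action: block the most sensitive code,
    prompt for confirmation on internal code, allow everything else, rather
    than imposing a blanket prohibition that slows development."""
    if sensitivity == "restricted":
        return "blocked"
    if sensitivity == "internal":
        return "allowed" if user_confirmed else "prompt_confirmation"
    return "allowed"
```

The confirmation tier is the interesting design choice: it keeps a human in the loop for the gray zone without the friction of blocking every share.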
Concerning its design philosophy, the solution offers a rather specific approach to addressing data leakage dilemmas in AI-facilitated development situations, noticeably demonstrating practical usefulness in “dynamic authorization + granular control.” Its ongoing discovery abilities for “zero-day” tools also give the strategy a forward-thinking nature, which is noteworthy.
Nonetheless, the writer opines that the solution’s scalability in multi-team settings remains ambiguous. The compatibility among diverse languages, development toolchains, and code repository standards determines the complexity of deployment and the upper limit of policy granularity. Simultaneously, the platform’s competency in “comprehending code context” has not been explicitly divulged. If it relies solely on file paths or naming conventions for assessment, it might face challenges regarding insufficient classification precision in complex projects.
5. Seamless AI Protection
The primary objective of this solution is to minimize disruptions for end users and security teams while ensuring AI safety. Through a range of automated measures, the platform aims to break free from the binary trap of conventional security products, which tend to end up either overly restrictive or overly permissive.
The solution accentuates holistic automation from identifying AI applications, assessing risks, to executing and responding to policies. Particularly in terms of user engagement, Aurascape not only clarifies the reasons for blockages but also offers recommendations for improvement and temporary avenues for feedback to make safe behaviors more understandable and negotiable. For security administrators, the platform provides automated workflows, refined reviews, and audit capabilities to lighten the workload of frontline teams.
Figure 7: Streamlined Automation for AI Risk Responses
In terms of functional implementation, Aurascape's suite of “soft guidance + gradual policy enactment” designs reflects a fairly mature human-machine collaboration concept. Leveraging AI for data classification and risk evaluation, along with “data fingerprinting” technology to minimize false positives, should boost the precision and acceptance of these policies in practice.
However, the writer believes that the central challenge in achieving a seamless experience lies in the contextual adaptability of the policy engine. If risk levels are assessed or abnormal behaviors identified inaccurately, this “soft guidance” risks turning into “soft permissiveness.” How well the platform handles misjudgments, circumvention, and abuse within its automated complaint-handling process also remains a common industry concern.
Overall, this solution exudes a forward-thinking essence in terms of concept and orientation. The writer anticipates Aurascape’s continued investment in this realm going forward.
Summary
Aurascape has built a relatively comprehensive product suite around “AI visibility,” “multimodal data protection,” and “user collaborative governance,” with the intent of tackling the core security hurdles enterprises face amid the widespread adoption of generative AI. Its solutions embody forward-looking design principles, yet cross-industry adaptability, real-world validation, and model interpretability warrant further observation. AI-native security is gradually emerging as a distinct developmental trajectory in cybersecurity, and balancing effective protection with user efficiency is a challenge shared by all emerging platforms. Aurascape stands as one of the representative pioneers on this path, and its proactive exploration contributed significantly to its inclusion in the 2025 RSAC Innovation Sandbox roster. The future performance of this young company is well worth monitoring.
This is a syndicated blog post from NSFOCUS, Inc., authored by Jie Ji. Read the original post at: https://nsfocusglobal.com/rsac-2025-innovation-sandbox-aurascape-reconstructing-the-intelligent-defense-line-of-ai-interactive-visibility-and-native-security/
