The surge in AI adoption is magnifying weaknesses in corporate data ecosystems and cloud environments, according to cybersecurity expert Liat Hayun.
In an interview with TechRepublic, Hayun, VP of product management and research for cloud security at Tenable, urged organizations to understand their risk exposure and risk tolerance, and to prioritize fixing key problems such as cloud misconfigurations and protecting sensitive data.

She noted that while enterprises remain cautious, AI's accessibility is amplifying certain risks. She also explained that Chief Information Security Officers (CISOs) today are evolving into business enablers, and that AI could ultimately serve as a powerful tool for strengthening security.
The Influence of AI on Cybersecurity and Data Storage
TechRepublic: How is AI changing the cybersecurity landscape?
Liat: First of all, AI has become much more accessible to organizations. Ten years ago, the only organizations building machine learning and AI were those with dedicated data science teams, with Ph.D.s in data science and statistics. Now it's much easier for organizations to create AI; it's almost like introducing a new programming language or a new library into their environment. So many more organizations, not just big ones like Tenable and others, but also startups, can now take AI and introduce it into their products.
SEE: Gartner Advises Australian IT Leaders to Adopt AI at Their Own Pace
The second thing: AI needs a lot of data. So many more organizations need to collect and store larger volumes of data, which sometimes carry higher levels of sensitivity. Before, my streaming service might have saved very few details about me. Now, maybe my location matters, because it helps generate more specific recommendations, and so do my age, my gender, and so on. Because that data can now be used for business purposes, to drive more business, organizations are much more motivated to store it in greater volumes and with growing levels of sensitivity.
TechRepublic: Is this driving greater use of the cloud?
Liat: If you want to store a lot of data, it's much easier to do that in the cloud. Every time you decide to store a new type of data, your data volume grows. You don't have to go into your data center and order new storage hardware to install. You just click, and voilà, you have a new place to store data. So the cloud has made storing data dramatically easier.
Those three components form a kind of flywheel that feeds itself. Because if it's easier to store data, you can build stronger AI capabilities, and then you're motivated to collect even more data, and so on. That's what has happened over the past few years, since LLMs became a much more accessible, common capability for organizations, and it introduces challenges across all three of those fronts.
Understanding the Security Risks of AI
TechRepublic: Are you seeing particular cybersecurity risks increase with AI?
Liat: AI use inside organizations, unlike AI use by individuals around the world, is still in its early phases. Organizations want to introduce AI in a way that, I would say, doesn't create unnecessary or extreme risk. So in terms of statistics, we still have only a handful of examples, and they are not necessarily a good representation because they're still experimental.
One example of a risk is AI being trained on sensitive data. That is something we are seeing. It isn't because organizations are being careless; it's because it is genuinely hard to separate sensitive data from non-sensitive data while still keeping an effective AI mechanism trained on the right data set.
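To make that separation problem concrete, here is a minimal sketch of a pre-training filter. The regex patterns and record format are illustrative assumptions, not how Tenable or any particular vendor does it; production pipelines would rely on a dedicated PII-detection service rather than a handful of regexes.

```python
import re

# Hypothetical, illustrative PII patterns only; real systems use far
# more robust detection than a few regexes.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-style numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-number-like digit runs
]

def looks_sensitive(text: str) -> bool:
    """Return True if any PII-like pattern appears in the record."""
    return any(p.search(text) for p in PII_PATTERNS)

def split_corpus(records):
    """Partition records so only the non-sensitive side feeds training."""
    safe, quarantined = [], []
    for record in records:
        (quarantined if looks_sensitive(record) else safe).append(record)
    return safe, quarantined

corpus = [
    "User praised the checkout flow in the survey.",
    "Contact jane.doe@example.com about invoice 4417.",
]
train_set, held_back = split_corpus(corpus)
print(len(train_set), "records cleared for training;", len(held_back), "quarantined")
```

Quarantining rather than deleting flagged records leaves room for human review, which matters precisely because, as Hayun says, the boundary between sensitive and non-sensitive data is blurry.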
The other thing we're seeing is what we call data poisoning. Even if an AI agent is trained only on non-sensitive data, if that non-sensitive data is publicly exposed, then as an adversary, as an attacker, I can insert my own data into that public data set and make your AI say things you never intended it to say. The AI doesn't know everything; it only knows what it has seen.
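A common first line of defense against this poisoning scenario is to validate where training records come from before ingesting them. The sketch below is a hypothetical ingestion gate; the `source` field and the allowlist are invented placeholders, not a standard.

```python
# Hypothetical ingestion gate: accept training records only from vetted
# sources, and flag everything else for review instead of silently
# training on it. Field names and the allowlist are illustrative.
TRUSTED_SOURCES = {"internal-docs", "vetted-partner-feed"}

def gate_records(records):
    accepted, flagged = [], []
    for rec in records:
        if rec.get("source") in TRUSTED_SOURCES:
            accepted.append(rec)
        else:
            flagged.append(rec)  # human review before it can influence the model
    return accepted, flagged

batch = [
    {"source": "internal-docs", "text": "How to file an expense report."},
    {"source": "public-scrape", "text": "Unverified content from the open web."},
]
ok, suspicious = gate_records(batch)
print(f"{len(ok)} accepted, {len(suspicious)} flagged for review")
```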
TechRepublic: How should organizations assess the security risks of AI?
Liat: First of all, organizations need to understand their exposure: the cloud, AI, and data … and everything related to how they use third-party vendors and how they leverage different software across the organization.
SEE: Australia Proposes Mandatory Guardrails for AI
The next step is identifying the critical exposures. For example, if a publicly accessible asset has a high-severity vulnerability, that's a priority for remediation. It's also about weighing impact: if you have two similar issues, but one can compromise sensitive data and the other can't, you address the one that puts sensitive data at risk first.
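That triage logic, severity amplified by public exposure and by whether sensitive data is reachable, can be expressed as a toy scoring function. The weights and finding fields below are invented for illustration; real prioritization, Tenable's included, is far richer.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    severity: float          # e.g. a CVSS base score, 0-10
    public: bool             # is the asset reachable from the internet?
    touches_sensitive: bool  # can exploitation reach sensitive data?

def priority(f: Finding) -> float:
    """Toy scoring: severity, amplified by exposure and data sensitivity.
    The multipliers are illustrative assumptions, not a standard."""
    score = f.severity
    if f.public:
        score *= 2.0
    if f.touches_sensitive:
        score *= 1.5
    return score

findings = [
    Finding("internal-wiki", severity=8.0, public=False, touches_sensitive=False),
    Finding("payments-api", severity=6.5, public=True, touches_sensitive=True),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.asset}")
```

Note how the lower-severity finding on the public, data-adjacent asset outranks the higher-severity internal one, which is exactly the trade-off described above.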
You also need to know the best ways to fix those exposures with minimal disruption to the business.
TechRepublic: What are some of the biggest cloud security risks to watch out for?
Liat: When we advise our customers, we usually point them to three things.
The first is misconfigurations. Because of the complexity of the infrastructure, the complexity of the cloud, and all the different technologies it provides, above all in multi-cloud environments, the chances that something becomes an issue just because it was misconfigured are still very high. So that's definitely one thing to focus on, especially when introducing new technologies like AI.
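As one concrete illustration of a misconfiguration check, here is a minimal sketch using AWS's boto3 SDK to flag S3 buckets that lack a full public-access block. It assumes AWS credentials are already configured, and it covers exactly one setting; a real cloud security tool checks hundreds of configurations across many services.

```python
import boto3
from botocore.exceptions import ClientError

# Minimal sketch of one misconfiguration check: S3 buckets missing a
# public-access block. Assumes configured AWS credentials with S3 read
# access; a production scanner covers far more than this single check.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"PARTIAL: {name} has some public-access settings disabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"MISSING: {name} has no public-access block at all")
        else:
            raise
```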
The second issue is excessive access. A lot of people think their organization is super secure. But if your house is a fortress and you're handing out keys to everyone around you, that's still a problem. So giving too much access to sensitive data, to critical infrastructure, is another focus area. Even if everything is configured perfectly and there are no attackers in your environment, it raises the risk.
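In AWS terms, the "handing out keys" problem might be audited with a sketch like the one below, which flags IAM users directly attached to the broad AdministratorAccess managed policy. This is an illustrative fragment assuming IAM read credentials; a real access review also covers groups, roles, inline policies, and pagination.

```python
import boto3

# Rough sketch of an over-privilege audit: flag IAM users who carry the
# broad AdministratorAccess managed policy directly. Assumes AWS
# credentials with IAM read access; ignores pagination for brevity.
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    name = user["UserName"]
    attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
    for policy in attached:
        if policy["PolicyName"] == "AdministratorAccess":
            print(f"{name}: holds AdministratorAccess, keys to the whole house")
```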
The thing most people think about is identifying malicious or suspicious activity as early as it happens. This is where AI can help: by taking advantage of the AI capabilities embedded in our security tools and infrastructure, we can use their ability to analyze huge amounts of data very quickly to identify suspicious or malicious behavior early, so we can address it before anything critical is compromised.
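AI-assisted detection of the kind described here spans many techniques. As a toy illustration of the underlying idea, the sketch below uses scikit-learn's IsolationForest to flag an anomalous burst in made-up access-log features; the features (requests per minute, distinct resources touched) and data are invented for this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy anomaly detection over access-log features. Features and data are
# invented for illustration; production systems use far richer signals.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[20, 5], scale=[5, 2], size=(200, 2))  # typical sessions
spike = np.array([[400, 90]])                                  # exfiltration-like burst
sessions = np.vstack([normal, spike])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(sessions)  # -1 marks outliers

for idx in np.where(labels == -1)[0]:
    print(f"session {idx}: flagged as anomalous {sessions[idx]}")
```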
Adopting AI Is 'Too Good an Opportunity to Miss'
TechRepublic: How are CISOs approaching the risks you're seeing with AI?
Liat: I’ve been part of the cybersecurity domain for 15 years. What’s fascinating to witness is that most security professionals and CISOs differ significantly from those of a decade ago. Instead of merely being a deterrent, rejecting options because of the associated risks, they are now asking themselves, “How could we adopt this while reducing the risks?” This shift in mindset is truly inspiring. They are evolving into enablers.
TechRepublic: Do you see the positive side of AI as well as the risks?
Liat: Organizations need to think more about how they're going to adopt AI than tell themselves it's too risky right now. You can't just sit that out.
Organizations that don't adopt AI in the next few years will be left behind. It's an amazing tool that can benefit so many internal business processes: collaboration, analysis, insights. And externally, it can improve the services we offer our customers. It's too good an opportunity to miss. If I can help organizations reach the mindset of, "OK, we can use AI, but we just need to take these risks into account," then I've done my job.
