Generative AI security risks underestimated by IT leaders, survey reveals

A newly released Australian survey by ExtraHop, a leader in cloud-native network detection and response (NDR), suggests information technology (IT) and security leaders underestimate the security risks posed by generative AI. The survey formed part of a global report entitled “The Generative AI Tipping Point”.

The study set out to understand how enterprises are planning for the security and governance of generative AI tools. ExtraHop’s findings reveal a significant disconnect: 74% of IT and security leaders acknowledge that their employees sometimes or frequently use generative AI tools or large language models (LLMs) at work, even as those leaders grapple with how to address the potential security risks.

Interestingly, security isn’t the primary concern of IT leaders. According to the survey, they are more troubled by the possibility of receiving false or nonsensical responses from generative AI tools (40%) than by security-specific threats such as exposure of customer and employee personally identifiable information (36%), exposure of trade secrets (33%), and financial loss (25%).

Notably, 29% of participating organisations have banned the use of generative AI tools, with little success: only 5% of respondents reported that employees never use these tools at work, underscoring the ineffectiveness of such prohibitions.

This year, 68% of respondents have invested, or plan to invest, in protections or security measures against generative AI threats. Even so, IT and security leaders are calling for further guidance, chiefly from government. The majority (85%) favour governmental involvement, with 51% advocating mandatory regulations and 34% supporting government standards that businesses could adopt voluntarily.

While a considerable 80% of respondents feel confident their current security stack can defend against threats from generative AI tools, fewer than half have invested in technology to monitor its use. Only 45% have established governance policies for appropriate use, and just 34% offer training on the safe use of these tools.

“Following the launch of ChatGPT in November 2022, businesses have grappled with the risks and rewards of generative AI tools. The swift pace of adoption makes it vital that business leaders understand their employees’ generative AI usage and identify possible gaps in security protections to ensure data or intellectual property aren’t improperly shared,” stated Raja Mukerji, Co-founder and Chief Scientist at ExtraHop.

Mukerji emphasised the transformative potential of generative AI in the workplace, adding, “However, leaders need more guidance and education to understand how generative AI can be applied across their organisations and the potential risks associated with it. By mixing innovation with strong safeguards, generative AI will continue to elevate entire industries.”
