One of the widely recognized benefits of the spread of artificial intelligence is its ability to help developers with mundane tasks. However, recent research shows that security leaders are not fully on board, with 63% considering banning AI in coding because of the risks it introduces.
An even larger share of the decision-makers surveyed, 92%, are concerned about the use of AI-generated code within their organization. Their top concerns all relate to a drop in the quality of the output.
AI models may have been trained on outdated open-source libraries, and developers could quickly become over-reliant on tools that make their lives easier, allowing poor-quality code to proliferate in the company's products.
SEE: Top Security Tools for Developers
Furthermore, security leaders believe that AI-generated code is unlikely to be quality-checked as rigorously as handwritten code. Developers may not feel as accountable for an AI model's output and, as a result, may not feel as much pressure to get it right.
Tariq Shaukat, CEO of code security company Sonar, recently told TechRepublic that he is seeing a growing number of companies that used AI to write their code run into outages and security issues.
“Generally, this is due to inadequate reviews, either because the organization has failed to implement robust code quality and review practices, or because developers are inspecting AI-authored code less meticulously than they would their own code,” he remarked.
“When questioned about flawed AI, a common justification is ‘it is not my code,’ meaning they feel less responsible since they were not the ones who wrote it.”
The new report, “Organizations Struggle to Secure AI-Generated and Open Source Code,” from machine identity management provider Venafi, is based on a survey of 800 security decision-makers across the U.S., U.K., Germany, and France. It found that 83% of organizations currently use AI to develop code, and more than half have made it standard practice, despite the reservations of security professionals.
“New threats — such as AI poisoning and model escape — are starting to surface while massive amounts of generative AI code are being utilized by both developers and amateurs in ways that are yet to be fully grasped,” commented Kevin Bocek, chief innovation officer at Venafi, in the report.
Although many have considered banning AI-assisted coding, 72% feel they have no choice but to allow the practice to continue so the business can stay competitive. According to Gartner, 90% of enterprise software engineers will use AI code assistants by 2028, gaining productivity in the process.
SEE: 31% of Organizations Utilizing Generative AI Request It to Produce Code (2023)
Significant Concerns Among Security Professionals
According to the Venafi report, two-thirds of respondents say they struggle to keep pace with highly productive developers while ensuring the security of their products, and 66% admit they lack the visibility needed to govern the safe use of AI within their organization.
As a result, security leaders worry about the consequences of missed vulnerabilities, with 59% losing sleep over the issue. Nearly 80% believe the surge in AI-developed code will lead to a security crisis, forcing organizations to rethink how it is handled.
In a press release, Bocek added: “Security teams find themselves in a challenging position in a new era where AI is responsible for writing code. Developers are already empowered by AI and are unlikely to relinquish their capabilities. Meanwhile, malicious actors are infiltrating our ranks — recent instances of persistent interference in open-source projects and infiltration of IT by North Korea are just the beginning.”
