Following a period of sustained optimism, we now face the dawn of the AI hangover. It is a mild one so far, with the market adjusting the valuations of major players (such as Nvidia, Microsoft, and Google) and other participants reevaluating their strategies. Gartner labels this phase the Trough of Disillusionment, where enthusiasm diminishes and actual outcomes fall short of expectations. Companies in the sector either consolidate or fade away. Sustained investment will only continue if the surviving entities enhance their offerings to meet the expectations of early adopters.
It is essential to understand that the vision of a post-human revolution championed by AI enthusiasts was never a practical target, and the initial excitement spurred by early LLMs did not equate to market triumph.
AI: A Permanent Fixture
So, what does the future hold for AI? If we follow the Gartner hype cycle, the steep decline is succeeded by the “slope of enlightenment,” where the technology matures, benefits become apparent, and companies introduce second- and third-generation products. If things progress favorably, the journey culminates in the “plateau of productivity,” where mainstream adoption takes off on the strength of the technology’s broad appeal. However, Gartner cautions that several uncertainties remain: not all technologies rebound after a crash, and quick market adaptation is crucial for survival.
Current indications strongly suggest that AI is here to stay. Apple and Google are rolling out consumer-oriented products that repackage AI into manageable, user-friendly applications (like photo editing, text correction, advanced searching). While the quality may vary, it appears that certain developers have successfully integrated generative AI into their offerings in a manner that holds significance – both for users and their financial performance.
What Contributions Did LLMs Make?
So, where does this leave corporate clients – particularly in the realm of cybersecurity solutions? Generative AI still grapples with significant limitations that impede large-scale adoption. One of the primary challenges is the inherently non-deterministic nature of generative AI. As the technology relies on probabilistic models (an attribute, not a flaw!), output variations are inevitable. This unpredictability could unnerve traditional industry professionals accustomed to deterministic software behavior. Furthermore, this signifies that generative AI cannot simply replace existing tools; rather, it acts as a supplement and enhancement. Nevertheless, it holds the potential to operate as a layer within a multifaceted defense system, complicating predictability for potential attackers.
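The non-determinism described above comes from how LLMs pick each token: they sample from a probability distribution rather than always choosing the single most likely option. The minimal sketch below illustrates the idea with temperature-scaled softmax sampling; the logits are made-up numbers, not output from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution (more variation between
    runs); temperature near 0 approaches greedy, deterministic selection.
    """
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# The same "prompt" (identical logits) yields different tokens across calls:
logits = [2.0, 1.5, 0.5]  # illustrative scores for three candidate tokens
samples = {sample_next_token(logits, temperature=1.0) for _ in range(200)}
```

At temperature 1.0 the 200 draws land on more than one token, which is exactly the run-to-run variation that unsettles professionals used to deterministic software; lowering the temperature toward zero makes the choice effectively deterministic.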
Cost represents another barrier hindering widespread adoption. The training of models comes at a considerable expense, which is currently passed on to model consumers. Efforts are underway to lower the per-query fees, with innovations in hardware and model refinement promising substantial energy savings in AI operations. The anticipation is that text-based outputs, at minimum, will yield profitable endeavors.
While the availability of more cost-effective and precise models is beneficial, there is a growing awareness that integrating these tools into organizational workflows poses a notable challenge. As a society, we lack the experience needed to seamlessly assimilate AI technologies into daily work routines. Moreover, there is uncertainty regarding how existing workforces will embrace and operate alongside these innovations. For instance, there are situations where human professionals and users prefer interacting with explainable models over more accurate ones. A study published in March 2024 by Harvard Medical School revealed inconsistent impacts of AI assistance on radiologists, with improved performance in some cases and declines in others. The recommendation stresses a nuanced, personalized approach to integrating AI tools into clinical settings for optimal outcomes.
Regarding the earlier notion of market adaptation, while it’s unlikely that generative AI will entirely supplant developers (despite some claims by companies), AI-augmented code generation serves as a valuable prototyping tool for various scenarios. In the field of cybersecurity, generated code or configurations could serve as a preliminary framework for rapid development before further refinement.
However, a critical caveat exists: while current technology can accelerate the work of experienced professionals who efficiently debug and enhance generated outputs, novices might encounter risks. There’s the potential for unsafe configurations or insecure code to emerge, jeopardizing the organization’s cybersecurity posture if introduced into production. Like any tool, its usability is contingent on skill level, with proficiency leading to positive outcomes and ignorance potentially resulting in adverse effects.
It’s important to highlight a distinctive trait of contemporary generative AI tools: they exhibit unwarranted confidence in their outputs. Even when blatantly incorrect, these tools present results with unwavering certainty, often misleading inexperienced users. Remember: the system may falsely convey confidence and may even provide inaccurate information.
Another compelling application emerges in customer support, particularly in level 1 assistance – catering to users who overlook manuals or FAQs. Modern chatbots are adept at addressing basic queries and directing more complex issues to higher support tiers. While this may not represent the ideal customer experience, the cost savings (especially for large organizations with numerous untrained users) could be substantial.
The evolving landscape of AI integration in businesses has sparked a surge in demand for management consultants. Boston Consulting Group, for instance, now generates a fifth of its revenue from AI-related projects, while McKinsey anticipates a 40% revenue contribution from AI endeavors this year. Consultancies like IBM and Accenture are similarly embracing the trend. Projects range from facilitating language translations in advertisements to enhancing procurement through advanced supplier search, along with robust customer service chatbots that prevent misinformation and include citations to boost credibility. While only 200 out of 5,000 customer inquiries currently pass through the chatbot at ING, this number is expected to rise as the accuracy of responses improves. Similar to the evolution of internet search, we can envision a tipping point where “asking the bot” becomes a reflex rather than digging through data oneself.
AI governance needs to tackle cybersecurity concerns
Irrespective of particular scenarios, the new AI tools introduce a whole set of cybersecurity challenges. Just like RPAs in the past, customer-facing chatbots require machine identities with suitable, sometimes elevated, access to corporate systems. For instance, a chatbot may need to authenticate the customer and retrieve certain records from the CRM system – a situation that should set off alarms for IAM experts. Establishing precise access controls around this novel technology will be a crucial part of the implementation process.
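One way to keep the alarms from going off is to treat the chatbot as a first-class machine identity with an explicit, deny-by-default scope grant. The sketch below illustrates the principle; the scope names and CRM record shape are illustrative assumptions, not any real product's API.

```python
# Hypothetical least-privilege scope check for a chatbot's machine identity.
# The chatbot may read customer contact records but holds no write scopes.
ALLOWED_SCOPES = {"crm:read:contact", "crm:read:ticket"}

def authorize(requested_scope: str, granted_scopes: set) -> bool:
    """Deny by default: the identity may use only explicitly granted scopes."""
    return requested_scope in granted_scopes

def fetch_customer_record(customer_id: str, granted_scopes: set) -> dict:
    """Retrieve a CRM record only if the identity holds the read scope."""
    if not authorize("crm:read:contact", granted_scopes):
        raise PermissionError("chatbot identity lacks crm:read:contact")
    # Placeholder lookup standing in for a real CRM call.
    return {"id": customer_id, "name": "<redacted>"}
```

Because the grant set contains no write scopes, a compromised or misbehaving chatbot can at worst leak what it can read, never modify records; the blast radius is bounded by the grant, which is the access-control discipline IAM experts would apply to any other service account.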
The same holds true for code generation tools employed in Dev or DevOps practices: configuring the appropriate access to the code repository will restrict the impact zone in case of any mishaps. It also mitigates the repercussions of a possible breach, should the AI tool itself pose a cybersecurity risk.
Furthermore, there is the aspect of third-party risk: by incorporating such a potent yet not fully understood tool, organizations expose themselves to adversaries testing the limits of LLM technology. The relative lack of maturity in this area could pose issues: as there are no established best practices for fortifying LLMs, precautions need to be taken to ensure they lack writing permissions in critical areas.
The potential for AI in IAM
Use cases for AI in access control and IAM are emerging and reaching customers in shipping products. Traditional areas of classical ML, such as role mining and entitlement suggestions, are being revisited with modern approaches, with role generation and evolution integrated more seamlessly into pre-configured governance workflows and interfaces. AI-driven innovations such as peer group analysis, decision recommendations, and behavior-centric governance are becoming standard in Identity Governance. Customers now expect enforcement technologies like SSO Access Management and Privileged Account Management systems to provide AI-driven anomaly and threat detection based on user behavior and sessions.
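Peer group analysis, one of the techniques mentioned above, can be reduced to a simple idea: flag entitlements a user holds that are rare among their peers (for example, colleagues in the same department). The sketch below is a minimal illustration with made-up entitlement names, not any vendor's implementation.

```python
from collections import Counter

def rare_entitlements(user_entitlements, peer_entitlement_sets, threshold=0.25):
    """Return entitlements held by the user but by fewer than `threshold`
    (as a fraction) of their peers; these become candidates for review."""
    if not peer_entitlement_sets:
        return set(user_entitlements)  # no peers: everything needs review
    counts = Counter(e for peer in peer_entitlement_sets for e in peer)
    n = len(peer_entitlement_sets)
    return {e for e in user_entitlements if counts.get(e, 0) / n < threshold}

# Four peers in the same department, all with ordinary access:
peers = [{"vpn", "email"}, {"vpn", "email", "wiki"},
         {"vpn", "email"}, {"vpn", "email"}]
# One user additionally holds a privileged entitlement none of their peers has:
flagged = rare_entitlements({"vpn", "email", "prod-db-admin"}, peers)
# "prod-db-admin" is flagged for access review; "vpn" and "email" are not
```

In a real identity governance product the flagged set would feed a certification campaign or a risk score rather than being surfaced raw, but the peer-comparison core is the same.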
Natural language interfaces are beginning to significantly enhance UX across all these categories of IAM solutions by enabling interactive natural language exchanges with the system. Although static reports and dashboards are still necessary, the ability for individuals with varying roles and needs to query in natural language and refine search results interactively reduces the expertise and training required for organizations to derive value from these systems.
This is not the end
One certainty remains: no matter the state of AI technology in mid-2024, it will not signify the conclusion of this domain. Generative AI and LLMs form just one subset of AI, with many other AI-related fields making swift progress thanks to advancements in hardware and ample government and private research funding.
Regardless of the shape that mature, enterprise-ready AI may take, security experts must already be weighing the advantages generative AI can bring to their defensive posture, the loopholes these tools open for attackers to breach existing defenses, and how to limit the fallout if the experiment goes awry.
Note: This article was contributed by Robert Byrne, Field Strategist at One Identity. Rob has over 15 years of IT experience in roles spanning development, consulting, and technical sales, with a primary focus on identity management. Before joining Quest, Rob served at Oracle and Sun Microsystems. He holds a Bachelor of Science degree in mathematics and computing.
