Following a period of sustained enthusiasm, the hangover has arrived. It is manifesting mildly (at least for now), as the market adjusts the valuations of major players such as Nvidia, Microsoft, and Google, prompting other contenders to reassess market conditions and realign priorities. Gartner calls this the Trough of Disillusionment, marked by waning interest as implementations fail to deliver. Technology producers then either refine their offerings or exit the market, and continued investment hinges on improved products that satisfy early adopters.
It was always foreseeable that the utopian vision of a post-human revolution touted by AI enthusiasts was unattainable, and that the fervor sparked by the first large language models (LLMs) lacked a foundation in market success.
AI’s Endurance
So, what lies ahead for AI? If it follows the Gartner hype cycle, the sharp downturn is succeeded by the Slope of Enlightenment, in which the technology matures, its benefits solidify, and vendors bring more advanced product iterations to market. If progress continues, this phase leads to the Plateau of Productivity, characterized by broad adoption and mainstream market acceptance of the technology. Gartner stresses the crucial caveats: not every technology recovers after the downturn, and rapid product-market fit is fundamental.
AI's survival seems all but assured at this point. Apple and Google are rolling out consumer-centric products that repackage the technology in easily digestible, user-friendly forms (photo editing, text editing, advanced search). While quality remains inconsistent, some players appear to have cracked the code on commercializing generative AI in ways that offer meaningful value to consumers and bolster profitability.
LLMs’ Impact on Us
Where does this leave enterprise customers, particularly for cybersecurity applications? Generative AI still grapples with significant limitations that hinder large-scale adoption. Chief among them is its inherently non-deterministic nature: because output is sampled from probabilistic models, the same input can produce different results. This unpredictability can unsettle industry veterans accustomed to the determinism of traditional software. Generative AI does not supplant existing tools; rather, it augments them, and must be integrated into current workflows.
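To make that concrete, here is a minimal, illustrative sketch of why identical prompts can yield different answers: a generative model samples each token from a probability distribution, and the sampling temperature controls how deterministic that draw is. The vocabulary and logits below are invented for illustration and do not come from any real model.

```python
# Minimal sketch of why generative output varies: tokens are sampled from a
# probability distribution, so identical inputs can yield different outputs.
# The vocabulary and logits below are illustrative, not from any real model.
import math
import random

vocab = ["block", "allow", "quarantine", "escalate"]
logits = [2.1, 1.9, 0.7, 0.3]  # hypothetical model scores for the next token

def sample_next_token(temperature: float) -> str:
    """Softmax over the logits, then a weighted random draw."""
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

# The same "prompt" run five times rarely gives five identical answers
# unless temperature is pushed toward zero (greedy, near-deterministic).
for temperature in (1.0, 0.1):
    runs = [sample_next_token(temperature) for _ in range(5)]
    print(f"temperature={temperature}: {runs}")
```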
Another adoption challenge is the high cost of training and running models, a cost largely passed on to model consumers. Efforts are focused on reducing per-query expenses, with advances in hardware and model refinement showing promise in lowering energy consumption. That could make text generation a profitable business, though effectively integrating AI into organizational processes remains a challenge.
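As a rough illustration of how per-query economics are typically reasoned about, the sketch below multiplies token counts by per-token prices. Every figure in it (prices, token counts, query volume) is a placeholder, not a quote from any vendor.

```python
# Back-of-the-envelope sketch of per-query inference cost.
# All prices and token counts below are placeholders, not vendor figures.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # hypothetical USD
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # hypothetical USD

def query_cost(input_tokens: int, output_tokens: int) -> float:
    return ((input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS)

# e.g. a support bot handling 50,000 queries a month at ~800 input
# and ~300 output tokens per query:
monthly = 50_000 * query_cost(800, 300)
print(f"estimated monthly inference spend: ${monthly:,.2f}")
```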
Though generative AI won’t entirely replace programmers, it serves as a valuable prototyping tool for cybersecurity specialists, enabling rapid development ahead of fine-tuning. Realizing that potential demands the expertise to spot and correct inaccurate or insecure output quickly, before it compromises the organization’s cybersecurity.
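The kind of correction involved is often mundane but critical. The sketch below shows a pattern reviewers commonly catch in generated code: a query built by string interpolation (open to SQL injection) next to the parameterized version a human reviewer would substitute. The table and column names are purely illustrative.

```python
# Sketch of the kind of fix a reviewer applies to generated code.
# The table and query are illustrative only.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical of naive generated output: string interpolation invites SQL injection.
    return conn.execute(
        f"SELECT id, role FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Corrected by the human reviewer: parameterized query, no injection path.
    return conn.execute(
        "SELECT id, role FROM users WHERE name = ?", (username,)
    ).fetchall()
```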
Notably, the confidence exuded by current generative AI tools can be misleading, often overpromising results. Users must exercise caution as the technology may present inaccuracies with unwarranted assurance.
Furthermore, in customer support, particularly level 1 assistance, modern chatbots can efficiently handle basic queries, redirecting complex concerns to higher support tiers. Though this setup may not optimize customer experience, it could yield substantial cost savings for large organizations with diverse user bases.
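A hedged sketch of what that tiering can look like in code: the bot answers only when its own confidence is high enough and hands everything else to a human queue. The confidence field and the threshold are assumptions standing in for whatever classifier an organization actually deploys.

```python
# Minimal sketch of level-1 triage: answer simple queries automatically,
# escalate anything the bot is not confident about. The confidence score
# and threshold are placeholders for a real classifier's output.
from dataclasses import dataclass

@dataclass
class BotAnswer:
    text: str
    confidence: float  # 0.0 - 1.0, as reported by the hypothetical model

ESCALATION_THRESHOLD = 0.8

def handle_ticket(answer: BotAnswer) -> str:
    if answer.confidence >= ESCALATION_THRESHOLD:
        return f"auto-reply: {answer.text}"
    return "escalated to level-2 support queue"

print(handle_ticket(BotAnswer("Reset link sent to your registered email.", 0.93)))
print(handle_ticket(BotAnswer("It might be a licensing issue?", 0.41)))
```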
The uncertainties surrounding AI’s integration into business operations present lucrative opportunities for management consulting firms. Companies like Boston Consulting Group and McKinsey are leveraging AI-related projects for significant revenue generation. Noteworthy projects include multilingual ad translations and enhanced procurement searches, showcasing the technology’s versatile business applications.
Other engagements include customer support automation built to avoid misinterpretation and to include citations that boost credibility. Although only 200 of every 5,000 customer inquiries currently pass through ING’s chatbot, that share can be expected to grow as response accuracy improves. As with the evolution of online search, one can envision a tipping point where it becomes instinctive to “consult the bot” rather than wade through the data swamp oneself.
AI Oversight Must Confront Cybersecurity Concerns
Irrespective of the specific scenario, these new AI tools introduce a whole new set of cybersecurity challenges. Much like RPA before them, customer-facing chatbots require machine identities with suitable, and at times privileged, access to enterprise systems. For instance, a chatbot may need to recognize a customer and retrieve certain records from the CRM system – a situation that should immediately set alarm bells ringing for IAM veterans. Enforcing precise access controls around this experimental technology will be a crucial part of any implementation.
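A minimal sketch of what that can look like in practice, assuming an OAuth2-capable identity provider: the chatbot gets its own client credentials and requests a token limited to a read-only CRM scope. The endpoint URLs, environment variable names, and the crm.contacts.read scope are hypothetical placeholders.

```python
# Sketch of giving a chatbot its own machine identity with a narrowly scoped
# token instead of broad CRM access. Endpoints, env vars, and the
# "crm.contacts.read" scope are hypothetical placeholders.
import os
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"
CRM_API = "https://crm.example.com/api/contacts"

def get_chatbot_token() -> str:
    # Client-credentials grant: the bot authenticates as itself and asks
    # only for the read scope it needs to look up a customer record.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": os.environ["CHATBOT_CLIENT_ID"],
        "client_secret": os.environ["CHATBOT_CLIENT_SECRET"],
        "scope": "crm.contacts.read",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def lookup_customer(customer_id: str) -> dict:
    token = get_chatbot_token()
    resp = requests.get(f"{CRM_API}/{customer_id}",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```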
The same applies to code-generation tools used in development and DevOps: scoping their access to the code repository appropriately confines the blast radius if something goes awry, and minimizes the consequences of a breach should the AI tool itself become a cybersecurity risk.
Then there is third-party risk: by integrating such a powerful yet poorly understood tool, organizations expose themselves to adversaries actively probing the capabilities of LLM technology. The relatively low maturity of the field compounds the issue: there are no established best practices for hardening LLMs, so at a minimum we must ensure they are not authorized to modify anything sensitive.
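One practical way to enforce that constraint is to confine the model behind an explicit allowlist of read-only actions, so that nothing outside the list can be invoked no matter what the model generates. The sketch below illustrates the idea; the tool names and handlers are invented for the example.

```python
# Sketch of confining an LLM integration to read-only actions: whatever the
# model asks for is checked against an explicit allowlist before anything is
# executed. Tool names and handlers are illustrative.
READ_ONLY_TOOLS = {
    "get_ticket_status": lambda ticket_id: {"id": ticket_id, "status": "open"},
    "search_kb": lambda query: [f"article matching '{query}'"],
}

def dispatch_tool_call(tool_name: str, **kwargs):
    handler = READ_ONLY_TOOLS.get(tool_name)
    if handler is None:
        # Write/modify operations simply are not registered, so a prompt-injected
        # or misbehaving model cannot reach them through this path.
        raise PermissionError(f"tool '{tool_name}' is not permitted")
    return handler(**kwargs)

print(dispatch_tool_call("get_ticket_status", ticket_id="T-1042"))
# dispatch_tool_call("delete_customer", customer_id="42")  -> PermissionError
```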
The Possibilities for AI in IAM
Applications of AI in access control and IAM are now emerging and reaching customers in shipping products. Traditional domains of classical machine learning, such as role mining and entitlement recommendations, are being revisited with modern approaches, and role construction and evolution are becoming more tightly integrated into standard governance workflows and interfaces. More recent AI-driven innovations such as peer-group analysis, decision recommendations, and behavior-driven governance are becoming common in Identity Governance. Customers now expect enforcement-point technologies such as SSO access management and privileged access management systems to offer AI-powered anomaly and threat detection grounded in user behavior and session activity.
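Peer-group analysis, for example, boils down to flagging access that a user's peers rarely hold. The sketch below shows the core idea on made-up data; a real governance product would use richer peer definitions and statistics.

```python
# Minimal sketch of peer-group analysis for governance: flag entitlements a
# user holds that are rare among peers in the same department. Data is made up.
from collections import Counter

entitlements = {
    "alice": {"vpn", "crm_read", "payroll_admin"},
    "bob":   {"vpn", "crm_read"},
    "carol": {"vpn", "crm_read"},
    "dave":  {"vpn", "crm_read"},
}

def outlier_entitlements(user: str, peers: dict, threshold: float = 0.25):
    """Return entitlements held by `user` that fewer than `threshold`
    of their peers also hold."""
    others = [ents for u, ents in peers.items() if u != user]
    counts = Counter(ent for ents in others for ent in ents)
    return {ent for ent in peers[user]
            if counts[ent] / max(len(others), 1) < threshold}

print(outlier_entitlements("alice", entitlements))  # -> {'payroll_admin'}
```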
Natural language interfaces are starting to significantly enhance the user experience across all these categories of IAM solutions by enabling interactive, natural-language dialogue with the system. Static reports and dashboards are still necessary, but letting people with diverse responsibilities and requirements phrase questions in natural language and refine the results interactively lowers the expertise and training needed for organizations to extract value from these systems.
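Conceptually, such an interface translates a question into a structured query over access data and lets the user refine it. The sketch below illustrates the shape of that pipeline; translate_to_filter is a stand-in for an LLM-backed translation step, and the data and field names are invented.

```python
# Sketch of a natural-language front end over access data: the user's question
# is translated into a structured filter and applied to governance records.
# `translate_to_filter` is a stand-in for an LLM-backed translation step.
access_grants = [
    {"user": "alice", "app": "payroll", "entitlement": "admin"},
    {"user": "bob",   "app": "payroll", "entitlement": "viewer"},
    {"user": "carol", "app": "crm",     "entitlement": "admin"},
]

def translate_to_filter(question: str) -> dict:
    # In a real system an LLM would produce this structured filter from the
    # question; here it is hard-coded for illustration.
    return {"app": "payroll", "entitlement": "admin"}

def run_query(question: str):
    criteria = translate_to_filter(question)
    return [g for g in access_grants
            if all(g.get(k) == v for k, v in criteria.items())]

print(run_query("Who has admin rights in the payroll application?"))
# -> [{'user': 'alice', 'app': 'payroll', 'entitlement': 'admin'}]
```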
It’s the Start of a New Phase
One thing is evident: whatever the state of AI technology in mid-2024, it does not represent the end state of this field. Generative AI and LLMs are just one facet of AI, and numerous other AI-related disciplines are advancing rapidly thanks to improvements in hardware and substantial government and private research funding.
Whatever form mature, enterprise-ready AI takes, security experts already need to think about the advantages generative AI can offer their defensive strategies, how these tools might be used to breach existing defenses, and how to limit the extent of the damage if the experiment fails.
Note: This piece is contributed by Robert Byrne, Field Strategist at One Identity. With over 15 years of experience in IT, Rob has held roles spanning development, consulting, and technical sales, focusing on identity management throughout his career. Prior to joining Quest, he worked at Oracle and Sun Microsystems. He holds a Bachelor of Science in mathematics and computing.
