Living Security Adds AI Engine to Surface Risky End User Behavior
Living Security revealed it is beta testing an artificial intelligence (AI) engine on its platform that continuously analyzes billions of signals to predict risk trajectories, recommend the most effective actions, and automate routine interventions to better secure employees and, by extension, AI agents.

Dubbed Livvy, the AI engine is being added to a Human Risk Management platform already used to provide security awareness and training. It adds an ability to analyze behaviors and identities in the context of external threat signals, making it simpler to determine which risks matter most, what actions to take and how to scale interventions across increasingly complex attack surfaces.

Living Security CEO Ashley Rose said that even in the age of AI, the single biggest challenge is not the technology itself so much as how individuals are using it. Livvy identifies which employees are engaging in the riskiest behaviors, giving cybersecurity teams insights and recommendations for where additional security measures might need to be put in place, she added.

Cybersecurity teams can then use the Living Security platform to send an alert reminding employees of the risk the business is exposed to when, for example, they give an AI tool access to sensitive data. The issue is likely to become especially problematic when an autonomous AI agent has the permissions required to access data in a way that violates compliance mandates.

Unfortunately, it’s now more a question of when and how many incidents there will be involving AI agents, as the pace of adoption continues to exceed the ability of cybersecurity teams to keep up. Arguably, the only way to ultimately fix those types of AI security issues will be to rely more on AI, noted Rose.

Overall, the goal is to leverage AI to identify risks faster and at scale as part of an effort to reduce the number of cybersecurity incidents, said Rose. The earlier those issues are addressed, the less stress there will be for everyone involved, she added.

It’s not clear to what degree the rise of AI will induce organizations to revisit cybersecurity training, but there is a clear need to remind employees to do the right thing in the moment. Most employees are not willfully violating policies. Instead, in the rush to complete a task, they forget what was discussed in a training class they attended a month or more ago.

Of course, auditors are not going to be especially concerned about why a breach occurred so much as the fact that it did and what level of penalty should be applied. The potential costs of inadvertently allowing an AI agent to violate any number of compliance mandates could easily become staggering.

Cybersecurity teams, as always, might want to hope for the best while continuing to prepare for the worst, which hopefully won’t turn out to be nearly as catastrophic as many now fear.
