When AI Knows Something is Wrong, But No One is Accountable
In the absence of government regulation, we are leaving it to individual tech companies to determine when their own internal thresholds are met for identifying potentially dangerous individuals to law enforcement. That is exactly what happened here.

An 18-year-old in Canada murdered members of his own family and then carried out a school shooting. Eight lives gone. A small community shattered.

Months before the attack, he had been interacting with OpenAI’s systems in ways serious enough to trigger internal safety flags. According to Associated Press reporting, the company’s monitoring systems identified conversations tied to violent ideation. His account was ultimately banned.

But OpenAI determined the activity did not cross its internal threshold for notifying the Royal Canadian Mounted Police.

That decision was made inside a private company.

No statute required it.
No uniform industry standard guided it.
No regulatory framework defined “imminent.”

Just internal policy. Internal judgment. Internal thresholds.

And then, months later, the unthinkable happened.

Before you tell me, “Shimmy, if OpenAI was predicting this crime and did nothing, that is wrong!” let’s be very clear about something.

OpenAI did not predict the crime. It did not ignore explicit instructions for a specific attack. It followed its stated policy: escalate only when there is credible, imminent harm.

That’s important. But frankly, it’s not the real issue.

The real issue is that we are living in a regulatory void where private companies are quietly acting as digital risk assessors for society — without public standards, without consistent thresholds, and without shared accountability.

And that doesn’t work.

Don’t Kid Yourself. The Systems Are Watching.

For anyone who still thinks AI models are not monitoring usage, think again.

No, there isn’t a human reading every chat. But there are classifiers, pattern-detection systems, and risk-scoring algorithms. Certain phrases, behaviors, and topic clusters trigger review.
That is how modern AI safety works.

In this case, something triggered. Enough to close the account. In hindsight, unfortunately, not enough to call law enforcement.

That gap is where the story lives.

Would the same activity have triggered escalation on Claude? On Gemini? On another large model? We don’t know. And that uncertainty is not a small detail. It is the whole ballgame.

Because right now, whether a concerning digital trail gets reported may depend entirely on which platform someone happens to use.

That’s not governance. That’s roulette.

Liberty vs. Public Safety: The Collision We Knew Was Coming

From where I sit, this incident exposes five fault lines.

First, none of us wants our private conversations constantly monitored and piped to the authorities. That’s not paranoia. That’s civil liberty.

Second, we also recognize there is a duty to protect the public when credible danger appears.

Third, defining “imminent” is messy. Is it a direct threat with a date and location? A pattern of escalating violent ideation? A credible plan? Who decides?

Fourth, should that decision be made independently by every AI vendor? Or should there be a cross-industry standard developed in partnership with law enforcement, privacy advocates, and civil liberties groups?

And fifth, what about digital sovereignty? This was Canada. The platform is American. Different legal systems. Different privacy norms. Who has authority in cross-border digital harm scenarios?

These are not academic debates anymore. They are real.

The Temptation of the Easy Answer

It’s easy to say, “If lowering the threshold could save even one life, do it.”

I understand that instinct. After a tragedy like this, that argument feels morally unassailable. But here’s the danger.

The power to algorithmically flag someone and escalate them to law enforcement is extraordinary. Once normalized, it expands.
We have seen that pattern over and over in government surveillance programs and corporate compliance systems alike. What starts as protection can drift into pervasive oversight. And we should not pretend that risk is theoretical.

So as painful as it sounds, I would tread very carefully before empowering any tech vendor to routinely alert authorities about personal activity unless it is clearly criminal in itself or explicitly tied to imminent, credible harm.

Not “disturbing.”
Not “concerning.”
Not “problematic.”

Criminal. Or imminent.

And that standard cannot live in the policy document of a single company.

The Regulatory Vacuum Is the Real Story

This isn’t about blaming OpenAI. They operated within their own framework. The problem is that their framework is theirs alone.

We have built AI systems powerful enough to detect early signals of violent intent. But we have failed to build the public governance mechanisms that define what happens next. So we are left with this:

Private companies deciding when to escalate individuals to police.
No universal standard.
No transparency across vendors.
No shared oversight.

That is not sustainable.

If we are going to allow AI systems to monitor for violent risk, then the thresholds for reporting must be:

Transparent.
Consistent across platforms.
Developed collaboratively.
Subject to oversight.

Otherwise, we are asking corporations to balance civil liberty and public safety in private conference rooms. That is a role they should not want. And one we should not hand them by default.

This tragedy forces a hard question. Do we want AI platforms that watch but rarely escalate? Or platforms that escalate aggressively and risk overreach?

Right now, we have something worse. We have platforms that watch and decide alone.

And that, more than anything, is the void that needs to be addressed. Because deciding when someone becomes a danger to society is not a product feature. It’s a public responsibility.
And pretending otherwise won’t make the next tragedy any easier to explain.
