That Time a Software Engineer Had Dominion Over 7000 Robot Vacuums
What happens when you combine a DJI robot vacuum with a video game controller, a homegrown remote-control app, an AI coding assistant and an intrepid software engineer interested in getting a little housework done?
Let’s just say, a little more than the intrepid software engineer intended.

All Sammy Azdoufal wanted to do was reverse engineer how his Romo vacuum, which sells for about $2,000, interacted with DJI’s remote cloud servers, so he could manage his own vacuum through a game controller. Instead, he found that he could get the goods from 7,000 or so vacuums located in 24 countries, including live camera feeds, audio from microphones, status data and maps. In effect, he could have created quite the surveillance network if he’d wanted to, and no one would have been the wiser.

All because of a security flaw that Popular Science said allowed the servers to grant “access for a small army of robots, essentially treating [Azdoufal] as their respective owner” rather than “just verifying a single token.” The vulnerability reportedly allowed any DJI user to access any other DJI user’s robot.

Calling the flaw the “equivalent to a standard-issued key, originally designed to unlock your house, unlocking any house—and in any country,” Vineeta Sangaraju, security solutions engineer at Black Duck, says it “indicates a fundamental authentication vs authorization failure—a class of issue that should have been detected early through basic access control testing.” Add to that the fact “that it is an IoT device that is mobile in a user’s home, not just a computer, makes this simple vulnerability, materially impactful,” says Sangaraju, who believes “such access control oversights will be very expensive for companies going forward, especially in this era of using machines to do the hard and complex work for us.”

The software engineer reported his findings to The Verge, which reviewed them and notified DJI.
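The authentication-versus-authorization failure Sangaraju describes is often called broken object-level authorization: the server confirms who is asking but never checks whether the asker owns the object. A minimal sketch of the pattern (all names and data here are hypothetical, not DJI’s actual code or API):

```python
# Hypothetical device registry; in the real system this would be a
# cloud database keyed by device ID.
DEVICES = {
    "vac-001": {"owner": "alice", "camera_feed": "rtsp://alice-home/stream"},
    "vac-002": {"owner": "bob", "camera_feed": "rtsp://bob-home/stream"},
}

def get_device_vulnerable(session_user, device_id):
    # The session token was already validated upstream (authentication),
    # but ownership is never checked (authorization), so any logged-in
    # user can pull any device's data just by guessing device IDs.
    return DEVICES[device_id]

def get_device_fixed(session_user, device_id):
    # Authorization check: the device must belong to the requesting user.
    device = DEVICES.get(device_id)
    if device is None or device["owner"] != session_user:
        raise PermissionError("device not owned by requester")
    return device
```

In the vulnerable version, an authenticated stranger can enumerate IDs and read every registered device, which is the class of behavior reported here; the fixed version ties each lookup to the session’s identity.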
The company responded to Popular Science that it had “identified a vulnerability affecting DJI Home through internal review in late January and initiated remediation immediately,” addressing the bug through a pair of updates on February 8 and 10. “The fix was deployed automatically, and no user action is required.”

Still, it is the stuff of nightmares for consumers and defenders, who have long worried, with good reason, that smart devices could be exploited by hackers, or others, to siphon information or create a surveillance web.

“This situation is all too common for IoT and OT devices; severe time to market pressure, focus on innovation and not security, limited ability to make users perform security functions, and need to keep costs low all conspire to push a product into the market without comprehensive security,” says John Gallagher, vice president of Viakoo Labs at Viakoo. In that sense, the notion of hacked robots is nothing new; in fact, Gallagher says “IoT and OT products shipping without comprehensive security should be expected.”

That doesn’t mean it should be taken lightly. “The system correctly confirmed the user’s identity, but it didn’t properly limit which devices or data they could access. That difference between knowing who you are and tightly controlling what you can touch is critical,” says Randolph Barr, CISO at Cequence Security. “When those checks are weak in a multi-tenant IoT environment, especially one that handles video, audio, and detailed home maps,” he says, “the privacy fallout can be huge.”

Part of the problem is that the barriers to exploitation have been lowered considerably. “Unlike earlier eras, when one needed deep technical knowledge, today, anyone who can describe their intent to a model can quickly receive a working exploit,” says Sangaraju.

Taking Action

To protect against these vulnerabilities, bug bounty programs are valuable, but they’re not enough.
“They only serve as a precautionary measure rather than the primary defense,” says Barr; what is needed beyond that is “a mature, secure development lifecycle that incorporates security from the outset” to provide “the real foundation.” “That means explicitly modeling threats to tenant isolation, rigorously testing authorization boundaries, thinking through realistic misuse scenarios, and hammering APIs and backends with tests that focus on cross-account access,” he says, calling for multi-tenant authorization to “be treated as a first-class design requirement in a cloud-connected device platform.”

But even good engineering practices don’t eliminate authorization bugs, which will still slip into production. “That’s where strong runtime monitoring becomes essential,” Barr says. “Enterprises need to see how APIs and messaging systems are actually being used, not just whether a token passes validation,” he says, and “if an authenticated client starts enumerating devices, subscribing to unusual data streams, or pulling data outside its normal tenant scope, that behavior should stand out and get attention fast.”

API security goes beyond checking boxes or enforcing fixed rules. “It’s about understanding what a user or system is actually trying to do,” says Barr. By tracking behavior, it becomes possible to spot and respond to anomalies in real time: “if a client suddenly starts acting in ways that don’t match normal usage, say, accessing far more devices or data types than usual—that traffic can be flagged or blocked, even if a backend permission check mistakenly allows it,” he says. “This adds an important layer of defense when application-layer authorization isn’t perfect,” he contends.

Companies must also reassess their AppSec risk processes. “They need to ask themselves if they are going back to the board and recalibrating their risk models in this world where exploitation is more accessible?” says Sangaraju.
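The runtime monitoring Barr describes amounts to watching per-client access patterns rather than individual permission checks. A minimal sketch, with hypothetical names and thresholds (a real system would baseline normal behavior per client and also track data types and streams, not just device counts):

```python
from collections import defaultdict

class TenantScopeMonitor:
    """Flags clients whose device-access pattern exceeds their normal scope."""

    def __init__(self, max_devices_per_client=3):
        # Threshold chosen for illustration; real systems would learn
        # a baseline per client rather than use a fixed number.
        self.max_devices = max_devices_per_client
        self.seen = defaultdict(set)

    def record(self, client_id, device_id):
        # Track distinct devices touched by each client. Return True when
        # the client starts enumerating devices beyond its expected tenant
        # scope, even if every individual request passed the backend
        # permission check.
        self.seen[client_id].add(device_id)
        return len(self.seen[client_id]) > self.max_devices
```

A home-vacuum app that normally talks to one or two devices but suddenly touches thousands, as in the incident described above, would trip this kind of check almost immediately.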
“Or are they still rating vulnerabilities using assumptions from a slower era?”

The responsibility doesn’t fall to defenders alone. For companies that build connected devices, “the key takeaway is that prevention and detection have to work together,” with secure design, sound authorization models, and thorough testing laying the groundwork, Barr says. “Vendors in this space should consider backend isolation and behavioral monitoring as essential security requirements, not as optional features.”

Consumers, too, must take precautions. They may “feel a false sense of security and assume that IOT/OT devices have equivalent security to IT systems—they don’t,” says Gallagher. Whether home or enterprise users, there are security basics that should be followed. “Keep IoT devices on a separate network to prevent lateral movement from the infected IoT device,” and “use multiple layers of security (defense in depth), including network firewalls, device hardening, and zero-trust segments to limit the blast radius of a vulnerable device,” says Gallagher.

Traditional tooling won’t fill the gap, he notes: agent-based IT security solutions don’t work with IoT/OT because the devices run custom operating systems, and traditional asset discovery solutions often can’t see IoT/OT devices that communicate rarely or with small packets (such as the MQTT packets in the DJI example).

And perhaps consumers should cross their fingers that hackers don’t infiltrate their lives through smart devices…or maybe go back to pushing a traditional vacuum around the house.
