AI Overviews Rife With Scam Phone Numbers
I have a love/hate relationship with the AI overviews that Google dishes up when I launch a search. On the one hand, they helpfully summarize data from all over the internet at lightning speed, and at first blush they read like a cogent narrative that, with a little tweaking, could pass for perspective. On the other hand, they (not surprisingly) sound AI-generated, with some familiar word choices and phrasing that I find annoying. And they present a needless verbiage hurdle between me and the search results I was looking for in the first place. Oh, also, some of the information is inaccurate, so…

Now Wired writes that “these AI answers can actually be dangerous.” Say what? Color me surprised (I’m being sarcastic). In addition to odd word choices, inaccurate information and blatantly ripping off real human writers, AI Overviews are chock-full of scams. They often include fake phone numbers that, if called, can deliver the user into the hands of con artists.

The Wired report explains that a potential victim, in search of a company’s phone number, Googles the organization’s name. AI produces a number; the searcher rings it and is connected not to the company they’re seeking but to a scam operation, where a person posing as a company representative tries to extract information from the caller, perhaps including payment data.

How do these phone numbers make their way into the AI Overview to begin with? The report says they’re likely being published on a variety of low-profile sites in association with the names of large companies. That way, when AI Overview scrapes websites to fulfill a search request and craft a narrative, it pulls those numbers without verification, and no one is the wiser. “AI-generated summaries effectively ‘launder’ misinformation into authoritative answers,” says Ram Varadarajan, CEO at Acalvio.
“Scammers have discovered that they can flood user-generated content sites and forums with fake phone numbers for major businesses, then trick callers into sharing their credit card information,” Lily Ray, vice president of search engine optimization and research at Amsive, posted on LinkedIn, pointing out that “a lot of people have grown accustomed to trusting Google’s results without second guessing things, because for many years (decades, even), you could.”

Calling “the tiny disclaimer under AI Overviews saying ‘AI can make mistakes’” insufficient, Ray wrote that “AI Overviews shouldn’t trigger in situations where there is a good chance that getting the answer wrong can be dangerous (e.g., business phone numbers). Or when Google has better, more reliable methods for answering the question correctly.” Think Google Maps or the Knowledge Graph, she says.

“When AI is trained on bad data, it remembers,” says David Brumley, chief AI and science officer at Bugcrowd. “The problem is that foundational LLMs are indiscriminately learning from everything, so attackers have learned it’s just a game of numbers.” If enough fake information is posted often enough, Brumley says, “the algorithms will repeat it” and “worse, AI can’t unlearn bad data, so Google will have to figure out a band-aid to patch over this every time it happens.”

Defenders, too, can take steps to prevent these scams from being successful. “For security teams, it’s the pivot from protecting the systems to defending the integrity of their brand’s digital identity,” says Varadarajan. “Success requires a dual-track strategy: monitoring for brand safety, while also training users to treat every AI-surfaced ‘fact’ as an unverified claim.”
