The AI cat and mouse game has begun

Social engineering for access

Hackers, like the ones referenced above, are often motivated by financial gain, but their intentions may also be to create a political disturbance or simply ruin a company’s reputation, among other reasons.

Typical tactics involve phishing emails or deceptive social media messages designed to steal company credentials. Last fall, I wrote about the attacks on MGM Resorts International and Caesars Entertainment perpetrated by the hacker groups BlackCat/ALPHV and Scattered Spider. In this ransomware attack, the hackers breached databases containing members’ driver’s license information and Social Security numbers, then demanded cash payments from the companies.

No AI-based deepfake technology was used in these attacks. Instead, the attackers took a low-tech approach: they used social engineering to impersonate an employee, and the fooled IT help desk granted them access. Lesson learned: once access is granted, it’s too late. Such attacks will only become more common and severe as AI enters the equation.

Whom can you trust?

Distributed teams and remote workers make this problem worse. The help desk cannot realistically validate every one of these employees, even with the aid of visual verification. The problem compounds when you consider business partners, customers, and third-party vendors. You may trust several third parties with network and data access, yet know too little about them and their employees to mitigate the risk. For example, a caller using a cloned vendor’s voice may ask to confirm shipments or validate payment instructions. Someone in your organization who once met that vendor may believe the fake is real and willingly provide account information, much as the Hong Kong finance manager did. What happens then?

The key is to ‘shut your front door’ using new AI solutions that help manage credentials, verify employee identity, and limit access. Some software products already combine behavioral and biometric signals in real time to confirm true identity and access privileges; a minimal sketch of that idea follows the list below.

  • Protect Your Digital Presence: Utilize AI to safeguard social media and online assets. With the rise of spoofed business pages on platforms like Instagram or Facebook, it’s crucial to defend against the potential damage to sales, reputation, and customer trust.
  • Defend Against Deepfakes: AI-based real-time identity verification tools are vital in combating deepfake threats, ensuring secure transactions and account modifications by verifying user identities.
  • Validate Every Interaction: In an era where identity and credential spoofing are rampant, CIOs and CISOs must ensure the integrity of every transaction and identity verification process.
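To make the signal-blending idea concrete, here is a minimal sketch of how behavioral and biometric signals might be combined into a single identity-confidence score that gates access. Every name, weight, and threshold here is a hypothetical illustration, not any specific vendor’s API.

```python
# Hypothetical sketch: blend biometric and behavioral signals into one
# identity-confidence score before granting access. Weights and
# thresholds are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class IdentitySignals:
    face_match: float        # biometric: face-scan similarity, 0.0-1.0
    typing_rhythm: float     # behavioral: keystroke-dynamics match, 0.0-1.0
    known_device: bool       # behavioral: request came from a registered device
    usual_location: bool     # behavioral: geolocation consistent with history


def identity_confidence(s: IdentitySignals) -> float:
    """Blend the signals into one score (weights are illustrative)."""
    score = 0.5 * s.face_match + 0.3 * s.typing_rhythm
    score += 0.1 if s.known_device else 0.0
    score += 0.1 if s.usual_location else 0.0
    return score


def gate_access(s: IdentitySignals, threshold: float = 0.8) -> str:
    """Grant, step up, or deny based on the blended score."""
    score = identity_confidence(s)
    if score >= threshold:
        return "grant"
    if score >= 0.5:
        return "step-up"   # e.g., require an out-of-band callback
    return "deny"


if __name__ == "__main__":
    # A convincing deepfake may score high on face_match alone,
    # but weak behavioral signals drag the blended score down.
    suspicious = IdentitySignals(face_match=0.95, typing_rhythm=0.2,
                                 known_device=False, usual_location=False)
    print(gate_access(suspicious))  # "step-up"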

Recently, hackers even found ways to steal stored biometric authorization data through an iOS and Android trojan called GoldPickaxe. This should set off further alarms: the biometric data previously stored to match your fingerprint or face scan is itself susceptible to attack.
