Criminals are increasingly using generative artificial intelligence to craft sophisticated scams. To spot an AI scam, tap into your intuition: if something feels off, it probably is. It’s also smart to double down on the foundational security practices that can shield against all scams, whether or not they’re AI-assisted.
Advancements in artificial intelligence, or AI, have been making headlines — and changing the landscape of identity security.
Though you may be hearing more about it lately, AI is nothing new. Over the last decade, predictive artificial intelligence has been used to sort through massive amounts of data and generate predictions and recommendations.
In recent years, though, generative AI (artificial intelligence that gives computers the ability to produce original content, audio, and imagery) has hit the mainstream, and scammers are taking note.
With software like ChatGPT and Bard, it’s now possible for fraudsters to create better, more convincing scams.
AI voice scams and other common schemes
It didn’t take long for bad actors to take advantage of recent AI breakthroughs.
In March 2023, the Federal Trade Commission (FTC) released a consumer alert warning the public about AI voice scams, in which fraudsters use generative AI software to clone a person's voice and run highly convincing imposter schemes.
For example, a scammer may call their target pretending to be a family member in distress: The victim thinks they're sending money to help a relative in an emergency, but it's really an AI scam, and the scammer pockets the funds.
In another increasingly common AI scam, bad actors use AI software to quickly craft human-sounding phishing emails and text messages. The fact that these AI programs can sound like real people is one of the biggest dangers of artificial intelligence.
3 ways scammers may use generative AI
Create highly convincing phishing messages in minutes
Generate sophisticated malicious code that infects devices
Clone a loved one’s voice to run a phone scam
Historically, scammers have given themselves away through human errors such as misspelled words and flawed grammar. But because AI is a learning technology, those traditional red flags may now be hidden by the machine.
AI can learn to create sophisticated messages that blur the line between what is legitimate and what isn't.
If AI is helping fraudsters develop better and more convincing scams, how do we fight back?
Combat AI scams by tapping into your intuition
The good news? We still have the power of scrutiny and skepticism on our side.
The best way to avoid an AI scam, or any scam, is to question anything that seems off. Scrutinize any message that asks for money or personal information.
For example, if a loved one calls and asks you for personal information or money out of the blue, take a moment and ask yourself if this seems normal. Ask them questions about the situation and, if it feels off, hang up and call the person back to make sure it’s them.
When it comes to phishing texts, emails, and messages sent via social media, the same fundamental security tips you’ve relied on for years will work against AI scammers, too:
Never click on a suspicious link or unsolicited file. Phishing messages may be more believable than ever thanks to AI, but the bottom line stays the same: If you weren't expecting a link or document, don't click on it. If it came from someone you know, ask them about it (using the email address or phone number you usually use to communicate with them) before responding.
Pay attention to the sites you visit. Remember that scammers can plant authentic-looking ads on search engines and social media. If you land on a site that's new to you, check that it's secure (look for "https" and the padlock icon in the address bar), and never enter personal or financial information on an unsecured website.
Practice good password hygiene. Create strong passwords that are difficult to guess, change them every few months, and use multi-factor authentication to protect your data. And don't reuse the same password across accounts; a unique password for each one keeps a single breach from unlocking the rest. This will help keep your private data safe and minimize your risk of account takeover. (A short sketch of what a randomly generated password looks like appears after this list.)
Keep your phone and computer software up to date. Regular device updates can be a minor inconvenience, but they have a major payoff. Built-in antivirus software relies on regular updates to stay effective, and keeping your devices current can help ward off malware attacks.
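For readers curious what "strong and difficult to guess" can look like in practice, here is a minimal, illustrative sketch using Python's built-in secrets module, which provides cryptographically secure randomness. The function name and the 16-character length are just example choices, not a recommendation from any particular tool, and a password manager will do this work for you automatically.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password mixing letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a different password for each account instead of reusing one.
print(generate_password())
```

Because every character is drawn at random from a large alphabet, the result has no words, patterns, or personal details for a scammer (or an AI model) to guess.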
How will AI impact the future of cybersecurity?
Though AI in the hands of a scammer is cause for concern, artificial intelligence can also be used to help people keep their identities safe.
The Allstate Digital Footprint® tool, for instance, uses machine learning to show you which companies store your information — and helps you send requests to delete it. By controlling what exists online, you can help prevent theft or misuse of your personal information.