June 29, 2023

AI scams & voice cloning: How to identify them

Artificial intelligence (AI) has made its way out of sci-fi films and into real life, and it’s gradually taking on more sophisticated roles in our society. The bad news is that as AI becomes smarter, cheaper, and more widely available, it’ll only get easier for criminals to incorporate this technology into their scams. The worse news is that it’s already started.

With spam and scam calls already rampant, AI is the next wrinkle consumers must prepare for. Whereas most spam calls are annoying but harmless (if you know how to handle them), AI scam calls can be dangerous and traumatic. These scams may use cloned voices of your loved ones, which can take an emotional toll. And, like any other scam, they can compromise your privacy and drain your finances.

Scammers and spammers stay at the cutting edge of technological advancements, so when a new type of tech is integrated into our culture, scammers integrate it into theirs. AI is the next frontier for fraudsters, and it’s up to us to understand how to protect ourselves and our families. Read on for a comprehensive look at AI scams, voice cloning, and how to avoid them both.

What are artificial intelligence scams?

Whereas spam calls, scam texts, and phishing emails generally rely on the same tactics, AI scams add completely new elements to the fraudster’s repertoire. They may incorporate more traditional scamming methods in order to initially pull in their targets, but the use of artificial intelligence allows them to elevate their fraud.

AI scams take different forms, often using voice cloning or fake chatbots, but the goal is the same as always: to steal information, assets, or identities. Tactics like voice cloning can be emotionally triggering to their targets. By inventing a highly emotional situation, scammers trick their victims into cooperating with their demands before they realize they’re being deceived.

Like smishing, robocalling, and caller ID spoofing before it, voice cloning is the latest in a long list of technology-based scam tactics. As scammers build on traditional methods with newer AI capabilities, their schemes only stand to become more dangerous.

Types of AI scams

Scammers are finding various ways to use AI technology to their advantage. AI scams can be especially intense, which is why it’s crucial to understand how to recognize them. Here’s a brief overview of some of the new types of AI scams that have been hitting the phone lines.

Voice cloning and AI scams

One of the more popular uses of AI in phone scams is voice cloning. This particularly conniving tactic allows scammers to replicate the voices of their targets’ loved ones. Once the voices have been copied, criminals can use these clones to activate voice-controlled devices, ask loved ones for personal information (voice phishing), or even stage virtual kidnappings.

Voice cloning in identity theft

Hands-free interaction with our devices is convenient, but voice cloning exposes it as a weak point in our security. A scammer with a copy of your voice may be able to talk to voice assistants like Siri and Alexa, which means they may also gain access to your credit cards, bank accounts, and personal identification.

Fraudulent phone calls and transactions

Between caller ID spoofing and voice cloning, a crafty scammer may have you thinking you’re on the phone with your best friend: the call comes up as their number, and you hear what sounds like their voice. AI voice cloning lets scammers casually ask for sensitive information you’d only share with friends and family. If the caller starts asking questions your friend wouldn’t normally ask, hang up and call your friend back at the number you have saved.

The DeStefano story (virtual kidnapping)

Jennifer DeStefano and her family lived through the harrowing experience of an AI kidnapping scam and have shared their story with the world. While bringing her younger daughter, Aubrey, to the dance studio, Jennifer answered the phone to hear the crying voice of her 15-year-old daughter, Brianna, along with that of an unknown and threatening male.

As the caller made his demands, including $1 million in ransom, Jennifer had other moms at the studio call 911 and attempt to reach Brianna. Fortunately, the dispatcher recognized the situation as a scam, and Jennifer’s husband was able to confirm that Brianna was safe and had never been kidnapped. Although everyone involved made it out without injury or financial loss, the emotional impact of this type of scam may last a lifetime.

Deepfakes

A deepfake is when someone’s likeness or voice is copied in a realistic, believable way. You may have seen AI deepfakes that swap different actors’ faces onto characters in your favorite movie, or you might have heard an AI-generated voice that sounds just like a famous celebrity, saying things they definitely didn’t say. Deepfakes use AI to fabricate a close copy of the target’s persona, creating a situation in which it seems like the person is really involved.

Social engineering attacks using deepfakes

Scammers use social engineering to manipulate their targets’ emotions, and deepfakes can be a direct shortcut. When convincing a target to turn over their personal information, the voice of a loved one in distress provides instant leverage. They think their family member is in trouble, so they readily cooperate with the scammer to de-escalate the situation. Some scammers may impersonate a government official or other influential person rather than a target’s friends or family.

Fraudulent digital content

People can easily use deepfake technology to create fraudulent content and associations. When you can paste someone’s face and voice over someone else’s, you can make that person appear to do or say just about anything. Since it allows you to create false evidence, deepfaking is a Swiss army knife for framing and extorting people.

AI-generated text

Modern AI can be used to create everything from fake voices and speech patterns to fully fabricated images and videos, so generating text is no problem. Unfortunately, it may turn out to be a huge problem for us.

Fake news and disinformation

AI can generate text in the style of any person or entity, including authoritative news sources. This can be harmless and entertaining when used in a vacuum, but it can be dangerous when applied to the real world. AI-generated text can be used to leave fake reviews to inflate the value of a bad product or send phony emails and texts on behalf of a political candidate.

Impersonating executives and influencers

If you know a celebrity, athlete, or other public figure well enough, you might do a pretty good impression of them even without AI. With AI, however, it can be easy to impersonate a social media influencer or well-known personality — especially via text. AI can easily emulate the writing style of famous or powerful figures to scam people out of information, money, and other assets. Think twice before answering a direct message from a model or a text from your boss’s number that asks for unusual information.

Phishing emails and social media posts

The point of phishing is to make the target think a phony message is real, and AI has made it easier than ever for scammers to do just that. By using AI to copy the structure, style, and tone of actual marketing emails and social media posts, they can generate phishing and smishing scams that seem legitimate and professional.

Malicious chatbots and AI assistants

In addition to caller ID, scammers spoof websites, down to the chatbots and AI assistants that live within them. Criminals can create near-identical copies of familiar websites, making you think the forms you’re filling out and chatbots you’re interacting with are trustworthy. This process usually starts with phishing links in an email or text, so it’s worth repeating: Never click links from unknown senders.

Manipulating conversations and data theft

Spoofed websites can look just like the real thing, so it can be difficult to tell if you’re on one. If a chatbot or AI assistant asks for personal information that you wouldn’t usually share with a bot, like usernames, passwords, and financial information, then you might be conversing with malicious AI. These bots aren’t associated with a brand you know and trust — they’re out to steal your sensitive information so they can sell it to other scammers or use it to hack your accounts.

Deceptive customer support scams

Chatbots and AI assistants are supposed to assist consumers by answering their questions, helping them navigate through the website, and setting up their appointments when the humans are off the clock. Scammers use these robotic helpers for evil, however, and may impersonate companies you’ve shopped with before.

How does AI voice cloning work?

With just a few seconds to a minute of audio, scammers can use AI software to make a convincing copy of your voice and have it say anything they want. The cloned voice matches your gender, tone, inflection, and emotion, creating a fake that even close friends and family could mistake for the real thing. Voice-cloning software is widely available online at low prices or even for free.
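
To see just how low the barrier is, here’s a minimal sketch using Coqui TTS, one widely available open-source voice-cloning library. The audio file and spoken text are hypothetical placeholders; the point is how little code and data a recognizable clone requires:

```python
# pip install TTS  (Coqui TTS, an open-source text-to-speech library)
from TTS.api import TTS

# Load XTTS v2, a multilingual model that supports voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "sample.wav" is a hypothetical clip of the target speaker.
# A few seconds of clean speech can yield a recognizable clone.
tts.tts_to_file(
    text="This is a demonstration of a cloned voice.",
    speaker_wav="sample.wav",
    language="en",
    file_path="cloned_voice.wav",
)
```

A handful of lines and one short recording is all it takes, which is exactly why publicly posted voice samples carry real risk.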

Ways to detect and protect against AI scams

Spam calls and texts are annoying and potentially dangerous, but AI scams can be downright scary. That’s why it’s critical to understand how to detect them as they happen and how to protect yourself if you find yourself wrapped up in one.

Red flags in AI scams

AI scams are tricky by nature, which is one reason they tend to be effective. When you know what to look for, however, you empower yourself to shut them down before any damage can be done.

Keep yourself and your family secure by recognizing these red flags in AI scams:

  • Links in an email or text message. Many AI scams stem from traditional phishing or smishing links. If you find yourself talking to a chatbot after clicking a link in an email or text, you might be in the midst of a scam. Checking a link’s actual domain, as in the sketch after this list, can help you spot lookalikes.
  • Extraction of personal information. An AI scam bot might ask you for personal information that isn’t necessary or relevant to the exchange. Never give out sensitive details like login credentials or financial information without verifying whom you’re talking to.
  • Extreme situations. AI voice cloning scams can make targets think their loved ones are in serious trouble, which immediately creates a sense of urgency. Although these situations can be nerve-wracking, try to stay calm and remember that these scams exist. If you get a call that sounds like a family member crying, call that person to ensure they’re okay before assuming it’s actually them.
  • Odd imagery. Some scams use AI-generated images as advertisements that compel viewers to purchase a fake product or donate to a phony cause. AI-generated images aren’t perfect, so look closely at the details; these images might feature a person with too many fingers or an entire setting that doesn’t really exist.
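
As a concrete version of the first red flag above, here’s a minimal Python sketch that flags links whose domain isn’t one you trust. The allowlist and example message are hypothetical, and a real checker would also need to handle shortened URLs and lookalike characters:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist; replace with domains you actually trust.
TRUSTED_DOMAINS = {"amazon.com", "chase.com", "irs.gov"}

def suspicious_links(message: str) -> list[str]:
    """Return the URLs in a message whose host isn't a trusted domain."""
    flagged = []
    for url in re.findall(r"https?://\S+", message):
        host = (urlparse(url).hostname or "").lower()
        # Trust exact matches or true subdomains ("pay.chase.com"),
        # never lookalikes such as "chase.com.evil-site.io".
        trusted = any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
        if not trusted:
            flagged.append(url)
    return flagged

print(suspicious_links("Your package is on hold: http://amazon.com.track-now.info/claim"))
# -> ['http://amazon.com.track-now.info/claim']
```

Note how the fake link plants a trusted brand name at the front of the hostname; reading the domain from right to left is what exposes the trick.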

Protecting yourself from AI scams

There are steps you can take to make yourself less visible to bad actors and avoid being targeted by AI scams in the first place. If you do find yourself in the midst of one, it’s also important to know how to act.

Here’s how you can minimize your risk for AI scams and handle them safely if you’re targeted:

  • Don’t post about upcoming trips. Of course you want to document your travels, but refrain from posting about your out-of-town excursions until you return. If scammers know you’re going to be away, they may take the opportunity to con your parents or grandparents while you’re harder to reach.
  • Only share personal information with people you can trust. Even basic information can be used against you when it falls into the wrong hands. Don’t give out your email address, phone number, login information, or any other personal details unless you fully trust the other person.
  • Reach out immediately. If you’ve been targeted with a virtual kidnapping (when a scammer clones a loved one’s voice) and someone else is in the room with you, have them try to contact the person who is allegedly being held captive. If more people are available, have someone call 911 or another emergency service right away.
  • Use a family password. Given scammers’ ability to recreate voices, it’s more important than ever to verify who you’re talking to on the phone. Create a secret password or code for family and friends to use to confirm their identity. Don’t share the code with anyone else, and never post about it online.

Impact on businesses and individuals

AI scams add a new dimension to the familiar credit card and car insurance scams we’ve all dealt with before, and they have the potential to do massive damage to businesses and individuals alike. The more of your voice that’s out there, the easier it is for criminals to clone it. This can be problematic for public figures, podcasters, singers, and anyone else whose voice is widely available online. Furthermore, businesses with remote employees may be more vulnerable to these types of scams because of their decentralized communications.

According to the FTC, consumers reported losing more than $2.6 billion to imposter scams out of $8.8 billion in total reported losses to fraud in 2022. Our research estimates that Americans actually lost over $85 billion to phone scams last year. Since the number has been growing year-over-year, even without AI we can expect things to get worse before they get better.

Legal and regulatory approaches to AI scams

AI scams are becoming more common, but they’re not a brand-new concept. People have been using artificial intelligence and machine learning to pull scams for years; now, it’s just easier for the average scammer to get their hands on more capable technology. With this high-tech form of scam infiltrating our airwaves, we’ll need new legislation to combat, punish, and ultimately eliminate AI scams.

Existing laws and regulations

While we already have certain laws and regulations governing AI, privacy, and cybercrime, they weren’t designed to fight the types of AI scams we’ve seen so far. Let’s take a look at why.

Privacy laws and data protection

The United States has many laws related to privacy and data protection, which is actually part of the problem. Instead of overarching, centralized laws that apply to data protection across the board, we have smaller clusters of laws spread between federal and state governments. They may be divided by location, demographic, and type of data being collected, making it difficult to protect “data” as a whole. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Electronic Communications Privacy Act of 1986 (ECPA), for example, are two data protection laws that each cover a different slice of data: health records and electronic communications, respectively.

Cybercrime legislation

Much like privacy laws, some cybercrime legislation is already on the books. Again, however, the existing laws were not meant to handle the AI-infused phone scams that have recently been targeting American consumers. We will need to pass new legislation that focuses on these particular phone scams, and voice providers must continue to cooperate with agencies like the Federal Trade Commission (FTC) and Federal Communications Commission (FCC) to protect Americans against this new type of threat.

Proposed policies and legislation

Although we have some legislation in place to help prevent cybercrime and protect consumer privacy and data, we’ll need to take additional measures to squash AI scams. There are a few specific areas in which we can focus our efforts to fight back against this particular type of fraud.

Regulating AI and deepfakes

As of now, it’s relatively cheap and easy to create convincing deepfakes with AI. This is fine when you’re just pranking your friends or editing movie trailers for fun, but it has dangerous implications when scammers use this technology to defraud people of their money, assets, and identities. Future laws may seek to regulate the use of AI and deepfake technology to reduce the prevalence and impact of AI scams.

International cooperation on AI scams

As the spam epidemic has shown us, the threat of law enforcement doesn’t deter foreign scammers from sneaking into our inboxes, voicemails, and bank accounts. It will take better international cooperation to eliminate spam and scams of all kinds from our airwaves — especially those that use AI.

Key takeaways

Scammers stay up-to-date with the latest technology so they can incorporate it into their ploys. AI scams use tactics like voice cloning to defraud people and amplify the spam and scam calls that have already plagued our phones for years. That’s why it’s important to protect yourself with a third-party app like Robokiller.

Robokiller is a spam-blocking app that keeps scams, telemarketers, and robocalls from ringing your phone. It’s 99% effective at eliminating scam calls and texts thanks to audio fingerprinting, machine learning, and predictive algorithms. In short, we’ve trained our AI to be the counterintelligence to the AI that scammers use.

Unlike the AI harnessed by fraudsters, ours is deliberately designed to safeguard your peace of mind. By outsmarting the deceitful algorithms of the scamming underworld, Robokiller stops scam calls and texts before they reach you. It’s a classic case of good AI triumphing over bad.

Try Robokiller free for 7 days.

FAQs

What can a scammer do with your voice?

Newly available AI technology has made it easy and affordable for scammers to spoof people’s voices. With just a few seconds of audio, scammers can hijack your voice and make it sound like you’re saying whatever they want.

How does AI voice cloning work?

Scammers incorporate your voice into their scams using a process called AI voice cloning, in which they feed a sample of your voice through a program that recreates your tone, inflection, and timbre. With a small amount of data, they can create and manipulate a realistic copy of your voice — which they then use to dig for personal information, request a wire transfer, or ask your grandparents for gift cards.

Can my voice be cloned?

In a word: yes. Scammers might need as little as three seconds of your voice to create a realistic and believable clone of it. The longer the sample they have, the more accurately they can clone your voice.

How can we identify a scam in AI?

AI scams are often built into the usual spam framework, so they come with many of the same signs as spam calls, texts, and emails. Beware of communications that create a sense of urgency, ask for personal information, or use voice cloning to impersonate friends, family members, or other recognizable figures.
