What information can you trust these days? “Alternative facts” blur reality. Social media bots spread misinformation. And you can’t even believe your own Caller ID anymore.
What comes next, though, may eclipse these problems.
They’re called deepfakes: audio and video generated by computers that are so lifelike, most people can’t tell they’re fake.
To prove it, listen to these three recordings and guess which one is an actual human.
It’s difficult to tell, isn’t it? Below, we'll reveal which one is real.
Robocallers and telemarketers have always used technology to fuel their businesses. After all, more calls mean more opportunities to scam people, which means more money. Deepfake technology is about to become part of their arsenal.
There’s a good chance you’ve watched a deepfake video and not even known it. This tech is already used to make videos that are entirely manufactured. Buzzfeed and Jordan Peele created this one below to demonstrate how easy it can be.
That same technology may soon be used by phone scammers to create incredibly persuasive robocalls that react and respond to you.
As you can tell, this isn’t the robotic-sounding audio we’re used to. Deepfake technology can understand and replicate the speaking style and emotions we use in daily conversation.
Let’s examine the IRS robocall scam that’s been used for years. It has a gimmicky, robotic tone. Other robocalls do, too. Yet everyday people still fall for it. Last year, scammers stole $9.5 billion from consumers.
Now compare the audio of a deepfake voice to the typical IRS scam. Which one is more believable?
This is the kind of tool phone scammers dream of. It makes it nearly impossible for you to discern whether a call is legitimate or a phishing scheme.
Phone scammers have a history of applying clever tactics and new technology. One of the first known scams involved an impersonator and a fake ransom, which managed to steal $20,000 back in 1888. Almost one hundred years later, autodialers allowed scammers to place calls at incredible volumes for cheap.
When Caller ID made it easier for consumers to avoid calls from unknown numbers, scammers turned to spoofing. The more people ignore calls from toll-free and long-distance numbers, the more robocallers use neighbor spoofing.
Just imagine the damage robocalls powered by deepfake technology will cause.
So which of the three audio samples above belongs to a real human? Actually, all three are fake. An AI from Chinese tech company Baidu created them using only 3.7 seconds of a real human’s voice.
The ability to simulate a conversation was the missing piece in the phone scammer’s arsenal that already includes neighbor spoofing. Soon, the combination of autodialers, neighbor spoofing, and deepfakes will open up a scammer’s paradise.
Consumers already have a hard time spotting deepfakes. Neighbor spoofing tricks people into answering their phone more frequently. Authenticity is becoming malleable.
Imagine getting a call from your bank. The caller explains that they flagged a potentially fraudulent charge on your account. You’re suspicious, but the friendly voice reads off the last four digits of your Social Security number to confirm your identity. Sounds legitimate so far.
To correct the fraudulent transaction, the bank representative asks you to verify some information. She asks for the answer to your security question. Almost done. Now she requests your PIN and then the three-digit security number on the back of your card to complete the process. The representative proclaims that the fraudulent charge will be removed and hangs up. Phew! Close call, right?
The problem? You’ve just given up personal banking information to a neighbor spoofed deepfake robocall.
Modern phishing and theft scams are getting more sophisticated. Adding deepfake technology to neighbor spoofing is a recipe for telephony disaster.
What about Google and Apple? As the two companies that dominate the smartphone market, surely they have a solution?
Just a couple of months ago, Google announced a new feature for its new flagship Pixel phone. When a call reaches your phone that you think is questionable, you’ll be able to tap a “Screen Call” button. When you do, Google Assistant alerts the caller that you are using a screening service and that you’ll get a copy of the conversation. It then asks them to state their name and reason for calling, which will be relayed to you in real time.
As exciting as it is, there are still plenty of problems. Chief among them is its reliance on you to tell the difference between a spam call and a legitimate one, not once, but twice: first to tell Google Assistant to screen the call, and a second time to decipher whether the caller’s response to the Google AI helper is fake.
Deepfake technology, with machine learning that manufactures realistic voice responses, will still get through to you, even if the initial interaction is transcribed by Google Assistant.
Oddly enough, the same kind of machine learning and automation that powers deepfakes also powers Duplex, Google’s AI software that places human-sounding voice calls on your behalf, like booking restaurant reservations and hair appointments.
iPhone users have had to fight spam calls without help from Apple. Not long ago, the company announced a patent for a system that attempts to analyze technical data about incoming calls and decide whether they’re legitimate calls or spam calls masked behind a spoofed Caller ID.
When one is identified as spam, your iPhone will display a warning to you that the incoming call might not be legitimate.
That could be helpful in filtering some of the spam calls you get, but it does nothing about the deepfake technology supercharging robocalls.
Additionally, Apple files patents the way a germaphobe uses hand sanitizer. Ultimately, many never see the light of day. There’s no guarantee this one will.
Between these new technologies, Google and Apple could help prevent some phone scams, but their new features won’t attack the technology creating the issue.
For that, you need to fight fire with fire: countering machine learning and automation technology with machine learning and audio fingerprinting built by the good guys.
That last technology, audio fingerprinting, is the key. Scammers might be able to create deepfake robocalls using only 3.7 seconds of audio, but fingerprinting only needs 3.7 words to identify the scam calls.
Fingerprinting technology, like the kind used by RoboKiller, works by matching identical audio even when there’s a bit of distortion. It’s precise enough, however, to avoid matching audio that merely sounds similar. If a scammer recorded a robocall saying, "Hi, my name is Sarah," and that recording later picked up a little static or distortion, both recordings would still have a matching fingerprint.
Now let’s say a scammer generated deepfake robocalls that said, "Hi, this is Sarah," and "Hi, my name is Stacy." Machine learning technology using fingerprinting would still match the portions that contain the exact same audio as the known scam recording, like the "Hi, my name is..." part.
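The matching idea described above can be sketched in a few lines of code. To be clear, this is a toy illustration and not RoboKiller’s actual algorithm: it reduces each short frame of audio to its dominant frequency, so a little static doesn’t change the fingerprint, while genuinely different audio produces different peaks.

```python
import numpy as np

def fingerprint(samples, frame=256):
    """Quantize the dominant frequency of each short audio frame.

    A toy, peak-based fingerprint: small distortion leaves the
    dominant frequency of each frame unchanged, so the hashes match.
    """
    hashes = set()
    for i in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[i:i + frame]))
        peak = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
        hashes.add((i // frame, peak // 2))      # coarse quantization
    return hashes

def similarity(fp_a, fp_b):
    """Jaccard overlap between two fingerprints (0.0 to 1.0)."""
    return len(fp_a & fp_b) / max(len(fp_a | fp_b), 1)

# Synthetic "recordings": a tone sweep, the same sweep with static,
# and an unrelated tone standing in for different speech.
t = np.linspace(0, 1, 8000)
original = np.sin(2 * np.pi * (440 + 200 * t) * t)
with_static = original + 0.05 * np.random.default_rng(0).normal(size=t.size)
different = np.sin(2 * np.pi * 1200 * t)

fp = fingerprint(original)
print(similarity(fp, fingerprint(with_static)))  # high: same audio, distorted
print(similarity(fp, fingerprint(different)))    # low: different audio
```

Production systems use far more robust features than a single spectral peak per frame, but the principle is the same: distortion-tolerant hashes of the audio itself, rather than anything a spoofed Caller ID can fake.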
Deepfakes won’t be able to fool audio fingerprinting tech, keeping you from having to discern what is real and what isn’t.
In October, more than 5.3 billion spam calls were made to Americans. That’s a massive 44% jump from just six months before.
The rise of neighbor spoofing was a significant contributor. Just imagine the increase we’ll see when deepfake robocalls become widely used.
RoboKiller was one of the first spam call blockers to get ahead of the neighbor spoofing problem. The same audio fingerprinting technology and machine learning that identify spam calls today make it the first app positioned to protect you as deepfake robocalls gain popularity.