
New AI tech opens the door to identity theft, scams, and propaganda

What happens when AI and machine learning evolve beyond the uncanny valley? How will we know what is real and what isn’t? AI and machine learning have advanced in leaps and bounds, but not always with a focus on our privacy or security.

In this exploratory piece, we consider what could happen with some of the biggest changes in AI, whether this could usher in a new era for cybercrime, and the biggest question of all: Are we finally transitioning to a post-truth world?

[Keep up with the latest in privacy and security. Sign up for the ExpressVPN blog newsletter.]

Stealing your face

Gone are the days when scammers could easily steal your profile picture, set up a fake social media account, and begin defrauding other people, all in your name. We’re not saying this doesn’t happen anymore, but reverse image search capabilities offered by various search engines have made it easier to determine whether the individual behind a profile image is genuine.

But there’s more to fake accounts nowadays. This Person Does Not Exist is a website that serves up AI-generated faces of people who, you guessed it, do not exist. Unlike a stolen profile picture of a real person, these AI-generated faces are virtually untraceable, which, as you can imagine, makes fraud investigations that much harder to conduct.

This year’s pandemic has completely normalized video calls for everything from education to medical consultations and employment screening. But what if the person you’re video conferencing with isn’t actually who they say they are? Face-swapping software is now so believable that an actor’s face can be convincingly replaced with another person’s. Anyone can be placed into anything, including you. Flat 2D portraits can now even be converted into 3D videos.

These high-quality images and videos generated by machine learning algorithms are called “deepfakes,” and they are capable of face swapping, facial mimicry, lip-syncing, and full-body motion. Did you ever want to see Sylvester Stallone as Kevin in Home Alone? Now you can. What about Jim Carrey as Jack Torrance in The Shining? That exists too. While these examples are novel, a far more menacing issue lurks in the shadows.

What if your likeness were used in a falsified surveillance video showing you committing a crime you never committed, in a place you’ve never been? To take it further, what if the environmental conditions could also be altered to any specification?

It is now possible to create photorealistic 3D scenes and environments with a technique called Neural Radiance Fields (NeRF), which represents a scene as a 5D function: three dimensions for the camera’s location and two for its viewing direction. NeRF works by training on a collection of photos of a single location, after which it can render entirely new views of that same location.
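To make that 5D idea concrete, here’s a minimal sketch, in PyTorch, of the kind of function a NeRF learns: a small neural network that takes a 3D position and a 2D viewing direction and returns a color and a density. The layer sizes here are illustrative, and a real NeRF adds positional encoding and volume rendering on top; this is the core idea only.

```python
# A sketch of the 5D function at the heart of NeRF: a small MLP mapping
# a 3D position plus a 2D viewing direction to color and density.
# Sizes are illustrative; real NeRFs add positional encoding and
# volume rendering on top of this.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),   # input: (x, y, z, theta, phi)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # output: (r, g, b, density)
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])       # density must be non-negative
        return torch.cat([rgb, sigma], dim=-1)

# Query the scene at one point, seen from one direction:
model = TinyNeRF()
point = torch.tensor([[0.1, 0.2, 0.3]])       # (x, y, z)
direction = torch.tensor([[0.5, 1.0]])        # (theta, phi)
print(model(point, direction))                # -> [r, g, b, density]
```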

Let’s go even further down the rabbit hole. What if you contested the authenticity of this hypothetical falsified video with your accusers, only to be met with an updated version edited to show a different time of day, or even a different season? It turns out that can actually happen. In 2017, researchers from Nvidia showcased an AI that could change videos from day to night (and vice versa) and from summer to winter (and vice versa).

Worse yet, what if your face were used in a pornographic film without your consent? Not only does this already happen, it’s more nefarious than you could imagine. The use of deepfakes in pornography has shifted from replacing a porn actress’s face with a celebrity’s to serving as a tool for revenge porn or even extortion.

The implications of all of these developments are both stark and terrifying.

Stealing your voice

It’s not just images and videos. Your voice can now be stolen too. Technologies such as Descript’s OverDub are leading the way in “ultra-realistic” voice cloning. OverDub is built on Lyrebird technology, which synthesizes a digital recreation of any voice: feed the program multiple vocal samples, and it aggregates the audio data into a realistic clone.

Novel examples of the technology include an AI-written track about Mark Zuckerberg “performed” by Eminem, Donald Trump “reciting” the infamous Darth Plagueis speech from Star Wars: Episode III—Revenge of the Sith, and Jay-Z “rapping” the “To be, or not to be” soliloquy from Shakespeare’s Hamlet. While it’s fun to listen to these and imagine cover versions of songs you’d love to hear, deepfake voice fraud is on the rise, with more and more scammers using the technique in spam calls.

In 2019, a deepfake voice was used to defraud a CEO out of 243,000 USD. The UK-based CEO of an unnamed energy firm was fooled into believing he was speaking with his boss because the cloned voice reproduced the slight German accent and cadence of his boss’s speech.

Stealing your fingerprints

Fingerprints are a universal marker of human identity thanks to their unique features: arches, loops, and whorls. Law enforcement has used fingerprint identification as far back as the late 19th century.

In 2018, researchers from New York University announced that not only had they used a neural network to synthesize artificial fingerprints, dubbed “DeepMasterPrints,” they had also discovered that these fingerprints could act as a “master key” for fooling biometric identification systems. The researchers found that their synthetic prints could imitate more than 20% of the samples in a biometric system.

DeepMasterPrints work by exploiting two flaws in fingerprinting systems. First, fingerprint sensors generally do not read an entire fingerprint; they read whichever part of the finger happens to touch the scanner. Second, some fingerprint features are more common than others. Together, these flaws mean that a synthetic print packed with common features will, by pure chance, match partial scans from many different people.
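A back-of-the-envelope calculation shows why those two flaws compound. The numbers below are illustrative assumptions, not figures from the NYU paper, but they capture the arithmetic: many lenient partial-print comparisons add up to a meaningful chance of a false match.

```python
# Back-of-the-envelope sketch of why partial-print matching is risky.
# Every number here is an illustrative assumption, not a figure from
# the DeepMasterPrints paper.

def false_match_odds(per_comparison_fmr: float,
                     partials_enrolled: int,
                     attack_prints: int) -> float:
    """Chance that at least one attack print matches at least one
    enrolled partial, assuming independent comparisons."""
    comparisons = partials_enrolled * attack_prints
    return 1 - (1 - per_comparison_fmr) ** comparisons

# A permissive sensor (0.1% false-match rate per comparison), a phone
# that enrolls 12 partial impressions, and a dictionary of 5 master
# prints yields a surprisingly high chance of unlocking the device:
print(f"{false_match_odds(0.001, 12, 5):.1%}")  # ~5.8%
```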

Stealing your creativity

As humans, one of the last weapons we have in our uphill battle with machines is creativity. The ability to create art has long been held up as the one thing that sets us apart from artificial intelligence, and it has often been said that the jobs of artists, writers, and musicians would be difficult for robots to replace.

That’s no longer true.

Is nothing sacred anymore?

Creating art

In October 2018, New York auction house Christie’s sold an AI-produced artwork entitled Portrait of Edmond Belamy for a whopping 432,500 USD. Unsurprisingly, this sent shockwaves through artistic and non-artistic circles alike. The algorithm behind the painting, a generative adversarial network, consists of two parts: a Generator and a Discriminator. The system was given a data set of 15,000 portraits painted between the 14th and 20th centuries, which the Generator used to create new portraits. The Discriminator then attempts to discern whether a given portrait was painted by a human or generated by the machine; when it can no longer tell the difference, the output is judged a success.
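For the curious, here’s a minimal sketch of that Generator/Discriminator tug-of-war, written in PyTorch and trained on toy 2D data rather than 15,000 portraits. The architecture and numbers are illustrative, not those of the Belamy system.

```python
# A minimal generative adversarial network: a Generator and a
# Discriminator trained against each other on toy 2D data.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a fake "sample".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Stand-in for the training portraits: points around (2, 2).
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])
    fake = G(torch.randn(64, latent_dim))

    # Discriminator learns to tell real samples from generated ones.
    d_loss = (loss(D(real), torch.ones(64, 1)) +
              loss(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the Discriminator.
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```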

Composing music

In 2019, AI research laboratory OpenAI created MuseNet, a deep neural network capable of generating four-minute compositions, using ten different instruments, in a variety of genres. MuseNet uses the same technology as GPT-2, a model trained to predict the next step in a sequence. In other words, you can feed it a portion of a song and it will generate what it thinks would logically come next. Here’s an example using Take Me Home, Country Roads by John Denver. Give it a listen. It’s simultaneously serene and scary.
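You can try the underlying trick yourself with the freely available GPT-2, which does for text what MuseNet does for musical notes. This sketch assumes the Hugging Face transformers package is installed; the prompt is just an example.

```python
# Next-token continuation with GPT-2: give the model the start of a
# sequence and it samples what it thinks should come next.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Country roads, take me home, to the place"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation one predicted token at a time.
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```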

The written word

The above two examples are great, philosophically speaking, but what if you’re not an artist or a musician? How can AI creativity encroach on your daily life as a non-creative?

Introducing GPT-3, a deep learning model capable of producing a variety of written content that is almost indistinguishable from human writing. A step up from the aforementioned GPT-2, the latest iteration in OpenAI’s GPT-n series has taken the internet by storm.

Like the auto-generative capabilities of OpenAI’s MuseNet mentioned above, GPT-3 can produce a variety of unique content types based on a set of parameters. You could, for example, ask GPT-3 to write you a short story or screenplay based on nothing more than a genre or writing prompt. While GPT-3 is great at generating creative text and could handle some light content marketing tasks, it does less well at producing factually accurate content. One of the most notable real-world examples of its use is an article published in The Guardian in September 2020 that was written entirely by the model.
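For a sense of how little input this takes, here’s a sketch of prompt-driven generation using OpenAI’s API as it looked around GPT-3’s launch. It assumes the openai Python package and an API key; the engine name and parameters reflect that era of the API.

```python
# Prompt-driven generation with GPT-3 via OpenAI's API, circa 2020.
# Requires the openai package and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",            # the original GPT-3 engine name
    prompt=("Write the opening scene of a noir screenplay "
            "set in a rain-soaked city."),
    max_tokens=150,
    temperature=0.8,             # higher = more creative, less factual
)
print(response.choices[0].text)
```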

In August 2020, Liam Porr, a student at the University of California, Berkeley, used GPT-3 to generate a fake blog that became the most talked-about website on Hacker News. Porr created the blog as an experiment and was surprised at how easily it fooled thousands of readers. While the blog was trending on Hacker News, only three or four people voiced suspicions that it might be the work of an AI; it was that convincing.

In October 2020, a GPT-3-powered bot began interacting with users on r/AskReddit under the username /u/thegentlemetre. Over the course of a week, the bot interacted with a number of Reddit users without anybody being tipped off to its true identity.

On the surface, these examples seem quite tame, but when applied to social engineering or propaganda, everything takes a more sinister turn. There are growing concerns that, moving forward, propaganda and disinformation will be AI-generated. Given the capabilities described above, it’s not hard to imagine fake business reviews, fabricated accusations of racism on social media platforms, and even fakes that influence public policy.

Read more: How to completely disappear online in 4 easy steps