
Facial swapping software is now open-source

Developed by researchers from the Massachusetts Institute of Technology and NVIDIA, vid2vid technology allows anybody to swap out the surface appearance of objects in a video.

Most of us know this kind of video-to-video synthesis from ‘face swapping,’ where an algorithm detects a face and overlays another face on top of it. But the technology applies to other objects too, such as cars or shop fronts.
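To make the basic detect-and-overlay idea concrete, here is a minimal sketch in Python using OpenCV’s bundled Haar-cascade face detector. The file names are hypothetical, and this crude pixel paste is nothing like vid2vid’s learned synthesis, which blends pose, lighting, and expression; it only illustrates the first step of finding a face and replacing it.

```python
# Minimal sketch of "detect a face, paste another on top" (illustrative only).
# Requires the opencv-python package; frame.jpg and other_face.jpg are
# hypothetical input files.
import cv2

frame = cv2.imread("frame.jpg")          # one video frame
new_face = cv2.imread("other_face.jpg")  # the face to paste in

# Load OpenCV's pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# For each detected face, resize the replacement face to the bounding box
# and overwrite those pixels outright.
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    frame[y:y + h, x:x + w] = cv2.resize(new_face, (w, h))

cv2.imwrite("swapped_frame.jpg", frame)
```

Run per frame over a whole video and you have the naive version of face swapping; learned approaches like vid2vid replace the crude paste with a neural network that generates each output frame.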

Copyright issues and identity theft

Face-swapping technology, like any other video- and photo-altering technique, is widely feared for its potential to create defamatory and copyright-infringing content. After all, it allows anybody to put the faces of famous actors into homemade movies (so-called deepfakes), or even to pin evidence of a crime on the wrong suspect.

What’s more, the researchers behind vid2vid have published the code under a Creative Commons license, which means anyone can use or modify it as they see fit (as long as they give proper credit to the authors).

Mass adoption is damage limitation

Making image-swapping technology open-source, however, is an important step in limiting harm, as it levels the playing field between well-funded malicious actors and the public.

It may also deter people from using such technology maliciously, as any video comes under heavier scrutiny when viewers know how easily such material can be faked.

In their paper, the authors explain the math behind their models and demonstrate what their code can do by swapping faces, trees, and buildings.
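For readers curious about that math: vid2vid builds on conditional generative adversarial networks. In broad strokes (a simplified summary, not the paper’s exact sequential formulation), a generator G learns to turn a source frame s, such as a segmentation map, into a realistic output frame, while a discriminator D learns to tell real pairs (s, x) from generated ones:

\min_G \max_D \; \mathbb{E}_{(s,x)}\left[\log D(s,x)\right] + \mathbb{E}_{s}\left[\log\left(1 - D\big(s, G(s)\big)\right)\right]

The two networks are trained against each other until the generated frames become hard to distinguish from real footage.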

Image swapping software in action

In a documentary from February 2018, a different team of researchers working on image-swapping software demonstrates its capabilities by replacing decorations on a wall with offensive symbols, and the face of an actor with that of President Trump.

“Don’t believe everything you see on the internet.”
– Abraham Lincoln

The future of doctored images and videos

The credibility of online information is already quite low. Tools like Photoshop can falsely attribute images and intentionally manipulate events, distorting our impression of the world.

As technology advances, it will become increasingly difficult to verify online sources accurately, and we may see the day when digital images are no longer admissible in court.

Technologies such as decentralized timestamping might make it more difficult to attribute fake footage to events in the past, but they will still allow anybody to willfully manipulate content on the fly.
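The core idea behind such timestamping is simple: fingerprint the footage with a cryptographic hash and record that fingerprint somewhere tamper-evident. Below is a minimal Python sketch of the local hashing step only; real systems such as OpenTimestamps go further and anchor the digest in a blockchain, and the file name here is hypothetical.

```python
# Minimal sketch of content timestamping: hash a video file and record the
# digest alongside the time it was taken. This does NOT anchor the record
# anywhere tamper-evident; it only shows the fingerprinting step.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

record = {
    "sha256": fingerprint("original_footage.mp4"),  # hypothetical file
    "timestamp": int(time.time()),                  # when the hash was taken
}
print(json.dumps(record))
```

Later, anyone can re-hash the footage: a matching digest shows the file existed unmodified when the record was made (provided the record itself is anchored somewhere trustworthy), while a mismatch proves alteration. What no timestamp can do is stop someone from fabricating new footage and timestamping that instead.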