Creating Deepfakes to Fight Deepfakes: Will Algorithmic Detection Be Enough?

Source: Casey Chin, WIRED

A video of President Donald Trump circulated on Twitter in 2018 stating, “As you know I had the balls to withdraw from the Paris climate agreement. And so should you.” Commenters expressed shock and confusion at the president’s rhetoric. The rudimentary deepfake had been doctored by Belgium’s Flemish Socialist Party to urge Belgians to follow in America’s footsteps and withdraw from the climate agreement. While the party’s intention was not to spread disinformation, the damage was done.

Deepfake videos show real people saying and doing things they never actually said or did. This face-replacement process uses deep learning built on generative adversarial networks (GANs): two neural networks trained against each other on data sets of photos and videos of an individual, one generating synthetic faces and the other judging whether they look real. The trained model can then transpose one face onto another person’s.
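
At its core, the adversarial setup pits two models against each other in a loop. Below is a minimal, illustrative GAN training sketch in PyTorch; the tiny fully connected networks and random tensors standing in for real face crops are assumptions made for brevity, while production deepfake pipelines use far larger convolutional models.

```python
# Minimal GAN sketch: a generator and a discriminator trained adversarially.
# Random tensors stand in for real face crops; this is illustrative only.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),          # produces a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # estimates P(real)
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for _ in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1       # placeholder for real face crops
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: learn to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make samples the discriminator calls "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```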

Manipulated videos are most commonly seen in the pornography industry, which accounts for 96% of deepfakes on the internet according to research by Deeptrace. Publicly available software, including the popular FakeApp, has been used by amateurs to create deepfake porn by placing celebrity faces over porn actresses’ bodies.

The weaponization of deepfakes extends beyond the porn industry into politics, with world leaders targeted by deceptive videos that carry major implications for the fake news/post-truth epidemic plaguing modern politics. These videos can sabotage reputations and spread disinformation. The technology to produce deepfakes is also rapidly evolving as deep learning algorithms become more advanced and fall into the hands of anyone motivated to create content intended to spread falsehoods. If we cannot trust the content we see with our own eyes, democratic systems destabilize.

A “shallowfake” of Speaker of the House Nancy Pelosi received millions of views on Facebook in 2019. Made with simple video editing software that slowed down the original clip, it appeared to show the Speaker slurring her words during a speech. Donald Trump’s personal attorney Rudy Giuliani tweeted the video, prompting widespread mockery of Pelosi, who was accused of being inebriated. The example shows how manipulated video can defame by putting words into someone else’s mouth, raising issues of consent that grow alongside increasing media distrust.
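
The manipulation behind such a shallowfake requires no AI at all. As a rough illustration, this OpenCV sketch (with a hypothetical speech.mp4 as input) re-times a clip simply by writing its frames out at a lower frame rate; audio handling is omitted.

```python
# Sketch of how trivial a "shallowfake" is: re-time a clip by writing the
# same frames at a lower frame rate. "speech.mp4" is a hypothetical input.
import cv2

cap = cv2.VideoCapture("speech.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing at 75% of the original fps makes playback ~25% slower,
# enough to make normal speech sound sluggish or slurred.
out = cv2.VideoWriter("slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 0.75, (w, h))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```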

Big tech companies are developing large data sets of deepfakes to train algorithms to detect them. It appears that their best solution for removing deepfakes is to first create thousands of them and release them onto the internet, like a virus in need of antibodies.

In September 2019 Facebook announced a partnership with Microsoft and researchers at several universities, including the University of Oxford, to form the Deepfake Detection Challenge (DFDC). Facebook dedicated $10 million to the challenge and commissioned a dataset using paid actors so that the AI community could collectively find algorithms that detect doctored videos.

Facebook’s Deepfake Detection Challenge

“This is a fundamental threat to freedom. Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance. I believe we urgently need new tools to detect and characterize this misinformation.”

Professor Philip H. S. Torr of the University of Oxford, who is working with Facebook on the DFDC.

Google is also preparing to combat deepfakes trickling into its search engine and in 2019 announced that it had created a database of over three thousand deepfakes. According to Google’s AI blog, this dataset was made by recording videos of consenting actors and creating deepfakes that transposed their faces onto the bodies of other actors from television and film. The dataset is now freely available to the research community in the hope of producing algorithms that can tell deepfakes apart from their original forms.
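
To illustrate how such a dataset might be put to work, here is a minimal, hypothetical detector-training sketch in PyTorch: fine-tuning an off-the-shelf image classifier to label face frames as real or fake. The frames/real and frames/fake folder layout is an assumption, not the structure of any actual release.

```python
# Sketch: fine-tune a standard image classifier to output real vs. fake.
# Assumes extracted frames in folders frames/real/*.jpg and frames/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("frames", transform=tfm)   # classes: fake, real
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")       # start from ImageNet
model.fc = nn.Linear(model.fc.in_features, 2)          # real-vs-fake head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                          # one pass over the data
    loss = loss_fn(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```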

Some argue that the tech platforms’ efforts to find algorithmic solutions to deepfakes are unambitious. A former managing editor of fact-checking at Facebook, Brooke Binkowski, stated in an interview with Politico that Facebook lacks the incentive to regulate deepfakes because it is more concerned with user engagement: deepfake videos, and fake news in general, attract high attention, which earns the platform revenue.

A recent study found that fake news on Twitter spreads ten times faster than true news stories. According to the study, fake stories are retweeted 70% more often because sensationalized content plays on our emotions, which draws higher interest. A deepfake of a politician saying something offensive can therefore spread virally on these sites and damage that candidate’s reputation if viewers are unaware that the video has been manipulated.

Researchers at the University of California, Berkeley have designed a deepfake detector using an algorithm that can distinguish existing AI deepfakes of politicians from authentic footage without having to create new deepfakes. The algorithm picks up on the spatio-temporal glitches that often occur when AI software struggles to match the replaced face to the unique facial movements, called “soft biometrics,” of the person being impersonated.

Soft biometrics include subtle features like Donald Trump’s pursed lips or raised eyebrows. The algorithm learns to catch these glitches and has so far achieved 92% accuracy. However, this achievement may one day become useless because, “deepfake technology is developing with a virus / anti-virus dynamic,” according to the report.
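
The sketch below is only a loose analogy of the soft-biometrics idea: a hypothetical extract_mannerisms() feature extractor paired with a one-class SVM trained on authentic footage of one person, so that clips which do not move like that person are flagged as outliers.

```python
# Sketch of a soft-biometric check: model one person's characteristic facial
# mannerisms from authentic footage, then flag clips that deviate.
# extract_mannerisms() is a hypothetical stand-in for a real feature
# extractor (e.g., per-clip statistics of facial action units / head pose).
import numpy as np
from sklearn.svm import OneClassSVM

def extract_mannerisms(clip_path: str) -> np.ndarray:
    """Placeholder: return a fixed-length vector of facial-behavior stats."""
    rng = np.random.default_rng(abs(hash(clip_path)) % (2**32))
    return rng.normal(size=20)                     # 20 made-up features

# Fit on clips known to be authentic footage of the target person.
authentic = np.stack([extract_mannerisms(f"real_{i}.mp4") for i in range(50)])
detector = OneClassSVM(nu=0.05, kernel="rbf").fit(authentic)

# Score a suspect clip: -1 means "does not move like this person".
suspect = extract_mannerisms("suspect.mp4").reshape(1, -1)
print("authentic" if detector.predict(suspect)[0] == 1 else "possible deepfake")
```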

Deepfake Cosmogram. Source: Author

Will these efforts to use algorithms to detect and filter deepfakes from the internet be enough to combat disinformation in time to prevent the sociopolitical damage they can cause? According to the recommendations of an NYU report on deepfakes and the 2020 US presidential election, tech platforms need not only to develop a successful detector quickly but also to outright ban content that spreads disinformation, in the same stern tone with which they remove hate speech. The report also suggests raising awareness of the issue as the second frontier in protecting the public.

In the future, the software that produces deepfakes may become sophisticated enough to outsmart algorithmic detectors, and the detectors themselves can be exploited. Mark Anderson of IEEE warns that “an overly trusted detection algorithm that can be tricked could be weaponized by those seeking to spread false information.” Deepfake detectors can therefore serve as a double-edged sword in the battle against disinformation.
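
One concrete way a differentiable detector can be tricked is with an adversarial perturbation. The sketch below applies the well-known fast gradient sign method (FGSM) to a toy stand-in detector; the model and label convention are assumptions, but the attack pattern is real.

```python
# Sketch of why a trusted detector is attackable: an FGSM perturbation
# nudges a fake frame so the detector's "real" score goes up.
# `detector` stands in for any differentiable real-vs-fake classifier.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))  # toy model
fake_frame = torch.rand(1, 3, 224, 224, requires_grad=True)
real_label = torch.tensor([1])                     # class 1 = "real" (assumed)

loss = nn.CrossEntropyLoss()(detector(fake_frame), real_label)
loss.backward()

# Step *against* the gradient so the frame looks more "real" to the detector.
epsilon = 0.01                                     # perturbation budget
adversarial = (fake_frame - epsilon * fake_frame.grad.sign()).clamp(0, 1)
print(detector(adversarial).softmax(dim=1))        # probability shifts toward "real"
```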

The governmental battle against deepfakes has already begun in the US. In October 2019 California passed law AB 730, criminalizing video and audio that superimposes a political candidate’s image onto another body within sixty days of an election in order to prevent election tampering. The law aims to protect the speech and image of politicians from malicious deepfake videos.

However, regulating deepfakes is a slippery slope, as any governmental intervention in online content can be argued to violate free speech and be seen as an overreach into personal freedom. Despite this pushback, a recent survey of US legislation found that twelve bills have been placed before Congress and two states apart from California have passed laws of their own: in Virginia, deepfake pornography is now considered a cybercrime, and in Texas, deepfakes that defame political candidates and interfere with elections have been criminalized.

Despite these collaborative efforts between tech platforms and government to curtail deepfakes, once content is uploaded to the internet it never truly goes away, and these videos will persist. However, the effort being put into deepfake detection is a hopeful sign that the global alarm has been sounded.