‘Deepfake is the future of content creation’


A few months ago, millions of TV viewers across South Korea were watching the MBN channel to catch the latest news.

At the top of the hour, regular news anchor Kim Joo-Ha started to go through the day’s headlines. It was a relatively normal list of stories for late 2020 – full of Covid-19 and pandemic response updates.

Yet this particular bulletin was far from normal, as Kim Joo-Ha wasn’t actually on the screen. Instead, she had been replaced by a “deepfake” version of herself – a computer-generated copy designed to replicate her voice, gestures and facial expressions.

Viewers had been informed beforehand that this was going to happen, and South Korean media reported a mixed response after people had seen it. While some people were amazed at how realistic it was, others said they were worried that the real Kim Joo-Ha might lose her job.

MBN said it would continue to use the deepfake for some breaking news reports, while the firm behind the artificial intelligence technology – South Korean company Moneybrain – said it would now be looking for other media buyers in China and the US.

When most people think of deepfakes, they imagine fake videos of celebrities. In fact, only last week one such bogus – but very lifelike – video of Tom Cruise made headlines around the world after it appeared on TikTok.


Despite the negative connotations surrounding the colloquial term deepfakes (people don’t usually want to be associated with the word “fake”), the technology is increasingly being used commercially.

More politely known as AI-generated video, or synthetic media, the technology is seeing rapidly growing use in sectors including news, entertainment and education, and it is becoming increasingly sophisticated.

One of the early commercial adopters has been Synthesia, a London-based firm that creates AI-powered corporate training videos for the likes of global advertising firm WPP and business consultancy Accenture.

“This is the future of content creation,” says Synthesia chief executive and co-founder Victor Riparbelli.

To make an AI-generated video using Synthesia’s system, you simply pick from a number of avatars, type in the words you wish them to say, and that is pretty much it.


Mr Riparbelli says this means that global firms can very easily make videos in different languages, such as for in-house training courses.

“Let’s say you have 3,000 warehouse workers in North America,” he says. “Some of them speak English, but some may be more familiar with Spanish.

“If you have to communicate complex information to them, a four-page PDF is not a great way. It would be much better to do a two or three-minute video, in English and Spanish.

“If you had to record every single one of those videos, that’s a massive piece of work. Now we can do that for [little] production costs, and whatever time it’ll take someone to write the script. That pretty much exemplifies how the technology is used today.”
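As a rough illustration of the workflow Mr Riparbelli describes – pick an avatar, supply a script, get back a rendered video – the sketch below shows what such a request might look like in code. The endpoint, field names and helper function are hypothetical placeholders for this kind of platform, not Synthesia’s actual API.

```python
# A hypothetical sketch of a script-to-video request for this kind of platform.
# The endpoint, field names and function below are illustrative assumptions,
# not Synthesia's actual API.
import requests


def create_training_video(script_text: str, avatar_id: str, language: str) -> str:
    """Submit a script and avatar choice; return an ID for the rendered video."""
    response = requests.post(
        "https://api.example-video-platform.com/v1/videos",  # placeholder URL
        json={
            "avatar": avatar_id,    # which on-screen presenter to render
            "script": script_text,  # the words the avatar will speak
            "language": language,   # e.g. "en" for English, "es" for Spanish
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["video_id"]


# The same briefing rendered twice, once per language, for the warehouse example:
# video_en = create_training_video("Here is how to log a damaged pallet...", "avatar_01", "en")
# video_es = create_training_video("Así se registra un palé dañado...", "avatar_01", "es")
```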

Mike Price, the chief technology officer of ZeroFox, a US cyber-security company that tracks deepfakes, says their commercial use is “growing significantly year over year, but exact numbers are difficult to pin down”.

However, Chad Steelberg, chief executive of Veritone, a US AI technology provider, says that the increasing concern about malicious deepfakes is holding back investment in the technology’s legitimate, commercial use.

“The term deepfakes has definitely had a negative response in terms of capital investment in the sector,” he says. “The media and consumers, rightfully so, can clearly see the risks associated.

“It has definitely hindered corporations as well as investors from piling into the technology. But I think you are starting to see that crack.”

 


 

Mike Papas, chief executive of Modulate, an AI firm that allows users to create the voice of a different character or person, says that firms in the wider commercial synthetic media sector “really care about ethics”.

“It’s amazing to see the depth of thought these people put into it,” he says. “That has ensured that investors also care about that. They’re asking about ethics policies, and how you’re thinking about it.”

Lilian Edwards, professor of law, innovation and society at Newcastle Law School, is an expert on deepfakes. She says that one issue surrounding the commercial use of the technology that hasn’t been fully addressed is who owns the rights to the videos.

“For example, if a dead person is used, such as [the actor] Steve McQueen or [the rapper] Tupac, there is an ongoing debate about whether their family should own the rights [and make an income from it],” she says.

“Currently this differs from country to country.”

Deborah Johnson, professor of applied ethics, emeritus, at the University of Virginia, recently co-wrote an article entitled “What To Do About Deepfakes?”.

She says: “Deepfakes are part of the larger problem of misinformation that undermines trust in institutions and in visual experience – we can no longer trust what we see and hear online.

“Labelling is probably the simplest and most important counter to deepfakes – if viewers are aware that what they are viewing has been fabricated, they are less likely to be deceived.”

Prof Sandra Wachter, a senior research fellow in AI at Oxford University, says that deepfake technology “is racing ahead”.


“If you watched the Tom Cruise video last week, you can see how good the technology is getting,” she says. “It was far more realistic than the President Obama one from four years ago.

“We shouldn’t get too fearful of the technology, and there needs to be a nuanced approach to it. Yes there should be laws in place to clamp down on bad and dangerous things like hate speech and revenge porn. Individuals and society should be protected from that.

“But we shouldn’t have an outright ban on deepfakes for satire or freedom of expression. And the growing commercial use of the technology is very promising, such as turning movies into different languages, or creating engaging educational videos.”

One such educational use of AI-generated videos is at the University of Southern California’s Shoah Foundation, which houses more than 55,000 video testimonies from Holocaust survivors.

A Holocaust survivor with his avatar at the Shoah Foundation

 

Its Dimensions In Testimony project allows visitors to ask questions that prompt real-time responses from the survivors in the pre-recorded video interviews.
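Conceptually, such a system has to match each visitor’s question to the closest pre-recorded answer clip. The sketch below illustrates one very simple way that lookup could work, using plain string similarity – an assumption for illustration only, not the Shoah Foundation’s actual implementation.

```python
# A minimal sketch of matching a visitor's question to the closest pre-recorded
# answer clip using plain string similarity. This is an illustrative assumption,
# not the Shoah Foundation's actual implementation.
from difflib import SequenceMatcher

# Each entry pairs the question a clip answers with the video file to play.
CLIPS = [
    {"question": "where were you born", "video": "clip_birthplace.mp4"},
    {"question": "how did you survive the war", "video": "clip_survival.mp4"},
    {"question": "what message do you have for young people", "video": "clip_message.mp4"},
]


def best_clip(visitor_question: str) -> str:
    """Return the video whose indexed question most resembles the visitor's."""
    q = visitor_question.lower().strip("?")
    scored = [
        (SequenceMatcher(None, q, clip["question"]).ratio(), clip["video"])
        for clip in CLIPS
    ]
    return max(scored)[1]


# Example: best_clip("Where were you born?") -> "clip_birthplace.mp4"
```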

Mr Steelberg says that in the future such technology will enable grandchildren to have conversations with AI versions of deceased elderly relatives. “That’s game changing, I think, for how we think about our society.”

 

