By Aaren Madden. Photo: UVic Photo Services. Originally published in the Summer 2021 edition of Business Class magazine.
There is a meme bouncing around online showing a photo of Abraham Lincoln and quoting, “Don’t believe everything you read on the internet.” While the meme is both wryly amusing and urgently true, the emergence of deepfakes suggests we should also question everything we see on the internet. “We trust our senses, and of all the senses, vision dominates our perception of the world around us,” says Jan Kietzmann, Associate Professor of Innovation and Information Systems at Gustavson (and incidentally, a BCom ’98 grad from the school). “When we see something, we accept it as true, and it’s really hard to convince us otherwise.”
In simple terms, deepfakes are mostly videos in which a person appears to be doing or saying things they have neither done nor said. “By understanding what two faces have in common, we can swap one face for another,” Kietzmann explains. Rapid advances in machine-learning technology make it increasingly difficult to differentiate real from fake content. Out of curiosity, Kietzmann has deepfaked himself into movies and episodes of Friends. “It’s kind of fun,” Kietzmann says. “But it’s important that we understand how deepfakes are created” in order to understand their possible uses and implications.
In business and advertising, the potential for disruption by deepfakes is limited only by the imagination. One advertising firm helped the Salvador Dalí Museum in Florida create a deepfake kiosk featuring the long-dead artist to engage with visitors. Even commercials are seizing on the novelty of deepfakes to create connection and amusement. Moving forward, “we have the opportunity to move from consumers as the audience to consumers as co-creators of value in advertising,” Kietzmann’s research suggests. “For instance, if people can already star in their favourite movie scenes, how about advertisers inviting consumers to replace models to perform in ads or to try on clothes on their actual body in virtual changing rooms?”
“For the movie industry this [technology] holds a number of promises,” Kietzmann notes. For starters, models and actors may end up licensing their likeness to be used in deepfakes instead of traveling to various locations for in-person shoots. A deepfake version of scenes from Hollywood’s The Irishman, for instance, is one of many examples at least on par with traditional special effects. Then there’s the currently time-consuming process of movie translation. “The entire dubbing industry, unless they are on point right now, will go out of business. I think a market leader will emerge that harnesses deepfake technology to make the same person appear to give a message in any language—no dubbing necessary,” Kietzmann predicts.
Deepfakes and similar technologies are also being used in meaningful and life-changing ways, including keeping Holocaust survivors’ stories alive. People stricken with medical conditions that take away their ability to speak are now able to use computers to communicate in their own voices. “These are all good examples of what we can do with this relatively basic technology,” Kietzmann says.
However—and it’s a big however—Kietzmann doesn’t sugar-coat the risks. “There are lots of individual, non-consensual use cases where [deepfakes] could be harmful. But overall what I think is at stake is the institution of trust as a bedrock of our social and economic fabric,” he says. Recent instances of deepfakes being used for nefarious purposes include one woman who created false videos to damage the reputations of her daughter’s cheerleading rivals, and growing numbers of people who are victims of fake revenge campaigns. Warns Kietzmann: “We could have political adversaries feeding fake material onto social media platforms to show our politicians doing things they shouldn’t be doing, which could really change public opinion.”
In this larger context, deepfakes could generate catastrophic mistrust between businesses and consumers. In an era when truth is already contested, Kietzmann’s research considers the value and meaning of authenticity with studies designed to “understand how people process deepfakes and how they perceive deepfakes might be violating their understanding of ethics and the truth.”
Alas, “technology always outruns us,” Kietzmann says. “It is commonly accepted that the law is about five years behind.” In the meantime, he adds, “we need to come up with ways to manage the negative impact these technologies will have.” To that end, he and his colleagues have devised a strategy called the R.E.A.L. framework: Record original content; Expose deepfakes early; Advocate for legal protection; and Leverage trust. The framework aims to counter deepfake tricks by giving people and organizations the tools to identify, label and control the dissemination of deepfakes.
For now, the biggest piece businesses and individuals can focus on is trust. “Strong brands will be better positioned to weather deepfake assaults,” Kietzmann wrote in the recent Business Horizons article, “Deepfakes: Trick or Treat?” For individuals, he reflects, “the more I spend time with deepfake research, the more I think about how the construct of trust changes right in front of our eyes. When we can no longer trust online video ‘evidence’ to be true, we need to think much more critically about what we see before we start believing and sharing it.”