Deepfake technology makes misinformation ever more believable—and makes it harder to counter
By Jennifer Hughes
As Canadians prepare for another federal election, the risk of bad actors confusing and misleading the electorate is a greater concern than ever. Using misinformation to sway voters and to sow bias and fear is a practice older than the printing press, but new technology can make a lie look very convincing and very hard to refute.
Deepfake technology uses artificial intelligence (AI) to modify or create videos so that they appear to show something that never actually happened. The name combines “deep learning”—an AI process based on artificial neural networks—with “fake.” Using deepfake technology, someone can fabricate a video by pasting one person’s face onto another person’s body. The results are often indistinguishable from real footage, and video evidence is very compelling.
For example, if a video surfaced online showing a political candidate making a statement that would render them essentially unelectable, the candidate could have a very hard time proving the statement was never made, and much of the damage would already be done.
What do deepfakes mean for the future? It’s hard to say while the technology is still developing, but they are likely to create a major shift in how we receive and rely on information.
According to this year’s Cyber Threats to Canada’s Democracy report, “Improvements in artificial intelligence are likely to enable interference activity to become increasingly powerful, precise, and cost-effective.” Because of this, we are likely to see more deepfake videos emerge in the future.
Meanwhile, both the Canadian and American governments are working to find ways to counter deepfakes.