
Visual evidence provides the gold standard of truth. So it is that news or documentary footage has an uncanny ability to take us in, presenting the illusion of an eyewitness experience, even though it is always a construction and not the truth. The rise of deepfake technology takes this a radical step further. The consequences for our trust in any testimony are profound, writes Don Fallis.

In order to survive and flourish, people need to constantly acquire knowledge about the world. And since we do not have unlimited time and energy to do this, it is useful to have sources of information that we can simply trust without a lot of verifying. Direct visual perception is one such source. But we cannot always be at the right place, at the right time, to see things for ourselves. In such cases, videos are often the next best thing. We can find out what is going on at great distances from us by watching videos on the evening news, for instance. Moreover, we make significant decisions based on the knowledge that we acquire from videos. Videos recorded by smart phones have led to politicians losing elections (see Konstantinides 2013), to police officers being fired and even prosecuted (see Almukhtar et al. 2018), and, most recently, to mass protests around the world (see Stern 2020). And we are significantly more likely to accept video evidence than other sources of information, such as testimony. Thus, videos are extremely useful when collective agreement on a topic is needed (see Rini 2019).

But the value of videos as a source of knowledge is now under threat from deepfakes: realistic fake videos created using new machine learning (specifically, deep learning) techniques (see Floridi 2018). Deepfakes can depict people saying and doing things that they did not actually say or do. A high-profile example is "face-swap porn," in which the faces in pornographic videos are seamlessly replaced with the faces of celebrities (see Cole 2018). But for almost any event, these techniques can be used to create fake videos that are extremely difficult to distinguish from genuine videos. Notably, the statements or actions of politicians, such as former President Obama, can be, and have been, fabricated (see Chesney and Citron 2019; Toews 2020).

Deepfake technology threatens to seriously interfere with our ability to acquire knowledge from videos. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an "infopocalypse" where we cannot tell what is real from what is not (see Rothman 2018; Schwartz 2018; Warzel 2018; Toews 2020). Philosophers such as Deborah Johnson, Luciano Floridi, Regina Rini (2019), and Michael LaBossiere (2019) have now issued similar warnings. As Floridi puts it, "do we really know what we're watching is real? Is that really the President of the United States saying what he's saying?"

Admittedly, realistic fake videos of events that did not actually occur are nothing new. For example, during World War Two, the Nazis created propaganda films depicting how well Jews were treated under Nazi rule (see Margry 1992). So, there is certainly a sense in which deepfakes do not pose a brand new threat to knowledge. Nevertheless, deepfake technology threatens to drastically increase the number of realistic fake videos in circulation. Thus, it may seriously interfere with our ability to acquire knowledge from videos. As Johnson puts it, "we're getting to the point where we can't distinguish what's real, but then, we didn't before. What is new is the fact that it's now available to everybody, or will be. The whole business of trust and reliability is undermined by this stuff."

There are three ways that deepfakes pose a threat to knowledge. First, deepfakes can lead people to acquire false beliefs. That is, people might take deepfakes to be genuine videos and believe that what they depict actually occurred. And this can easily have dire practical consequences.
