Rapid developments in information technology and artificial intelligence (AI) present considerable challenges. Governments, institutions, enterprises, and civilians have become increasingly dependent on digital information systems for sensitive infrastructure, including virtual networks involved in national defence, financial transactions, repositories of personal data, and the healthcare system, thereby heightening their vulnerability to cyber threats (Farwell & Rohozinski, 2011; Nye, 2017; Valeriano & Maness, 2015; Weimann, 2015). Further issues arise in liberal democracies as a consequence of technological advancements in the field of political communication, where the effective functioning of democracy is undermined by the dissemination of fake news (Figueira & Oliveira, 2017; MacKenzie & Bhatt, 2020; Zannettou et al., 2019), micro-targeting (Wilson, 2017; Zuiderveen Borgesius et al., 2018), and cyber subversion and information warfare (Paterson & Hanley, 2020; Polyakova & Boyer, 2018). These developments harm key pillars of the democratic system through the spreading of disinformation, the polarisation of social and political divides, the erosion of trust in electoral results, and the disintegration of a shared public sphere where people with differing views can engage in constructive debate. While the implications of these developments are considerable, a recent breakthrough in information technology and AI is predicted to have an even greater impact on politics and society: the rise of deepfakes. Deepfake technology refers to machine learning techniques that can be used to produce realistic-looking and -sounding video or audio files of individuals doing or saying things they did not necessarily do or say. Since our image and voice are closely linked to our identity, protection against the manipulation of hyper-realistic digital representations of our image and voice should be considered a fundamental moral right in the age of deepfakes.
Deepfake technology presents significant ethical challenges. The ability to produce realistic-looking and -sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception. The literature that addresses the ethical implications of deepfakes raises concerns about their potential use for blackmail, intimidation, and sabotage, ideological influencing, and incitement to violence, as well as broader implications for trust and accountability. While this literature importantly identifies and signals the potentially far-reaching consequences, less attention is paid to the moral dimensions of deepfake technology and deepfakes themselves. This article helps fill that gap by analysing whether deepfake technology and deepfakes are intrinsically morally wrong, and if so, why. The main argument is that deepfake technology and deepfakes are morally suspect, but not inherently morally wrong. Three factors are central to determining whether a deepfake is morally problematic: (i) whether the deepfaked person(s) would object to the way in which they are represented; (ii) whether the deepfake deceives viewers; and (iii) the intent with which the deepfake was created. The most distinctive aspect that renders deepfakes morally wrong is their use of digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed.