What are Deepfakes?
Jeremy Peckham - Research Lead at the AI, Faith and Civil Society Commission
What is a deepfake?
Deepfake is a term used to describe fake media, usually of people, generated by AI models that learn from images and speech recordings of a real person.
In the past it was fairly easy to spot deepfake images, but the technology has become so realistic that it is now often almost impossible to distinguish a deepfake from genuine footage of the person targeted.
How are deepfakes used?
1. To mimic celebrities
Various well-publicised examples have been around for a few years, such as the Channel 4 production of a fake speech by the UK’s Queen, broadcast at Christmas 2020. Politicians and celebrities from Barack Obama and Mark Zuckerberg to Tom Cruise have also been targeted in deepfake videos (Creative Bloq). The algorithms are also being used to bring people “back from the dead”, creating interactive avatars of those who have died from old recordings and images.
2. For Film Production
The same technology, however, is seen as a gift to film producers: usually referred to there as AI-generated video or synthetic media, it enables them to cut down on editing time or even to remaster old footage.
3. Non-consensual pornography
Deepfake technology is perhaps best known for its use in non-consensual pornography, where images of a person are superimposed onto a pornographic scene, making it look as though that person is involved in a sexual act.
4. Targeting children
More recently, since free deepfake algorithms became widely available, even children have become targets of bullying: pictures of them posted on Facebook are manipulated by AI so that their faces appear on someone else’s body, even on Dustin Hoffman’s autistic character from Rain Man (iNews).
Dangers of Deepfakes
Over 1.2 trillion digital images and videos were captured in 2017 – a figure that increases by about 10% each year. Around 85% of those images are captured using smartphones, carried by over 2.7 billion people around the world.
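To put those figures in context, a rough back-of-the-envelope projection, assuming the base figure and the roughly 10% annual growth rate quoted above (the article's estimates, not measured data), can be sketched in a few lines of Python:

```python
# Rough projection of annual image/video captures, using the figures quoted
# above: ~1.2 trillion captures in 2017, growing ~10% per year, ~85% of them
# taken on smartphones. Illustrative estimates only.

BASE_YEAR = 2017
BASE_CAPTURES = 1.2e12       # ~1.2 trillion captures in 2017
ANNUAL_GROWTH = 0.10         # assumed flat ~10% year-on-year growth
SMARTPHONE_SHARE = 0.85      # ~85% captured on smartphones

def projected_captures(year: int) -> float:
    """Compound the 2017 base figure forward to the given year."""
    return BASE_CAPTURES * (1 + ANNUAL_GROWTH) ** (year - BASE_YEAR)

for year in (2017, 2020, 2025):
    total = projected_captures(year)
    print(f"{year}: ~{total / 1e12:.1f} trillion captures, "
          f"~{total * SMARTPHONE_SHARE / 1e12:.1f} trillion from smartphones")
```

On those assumptions, the pool of raw material available for training and misusing deepfake models grows by roughly half every four to five years.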
Deepfakes have huge implications for everyone: not just politicians or celebrities, but also schoolchildren and anyone against whom someone holds a grudge. The opportunities for criminals to blackmail, and for corrupt states or other actors to manipulate news and politics in order to destabilise societies, are obvious.
As we all know, mud sticks, so even if an image or video is proven to be a deepfake, the reputational damage has already been done. The use of deepfakes in these ways is nothing less than identity theft.
Human Values Risk Analysis
Truth & Reality: High Risk
Deepfakes challenge the very concept of truth and reality by creating realistic yet fake content, making it difficult to distinguish fact from fiction. This undermines trust in media and public figures and can contribute to misinformation, disinformation, and manipulation.
Privacy & Freedom: High Risk
The use of deepfakes, particularly in non-consensual pornography and targeting individuals, violates privacy and personal freedoms. Individuals can be manipulated without their consent, causing harm to their reputation, personal life, and emotional well-being.
Authentic Relationships: High Risk
Deepfakes can erode authentic relationships by spreading false representations of people, leading to misunderstandings and emotional harm. This is particularly damaging in personal relationships and can be used for bullying, harassment, or blackmail.
Moral Autonomy: High Risk
The creation and distribution of deepfakes, especially without consent, interfere with an individual's moral autonomy by controlling how their image or likeness is used, often in harmful ways. This can strip away personal agency and force individuals into situations where they must defend their integrity against fabricated content.
Dignity of Work: Medium Risk
Deepfakes carry vast potential for misuse, such as impersonating individuals for malicious purposes, which can undermine the dignity of work, especially for those whose likenesses are exploited without consent, leading to professional and reputational damage.
Cognition and Creativity: Medium Risk
While deepfakes can be used creatively (e.g. in film production or resurrecting historical figures for educational purposes), the ability to manipulate reality can cause cognitive dissonance and confusion, and can undermine critical thinking when it comes to discerning truth.
Policy Recommendations
1. Implement Blockchain Technology for Data Transparency
Mandate the use of blockchain-based records for digital media so that every item carries a verifiable audit trail. This would help track the usage and movement of data, ensuring accountability and transparency, particularly for sensitive personal information (a minimal sketch of such an audit trail follows these recommendations).
2. Establish International Recognition of Data Ownership
Advocate for international laws recognising that data tied to an individual belongs to them, similar to copyright or intellectual property rights. This would provide the means to uncover deepfakes and to prosecute bad actors.
3. Support Initiatives Combating Fake News and Deepfakes
Encourage the development and adoption of blockchain-based solutions for combating digital disinformation, such as SafePress and Truepic. SafePress is a news certification service developed by Block Expert in France; Truepic provides tools to verify and secure digital media, ensuring transparency and trust in content. Initiatives like these should be supported through public-private partnerships to build a scalable and effective system for certifying content authenticity.
4. Foster Global Collaboration to Combat Digital Disinformation
Promote international cooperation through organisations like the DeepTrust Alliance, bringing together stakeholders to fight digital disinformation and deepfakes. As they state on their website: “This is an arms race. Solutions require human and technical interventions to counter threats — and we’re in a race to out-innovate nefarious actors.”
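To make recommendation 1 more concrete, the sketch below shows, in Python, one minimal way a blockchain-style audit trail for media files could work: each record stores the SHA-256 fingerprint of a file together with the hash of the previous record, so tampering with either a registered file or the trail itself becomes detectable. The structure and field names are illustrative assumptions for this article, not the actual design of SafePress, Truepic, or any other named service.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Record:
    """One entry in the audit trail."""
    index: int
    timestamp: float
    file_sha256: str   # content fingerprint of the registered media file
    note: str          # e.g. who captured or edited it (illustrative field)
    prev_hash: str     # hash of the previous record, chaining the trail

    def hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def fingerprint(path: str) -> str:
    """SHA-256 of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class AuditTrail:
    def __init__(self) -> None:
        self.records: list[Record] = []

    def register(self, path: str, note: str) -> Record:
        """Append a record that commits to the file and the previous record."""
        prev = self.records[-1].hash() if self.records else "0" * 64
        rec = Record(len(self.records), time.time(), fingerprint(path), note, prev)
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        """Check that every record still links to its predecessor."""
        return all(
            self.records[i].prev_hash == self.records[i - 1].hash()
            for i in range(1, len(self.records))
        )
```

Checking a clip against such a trail then amounts to recomputing its fingerprint and looking for a matching registered record; if no match exists, or verify() fails, the content cannot be certified as authentic. A real deployment would distribute the trail across many parties (which is where a blockchain proper comes in) rather than keep it in a single process.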
References
Michael Grothaus, Trust No One: A Journey Into Deepfakes, Hachette UK, 2021.
iNews [https://inews.co.uk/news/technology/deepfake-videos-school-bullying-cyberbullying-ai-apps-parents-teachers-children-1290664]
Creative Bloq [https://www.creativebloq.com/features/deepfake-examples]
Salvador Dali [https://youtu.be/mPtcU9VmIIE]
DeepTrust Alliance [https://www.deeptrustalliance.org/]
World Economic Forum [https://www.weforum.org/agenda/2020/03/how-to-make-better-decisions-in-the-deepfake-era/]