Deepfakes are a hot topic in the media right now, and understandably so. Advances across AI technologies have quickly made the comedic lip syncs of 2018's Barack Obama deepfakes look almost quaint. Everyone now has easy access to tools that can create compelling image, audio, and even video deepfakes with limited technical expertise and resources.
It seems like, all of a sudden, the world has woken up to the threat of a technology which has been maturing for the past decade. In education, we need to have some serious conversations about what this all means.
What is a Deepfake?
First of all, some definitions.
A deepfake is an emergent type of synthetic media that uses artificial intelligence and machine learning (AI/ML) techniques to create or manipulate audio, video, images, or text.
Deepfakes produce highly realistic and convincing content of events or people doing or saying things that never actually occurred.
Leon Furze – CSP Deepfakes Webinar
Deepfakes overlap with image generation and other artificial intelligence technologies. While earlier versions of deepfake technology relied on fairly complex machine learning for lip syncing (like the 2018 Barack Obama / Buzzfeed deepfake), more recent technologies like Runway make this feature accessible to anyone.
A couple of options from Runway. Equally terrifying.
The voice generated by ElevenLabs is fairly high quality, but the video is not quite there yet. However, as you can see in comparison to the 2018 Buzzfeed Obama deepfake, the technology has reached a point where it's possible to make synthetic media of about the same quality with no need for complex tech or voice actors.
Can you spot an AI-generated image?
In the past, deepfakes were created using models trained on an enormous amount of data from an individual: this is why we see so many celebrity and political deepfakes. With millions of images of celebrities like Jennifer Lawrence, Steve Buscemi and Tom Cruise available online, it is easier to create deepfakes of these individuals.
But now we have highly capable Generative AI models which can create convincing synthetic images, and which can be trained on only a handful of photos to create a deepfake of a specific person. Open source image generation means that the technology is also free and increasingly ubiquitous.
The following images of “me”, for example, were generated from a single webcam photo via deepfake.civai.org, an awareness-raising nonprofit.
And it is no longer possible to discern AI-generated images from real ones with the naked eye.
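To give a sense of just how accessible this has become, here is a minimal sketch of open source image generation. It assumes the Hugging Face diffusers library, a consumer GPU, and publicly available Stable Diffusion weights; the model ID, prompt, and filename are illustrative only, not taken from any of the tools mentioned above.

```python
# Minimal sketch: generating a photorealistic image with an open source model.
# Assumes the `diffusers` and `torch` packages and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Download freely available pretrained weights (illustrative model ID)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A single text prompt is enough to produce a convincing image in seconds
image = pipe("a photorealistic portrait photo of a person").images[0]
image.save("portrait.png")
```

The point is not the specific library but the barrier to entry: roughly a dozen lines of freely available code, consumer hardware, and no specialist expertise.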
By now the risks of deepfakes should be immediately obvious, but it's worth highlighting the major ones so we can think about how to address them. In my opinion, the biggest risks of Generative AI-related deepfakes are:
Nonconsensual explicit deepfakes
Blackmail/extortion/sextortion
Reputational damage
Misinformation and “fake news”
Last night, I had the opportunity to run a free parent webinar with Cyber Safety Project’s Trent Ray and sex education expert Vanessa Hamilton to talk about these risks for young people. With over 2000 parents signed up, we can tell that it’s a topic people want to hear about.
We are making the recording and resources available for free until October 10th, so make sure you check it out and please share it with friends and family.
In the research agenda below, we outline four main pillars for advancing research into GenAI-based deepfakes. These include potential positive uses of synthetic media, such as teaching and learning materials, accessibility tools, and creative or artistic applications of the technology.
A Deepfake Research Agenda for Tertiary Education. From Perkins, Roe, & Furze (2024). Deepfakes and Higher Education: A Research Agenda and Scoping Review of Synthetic Media. Journal of University Teaching and Learning Practice. https://doi.org/10.53761/2y2np178
We are also conducting a survey of Higher Education perspectives on deepfakes. If you work in HE as a researcher or in teaching and learning, we would love to hear from you.
Despite the obvious risks of deepfakes in education, across both K-12 and Higher Education, there are things that schools and universities can do right now:
Legislation and criminalisation: Support and implement laws like the Australian Criminal Code Amendment (Deepfake Sexual Material) Bill 2024.
Prevention and education: Public awareness campaigns and educational initiatives. These might include attending webinars like our parent session above, or incorporating deepfakes into existing cyber safety, digital consent, and sexual health conversations with students.
Policies and risk management: These should include proactive, preventative policies (such as educational resources) and reactive policies (such as crisis management). School and university boards and executive leadership need to be aware of deepfake risks.
Support for victims: Provide resources, counselling, and support services for those affected by deepfake image-based abuse, including assistance with content removal and recovery. In Australia, this can be done through the eSafety Commissioner.
I have been working with education boards and leadership teams to discuss the risks associated with Generative AI, including deepfakes, reputational risks, and image-based abuse.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch: