Taylor Swift isn’t the only person falling victim to deepfakes. Everyone’s at risk, but new laws are on the way to crack down on this high-tech crime.
The state government is proposing tough new laws to crack down on deepfakes, which can be used to spread fake news, create non-consensual porn, scam money, and steal identities. With the potential to even influence elections, deepfakes are becoming more common, yet their creation and use remain largely unregulated in Australia.
To assist in formulating policies and legislation to curb the dark side of deepfakes, the state government is asking South Australians to share their views on three key questions:
- Are current laws suitable?
- Do we need to reform state legislation?
- Do you have a preferred solution for reform?
What are deepfakes, anyway?
Despite how common their use has become, a recent worldwide survey found 71 per cent of people have no idea what deepfakes are.
Deepfakes are digital creations – photos, videos or audio – that commonly use artificial intelligence (AI) to make it seem like someone did or said something they didn’t. You’ve probably seen deepfakes before – maybe you laughed at the Seinfeld x Pulp Fiction YouTube mashup or followed the @unreal_keanu TikTok account. But what started as a tool for harmless fun has now become a serious threat, capable of spreading misinformation and violating privacy.
Deepfakes are often created using a type of AI called deep learning. This involves feeding thousands of images or videos of a person into a machine learning algorithm that learns to mimic their facial expressions, voice and mannerisms.
The result? A hyper-realistic but completely fake version of that person. The scary part? Deepfakes are getting so good that they’re nearly impossible to distinguish from real footage. The even scarier part? Thanks to a whole lot of new apps, you don’t need a lot of special skills or money to create them – the technology is available to everyone.
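For the technically curious, the "learning" at the heart of this process can be sketched in a few lines. Real deepfake systems train networks with millions of parameters on thousands of face images; this toy example (all names illustrative, not any real deepfake tool) shows the same core idea with a single parameter that gradually adjusts itself to mimic a pattern in its training data:

```python
# Toy illustration of the "learning" in deep learning: a model repeatedly
# adjusts its parameters to better reproduce its training examples.
# Deepfake generators do this at massive scale with images of a person's
# face; here one parameter learns to mimic a simple numeric pattern.

def train(examples, steps=1000, lr=0.01):
    """Fit the model y = w * x to (x, y) pairs by gradient descent."""
    w = 0.0  # the model starts knowing nothing
    for _ in range(steps):
        # Average gradient of the squared error over the training examples
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad  # nudge the parameter toward a better imitation
    return w

# "Training data": the pattern the model should learn to mimic (y = 3x)
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))  # prints 3.0 – the model now reproduces the pattern
```

Swap the numbers for pixels and the single parameter for a deep neural network, and you have the rough shape of how a generator learns a person's face well enough to fake it.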
Why should you care?
While deepfakes have positive applications in fields like education and entertainment, they aren’t just about swapping faces in movies for LOLs. They’ve been weaponised for more sinister purposes and have the potential to ruin lives, destroy careers and even destabilise governments. Here are three very good reasons to be wary of deepfake technology:
Non-consensual pornography:
A staggering 96 per cent of all deepfakes online are pornographic, 90 per cent are non-consensual, and 99 per cent of the victims are women and girls. Imagine the trauma that comes from finding out your face has been used in a fake “revenge porn” video by an angry ex.
This isn’t just a hypothetical scenario – it’s happening right now, and the victims are real Australians, just like you. There’s also the horrific but real possibility of deepfake pornography involving minors.
In Australia, existing laws like the Commonwealth Online Safety Act 2021 make it a civil offence to distribute intimate images without consent, but these laws are struggling to keep up with the fast-paced evolution of deepfake tech.
Misinformation and political manipulation:
Deepfakes are the ultimate tool for spreading fake news and manipulating public opinion. Picture this: an election is coming up and a deepfake video of a politician emerges, showing them doing something illegal. Even if it’s quickly debunked, the damage is done. Studies have shown that people are more likely to remember the fake information than the correction.
This phenomenon, known as the Continued Influence Effect, means that even when people know a deepfake is fake, it still influences their opinions. And it isn’t just a theoretical risk. In 2022, a deepfake of Ukrainian President Volodymyr Zelenskyy was circulated, falsely portraying him urging his military to surrender to Russian forces.
With 43 per cent of Australians concerned about deepfakes affecting elections – up 66 per cent since 2023 – the potential for these tools to undermine democracy is real.
Fraud and scams:
Deepfakes are also being used in scams to swindle people out of money. For instance, a whole lot of Taylor Swift fans were scammed this year by a deepfake video of the singer promoting a cookware giveaway. And it’s not just individuals – nearly a quarter of Australian businesses have faced deepfake-related security incidents.
In one particularly jaw-dropping case, a finance worker in Hong Kong was tricked into transferring $39 million to fraudsters who used deepfake tech to impersonate the company’s chief financial officer during a video conference. Think about that next time you’re on a Zoom call.
What’s the state government doing?
The government is considering several options to combat deepfakes, including making it illegal to create or share harmful deepfakes without consent.
The state government also wants to specifically criminalise the creation and distribution of sexually explicit deepfakes. Under the proposed laws, offenders could face up to two years’ jail or a $10,000 fine for creating and circulating a humiliating, degrading or invasive depiction without consent – penalties that double if the victim is under the age of 17.
Your voice matters
Here’s where you come in. The government wants to hear from everyday South Australians about how best to tackle this high-tech challenge and ensure the proposed laws are effective and fair. Here’s how you can contribute:
- Read the Discussion Paper
- Check out the FAQs
- Leave a comment on the online discussion board
- Send written feedback to LLPsubmissions@sa.gov.au
What’s next?
Public consultation closes on 11 September. After gathering the feedback, the government will draft legislation aimed at curbing the misuse of deepfakes. Your input is crucial in shaping a future where technology serves the public good without compromising safety and trust.
Find out more and have your say here.