Senate debates

Monday, 19 August 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

7:50 pm

David Pocock (ACT, Independent)

I rise to speak in support of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which represents a step in the right direction in regulating the use of generative AI in Australia. Deepfakes and generative AI are reshaping the nature of communication. The technology has already completely undermined the faith that Australians have in text, images, video and voice recordings.

Fakes are not new, of course. There's a long history of images and videos being created to mislead and deceive. Photos purporting to capture the Loch Ness monster were created in 1934. Airbrushed photos of Mussolini and Stalin were distributed to mislead citizens in the 1920s and 1930s, and the Roswell alien autopsy video sparked conspiracy theories in 1995. But clearly the ease with which these images, and now videos, can be altered or just plain manufactured using generative AI is completely unprecedented, and the pace seems to accelerate month to month. Advancements in the technology have made it infinitely easier to create realistic fake content. What used to demand significant time, skill and resources can now be done instantly with minimal effort or expertise. Worse, the results are often indistinguishable from the real thing to the naked eye.

The use of generative AI to produce deepfake pornography is a particularly heinous use of the technology, but even this technology is not new. It's been five years since the release of the DeepNude app, which allowed users to generate nude images of women by inputting clothed photographs. Since then, we've seen an explosion in the creation and distribution of deepfake pornography. At times, it has been used to target celebrities like Chris Evans and Emma Watson. More recently, it has increasingly been used to undermine the credibility of public figures and politicians, usually women.

In 2020, an Indian politician was targeted with a deepfake pornographic video that was circulated to discredit her election campaign. In 2021, an American congresswoman was targeted in an attempt to undermine her credibility, and, in 2022, a woman running for the Senate of the Philippines had a deepfake porn video shared across social media in a bid to diminish her standing. Even more worrying, deepfake pornography is now being created and distributed by children and young people. Just a few months ago, we heard media reporting of the creation of deepfake pornography by school students here in Australia.

It's been more than half a decade since the release of the DeepNude app, so it's well beyond time for the distribution of deepfake pornography to be criminalised. The urgency of this is revealed in the statistics on deepfake pornography. Of online deepfake videos, 96 per cent are pornographic. Of deepfake pornography, 99 per cent targets women, and in 2023 there was a 2,000 per cent rise in the number of websites generating non-consensual sexual material using AI.

I welcome the provisions in this bill and, in particular, the creation of new offences around non-consensual deepfake sexual material. A person's identity is theirs alone and must be safeguarded. Non-consensual sharing of sexual content is a cruel and damaging abuse. It shatters reputations, devastates relationships and inflicts deep emotional wounds, often leaving victims with anxiety, depression and a haunting sense of violation. The psychological and social toll of deepfake pornography is immense, and the most effective way to mitigate it is to criminalise the behaviour.

But, while deepfake sexual material is a particularly heinous use of emerging AI technology, it is far from the only one. Generative AI is being used to create deepfakes to mislead and deceive people for a variety of nefarious reasons, and this bill does not go to many of these uses. Deepfakes are increasingly used in scams, which are reported to have cost Australians some $8 million last year. They're also used to impersonate experts to provide misleading advice or information to consumers.

But, to me, one of the most worrying uses of deepfakes is in the context of our democracy. The use of deepfakes to mislead or deceive voters is exploding in democracies across the world. In the 2022 US mid-terms, AI-generated profiles on social media platforms used deepfakes to spread misinformation, amplify divisive content and create the illusion of grassroots support for, or opposition to, certain candidates or policies. A particularly worrying example was a deepfake robocall, purporting to be from President Biden, sent to thousands of New Hampshire residents to discourage them from voting. In the Indian election held earlier this year, deepfakes were used to impersonate candidates, celebrities and even dead politicians, serving up mis- and disinformation to millions of Indians. The election was described by academics at the John F Kennedy School of Government as being awash with deepfakes.

In the last month, we've seen deepfakes of Australian politicians emerging from both sides of politics. The Queensland Premier has fallen victim to a deepfake created by the Liberal Party, while Labor does not have clean hands after posting an AI-generated video of the opposition leader. These videos could be described as awful but lawful, but we can surely do better than that.

Deepfakes used without consent threaten our democracy and should be banned in the context of elections. Australians deserve to know that the information they receive from parties and politicians is genuine. The coming elections in Queensland and the ACT and the upcoming federal election will be impacted by deepfakes. So, while this bill is a positive step forward, there is much to be done. Unfortunately, the lag between technological change and regulatory response threatens to undermine our democracy. But that's no excuse. The government must be proactive in catching up to change and getting ahead of what we know is coming. We're being warned by experts, academics and even the AEC. Other jurisdictions—the US, the UK, the EU and even China—are already out in front in combatting and regulating the use of non-consensual deepfakes.

I've circulated a second reading amendment in my name that calls on the government to act swiftly and decisively to regulate and protect Australian voters and Australians generally from the danger of deepfakes. In moving the amendment, I recognise the work done on this topic in the lower house by the member for Warringah, Zali Steggall, and I join her in calling on the government to act in providing Australians protection against the potential harms of deepfakes in elections.

I move the amendment.

At the end of the motion, add ", the Senate:

(a) notes that:

(i) deepfakes present inherent risks including the spread of misinformation, threats to privacy and security, altering election outcomes and misleading and deceiving voters,

(ii) Australians have lost over $8 million to scams linked to online investment trading platforms, often using deepfakes,

(iii) the Australian Electoral Commissioner has warned that Australia is not ready for its first artificial intelligence election, which is coming within the next 12 months,

(iv) deepfakes of Australian politicians have already been used, and

(v) the Commonwealth Electoral Amendment (Voter Protections in Political Advertising) Bill 2023, introduced by the Member for Warringah on 13 November 2023, would ban the use of deepfakes in political advertising; and

(b) calls on the Government to:

(i) ban the use and transmission of deepfakes in political advertising prior to the next federal election, and

(ii) further regulate the use and transmission of deepfakes to address the risks they present".

This is critical. We must protect our democracy, and I urge the government to move before the next federal election.
