House debates

Tuesday, 2 July 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

5:15 pm

Zoe Daniel (Goldstein, Independent)

Artificial intelligence is a technology we don't yet understand or, at best, are trying desperately to understand. It's very much a black box. Those who engineer it—OpenAI, Google and the like—continue to accelerate its capabilities without true knowledge of how a neural network, AI's underpinning logic, actually works: functionally, philosophically and ethically. This black box, however, is dual-sided. AI presents enormous opportunity as well as risk. The potential exists for even the currently available nascent models to exceed human intelligence in specific areas of knowledge within just the next few years. It also carries the potential to bring vast uncertainty, change and risk to our society. This presents us with a dilemma: how quickly do we allow this technology to develop and be deployed, in light of its promise and its perils?

In light of this, legislators must find the right balance between overregulating this technology too early and underregulating it until it is too late. A first step is acknowledging how AI foreseeably intersects with almost every aspect of society, from the economy to physics to medicine and, yes, once again, to violence against women. In the case of the Criminal Code Amendment (Deepfake Sexual Material) Bill, the proposed law reflects the urgent need for additional legal measures, particularly to protect women and girls from emerging AI technologies. The unregulated explosion of AI on the internet over the past two years has produced a range of mostly unintended but, in the case of deepfake sexual tools, disgustingly intentional consequences. This legislation addresses one of those consequences—the capacity of this technology to inflict humiliation and violence on women online.

In the interim period between the availability of these AI tools and the introduction of this legislation to the House, individuals have been able, shockingly, to create blended images of people they know using templates of naked bodies within seconds. These images have been shared and sold around social networks and social circles. In just the past few weeks, boys at two Melbourne schools have been expelled and arrested for targeting girls in grades 9 to 12, as well as teachers, with online sexual deepfake images. Up to 50 girls may have been targeted in one case. Investigations are ongoing; however, it's apparent that these boys took profile images from social media and allegedly uploaded them to free sexual image generation tools on the internet. The images were circulated among school cohorts and have been described as 'incredibly graphic'. One female victim described the lifelong consequences of being targeted: the inability of these images to ever truly be deleted from the internet, the unfair risk they may pose to future employability and the prolonged impact on emotional wellbeing and mental health.

It's important to consider the specific impact of this kind of abuse on teenagers, who are particularly vulnerable to bullying, social isolation and feelings of humiliation and lack of self-worth. This behaviour is dehumanising and demeaning, and it must be prevented. Without a strong legal framework preventing the sharing of this content, we risk normalising the sexualisation of women, including underage girls.

I support this legislation, but I do have some concerns about its limitation to the transmission of content depicting individuals over the age of 18. While I understand that other legal mechanisms exist to capture content depicting girls under 18, such as child exploitation and pornography laws, more must be done to clearly communicate to boys and men that this behaviour is not just illegal but immoral and unacceptable. I have raised with the Attorney-General my concern about whether teenager-to-teenager deepfake abuse would be captured under existing laws. My concerns were graphically borne out when, subsequent to that conversation, a serious incident involving dozens of girls occurred at Bacchus Marsh Grammar, outside Melbourne. The Attorney-General says this behaviour is captured, but I do wonder whether the existing legislation regarding child abuse is quite fit for purpose in this evolving space and whether underage deepfake offences should be looked at specifically.

School-led educational and awareness programs during the early years of high school are a start, but government can do more, either by legislating or by coordinating across all levels of government. One example is programs and campaigns that make clear these actions have severe, and sometimes lifelong, consequences. Another action the Commonwealth could take, as suggested by Asher Flynn, Associate Professor of Criminology at Monash University, is placing the onus of responsibility on the creators of these AI models and tools. The powers of the government's Online Safety Act could be expanded, including the ability of the eSafety Commissioner to intervene as this content spreads online and across digital platforms. I say that in the knowledge that the Online Safety Act is at risk of becoming an enormously tentacled piece of legislation. We have to make sure it doesn't become impossibly unwieldy, and also that any enforcement is properly resourced.

As the capability of artificial intelligence accelerates and is deployed by commercial interests into society, Australia and all countries face the challenge of regulating its risks before they're realised. I would like to take this opportunity to endorse the second reading amendment proposed by the member for Warringah. Artificial intelligence is indeed a technology in its nascency, but the capacity already exists for deepfakes to threaten the integrity of electoral processes around the world. In a year of democratic elections, with our own in the foreseeable future, this is perhaps more relevant now than ever. Deepfake videos of prominent politicians and celebrities, including the non-consensual spread of content depicting the likes of Taylor Swift, are already readily accessible online. We will soon find out the capacity of this technology to mislead and misinform electorates here in Australia, because AI's present capacity to interfere with election integrity is only set to grow. Emerging models, such as OpenAI's Sora text-to-video model, present an even more significant challenge for regulators than existing deepfake-generation tools. I strongly suspect there's a reason that this model will remain publicly unavailable until after the US election in November.

While this legislation targets deepfake sexual material only, I think there's a strong argument for banning all deepfakes that are used without permission. Political deepfakery, for example, has the potential to influence voters in insidious and dangerous ways, and can be difficult to debunk. Deepfakes can also be used for identity theft, scams and extortion. AI's capabilities and risks are rapidly accelerating—its engineers understand this—and bold action is required by government if Australia is to safeguard against them. Historically, governments around the world have not been up to this task, being more reactive than proactive. Regrettably, I suspect that this will not be the last time I discuss the societal risk of AI in this House. As well as legislation, tech companies can contribute by using technology to track, trace and prevent deepfakes. Deepfakes are a threat to democracy and public trust, and we must step in strongly to prevent their nefarious use. This legislation, which I will support, is part of that process. Thank you.
