House debates

Tuesday, 2 July 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

4:55 pm

Nola Marino (Forrest, Liberal Party, Shadow Assistant Minister for Education)

In continuation, I was talking about the inception of online social media platforms, and I wondered which of the creators or executives in charge of these platforms decided that it was perfectly fine to expose our very young children to a free-for-all in what is an online paedophile's paradise. I'll repeat that: a paradise where our children would be groomed online by sexual predators and exposed to extreme and violent pornography. That is what is available on their platforms, where our children are exposed to billions of random people of all ages. But that is part of the platform business model, and that is what is available to vulnerable young people on these platforms and apps, which expose our young people 24 hours a day, seven days a week. The harm is there for all to see, and for our families to have to deal with—from the bullying to the sharing of naked and semi-naked photos.

On Snapchat, which young people were sucked into—that's a nice way of putting it—they thought they could take provocative images and the images would disappear. Well, they didn't. There was an app called SnapSave that automatically saved all of these photos, and people took screenshots. So many young people I've dealt with have been affected by this and by the image-based abuse that follows—and by the suicides because of what's going on on social media.

In my electorate, we have seen young boys affected by image-based abuse. We have seen increased rates of self-harm over the years, as well as a dramatic increase in hospitalisations of girls in the last decade because of this and because of the group chats and sites promoting everything from eating disorders to risky behaviours. We did see, some years ago, that dreadful choking game. We've seen such a rise in mental health problems, which I've repeatedly spoken about in this House before. We now see online gaming addiction and the clinics that have had to be set up to help families deal with online addictions, as well as the risks on these apps of tracking and mapping. There are young girls presenting to GPs with internal damage from aggressive sexual experiences learned from online pornography sites.

These same platforms and their representatives still believe, according to some of the evidence we saw recently, that they're doing no harm. I can tell them differently. I do cybersafety presentations in schools, and how dreadful it is that, through that interaction on social media and through these sites, the youngest person so far, in all the years I have been doing this, to admit to me that they have gone to meet people in person whom they first met online is a year 3 girl. I'll let that sink in—it was a year 3 girl, in all those classes and sessions that I've done over the years. When I do my silent survey and they put their hands up for the three critical questions, it is extremely rare to come across a class, of any age—I've done from preschool through to year 12—in which there are not some children who have gone to meet people in person whom they first met online.

I've dealt with an extraordinary number of issues created by these online platforms, which claim that they're doing no harm. That is entirely incorrect. The harm to these young people is done every single day and night. It isn't okay when we see GPS, location services and geotagging turned on and embedded in the photos that young people share; they then get tracked and mapped. It is not okay that we have four-year-olds being encouraged and conned into uploading totally inappropriate content. It's not okay when the mum of a nine-year-old girl rings my office because her daughter has uploaded an explicit video on one of these sites. I can see the risks with AI-generated images, particularly sexual images. This legislation is so important in that space, because we will see a rise in instances of this.

The harm to young people is constant. When I listen to these young people and ask them questions about what they do, how much time they spend online, where their devices are and how much time they can spend on them, it is almost unlimited. And so their level of risk in this space is extreme. Then, of course, there are the gaming chats in so many of the messaging platforms—it is just ongoing, and young people are asked all sorts of questions in this space. When I look at the ages of the young people I've dealt with—and they are much younger than people think—one of the things I say to these good young people is: 'Can you help your families? If you've got younger brothers and sisters, they are at risk in this space. You are in this as a family, and the whole family needs to be involved in what's happening online to help keep your family safe.'

I've dealt with some terrible experiences that young people have had. When you meet these young people—I had a group of great young kids, 15 and 16 years old, who came to talk to me to tell me how concerned they were about the nine-year-olds in their school who they knew were watching and live-streaming sex acts during school time. What those young people were seeing and having access to is why age verification, which we have been pushing for for some time, is so important. It's a start. It doesn't give all of the answers, but it is certainly a start. There have been extraordinary experiences that young people have been subjected to, and they have really harmed their whole lives.

Here is one of the other issues that I want to warn people about as well. I had a visit recently from the optometrists at Optometry Australia. We are seeing an increase in myopia, or short-sightedness, because of the amount of time young people are spending behind a screen. Not only is there physical harm and emotional harm, with young people going to meet people in person whom they first met online, but what's happening to them with bullying and image-based abuse is constant. When I spoke to one principal and asked, 'What is the biggest issue facing your year 12 students?' he held up a phone and said, 'They're just not sleeping enough.' And they cannot cope with every other issue in their lives—their relationships, what's going on at school, what they intend to do when they leave school—because of what's happening online. They're engaging with it for so much of the night that they're simply not sleeping enough.

So I've been very active in this space for many years. What the government is proposing here in relation to deepfake images is important, because I think what we're going to see ahead is even worse. I hope that people use these laws and that the laws are suitable to be used far more often. The issues that these young people have to deal with regarding deepfakes—the issues around consent, and how the courts will interpret and act on this—are very important to our young people and to people of all ages.

5:04 pm

Sam Lim (Tangney, Australian Labor Party)

Today, I rise to speak on a crucial piece of legislation on behalf of the Albanese Labor government. The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 addresses a rapidly growing threat to privacy, dignity and safety—the non-consensual sharing of deepfake sexually explicit material.

As a former police officer who has witnessed firsthand the devastating impact of digital manipulation on individuals' lives, I can attest to the urgent need for these reforms. Digitally created and altered sexually explicit material, particularly deepfakes, represents a sinister form of abuse that targets women and girls. It not only violates their privacy but also inflicts deep emotional and psychological harm.

Under the proposed amendment to the Criminal Code Act 1995, we are taking decisive action to modernise our laws and impose serious penalties on those who perpetrate these egregious acts. This legislation introduces new offences specifically targeting the transmission of sexually explicit material without consent, including material created or altered utilising technologies like artificial intelligence. The gravity of these offences is reflected in the penalties we propose: up to six years' imprisonment for transmitting sexually explicit material without consent, and seven years' imprisonment for aggravated offences where the perpetrator also created the material. These penalties underscore our commitment to deterring and punishing those who engage in this disgraceful conduct. This initiative aligns with a broader commitment to combating online harms, particularly those that disproportionately affect women and young girls.

On 1 May 2024, our government publicly pledged to introduce a comprehensive suite of measures to safeguard vulnerable individuals from digital exploitation and abuse. The amendments proposed in this bill recognise that existing law does not adequately address artificially generated material such as deepfakes. By repealing outdated provisions and introducing updated offences, we ensure that our legal framework remains responsive to evolving technologies and the harms they can perpetrate.

It is important to emphasise that this reform does not infringe on private communications between consenting adults or interfere with lawful sexual relationships. We have carefully crafted specific defences to ensure the legislation targets only non-consensual acts and is proportionate in its application. Moreover, these reforms extend protection to adults depicted in the material, whether it depicts a real person or is a digital creation. This inclusive approach is crucial in safeguarding individuals from the malicious use of technology to degrade, humiliate and perpetuate harmful stereotypes.

As someone who has personally experienced the malicious misuse of technology, I am compelled to share a personal reflection that has shaped my unwavering commitment to these reforms. As a former police officer and now a member of parliament, I have experienced firsthand the distressing misuse of my image and my identity on platforms designed to deceive and defraud. This violation left me feeling helpless and deeply frustrated by the inadequacy of existing legal protections. I have encountered instances where my own image and identity were exploited on scam platforms, subjected to identity theft and used in an attempt to tarnish my reputation. These experiences have deeply informed my understanding of the urgent need for robust protection against digital exploitation and abuse.

The legislation we are proposing today is not just about legal frameworks; it's about safeguarding the identities and reputations of everyday Australians. In a world where modern AI technology is advancing at a rapid, rapid pace, we must put in place strong guardrails and protective mechanisms. Deepfake technology poses a significant threat by allowing the creation of convincing and realistic, but entirely fabricated, images and videos. These can be used to deceive, defame and harm individuals, with devastating consequences. I can only begin to imagine the amplified sense of violation and devastation experienced by individuals, particularly women, who are targeted by deepfake technology for sexually explicit material.

The deliberate creation and dissemination of manipulated images and videos without consent represents a profound betrayal of trust and a flagrant disregard for personal autonomy. The lack of adequate legal safeguards exacerbates the harm inflicted upon victims of such heinous acts. It is incumbent on us, as representatives of the community, to rectify this glaring gap in our legal framework and to provide robust protection against the insidious threat of digital exploitation. The reforms we propose today aren't just about updating statutes; they're about restoring dignity, ensuring justice and fortifying our defences against evolving threats in the digital age. By criminalising the non-consensual sharing of deepfake sexually explicit material, and by imposing significant penalties, we send a clear message that such behaviour will not be tolerated in our society. Moreover, these amendments are a testament to our commitment to gender equality and to the protection of vulnerable individuals. They reflect our collective responsibility to dismantle the mechanisms of gender-based violence, and to empower victims to seek justice and reclaim their narratives.

My own encounters with this issue have reinforced my commitment to ensuring that our law evolves to keep pace with technological advancement. The amendments we're presenting to the Criminal Code are crucial steps forward in this regard. By explicitly criminalising the non-consensual sharing of deepfake sexually explicit material, we are sending a clear message that perpetrators will face serious consequences for their actions. It isn't just about protecting privacy; it's about protecting human dignity. The dissemination of manipulated images and videos without consent is a violation that can have profound and lasting effects on victims, perpetuating harm long after the image is shared. Our legislation acknowledges this harm and aims to provide meaningful help for those affected.

Furthermore, these reforms are essential in combating the insidious work of scammers and fraudsters, who exploit digital technologies for personal gain. By establishing clear boundaries and penalties, we strengthen our defences against those who seek to deceive and manipulate for malicious purposes.

In conclusion, the passage of this bill is not merely a legal obligation; it is a moral imperative. It sends a clear message that Australia stands firm against digital exploitation and affirms our commitment to protecting the dignity and rights of every Australian. I urge all members of this House to support this vital reform and to stand together in defence of justice, privacy and human dignity.

5:15 pm

Zoe Daniel (Goldstein, Independent)

Artificial intelligence is a technology we don't yet understand or, at best, are trying desperately to understand. It's very much a black box. Those who engineer it—OpenAI, Google and the like—continue to accelerate its capabilities without true knowledge of how a neural network, AI's underpinning logic, truly works: how it works functionally, and how it works philosophically and ethically. This black box, however, is two-sided. AI presents enormous opportunity as well as risk. The potential exists for even the currently available nascent models to exceed human intelligence in specific areas of knowledge within just the next few years. It also carries the potential to bring vast uncertainty, change and risk to our society. This presents us with a dilemma. How quickly do we allow this technology to develop and be deployed in light of its promise and its perils?

In light of this, legislators must find the right balance between overregulating this technology too early and underregulating it too late. A first step is acknowledging the nature of how AI foreseeably intersects with almost every aspect of society, from the economy to physics to medicine and, yes, once again, to violence against women. In the case of the Criminal Code Amendment (Deepfake Sexual Material) Bill, the proposed law reflects the urgent need for additional legal measures, particularly to protect women and girls from emerging AI technologies. The unregulated explosion of AI on the internet over the past two years has seen a range of mostly unintended but, in the case of deepfake sexual tools, disgustingly intentional consequences. This legislation addresses one of these consequences—the capacity of this technology to cause humiliation and violence against women online.

In the interim period between the availability of these AI tools and the proposal of this legislation to the House, individuals have been able to, shockingly, create blended images of people they know using templates of naked bodies within seconds. These images have been shared and sold around social networks and social circles. In just the past few weeks, boys at two Melbourne schools have been expelled and arrested for targeting girls in grades 9 to 12, as well as teachers, with online sexual deepfake images. Up to 50 girls may have been targeted in one case. Investigations are ongoing; however, it's apparent that these boys took profile images from social media and allegedly uploaded them to free sexual image generation tools on the internet. The images were circulated among school cohorts and have been described as 'incredibly graphic'. One female victim described the lifelong consequences of being targeted: the inability of these images ever to be truly deleted from the internet, the unfair risk they may pose to future employability, and the prolonged impact on emotional wellbeing and mental health.

It's important to consider the specific impact of this kind of abuse on teenagers, who are particularly vulnerable to bullying, social isolation, and feelings of humiliation and lack of self-worth. This behaviour is dehumanising and demeaning, and it must be prevented. Without a strong legal framework preventing the sharing of this content, we risk normalising the sexualisation of women, including underage girls.

I support this legislation, but I do have some concerns about its limitation to the transmission of content depicting individuals over the age of 18. While I understand that other legal mechanisms exist to capture content depicting girls under 18, such as child exploitation and pornography laws, more must be done to clearly communicate to boys and men that this behaviour is not just illegal but immoral and unacceptable. I have raised with the Attorney-General my concern about whether teenager-to-teenager deepfake abuse would be captured under existing laws. My concerns were graphically borne out when, subsequent to that conversation, the serious incident involving dozens of girls happened at Bacchus Marsh Grammar outside Melbourne. The Attorney-General says this behaviour is captured, but I do wonder whether the existing legislation regarding child abuse is quite fit for purpose in this evolving space and whether underage deepfake offences should be looked at specifically.

School-led educational and awareness programs during the early years of high school are a start, but government can do more—either to legislate or to coordinate across government at all levels. One example is programs and campaigns to make clear that these actions have severe, and sometimes lifelong, consequences. Another action the Commonwealth could take, as suggested by Asher Flynn, Associate Professor of Criminology at Monash University, involves placing the onus of responsibility on the creators of these AI models and tools. The powers of the government's Online Safety Act could be expanded, including the ability of the eSafety Commissioner to intervene as this content spreads online and across digital platforms. I say that in the knowledge that the Online Safety Act is at risk of becoming an enormously tentacled piece of legislation. We have to make sure it doesn't become impossibly unwieldy, and also that any enforcement is properly resourced.

As the capability of artificial intelligence is accelerated and deployed by commercial interests into society, Australia and all countries face a challenge to regulate its risks before they're realised. I would like to take this opportunity to endorse the second reading amendment proposed by the member for Warringah. Artificial intelligence is indeed a technology in its nascency, but the capacity exists today for deepfakes to present a risk to the integrity of electoral processes around the world. In a year of democratic elections, and our own in the foreseeable future, this is perhaps more relevant now than ever. Videos of prominent politicians and celebrities, including non-consensual spread of content depicting the likes of Taylor Swift, are already readily accessible online. We will soon find out the capacity for this tool to mislead and misinform electorates here in Australia, because the present capacity for AI to interfere with election integrity is only set to accelerate. Emerging models, such as OpenAI's Sora text-to-video model, present an even more significant challenge for regulators than existing deepfake-generation tools. I strongly suspect that there's a reason that this model will remain publicly unavailable until after the US election in November.

While this legislation targets deepfake sexual material only, I think there's a strong argument for banning all deepfakes that are used without permission. Political deepfakery, for example, has the potential to influence voters in insidious and dangerous ways, and can be difficult to debunk. Deepfakes can also be used for identity theft, scams and extortion. AI's capabilities and risks are rapidly accelerating—its engineers understand this—and bold action is required by government if Australia is to safeguard against them. Historically, governments around the world have not been up to this task, being more reactive than proactive. Regrettably, I suspect that this will not be the last time I discuss the societal risk of AI in this House. As well as legislation, tech companies can contribute by using technology to track, trace and prevent deepfakes. Deepfakes are a threat to democracy and public trust, and we must step in strongly to prevent their nefarious use. This legislation, which I will support, is part of that process. Thank you.

5:24 pm

Justine Elliot (Richmond, Australian Labor Party, Assistant Minister for Social Services)

I, too, wish to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. It strengthens existing Commonwealth criminal offences and creates new offences to respond to the online harm caused by deepfakes and other artificially generated sexual material, and we know how vitally important it is to have this legislation.

We are all aware of instances of AI digital creations and altered images, and they are horrific. Digitally created and altered sexually explicit material that is shared without consent is a damaging and deeply distressing form of abuse. Many of us have seen people in our areas, and right across the country, who are deeply distressed when this happens. It is an insidious behaviour. It is degrading, humiliating and dehumanising for victims. We know that such acts are overwhelmingly targeted at women and girls, perpetuating harmful gender stereotypes and gender-based violence. I know that everyone in this House joins with me in the actions that we take, as a government and as a parliament, to end gender-based violence, and we have made record commitments and investments. It's important to act right across the sphere of all that government can do, and we are in fact doing that in this bill as well, because we know that digitally created explicit material adds to very harmful gender stereotypes. That's why this bill, along with all of our other measures, is vitally important to all of us working together to end violence against women and children.

This bill will deliver on a commitment made by the Albanese government following the National Cabinet meeting that was held in May to address gender-based violence. This commitment recognises the urgent and collective need to respond to the growing challenges associated with artificially generated sexual material. The bill creates a new offence that applies where a person uses a carriage service to transmit sexual material depicting an adult person and they know the person depicted does not consent to the transmission of the material, or they are reckless as to whether the other person consents. The new offence will carry a maximum penalty of six years' imprisonment. It will apply to sexual material depicting adults, with child abuse material continuing to be dealt with under very serious, dedicated separate offences in the Criminal Code. The bill repeals previous offences in the code dealing with non-consensual sharing of private sexual material. The new offences are based on a consent model to better cover both artificial and real sexual material.

It's vitally important to have this legislation in place. As I say, these measures add to the already record amounts of funding that we have put in place, since we came into government, to end violence against women and children in one generation. We have our national plan. We have a major investment in providing frontline services. We have announced in the budget our 'leaving violence' payment, which is now permanent. We are working with all of the states and territories—and we call on all the community, as well, to work with us—with the aim of ending violence against women and children. I know I've spoken on this in the House many times, and I know it's an objective that we all share. That's why taking the action in this bill is vitally important. With the growth of AI and the horrendous depictions that we see, it's important that we pass this bill and that we keep speaking publicly about this and about the action that needs to be taken. I commend the bill to the House.

5:28 pm

Peter Khalil (Wills, Australian Labor Party)

On 1 May 2024 the Albanese Labor government made a public commitment to introduce a suite of measures to tackle online harms, particularly addressing the harms done to women and girls. Delivering on that promise, we are creating new criminal offences to ban the non-consensual sharing of deepfake pornography. We already know that digitally created and altered sexually explicit material is a damaging form of abuse against women and girls that can inflict deep harm on the victims, but I also want to take this opportunity to highlight the broader issues that we are facing as a society in this respect.

The purpose of this parliament is, yes, to make the laws of the land, but we are also required to show leadership, to lead reform on the pressing social issues of the time. And while the current reforms are about making it crystal clear that those who share sexually explicit material without consent using artificial intelligence, AI, will be subject to the most serious criminal penalties, it is also important to continue the national conversation we are having around the safety of women and girls in our society. That is our role here in parliament, and that is what we want to do today in this debate.

As a nation, we have been horrified by the young man at a private school who recently distributed graphic deepfake images of 50 girls in Victoria. This young man did this by taking their likenesses from social media and using an AI app to create sexually explicit deepfakes. He then shared them on social media—obviously without the girls' consent. This was a despicable act of degrading and dehumanising his peers, young women he was supposed to respect. It's attitudes like these that, if not rooted out at the earliest opportunity, will go on to perpetuate stereotypes about women that should have no place in our society, because they drive, in the adult lives of these young men, the gender-based violence that we abhor.

At the immediate point when these images were shared without consent, they had already damaged, possibly irrevocably, the lives of those victims. This was not just a breach of privacy; it is much more than that. It is really difficult to imagine the feelings of shame, embarrassment, anger and pain that these young women face when these images are shared with others or posted online, and so we want to be extremely clear, as a government, that there is no place for such behaviours. That's why the suite of reforms in these laws is so important.

The bill amends the Criminal Code to modernise and strengthen offences targeting the non-consensual sharing of sexually explicit material online, especially material produced with the assistance of AI. This includes material that has been created or altered using technology such as deepfakes. Aggravated offences will build on this new underlying offence where the person was responsible for the creation or alteration of the sexual material transmitted.

There is a gap here: the Criminal Code currently criminalises the sharing of private sexual material online; however, the definition of 'private sexual material' may not cover artificially generated material. This bill eliminates any doubt that artificially generated material is covered by these changes. Section 473.1, which contains this definition, will be repealed. The new offence applies where a person transmits material using a carriage service and where the material depicts a person who is, or appears to be, 18 years of age or older.

Those opposite think there is no need for these laws. The Manager of Opposition Business thinks we are 'repealing existing offences and replacing them with offences that appear to do the same thing'. That's a quote. With great respect, what we are doing is moving to a consent model where an essential part of the determination is whether these images were created and shared without consent. We are doing this by introducing new offences in the new section 474, where the key question relates equally to both types of sexual material, both artificial and real.

Did the opposition not learn their lesson when it came to social media? Famously, governments around the world, including former coalition governments, took a laissez-faire approach and did not regulate the giant social media companies. Only now are we paying the price; we're seeing that play out, decades on. After nine years in government, they have suddenly realised that they didn't do enough to regulate emerging social media companies. This is the realisation of the opposition, all of a sudden. I ask the opposition: why don't you come out and tell us exactly why you didn't do this when you had the chance?

We unequivocally disagree that these amendments to the Criminal Code are not needed. We are mitigating, and we must mitigate, the unintended harms of AI, and we will do this. We must do this early. We're not going to wait another 10 years. And we're going to do it in a pragmatic way that recognises the productivity benefits of these tools but approaches them much more cautiously than the coalition ever approached the social media companies when they had the chance. There is a need for regulation, and there is a need for a legislative framework, to deal with the rapidly expanding technologies we're seeing, like AI.

We are behind the eight ball. We need to start taking the necessary legislative and regulatory action to mitigate the worst effects of these technologies. And there are negative effects—clearly—as I have described already with the AI deepfake sexual material.

We cannot go down the same path with AI that we did in the past with the social media companies. In many respects, people are arguing that it's already too late for our kids, who are addicted to doomscrolling and spend hours and hours on screens, exposed to content that disrupts their mental health and self-esteem, whether through social media or simply through being online for hours on end.

There is a real epidemic within our communities. Parents are waking up to it, clearly. Parents are disturbed by it. There is a lack of legislative and regulatory frameworks to address or mitigate the worst impacts of these technologies. That has to stop. I'm not saying we're going to get it right 100 per cent of the time, but we do need to do this, and we need to start the work today. The opposition needs to understand that this is in the national interest.

The other thing we have seen, time and time again, is someone who is already known to law enforcement authorities and to the justice system offending again. That is why this bill introduces two aggravated offences, which apply with increased penalties if the offender has had three or more civil penalty orders made against them for contravening the Online Safety Act 2021 or if the offender was responsible for the creation or alteration of the sexual material transmitted. The new offences will have a maximum penalty of six years imprisonment for transmitting sexual material without consent and seven years imprisonment for aggravated offences, including where the person created the material themselves. These amendments are absolutely necessary and absolutely our priority.

In conclusion, I want to remind the House—and we're all aware of this, as recent events have shown—that young girls in schools are not psychosocially safe, in many respects. When you have these horrific offences being committed against them, like the sharing of sexual deepfakes or the making of lists which rank and sexualise young women, there is no safety for these girls. It is in these events that a society-wide scourge starts to take hold. As I said earlier, we've made the mistake—the previous government and other governments around the world certainly did—of not regulating the social media giants and the impacts of that technology. We cannot make the same mistake when it comes to the use of AI in this context. The young people who are exposed to and partake in these behaviours today will go on to live in a world where women are harmed by their intimate partners or raped or killed on our streets. The majority of women who are killed are killed in that way—by men that they know.

Online abuse, and the element of psychological violence within it, is part of this cycle of physical violence. It's all interconnected. This is what women are subjected to on a society-wide scale. We've seen the horrible outcomes of this in many examples. In my own electorate, in the northern suburbs of Melbourne, women have been raped and murdered. Jill Meagher's was a well-known case, and there are many, many others. People in my electorate of Wills won't forget that. They won't forget what happened to Jill Meagher while she was walking home in Brunswick in 2012 or, in May 2019, when Courtney Herron, another young woman from our community of Wills, was murdered in Royal Park or, in the same year, when Aiia Maasarwe was murdered in the northern suburbs whilst walking home.

These were just young women—walking home after going out or going to work, or on shift—and that is what happened to them. There was also a woman who, again in the same year, was running along Merri Creek, which borders my electorate—a spot that many of us in my community in Wills have found sanctuary in. She was attacked, sexually assaulted and raped there. We can't forget their stories. We can't forget what happened to them. We can't forget the deep trauma that has impacted their families and their loved ones. They were part of our community. They were part of the fabric of our community. What they went through in their deaths was senseless and needless. I say again: harm is not just physical harm. There is also the psychosocial harm that comes from having sexually explicit material shared online without consent. Harm is what young women go through when they are targeted by these forms of online abuse, and it is a harm that draws them into the deadly and life-threatening cycle that, as I said, interconnects all of these behaviours. That is why legislation like this is so important.

Young men should be educated and encouraged to be strong allies, working actively to reduce violence at its source, where it starts. Educate young boys, teach them respect, and empower them to understand that there is a positive sense of being male and what positive masculinity entails—which is not violent acts, not anger. Give them another path, where they are empowered to be respectful and strong in a very different way. That is what real manhood is about, and that is where early education comes in.

It is not only the responsibility of parents, of teachers and of society to teach kids respect; it is also the responsibility of this place, this House, to pass laws that provide the legislative and regulatory framework which makes such actions unacceptable by law. It is also our responsibility to show leadership, to take action and to lead societal reform from this place. I would hope the opposition understands the importance of that to the national interest.

We won't hesitate, through these laws, to punish those who violate these expectations, these standards that we're setting. That is what we do here in this place. The safety of women across Australia starts in our schools, at our sports clubs and in the conversations we have at the dinner table about respectful relationships. In the worst-case scenarios, when it has to be addressed, it is addressed by the laws we pass here. The Criminal Code has to be robust, and it has to criminalise these actions appropriately. All of this, in combination, has to break the chain of disrespect towards women that leads to the worst elements of violence down the track.

Legislative change plays an important role in sending a strong message that the non-consensual distribution of such horrible images is unacceptable to us as political leaders, as community leaders and as citizens, and that this applies whether the material is AI generated or real. This bill takes the first steps towards regulating the worst effects of this technology, because we cannot wait for something else to happen, for things to become worse, for the scourge to be so widespread that we have lost control of it completely. We cannot wait for things to get worse before we change these laws. That is why I call on the opposition to support this bill, to support the necessary changes we are making to address these issues, and to begin that journey together. I commend this bill to the House.

5:43 pm

Photo of Tracey RobertsTracey Roberts (Pearce, Australian Labor Party) Share this | | Hansard source

I rise to support the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. Every time you read or hear the news there is another story about artificial intelligence, deepfake abuse, and the ongoing debate on how to regulate these rapidly advancing technologies. Encountering many forms of sexual violence online driven by AI-generated deepfake technology is deeply unsettling. Women represent 99 per cent of those targeted by deepfake pornography, which makes up 98 per cent of all deepfake videos online.

Urgent action is absolutely needed, and effective legislation is a critical starting point. Thousands of women and girls have already experienced this form of gender based violence, and it is exacerbated by the increasing accessibility and sophistication of the technology. In 2023 alone, the volume of deepfake abuse videos surpassed the total of all previous years combined, with the number of non-consensual videos doubling annually. Those non-consensual images are created and shared with the goal of humiliating and degrading the women and girls in them. The fallout is immense, and it goes beyond personal harm.

Deepfakes, for those unfamiliar, are digitally manipulated images, videos or audio recordings that use artificial intelligence to create realistic but false depictions of individuals. These creations can depict people doing or saying things they never did, often with stunning realism, blurring the lines between reality and fiction. The legislation before us aims to address a specific and troubling aspect of this technology: the non-consensual sharing of sexually explicit deepfake material. This bill seeks to criminalise such acts, ensuring that individuals who distribute sexually explicit content without the consent of those depicted face significant legal consequences. This includes deepfakes generated using AI or other technologies, which have increasingly been used to exploit and harm individuals, particularly women and girls.

The new bill aims to criminalise the sharing of nonconsensual deepfake sexually explicit material online. These reforms ensure individuals who distribute sexually explicit content without the consent of those depicted, including deepfakes created using AI or other technologies, will face significant criminal penalties. The bill targets individuals who share and create sexualised deepfake images without consent. The application of Commonwealth criminal law to individuals who meet the minimum age of criminal responsibility is not a new concept. It is worth noting that the new offences are specifically targeted at sexual material depicting or appearing to depict individuals over 18 years of age. Existing criminal offences already cover child abuse material, which is comprehensively addressed in a separate division of the Criminal Code. This includes detailed offences and severe penalties for the sharing of child abuse material online, including content generated by artificial intelligence.

Whether you personally know a victim or have simply seen and used their image on the internet will now be irrelevant. The new offences cover instances where an individual shares a sexually explicit deepfake of a real person either knowing the person has not consented or being reckless as to whether they have consented. This applies regardless of who the real person is and whether they are personally known to the offender. Prosecuting offences like this is increasingly complex in the digital age. However, the Australian Federal Police possesses unique capabilities and extensive experience in investigating these types of crimes. The AFP was also consulted in the development of this bill, ensuring their input and the challenges faced by law enforcement were taken into consideration.

It is also worth noting that the Commonwealth power to legislate does not extend to the mere creation of sexually explicit adult deepfakes. The offence applies to the sharing of nonconsensual deepfake sexually explicit material. If the person who shares the material also created the deepfake, there is an aggravated offence with a higher penalty of up to seven years imprisonment.

There are existing Commonwealth offences that prohibit menacing or harassing another person online. These could apply, for example, if someone creates an image of their partner and then threatens to post it online without consent. Additionally, state and territory laws largely cover the creation of such deepfakes. This approach is consistent with existing arrangements for the criminalisation of child abuse material. State and territory laws address the creation of child abuse material, while strong Commonwealth offences criminalise the sharing of such material online.

The existing Commonwealth criminal offences do not clearly apply to sexually explicit deepfake material. Some state and territory jurisdictions have legislation to address the non-consensual sharing of intimate images online. This bill will complement state and territory criminal offences and ensure a more consistent national approach. The bill stipulates severe consequences for those found guilty of transmitting such material without consent. Offenders can face up to six years imprisonment, with aggravated cases potentially facing up to seven years behind bars. These penalties are designed to deter and punish this harmful behaviour.

To ensure the legislation targets appropriate cases, specific exemptions have been incorporated. For example: reporting to authorities—passing material to the police to report a crime, or providing it to a court to assist in a prosecution, is permissible; medical and professional purposes—doctors can share images to obtain a second opinion from colleagues without falling under the law's purview; consensual photography—images of models taken with explicit consent for advertising or publication purposes are exempt from the legislation; and satirical use—images used in a satirical context are also exempt under these laws.

It's important to note that the legislation is not intended to encompass the sharing of legitimate adult pornography. Rather, its focus is on prohibiting the dissemination of deepfake sexually explicit images where the perpetrator either knows that consent has not been granted or behaves recklessly in determining whether consent was given. This approach aims to address the harmful impacts of non-consensual deepfake material while safeguarding legitimate uses of digital content. Some may wonder what happens when consent is uncertain. The law addresses this by focusing on cases where individuals knowingly share deepfake sexually explicit images without consent or recklessly disregard whether consent was given. This includes situations where consent is not actively considered.

There's a deep and profound impact from non-consensual sharing of digitally altered and sexually explicit material, particularly on women and girls. This behaviour not only causes immediate harm but can also have lasting emotional and psychological effects. It's a form of abuse that degrades, humiliates and dehumanises victims, and which perpetuates harmful gender stereotypes, contributing to gender based violence. Unfortunately, instances where such material is created and shared out of revenge or malice, often referred to as 'revenge porn' or image based abuse, are widespread. These are actions that exploit vulnerabilities and which can severely damage a person's reputation and sense of security. Victims of such abuse often face severe anxiety, depression and trauma. The non-consensual distribution of sexually explicit deepfake material can destroy personal relationships, careers and mental health. The fear of these images surfacing can lead to long-term psychological distress and a pervasive sense of insecurity. Furthermore, this behaviour perpetuates a culture of misogyny and violence against women. By creating and sharing deepfake sexually explicit images, perpetrators are engaging in an act of control and humiliation, reinforcing toxic power dynamics that have long oppressed women. This form of digital violence is an extension of the systemic gender based violence that continues to plague our society.

Clearly, artificial intelligence technology also has the potential to exacerbate election related challenges, including the spread of disinformation and cybervulnerabilities in election systems. The Albanese Labor government is committed to enhancing the transparency and accountability of Australia's electoral system. Actions are being taken by the Minister for Communications and by the Australian Electoral Commission to combat misinformation and disinformation, especially during elections. Additionally, the government has asked the Joint Standing Committee on Electoral Matters to consider ways to prevent or limit the influence of inaccurate or false information on electoral outcomes. This includes examining the impact of artificial intelligence, foreign interference, social media, misinformation and disinformation.

The government also has work underway to combat online scams and fraud, including establishing the National Anti-Scam Centre to coordinate and share intelligence across law enforcement, private and public sectors. It has taken down over 5,000 investment scam websites through ASIC's scam disruption activities. The government has also committed to introduce legislation in 2024 and to fund the administration and enforcement of mandatory industry codes for banks, telecommunications providers and digital platforms.

As AI technology advances and becomes more accessible, the potential for misuse in creating and distributing such material grows. This necessitates urgent legislative action, like the proposed bill, to establish serious penalties. These measures are crucial for protecting vulnerable individuals from further online harm and ensuring that those who engage in such abusive behaviour face appropriate consequences under the law. Looking ahead, the Albanese Labor government remains committed to safeguarding Australians from online threats including deepfake misuse, conducting ongoing reviews such as the statutory review of the Online Safety Act 2021, seeking to enhance protections and ensuring our online safety laws remain effective in combatting emerging harms.

In conclusion, this legislation represents a critical step in protecting individuals from the insidious impact of deepfake technology. By imposing stringent penalties and clarifying exceptions, we aim to uphold dignity, privacy and safety in our digital age. This bill sends a clear message that our society will not tolerate the degradation and abuse of individuals through technological means. We stand against the perpetration of misogyny and violence against women, and we commit to fostering a safer, more respectful digital environment for all.

5:56 pm

Photo of Matt BurnellMatt Burnell (Spence, Australian Labor Party) Share this | | Hansard source

I rise today to speak in favour of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. At the beginning of the previous sitting year, I remember the member for Bruce came into this place and delivered a speech that was, in part, the product of artificial intelligence. The member for Bruce, of course, disclosed this fact, for it was indeed part of illustrating the capability and role of artificial intelligence, the rapid pace of its evolution and how it can find its way into activities within our lives which have been occurring even prior to the advent of the digital age.

As we know, the humble art and science of speechwriting predates the invention of the computer, the typewriter and even paper as we conventionally know it. Now we can use generative artificial intelligence to create speeches, a whole raft of other documents, sounds, including a person's voice, and video, including the likeness of a person. Subjects can be blissfully unaware of the production of those last two until the content is disseminated by its creator. The potential applications of these productions can be completely benign, humorous and even satirical, but they can also be malevolent. It is incumbent on governments and parliaments alike to act to deter and punish such individuals and to ensure that the criminal law keeps pace with technological change.

This is the position we find ourselves in today with the measures contained within this bill. It is one that forms a crucial step in addressing one of the most insidious forms of abuse that has emerged in our digital age: the non-consensual creation and distribution of deepfake sexually explicit material. Deepfakes generated through advanced artificial intelligence can create disturbingly realistic but entirely false depictions of individuals in compromising and degrading situations. This legislation aims to protect individuals, particularly women and girls, from the profound harm caused by these digital fabrications. Deepfakes are manipulated images, videos or audio recordings that depict individuals in situations they never actually experienced. The technology behind deepfakes has advanced rapidly, making it increasingly accessible and easy to use. This has led to a surge in the creation of deepfake content, much of which is sexually explicit and intended to humiliate, degrade and harm the victims. As noted by the Australian Federal Police Commissioner, we are facing a potential tsunami of AI-generated abuse material.

The impact of these deepfakes on victims is profound. They suffer from severe emotional distress, reputational damage and, in some cases, financial harm. The victims are often young women who find their likeness manipulated into explicit content that is then shared widely on the internet. This not only violates their privacy but also perpetuates harmful gender stereotypes and contributes to a culture of gender based violence. The emotional and physical toll on these victims cannot be overstated. Many suffer from anxiety, depression and a profound sense of violation. This is a form of abuse that leaves deep and lasting scars.

The Criminal Code Amendment (Deepfake Sexual Material) Bill introduces new offences to specifically target the non-consensual sharing of deepfake sexually explicit material. The key provisions of the bill include the creation of a new offence for transmitting sexually explicit material using a carriage service where the person depicted is an adult and has not consented to the transmission. This offence carries a maximum penalty of six years imprisonment. The meaning of 'use of a carriage service' has evolved over time, but such is the nature of the law adapting to reflect technological change and to capture future developments within its scope. The bill also introduces aggravated offences with a maximum penalty of seven years imprisonment. These apply where the perpetrator was responsible for creating or altering the material or where they have a history of civil penalty orders under the Online Safety Act. These offences will cover both real material, such as unaltered images and recordings, and fake material, created or altered using technology, like deepfakes.

Existing laws have not kept pace with the rapid advancements in technology that enable the creation of deepfakes. The current definition of private sexual material does not explicitly extend to artificially generated content, leaving a significant gap in our legal framework. This bill addresses the gap, ensuring that those who exploit technology to create and disseminate harmful material are held accountable. The introduction of these new offences is a clear signal that the law must evolve to meet the challenges posed by new technology. Moreover, this legislation is a response to the growing public outcry against the misuse of deepfake technologies. Victims, community groups, and advocacy organisations have called for stronger protections against this form of abuse. The Albanese government's commitment to introducing these reforms reflects our dedication to safeguarding the dignity and rights of all Australians.

The need for such legislation is underscored by the numerous reports of deepfake pornography circulating online, often targeting young women and causing significant distress, even leading to self-harm and suicide. There have been numerous instances where individuals, particularly young women, have been targeted with deepfake pornography. For example, recent reports highlighted a case where students at a school were subjected to fake nude images circulated online. These incidents underscore the urgent need for robust legal protections. In another case, a female teacher found herself the victim of a deepfake attack, with manipulated images circulating amongst students and colleagues. These examples illustrate the profound harm that can result from the misuse of deepfake technology.

Law enforcement agencies, including the Australian Federal Police and the Commonwealth Director of Public Prosecutions, have provided critical input in shaping these reforms. Additionally, legal experts and victims-survivors have emphasised the importance of creating a legal deterrent against the creation and dissemination of deepfake pornography. The involvement of these stakeholders ensures that the bill is both comprehensive and effective in addressing the issue. This bill is part of a broader suite of measures aimed at tackling online harms.

The Albanese Labor government has been proactive in addressing various forms of digital abuse, particularly those targeting women and girls. These measures include increasing funding for the eSafety Commissioner, advancing the review of the Online Safety Act and committing to address harmful practices such as doxxing. These efforts demonstrate the Albanese Labor government's commitment to creating a safer online environment for all Australians.

The rapid growth of deepfake technology has created new challenges for law enforcement and policymakers. While artificial intelligence can have many positive applications, its misuse to create deepfake pornography represents a significant threat to individuals' privacy and dignity. This legislation ensures that our laws keep pace with technological advancements and provide robust protections against digital abuse. By criminalising the non-consensual sharing of deepfake sexually explicit material, we are taking a decisive step to protect individuals from digital abuse and to uphold their dignity and their privacy. The new offences introduced by this bill are designed to be both comprehensive and proportionate. They target the non-consensual sharing of both real and artificially generated sexually explicit material, ensuring that perpetrators can't evade responsibility by exploiting gaps in the existing law.

The bill also includes specific defences to ensure the offences are targeted and do not overly criminalise legitimate activities. For example, the transmission of material necessary for law enforcement, court proceedings or genuine medical or scientific purposes is exempt under this legislation. These defences are consistent with the existing exemptions in the Online Safety Act and ensure that the new offences are applied fairly and justly. It is important to note that this legislation does not intend to interfere with private communications between consenting adults. The government has no interest in regulating the private activities of adults who engage in consensual behaviour.

The focus of this bill is solely on the non-consensual creation and distribution of sexually explicit material. This distinction is crucial to ensure that the legislation is both effective and proportionate. By targeting the harmful and non-consensual use of deepfake technology, we are upholding the rights and dignity of individuals while respecting personal freedoms.

Whether it is in the transmission or the creation of such material, whether the subject has consented to being a part of it should remain vitally important in determining whether the conduct is lawful. Reckless indifference as to whether someone has provided consent to be involved in any sexual activity—whether it involves them or their digital likeness, and whether it occurs within cyberspace or outside it—should always be presumed unlawful.

The bill also addresses the issue of aggravated offences, which carry higher penalties for more serious conduct. These aggravated offences apply in cases where the perpetrator has previously been subject to civil penalty orders for similar conduct under the Online Safety Act or where they were directly involved in creating or altering the material. By introducing these aggravated offences, the legislation provides a stronger deterrent against repeat offenders and those who engage in particularly egregious conduct. The inclusion of these higher penalties reflects the serious nature of the offences and the profound harm they cause to victims.

The introduction of this legislation follows extensive consultation with key stakeholders, including law enforcement agencies, legal experts and victim advocacy groups. This collaborative approach has ensured that the bill is both comprehensive and effective in addressing the issue of deepfake pornography. The feedback from these consultations has been invaluable in shaping the final provisions of the bill and ensuring that it meets the needs of victims and the broader community.

This process has also highlighted the widespread support for these measures and the urgent need for action. The importance of this legislation cannot be overstated. It sends a clear message that the non-consensual creation and distribution of sexually explicit material is unacceptable and will not be tolerated. It provides a robust legal framework to hold perpetrators accountable and protect victims from digital abuse. This is not just a legal issue but a moral imperative. As a society we must stand against the misuse of technology to harm and exploit individuals. This legislation is a critical step in that direction, and I am proud to support it. By working together, we can create a safer and more respectful online environment for everyone.

The issue of deepfake technology and its misuse is not confined to Australia. It is a global problem that requires international cooperation and collaboration. We must work with other nations to develop and enforce standards that protect individuals from the harmful use of deepfake technology. By leading the way with this legislation, Australia can set an example for other countries to follow. This global approach is essential to effectively combat the spread of deepfake pornography and to protect individuals worldwide.

Moreover, it is essential to continuously evaluate and adapt our legal frameworks as technology evolves. We must continue to develop innovative legal solutions to address the dynamic challenges posed by emerging technologies, and we must explore avenues to enhance digital literacy, strengthen cybersecurity measures and promote responsible AI development as the technology continues to improve its ability to learn and execute ever more complex commands and functions. We need to ensure these technologies follow society's expectations. Ensuring the creators and the end users of this technology are not just ethical but uphold community values and expectations is important, given that artificial intelligence is not ethical or moral of its own accord. This bill lays a foundation for future efforts to combat digital abuse through collaboration between government, industry and civil society. We can create a digital environment that prioritises safety, privacy and dignity for all Australians, whether they are public figures or simply people who, like all of us, should have an expectation that, if anyone were to create and transmit sexually explicit deepfake-generated content without their consent, such conduct would be defined explicitly as criminal behaviour. Perpetrators of this conduct must know that what they are doing is not just unethical or morally reprehensible but also illegal, and that this criminal behaviour deserves to be condemned by both the government and the people of Australia, so that, when prosecuted, their actions are punished. That punishment should also provide a strong deterrent, not just for them but for others who might attempt to do the same to someone else in the future.

In closing, I want mums and dads, brothers and sisters to know that we have an obligation to educate our young people and to ensure that we uphold the values we all expect. This type of behaviour needs to stop. I commend this bill to the House.

6:11 pm

Photo of Alicia PayneAlicia Payne (Canberra, Australian Labor Party) Share this | | Hansard source

I also rise in support of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. This important bill will ban the non-consensual sharing of deepfake sexually explicit material, something which is a serious and growing concern which is already, very sadly, impacting Australians who have fallen victim to this new technological abuse. Digitally created and altered sexually explicit material is a vile and damaging form of abuse that can inflict deep harm on victims.

Young people, in particular, are at risk of image based abuse. The internet has changed dramatically in recent years, with new social media apps constantly popping up—changing platforms that are, increasingly, showing children and young adults damaging content. Further, the rise of artificial intelligence—while potentially an incredibly useful tool in work or study—has the potential to be increasingly harmful. The children of today are digital natives; they have never lived in a world where the internet and social media don't exist. Older generations learned new technologies as we moved into the workforce and engaged with many of these new apps; however, we did so without the innate familiarity that comes from being on the internet from toddlerhood, as Gen Z have been. They inherently understand and use social media. I see this with my own very young children and the way in which they adapted very quickly, and from a very young age, to these sorts of technologies.

With the onset of new social media and the wide range of AI platforms now available, new ways of subjecting women and girls, particularly, to assault and harassment have, disgustingly, infiltrated society. We must act now to ensure we can end violence against women and girls within one generation, as our government has committed to. Specifically, this bill amends the Criminal Code to modernise and strengthen offences targeting the non-consensual sharing of sexually explicit material online, including material that has been digitally created or altered. This bill also includes aggravated offences, which build on the underlying offence, for people who have created or altered the material that has been transmitted.

The reforms in this bill make clear that those who share sexually explicit material without consent by using technology like artificial intelligence will be subject to serious criminal penalties. The new offences will have a maximum penalty of six years imprisonment for transmitting sexual material without consent, and seven years imprisonment for aggravated offences, including where the person created the material. These amendments are high-priority reforms following the government's public commitment on 1 May this year to introduce a suite of measures to tackle online harms, particularly those targeting women and young girls. These new offences will place a high burden on anyone who thinks to perpetrate this kind of abuse. It is another step that the Albanese government is taking to protect women and girls from horrific online attacks.

Sexually explicit deepfakes which are created and shared without consent are used to degrade and dehumanise victims. They are primarily targeted at women and can perpetuate harmful gender stereotypes and drive gender based violence. When they are shared with others or posted online without the consent of the person depicted, it is a serious breach of a person's privacy and can inflict lasting physical and psychological harm on victims.

The phenomenon of image based violence and abuse is not new. Currently the Criminal Code criminalises the sharing of private sexual material online. There has also been a decades-long discussion about the sharing of private sexual material, often called revenge porn. Revenge porn has been criminalised in every state and territory and the Commonwealth, with the exception of Tasmania. This has ensured that people who are victims of that form of image based abuse can receive restitution for the harm they experience and that perpetrators can be charged. However, the current definition of 'private sexual material' does not adequately cover deepfake images or other artificially generated material.

This bill proposes to repeal the existing offence relating to private sexual material and replace it with a new offence that applies in the following circumstances: where a person transmits material using a carriage service, where the material depicts a person who is or appears to be 18 years of age or older, where the material depicts or appears to depict a person engaging in a sexual pose or sexual activity, or particular parts of their body, and where the person knows that the person depicted does not consent to the transmission of the material or they are reckless as to whether the other person consents.

The bill also introduces two aggravated offences which apply increased penalties where, before the commission of the underlying offence, three or more civil penalty orders were made against the person for contraventions of relevant provisions of the Online Safety Act 2021, or where the person was responsible for the creation or alteration of the sexual material transmitted. These amendments are essential to ensure that our laws can apply to both real material, such as unaltered images and recordings, and fake or doctored material that has been created or altered using AI or other technology.

The bill sets out specific defences to the transmission of the sexual material without consent to ensure the offence is targeted and proportionate. The new offences will not cover private communications between consenting adults or interfere with private sexual relationships. It's about sharing with other people or more broadly. They will also only apply to material depicting or appearing to depict adults. The Criminal Code continues to criminalise the use of carriage services for child abuse material, including child abuse material generated by artificial intelligence.

I am proud to be part of a government that is aiming to end gender based violence in a generation. Our government's work is guided by the National Plan to End Violence against Women and Children. The plan sets out how all parts of society, including governments, businesses, workplaces, the media, schools and educational institutions, the family, the domestic and sexual violence sector, communities and all individuals must work together to achieve this shared vision.

In the recent budget, our government permanently established the Leaving Violence Program so those escaping violence can receive financial support, safety assessments and referrals to support pathways. The Leaving Violence Program supports victims-survivors of intimate partner violence to make informed choices about leaving violent relationships and ensures they are receiving the vital support they need. Our government has also funded a pilot of age assurance technology to protect children from harmful content like pornography and other age-restricted online services. The new pilot is part of a suite of interventions aimed at curbing easy access to age-inappropriate material by children and young people and tackling extreme misogyny online.

Education is such an important part of the prevention of gender based violence. That's why our government is funding a new phase of the Stop it at the Start campaign. This new phase will specifically include a counter-influencing campaign in online spaces where violent and misogynistic content thrives to directly challenge the material in the spaces where it's being viewed. The campaign is intended to counter the corrosive influence of online content targeted at young adults that condones violence against women. It will raise awareness about a proliferation of misogynistic influencers and content and encourage conversations within families about the damaging impact of the material.

The disturbing reports we heard out of Victoria last month, where over 50 girls had sexually explicit deepfake photos of them spread around their school, indicate how important it is for our government to act now. Criminalising deepfake sexually explicit images is the next step that we can take to ensure that vulnerable people are protected online. Sexually explicit deepfake images can affect everyone, from major celebrities to school students. For each person subjected to image abuse online, it is a harrowing and life-altering experience. As technology develops at an unprecedented rate, our laws must protect vulnerable people from new forms of abuse. No-one should ever be subjected to any form of abuse, and this bill is how our government is working to ensure that future generations are kept safe online.

I must say, as a mother of young children, just how important this is and how concerning. I suppose that, as part of an older generation—as my staff, who drafted this, have described me, and I guess that's so—I was not exposed from such a vulnerable and young age. We didn't live through that in the way that our children's generation are. The idea that young girls could have these faked images spread around their school—just think about the impact that that will have on those girls for the rest of their lives. That is why this bill is so incredibly important.

As I said, our government wants to end gender based violence in a generation, and online abuse is a really important part of that. This deepfake AI generated or altered pornography is something that we first saw affecting celebrities, but now it's affecting children in our schools. I am very proud that our government is taking this action now, taking this seriously and putting an end to it.

6:22 pm

Photo of Tanya PlibersekTanya Plibersek (Sydney, Australian Labor Party, Minister for the Environment and Water) Share this | | Hansard source

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 is a timely piece of legislation. The Albanese government will create a new set of criminal offences to ban the non-consensual sharing of deepfake pornography. Digitally created and altered sexually explicit material is a damaging form of abuse against women and girls that can inflict deep harm on victims. I say women and girls because overwhelmingly this material is created and used against women and girls, although some boys are also the victims of it. In fact, some of the apps that create this deepfake material won't work on male bodies; they only work on female bodies.

These reforms will make clear that those who share sexually explicit material without consent using technology like artificial intelligence will be subject to serious criminal penalties, as they should be. This includes material that's been created or altered using technology such as deepfakes. Sexually explicit deepfakes, created and shared without consent, are used deliberately to dehumanise and degrade others. They particularly target women and girls, and they can certainly perpetuate harmful gender stereotypes and drive gender based violence. When they are shared with others or posted online without the consent of the person depicted, it is a serious breach of that person's privacy, and it can have long-lasting harmful impacts on the subject. So I welcome this legislation. It's the work of a government that is looking at the issues that our society faces and addressing them in a swift and thorough way.

This legislation is designed to stem the flow of the particular harms that result from deepfake pornography, and it's very important that we act now. The world's changing, and the technology that's available to online actors is changing. We're in a very particular moment of history, before what I think will be a really massive change as AI becomes even more dominant and mainstream. The shift that we're talking about, with the massive upswing in the use of AI, is as profound as the shift that we saw in 2007 when the iPhone was introduced. But only now is the alarm beginning to be raised, in the last couple of years, about the huge impact that social media is having on our health, on our attention spans and, in particular, on the mental health of our children. The lag between the introduction of the iPhone, the widespread use of social media and the way that this government is beginning to address those harms shows how quickly our society can change—and not for the better—and how important it is that we tackle these problems now. We can't allow the new and additional problem of AI generated deepfake pornography to escape from us in the way that some of the worst elements of social media escaped from us.

This bill amends the Criminal Code Act 1995 to modernise and strengthen offences targeting the non-consensual sharing of sexually explicit material online. The new offences will have a maximum penalty of six years imprisonment for transmitting sexually explicit material without consent and seven years imprisonment for aggravated offences, including where the person created the material. These amendments are a high priority following the government's public commitments on 1 May 2024 to introduce a suite of measures to tackle online harms, particularly harms that target women and young girls, who are overwhelmingly the victims here. This is timely.

As I said a moment ago, we're at a moment in history where AI is about to massively take off in its influence in all elements of our lives, and I do think that, as a community, we are not even beginning to grapple with what some of those effects might be. We're perhaps just beginning to understand what the future looks like. We really do need to think about how we get the best out of artificial intelligence, how we use it for public good and human benefit. We need to give some thought to that now in these early stages in a way that we really didn't give consideration to the massive upswing in the use of social media a few years back. We allowed open slather in social media and we're constantly chasing the harmful impacts of that now, trying to, in some ways, put the genie back in the bottle.

For deepfake porn apps, essentially what you do is feed a photo of a real person into the app, and it spits out a faked, sexually explicit, pornographic but ultra-realistic image of that person. In fact, one Spanish mother who was interviewed on this said that, if she hadn't known the details of her daughter's body, she would have assumed that the image circulating in her small town of her 14-year-old daughter was a real image.

In her book, Time to Reboot: Feminism in the Algorithm Age, Canberra writer Carla Wilshire calls this an 'epidemic of non-consent'. The apps and fake images are being used very deliberately to bully, to harass and to cause immense distress to the, largely, young girls who are being pornified by their classmates—young girls, fellow students and sometimes teachers and other women that they come into contact with.

Parents who were interviewed by News Corp papers in May told horror stories about their children even taking their own lives after being bullied and threatened with these sorts of deepfake images. There are so many horror stories, including that of Tilly Rosewarne from Bathurst, who was bullied from grade 5 onwards but found that the bullying escalated beyond endurance when one kid faked a pornographic picture of her and put it on Snapchat. Tilly suicided. Bullying follows children home from school. They cannot escape because of the influence of social media in their lives, the fact that their phones are always in their pockets, and it becomes a million times more toxic and more powerful when you combine it with the technology that allows this sort of deepfake pornography. So we are acting with this legislation to ban the creation and nonconsensual distribution of deepfake pornography. These reforms will make it clear that those who seek to abuse or degrade people through doxing or deepfakes or abusing their privacy online will be subject to serious criminal penalties because what is happening to people online is having a real-world impact in our homes, in our schools, in our workplaces, in our community.

In the most recent budget we funded a rapid review of research into perpetrators that we expect will also look at the way perpetrators of violence are using social media and technology to continue their harassment and intimidation. Researchers in the area of domestic violence and sexual violence, Jess Hill and Michael Salter, are very explicit, too, about the connection between pornography and rising rates of violence against women. Professor Salter said last month, 'The technology sector is profiting from services and products that cause mass social harm, including violence against women and children.'

The most recent research shows that pornography related searches and downloads account for between 10 and 20 per cent of overall internet traffic. While research on the algorithms that porn sites use is just beginning, what is very clear from the studies is that people are being shown and recommended new and more extreme forms of pornography compared to what they searched for or have even expressed an interest in. A 2024 study from Dublin City University shows that the recommender algorithms used by social media platforms are rapidly amplifying misogynistic and male supremacist content. The study tracked and recorded the content fed to 10 neutral accounts on 10 blank smartphones, five on YouTube Shorts and five on TikTok. The researchers found that all of the accounts identified as male accounts were fed masculinist, antifeminist and other extremist content irrespective of whether they sought out general or male supremacist related content, and they all received this content—this is the bit that really blows me away—within the first 23 minutes of the experiment. That is what the algorithm is feeding people whether they want it or not. And there are plenty of young men who will say to you: 'I believe in equality between men and women, girls and boys. I don't search out this stuff. They know I'm a bloke, possibly, because I'm looking up car videos, and the stuff that gets fed to me is violent, it is degrading, it is graphic and it is harmful.'

Released today was a study that outlined the link between social media use, pornography and choking, which has become very common in the sexual relations of teenagers and young adults. Choking is a sexual activity that is significantly on the rise. There is no question in my mind that it's on the rise because it's being depicted in pornographic videos that are being fed to young people in their social media accounts.

Researchers from the University of Melbourne and the University of Queensland raised the alarm about the harms done—again, mostly to women and girls—by the increasing adoption of this practice. Harms include, obviously, losing consciousness. Of course, every time you lose consciousness, you're damaging your brain, in the same way as you would if you were a footballer getting a knock to the head that knocked you unconscious. Obviously, it can lead to death in extreme cases—after minutes, mind you, it can lead to death. But harms also include the risk of strokes. In fact, sexual choking is the leading cause of strokes for women under the age of 40, according to this research. Most people engaged in these practices are not aware of the risks involved. I have to say: this is a really good example of where you need very explicit sex education for young people and you need very explicit consent education, because I don't believe you can consent to a practice unless you know that there is potential for brain damage, stroke or death from that practice.

I congratulate the Attorney-General for this very important legislation. It is one of the ways we are seeking to address the increasingly hostile and dangerous online world that exists for our children. I think the increasingly hostile and dangerous online world, instead of helping us build a better, more equal, fairer society, in many respects is taking us backwards.

I want to finish by saying to the eSafety Commissioner how important her work is and how much, as a government, we value the effort that she and her team are putting into creating a safer online world for our children. We cannot allow the progress that we've made, over decades, towards gender equality, or allow the progress that we want to make towards safety for women and girls from gender based violence, to be undermined by the tech bros and cowboys who think the rules don't apply to them.

6:37 pm

Photo of Dan RepacholiDan Repacholi (Hunter, Australian Labor Party) Share this | | Hansard source

I rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. One of the biggest changes in our world over the past couple of years has been the rise and rise of artificial intelligence technology. Every second day now there is another news story involving AI. It is being used in all sorts of ways, mostly for good, to help people learn new things and for businesses to increase their productivity. It is generally being used to make the world a better place. But, as is sadly common with new, powerful technologies, AI is also being misused for bad reasons. One of the most highly disturbing examples of this is the use of AI to create non-consensual deepfake sexual material.

A deepfake is a fake image, video or voice clip depicting a real person. Deepfakes are created by using AI to manipulate real data associated with the depicted person, which could be photos or videos of them from social media. Deepfake content can be highly convincing and hard to distinguish from real photos or videos of a targeted person. This is especially true as AI tools become more advanced and as people upload more and more content of themselves online as well. The more data that is out there and available for a given person, the more realistic deepfakes of them can become.

Most deepfake content isn't sexual, and sometimes deepfake technology could be useful. I suspect many singers around the world would probably not have a career without autotune. Deepfakes can also be funny when they are created in good faith and clearly meant to be a fake, such as when a constituent in my electorate sent a deepfake image of me as a kind of Transformer merged with a helicopter. That was a nice surprise, but unfortunately, alongside the harmless deepfakes, a serious issue has been brewing.

Deepfakes can be very dangerous. The danger of a deepfake lies in the fact that they can depict people doing or saying things that they did not actually do or say. When sexually explicit deepfakes are created of a person without their consent, this is a highly distressing experience for the person depicted and extremely harmful to them and their loved ones. It is a damaging form of abuse and it must stop.

Fake and manipulated content has been a feature of our society for many decades. Famously, in magazines and newspapers, photoshopping has been used to make all kinds of alterations to the way people look—be that slimming them down, whitening their teeth or removing their wrinkles. But deepfakes are opening the door to a whole different league of manipulation.

In terms of the quality of output, how realistic it is and how easy it is to generate, AI powered deepfakes are making Photoshop look like ancient technology. Soon it may be possible for an everyday person to generate huge amounts of almost any content they like, of very high quality, at the click of a button. It is deeply disturbing that some people in our community will use this technology to degrade and dehumanise others by generating non-consensual sexual material based on their likeness.

Sadly, deepfake technology is already being used in this way. When deepfake sexual material is shared or posted online without the consent of the person depicted, it is a serious breach of the person's privacy and their sense of security. The effects can be long lasting. Once images are out there, victims may have to live with the fear that the reputational damage they're experiencing will follow them throughout their life. The harm also extends to the friends and family of the victim, who share their distress.

Recently we have heard shocking news stories about non-consensual deepfake sexual material. Some of these stories involve material being distributed at schools and at workplaces. The stories involving school students are particularly disturbing. Other existing laws already appropriately penalise the use of AI technology to generate sexual material depicting children. This bill applies to deepfake depictions of adults. As young people are early adopters of AI technology, it is crucial that we clearly set out expectations in law. Together, these laws ensure that it will be illegal to generate non-consensual sexual material of any person, regardless of their age.

Given the new risks that AI technology is opening up and the harms that are already being brought upon victims of deepfake sexual material, it is crucial that we, as a government, match powerful advancements in technology with advancements in the law, and that is what this bill seeks to do. It addresses an important area of risk and harm that the rise of AI has opened up.

This bill will amend the Criminal Code to ban the sharing of deepfake sexually explicit material of a person without their consent. This offence will carry a serious criminal penalty, a maximum of six years in prison. If an offender is also responsible for the creation of the non-consensual sexually explicit material, the offence will be considered aggravated, which increases the maximum penalty to seven years imprisonment. The law as it already stands criminalises non-consensual sharing of private sexual material; however, this bill amends the law to ensure that it equally penalises deepfake sexual material of a person too. These important reforms make clear that those who share and create sexually explicit material without consent, including by using new technology like AI, will be subjected to serious criminal penalties—and so they should be. What makes this bill especially needed and important is that we also know that in the vast majority of cases the victims of deepfake sexual material are women and girls.

The government is deeply concerned about the level of gendered violence in this country. In May, the Prime Minister met with state leaders from across Australia to address how the gendered violence crisis in our nation could be best addressed. After that meeting, the government announced almost $1 billion of funding towards the Leaving Violence Program, which will provide those escaping violence with financial support, safety assessments and referrals to support pathways. We also announced a trial of age verification technology to limit the ability of children to access sexually harmful content online. Non-consensual deepfake sexual material perpetuates harmful gender stereotypes and drives gender based violence. That's why our next step to address the gendered violence crisis is to crack down on harmful deepfakes.

This bill builds on other initiatives of this government that work towards ending gendered violence in this country. We made a commitment to tackling online harms, particularly online harms that affect women and young girls, and this bill is part of our effort to honour that commitment fully. The bill was developed in consultation with the Australian Federal Police, the Commonwealth Director of Public Prosecutions and other relevant Commonwealth agencies. It will work hand in hand with relevant state based laws to empower law enforcement and prosecutors to properly go after the people who create and share non-consensual deepfake sexual material.

We have heard the calls of victims and members of the public who have felt strongly that the law must be strengthened to address non-consensual deepfake sexual material. We are listening and we are also acting. I support this bill because it penalises the sharing of non-consensual deepfake sexual material. It will reduce this terrible kind of abuse, and I support this bill because it helps our continuing fight against gendered violence in our country. I commend this bill to the House.

6:47 pm

Photo of Mark DreyfusMark Dreyfus (Isaacs, Australian Labor Party, Cabinet Secretary) Share this | | Hansard source

Digitally created and altered sexually explicit material that's shared without consent is a damaging and deeply distressing form of abuse. This insidious behaviour is degrading, humiliating and dehumanising for victims. Such acts are overwhelmingly targeted at women and girls and perpetuate harmful gender stereotypes and gender based violence. The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 delivers on a commitment made by the Albanese government following the National Cabinet held in May to address gender based violence. This commitment recognises the urgent and collective need to respond to the growing challenges associated with artificially generated sexual material.

This bill will strengthen the criminal law to protect vulnerable people from online harm and abuse and hold perpetrators to account. It will create a new criminal offence that applies where a person uses a carriage service to transmit sexual material depicting an adult, and the person knows the person depicted does not consent to the transmission of the material or is reckless as to whether or not the other person consents. The new offence will carry a maximum penalty of six years imprisonment.

The bill also introduces two new aggravated offences. The first aggravated offence applies where the person transmitting the material is also responsible for creating or altering the material. The second aggravated offence applies where a person has already been found liable for similar conduct at the civil standard under the Online Safety Act 2021 on three occasions. These aggravated offences carry a maximum penalty of seven years imprisonment to reflect the seriousness of this offending. The bill sets out specific defences to ensure these offences are targeted and proportionate.

Let me address some matters that have been raised in debate. The bill ensures the criminal offences relating to non-consensual sharing of sexual material apply to both real and fake material, including deepfakes. The bill repeals previous offences in the Criminal Code dealing with non-consensual sharing of private sexual material. Those existing offences do not adequately cover the situation where deepfake sexual material is shared online without consent. This was the clear advice of the Commonwealth Director of Public Prosecutions to the Parliamentary Joint Committee on Law Enforcement in December 2023. The new criminal offences remedy this defect and create a strong framework to criminalise the non-consensual sharing of sexual material online. The new criminal offences are based on a consent model to better cover both artificial and real sexual material. Consent is not defined in the legislation and relies on its ordinary meaning, which is understood to be free and voluntary agreement. A person is taken to have consented if the person freely and voluntarily agrees to the sharing of the material. It would not apply where consent was obtained through fear, force or deception.

The new offences will apply to sexual material depicting adults, with child abuse material continuing to be dealt with comprehensively in a separate division of the Criminal Code which includes detailed offences and heavy penalties. The new offences do not change the meaning of recklessness; rather, the bill clarifies that being reckless as to whether a person has consented includes:

… not giving any thought to whether or not the person is consenting.

That's consenting to the transmission. This is consistent with other offences in the Criminal Code which deal with non-consensual sexual activity—for example, the war crime of rape and sexual violence crimes. The spurious arguments raised by the opposition, particularly by the member for Bradfield, to the effect that these new criminal offences are not needed should be firmly rejected. The opposition should instead support stronger laws to counter deepfake sexually explicit material.

The bill will hold perpetrators to account for causing harm through the non-consensual sharing of deepfakes, and ensure that Australia's criminal offences keep pace with new technology. The Albanese government is committed to keeping Australians safe from technology facilitated abuse.

Photo of Ian GoodenoughIan Goodenough (Moore, Liberal Party) Share this | | Hansard source

The question is that the amendments be agreed to. There being more than one voice calling for a division, in accordance with standing order 133 the division is deferred until the first opportunity on the next sitting day.

Debate adjourned.