Senate debates

Monday, 19 August 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

7:01 pm

Michaelia Cash (WA, Liberal Party, Shadow Minister for Employment and Workplace Relations)

I rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. At the outset, I will say that the coalition support the intention behind this bill, but, as we've looked through the bill, through the committee process and otherwise, it has become clear that there is no clear rationale for why the bill actually needs to pass.

The bill repeals existing offence laws that make it an offence to transmit private sexual material with intent to menace, harass or cause offence. In their place, it establishes new offences for transmitting sexual material without consent. This includes material created or altered using technology. This is a one-for-one replacement. The existing laws were introduced by the coalition in 2018 with strong support from Labor. They already deal with both revenge porn and fake sexual material, now known as deepfake material.

Labor now wants to repeal and replace the existing offences that deal with deepfakes and revenge porn. In going through the process, though, Labor has given no persuasive rationale for why we should now repeal laws that it previously supported. The only explanation for the change is a reference to an unexplained legal conclusion that has been put forward by the Commonwealth Director of Public Prosecutions. The issue relates to the legal definition of the term 'private sexual material'. The term refers to persons engaging in or appearing to engage in sexual activity 'in circumstances that reasonable persons would regard as giving rise to an expectation of privacy'. In parliamentary submissions—and this is very important—the CDPP have expressed concern about how that definition deals with fake images. They said:

The issue that arises here is that, with certain deepfakes, it cannot be said that any expectation of privacy attaches to the depiction of a victim. For example, if an accused were to transpose an image of a victim's face onto a publicly available pornographic video, this would, generally speaking, not be 'private sexual material'. This is because the creator of the deepfake uses, as source material, a depiction of a part of the victim (for example, their face) with respect to which it cannot be said there would be an expectation of privacy.

The government has adopted that view, but the reasoning has never been explained. Why can it not be said that an expectation of privacy arises with respect to the image of one's face being attached to a pornographic video? In many cases, perhaps the vast majority of cases, people would feel the dissemination of a picture that shows their face attached to a pornographic image is a huge intrusion on privacy. Why would courts not arrive at that commonsense outcome? So far as we can tell, the issue has not been tested in court.

The government has cited no case authority for its conclusion, which appears to be a triumph of technical reasoning over common sense. What is worse, that conclusion appears to be directly contradicted by the explanatory materials for the existing law, which make clear that the term 'private sexual material' is intended to cover fake material. This is what they say:

The definition for private sexual material captures material that appears to depict a person engaged in sexual activity or a sexual pose. The use of the phrase 'appears to depict' is intended to capture material that has been altered or doctored to falsely portray a person in sexual activity or a sexual pose. For example, an image of a person's face taken from an innocuous image may be transposed onto an image of another person's body in a sexual pose.

In other words, the very example that the CDPP uses to show why there might be a gap in the law is the same example used in 2018 to explain what the laws do cover. This direct contradiction has never been resolved, and that is a problem because, quite frankly, on a side-by-side comparison—that's something we've done—if you look at the existing law and this particular bill, there are significant drawbacks to the model we're now considering. I will just take you through those drawbacks. It is worth reiterating that the coalition fully supports the policy intent of this bill. Our object is to improve the legislation so we get the best outcome for Australians, and we will move amendments in that regard.

There are at least five downsides, though, to the replacement of this law. First, the new offence is inconsistent with other laws. Right now, when we use the term 'private sexual material', it has the exact same meaning in both the Criminal Code and the Online Safety Act. The bill removes that consistency, so the civil regime now deals with a different thing to the criminal regime. This means there is now an obvious risk of gaps between our civil and criminal laws when it comes to revenge porn, deepfakes and other forms of image-based abuse.

Secondly, the new offence is drafted in an exceptionally broad way that risks capturing innocuous behaviour and then reverses the onus of proof, which is of serious concern. We're unable to say with clarity precisely what types of behaviour are criminal offences under the bill because there is no reference to an objective standard. This problem is created because the new offence will no longer be based on using a carriage service to menace, harass or offend. By losing that objective reference point, we risk criminalising things like material distributed for educational purposes and material that was consensually distributed in the past, as well as restricting political and satirical speech. There is a clear risk they will be captured by this new law.

Thirdly, even though the government has framed its new offences around the concept of consent—this is something we do need to explore in the committee stage—it is repealing the definition of 'consent'. At present there is a definition of 'consent' in the Criminal Code, which is used in relation to the sharing of private sexual material. The law currently says consent means free and voluntary agreement. But, in redesigning the existing criminal offence, this bill adopts a bizarre and contradictory approach. Even though consent will be the central element of the new criminal offence, the bill repeals the definition that explains what consent means. The explanatory memorandum says consent takes its ordinary meaning. But there is only one sentence explaining what the ordinary meaning is, and that explanation is limited to a single specific circumstance. Courts will now need to decide what consent means, so the courts and not the parliament will decide the scope of the conduct that is now criminal. This introduces uncertainty into the law, which is not where we want to be. As it stands, it would appear that, based on the drafting, we can't clearly outline the circumstances in which this new offence will apply.

The fourth issue that we need to explore in committee is that the government, for some unexplained reason, has changed the definition of 'recklessness' for this offence. Recklessness is one of the fault elements of the offence. One of the core ways that this new offence will be proven in court is by showing that an accused was reckless about whether a victim consented to sharing an image. If a person meets the standard of recklessness, they will be morally culpable—in other words, they can be punished by law. But, in just about every other part of the Criminal Code, recklessness has a standard meaning. This is by design. The Criminal Code is meant to be consistent. It lists just four standard fault elements: intention, knowledge, recklessness or negligence. For some unexplained reason, however, for this offence, the government has given 'recklessness' a different meaning. We've seen now that new words have been added which change the way the fault element applies. This is untested and inadequately explained, and, once again, we're going to have to explore this because we don't know what the consequences will be.

The fifth downside is that we increase the risk of negative experiences for victims. Under the existing offences, you don't need to prove consent or a lack thereof. This means there is far less need to cross-examine a victim about the material which shows them engaging in sexual activity, whether it's real, as with revenge porn, or fake, as with AI generated deepfakes. However, under this new legislation, the prosecution will need to prove consent as an element of the offence. The obvious consequence is that you will have the defence challenging the prosecution on the issue of consent. In practice, this will likely mean that you're going to have victims—and they'll be overwhelmingly women—being cross-examined about the ins and outs of whether and how they consented to particular material, and this risk arises both in relation to deepfakes and also in relation to revenge porn.

One of the other things that we need to actually explore is: Who was crying out for this? Who was saying that we need to replace an existing law that deals with these issues adequately with one that is now more likely to result in victims being dragged through the justice system? Again, we need to explore what the reasons are for this. In short, instead of a clear workable offence that covers both deepfakes and revenge porn, this bill gives us now, particularly for the victims, an uncertain mess.

There are two pathways forward to deal with this legislation. The first way is to deal with this and accept the government's legislation as is. This means a complete redesign of laws that are already on the statute books, even though the rationale has never been clearly explained and there are clear disadvantages. The second way forward—and I would hope that this is what the Senate reflects on—is to improve the legislation. This means acknowledging we have legislation on the statute books right now to deal with both fake material and so-called revenge porn. It involves adding clarity by directly addressing the concern that the existing laws might not apply to deepfakes. This means adding short provisions to expressly make clear the existing laws cover both deepfakes and so-called revenge porn. It is a simple matter of removing all doubt that the current laws do what they were always intended to do. The second option is the coalition's preferred approach. It's the option that we've put on the table.

As we've said consistently, we support the intent of this legislation. We will work constructively to ensure that deepfakes can be addressed, as we have been doing since 2018. We support and will not stand in the way of the legislation, but, as I've outlined, we have some serious concerns and we will move amendments which we think will improve the bill, given the very legitimate concerns that I have raised, particularly in relation to the cross-examination of victims and potentially dragging them through the courts. The obvious consequence, as I said, of the way this legislation has now been drafted is that you will have the defence challenging the prosecution on the issue of consent. In practice, this will likely mean that you are going to have victims, overwhelmingly women, being cross-examined about the ins and outs of whether and how they consented to particular material being transmitted.

As I said, there is a way forward. We will be putting amendments up to address these issues, and I certainly ask the Senate to give consideration to the coalition's approach to addressing the legislation, which will minimise the impact, in particular, on victims.

7:16 pm

Larissa Waters (Queensland, Australian Greens)

I rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. The Greens welcome this bill, which responds to the online harm caused by deepfakes and other artificially generated sexual material. However, it does nothing to stop the creation of those images in the first place. It's well and good for legislation to be updated to make sharing deepfake or AI image based abuse a crime, but it would be better that the images weren't so easy for anyone with internet access to create and then share. Dr Rachael Burgin of RASARA, the Rape and Sexual Assault Research and Advocacy organisation, said during the Legal and Constitutional Affairs Legislation Committee inquiry into the bill that the creation of deepfake images:

… whether or not they are distributed, is a significant harm and, commonly, threats to circulate … deepfakes, is a tactic used by an abuser to instil fear and exert control and power over another person.

A failure to criminalise the creation and the threat of the creation of this type of material accepts a culture of gendered violence.

This bill doesn't come close to addressing how image based abuse is contributing to gendered violence. Image based abuse is deeply gendered. Women, girls and gender diverse people are most commonly targeted and, much like revenge porn, it is overwhelmingly used by men as a tool to harass and degrade. In fact, 99 per cent of victims of deepfake pornography are women, according to a 2023 study by Security Hero. Deepfake material may be fake, but the impacts are very, very real. We heard heartbreaking evidence during the inquiry into this bill about the harm of deepfake abuse on individuals. Noelle Martin, a victim-survivor of deepfake abuse, said:

… it is life-destroying and shattering. It is something that impacts every aspect of a person's life. It impacts their being able to find work and their employability to being able to manage and control their reputation. It impacts a person's dignity, autonomy and agency. It compromises and threatens people's capacity to self-determine, to flourish and to exercise their sexual agency.

I completely agree with her remarks. The federal government can and must do more to stop the harm of deepfake abuse by stopping their creation.

During the inquiry into this bill, the federal government said that the Commonwealth can't stop the creation of deepfake material and that it's the states and territories who would need to do any criminalising under their own state based criminal codes. I note that they mentioned that it was on the Standing Council of Attorneys-General agenda, which I welcome, but they insisted that the Commonwealth could not act. I beg to differ. The Convention on the Elimination of All Forms of Discrimination against Women is an international treaty that Australia has ratified. In my view, as a lawyer—and I'm sure many minds will have views on this—there's a clear case that the federal government could use the external affairs power, on the basis that we have ratified that treaty, to enact laws that stop the creation, as well as the associated threat of creation, of deepfake sexual material. So I'd like to foreshadow a second reading amendment in my name to that effect, and I'll move that properly towards the end of my contribution.

It's pretty clear, in my mind, that the Commonwealth could legislate to stop the creation of this deepfake abuse if they so chose. I think it's unfortunate that they waved away that suggestion, when there's no constitutional barrier to them doing so, and instead said that it's up to somebody else to tackle what is actually the real problem, leaving it to the states and territories. I hope the states and territories do prioritise that work and do come on board and attend to fixing the state and territory based criminal codes in that respect. Let's hope they take a harmonious approach; otherwise, you'll have a patchwork of approaches where you could have had the Commonwealth simply step in to criminalise the creation of that material as well as the mere dissemination of it. Let's stop playing catch-up with technology and, instead, show leadership in preventing this harm from occurring in the first place.

With all of that said, we know that criminalisation is not the sole way to prevent image based abuse from occurring. Education and primary prevention are much needed, and they're definitely needed to drive cultural change in this regard. That's why the Greens are calling on the Albanese government to include education on the harms of deepfake sexual material and on the weaponisation of artificial intelligence in consent education and respectful relationships education as they're currently rolled out, albeit patchily and albeit without enough funding, in our secondary school system.

It's also imperative that frontline service providers who specialise in both preventing and responding to this kind of abuse receive the funding that they need. These crimes can have a number of victims, and they can impact many victim-survivors. Ultimately, those people will seek assistance and support through specialised sexual violence services, which, at the moment, are underfunded and are having to turn people away or condemn them to obscenely long waitlists.

Sexual violence support services are even more underfunded than family and domestic violence frontline support services. The waiting lists for support after experiencing sexual violence, for counselling, for justice and for healing, are unacceptably long and will compound the harm that is done. We need those frontline sexual violence service providers and support organisations fully funded so that you don't have waitlists that are months and months long for people to start that process of healing.

Explicit deepfakes are becoming prolific with widespread and easy access to artificial intelligence apps on every single smartphone, and I might note that deepfakes are already wreaking havoc on political communications and on election outcomes. Deepfakes present inherent risks, including the spread of misinformation, threats to privacy and security, altering election outcomes and misleading and deceiving voters. I note that my colleague Senator Shoebridge will have a second reading amendment to address that issue, and he also has a private senator's bill. I urge the Albanese government to take immediate action to extensively ban the creation, as well as the transmission, of deepfakes to address the harm that deepfakes present to our democracy and to address our culture of gendered violence.

Just to recap the existing rules, despite the rather lengthy contribution from Senator Cash, it's pretty clear to those of us that sat on the committee that deepfakes generated by AI were not covered by existing laws, and it's important that they be covered. This bill would do that, but it would only address the distribution of deepfakes. It wouldn't address the creation. We actually need to stop them from being created in the first place. We took some incredibly disturbing evidence that some of these apps—Nudify was one example that was often referred to—are not only freely available for download on any smartphone but have been categorised as acceptable for age 4-plus. My jaw hit the floor when I heard that evidence. There is something deeply wrong when the Commonwealth says they can't fix that. I don't accept that they can't fix it. I wish that this bill were going to fix the problem. I think it's a good step forward, but we need to actually take that next step and make sure that those apps can't run riot and provide material that is so deeply unhelpful and in fact so traumatising and can so easily be turned into a tool of violence predominantly against women. We need the government to do more in this space. They have the power to legislate, and they should. The videos might be fake, but the harm is real.

As I said earlier, 99 per cent of victims are women. Overwhelmingly, this sort of abuse is used by men as a tool of violence, as a tool of power and as a tool of control. We desperately need that education and that primary prevention. What's needed is not just a change to our criminal laws; we need to make sure that a culture change occurs and a message is sent, particularly to young people, that this is not an appropriate way to relate to your peers or to other people you might be subjecting to this AI generated abuse.

With that said, I note that it's time for this government to put the rights of women and girls ahead of the profits of big tech. I move:

At the end of the motion, add ", but the Senate notes that Australia, having ratified the Convention on the Elimination of All Forms of Discrimination against Women, has an international obligation to make the creation of, and associated threat of the creation of, deepfake sexual material an offence."

7:26 pm

Nita Green (Queensland, Australian Labor Party)

I'm really pleased to rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. I chaired the inquiry into this bill. I want to thank all the witnesses who came forward during the inquiry's hearing. This is a crucial piece of legislation that is going to change and protect lives. I'm going to talk about the bill and the evidence that we received throughout the hearing. I will also come back to the previous contributions, because I think it is worth responding to claims from those opposite that this offence and this Criminal Code amendment should be watered down in any way, or to the position the Greens have put forward that we should somehow ignore the Constitution.

The rise of deepfake AI represents a growing problem that we can no longer ignore. Technology is rapidly advancing, and it is our responsibility to ensure that our legal frameworks evolve with the new crimes of the digital age. The bill before us will set out our stance on technology abuse within Australia by imposing serious criminal penalties on those who share sexually explicit material without consent. These reforms will include material which is created using artificial intelligence or other technology. This bill could not arrive at a more crucial time. The eSafety Commissioner provided evidence to our committee that deepfake sexually explicit material has increased 550 per cent year on year since 2019. Approximately 90 per cent of deepfake images are sexually explicit, and 95 per cent of them target women and girls. It is prolific, it's horrific, and it destroys lives.

The public hearings on this bill revealed research showing that, in October 2022, a new AI bot created and shared 680,000 fake nude pictures of women without their knowledge. These statistics expose a distressing truth for our country: deepfake and AI sexually explicit content is being weaponised against women. It is rooted in misogyny, and its presence is only going to expand as technology improves. Digitally created and altered sexually explicit material is a deeply damaging form of abuse used to degrade and humiliate its victims. It has the potential to cause lifelong harm, with this fabricated explicit material often indistinguishable from real life. It is also predicted that deepfakes will only become more advanced in the future, with technology able to swap people's heads, bodies and voices in the photos, videos and images that are received on phones. This material impacts its victims in all areas of their lives, with survivors often left traumatised personally, professionally and emotionally. This is why the Albanese Labor government is strengthening the Criminal Code, imposing a penalty of six years imprisonment for those who share non-consensual deepfake sexually explicit material, and an aggravated offence with a higher penalty of seven years for those who also created the non-consensual deepfake. This new law is an extension of our government's commitment to ending family, domestic and sexual violence within a generation. We are keen to take these steps to protect women and girls, given that they make up 99 per cent of the victims of this harmful material.

As chair of the committee, I was privileged to hear from experts, stakeholders and victims on the impact that this type of material has on our community. It was clear from the very beginning of the hearing that this type of material is pervasive and unprecedented in its nature. The creation and sharing of non-consensual deepfakes has reached a phenomenal scale in terms of how many people can be targeted, how easy it is to target individuals and the sheer number of websites and apps that are dedicated to this type of abuse. Ms Noelle Martin, a global activist and Young Western Australian of the Year in 2019, spoke to these issues. Not only did she emphasise that billions of people have viewed the top 30 non-consensual deepfake websites; she also explained that this is not just an issue that affects celebrities and people in the public eye. This is an issue that continues to target everyday women and girls.

With more and more apps emerging that allow people to upload photos of women and churn out sexually explicit material, we are seeing lives destroyed overnight. Websites and apps of this nature are causing women to be bullied in their schools and their workplaces. It is destroying their ability to find work and compromising their employment. This form of abuse threatens people's ability to control their reputation and uphold their right to dignity, autonomy and agency. As one can imagine, those who experience this type of abuse, having digital material of themselves non-consensually created and distributed, often experience an extreme decline in their physical and mental health. Everywhere these victims go they are worried. They are worried that their family members, grandparents, coworkers and friends have seen the non-consensual material, and some are unable to make eye contact with others as they think, 'Have they seen it too?' Our hearing revealed stories of individuals who would have the images replay over and over in their nightmares, stories of women experiencing an all-consuming feeling of dread or being afraid to leave their homes due to the overwhelming feeling of shame and fear.

We know that gender based violence is all too common within Australia, with one in five women having experienced sexual violence since the age of 15. It manifests in various forms, whether it be physical, emotional, economic or social. But, with AI available to everyone with the click of a button, the scope, severity and type of violence against women is only going to grow if we do not act. If our country wants to address the culture of gendered violence, then it needs to be made clear that women and their bodies are not property to be non-consensually manipulated, distributed and ridiculed.

Now, of course, this bill builds on the work our government is doing to improve our laws around consent and to ensure that we have the proper education and training for young people about what consent means. The Albanese government is providing $76 million to states and territories and the non-government school sector to deliver age-appropriate, expert-developed respectful relationships programs, and this funding goes hand in hand with legislation like this. This program ensures that we are facilitating healthy interactions between girls and boys. This action is also complemented by the Stop it at the Start campaign, which has been designed to combat misogynistic attitudes and influences online.

But I want to go to a very important point. It was raised by Senator Cash in her contribution. It is this notion and idea from those opposite, which has been said many times—I think the shadow minister said it today, and we've seen other shadow ministers say it in the media—that we don't need this bill. It is clear that anyone who would say that today didn't listen to the victims who were at the hearing and hasn't been listening to the women who have been coming forward and talking about this for years. Those women said at the hearing that they spoke to the previous government and weren't listened to. Now they are being listened to, and this type of material is being criminalised.

Failing to amend the criminal framework to address new types of technology poses a real risk to our future. It also just ignores what the victims have been telling us. Those at the deepfake public hearing warned against a future without a bill such as this. Professor Asher Flynn said:

There is a real risk, without taking serious steps like criminalising deepfake abuse, that we will see backwards efforts in our attempts to address and prevent sexual harassment and abuse.

That's what I want those opposite to consider before they walk in here and talk through Senator Cash's talking points on this bill, because the evidence we received at the hearing does not support the position that Senator Cash put forward. This deepfake bill, we know, will ensure that the laws send a very clear message that this type of behaviour is not tolerated by our country, by our parliament, and that we recognise the harm of deepfake abuse within our country. That's what this bill does.

This bill also recognises that we want to do as much as we can to create safe online environments for women and young people. It recognises that, as a government, we reject the idea that it is acceptable to dehumanise and degrade others by using technology. Saying that this bill is not needed is insulting to the victims who came forward, and it doesn't recognise the very real propositions put forward by the CDPP about the lack of legal protection for women when it comes to crimes like this. It's important that we set the standard; it's important that we give victims the legal right to pursue justice; and it's important that we don't mislead the Senate by suggesting that this type of bill or this type of law is not needed. I want to be very clear about that.

The Albanese Labor government is committed to making sure that our communities are safe for the current and the next generations of women—and that is what this bill does. That's what protecting women looks like: this bill. It looks like delivering respectful relationships and consent programs across the country, to ensure that we're fostering healthy in-person and online relationships. It looks like criminalising deepfake abuse, to send a clear message that dehumanising and degrading behaviour is unacceptable in our nation. It looks like making sure that men and women feel protected in their schools, their offices, their homes and in online spaces, particularly if they experience deepfake sexual violence. This is what it looks like to protect women and girls. It's taking action like this and bringing forward legislation that will protect them.

I know that women are watching what this parliament does on this bill, because they are very fearful, if they have been the victims of this abuse, that they won't have legal justice. They understand that the problem is being propelled by the technology that is out there. So this is a crucial time to take action on this type of crime. The future of technology has a lot of unknowns—I think we all recognise that. But what we do know is that the use of deepfake and AI for the creation and sharing of sexual material is only going to become worse if we don't act now.

I really commend the Attorney-General for bringing forward this important piece of legislation. I also want to thank all of those who spoke at the public hearing on this bill. It really was clear from the experts, victims and members of the public who spoke that strengthening the law against deepfake sexually explicit material is an excellent step towards addressing gendered violence within our country. Those experts and victims and members of the public who spoke out during that hearing, and all of the people who've called for this type of legislation, are listening to this debate, and what they don't want to hear is some sort of word salad from those opposite about how this bill needs to be watered down. What they want to see is a parliament and a government who are willing to take action and step up and protect women and girls from what is incredibly corrosive behaviour and to set a community standard about what this parliament thinks is acceptable. That's what we're talking about today.

I really hope that the Senate sees through the attempts from those opposite to weaken this legislation, and I really hope the Senate sees that this is an urgent piece of legislation—that there are women and girls out there who need this protection right now, who will get it when we pass this legislation. We heard evidence that the legislation we have right now is not enough. It doesn't go far enough. It doesn't cover the field. That is quite clearly evident from the evidence that we received. I know that those women who are listening want to see this action taken because they understand the real threat that AI technology presents to them.

Finally, I want to say thank you not only to the Attorney-General but also to members of our government for bringing this legislation forward as quickly as possible. The evidence received at the committee inquiry showed us how urgent it is, how important it is and the difference that it will make to protecting women and girls, which this government—members on this side of the Senate—are committed to.

7:40 pm

Paul Scarr (Queensland, Liberal Party, Shadow Assistant Minister for Multicultural Engagement)

I rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. Most of the points have been covered in the debate to date by other speakers. I, too, sat on the inquiry and heard the absolutely appalling evidence. We heard about the explosion in this sort of activity, which is deeply, deeply concerning. I must say, one point which perhaps hasn't been touched upon thus far is the totally underwhelming response we got from some of the big tech companies. Senator Waters touched upon it. But I must say that I found their submissions completely underwhelming and, from my perspective, they demonstrated a lack of focus in combatting the issue. That is very, very concerning, and I think that there should be some reflection on their part.

In relation to the process, I've made the point in this place on a number of occasions about the importance of making sure that we hear from key stakeholders in a timely fashion. It was disappointing in this regard that the process was abbreviated such that the Law Council of Australia's submission wasn't received until after we had the public hearing. I would have thought, with legislation dealing with criminal offences of this nature, with terms of imprisonment attached, it would be important that we hear from the peak body representing the legal profession in this country and consider their perspectives at a committee hearing, as opposed to receiving the submission afterwards. I'll put on the record what the Law Council of Australia submitted:

Because of the limited time available between the referral of this inquiry for review by this Committee and the deadline for public submissions, our Constituent Bodies and expert advisory committees have not been able to consider completely all the issues raised by this Bill. As a general point, we reiterate the importance of appropriate consultation timelines to enable informed scrutiny of what are important changes to Australia's criminal law frameworks. Additionally, as we have explained below, the explanatory materials have been of limited assistance in understanding the rationale for certain unusual drafting choices.

And that's from the submission of the Law Council of Australia.

I note that this isn't the first time in this parliament that this committee has received submissions from the Law Council of Australia decrying the fact that they were provided insufficient time to provide the benefit of their expertise. We've heard similar comments in relation to the Administrative Review Tribunal Bill 2023, the Migration Amendment (Removal and Other Measures) Bill 2024 and the Identity Verification Services Bill 2023. So I would hope that the government can manage its legislative program in a somewhat more efficient way to ensure that important stakeholders like the Law Council of Australia actually have the opportunity to provide their expert perspective.

The second point I would like to make is in defence of my colleague Senator Cash. I listened to Senator Cash's speech. I didn't interpret anything Senator Cash said as 'watering down' the bill. Senator Cash raised a number of technical issues in relation to the bill, but certainly, as Senator Cash indicated, we're all on the same page in protecting, in particular, women and girls from this sort of behaviour. I think we're all in the same space, but there were some technical issues raised, and it's our obligation to ventilate these issues in this, the house of review. Again, I just want to quote from the Law Council of Australia's submission:

While we agree that the existing aggravated offences in the Criminal Code may not apply to deepfake material, for the reasons explained below we consider that this Committee should keep in mind the advantages of the framing of the existing primary offence in section 474.17, noting that this provision is likely to be engaged for such material.

In our view, there are certain features of the framing of the existing … offence in section 474.17 that promote certainty.

That's the Law Council of Australia saying that the existing offence, in certain aspects as it's currently framed, promotes certainty. So it is a concern if we are moving away from the features of the existing framing of the offence. The Law Council says:

These features should be considered by the Committee in considering improvements to the new offences contained in the Bill.

That's from the Law Council of Australia, so some of the characterisation of Senator Cash's position, and of the coalition's position, in this regard is somewhat unfair and perhaps should be reflected upon.

The last point I wish to touch on is in relation to the aggravated offences. One of the particular aggravated offences that hasn't been touched upon in this debate yet is the aggravated offence that applies where certain civil penalty orders have been made. The first proposed aggravated offence would be established when a person commits the underlying offence and has three or more civil penalty orders made against them in relation to contraventions of the Online Safety Act. We were informed by the eSafety Commissioner that there's hardly any prospect of anyone receiving three civil penalty orders. The concern with respect to this particular aggravated offence is that, because there's little prospect of someone receiving three or more civil penalty orders, there's little or no prospect of the aggravated offence itself being triggered. That is of concern.

We sat in the inquiry and heard that put consistently by all of the witnesses who spoke to the issue before the committee. The response to that from the Attorney-General's Department was, 'Well, it means it's consistent with the other provisions elsewhere in the act.' I don't think that answers the question. If we're being told that there's an aggravated offence being proposed, and the evidence being provided by the eSafety Commissioner and others is that the specific act which triggers the application of the aggravated offence is never going to be triggered because people don't get, in particular, three or more civil penalty orders, then one must reflect upon whether or not we should be proceeding down that path.

The way that's been addressed in the majority report from the committee is to suggest that a statutory review be conducted into that particular aggravated offence. I don't think that we need a statutory review in relation to that particular aggravated offence, because the evidence was quite clear—in all likelihood it's never going to be triggered, because that's just not the way the system works. I suspect we'll get to the end of the statutory review period and nothing in that respect will have changed.

Rather, the concerns raised by the Law Council of Australia, namely that in some respects the existing offence provisions better promote certainty than the offences proposed to be inserted, present a pretty strong argument, from my perspective, that there should be a statutory review of the operation of the bill in its entirety, and related matters, after two years of operation. The preparation of the terms of reference for that review should be informed by, amongst other things, the issues raised during the course of the inquiry we had. That constituted recommendation 3 of the additional comments I provided.

Having said that, this is absolutely a very important piece of legislation. This Senate must act to protect, in particular, the women and children in our community who are the subject of this vile phenomenon.

7:50 pm

David Pocock (ACT, Independent)

I rise to speak in support of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which represents a step in the right direction in regulating the use of generative AI in Australia. Deepfakes and generative AI are reshaping the nature of communication. The technology has already completely undermined the faith that Australians have in text, image, video and voice recordings.

Fakes are not new, of course. There's a long history of images and videos being created to mislead and deceive. Photos purporting to capture the Loch Ness monster were created in 1934. Airbrushed photos of Mussolini and Stalin were distributed to mislead citizens in the 1920s and 1930s, and the Roswell alien autopsy video sparked conspiracy theories in 1995. But clearly the ease with which these images, and now videos, can be altered or just plain manufactured using generative AI is completely unprecedented, and it seems to speed up month to month. Advancements in the technology have made it infinitely easier to create realistic fake content. What used to demand significant time, skill and resources can now be done instantly with minimal effort or expertise—worse, the results are often indistinguishable to the naked eye.

The use of generative AI to produce deepfake pornography is a particularly heinous use of the technology, but even this technology is not new. It's been five years since the release of the DeepNude app, which allowed users to generate nude images of women by inputting clothed photographs. Since then, we've seen an explosion in the creation and distribution of deepfake pornography. At times, it has been used to target celebrities like Chris Evans and Emma Watson. More recently, it is increasingly used to undermine the credibility of public figures and politicians, usually women.

In 2020, an Indian politician was targeted with a deepfake pornographic video that was circulated to discredit her election. In 2021, an American congresswoman was targeted in an attempt to undermine her credibility, and, in 2022, a woman running for the Senate of the Philippines had a deepfake porn video shared across social media in a bid to diminish her standing. Even more worrying, deepfake pornography is now being created and distributed by children and young people. Just a few months ago, we heard media reporting of the creation of deepfake pornography by school students here in Australia.

It's been more than half a decade since the release of the DeepNude app, so it's well beyond time for the distribution of deepfake pornography to be criminalised. The urgency of this is revealed in the statistics on deepfake pornography. Of online deepfake videos, 96 per cent are pornographic. Of deepfake pornography, 99 per cent targets women, and there was a 2,000 per cent rise, in 2023, in the number of websites generating non-consensual sexual material using AI.

I welcome the provisions in this bill and, in particular, the creation of new offences around non-consensual deepfake sexual material. A person's identity is theirs alone and must be safeguarded. Non-consensual sharing of sexual content is a cruel and damaging abuse. It shatters reputations, devastates relationships and inflicts deep emotional wounds, often leaving victims with anxiety, depression and a haunting sense of violation. The psychological and social toll of deepfake pornography is immense, and the most effective way to mitigate it is to criminalise the behaviour.

But, while deepfake sexual material is a particularly heinous use of emerging AI technology, it is far from the only one. Generative AI is being used to create deepfakes to mislead and deceive people for a variety of nefarious reasons, and this bill does not go to many of these uses. Deepfakes are increasingly used in scams which are reported to have cost Australians some $8 million last year. They're also used to impersonate experts to provide misleading advice or information to consumers.

But, to me, one of the most worrying uses of deepfakes is in the context of our democracy. The use of deepfakes to mislead or deceive voters is exploding in democracies across the world. In the 2022 US mid-terms, AI generated profiles on social media platforms used deepfakes to spread misinformation, amplify divisive content and create the illusion of grassroots support or opposition for certain candidates or policies. A particularly worrying example was a deepfake robocall purporting to be from President Biden to thousands of New Hampshire residents discouraging them from voting. In the Indian election held earlier this year, deepfakes were used to impersonate candidates, celebrities and even dead politicians, serving up mis- and disinformation to millions of Indians. The election was described by academics at the John F Kennedy School of Government as being awash with deepfakes.

In the last month, we've seen deepfakes of Australian politicians emerging from both sides of politics. The Queensland Premier has fallen victim to a deepfake created by the Liberal Party, but Labor does not have clean hands after posting an AI generated video of the opposition leader. These videos could be described as awful but lawful, but we can surely do better than that.

Deepfakes used without consent threaten our democracy and should be banned in the context of elections. Australians deserve to know that the information they receive from parties and politicians is genuine. The coming elections in Queensland and the ACT and the upcoming federal election will be impacted by deepfakes. So, while this bill is a positive step forward, there is much to be done. Unfortunately, the lag between technological change and regulatory response threatens to undermine our democracy. But that's no excuse. The government must be proactive in catching up to change and getting ahead of what we know is coming. We're being warned by experts, academics and even the AEC. Other jurisdictions—the US, the UK, the EU and even China—are already out in front in combatting and regulating the use of non-consensual deepfakes.

I've circulated a second reading amendment in my name that calls on the government to act swiftly and decisively to regulate and protect Australian voters and Australians generally from the danger of deepfakes. In moving the amendment, I recognise the work done on this topic in the lower house by the member for Warringah, Zali Steggall, and I join her in calling on the government to act in providing Australians protection against the potential harms of deepfakes in elections.

I move the amendment.

At the end of the motion, add ", the Senate:

(a) notes that:

(i) deepfakes present inherent risks including the spread of misinformation, threats to privacy and security, altering election outcomes and misleading and deceiving voters,

(ii) Australians have lost over $8 million to scams linked to online investment trading platforms, often using deepfakes,

(iii) the Australian Electoral Commissioner has warned that Australia is not ready for its first artificial intelligence election that is coming within the next 12 months,

(iv) deepfakes of Australian politicians have already been used, and

(v) the Commonwealth Electoral Amendment (Voter Protections in Political Advertising) Bill 2023, introduced by the Member for Warringah on 13 November 2023, would ban the use of deepfakes in political advertising; and

(b) calls on the Government to:

(i) ban the use and transmission of deepfakes in political advertising prior to the next federal election, and

(ii) further regulate the use and transmission of deepfakes to address the risks they present".

This is critical. We must protect our democracy, and I urge the government to move before the next federal election.

7:57 pm

Helen Polley (Tasmania, Australian Labor Party)

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 is a fundamentally important bill tackling the exploitation and the targeting of mostly women within our society. The bill strengthens existing Commonwealth criminal offences and creates new offences to respond to the online harm caused by deepfake pornographic material and other artificially generated, sexually explicit material.

Digitally created or altered sexually explicit material is more and more common in our digital world. This bill deals with such content when it is shared without consent. This behaviour is a damaging and deeply distressing form of abuse unfortunately suffered by too many Australians. It can only be described as insidious behaviour which is degrading, humiliating and dehumanising for victims, especially for women and girls. As a mother and a grandmother, I'm appalled at those who try to do such harm to another human being. It is hateful behaviour which must be stamped out, and the Albanese government is fully committed to stamping it out.

The creation of these images has significant and lasting effects on a victim's mental health and forever changes their lives. Young people who are the targets of deepfake images being created are robbed of their childhood. The perpetrators of these disgusting and disgraceful acts do not have the right to steal the innocence of girls and young women, yet it is unfortunately happening too often—several examples of which I will discuss shortly.

Stamping out child—

Debate interrupted.