House debates

Tuesday, 2 July 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

6:37 pm

Dan Repacholi (Hunter, Australian Labor Party)

I rise to speak on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. One of the biggest changes in our world over the past couple of years has been the rise and rise of artificial intelligence technology. Every second day now there is another news story involving AI. It is being used in all sorts of ways, mostly for good, to help people learn new things and to help businesses increase their productivity. It is generally being used to make the world a better place. But, as is sadly common with new, powerful technologies, AI is also being misused. One of the most disturbing examples of this is the use of AI to create non-consensual deepfake sexual material.

A deepfake is a fake image, video or voice clip depicting a real person. Deepfakes are created by using AI to manipulate real data associated with the depicted person, which could be photos or videos of them from social media. Deepfake content can be highly convincing and hard to distinguish from real photos or videos of a targeted person. This is especially true as AI tools become more advanced and as people upload more and more content of themselves online as well. The more data that is out there and available for a given person, the more realistic deepfakes of them can become.

Most deepfake content isn't sexual, and sometimes manipulated media can be useful. I suspect many singers around the world would probably not have a career without autotune. Deepfakes can also be funny when they are created in good faith and are clearly meant to be fake, such as when a constituent in my electorate sent a deepfake image of me as a kind of Transformer merged with a helicopter. That was a nice surprise, but unfortunately, alongside the harmless deepfakes, a serious issue has been brewing.

Deepfakes can be very dangerous. The danger of a deepfake lies in the fact that they can depict people doing or saying things that they did not actually do or say. When sexually explicit deepfakes are created of a person without their consent, this is a highly distressing experience for the person depicted and extremely harmful to them and their loved ones. It is a damaging form of abuse and it must stop.

Fake and manipulated content has been a feature of our society for many decades. Famously, in magazines and newspapers, photoshopping has been used to make all kinds of alterations to the way people look—be that slimming them down, whitening their teeth or removing their wrinkles. But deepfakes are opening the door to a whole different league of manipulation.

In terms of the quality and realism of their output, and how easy they are to generate, AI-powered deepfakes are making Photoshop look like ancient technology. Soon it may be possible for an everyday person to generate huge amounts of almost any content they like, at very high quality, at the click of a button. It is deeply disturbing that some people in our community will use this technology to degrade and dehumanise others by generating non-consensual sexual material based on their likeness.

Sadly, deepfake technology is already being used in this way. When deepfake sexual material is shared or posted online without the consent of the person depicted, it is a serious breach of the person's privacy and their sense of security. The effects can be long lasting. Once images are out there, victims may have to live with the fear that the reputational damage they're experiencing will follow them throughout their life. The harm also extends to the friends and family of the victim, who share their distress.

Recently we have heard shocking news stories about non-consensual deepfake sexual material. Some of these stories involve material being distributed at schools and at workplaces. The stories involving school students are particularly disturbing. Other existing laws already appropriately penalise the use of AI technology to generate sexual material depicting children. This bill applies to deepfake depictions of adults. As young people are early adopters of AI technology, it is crucial that we clearly set out expectations in law. This bill ensures that it will be illegal to generate non-consensual sexual material of any person regardless of their age.

Given the new risks that AI technology is opening up and the harms that are already being brought upon victims of deepfake sexual material, it is crucial that we, as a government, match powerful advancements in technology with advancements in the law, and that is what this bill seeks to do. It addresses an important area of risk and harm that the rise of AI has opened up.

This bill will amend the Criminal Code to ban the sharing of deepfake sexually explicit material of a person without their consent. This offence will carry a serious criminal penalty: a maximum of six years in prison. If an offender is also responsible for the creation of the non-consensual sexually explicit material, the offence will be considered aggravated, which increases the maximum penalty to seven years' imprisonment. The law as it already stands criminalises the non-consensual sharing of private sexual material; this bill amends the law to ensure that it equally penalises deepfake sexual material of a person. These important reforms make clear that those who share and create sexually explicit material without consent, including by using new technology like AI, will be subject to serious criminal penalties—and so they should be. What makes this bill especially needed and important is that we also know that in the vast majority of cases the victims of deepfake sexual material are women and girls.

The government is deeply concerned about the level of gendered violence in this country. In May, the Prime Minister met with state leaders from across Australia to discuss how the gendered violence crisis in our nation could best be addressed. After that meeting, the government announced almost $1 billion of funding for the Leaving Violence Program, which will provide those escaping violence with financial support, safety assessments and referrals to support pathways. We also announced a trial of age verification technology to limit the ability of children to access sexually harmful content online. Non-consensual deepfake sexual material perpetuates harmful gender stereotypes and drives gender-based violence. That's why our next step to address the gendered violence crisis is to crack down on harmful deepfakes.

This bill builds on other initiatives of this government that work towards ending gendered violence in this country. We made a commitment to tackling online harms, particularly online harms that affect women and young girls, and this bill is part of our effort to honour that commitment fully. The bill was developed in consultation with the Australian Federal Police, the Commonwealth Director of Public Prosecutions and other relevant Commonwealth agencies. It will work hand in hand with relevant state-based laws to empower law enforcement and prosecutors to properly go after the people who perpetrate this abuse by creating and sharing non-consensual deepfake sexual material.

We have heard the calls of victims and members of the public who have felt strongly that the law must be strengthened to address non-consensual deepfake sexual material. We are listening and we are also acting. I support this bill because it penalises the sharing of non-consensual deepfake sexual material. It will reduce this terrible kind of abuse, and I support this bill because it helps our continuing fight against gendered violence in our country. I commend this bill to the House.