House debates

Tuesday, 2 July 2024

Bills

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024; Second Reading

5:43 pm

Tracey Roberts (Pearce, Australian Labor Party)

I rise to support the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. Every time you read or hear the news, there is another story about artificial intelligence, deepfake abuse, and the ongoing debate on how to regulate these rapidly advancing technologies. Encountering the many forms of sexual violence online driven by AI-generated deepfake technology is deeply unsettling. Women represent 99 per cent of those targeted by deepfake pornography, which makes up 98 per cent of all deepfake videos online.

Urgent action is absolutely needed, and effective legislation is a critical starting point. Thousands of women and girls have already experienced this form of gender-based violence, and it is exacerbated by the increasing accessibility and sophistication of the technology. In 2023 alone, the volume of deepfake abuse videos surpassed the total of all previous years combined, with the number of non-consensual videos doubling annually. Those non-consensual images are created and shared with the goal of humiliating and degrading the women and girls in them. The fallout is immense, and it goes beyond personal harm.

Deepfakes, for those unfamiliar, are digitally manipulated images, videos or audio recordings that use artificial intelligence to create realistic but false depictions of individuals. These creations can depict people doing or saying things they never did, often with stunning realism that blurs the lines between reality and fiction. The legislation before us aims to address a specific and troubling aspect of this technology: the non-consensual sharing of sexually explicit deepfake material. This bill seeks to criminalise such acts, ensuring that individuals who distribute sexually explicit content without the consent of those depicted face significant legal consequences. This includes deepfakes generated using AI or other technologies, which have increasingly been used to exploit and harm individuals, particularly women and girls.

The new bill aims to criminalise the sharing of non-consensual deepfake sexually explicit material online. These reforms ensure that individuals who distribute sexually explicit content without the consent of those depicted, including deepfakes created using AI or other technologies, will face significant criminal penalties. The bill targets individuals who share and create sexualised deepfake images without consent. The application of Commonwealth criminal law to individuals who meet the minimum age of criminal responsibility is not a new concept. It is worth noting that the new offences are specifically targeted at sexual material depicting, or appearing to depict, individuals over 18 years of age. Existing criminal offences already cover child abuse material, which is comprehensively addressed in a separate division of the Criminal Code. This includes detailed offences and severe penalties for the sharing of child abuse material online, including content generated by artificial intelligence.

Whether the offender personally knows the victim or has simply found and used their image on the internet is now irrelevant. The new offences cover instances where an individual shares a sexually explicit deepfake of a real person either knowing that the person has not consented or being reckless as to whether they have consented. This applies regardless of who the real person is and whether they are personally known to the offender. Prosecuting offences like this is increasingly complex in the digital age. However, the Australian Federal Police possesses unique capabilities and extensive experience in investigating these types of crimes. The AFP was also consulted in the development of this bill, ensuring that its input and the challenges faced by law enforcement were taken into consideration.

It is also worth noting that the Commonwealth power to legislate does not extend to the mere creation of sexually explicit adult deepfakes. The offence applies to the sharing of non-consensual deepfake sexually explicit material. If the person who shares the material also created the deepfake, there is an aggravated offence with a higher penalty of up to seven years' imprisonment.

There are existing Commonwealth offences that prohibit menacing or harassing another person online. These could apply, for example, if someone creates an image of their partner and then threatens to post it online without consent. Additionally, state and territory laws largely cover the creation of such deepfakes. This approach is consistent with existing arrangements for the criminalisation of child abuse material. State and territory laws address the creation of child abuse material, while strong Commonwealth offences criminalise the sharing of such material online.

The existing Commonwealth criminal offences do not clearly apply to sexually explicit deepfake material. Some state and territory jurisdictions have legislation to address the non-consensual sharing of intimate images online. This bill will complement state and territory criminal offences and ensure a more consistent national approach. The bill stipulates severe consequences for those found guilty of transmitting such material without consent. Offenders can face up to six years' imprisonment, with aggravated cases potentially facing up to seven years behind bars. These penalties are designed to deter and punish this harmful behaviour.

To ensure the legislation targets appropriate cases, specific exceptions have been incorporated. For example, in reporting to authorities, passing material to the police to report a crime, or providing it to a court to assist in a prosecution, is permissible. For medical and professional purposes, doctors can share images to obtain a second opinion from colleagues without falling under the law's purview. Images of models taken with explicit consent for advertising or publication purposes are exempt from the legislation, and images used in a satirical context are also considered exempt under these laws.

It's important to note that the legislation is not intended to encompass the sharing of legitimate adult pornography. Rather, its focus is on prohibiting the dissemination of deepfake sexually explicit images where the perpetrator either knows that consent has not been granted or is reckless as to whether consent was given. This approach aims to address the harmful impacts of non-consensual deepfake material while safeguarding legitimate uses of digital content. Some may wonder what happens when consent is uncertain. The law addresses this by focusing on cases where individuals knowingly share deepfake sexually explicit images without consent or recklessly disregard whether consent was given. This includes situations where consent is not actively considered.

There's a deep and profound impact from the non-consensual sharing of digitally altered and sexually explicit material, particularly on women and girls. This behaviour not only causes immediate harm but can also have lasting emotional and psychological effects. It's a form of abuse that degrades, humiliates and dehumanises victims, and which perpetuates harmful gender stereotypes, contributing to gender-based violence. Unfortunately, instances where such material is created and shared out of revenge or malice, often referred to as 'revenge porn' or image-based abuse, are widespread. These are actions that exploit vulnerabilities and which can severely damage a person's reputation and sense of security.

Victims of such abuse often face severe anxiety, depression and trauma. The non-consensual distribution of sexually explicit deepfake material can destroy personal relationships, careers and mental health. The fear of these images surfacing can lead to long-term psychological distress and a pervasive sense of insecurity. Furthermore, this behaviour perpetuates a culture of misogyny and violence against women. By creating and sharing deepfake sexually explicit images, perpetrators are engaging in an act of control and humiliation, reinforcing toxic power dynamics that have long oppressed women. This form of digital violence is an extension of the systemic gender-based violence that continues to plague our society.

Clearly, artificial intelligence technology also has the potential to exacerbate election-related challenges, including the spread of disinformation and cyber vulnerabilities in election systems. The Albanese Labor government is committed to enhancing the transparency and accountability of Australia's electoral system. Actions are being taken by the Minister for Communications and by the Australian Electoral Commission to combat misinformation and disinformation, especially during elections. Additionally, the government has asked the Joint Standing Committee on Electoral Matters to consider ways to prevent or limit the influence of inaccurate or false information on electoral outcomes. This includes examining the impact of artificial intelligence, foreign interference, social media, misinformation and disinformation.

The government also has work underway to combat online scams and fraud, including establishing the National Anti-Scam Centre to coordinate and share intelligence across law enforcement, private and public sectors. It has taken down over 5,000 investment scam websites through ASIC's scam disruption activities. The government has also committed to introduce legislation in 2024 and to fund the administration and enforcement of mandatory industry codes for banks, telecommunications providers and digital platforms.

As AI technology advances and becomes more accessible, the potential for misuse in creating and distributing such material grows. This necessitates urgent legislative action, like the proposed bill, to establish serious penalties. These measures are crucial for protecting vulnerable individuals from further online harm and ensuring that those who engage in such abusive behaviour face appropriate consequences under the law. Looking ahead, the Albanese Labor government remains committed to safeguarding Australians from online threats, including deepfake misuse; conducting ongoing reviews, such as the statutory review of the Online Safety Act 2021; seeking to enhance protections; and ensuring our online safety laws remain effective in combating emerging harms.

In conclusion, this legislation represents a critical step in protecting individuals from the insidious impact of deepfake technology. By imposing stringent penalties and clarifying exceptions, we aim to uphold dignity, privacy and safety in our digital age. This bill sends a clear message that our society will not tolerate the degradation and abuse of individuals through technological means. We stand against the perpetration of misogyny and violence against women, and we commit to fostering a safer, more respectful digital environment for all.
