Senate debates

Thursday, 10 October 2024

Committees

Adopting Artificial Intelligence (AI) Select Committee; Report

3:37 pm

Lisa Darmanin (Victoria, Australian Labor Party)

The interim report of the Select Committee on Adopting Artificial Intelligence is being tabled today. It focuses on the potential for AI—generative AI in particular—to influence electoral processes and undermine public trust in our democracy. In an era when technology is advancing rapidly, this is a matter of significant concern. The integrity of our electoral system is central to the health of our democracy, and any threats posed by AI cannot be taken lightly. I'd like to thank all of the parties that provided submissions, and the witnesses who appeared throughout this inquiry, for the insights they have offered for the benefit of the committee.

The final report, which we will be tabling in November, will address the other matters in the terms of reference, which include the risks and harms arising from the adoption of AI technologies, including bias, discrimination and error; opportunities to adopt AI in ways that benefit citizens, including for economic growth; and the environmental impacts of AI technologies. But first I want to talk a little bit about the risks and why we are inquiring into this matter. Dr Cathy Foley, Australia's Chief Scientist, told the inquiry that political disinformation has been around since the introduction of the printing press. Today, however, AI-driven deepfakes have become a game changer because they are easy to create and even easier to disseminate.

Just in 2024, we've seen how deepfakes could pose a significant threat to both our democracy and our individual safety. Before the New Hampshire presidential primary, an AI voice clone of US President Joe Biden called voters and urged them not to vote. Prior to the election in Indonesia a deepfake was circulated of deceased former president Suharto endorsing his former political party. In Pakistan jailed former prime minister Imran Khan claimed election victory in a video created using AI.

The committee has looked very carefully at the experiences here and abroad in addressing this existential risk. The Digital Services Act in the European Union requires synthetic audio, image, video or text content to be marked, and requires systems which generate deepfakes to disclose that the content has been artificially generated or manipulated. And here in Australia the AEC told the inquiry that they are seeking to address disinformation and misinformation, including through a national digital literacy 'stop and consider' campaign and by engaging with social media platforms to actively debunk disinformation. Locally, we've heard growing concerns about the misuse of AI to manipulate images or voices, particularly in the arts sector. Voice actors represented by the Media, Entertainment and Arts Alliance shared examples of having their voices replicated without permission, damaging their livelihoods and the integrity of creative industries.

It's clear that we need robust laws and oversight to ensure AI developers are transparent and accountable for their technology, but we aren't suggesting that we rush head-first into creating new laws that might have unintended consequences. Any debates about electoral and other AI-related laws are sensitive, and they need to be approached cautiously. Reform must be carefully balanced to protect freedom of speech and political expression while preventing the spread of damaging disinformation.

Of course, there are also benefits to artificial intelligence, which the committee has heard about. Generative AI can also provide interesting opportunities for strengthening our democratic processes. For example, in their submission to the inquiry, the ANU Tech Policy Design Centre raised the potential benefits for democracy associated with AI, including identifying patterns in government expenditure and monitoring for corruption.

So what is the government doing about this right now? While this report makes a number of interim recommendations, we are not sitting on our hands. The government has acted urgently by making the creation or sharing of deepfake sexual material a crime. The Minister for Industry, Ed Husic, also recently announced that we will legislate mandatory guardrails for the use of high-risk AI, including where there are serious impacts on things like health, safety, human rights or society at large. These mandatory guardrails will mean that developers and users must have appropriate processes in place, including a requirement for real human oversight and requirements for transparency around how AI is used. This is appropriate because Australians want to know that the government is responding to our rapidly changing world. While we want to continue to reap the benefits of rapid technological advancement, it is absolutely critical that we all have confidence that these technologies are being developed and used responsibly, with robust safeguards in place to protect our democracy, our privacy and our safety. It is clear that the platforms must be part of the solutions, with strong transparency and accountability measures, if we are to address the risks of AI effectively.

Dealing with the risks and opportunities of AI will be an ongoing process for policymakers to work out what has been effective elsewhere and what needs to be tailored to our Australian context. For the sake of our democracy and health of our political debate, we need to get this right. This government is working on several fronts to grapple with these complicated issues.