Senate debates

Thursday, 10 October 2024

Committees

Adopting Artificial Intelligence (AI) Select Committee; Report

3:36 pm

Anne Urquhart (Tasmania, Australian Labor Party)

I present the interim report of the Select Committee on Adopting Artificial Intelligence, and I move:

That the Senate take note of the report.

I seek leave to continue my remarks later.

Leave granted.

3:37 pm

Lisa Darmanin (Victoria, Australian Labor Party)

The interim report of the Select Committee on Adopting Artificial Intelligence is being tabled today. It focuses on the potential for AI—generative AI in particular—to influence electoral processes and undermine public trust in our democracy. In an era when technology is advancing rapidly, this is a matter of significant concern. The integrity of our electoral system is central to the health of our democracy, and any threats posed by AI cannot be taken lightly. I'd like to thank all of the parties that have provided submissions and witnesses who appeared throughout this inquiry and provided their insights for the benefit of the committee.

The final report, which we will be tabling in November, will address other matters in the terms of reference, which include the risks and harms arising from the adoption of AI technologies, including bias, discrimination and error; opportunities to adopt AI in ways that benefit citizens, including for economic growth; and the environmental impacts of AI technologies. But first I want to talk a little bit about the risks and why we are inquiring into this matter. Dr Cathy Foley, Australia's Chief Scientist, told the inquiry that political disinformation has been around since the introduction of the printing press. Today, however, AI-driven deepfakes have become a game changer because they are easy to create and even easier to disseminate.

Just in 2024, we've seen how deepfakes could pose a significant threat to both our democracy and our individual safety. Before the New Hampshire presidential primary, an AI voice clone of US President Joe Biden called voters and urged them not to vote. Prior to the election in Indonesia, a deepfake was circulated of deceased former president Suharto endorsing his former political party. In Pakistan, jailed former prime minister Imran Khan claimed election victory in a video created using AI.

The committee has looked very carefully at the experiences here and abroad in addressing this existential risk. The Digital Services Act in the European Union requires synthetic audio, image, video or text content to be marked, and requires systems which generate deepfakes to disclose that the content has been artificially generated or manipulated. Here in Australia, the AEC told the inquiry that it is seeking to address disinformation and misinformation, including through a national digital literacy 'stop and consider' campaign and by engaging with social media platforms to actively debunk disinformation. Locally, we've heard growing concerns about the misuse of AI to manipulate images or voices, particularly in the arts sector. Voice actors represented by the Media, Entertainment and Arts Alliance shared instances where they have had their voices replicated without permission, damaging their livelihoods and the integrity of creative industries.

It's clear that we need robust laws and oversight to ensure AI developers are transparent and accountable for their technology, but we aren't suggesting that we rush head-first into creating new laws that might have unintended consequences. Any debates about electoral and other AI-related laws are sensitive, and they need to be approached cautiously. Reform must be carefully balanced to protect freedom of speech and political expression while preventing the spread of damaging disinformation.

Of course, there are also benefits to artificial intelligence, which the committee has heard about. Generative AI can also provide interesting opportunities for strengthening our democratic processes. For example, in their submission to the inquiry, the ANU Tech Policy Design Centre raised the potential benefits for democracy associated with AI, including identifying patterns in government expenditure and monitoring for corruption.

So what is the government doing about this right now? While this report makes a number of interim recommendations, we are not sitting on our hands. The government has acted urgently by making it a crime to create or share deepfake sexual material. The minister for industry, Ed Husic, also recently announced that we will legislate mandatory guardrails for the use of high-risk AI, including where there are serious impacts on things like health, safety, human rights or society at large. These mandatory guardrails will mean that developers and users have to have appropriate processes in place, including a requirement for real human oversight and requirements for transparency around how AI is used. This is appropriate because Australians want to know that the government is responding to our rapidly changing world. While we want to continue to reap the benefits of rapid technological advancement, it is absolutely critical that we all have confidence that these technologies are being developed and used responsibly, with robust safeguards in place to protect our democracy, our privacy and our safety. It is clear that the platforms must be part of the solution, with strong transparency and accountability measures, if we are to address the risks of AI effectively.

Dealing with the risks and opportunities of AI will be an ongoing process, and policymakers will need to work out what has been effective elsewhere and what needs to be tailored to our Australian context. For the sake of our democracy and the health of our political debate, we need to get this right. This government is working on several fronts to grapple with these complicated issues.

3:43 pm

James McGrath (Queensland, Liberal National Party, Shadow Assistant Minister to the Leader of the Opposition)

The coalition members of the Select Committee on Adopting Artificial Intelligence hold that the regulation of AI poses one of the greatest public policy challenges of the 21st century. Nevertheless, the coalition members of the committee hold that any AI policy framework ought to safeguard Australia's national security, cybersecurity and democratic institutions without undermining the potential opportunities that AI presents for job creation and productivity growth. As the report of the Select Committee on Adopting AI is an interim report and solely considers the impact of AI on democracy, the additional comments of the coalition members of the committee focus only on this chapter, and we will hold off on a broader response until the committee concludes its final report.

The coalition holds that any electoral changes to improve Australia's democracy ought to be assessed against four core principles: firstly, fair, open and transparent elections; secondly, equal treatment of political participants; thirdly, freedom of political communication and participation without fear of retribution; and, fourthly, recognising freedom of thought, belief, association and speech as fundamental to free elections. Australia's success as a democracy is reliant on the effective operation of the Australian Electoral Commission and the federal government more broadly to satisfy and uphold these four principles. Ensuring that Australians have continued faith in the electoral system is paramount to Australians' faith in their government. The coalition's response to the five recommendations proposed in the interim report is guided by these four core principles.

The first recommendation is that, ahead of the next federal election, the government implement voluntary codes relating to watermarking. Though coalition members of the committee do not oppose this recommendation in principle, the coalition reserves its final position on this recommendation until the United States policy response to AI is holistically assessed following the US election. With different US states opting for different policy responses to manage AI, the US election will provide guidance to Australian policymakers on the different mechanisms to manage the risks that AI poses to Australia's democracy.

The second recommendation from the committee is that the Australian government undertake a thorough review of potential regulatory responses to AI. The coalition members of the committee would welcome such a review, but they will not support any rushed legislative responses designed to fit political timelines, especially if the response contains prohibitions or restrictions on freedom of speech.

The third recommendation from the committee is that laws restricting the production or dissemination of AI content be designed to complement, rather than conflict with, the recently introduced disinformation and misinformation reforms and the foreshadowed truth-in-political-advertising reforms. The coalition members of the committee strongly oppose this recommendation. The coalition members do not support the introduction of measures that purport to adjudicate truth in political advertising, nor does the coalition support the dystopian reforms included in the government's Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.

Freedom of speech and the contestability of ideas are necessary, indeed essential, for a healthy liberal democracy. Distinguishing between truth, opinion and falsehood is an inherently subjective process and one that is appropriately left to the Australian public. The federal government and its bureaucrats, no matter how independent and qualified, have neither the scope nor the ability to adjudicate what is or is not misinformation. It is inappropriate for any government body, let alone a government minister, to have the authority to censor the Australian people and their political parties in their communications. Australian democracy ought to remain a marketplace of ideas. If Australians share statements that are considered to be false, it is the role of civil society to hold those statements to account, not the role of the federal government to prohibit such statements in the first place.

The fourth recommendation from the committee relates to mandatory guardrails for AI in high-risk settings. As with the first recommendation, the coalition members of the committee do not oppose this recommendation in principle, although the coalition will reserve its final position until the United States policy response is assessed following the US election.

The fifth recommendation from the committee is that the government examine mechanisms to improve AI literacy for Australians. While the coalition does not oppose this recommendation, it is particularly important in the electoral contest that any AI education programs are designed following extensive consultation with the opposition.

In contrast with the theme of the report, the coalition members of the committee hold that freedom of speech is not a mere constitutional guardrail but is integral to the success of our liberal democracy. That is why coalition members of the committee strongly oppose laws that purport to adjudicate truth in political advertising and the dystopian reforms set out in the government's Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.

It is not surprising that the Labor government are seeking to develop further mechanisms to control the Australian public. Indeed, proposals that purport to govern truth in political advertising and proposals targeted at providing social media giants with financial incentives to silence the Australian public play into the consistent dystopian vision that the Labor Party has for our country—a vision of less freedom, greater executive secrecy and less transparency. As such, it is unsurprising that the Labor Party are now attempting to use further vehicles to censor the Australian public, through misinformation regulations and laws that purport to adjudicate truth in political advertising.

The coalition members of the committee are concerned that if the government introduces a rushed regulatory model for AI with prohibitions on freedom of speech in an attempt to protect Australia's democracy, the cure will be worse than the disease. The coalition members of the committee would welcome the opportunity to work with the government on balancing how our freedom of speech can be protected in an AI world.

3:51 pm

Varun Ghosh (WA, Australian Labor Party)

As a starting point, I'd like to commend the chair of the committee, Senator Tony Sheldon, whose leadership and perspicacity have been evident throughout the conduct of this inquiry and are reflected in both the interim report and the thoughtfulness of the contributions that have been made to it. Artificial intelligence is a transformative technology in myriad ways, and its use cases are so broad that they will come to affect almost every aspect of Australian life. That is no less true in the context of the production, consumption and analysis of information and data, and AI in this space presents significant risks and opportunities for Australia.

This inquiry has revealed some core regulatory and policy principles that are needed to balance the risks and benefits of AI. They include ensuring that AI is deployed safely and responsibly; that AI is deployed and utilised in a manner that improves fairness and opportunity in Australian society rather than detracting from it; that Australia as a country and our people derive the benefits, economic and otherwise, of the safe and responsible use of AI; and that Australia itself has some sovereign capability in the artificial intelligence space. It's also relevant to consider whether our regulations in this space are broadly coherent with global frameworks, because AI is a global problem and the generation and use of this technology will transcend borders.

The inquiry also revealed some of the regulatory difficulties that AI presents as a technology. First, the current and future uses and capacities of AI are not fully understood and will expand further. There is often an asymmetry of information between AI developers and regulators, and the speed of technological development in this space has sometimes been underestimated and shows no signs of slowing down. The final report will address this issue more fully, but the interim report deals with the potential for AI technology, and in particular generative AI, to influence electoral processes and to undermine public trust and confidence in our democracy.

The recommendations of this interim report reflect many of the recurring issues that arose during the inquiry itself. But may I make, as a starting point, a response to Senator McGrath's contribution today, which is to say that no-one from the government—neither in the recommendations in this report nor in the full report when it comes out—is attempting either to limit freedom of speech or to challenge or undermine a marketplace of ideas. That's simply not the objective here.

But what is a focus of the inquiry and what is going to be an important aspect of how we regulate AI in this country is making sure that AI is not used to mislead or deceive Australians. That's at its core, and it's so that Australians are not, for instance, consuming false information generated by AI and thinking that it's something else. It's not about undermining the contest of ideas. It's just about making sure that people understand who's speaking at a given point in time. Do they know the information is real? The technology is sufficiently good now—as was demonstrated by one of the other members of the committee, Senator Pocock, in one of his recent posts—that ordinary Australians can produce effectively fake content, and it's important in the context of our democracy that people are protected from that. The methods are various, and I think that's what the first recommendation goes to: attempting to utilise methods like watermarking, setting standards and disclosing to consumers when they are interacting with AI and with AI generated content to ensure that Australians know what they're looking at.

I'll deal with one of the other remarks that Senator McGrath made, which was that the Liberals are going to reserve their position. That's understandable to a point, but it does give us an insight into one of the things this parliament will have to do when dealing with artificial intelligence: be as nimble as it can be. This is a space that moves quickly, and the regulations may require significant nuance as well as updates as things change.

In respect of recommendation 2, I think the coalition generally indicated its support. The benefit of that approach, of taking our time and getting it right, is that we can manage some of the regulatory challenges I mentioned earlier and ensure that the regulations are carefully designed but also comprehensive. AI will seep into so many aspects of the way we communicate as a society, and it is important that we deal with that comprehensively, or holistically. Recommendations 3 and 4 are aimed at ensuring coherence in the way we deal with AI in the political space and in the broader community. Again, it's not about censoring different things; it's about making sure that people know what they're looking at.

The fifth recommendation is, I think, possibly the most important one in terms of the way we as a society deal with AI, because it deals with Australia's digital literacy in this space and the ability of Australians to navigate these fields and environments with appropriate information and with sufficient skills and knowledge to avoid being deceived, being misled or being the victims of bad actors. One of the things that we see around the world, in elections and in broader contexts, is that bad actors use this technology to hurt or deceive ordinary people. One of the best inoculations against that is digital literacy across our entire community. Therefore I'm very proud to stand by recommendation 5 and this interim report. I seek leave to continue my remarks later.

Leave granted; debate adjourned.