Senate debates

Tuesday, 26 November 2024

Committees

Adopting Artificial Intelligence (AI) Select Committee; Report

5:53 pm

James McGrath (Queensland, Liberal National Party, Shadow Assistant Minister to the Leader of the Opposition)

The coalition agreed with a small number of recommendations in this report, but we certainly disagreed with a larger number of recommendations, which is why Senator Reynolds and I provided a dissenting report.

The governance of artificial intelligence does pose one of the 21st century's greatest public policy challenges. Any AI policy framework ought to safeguard Australia's cybersecurity, intellectual property rights, national security and democratic institutions without stifling the many potential opportunities that AI presents in relation to job creation and productivity growth. AI does present an unprecedented threat to Australia's cybersecurity and privacy rights. AI technologies, especially large language models such as ChatGPT, are trained on substantial amounts of data in order to generate outputs. That is, to gain their predictive capacity, large language models like ChatGPT must be fed significant quantities of data so that the technology can develop its own text, images or videos.

One risk of AI is that these models become an amalgamation of the data they are fed. As a result, if the information going into a particular large language model is biased or prejudicial, there is a significant risk that the model will replicate such biases and discrimination on a mass scale. However, one of the greatest risks associated with modern advancements in AI is the inappropriate collection and use of personal information, as well as the leakage, unauthorised disclosure or de-anonymisation of personal information. With little to no domestic regulation of large language models, especially those owned and operated by multinationals such as Meta, Google and Amazon, the storage and utilisation of significant amounts of private data on their users is a real risk. When asked about the extent to which these organisations use the private data of their users in the development of their AI models, they provided very unclear responses. Indeed, Meta did not even answer questions about whether it used private messages sent through Messenger or WhatsApp in training its large language model, Meta AI. Quite apart from the severe privacy considerations related to this type of conduct, as large language models have not yet matured, there is a significant risk that private information on certain users may unintentionally form the basis of future outputs. Such a risk to the cybersecurity of the Australian people is unprecedented. On a similar basis, AI presents a significant challenge not just to Australia's creative industries but to Australia's entire intellectual property rights framework more broadly.

As the final report highlights, a significant copyright issue arises where copyrighted materials are used to train AI models. Indeed, the data that large language models require to acquire predictive capacity, including images and text, is often extracted from the internet with no safeguards as to whether that data is owned by another individual or entity. When Meta, Amazon and Google were asked whether they use copyrighted works in training their large language models, they either did not respond, stated that the development of large language models without copyrighted works is not possible, or stated that they trained their large language models on so much data that it would be impossible even to know. These potential violations of Australia's copyright laws represent only the beginning of the threat that generative AI poses to the ongoing management of intellectual property rights in Australia. The Department of Home Affairs highlighted the severe national security risk presented by AI in its submission to the inquiry. Owing to the recent exponential improvements in AI capabilities, coupled with the unprecedented level of publicly available personal and sensitive information on many Australians, foreign actors now have the ability to develop AI capabilities to target 'our networks, systems and people'. That is, foreign actors could gain the ability to target specific Australians through AI capabilities trained on those Australians' own private and sensitive data. The ability of foreign or malicious actors to use sophisticated AI technology for scamming and phishing represents a significant threat to Australia's national security.

As this inquiry into AI occurs in the context of the second anniversary of the public release of ChatGPT, these threats have been clear and in the public domain for 24 months, yet the federal Labor government has seemingly done absolutely nothing to deal with these threats to Australia's cybersecurity, intellectual property rights and national security across this entire two-year period. Indeed, 10 months ago, the Department of Industry, Science and Resources stated:

existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur

Yet absolutely nothing has happened over the past 10 months. The Labor government has neglected its responsibility to deal with any of the threats that the growth of the AI industry poses to the Australian people and Australian entities. If the inquiry illustrated anything, it reaffirmed the view that the governance of AI is an intractable public policy problem. The inquiry has also demonstrated that AI poses an unprecedented risk to our cybersecurity, intellectual property rights, national security and democratic institutions. Though it is essential that the federal government minimises compliance costs for businesses that do not develop or use high-risk AI, it must act to address the significant risk that AI poses to Australia's security and our institutions of governance.

The Labor government's complete inaction on any AI-related policymaking whatsoever, despite its own admission 10 months ago that existing laws do not adequately prevent AI-facilitated harms, is a disgrace. Nevertheless, the coalition will always welcome the opportunity to work with the government on tackling the public policy challenges associated with the governance of AI in our contemporary society. I seek leave to continue my remarks later.

Leave granted.
