Senate debates

Tuesday, 26 November 2024

Committees

Adopting Artificial Intelligence (AI) Select Committee; Report

5:42 pm

Tony Sheldon (NSW, Australian Labor Party)

I present the final report of the Select Committee on Adopting Artificial Intelligence, together with accompanying documents, and I move:

That the Senate take note of the report.

I rise to take note of the final report of the Senate Select Committee on Adopting Artificial Intelligence. First of all, I want to thank the deputy chair, Senator Shoebridge, the other committee members and the secretariat for all their work on what has been a very collegiate inquiry. I also want to thank everyone who provided evidence.

The committee was tasked with inquiring into the adoption of AI in Australia. Throughout the inquiry there was no doubting that AI is already here and is already affecting the lives of Australians in a number of ways, both positive and negative. It has the potential to substantially increase productivity, wealth and wellbeing. It can automate menial, unfulfilling tasks, freeing us to pursue more productive, creative and fulfilling work. There was also no doubt that this is only the beginning. AI will be a transformative technology that, in the not-too-distant future, will touch every aspect of everyone's lives. That much is obvious.

The real question before the committee and before regulators around the world is: who will see the benefits of the age of AI? We're now at an important crossroads for AI regulation. As it stands, in the absence of regulation, the spoils of AI will disproportionately go to the richest and most powerful people in society, particularly the big tech companies and their billionaire backers—the likes of Amazon, Google and Meta. It's up to us to ensure that the value created by AI is enjoyed by all Australians, not just by those at the very top. This report contains 13 recommendations that aim to do just that.

Throughout the inquiry, we heard the same core concerns about how these AI models work. The most common concern was about transparency, especially with the big general-purpose AI models being developed by big tech; that means models like OpenAI's GPT, Google's Gemini and Meta's Llama. Stanford University's Center for Research on Foundation Models produces the authoritative index of how transparent these models are. Stanford has repeatedly found that these general-purpose AI models are opaque black boxes, especially when it comes to what data is used to train them. Do they use copyrighted data? That's an important question. Private or personal data? Where does the data come from? How do they select what data goes in? These are all questions that big-tech developers try to avoid answering.

As the Law Council said, echoed by many other submitters:

To improve public trust and confidence … AI technologies need to be transparent, well understood, and subject to stringent safeguards …

We also heard about the related issues of bias and discrimination present in AI outputs. The content and decisions that AI models generate reflect the biases in the data that goes in. We heard about the experience at Amazon, where an AI recruitment system was introduced to help with hiring. As the ACTU assistant secretary, Joseph Mitchell, told the committee, the system was trained to recommend new employees based on the existing employees in the workplace, not realising the AI would learn to preference white, male, private school attendees in its hiring recommendations. That's what happens when you hire based on the past. That's just one example of how AI can reinforce, even unintentionally, damaging bias and discrimination.

Another related issue was data privacy. Many Australians will be surprised to hear that Meta unilaterally decided that all the content uploaded to Facebook or Instagram since 2007 is fair game for training their AI products. When the committee asked Meta how someone in 2007 could consent to a photo of their children being used to train an AI product that wouldn't exist for another decade, they said, 'Well, I can't speak to what people did or did not know.' It's the sort of ridiculous response we got from big-tech platforms over and over again, especially on questions of privacy.

I asked Amazon about using audio recordings captured in people's homes through Alexa to train its AI products. Amazon refused to answer. I sent Google a list of 28 Google products and services, ranging from Android phones to Google Docs to Gmail, and asked which of those services they have taken user data from to train their AI products. Google refused to answer. Do you see a pattern emerging here? This lack of transparency becomes particularly problematic when you consider high-risk uses of AI.

I commend the Minister for Industry and Science, Minister Husic, for the significant work he's undertaken to this point. Throughout this year he's engaged in consultation on how we can best legislate guardrails around high-risk AI uses. The first three recommendations of this report provide the committee's view. We need new dedicated AI legislation mandating transparency, testing and accountability requirements for high-risk uses of AI. That brings us in line with the approach taken around the world, including the EU, Canada and the UK. We need to ensure that general purpose AI models, those with the biggest and broadest impact but the worst record on transparency, are captured by these guardrails.

The other issue is how AI will impact on people at work. At the most severe end, there are concerns about job losses. The Finance Sector Union said:

It is also clear that certain categories of jobs are far more susceptible to impacts from AI … There is also a risk that these impacts will be felt more by people of lower socioeconomic groups, worsening inequality.

The risk of job losses highlights why it is so important that government, employers and unions come together to identify pathways to retraining and reskilling.

The Australian Chamber of Commerce and Industry said realising the benefits of AI depends on 'embracing retraining opportunities, and investment in research and education'.

For most people, their jobs will not be fully automated by AI but their work will change. This is already happening. For example, the SDA NSW & ACT secretary, Bernie Smith, told the committee that retail companies like Amazon are already using AI to surveil workers, monitor performance, set rosters or even track them outside of work. As Mr Smith said:

… this technology has either the capacity … to improve the … equity … and security of work … or, alternatively, the capacity to … increase insecurity and tether insecure workers to their apps in a virtual Hungry Mile of always on-demand hiring, where the fastest finger gets the next shift.

In fact, there was widespread concern about AI being used to manage work in ways that would create unsafe work environments. The Victorian Trades Hall Council said AI workplace surveillance is 'dehumanising, invasive and incompatible with fundamental rights'.

That's why the committee's recommendations 5, 6 and 7 call for the use of AI in the workplace to be categorised as high-risk and for our longstanding tripartite work health and safety laws to be extended to apply to the workplace risks created by AI. What that means in practice is that employers should have duties to consult with their workforce on how AI is introduced in workplaces. They should have duties to minimise risks, including psychosocial risks. And the workforce should have the right to representation and to stop work if there's a serious and imminent threat to their safety.

As the report touches on, this approach is supported by a broad range of stakeholders, including unions, Digital Rights Watch, Centre of the Public Square, the Human Rights Law Centre, industrial lawyers and Australian AI developers. One of those developers, Michael Gately, the CEO of Trellis Data, said:

The idea that AI … fits under … [the] OH&S set of frameworks we already have is brilliant. That is exactly where we should be.

Of course, the point on consultation is particularly important. As Professor Nicholas Davis from the University of Technology Sydney said:

… despite the fact that tech companies are saying that artificial intelligence offers the greatest opportunity for workplace productivity … workers are invisible bystanders in this conversation. They are not consulted …

The committee makes it very clear that we believe workers deserve a say in how AI is used in their workplace, to make the outcomes better for employers, the workforce and society more generally. That includes creative workers: authors, journalists, scriptwriters, graphic designers, voice actors, game designers and many more. Big tech AI developers like Amazon, Google and Meta have committed arguably the largest theft in history by scraping the work of these people and using it to train their AI products, and then using those AI products to produce inferior imitations of their work, putting their future earning capacity in jeopardy. When I asked Amazon if they engaged in this, including with work published on their own Kindle and Audible platforms, they said, 'We don't disclose specific sources of our training models.' It's a simple matter of us protecting—

Matt O'Sullivan (WA, Liberal Party)

Thank you, Senator Sheldon. Unfortunately, your time has expired.

Tony Sheldon (NSW, Australian Labor Party)

I seek leave to continue my remarks later.

Leave granted.

5:53 pm

James McGrath (Queensland, Liberal National Party, Shadow Assistant Minister to the Leader of the Opposition)

The coalition agreed with a small number of recommendations in this report, but we certainly disagreed with a larger number of recommendations, which is why Senator Reynolds and I provided a dissenting report.

The governance of artificial intelligence does pose one of the 21st century's greatest public policy challenges. Any AI policy framework ought to safeguard Australia's cybersecurity, intellectual property rights, national security and democratic institutions without stifling the many, many potential opportunities that AI presents for job creation and productivity growth. AI does present an unprecedented threat to Australia's cybersecurity and privacy rights. AI technologies, especially large language models such as ChatGPT, are trained on substantial amounts of data in order to generate outputs—that is, for a large language model like ChatGPT to gain its predictive capacity, it needs to be fed significant quantities of data before it can produce its own text, images or videos.

One risk of AI is that these models become an amalgamation of the data they are fed. As a result, if the information going into a particular large language model is biased or prejudicial, there is a significant risk that the model will replicate such biases and discrimination on a mass scale. However, one of the greatest risks associated with the modern advancements in AI is the inappropriate collection and use of personal information, as well as the leakage, unauthorised disclosure or de-anonymisation of personal information. With little to no domestic regulation of large language models, especially those owned and operated by multinationals such as Meta, Google and Amazon, the storage and use of significant amounts of their users' private data is a real risk. When asked about the extent to which they use the private data of their users in the development of their AI models, these organisations provided very unclear responses. Indeed, Meta did not even answer questions about whether it used private messages sent through Messenger or WhatsApp in training its large language model, Meta AI. Quite apart from the severe privacy concerns raised by this type of conduct, because large language models have not yet matured, there is a significant risk that private information on certain users may, unintentionally, form the basis of future outputs. Such a risk to the cybersecurity of the Australian people is unprecedented. On a similar basis, AI presents a significant challenge not just to Australia's creative industries but to the entire intellectual property rights framework in Australia more broadly.

As the final report highlights, a significant issue in relation to copyright arises where copyrighted materials are used to train AI models. Indeed, the data that large language models require to acquire predictive capacity, including images and text, is often extracted from the internet with no safeguards as to whether it is owned by another individual or entity. When Meta, Amazon and Google were asked whether they use copyrighted works in training their large language models, they either did not respond, stated that developing large language models without copyrighted works is not possible, or stated that they trained their large language models on so much data that it would be impossible to even know. These potential violations of Australia's copyright laws represent only the beginning of the threat that generative AI poses to the ongoing management of intellectual property rights in Australia. The Department of Home Affairs highlighted the severe national security risk presented by AI in its submission to the inquiry. Due to the recent exponential improvements in AI capabilities, coupled with the unprecedented amount of publicly available personal and sensitive information on many Australians, foreign actors now have the ability to develop AI capabilities to target 'our networks, systems and people'—that is, foreign actors could gain the ability to target specific Australians through AI capabilities trained on their own private and sensitive data. The ability of foreign or malicious actors to use sophisticated AI technology for scamming and phishing represents a significant threat to Australia's national security.

This inquiry into AI comes at the two-year anniversary of the public release of ChatGPT. These threats have been clear and in the public domain for 24 months, yet the federal Labor government has seemingly done absolutely nothing across that entire period to deal with these threats to Australia's cybersecurity, intellectual property rights and national security. Indeed, 10 months ago, the Department of Industry, Science and Resources stated:

existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur

Yet absolutely nothing has happened over the past 10 months. The Labor government has neglected its responsibility to deal with any of the threats that the growth of the AI industry poses to the Australian people and their entities. If the inquiry illustrated anything, it reaffirmed the view that the governance of AI is an intractable public policy problem. The inquiry has also demonstrated that AI poses an unprecedented risk to our cybersecurity, intellectual property rights, national security and democratic institutions. Though it is essential that the federal government minimises compliance costs for businesses that do not develop or use high-risk AI, the federal government must act to address the significant risk that AI poses to Australia's security and our institutions of governance.

The Labor government's complete inaction on any AI-related policymaking whatsoever, despite its own admission 10 months ago that existing laws do not adequately prevent AI-facilitated harms, is a disgrace. Nevertheless, the coalition will always welcome the opportunity to work with the government on tackling the public policy challenges associated with the governance of AI in our contemporary society. I seek leave to continue my remarks later.

Leave granted.

6:00 pm

Linda Reynolds (WA, Liberal Party)

I will first of all associate myself with the comments by Senator McGrath, and I fully endorse his remarks and the dissenting report. But I do congratulate all colleagues, including the chair, on the way in which this important inquiry was conducted.

The committee did highlight the profound impact that AI will have and, in fact, is already having on all aspects of life. From my perspective, through this inquiry and also through the work I've been doing with the IPU globally, it is very clear that this is something our nation has to be engaged in. Everybody has to be engaged in the discussion about the threats and opportunities that AI now presents. The technology is developing so fast, particularly in generative AI, that it is really important, I think, that as a nation we hasten with caution—that we adopt the things we are confident about, but only once we have a good appreciation of the unintended consequences and are able to mitigate those risks.

But, as Senator McGrath has said and Senator Sheldon has noted, vast quantities of data—probably including the records of all Australians, on social media and many other platforms—have been scraped without their knowledge and fed into generative AI, the large language models. This is something that should concern all Australians, and I hope that this report will stimulate that discussion.

I am also disappointed that the government hasn't moved far more quickly on this in terms of analysing where current laws are sufficient to cover the regulation of AI as it is currently being used, and then identifying the gaps where new laws will be needed as this technology develops.

I want to spend the last couple of minutes talking about how valuable this inquiry and its findings have been for an inquiry that has just been stood up by the Joint Committee of Public Accounts and Audit, of which I am co-chair with Linda Burney, from the other place. We are now looking at the use of AI in all its forms across the federal public sector, in departments and agencies. We've had 41 submissions so far, and I must confess that what we've seen and heard is quite alarming. That inquiry feeding into this one, and into what is happening across the Commonwealth public sector, is going to be very important. The use of AI is patchy. We've got different departments with different rules, doing different things. There are working groups everywhere. But I fear that a number of departments that make decisions about the entitlements of Australians—which is pretty much most departments—will step into areas that cause great pain and angst, and there will be a lack of transparency for many Australians. So I am very much looking forward to seeing where that goes.

We are also having a look at the national security implications of AI's use by state and non-state actors, and at how AI can be used by criminals in terms of expanding their reach and the rapidity with which their criminal activities, including spear phishing and other things, can impact on all Australians.

I commend the report to the chamber, and I seek leave to continue my remarks later.

Leave granted; debate adjourned.