Senate debates

Tuesday, 26 November 2024

Committees

Adopting Artificial Intelligence (AI) Select Committee; Report

5:42 pm

Tony Sheldon (NSW, Australian Labor Party)

I present the final report of the Select Committee on Adopting Artificial Intelligence, together with accompanying documents, and I move:

That the Senate take note of the report.

I rise to take note of the final report of the Senate Select Committee on Adopting Artificial Intelligence. First of all, I want to thank the deputy chair, Senator Shoebridge, the other committee members and the secretariat for all their work on what has been a very collegiate inquiry. I also want to thank everyone who provided evidence.

The committee was tasked with inquiring into the adoption of AI in Australia. Throughout the inquiry there was no doubting that AI is already here and is already impacting the lives of Australians in a number of ways, both positive and negative. It has the potential to substantially increase productivity, wealth and wellbeing. It can automate menial, unfulfilling tasks, freeing up our time for more productive, creative and fulfilling pursuits. There was also no doubt that this is only the beginning. AI will be a transformative technology that will, in the not too distant future, touch everyone in every aspect of their lives. That much is obvious.

The real question before the committee and before regulators around the world is: who will see the benefits of the age of AI? We're now at an important crossroads for AI regulation. As it stands, in the absence of regulation, the spoils of AI will disproportionately go to the richest and most powerful people in society, particularly the big tech companies and their billionaire backers—the likes of Amazon, Google and Meta. It's up to us to ensure that the value created by AI is enjoyed by all Australians, not just by those at the very top. This report contains 13 recommendations that aim to do just that.

Throughout the inquiry, we heard the same core concerns about how these AI models work. The most common concern was about transparency, especially with the big general-purpose AI models being developed by big tech; that means models like OpenAI's GPT, Google's Gemini and Meta's Llama. Stanford University's Center for Research on Foundation Models produces the authoritative index of how transparent these models are. Stanford has repeatedly found that these general-purpose AI models are opaque black boxes, especially when it comes to what data is used to train them. Do they use copyrighted data? That's an important question. Private or personal data? Where does the data come from? How do they select what data goes into them? These are all questions that big-tech developers try to avoid answering.

As the Law Council said, echoed by many other submitters:

To improve public trust and confidence … AI technologies need to be transparent, well understood, and subject to stringent safeguards …

We also heard about the related issues of bias and discrimination present in AI outputs. The content and decisions that AI models generate reflect the biases in the data that goes in. We heard about the experience at Amazon, where they introduced an AI recruitment system to help them with hiring. As the ACTU assistant secretary, Joseph Mitchell, told the committee, they trained the system to recommend new employees based on the existing employees in the workplace, not realising the AI was predicting they should preference white, male, private school attendees in their hiring practices. That's what you do when you hire based on the past. That's just one example of how AI can reinforce, even unintentionally, damaging bias and discrimination.

Another related issue was data privacy. Many Australians will be surprised to hear that Meta unilaterally decided that all the content uploaded to Facebook or Instagram since 2007 is fair game for training their AI products. When the committee asked Meta how someone in 2007 could consent to a photo of their children being used to train an AI product that wouldn't exist for another decade, they said, 'Well, I can't speak to what people did or did not know.' It's the sort of ridiculous response we got from big-tech platforms over and over again, especially on questions of privacy.

I asked Amazon about using audio recordings taken in people's homes through Alexa to train its AI products. Amazon refused to answer. I sent Google a list of 28 Google products and services, ranging from Android phones to Google Docs to Gmail, and asked which of those services they've taken user data from to train their AI products. Google refused to answer. Do you see a pattern emerging here? This lack of transparency becomes particularly problematic when you consider high-risk uses of AI.

I commend the Minister for Industry and Science, Minister Husic, for the significant work he's undertaken to this point. Throughout this year he's engaged in consultation on how we can best legislate guardrails around high-risk AI uses. The first three recommendations of this report provide the committee's view. We need new dedicated AI legislation mandating transparency, testing and accountability requirements for high-risk uses of AI. That brings us in line with the approach taken around the world, including in the EU, Canada and the UK. We need to ensure that general-purpose AI models, those with the biggest and broadest impact but the worst record on transparency, are captured by these guardrails.

The other issue is how AI will impact people at work. At the most severe end, there are concerns about job losses. The Finance Sector Union said:

It is also clear that certain categories of jobs are far more susceptible to impacts from AI … There is also a risk that these impacts will be felt more by people of lower socioeconomic groups, worsening inequality.

The risk of job losses highlights why it is so important that government, employers and unions come together to identify pathways to retraining and reskilling.

The Australian Chamber of Commerce and Industry said realising the benefits of AI depends on 'embracing retraining opportunities, and investment in research and education'.

For most people, their jobs will not be fully automated by AI but their work will change. This is already happening. For example, the SDA NSW & ACT secretary, Bernie Smith, told the committee that retail companies like Amazon are already using AI to surveil workers, monitor performance, set rosters or even track them outside of work. As Mr Smith said:

… this technology has either the capacity … to improve the … equity … and security of work … or, alternatively, the capacity to … increase insecurity and tether insecure workers to their apps in a virtual Hungry Mile of always on-demand hiring, where the fastest finger gets the next shift.

In fact, there was widespread concern about AI being used to manage work in ways that would create unsafe work environments. The Victorian Trades Hall Council said AI workplace surveillance is 'dehumanising, invasive and incompatible with fundamental rights'.

That's why the committee's recommendations 5, 6 and 7 call for the use of AI in the workplace to be categorised as high-risk and for our longstanding tripartite work health and safety laws to be extended to apply to the workplace risks created by AI. What that means in practice is that employers should have duties to consult with their workforce on how AI is introduced in workplaces. They should have duties to minimise risks, including psychosocial risks. And the workforce should have the right to representation and to stop work if there's a serious and imminent threat to their safety.

As the report touches on, this approach is supported by a broad range of stakeholders, including unions, Digital Rights Watch, Centre of the Public Square, the Human Rights Law Centre, industrial lawyers and Australian AI developers. One of those developers, Michael Gately, the CEO of Trellis Data, said:

The idea that AI … fits under … [the] OH&S set of frameworks we already have is brilliant. That is exactly where we should be.

Of course, the point on consultation is particularly important. As Professor Nicholas Davis from the University of Technology Sydney said:

… despite the fact that tech companies are saying that artificial intelligence offers the greatest opportunity for workplace productivity … workers are invisible bystanders in this conversation. They are not consulted …

The committee makes it very clear that we believe workers deserve a say in how AI is used in their workplace, to make the outcomes better for employers, the workforce and society more generally. Those workers include authors, journalists, scriptwriters, graphic designers, voice actors, game designers and many more. Big tech AI developers like Amazon, Google and Meta have committed arguably the largest theft in history by scraping the work of these people and using it to train their AI products, and then using those AI products to produce inferior imitations of their work, putting their future earning capacity in jeopardy. When I asked Amazon if they engaged in this, including work published on their own Kindle and Audible platforms, they said, 'We don't disclose specific sources of our training models.' It's a simple matter of us protecting—
