House debates

Monday, 12 February 2024

Private Members' Business

Digital Economy

6:47 pm

Daniel Mulino (Fraser, Australian Labor Party)

I disagree with this motion in large part because I disagree with the approach that the mover of the motion takes in relation to AI. I might say that I disagree with the long bow and the massively overblown rhetoric of the immediately preceding speaker, but I will leave that largely to one side and focus on the contribution from the member for Casey.

I think there is probably, to some degree, an overlap of ambition and aspiration when it comes to AI across this chamber, across both major parties and across the crossbench. I suspect there's a recognition across parliament and all chambers that this is, in all likelihood, an extremely important transition for our society and for our economy. Where I disagree with those opposite is where they say somehow we need some kind of quick fix. Their quick fix seems to be that CEOs need certainty for the next $5 million investment, that we need to adopt this mysterious AI strategy now, that what we really need is another 50-page glossy document, that more important than anything is having the term 'AI' in someone's title. For goodness sake, someone has to have 'AI' in their title! This is all we've heard from them. There is not really much content at all other than symbolism.

What we have from this government, in contrast to that quick fix and, frankly, symbolic approach is a recognition that this is actually an extremely complicated emerging phenomenon. I want to touch on a couple of the pros and cons that are emerging. Experts put different weight on the pros and the cons and on where they see the risks and the potential benefits. Clearly, AI has a lot of potential upside and could actually be the next generalised industrial revolution. For example, it creates the potential for types of functionality which otherwise don't exist. Autonomous vehicles are one example. Vehicles that have to process vast amounts of data about their surrounds in real time couldn't operate safely and in a coordinated way without AI. This could be extrapolated across many parts of the economy. So there is that functionality.

Secondly, there's the capacity to replace and improve a whole raft of decision-making—for example, diagnosis based upon facts. This could be used in all sorts of situations, such as medical, legal or engineering contexts. There is also the use of big data to inform the activities of large organisations like governments and even large corporates.

Thirdly, there's outright creativity. We know, for example, that AI is now writing essays. AI is probably writing better speeches than most private members' bills generate! But, even in the context of chess, chess geniuses now see the moves that AI generates, and it often produces moves that are clearly optimal but that human beings have no idea why. They just call them 'computer moves' now because they don't know why they're good.

But then, of course, there are all the potential downsides. There are the downsides around privacy. What does the interpretation of all this data mean for ordinary citizens? We don't yet know what AI will do on that front. There are the ways in which it might change market structures, where, if AI requires massive amounts of research and development to get to the cutting edge, it might entrench the largest players or the winners in markets. It could also lead to the replacement of many human tasks, many more than previous waves of automation and industrialisation. What does that do to the role of humans in the workforce?

The point I make is that we don't quite yet know what the balance of all these trends is. We don't yet know the trajectory of each of these trends. That's why, in my opinion, we need to take a cautious approach. We don't want to over-regulate too quickly, because we might kill the golden goose. At the same time, we probably do need to set up some guardrails, what one might describe as a regulatory sandbox. That's exactly what this government is doing. It's taking an approach which is looking at AI in high-risk settings. We've received 500 submissions. The government is developing a voluntary AI safety standard with the industry. We're looking at safeguards that would include testing, transparency and accountability.

With something as broad ranging as AI, something with huge potential upsides which could boost productivity growth in ways we can only imagine, but something that also has potential downsides that we can find hard to conceive, we have to take an approach that is measured and staged, and that's exactly what the government is doing. The right approach here is not to wade in and just regulate for its own sake; it's to take a measured approach.
