House debates
Wednesday, 6 November 2024
Bills
Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024; Second Reading
1:09 pm
Zoe Daniel (Goldstein, Independent)
With respect to the previous speaker, I think you're about to hear a quite different speech. Each time a technology is invented which holds transformative potential, it brings with it a fresh set of opportunities and risks. The printing press was the foundation of mass communication but was used as a tool for propagandistic social control. Television revolutionised the news media and brought the world home to people's lounge rooms, drawing them far closer to political events—at times, viscerally so. The internet has enabled instant communication but, in doing so, has exposed us to unforeseen social risk, has eroded public trust in traditional democratic institutions and has left us questioning the nature of truth itself.
A common thread which links the three inventions I named is the disruption they have brought to what Australian academic and former journalist Professor Andrea Carson of La Trobe University describes as the 'information landscape'. In today's information landscape everyone's a producer, no-one's an editor and content is everywhere. And we've proven ourselves to be ineffective at managing it.
As a three-time ABC foreign correspondent, I began my career in the analogue days of reel-to-reel tape in a radio studio. Producing news was rigorous, and news travelled only at a pace the radio or TV schedule would allow. Traditional broadcast media, perhaps unwittingly, acted as gatekeepers—mediators—in the public discourse. Traumatic footage involving disasters and accidents, for example, was—and still is, by mainstream media—carefully cut together. This can differ according to country and culture, with higher tolerance in some places for graphic imagery. Having witnessed immense death and destruction firsthand as a foreign correspondent many times, I can truthfully say that the horrific things that continue to exist in full colour in my memory never ever made it to your screens—only my descriptions of those events did.
Today in discussions about free speech and censorship this fact has been lost. The humans involved in distributing information to the public have always made a series of decisions and value judgements about what you see, what's too violent, what's too offensive and so on. In general, these decisions are guided by editorial policies together with the collective experience and instincts of those producing the news.
As time progressed in my career as a journalist, so did technology as the industry transitioned from a bricks-and-mortar media landscape to one of unfettered information. The rise of the internet and novel forms of digital media flipped the structure of our media environment on its head. This new landscape had transformative potential. For example, Facebook and Twitter spread democratic ideas and mobilised mass demonstrations throughout the Arab world. When I was covering deadly civil unrest in Thailand in 2010, Twitter became a tool used by and between journalists—to work out what was happening where and whether it was safe to be there—as well as a way of sharing news. This transformation reached a tipping point in 2017 during my coverage of the Trump administration. Then it felt as if accusations of 'fake news' were coming as fast as the President could tweet. Suddenly we began questioning everything. Fragmentation of trust and truth became so great that basic facts became contested.
The COVID years both suffered from and compounded this post-truth era, which was adeptly grasped by Donald Trump himself, who expertly seeded doubt by alleging the 2020 election had been stolen due to flaws in the COVID-affected electoral process. On 6 January 2021, those storming the Capitol believed they were protecting democracy, not attacking it. Such was the success of Trump's disinformation campaign. More recently we've seen the dangerous and destructive results of disinformation in the form of false accusations about the Bondi Junction attacks and race riots in the UK triggered by online lies.
I've spent some time outlining what's a huge problem for societies across what is now a very connected world. The question is how to fix it. Several countries have either failed to legislate on mis- and disinformation or relented to public backlash by repealing new laws. In the US, for example, the newly created Disinformation Governance Board was shut down after just a few months due to pushback.
At the heart of this minefield of international legislative failure and the culture war that closely follows is the question: what exactly do the words 'disinformation' and 'misinformation' mean? Anxiety over the integrity of free speech should not be dismissed. At its core lie philosophical questions about what modern society's relationship with information should be. To date, successive Australian governments of both stripes have allowed these questions to be answered by big tech. The Albanese government is no exception. Its foray into this territory has been fraught. The first draft of this bill was shelved, only to reappear over a year later, after tens of thousands of submissions were received.
This legislation, the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024, has two core objectives. One combats misinformation and the other disinformation. The two are entangled. Both spread falsehoods. Both can cause harm. But the distinction lies in intent. Disinformation is the deliberate sowing of falsehoods. Misinformation is inadvertent, yet it can accumulate to become capable of demonising communities and polarising nations. The US and the UK continue to grapple with the lasting impact caused by targeted disinformation campaigns. Society is polarised and trust eroded.
We still have time to prevent this from happening here. In 2015, evidence showed that Russia's notorious Internet Research Agency conducted influence operations in the Australian Twittersphere. A spike of troll and possibly bot posts targeted our then Prime Minister for his critique of the Russian President following the country's shooting down of MH17. Another spike occurred in April 2017. After Australia deployed fighter aircraft in Russia-backed Syrian airspace, a barrage of nonsensical content appeared on the hashtag #AusPol at four to five times the rate of typical organic traffic. This technique was designed to hook Australian Twitter users by presenting them with innocuous or humorous content and, over time, luring them to a more extreme viewpoint.
A majority of Australians polled before the Voice to Parliament referendum, says Timothy Graham of Queensland University of Technology, supported a 'yes' vote 12 months out from the vote. Public sentiment then fluctuated wildly, in correlation with the use of non-traditional pan-partisan and conspiratorial messaging techniques across X by the 'no' campaign. This is one clever example of how automated content recommendation via algorithms can be weaponised for political purposes and, indeed, how algorithms have begun to change the nature of modern politics.
Misinformation, though, is more challenging to nail down. Reliable data outlining the actual volume of misinformation on the internet is lacking, in part due to the way digital platforms prevent researchers from accessing platform data. The legislation expects digital platforms to intervene via various methods where content is both 'reasonably verifiable' and likely to cause 'serious harm'. These are subjective concepts. Indeed, as presented in this legislation, they are imperfect. These definitions remain matters of discussion between me and other members of the crossbench and the government.
It's a delicate balance to preserve freedoms when we know that algorithms favour outrageous content such as misinformation and disinformation and propagate it to drive user engagement and profit, with little or no regard for public safety and social cohesion. Where the boundary should be when it comes to public discourse and misinformation has become a political flashpoint. This debate is inflated when, in effect, this bill expects platforms to inform users about misleading content, not remove it, and to increase transparency: where content may cause serious harm to an individual or our community, do something about it, and then tell ACMA how you'll do it.
To be clear, platforms already action misleading content, but under their own rules—Elon Musk's rules or Mark Zuckerberg's rules. Whose do you prefer—theirs or ours? Rules formed by our parliament or big tech? Currently, all the power lies with the platforms, and doing nothing, in my view, is no longer an option. But what we do must hold carefully calibrated definitions and demonstrate a clear capacity to be effective in encountering this societal threat.
For the moment, I'm not entirely satisfied that either the definitions or the transparency measures in this bill are adequate, and discussions with the government on these matters are ongoing. It's clear, though, that competition alone has not aligned digital platforms with the best interests of society. The misaligned commercial incentive structures which guide the logic of big tech's algorithms are why we're glued to our screens, why our feeds make us compare our lives with our friends', why political content polarises and outrages us, why Cambridge Analytica was able to exploit our divisions, and why nation states can cynically weaponise our open but fractured information landscapes. Surely, there is a better approach.
This government talks a big game about standing up to big tech, but the truth is that time after time it takes the path of least resistance. Labor's strategy on online safety revolves around the industry co-regulation model—self-regulation, in other words—and it's a core element of this bill. Indeed, the core thesis of the co-regulatory model was coalition policy. This is backed by a veiled threat of resolve to impose more interventionist regulation if big tech companies don't lift their game on their own. This is in the face of new laws in the EU and UK which have far surpassed our own. Their approach—a systems approach to online safety—is proving to be the far more effective strategy. But not for Australia: 'Just switch your devices off,' we're told. This is not a solution.
I recognise the obvious benefits which digital platforms can provide, but the risk of harm in Australia's existing regulatory environment is too high. It goes far beyond the narrow scope of just mis- and disinformation, and we can't continue to let the platforms decide how to fix it, which is why I will be introducing substantive, detailed amendments to this legislation. Today researchers in Europe and America have data access rights where it's in the public interest for them to investigate how algorithms are operating on a digital platform in a specific region. My amendments will allow Australian researchers to gain the same level of insight into the data of how algorithms are functioning in our country too. I fear that ACMA, at present, is not properly resourced with the technical expertise needed to critically analyse the data provided to the government under the transparency measures in the bill in its current form. Absent data access rights for Australian researchers, the capacity of this bill to meaningfully and urgently combat the threat mis- and disinformation poses to our democracy is, in my judgement, questionable.
The rapid and uncontrolled spread of mis- and disinformation is a risk to social cohesion, public health and safety, and political stability, yet this bill has arguably been the subject of disinformation in the form of a knee-jerk reaction about censorship, despite nothing in the bill requiring content to be removed and no penalties for individual users. I absolutely do understand public concern about free speech—as a former journalist I see it as a fundamental tenet of democracy—but at its core the aspiration here is to make platforms more responsible—just a bit more, not even a lot more. What those opposing this legislation need to answer is this: do you believe mis- and disinformation are threats to democracy and, if so, do you plan to do something about it? If not, we'll just have to leave the rule-making to Mr Zuckerberg and Mr Musk.