House debates

Wednesday, 6 November 2024

Bills

Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024; Second Reading

7:59 pm

Monique Ryan (Kooyong, Independent)

The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 demands careful consideration. While the government's desire to tackle the harmful effects of misinformation and disinformation in our media is both important and worthy, its somewhat piecemeal approach to the issue raises concerns both about the potential limits of its effectiveness and about its potential to limit freedom of expression in this country. Misinformation and disinformation can have devastating effects on human rights, on social cohesion and on democratic processes. Australians are increasingly concerned about them.

The rise of internet technologies has amplified diverse voices within our society. These technologies have enabled new economic opportunities for many. They have democratised access to information and education. They have given all of us, globally, new ways to connect. However, internet technologies have also enabled the rapid spread of abhorrent and illegal materials. Hate speech, misinformation, disinformation and child exploitation content have contributed to an erosion of trust in our media and democracy. Vulnerable groups, including children, women, Indigenous women and men, people of colour and LGBTQIA+ people, are particularly targeted by harmful content, harassment and hate speech.

The digital providers have failed to voluntarily address the aspects of their platforms which enable the spread of harmful content—features such as recommender systems, dark patterns, invasive data harvesting, ineffective and vague content moderation policies, addictive design features and opaque mechanisms for reporting misconduct or abuse online. There have been no commercial incentives for the platforms to regulate misinformation and disinformation; in fact, it's the opposite. Inflammatory content drives engagement, which in turn drives profits. The digital platforms are best placed to reduce this harmful content; they can do so most easily. They have to be made responsible for mitigating the harms that they actively enable.

Effective citizenship requires access to reliable information. Democracy cannot flourish without a diversity of media sources and a regulatory regime that protects consumers against the spread of misinformation, which ensures trust in what we hear, read and see. The erosion of trust in our media is a serious concern, and we do need to address it, but we also need to be conscious of the need to ensure that any measures taken to address misinformation and disinformation do not, paradoxically, exacerbate the toxic erosion of trust and the feelings of disenfranchisement expressed by so many in the population.

I'm particularly concerned about this bill's definitions of 'misinformation' and 'disinformation'. It's very important to differentiate between the unintentional and deliberate spread of false information, but the current wording of this legislation lacks clarity. Drawing a clear line between truth and falsehood is not always simple. There may be legitimate differences in opinion as to how content should be characterised. We have to ensure that these definitions are robust enough to capture harmful content and significant forms of bad action by bad actors but not so broad as to capture legitimate political expression, robust and open expressions and discussions of differences of opinion and scientific debate. There are legitimate fears that the bill could be used by politicians or government officials to suppress dissent and control public narratives.

My constituents are concerned that the government may not be a reliable judge of truth, that it might exempt itself from the same scrutiny that applies to others. Misleading content is published in print and online every second of every day in every region of the world. The terms misinformation and disinformation are vastly overused in public discourse. Too often they are employed by both the far left and the far right to discount opposing viewpoints without engaging in reasoned debate. We all know that, we recognise it and we try to address it, but many people struggle with giving our government the ability to decide what is and what is not true. Many Australians lack faith in the ability of digital platforms to adjudicate on such matters with sensitivity and rigour.

I've also heard from my constituents concerns that this bill grants excessive powers to the Australian Communications and Media Authority and to the eSafety Commissioner, allowing them to regulate and censor online content. There are valid concerns that these authorities, were they to act in an overzealous fashion, could silence legitimate criticism or opposition. In response to such criticism and other feedback from the Human Rights Commissioner and other legal experts, the government has refined its definition of 'serious harm' such that it must now involve significant and far-reaching consequences. Health-related content is only within the scope of this legislation where it relates to public health or preventive health measures. While the ACMA retains broad powers under this bill, it cannot ask that accounts be blocked or posts removed if they are simply posting misinformation unintentionally, unless the content comes from bots.

The bill does not deal with the dissemination of content which may be incorrect or false, only that which is regarded as seriously harmful and reasonably verifiable as false, misleading or deceptive. In that, the law remains piecemeal. It doesn't cover professional news, it doesn't cover government communications, art, satire or religious content. Hate directed at specific groups has to meet the legal bar of vilification to fall under the remit of this legislation.

We need to consider the complexities involved in platforms' content moderation systems as part of these oversight strategies. Recent experience has shown that automated oversight processes, such as those seen in the robodebt scheme, are prone to error when tasked with interpreting complex human behaviour, intention and meaning. The outcomes can be both unintended and devastating. As we saw with robodebt, automation can allow organisations to cut costs, at the expense of accountability and transparency, prioritising efficiency over justice and oversight.

Applying this to the media context, automation in content moderation raises a severe risk of algorithmic overreach. Imagine AI algorithms trained to detect what is deemed harmful or misleading content across Australia's diverse media landscape. In practice, this could well result in uneven, opaque and arbitrary systems of content regulation. The inherent subjectivity in defining misinformation and disinformation makes it especially unsuited to the blunt instrument of algorithmic enforcement. Equally, the proposed dependence on automated systems could well foster a sense of faithless moderation. When individuals are subject to opaque and unaccountable decision-making processes, they are left feeling powerless to challenge or understand the rationale behind any content removal. Moreover, as we've seen with the gradual expansion of surveillance technologies, there's a real risk that these automated content moderation systems, which are initially presented as narrowly targeted solutions, could incrementally expand their reach over time, potentially ultimately encroaching upon a wider spectrum of online expression.

I'm glad that the legislation has been amended now to require that platforms grant access to approved independent researchers, such as academic institutions, not-for-profit entities and other non-commercial researchers. Those researchers should be given immunities to protect them from the sorts of attacks that have recently occurred with X Corp's litigation against the Center for Countering Digital Hate and Meta's actions against CrowdTangle and Reset Tech Australia.

To my mind, the greatest weakness of this piece of legislation is its carve-out for the dissemination of professional news content, which it defines as that produced by or for newspapers, magazines, television, radio and websites. With this legislation, we expect Meta and Instagram to police their content, yet as a country we remain happy to accept the frequent and deliberate misleading of Australians by professional media sources. In my mind, those sources are worse, because they operate on a much larger scale in a mainstream format, and they rarely receive anything other than the mildest admonishment from the media regulator.

Watch Sky, read the West Australian, listen to the dross on 2GB or KIIS FM, then go to Instagram, TikTok or Facebook. On which platforms are women more often subjected to overt misogyny? In which formats are Muslims and Indigenous Australians more often subjected to racism and to bigotry? Who is more likely to attack trans people and kids in the Australian youth justice system? Russian bots or Sky presenters? Who has published the text messages of alleged rape victims? Who has outed women who've made confidential sexual harassment claims? Who has falsely named an innocent individual in Sydney as a mass murderer? Which media outlets in this country most often exhibit clear, intentional political bias reflecting the editorial influence of their owners? The answer, sadly, is the mainstream media, not only in the press but also on digital platforms. This is not to say that the purely digital platforms are not also guilty of these sorts of behaviours. It's just that, in this shameful race to the bottom, the regulation of all forms of media in this country is deeply problematic. We also have the recent example of the referendum debate, which, both online and in the mainstream media, became contaminated by falsehoods and by lies, disguised as truth and repeated and amplified by the mainstream media just as much as by the digital platforms.

In 2020 more than half a million Australians signed a petition calling for the establishment of a royal commission into the Australian media landscape. A subsequent Senate inquiry concluded that the Australian regulatory environment for all forms of news media is weak, fragmented and inconsistent in its governance arrangements and in standards across platforms. That inquiry found that focusing on the internet platforms alone would not resolve the grave problems in Australia's media sector. It recommended consideration of a platform-neutral, single news regulator. In 2021 the Google-owned platform YouTube suspended mainstream broadcaster Sky News for broadcasting medical misinformation about the COVID-19 pandemic. In that instance, a private company was able to act swiftly and effectively to protect the public from misinformation, but ACMA, the purported media regulator, was not.

When she introduced this legislation, the minister referenced the false accusations made against an innocent individual after the Bondi Junction stabbings. Those accusations were made and amplified by Seven Network every bit as much as they were by the digital platforms. For years we have seen that ACMA is ineffectual and toothless. A statement of ministerial expectations regarding its actions might well give strength to those who are unsure whether or not to support this bill.

If we pass this law but do not act on the manifest deficiencies of mainstream media in this country, we are kidding ourselves that we're making any difference to public trust in this country's media landscape. We still need a judicial inquiry into media regulation in this country: an inquiry which should consider a single, independent media regulator to harmonise news media standards and oversee complaints from all parts of our media landscape. Such a regulator would likely deliver a far more effective process for remedying complaints than the legislation we're debating now, and it would do away with duplication and inadequacy.

In opposition this government was supportive of the concept of meaningful media reform. Now that it's in power, it's happy to act on digital platforms to limit their forays into misinformation and disinformation, while also threatening to impose financial levies on those platforms to prop up mainstream media. This is at a time when it is also actively protecting the advertising streams of mainstream media through its refusal to ban gambling advertising, which it knows is harming Australians and which it knows is allowing the mainstream media to behave in ways at odds with the standards that we are proposing in this legislation.

In its current form, with these limitations, this legislation will not do enough. It won't protect democracy in this country, as we so desperately need. It will be just another half-effective half-measure; an inch when we need a mile; a gesture when we need a stand.
