House debates

Wednesday, 6 November 2024

Bills

Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024; Second Reading

9:46 pm

Allegra Spender (Wentworth, Independent)

Misinformation and disinformation cost our social cohesion and democracy, and there's clearly a need to address them. Earlier this year in my electorate of Wentworth a lone attacker entered Bondi Junction Westfield and indiscriminately attacked and brutally killed six people. In the wake of that attack, a student from the University of Technology Sydney was incorrectly identified as the perpetrator on the platform X, which precipitated abuse, threats of violence, hate speech and racial slurs against him and against the Jewish community, of which he is a member. This August in the United Kingdom, anti-immigration riots erupted when misinformation was spread following the tragic deaths of three young girls at a dance school in north-west England.

There is clearly a cost to allowing harmful misinformation and disinformation to propagate and spread unchecked on social media. It can have significant and sometimes tragic consequences. But fighting misinformation and disinformation can also have real costs. We are in a time of low trust in government and low trust in media. We have only recently emerged from a pandemic which imposed restrictions on our community at levels never seen before. We are facing, too, deep divisions and a battle of narratives and facts regarding a war in the Middle East. In this context, real and perceived restrictions on freedom of expression and ideas, and restrictions on contesting ideas and facts, have the potential to undermine trust in government, trust in our institutions and trust in our society. In doing so, they perhaps make people even more vulnerable to genuine misinformation and disinformation. This is the fraught context in which this bill is being debated.

This bill will empower ACMA to require and enforce industry-developed codes relating to the treatment of misinformation and disinformation on digital media platforms. ACMA will be responsible for approving codes and standards and will be able to determine whether codes are suitable. Where industry codes are not sufficient, ACMA will be able to impose codes on companies. This bill provides guidance for assessing the threat of misinformation and disinformation, and content will need to meet four criteria to qualify as misinformation. These criteria include, most critically, that the content can be reasonably verified as false and that it is reasonably likely to cause harm or contribute to serious harm. Disinformation is differentiated from misinformation by the additional condition of inauthentic behaviour, which is widely understood to include dissemination by bots. ACMA will not be able to remove specific content, nor will it be able to take action against specific individuals for producing content. It will, however, be able to enforce civil penalties on media companies for noncompliance with codes designed to reasonably prevent misinformation and disinformation.

Measures to prevent the spread of information at the source will clearly impact freedom of speech and expression, and, while I acknowledge freedom of speech is not absolute, as it stands I'm not yet convinced that this bill is the correct approach. There are substantive issues that have been raised about the bill, particularly around the potential for this bill to limit freedom of expression, and this is of great concern to me.

The first of these issues is the definition of 'verifiably false'. While this may be clear-cut for the majority of content, it ignores the nuance that what is considered true and false may vary over time and with the interpreter of the information. As a special rapporteur has noted, it is difficult to classify all types of information into a binary analysis of true and false. As one of my constituents observed in an email to me, experts sometimes get it wrong. Members of my constituency who have written to me about this bill have noted, for instance, that, while they agree with many of the public health notices issued during COVID, they believed it was important that there was a debate about what was true and false, and important for that debate to be public and allowed to flourish.

Secondly, while there are definitions of 'misinformation' and 'disinformation', there is no clear definition of the types of information that will be considered. This leaves an ambiguity, or at least a presumption, that all information posted by an individual, regardless of purpose, such as commentary or opinion, may be determined to be misinformation. The bill addresses, through exemptions, some of the most important categories, such as professional news, satire and academia, but it does not clarify the more fundamental question of content posted by ordinary Australians.

Thirdly, and perhaps most controversially, there is the harm threshold. For content to qualify as misinformation or disinformation it must be reasonably likely to cause harm. Some stakeholders, including the Human Rights Commission, believe this is too low a threshold for determining misinformation. However, I'm more concerned by the six discrete categories of harm that will be treated with greater scrutiny: election interference, public health, vilification, physical harm to an individual, damage to infrastructure and harm to the economy. While some of these may not be controversial, I do have concerns, as do many of my constituents, with the restrictions on the discussion and dissemination of matters relating to public health and the economy in particular. Misinformation was certainly a problem during the pandemic, but this part of the bill raises the most concern from my constituents, as I mentioned before, even those who agreed with the public health information coming from the government. Ignazio and Luke, two of my constituents, are concerned that these powers could silence genuine and valid critics of public health measures, including whistleblowers, and prevent proper scrutiny of corporations, such as banks, that have a significant impact on the economy.

The other area that I think is really important to explore is the unintended consequences of this bill. The most concerning is that the penalties that can be imposed on digital media companies for breaches of the codes could result in an overly cautious approach to publishing content: companies may over-censor to avoid falling afoul of the regulations put forward in this bill.

I understand that the government has tried to thread a needle here in terms of effectiveness and proportionality. And I note that the community of digital, legal and human rights experts is divided on the bill. For example, I note that the Human Rights Law Centre believes that the restrictions on freedom of expression are proportionate to the intentions of this bill. This may be true, but it's not just a matter of whether these restrictions are proportionate. I'll also note that the Human Rights Commission itself is not a supporter of the bill. This is a very contested piece of legislation.

So what are the alternatives here? I think that we, perhaps, in this situation, may be putting the cart before the horse. Can we really say with confidence that there are no other ways of achieving our objective of suppressing and stopping the spread of misinformation and disinformation while, at the same time, preserving trust in the system? Are there better ways that we could approach this?

First and foremost, I think transparency is key. I know that many people have observed that social media companies are already taking actions against misinformation and disinformation in this space. This is correct, and that in itself is already a threat, I think, to our democracy and our ability to exchange ideas. I welcome, particularly, the amendments put forward by the member for Goldstein that seek to create greater transparency over what is being done through the algorithms and actions of the social media companies, and particularly to give researchers access to this information from digital media companies. Starting with that greater scrutiny of what actions are already being taken, and how misinformation and disinformation are already being addressed, will be a really important first step.

In addition to this transparency, which could be the first building block of better addressing misinformation and disinformation, there are other safeguards being employed in other jurisdictions that I think could be useful in this space. There is scope to bring in content warnings that indicate misinformation or identify where claims cannot be verified. Emerging literature is showing that these types of warning labels can be effective, and they have certainly been used in the past in public campaigns. Some of the students I spoke to and consulted in preparing my response to this bill highlighted how useful such warning labels were: effectively, labels that say, 'Hey, check the facts on this one.' This could be legislated for media companies without discouraging the publishing of material or limiting the ability of consumers to view the material and make up their own minds. Similarly, several jurisdictions, including the US and the UK, are exploring the use of watermarks on AI content or, at the very least, obligations on AI providers to enable detection and tracing of AI content. Again, I think these are really positive steps. They will require some coordination, but I believe they could be quite effective measures against misinformation and disinformation without limiting the power of AI, and without some of the restrictions that concern the community, as constituents have certainly expressed to me, when they feel that legitimate debates are not being allowed to be prosecuted on the basis of misinformation and disinformation.

When I spoke to members of my community about this bill, I spoke particularly to young people: one group of high school students and one group of people between high school and university or starting work. All of them identified the challenges of misinformation and disinformation. However, they all came back to me and said that perhaps the strongest and best approach in this area is actually education and critical thinking, better equipping people to critically assess the information in front of them, alongside warnings on content. That, they suggested, should be our first area of action.

So I come back to where I started on this bill, which is to say: I recognise that misinformation and disinformation are significant threats to our country, our society and our social cohesion, and I take those threats extremely seriously. However, I believe that restrictions on content, both real and perceived, are also a threat to our ability as a society to trust our government, our institutions and the media. That, too, poses a very significant threat to our country. This is a situation where we have to get the balance right and, I'm afraid, given the contested perspectives on this bill, at this stage and in its current form, I don't think the bill has got that balance right.
