Australian High Court ruling could see media held legally responsible for their Facebook posts
A new legal ruling could have major implications for how news content is shared online, and could curb sensationalism in Facebook posts in particular, which are often designed to elicit maximum response.
The Australian High Court last week upheld a ruling which, under certain circumstances, could see Australian media held accountable for user comments left on their respective Facebook pages.
The ruling sparked a new wave of concern about potential limitations on journalists' freedom of expression and their capacity to report. But the case is more complex than the headlines suggest. Yes, the High Court ruling gives more scope for media outlets to be held legally responsible for comments on their social media pages, but the full nuance of the decision is aimed more specifically at ensuring that inflammatory posts are not shared with the clear intent of baiting comments and shares.
The case stems from a 2016 investigation which found that detainees at a youth detention centre in Darwin had been severely mistreated, and even tortured, while incarcerated. In the media coverage that followed, some outlets sought to provide more context on the victims of this abuse, with a handful of publications citing the criminal records of those victims as an alternative narrative in the case.
One of the former detainees, Dylan Voller, claims that subsequent media portrayals of him were both inaccurate and defamatory, leading Voller to seek damages over the published allegations. Voller himself became the subject of several articles, including one in The Australian titled "The list of incidents in Dylan Voller's prison file exceeds 200", which detailed the many wrongs Voller allegedly committed that led to his incarceration.
The case regarding Facebook comments, specifically, arose when those reports were shared to the Facebook pages of the outlets in question. The heart of Voller's argument is that the framing of these updates, in the Facebook posts in particular, elicited negative responses from users of the platform, which Voller's legal team says was designed to generate more comments and engagement on those posts, and therefore gain more reach in Facebook's algorithm.
As such, the essence of the matter comes down to nuance. In layman's terms, it's not that publishers can now be prosecuted simply because people comment on their Facebook posts; it's about how content is framed in such posts, whether a definitive link can be drawn between the Facebook post itself and the defamatory comments it attracted, and whether the resulting community perception harms an individual (it's not clear that the same provisions extend to an entity, as such).
Indeed, in the original case notes, Voller’s legal team argued that the publications in question:
“Should have known that there was a ‘significant risk of defamatory comments’ after publication, in part due to the nature of the articles”
As such, the complexities here extend well beyond the headline conclusion that publishers can now be sued over comments posted to their Facebook pages. The real impetus is that those who post content to Facebook on behalf of a media publisher need to be more careful in the wording of their updates, because if subsequent defamatory comments can be linked to the post itself, and the publisher is found to have prompted such a response, then legal action can be taken.
In other words, publishers can share whatever they like, so long as they stick to the facts, and don't seek to share intentionally inflammatory social media posts around such an incident.
As an example, here is another article published by The Australian on the Dylan Voller case, which, as you can imagine, also elicited a long list of critical and negative remarks.
But the post itself is not defamatory; it simply states the facts. It quotes an MP, and there's no direct evidence to suggest that the publisher sought to bait Facebook users into commenting on the basis of the shared article.
That's the real point in question here: the ruling puts more onus on publishers to consider how they frame their Facebook posts as a means of attracting comments. If a publisher is seen to have incited negative comments, it can be held responsible, but there must be definitive evidence showing both the harm to the individual and the intent in the social post itself, in particular, not the linked article, before it could lead to prosecution.
Which may in fact be a better way to go. Over the past decade, media incentives have been significantly distorted by online algorithms, given the clear benefit for publishers of sharing emotionally charged, anger-provoking headlines in order to elicit comments and shares, which then guarantees maximum reach.
This extends to misinterpretations, half-truths and outright lies designed to trigger that user response, and if there is a way to hold publishers accountable for it, that seems like a beneficial approach, as opposed to proposed reforms to Section 230 in the United States, which would more severely limit press freedoms.
Again, this decision relates specifically to Facebook posts, and to wording designed to trigger an emotional response in order to drive engagement. Proving a definitive link between a Facebook update and personal harm will always remain difficult, as in all defamation cases. But perhaps the ruling will prompt the managers of media Facebook pages to be more factual in their updates, rather than baiting comments to trigger algorithmic reach.
As such, while this opens the media up to increased accountability, it could actually be a way forward to encourage more factual reporting, and to hold publishers accountable for triggering online pile-ons through their framing of a case.
Because it clearly is happening: the best way to get comments and shares on Facebook is to trigger an emotional reaction, which then prompts people to comment, share and so on.
If a Facebook post can be shown to clearly prompt this, and to damage someone's reputation, holding the publisher to account seems like a positive step, though it inevitably comes with increased risk for social media managers.