
Careless talk costs lives: Tackling misinformation around Covid-19


by Sarah Finch

False information spreads fast on social media, but technology can also be part of the solution.

Growing up, we were probably all told at some point not to believe everything we read, hear or see. It's a reminder to be wary of taking things at face value, to think for ourselves, and to examine the facts.

But with increasing amounts of our time spent online, this isn't always easy to do. The democratised nature of information exchange on the internet, whilst facilitating peer-to-peer communication and the ability to publish content with ease, can also make it difficult to know who to trust, and what to believe. If anyone can post anything online, then how do you know if it's true?

Don't trust me, I'm an expert

There's no doubt that our society has been affected by a growing distrust of experts and authority figures in recent years. Nationalist political campaigns, such as those behind Donald Trump's election and Brexit, have promoted anti-establishment and anti-intellectual narratives, casting doubt upon experts to boost their own populist agendas.

Yet muddying the waters of who and what to believe is a dangerous game. In a world of alternative facts and fake news, online conspiracy theories around QAnon, 5G and Covid-19 have flourished, with serious real-world consequences.

In recent months, the Covid-19 vaccination programme has faced notable setbacks due to the spread of rumours on social media, with BAME communities in particular showing hesitancy around vaccine uptake. Thanks to campaigns to tackle false information about the vaccine, including this one from the Department for Digital, Culture, Media and Sport (DCMS), vaccination hesitancy has now halved amongst black British people. But the ease and pace at which misinformation can gain traction online is a huge concern.

The digital tools for the job

Known as misinformation or disinformation (depending on whether it is spread unwittingly or deliberately), the spread of false information over the internet is a specific form of online harm. It is an extremely difficult issue to address, as tailored efforts are required to tackle each individual topic area, such as politics or health, that the misinformation might fall under.

The responsibility within government for monitoring mis- and disinformation currently lies with DCMS, which leads a Counter Disinformation Unit comprising teams from the Home Office; the Foreign, Commonwealth & Development Office; the Cabinet Office; and the intelligence agencies. Their aim is 'to provide a comprehensive picture of the extent, scope and the reach of disinformation and misinformation' and to ensure this kind of content is removed from social media platforms, but there is still more that could be done.

Safety tech is a huge asset when it comes to tackling online harms, with the UK safety tech industry leading the way for the rest of the world in this space. While social media companies have internal trust and safety teams who focus on tackling online harms, including misinformation, there’s a role for third party organisations to evaluate the efficacy of their work from an independent perspective.

Safety tech companies approach online harms in different ways, but the majority apply a combination of machine and human intelligence to large volumes of data, with the results analysed and used to inform the way organisations respond.

Open source intelligence

A safety tech business might start its analysis by using open-source intelligence (OSINT) techniques to aggregate data from publicly available material. It might focus on particular content themes or on particular groups of ‘actors’ (the term generally used to describe those enacting online harms) across the surface web, deep web and dark web over a certain time period.
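As a minimal illustrative sketch of that first aggregation step, the snippet below filters a pool of publicly available posts by content theme and monitoring window. The `Post` fields, the sample data and the keyword matching are all invented stand-ins for real OSINT tooling, which would ingest far messier material from many sources:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    source: str        # e.g. "surface_web", "deep_web", "dark_web"
    author: str        # the 'actor' who published the content
    text: str
    published: datetime

def aggregate(posts, themes, since):
    """Collect publicly available posts that match any theme
    keyword and fall inside the monitoring window."""
    keywords = [t.lower() for t in themes]
    return [
        p for p in posts
        if p.published >= since
        and any(k in p.text.lower() for k in keywords)
    ]

# Illustrative data standing in for scraped public material.
posts = [
    Post("surface_web", "actor_1", "The vaccine contains microchips",
         datetime(2021, 6, 1)),
    Post("deep_web", "actor_2", "Local clinic opening hours",
         datetime(2021, 6, 2)),
    Post("surface_web", "actor_3", "5G towers spread the virus",
         datetime(2021, 1, 1)),
]

# Monitor vaccine- and 5G-themed content published since May 2021.
flagged = aggregate(posts, themes=["vaccine", "5G"],
                    since=datetime(2021, 5, 1))
```

Here only the first post survives both filters: the second matches no theme, and the third falls outside the time period.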

This normally unstructured data can then be standardised and classified using artificial intelligence and machine learning-driven tools. These tools will have been trained on comparative data sets (a challenge in itself, given the sometimes illegal nature of the content in question) to recognise behaviours and attributes that point to misinformation. The suggestions from the tools will normally be sense-checked in some way by human intelligence or threat analysts to reduce the likelihood of false alarms.
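The flow from automated classification to human sense-checking can be sketched as follows. The keyword-count "classifier", the indicator terms and the thresholds are invented stand-ins for a trained model; the point is the triage pattern, where only high-confidence items are flagged automatically and borderline scores are routed to analysts:

```python
def classify(text, indicator_terms):
    """Toy stand-in for an ML classifier: returns a misinformation
    score in [0, 1] based on how many indicator terms appear."""
    text = text.lower()
    hits = sum(term in text for term in indicator_terms)
    return min(1.0, hits / 3)

def triage(texts, indicator_terms, auto_threshold=0.66):
    """High-confidence scores are flagged automatically; borderline
    scores go to human analysts to reduce false alarms."""
    flagged, needs_review = [], []
    for t in texts:
        score = classify(t, indicator_terms)
        if score >= auto_threshold:
            flagged.append(t)
        elif score > 0:
            needs_review.append(t)
    return flagged, needs_review

texts = [
    "The plandemic is spread via 5G microchips",  # strong signal
    "5G rollout continues in the UK",             # borderline
    "Lovely weather today",                       # no signal
]
flagged, needs_review = triage(texts, ["microchip", "plandemic", "5g"])
```

The borderline 5G post is exactly the kind of item a human analyst should judge: it mentions a conspiracy-adjacent topic but may well be a legitimate news story.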

The safety tech company may then inform the social media platform of harmful misinformation taking place on their site, and potentially advise on how to prevent it. Some safety tech companies also provide moderation capabilities, where automated and human layers take down or counter harmful content.

Of course, policing what content people can and can't share online is a difficult issue, particularly when it comes to misinformation. If the content is not explicitly illegal, then does blocking it infringe on a person's freedom of expression? Who gets to decide what other people should be influenced by? Is one person's fake news another's legitimate opinion?

The Online Safety Bill

This balancing act is acknowledged in the draft Online Safety Bill, which recognises the need to 'protect the right of users […] to freedom of expression within the law' whilst also keeping people safe online.

Although there are some grey areas within this topic, the existence of the Bill represents an understanding that although the internet positively impacts people's lives in a variety of different ways, the potential for harm is too great to leave it unregulated or with voluntary codes of practice alone. The Bill will give the new regulator — Ofcom — the power to hold providers of services to account if they fail to protect users, enshrining a duty of care that must be upheld.

It also sets out provisions to tackle misinformation and disinformation, with Ofcom required to establish an advisory committee on how to deal with these issues in both user-to-user services (including social media platforms) and search services.

Spreading the news

Whether politically motivated or the result of a more innocent lack of facts, mis- and disinformation have always been present in our society. What is different now is the ease with which false information can spread, in a matter of clicks, right around the world.

The Covid-19 pandemic has shown the real cost of misinformation in a very modern way. Largely thanks to social media, significant proportions of the population have come to doubt the facts about the virus, from the benefits of mask-wearing to death rates, and even the existence of the disease itself.

In a pandemic environment, where the relationship between our individual actions and their impact on the collective is stronger in terms of spreading infection, misinformation becomes much more dangerous to us all. The government's efforts to tackle this issue are therefore well-timed, with the upcoming Online Safety Bill, and the development of the UK Safety Tech industry, now more important than ever.

