Should social media be used to censor bad health science?

How do you deal with a problem like inaccurate data?

It can be life-or-death when it comes to understanding science and making health decisions. People who were discouraged from getting vaccines after reading false information on the internet have ended up in hospitals or even died.

In addition, violence and vandalism have been linked to misleading or wholly false assertions concerning 5G and the origins of Covid-19.

However, fully eliminating material can appear to be censorship, especially for scientists whose careers are built on the belief that truths can and should be challenged and that evidence evolves.

The Royal Society is the world’s oldest continually operating scientific institution, and it is grappling with the problems posed by our newest forms of information communication.

In a new report, it advises against social media firms removing content that is "legal but harmful". Instead, the authors suggest that platforms should adjust their algorithms to stop such content going viral, and to prevent anyone from profiting from false claims.

However, not everyone shares this viewpoint, particularly those who specialize in tracking how disinformation spreads online and how it damages people.

The Center for Countering Digital Hate (CCDH) claims that there are times when removing content is the best option, such as when it is extremely harmful, clearly incorrect, and widely disseminated.

The team cites Plandemic, a video that went viral at the start of the pandemic and was eventually taken down for making dangerous and false claims designed to scare people away from effective ways of reducing the virus's harm, such as vaccines and masks.

Social media companies were better prepared for the video's sequel, Plandemic 2, which fell flat after being restricted on major platforms and achieved nowhere near the reach of the first video.

Prof Rasmus Kleis Nielsen, director of the Reuters Institute for the Study of Journalism at the University of Oxford, says, “It’s a political question…what balance we see between individual liberties and some form of restrictions on what people can and cannot say.”

Although science disinformation is a small part of most people's media diets, Prof Nielsen agrees that it can do disproportionate harm.

But, he adds, misinformation is exacerbated by a lack of trust in institutions: “I imagine a lot of citizens would have their worst suspicions about how society works confirmed if established institutions took a much more hands-on role in restricting people’s access to information.”

‘Difficult to reach’
"Removing content may exacerbate feelings of distrust and be exploited by others to promote misinformation content," the Royal Society warns. Driving false health content "…towards harder-to-address parts of the internet", it says, "may create more harm than good."

However, the fact that some corners are "harder to reach" may be part of the point. It reduces the chance that someone who isn't already committed to potentially dangerous beliefs, and isn't seeking them out, stumbles across them by accident.

Some of the violent protests that were fueled at least in part by conspiracy theories began on Facebook, rather than in obscure corners of the internet.

Modifying the algorithm
Misinformation about science is nothing new.

HIV misinformation, for example, was still circulating in 2021.
The inaccurate idea of a link between the MMR vaccine and autism was based on a published (and later retracted) academic paper, whereas widespread unfounded views about the dangers of water fluoridation were promoted by the print media, advocacy groups, and word of mouth.

What's changed is the speed with which false health information spreads, as well as the sheer number of people who may be exposed to it.

Rather than eliminating content, the report’s authors propose making it more difficult to locate and share, as well as less likely to surface automatically on someone’s feed.

Prof Gina Neff, a social scientist at the Oxford Internet Institute, said the aim was to "ensure that people may still speak their minds" – they simply won't reach an audience of millions.

"They can still share this material, but platforms aren't obliged to make it go viral."

Fact-checking
According to the Institute for Strategic Dialogue (ISD), a think tank that tracks extremism, a large amount of misinformation is based on the appropriation and misuse of legitimate material and research.

"This is sometimes more hazardous than outright false health information, because it takes much longer to debunk by demonstrating how and why this is a misunderstanding or misuse of the data," an ISD spokeswoman explains.

That’s where fact-checking comes in, another method endorsed by the Royal Society.

One of the most popular pieces of vaccine misinformation over the past year, which the BBC has regularly fact-checked, is the notion that large numbers of people are being harmed by the vaccine. The claim is based on a misunderstanding of real-world data.

De-platforming individuals
According to the ISD, a study found that a small group of accounts spreading misinformation had a "disproportionate influence on public debate throughout social media".

“Many of these accounts have been flagged by fact-checkers for spreading fraudulent or misleading content multiple times, but they continue to exist.”

The Royal Society did not look into de-platforming the accounts of "influencers" who spread dangerous falsehoods to large audiences.

Many disinformation experts believe this is a crucial strategy, and research into ISIS and the far-right suggests it can be effective.

The CCDH discovered that when David Icke, a major spreader of Covid misinformation as well as anti-Semitic conspiracy theories, was removed from YouTube, his capacity to reach people was significantly decreased.

Before his removal, 64 of his YouTube videos had received 9.6 million views. While his videos remained on the alternative video-hosting platform BitChute, their average number of views dropped from 150,000 to 6,711 after the YouTube suspension.

According to research from Cardiff University, the de-platforming of Kate Shemirani, a former nurse and prolific spreader of Covid misinformation, reduced her reach in the short term.

"Part of the problem is that current de-platforming models need to be improved. It's not enough to take down a single piece of information or a few accounts," says Prof Martin Innes, one of the paper's authors.

According to him, research from organized crime and counter-terrorism reveals that the entire network must be disrupted.

However, he feels that “this degree of expertise isn’t yet incorporated” in the way we deal with disinformation that could endanger individuals.
