Conspiracy theories are spreading: big tech companies should take more responsibility

Social media is rife with anti-vaccine misinformation. The Reuters Institute reported that 63% of people in the UK are concerned about fake news on the internet. Is it time to pass legislation to make social media companies responsible for the content published on their platforms?

It was 2am in Texas’s Galveston County, and a four-year-old girl, Kali Cook, lay in bed feeling ill, her temperature climbing. Her mother, Karra Harwood, was worried and gave her daughter some medicine to try to control her symptoms.

However, after five hours of struggle, Kali stopped breathing and passed away. According to the Galveston County medical examiner’s office, her death was confirmed to have been caused by Covid, after several of her unvaccinated family members had been infected.

Before her daughter’s death, Kali’s mother had refused to allow any of her children to be vaccinated. She, too, was infected with Covid. She told The Daily News: “I was one of the people that was anti. Now, I wish I never was. I did not murder my daughter. People are getting vaccinated that are still getting Covid and they can still spread Covid.”

This is not the only case in which anti-vaxxers have been infected or died. In the UK, a government report found that in the first six months of 2021, Covid was involved in 76% of deaths among unvaccinated people, but in just 1.2% of deaths among the fully vaccinated.

Patrick Vallance, the UK government’s chief scientific adviser, also confirmed on Twitter that 60% of Covid patients currently in hospital are unvaccinated. Furthermore, research reported in Nature shows that although it is still possible for vaccinated people to contract Covid, the death rate among unvaccinated patients is far higher than among the vaccinated.

The spread of anti-vaccine opinion online may be the root of the anti-vaccine movement. Research by Federico Germani and Nikola Biller-Andorno found that adults’ opinions about pandemic treatments were shaped mainly by disinformation on social media. Meanwhile, the Reuters Institute reported that 63% of people in the UK are concerned about what is real and what is fake on the internet, particularly in relation to news.

Dr Ceri Hughes, a political and media researcher at Cardiff University, warns that social media platforms are feeding fake information and fuelling its spread. He said: “We currently live in a period that is being called the age of misinformation. Research often shows false information can be spread widely on social media before the truth.”

Alistair Coleman, a member of the BBC’s Disinformation Monitoring Group, believes social media companies should be held responsible for this problem. He said: “Disinformation should have been dealt with a long time ago. However, big-tech companies are more interested in user growth and advertising income. It is probably one of the greatest scandals that they have allowed these movements, which are costing people’s lives.”

Facing both societal and governmental pressure during the pandemic, big tech companies such as Facebook announced measures to restrict fake information. Facebook, for example, launched a fact-checking campaign aimed at users and says this shows the company is “listening and adapting”. Another major platform, Twitter, said it had introduced a system allowing users to report fake news content so that it can be labelled, although the underlying content is neither corrected nor removed.

However, experts point out that these measures came too late for the UK, Europe, Africa and the Middle East: the effort is “too little, too late”. The BBC has also criticised social media firms for failing to restrict Covid-19 fake news, noting that 90% of the fake news items it highlighted could still be found on the platforms after the companies claimed to have introduced the new measures.

These measures proved ineffective because the social media companies in question underestimated the sophistication of the anti-vaxxer groups, who were able to work around the restrictions.

A report by the Center for Countering Digital Hate (CCDH), a prominent non-profit organisation that aims to disrupt the architecture of online hate and misinformation, argued that big tech companies profit from the anti-vaccine movement’s business. The platforms give these groups free access, while the groups earn income from book sales and alternative products.

Dr Hughes says many social media platforms host misleading content designed to encourage people to spend more money. He said: “So much media that people consume comes through portals that are not good journalism. These are clickbait stories. These stories start off on a false premise, from a false origin, and get circulated in the wrong direction.”

Specifically, CCDH believes that social media sustains the anti-vaccine movement through a highly profitable business model operating on two fronts. The first is providing the “shop front” for anti-vaxxer businesses.

Social media algorithms help these external sites attract revenue. One study carried out by First Draft, an organisation that fights misinformation, shows that anti-vaccine websites install various user-tracking tools to target their audiences and communities with specific ads.

A team led by Professor Neil Johnson, a social physicist at George Washington University, says in the CCDH report that a page on a site like Facebook allows anti-vaxxers to make more connections with undecided users who are easily persuaded or converted.

The usual strategy employed by these sites is to place misleading advertising language on social media to provoke an emotional response and win the audience’s trust. The target audience can then easily reach their website, and may go on to donate or to spend money on their products.

For example, the influential anti-vaccine entrepreneurs Rashid Buttar and Ty Bollinger called on their social media followers to leave the public platforms and move onto their private websites to view their products: they said, “there are certain things you cannot say until you live in a health world today.”

A similar pattern can be found among UK Covid anti-vaccine groups. An investigation by Sky News found that several prominent UK anti-vaccine organisations – Save Our Rights, Stand Up X, Stop New Normal – had all sought to generate money through donations and product sales.

These organisations use their social media pages, such as on Instagram, to direct people to their websites, where they can join an online community and spend money to support the cause. The leading organisation among them has amassed more than £450,000 from the general public in this way.

CCDH’s report also points out that anti-vaccine advertising strategies are a win-win for the groups and the social media companies, which makes the problem hard for the companies to address. The huge number of views and followers generated by anti-vaccine websites is itself a source of profit: according to the same report, over 85 million followers of anti-vaccine social media accounts generate website views worth a billion dollars to social media companies, and the large audience of these anti-vaxxers (over 30 million followers on Facebook and Instagram) could be earning Facebook up to 23.2 million dollars in revenue.

To target their audiences better, anti-vaccine groups also pay to advertise on social media platforms. The report says that 11 of the anti-vaccine entrepreneurs identified by the research, with a combined following of 13.6 million people, have paid Facebook for advertising.

The unregulated environment of social media also gives anti-vaccine groups space to radicalise and push past the limits. Restricting and labelling anti-vaccine content alone has little effect.

One report pointed out that the new misinformation restrictions introduced by big tech companies rely on vague standards of judgement, leaving a grey area, especially on Twitter. Enforcement depends largely on user reports and keyword filters, which are not reliable enough: many panic-inspiring accounts and fake bot accounts still exist to create and spread rumours.

Dr Hughes also warns that uncensored recommendation lists on social media can lead users back to misinformation groups even after anti-vaccine content has been deleted. He said: “There are some posts on social media that may be based on some facts, but they present the important elements in a misleading way. And if people believe that, they will follow the process off in another direction.”

“I know YouTube has now made attempts to take down anti-vaccine videos, but chances are that if people are looking for another conspiracy theory video on YouTube, it will suggest related videos until they reach the anti-vaccine content. This is common.”

Moreover, privacy features on social media give these groups a place to hide. CCDH’s research confirmed that, beyond the overt groups, countless hidden anti-vaccine groups exist in the corners of social networks, adding new members only by invitation.

The danger is that platforms such as Facebook and Telegram have never had to moderate these groups, so extreme content can circulate freely among members. In some groups uncovered after illegal activity, members were even able to use these features to evade the police.

Paul Chantler, a podcast journalist and media law expert, also argues that society should be alert to disinformation on social media. He said: “I think social media is unregulated. No one is looking over what they put out there. People can float all sorts of theories there.”

Anti-vaccine misinformation is not the only problem attributable to the unregulated spread of disinformation on the internet. A report from Ofcom this year pointed out that the flood of disinformation and fake news online can fuel hate and discrimination, such as online abuse and racism.

A report from Glitch, the UK’s leading charity campaigning against online harassment, found that during the pandemic most abuse took place on social media platforms (Twitter 65%, Facebook 29%, Instagram 18%), despite the tech companies’ commitments to making their platforms safer.

Matthew Tracey, an NHS nurse and campaigner, has called for more vaccines and better pay for NHS staff. He has also faced online abuse, including death threats, which he believes happens mainly because people are misled by disinformation online.

“I have witnessed the pandemic, and I know we do not need conspiracy theories,” said Matthew. “It is disheartening. My job is to save lives, protect people, and make people better. Yet when I leave my demanding work, I get abuse from somebody I’ve never met before.”

Alistair Coleman (of the BBC’s Disinformation Monitoring Group) believes that the government should get more involved in forcing social media to take more responsibility. He said: “Laws should be passed. We do need legislation to stop people being harmed by untruths and abuse on social media because self-regulation by social media companies just has not worked.”

To help resolve these problems, in May 2021 the UK government moved to fight the spread of disinformation by publishing the draft Online Safety Bill, which gives regulators greater powers of supervision and makes social media companies more accountable through more severe punishments. The scope of social media content it proposes to regulate has never been broader.

According to the draft, content that is legal can still be judged harmful – for example, abuse that does not reach the threshold of criminality, posts that encourage self-harm, and misinformation. The bill would also allow Ofcom to ban sites and to fine companies that fail to protect users from harmful content up to £18 million, or 10% of annual global turnover, whichever is greater.

However, because the bill would give the authorities greater powers of supervision, it has caused much debate. Under existing regulations, decisions about removing content belonged to the courts. What concerns some people is that, under the bill, punishments could be decided directly rather than on the basis of express legal terms.

Coleman believes that, to make the bill effective, certain details will require careful discussion. He said: “A line must be defined where freedom of speech comes with its responsibility not to cause harm. And this bill must define what the harms are before it can be legislated.”

Patrick Maxwell, a political reporter, also believes that the bill’s vague standards will harm democracy. Writing in Politico, he said: “In applying such a broad-brush approach to what is deemed censorable or not, the government will let those that the bill intends to eradicate cower under the cloak of free-speech martyrdom, while also giving free rein for the same removal of content which breaks no current law. Those removed for any transgression are, in essence, treated the same as those inciting mob violence.”