Decentralised social media and the spread of dangerous health rumours

in steemstem •  7 months ago

Image credit: Pixabay

In public health, when bad information is left unchecked, the results can be deadly.

During the 2014/15 Ebola outbreak, misleading rumours were everywhere. One claimed that someone was attempting to infect the local water supply with formaldehyde to raise the death count and attract more international aid funding1. Another linked the origins of the virus to the US testing bioweapons on African nations during the Cold War2, and a further rumour suggested that people dressed as nuns were injecting residents with a fake vaccine in order to kill children and traffic their organs3.

While all these rumours were complete fabrications, that didn’t stop Liberia’s largest newspaper, The Daily Observer, from quoting them (without rebuttal) in print and on its website, where they went on to generate substantial traffic3.

Rumours such as these were a huge challenge to overcome during the outbreak. If you read my post a while back on how to control an Ebola outbreak, you’ll recall the importance of contact tracing. In brief, this process involves asking an infected individual to list everyone they have been in contact with over the previous 21 days (the length of the Ebola incubation period); these contacts are then followed up and checked for symptoms.
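For readers who like to see the mechanics, the core of that process can be sketched as a breadth-first search over a contact network. This is purely an illustration, not any real tracing system: the function name, the `contacts_of` data structure and the example data are all my own inventions.

```python
from collections import deque

INCUBATION_DAYS = 21  # Ebola incubation period, used as the tracing window


def trace_contacts(index_case, contacts_of, symptomatic):
    """Breadth-first contact tracing from one confirmed case.

    contacts_of: dict mapping a person to everyone they met in the last
                 INCUBATION_DAYS days (a hypothetical data source).
    symptomatic: set of people currently showing symptoms.
    Returns (to_follow_up, to_monitor): symptomatic contacts needing
    immediate follow-up, and symptom-free contacts to monitor.
    """
    to_follow_up, to_monitor = set(), set()
    visited = {index_case}
    queue = deque([index_case])
    while queue:
        person = queue.popleft()
        for contact in contacts_of.get(person, []):
            if contact in visited:
                continue
            visited.add(contact)
            if contact in symptomatic:
                # Symptomatic contacts are treated as probable cases,
                # so their own contacts are traced in turn.
                to_follow_up.add(contact)
                queue.append(contact)
            else:
                to_monitor.add(contact)
    return to_follow_up, to_monitor
```

The key point the sketch captures is that tracing is recursive: any contact who turns out to be symptomatic becomes a new index case, and their contacts are traced too.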

Contact tracing is a simple but highly effective control strategy4. It allows for speedy treatment of those infected and stems the flow of new cases. It does, however, require one vital ingredient for it to work:

Image credit: Creative Commons

Where rumours spread, trust is eroded. When trust is lost during a life-threatening outbreak, people hide their symptoms, they become less likely to take recommended treatments, and they take treatment advice from far less reliable sources5.

For instance, in another rumour circulating during the 2014/15 Ebola outbreak, “drinking hot salty water” was thought by many in a community to be a reliable means of preventing an individual from catching the disease (it's not). This rumour reportedly caused two people in Nigeria to die from excessive salt consumption, and led many to act without appropriate caution around Ebola victims, substantially increasing their risk of infection1.

Rumour as a disease

The spread of rumours is remarkably similar to the spread of disease; however, instead of transmission occurring through physical proximity, rumours rely on communication as their vector of transmission. The modern consequence of this is that a rumour can now spread to the other side of the world at the click of a button, and vice versa. Therefore, when the website Natural News runs articles suggesting that Ebola can be treated with homeopathy, or that the Ebola vaccine is dangerous and exists only for the profit of big pharma, or that Ebola is airborne, or the old classic that Ebola is a genetically engineered virus designed to wipe out most of humanity to benefit the globalist agenda (somehow?), it makes me worried that someone will read it, follow the advice and suffer unnecessarily because of it.
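The disease analogy can be made concrete with a toy SIR-style model, where "infection" means hearing and actively spreading a rumour. This is a sketch under my own illustrative assumptions: the parameter values are made up for demonstration and are not fitted to any real rumour data.

```python
def rumour_sir(pop=10_000, beta=0.3, gamma=0.1, days=100, seed_cases=10):
    """Toy SIR-style rumour model.

    S = never heard the rumour, I = actively spreading it,
    R = stopped spreading (bored, or corrected).
    beta is the per-day transmission rate, gamma the per-day stop rate;
    both values here are illustrative, not estimated from data.
    Returns the daily count of active spreaders.
    """
    s, i = pop - seed_cases, seed_cases
    history = []
    for _ in range(days):
        new_spreaders = beta * s * i / pop  # mixing between S and I
        stopped = gamma * i
        s -= new_spreaders
        i += new_spreaders - stopped
        history.append(i)
    return history
```

As with a pathogen, the rumour takes off whenever beta exceeds gamma (each spreader "infects" more than one listener before stopping) and fizzles out otherwise; debunking efforts can be read as attempts to lower beta or raise gamma.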

Up until recently, the predominant means of addressing dangerous rumours has been through a “quarantine” equivalent approach. For instance, the stories in The Daily Observer have since been removed following backlash from the Liberian government, and even Natural News removed their post on treating Ebola with homeopathy after substantial public backlash (although the page does now point you towards a post titled “Ebola vaccine to be manufactured by criminal drug company with felony record”. So, there’s that). A similar approach has recently been adopted by the social media giants (Facebook, Twitter, YouTube), through banning, or limiting the reach of, ‘bad actors’.

This has always been an imperfect, sometimes heavy-handed, response. It may do the job, but in the current age of social media, banning accounts seems, to me at least, to create animosity and harden beliefs rather than change minds. Yes, it stops new people “becoming infected” by a rumour, but at the cost of trust in the system.

With the popularity of decentralised social media platforms (e.g. here on Steemit, Minds or Akasha) increasing, potentially fuelled by growing distrust in centralised platforms, this discussion about free speech and fake news may soon be rendered moot, since no central authority can ban users or remove content (for more on this see this post). As such, those of us in public health may have to get our hands dirty(er) and engage in a way that we have not done previously.

I’ve been here on Steemit for around 6 months now. I’m constantly in awe of the possibilities that this technology has to offer, and I firmly believe that ultimately this next iteration of the internet has benefits that far outweigh the negatives. However, sometimes I do worry that we’re sleepwalking into a tricky situation when it comes to our relationship to the truth.

So, I guess my question to you is this:

If decentralised social media goes mainstream, what should we, as a society, do about dangerous health advice? Do we treat it as “everyone is entitled to their own opinion”, even if their opinion is that you can treat Ebola with homeopathy? Do we rely on mob justice? Or is there another option?

I have some ideas, but I’d genuinely value your input here. Maybe I’m worrying over nothing and we’re entering a new age of techno-utopia, an age where we all get on and the truth just rises to the top. That’d be nice!

To explore this topic in more detail I’ve decided to write a paper for a social science conference that I plan on attending in January. See the comments below for the abstract that I’ve submitted to them. Their theme is “Understanding the social in a digital age”; should be fun.


About me

My name is Richard, and I blog under the name @nonzerosum. I’m a PhD student at the London School of Hygiene and Tropical Medicine. I write mostly on global health, effective altruism and the psychology of vaccine hesitancy. If you’d like to read more on these topics in the future, follow me here on Steemit or on Twitter @RichClarkePsy.



[1] Time: Fear and Rumors Fueling the Spread of Ebola
[2] CBS DC: Largest Liberian Newspaper: US Government Manufactured Ebola, AIDS Virus
[3] The Washington Post: The major Liberian newspaper churning out Ebola conspiracy after conspiracy
[4] Greiner AL, Angelo KM, McCollum AM, Mirkovic K, Arthur R, Angulo FJ. Addressing contact tracing challenges—critical to halting Ebola virus disease transmission. International Journal of Infectious Diseases. 2015 Dec 1;41:53-5.
[5] Cheung, E. Y. (2015). An outbreak of fear, rumours and stigma: psychosocial support for the Ebola Virus Disease outbreak in West Africa. Intervention, 13(1), 70-76.


The following is my proposed title and abstract for the conference

Title: Web 3.0 and the future of online misinformation


Blockchain, the digital infrastructure behind Bitcoin and other cryptocurrencies, allows for three unique design aspects that make it a desirable technology on which to build a social media platform. These aspects are decentralisation, radical transparency and advertisement-free monetisation.

Applications such as Steemit, Akasha, Minds and Memo are programmed on blockchains (Steem, Ethereum, Bitcoin Cash) in such a way that: (i) there is no central authority that decides what content violates terms and conditions; (ii) content is permanent and censorship-free for as long as the blockchain remains in use; (iii) every interaction can be seen by any individual with the technical expertise to look; and (iv) posting and engagement can be rewarded in cryptocurrencies that hold real-world value.

With internet access in emerging economies increasing at an exponential pace, many in low- and middle-income countries have been quick to adopt this technology, whether to avoid censorship from oppressive regimes or to make a living wage through content generation. While potentially of high benefit, therefore, little has been discussed in relation to how misinformation will be treated on such a network. Web 2.0 platforms, such as Facebook and Twitter, have recently been embroiled in debate over the level of responsibility they should bear for “fake news” distributed by bad actors on their platforms. This debate is meaningless on the crypto-anarchistic platforms of Web 3.0, as there is no longer a central body that rules, or is able to rule, on such a decision.

In this paper, I will outline the difference between a Web 2.0 and a Web 3.0 social media platform, discuss how this relates to online misinformation, and indicate some possible future avenues for social-media-based academic research. [upvoted for visibility]

In my humble opinion, everyone should be entitled to their opinions. But blatant misinformation of the type you describe can be easily challenged with information, rational debate and discussion on a platform such as this. The best counter to fake news is the truth, not censorship. For example, the climate change/CO2-caused global warming controversy will not be resolved by name-calling and labeling those that disagree as deniers that are not worthy of rational debate.


I really hope you're right! These conversations take a long time and a lot of skill and empathy, so here on Steemit there's a chance we might be slightly better off, as (ideally) we could incentivise civility in discourse over time.

My nightmare is people coming here with bad-faith arguments with the aim of causing harm. Or the mob descending and being rewarded for the cheapest and nastiest hits. Twitter and Facebook have become fairly toxic when it comes to talking about vaccination (or in fact anything); when Steemit becomes more crowded, this'll all come here without a doubt. My aim is to try and bring social scientists into the mix as early as possible, as it's often said that we're playing catch-up with how misinformation spreads online.


Sadly, there's already much acrimony over things like vaccination here. As I understand it, much of @steemstem was created directly as a reaction to that.

I need to think harder, but at least part of the issue hinges on trusted sources. The problem is that one person's snake-oil salesman is another's reputable font of info, so we have the same echo-chamber issues we see on other social media.


Yes, it's going to be far from easy. @steemstem was a wonderful first step; it's certainly the main reason I joined the platform. Just growing the community and keeping it nice should be good enough for now. What I want to bring to the academics is the 5-10 year scenario, just to get them used to the possibility when/if it does go mainstream.


It's worth looking at this study on how climate change denial arguments arise and are spread:

Eye-roller study: “Climate change denial strongly linked to right-wing nationalism”

Chalmers University of Technology in Sweden: Seeks to explore 'connections between conservatism, xenophobia, and climate change denial'

I personally don't think that it helps the 'climate change' debate to assume that all deniers are wrong and must be xenophobic, right-wing, conservatives!

Similarly, in the health debate there are always two sides to the story, and these sides need to be given equal opportunity to debate and express their opinions. With regards to vaccines, yes, they are mostly safe and prevent the spread of deadly diseases, but we must also acknowledge that they can cause rare adverse events for reasons that are not fully understood. Those patients (and their families) that experience these adverse events should not be ignored and dismissed; it is very personal to them. Look at the Vaccine Adverse Event Reporting System, for example:

Vaccine 33: 4398-4405 (2015)
Safety monitoring in the Vaccine Adverse Event Reporting System (VAERS).
Shimabukuro TT, Nguyen M, Martin D, DeStefano F.

During 2011-2014, VAERS averaged around 30,000 U.S. reports annually, with 7% classified as serious.


Thank you for these. This also overlaps with the literature on public perceptions of GMOs (which I plan to delve into at some point).

I too share similar concerns about research that makes connections such as the ones you mention here (it can sometimes read as derogatory). With climate change, it seems far more appropriate to frame it as a wedge issue rather than to say that the people who are wrong believe X.

Yes, debate is important, but we should always be wary of false balance. I’m aware of VAERS. When looking at the stories reported there, it’s usually best to bear in mind that it is a self-reported system. It’s used for signal detection rather than as a statistical data set in its own right; if a signal is found (e.g. a recurring adverse event), then a more active and controlled data collection process is undertaken.


I now realize that I may be a bit naive in my understanding of the social media spread of false information. I just found that there is a tendency for false news to spread faster than true news online according to this article:

The spread of true and false news online
Soroush Vosoughi, Deb Roy, Sinan Aral
Science 09 Mar 2018: Vol. 359, Issue 6380, pp. 1146-1151

If this is correct, then this might undermine the idea that misinformation can be effectively combated with real information, rational debate and discussion.

Good luck with your conference paper development and presentation. I look forward to your future articles on public perceptions of GMOs.


This is exactly what I've been thinking about. Banning the bad information sources just fans the flames of conspiracy theories/makes people double down on their beliefs; letting them continue to tell dangerous lies puts people at risk. There doesn't seem to be a solution that doesn't have a backlash that might be just as bad as not doing anything.


Yes, this is my worry. Ultimately the answer is for us to learn how to talk to each other better. With complicated subjects (e.g. vaccination, GMOs), however, this will mean experts getting more involved in the day-to-day communication of the topics. Something that I'm not sure many of them will want to do.

This post has been voted on by the steemstem curation team and voting trail.

There is more to SteemSTEM than just writing posts, check here for some more tips on being a community member. You can also join our discord here to get to know the rest of the community!


Thank you @SteemSTEM! Love you!

It's really simple and fixable in one day: algorithms for search engines must have weight coefficients. A source from a scientific journal gets 1; Alex Jones gets minus infinity.
The second part is fundamental: experts should be paid to write understandable, comprehensive articles. GO STEEMIT!!!


Hell yes, that’s the dream! Although if you take a look at the communities on Minds (another decentralised social media site, also with around 1 million accounts), it seems to be all Jones all the time at the moment.

Part one is largely out of my control but I’m all about part two! People in academia have just about come around to the blogging part, weirdly the getting paid part is harder to convince them of.


I see that the current system in scientific publishing is abysmal, without simple scientific answers to simple questions:

  • Bro, how many reps should I lift?
  • How many days should I rest?
  • Should I do cardio?

Also, there is not a single book in my language with solid agricultural advice:

  • How exactly should you cut the branches and form the tree?
  • Should you plough deep, shallow or not at all?

It sounds incredible, but those fundamental, simple answers can't be found on Scholar because - uuu let's give 10M for the research of FPCD bonding with QWE and RTY


That's the argument for expertise right there. It takes a long time to get a sense of a discipline and be able to apply it while updating your knowledge as you go.

Soon (and by soon I mean not soon, but in our lifetimes) we’ll have an AI that can run systematic reviews to bring together all the published literature on questions like this and spit out a recommendation and a level of certainty (and also, hopefully, let us know where the gaps are and which studies to run). Right now we have Cochrane, which is just a bunch of super-slow humans who take ages to let us know what's what, and they rarely take requests.
