I wrote yesterday about how I found that it is possible to effectively have a post on Steemit shut down (censored) if it is downvoted quickly enough after it is first posted - before anyone else has had a chance to upvote it.
@dwinblood wrote a post attempting to cover this issue and explain the details - but despite the kind words, the problem continues.
How to control Steemit and silence dissenting voices
What is not really being addressed here is that these small problems can escalate fast into major ones, especially for a platform that is alleged to be censorship resistant! For example, it is possible to create anonymous accounts on Steemit for a small fee, and it is quite easy to set up bots that monitor the network and make posts and downvotes. It is therefore a relatively simple matter for corrupt and wealthy groups to censor messages they want suppressed, just by following a few simple steps (a rough sketch of the monitoring step follows the list):
- Create anonymous accounts on Steemit.
- Give control of the accounts to custom-written bots.
- Give significant Steem Power to the bot-controlled accounts.
- Get the bots to make regular posts to their own channels.
- Get the bots to vote on each other's posts.
- Once the accounts' reputation is high enough, add them to a list of accounts used for attacks.
- Create code that monitors new posts on Steemit for keywords identifying them as candidates for censorship.
- Have humans examine the flagged posts and pick the accounts to be attacked.
- Use code to monitor all new posts from those accounts and auto-downvote any that match certain rules.
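To make the monitoring step concrete, here is a minimal Python sketch. It is purely illustrative - it touches no real Steem API, and the accounts, keywords and 'feed' below are all invented stand-ins - but it shows how little logic such an attack bot actually needs:

```python
# Purely illustrative sketch of the keyword-monitoring step. No real
# Steem API is used: the accounts, keywords and 'stream' of posts below
# are all invented stand-ins.

TARGET_ACCOUNTS = {"dissenting-author"}           # picked by humans (step above)
SUPPRESS_KEYWORDS = {"censorship", "corruption"}  # topics to be suppressed

def matches_rules(post):
    """True if the bot's rules would flag this post for auto-downvoting."""
    text = (post["title"] + " " + post["body"]).lower()
    return (post["author"] in TARGET_ACCOUNTS
            and any(kw in text for kw in SUPPRESS_KEYWORDS))

# Toy feed standing in for a live stream of new blockchain posts.
new_posts = [
    {"author": "dissenting-author", "title": "On censorship", "body": "..."},
    {"author": "someone-else", "title": "Holiday photos", "body": "..."},
]

for post in new_posts:
    if matches_rules(post):
        # In a real attack, each bot account would downvote here,
        # within minutes of the post appearing.
        print("auto-downvote:", post["author"], "/", post["title"])
```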
The result of this could quite easily be that certain posters are totally unable to post on certain subjects.
The suggested resolution of just 'not supporting' the attacker is useless here since they do not require anyone's support to continue their activity.
Is hiding a post really censorship?
YES! Without doubt! If hiding a post didn't limit its reach, then what would be the point in hiding it?
There are so many posts on Steemit that any post which loses its visual presence will certainly receive fewer views and upvotes.
But we can cancel the downvotes with upvotes, so there's no problem - right?
WRONG! Firstly, you need to be aware of the problem in order to correct it - so you now have to spend some of your time checking for this when you really shouldn't need to. Secondly, and more importantly, the suggested remedy is to get quality upvotes to cancel the downvotes - but that requires you to contact high-reputation people who can upvote you, and to do so quickly, since you only have a short window in which to gain the attention of Steemit users after you publish a new post.
Most upvotes seem to be gained in the first hour after a post is made, and you need decent upvotes to get into the hot/trending lists and stand a chance of having significant numbers of people read your post. So if you spend the first 30 minutes trying to get upvotes to cancel malicious downvotes, your post is pretty much sunk already: it will have no meaningful upvotes and will probably never reach hot/trending.
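A toy model makes the timing problem clearer. The numbers below are invented, and this is not how Steem actually computes rankings; it simply shows that a post can end up net-positive overall while being invisible during the window that decides its reach:

```python
# Toy model with invented numbers - NOT Steem's actual ranking maths -
# showing why an early downvote is so damaging when visibility is
# effectively decided in the first half hour.

votes = [
    (-1000, 2),   # malicious downvote, 2 minutes after posting
    (+400, 45),   # rescue upvotes arrive once the author notices...
    (+800, 90),   # ...but well outside the decisive window
]

WINDOW = 30  # minutes that effectively decide hot/trending placement

early_score = sum(weight for weight, minute in votes if minute <= WINDOW)
final_score = sum(weight for weight, minute in votes)

print("score inside first %d minutes: %d" % (WINDOW, early_score))  # -1000
print("final score: %d" % final_score)                              # 200
# Net-positive in the end, yet invisible during the window that mattered.
```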
In short, the workaround is not good enough to prevent deliberate and organised censorship attempts.
Downvotes harm reputation!
Currently, if the downvoter has a higher reputation than you do, you may lose reputation as well as payout! Reputation is built by receiving upvotes from accounts that themselves have high reputations. In principle this sounds OK, but there is really no guarantee at all that a high reputation equates to the account owner having integrity and no personal bias. Reputation above 70 does not equal sainthood!
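For context, here is the formula commonly cited for how the Steemit UI converts raw on-chain reputation into the displayed score - to the best of my understanding, not an official specification:

```python
import math

# Commonly cited conversion from raw on-chain reputation to the familiar
# displayed score (my understanding, not an official specification).
# Note that it is logarithmic: each extra display point costs roughly
# 30% more raw reputation.

def display_reputation(raw):
    if raw == 0:
        return 25.0                            # new accounts start at 25
    score = max(math.log10(abs(raw)) - 9, 0)   # strip the first 9 orders
    if raw < 0:
        score = -score                         # downvotes can push it negative
    return score * 9 + 25

print(display_reputation(10**13))   # 61.0
print(display_reputation(-10**11))  # 7.0 - dragged below the starting 25
```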
But I'm never censored, so what's the problem?
This exact type of censorship already occurs on Reddit, for example, which uses a similar system of downvotes and hiding. Most people are not posting controversial topics that challenge the status quo in society, so they never experience being silenced in this way. As a result there is no massive uproar against the issue, and the problem continues almost unnoticed by the majority.
A society that makes space for alternative views to be silenced is one destined for tyranny, bland homogenous sameness and, at worst, total enslavement - George Orwell's 1984 makes this clear.
The idea that their posts cannot be censored by a central authority is attractive to Steemit's users, yet if the door is left open for anyone with the time and money to step in and be the censor instead, then the design may arguably be worse than the existing centralised platforms that are known to be controlled by governments and at least one owning corporation (e.g. Google / Facebook). So it is imperative that this potential is nipped in the bud asap.
Better Censorship Resistance for Distributed / Blockchain Social Networks
How can we improve the situation?
Firstly, do we really need posts to be hidden at all?
The argument for hiding posts is essentially that we need a way to block spam and low-quality posts, and that the Steemit reputation system is not enough to do this because it can be boosted in various ways anyway. OK, so what are the other options?
I run a social network and have watched anti-spam techniques evolve over the years. I currently have a system on my own site that results in zero spam posts: a hybrid of IP-blocking known offenders, rules that identify spam via pattern matching (much as anti-virus software does), and a facility for nominated human users to manually identify spammers. On a small scale this works perfectly; on a larger scale, however, the humans involved need to be vetted to ensure that they are not biased.
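For illustration, here is a condensed sketch of that hybrid approach. The addresses, patterns and threshold are invented examples, not my live configuration:

```python
import re

# Condensed sketch of the hybrid approach described above. The addresses,
# patterns and threshold are invented examples, not a live configuration.

BLOCKED_IPS = {"203.0.113.7"}            # known offenders (example address)

SPAM_PATTERNS = [                        # rule matching, anti-virus style
    re.compile(r"buy\s+\d+\s+followers", re.I),
    re.compile(r"limited\s+time\s+offer.*click", re.I),
]

human_flags = {}                         # author -> reports by trusted users
HUMAN_FLAG_THRESHOLD = 3                 # nominated humans must agree

def is_spam(author, ip, body):
    if ip in BLOCKED_IPS:
        return True
    if any(pattern.search(body) for pattern in SPAM_PATTERNS):
        return True
    return human_flags.get(author, 0) >= HUMAN_FLAG_THRESHOLD

print(is_spam("acct1", "203.0.113.7", "hello"))               # True: blocked IP
print(is_spam("acct2", "198.51.100.2", "Buy 500 followers"))  # True: pattern
```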
The Steemit system of voting for witnesses already provides a mechanism for the community to nominate trusted people; however, it is not flawless and in no way guarantees that the humans voted for will actually act impartially when it comes to removing spam. So a carbon copy of the witness-voting system that nominates anti-spam agents is not going to be good enough for a scalable, dependable anti-spam system. Perhaps it could work well, though, provided there is oversight of the agents' actions and some way of addressing biases where they exist.
Here's a first-draft sketch of how I think a better system for problem posts could work on Steemit.
My suggestion is that posts which are 'reported' are not hidden and do not have their payout reduced as a result of the reporting action. Instead, they are added to a list of 'reported posts' which can be seen by all users via a new navigation option in Steemit, similar to the 'promoted' posts list. This provides a crucially needed level of transparency in the reporting process (missing from other social networks) and also means that everyone can more easily identify genuinely problematic accounts on Steemit and take whatever personal action they wish as a result.
The posts would continue to be visible in the main lists until a later stage in the process, ensuring that valid posts do not lose important voting time and thus do not lose reach and possible payout.
Reasons for reporting a post
Just as on Facebook, it would be a requirement to give a reason for reporting a post - such as 'spam', 'illegal' or whatever other reason means the post has crossed Steemit's terms of service policy. The reason would be visible in the 'reported posts' lists.
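Here is a rough sketch of how such a report record might work; the reason codes and field names are illustrative only, not an actual Steemit design:

```python
from dataclasses import dataclass
from enum import Enum
import time

# Rough sketch of the proposed mechanism: a report records the post on a
# public 'reported posts' list with a mandatory reason, but does NOT hide
# the post or touch its payout.

class ReportReason(Enum):
    SPAM = "spam"
    ILLEGAL = "illegal"
    OTHER_TOS = "other terms-of-service breach"

@dataclass
class Report:
    reporter: str
    author: str
    permlink: str
    reason: ReportReason
    timestamp: float

reported_posts = []  # publicly browsable, like the 'promoted' list

def report_post(reporter, author, permlink, reason):
    """Add a transparent public record; no hiding, no payout reduction."""
    reported_posts.append(Report(reporter, author, permlink, reason, time.time()))

report_post("alice", "suspect-account", "my-tenth-giveaway", ReportReason.SPAM)
print(reported_posts[0].reason.value)  # 'spam', visible to everyone
```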
But what is a valid reason for reporting a post?
This is an issue that requires clarification since the terms of service document is not clear on the matter.
The Terms of Service for Steemit state:
- User Conduct
17.1. When accessing or using the Services, you agree that you will not commit any unlawful act, and that you are solely responsible for your conduct while using our Services. Without limiting the generality of the foregoing, you agree that you will not:
17.1.1. Use our Services in any manner that could interfere with, disrupt, negatively affect or inhibit other users from fully enjoying our Services, or that could damage, disable, overburden or impair the functioning of our Services in any manner;
These are the main comments in that document regarding problem posts, and they are totally open to interpretation. The terms could just as easily be read as saying that malicious downvoting (censorship) is MORE against the rules than 'spamming' is - since there is no actual rule stating that commercial use of Steemit is prohibited.
If the community wants an anti-spam policy and thinks it is OK to remove posts it deems to be spam, then that should be included in the terms of service so that everyone is clear on the situation. A clear policy sets the scene for effective action to be taken, instead of vigilante downvoting and all the ill feeling that goes with it.
What to do with genuinely problematic posters?
Some kind of agreed-upon definition of what constitutes a problem post is essential for any effective action here. Once that is decided and clearly set out, it becomes possible for the community to transparently and fairly (a simple escalation flow is sketched below):
- Warn the posters
- Educate the posters
- Potentially remove the posters from the network if they continue to violate the terms of service.
This, I feel, is possibly the only truly fair way to create a balanced environment that is respectful of all voices while enforcing a minimal standard for posts in the network.
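To illustrate, here is one possible escalation flow matching those three steps - a sketch only, with arbitrary thresholds that would need community agreement:

```python
# One possible escalation flow matching the warn / educate / remove steps
# above. Counting confirmed violations this way is just a sketch; real
# thresholds would need community agreement.

ACTIONS = ["warn", "educate", "remove"]

violations = {}  # author -> number of confirmed terms-of-service breaches

def handle_violation(author):
    violations[author] = violations.get(author, 0) + 1
    step = min(violations[author], len(ACTIONS)) - 1
    return ACTIONS[step]

print(handle_violation("repeat-offender"))  # 'warn'
print(handle_violation("repeat-offender"))  # 'educate'
print(handle_violation("repeat-offender"))  # 'remove'
```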
Malicious reports are against the Terms of Service, so...
It makes sense to me to treat false reports just as harshly as any other action that might be against the terms of service - so malicious reporters should go through the same process of warning and potential ejection that anyone else would face.
Due to Steemit's financial aspect, an interesting option also exists: penalty fees could be applied to repeat offenders, with the fee revenue split between any victims and the reward pool, so as to better serve the community as a whole. This would be a form of karma in action - though it may ultimately prove unpopular if not handled in a truly fair way.
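A quick sketch of how such a split might be computed; the 50/50 ratio is an arbitrary example, not a proposal for a specific figure:

```python
# Sketch of the suggested penalty split. The 50/50 ratio is an arbitrary
# example, not a proposal for a specific figure.

def split_penalty(fee, victims):
    """Return (amount per victim, amount added to the reward pool)."""
    if not victims:
        return 0.0, fee             # no identifiable victims: all to pool
    to_victims = fee * 0.5          # half shared among those harmed
    return to_victims / len(victims), fee - to_victims

per_victim, to_pool = split_penalty(10.0, ["author-a", "author-b"])
print(per_victim, to_pool)  # 2.5 to each victim, 5.0 to the reward pool
```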
What do you think? Is something like this a better solution to this common problem for online communities?
Let us know in the comments section below. Thanks!
Wishing you well