Censorship (or, the lack thereof), moderation, and the future of reading


Not Censorship

  1. People flagging posts on steemit.com.
  2. People removing content from a website that they themselves control (so long as most browsers can access most websites).
  3. People disagreeing with you and denying you access to their audience.

Basis

We all participate in the same social community on Earth. No person is entitled to an audience that was grown or cultivated by any other party. Each person or group's ideas and publications must stand on their own.

If you think about it, denying someone the editorial control over their own website is itself a restriction on their freedom of speech. It is as much an infringement of someone's rights to tell them they must publish something they disagree with as it is to tell them they must not publish something.

People getting banned from the site at twitter.com: This is within the discretion of the owners of that domain name. People getting filtered or limited on facebook.com? Same thing. When you publish exclusively into a centralized site, a small group of people control the maximum distribution of your message.

The great thing about the Steem Blockchain is that no one group can decide how far your message spreads. steemit.com, for example, is hosted in the United States. Under US law, several types of content are illegal to host, and will necessarily be censored from this domain name, but will remain in the Steem Blockchain. Other people or groups, not bound by US law, will be free to provide access to that exact same content stored in the Steem Blockchain on their own websites. (Some will attempt to independently monetize that content, which will bring them into conflict with the rightsholders of the content so monetized. How that ends up playing out will be interesting to everyone who cares about the intersection of national jurisdictions with a borderless internet.) There are already several such sites that provide a view into the full content of the Steem Blockchain (at least 4 or 5 independent ones), and we hope that by the end of 2017 that number is 100 or more. Several initiatives have been undertaken to make it as easy as possible for people to do so, and it will get easier yet in the future.

The thing to remember is that being denied access to publishing on a specific website is not censorship. Each operator of each site has, within their own personal freedoms, the final say over what gets published on their site. This is the nature of the internet. Everyone with uncensored internet access is free to register a domain, buy hosting in the jurisdiction of their choice, and participate in the global network.

The Steem Blockchain allows a community to publish, grow, and thrive independent of any one site, which is the unprecedented aspect of our amazing community, and is the reason I have dedicated my productive energies to its promotion and growth. (Prior to this, I have managed various addictions to BBSes, Usenet, IRC, LiveJournal, Reddit, Hacker News, and many others, almost all of which were centrally controlled by one person or a small group.)

Historical Context

The last time we had anything remotely resembling such a decentralized community on Earth, it was called Usenet. Another similar structure was email-based mailing lists, but both eventually crumbled due to the issue that all censorship-resistant communities face: with growth comes a "free" audience for spammers. Eventually, readers leave when their time is mostly spent dodging crap instead of reading posts of value. Failure to maintain a decent signal-to-noise ratio is hard enough on centralized sites (it requires a big team to stay on top of the 24/7 banning of spammers, loud nazis, etc.), and is extra hard in a decentralized model (as it must be done in the client application that processes the global feed of posts).

Back in the old days, that meant email clients, which used a mechanism called "killfiles" (a local text file containing a list of sender addresses you would ignore, much like our current mute lists). This worked when email addresses (used as source addresses for posts to Usenet and mailing lists) were relatively "expensive" (generally, you only got one), but now anyone can generate millions of throwaway email addresses for free. A better approach is required today. (It's actually a good thing overall, as cheap new identities improve the prospects of anonymous speech, which is critical for expressing unpopular views and avoiding the tyranny of the majority.)
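For readers who never used one, the killfile mechanism described above is simple enough to sketch in a few lines. This is purely illustrative; the file format and function names here are made up for the example, not drawn from any real client:

```python
def load_killfile(path):
    """Read one ignored sender address per line; '#' starts a comment."""
    killed = set()
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if line:
                killed.add(line.lower())
    return killed

def filter_posts(posts, killed):
    """Drop any post whose sender appears in the killfile."""
    return [p for p in posts if p["sender"].lower() not in killed]
```

The filtering happens entirely on the reader's machine: nothing is deleted from the network, the client simply declines to display it.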

It has always been my personal belief that "the correct answer to speech that you don't like is more speech". The correct answer is never to seek to silence those who are doing the speaking. What this means in practice is empowering those doing the listening to decide what they want to listen to, using that additional speech as advice. I, for example, don't want to hear from people I have no existing relationship with who want to try to convince me to visit their site/page/blog, or to buy their product or service. The content my web browser displays is filtered based upon lists published by third parties who have collected the domain names of people publishing such content that I am not interested in—an ad blocker extension.

One side effect, though, of providing readers tools to stop listening to low-value speech or speech that is offensive to them is that those who are reduced in eyeballs/eardrums will cry "censorship" very loudly, even when their declining audience is the audience's decision, not that of any gatekeeper. As has been echoed many times, freedom of speech is not freedom from the consequences of one's speech. Whenever someone makes an accusation of censorship against another party who doesn't own any jails or guns, make absolutely sure to double-check that they're not just saying things lots of people aren't interested in hearing.

The nature of a blockchain makes it impossible to silence or delete content that is committed into blocks, which is an incredible strength. It permits a better model, however, which is the best of both worlds: the "more speech" moderation system. Just as spammers and nazis can publish into the blockchain, I can publish lists of things I think are unreadable garbage. Neither of us can shout over the other—we both have equal standing in the database, and neither of us can force people to read or ignore us or the other. The complete power lies in the hands of the reader.

The last step to enable this ideal model is building client-side tools (software, applications, and websites) to allow people to subscribe to those opinions of others, at their own choosing. This makes everyone a top-level moderator of their own personal reading experience.
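As a sketch of what such a client-side tool might do, the following combines several subscribed mute lists and hides, rather than deletes, the matching posts. All names here are hypothetical, invented for this example; this is not any real Steem client API:

```python
def build_mute_set(subscribed_lists):
    """Union the mute lists the reader has chosen to subscribe to."""
    muted = set()
    for lst in subscribed_lists:
        muted.update(lst)
    return muted

def render_feed(posts, muted):
    """Mark muted posts as hidden instead of removing them: the content
    stays in the chain, and the reader can always opt to reveal it."""
    return [
        {**post, "hidden": post["author"] in muted}
        for post in posts
    ]
```

Because the lists are just more published speech, unsubscribing (or subscribing to a competing list) is always within the reader's power.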

Back over on Twitter, where they had a pretty large problem of misogynistic pseudonymous accounts harassing and threatening women (gamergate), Randi Harper came up with a tremendous idea called ggautoblocker. It subscribed to a list (that she edited and published) of in-her-opinion harasser accounts, and allowed anyone who wanted to run ggautoblocker to download her list and automatically block all of those accounts on their own Twitter account. All of it was completely voluntary and opt-in, and I was inspired. This model is the future, and Twitter should have adopted it site-wide.

I wanted to see it expanded to apply to any arbitrary list of users on Twitter. Twitter already has a "lists" feature. Wouldn't it be great if you could bulk follow or bulk mute/block any list created by any other user, that updates automatically as they update their own list? Then, you could outsource your moderation to people or groups that you trust, and not have to spend your whole day wielding the banhammer at shitlords. An added bonus of this decentralized model is that if one list didn't cover someone, maybe one of the other lists you follow did. It would work both for blocking people as well as following people, covering both the "ggautoblocker" use case for avoiding harassment/deplorables (are we allowed to use that word again yet?) as well as the positive flip-side of uncovering valuable posters, replacing the tradition of a weekly "follow friday" content discoverability workaround.

It is my belief that we, as the final arbiter of the websites we visit, are each personally in ultimate control over what we read (or, more importantly, don't read) online. The process right now is mostly manual, but the software tools are going to improve vastly in the coming years. Eventually, you will subscribe to your own chosen curators (+) and moderators (-), the way you presently subscribe to websites or authors.

I look forward to the day I can read right-wing news feeds, with the comments sections pruned by left-wingers, and vice versa—at my personal option, day to day. The possibilities here under this model are huge. We can't read everything, so the tools we use to decide what subset of things to read must get smarter, more automated, and more personalized.
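To make the curator (+) and moderator (−) idea concrete, here is one purely hypothetical way a client could fold those subscriptions into a personal ranking: each subscribed curator list that includes an author adds a point, each subscribed moderator list subtracts one, and the feed is sorted (never censored) by the result:

```python
def personal_score(author, curator_lists, moderator_lists):
    """+1 per subscribed curator list naming the author, -1 per moderator list."""
    plus = sum(author in lst for lst in curator_lists)
    minus = sum(author in lst for lst in moderator_lists)
    return plus - minus

def rank_feed(posts, curator_lists, moderator_lists):
    """Order the feed by the reader's own personalized score."""
    return sorted(
        posts,
        key=lambda p: personal_score(p["author"], curator_lists, moderator_lists),
        reverse=True,
    )
```

Two readers subscribed to different lists would see the same underlying data ranked in entirely different orders, which is the point.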

steemit.com

The precise way in which these forms of moderation will be implemented on steemit.com is still subject to change. We have only just begun the process of formally specifying our first small moderation features (as they relate to our impending Communities implementation) and the exact way they will function on the site.

The requirements are fairly clear, but the implementation is still being fleshed out. There are many strong opinions on the matter, all of which I listen to diligently. I hope that you share yours (along with your justifications, explanations, and use cases) in the comments below, and, critically, be prepared to civilly discuss their downsides and failure modes, too. It is a necessary bit of negativity when doing product design for something that is to be used by millions of people, some of whom are total jerks; spammers will vigorously attempt to work around anything we come up with.

The eventual implementation is likely going to annoy some people, or is going to be abused in various ways by some other people. Engineering is a series of trade-offs: it won't be perfect, it will just be better. We firmly believe that we can build something unprecedented, that functions better than anything else ever built, preserving censorship-resistance and simultaneously delivering to readers a high signal-to-noise ratio. It's a relatively hard problem: many, many sites have tried, and almost all have failed. (Most professional sites, today, having better things to do, just disable comments entirely, which is in my opinion a modern large-scale tragedy.)

With your help, understanding, passion, and creativity, I know that together we will reach a new standard for the marketplace of ideas that will inspire people well into the future.


To make it more explicit, one of the things you are asking for comment on is this piece on the roadmap (pdf):

Feature: Comment moderation for post authors.

Which is what you're referencing in #3 of your not-censorship list:

People disagreeing with you and denying you access to their audience.

My response,

The incentive to abuse Steem is undeniable. The simplest way people abuse it is by copy/pasting text from elsewhere as if they've written it. This is plagiarism, and plagiarism needs to be discouraged as best as possible on Steem. Why? Because we want real people to participate; we want creators and people who share themselves. Incentivizing scammers will only bring more scammers and reinforce scammers' habits.

A large and useful part of @anyx's @cheetah bot's ability is its automatic text checking. If similar text is found on the web, cheetah will leave a comment linking to what it found. Furthermore, it's often users who find abuse and leave a comment in the post. Giving post authors moderation power will empower abusers to remove comments indicating abuse.

There are other instances when it's beneficial to notify everyone viewing a post about a scam a poster may be trying to bait people into. If you've ever used bitcointalk.org, you should know that any self-moderated thread should be read with that in mind.

Steemit in many ways has been modeled after Reddit, and while we don't need to follow their model as a dogma (as we haven't), I do think Reddit's lack of own-post comment moderation is a good thing. Comments that get downvoted enough get sorted to the bottom, and there is an account option to choose at what threshold of negative karma they simply get hidden.
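The Reddit-style behavior described here can be sketched roughly as follows. The threshold value and field names are illustrative, not taken from Reddit or Steemit:

```python
def sort_and_collapse(comments, hide_below=-5):
    """Sort comments by net votes; collapse (never delete) those at or
    below the reader's chosen threshold."""
    ordered = sorted(comments, key=lambda c: c["score"], reverse=True)
    return [
        {**c, "collapsed": c["score"] <= hide_below}
        for c in ordered
    ]
```

The key property is that the threshold belongs to the reader, not to the post's author, so no single party decides what everyone sees.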

That's not to say I don't see any value in allowing own-comment moderation. Certainly there is value in legitimate posters removing actual useless trolling from their posts. But so far in Steemit's life, I would say the amount of trolling has been low enough that giving post authors the ability to completely remove comments isn't an emergency. The incentives of Steem make Steemit a target of abuse. My concern with this proposed change is that it may create more of a problem than it solves.

·

My concern with this proposed change is that it may create more of a problem than it solves.

Is there harm in finding out? It's always possible to turn it back off if it sucks.

There is obviously big upside in allowing authors more control over what types of content have access to their audience. Many bloggers will not blog on sites that permit commenting by anyone.

·
·

It is possible to get the best of both worlds by allowing authors to turn this feature on or off on a per-post basis, along with a per-account default. Those authors who want to control comments can do so (along with an indicator to readers that the comments being shown are moderated by the author), and those who do not want to control comments can convey to readers that the comments being shown are not subject to author moderation.
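A minimal sketch of that per-post override with a per-account fallback, using made-up field names (nothing here reflects an actual Steemit data model):

```python
def moderation_enabled(post, account_default):
    """Resolve whether author moderation applies to this post, plus the
    label a UI could show readers about the comment section."""
    enabled = post.get("author_moderation")
    if enabled is None:          # no per-post choice: fall back to account default
        enabled = account_default
    label = "moderated by author" if enabled else "unmoderated"
    return enabled, label
```

The resolution rule is the whole feature: an explicit per-post setting always wins, and the account default only fills the gap.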

·
·
·

This, of course, comes at a cost: additional UI complexity and the need for somehow getting this mental model we've just spent several hundred words describing into the minds of every author who uses the interface. I'm not entirely sold on the benefits of additional customization outweighing the downsides of additional complexity. We are determined to build the simplest possible thing that can work.

·
·
·
·

Mental model in six words: "Click here to enable comment moderation"

·
·

Okay, what about viewer preference? I'd prefer the default mode to be the uncensored view. Then the other mode would be the "comments censored by post owner" view.

Anyone should be able to leave a note, so to speak. Resteemed.

·
·
·

It is possible that you want this default only because neonazis haven't heard of steemit.com yet. I come from the seedy underbelly of the internet and know that the only reason we don't have problems with content right now is because of how small we are, and fully expect people to spam all sorts of terrible crap on every article posted soon.

It is my personal opinion that we should give an author 100% control over what gets displayed underneath their words, lest they go and publish their words somewhere else (almost every other blogging platform on the planet gives an author this control).

If our decision is wrong, then people will flock to busy.org (or some other site that doesn't let authors moderate) instead, won't they?

·
·
·
·

It's a design that makes sense. I'm just worried about points similar to what @pfunk has brought up.

I come from the seedy underbelly of the internet and know that the only reason we don't have problems with content right now is because of how small we are, and fully expect people to spam all sorts of terrible crap on every article posted soon.

I'll take your word for it, I can imagine how bad it can get :)

·

Perhaps it can be filtered in combination with reputation score? Say, if you, the author, have a higher rep score than the author of a reply, you can remove it. Just thinking out loud.
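This thinking-out-loud idea can be sketched as a gate on the author's hide requests. Purely illustrative; this is not how Steem's reputation system actually gates anything:

```python
def can_moderate(author_rep, replier_rep):
    """The reputation rule proposed above: the author may hide a reply
    only when their rep exceeds the replier's."""
    return author_rep > replier_rep

def apply_author_hides(comments, author_rep, hidden_by_author):
    """Honor the author's hide requests only where the rep rule allows."""
    return [
        {**c, "hidden": c["id"] in hidden_by_author
                        and can_moderate(author_rep, c["rep"])}
        for c in comments
    ]
```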

·
·

All of these ideas are potentially valuable, and it's good to brainstorm. :)

Thank you for all your hard work and your open communication with all of us! Keep up the good work! :)

... so true! Thanks for putting it together!

Superb explanations. And is it perhaps also related to community moderation?

·

Yup, we have to figure out exactly how communities (open and closed) will work, write a spec, figure out the UI, and then start hacking.

·
·

I can collect and provide some input from the Korean community if you need. We also have a strong interest in the Communities feature.

·
·
·

Happy to read any and all feedback, and to include the Korean community on Steemit in our process, as I imagine you guys will be some of the first heavy users. Let me know!

Thank you for posting this. There are trade-offs for sure. I like your direction and giving moderators more overall control. Of course, we need to find a good balance, but there's no way to know what is best without trying it. The balance can be adjusted if needed.

I'd like to see the Communities enabled to serve as voting guilds also, allowing as much freedom as possible for accountholders to allocate their voting power and split curation rewards however they wish. This would allow all accountholders (even passive investors) to put their voting stakes to work and help build communities.

·

The first step is to get communities in, and let people start building them. Once that's in place, it would be a good stage to test out other things (like voting guilds) but we're probably going to have our hands pretty full in 2017 with the roadmap stuff (voting guilds didn't make the cut).

Mobile apps, in particular, are going to be a huge resource sink, and I want them to be as good or better than the functionality on the website, including all the new stuff (like communities) that we're working on.

If I have a website, I like to be able, as the owner/author, to hide comments that I do not want there. I do like the idea of author moderation. It's their turf, and they deserve to have a bit more control over it.

·

I am with you, but it must be easy for anybody to see the hidden/moderated content.

·
·

Why?

Very insightful