What Sacha Baron Cohen Doesn't Get About Section 230, and How to Solve Online Hate Speech
First, watch Sacha Baron Cohen's keynote at the Anti-Defamation League conference.
Sacha Baron Cohen (SBC) went after the "Silicon Six", the CEOs and decision makers at Facebook, Twitter, YouTube and Google, for allowing hate speech and violence. While his agenda is right, and so is his criticism, his understanding of how the law should work isn't. SBC was right to criticize the big-tech companies for operating platforms that promote (or allow the promotion of) hate speech and online violence. We saw that when the antisocial network designated Breitbart a verified news source and gave violence a megaphone. We saw that when Facebook allowed Israeli Prime Minister Netanyahu, during the last elections, to promote violence, and only suspended him after an order from the elections committee, and even then only for 12 hours. Indeed, Facebook would have allowed oppressive regimes to promote their agenda if they paid enough money. But that's not what I want to focus on here. I want to discuss what SBC got wrong, and how to solve it.
CDA 230. SBC explained that tech companies (or "speech providers", as I call them) are exempt from liability for online speech by their users. This comes from Section 230 of the Communications Decency Act in the United States; other countries have similar arrangements. This arrangement is not a loophole: it is designed to help speech providers allow as much speech as possible, and to keep them exempt from liability even if they operate tools that monitor online speech. Why is that? Well, because under ordinary law they would have been exempt anyway: they do not know what goes on in their platforms and cannot monitor all activity.
How come there should be no liability? Well, that's quite easy: if I operate a board that allows everyone to post something, including in closed groups, I should not spy on my users, nor should I delay their speech until I resolve all disputes (we'll get to the delay in a second). However, if I take active steps, such as installing a "bad words" filter that filters out bad words, someone might say: "well, you took out one bad word, why not all of them?" This is where Section 230 comes in. It establishes that if you employ tools to reduce online hate, you won't be liable for what you couldn't stop.
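To make the dilemma concrete, here is a minimal sketch of the kind of "bad words" filter described above; the word list, names and matching logic are mine for illustration, not any platform's actual code. The legal point is that deploying such an obviously imperfect filter should not, under Section 230, make the operator liable for whatever it misses.

```python
# A minimal sketch of the kind of "bad words" filter discussed above.
# The word list and matching logic are illustrative, not any platform's code.

BAD_WORDS = {"slur1", "slur2"}  # placeholder entries, not a real list

def filter_post(text: str) -> str:
    """Replace listed words with asterisks; everything else passes through."""
    return " ".join(
        "*" * len(word) if word.lower().strip(".,!?") in BAD_WORDS else word
        for word in text.split()
    )

# Section 230's point: running an imperfect filter like this one does not
# make the operator liable for the bad words it inevitably misses.
print(filter_post("this post contains slur1 somewhere"))  # slur1 -> *****
```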
Photo CC-BY-SA Joella Marano
Section 230 was somewhat narrowed by other recently introduced legislation, such as FOSTA/SESTA, an act meant to reduce online sex trafficking. But still, without it, there would be no speech on the internet. Well, you could yell as much as you want on your own private website, but there would be no search engines and no social networks, just curated content.
When SBC calls for liability of speech providers, he effectively calls to silence the internet. What's his proposal? Well, he says, let's delay all speech and actively monitor it. Let Facebook and Google employ more content moderators, who suffer severe stress and other harms, and hold all publications until they are fact-checked or screened.
How much content are we talking about? Well, on YouTube alone, about 500 hours of video are uploaded every minute. This means that even watching these videos at 2x speed, YouTube would need 45,000 content moderators working 8-hour shifts, around the clock, seven days a week, each making every decision instantly. They would have to understand whether the N-word in a song is derogatory or not, whether the word "Shekel" is anti-Semitic or refers to the Israeli currency, and so on.
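The arithmetic behind that headcount is easy to check. Here is a quick sanity check under the stated assumptions (500 upload-hours per minute, 2x playback, 8-hour shifts); the constants come from the paragraph above, not from YouTube:

```python
# Back-of-the-envelope check of the moderation headcount claim.
UPLOAD_HOURS_PER_MINUTE = 500   # video uploaded to YouTube per minute
PLAYBACK_SPEED = 2              # moderators watch at double speed
SHIFT_HOURS = 8                 # one moderator contributes one shift per day

uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24      # 720,000 hours
watch_hours_per_day = uploaded_per_day / PLAYBACK_SPEED   # 360,000 hours
moderators_needed = watch_hours_per_day / SHIFT_HOURS
print(moderators_needed)  # 45000.0
```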
So no, you can't have prescreening and content moderation if you want free speech. You can have prescreening in a Netflix-style curated world, where SBC shows his content (gracefully!).
So what can you do? I believe that the solution is not to shut down Facebook, and not to revoke Section 230. The solution is in Dunbar's Number, which represents the number of people with whom a person can maintain stable social relationships. That number is about 150; while you may have more than 150 "friends" or "acquaintances" in real life, you can't really hold a stable social connection with all of them.
Now, let's use Dunbar's Number to understand my proposal. I suggest that in order to limit hate speech, Section 230 should still apply, but social networks and speech platforms should require that their users be organized in communities of 150 members. Each community would have a content moderator who would not be exempt under Section 230; he would be the person verifying (or removing) content, determining the community rules, and deciding whether to take down specific content. Each Dunbar community may vote on its representative or make other arrangements, and each person may be a member of up to 150 Dunbar communities and have up to 150 friends on a social network.
This means that the way content spreads changes. If I'm a member of 150 communities, each having 149 other members, and I post something, the maximum exposure is 150 × 149 + 150 = 22,500 people: 22,350 fellow community members plus my 150 friends (assuming that none of my friends are members of any of my 150 communities, and that the communities have no overlapping membership).
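A minimal sketch of that ceiling, using the proposal's own numbers; the no-overlap assumption is what makes this an upper bound rather than a typical case:

```python
# Maximum first-hop reach under the proposed Dunbar constraints.
MAX_COMMUNITIES = 150   # communities a user may join
COMMUNITY_SIZE = 150    # members per community (149 besides the poster)
MAX_FRIENDS = 150       # direct friends, assumed disjoint from communities

max_reach = MAX_COMMUNITIES * (COMMUNITY_SIZE - 1) + MAX_FRIENDS
print(max_reach)  # 22500
```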
Now, content could still go viral. All of these members could, potentially, share my content in other groups or communities. However, each share would undergo the regular moderation of that 150-person community. This means that no group would be big enough to bully at scale.
Why would people take on the liability of being content moderators? Well, there's a good answer: money. Moderators could share in the community's revenue. Facebook's average revenue per user is $7.26 per quarter, or about $29 a year, so a 150-member community generates roughly $4,350 a year. Assuming that a moderator does this alongside other community functions (like organizing a local community or a social organization), even a modest share means a few hundred extra bucks a year.
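The numbers, spelled out; the 10% revenue share is my own illustrative assumption, not part of the proposal:

```python
# Rough moderator-compensation estimate.
ARPU_PER_QUARTER = 7.26   # Facebook's average revenue per user
COMMUNITY_SIZE = 150
MODERATOR_SHARE = 0.10    # hypothetical cut of community revenue

annual_community_revenue = ARPU_PER_QUARTER * 4 * COMMUNITY_SIZE  # ~$4,356
moderator_pay = annual_community_revenue * MODERATOR_SHARE
print(round(moderator_pay))  # 436 -- "a few hundred bucks a year"
```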
Will this remove hate speech completely? Of course not. But it will minimize it, because groups become toxic when they reach a certain threshold: bigger groups tend to have more toxicity while still being driven by a small set of major contributors.
So what happens to social causes and organized protests? This is a tough question. I know that it would be harder to arrange a mass demonstration or protest without the ability to create a group of a million members. Does this mean a politician couldn't reach more than 150 people? Not quite. If we treat the relationship as non-reciprocal, then a politician could be followed by more than 150 people; however, each of those followers would have to decide that they prefer this cause or politician over one of their 150 real-life friends.
Interesting analysis and suggestion, but how do you propose to implement and enforce such rules worldwide?
Like the Elder Wand and Sauron's Ring, Facebook is too dangerous to exist and must be destroyed. Decentralized, self-regulatory social networks like Steem are the solution.
But employing a content moderator for every 150 people is basically the same thing as employing a few hundred thousand moderators overall.
But we don't even have to go that far, as he also points out in his speech.
One of the main problems is not only the content itself but also the ad models.
a) It should be forbidden to micro-target ads at overly specific demographics.
(Targeting should be capped at something like "within this state, within this age range".)
b) All ads, especially paid political ads, have to be fact-checked.
I think doing this would already go a long way.
Eventually, these companies could develop algorithms to detect improper content (which they already do anyway) and then manually review whatever gets flagged. Similarly, reported content should undergo the same process. In addition, content that is on the verge of going viral could undergo similar checks, as sketched below.
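A sketch of the triage pipeline that comment describes: algorithmic detection, user reports, and near-viral content all feeding a single human review queue. The function names and the viral threshold here are illustrative assumptions, not any platform's actual system.

```python
from collections import deque

VIRAL_SHARE_THRESHOLD = 1000   # hypothetical "verge of going viral" cutoff
review_queue = deque()

def classifier_flags(post: dict) -> bool:
    """Stand-in for the detection algorithms the platforms already run."""
    return post.get("score", 0.0) > 0.9

def enqueue_for_review(post: dict) -> None:
    """Send a post to human review if any of the three triggers fire."""
    if (classifier_flags(post)
            or post.get("reported", False)
            or post.get("shares", 0) >= VIRAL_SHARE_THRESHOLD):
        review_queue.append(post)

enqueue_for_review({"id": 1, "score": 0.95})     # flagged by the algorithm
enqueue_for_review({"id": 2, "reported": True})  # reported by users
enqueue_for_review({"id": 3, "shares": 5000})    # about to go viral
print(len(review_queue))  # 3
```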
I like your suggestion of small communities: this matches how I see us returning to the internet I first knew. Activity on various blog comment sections, discussion boards and so on, with somewhat compartmentalised interests. No massive central system linking all these disparate interests, which people now regret for many reasons: you want to talk about knitting, but because you once left a positive comment on Breitbart, you're persona non grata!
I agree with @apshamilton; however, Facebook isn't the venue where this will come to pass. The old guard are too dangerous to exist, and my hope is that they can be evolved past, declining under the heavy burden of the "free" and censored model they've chosen.