# PRACTICAL THINKING — Proof of stake, proof of brain . . . Big data platforms . . . Etc . . .


Word count: 2,500                       Updated: 2018.3.25

## 「Various thoughts」

Steem users instruct each other to engage more. They say: write more replies and comments. Lucky me. I have a tendency to write comments longer than most posts themselves.

Behold the fruits of my commenting.

Enjoy.

〈〈 1 〉〉

I was reading a post which said, basically, that every post requires both interesting content and investment from its user, and I must admit that I disagree.

You can make it here without investing in your own content, I suggest.

That can happen. Or rather it will happen. (If in the long run the platform is still there.)

Let us think about social networks as dynamical systems which generate requirements top down onto their users. It's a good way to look at networks and make predictions.

Because the system is proof of brain: either proof of brain works, or the value crashes, or at least stays low. That may be, among other things, why some, like @Dan, left even though they held a large stake.

It will happen, or the value of the token will plummet. (Hopefully it doesn't. This is a great project.)

There's nothing wrong with using bots to promote, and it's good to have the option to quickly raise some content to trending. Options are good, sure. Not all promoted content is bad or boring.

All the same it cannot be the thing to do for every post. And there must be a way for users to establish market position and use that in place of advertising in day-to-day content. (In other words, like elsewhere. Advertising is a market in the hundreds of billions, and if you spend, say, a hundred million, you won't raise market position by that alone. The fact that advertising works to any extent at all on Steemit suggests how small the market for Steem is at the moment, and how much room it has to grow. And grow.)

There cannot be a threshold of minimum promotion. Market position must be buildable and must do some work.

Many users like me, I suggest, can buy votes ... but won't. (Indeed, haven't invested yet at all.)

I personally want to know if the system can work on proof of brain. There's no mining. It's proof of brain and voting. The security and value of the system depend on content being able to replace mining. If proof of brain doesn't work by itself, or rather, if everyone has to be an investor, not just a proper fraction of users, the system will fail.

Because mainstream institutional investors have far more money and will take over the platform directly, or consensus will form that proof of brain isn't sufficient to replace mining. @Dan has already written that mining has some advantages.

And content needs to improve, also, simply because trending is all most people see. Serious people don't play games, and they will go to YouTube and Shitbook, even though such places basically double as surveillance platforms. They seek interesting things, respectable things. Not all of YouTube is tide pod eatery.

I like to say, ``What needs to happen, will happen.'' Probably. Or it won't.

Systems change under pressure of their own constraints.

I wrote something like this comment when somebody else (@surfermarly) last had a trending-page post on this issue. I said: if there is no proof of brain, there is no voice in the long run, no archiving, no blockchain. Investment AND content cannot be required from every user because, first of all, most users have nothing to invest, which is why delegated proof of stake is potentially superior to mining; and second, if voice remains but is invisible, user retention will drop to the point where delegated proof of stake is not sustainable. Not secure. And then everybody loses their voice.

Let us all just make quality content. Or at least interesting content for a while and see what happens.

We all want to get off Shitbook, right? Right? Well for that we need delegated proof of stake to work. Organize. Do your part. Get position. Win.

That's how networks work. I might link to a bunch of papers later, or just summarize, now that busy.org has LaTeX. (Testing: $Reward^* : Reward^{Content} \leftarrow Content$. \$Reward^* : Reward^{Content} \leftarrow Content\$)

I've been doing some calculations: participation in terms of voting is not bad compared to likes on other platforms, though reads can be improved. Percentwise, that is, which makes the platforms comparable.
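For concreteness, the percentwise comparison can be sketched in a few lines of Python. The numbers below are made up for illustration; the post does not give its actual figures.

```python
# Hypothetical engagement figures -- illustrative only, not data from the post.
def engagement_rate(interactions: int, views: int) -> float:
    """Interactions (votes, likes) as a percentage of views."""
    return 100.0 * interactions / views

# A small Steem post versus a large post on a mainstream platform:
steem_rate = engagement_rate(interactions=120, views=900)
mainstream_rate = engagement_rate(interactions=4_000, views=150_000)
print(f"steem: {steem_rate:.1f}%  mainstream: {mainstream_rate:.1f}%")
# -> steem: 13.3%  mainstream: 2.7%
```

Raw counts favor the big platforms enormously; normalizing by views is what makes the comparison meaningful.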

Hope may be for the hopeless; but all the same I think there is hope.

Another gentleman put it quite nicely. Do people collect advertisements or comic books? Do they watch the paid-for advertisements that come with YouTube videos, or are they there for the videos?

``Buying the top slot only goes so far. People drive past billboards everyday. Most don't even go to those places being advertised and if they do, they expect a tight ship.''

〈〈 2 〉〉

I almost never used Facebook to begin with, it being so obvious where that was going. And now recently it seems it went where it was going. Surprised? Who's really surprised?

1. ``The leviathan arose, not through force of arms or the power of the state, but through user consent. The government did not elevate these tech titans to power; users did, willingly, by feeding them their personal data--and in doing so, allowing them to use that data for their own ends. By deliberately manipulating the data it presents to users and assisting political parties and governments to undermine user freedoms with user-submitted information, Big Data has demonstrated that it is no longer interested in serving you. It aims to rule you.''

2. ``The mainstream social media platforms have repeatedly demonstrated that they hate everybody to the right of Marx and Engels, act on that hatred by restricting and denying services, and support governments that crack down on political dissidents. The information you feed your apps can and will be used against you. It can be innocuous-looking things like unwanted travel advertisements on Facebook, to helping governments identify and shut down dissenting points of view, to empowering tyrants to locate and neutralize you. By entwining themselves with state power, they have become shadow public agencies that seek to steer public opinion and policy.''

3. A free technology is needed and beginning to appear. Websites like Steemit do not censor much, they allow free association, and they do not abuse user data. Monetization has partly eliminated incentives to abuse users. Indeed, the company builds the platform, doesn't run it, and doesn't store the data of users.

I think Steemit is great. It has problems, but all new things have problems. And its problems, unlike those of the big data sites, are solvable; the incentives are correct.

People say: whatever. But incentives matter.

YouTube, Facebook, Twitter, Reddit, etc., are too often playing by the rules Mancur Olson, long ago, in his first book, predicted actors seeking political influence will play. Therefore they cannot be trusted to behave toward their users in any generally trustworthy way.

A user, for them, is, first of all, just a node that increases or decreases their valuation, not a consumer to whom they provide a service. The user isn't paying.

That was originally the whole point — the user isn't paying. But that, many like Carl Hewitt now admit, was the great slippery slope. The business model where users and their data are themselves the payment companies take in exchange for service is ethically a failure. All the incentives are too perverse for much good to occur.

Some gentlemen in computer science departments wanted software and computer services to be a free utility and passed this fine idea on to their students. And some of their clever students created the big data sites.

They quickly observed that the path to getting rich was to sell out their users; the surveillance communities captured the business model; and the political and institutional players got into these games. Ultimately, those several roads merged into the avenue along which our society travels today.

〈〈 3 〉〉

The other big data platforms have far more problems than something like Steemit.

Big data platforms abuse the data of users. That is typically done in a deniable way.

And then there's the shadowbanning and banning and mistakes and Mistakes, the latter alleged by some platforms or moderators to be accidental.

A recent experiment showed, as has been written, that by removing persons who argue the opposite point one at a time, while planting a few activists who follow an undeclared rule, other individuals coming in one at a time, always a minority, will imitate the apparent majority. Despite not having been told the rule. After all, as Michael Arbib argued recently, imitation is so fundamental to humans that it was the primary factor in the evolution of language.

Then the activists or actors are removed, once the opposition is a minority, and everyone continues to behave according to the tacit rule. Whatever that rule was. Thus moderators can build any consensus they want.
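The dynamic can be sketched as a toy simulation. Everything below is a hypothetical illustration: the planted-activist seed, the 80% imitation rate, and the silent-removal rule are my assumptions, not parameters from the experiment mentioned above.

```python
import random

def run_room(seed_activists, removals_on, n_newcomers=50, rng=None):
    """Toy model of moderated conformity (a sketch, not the cited experiment).
    The visible thread starts with a few planted 'A' activists; each newcomer
    imitates the visible majority most of the time; when removals are on, the
    moderator silently drops every dissenting 'B' post before it is shown."""
    rng = rng or random.Random(0)
    visible = list(seed_activists)                # e.g. ['A', 'A', 'A']
    for _ in range(n_newcomers):
        majority = max(set(visible), key=visible.count)
        # Newcomers mostly imitate the apparent majority, occasionally dissent.
        opinion = majority if rng.random() < 0.8 else ('B' if majority == 'A' else 'A')
        if removals_on and opinion == 'B':
            continue                              # silently dropped: never shown
        visible.append(opinion)
    return visible

room = run_room(['A', 'A', 'A'], removals_on=True)
print(all(op == 'A' for op in room))  # True: the visible consensus is unanimous
```

With removals off, the same room retains visible dissent; with removals on, every newcomer sees a unanimous thread, so imitation alone reproduces the moderator's chosen rule.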

Very frequently moderation is automatic, yet the allegation that it's always a computer mistake is not correct. The group of people behind the automation can always operate the same system manually, as desired, in a way indistinguishable from automatic operation, or opaquely bias that operation away from its stated goals.

Those who wish to censor conversation and debate in order to steer it find this option more desirable than obviously manual exclusion of those with whom they disagree, because exclusion by a seemingly automated tool is deniable. There doesn't appear to be any intent in the act, and therefore no detectable hostility. An apparent lack of hostility alerts nobody to counterorganize against the organizing that performs the censoring, which therefore continues.

Everybody more or less understands that simple machines may frequently behave unproductively in complex linguistic situations, because they don't adequately model what they're doing. So it provokes less ire from those who are censored. And those who are censored then let themselves be pushed around more easily, and therefore can be pushed around more often and more seamlessly.

Automod, etc, is nonsense. Whenever anybody tells you A.I., or any buzzword, or a ``smart'' script censored your comment on a social media platform, be very skeptical. Very skeptical.

Very few of these are fully automated. The technology isn't there yet. (Deep learning, with or without convolution layers or reservoirs, doesn't produce much in the way of models of the objects with which it would interact. This is problematic for a semantic web. A script or a typical bot produces even less.)

A group of people is usually there. There is some automation, true, very likely, but never to the extent alleged. Rather, they don't want to reveal that they disliked your comment, post, or content, because you might take action in response.

The same rationale as for shadowbanning. As opposed to banning.

Organizations can allege ``whoopsie'' and thereby excuse and deny genuinely hostile behavior against those to whom they provide a service.

In general, alleging that censorship is part of some automatic procedure has been a trick used by censors since the days of Stalin to reduce pushback, as described for example by Abdurakhman Avtorkhanov (The Technology of Power, New York: Praeger, 1959). Even mere delays are very effective at breaking the formation of a consensus, or at creating another consensus instead. Timing matters. Indeed, timing is the unstable element determining phase change in networks.
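The claim that a mere delay can flip a consensus can be illustrated with a toy model. This is an illustration of the claim only, not a reconstruction of any cited result; the posting rates and the imitating onlooker are my assumptions.

```python
from collections import deque

def final_majority(delay_b, steps=50):
    """Toy sketch: each step, one 'A' post appears instantly while two 'B'
    posts enter a moderation queue released after `delay_b` steps; an onlooker
    then posts, imitating the currently visible majority (ties go to 'A' as
    first mover). Returns the final visible majority."""
    visible = []
    queue = deque()
    for t in range(steps):
        visible.append('A')                               # A posts immediately
        queue.extend([(t + delay_b, 'B'), (t + delay_b, 'B')])
        while queue and queue[0][0] <= t:                 # release due B posts
            visible.append(queue.popleft()[1])
        a, b = visible.count('A'), visible.count('B')
        visible.append('A' if a >= b else 'B')            # onlooker imitates
    a, b = visible.count('A'), visible.count('B')
    return 'A' if a >= b else 'B'

# With no delay, the more numerous 'B' side wins; delaying B by a single
# step lets imitation snowball behind 'A' instead.
print(final_majority(delay_b=0), final_majority(delay_b=1))  # B A
```

The 'B' side posts twice as often, yet a one-step delay is enough to hand the early imitators to 'A', and the lead never closes: timing, not volume, decides the outcome.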

``Perhaps a few went rogue and deleted comments.'' Very likely the case. Too often the case.

My favorite example: it appears that, a long time ago, one graduate student at MIT didn't like Jerry Pournelle's science fiction novels. The student was working part time, moderating the ARPANET. So he found an excuse and banned Jerry from the ARPANET. The other administrators apparently accepted the excuse, or even agreed with the ban while mentioning that they too didn't like the novels.

You never know what you said publicly that offends somebody who possesses administrative power. Even minor power. It's genuinely difficult to anticipate which phrase will offend some people.

(Some people with a lot of time on their hands are also not all there. I've been flagged by flat-earth folks with more SP to get my attention, so they could then spam me; blocked by crazy people who first DM'd me because apparently I didn't reply fast enough to suit their taste; etc. Etc.)

Today such people have an excuse: some script, or a bot, or the rules. The systems we have for moderation are far too opaque to let us distinguish mistakes from intent.

Discord has bots that delete links. Everybody else has gotten more sophisticated in their censoring, because it works beautifully to change consensus. And waaaaaay too many people want influence and know it works. Even in obscure subreddits.

In my opinion, Murakami (1Q84) observed correctly that most people want to feel that what they do is important, rather than consider simple reality.

Our brains aren't used to being in cynical mode all the time. When our species evolved there were no communication tools, no images, no artificial sounds; what we saw and heard was what there was. Much of our brain operates on naive realism. Therefore even if users know a conversation is being censored or modified, most of them still feel preconsciously, most of the time, that the observed conversation is basically valid. That its meaning has not been substantially or irrecoverably changed. That no information has been lost. Yet, as Heinlein frequently wrote, intentional omission is the most powerful form of lying.

This can shape opinion. Most individuals who moderate know this, by reflection or by trial and error. Long ago, when I was a student in a game theory class, I was even tested on this. It's not a secret but a social technology that allows working more productively with some mediums, and people use technology. We are tool users.

@dhimmel writes:

This is ``why it's important to separate out the content layer into a decentralized database, a blockchain, and allow any frontend to be built on top.''

Yes.

@cheah writes:

Starve the big data leviathan; go use other platforms to end censorship.

Yes.

@nonameslefttouse writes:

``This is a global platform that branches off into many other sites and apps now and into the future. That means, in order to be able to attract one billion or even two billion people to this blockchain; the playing field must at least be accessible to as many walks of life as possible. It'll never be fair and I can't stand participation awards but nearly everyone by now should know that pay to play and paywalls are a disaster. For this to be a disruptive technology we have to kick all of these old trashy methods of yours to the curb and try something new. You don't fucking lead by following.''

Yes.

I'm a scientist who writes fantasy and science fiction under various names.

The magazines which I currently most recommend:
Magazine of Fantasy and Science Fiction
Compelling Science Fiction
Writers of the Future


Hi tibra! We just met in discord. I’m going to follow you and have a look at your stuff: writer, academic, artist is right up my alley — although drunken, scallywag, precious metals stacking pirate is on my cv as well...

Cheers! from @thedamus

·

Was nice to meet you :)


Another lovely writing I see here, it's beautiful, thank you for sharing... @tibra

·

Ty

Interesting perspective ... trying to see into the future for long-term success, and how self-promotion changes the view of new users. Shitbook will hopefully fail thanks to a platform like Steemit, and I appreciate your thoughts on this subject.

·

I almost never used Facebook to begin with, it being so obvious where that was going. And now it seems it went where it was going.

·
·

You are right about that; it's a mess of a platform with nothing to gain from using it.

Right. I could just stop right there, having written that word, but I won't :)

I agree with what you have said. I personally do not use pay-for-upvote bots and want to see how far I can go without them, by just producing quality content and engaging with other people who are producing it. I am not saying that I am against bots in general, but there has to be a way to make their usage more fair. Like you said, what needs to happen will happen; we just have to wait and see.

Nice write-up!

·

Ty

