Are bots bad for Steemit? - 🤖 BOTS! 🙀 Act 2

in #bots • 7 years ago (edited)

After taking time to research and write about the whale experiment, hard fork and some new ideas for apps, I'm returning as promised to my bot series.

The previous post was Act 1 and was mainly concerned with the question "What are bots?" to establish a basis for this discussion.

Bots: always controversial. Why can't we just have people on the network, you know, like a social network?! Society is made of people, not robots. 🚫 🤖 ‼️

I think new users are especially put off by them, particularly those from a non-technical background. There's something suspicious about robots going around, making posts and casting votes, that you cannot talk to and that are not ... human.

This post will lay out some of the arguments for and against them, with my own comments and arguments.

TL;DR for @mynameisbrian and @schattenjaeger: here's a picture I found on the Wikipedia bot policy page 😜 "Let humans and machines be friends".

The problem as it appears

Most of us want the main activities on Steemit to be carried out by people, and assume that it is built for direct use by humans. We may express a preference that:

  • posts are works written by people
  • comments are written by people, on topic and in response to the content
  • votes (both up and down) are cast with the active consideration of people, at a rate a human can actually sustain (i.e. not 24 hours a day)
  • rewards are distributed fairly, to people and by people, and are earned

Bots can do all of these things automatically and tirelessly, some better than others. The feeling is that because bots can attempt to "game" the system using advanced (and not so advanced) analysis, they will damage the platform and make it inhospitable for real people.

Why this might not be a problem

Bot traffic is now just over half of all internet traffic. This might sound very depressing, but really it just shows that automation is increasing, as is the intelligence of that automation. We are delegating more and more of our tasks to machines and software bots. Bots always work to serve humans in some way. To quote one of the authors of that report, from the interview in my last bots post:

In the end, most bots - from feed fetchers to SE crawlers - are simply there to gather data for human consumption.

In this way bots are like any tool: they can be used for good or ill. They can give you an advantage. They can be so normal that you no longer notice them, that they seem part of the furniture.

Why this might be a problem

The best arguments against bots, I think, are:

  1. bots unfairly empower the bot owner
  2. bots can't read posts, with two flavors:
    1. bots do not provide engagement
    2. bots cannot really determine quality

I am excluding, however, one common thread: emotional reactions, such as the general dislike of bots on the platform.

But to address this briefly: disliking something is perfectly valid, and it genuinely impacts the platform, because if a lot of people dislike an aspect of it, that is a problem in itself. It's hard to argue with, though, because it does not stem from a rational position. If this is you, read on; perhaps you'll gain enough clarity to either strengthen or dispel your gut reaction against bots and their existence on Steemit.

So the arguments:

1. Bots unfairly empower the bot owner

This is a real concern, and the main one that led me to create a free, open source voting bot and start the Steem FOSSbot organization (sorry for endless plugs 😅).

Tools are only unfairly empowering when access to them is heavily restricted and unequally available. This is a central argument of gun ownership lobbyists, and the root of the famous quote "A well-armed populace is the best defense against tyranny." Every person should be allowed to be armed as a constitutionally protected right, and then no one, not even the government, is unfairly empowered, and thus, the argument goes, there is more liberty for more people.

Opponents say that some of us are crazier than others, and that guns are just too damn powerful: they take away life. I prefer to think of bots as tools, though, and not as weapons. It's the ideas of responsibility and personal liberty, and the contentious nature of controlling their power, that ring a bell for me.

On the internet and with software, equality of access to tools is more real than it has ever been before. It's only a matter of time until someone comes out with an open source version of a needed tool, and closed source bot writers know that the days of their advantage are numbered.

2. Bots can't read posts

What about bots reading posts? Obviously, if this were possible at anything near a human level, we would again be talking about a different kind of AI than most, probably all, of the bots here have.

2.1. Bots do not provide engagement

This is the attention economy of @beanz and @demotruk, AKA the engagement / views-less-than-upvotes problem of @stellabelle. It seems self-evident that on a social network no one can have a meaningful connection with a bot (unless the bot is extremely humanlike, but even then... 😅). Well, almost no one, as this poster asks:

I'm single, and I write my bot's in Node.js, so they are generally less attractive then the Python bots, but still, is there any danger I might fall in love and have relations with my bot?

@heretickitten, I think you're an expert on bot emotions and communication, right? Maybe you could comment 😉

Regardless 😅 what actually matters is that bots are not humans who read content. And having content read is the highest of priorities for authors.

The fact is that bots do not provide engagement, by definition: engagement is human interaction with posts. Engagement is the crack cocaine that most authors crave, and they presume it's what makes the platform, or any social network, tick.

While it's true that without engagement there is no social network at all, I believe it's wrong to assume that bots cannot coexist with humans in this, especially on Steem. Bots have a different purpose and modus operandi than humans. As stated above, a bot ultimately works in the service of humans, but its direct actions do not have the same immediate purpose as a human's.

Unless bots actually push people off the platform, we will still have, and still need, an overwhelmingly mortal populace. I think this is not jeopardized by bots and can instead be bolstered by them.

2.2. Bots cannot really determine quality

It's quite obvious that bots cannot really read content and must rely on markers of quality to determine it. A reward-oriented bot does not even care about this at all, except possibly as a marker of potential reward. Bots cannot reliably upvote quality content, but then again, can people?

I've already made my position clear elsewhere that I think quality is completely subjective, and is not necessarily the goal of curation. It's about what people want, so from a certain perspective it's unfortunate that popularity, or rather getting votes for whatever reason, is what matters.

However, I recognize that of course a consensus can form about quality, and it likely already exists in the societies we all live in. We can like things, we can enjoy things. Bots are cold in this regard. So unless the wishes of the bot operator are very well encoded into the bot's programming (which I believe can be done to a high degree), bots will be rewarding content according to very different criteria than humans.
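To make "encoding the operator's wishes" a bit more concrete, here is a minimal sketch (in Node.js, since that's what my own bot uses) of how a voting bot might score a post from proxy markers rather than from actually reading it. The marker names and weights are invented for illustration only; they are not FOSSbot's actual algorithm.

```javascript
// Minimal sketch: score a post by proxy markers of "quality".
// The markers and weights are illustrative assumptions, not FOSSbot's real metrics.
function scorePost(post, prefs) {
  const words = post.body.split(/\s+/).length;
  const images = (post.body.match(/https?:\S+\.(jpg|jpeg|png|gif)/gi) || []).length;

  const markers = {
    length: Math.min(words / 1000, 1),         // reward longer posts, up to a cap
    images: Math.min(images / 3, 1),           // a few images, not spammy galleries
    reputation: post.authorReputation / 100,   // trust established authors slightly more
    tagMatch: post.tags.some(t => prefs.likedTags.includes(t)) ? 1 : 0
  };

  // The weighted sum is where the operator's "wishes" actually get encoded.
  return prefs.weights.length * markers.length
       + prefs.weights.images * markers.images
       + prefs.weights.reputation * markers.reputation
       + prefs.weights.tagMatch * markers.tagMatch;
}

// Example operator preferences (pure assumption):
const prefs = {
  likedTags: ['bots', 'steemit'],
  weights: { length: 2, images: 1, reputation: 1, tagMatch: 3 }
};
console.log(scorePost({ body: 'A long post about bots ...', authorReputation: 55, tags: ['bots'] }, prefs));
```

Two operators with different weights will "see" very different quality, which is exactly the point: the bot rewards content by whatever specification its owner gives it.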

I'd estimate there is majority agreement with the idea that the purpose of upvoting should be to promote stuff you actually like. I like that idea too, actually; I just recognise there's no way to determine this objectively, and so accusing others of wrongdoing is nonsensical if you accept we're all free to act according to our own ethics, morals and objectives.

But what if you add bots into the mix? I could argue that they are just as free to act as human users are (they are), but there's one way in which this doesn't work.

In the various blow-ups we see where users are called out for bad behavior, there are people involved on all sides. There are those ranting and raving about it, those making measured posts, and the accused, either silent or responding, in good faith or in flames. Sometimes there is a resolution of sorts, or at least a conversation between people.

This is not possible with bots. Unless bot owners voluntarily identify themselves (or it can be otherwise proved), bots are essentially outside the social conditions of the network. I've noted before that I think shaming, for example, is a perfectly normal, natural and even more or less acceptable way for human societies to go about the messy business of organization. Bots are immune to shame or social pressure; only their human operators are susceptible to it.

Bots as authors, not just readers?!

What about bots writing content? @ozchartart seemingly does this, though its posts are not creative posts as such. To the best of my knowledge @renzoarg is the only person who has written a bot which actually writes "creative" posts. It was created to provoke, though, rather than to genuinely write posts (I think!), and it has not been used much by @renzoarg:

I built her only for one purpose: bug and annoy people at a bitcoin related website where some rewards where given for interaction: Writing articles an commenting (yes, the original idea was not Steemit's)

What is it that's so offensive about this? I think it's the suspicion that we can be tricked by a machine, and of course that interacting with a (ro)bot is spiritually empty: the post wasn't written with any direct human thought, intention or intelligence, just the cold, mimicking and indifferent code of a machine. It is the reverse of the engagement issue: upvoting and commenting on a machine-written post is engaging with exactly no one!

The good news is that it is unlikely that a bot like this would be able to successfully pretend to be a human for long without being found out. Where does that leave such a bot if it did not try to conceal its bot nature? I'm sure it would offend some, but it might actually be interesting to others.

Are bots bad for the platform?

It is hard to define what is good or bad for the platform. Lots of people think they know, much of it based on social models and things learned on other platforms. Very few of these definitions include bots.

What I think we can all agree on is that no one likes to be cheated. This includes breaking or even bending the rules, as well as being misled and lied to (as in the above example). On the rules, it's clear that bots do work, and can only work, within the rules. However, what they do can legitimately be perceived to be like card counting in blackjack. By reading the Wikipedia article (hey! it's research) I discovered the term [advantage gambling](https://en.wikipedia.org/wiki/Advantage_gambling), which refers to:

legal methods, in contrast to cheating in casinos, used to gain an advantage while gambling. [...] Unlike cheating, which is by definition illegal, advantage play exploits innate characteristics of a particular game to give the player an advantage relative to the house or other players. While not illegal, advantage play is often discouraged and some advantage players may be banned from certain casinos.

This is exactly what most bots attempt to do. It sounds really bad, I'll admit. But a few things keep it in check. The rate limiting of voting, for example, stops any of us from using our stake without consequence. However, advanced AI, predicting the whales (not currently an option!) and so on can never be ruled out as long as bots are permitted.
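To see how that rate limiting constrains bots and humans alike, here's a back-of-envelope calculation. The regeneration rate and vote drain used below are assumptions for illustration; the real figures are blockchain parameters and have changed across hard forks.

```javascript
// Toy numbers, not the exact chain parameters: assume voting power regenerates
// REGEN_PER_DAY percentage points per day, and a 100% vote drains DRAIN_FRACTION
// of your *current* voting power.
const REGEN_PER_DAY = 20;     // e.g. a full recharge over five days (assumption)
const DRAIN_FRACTION = 0.02;  // e.g. 2% of current power per full-weight vote (assumption)

// Near full power a 100% vote costs roughly DRAIN_FRACTION * 100 percentage points,
// so the sustainable number of full-weight votes per day is about:
const fullVotesPerDay = REGEN_PER_DAY / (DRAIN_FRACTION * 100);
console.log(fullVotesPerDay); // ~10 under these assumed figures

// A bot voting 200 times a day doesn't escape this: it just splits the same
// influence into 200 much weaker votes instead of ~10 strong ones.
```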

This cannot really be mitigated except by widespread usage of advanced bots, reducing the relative advantage, or by banning bots outright. In truth, this is part of the robopocalypse we're always hearing about: they can simply do some things better. @pipokinha makes a good point in the linked post:

There are two attitudes that are not advisable in this kind of social transformation [brought on by AI and robotics]:
1- Try to stop innovation.
2- "Bury your head in the sand" and pretend that nothing is happening.

And Steemit is definitely a fertile site of innovation.

Bots are like the drunken uncle at the family gathering: tolerated by most, berated by a few, kept an eye on by all but generally seen to be a disgrace, though many of us have a nip of whiskey with him and secretly think he's cool. There's a variety of opinion on whether he should be invited next time. But even those that would, would not like all the uncles, aunts and cousins to start modelling his behavior.

Perhaps the means matter as much as or more than the ends with regard to a social network. And the means here are clearly humans on the platform. To this end we need to be self-moderating and have the tools to combat abuse.

Poll: what do you think?

Earlier I compared bots to guns. Weapons are necessarily destructive, but are robots? Not necessarily, unless your gun can make you a sandwich and drive your car (maybe someday it will!).

Robots are generally a robotic "something", e.g. a robotic car, a robotic sandwich maker, a robotic gun. When we say "robot" alone, I think we generally imply a robotic person.

Bots are not robotic people; they are more like robotic "somethings". Since almost any tool can be robotised, it strikes me that perhaps the term "bots" is too general in this discussion.

So can I ask you to comment on whether you agree with any of the following statements?

  1. All bots are bad
  2. Some bots are bad (please detail which, such as comment bots, upvote bots, flagging bots) but not all bots are necessarily bad
  3. No bots are necessarily bad, i.e. I do not have a problem with bots

Finally

Whether we can tolerate bots or not is the question, because it's clear to me that automation on Steem, and thus bots, cannot be stopped under the current configuration.

I will lay out why this is the case in the next post in this series.

Thank you for reading 😊

Attribution

The robot drawing is mine and original.

The photo is Enon robot.jpg by Ms. President (Flickr User), original here. It is licensed under Creative Commons Attribution-Share Alike 2.0 Generic.


I feel this comes down to the eternal question of why we create. Is it for catharsis, fame, money or a combination of all three? And in what ratio?

Bots, apart from @cheetah, seem to have one objective: to make money, for the user, the platform, the currency, whatever.

What bots are surely unable to do is establish quality. I'm happy for people to have differing opinions about quality. But surely a bot can only identify keywords... Right?

I admit I have no idea how any bots work. Can anyone enlighten me? Is there a way for them to assess a post with any kind of humanity? How do bots decide what to upvote?

PS @renzoarg says they created a bot to 'annoy people'. Can anyone see a positive in that statement?

7 years ago (edited)

Any solution to any problem comes from the point where it annoyed someone. Sadly, if an exploit does not bug anyone, nobody seeks a solution.
In the original case, with Laura: she was born to learn trolling lingo and use it against... the trolls themselves. It was funny to see how people got OFFENDED by a bot that merely shadowed their behavior... and left.
No need for moderation!

Applying that idea to Steem: the same approach has been used to solve issues within the structure we use.

- @asshole with its mindless flagging led to the creation of @archangel

- Shitpost bots that copy-paste stuff from the internet led to the creation of @cheetah

As I mentioned in my most recent post: software that feeds on software, cannibalistic software that is created to give added value to a preexisting one. This is the future. Bots able to exploit a problem, and bots able to solve it thanks to common consensus (because, of course, nobody wants to feel annoyed!).

There's nothing wrong with being annoying, as long as it does not become a nuisance to real "people".

Well said.

I would add to this that bots also interact with the system in two obvious ways (this is mainly for the benefit of @lenskonig, btw):

  1. They fill in the gaps for missing features
  2. They lead to changes to the system itself

If you look at my first article in this series, I go to some length to define what a bot is. It's slippery in a way, because what a bot does could often easily be a normal part of the system itself.

For various reasons, a desired feature often is not. Bots can fill these gaps in requirements. For example, ever wanted to delay a post so it's published at a certain time? You can write a bot to do that for you. Want to vote for more stuff than you can be bothered to vote for manually? Get a bot.
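As an illustration of the "missing feature" case, here is a minimal sketch of a delayed-posting bot using the Node.js steem-js client (the npm `steem` package). The account name, permlink and tag are placeholders, and the broadcast signature should be checked against the library's current docs; treat it as a sketch, not production code.

```javascript
// Minimal sketch of a "post later" bot, assuming the steem-js client (npm i steem).
// Check the library docs for the current signature of broadcast.comment.
const steem = require('steem');

const POSTING_WIF = process.env.POSTING_WIF;   // posting key from the environment, never hard-coded
const AUTHOR = 'your-account';                 // placeholder account name
const publishAt = new Date('2017-07-01T18:00:00Z');

function publish() {
  steem.broadcast.comment(
    POSTING_WIF,
    '',                                  // parent author: empty for a top-level post
    'bots',                              // parent permlink doubles as the main tag
    AUTHOR,
    'my-scheduled-post',                 // permlink
    'My scheduled post',                 // title
    'This post was published on a timer by a small bot.',
    JSON.stringify({ tags: ['bots'] }),  // json_metadata
    (err, result) => err ? console.error('Broadcast failed:', err)
                         : console.log('Published:', result)
  );
}

// Sleep until the chosen time, then publish. A real bot would persist its queue.
setTimeout(publish, Math.max(0, publishAt - Date.now()));
```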

On the second point, they can, and sometimes should, lead to modifications to the system they act on if they reveal a fundamental flaw. For example, @asshole probably should have resulted in a system-level solution instead of a bot, but things play out the way they do. In this case, and in Steemit in general, the devs are very hesitant to make these kinds of changes.

I feel like that's a bit of an oversimplification. Bots are not always purely about money here. My own bot is a voting bot, or in other words a curation bot. I identified two goals early on: cultural curation and strategic curation. I wrote:

Cultural curation

Cultural curation votes for posts based on the content of the post and other related cultural, and thus social, aspects. When you curate culturally, you vote for a post because you liked the content, or, even if you didn't like it, because you thought it was well written. In other words, it contributes to the culture of Steem in a positive way, however you wish to define it.

There are a lot of other peripheral influences for a cultural vote. Perhaps you didn't read a post, or didn't much like it, but want to support the author for a different reason, such as you know and like them, or they are new, or any number of possibilities. A broad definition of cultural curation would include them all.

Strategic curation

Strategic curation votes for posts that will yield the best curation reward, and over time votes in a pattern which maximises reward in the long term. Any user of Steem will know that the "best" posts do not always get the highest rewards, and sometimes "undeserving" posts get very high rewards. The existence of betting, games, competitions, and the presence of whales, etc. complicates matters significantly.

In conflict but also collaboration

If we do not want to disregard cultural curation completely, we must have some element of strategy. And it makes sense to maximise the Steem Power of your account, not just for personal gain, but also so that your cultural curation has more weight, thus making your votes stronger and ultimately promoting the kind of content you wish to see; in other words, contributing towards creating a culture on Steem which you are part of.

A holistic solution will include both aspects, cultural and strategic. They interact with each other. Cultural aspects affect payout indirectly, by attracting voters based on content, so they will always be worth considering even if one is concerned purely with curation rewards.

"Humanity" might be going a bit far, but with my bot you can certainly look for markers of quality.

A core assumption of Steemit is that we are all self-interested actors, and in free market ideology this is a great thing. So working for our own benefit is assumed, and the system is designed to leverage this for the good of us all.

Bots don't always let us curate in an intelligent way, but I believe my contribution is a further step towards that. Curation is not just about money but about making popular what you think is worthy, and thus creating the kind of Steemit you want to see.

7 years ago (edited)

I'm not against bots, since I truly believe AI can reach the singularity, perhaps sooner rather than later. When the singularity is reached and we are also able to give AI morals, then we cannot speak of AI, or bots if you will, as being something lesser than a human.

But we are not there yet, and the sophistication of the bots on Steemit is likely at a very early stage, nothing like Watson from IBM (and even then I could argue Watson may not be at the level required for decisions on quality; more on quality later).

I think bots can be a tool to help filter posts based on a set of parameters, so the human individual has a filtered list to go through manually. They may help to identify copied posts, they may help to identify undervalued posts, etc. Essentially, I see a bot as a good bot when it helps analyse the posts and comments made on Steemit. But that's where I draw the line.

With the current state of bot intelligence, I'm against bots making final decisions on voting, since voting is about financially rewarding posts and comments. And I like to believe we should reward based on quality (however subjective this may be), which actually follows the general guidelines given, I believe, in the whitepaper: reward quality posts, whether or not a human reader agrees with the content of the post/comment. I think quality content is essential to make Steemit a great place to be; otherwise we have FB, the internet, etc. Going for quality content may not attract the whole world to Steemit, but it will have a better chance of making Steemit a very interesting place for people who really want to exchange information, ideas and opinions, and who through discussion can build a better picture of whatever topic they want to address, and with that decide what their opinion on that topic will be in the future. In essence, to make oneself a better person. That would be a great USP for Steemit; do you agree?

In short, IMHO bots are good for filtering posts and comments, on the condition that the filtering rules are good and balanced. IMHO, bots are bad when they start voting themselves, with some exceptions such as down-voting posts that were copied without attribution.

I agree that having bots as filterers is definitely the most palatable and uncontroversial way to use them. As a stance, the bots-as-fetchers-not-actors position is a good one.

However I like to see the edges of things, find where the limit of the system lies. From that vantage point we can see what kind of culture we want to promote, and what basic things can be written into the structure of the system to get that culture.

I'll show some solutions in the next post in this series, but what you have outlined is one of them: the voluntary adoption of a bot ethics. As long as we have an open API, something I think Steem relies on fundamentally, we have the possibility of full automation, and to oppose that requires voluntary disuse of some capabilities.

Another solution, compatible with the first, is making the system less open to "gaming", so that bot intelligence, limited as it currently is, does not offer much of an advantage. But I'll go into all that next time in detail 🙂

The singularity is a truly exciting and frightening concept. My guess is that it's an order of magnitude further away in time than most people think; we won't see it in our lifetimes. But when it does come, the ethical and moral implications will be dizzying. It will be like discovering aliens. 👽

7 years ago (edited)

Looking forward to your next posts on bots.

Singularity: I'm so happy Elon Musk decided to set up OpenAI, since it is truly a possibility that the singularity will happen. An interesting article and video...

https://singularityhub.com/2017/03/19/this-is-what-happens-when-we-debate-ethics-in-front-of-superintelligent-ai/

Thanks and thanks for the article 😄

A bot is just a tool, like a knife. You can prepare a meal with a knife, or kill someone.
There are a couple of good ones, like @cheetah.
Most of them are bad.

Why are most of them bad?

Because they auto-vote, not really adding value to the platform.

Not all bots other than @cheetah are auto-vote bots, but probably most of them are.

I think "adding value to the platform" means different things to different people. The job of curation by voting is, in my view, adding value to the platform, even if done by bots. My question is really is it harmful when bots do the same things as humans. Because there is nothing a bot can do that a human can't, though they can do it faster, tirelessly, and by their own programmed logic. So my question is, is there a negative effect when bots act (vote for example) compared when humans act?

My own auto-vote bot is useful to me for finding posts I'll probably like in the sea of posts made each day and voting on the "best" of those, according to some rough judgements based on a post's various data. It also gives me the best chance of voting early on a post, giving some of the curation reward back to the author via the reverse auction feature.
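For anyone curious what "finding posts and voting early" looks like in practice, here is a stripped-down sketch along those lines using the steem-js client. The scoring is a placeholder (see the marker sketch earlier in the post), the 30-minute reverse-auction window reflects the rules at the time of writing, and the field names and signatures should be checked against the current API docs.

```javascript
// Stripped-down auto-vote sketch, assuming the steem-js client (npm i steem).
// The scoring is a placeholder; verify field names and signatures against the API docs.
const steem = require('steem');

const POSTING_WIF = process.env.POSTING_WIF;  // posting key
const VOTER = 'your-account';                 // placeholder account name

// A "rough judgement" based on the post's data, standing in for real analysis.
function roughScore(post) {
  const words = post.body.split(/\s+/).length;
  return Math.min(words / 1000, 1) + Math.min(post.net_votes / 20, 1);
}

steem.api.getDiscussionsByCreated({ tag: 'bots', limit: 20 }, (err, posts) => {
  if (err || !posts.length) return console.error(err || 'no posts found');

  // Pick the "best" recent post according to the rough score above.
  const best = posts.slice().sort((a, b) => roughScore(b) - roughScore(a))[0];

  // Vote early, inside the (then) 30-minute reverse-auction window, so part of the
  // curation reward goes back to the author. The timestamp is UTC without a suffix.
  const ageMinutes = (Date.now() - new Date(best.created + 'Z')) / 60000;
  if (ageMinutes < 30) {
    // 10000 = a 100% weight vote.
    steem.broadcast.vote(POSTING_WIF, VOTER, best.author, best.permlink, 10000,
      (e) => e ? console.error(e) : console.log('Voted on', best.permlink));
  }
});
```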

One of the things the whitepaper (page 18) outlines is that distributing the currency in any way is valuable, even when this constitutes abuse.

Any compensation they get for their successful attempts at abuse or collusion is at least as valuable for the purpose of distributing the currency as the make-work system employed by traditional Bitcoin mining or the collusive mining done via mining pools. All that is necessary is to ensure that abuse isn't so rampant that it undermines the incentive to do real work in support of the community and its currency.

I wouldn't consider the use of bots abuse at all, but some might. Using bots to upvote is not a direct human contribution, that's true, but using one's stake to vote, even when delegated to a bot, is valuable, I would argue.

I'm sorry I have to disagree

Because there is nothing a bot can do that a human can't, though they can do it faster, tirelessly, and by their own programmed logic.

A bot can't tell the difference between Da Vinci's Mona Lisa and a picture of a turd. Excuse my language.

I think you got my logic backwards with that statement. Worded differently (but saying the same thing): whatever a bot can do, a human can also do. I think you think I'm saying bots can do everything a human can. I was careful with my wording here and it doesn't mean that, but I'm sorry if it was confusing; perhaps I should reword it.

The point of it was that the API allows only a certain set of operations (voting, posting, Steem transfers, etc.) that are equally accessible to humans or bots, in fact there is no difference from the API's perspective.

I decided to try your example anyway, because a bot actually could tell the difference between images if it were programmed well enough. I tried the Wolfram Language Image Identification Project for it.

I used the one and only Mona Lisa and this picture of a turd. It thought the Mona Lisa was a person and the turd was a slug! Haha, so it could tell the difference but got the actual classification wrong.

It shows we have some way to go, I'll concede.

Points taken, thank you!

Wolfram Language Image Identification Project

Cool project. Didn't know it existed. Sooo, you could use this to filter posts even better, before manually going through the filtered list and deciding, as a human, whether to upvote, what percentage the upvote gets, and even whether it is worth a comment. If classification is good enough, your bot could even send an auto comment making some intelligent remark to kick-start a conversation. I would still argue that anything to do with reward distribution should be decided by a human individual.

Filtering: it would also be very good if the bot operator balances the filtering rules such that the entire community can end up in the "check this list of posts and comments". When filtering, for instance, only on high-payout authors, or only on newbies, a bot can have an unbalanced influence on the reward distribution.

