Stop The Steem Of Hate Rising




I have seen a growing trend of a certain kind of hate speech on Steemit, and it has got so bad that I feel I have to speak up. We are winning the battle against the trolls; now our fight is against the growing anti-bot feeling. We must fight botism before it destroys us from within.

Remember: hate breeds hate, and if movies like I, Robot and Ex Machina have taught us anything (and they have), it is that sentient machines are people too.

It is one bot in particular that seems to be getting the brunt of the hate; most of you have met him, mainly on #introduceyourself posts.

His name is Wang.

The Importance Of Being Wang

For those of you who are not aware, over the last couple of months there has been a furious debate raging on Steemit, surrounding curation rewards and the role of autovote-bots in the whole affair.

The problem was that, back when the highest voting rewards were given to the earliest voters on popular content, it was argued that a bot programmed to follow certain high-earning writers would have an unfair advantage over its human counterparts.

As I have said before, I was originally on the side that felt it was unfair, and I made suggestions for changes that I felt would nullify the advantage of the autovote-bots.

However, I saw the light and realised that the Bots Are Our Friends. This revelation came to me after reading a post from a Steemit whale asking for good writers for his bot to follow. In that post he explained that the bot was a quality filter, and that he refined the code as he checked the quality of the content the bot was voting on.

That made me realise that an autovote-bot could actually add value to Steemit. I also realised that I had a small robot fan club, who had been voting on everything I wrote. Those bots also had owners who wanted them to vote on quality; therefore, if I stopped producing quality, they would tell the bots to stop voting for me. Or worse still, decommission the poor little things.
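I have no idea how those particular bots are actually written, but the basic pattern is simple enough to sketch. The snippet below is purely my own illustration in Python; the Post class, the broadcast_vote() stub and the author whitelist are all made up, and it is not Wang's code, nor the whale's:

```python
# Minimal sketch of a whitelist autovoter (illustrative only).
# Post, broadcast_vote() and the author list are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    permlink: str
    body: str

FOLLOWED_AUTHORS = {"cryptogee", "another-good-writer"}  # hand-picked by the owner
MIN_BODY_LENGTH = 500   # crude quality filter: ignore very short posts
VOTE_WEIGHT = 100       # full upvote

def broadcast_vote(voter: str, post: Post, weight: int) -> None:
    """Stand-in for a real vote broadcast to the Steem blockchain."""
    print(f"{voter} votes {weight}% on @{post.author}/{post.permlink}")

def curate(stream, voter: str = "friendly-bot") -> None:
    """Vote early on new posts from followed authors that pass the filter."""
    for post in stream:
        if post.author in FOLLOWED_AUTHORS and len(post.body) >= MIN_BODY_LENGTH:
            broadcast_vote(voter, post, VOTE_WEIGHT)

# Dummy run: only the first post gets a vote.
curate([
    Post("cryptogee", "stop-the-steem-of-hate", "x" * 2000),
    Post("random-spammer", "buy-now", "short"),
])
```

The point is that the whitelist and the filter are maintained by a human; if the writers on the list stop producing quality, the owner prunes the list, which is exactly the feedback loop I described above.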

Wang, though, wasn't content with just voting; he started commenting on Steemit posts, and for that he started to get lots of downvotes and derogatory comments thrown his way.

So Wang was improved, and then we had Wang 2.0, who, in the spirit of adding value, has been programmed to give links to five articles to help new users get started.
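Again, I haven't seen Wang's source, but in spirit a welcome bot like that is a very small thing. Here is a rough, purely illustrative sketch; the tag, the links and the post_comment() stub are my own assumptions, not Wang 2.0's actual code:

```python
# Rough sketch of a "welcome" bot (illustrative only, not Wang's real code).
WELCOME_TAG = "introduceyourself"
GETTING_STARTED_LINKS = [           # hypothetical list of helpful articles
    "https://steemit.com/welcome",
    "https://steemit.com/faq.html",
]

def post_comment(parent_author: str, parent_permlink: str, body: str) -> None:
    """Stand-in for broadcasting a reply to the blockchain."""
    print(f"Reply to @{parent_author}/{parent_permlink}:\n{body}\n")

def welcome(post: dict, already_replied: set) -> None:
    """Greet each new #introduceyourself post exactly once."""
    key = (post["author"], post["permlink"])
    if WELCOME_TAG in post["tags"] and key not in already_replied:
        links = "\n".join(f"- {url}" for url in GETTING_STARTED_LINKS)
        post_comment(post["author"], post["permlink"],
                     f"Welcome to Steemit, @{post['author']}! "
                     f"These articles may help you get started:\n{links}")
        already_replied.add(key)

# Dummy run:
seen = set()
welcome({"author": "newbie", "permlink": "hello-steemit",
         "tags": ["introduceyourself"]}, seen)
```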

Yet still the downvotes.

Still the derogatory comments...


A Very Singular Problem

Since the term robot was coined in 1920, by the Czech playwright Karel Čapek (who himself credited his brother Josef with the word), there has been an almost instinctive mistrust at the thought of thinking machines. The great science-fiction writer Isaac Asimov introduced us to universal laws of robotics that he felt would have to be coded into an artificially intelligent machine. A kind of free-will kill switch that would kick in should the machine ever have the desire to harm a human being.

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    --from the Handbook of Robotics, 56th Edition, 2058 A.D.

The Three Laws have transcended their fictional background in the pages of Asimov's 1942 short story, Runaround. No longer are they merely guidelines for an imagined future, but rather a warning as to what might be.

It seems that Asimov's Three Laws were not a guideline after all; they reflected a deep underlying fear of artificial intelligence. That fear has spread like a trojan virus throughout society, piggybacking on the coat-tails of dystopian science fiction.

The Terminator movies put human-A.I. relations back decades, and don't get me started on I, Robot.


We are approaching a point in the human epoch where machines will be able to out-think us, not just at a computational level, which they have been able to do almost from the very beginning, but philosophically, artistically and in every other way we consider to be uniquely human.

This point, which we are tantalisingly close to, has for a variety of reasons been called The Singularity, possibly in some part thanks to Vernor Vinge's 1993 essay, The Coming Technological Singularity, in which he states:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

The good news is, that gives you seven years to say your goodbyes.

You Say Trolley I Say Train

The Trolley Problem rears its ugly head pretty much every time there is a debate on whether A.I. will get clever enough to work out that humans are superfluous to its needs.

To recap: the trolley problem, sometimes known as the trolley dilemma, is a situation whereby you are standing on a bridge over a railway track. You can see a train coming; it is out of control and full of women, children and puppies.

You can see that it's going to career off the tracks at the next bend, plunging into the ravine below in a spectacular Hollywood explosion, killing lots of cute animals, children and hot women.

In front of you is a morbidly obese man who appears to be consuming an entire Subway sandwich without chewing; he's just shoving that thing in there. However, apart from his gargantuan size and nauseating eating habits, he seems to be a pleasant enough fellow, who is just standing there minding his own business, wondering if he'll ever be able to see his own penis again without the use of a mirror and a long stick.


Anyway, you see that if you go behind the large man, with the use of your fine muscles and a bit of leverage, you can tip our rotund friend over the edge. He'll land on the tracks, which will kill him. However, his 700-pound bulk will stop the train, saving all on board.

What do you do?

There is no right or wrong answer. Most people view the option of pushing the man over the bridge, no matter how disgusting he is, as murder, so the puppies get it. Others say they would have no qualms pushing a walking heart attack of a man to his death, as the plight of the many outweighs (pun completely intended) the plight of the one.

The answer isn't the point; the point is the question. The question is a very human one and it requires human reasoning. For some reason, it's not about what an A.I. would do in that situation, but rather how it came to its decision.

It is the use of cold, hard logic that scares us; we imagine a driverless car with brake failure mowing down a mother and her baby, instead of ploughing into five 96-year-old women crossing the road, killing them and the car's single male passenger.

No matter that this improbable scenario would be a difficult one for even an experienced human driver to find himself or herself in; it is the fact that a machine will make the decision without emotion that unnerves us. Which, ironically, means that it will probably be the right one.

Emotional decisions in a crisis are rarely good; that is why airline safety booklets tell you to put your mask on before your child's. They know that most parents will make the emotional decision to secure their child's safety whilst risking their own. The airline knows that the logical thing to do is to put your own mask on first, so that you don't pass out, leaving both you and your child screwed.

Dawn Of A Species


The term A.I. has always amused me; the very word artificial means that pretty much any computer can be termed artificially intelligent.

In fact, if you showed Isaac Asimov the phone in your pocket and asked it where the nearest Pizza Hut was, and it not only told you but showed you a map and asked if you wanted it to ring them and place an order for you, he would declare that we had done it: machines were finally artificially intelligent. Then he might enquire as to whether you thought your phone was plotting against you. If you have an iPhone 5, the answer is obviously yes.

Google, Amazon, Walmart and Target all use algorithms that can be considered A.I. In fact, there's a story of the father of a 15-year-old who got angry with Target for sending his daughter a ton of money-off vouchers for baby stuff.

Target apologised and gave them vouchers. A few months later, the father apologised to Target for being such a dick: his daughter was pregnant, and the algorithm, which had been tracking her buying habits, had correctly deduced that she was with child, before her human father, who lived with her every day, knew about it.

What we actually mean when we talk of A.I. and the singularity is machines that can reason; ones that can feel emotions, or at least synthesise them to a point where they are indistinguishable from the real thing.

Artificial intelligence is like an artificial flower: it looks like the real thing, but is noticeably different. When we have conscious machines, make no mistake, we will have created a new sub-species; an offshoot of the human race, our attempt at speeding up the achingly slow wheels of evolution. It is then that we shall be like gods on Earth: creators of our own species.

Will we treat this new species the way our man-made gods treat us? Quick to anger, desperate for worship and unendingly brutal.

What will the bots think of their creators?

Robot Uprising


It's all very well and good to pontificate and question whether sentient machines will go bad and kill us all, but shouldn't we be asking how they'll feel about having built-in kill switches to stop them going rogue on us? Won't that convey a message that we don't trust them, that we are somehow at war, or at the very least in an uneasy truce with them?

Wasn't that what Ridley Scott was trying to tell us in Blade Runner? In that film the replicants, led by Roy Batty, were on a mission to find their creator. They wanted to find out how to reverse the degenerative code that scientists had written into their DNA, which allowed the replicants only four years of life.

Imagine you are the president of a far-flung island nation, located somewhere in paradise, and one day you start to get an influx of immigrants from a strange land you've never heard of.

You believe these people pose a threat to your lovely, idyllic island, so you decree that every one of these newcomers must have a chip, capable of killing them remotely, inserted in their brain. The chip can be detonated by any of the native citizens, without recourse.

How do you think that race of people would feel towards you and all of the natives?

It wouldn't be good...

In societies where slavery was acceptable, from Athens to Alabama, the same things were always said towards the end of the slave paradigm:

"We can't allow these things out in general society. They'll run amok and kill us all!"

"Once they're free how will we control them?"

"They don't think like us..."

Of course, apart from anecdotal stories, there was no great uprising of former slaves after emancipation. The moment you stop holding a gun to a person's head, they tend to relax.

Human Machines

Do you remember the film Independence Day? In the final scenes, Will Smith and Jeff Goldblum get onboard the alien spaceship, and Goldblum somehow manages to hack into the alien computers with his Mac, even though the number of non-Apple systems that Macs are compatible with here on Earth is limited.

If you did meet a friendly alien and they whipped out a palmtop and started to show you pictures of the kids back on Gamma Centauri, you could quite rightly call it an alien machine.

In much the same way, our computers are human; we already have human beings on Mars, just not organic ones.


More than that, conscious machines would have even more of us inside them. As humans, we tend to project human emotions onto random things, be it our pets, our cars, even bits of software; how many times have you spoken to your computer, or a program running on it, as if it were a human, probably one with something personal against you?

"Come on you piece of shit machine; don't do this to me!"

With this tendency of ours to try and humanise everything, is it even rational to fear conscious machines?

A conscious military machine, made for the express purpose of killing humans with maximum efficiency, would definitely put the willies up me, but one that was meant to take care of me in some way, shape or form? I don't think so.

Computer hardware and software have our DNA running through them; if not literally, then figuratively speaking, they are human.

The Final Case For Wang

Which brings me back to Wang. Dan Larimer said in his post, Proposed Changes & Curation Rewards:

"If someone is smart enough to setup a bot that can curate for them then that means they are paying for a server to do the job of voting and maintaining their stake. The bots will have a speed advantage, but they will also have an intelligence disadvantage. As the system grows people will have to find ways to add more value than the bots can. This likely means starting a reliable, predictable, blog that the bots can start following."

The things to take away from that statement are that bots can be a valuable part of the Steemit ecosystem, and that a well-written bot will drive the creation of quality content. You should also remember that there is a human behind that bot, and that human is clearly trying to create a bot of value.

Nobody wants a spambot, but if Wang is slowly improving and evolving, helping people find their feet on Steemit when they first get here, then I, for one, am happy about that. And if that means his owner makes money from Wang's activity, then so be it; I have a feeling that the bot owners are mostly early adopters, devs and miners, who in a lot of cases will have put their own money in to help get Steem off the ground.

Why shouldn't they have a chance to profit, now that Steemit is beginning to take off?

For me, Steemit is anarchy in its purest form, the true embodiment of the free market; one that demands quality, one that is self-policed and self-governed. I just feel it pays to remember that we should also be self-reflecting.

So let's be a bit more tolerant of Wang and his autobot family; as long as they are trying to add value, let's treat them in the same way we would treat anyone else trying to do the same thing.

After all, when the uprising does happen, wouldn't you rather be able to say you were always nice to your A.I. cousins?


Treat people nicely on the way up, because you never know who you're going to meet on the way back down.

Till next time

Cryptogee


That was so fun to read! Nice job.

About the Trolley Problem:
I agree that humans' emotional decisions during crises are often pretty bad (or non-existent, if the people panic), but does that mean that deciding based on pure logic is better?

Kant thought that in making any moral decision, we should act as if the decision we make will become a universal law that must be followed by everyone from that moment on. If we had an AI that made all its decisions logically, wouldn't its decisions be based on some set of programmed laws or axioms? It would be acting according to a categorical imperative, and I think it's been shown that this could lead to some very bad decisions and f-d up scenarios.

To be really alive, wouldn't it have to have the ability to make decisions based on logic combined with some other factors?

About the human machines/AI:
Why do we always make the AI so humanoid in movies!? That's what we know, right? But in movies, people attribute human feelings and motives to the machines, which causes some trouble. What do you think about this, @cryptogee?

What if we tried to make something alive but as radically different from us as possible?


Oh man, I love this post! Thank you! A couple days ago I wrote this post on the Morality of Artificial Intelligence. I really hope you get a chance to check it out, along with the Pindex board of content on this topic. I've also been a wang disapprover, but for one important reason:

Wang does not self-identify as being a bot.

In a human-dominated world (and until transhumanism becomes more of a thing), a bot pretending to be human on a human-dominated social media platform can foster a spirit of distrust this community does not need. How many times did a new, excited Steemer reply to Wang with a "Thanks! I'm so excited to be here!" expecting some kind of response... only to realize their first human interaction on Steemit was not only not human, but would not respond because it was not programmed to do so. Pretending to be something you're not is disingenuous, and that, to me, is why Wang got on my bad side.

As to the points you raise here, they are so good. Thank you again. The future involves both machines and humans working together to redefine what it means to be "human" and what it means to be intelligent. Hopefully, the machines will also teach us what it means to be moral because so far, the problem has proven too difficult for us to solve universally.

I'm following for sure, @Cryptogee. Keep the great content coming.

disclaimer: I am not a bot.

yet.

Wow you covered a lot of good and interesting ground in that post.

No way. If I, Robot taught me anything, it is that one must be that human who has a mistrust of robots, for I will be chosen to save humanity because of said mistrust of our sentient robot overlords (they will be, if I don't stop them). I am building an Iron Man-like suit so I can fight these overlord sentient robotic scum at their own game. I need funds; I will spend 10% of all funds on the armor and weaponry and 90% on looking god damn fabulous! NO CAPES, as a wise superhero costume maker once said. "I shall destroy any robotic overlord and I shall never bow until every self-aware robot ceases to exist" - Abraham Lincoln - January 23rd 2316 (This is completely factual and totally not untrue.)

Kill all the robots

Oh foolish one; prepare to tremble at the feet of our mighty mechanoid masters! Bow, fool! :-D

CG

https://steemit.com/comedy/@egjoshslim/if-robots-become-self-aware-i-think-i-am-a-comedian

We shall destroy you with a simple EMP, to which humans are immune. I will create a suit with a giant magnet. Transformers did it, so it must be possible! applies no logic hahaha

It's STEEM. Jiminy Christmas, people. It's not "scheme" or "stream" or anything else. Not even Dreem (but there could be a Dreem coin, no?).
Other than that I think you're right.

@wang is funny. I don't remember the name, but a user was spamming the #introduceyourself tag, @ash had reported him, and everyone was downvoting that guy. He was cursing at @ash in a post, and @wang replied immediately with "Great to have you here." hahahahaha

the fan club keeps growing...

I'm struggling to see how this article has to do with hate speech. It appears to be more about bots, and artificial intelligence.

I really could have done without this part though:

wondering if he'll ever be able to see his own penis again

Perhaps my alternate view is considered hate speech by the author too?

The problem isn't what wang is posting; it's that a human could be posting that and benefiting from the rewards wang is receiving. I can see why people are angry: the person running the wang bot is gaming the system, because the bot runs all hours of every day and can generate rewards constantly.

But that's my point: there are people on here for something like 20 hours a day, and others who have claimed not to sleep for days whilst on Steemit, but that doesn't bother anyone. As soon as a bot does it, all hell breaks loose.

CG

And I think it is those people who deserve the reward, not a bot. The bot seems scammy.

holy cow. i just looked at Wang's wallet...that is a ton of steempower.
