RE: Stop The Steem Of Hate Rising

in #steem 9 years ago (edited)

You make some excellent points @moksha; let me do my best to answer them in order.

  1. I agree that humans' emotional decisions during crises are often pretty bad (or non-existent, if the people panic), but does that mean that deciding based on pure logic is better?

Good point. I think what I was trying to get across is that in situations such as the trolley dilemma, a machine that wasn't clouded by emotion could make the best choice.

Pure logic isn't infallible, but I guess it's the apparent "coldness" of logic that unsettles us. Picture a trolley-dilemma situation where an A.I. has a 2% chance of saving a baby from a burning building, an 89% chance of saving the pensioner, and a 0% chance of saving both.
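
To make that "coldness" concrete, here is a minimal sketch of the expected-value reasoning a purely logical machine might use. Only the 2%/89%/0% figures come from the example above; the `choose_rescue` function and the odds table are hypothetical.

```python
# A purely illustrative sketch of "cold" expected-value rescue logic.
# Only the 2% / 89% / 0% figures come from the example; the function
# name and the odds table are hypothetical.

def choose_rescue(odds):
    """Return the rescue attempt with the highest probability of success.

    With one life at stake per option, the highest success probability
    is also the highest expected number of lives saved.
    """
    return max(odds, key=odds.get)

# 2% chance of saving the baby, 89% chance of saving the pensioner,
# and a 0% chance of saving both, so only one attempt can be made.
rescue_odds = {"baby": 0.02, "pensioner": 0.89}

print(choose_rescue(rescue_odds))  # -> pensioner: the logical but "cold" choice
```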

If the A.I. saves the old lady, we are horrified that anything could leave a baby to die. However, as you point out, for true machine consciousness it would have to base decisions on other things; it would have to be human.

Maybe we build fallible A.I.s?

  1. About the human machines/AI:
    Why do we always make the AI so humanoid in movies!? That's what we know, right? But in movies, people attribute human feelings and motives to the machines, which causes some trouble.

I wrote a whole section on this and then edited it out, because I felt it made the article too long and went a bit off point. So I'm glad you asked.

We make A.I.s humanoid in movies because it is part of human nature; it is called anthropomorphism: the tendency to cast human emotion onto things that aren't human.

It is why we claim our newborn babies have individual personalities (they don't).

It is why all Gods, in all religions, throughout history, have been humanoid.

It is why we see personality in our cats and dogs, why we talk to machinery, and a whole lot more.

However, as I touched on in the article, I think this anthropomorphism will benefit us when it comes to A.I.; perhaps because of this phenomenon, we will make the machines more human.

  1. What if we tried to make something alive but as radically different from us as possible?

I think that would be very bad; then, I think, we would usher in our own destruction. They need to be like us to empathise with us.

CG

@cryptogee, OK, I think I better understand what you were saying about the trolley dilemma now. A truly conscious machine would use logic and other human qualities to make decisions, but it could be a lot better than we are at using logic when it is appropriate, as in the case with the 2% and 89% chances of success. That's what seems "cold" and disconcerting to us.

You should do a separate post on anthropomorphism! Maybe it will benefit us when building A.I. In the movie "Ex Machina," I thought this tendency sometimes caused the characters to assume machines had empathy when they didn't, which was scary.

Whether something has to be like us to empathize with us depends, I think, on your philosophy and definition of life. If all life is somehow part of a "universal mind," and therefore connected or literally one, then maybe it's possible for advanced life forms to empathize with each other simply because they share the quality of being alive. Then they wouldn't have to have other things in common.
