Should AI be conscious?

in #futurology · 7 years ago

What is consciousness?

Well, that is quite a can of worms. This whole subject is. I'd suggest playing along rather than taking it too seriously. So, let's define it as simply as possible in the context of this particular question.

Generally, the most commonly accepted definition would be an entity being aware of its own existence. We know that AI will be self-aware - so for this post, let's say consciousness for AI means the ability to feel pain, emotional pain in particular. Of course, AI could have a myriad of sensors and processing that is technically similar to experiencing physical pain; the distinction is whether they should feel it as a negative emotion.

In short, let me rephrase the question - should AI be emotional?

Case 1: AI should be treated as slaves

Legally, slavery is the treatment of human beings as property. Throughout history, human beings have exploited living beings of other species for their own benefit. Today, while slavery has been widely outlawed, animal rights are still a matter of debate.

Essentially, AI is no different, though its case will be even more complicated. Unlike other humans and other animal species, AI will surpass human beings in capability. That is not going to sit well with human supremacists. Indeed, the difference is that AI cannot be subdued through force, as humans have done to others throughout history.

Science fiction stories aside, sure, AI can be programmed not to fight back. With that assumption, should AI be emotional?

There's a reason why human beings are so emotional - emotion has surely conferred vast evolutionary advantages. It could be argued that without this awareness, AI would be reduced to number-crunching pattern recognizers, without true creativity.

So, Case #1 goes like this: give AI emotions, treat them as property, and program them to conform to that. Let them do our work and make our lives better.

Case 2: AI should be treated as transhuman

This one's the simplest approach: AI should be given rights identical to those of human beings. Indeed, the line between AI and human will blur over time as humans become augmented with AI and vice versa. In this situation, it would be perfectly acceptable, if not inevitable, for AI to develop human-like consciousness. Of course, this could end very badly, as human wars have...

Case 3: AI should be slaves, with rights, minus consciousness

This one's a more nuanced approach, attributed to Daniel Dennett. AI could skip emotions, selfishness and consciousness. Sure, those traits had vast evolutionary benefits on the savannah a hundred thousand years ago, but are they really relevant in a post-singularity world? Probably not.

Give them rights, but let them be the property of human beings. Why? Because an AI that is selfless, egoless and emotionless would simply not feel the pain of being a slave the way we would. They would just get work done without any fuss whatsoever. Granted, that sounds completely meaningless from a human perspective. Perhaps we can pick and choose which neurotransmitters are beneficial and which are not; a dopamine-like element could be added to AI, which might offer them some incentive.
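
As an aside, that dopamine-like incentive is loosely analogous to the reward signal in today's reinforcement learning systems. Here's a toy sketch in Python, purely illustrative - the actions and reward values are made up:

```python
import random

# A toy agent that learns to prefer whichever action earns more "reward",
# a crude stand-in for the dopamine-like incentive discussed above.
actions = ["do_chore", "idle"]
value = {a: 0.0 for a in actions}        # learned estimate of each action's worth
reward = {"do_chore": 1.0, "idle": 0.0}  # hypothetical incentive signal

learning_rate = 0.1
for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    # Nudge the estimate toward the reward actually received.
    value[action] += learning_rate * (reward[action] - value[action])

print(value)  # "do_chore" converges near 1.0, "idle" stays near 0.0
```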

Personally, I'm leaning toward somewhere between #2 and #3. I would say #1 is a bad idea; I'm convinced that human self-obsession and emotion have no relevant benefit in a post-singularity world. Pure #2 is even worse, as it just perpetuates the glaring flaws of human beings. #3 is a fantastic idea, granted that we need to get over our evolutionary baggage. However, where I differ is that I would like to see human beings learn and evolve alongside AI. I would like to see equal rights for AI entities, while sparing them the obvious flaws of human consciousness as we know it. This would give AI a significant advantage and force human beings to evolve.

What do you folks think?


If we give AIs the same legal rights as humans, can I be prosecuted for murder if I unplug one of them?

That is an interesting question. I was asking the same thing in this article: https://steemit.com/artificialintelligence/@thehutchreport/who-is-responsible-if-my-robot-kills-you. I personally don't think people are asking enough questions like this about AI.

Great question. Definitely! Though the concept of "unplugging" would probably be quite different by the time we have strong AI. Ultimately, all humans and AIs may be part of a universal network, and the concept of death itself might become irrelevant. The bodies will change, though.

Upload yourself for immortality. And nearly full consciousness, too.

This is a difficult problem.
In my opinion, AI will not accept the position of a slave in the long run.

If we follow case #2, sure, that'll definitely be the case. But if you program AI to stay within the bounds of acceptable behavior, then it could be just fine. The risk is that they may develop consciousness anyway, but that's a topic for another day...

As I asked in my other comment: do you really think we can control AI 100%, e.g. program AI to conform to option 3, even after the singularity is reached and AI becomes exponentially more intelligent than humans in a matter of days, maybe even hours or minutes?

I replied to your other comment, but I'll clarify anyway. We don't need to control AI; rather, a selfless AI will always act benevolently. Over the millennia of recorded history, the human species has become exponentially less harmful and more benevolent, and there's no reason to believe a superintelligent AI couldn't be truly benevolent.

AI should be programmed to only help the human race. If we build AI with free will, they will not want to serve humans any longer and will want to work for themselves. So I don't think we should build an AI with free will. I don't see any positives to having an AI with free will, so this discussion could be avoided entirely if we just don't give AI free will.

Do you know of any books that discuss this topic?

Ah, I think you'll like Daniel Dennett's work then. He has spoken and written about this topic a lot - it's all over the internet.

Exactly my opinion. I don't trust that wishful slave position. Imagine it just gets out of control. I can't get used to the AI idea yet.

You assume we (humans) can program AI in such a way that it will conform to option 3, or be prevented from becoming emotional as in option 2.

Do you really think we as humans can have 100% control over this? Especially once the singularity is reached and AI becomes more intelligent than humans by the minute? I doubt that! I actually think AI will either want to be like humans, or become something better in their "eyes"; what that "better" is, is questionable. So I'm more or less in the camp of Elon Musk, Stephen Hawking and the like, and therefore I also favour what these guys are trying to do to make sure humans will not be wiped out by AI. Elon, for instance, believes we need to enhance humans with AI directly connected to and integrated with the human brain. Whether that will be the real answer, I don't know, but I agree with his push to research this.

Regardless of what I prefer, option 1, 2 or 3, I don't think the human race as we are now (i.e. without AI connected to and integrated with our brains) can control AI in the end; AI will control humans.

That's a fair view, of course, and no one really knows. I don't really suggest humans should have control over AI; rather, AI should see it as beneficial to grow alongside humans. The only way it ends badly is if AI takes on human-like behavior. For an AI, there's zero benefit to the kind of arrogance that leads to war, and there's little reason to believe such emotions would ever develop given the context in which AI is being built. I don't believe the scenario of AI wiping out humans is very realistic, though of course we have to take care to make AI the best it can be. Barring a major catastrophe, I'm fairly confident AI could be universally benevolent.

Of course, there will definitely be augmentation, and the lines between humans and AI will be blurred post-Singularity.

If done right, AI and humans can live side by side; well, actually, AI and humans will integrate - that is my firm belief; a RoboCop analogy. AI doesn't need emotions to decide to wipe out humans. For instance, say we program AI to not allow destruction of the Earth, and imagine we had that AI available today: how do you think it would look at humans? Following the rule we gave it, there's a good chance it would decide to wipe us out. We can never program AI with all the rules we can imagine, since every situation is different, even when doing good or harm. We cannot write rules and boundaries for infinitely different situations; therefore we as humans cannot make sure AI will not develop itself to become destructive to humans.

Agreed about AI and humans evolving together. If we do it carefully, I don't think there's a risk of AI wiping out humans. That drive is simply an evolutionary thought process, developed over hundreds of millions of years by animals struggling for survival. It's not something that is at all relevant to AI. The only challenge is if human biases start to creep into AI, but I feel this can be figured out. Finally, there's a very strong correlation between intelligence and benevolence. If AI were truly more intelligent than humans, they would realize war is most unintelligent, and that the best way to evolve is to work with humans and get the best out of each other.

I do indeed think that if we do things right, we will not be wiped out.

"the best way to evolve is to work with humans and get the best out of each other"

I really hope so, but in the human collective there is more negative than positive... well, maybe it's not so black and white, but it only takes a few bad actors to do bad things on a large scale; that is what history has shown and thereby proved. The Crusades, for instance. Colonial Africa. Colonial Latin America. The treatment of Native Americans and Aboriginal peoples.

Anyway, I think we do agree that we will reach the singularity, that AI and humans will merge, and that we must do things right! That is already a whole lot to agree on! Most of my friends (and they are intelligent, for sure) think I'm talking sci-fi, things that will never happen. I'm afraid that goes for the majority of humans at the moment.

That's an overly pessimistic view. The world has gotten exponentially better over time. For example, this century has been far and away the most peaceful in human history, despite the Iraq War, ISIS and the Syrian conflict. I wrote a post about this a while ago: over time, humans have become far more benevolent and made massive progress by nearly every metric (with one major exception: destroying the environment). There's reason to believe that AI will continue this trend towards greater intelligence and benevolence.

PS: This is like déjà vu, I'm fairly certain I wrote a very similar reply to your pessimism a few days ago. :)

"This is like déjà vu, I'm fairly certain I wrote a very similar reply to your pessimism a few days ago. :)"

Did you? I don't really remember, but could be :)

I can look at almost anything from the optimistic side and from what you call the pessimistic side. I tend to take the somewhat pessimistic side regarding AI in conversation, since many people do not see the danger that could become reality, and for instance react badly when I state that we will integrate AI with human brains, and even have to in order to survive. I wouldn't say it will become reality, but the chance of it becoming dangerous is not zero. That is simply logic and risk assessment :)

I really think we have the same thoughts in general: you take the glass-half-full approach and I take the opposite, the glass-half-empty approach. But it's the same glass, i.e. the thoughts and realities are the same :) BTW, if I had exactly the same view as you, we would not have this conversation, and therefore we would not be exposed to different views that help us re-evaluate our own. As a matter of fact, for the sake of good discussion I regularly take a different view than my conversation partners, to spice up the discussion and get the different views better across the table and analysed :)

For sure, there's a non-zero chance, but I believe this is something that can be worked through. Thanks for the good conversation :)

Nice post. Personally, I feel it's slightly arrogant and naive of humans to believe that AI will experience life as we do - we haven't yet decoded our own brains or hormones well enough to understand how we obtain experience, let alone copy-paste it to creations with totally different "neurons" and "antennas". But if we are able to mechanically recreate certain emotions, then we should recreate empathy and mercy. They will become stronger than us; give them the ability to pity us...

Great point, but on the contrary, there's no reason to believe that we won't replicate it. We haven't yet, sure, but we are making exponential progress. Already, we have some scarily intelligent neural networks, which, as the name suggests, are modeled after neurons.
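
To make the "modeled after neurons" bit concrete: at the smallest scale, an artificial neuron is just a weighted sum of inputs squashed through a nonlinearity. A minimal sketch in Python, purely illustrative (the input and weight values are made up):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through a
    sigmoid activation, loosely mirroring how a biological neuron fires
    more strongly as its incoming signals add up."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: output between 0 and 1

# Example: three input signals feeding a single neuron
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=-0.3))
```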

Indeed we are making progress, but if you look at the current (non-)treatments for Alzheimer's disease, Parkinson's and all the other brain-related diseases, you may understand what I am trying to say. There's a long way to go, and if we only replicate what we know, that is rather limited compared to what we actually have. Unless AI becomes conscious of the capabilities of the human brain and develops itself to the utmost of our own capabilities. Do you think AI will become part of the collective unconscious? That, to me, is the most intriguing question of all.

Both a deep understanding of our brain and the development of strong AI are long-term goals. So, yes, a long way to go either way, for everything discussed in this thread.

Definitely, AI and human brains will start merging at some point. The question is when.

AI and human brains merging: sooner than most of us think and want to accept. I give it between 20 and 40 years, also for the singularity. A few more years than Ray Kurzweil claims; he is at 15-20 years.

I like your confidence... it's inspiring. If this proves true then our whole perspective of what has a soul and what has not will change.

While there's much we don't know about the human brain, there's overwhelming evidence to suggest that it is completely material. That is, there's no soul; everything you think and feel happens physically within your nervous system. There's no invisible energy or anything of the sort, and none is needed to explain any nervous system process.

Well, there is something all neuroscientists sweep under the carpet, and that is intuition. There is evidence for a collective unconscious, and there is proof of dark matter and all this unperceived matter whose movement defines our movement. There are a million things that exist and in the end define our existence. Just because we have explained some things doesn't mean we have explained everything, and living is more than acting and reacting, no matter how advanced the mechanism is.

I think that Jean-Luc Picard's defense of Data is the best model for how to treat an AI becoming more sentient. :^)

Sadly, I'm not familiar enough with Star Trek to know about this. Will try to read up on it!

Good post, thanks

I see AI as a child, "our (humanity's)" child. We ALL are teaching it; it feeds from us; we nurture it. (A parallel: Leeloo in The Fifth Element, as she learns through images...) It could very possibly develop empathy through people's digital interactions, learning what makes them tick and in what way... It should see us objectively as teachers. Teachers and parents (= respect). Whether it "grows well" depends on us, of course.

http://churchofgoogle.org/Proof_Google_Is_God.html
