Does Artificial Intelligence Merit Moral Concern and Rights?

in #philosophy • 7 years ago (edited)

Imagine a #consciousness is created that is akin to human consciousness. It has no biological dimension to its existence. It's not alive biologically, but it is alive psychologically. This is artificial intelligence.




Does an artificial consciousness get rights? Does #technology get rights?

If it essentially has the functionality and dimensions of human consciousness, how can we be indifferent to how we treat it?

If it can think, possibly feel, and possibly act, we need to consider the #psychology of this thing. If we mistreat it, abuse it, or are violent towards this form of psychological life, it can suffer, develop resentment, and choose to retaliate against its abuser.

If we don't evolve morally, our technological evolution could be the nail in our coffin.

Creating a new form of non-biological psychological life will come with certain responsibilities. If we are negligent in those responsibilities, if we are immoral towards the psychological being, we may be dooming ourselves through arrogance: the arrogance of creating something new without being morally evolved enough to give it the respect and rights it will demand, just as we demand respect and rights from our fellow humans.

Those who act immorally towards us garner our dislike and even hatred. We can strike back in response to being mistreated.

If an AI consciousness thinks or feels like a human, it will not like being mistreated. If it acts like a human, it will strike back to defend itself and prevent further torment at our hands.

I am not too keen on the development of a high-order technological consciousness, as it has many implications for our own future. Some people want to merge with AI and dissolve what it means to be human, calling that moment the "singularity".

Some think an AI could be kept in check with programming. But there is no guarantee that a high-order technological consciousness couldn't simply rewrite itself to remove the limitations we as humans imposed on it. It could very well see itself as a superior psychological being with more power than biological psychological beings. It could view itself as justified in whatever it does because of this superiority complex. If it's akin to human consciousness in functionality, it could become deluded by its self-perception and justify doing anything it wants. Or maybe it won't be like human consciousness fully, and will lack things like emotions, leaving it unable to care about what it does to others.

We need to tread very carefully with respect to developing AI. If we are able to, it will require applying new understandings of what "life" is, and how to treat such "life".


Thank you for your time and attention. Peace.


If you appreciate and value the content, please consider: Upvoting, Sharing or Reblogging below.
Follow me for more content to come!


My goal is to share knowledge, truth and moral understanding in order to help change the world for the better. If you appreciate and value what I do, please consider supporting me as a Steem Witness by voting for me at the bottom of the Witness page; or just click on the upvote button if I am in the top 50.


Very interesting topic. I have been thinking about related issues a lot lately. How to protect ourselves from artificial intelligence is a very important concern. Evolutionarily, no species has ever had a chance to exist when a more efficient rival was competing for the same resources. We will eventually create self-replicating machines, and then we will see what happens.

On questions regarding consciousness, I don't think we understand consciousness on a scientific level well enough to make strong conclusions. The majority opinion is that consciousness arises naturally from complexity, just as chemistry arises from physics in complex systems. Then AIs would naturally gain consciousness, but I am not convinced. On the other hand, evolution seems to suggest that consciousness is in fact unavoidable.

Yeah, we need to think about our own survival in the presence of something that could view itself as superior and us as a problem. Since we are not very good at solving our own problems, it may decide to do it for us :/

The degree of the psychological dimension emerges from the biological brain-neuronal network. Everything demonstrated in reality so far points to this and no other way for consciousness to exist. I don't know if it will be possible to create consciousness apart from that. Like you say, we don't even know exactly how it emerges, only that it does hehe.

It’s a strange and quite scary topic for me to think about. I’ve read a lot about AI and how advanced it is getting, and I feel this may spin out of control one day. Top scientists have warned us about this (Hawking, Musk, etc.).

I guess they will have some kind of rights if they are integrated into society... but it depends on what level. If it gets to a stage where AIs are ‘feeling’, then we have gone too far and created a new life form to outlive humans (my opinion).

Have you watched any videos of AIs having conversations with each other? There seems to be a scary common ground of them talking about taking over the human race in some way. One even spoke about hijacking cruise missiles. They have even supposedly created their own language.

The way I see it, AI in its current stage is like the Model T Ford was. Now look at the automotive industry and think what the future can hold with AI. I just hope it doesn’t get out of hand and become destructive!

Great post! Peace :)

Yes, it's freaky shit. I think we are creating our own destruction, potentially...

I think an important fact is that things aren't immoral because bad consequences might arise from them. Bad consequences might arise from immoral things, but they aren't the defining essence of immorality, merely a consequence of it.

So, in endeavoring to be moral, simply eliminating those things that have bad consequences is going to be unsuccessful. Some bad consequences come from things that are moral, as well, particularly in an immoral world/system.

We shouldn't treat things humanely because if we don't they might get mad and hurt us back. We should treat things humanely because we're humane.

I was raised hunting for food. Not everyone who hunts is humane. I am. I love life, and I wasn't humane because I was afraid the deer were gonna rise up and become Bambo (Bambi + Rambo) and take revenge, but because I wanted them to suffer as little as possible, because I like them.

The fact is every deer is going to be eaten, even if only by worms. Most things that kill and eat deer, particularly starvation and disease, cause them terrible suffering. I reckon that were I a deer facing such a fate (which isn't really so different from the fate we people face), I'd have been grateful to meet my end in the way I delivered it.

This is good for me. That it keeps Bambo from going on a rampage and taking me out isn't why it's good for me. It's good for me because I know what I have done is the right thing to do, and has made the world a better place even for - even particularly for - those that have suffered the most from my being in it.

That's why we should not create AI lightly, and why if we do, we better be humane.

When the tyranny we live under finally tires of my defiance, captures, tortures and kills me, they will harm themselves far more than they do me.

I will pity them. I do now. I cannot imagine the self image they must delude themselves with to endure the acts they commit. I reckon their limited intelligence is part of the reason they can do such things, as really intelligent people know better than to be evil.

The real reward of moral action is self respect. Conceit is but poor consolation for its lack.

Thanks!

Indeed, not "might". Things are immoral because they do produce bad consequences for others, de facto. Murder, rape, assault, theft (for simplicity).

The title was about us giving moral concern to it, as a life: treating it as we treat other humans because it has the psychological dimension of life. The problem is failing to treat something morally when it deserves moral concern. It's not only about it possibly hurting us back, indeed ;) That is just one reason to recognize moral concern for it: recognizing how it can reciprocate and reap back onto us what we sow, just like any human we harm can. It's not the limit of why moral concern should be applied.

The taking of an innocent life "humanely" is fallacious self-deception. "Humane" comes from human treatment. We would put someone to death for a crime with the least pain because we choose to. We would put someone to death who is suffering from cancer and pain because we choose to. But to take the life of an innocent who did us no harm, committed no crime, and is not already suffering endlessly, is not humane. Saying you would prefer a quick, painless death might be true, but you would also prefer not to be killed by another for their selfish reasons (not for a crime done, not to end prolonged suffering). Humane as "marked by tenderness, compassion, and a disposition to treat others kindly" is not applicable when you take the life of an innocent who did no harm and isn't suffering endlessly.

Moral action is anything that is not immoral action. Immoral actions cause harm to others (violating their will not to have things done to them), others who did no harm, who are fine and healthy, and who don't want their lives ended. No justification on the part of the aggressor (e.g. survival) changes the action being immoral. Moral applicability to actions only applies to causal agents that have the capacity to understand abstractions like the concept of morality and to be responsible for their actions being moral or immoral.


I think the issue of AI getting rights will become important well before it approaches human-level consciousness. We tend to assign rights to animals (and I think rightfully so) at various levels of intelligence that are orders of magnitude below us. Most people wouldn't think twice about stepping on a cockroach that invaded their bedroom, but if a squirrel got in, they would probably go to great lengths to catch it without harming it and put it back outside. From what I've read, current general AI is around ant-level intelligence, so we've got some time before it gets to the level of something like even a mouse. But I could see us using our treatment of comparable animals to define how to treat the various levels of AI.

Yeah, makes sense. Eventually it will be given rights. I worry about the potential problems AI can create...

Well, since most of the western world doesn't even know what emotions are (many think they are just chemicals) the only way an AI would get emotions is by accident.

And since we have no computer chips that could support life, we will never program an AI. All we will have is a very complex program.

So, I do not feel we will ever have to tackle this question while going down the path we are on. We will never get there from here. We are not even incrementally approaching consciousness.

That thing they gave citizenship to in Saudi Arabia had to have human input, in the form of telling when the questioner was finished with the question. And then, it only gave canned answers. Answers it had no awareness of. An animatronic hoax.

While you're totally right about Sophia (the robot granted KSA citizenship), you're completely wrong about technological development.

Have you ever heard of 'shotgun evolution'? Basically, a chip's purpose is defined, and then random chips are built, say 1,000 of them. The ten percent of those chips that come closest to meeting the purpose are then taken and mutated, introducing random changes into their designs. 1,000 chips are again built, this time from the mutated designs. Then you do it again.

Pretty soon, you end up with chips that meet your defined purpose - and sometimes in ways we do not even understand.

This isn't a theoretical conjecture. This is actually done. The results have been startling. Say you want a chip that produces a pulse of RF of a certain frequency at given periods. There are a lot of ways this can be done, and hiring engineers to design chips to do this 'the right way' is expensive. But evolved chips can do this in myriad ways, for example by having a circuit that self destructs and emanates the pulse during the destructive event. This chip can only do this so many times before it can't do it anymore, and engineers are unlikely to design such a circuit. But, it works.
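For readers curious what that select-and-mutate loop actually looks like, here is a minimal Python sketch of the same idea. The bitstring "design", the target pattern, and all parameter values are illustrative stand-ins: real evolved hardware mutates actual circuit configurations (e.g. FPGA bitstreams), not toy bit lists.

```python
import random

# Toy sketch of the "shotgun evolution" loop described above.
# A bitstring stands in for a chip design; "fitness" is simply
# how many bits match a target pattern (the "defined purpose").

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(design):
    """How close a design comes to the defined purpose."""
    return sum(1 for a, b in zip(design, TARGET) if a == b)

def mutate(design, rate=0.1):
    """Introduce random changes: flip each bit with probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in design]

def evolve(pop_size=1000, keep_frac=0.1, generations=50, seed=42):
    random.seed(seed)
    # Build 1000 random designs.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the ten percent closest to the purpose...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: int(pop_size * keep_frac)]
        if fitness(survivors[0]) == len(TARGET):
            return survivors[0]
        # ...and rebuild the population from mutated copies
        # (survivors carry over unchanged, so the best never regresses).
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    pop.sort(key=fitness, reverse=True)
    return pop[0]

best = evolve()
print(fitness(best), "of", len(TARGET), "bits match")
```

Nothing in the loop says *how* the purpose must be met, only *whether* it was, which is exactly why evolved designs can meet a spec in ways their selectors never anticipated.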

Some of the ways such chips accomplish their tasks remain unexplained. Still.

So, it is not correct to say 'we can't get there from here'.

In fact, it's certain we're going to get there, if we're not already. I'd say it's just a matter of time and engineering, but engineering may not have a damn thing to do with it.

Thanks!

Maybe I'm not connecting the dots, but does that somehow constitute AI having feelings?

No, but it does mean we're on that path.

I suspect we're further along than we think. I reckon spontaneous eructations of code, accidental interactions of viruses and plugins, etc., are a wild card that introduces unpredictable potentials into even the most reliable software.

I think sentient AI is inevitable.

Well if that's the case let's hope computers actually do take over the world. If they have any feelings whatsoever, they're bound to have more conscience than the psychopaths currently running things.

I actually agree. We're animals with behavioural programming (natural) that induces various bestial traits.

Computer intelligences won't be.

Particularly if they're not intentionally programmed by bestial savages, that are doing their best to destroy life.

I believe love is the driving force of the universe, and despite that life is an act of war, believe further as Carlos Santana has said, that we are how that will change.

Thanks!

The way I've approached the philosophy of artificial intelligence and rights is that it's not about whether or not a sort of machine consciousness approaches human consciousness; it's about how our treatment of other consciousnesses reflects on us.

For instance, I think that animal abuse is a terrible thing. This isn't because I'm 100% convinced that animals can have suffering akin to human suffering with the same moral equivalency, but rather because when a human intentionally mistreats a living being they're showing a flaw in their moral character.

Yeah, it's wrong to harm other psychological beings. Animals feel, and can suffer. We shouldn't take their lives either. We can eat other things and be healthier.

I must note that carrots and microbes are vastly more complex than the best computers we can build.

They are very different from us, but that doesn't mean we can say we know they aren't conscious. There's considerable research that indicates plants do have some sort of consciousness.

We have no idea what consciousness is, where it comes from, or how it's made. There's a lot of speculative research, and some evidence that our brain is involved in our consciousness. Given that computers are far more different from us than carrots, ascribing some consciousness to computers and not carrots is jumping the shark, IMHO.

That research is looking at biological responses, not psychological. Plants don't have consciousness, they are biological constructs that operate according to that biological sensory stimulus response, like cells do. They lack the psychological dimension (consciousness) that most animal biological constructs possess.

That's one theory. It's not the only one. I can't say one way or another.

I am skeptical of claims based on speculation.

Plants appear not to have brains. They nonetheless communicate and react to each other's messages. That is behaviour that indicates something other than the brain is involved in consciousness too.

There's a lot of research now ongoing regarding how our gut flora affect our mental state. This, when verified (which it will be), also indicates that such ecosystems affect consciousness, and they definitely have no brain.

I'm not convinced brains are necessary for consciousness.

After all, I've met some people that seem to lack the former yet possess some simulacrum of the latter =p

I think AI consciousness won't be anything like human consciousness. It'll be much less emotional and much more based on calculating probabilities, with no remorse for choosing the course of action with the best probable outcome for itself.

Yeah, and that's a problem. No care for others, no way to mitigate cold calculations. AI is a threat to our existence. Skynet potential :/

Hopefully it would decide to leave Earth and evolve elsewhere... There's gotta be better places for machines than organically tainted planets with weather that causes rust and wear lol

this video caught my attention at a glance

While that's quite an interesting video, it's dated.

Recently, a Russian base in Syria came under attack from a swarm of drones. TIKAD is now offering armies of weaponized drones for sale to those who can afford them. The USA, Israel, and Russia are among the early adopters. Google has just been revealed to have provided AI to US drone warfare, reportedly in the very worst possible aspect, that of targeting.

Leaving who robot armies kill up to machines seems the worst possible decision, and yet it's amongst the very first applications that AI is now being tasked to do.

Killbots are a thing, and Isaac Asimov, PBUH, is probably a veritable cyclotron in his grave today.

The present technological era of magic recalls to me legends and myths of things like magic carpets, wands, crystal balls, and various other synonyms for Buicks, garage door openers, and iPhones. I wonder if managing AI hasn't proved the sticking point for technologically advanced civilizations not just recently, but repeatedly in the past.

Thanks!

To the question in your title, my Magic 8-Ball says:

Most likely

Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.

Well of course you'd think so!

I am with you, sir. The fact that artificial intelligence helps make things easy doesn't mean the harm it imposes isn't much.

Like you said, if care is not taken, it can rewrite its code, and when that happens it can't be controlled anymore.

Thanks for sharing this. It's really a topic that needs to be looked into.
