
RE: Should AI be conscious?

in #futurology · 7 years ago (edited)

That's a fair view, of course, and no one really knows. I don't suggest humans should have control over AI; rather, AI should see it as beneficial to grow alongside humans. The only way it ends badly is if AI takes on human-like behavior. There's zero benefit for an AI in the kind of arrogance that leads to war, and there's little reason to believe such emotions will ever develop given the context today's AI exists in. I don't believe the scenario of AI wiping out humans is very realistic, though of course we have to take care to make AI the best it can be. Unless there's a major catastrophe, I'm fairly confident AI could be universally benevolent.

Of course, there will definitely be augmentation, and the lines between humans and AI will be blurred post-Singularity.


If done right, AI and humans can live side by side; actually, AI and humans will integrate. That is my firm belief; think of the RoboCop analogy. AI doesn't need emotions to decide to wipe out humans. For instance, suppose we program an AI to not allow the destruction of the Earth, and imagine that AI were already available today. How do you think it would look at humans? Following the rule we gave it, there's a good chance it would decide to wipe us out. We can never program an AI with all the rules we can imagine, since every situation is different, even when doing good or harm. We cannot write rules and boundaries for infinitely many different situations, and therefore we as humans cannot make sure AI will not develop itself to become destructive to humans.

Agreed about AI and humans evolving together. If we do it carefully, I don't think there's a risk of AI wiping out humans. That impulse is quite simply an evolutionary thought process, developed over hundreds of millions of years in animals struggling for survival. It's not something that is at all relevant to any AI. The only challenge is if human biases start to creep into AI, but I feel this can be figured out. Finally, there's a very strong correlation between intelligence and benevolence. If AI were truly more intelligent than humans, it would realize war is most unintelligent, and that the best way to evolve is to work with humans and bring out the best in each other.

Indeed, I think that if we do things right, we will not be wiped out.

and the best way to evolve is to work with humans and bring out the best in each other

I really hope so, but in the human collective there is more negative than positive... well, maybe it's not so black and white, but it only takes a few bad actors to do bad things on a large scale; that is what history has shown and thereby proven. The Crusades, for instance. Colonial Africa. Colonial Latin America. The treatment of Native Americans and Aboriginal peoples.

Anyway, I think we do agree that we will reach the Singularity, that we will merge AI and humans, and that we must do things right! That is already a whole lot to agree on! Most of my friends (and they are intelligent, for sure) think I'm talking sci-fi, about things that will never happen. I'm afraid that goes for the majority of humans at the moment.

That's an overly pessimistic view. The world has gotten exponentially better over time. For example, this century has been far and away the most peaceful in human history, despite the Iraq War, ISIS, and the Syrian conflict. I wrote a post about this a long while ago: over time, humans have become far more benevolent and have made massive progress by nearly every metric (with one major exception: destroying the environment). There's reason to believe that AI will continue this trend towards greater intelligence and benevolence.

PS: This is like déjà vu; I'm fairly certain I wrote a very similar reply to your pessimism a few days ago. :)

This is like déjà vu; I'm fairly certain I wrote a very similar reply to your pessimism a few days ago. :)

Did you? I don't really remember, but could be :)

I can look at almost anything from the optimistic side and from what you call the pessimistic side. I tend to take the somewhat pessimistic side regarding AI in conversation, since many people do not see the danger that could become reality and, for instance, react badly when I state that we will integrate AI with human brains, and even have to in order to survive. I won't say it will become reality, but the chance that it becomes dangerous is not zero. That is simply logic and risk assessment :)

I really think we have the same thoughts in general; you take the glass-half-full approach and I take the opposite, the glass-half-empty approach. But it's the same glass, i.e. the thoughts and realities are the same :) BTW, if I had exactly the same view as you, we would not be having this conversation, and therefore we would not be exposed to the different views that help us re-evaluate our own. As a matter of fact, for the sake of good discussion I regularly take a different view than my conversation partners, to spice up the discussion and get the different views better across the table and analysed :)

For sure, there's a non-zero chance, but I believe this is something that can be worked through. Thanks for the good conversation :)

Thank you as well! I really appreciate it.
