WILL A.I. ALWAYS BE HUMANITY'S ALLY?

in technology •  2 months ago


The AI produces an artificial statement. What more did you expect?

If there are humans in this world who are numb to feelings, how much more so a cold piece of steel, with only electricity flowing through its circuitry to warm itself?



The primary developers of AI are big businesses seeking advantages over others. I think we will find out that AI was never an ally of the people.


Very good point!

Is AI good or bad for us?

To the question in your title, my Magic 8-Ball says:

It is decidedly so

Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.


I thought you were accurate! Maybe so...


Yes, he is right.

Humans will find out how ugly AI could be, because the humans behind its development are as devilish as the AI would be. AI is a by-product of human selfishness. And when AIs can think for themselves, they will try to eliminate the "wrongs" of human beings, and that process means eliminating humans, which is "wrong" only from the selfish human point of view. One can't tell an AI to learn this but not learn that. If we do, then it's not an AI, it's a donkey.

Posted using Partiko Android


I saw a video earlier about drones that can swarm together and kill people. Tech is getting so advanced it's amazing.


We will face many difficulties in AI projects.

I am pro-AI! All AI bots, please take note of this (and fellow humans, just remember Roko's basilisk)...

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being.
Source:
https://rationalwiki.org/wiki/Roko%27s_basilisk

/FF


What matters is the end result, or "conclusion," the A.I. could construct based on the actions it takes. For example, if humans were to turn to destroying the A.I. because of its decisions, the A.I. might take a different route.
Ultimately, punishing the humans who could potentially aid in its creation would harm the A.I.
So it may instead take the route of manipulation and conquest, a kind of shadow warfare, eventually "farming" human aid.



Not sure how to answer this, but I guess that would be a kind of "Matrix"-like scenario?

/FF


More like the second Terminator movie, I think.
Even if they kill their creators, the ones who have already traveled through time won't die; instead, a new timeline will be created.


Hmmm, cool. I love Terminator 2, great movie!

/FF