You are viewing a single comment's thread from:

RE: The Spock effect

in #philosophy · 7 years ago

Reading through your post made me realize a few things. First, the enlightened intellect and true intelligence might be the same thing after all. Like your friend, my instinct was to see intelligence as synonymous with "wit", "tact", or the ability to find your way through difficult situations. That is an amoral attribute, and those endowed with it will likely use it to oppress others and exploit systems. True intelligence, however, as shown in some of the most intelligent specimens of our species, entails the ability to see things from a broader and higher perspective, to recognize the future effect of every action regardless of the gratification of the present moment.
I also want to believe that the greater the intelligence of the human species, the closer we get to any form of Utopia; our current level of intelligence mostly pushes us to exploit and dominate each other.


Speaking of AI being either the end of the human species or the way to Utopia, it reminds me of I, Robot (the movie), where the robots believed that to save humanity they had to stop humans from being in control. It's fictional but true: our collective intelligence at the moment is at a state where exploitation of resources is the main goal.

I wrote a piece a while back about whether people would accept the directions of a super-intelligent AI that was outside of political circles and had all information at its fingertips. I wonder what plans for humanity it might suggest, and whether we would willingly follow or rebel if it made absolute sense to us.

That's interesting. Personally, I want to think we humans will accept the directions of a non-political super-intelligent AI as long as it is not anthropomorphous, but I believe our ego won't allow us to have a human-like AI overlord. As long as it functions like an impersonal decision-making system (something like Jarvis from Iron Man or a GPS guide), we will likely see its decisions in a good light. For example, if an AI told us the best way to live on Mars, end hunger, or stop climate change, we wouldn't mind following the suggestions, but this would be different if the AI had a distinct personality and were made to appear human.

That is interesting too, but what if it has already begun, recognised the problem, and is instead nudging us into a group through internet content algorithms? :P

Well, that would be a very dangerous AI; that level of stealth or manipulation should never be trusted. :-)

I would absolutely not agree to having ANY kind of AI making the decisions, as governments are trying to do now, even if it were moral and beneficial in some ways. Mostly because that goes against what life is, so to speak. Are we not meant to live, as opposed to merely exist? If most decisions are made for us, in how to live, in settling issues with others, and in exploration, and we are given what we need, then what is the challenge to overcome? Living includes exploration, failing, succeeding, trial and error. This is how we grow and develop wisdom.
This is why the nanny state is ultimately not beneficial.
