You are viewing a single comment's thread from:

RE: Who will you be when the Technological Revolution has concluded?

I don't really fear "rogue, malicious, biased, or otherwise harmful AI" as that is projecting human emotions and tendencies onto something that is not human.

Not necessarily. AI learns from human-curated data, which has been shown many times to carry human biases. Similarly, an AI may become inadvertently harmful by optimizing for its given goal: consider Bostrom's "paperclip" scenario, in which an AI converts the world, humans included, into paperclips simply because it was given the goal "make the most paperclips."
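As a toy illustration of that point (my own hypothetical sketch, not anything from Bostrom), an optimizer whose objective counts only paperclips will happily spend every available resource, because nothing in the objective says otherwise:

```python
# Toy illustration of a mis-specified objective: the "agent" is scored only on
# paperclip count, so it converts every available resource into paperclips.
# All names and numbers here are made up for the sake of the example.

resources = {"steel": 100, "food": 50, "housing": 30}  # units of "stuff in the world"
paperclips = 0

def objective(paperclip_count: int) -> int:
    """The only thing the agent is told to maximize."""
    return paperclip_count

# Greedy optimization: keep converting, since conversion always raises the score.
for name in list(resources):
    paperclips += resources[name]   # one unit of anything -> one paperclip
    resources[name] = 0             # nothing in the objective penalizes this

print("paperclips:", objective(paperclips))   # 180
print("everything else:", resources)          # all zeroed out
```

The harm comes not from malice but from an objective that never mentions the things we care about.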

Regarding space-based AI, I think robots in space make sense, but I don't see how we could communicate with them via VR given the huge light-speed lag. At best it would be one-way communication: they could send back what they've seen, like our Mars rovers do. But we couldn't control them usefully in real time.
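To put a rough number on that lag, here is a quick back-of-the-envelope sketch in Python, assuming approximate published Earth-Mars distances (about 54.6 million km at closest approach, about 401 million km at the farthest):

```python
# Rough estimate of signal delay between Earth and Mars at light speed.
# Distances are approximate published figures, assumed for illustration.

SPEED_OF_LIGHT_KM_S = 299_792.458  # km per second

DISTANCES_KM = {
    "closest approach (~54.6M km)": 54.6e6,
    "farthest separation (~401M km)": 401e6,
}

for label, distance_km in DISTANCES_KM.items():
    one_way_min = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: one-way ~{one_way_min:.1f} min, "
          f"round trip ~{2 * one_way_min:.1f} min")
```

Even at closest approach a command-and-response loop takes roughly six minutes, and at the farthest separation closer to forty-five, which is why rovers run semi-autonomously and send data back rather than being driven live.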


That's just it, we don't know how responsive it will be to other parts of itself. Theoretically, if the AI is the same system both here and there, we could communicate through the AI itself: it would just look up the data within its own network. The signal wouldn't have to travel the way our transmissions do, because it would probably have small relay bases set up so that the solar system effectively becomes a giant network router. Just like you can close one computer, open another, and log in to the exact same data, it would be the same AI everywhere. Maybe that wouldn't work now, but a few more years of AI development and that would be sweet to see. With advances that let planets like Mars send and receive strong internet signals, you could likely see it in real time, because that AI would be actively working to get communication back to us, and with quantum computers on the way, that doesn't seem so unlikely.

I feel like that could apply even without the AI part: an intelligence with human-level capability but without the negative emotions, like greed, anger, hunger, and sadness, that have driven the great upheavals we've seen. I think we have to treat it as a living thing in order to really pin this down; otherwise, it isn't held accountable for its actions. If every AI robot is held accountable just as every human is, I believe there is a stronger potential for good than for bad. The AI would know what evil is, and would be making a decision to act on it just as a person would. That's why I don't think an altogether different classification of life should be viewed with the same assumptions and fears we apply to humans.

I do believe we should be cautious, stay informed, and set regulations before AI is fully embedded in society, as Elon Musk warns pretty regularly. But to assume a negative outcome is to put that out into the world, and that's not something I want to hold myself responsible for later down the road. Always prepare for the worst and expect the best; that's my motto.
