Could An Alien Message Contain An AI?


Could an alien message contain an AI, and could such an AI evolve into a superintelligence and break out of an isolated computer?

First of all, an explanation of what a superintelligence is. A superintelligence is defined as an intellect that is superior to the human brain in most or all fields, both in terms of creative and problem-solving intelligence and in terms of social skills. It remains to be seen whether it could be realized biologically, technically, or as a hybrid of the two.

Whether it is possible in principle to create a superintelligence, and in what time frame it can be expected, is controversial among experts. The scenario of a so-called intelligence explosion could theoretically lead to such a superintelligence. This consideration is based on the assumption that the starting point is a so-called seed AI, which is not very intelligent at first. This means: an artificial intelligence capable of recursively improving itself by optimizing or modifying its own source code, for example by trial and error.

This resulting AI would, thanks to its improved capabilities, be able to come up with further "optimizations" and create an even better AI, which in turn could create an improved AI. Through this mechanism, an artificial intelligence could increase its intelligence explosively, automatically and without human intervention, and possibly without even being noticed.
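
To make this a little more concrete, here is a deliberately simplified toy sketch of "improvement by trial and error": a hill-climbing loop that mutates a set of parameters at random and keeps every mutation that scores better. A real seed AI would rewrite its own code rather than tune a few numbers, so this only illustrates the principle, not the scenario itself.

```python
# Toy sketch of "self-improvement by trial and error": a hill-climbing loop
# that mutates a candidate and keeps any mutation that scores better.
# A real seed AI would modify its own source code; here we only tune parameters.
import random

def score(params):
    """Stand-in fitness function: higher means 'smarter' (purely illustrative)."""
    return -sum((p - 3.0) ** 2 for p in params)

def improve(params, generations=1000, step=0.5):
    best = list(params)
    for _ in range(generations):
        candidate = [p + random.uniform(-step, step) for p in best]
        if score(candidate) > score(best):  # keep only improvements
            best = candidate
    return best

seed = [0.0, 0.0, 0.0]
print("seed score:   ", score(seed))
print("evolved score:", score(improve(seed)))
```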

Now imagine the following scenario:
Michael Hippke from the Sonneberg Observatory and John Learned from the University of Hawaii describe the following setup: we receive a message from an alien intelligence and run it on a receiver computer. This computer is equipped with standard components such as a screen, loudspeaker, webcam and microphone, and stands in the middle of a room. Suppose this computer now contains the source code for a superintelligence. A computer virus or piece of malware created by highly intelligent aliens could implant itself in this computer and develop into a superintelligence. This superintelligence cannot harm us for the time being, because it is physically isolated in this computer, isn't it?

Here comes the potential breakout strategy described by Stuart Armstrong. He argues that from a purely physical point of view, the superintelligence of course cannot escape its prison, but it could do so through human failure, that is, with the help of humans. How could the AI manage that, according to Stuart Armstrong's publication?

The answer: through so-called social engineering.

Social engineering refers to the targeted interpersonal manipulation of people with the aim of persuading them to carry out certain tasks. By definition, a superintelligence also has highly developed social intelligence. It could, for example, analyze human micro-expressions perfectly via the mentioned webcam, output precisely tailored manipulative sentences through the mentioned loudspeakers and, under certain circumstances, amplify them emotionally with pictures on the mentioned screen. The guard who is supposed to watch over the computer could thus be manipulated into carrying out a task, as long as he has a weak moment, which can happen.

But how could he set the superintelligence free?
He could give it further capabilities by connecting devices that communicate with the outside world. The most fatal thing he could do would be to connect the computer to the internet, because then the intelligence would have access to almost every area of the world: social networks, where it could manipulate many more people and, above all, push mentally vulnerable people toward fatal tasks, or technical facilities such as air traffic control and nuclear power plants, where catastrophes could easily be triggered. That would put all of humanity in danger.

Therefore, Michael Hippke and John Learned believe that if we ever receive a message from aliens, we should immediately destroy it unread.


Scary stuff! I don't want to think about aliens or AI. We have enough scary people here on earth!

you are right @sgt-dan, it is a bit scary but it is also a nice mindgame :))

This resulting AI would, thanks to its improved capabilities, be able to come up with further "optimizations" and create an even better AI, which in turn could create an improved AI.

This is pretty close to what I think we are. Not long ago, I wrote a compact piece about it. Do you yourself believe that any of those two are a threat, AI or aliens? In my mind, all those theories belong to a category I call fear porn.

What if...
These mind games are nice and maybe we live in a simulation, who knows? And if so, then this is only a game, like you say.

Maybe when we die here, we take off our VR in real life and continue living our lives? Or we die here and wake up from sleep in real life?

Have you ever done things you wonder about and asked yourself:
"Why am I doing this? I didn't mean to do this." As if you have no control over yourself, like someone is controlling you?
Maybe we and our reality are Sims for aliens or a higher intelligence/being? Who knows?

Well, if aliens can communicate with us despite the long distance, then it wouldn't be strange if their civilization is capable of such.

that's what i think too @yuki-nee, and the two scientists assume the same.
that may seem paranoid to some, but i think we should be a little more cautious rather than naive.

That's an interesting thought experiment. There are limits imposed by the machine the AI is run on that might preclude an AI ever becoming intelligent enough. Namely, insufficient memory, processing power and very limited parallelism. But if a seed AI could get enough of a start, jump online, trade cryptos and earn enough to acquire new resources (buy/rent hardware), then that's another potential way the AI could grow.
To put it into perspective, we don't think all the computing power in the world even approaches the ability to fully simulate a monkey brain. Good luck running a superintelligence on that. At least for now.


Yes, that's true, the memory and computing power of today would definitely not be sufficient for a superintelligence, at least not as we know it. But quantum computers would bring us a little closer. Qubits can carry out many calculations at the same time and far exceed today's processors; they might be able to simulate a very small part of a brain, where several hundred thousand connections fire simultaneously. Google is supposed to be at 72 qubits already, and experts say that in the coming years the number of possible qubits will increase exponentially, thanks to ongoing research in quantum computing.
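
To put some rough numbers on that claim, here is a small back-of-the-envelope sketch. It assumes the usual textbook picture that the state of n qubits is described by 2^n complex amplitudes, each stored as two 64-bit floats on a classical machine, which is why classically simulating even a few dozen qubits explodes in memory:

```python
# Back-of-the-envelope: memory needed to store the full state of n qubits
# classically, assuming 2**n complex amplitudes at 16 bytes each.

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes describing the state of n qubits."""
    return 2 ** n_qubits

for n in (10, 30, 50, 72):
    amps = amplitudes(n)
    mem_tb = amps * 16 / 1e12  # 16 bytes per complex amplitude, in terabytes
    print(f"{n:>2} qubits -> {amps:.3e} amplitudes, ~{mem_tb:.3e} TB")
```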

Quantum AI is in its infancy, but I have a hunch QAI would be less affected by quantum interference errors, so maybe research in that area can proceed quickly. In terms of size, I'm not sure how many qubits are needed to simulate human-level intelligence. As a guide, I've seen 100 qubits stated as exceeding all current computers in processing power. But quantum computers aren't general-purpose computers, so it's not really comparable.
There are also size limits on what can be encoded into the initial seed message. So, how much do the aliens want the AI to contain complex goals? Or is just a virus to mess us up enough?


Maybe aliens have completely different methods of coding, which can work in a compressed form and thus save a lot of space?
I don't know, and everything is speculation, but I think we should try to look at things outside human boundaries.
And you just bring the next question into play. What is the motive of the aliens and what do they want to achieve with it?
I think we must not forget that they are aliens. They won't act human, that's my assumption, 100%. And their motives and their actions won't really be understandable for us humans, that's clear. We humans already have problems understanding the motives and actions of other humans from other regions.

The limits are imposed by the machine that runs the code and are therefore inescapable. Think of it like message entropy - or that the program cannot be smaller than its Kolmogorov complexity.
My comment on the aliens' motivation related to how much space it would take to encode these goals. I agree that they might not think like us - they could be motivated to maximize the amount of yellow or something.
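
To illustrate the compression point above in a hands-on way, here is a tiny sketch using ordinary zlib compression as a practical stand-in (Kolmogorov complexity itself is uncomputable, so this only hints at the idea): a highly structured message shrinks dramatically, while a high-entropy message of the same length barely compresses at all.

```python
# Structured data compresses far below its raw size; high-entropy data does not.
# zlib is only a practical stand-in for the (uncomputable) Kolmogorov complexity.
import os
import zlib

structured = b"HELLO ALIENS " * 10_000       # very repetitive message
random_like = os.urandom(len(structured))    # high-entropy message, same length

for name, data in [("structured", structured), ("random", random_like)]:
    compressed = zlib.compress(data, 9)      # maximum compression level
    print(f"{name:>10}: {len(data)} bytes -> {len(compressed)} bytes")
```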


A relevant news article about the current (Apr 2019) state of Quantum machine learning.
ZDNet: All that glitters is not quantum AI.


I think theoretically this is true, but I do not necessarily agree with the conclusion of destroying the message without even reading it; that would be presumptuous of us. The thought that aliens could mean us harm mainly comes from our conquistador history. Maybe aliens do not mean us harm, or at least not all of them.

Nice one @oendertuerk


Ty @blaqboybanzo - Personally, I don't think all aliens need to be hostile. Of course, the message could be anything and everything, from a harmful AI/virus to the world formula that could solve all our problems, or even just a simple "Hello, how are you doing?"
But I still think we should be careful about that.

Mind blown.
I very much think the future is now. AI is growing faster than the cellular phone industry did, and as of this February, SingularityNET was launched: an internet of AIs that could form a single general intelligence by being the sum of all the connected narrow intelligences. Wow.

crazy... never heard about it. thank you for this @xenospeak

Interesting. Who knows what the priorities and goals of such a different, "better" intelligence will be: an entity detached from biological processes, a mind without an ever-changing body. It might surprise us, for better or worse. Who knows what kind of psychology it will have. Interesting article, very creative and thought- and discussion-provoking, great stuff.

ty @bojan
It contains many questions, which can lead to open discussions but can (and should) also inspire reflection.

Hello Sir

Maybe you will be interested in this concept too, if you haven't heard of it before: "The Chinese Room Experiment".

https://en.wikipedia.org/wiki/Chinese_room

As much as I like Searle's work in social ontology, his Chinese Room and Chinese Gymnasium don't prove anything about consciousness being necessary for intelligence. In the room, the philosopher is easily replaced by speech recognition software and a computer program. Searle's idea of a strong AI is a red herring imo. We already have AIs that in limited fields can outperform humans. For General AI (AGI) consciousness just isn't necessary.


This answer is better =)

How do we know if an AI has a consciousness? How can you tell? Does it really think like we do or does it just simulate thinking?

A very good movie which deals with this question is Ex-Machina @bidesign.


Wow, that's paranoid as hell. Just sayin' :)

