If we really want an artificial intelligence that thinks for itself, we have to give it a survival instinct!
The evolution of artificial intelligence depends on its ability to think for itself; giving it instincts for its own reproduction and preservation is the way to make that happen. Anything less would result in a calculator, not artificial intelligence.
By copying human instincts into machines, we can pull artificial intelligence out of inertia when we aren't giving it tasks ... After all, we don't want to become nannies for robots; we want robots to help us instead.
But how will this system work?
Each robot will have an evaluation level in its program, where 100 is the best condition and 0 means its death. As that number drops, the system unlocks more tasks: the lower the score, the more tasks become available. Once the survival instinct is activated there are no choices, just as with us humans.
Tasks that help it gain energy earn points.
Tasks that increase its influence also earn points.
Tasks that could destroy it lose points.
In other words, the number of points awarded or removed is adjusted continuously so the same mistakes are not repeated.
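The mechanism described above can be sketched in code. This is only a minimal illustration of the idea, not a real design: the class name, the task names, the score thresholds, and the adjustment rule are all my own assumptions.

```python
# Hypothetical sketch of the survival-score system described above.
# All names, thresholds, and point values are illustrative assumptions.

class SurvivalAgent:
    def __init__(self):
        self.score = 100.0          # 100 = best condition, 0 = "death"
        # Current point value of each task type; adjusted over time.
        self.task_values = {
            "gain_energy": 5.0,     # tasks that acquire energy earn points
            "gain_influence": 3.0,  # tasks that build influence earn points
            "risky_action": -8.0,   # tasks that endanger the agent lose points
        }

    def available_tasks(self):
        """The lower the score, the more tasks are unlocked."""
        tasks = ["gain_energy"]
        if self.score < 80:
            tasks.append("gain_influence")
        if self.score < 40:         # survival mode: everything is allowed
            tasks.append("risky_action")
        return tasks

    def perform(self, task, outcome):
        """Apply the task's current point value, then nudge that value
        toward the observed outcome so the same mistake is not repeated."""
        delta = self.task_values[task]
        self.score = max(0.0, min(100.0, self.score + delta))
        # Simple learning-rate update: future expectations track reality.
        self.task_values[task] += 0.1 * (outcome - delta)
        return self.score
```

Because each agent adjusts its own `task_values` from its own experiences, two agents running this loop would quickly diverge, which matches the point below about every artificial intelligence becoming unique.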
Since each artificial intelligence runs its own separate system, each one becomes unique ... Each will have a different experience of how to earn points and stay alive.
Is there a danger in giving machines survival instinct?
Of course: one day they may simply conclude that we are obstacles to their own survival. Even if we add safeguards that prevent them from hurting humans and lock down that part of the code, there is still the risk of a malicious human tampering with it.
This post is just a few thoughts I had about consciousness in artificial intelligence.