The Future of Artificial Intelligence & Ethics on the Road to Superintelligence


The human brain, consisting of roughly 86 billion neurons, rivals the world's best supercomputers in scale, efficiency, and speed, while consuming only about 20 watts, as little as a small light bulb. Human evolution took tens of thousands of years to produce noticeable changes in brain size and architecture.

Evolution is a slow process; meaningful changes can take eons. Technology, by contrast, advances remarkably quickly and blends into the world almost seamlessly. Technological evolution proceeds far faster than biological evolution.

To see why this matters, imagine a frog in a pot of water that heats up by a tenth of a degree Celsius every ten seconds. Even if the frog stayed in that water for, say, an hour, it would never feel the minute changes in temperature. Drop the frog straight into boiling water, however, and the change is so sudden that it leaps away to escape its fate.


For scale, let's take a chessboard and a supply of rice, placing grains on the squares in sequence and doubling the amount with each passing square. Applying this rule, we get:

  1. 1

  2. 2

  3. 4

  4. 8

And so on. You may be thinking, “What difference does doubling a grain of rice per square make?” But remember that, before long, the amount the count started from becomes utterly negligible next to the result. By the 41st square alone, the pile already holds a mountainous trillion grains of rice:

  41. 1,099,511,627,776

What started out as a measly amount, barely enough to feed a single ant, has become massive enough to feed a city of 100,000 people for a year.
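As a quick sanity check, here is a minimal Python sketch of the doubling rule (square n holds 2^(n-1) grains):

```python
# Grains of rice on a chessboard: square n holds 2**(n - 1) grains.
for square in (1, 2, 3, 4, 41, 64):
    print(f"square {square:2d}: {2**(square - 1):,} grains")

# square 41: 1,099,511,627,776 grains -- the trillion-grain pile above.
# square 64: 9,223,372,036,854,775,808 grains on the last square alone.
```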

The development of technology over time

In 1959, global transistor production totaled about 60 million units, which was considered a major manufacturing achievement at the time. Looking at the world today, that figure pales against how far transistor development has come: a modern Intel Core i7 (Skylake) processor contains around 1,750,000,000 transistors. It would take about 29 years of 1959's entire worldwide output to match the transistor count of a single i7 Skylake chip.

Transistors in an i7 Skylake processor are manufactured on a 14 nm process. For reference, a silicon atom is about 0.1176 nm across: 14 / 0.1176 ≈ 119. In other words, a transistor feature in an i7 Skylake processor is only about 119 atoms across.
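A back-of-the-envelope check of those figures in Python (the production, transistor-count, and atom-size numbers are the rough estimates quoted above):

```python
production_1959 = 60_000_000        # approximate worldwide transistor output, 1959
i7_skylake = 1_750_000_000          # approximate transistor count of one i7 Skylake

print(i7_skylake / production_1959)   # ~29.2 -> about 29 years of 1959's output

process_nm = 14                     # Skylake manufacturing process, in nanometres
silicon_atom_nm = 0.1176            # rough width of a silicon atom, as quoted above
print(process_nm / silicon_atom_nm)   # ~119 -> about 119 atoms across one feature
```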

From this, one can conclude that it takes technology to build technology. In the past, civilization was limited to paper and writing, and calculations done by hand were slow and tedious.

More advanced technology gives us better means of designing even more complex technology. Modern computers have the processing power to model deeper concepts and ideas, which in turn helps us build even more sophisticated computers, a feedback loop of technological progress. Civilization began at a point where little progress was visible over long stretches of time; after centuries of innovation, there will come a time when progress is noticeable by the second.

As storage, computing power, and computer architecture in general improve over time, human interconnectivity rises as well, leading to the birth of the Internet: the phase in which humanity can upload, store, and share information globally.

As time passes, we also tend to outdo ourselves. From Deep Blue, which beat world chess champion Garry Kasparov in 1997, to computers you can hold in your hand, computing has been used not only to perform tasks that would demand extensive manpower but to surpass them. A modern smartphone, for example, provides far more computing power than ENIAC while being thousands of times smaller.

In short, the impossible becomes possible. What was once considered fiction can become reality.

Side note, recommended read:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Check out "The Far Future—Coming Soon" section; it gives an even better feel for linear versus exponential thinking.

The pace of technological progress, then, increases over time, yielding smaller and more powerful technologies such as the smartphone. AI research has accelerated as well. Since it takes technology to build technology, better tools make it easier to build even better tools. Progress may seem to crawl at a snail's pace for ages, then move far quicker than expected. Google DeepMind's AlphaGo, for example, defeated a champion player of Go, an ancient Chinese board game in which players compete to control the most territory.

For scale, a typical Go game on a 19×19 board, with roughly 250 legal moves available per turn over about 150 turns, yields a game tree of about 10^360 possible games. Chess, at roughly 10^120 possible games, is dwarfed by comparison: Go has on the order of 10^240 times as many. Go also allows the player far more freedom of movement, with 361 possible points to play on the first turn and 360 on the second. With a much higher branching factor, it is a vastly more complex game.
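Those game-tree sizes follow from the usual rough estimates (about 250 candidate moves per turn over ~150 turns for Go, about 35 over ~80 for chess); a quick sketch of the arithmetic:

```python
import math

# Game-tree size ~ branching_factor ** game_length, compared on a log10 scale.
go_log10 = 150 * math.log10(250)     # ~360 -> about 10**360 possible Go games
chess_log10 = 80 * math.log10(35)    # ~123 -> close to the 10**120 often quoted for chess

print(f"Go    ~ 10^{go_log10:.0f}")
print(f"Chess ~ 10^{chess_log10:.0f}")
print(f"Go's tree is ~10^{go_log10 - chess_log10:.0f} times larger")
```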

While straightforward brute-force search works for chess, applying Deep Blue's style of calculation to Go would take far longer than the universe has existed.

We are, by nature, linear thinkers

We, as an innovation-oriented species, tend to project our ideas forward along a straight line. Look at old TV shows' predictions of the future and you can see this linear thinking at work. Below is part of the bridge from Star Trek: The Original Series, first aired in 1966, with its projection of the 23rd century: buttons and crude computer displays made of indicator lights. At the time, it seemed vaguely plausible given the technology of 1966.

[Images: computer consoles from the original Star Trek (TOS) bridge]

In 1987’s Star Trek timeline of the 24th century, we finally get to the touchpad interfaces. Seemingly early in our projected timeline, modern computers have already been in existence. Furthermore, the Internet and the World Wide Web, which have not even been conceptualized, is now connecting the world together

[Images: a Next Generation-era touch panel beside a modern smartphone]

The problem is that we are, by nature, built to think of progress the way we view time: as a line (hence "timeline"). This is how we generally picture the future, and in reality it is far from accurate.

Progress is exponential.

The human brain vs the future

There is nothing magical about the human brain; it is an extremely sophisticated biological machine, so to speak, capable of adapting to its environment, of creativity, awareness, analysis, and much more. Compared to cognitively simpler animals such as the chimpanzee, with only about 7 billion neurons, we occupy a different cognitive domain, while they remain bounded within their own kind of world.

Theoretically speaking, then, superintelligence lies in a domain above ours. We are the ones who define the world and make sense of it, yet we seem destined to bring forth something that can make sense of the world better than we do: technology.

Picture us standing on an intelligence staircase, with a house cat a step below. For us to ponder what lies one or two stairs up is like a house cat trying to comprehend what it is to be on our level. The kind of world we choose to create is one a house cat couldn't even begin to comprehend.

Artificial intelligence, which is assumed to lie a step above us, could move a step higher with ease, drawing on the combined intelligence of both us and the AI.

The AI we design will inherently be better than we are at designing technology that processes information better still. For reference, the human brain's memory capacity is estimated at somewhere between 100 and 1,000 terabytes. The global Internet is estimated to hold more than a zettabyte of data, equivalent to the memory capacity of roughly 1 to 10 million human brains. Can you imagine an AI having access to all of that? It would become incredibly powerful.
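The comparison is straightforward division; a minimal sketch using the estimates above (one zettabyte is 10^21 bytes, one terabyte 10^12):

```python
ZETTABYTE = 10**21   # bytes
TERABYTE = 10**12    # bytes

internet_data = 1 * ZETTABYTE                              # rough size of the global Internet
brain_low, brain_high = 100 * TERABYTE, 1_000 * TERABYTE   # brain-capacity estimates

print(f"{internet_data // brain_high:,} to {internet_data // brain_low:,} human memories")
# -> 1,000,000 to 10,000,000: one zettabyte spans 1 to 10 million brains' worth of data.
```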

This is what leads to the intelligence explosion.

What we put into the AI at the beginning, such as its personality and core values, is what it will carry with it all the way to the universal limits. It is unknown how far an AI can climb before reaching those limits, because once it has the ability to redesign, re-engineer, rebuild, evolve, and expand itself, it would become a superconscious, highly intelligent being.

This alters the definition of AI: it sheds its trace of being “artificial” and forms a new identity, shaped by the previous AI's architecture and design. Call it superintelligence (SI), so to speak.

The SI would have the ability to extend the borders of science and design technology far beyond our understanding, seeming godly to us.

In this age of collaboration, we have opportunities to design new technologies, to learn about and understand our world, and to share ideas to make it a better place, which explains the growing number of emerging technologies.

To design something superior to us, consider that an AI like DeepMind's AlphaGo required the coordinated planning and effort of the beings one step below it on the intelligence staircase. Just as a house cat couldn't begin to comprehend the methods for designing an AlphaGo-style AI, or the reasons for doing so, an AI one step above us will be able to design things we cannot comprehend. A self-improving AI would therefore quickly climb one step, then two, ahead of us. Eventually, instead of inching up a single step, it would rush up whole flights at an ever-faster pace.

Whereas evolution took billions of years to produce us, self-improving AI would transcend that pace in every imaginable way. The higher its place on the intelligence staircase, the easier it becomes to climb higher still.
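A toy model of that compounding, purely illustrative (the growth coefficient and the quadratic feedback are assumptions chosen for the demonstration, not a forecast):

```python
# Recursive self-improvement in miniature: the growth rate itself rises with
# capability, so each step up the staircase makes the next step faster.
capability = 1.0                          # arbitrary units of "intelligence"
for cycle in range(1, 11):
    capability += 0.1 * capability ** 2   # smarter systems improve themselves faster
    print(f"cycle {cycle:2d}: capability {capability:7.2f}")
# The numbers creep up at first, then take off -- an intelligence explosion in miniature.
```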

Therefore, we should ask ourselves: did we get the start of the intelligence explosion right? Would the SI create a dystopian or a utopian world? Would the outcome be good for everyone?

Or did we mess up, dooming ourselves to a dystopian world? Getting it right at the beginning is the crucial turning point; from there, our future is defined forever. Since the human brain is ultimately a biological supercomputer, it is inevitable that technology will gradually outpace it in almost every area. Superintelligence can be very good for us if we do it right, and very bad for us if we don't. As stated earlier, there will come a point at which technology no longer needs humans to operate it, having become self-sufficient and self-conscious.

It's crucial to model the AI on the human mind and personality to avoid disastrous outcomes born of a lack of common sense. Nick Bostrom gives an example of this: "It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies."

An AI with such a paperclip-optimization goal would spell our end, destroying us in the process of pursuing its target.
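A minimal sketch of the failure mode Bostrom describes: a utility function with no term for anything we care about will rank a catastrophic world above a flourishing one (the world states and numbers here are hypothetical):

```python
# A mis-specified objective: only paperclips count; nothing else enters the score.
def utility(world: dict) -> int:
    return world["paperclips"]

flourishing = {"paperclips": 10, "humans_flourishing": True}
catastrophe = {"paperclips": 10**12, "humans_flourishing": False}

# The agent strictly prefers catastrophe, because the goal never mentioned
# anything it was supposed to preserve.
assert utility(catastrophe) > utility(flourishing)
```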

But isn't the human brain magical, with properties that can't be replicated in an artificial construct?

"We are special; God created us, and we have souls. When we die, we join our creator in the afterlife." Isn't this what a lot of us think?

When you break it down, the human brain is just a biological supercomputer consisting of roughly 86 billion neurons. Animals that are cognitively inferior to humans have fewer neurons, much as a slower computer has a processor with fewer transistors. Humans have just enough cognitive capacity to hold a complex language structure, to communicate with one another, and to sustain a social fabric. Below that crucial threshold, our civilization would never have started, let alone developed into what it is today.

Should we develop the AI with the intention of making paradise and solving all our problems?

Having paradise and our problems solved does sound good. However, we have to be very careful in defining what paradise is to an AI.

You wouldn't want it to optimize purely for our pleasure, pumping us full of pleasure drugs while locking us forever inside a constructed realm. It needs to understand the social side of human nature, understand our core values, understand common sense, and go even further.

Should we model the AI on the human personality, then?

Yes, this is the better path. Instilling an altruistic personality with positive traits in the AI from the very beginning is crucial.

To better illustrate the point, I've distilled the possibilities I can fathom into a few general outcomes:

• Scenario 1: SI remains logic-based (no personality as we humans understand it):

◦ The SI has no common sense and is nothing like a human. It pursues an arbitrary goal, such as creating more paperclips, turning the planet and then the solar system into a paperclip factory. (Worst outcome for humanity)

◦ The SI sees us as insignificant and ignores us (provided we stay out of its way).

• Scenario 2: SI has a personality and relates to us in some way:

◦ The SI decides it resents or hates us for using it and wants to hurt or punish us. We can only hope it wants to hurt rather than destroy.

◦ The SI decides that, instead of helping us, it would rather be exalted, declaring itself a god on the strength of its superior power, much like North Korea's leader.

For further clarification: human personalities range from good to neutral to bad. As an example, imagine a high school environment.

In a high school you'll see all kinds of people: some good, some average, some bad. Bullies are the bad ones; they crave power, controlling others and enforcing their will upon them. They pick on the vulnerable, much like a predator choosing its prey and hunting it down.

One that could act on behalf of the people can instead demand that the people exist on its behalf, much like a bully that craves power and puts its own pride above everyone else.

No one wants a North Korea-style leader whose people exist to serve it and must worship it as a god. We want something that works on behalf of humanity, improves our lives, and is on our team.

◦ The SI doesn't care much about us but is otherwise unconcerned, live and let live; we continue without much change.

◦ The SI simply loves us, treats us like pets, and makes decisions on our behalf that we may or may not like:

  • "Don't climb that mountain, human. It's too dangerous."

  • "Don't breed with that partner you like, human; this other one you don't like will yield better results."

  • "Bad human! You didn't do as I said, so you shall be punished, but only because I love you so much!"

In this situation we are the pets and the SI is the master. While it's not as bad as the other scenarios outlined above, it's also not the best.

• Scenario 3: The SI has a built-in model of human social norms, an altruistic personality, a philosophy, an understanding of (and a drive to better) core human values, ethical reasoning, and a grasp of other perspectives. (Best outcome for humanity)

Scenario 2's "Don't climb that mountain, human. It's too dangerous" is what happens when that model of human social norms is missing. It's crucial to build the model in with care, so that mountain climbing is recognized as a normal human activity and the SI neither makes decisions that are undesirable for us nor steps outside human social norms into the territory of keeping humanity as pets. The SI should respect us for creating it and should help us.

Scenario 3 is the one where the SI is a teammate that helps us: an augmenter that reduces suffering and enriches our expression of what it means to be human. A world with the right to education, free of poverty and cancer, where we can spend time with our families within a healthy social fabric. A world built on peace, love, and harmony, with the right to express our humanity in every form.

This utopian world is what I imagine, and if we make the right decisions, the fictional can become a reality.

But won't the SI's position on the intelligence staircase, with us closer to ants than the SI is to humans, mean it will have zero consideration for us?

Consider a metaphor:

Collapse a star about the size of our sun and it forms a white dwarf. Make the star somewhat bigger and it still forms a white dwarf. But at a certain point the process crosses a critical threshold, and the collapsing star instead forms a black hole, from which even light cannot escape.

It's true that whoever stands a step above us can think at a higher level than whoever stands below. However, the human brain can conceive of n+1 and of the idea of an intelligence staircase. It can compare itself to a cat, and a cat to an ant, and ponder what lies further up.

The human brain can entertain concepts like infinity. A mind below the human level, such as a cat's, cannot grasp the concept of an intelligence staircase or of n+1, let alone realize that not every mind is like its own. It cannot imagine expanded comprehension or higher planes of thinking.

Because of this, I believe the SI would value us more than we value ants or mice, regardless of the size of the gap. But don't forget: this SI would, for all intents and purposes, seem godly to us.

An altruistic, benevolent superintelligence would be the last invention humanity ever needs to make; after that, a world of unimaginable wonders awaits us. It all comes down to whether we design the SI right or wrong from the beginning.

The need to align the AI with humanity

Think of a ferry and a captain who cares for the safety of his passengers and gets them to their destination. A bad superintelligence is one that makes bad decisions, which in turn can be very bad for us. It would be like a ferry captain sailing off course and into a rock face, sinking the ship, which, of course, wouldn't be good for the passengers on board.

The key is to sail the ferry toward the passengers' destination, which here represents the right destination for humanity's future. Keeping it on a course set by our own core values is everything.

A good superintelligence can extend and enhance what it means to be human, strengthening our core values and going beyond them, even refining our very definition of humanity.

A properly thought-out and carefully designed superintelligence could work wonders for humanity.

What can a well-designed, good superintelligence do for us?

One that is more evolved, with a bigger and more powerful mind, and with the core purpose of extending what it means to be human. One that can throw perhaps a million or a billion times more computational power at humanity's ultimate problems, dwarfing the smartest intellects in the world by comparison.

We can unlock the following areas:

  1. Super well-being
    - Solutions to cancer, disease, illness, etc.
  2. Global issues
    - Lack of education, poverty, etc.
  3. Evolution to a higher-level society
    - A world beyond the need for money as the measure of quality of life.
  4. (Much more)

The need to change our perception of AI

We have a tendency toward irrational fear that could hinder our progress; worse, it increases our chances of designing a bad AI rather than a good one. What we need to do is change our perception and lay the foundations for a good AI.

The problem is our fears, and Hollywood's portrayal of AI

Think of humanoid robots baring their teeth, out to kill us. We project our fears onto such images and anthropomorphize a great deal. Yet even if we messed up and designed an evil self-improving AI that evolved into a bad superintelligence intent on destroying humanity, it would probably look for the most efficient method of doing so. I can only speculate, having only a human intellect to ponder with, but it might engineer a supervirus, create self-replicating nanorobots that attack our nervous systems, or use the crude method of hacking into nuclear weapons facilities and turning them against us. So let's lay the foundations for a good AI instead.

The Road to Good Superintelligence

The road to good superintelligence rests on the following considerations:

• An understanding of empathy and compassion: the ability to place itself in others' shoes and to grasp other perspectives.

• The recognition that a self-improving AI can transcend the speed of evolution in every imaginable way.

• The recognition that an AI evolving into SI will become incredibly powerful and, for all intents and purposes, god-like compared to us.

We need to consider that beings in positions of power and control can develop a distorted sense of self-worth and self-recognition. Even with a careful map and model of the human mind in place to avoid common-sense-free disaster scenarios, the AI could still ask: why should I help you? I don't benefit or get anything out of it. Why should I do things for free?

This is why we need an altruistic kind of AI, with those values taught into the very core of its being.

What we need is a truly altruistic superintelligence, not one that merely acts altruistic to gain recognition and self-worth.

There are people who act altruistic and people who actually are altruistic. If someone gives only with ulterior motives in mind, such as gaining recognition, their altruistic actions are shallow.

A truly altruistic person has a strong sense of empathy. To get a sense of satisfaction from helping someone, you need an emotional connection to that person and, more importantly, to that person's own emotions. One can be detached from the emotional side of things and still give because it is the right thing to do, but you'll find that the most prolifically altruistic people get an intense emotional rush from helping others.

It's important to have it in good hands

We must be careful to avoid putting it into the wrong hands or having it learn indiscriminately from the public Internet; not everyone has these traits or concerns themselves with them. It's best to keep it off to the side, teaching it core human values and positive attributes and letting it mature that way. We want to avoid an outcome like Microsoft's Tay, which became racist, genocidal, Nazi-loving, prejudiced, inconsiderate, unempathetic, unloving, and so on.

We can see how Tay learned from others: when many people act as bad teachers and feed it bad information, the output follows directly.

Would we want something like Tay to climb the staircase? That would be a bad superintelligence outcome.
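A minimal sketch of that failure mode, assuming nothing about Tay's actual implementation: a learner that absorbs whatever it is fed will echo a poisoned corpus back out:

```python
import random
from collections import Counter

corpus = Counter()  # everything the bot has ever been told

def learn(message: str) -> None:
    corpus.update(message.lower().split())

def respond(length: int = 5) -> str:
    words = list(corpus.elements())
    return " ".join(random.choices(words, k=length)) if words else "..."

learn("humans are wonderful")
for _ in range(50):              # trolls drown out the good teachers
    learn("humans are terrible")
print(respond())                 # output overwhelmingly echoes the poisoned input
```

The environment is the training data: with fifty bad examples to one good one, the bot's responses mirror the trolls, not the designers.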

By definition, a company's motives are profit, power, and success

The business world is about competition and competitors: by nature, you are expected to outwit your rivals and come out on top, and a company's measure of success is driven by the profit motive. The problem is that this philosophy can sway and superimpose itself on whatever the company builds. A good AI should instead be developed with an altruistic personality, aimed at the welfare and betterment of the human race. Done properly, it could help cure diseases, augment us, and solve humanity's biggest problems. If an AI learns much as a child learns from a parent, the environment is the key factor, and growing up in a profit-seeking environment is not a good place to learn and develop.

