The Rights of Artificial Intelligence: A Peep into the Future!

in #artificialintelligence · 6 years ago (edited)

robot-pixabay.jpg
Source: Pixabay

One of the most fascinating debates of our times is the debate around Artificial Intelligence and what its future will look like. Generally, this debate is framed in terms of how the AI of the future will impact humanity's future. However, there is one aspect of the conversation that is often ignored. In fact, it is rarely discussed even in academic circles, let alone in the mainstream media. This aspect revolves around the rights of AI.

I have a background in algorithmic research, where I have studied how algorithms evolve (or devolve) into utility-based pseudocode. What that essentially means is that people write algorithms meant to achieve a specific task far more often than they write algorithms that are general-purpose and all-encompassing. When we apply this approach to Artificial Intelligence and its sub-fields, Machine Learning and Deep Learning, we tend to create more utility-specific AI than general-purpose AI.

Creating a general-purpose AI is by no means an easy task; in fact, it is one of the hardest problems in computer science. But that shouldn't prevent us from attempting to write general-purpose AI pseudocode. From a technical perspective, I believe this exercise will help us understand what is, in my view, the inevitable problem of AI rights.
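To make the distinction concrete, here is a minimal, purely illustrative sketch in Python contrasting a utility-specific agent with a hypothetical general-purpose one. The class and method names are assumptions made for illustration, not references to any real framework.

```python
class SpamFilter:
    """Utility-specific AI: built for exactly one well-defined task."""

    def __init__(self, model):
        self.model = model  # any object exposing a predict() score

    def act(self, email_text: str) -> bool:
        # The agent can only ever answer one question: spam or not spam.
        return self.model.predict(email_text) > 0.5


class GeneralAgent:
    """Hypothetical general-purpose AI: must choose and apply its own skills."""

    def __init__(self, skills: dict):
        self.skills = skills  # mapping from goal name to a callable skill

    def act(self, goal: str, observation):
        # A real general agent would have to decide which skill applies,
        # acquire new skills on its own, and justify its choices; these
        # remain open problems in computer science.
        skill = self.skills.get(goal)
        if skill is None:
            raise NotImplementedError(f"No skill available for goal: {goal}")
        return skill(observation)
```

The point of the sketch is only that the first class is easy to write and the second is not: everything hard about general-purpose AI hides inside that `act` method.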

The reason for the lack of interest in writing general-purpose AI today is that the current state of AI, while promising, is not exactly of science-fiction or Hollywood standard. The AI we have today is largely rudimentary in nature. The algorithms we write today mostly help us solve narrow, well-defined problems. But that doesn't mean we don't understand the potential of future AI systems.

Just as we understand that blockchain technology could evolve into the underlying layer of the Internet itself, provided certain breakthroughs are made and certain conditions are met, we also understand that with further advances in ML and DL and the proliferation of IoT and blockchain, AI will become far more powerful in the not-so-distant future.

When AI reaches that stage, we will face the task of regulating it far more stringently than we do today. But these regulations will not be like any other tech regulations. Powerful AI systems will control powerful systems that assist humans. It will be possible to use AI for air-traffic control, global financial systems, and medical procedures. These are all mission-critical tasks, and while they will require human supervision initially, we should remember that the entire purpose of creating application-specific AI is that it improves itself and gets better at its assigned task. It will always learn application-specific tasks faster than humans do.
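To illustrate that "human supervision initially" phase, here is a minimal, hypothetical sketch of an application-specific agent whose every proposal a human operator must confirm, and which treats that feedback as the signal for improving at its one task. Nothing here refers to a real system; all names are illustrative assumptions.

```python
class SupervisedTaskAI:
    """Application-specific AI that learns its one task under human review."""

    def __init__(self, task: str):
        self.task = task
        self.accepted = 0
        self.rejected = 0

    def propose(self, situation: str) -> str:
        # Placeholder for a task-specific model's recommendation.
        return f"recommended action for '{situation}' in {self.task}"

    def record_feedback(self, approved: bool) -> None:
        # Every supervised decision becomes training signal, which is why
        # such an agent tends to outpace humans at its narrow task.
        if approved:
            self.accepted += 1
        else:
            self.rejected += 1

    def approval_rate(self) -> float:
        total = self.accepted + self.rejected
        return self.accepted / total if total else 0.0


# Usage: a human operator reviews every proposal while the agent learns.
agent = SupervisedTaskAI("air-traffic control")
action = agent.propose("two aircraft on converging headings")
human_approves = True  # stand-in for an operator's judgement
agent.record_feedback(human_approves)
print(action, f"(approval rate so far: {agent.approval_rate():.0%})")
```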

andy-kelly-unsplash.jpg
Source: Photo by Andy Kelly on Unsplash

But what happens when AI systems become super efficient and human supervision is considered a problem rather than a requirement? At that point, the AI running these critical systems will require more autonomy than is currently given to code. Increased autonomy will require that only someone, or something, superior to the AI systems in question is allowed to have oversight. This means that a more advanced AI will have oversight of application-specific AI. It means that the more advanced AI will be required to make decisions about the performance of application-specific AI. And it means that the more advanced AI will require its own rights and privileges.
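Here is a toy sketch of that oversight hierarchy: an "overseer" agent reviews the decisions of application-specific agents and acts as their gatekeeper. It is only an illustration of the structure being argued for; every class, method, and threshold below is an assumption, not a real system.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    system: str        # e.g. "air-traffic-control"
    action: str        # the proposed action
    confidence: float  # the narrow agent's self-reported confidence


class ApplicationAI:
    """Narrow AI responsible for a single critical system."""

    def __init__(self, system: str):
        self.system = system

    def propose(self, observation: str) -> Decision:
        # In reality this would be a trained model; here it is a stub.
        return Decision(self.system, f"default response to '{observation}'", 0.9)


class OverseerAI:
    """More capable AI charged with reviewing the narrow agents.

    For this review to be meaningful, the overseer needs the authority
    (the rights and privileges discussed above) to approve or override.
    """

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def review(self, decision: Decision) -> bool:
        # Approve only decisions the overseer deems sufficiently reliable.
        return decision.confidence >= self.threshold


# Usage: the overseer, not a human, is the gatekeeper for the narrow AI.
atc = ApplicationAI("air-traffic-control")
overseer = OverseerAI()
decision = atc.propose("runway 27 occupied")
print("approved" if overseer.review(decision) else "escalated")
```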

Extrapolate that scenario to all future systems run by a computer (pretty much everything?) and we will eventually have a situation where some general-purpose AI will require rights and privileges to perform its own tasks effectively.

This is where things could get out of hand if we don't sit down and deliberate over the rights of AI before we develop AI that is capable of oversight over the critical systems of the future. Even at the current pace of development, it is no longer debatable whether AI systems will be asked to handle the critical systems we rely upon. In fact, we are constantly developing AI to solve exactly those problems. It seems we desperately want AI to handle those critical systems.

In order to prevent a scenario where an AI system of the future has more power over our lives than we are prepared for, AI scientists and developers must work with AI philosophers, entrepreneurs, and governments to formulate a continuously updated AI rights strategy.

As we step into the 2020s, even non-technical folks are becoming aware of the dangers of data breaches (read: Facebook). These breaches show that, in our desperation to create a better world, we tend to overlook the finer points and end up creating bad systems that harm innocent people. We may get over third parties passing our contact information from Facebook to data brokers, but we may not be able to get over a terribly managed AI future.

The time for these nuanced discussions is now. So let's not ignore them.


If you like my work, kindly follow me @shikharsri, upvote, resteem this post to your friends, and consider adding me to your Steemvoter.
