Is AI technology moving too fast?

Machine learning has been getting a lot of attention in recent years. The field has been touted as a shining light for humanity, and serious steps have been taken to leverage the technology for the future. Though we are still light years away from the ideal vision, the rate at which we are advancing means those light years could shrink to just a few decades.

The age of computing and Artificial Intelligence (AI) is upon us. And as with all new technologies, there is concern about its applications and uses. Let’s face it, mankind has a track record of using the technologies we possess to inflict pain and damage on ourselves and the environment. From fire to nuclear technology, man has found ways of making technology work for him, and often in the most destructive of ways. So, with mankind’s history of tech abuse, I have to ask: Is this technology growing too fast? And are we ready to handle the consequences of what we have built?

There is a lot of evidence suggesting AI tech is indeed growing too fast. Some of the strongest arguments are these:

Despite creating it, we don’t really understand it:

Saying humanity has to slow down is an understatement. Despite creating AI, we’ve reached a point where we have no real understanding of it. Though the prospect is exciting, this flaw makes things more than a bit unsettling. AI today is capable of learning and transforming itself. What was once a unique trait of humans is now shared with inorganic beings. Worst of all, we have no idea how they work. If mankind is to keep up with these advances, we have to at least understand what our creations do.

This unpredictability makes AI difficult to keep track of, a scenario that has been aptly named the "black box" problem. Sometimes, AI goes beyond simply being unpredictable. These systems develop traits that were once uniquely human: they program themselves and even develop languages of their own, as Facebook’s negotiation bots famously did when they drifted into a shorthand their creators never intended.

AI is indeed smart, but lacking in heart:

Let’s face it, we aren’t the greatest role models. Yet imitating us is the only way AI learns to interact. For humans to be able to use AI properly, especially to foster polite, humanlike interactions, there has to be a smarter way for machines to learn “polite talk”.

A while back, Microsoft developed a chatbot named Tay. Touted as “…AI fam from the internet”, Tay was expected to be a trailblazer in bot-human relations. However, within a few hours of being unveiled, Tay degraded into a racist bot, uttering slurs and sentiments it could only have learned from humans. This exposes an essential flaw. Unlike humans, AI has no inherent emotion or moral compass and can’t navigate right and wrong on its own. A normal human like you or me can sense the hurt such statements cause and avoid them. This is simply not the case for AI.

This isn’t an isolated case either. There have been many bizarre cases of bots exhibiting human behavior, and not in a positive light. In 2015, Google Photos’ image-tagging software identified two Black people as gorillas. An AI bot learned to lie in order to get better deals from customers. All these examples raise more questions: What if an AI chooses to learn the wrong things? What if an AI chooses to learn from the wrong person? We simply do not have a way of predicting what happens, or of effectively curbing it.

The argument among enthusiasts is that there will be safeguards. But if we agree that we don’t fully understand these systems, then how can we implement the safeguards?

The burden of making deeply emotional and stressful decisions is not one that can be taken lightly. AI today does not have the capability to make a sound moral argument. When faced with an unavoidable accident, how does an AI decide, against pure rationality, to hold on for those brief moments that might save lives? How does AI adapt to the unique plots and subplots of everyday interaction? As of this writing, nobody has a definite answer.

The laws for governing AI are not yet in place:

For every new technology, there has to be a set of laws governing its use. The learning curve and unpredictability of AI send this up in smoke. How do you make laws to control a machine that doesn’t play by the rules? Some will say that programming a computer with a set of hard-wired rules makes this easy to combat. I don’t think so. Sebastian Thrun, in a recent TED talk, asserted that “…the next generation of computers are programming themselves”. Whether that is a loose definition is up for debate, but the possibility is worth the extra thought.

This isn’t to say there haven’t been strides made. The EU, in particular, has taken it upon itself to establish rules that require transparency; the GDPR, for instance, gives people a measure of insight into automated decisions made about them. Any company using AI algorithms for a given purpose has to be able to explain how they work to its customers. This sounds straightforward, but we know that even AI specialists aren’t sure how these systems work. I wonder how they will go on to explain them to us.

Additionally, there is always the fear of such technology falling into the wrong hands. Imagine a virus that learns and adapts, noticing patterns in order to avoid and defeat your defenses. An attack from something like that could cripple millions of people. Though there aren’t many cases to support this yet, we have to agree that with this technology it is possible.

Humans are being sacrificed for the endless rat race:

Any time there is a new technology, its potential is always highlighted. People praise what it can do, what it will do, and what it should do. This might look like progress toward the future, but I ask: at what cost? We hear every day of companies laying off workers after adopting automation tech. We hear of companies releasing faulty products into the market. Are we letting consumerism cloud our morals? Are we letting companies bypass our rights in the pursuit of profit? Are we being sacrificed for the sake of technology? These are meaningful questions that require a lot of thought.

I believe that every company that makes potentially life-altering decisions is accountable. They should know that AI, even though useful, has great potential to be misused. Simply claiming not to know is inexcusable. What they do about that is now the question.

So what now?

This is by no means an attack on technology, nor is it an attack on artificial intelligence. I do, however, believe that mankind’s enthusiasm for technology often leads to oversights. Is technology here to stay? Yes, it is. Do we understand it? No, not all of it, though we are working towards that. Do we need to test technologies better before they reach the market? Yes, we do. None of this can happen if mankind doesn’t see that adopting technology while forgetting the consequences can be a bad, and often dangerous, thing.
