The Moral Dilemmas of Self-Driving Cars



It doesn’t happen often that a new technology comes along and causes a paradigm shift throughout the world, but we are at the very cusp of such a shift. I’m talking, of course, about self-driving cars.

Self-driving cars, or autonomous vehicles, are already a reality, and many people are familiar with the technology and may even have used it. Still, they are not yet mainstream, because the technology needs a lot more work before it can replace your conventional car for good.

Companies like Tesla and Google have played an instrumental role in the development of such cars and have brought them from the realm of science fiction to reality much sooner than anyone expected. I was certainly surprised to learn that they were already on the roads this early in the 21st century.

As with any revolutionary technology, there are a million questions attached to it. And for a technology that is supposed to make decisions on its own, and on our behalf, some of those questions cut deep into things we think of as uniquely human, like ethics and morals.

Moral Decisions Made By A Machine?


Each year hundreds of thousands of people lose their lives in road accidents throughout the world, and most of these accidents are said to be due to human error. That is where self-driving cars step in.

Since autonomous vehicles will have computing power that lets them make decisions much faster than we can, and since future autonomous vehicles could be interconnected, we could see a 90% reduction in road accidents once the world has switched to self-driving cars.

However, accidents will still happen, because technology, no matter how cutting edge, still fails from time to time. The problem is that lives are at stake, and split-second decisions will determine someone’s fate.

And for the first time, we will be relying on machines to make those ethical and moral decisions for us. Can we really build them to the point where they can do that? How would a machine even decide?

Which Life To Spare?


Let’s say you are in an autonomous car and a small child suddenly runs out in front of it. The car has only two options: steer right or steer left, because braking alone would still mean hitting the child.

Now, let’s say that steering left kills pedestrians, while steering right causes the car to collide with other cars, killing its own passengers, including you. What does the car do?

Many experts have suggested that an autonomous car should make whichever decision costs the fewest lives. That sounds perfectly logical, but would people want to ride in a car that could sacrifice them to save others, when in reality human drivers do the exact opposite and try to save themselves first?
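
To see how little that rule actually settles, here is a toy sketch of the “fewest lives lost” logic. Everything in it is hypothetical: the maneuvers, the casualty estimates, and the assumption that such numbers could even be known in the moment.

```python
# Toy illustration of a "minimize expected casualties" rule.
# All maneuvers and casualty estimates below are hypothetical.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

options = [
    {"name": "brake_only",  "expected_casualties": 1},  # hits the child
    {"name": "steer_left",  "expected_casualties": 3},  # hits the pedestrians
    {"name": "steer_right", "expected_casualties": 1},  # sacrifices the passenger
]

print(choose_maneuver(options)["name"])  # "brake_only" -- ties are broken arbitrarily
```

Notice that the rule ties the child against the passenger and says nothing about whose life should count for more, or whether a child and an adult weigh the same. That gap is exactly the dilemma.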

Moreover, would it be okay to let a child be killed to save an adult’s life? We are reaching into the depths of human thought and feeling here, and maybe we should question our own morality before we encode anything into the machines.

Comments

In my opinion the answer is very clear: the car should make the decision that results in the fewest lives lost or harmed.

[animated gif]

That would be the most logical, yes.

The gif is way too cute and entertaining. Your opinion could be to kill them all and I still wouldn't be able to resist the urge to upvote!

I don't think any AI coder will ever have to program moral choices into an autonomous vehicle. They will only make efficient mechanical decisions based on the limits of the car's brakes and steering. When the car 'sees' a hazard, it will hit the brakes and turn away to try to avoid hitting anything. If it still plows into a person or a barrier, then it was beyond the laws of physics to avoid the accident. So it basically takes the same action a human driver would, just with lower decision latency.
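
Roughly, that "brake and steer if physics allows" logic looks something like this toy sketch (purely illustrative; all the thresholds, numbers, and names are made up):

```python
# Toy sketch of a purely mechanical avoidance reaction; all numbers are made up.

def react(hazard_distance_m, speed_mps, max_decel_mps2=8.0, clear_lane=None):
    """Brake as hard as possible; steer away only if braking alone can't stop in time."""
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    actions = ["full_brake"]
    if stopping_distance > hazard_distance_m and clear_lane:
        actions.append(f"steer_{clear_lane}")
    return actions

print(react(hazard_distance_m=20, speed_mps=25, clear_lane="right"))
# ['full_brake', 'steer_right'] -- at 25 m/s it needs ~39 m to stop, so it also swerves
```

There is no moral weighting anywhere in it, just kinematics, which is the point.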

For me, the most interesting question is who will carry the liability: will the car makers be responsible, or will the car owner or passengers take the blame?
I can't wait for these to become more commonplace, because I feel like even the worst self-driving car is far better than a human operator who is texting behind the wheel.

Yes, the liability part is a whole different set of questions that will be asked soon, and regulations will need to be clear from the get-go.

The only thing that makes sense to me is to hold the manufacturer of the self-driving system liable in at-fault accidents, and with that tremendous liability it will be incumbent upon the manufacturers to make their systems as close to accident-proof as possible, because otherwise they will be sued into oblivion.

The main problem is that the car HAS to make a decision. It's not possible to make no decision because not making a decision is still a decision to do nothing. So we all have to face the problem of cars deciding about human lives.

Exactly!! It's the rare but certain-to-happen situations where the car WILL HAVE TO decide which lives to save.

No decision is still a decision in this case.

Each year hundreds of thousands of people lose their lives in road accidents throughout the world. It is said that most of these accidents are due to human error.

and that's a fact. I was a truck driver for almost 4 million miles. People turn off their brains when they turn on their cars.

there IS no question.
If robot cars will save lives.
DO IT.

end of story.

4 million miles? That's a lot! You must have seen everything that happens on the road. I agree, people surely turn off their brains lol!

And yes, there is no question about it. Replacing the human driver with a robot car will save lives, reducing accidents by as much as 90% if the estimates are to be believed.

We can replicate it; these are just rules. What you described is a set of rules ingrained in us. We just need to ingrain them in the machines and hope they don't learn to break them as we do 🤔🤔

True! And as AI advances, it will get better at making decisions in such life-and-death situations.

This is a very important issue. They are working on artificial intelligence to try and figure out which is the best moral choice and if a machine can learn it.

It is indeed very important. More so because in the future, almost everyone is going to be affected by it.

I recently was in a self-driving car in Las Vegas and it was the most helpless feeling. Not for me... at all!

Really? It must have been awesome!

It was but it was creepy at the same time. :)

Was it some kind of novelty ride or were you actually going somewhere?
