AI right now… autonomous cars… status quo… ethics?
At MIT (the Massachusetts Institute of Technology), Sertac Karaman is one of the world's most renowned specialists in the field of autonomous driving.
He and his team work on prototype self-driving cars.
He says:
“I think that we nailed a couple of items. Computers know, down to the level of feet or inches, where the vehicle is. In fact, more accurately than they need to. The computers have an overview of everything and know exactly where all the other traffic is at any given moment. But this is by far not sufficient for autonomous driving. That’s actually not what is required for driving. What really is required is to understand what’s going to happen next: in the next moment, a few seconds later, up to minutes or hours later. That’s the missing building block. Right now it’s really hard to tell whether a pedestrian will stand still or cross the road. Sometimes his face will tell you what he’s going to do next and you hit the brake, and sometimes not. People can look in the same direction, stand in the exact same position, and their behavior is revealed only by a minuscule facial expression or by the way they stand. This intuition, this gut feeling, is very hard for us to program into computers."
In a simplified lab environment this already works. But in reality the algorithms are hopelessly overwhelmed.
That stands in stark contrast to the advertising of the big automobile companies, which shows autonomous cars in futuristic designs.
In test drives with experimental autonomous cars, mishaps and breakdowns are common. Sometimes the test vehicles brake for no reason, or they get “stuck” because a car is parked at the side of the road, or a car that abruptly changes lanes isn’t detected and the human driver has to intervene to prevent a crash.
Sertac Karaman sums it up like this:
“What do I think about fully autonomous vehicles? I would be surprised if they existed in the next 10 years, but I would also be surprised if they didn’t exist in the next 20 to 30 years. Most people underestimate the technological complexity needed to build an autonomous vehicle that can handle all kinds of everyday situations. That’s the very hard part.”
It simply comes down to this: driving a car isn’t as trivial as you might think. That has a lot to do with the need to take in everything that happens around you while you drive: cyclists, pedestrians, other cars. And you sometimes have to guess what will happen next. Will the cyclist in front of you swerve into your lane? Will the pedestrian cross the street or stay put? Now imagine all of this happening fully autonomously. Again, it’s complex. Fully autonomous cars may remain out of reach for quite some time.
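To make the "guess what happens next" problem concrete, here is a minimal toy sketch of intent estimation. All cues and probabilities are invented for illustration; real systems use learned models over sensor data, not hand-set weights like these.

```python
# Toy sketch of pedestrian intent estimation: combine a few weak
# observable cues into a rough probability of crossing.
# The cues and the numeric weights below are purely illustrative.

def crossing_probability(facing_road: bool, moving: bool, near_curb: bool) -> float:
    """Return a rough probability that the pedestrian will cross."""
    p = 0.1                 # base rate: most pedestrians stay put
    if facing_road:
        p += 0.3            # body orientation hints at intent
    if moving:
        p += 0.4            # already in motion toward the road
    if near_curb:
        p += 0.1            # standing right at the curb edge
    return min(p, 1.0)

# Two pedestrians at the exact same spot can carry very different intent:
print(crossing_probability(True, True, True))    # high: likely to cross
print(crossing_probability(False, False, True))  # low: probably waiting
```

The point of the sketch is exactly Karaman's point: the inputs are tiny, ambiguous cues, and getting their interpretation right in the real world is the hard part.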
In complete contrast to this, driver assistance systems, like an emergency brake assistant, mostly work fine already.
See the following example:
In the slow-motion replay you can see what happens. The driver of the car ahead of the filming car doesn’t notice the traffic jam in front of him and collides with the vehicle ahead. The emergency brake assistant picks up on this and slows the filming car down so that it doesn’t collide as well.
But by which principles should the tech decide what should happen?
So another, even bigger issue arises with autonomous vehicles.
Ethics!
The MIT Media Lab has been researching this field for a few years. Check them out here: https://www.media.mit.edu/.
They ask the hard questions that arise with the use of AI. By which rules should the intelligent machines of the future act?
Iyad Rahwan of the MIT Media Lab and his team do research in this field. They’ve built the “Moral Machine”.
About Moral Machine
From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.
Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.
Iyad explains what they are doing:
“Most of the time people don’t remember anything from an accident. Mostly they are completely surprised and overwhelmed by it; they yank the steering wheel hard in one direction or slam on the brakes in panic. They just freak out and instinctively do the next best thing, and honestly you cannot expect a human to do the ‘right thing’ in such a situation. You cannot hold them responsible, unless they were intoxicated before starting their drive or ran a red light, for instance; otherwise you really can’t blame the human."
"But autonomous vehicles, with their extremely fast computers and sensors, sample the driving environment a million times a second. For the machine, time runs more slowly, so to speak. The machine can analyze the situation exactly and decide on a distinct strategy. And that is better than leaving it up to chance."
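The sense–analyze–decide loop Rahwan describes can be sketched in a few lines. This is a minimal toy, not a real driving stack: the sensing is simulated, and the thresholds are made-up numbers for illustration.

```python
import random

# Toy sense-analyze-decide loop: sample the environment, compute
# time-to-collision, and commit to a distinct strategy instead of
# reacting in panic. All thresholds here are illustrative only.

def sense():
    """Simulate one sensor sample: distance (m) and closing speed (m/s)."""
    return random.uniform(5, 50), random.uniform(0, 20)

def decide(distance_m: float, closing_speed_ms: float) -> str:
    """Pick a braking strategy from the analyzed situation."""
    if closing_speed_ms == 0:
        return "cruise"                      # not closing in on anything
    time_to_collision = distance_m / closing_speed_ms
    if time_to_collision < 1.0:
        return "emergency_brake"             # imminent collision
    if time_to_collision < 3.0:
        return "brake"                       # slow down early
    return "cruise"

# One pass of the loop; a real system repeats this many times per second.
distance, speed = sense()
print(decide(distance, speed))
```

The key contrast with a human driver is that this loop always produces a deliberate, repeatable choice; the hard question the post turns to next is what that choice should be.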
But what counts as a good strategy or decision, compared to the random behavior of humans in such a situation?
The answer is not as simple as it might seem!
MIT's "moral machine" polls what people think would be the right decision.
Let me show you a few examples:
At first glance, it’s not just the “body count” that has to be considered!
There are age differences, law-abiding behavior and its opposite; you’ve got the elderly, the middle-aged and kids.
Not so easy, right?
Or this scenario here…
Or this…
Judge for yourself by visiting the Moral Machine!
You see, it gets very complicated in some instances, and a lot comes into play in people’s decisions about what to do.
Cats vs. Dogs
Make a guess how I have decided…
Left: 3 dead dogs.
Right: 3 dead cats.
Tough questions, right?
- Would you rather save men or women?
- Do you treat them equally?
- Is the passenger more important than the pedestrian?
- Do you have to consider whether people were law-abiding or not (e.g. crossed at a red light)?
As you can see, once there are multiple dimensions of complexity, the answer mostly isn’t obvious.
To date, they’ve collected 40 million answers from people all over the world.
By analyzing this data, they can find out in which regions of the world the decisions are alike and where they differ.
How do the respective cultures influence the decisions?
People mostly agree to save as many people as possible, to spare children first, then women, and to prefer those who behaved lawfully over those who didn’t.
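A tiny sketch can show why these aggregate preferences still don't make the choice easy. The weights below are invented for illustration and are not the Moral Machine study's actual model; they only encode the rough ordering just described (more lives, then children, then women, then lawful behavior).

```python
# Toy scoring of a "who to spare" dilemma under made-up weights that
# mimic the aggregate Moral Machine preferences. Illustrative only.

def group_score(people) -> float:
    """people: list of dicts like {"age": "child", "sex": "f", "lawful": True}."""
    score = 0.0
    for p in people:
        score += 1.0                  # every life counts
        if p.get("age") == "child":
            score += 0.5              # children spared preferentially
        elif p.get("sex") == "f":
            score += 0.2              # then women
        if p.get("lawful", True):
            score += 0.3              # lawful before jaywalkers
    return score

def spare(group_a, group_b) -> str:
    """Return which group a vehicle following these weights would spare."""
    return "A" if group_score(group_a) >= group_score(group_b) else "B"

a = [{"age": "child", "sex": "m", "lawful": True}]
b = [{"age": "adult", "sex": "m", "lawful": False}]
print(spare(a, b))  # the lawful child outweighs the jaywalking adult here
```

Notice that someone has to pick those weights, and as the survey data shows, different cultures would pick them differently. That is exactly the hard part.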
The whole business of leaving the responsibility to a machine is anything but trivial.
I can also imagine some people hacking this decision-making process to spare themselves before anyone else, regardless of the number of fatalities.
Think of politicians, executives and others with higher social status, perhaps.
To sum things up, here are my 2 Satoshis on this.
I guess we're still really far from a technologically and ethically sufficient solution for autonomous driving.
On the other hand, a fully autonomous mode, without a human overseeing the drive, could be limited to roads where the number of variables to juggle is limited, like interstate expressways or roads in the style of the German Autobahn: one-way traffic, multiple lanes, with on- and off-ramps.
I'm afraid technological developments in the field of AI have been underestimated many times, at least over the last decade. So things could get a boost from using AI in the development process itself, maybe?
Last but not least: IT security in this realm. Imagine, in a 5G (or beyond) future riddled with autonomous vehicles of all kinds, the damage a zero-day exploit could cause... A simple rule of thumb: the more complex a system is, the more attack surface it has.
Let me know what you think, either here in the comments or in a Twitter reply!
Gif from my friend https://twitter.com/smilinglllama!