Moral Robots
Sometimes I feel like life has turned into a group therapy session where the standard introduction is: "Hello, my name is Renzo, and I love analyzing AIs."
Automated entities that we call "bots" have become normal over the last few decades, among them self-driving cars (a horrible name, if you ask me). Every day, people see a "Terminator"-style apocalypse in them, treating bots as a problem instead of a solution (though, to be fair, many bots today are more noxious than beneficial; a natural trait, considering that humans create them).
This kind of issue hits me in the face every time I scream at the blender for its inability to stop at the right moment, somewhere between too liquid and too thick. But it gets interesting once we start talking about a robot's moral values; something that sounds absurd, a word I don't use lightly, so I'll try to explain why.
It goes sort of like this:
Let's suppose a bus loaded with 10 babies, 15 kittens and, why not, a Mime, is rolling down a hill on a two-way road, with a cliff on one side and the mountainside on the other. Another vehicle is coming the other way: a self-driving car carrying a 40-year-old man as its only passenger. Right as they are about to cross paths, a mechanical failure on the bus leaves the self-driving car with two options: A) force the bus off the cliff, saving itself and its passenger; or B) throw itself down the cliff, killing its only passenger but saving everyone on the bus: the babies, the kittens and the Mime.
If we keep the Mime in the equation, the choice is simple. Now, if there were no Mime, things would take a darker tone. Does the robot have to kill its passenger/owner, following the logic of the lesser of two evils? Or should it ensure the safety of its own passenger above all?
This moral question is one many people pose, but it rests on a huge flaw: it judges the bot for things humans could not even dream of doing. Put another way: why would we force "human moral values" onto a robot? We already know humans are anything but "moral"... it's a flaw. Yes, this is that awful moment where science and engineering stop being abstract, distant, perfect and magical, and meet the real world: they interact with real people, real kittens and real Mimes. So, even when we don't want to, we need a little help from our distant cousin Philosophy, which makes us feel a little bit "dirty".
The first question I'd ask is: in the same situation, but with both vehicles driven by humans, what would the human do?
There is no single answer to that. First, because the human could not assess who is in the other vehicle. Second, because a human could not possibly evaluate the potential damage of an accident in a fraction of a second, let alone weigh which outcome is the lesser of two evils. Third, because no matter how trained a human is, self-preservation and survival always prevail (an automatic response that requires no reasoning; this includes embedding the Mime, the babies and the kittens into the side of the mountain).
Then again, same starting question: why do we ask a robot to do things humans cannot even do? And as if that question were not enough, people cannot come to a consensus on what a robot should or should not do for the communal good; everyone has their own opinion. A robot would value a Mime's life just as strongly as it would value a worthy person's life. Would we empower it to judge people's worth?
Since the sciences are stronger than philosophy on this topic, we need to apply a scientific method to the problem; a bot does not run on theories and speeches (Engineering wins, flawless victory). We could write conditional statements that evaluate damage and minimize risk to humans. We could set up predefined "values" of life, where younger people get a higher "score" (babies), then kittens (everyone loves kittens) and, at the end of the chain, Mimes (yes, we can save them too). This is also arguable: we could add another conditional statement so that, upon seeing a Mime, the car will try to run him over.
Google's MimeSlayer
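To put that scoring chain in semi-concrete terms, here's a toy sketch in Python (every score, name and function is invented for the sake of the joke; no real self-driving stack is implied, least of all Google's):

```python
# Toy illustration of the "predefined life values" idea from the article.
# All scores are arbitrary and purely for illustration.
LIFE_SCORES = {
    "baby": 100,   # top of the chain
    "adult": 50,
    "kitten": 30,  # everyone loves kittens
    "mime": 1,     # yes, we can save them too... barely
}

def outcome_cost(casualties):
    """Sum the 'life value' lost under a candidate maneuver."""
    return sum(LIFE_SCORES.get(kind, 50) * count
               for kind, count in casualties.items())

def choose_maneuver(options):
    """Pick the lesser of two evils: the option with the lowest cost."""
    return min(options, key=lambda option: outcome_cost(option["casualties"]))

# The bus scenario: A) push the bus off the cliff, B) sacrifice the car.
options = [
    {"name": "A: save the passenger",
     "casualties": {"baby": 10, "kitten": 15, "mime": 1}},
    {"name": "B: save the bus",
     "casualties": {"adult": 1}},
]
print(choose_maneuver(options)["name"])  # -> B: save the bus
```

Which, of course, only moves the hard question one level up: who gets to decide the scores?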
The problem persists: "what would we do?" vs. "what would it do?". The real problem lies in our feeling of being in control, of freedom, even of the intimacy of a choice (our personal values vs. the robot's). Is it any less my choice if I code it into an algorithm, so that the car wrecks the Mime on sight even when I'm not the one behind the wheel? Will I have nightmares about that line of code stating that the car will run over a baby to avoid running over an elderly summit? Awkward, hard questions. Do they imply we should renounce self-driving cars (or bots altogether)? Anyway, they aren't THAT good.
Or maybe they are: self-driving vehicles can have a lot of advantages, among them:
- Up to a 90% reduction in accidents: around 90 out of 100 accidents are blamed on human error (or "natural stupidity"), something an AI does not have.
- Optimized energy consumption: efficient use of the brakes, optimal cruise speed...
- Optimized logistics: if robots use a common protocol, they can arrange to move more goods in less time without clogging the roads at 8 AM, when you already need to be at work.
- No need for parking: since the car drives itself, while you're not using it it can serve other people... or go party with its autocar friends.
- A humble prediction: with full integration of autonomous vehicles, car-sharing included, the total vehicle population could be reduced to 20% of today's.
Here is where the engineer is again put in an uncomfortable (or pragmatic, whichever comes first) position: the point where technology becomes more "human" because man embeds his own traits into it, becoming a potentially poisonous moral stain, transferring his notions of good and evil to a system capable of taking options we could not possibly take. Perhaps even better choices than we would ever make, limited as we are by our urge to "keep existing", no matter how many babies, elderly and Mimes we have to stain our car with to accomplish it.
Progress does not only mean answers, inventions, iPads, huge TVs and Google Glasses that nobody ever wants to use because they make people look dumb. Progress also implies questions; some are new, others so old that they have already left a sore. What is good? What is bad? How much bad can we withstand for the sake of good? How in hell do we tell a bot how much each kitten is worth?
Philosophy, I don't want to pressure you, but here we are: hundreds of thousands of scientists, engineers and thinkers waiting for your answer. You always wanted us to take you seriously. Here's your moment to shine.
(No Mimes were hurt in the creation of this article.)
It is an interesting discussion. However, I must also point out that this is hilarious:
Simply brilliant :)
I always try to put a little bit of humor into anything I write; otherwise it becomes somewhat boring, for me and the readers.
Glad to see it's appreciated :D
Lol, it is, and it helps, because ethics (particularly machine ethics) can be a dry subject. Your use of humour made it much easier and more rewarding to read. Very clever technique :)
Don't ask me: I'd reduce the Mime to pulp... even if it had nothing to do with my own physical integrity.
The ethics of driver decisions are something that we humans don't really think about very often, but it is something autonomous car designers are going to need to squarely confront in order to design an effective system.
One possible solution not contemplated in the OP is that an autonomous car's ethics should be a strictly legal/regulatory, compliance-based system. Expecting a machine to exhaustively analyze situations on a case-by-case basis the way humans do is arguably unreasonable given the current state of the art in AI. But we can design one that will follow the letter of the law by rote, and possibly refine our system of creating such rules to unambiguously define our desired behavior for autonomous vehicles.
For car accidents we attribute liability and responsibility, and for a functioning self-driving car we are really only concerned with negligence: has the creator of the self-driving system negligently designed it such that it will cause harm? This approach greatly simplifies the trolley problem of whether to kill the driver or run over the elderly summit, because instead of having a computer vision system that must know exactly what it is about to hit, perform some sort of utilitarian calculus and weigh the value of human lives, the machine simply obeys by rote the driving instructions imposed upon it. In some contexts this may cause it to run over the elderly summit because they were improperly crossing the street at a dangerous intersection. And sometimes it will cause the car to choose to hit the brick wall, killing the driver, perhaps because humans decided that in certain zones or streets the danger to pedestrians is so great that the car should crash rather than enter those areas (such as a crowded shopping mall). The car cannot be expected to make such complex decisions, so humans need to reduce them to a more primitive set of "drive here" and "never, ever drive here" commands.
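To illustrate, here's a toy sketch of that rote, rule-following layer (every zone name and rule is hypothetical, not drawn from any real regulation or driving system):

```python
# Hypothetical compliance-based planner: no utilitarian calculus,
# just rules that humans and regulators wrote ahead of time.
FORBIDDEN_ZONES = {"crowded_shopping_mall", "school_at_recess"}

RULES = [
    # (condition, action) pairs, applied by rote, in order.
    (lambda s: s["next_zone"] in FORBIDDEN_ZONES, "refuse_entry"),
    (lambda s: s["pedestrian_in_crosswalk"],      "full_stop"),
    (lambda s: s["speed"] > s["posted_limit"],    "brake"),
]

def decide(state):
    """Return the first action whose rule matches; default: keep driving."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "proceed"

state = {"next_zone": "crowded_shopping_mall",
         "pedestrian_in_crosswalk": False,
         "speed": 30, "posted_limit": 40}
print(decide(state))  # -> refuse_entry
```

The point is that the hard ethical judgment lives in the rule list humans write and refine, not inside the car.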
Just in case it wasn't noticed, I was making an indirect reference to Steemit upvote/downflag bots.
Still, it's a great addendum.
The only thing I'm not "in tune" with is this: "The car cannot be expected to make such complex decisions."
There was a time when the chess-playing bots humans made existed only on paper (http://www.turingarchive.org/browse.php/B/7). Today, "Deep Blue" is the cornerstone of a whole AI development discipline.
As I state in the article, one cannot judge AIs by human parameters.
Love the presence of the mime in this!
@renzoarg Thanks for sharing!
I don't... I even had a hard time inserting the image into the article.