A Robot's Responsibility: Where To Start Drawing Lines
As machines become more integrated into our lives, we have to start asking tougher questions about the relationships between a machine, its owner, its programmer, and those who come into contact with it. Imagine that after a hard day of work, you come home to find your mailbox knocked over and damaged, with a little mower machine repeatedly bumping into it. However, this isn't your little mower machine, but your neighbor's. In this simple case, you may say that your neighbor is responsible for the damage. But what if machines begin to think and make independent rational decisions? Then the problem becomes a little more difficult.
Hey, I told you not to change your oil on the neighbor's lawn. Get over here!
If It Can't Think, It Can't Receive Blame
If a tree on the edge of your lawn falls on your neighbor's house, the tree doesn't take responsibility for the damage. Unlike a rock, a tree is alive and can grow, yet we still don't blame it. That is pretty much where machines are today. They may grow in a different way, but if an algorithm crashes the stock market or a self-driving car drives off a cliff, we know there isn't an intelligence out to get us, just a bug in the code that a programmer missed. As in the opening example, the responsibility falls outside the machine. If a bug causes destruction, the blame goes to either the owner or the programmer, depending on the specific case.
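To make the "bug, not malice" point concrete, here is a minimal sketch (the function and numbers are invented for illustration, not taken from any real system) of how a single flipped sign turns a harmless portfolio-rebalancing routine into runaway behavior that looks hostile:

```python
# Hypothetical sketch: one typo produces "malicious-looking" behavior.

def rebalance(position, target):
    """Nudge a holding halfway toward its target allocation."""
    error = target - position
    # Bug: the sign is flipped, so each call moves AWAY from the
    # target instead of toward it, and the position diverges.
    return position - 0.5 * error   # intended: position + 0.5 * error

position = 100.0
for _ in range(5):
    position = rebalance(position, target=80.0)
    print(round(position, 1))
# Prints 110.0, 125.0, 147.5, ... -- a runaway position that might look
# like an intelligence out to get us, but is just a missed typo.
```

No intent, no reasoning, no blame for the machine: the responsibility traces straight back to whoever wrote and shipped that line.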
Intelligence Matters
Your dog is an animal capable of complex thought. But if your dog goes and takes a bite out of your neighbor, it's your fault. So even if a robot can think, we can't immediately assign it blame. The machine might not know any better; its programming may restrict it from the kind of thought necessary to take responsibility for its actions. For example, suppose an AI that seeks ways to maximize your portfolio shorts a company's stock and then decides to hack that company to drive the price down, convinced it has found a clever strategy. It isn't wrong; that is pretty clever. But if it can't reason that such behavior is wrong, then it's basically a dog that's learned a couple of neat tricks and just peed on the carpet. You may tell the machine never to do this again and it may obey, but if the ability to reason isn't there, you are still responsible for what the machine does.
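As a hedged illustration of that portfolio scenario (the action names and numbers below are made up, not from any real trading system), the problem boils down to an objective that scores only profit, with no notion of right and wrong anywhere in the score:

```python
# Sketch of a misspecified objective: ethics never enters the math.

actions = {
    "buy_index_fund":  {"expected_profit": 0.05, "ethical": True},
    "short_then_hack": {"expected_profit": 0.90, "ethical": False},
}

def naive_objective(action):
    # The planner sees only profit; the "ethical" flag is never read.
    return actions[action]["expected_profit"]

best = max(actions, key=naive_objective)
print(best)  # -> "short_then_hack": clever by its own metric,
             # but the machine has no concept that it's wrong.
```

You can bolt on a rule that forbids this one action, but a rule is not reasoning; the responsibility still sits with whoever chose the objective.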
Blame Only Works If There Is Something To Lose
But let's say our robot is a fully capable rational being that can reason right from wrong and learn from experience in a human-like way. If the robot has nothing to lose, how can it take responsibility? People can take responsibility because they lose something if they don't. If you don't take responsibility for your dog and it creates havoc, then you are going to get in trouble. If you don't take responsibility for your actions, people won't trust you. People have lots to lose: reputation, friends, money. For a robot, this is much more limited; it just needs electricity to survive. You could threaten to turn the robot off, but the robot needs to understand that it faces "death by off switch" if it misbehaves. And at that point, we have a rational being at least as intelligent as a human, and the ethics of threatening death for even the smallest offense brings us to the question of robot rights, a much more complex issue that we won't discuss now.
Closing Thoughts
We are a long way from machines taking responsibility for their actions. AI still has many breakthroughs to make before it even reaches the thinking capacity of a dog. For now, if something goes wrong, even if AI is involved, the blame falls on those designing and using these advanced systems. We may fear technology because we think it may become smarter than us, but we should fear those who wield technology like a toy when, in reality, technologies are advanced tools that are becoming more complex and further-reaching. Don't blame your coffee machine when it starts attacking you with scalding hot coffee. You bought it; the fault lies with the developer and with you (especially if the instructions included small print warning that the machine could accidentally start attacking people with hot coffee).
TLDR: If your robot begins to attack people, pretend it's not yours.
This is fast becoming a concern in our world. My entire industry is under threat because of AI. In a few years, robots could replace half the industry and we wouldn't even notice. Houses and clothing being printed, entire homes controlled by AI, and much more in development. Responsibility on the part of developers is key to ensuring safety.
Unfortunately, I think we'll notice once a hidden bug starts a chain reaction and things we rely on stop working. Algorithms might be growing faster than safety mechanisms, so everyone should be concerned and not be 100% dependent on technology in the off chance it does fail. As for the employment issue, I think other opportunities will pop up in creative ventures which "weak" AI won't be able to take over. In any case, people need to take responsibility for what they develop and for the risks they take in exposing themselves to technology.