Are Buddhist Robots Safer Than Homeschooled Robots?

Let's assume that you are a master roboticist and have built an AI machine which can walk, see, hear, talk, and learn. Congratulations! You are almost done. The last and most important part is making the robot behave in an ethical way. What does it mean to be an ethical robot? And how can you accomplish this?

An ethical robot makes decisions which are good for individuals and society: it should make good decisions as often as possible and minimize the number of decisions which are harmful. Let's explore how we can accomplish this.
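One way to make that goal concrete is to imagine the robot scoring every candidate action by expected benefit and expected harm, then picking the best balance. Here is a minimal sketch in Python; the action names, scores, and the harm weight are all assumptions for illustration, not a real decision system:

```python
# Hypothetical sketch: frame "mostly good, rarely harmful" as picking
# the action with the best benefit-versus-harm score.
def choose_action(actions, benefit, harm, harm_weight=2.0):
    """Pick the action with the highest ethical score.

    benefit(a) and harm(a) are assumed scoring functions in [0, 1];
    harm is weighted more heavily so the robot errs on the side of caution.
    """
    return max(actions, key=lambda a: benefit(a) - harm_weight * harm(a))

# Toy usage with made-up (benefit, harm) scores.
scores = {"help": (0.9, 0.0), "ignore": (0.1, 0.2), "shove": (0.8, 0.9)}
best = choose_action(scores, benefit=lambda a: scores[a][0], harm=lambda a: scores[a][1])
print(best)  # -> help
```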

Program Ethics With Hard-Wired Rules

Any robot can be programmed with hard-wired rules such as "Do not kill", "Do not steal", and "Do not lie". The program can be long and sophisticated, with elaborate exceptions such as "Do not kill, unless killing the person will prevent more killing", "Do not steal, unless stealing will let you give food to starving children", or "Do not lie, unless the lie will save another person's life". Who should select and program these rules? The factory? The owner of the robot? The government? The UN? The Pope? And whoever selects these rules, should they be adjustable or immutable? Also, what is the priority of the rules? What happens when two rules conflict with each other? What happens when a situation is new and no rule applies? These are hard questions that need to be answered by the programmer.
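To make the priority and conflict questions concrete, here is a minimal sketch of a hard-wired rule table, assuming a simple scheme where each rule carries an explicit priority, exceptions outrank the rules they modify, and any situation no rule covers falls through to a conservative default. The rules and situation flags are made up for illustration:

```python
# Hypothetical sketch: hard-wired rules with explicit priorities.
# Each entry is (priority, condition, verdict); the lowest priority
# number wins, so exceptions sit above the rules they modify.
RULES = [
    (0, lambda s: s.get("would_kill") and s.get("prevents_more_killing"), "allow"),
    (1, lambda s: s.get("would_kill"), "forbid"),
    (2, lambda s: s.get("would_steal") and s.get("feeds_starving_children"), "allow"),
    (3, lambda s: s.get("would_steal"), "forbid"),
]

def judge(situation):
    """Return the verdict of the highest-priority matching rule.

    Conflicts are resolved purely by priority; a novel situation that
    no rule covers falls through to a conservative default.
    """
    matches = [(priority, verdict) for priority, cond, verdict in RULES if cond(situation)]
    if not matches:
        return "defer_to_human"  # new situation, no rule applies
    return min(matches)[1]

print(judge({"would_steal": True}))                                   # forbid
print(judge({"would_steal": True, "feeds_starving_children": True}))  # allow
print(judge({"would_surprise_party": True}))                          # defer_to_human
```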

Teach Ethics By Examples From Moral Mentors

Programmed rules can be arbitrary, reflecting only the moral compass of the programmer. A better approach might be to have the robot start with a blank program of ethics and learn by example from good, moral people. This is often how children learn morals. It would give the robot the ability to adapt and learn the morals which are common to the society it lives in.
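One plausible reading of "learn by example" is simple supervised learning: the robot stores situations that a moral mentor has judged, and copies the verdict of the most similar one. Here is a minimal nearest-neighbour sketch; the features and verdicts are made up for illustration:

```python
# Hypothetical sketch: the robot starts "blank" and copies the verdict
# of the most similar situation a moral mentor has already judged.
def nearest_mentor_verdict(situation, mentor_examples):
    """1-nearest-neighbour over mentor-labelled situations.

    situation and each example's features are dicts of 0/1 traits;
    similarity is simply the count of matching traits.
    """
    def similarity(a, b):
        keys = set(a) | set(b)
        return sum(a.get(k, 0) == b.get(k, 0) for k in keys)

    best = max(mentor_examples, key=lambda ex: similarity(situation, ex["features"]))
    return best["verdict"]

mentor_examples = [
    {"features": {"takes_property": 1, "owner_consents": 0}, "verdict": "wrong"},
    {"features": {"takes_property": 1, "owner_consents": 1}, "verdict": "ok"},
    {"features": {"tells_falsehood": 1, "protects_someone": 1}, "verdict": "ok"},
]

print(nearest_mentor_verdict({"takes_property": 1, "owner_consents": 0}, mentor_examples))  # wrong
```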

Have the Robot Deduce Morals Logically

Instead of programming the robot with many rules, or having it learn by example, perhaps a better approach is to program it with a single rule, the Golden Rule: "Do unto others as you would wish others to do unto you", and ask the robot to deduce the rest of the ethical rules which will guide its behaviour. The Golden Rule, or law of reciprocity, is the principle of treating others as one would wish to be treated. Such a robot, not being human, might have different priorities and experiences than humans, but through deduction from the Golden Rule it could generate ethical rules similar to the ones humans live by; a toy sketch of such a reciprocity check is given below.

Which of the three approaches is best would need to be tested in a safe, monitored environment to find the best ethical setting for moral robots.
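For flavour, here is a toy version of that reciprocity check, assuming the robot has some model of what it would itself accept: an action is permitted only if, after swapping actor and recipient, it is still acceptable. The action format and the preference model are illustrative assumptions:

```python
# Hypothetical sketch: a Golden Rule check by role reversal.
# The robot permits an action only if it would accept that same
# action being done to itself.
def golden_rule_permits(action, acceptable_to_self):
    """Reciprocity test: swap actor and recipient, then re-evaluate.

    acceptable_to_self(action) is an assumed preference model of what
    the robot (standing in for the recipient) would willingly undergo.
    """
    reversed_action = {**action, "actor": action["recipient"], "recipient": action["actor"]}
    return acceptable_to_self(reversed_action)

# Toy preference model: the robot would not accept being deceived or harmed,
# so by reciprocity it deduces rules like "do not lie" and "do not harm".
def acceptable_to_self(action):
    return action["kind"] not in {"deceive", "harm"}

print(golden_rule_permits({"kind": "assist", "actor": "robot", "recipient": "human"}, acceptable_to_self))   # True
print(golden_rule_permits({"kind": "deceive", "actor": "robot", "recipient": "human"}, acceptable_to_self))  # False
```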
