You are viewing a single comment's thread from:
RE: The Moral Dilemmas of Self Driving Cars
I don't think any AI coder will ever have to program moral choices into an autonomous vehicle. They will only make efficient mechanical decisions based on the limits of the car's brakes and steering. When the car 'sees' a hazard it will hit the brakes and turn away, trying to avoid hitting anything. If it still plows into a person or a barrier, then it was beyond the laws of physics to avoid the accident. So it's basically taking the same action a human driver would, except with lower decision latency.
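To make the point concrete, here's a rough sketch (hypothetical names, in Python) of what that kind of purely mechanical hazard response might look like: no moral weighting anywhere, just maximum braking plus whatever steering angle the car's physical limits allow.

```python
# Hypothetical sketch of a purely mechanical hazard response.
# No moral choices -- just brake hard and pick the clearest
# steering angle the car can physically reach.

MAX_BRAKE = 1.0          # full braking force (normalized)
MAX_STEER_DEG = 30.0     # assumed steering limit at current speed

def hazard_response(hazard_bearing_deg, clear_paths_deg):
    """Return (brake, steer) commands when a hazard is detected.

    hazard_bearing_deg: direction of the detected hazard
    clear_paths_deg: steering angles the sensors report as unobstructed
    """
    brake = MAX_BRAKE  # always brake as hard as the car can

    # Steer toward a clear path within physical limits, if one exists.
    options = [a for a in clear_paths_deg if abs(a) <= MAX_STEER_DEG]
    if options:
        # Pick the clear path farthest from the hazard's bearing.
        steer = max(options, key=lambda a: abs(a - hazard_bearing_deg))
    else:
        steer = 0.0  # no clear path: physics decides the outcome

    return brake, steer

# Example: hazard dead ahead, clear paths at -20 and +25 degrees.
print(hazard_response(0.0, [-20.0, 25.0]))  # -> (1.0, 25.0)
```

Nothing in that logic weighs who or what is in the way; it just brakes and steers within the car's limits, which is the whole point.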