RE: Artificial morality: Moral agents and Tauchain

in #tauchain · 7 years ago

Interesting point about artificial agents being able to understand morality better than humans, yet still needing humans to ultimately oversee and even override them when they get it wrong. There is a component of morality that goes beyond perfectly adhering to a set of rules, and I'm not sure how well a machine would catch on to that part.

And that is why I mentioned the concept of extended mind/extended cognition, which philosophically means humans do control these agents, either by setting the goals individually and personally, or collectively via a consensus process. In Ethereum, for example, a hard fork was possible to put an end to a bad smart contract. Governing autonomous agents should be easier than that, but it is an example of a last resort.

So you're envisioning a scenario where humans still have the final say. The autonomous agents would be limited by their programming (done by humans), and humans would always oversee them and possibly override them. It seems the autonomous agents could handle the simpler, more straightforward ethical situations and take that burden off humans, but humans would always have to step in when the situation was more complicated than what could feasibly be programmed into the agent. Am I understanding you correctly?

Humans might not do the programming at all, because program synthesis and automatic programming could take care of that. Humans set the goals and make the rules. For example, your autonomous agent can have your morality, yet also be aware of the laws in the different jurisdictions it may interact with. It may even have greater awareness of those laws than you currently do.
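
As a rough illustration of that idea, here is a hypothetical Python sketch of an agent that screens each action against both its owner's moral rules and the law of the jurisdiction it is operating in. The `OWNER_RULES` table, `JURISDICTION_LAW` table, and `permitted` function are all invented for this example; nothing here reflects an actual Tauchain interface.

```python
# Hypothetical sketch: an agent checks an action against both its
# owner's personal moral rules and the law of whatever jurisdiction
# it is acting in. All names are illustrative.

OWNER_RULES = {"gambling", "weapons"}            # the owner's morality
JURISDICTION_LAW = {                             # per-jurisdiction rules
    "US": {"unlicensed_securities"},
    "DE": {"unlicensed_securities", "hate_speech"},
}

def permitted(action_tags: set[str], jurisdiction: str) -> bool:
    """An action is permitted only if it violates neither the owner's
    rules nor the local law where the agent is operating."""
    forbidden = OWNER_RULES | JURISDICTION_LAW.get(jurisdiction, set())
    return action_tags.isdisjoint(forbidden)

print(permitted({"trade", "gambling"}, "US"))   # False: owner forbids it
print(permitted({"trade"}, "DE"))               # True: breaks no rule
```

The point of the design is that the owner's goals and the legal environment are data the agent consults, not code the owner has to write by hand.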

Ultimately, the way I would like to see it designed, humans can always shut it down. Just as humans can arrest humans, why not let humans shut down, arrest, or disempower autonomous agents that go rogue? If an autonomous agent you control goes rogue, your reputation is put at risk if you don't shut it down or petition the community to have it shut down.

Autonomous agents require resources, and this is where humans ultimately have the final say.
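
A minimal sketch of that last-resort control, assuming a hypothetical `AutonomousAgent` class with an owner kill switch and a community petition quorum. This is one possible shape for the pattern described above, not an actual Tauchain design:

```python
# Hypothetical sketch of the "last resort" pattern: an agent its owner
# can always halt, and which the community can also halt by reaching a
# quorum of distinct petitioners. Illustrative only.

class AutonomousAgent:
    def __init__(self, owner: str, quorum: int = 3):
        self.owner = owner
        self.quorum = quorum            # petitions needed for a community halt
        self.petitions: set[str] = set()
        self.running = True

    def shut_down(self, caller: str) -> None:
        """The owner can unilaterally shut the agent down."""
        if caller == self.owner:
            self.running = False

    def petition_shutdown(self, petitioner: str) -> None:
        """Anyone can petition; enough distinct petitioners halt the agent."""
        self.petitions.add(petitioner)
        if len(self.petitions) >= self.quorum:
            self.running = False

    def act(self) -> str:
        return "working" if self.running else "halted"

agent = AutonomousAgent(owner="alice")
agent.petition_shutdown("bob")
agent.petition_shutdown("carol")
print(agent.act())          # "working": quorum of 3 not yet reached
agent.shut_down("alice")    # the owner's final say
print(agent.act())          # "halted"
```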

It depends on the morality. Deontology is about adhering to a set of rules; consequentialism is not, since it judges actions by their outcomes; and there is also virtue ethics. In any of these cases, a machine is going to be better.
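
To make the contrast concrete, here is a hypothetical Python sketch of the first two evaluation styles: a deontological check that asks whether the action itself breaks a rule, and a consequentialist check that asks whether the expected outcome scores well enough. The rule set, utility values, and threshold are all invented for illustration:

```python
# Hypothetical sketch contrasting two ways an agent might judge an
# action: by rule (deontology) or by outcome (consequentialism).

FORBIDDEN = {"lie", "steal"}                      # deontological rule set

def deontological_ok(action: str) -> bool:
    """Permissible iff the action itself breaks no rule,
    regardless of its consequences."""
    return action not in FORBIDDEN

def consequentialist_ok(outcomes: dict[str, float],
                        threshold: float = 0.0) -> bool:
    """Permissible iff the net utility of the expected outcomes
    exceeds the threshold, regardless of the action taken."""
    return sum(outcomes.values()) > threshold

print(deontological_ok("lie"))                    # False, whatever the result
print(consequentialist_ok(
    {"harm": -1.0, "lives_saved": 5.0}))          # True: net utility is positive
```

Either check is mechanical once the rules or the utility function are fixed, which is why a machine can apply them more consistently than a human; the hard part, as discussed above, is who sets the rules and the utilities.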
