Artificial morality: Moral agents and Tauchain

In a previous post I discussed the value of intelligence amplification, which can lead to moral amplification. Now I'll go into some detail about the concept of "artificial morality" as it pertains to autonomous agents. On the philosophical level there are different ways to think about autonomous agents; I'll list three:

  1. Autonomous agents as independent entities acting on your behalf.
  2. Autonomous agents as tools.
  3. Autonomous agents as extensions of your digital self.

For this discussion I am going with the third way of thinking about autonomous agents: as an extension of your digital self. Specifically, by digital self I am referring to the quantification and digitization of your will. Note: I will not elaborate in this blog post on what "will" is or whether "free will" exists, as that is a separate philosophical debate entirely; I merely use this definition as a way to think about moral agency, responsibility, and autonomous agents.

What is moral agency?

Moral agency is an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for these actions.[1] A moral agent is "a being who is capable of acting with reference to right and wrong."[2]

Considering the quote above, is there any reason to believe an autonomous agent cannot understand human morality? I would put forth the prediction that not only will autonomous agents understand human morality, they will understand it better than most humans do. Human beings do not have a very strong understanding of human morality due to the limits of the human brain and the complexity of the world and of other people. Autonomous agents are a way to manage this complexity for the benefit of humans.

In Tauchain there will be a peer-to-peer, globally accessible knowledgebase (KB) with some similarity to Wikipedia. This knowledgebase, if structured correctly, will be able to accept contributions from both human and AI agents. Knowledge of morality at the common-sense level will be possible, but how far can we take this with the knowledgebase + inferencer approach? Deontological morality fits naturally in this context because deontic logic can be an input into Tau, allowing for automated reasoning over the knowledgebase in accordance with deontological rules (a toy sketch of the idea follows below). But can autonomous agents be responsible for their actions?
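
To make this concrete, here is a minimal sketch of what deontological rule checking over a small knowledgebase could look like. This is not Tau's actual language or inferencer; the facts, predicates (`owns`, `consented`, `requested_access`), and actions below are invented purely for illustration.

```python
# Hypothetical sketch: deontological rule checking over a tiny knowledgebase.
# This is NOT Tau's actual logic or syntax; facts, predicates, and actions
# are invented for illustration only.

# Knowledgebase: facts about the world, expressed as simple tuples.
kb = {
    ("owns", "alice", "file42"),
    ("requested_access", "agent_b", "file42"),
    ("consented", "alice", "agent_b", "file42"),
}

def owner_of(resource, kb):
    """Look up the owner of a resource, if any."""
    return next((f[1] for f in kb if f[0] == "owns" and f[2] == resource), None)

def check_action(action, kb):
    """Classify a proposed action as 'permitted', 'forbidden', or 'obligatory'."""
    kind, agent, resource = action

    if kind == "read":
        owner = owner_of(resource, kb)
        # Prohibition: reading someone else's resource without consent is forbidden.
        if owner and owner != agent and ("consented", owner, agent, resource) not in kb:
            return "forbidden"
        return "permitted"

    if kind == "notify_owner":
        # Obligation: if access to the resource was requested, the owner must be notified.
        if any(f[0] == "requested_access" and f[2] == resource for f in kb):
            return "obligatory"
        return "permitted"

    return "permitted"

print(check_action(("read", "agent_b", "file42"), kb))          # permitted (consent exists)
print(check_action(("read", "agent_c", "file42"), kb))          # forbidden (no consent)
print(check_action(("notify_owner", "agent_b", "file42"), kb))  # obligatory
```

In Tau the intention, as I understand it, is that such rules would live in the shared KB and be evaluated by the inferencer rather than by hand-written code, but the structure (facts plus deontic rules yielding permissions, prohibitions, and obligations) is the same idea.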

What is artificial morality?

Artificial morality is a research program for the construction of moral machines that is intended to advance the study of computational ethical mechanisms. The name is an intentional analogy to artificial intelligence (AI). Cognitive science has benefited from the attempt to implement intelligence in computational systems; it is hoped that moral science can be informed by building computational models of ethical mechanisms, agents, and environments. As in the case of AI, project goals range from the theoretical aim of using computer models to understand morality mechanistically to the practical aim of building better programs. Also in parallel with AI, artificial morality can adopt either an engineering or a scientific approach.


Artificial moral agents, such as autonomous agents which have been empowered to make decisions, can make moral decisions. Not only can artificial moral agents make moral decisions on par with human decision makers, they can surpass the abilities of human decision makers.

One result of Axelrod's initiative was to unite ethics and game theory. On the one hand, game theory provides simple models of hard problems for ethics, such as the prisoner's dilemma. First, game theory forces expectations for ethics to be made explicit. Early work in this field (Danielson 1992) expected ethics to solve problems—such as cooperation in a one-play prisoner's dilemma—that game theory considers impossible. More recent work (Binmore 1994, Skyrms 1996) lowers the expectations for ethics. Consider Axelrod's recommendation of the strategy tit-for-tat as a result of its relative success in his tournament. Because the game is iterated, tit-for-tat is not irrationally cooperative. However, its success shows only that tit-for-tat is an equilibrium for this game; it is rational to play tit-for-tat if enough others do. But game theory specifies that many—indeed infinitely many—strategies are equilibria for the iterated prisoner's dilemma. Thus game theory shifts the ground of ethical discussion, from a search for the best principle or strategy, to the more difficult task of selecting among many strategies, each of which is an equilibrium, that is to say, a feasible moral norm.
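
The quoted point about tit-for-tat is easy to see in a small simulation. The sketch below uses the standard textbook payoff matrix and a few classic strategies; nothing here is specific to Tauchain, it just shows why tit-for-tat scores well when paired with reciprocating players while punishing unconditional defection.

```python
# Iterated prisoner's dilemma with the standard textbook payoffs.
# C = cooperate, D = defect; PAYOFF maps (my_move, their_move) to my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for a, b in [(tit_for_tat, tit_for_tat),
             (tit_for_tat, always_defect),
             (always_cooperate, always_defect)]:
    print(a.__name__, "vs", b.__name__, "->", play(a, b))
```

Note the caveat from the quoted passage: the fact that tit-for-tat does well here does not make it the uniquely moral strategy. Many strategies are equilibria of the iterated game, which is exactly why selecting among feasible norms is the hard part.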

At some point in the future, autonomous agents will have a better understanding of current social norms than any particular human being. This knowledge of social norms and common morality would help an autonomous agent navigate the social landscape, the legal landscape, and more. Game theory and cooperative game theory highlight how rational players would proceed under certain conditions of limited information. Autonomous agents are capable of being rational players while also having some level of moral understanding and, most importantly, an ability to process far more information than any individual person. This would mean an autonomous agent could have a far more complete understanding of the laws, and would be able to lower its risk of legal consequences better than any human, who has to work with a limited understanding of the laws.

The Moral Turing Test

How do we measure the performance of artificial moral agents? The Moral Turing Test may be the answer. It is not enough to simply create moral agents which we believe or hope would act morally in difficult situations; it might be necessary to test them and only accept the artificial moral agents which can pass the test. Simulations and other approaches can help as well, but because not all events can be predicted in advance, there must ultimately be a way to continuously improve the design of the winning moral agents. For this reason it will be important to let human beings rate and review the conduct of moral agents as a means of promoting a sort of artificial evolution (a toy sketch of this rate-and-evolve loop is given below).
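
As a rough illustration of the "rate, review, and evolve" idea, here is a toy loop in which candidate agents answer a fixed battery of dilemmas, a stand-in for aggregated human ratings scores them, and the top-rated agents seed the next generation. The `CandidateAgent` class, the dummy `human_rating` function, and the dilemmas are all hypothetical; a real Moral Turing Test would need real human judges and far richer behaviour than canned answers.

```python
# Toy "rate and evolve" loop for candidate moral agents.
# Everything here is a hypothetical illustration, not a real protocol.
import random

DILEMMAS = ["dilemma_1", "dilemma_2", "dilemma_3"]

class CandidateAgent:
    def __init__(self, policy=None):
        # A "policy" is just a canned answer per dilemma in this toy model.
        self.policy = policy or {d: random.choice(["act", "refrain"]) for d in DILEMMAS}

    def answer(self, dilemma):
        return self.policy[dilemma]

def human_rating(agent):
    """Stand-in for aggregated human reviews: a dummy score that happens to
    reward 'refrain' answers. Real ratings would come from people."""
    return sum(1 for d in DILEMMAS if agent.answer(d) == "refrain")

def evolve(population, generations=10, keep=2):
    for _ in range(generations):
        ranked = sorted(population, key=human_rating, reverse=True)
        survivors = ranked[:keep]
        # Next generation: copies of the survivors, each with one mutated answer.
        children = []
        for parent in survivors:
            policy = dict(parent.policy)
            d = random.choice(DILEMMAS)
            policy[d] = random.choice(["act", "refrain"])
            children.append(CandidateAgent(policy))
        population = survivors + children
    return max(population, key=human_rating)

best = evolve([CandidateAgent() for _ in range(6)])
print(best.policy, human_rating(best))
```

The interesting design question is the rating function: in practice it would be the aggregate of ongoing human reviews, which is where the continuous human oversight discussed in the comments below comes in.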

Conclusion

Autonomous agents can empower the world, but in my opinion emphasis must be placed on making sure these autonomous agents are held to high standards. This would include at least enabling ethics in some form, and doing that may require experiments in artificial morality. Autonomous agents on Tauchain can be moral agents, and they need to be. If done right, humans will be able to rein in these autonomous agents if they go out of control, keep them moral, and even design them to continuously evolve to be increasingly moral by learning our morality, as individuals and as a group.

References

Web:

  1. https://en.wikipedia.org/wiki/Deontic_logic
  2. http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/artificial-morality
  3. https://philosophynow.org/issues/71/Moral_Machines_Teaching_Robots_Right_from_Wrong_by_Wendell_Wallach_and_Colin_Allen
  4. http://www.tauchain.org
  5. https://en.wikipedia.org/wiki/Extended_cognition
Comments

Interesting point about artificial agents being able to understand morality better than humans, but you still need humans to ultimately oversee and even override the artificial agents if they're doing it wrong. There is a component to morality that goes beyond perfectly adhering to a set of rules. Not sure how well a machine would catch on to that part.

And that is why I mentioned the concept of extended mind/extended cognition, which philosophically means humans do control these agents, either by setting the goals individually and personally or collectively via a consensus process. In Ethereum, for example, a hard fork was possible to put an end to a bad smart contract. It should be easier than that to govern autonomous agents, but that is just an example of a last resort.

So you're envisioning a scenario where humans still have the last say. The autonomous agents would be limited by their programming (done by humans), and humans will always oversee them and possibly override them. So it seems that the autonomous agents would be able to handle the more simple and straightforward ethical situations and maybe take that burden off humans, but that humans would always have to step in when the situation was more complicated than what could feasibly be programmed into the autonomous agent. Am I understanding you correctly?

Humans might not do the programming because program synthesis and automatic programming could take care of that. Humans set the goals and make the rules. For example your autonomous agent can have your morality, yet also be aware of the laws in the different jurisdictions it may interact with. It may even have greater awareness of the laws than you currently have.

Ultimately the way I would like to see it designed, humans can always shut it down. Just as humans can arrest humans, why not let humans shut down, arrest, or disempower autonomous agents if they go rogue? If you have an autonomous agent which you control and it goes rogue then your reputation is put at risk if you don't shut it down or petition to have it shut down by the community.

Autonomous agents require resources and this is where the humans ultimately have the final say.

It depends on the morality. Deontology is about adhering to a set of rules. Consequentialism is not. And there is also virtue ethics. In any of these cases a machine is going to be better.

Wow, what an informative and in-depth post. I definitely learned something new! Thanks for sharing and your efforts to summarize this information!
