RE: On ethics in decentralized systems

in #ethics · 8 years ago (edited)

Some initial comments based on my current understanding.

  1. Ethical and legal are not the same thing, and the two are not even always correlated.
  2. Ethical and moral are not the same either, although most people give the two words the same connotation. Ethics is usually grounded in some philosophical system involving logic, reason, and rationality. Morals can come from religion, spirituality, or simply whatever people were raised with. It's a challenge to be ethical, but to be moral you sometimes need only do as you are told.
  3. Among the popular varieties are consequentialist ethics and deontological ethics; there is also virtue ethics. You might be a consequentialist if you successfully answered the trolley problem. People who adhere to consequentialist ethics care most about outcomes rather than whether something feels right or wrong. People who follow deontological ethics follow strict rules and in most cases intend to follow deontic logic; divine command theory is one source of behavior for deontological ethics, but not the only one, and in general "thou shalt not" lists are deontology. Virtue ethics puts the focus on the individual's character and cares about personal values.

We don't all follow the same ethics; there is no universal ethics. In addition, there are other social forces such as laws, social norms, and folkways, all of which influence behavior without any regard for personal ethics or for what the individual internally perceives as right and wrong.

Social norms are rules of behavior that a community expects you to follow and imposes on you. They are the unwritten, unspoken laws of a community; although they are not written on paper, they are enforced through bullying and vigilantism. In effect, social norms are as powerful as, or even more powerful than, written laws, and because these norms are at play in decentralized networks, human behavior in decentralized networks is constrained by reputation.

Laws are basically norms, but laws are enforced by professional law enforcement, federal police, and other official government powers. The distinction between a law and a norm is who enforces it and how it's enforced. Violating a social norm can mean being socially shunned, exiled, ostracized, or given unfair treatment, while violating the law can mean being put in jail. It's up to the individual to determine which is worse, but in my opinion both are negative outcomes.

A conclusion I've come to is that the ethics most people hold work like a navigation system that lets them interact and get along with other people with a minimum of friction. It allows people to formally engage, it enables etiquette, and because not everyone follows the same ethics, it means learning to accommodate, within reason, people who follow different systems.

Decentralized technology will only improve ethics if the ability to interact ethically is a priority for the developers of decentralized technology. In my opinion, ethics should not be imposed on users by the developers; instead, users should be allowed to select their tribe, their ethics, their comfort zone, and announce it to the world on the blockchain as a badge of honor or honorific title, so people know exactly how to deal with them.

An example could be Dan, who clearly has anarcho-capitalist leanings, clearly cares a lot about ethics, speaks English, cares about liberty, and supported Ron Paul. You could build an ethical profile of him using decentralized technology while still allowing him to be pseudonymous when he wants to be. Reputation and ethics would then be represented as tags and ratings on a blockchain, based on the historical evidence which the blockchain continuously collects.
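The tags-and-ratings idea above could be sketched as a simple aggregation over on-chain attestations. This is only a toy illustration under my own assumptions; the `Attestation` record, its fields, and the sample data are all hypothetical, not part of any real blockchain:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical record of one tag-and-rating attestation recorded on-chain.
@dataclass(frozen=True)
class Attestation:
    subject: str       # pseudonymous account, e.g. a public key or handle
    tag: str           # self-selected or community-assigned label
    rating: float      # 0.0 (poor) to 1.0 (excellent)
    block_height: int  # where the evidence was recorded

def ethical_profile(attestations, subject):
    """Aggregate historical attestations into a tag -> average-rating profile."""
    sums = defaultdict(lambda: [0.0, 0])
    for a in attestations:
        if a.subject == subject:
            sums[a.tag][0] += a.rating
            sums[a.tag][1] += 1
    return {tag: total / count for tag, (total, count) in sums.items()}

# Illustrative history only; names and numbers are made up.
history = [
    Attestation("dan", "anarcho-capitalist", 1.0, 100),
    Attestation("dan", "liberty", 0.9, 150),
    Attestation("dan", "liberty", 0.7, 200),
]
print(ethical_profile(history, "dan"))
```

Because each attestation carries a block height, the profile stays tied to historical evidence the chain has already collected, while the subject string can remain a pseudonym.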

I've spent years thinking about these sorts of questions and have written on the subject. It is my opinion that the only way to improve ethical interaction in decentralized systems, or in the world in general, is to use machine learning and artificial intelligence to augment and amplify ethical decision making. An individual can be more ethical with the help of an intelligent agent which they can query as a means of transcending their own ignorance. The paper I wrote, "Cyborgization: A Possible Solution to Errors in Human Decision Making?", frames this as a problem in human decision making.

Morality or ethics would be a decision problem, which means you can design a moral or ethical search engine where an intelligent agent recommends to the individual the best decision for them according to their ethics, while taking legal and social risks into consideration. There are too many laws, too many social norms, too many customs, for any rational, consequence-based human individual to process without help, which is the point of my paper. Rational choices require knowledge, and knowledge requires an ability to process information; but if there is too much data, too much information, and no reasonable amount of time for any human to process it all, then you get common exploits which take advantage of people's ignorance of the rules, and of information asymmetry.
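The "ethical search engine" described above can be caricatured as a ranking problem: score each candidate action against the user's declared ethical weights, then discount legal and social risk. Everything here, the field names, the weights, the sample actions, is an illustrative assumption of mine, not a real system:

```python
def recommend(actions, ethic_weights, risk_penalty=1.0):
    """Rank candidate actions by ethical fit minus weighted legal/social risk."""
    def score(action):
        # Ethical fit: how strongly the action expresses values the user weights.
        fit = sum(ethic_weights.get(value, 0.0) * strength
                  for value, strength in action["values"].items())
        # Discount by the combined legal and social risk of the action.
        risk = action["legal_risk"] + action["social_risk"]
        return fit - risk_penalty * risk
    return sorted(actions, key=score, reverse=True)

# Hypothetical candidate actions for one user.
actions = [
    {"name": "donate anonymously", "values": {"charity": 0.9},
     "legal_risk": 0.0, "social_risk": 0.1},
    {"name": "public protest", "values": {"liberty": 0.8},
     "legal_risk": 0.4, "social_risk": 0.3},
]

# A user who weights charity highly and liberty moderately.
ranked = recommend(actions, {"charity": 1.0, "liberty": 0.5})
print(ranked[0]["name"])
```

The point is only that once ethics is posed as a decision problem, the agent's job reduces to search and scoring over more laws, norms, and customs than any one person could hold in their head.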

  1. https://en.wikipedia.org/wiki/Consequentialism
  2. https://en.wikipedia.org/wiki/Deontological_ethics
  3. https://en.wikipedia.org/wiki/Deontic_logic
  4. http://www.philosophybasics.com/branch_virtue_ethics.html
  5. http://sociology.about.com/od/Deviance/a/Folkways-Mores-Taboos-And-Laws.htm
  6. https://transpolitica.org/2015/07/07/cyborgization-a-possible-solution-to-errors-in-human-decision-making/
  7. https://en.wikipedia.org/wiki/Trolley_problem

Some serious thinking here! I am compelled to upvote you with my tiny powers.

Hi dana-edwards, thank you for the explanations and I appreciate your depth of knowledge on these issues. Admittedly, I am not an expert on ethics, and so maybe I could have titled the article differently. My point is simply this: as we unleash large-scale unstoppable systems onto the world, there will be unforeseen consequences. It would really serve the community well to think about them ahead of time. To be clear, I am not advocating for a litigious or oversight approach (necessarily), I am simply echoing Alex in a call for discussion.

Jbrukh, my argument is basically that the technology is already unleashed. It's already being used for its worst purposes. Drones are used to kill people in war. Various countries have cyber militias willing to unleash advanced persistent threats. All of us are potential victims of espionage. So when it comes to cyberspace, the attackers already have the advantage and always have had it, and the advanced persistent threat is the kind of attacker to be most concerned about, because they aren't doing it for the money: they may be state funded, or doing it for a cause, such as helping one side of a war.

What we can do is help encourage the use of certain technologies which are not currently being put to beneficial use by regular people. Intelligent agents are not a new technology and have existed in theory for a long time. Blockchain technology is new, but any bad actor could unleash entirely new weaponized blockchains funded or sponsored by their state.

That being said, it is true that when you empower regular people, you risk that some percentage will misuse the power. Ultimately this can only be solved by empowering the people who care about security in cyberspace, but having security does not, in my opinion, require crippling cyberspace or diminishing liberty. I believe you can have both security and liberty in cyberspace if you get the design right.

As far as intelligent agents go, many lives will be saved by them. The entire economy may even be saved by intelligent agents, which may in fact be what powers the automated economy going into the future. It is important that any individual be able to farm these intelligent agents and have access to AI and automation; and while you could say there is a risk that some agents will be amoral economic agents, that does not mean we have to design them that way.

The way I see it, your intelligent agents are the digital you: an expression of your will and intent which will act on your behalf exactly as you would want. If you're an amoral person, then perhaps you would want an amoral agent, but most people looking at the consequences can quickly figure out that while that approach might mean more money in the short term, it is very costly in the long term.

People sometimes make the mistake of believing all wealth can be measured in net worth or in money. People make the mistake of believing that having money is equivalent to having power. Neither is true. To have wealth you must have resources, and specifically you must have assets. Your reputation in a community is an asset, your human capital is an asset, and these assets are determined by how other people think of you at any given time. A person truly trying to be wealthy in a world of intelligent moral agents would probably become the most wealthy by having the intelligent agents which are the most moral as well as the most economically efficient.

I may have some papers on this subject which I can post here to continue the discussion. It's an important discussion, and the topic of intelligent agents is very important for future morality and ethics. Personally, I think intelligent agents will be able to improve ethics dramatically, because most human beings aren't particularly good at thinking about ethics as it pertains to hundreds, thousands, or millions of people: they can't get past Dunbar's number and other biological limits which restrict people to thinking only about those they personally know. Intelligent agents will be able to make decisions based on knowledge of millions of people, possibly intimate big-data-type knowledge; big ethics will result from big data.

  1. Baylor, A. (2000). Beyond butlers: Intelligent agents as mentors. Journal of Educational Computing Research, 22(4), 373-382.
  2. Campbell, A., Collier, R., Dragone, M., Görgü, L., Holz, T., O’Grady, M. J., ... & Stafford, J. (2012). Facilitating ubiquitous interaction using intelligent agents. In Human-computer interaction: the agency perspective (pp. 303-326). Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/978-3-642-25691-2_13#page-1
