Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics (Our #IntroduceYourself)
“I always rejoice to hear of your being still employed in experimental researches into nature and of the success you meet with. The rapid progress true science now makes, occasions my regretting sometimes that I was born too soon. It is impossible to imagine the height to which may be carried, in a thousand years, the power of man over matter. We may, perhaps, deprive large masses of their gravity, and give them absolute levity, for the sake of easy transport. Agriculture may diminish its labor and double its produce: all diseases may by sure means be prevented or cured, (not excepting even that of old age,) and our lives lengthened at pleasure, even beyond the antediluvian standard. Oh that moral science were in as fair a way of improvement, that men would cease to be wolves to one another, and that human beings would at length learn what they now improperly call humanity.”
Benjamin Franklin (1780, in a letter to Joseph Priestley)
We here at Digital Wisdom (http://wisdom.digital/wordpress/digital-wisdom-institute) have been studying artificial intelligence and ethics for nearly ten years. As a result, we are increasingly frustrated by both AI alarmists and well-meaning individuals (https://steemit.com/tauchain/@dana-edwards/how-to-prevent-tauchain-from-becoming-skynet) who continue to block progress with spurious claims like “we don’t have a deep understanding of ethics” or “the AI path leads to decreased human responsibility in the long run”.
The social psychologists (ya know, the experts in the subject) believe that they have finally figured out how to get a handle on morality. Rather than continuing down the problematic philosophical rabbit-hole of specifying the content of moral issues (e.g., “justice, rights, and welfare”), Chapter 22 of the Handbook of Social Psychology, 5th Edition gives a simple definition that clearly specifies the function of moral systems:
Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.
Jonathan Haidt (credit: TED Talk, “How Common Threats Can Make Common Political Ground”)
In essence, morality is trivially simple – make it so that we can live together. The biggest threat to that goal is selfishness – acts that damage other individuals or society so that the individual can profit.
Followers of Ayn Rand (as well as most so-called “rationalists”) try to blur the distinction between necessary, healthy self-interest and sociopathic selfishness. They throw up the utterly ridiculous strawmen that morality requires self-sacrifice and forbids competition. They attempt to define altruism out of existence. They will do anything to cloak their selfishness and hide it from the altruistic punishment that it is likely to generate. WHY? Because uncaught selfishness is “good” for the individual practicing it . . . OR IS IT?
Selfishness is a negative-sum action. Society as a whole ends up diminished by each selfish act. Worse, covering up selfish actions frequently requires many additional negative-sum actions: we hide information, we lie to each other, we keep unhelpful loopholes open so that we can profit. (And we seemingly *always* fall prey to the tragedy of the commons.)
Much of the problem is that some classes of individuals unquestionably *do* profit from selfishness – particularly if they have enough money to insulate themselves from the effects of the selfishness of others . . . OR DO THEY?
How much money does it actually take to reverse *all* of the negative effects of selfishness upon even the obscenely rich – particularly the invisible ones, like missed opportunities? It seems to us that no amount of money will make up for the cure for cancer arriving one year too late.
Steve Omohundro, unfortunately, made a strong case that selfishness is “logical” in his paper “The Basic AI Drives”. Most unfortunate were his statements (oft-quoted by AI alarmists) that
Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources.
Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety.
The problem here is that this is seriously short-sighted “logic” . . . What happens when everyone behaves this way? “Rationalists” claim that humans behave “correctly” solely because of peer pressure and the fear of punishment – but that machines will be more than powerful enough to be immune to such constraints. They totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (cf. independent actors vs. cartels in the tragedy of the commons).
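The micro-vs.-macro point can be made concrete with a toy public-goods game (our own illustrative sketch; the parameters and payoff function are assumptions for the example, not taken from Omohundro). Contributing to the common pot is collectively optimal whenever the pot multiplier exceeds 1, yet each individual does better by defecting whenever the multiplier divided by the number of players is below 1:

```python
def payoffs(contributions, endowment=10, multiplier=1.5):
    """Public goods game: each player keeps (endowment - contribution)
    and receives an equal share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

n = 4
all_cooperate = payoffs([10] * n)          # each gets 15.0
all_defect = payoffs([0] * n)              # each gets 10.0
one_defector = payoffs([0] + [10] * (n - 1))  # defector gets 21.25
```

With four players and a 1.5x multiplier, a lone defector earns 21.25 versus the 15.0 of universal cooperation – defection is individually “rational”. But if everyone follows that same logic, each player ends up with only 10.0, strictly worse for all: the micro-level incentive scales up to a macro-level loss.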
Why don’t we promote the destruction of the rain forests? As an individual activity, it makes tremendous economic sense – but scaled up to everyone, it is catastrophic.
As a society, why don’t we practice slavery anymore? We still have a tremendous worldwide slavery problem because, as a selfish action, it is tremendously profitable . . . .
And, speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists propose to “solve” the “AI problem”. We will post more detailed rebuttals of Eliezer Yudkowsky, Stuart Russell, and the “value alignment” crowd shortly – but, in the meantime, we wish to make it clear that “us vs. them” and “humans über alles” are unwise as “family values”. We will also post our detailed design for ethical decision-making, which follows the 5 S’s (Simple, Safe, Stable, Self-correcting, and Sensitive to current human thinking, intuition, and feelings) and provides algorithmic guidance for ethical decision-making (it doesn’t always provide answers for contentious dilemmas like abortion and the death penalty, but it does identify the issues and correctly handles non-contentious ones).
We believe that the development of ethics and artificial intelligence
and equal co-existence with ethical machines is humanity’s best hope
Digital Wisdom (http://wisdom.digital) is headed by Julie Waser and Mark Waser.
We are on Steemit because we fully endorse Dan’s societal mission and feel that Steemit is the first platform where it is possible to crowd-source ethical artificial intelligence.
We are currently getting up to speed on Piston and looking to develop a number of pro-social tools and bots.
You'll be hearing more from us shortly!