Is the "moral high ground" merely a synonym for "moral pyramid"?

in #ethics · 6 years ago (edited)

Is morality a form of wealth?

Is there a conceptual distinction between the phrases "moral high ground" and "moral pyramid"? If we think of morality as another sort of wealth, one subject to inequality, will it inevitably be the case that some inherently rise to the top of the moral pyramid?

It is true that morality can be learned over time, but it is also true that some people are born into environments more favorable to learning it early, just as some people are born into environments where capturing monetary wealth is easier because there are more opportunities.

Is there a concept of moral currency?

  • We have concepts like karma and social currency, but what do these concepts actually mean?
  • How does a blank-slate individual increase their social currency or karma, if it behaves like a currency?
  • Is social currency earned by doing good deeds and lost by doing bad deeds? (A toy sketch of such a ledger follows this list.)
  • If there is no social currency, how does society currently track which people have 'good credit' with society versus 'bad credit'? And if this is not tracked, is it merely a popularity contest?
  • Will organized religion play a role in the formation of moral hierarchies?
  • Will stereotypes against certain minorities play a role in the formation of moral hierarchies?
  • Does a radically transparent world produce a competition to seize the moral high ground?
  • What impact will cults (including personality cults) have on the perception of morality in a radically transparent world?
  • What impact does radical transparency have on the markets?
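
Purely as a thought experiment on the ledger question above, here is a minimal sketch of what a "social currency" ledger might look like if good deeds earned credit and bad deeds spent it. Everything in it is hypothetical: the deed names, the weights, and the very idea that a single signed balance could capture moral standing.

```python
# A toy "social currency" ledger: a thought experiment, not a proposal.
# Deed names and weights are entirely hypothetical.

from collections import defaultdict

DEED_WEIGHTS = {
    "kept_promise": +1,
    "charitable_donation": +2,
    "broke_promise": -2,
    "fraud": -5,
}

class SocialLedger:
    def __init__(self):
        # Every "blank slate" individual starts at zero karma.
        self.balances = defaultdict(int)

    def record(self, person, deed):
        """Credit or debit a person's balance according to the deed."""
        self.balances[person] += DEED_WEIGHTS[deed]

    def credit_rating(self, person):
        """Map a raw balance to a coarse 'good credit' / 'bad credit' label."""
        return "good credit" if self.balances[person] >= 0 else "bad credit"

ledger = SocialLedger()
ledger.record("alice", "charitable_donation")
ledger.record("alice", "broke_promise")
print(ledger.balances["alice"], ledger.credit_rating("alice"))  # 0 good credit
```

Even this toy version raises the questions in the list: who sets the weights, and who gets to write to the ledger?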

These are just some of the philosophical questions worth asking with regard to radical transparency. While I operate under the assumption that morality is subjective rather than objective (I'm not a moral realist), I also do not assume I know everything about anything. The reason I take the perspective of moral subjectivism is that, in recognizing my own ignorance and the physical limits of the human brain in general, I reached the conclusion that humans actually can't be truly moral. This is similar to the problem of bounded rationality, which shows that humans cannot be perfectly rational, or the problem of attention scarcity, which shows that humans cannot manage a society by manually voting on every little thing. If humans cannot be rational, how exactly could those same humans be moral?

So if we assume humans cannot be moral, cannot overcome bias, and cannot overcome attention scarcity, why should we assume that whichever humans end up at the top of the moral hierarchy I have labeled the moral pyramid will behave any more rationally, or be any less biased, any wiser, or any more moral, than the humans below them? The same natural limits of the human brain exist at the top of the pyramid as at the bottom. This is how I reached the conclusion that only by providing technological decision support can the human brain even approach being moral (through iterative improvement of the technology over time).

In an article I co-authored in 2015, "Cyborgization: A Possible Solution to Errors in Human Decision Making?", I outlined a potential way to transcend involuntary ignorance by augmenting the limited capacity of the human brain. As the title suggests, it is only a possible solution.

Cyborgization is a transhumanist solution to immorality and ignorance in decision making. The way it works is by providing decision support to the human decision maker. I also proposed this potential solution to Bitnation in the form of Lucy, which is intended to act as the exocortex of Bitnation. An exocortex would allow human beings to consult with, or seek advice from, AI in decision making. This, in my opinion, is the only way to approach morality: I don't think punishing ignorance will ever remove it, nor do I think human beings will stop making silly mistakes or behaving immorally, but if human beings had decision support, then at least those who desire to improve their morality would have a means of doing so.
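
To make "decision support" slightly more concrete, here is a minimal sketch of how an exocortex-style advisor might be consulted. This is not Lucy's actual interface; the class, the scoring rule, and the numbers are all hypothetical. The point is only that the human supplies the options and a risk tolerance, and the machine ranks rather than decides.

```python
# A hedged sketch of exocortex-style decision support. The human supplies
# options and a personal risk tolerance; the advisor only ranks them.
# Nothing here reflects Lucy's real implementation.

from dataclasses import dataclass

@dataclass
class Option:
    description: str
    expected_benefit: float  # hypothetical score in [0, 1] from some model
    risk: float              # hypothetical score in [0, 1] from some model

def advise(options, risk_tolerance):
    """Rank options by benefit, penalizing risk beyond the user's tolerance."""
    def score(opt):
        penalty = max(0.0, opt.risk - risk_tolerance)
        return opt.expected_benefit - 2.0 * penalty  # penalty weight is arbitrary
    return sorted(options, key=score, reverse=True)

choices = [
    Option("accept the job offer", expected_benefit=0.8, risk=0.6),
    Option("stay in current role", expected_benefit=0.5, risk=0.1),
]
for option in advise(choices, risk_tolerance=0.3):
    print(option.description)  # ranked advice; the final decision stays human
```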

Conclusion

Transparency without a means of achieving morality is in my opinion a disaster. It is a disaster because ultimately it will seek to punish humans for acting human rather than actually help humans to become moral. Transparency allows increased attribution, an increased ability to find fault, and an increased ability to punish, but it does nothing to make humanity more moral or to improve decision making. In my opinion no religion is going to be able to solve this, even under radical transparency, without help from something wiser than humans. Artificial intelligence has the capability to process more information, be more rational, and be less biased than all of humanity combined. If our goal is to create a better world, and not just to sacrifice privacy for the sake of sacrificing it, or for revenge, hate, or punishment, then in my opinion AI is a necessary part of any improvement in decision making, and therefore in morality.

References

  1. Social currency: https://en.wikipedia.org/wiki/Social_currency
  2. Moral high ground: https://en.wikipedia.org/wiki/Moral_high_ground
  3. Bounded rationality: https://en.wikipedia.org/wiki/Bounded_rationality

Very deep topic... it makes me think and analyze.
Religion gives so little explanation for everything; you're right, humanity needs something deeper and wiser, more global and wide.

This is some deep and heavy stuff 😬

It is true that morality can be learned over time, but it is also true that some people are born into environments more favorable to learning it early, just as some people are born into environments where capturing monetary wealth is easier because there are more opportunities.

There are some assumptions here, supporting the idea of morality as currency, that may not be correct. You assume that morality is learned. There are many traditions of thought that take for granted that humans are innately moral and that it is up to each of us individually to "listen" to our moral compass. But you assume that it is completely learned.

I don't know enough about moral development or the anthropology of morality to comment in any way except speculatively. Before I go on, can you claim to speak from a more authoritative position than this?

I suppose we would need to establish what exactly we mean by "morality" or "moral behavior", then quantify it in a population and look for patterns in life experience, opportunity, and so on that correlate with high and low levels of morality.

In the absence of such a study, I speculate that we will not find strong environmental correlations with morality, and most definitely no causal factors. I do think, however, that we will find environmental factors which contribute to the breakdown of morality due to mental illness, war, and other major disruptions to "normal" life.

Cyborgization is a transhumanist solution to immorality and ignorance in decision making. The way it works is by providing decision support to the human decision maker.

[...]

Transparency without a means of achieving morality is in my opinion a disaster. It is a disaster because ultimately it will seek to punish humans for acting human rather than actually help humans to become moral.

This is very interesting; aren't we already doing this in a soft way with the internet? Perhaps you say that in your article; I haven't read it yet. I wonder very much if assistance in decision making is really enough for morality.

I can hear the detractors already talking of mind control, subtle manipulation, etc. and a slew of articles about the inherent bias of machines, on account of their programmers. Don't you think that delegating decision making to a machine is actually immoral?

I can hear the detractors already talking of mind control, subtle manipulation, etc. and a slew of articles about the inherent bias of machines, on account of their programmers. Don't you think that delegating decision making to a machine is actually immoral?

A machine has bias because humans are immoral and haven't put enough effort into creating a process for debiasing machines. It is similar to the argument that if all computers can be hacked, perhaps we should stop trusting computers and consider them all biased. It is true that virtually any computer system connected to a network can be hacked, and statistics show that attackers have a much higher success rate than defenders, but this does not change the fact that for certain tasks machines achieve a higher rate of accuracy than human beings, even with the potential of being hacked factored in.

A self-driving car can be hacked, but a self-driving car gets into fewer accidents than humans do. Humans drive drunk, have road rage, and make more mistakes than AI on average. So I would rely on the actual consequences from statistics to determine the reliability. I don't see humans as being all that reliable or moral, and I don't blame humans for being humans, as humans are frail, corruptible, and ignorant by default, and it takes years of effort from a human to evolve beyond this. Even with years of effort, the ability to be moral does not seem to actually exist, for the same reason the ability to be rational does not seem to exist.
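
As a back-of-the-envelope illustration of what "factoring the potential of being hacked in" could mean, consider the toy arithmetic below. Every rate is invented for the sake of the example; these are not real accident statistics.

```python
# Toy arithmetic: compare an error-prone human against a more accurate
# machine whose failure modes include a small chance of being hacked.
# All numbers are invented for illustration only.

human_error_rate = 0.0100     # hypothetical human failures per trip
machine_error_rate = 0.0020   # hypothetical machine failures per trip
hack_probability = 0.0010     # hypothetical chance a trip is compromised

# Pessimistic assumption: a hacked trip always ends in failure.
machine_effective_rate = machine_error_rate + hack_probability * 1.0

print(f"human:   {human_error_rate:.4f} failures per trip")
print(f"machine: {machine_effective_rate:.4f} failures per trip")
# Under these made-up numbers the machine's effective failure rate (0.0030)
# stays well below the human's (0.0100), even with hacking folded in.
```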

Machines at least have a better shot at certain things than people. Only the machines with their algorithms can process big data. You and I with our brains cannot factor in all that information when trying to make a so-called moral decision.

A machine has bias because humans are immoral and haven't put enough effort into creating a process for debiasing machines. It is similar to the argument that if all computers can be hacked, perhaps we should stop trusting computers and consider them all biased.

Not so. It is about acceptable error margins and levels of trust. If you're asking the computer what temperature it might be tomorrow, it doesn't hugely matter if it's hacked. But if you're asking it to transfer money from one bank account to another, it matters quite a lot.

A self-driving car can be hacked, but a self-driving car gets into fewer accidents than humans do. Humans drive drunk, have road rage, and make more mistakes than AI on average.

These kinds of sweeping generalizations are startling. AI can have absolutely massive mistake potential. Just because it does not drink alcohol does not mean it is not vulnerable to all kinds of failures. In any case there is no singular "AI"; what is being referred to is an enormous variety of computer programs and systems that act in complex, autonomous ways.

So I would rely on the actual consequences from statistics to determine the reliability.

I agree, if it is supplemented with an in-depth survey of failure modes, unlikely but dangerous risks, and so on.

I don't see humans as being all that reliable or moral, and I don't blame humans for being humans, as humans are frail, corruptible, and ignorant by default, and it takes years of effort from a human to evolve beyond this.

Humans do not "evolve" beyond ignorance, etc.; they grow. This is a bugbear of mine. In general it should be "metamorphose", not "evolve". I know it is used metaphorically, but the metaphor does not hold. We remain ourselves; we do not change. We grow and learn.

But to the point, you've said many times before that you do not consider people to be culpable for their ignorance. Perhaps it is so striking because it's very much against the grain. In fact we have a principle in law that goes "ignorance of the law is no excuse". While we may not be able to blame a person for ignorance, we must still hold them accountable. This may be simply because it is so hard to prove whether they reasonably knew or did not know the law (or, in our case, had learned the moral behavior).

Machines at least have a better shot at certain things than people. Only the machines with their algorithms can process big data. You and I with our brains cannot factor in all that information when trying to make a so-called moral decision.

True, we cannot know very much, but see my other response calling for defining morality as "a structure for thinking and feeling", not only decision making, and not primarily that. I think the argument you have made needs to be stronger for me to be convinced. I do think it is obviously useful to have an AI assistant, but I do not agree (yet) that its purpose should be as a moral guide.

Not so. It is about acceptable error margins and levels of trust. If you're asking the computer what temperature it might be tomorrow, it doesn't hugely matter if it's hacked. But if you're asking it to transfer money from one bank account to another, it matters quite a lot.

Of course, and my point is that, statistically speaking, we put our trust in AI because AI and machines in general are more productive and reliable. Factories use robots for a reason.

These kinds of sweeping generalizations are startling. AI can have absolutely massive mistake potential. Just because it does not drink alcohol does not mean it is not vulnerable to all kinds of failures. In any case there is no singular "AI", what is being referred to is a enormous variety of computer program and system to act in a complex autonomous way.

Humans fail more often on average in most industries where machines and humans work side by side. Whether it's a factory, medical diagnosis, or stock trading, it's the humans who make bad decisions, whether through emotion, lack of sleep, or bias. This is not to say AI is perfect, but it does not have to be perfect to statistically outperform humans.

Humans do not "evolve" beyond ignorance, etc.; they grow. This is a bugbear of mine. In general it should be "metamorphose", not "evolve". I know it is used metaphorically, but the metaphor does not hold. We remain ourselves; we do not change. We grow and learn.

But to the point, you've said many times before that you do not consider people to be culpable for their ignorance.

Is culpability a cure? Do people, according to physics and neuroscience, even have free will? If we go philosophical, then ignorance can be seen as a kind of disease which each individual is demanded by society and circumstance to cure on their own, by their own means, in the face of disinformation and of those who promote the continued embrace of ignorance.

I think without help there will be no transcending involuntary ignorance, and "involuntary" is the distinguishing point. Assigning culpability does nothing to help the involuntarily ignorant individual become less ignorant; it merely punishes the ignorance, which is in my opinion like "whack-a-mole" where the hammer of justice is the only instrument being applied.

In fact we have a principle in law that goes "ignorance of the law is no excuse". While we may not be able to blame a person for ignorance, we must still hold them accountable. This may be simply because it is so hard to prove whether they reasonably knew or did not know the law (or, in our case, had learned the moral behavior).

It is exactly that principle which helped lead me to the conclusion that if ignorance is no excuse, then being human is not enough to reduce the risks of being ignorant. Being human clearly is not enough, according to that principle, if you think about it.

True, we cannot know very much, but see my other response calling for defining morality as "a structure for thinking and feeling", not only decision making, and not primarily that. I think the argument you have made needs to be stronger for me to be convinced. I do think it is obviously useful to have an AI assistant, but I do not agree (yet) that its purpose should be as a moral guide.

How people feel has no impact on the outcome, on how others view them, on how the law treats them, or on how much money they make in the markets. People who trade on feel do not perform better than algorithms which do not feel the market at all. There is no evidence that "feeling" your decisions produces any increased return on investment; the evidence actually seems to support the opposite: the more emotion is involved in your decision-making process, the more prone you are to fear-based or otherwise emotionally tilted investments, which results in buying at the top of the market due to FOMO and then panic selling in the crash.

If morality is about survival and decision making is about survival, then emotion is not much of a survival guide if it acts against the continued survival and growth of the individual. If ignorance of the law is no excuse, then ignorance is not an option, according to the people who believe that principle is justified. If humans are ignorant by default, then being human is not an option long term, as there will continue to be more laws and greater ignorance of those laws; society is not trending toward simplicity. Claiming to be human, and being human, will not protect you from being punished for your crimes, your immorality, your mistakes, and your ignorance. That is my main point. Being human is not an excuse, right?


Really! My dear, morality is deeply rooted in the environment we come from. To some extent it pays when societies embrace it, but sometimes being moral may not bring a person anything physically tangible; considering what is going on in our societies, being moral may even deprive you of certain things. If I am to say, I would rather describe morality as a double-edged sword that may attract positive or negative things into your life. Notwithstanding, it should be encouraged in our society no matter the price tag associated with it. Oh yes, there is a price, because the birth of every valuable thing is painful. But then we should consider environment, because it plays a great role in moral building. Thanks for sharing this wonderful article; it really broadened my horizons, I must say. Merry Christmas!

Insightful post Dana. Love your thought process indeed. Regards Nainaz
#thealliance

Pretty Deep for Friday Morning... but an intriguing piece.

Is it possible that everything is black and white (no grey), and that once one crosses the "line of remorse", depending of course on the distance of that step, they can easily step back? And is that line in constant flux, making it more arduous to know exactly where it is at all times?

You say you are a moral realist, which I also am, despite my desire to be a moral idealist. This is because I can't ignore the world around me, and while I am not a fan of the transhumanist movement for various reasons, I do feel that technology and advancements in technology have a part to play in a more positive future for humanity, if guided correctly and honestly by all. I also feel your passion for the field in your writing and believe you are a genuine advocate of this technology for positive ends, but I am far from being anywhere near as knowledgeable as you on the subject, and I admit to a lot of ignorance, but also to the fear, as many have shared, of the dangers of AI in the future.

That being said, I don't think it is possible for humans to live up to the version of morality that may be determined by AI to be morally right. Also, I'm not sure morality is one-size-fits-all; or rather, I'm not sure that morality can properly be determined from within this thing we call life. We each have to come to our own moral decisions given the admittedly limited information we can process on our own in our short time here, and, at least in my view, we hopefully find out how close we came to being moral when we leave this place.

Morality is something I am definitely struggling with in my time here. :)

Another really interesting article @dana-edwards.

No, I am NOT a moral realist. The exact opposite of a moral realist: a moral anti-realist. I think you interpreted me wrongly. Morality in my opinion is subjective, and there is no objective right and wrong.

Forgive me. I was obviously reading quicker than I should have been. :(

Very interesting. Lots to think about now.

So if we assume humans cannot be moral, cannot overcome bias, and cannot overcome attention scarcity, why should we assume that whichever humans end up at the top of the moral hierarchy I have labeled the moral pyramid will behave any more rationally, or be any less biased, any wiser, or any more moral, than the humans below them? The same natural limits of the human brain exist at the top of the pyramid as at the bottom.

While I do agree that our human nature stops us from being completely moral, unbiased, etc., I do think that some people get better at it than others. You might say, for example, that humans are naturally not very good at kicking things, and that if our society depended on us being able to kick things accurately we would be doomed. But there are professional athletes who have learned to overcome this human weakness and are significantly better at it than the average person. So if we needed someone who can kick very accurately to solve a problem, we would look to one of these professional athletes (or, even better, get lots of the best to give us the best chance). I think this same concept is why it is good to consult, listen to, and trust people who have learned to overcome their human weaknesses in a way that lets them be "more" moral, unbiased, etc. It doesn't mean that they are perfect and we shouldn't rely exclusively on them, but they are more likely to give us the better answers. This is where ideas like webs of trust are really interesting to me.
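
One way to read the web-of-trust idea is as trust-weighted advice aggregation: weight each advisor's recommendation by the trust they have earned. A minimal sketch, with invented names and weights:

```python
# A minimal web-of-trust sketch: aggregate advice on a yes/no question,
# weighting each advisor by an earned trust score. All values are hypothetical.

def weighted_advice(opinions, trust):
    """Return the trust-weighted average of the advisors' opinions.

    opinions: advisor -> recommendation in [0, 1] (0 = against, 1 = for)
    trust:    advisor -> earned trust weight (e.g. a track-record score)
    """
    total = sum(trust[advisor] for advisor in opinions)
    return sum(opinions[advisor] * trust[advisor] for advisor in opinions) / total

opinions = {"mentor": 0.9, "stranger": 0.2, "friend": 0.7}
trust = {"mentor": 5.0, "stranger": 0.5, "friend": 2.0}
print(weighted_advice(opinions, trust))  # 0.8 -> leans clearly toward "for"
```

How the trust weights themselves get earned is, of course, the hard part.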

Anyway, great article!

Yes, but those humans who are better at it might not be the ones who find their way to the top of the moral pyramid. Also, not every human has people to trust, consult, and so on, which is my point. The humans who don't have this will have to learn by the trial-and-error approach to life. Having these sorts of people to consult is, in a way, a gift.

You nailed it. This is the problem: it seems whenever we trust someone enough to let them make any sort of decision, it turns out they shouldn't have been trusted. I like the idea of programming an AI that can teach people what they "should" do in a given situation, or at least give them options based on what the most "moral" people in society would advise. Very interesting; I think that could be very powerful. Is Lucy going to be doing anything like that, or is she mostly just for helping people interact on Bitnation?

That is up to Bitnation, not me, but when I suggested Lucy to the Bitnation team, I had in mind that Lucy could be upgraded. Lucy is intended to be an exocortex in every sense of the word, but today Lucy is limited by the technology.

Great post! I've checked out some of the other articles on your blog and decided to follow you, because you write about things I'm highly interested in.
Here especially the part about cyborgization speaks to me, though I think it is in a bit of disagreement with the part about moral subjectivism earlier in the post.

This, in my opinion, is the only way to approach morality: I don't think punishing ignorance will ever remove it, nor do I think human beings will stop making silly mistakes or behaving immorally, but if human beings had decision support, then at least those who desire to improve their morality would have a means of doing so.

How does 'improving morality' work, if all morality is subjective anyway? I understand that an AI could give a better framework of what morality is, due to better data-analysis potential, but there would still need to be some ground parameters set by the individual human (because we all know imposing a morality on others can only go wrong).

So we'll have a lot of augmented humans running around with their own brand of moral software in their heads?
Or will the ultimate moral conclusion be made by the AI itself, based on a database of moral decisions? If so, what if the resulting morality is amoral in human terms? Not that I think that's what would happen, but it's a possibility.
Anyway, thank you for the post, and have a great day!

When humans have to make big decisions, they have traditionally sought advice from people with more experience. The problem with this is that not all humans are socially wealthy enough to have trusted, more experienced people to get advice from. The President has advisors, for example, and CEOs have boards of directors, but some kid growing up in the slums somewhere has only themselves, because there aren't any mentors. As for improving morality, I never really put it specifically that way; it is more that if you improve decision-making capability, you can indirectly improve the capacity for moral behavior.

So a cyborg without any human mentors in their life can simply ask the crowd. We see this now with Quora, for example, and other technologies which let you ask the crowd. We also see it on Facebook, where a random poster will ask the crowd. That is how cyborgs make decisions, and it is in essence mining crowd sentiment manually. The problem with that is: what if you aren't clever enough or mature enough to even think to ask the crowd? Or what if the crowd is biased, ignorant, superficial, and so on?

"Ask the machines and the crowd" is the solution I propose. Asking the machines is in essence asking the AI for advice. The AI becomes the best friend, the mentor, the father or mother figure, the big brother, the religious or spiritual advisor. The AI takes the place of a human being in this instance to help the individual cyborg (a human with a smartphone and an Internet connection) make wiser decisions.

In my implementation it would be up to each human to determine their own values, their own interests, and their own level of trust in AI. Some humans, for example, only care about what the crowd thinks and will simply tell the AI to give them the latest sentiment analysis on how each decision would be perceived by the majority of the crowd. Other humans might be mostly concerned with their own survival, freedom, and happiness, and might direct the AI to help them decide what to do so as not to take unnecessary losses or excessive risks to their interests. Finally, you might have some who trust the AI so much that they completely merge with it and let it dictate morality completely.
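
A rough sketch of those three postures (crowd sentiment only, self-interest first, and full merge) might look like the following. The sentiment and risk numbers would come from real models in practice; here they are stubbed, and every name is hypothetical.

```python
# Sketch of the three postures toward AI advice described above.
# The sentiment and risk functions are stubs; all values are invented.

def crowd_sentiment(action):
    # Stub for sentiment mining: fraction of the crowd approving the action.
    return {"donate publicly": 0.9, "keep it private": 0.4}[action]

def personal_risk(action):
    # Stub for a personal risk model.
    return {"donate publicly": 0.3, "keep it private": 0.1}[action]

def recommend(action, posture):
    if posture == "crowd":   # care only about how the crowd will perceive it
        return crowd_sentiment(action)
    if posture == "self":    # care mostly about one's own interests
        return 1.0 - personal_risk(action)
    if posture == "merge":   # defer to the AI's blend of both signals
        return 0.5 * crowd_sentiment(action) + 0.5 * (1.0 - personal_risk(action))
    raise ValueError(posture)

actions = ["donate publicly", "keep it private"]
for posture in ("crowd", "self", "merge"):
    best = max(actions, key=lambda a: recommend(a, posture))
    print(posture, "->", best)
```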

AI offers the benefit of being a potential character witness as well. Also, if a person was following the moral and legal advice of an AI, how culpable would they be in court? I mean, if the AI is smarter than everyone in the courtroom, then it's a bit of a different kind of trial, is it not? As for being amoral in human terms, we could say that is what the justice system currently is.

The questions I can ask are: do you want to survive radical transparency? And do you think you have better odds of surviving it as an unenhanced human or as a morally enhanced cyborg?

Thanks for the reply; I enjoyed reading it a lot.
Especially this part:

AI offers the benefit of being a potential character witness as well. Also, if a person was following the moral and legal advice of an AI, how culpable would they be in court? I mean, if the AI is smarter than everyone in the courtroom, then it's a bit of a different kind of trial, is it not? As for being amoral in human terms, we could say that is what the justice system currently is.

I have thought about this in terms of a sci-fi play I once wrote, and it's actually really interesting to consider at which point an Artilect (a nice word I sometimes prefer to "A.I.") would be outside the judgement of the juridical system. In the plot of my story the AI ultimately fractured itself into independent pieces in order to put itself on trial, but that's fiction for you. This discussion and these posts of yours might actually cause me to rewrite the whole thing, so thanks for the inspiration!

As always, great to hear from you, and I hope you have a nice holiday season.

I love that this post creates a conversation that many eyes should see and weigh in on.

My comment may not even scratch the surface of the vast, expansive subject you have covered, which goes in so many different directions; to do it justice would need a symposium of sorts, which I would like to see in the future.
So I bit off a piece of it, because I do want to see and be a part of a better world, and I want to do my part to bring that about.

In my opinion, and I speak as one who began life closer to a blank slate than to privilege: on the subject of having, and I will use the term, "decent morals", because when we say good or bad we sometimes enter a grey area. What I may think is good or okay for me to do, you may consider absolutely unacceptable or bad.
I think a part of our moral behavior is connected to our DNA, and as we go through life this connection can become solely our moral compass if we remain neutral, tossed by the winds of life and pulled into every situation that comes along.
That is a rare case, though, since humans have a will to decide. Our choices impact our morals as well, because the choices we make affect the kinds of experiences we have in life that cause us to act and react.

Our upbringing is a factor in our morals as well. We are affected by our environment, cultures, traditions, and family rules, which as we get older we choose to follow or rebel against. Religion, if we cross paths with it, can have a major effect on our morals, causing us to feel and think a certain way, as can other things in life: all sorts of programming, whether conscious or subliminal, our political systems, and our views of those who don't look like us. Social relationships help to shape our behavior.

Also, our ability to advance in the world, and whether or not we grow and succeed based on our choices, weighs in on our morals and the illusion of a high ground.

My moral blueprint was created as I maneuvered through life, curious about everything, trying different things such as religion, experiencing the highs and lows in family relationships and personal relationships, and understanding that I had to share the world, my space, with all those connected to me.
A pivotal point for me came at age 5. I was on the bus with my mother. It was standing room only. I stood close to my mother as she held my hand tightly. I looked up and saw an older woman who had the angriest look on her face, and she stared at me with so much coldness that it scared me. So I pointed at her and asked my mother why that lady looked like that. What's wrong with her? Why is she so sad? My mother slapped my hand down and snatched me around to face the opposite direction, and she said to me: she's like that because life has made her that way. Life has made her angry and sad. I decided in that moment, and I mumbled it silently to myself: I am never going to be like that. I am going to be happy and loving. This experience factored into what path I would take in life. I have not always had the best of morals, and I have taken detours, but I always found my way back to center and balance. And through all the suffering, pain, heartbreak, and addictions, I found a way to return to loving myself, others, and living.
Again, this was my choice to make. As a teen I would hear adults talk about morals and standards, so I got the dictionary and read about what they meant. I decided then that I would be a person with high principles and standards. I saw this as my uniqueness, something that would place me in a position to open doors for me in life and allow me to stand on equal ground with the rich and the famous. I also added to this a desire to be knowledgeable about many things; higher learning became my goal. During my religious journeys I became an overseas missionary in Belize, Central America, which allowed me to see people and life totally differently. It humbled me and made me more grateful for all that I have, and it stopped me from comparing myself to others to decide whether I was better or worse than the next guy. I also at this point began to feel a universal connection and to see that we are all more alike than we are different.
My philosophy is also to stay in my lane and stay out of other people's business. I saw early in life the spiritual toxic residue caused by gossip and by having a lot to say about things and people that did not concern me.

In my 50-plus years, life has shown me that you add years to your life and the lives of others if you keep your energy and spirits up, if you can bring joy and make others laugh, and if you can take away stress by not taking things so seriously but learning to laugh in and at life. I have learned from experience that karma is real and that what you put out will return to you.
I feel I have had to maneuver around things in society that want to program me, ducking and dodging all of this to come out still focused on being that person with decent morals I envisioned being when I was a little girl. Ultimately it begins and ends with us. Morality is not solely a learned behavior; it is also whether or not we choose to be our very best.
