Origin of Morals, a definition

in #philosophy • 8 years ago (edited)

Introduction

This is my first post on SteemIt. I want to try out this concept and at the same time share my thoughts on a topic that has interested me for the last couple of years: morality and the societies that humans have developed over the last millennia.

This is a series that I published earlier on another blog of mine, and I hope it will get a bit more interaction here on SteemIt. I am not a scholar on this topic, only an enthusiast, so your feedback is valued.

Origin of Morals - Definition

The following is the definition of moral that we will be working with:
moral

  • a lesson that can be derived from a story or experience.
  • standards of behaviour; principles of right and wrong.

Not much to go on, but it will suffice to start with. I will focus on “standards of behaviour” and work out what exactly makes something right or wrong. One curiosity of this definition: it leaves open whether “right and wrong” is the fixed part, so that you try to discover the principles of conduct that lead to what is right, or whether the principles are fixed and you derive “right and wrong” solely from them. This corresponds to the debate between objective and subjective morality, respectively.


Let's take the absurd case of a lone sentient entity in the universe. Any action you take will affect only yourself and the environment around you. If that is the case, then the potential victims of those actions are only the entropy levels of the elementary particles around you and, by extension, yourself, since you are constituted by those same particles.


The consequences of your actions, in terms of entities, are then circumscribed to yourself alone. So the question arises: is there anything immoral that you can do to yourself? Certainly some actions will lead to your death or injury, and that violates the intrinsic drive of life forms to survive as long as possible, but does a moral rule of conduct aid you in achieving this goal, or do you simply need to use your intellect?


I argue that the latter is the case for single entities. If you decide to destroy the most beautiful solar system, you are not committing an immoral act; you are just deciding to live in a universe without it. You can kill yourself, and even though that goes against your natural instincts, it would not be immoral. Stupid, maybe, but not immoral.


Let's add a few life forms of a different species to the planet your lonely entity is living on.


If a star “decides” to go supernova and wipe out all life on the planet, is that immoral on the part of the star? No. But if you decide to interfere with the planet's environment and kill off every member of this subspecies, most people would consider that an immoral act. Why?


For the same consequence, you are considered immoral but the star is not. The difference appears to be that you are a complex system that constitutes life; yet the star is also a complex system. A virus is also a complex system, and it can wipe out another species without being considered immoral. Harmful, yes, but not immoral. Those beings and physical systems appear to be given no choice, and it is choice that determines our moral condemnation of one entity but not of the others.


But choice cannot come from complexity alone. In fact, even species considered sentient may have no real choice: if all the laws of the universe were known and calculable, we could predict the path of every particle and energy field and work out the inevitable conclusion that a sentient being would reach, since that being is constituted by those particles and energy fields.


My take on this question is that it is irrelevant to the problem of morality. Regardless of whether quantum physics ends up being deterministic or remains constrained by the uncertainty principle, and regardless of whether those properties are manipulable by consciousness, the notion of morality is unaffected.
My proposition is that any complex system capable of being aware of the concept of choice derives the concept of morality at that instant. The choice does not have to be real in a physical sense; it just needs to be the concept the being uses to give reason to its actions. Morality is a consequence of the capability of formulating choices, regardless of whether that choice is real or an illusion of consciousness. Morality is therefore a construct, not an inherent property of the universe.


But choice is not enough. Why is it morally acceptable for me to kill a star (assuming it would have no consequences on other life forms) but not to kill a subspecies? Another concept must be added to that of choice.


The caveat I placed gives me the needed clue: “no consequences on other life forms”. There is a reflective process going on here. An entity capable of constructing morals always does so in regard to other life forms. And the more similar those life forms are to itself, the clearer the right moral choice becomes. For a human, killing another human is clearly immoral, while killing a member of another species is not so easy to judge morally; yet it can still be immoral, so morality does not apply just to humans.


The second condition for an entity to be capable of moral choice is the ability to reflect its condition onto another entity. Every moral choice appears to be made by focusing not on the entity making the choice but on the entity (or group of entities) that may suffer the consequences. This inherently altruistic behaviour is puzzling to many, so it is no wonder that the mechanism is often attributed to a supernatural being that imposes its morality on a natural entity.


But reflection is easy to account for with even a cursory look into psychology. The trick is referred to as “putting yourself in the other person's shoes”, but in fact what you are doing is closer to “don't do to others what you would not like done to yourself”. It's a double reflection. You evaluate the consequences of your moral choice for others because there is an expectation that the other entity will apply the same kind of reasoning when it has a moral choice to make. So, in reality, a moral choice is made in regard to what you yourself would not want to be the victim of. Altruism is a consequence of intelligent selfishness. This is also why it is much easier to reach a moral rule among similar species.


This leaves the question of the subspecies. The subspecies cannot reflect, yet it is still immoral to exterminate it. The final factor is time. A moral is supposed to be a rule, and a rule must apply at all times (given the same context), not just the present one. So in fact you are reflecting on your own evolution: if your own species had not been allowed to evolve, you would not exist, so you recoil from doing such things to other species in the expectation that the favour will be returned. It is just a longer timeline, one that extends beyond the scale of your own lifetime.


My definition, then: morality is constructed by a sentient being complex enough to form the concept of choice and able to reflect that concept onto other entities, not only now but at any future time.


Moral philosophy is founded in our inescapable hierarchical habits. As above, so below. We project our innermost desires, for peace, for glory or destruction with every living interaction. We justify our every judgement against the image of our own self-image.

Add the hashtag #introduceyourself to your post

This was a fascinating read. We need more like this on Steemit. Keep it up :)

Thanks, glad you liked it. This will be a 5 article series for starters, if it gains traction I will expand it from there.
