Can science answer moral questions? I don't think so
In his opinion video, Sam Harris suggests that science can answer questions of morality.
On this I have to say Sam Harris is wrong. Science cannot answer philosophical questions. In fact, I would even suggest that Sam Harris is confusing mathematics with science. At the foundation of mathematics is logic, and this logic is critical to certain kinds of morality. It's the use of mathematics that allows for cost-benefit calculations, but the "greater good" or utility function is not determined by science. Wikipedia describes science as follows:
Science (from Latin scientia, meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.
The mistake Sam Harris makes is ignoring the actual definition and function of science in service of his own agenda: spreading the idea that cultural relativism is incorrect. On this I would disagree, because I think there is no objective right and wrong; if we cannot agree on the utility function, then we cannot agree on the maximization of utility.
He then asks whether or not we will consult supercomputers to answer personal questions. The joke is on him here, because I would predict we will ask the equivalent of supercomputers these personal questions in the very near future. In fact, the future of morality, in my opinion, is the invention of moral calculators which can help people handle the mathematics, the probabilities, the whole "expected utility" of different choices. Mathematics is not science but a formal language; science utilizes mathematics but is not itself mathematics. Some forms of morality can be reduced entirely to mathematics (consequentialism, utilitarianism), and most morality is describable by logic.
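To make the idea of a "moral calculator" concrete, here is a minimal sketch of what one might compute: the expected utility of each choice, given subjective utilities. All choice names, probabilities, and utility numbers below are illustrative assumptions, not anything from Harris or any real system.

```python
# Minimal sketch of a "moral calculator": pick the choice with the
# highest expected utility. The utilities encode subjective values;
# the machine only does the arithmetic.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one choice."""
    return sum(p * u for p, u in outcomes)

def best_choice(choices):
    """choices: dict mapping a choice name to its (probability, utility) pairs."""
    return max(choices, key=lambda name: expected_utility(choices[name]))

choices = {
    "take the job":    [(0.7, 50), (0.3, -20)],  # likely good, some risk
    "decline the job": [(1.0, 10)],              # safe, modest payoff
}

print(best_choice(choices))  # -> take the job (EU 29 vs 10)
```

The point of the sketch is that only the numbers are moral content; the maximization itself is pure mathematics.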
**Science (from Latin scientia, meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.**
Emphasis is on testable. Morality isn't testable, and no predictions about the universe can be made under a moral system. Morality is about life and social interaction, about values, about what the right or wrong thing to do is. Science can never tell us what to value, because value is subjective. Science can never tell us about life; it can merely test models and make predictions. Science will not answer questions of why, nor will it provide purpose, nor will it explain why life has value, as it's not designed to replace philosophy and mathematics.
Sam Harris says we need a universal conception of human values? This is not possible. What is possible is merely an approximation based on current consensus (public sentiment) about what is right or wrong behavior in a given situation. It's not possible to use science to replace that, and while you can use mathematics to make sure it's all logical and reasonable, it will not involve science.
Game theory, economics, social exchange theory, consequentialism, utilitarianism, probability theory, and formal logic are not science. All fall into the category of either philosophy or mathematics. Science builds our knowledge by testing models, where predictions can be made based on previous knowledge. Mathematics allows for the creation of models which cannot be directly tested but which are logical, valid, and useful. Consequentialist morality, in my opinion, falls into mathematics, where value is subjective and right and wrong depend entirely on what we value.
I am not a huge Sam Harris fan, because he sometimes takes things too far in the name of science, like his recent fascination with Charles Murray and scientific racism. I think what he is saying here is that human morality can be measured, and he uses the example of the effects of child abuse on psychological health. He is saying that well-being, health, and the reduction of suffering are moral and measurable, and are even more measurable now that human cultures are getting closer and closer. I think it's a pretty simple concept, and I don't think it spins out into large questions of philosophy and mathematics. Thank you for the interesting conversation.
I think there is science, and then there is science. The definition of science as the gathering of evidence to inform conclusions is the definition, but the term is generally used to imply a sort of natural reasoning for why things are the way they are, and so a scientific understanding and a mathematical understanding are essentially the same thing. I think economics is probably science: if you compare quantum physics to a free market and the attempts to understand them, they are basically the same.
I tend to agree with you.
Science can tell us how to most effectively maximize happiness so long as we have an operational definition of happiness. But it cannot tell us whether happiness is in and of itself valuable.
In other words, science cannot answer the question: why should we value happiness? Suppose a meta-ethical moral nihilist says, "The only thing I value is my own happiness." How can a scientist prove them wrong? They can't, because science is not in the business of answering questions about value.
However, the mistake is to think that philosophy can better answer those questions. Even highly trained philosophers cannot "prove" to the nihilist that they should value other people's happiness. Ultimately, value statements are subjective. If the nihilist only values their own happiness, there is no logical argument that can definitively prove them wrong.
All we can do is to take advantage of the evolutionary fact that our species is social and we are programmed to value other people's happiness so long as they are a part of our social circle. That's the only thing that is keeping psychopathic nihilism from spreading in the population. But suppose the population was composed of 50% selfish psychopaths and 50% altruists. There is no "universal moral guidebook" that can determine who is right. It's all subjective.
You can make a case that it's valuable to value the happiness of others as a means of increasing your own happiness. That is generally why we value the happiness of others.
Morality is a human/sentient trait/value. Science is more or less a record and measurement of existence and its laws. Even then, it's all dictated by our perceptions and understanding.
I'd love to hear a reply from Sam Harris on your thoughts.
When you say some forms of morality can be entirely reduced to mathematics, what does that actually mean? You gave consequentialism as an example; does that mean one act has one outcome, as in a logical mathematical outcome? You can see I'm struggling to stretch my intellectual capabilities here :-) Enjoyed your post though!
If we all agree on what "social utility" means, say the sum of human happiness, then we have turned morality into mathematical calculation. In essence, this ethical calculus was formalized by Bentham. In addition, with consequentialist ethics, even without agreeing on "social utility," you can have your own "self-interest" and on that basis use cost-benefit analysis to decide which decisions provide the most expected utility. Consequentialist ethics are mathematics, and utilitarianism is a kind of consequentialist ethics.
You can be an ethical egoist consequentialist and simply rely on game theory to determine the correct course of action. Emotions do play a role, because they determine what we value; but while value is the subjective part of the equation, the calculations are pure mathematics. Social exchange theory also shows how this works in economics, which is likewise a discipline of mathematics, not science.
Consequences which are known can be computed by probability, using math, a calculator, or a supercomputer, to inform or advise someone on what actions to take, just as legal or medical advice can be delivered by an AI doing the calculations. If you value your life or certain interests, or have some utility function, then that determines the right and wrong outcomes for you, but the calculations are always the same process: cost-benefit analysis, probability distributions, and so on. The reason consequentialism isn't currently fashionable for humans is that it's a burden to calculate it all, but over time this burden will be reduced by AI.
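The Bentham-style reduction described above can be sketched in a few lines: once "social utility" is defined as the sum of individual happiness, comparing two options is plain arithmetic. The policy names and happiness scores here are made-up illustrations.

```python
# Sketch: if "social utility" is defined as the sum of individual
# happiness scores, comparing two policies reduces to addition.
# Scores are illustrative assumptions, not real measurements.

def social_utility(happiness_scores):
    """Total happiness across all individuals (Bentham-style sum)."""
    return sum(happiness_scores)

policy_a = [5, 5, 5, 5]    # moderate happiness for everyone
policy_b = [10, 10, 1, 1]  # big winners, big losers

print(social_utility(policy_a), social_utility(policy_b))  # 20 22
```

Note that under a pure sum, policy_b "wins" despite being far less equal, which is exactly why agreement on the utility function matters before any of the calculation means anything.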
References
Thanks, I'll have a read through the references. I'm struggling to fault Harris on this, to be honest, so I'm going to have another watch.
Having said that, I remember him discussing the morality of AI more recently. His example was the driverless car that finds itself nearing an unavoidable collision with some children. Does its course of action save the children by swerving off the road and killing the driver, or plough into the children and save the driver? I think his moral jury was out on that!
This has been solved. In my opinion, the owner of the car should decide whether to sacrifice the car to save others in the event of an accident. I don't think the AI itself should be enabled to self-sacrifice to save others if a human is in the vehicle, unless the human owner accepts that same morality.
From a utilitarian perspective, where all lives are those of complete strangers, it's better to save more lives than fewer. So it becomes a mere calculation (the Trolley Problem case) where you win by saving as many lives as possible. Taking 1 life to save 5 is a bigger win than saving 1 life at the cost of 5.
However, if that 1 life is not a stranger and those 5 lives are complete strangers, the values of those lives are no longer equal. So in practice it's not always right to say it's ethical to save the maximum number of people in a situation, because at the end of the day human beings do not value all people equally. Human beings like or love some people.
The jury isn't out on the Trolley Problem or the self-driving car variant of the same experiment. The solution under utilitarianism is always calculated to save the most lives at the lowest cost, provided all lives are of equal worth.
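The two cases above, equal worth versus unequal worth, can be sketched as a weighted count. The people and weights below are illustrative assumptions; the only point is that the same calculation flips its answer when the weights change.

```python
# Sketch of the utilitarian trolley calculation: choose the action that
# preserves the most total value. With equal weights, save the larger
# group; personal attachment changes the weights and can flip the answer.
# All names and weights are illustrative assumptions.

def lives_saved_value(saved, weights):
    """Total value of the lives saved; unlisted people have weight 1.0."""
    return sum(weights.get(person, 1.0) for person in saved)

strangers = ["s1", "s2", "s3", "s4", "s5"]
loved_one = ["my child"]

# All lives weighted equally: saving the five strangers wins (5.0 vs 1.0).
equal_weights = {}
print(lives_saved_value(strangers, equal_weights))   # 5.0
print(lives_saved_value(loved_one, equal_weights))   # 1.0

# A loved one weighted ten times a stranger: the calculation flips (10.0 vs 5.0).
partial_weights = {"my child": 10.0}
print(lives_saved_value(loved_one, partial_weights))  # 10.0
```

Same formula both times; only the subjective weights differ, which is the whole dispute in one line of arithmetic.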
So in the real world, how would that work? Maybe the car owner would sit through a morality interview with the car dealership, and his preferences would define the outcome of any accident. Surely there will be future laws to govern this, and it would make sense to follow the utilitarian approach, but with no exception for whether the driver has any connection to the would-be lost lives.
The car could have a screen, put forth the question in the form of a video, and simply ask the owner what they would like their car to do in that situation.
My understanding is that morality has been tested by science, and that Kohlberg was one of the leading pioneers in the field. His research led to predictable, repeatable results. I even tested it on my 10-year-old, and she answered exactly as Kohlberg predicted: her answer was pre-moral, instinctive, and immediate, whereas my answer was post-moral, as I was and am able to hold multiple perspectives and balanced judgments. Critical thinking, nuance, and analyzing multiple complex perspectives are not things a 10-year-old is developed enough to do.
This post isn't a dismissal of your assessments of morality, and I can see a day when computational power does enlighten our sense of truth and consequence.
What test are you referring to?
I can't remember the exact test question I used on my child, but Kohlberg had more than one. I'll link some of them below, but my point is that morality has been scientifically studied and the results are quite accurate, IMO. Most people in today's world operate at conventional levels/stages/modes of morality.
There are those highly developed people who operate at post-conventional levels/stages/modes of morality, but their actions will most often appear pre-conventional to the conventional moralists.
Accusing Jesus of doing good on the Sabbath would be one of many examples of this confusion.
Hindu gurus dropping acid another....
http://study.com/academy/practice/quiz-worksheet-kohlberg-s-stages-of-moral-development.html
I'm a fan of Harris myself, but I'm not blind to some of his logical flaws. I agree with you that morality is something that can never be truly defined, because right and wrong are consequential. Maybe within each environment ethics and morality can be established, to be applied, of course, to that specific environment alone. I don't think science could ever define morality, regardless of any advancement we might make in the world of computing. Maybe the main reason is that we will never truly agree on what is moral and ethical as a whole; the one-size-fits-all, set-in-stone set of laws might be a philosophical unicorn.
Science will give us a viewpoint and vast knowledge about things, but morality depends on the wisdom we derive with our common sense. So science will help, but it cannot always be true.
Good post.