You are viewing a single comment's thread from:

RE: Consequentialism and big data, the need for intelligent machines to assist with moral decision making

in #ethics • 7 years ago

Interesting conceptual contemplations.

Personally, I’m still somewhat sceptical about such prospects, given the subjective nature of “morality.” For instance, consider some of the things some extremist Muslims would deem “immoral.” Or even if we were to look at western cultures’ belief systems, we could surely find plenty of examples of ideas of right/wrong that are merely based in outdated world views which do not account for expanded consciousness and the possibility of embracing everything as having some value in a particular place and time. The concern I’d have with AI becoming the judge of morality is ensuring the integrity of the data it’s drawing from - in the sense of being able to filter out, or lower the weight of, information produced from cognitive bias, and to maintain objectivity, uninfluenced by flawed logic.

Though then again, depending upon just how intelligently these systems could be designed...


Morality is subjective. Consequences, on the other hand, are not so subjective. Statistics (probability) and consequences are measurable, which allows you to determine the risk spectrum.

That means risk to your reputation, risk to your interests, and so on. Morality being subjective simply means you have to take moral sentiment into account, which is also a matter of analyzing available data.
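The "risk spectrum" idea above can be sketched in a few lines, assuming the common framing of risk as probability times consequence severity. The event names and numbers here are hypothetical, purely for illustration:

```python
# Minimal sketch: risk as probability x consequence severity.
# All events, probabilities, and impact figures below are made up.
def risk_score(probability: float, impact: float) -> float:
    """Expected loss: likelihood of the event times its consequence."""
    return probability * impact

# Hypothetical events: (estimated probability, impact on a 0-100 scale).
events = {
    "reputation damage": (0.10, 80.0),
    "regulatory fine":   (0.05, 60.0),
    "customer boycott":  (0.02, 90.0),
}

# Rank events from highest to lowest expected loss to form the spectrum.
spectrum = sorted(
    ((name, risk_score(p, i)) for name, (p, i) in events.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in spectrum:
    print(f"{name}: {score:.1f}")
```

In this toy spectrum, a likely-but-moderate event can outrank a severe-but-rare one, which is exactly the kind of ordering that pure intuition tends to get wrong.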

How do companies, for example, determine how to be perceived as moral? First they study the location they operate in to determine moral sentiment in that particular country. So in China they will rely on their understanding of Chinese moral sentiment and, most importantly, the compliance requirements of the Chinese government. In Europe they have to comply with European governments and must understand the moral sentiments of Europe. This is a matter of knowledge management and data analytics. That means you can automate quite a bit of it, and in theory you can put AI to the task of processing the information, because really we are just dealing with number crunching here.

> For instance, consider some of the things some extremist Muslims would deem “immoral.” Or even if we were to look at western cultures’ belief systems, we could surely find plenty of examples of ideas of right/wrong that are merely based in outdated world views which do not account for expanded consciousness and the possibility of embracing everything as having some value in a particular place and time. The concern I’d have with AI becoming the judge of morality is ensuring the integrity of the data it’s drawing from - in the sense of being able to filter out, or lower the weight of, information produced from cognitive bias, and to maintain objectivity, uninfluenced by flawed logic.

AI gives us the benefit of having perfect logic. So you could, for example, feed in Sharia law and Islamic morality. The AI would just see them as local rules, rules which apply when dealing with Muslims or in Muslim countries. But the subjective part is how these rules are interpreted. For this we would have clerics and scholars, who play a role similar to our Supreme Court system. The problem here is that public sentiment might disagree with the official ruling.

Which do you choose? Well, you can look at the risk statistics. If there is a dictatorship, or a market with very sophisticated surveillance, then in most cases you will likely have to focus on compliance. Specifically, you would have to weigh the risk to your business of compliance vs. non-compliance in multiple different scenarios and set policies in advance for when you'll comply or not comply.
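Setting policies in advance, as described above, can be sketched as an expected-cost comparison per scenario. Everything here is a hypothetical illustration: the scenario names, cost figures, and penalty probabilities are invented, not drawn from any real compliance model:

```python
# Hedged sketch: pre-compute a comply / don't-comply policy per scenario by
# comparing expected costs. All figures below are illustrative, not real data.
def expected_cost(prob_penalty: float, penalty: float, fixed_cost: float) -> float:
    """Fixed cost plus the probability-weighted penalty."""
    return fixed_cost + prob_penalty * penalty

# scenario: (cost of complying, P(penalty if non-compliant), penalty size)
scenarios = {
    "strict surveillance market": (50.0, 0.90, 500.0),
    "lax enforcement":            (50.0, 0.08, 500.0),
}

policy = {}
for name, (comply_cost, p_caught, penalty) in scenarios.items():
    cost_comply = expected_cost(0.0, 0.0, comply_cost)      # certain cost
    cost_defy = expected_cost(p_caught, penalty, 0.0)       # gamble on penalty
    policy[name] = "comply" if cost_comply < cost_defy else "do not comply"

print(policy)
```

Under these made-up numbers the policy flips between scenarios: heavy surveillance makes compliance the cheaper bet, while lax enforcement makes the gamble rational. The point is only that the decision reduces to number crunching once the estimates exist.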

I admit this is going to be difficult to automate, but it's not impossible for AI to help with some of it. AI can, for example, surface insights you might never have arrived at by reviewing the data yourself. On the other hand, your risk appetite might be greater than most people's, and so you would ultimately have to decide whether you'll risk non-compliance.

All of these decisions require calculations, number crunching, and AI helps with that part.

You mention that sources of information matter. On this I agree, but it's a similar problem to what DPoS (delegated proof of stake) faces when relying on data feeds. Finding a trusted source of information is the challenge. All businesses and government agencies face the same problem; in fact, all people face the problem of which sources of information to trust. Only high-quality information should be used, but it's hard to filter the high-quality information from the noise.
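The data-feed comparison above suggests one well-known mitigation: aggregate many sources and take the median, so a single bad or manipulated feed can't drag the result far. This is a generic robust-aggregation sketch with invented feed names and values, not a description of any particular DPoS implementation:

```python
# Sketch: median aggregation of untrusted data feeds. A lone outlier
# (e.g. a manipulated source) barely moves the median, unlike the mean.
# Source names and values are hypothetical.
import statistics

feeds = {
    "source_a": 100.2,
    "source_b": 99.8,
    "source_c": 100.0,
    "source_d": 250.0,   # an outlier / manipulated feed
}

trusted_value = statistics.median(feeds.values())
print(trusted_value)
```

The mean of these four feeds would be pulled well above 100 by the outlier; the median stays anchored to the honest majority. It doesn't solve the deeper trust problem the comment raises, but it limits the damage any one source can do.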

I don't think AI is going to solve that problem. I think for that we'll be relying on the crowd. AI can only really crunch numbers and do logic; it isn't able to find meaning or determine value.
