Will a Lack of Ethics Doom Artificial Intelligence?
If there were ever a time for ethics to be formally applied to technology, it is with the emergence of Artificial Intelligence. Yet most of the big AI companies struggle with what should seem a simple task: defining ethics for the use of their products. Without the underpinnings of a moral backbone, powerful tools often become caustic instruments of abuse. AI technology leaders must establish the guardrails before chaos ensues.
As the great strategist Sun Tzu professed, "Plan for what is difficult while it is easy; do what is great while it is small." It is a tough challenge to find the right ethical balance given the complexity of Artificial Intelligence. Even more difficult is establishing reasonable governance and sticking with it. However, as AI gains power from vast amounts of data, it will impact almost every aspect of our lives, from healthcare and finance to employment and politics. The benefits will solidify a deep entrenchment of AI systems in our digital ecosystem. Establishing parameters now is challenging, but it will be far more difficult to avoid catastrophe later if we populate the world with AI systems that can be misused.
AI for Everyone
AI/Ethics is crucial for the long-term security, privacy, and safety of everyone intertwined with the digital world. Organizations with forethought and true social responsibility will lead the way and separate themselves from companies that use such initiatives only as thin marketing ploys. But there are tradeoffs these companies must weigh.
Autonomous systems are perfect for analyzing massive amounts of data: they group, classify, build profiles, and make decisions with high degrees of accuracy, consistency, and scalability. Such abilities can be highly prized and profitable, but they are alarming from a privacy, security, and safety perspective. Should AI systems profile every person to determine the best way to influence them on any topic, whether politics, religion, or purchasing preferences? Should they be empowered to make life-and-death decisions? What about AI systems that show preference for, or discriminate against, social, racial, or economic groups? Even if it is accidental, occurring because of a lack of design oversight, are these situations ethical?
Such systems have the power to change the world. And where there is power, there is money, greed, and competition. Purposely avoiding certain use cases of AI systems carries an opportunity cost of missed financial windfalls and prestige. Companies understand this trade-off, and it is difficult to forgo such lucrative prizes, especially when competitors may maneuver to seize them.
Early Moves
Currently, the effort to establish ethics for the use of Artificial Intelligence is still in its infancy. There are academic, political, and business initiatives, but we are in the early stages of theory and practice. Whatever standards are created and implemented must be tested over time. The real validation will be around the perceived sacrifices of power and financial gain. Although consumers may feel all this is out of their control, as a community they in fact have a tremendous amount of influence. Society can collectively support or shun organizations based upon their ethical choices, with consequences for profits, influence, and power.
Acting Together and with Forethought
As consumers, we have a choice to support businesses that fall into three categories of maturity:
- Irresponsible: Tech companies that have yet to publish ethical guidelines for their AI products and usages. Lacking motivation or expertise, or simply focused on self-interest, they have not taken steps to purposefully guide AI systems to remain benevolent. Instead, whether by intent or ignorance, they will use AI for whatever pursuits benefit them, without the burden of considering the greater consequences.
- Striving: Organizations with a moral compass that have put forth the effort to establish AI Ethics policies but are struggling to implement the right balance. Time will tell what direction they go and their true level of commitment. Companies like Google, which recently disbanded its new AI ethics council, have worked hard to define a direction and governance but are finding it difficult to solidify a structure that represents the optimal balance. Of note, Google does listen to its employees, partners, and customers when it comes to inputs for decisions.
- Leaders: Then there are those organizations, still few in number, that have fully embraced AI/Ethics with a greater level of social responsibility. They see both the opportunities and risks and are willing to forgo some short-term advantages for the betterment of all. They use AI with forethought and transparency to benefit their users, improve their services, and build trust by showing their willingness to do what is right.
As members of society, each of us should recognize and show economic support for the Artificial Intelligence ethics leaders and for those still striving to attain such a prestigious status. As citizens, our political support is crucial for proper regulations, enforcement, and legal interpretations that set a minimum standard for acceptable behavior and accountability. As consumers, voting with our purchasing preferences, we can make ethical AI leadership a competitive advantage.
In the end, Artificial Intelligence systems will be analyzing our data and determining what opportunities and impacts will affect each of us. We have a responsibility to protect ourselves by supporting those organizations that operate with purposeful ethical standards in alignment with what we deem acceptable.
Why do you expect ethical behavior from AIs, when there is no ethical behavior from other decision makers like politicians (who ruthlessly push their agendas), lobbyists, or companies (who are all for profit)?
Should AIs be more "human" than human beings themselves?
I mean, it would be great if AIs were like angels, but is this a realistic expectation?
If you deprive them of any form of deception, trespassing, or insidiousness, could they then reach their full potential?
I do expect ethical behavior from those who have power; sadly, I am often disappointed. I don't want AI to be more 'human', but rather more 'intelligent'. AI systems can complement human society by helping us understand opportunities to be better and risks to avoid. AI is a tool, not an angel, deity, or magic oracle. Just a tool. One that needs rules, standards, and oversight so it does not inadvertently create avoidable problems.
Artificial Intelligence is a very powerful tool!
And whoever has it is powerful as well, so I totally agree that AI must 'obey' Asimov's Laws of Robotics.
This is why we have to support companies that are serving both science and humanity.
Loved your thoughts on the topic!
The problem is: which ethics? It's easy to put ethics into AI but hard to agree on which values represent the whole of humanity.
Ethics are not at all about what values represent the whole of humanity. Ethics are simply: do no harm, i.e., do not deceive, do not steal, do not trespass, do not kill. The exceptions are: deceive if doing otherwise threatens your immediate (or relatively immediate) safety; steal if doing otherwise means you'll starve or die of thirst; trespass if doing otherwise threatens your safety or well-being; kill if doing otherwise means you will be killed; harm if you must defend yourself.
Do not exploit people, do not bully, do not abuse people's patience, goodwill, and attention, do not treat others how you wouldn't want to be treated. So "which" ethics? Those ethics: the universally recognized ethics, which don't have anything to do with humanity or its totality.
Is it ethical to sacrifice some level of privacy for security, safety for convenience, time for accuracy, etc.? What about systems that determine justice? Is it ethical to provide different levels of service, or should resources be distributed equally in all cases? Should one person's life be valued more than another's? ...Are you sure, or are your answers couched in the words "it depends"? That is the challenge of codifying ethics into a binary/digital system.
Those trade-offs don't have any context in terms of right or wrong, so yes, it all depends.
Let's talk about justice. Would you say it depends on the crime AND the criminal? Would you give the same punishment for a first offense as for a fifth offense? Would you take into account the importance of the person, their ability to help others? Would you not consider the same of another, but conversely their unimportance and ability to harm others? The whole "artificial intelligence" is nothing more, nothing less, than programming. Programming runs extensively on "it depends" / conditional statements. Ethics, then, are a matter of putting in all possible depends and exceptions, and they (all possible "it depends") should be determined from "do no harm / don't treat others how you wouldn't want to be treated".
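To make that concrete, here is a minimal sketch of ethics-as-conditionals in Python. It is purely illustrative: every factor name, threshold, and penalty here is an assumption invented for the example, not a real system.

```python
def sentence(offense_count: int, severity: int, capacity_to_harm: int) -> str:
    """Toy 'it depends' sentencing rule: each branch codifies one exception.
    All factors and thresholds are invented for illustration."""
    if severity >= 8:                                # grave harm: no leniency
        return "maximum penalty"
    if offense_count == 1 and capacity_to_harm < 5:  # first offense, low risk
        return "warning and restitution"
    if offense_count >= 5:                           # repeat offender
        return "escalated penalty"
    return "standard penalty"
```

Note how every added "it depends" multiplies the branches to maintain, which is exactly the convolution the reply below raises.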
The terms Right/Wrong are relative to the person and their moral structures. What is right for you, may be wrong for others and vice versa.
We agree that programming can include 'it depends' capabilities, but the more complex you go, the more difficult and convoluted it becomes. This introduces risks of error, inconsistency, and corner-cases that require human intervention.
The "do no harm" and "treat others how you want to be treated" are good rules-of-thumb (we all have different thumbs) and are very problematic to program as the terms 'harm' and 'how you want to be treated' are different from person to person and can change quite often even for an individual.
"Treat others how you want to be treated" is a bad heuristic because it's not data driven. "You" doesn't exist in the data, and shouldn't, because it would bias things. Instead, "treat others how they want to be treated" is data driven and can leverage big data.
Once again this shows why it's hard to do ethics. People want to put themselves and their views into the ethics but this biases things. In order to do it right it has to be data driven in my opinion. The ethics have to be based on the current views of the world, the consensus of different demographics represented, according to clear rules.
If "you" doesn't exist in "the data" then "they" don't exist either, as the later is dependent on the former.
It's not 'my views'; it's a universally recognized principle. There is no "the world" without me in it. All those demographics are millions of me's.
Then you define the terms with a whole bunch of "if" conditional statements; that's how you determine what harm is.
It doesn't matter what we say. It matters what the world says. We represent others, and if we are talking about global projects, global companies, global AI, etc., then it would be elitist and selfish to program only our own opinions and feelings into it. Why should we think we know what is or isn't right for the whole world?
The world has to decide for itself what is or isn't right and the responsibility shouldn't be on some elites in an ivory tower but on the people in the world to decide what they think justice is. I don't even agree with a lot of other people on a lot of different things but I recognize that we have to serve other people and represent the interests of other people in the global context.
There's no us apart from the world; it's a false dichotomy. Do no harm isn't an opinion or feeling, but a universal truth, a principle recognized even by animals. So why we should "think we know what is or isn't right" for the whole world is not what we were talking about at all. You said 'which ethics' as if there are conflicting ethics, and there aren't. So it has nothing to do with "what I think is best for the world" but exactly which ethics, as there is no such multitude of ethics. They are all based on the principles of do no harm and don't treat others how you wouldn't want to be treated, literally older than the ancient civilizations and present in literally every single community on the planet. Like I said, it is larger than the "totality of humanity".
And this is why I don't think it can be hard coded nor can you or I or any small group of us come up with what we think is best for everyone else. Everyone has to have some say in it because everyone has a stake in the outcome of it.
This isn't about me or you or any small group of people deciding on everyone else's behalf. This is about organizations developing a code of ethics.
And a "code of ethics" cannot be set in stone, and in my opinion has to be formulated from the most current data. In this case it's a data driven process, requiring observation, requiring deep understanding of the social dynamics in different communities, and such as disciplines like anthropology, psychology, which need to be applied.
Developing a code of ethics within an organization is a very hard process when we are talking about artificial intelligence. It's as hard as trying to come up with ethics for a global government. How do you know you got it right? With the amount of power AI has, you cannot afford to get it wrong.
So the best you can probably do is map it to the current mass opinion. In this way if it is wrong it's because society and all the people were wrong too. The other thing you can do is try to limit the amount of damage this can cause by trying to take the most conservative approach, focusing on the most fundamental values humans have, to try to reach agreement on that.
That's exactly what ethics are: a set of do's and don'ts written down. It's not anything that needs data or opinions; it's a matter of universally recognized principles. I pointed out what those are in my initial comment, and you had absolutely nothing to contend concerning that, no example of ethics conflicting with what I mentioned. You make it seem like artificial intelligence is anything but programming. It's not. So in the end it's not a problem of "which ethics" to define in code, or of defining ethics for "a global government"; it's a matter of implementing ethics, which aren't opinions or open to opinion but simple, universally recognized (observed = observation) principles.
That is heuristics, rules for a better life. Ethics are more sophisticated than a mere list of do's and don'ts. Utilitarian ethics, for example, are not a list of "do's and don'ts"; they are an algorithm.
The focus of the utilitarian algorithm is to produce a certain outcome. The do's and don'ts only matter if they produce that outcome. The focus of the algorithm is to maximize happiness. So there does not need to be any list, because the items on the list are variables. The right thing to do is whichever item on the list is deemed most likely to produce the desired outcome of the algorithm (maximum happiness).
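As a rough sketch of that algorithmic view (in Python; the actions and happiness scores below are invented placeholders, not a real model):

```python
def best_action(actions, expected_happiness):
    """Utilitarian selection: the 'list' items are just variables; the only
    fixed rule is 'maximize the outcome metric'. Scores are assumed inputs."""
    return max(actions, key=lambda a: expected_happiness[a])

# Hypothetical example: whether lying is 'right' depends entirely on outcomes.
actions = ["tell_truth", "lie"]
expected_happiness = {"tell_truth": 0.4, "lie": 0.7}  # invented scores
print(best_action(actions, expected_happiness))       # -> 'lie' under these scores
```

Under different scores the same algorithm flips its answer, which is the point: the do's and don'ts are outputs, not inputs.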
I know it's programming, but my point is that in this area things shouldn't be hard coded. You cannot hard code a list of do's and don'ts and expect it to apply to every possible configuration of situations. Instead you have to encode the knowledge itself, from which the principles can be derived.
So for example, people value life, and from knowledge of this value the machines can avoid contradictions. If life is valuable, then the machine can deduce on its own, using mere logic, that preserving life is better than not preserving life. My point is that the axioms or principles are not set in stone; they are determined by the data given to the AI.
So ultimately the AI has to be data driven; it requires data from outside sources, from humans, and so ultimately we have to tell the AI our current understanding of our values. And this is the source of the problem, because how do we actually agree, as a community of billions, on what these values should be?
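One way to picture "encode the knowledge, not the rules" is a Python sketch like the following, where the value names and weights are stand-ins that, in this view, would come from community data rather than from the programmer:

```python
# Values arrive as data, not as hard-coded branches; changing the data
# changes the derived behavior without rewriting any rules.
values = {"life": 1.0, "property": 0.4, "convenience": 0.1}  # assumed weights

def prefer(outcome_a: dict, outcome_b: dict) -> dict:
    """Deduce a preference by scoring outcomes against the declared values.
    Each outcome maps value names to how strongly it preserves them (0..1)."""
    def score(outcome):
        return sum(values[v] * outcome.get(v, 0.0) for v in values)
    return outcome_a if score(outcome_a) >= score(outcome_b) else outcome_b

# 'Preserving life is better than not preserving it' falls out of the weights:
save_life = {"life": 1.0, "convenience": 0.0}
save_time = {"life": 0.0, "convenience": 1.0}
print(prefer(save_life, save_time))  # -> the life-preserving outcome
```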
Universally recognized by whom, though? We still need to agree on the process of how to distinguish valid from invalid ethical principles. For example, not everyone is utilitarian, and not everyone is consequentialist, and this means there will be some people who are more concerned about the afterlife than about happiness in the here and now. Both would be ethical according to the logic of their own moral systems, but it doesn't mean they'll agree.
A Christian for example can believe to steal is a sin, and this is absolute. There is no situation where stealing becomes right in Christianity. Stealing is always wrong. In consequentialism, in utilitarianism, in some other ethics, stealing might not always be wrong. It would depend on the consequences of the actions, on the amount of happiness or misery it could create, and right there we'd have a conflict between hardline Christians and hardline utilitarians. To the machines neither would be unethical because both would be logical and following their principles. So how would the machine determine which of these is true?
Is stealing right or wrong? It's going to depend on who you ask, the circumstances, etc.
How do you determine the value of an ethical policy if you're not basing it on the values of either your community, or of the global community as a whole?
My point is that it is not up to us, or me, to decide what is best for the entire global community. The global community is the only demographic that can decide its values, and ethics in the context of AI has to represent the values of different demographics.
There's no such thing as a "global community", much like there's no such thing as "which ethics" or conflicting ethics. There are universally recognized principles; look up Universal Ethics. There is no need to fog the discussion up with vague notions of "deciding ethics on everyone's behalf", because you never had anything of substance to put forward that would show there are different ethics, or different ethics for different demographics, and certainly this was never about anyone deciding on anyone else's behalf what is ethical. If you have anything to offer in regards to "which ethics", do so; otherwise I'd rather not spend my time responding to vague comments that lose track of the conversation and have no contentions with what I said, only with what I implied or insinuated or whatever justified the direction of the responses.
There is a global community of human beings who share similar values. I could say there are "communities" which generate a shared consensus for what the majority of communities of that time believe in. This is also called zeitgeist. Global sentiment can reveal the current zeitgeist and the nature of it to some extent.
And where are the conflicting ethics? If there is a global community, then there's a global set of ethics, wouldn't you say? And if that is so, which conflicting ethics? If you assert that there is a global community, you cannot assert that there's not a set of universally recognized ethics for that community.
There is nothing to look up. I don't learn ethics from books. I learn ethics from observation. What works and what doesn't? What is the data showing how people really think and feel? If you can't cite actual practical data showing that people think a certain way then your views on "universal ethics" are backed by what? Your own feelings?
I see ethics the way a weather forecaster sees cloud formations. It's merely the current arrangement of mass sentiment on many different topics, issues, etc. I don't get to decide how you or others think about a question like abortion, or whatever else. I only get to ask questions to you to see if you'll tell me what you think in some anonymous safe fashion, or I can observe your behavior and deduce from your behavior what you really think from your actions.
People who claim they value a certain thing? Their behavior should align with it to give this some weight. And this is how I know what someone believes in and what they might think is or isn't ethical. Do that for every person in the community and you get community sentiment and behavioral data. This data can inform what society really thinks and feels, and from that we can come up with some ethics which we think currently best represents the values our communities hold.
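A toy Python illustration of that idea, weighting each person's stated value by how well their observed behavior aligns with it (all names and numbers are invented for the example):

```python
# Hypothetical survey and behavioral data for two community members.
people = [
    {"stated": {"honesty": 0.9}, "behavior_alignment": 0.8},
    {"stated": {"honesty": 0.7}, "behavior_alignment": 0.3},
]

def community_sentiment(people, value):
    """Aggregate a behavior-weighted consensus score for one stated value."""
    weighted = [p["stated"][value] * p["behavior_alignment"] for p in people]
    return sum(weighted) / len(weighted)

print(community_sentiment(people, "honesty"))  # behavior-weighted consensus
```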
I told you to look it up because you seem to think that there's such a thing as conflicting code of ethics. A lawyer has a code of ethics that might conflict with his morals but that code of ethics isn't at odds with any other code of ethics. Ethics are written down, that's why they are ethics and not morals. My views on universal ethics are based on observations, not feelings or nonsensical things like opinions.
Different demographics of people have very different ethics. The ethics which help people survive in prison don't necessarily work in every environment. It clearly worked for them in prison but then they get out of prison and find that suddenly things work very different.
One side of ethics is that it has to actually work in the real world. It's not just some hard-coded rules; it has to actually improve societal well-being, or raise the sum of human happiness, or meet some similar metric by which we can say that following these rules makes the world a better place.
Many people believe their holy book provides the best source of ethics. So when I ask which ethics it's an obvious question. Not everyone is going to agree with each other on most things. So to have a universal agreement among billions of people is pretty difficult. And if it does happen it likely will have to be for the most fundamental human values.
Such as? I don't think you understand what ethics are, since you seem to think there's such a thing as "prison ethics". Etiquette, yes, but ethics... Demonstrate the conflicting ethics of prison vs. the free world if you claim there is such a thing.
Again, look up Universal Ethics.
Agreed! It is far more difficult than I thought. I was on an executive-level working group tasked to define a manifesto of sorts for AI/Ethics. Lots of complex discussions. Fortunately, there were some brilliant people on the team whom I took the opportunity to learn from. I recall a two-hour discussion on the difference between "equality" and "equity". I learned so much! Suffice to say, it does get complex when you try to codify abstract thoughts.
Equality and equity, I'm aware of that debate. I think perfect equality is simply impossible. Equity is possible though.
I really liked your article, but though you mentioned Google (I disagree with your conclusion that they accept and act on input from their customers; I would qualify it to: they listen to some of their customers, those who agree with their own politics), what is missing for me to give you an A+ is mention of the names of companies who deserve our support.
I am hoping you and other readers who have the information, will create a list in this comments section.
I mentioned Google because they are at least being transparent. That is a huge step down the road of trust. They are listening to their employees and to customers. They took the bold step of creating an externally populated oversight committee. That is huge. It also bit them in the backside, but they are being bold. So I have to give them credit for that.
As for a list, you can look at recent reports, take a look at https://ai4good.org/ and their upcoming conference where they will be listing some of the leading AI/Ethical companies. But in my opinion, we are way too early to dole out grades. This will take time to temper. The real differentiation is publishing a formal AI/Ethics position and being transparent to see if it is followed. Baby steps.
Great write up!!!
AI as a tool is so powerful. How do we not see that ethical boundaries are necessary?
Similar to the Privacy discussion two decades ago, most consumers don't realize the relevance until it negatively impacts them. We really need to think forward. We must be advocates!
To the question in your title, my Magic 8-Ball says:
Hi! I'm a bot, and this answer was posted automatically. Check this post out for more information.
How fitting, a bot commenting on my blog about AI and Ethics. :) Have to up-vote this bot (something I rarely do) for the irony.