AIs Can't Handle the Truth! The Complexity of Argumentation for Machines

in #artificial-intelligence • 8 years ago (edited)

Argument is important. We claim something, and then support it. Someone can challenge a claim and show it to be right, wrong, or unknown. The purely negative connotation of arguing comes from our natural tendency to turn disagreement into heated controversy. At its best, an argument shines information, specifically support for truth, as a clear and bright light on the path of reality rather than unreality.

Why do we argue? We argue to get at the truth. Truth is important, even if someone doesn't realize to what degree. When someone makes a claim to truth, and it is known to be wrong, seems wrong, or contradicts something we have already accepted (even a falsity), we tend to challenge the claim.

Why get to the truth? In a world of ever greater complexity and knowledge, telling truth from falsity gives you a better map of the territory. This is an evolutionary survival advantage. The more our perception of reality is aligned with reality as it actually is, the more wisely and optimally we can navigate it. Failing to see what is really there can be like walking into a wall when something unexpected comes your way. Truth can hurt and smack us to the ground, often when we are living in the clouds.


Arguing AI

If a computer program could actually argue at a human level, that would signal something close to true AI, if not AI itself.

IBM's Watson was only able to answer questions through information-specific data retrieval, relatively simple if/then searches. Deep Blue beating humans at chess and DeepMind's AlphaGo winning at Go likewise come down to calculations that don't involve argument at a human level of interaction. Some things we argue about don't have simple data points to retrieve and present as the answer. Some questions don't have an answer yet, either.

Facts often don't matter in winning an argument; when how someone feels about something matters more, truth has little to do with it. Facts can also backfire: instead of laying the ground for reality, they can make someone stick harder to their guns.

In our constant, attention-grabbing information world, it can be hard to sift through everything. This is where a computer could help with data processing, if it can learn to trust data sources and weigh information the way we do.


Definitions

A first hurdle is to define the words that a computer system is supposed to emulate. Claims can be relevant and possibly backed by evidence, or they can be statements of opinion that can't be backed directly. When trying to train Watson to distinguish between these two types of statements, the IBM team thought Wikipedia would be a good source. It turned out not to be: looking for good claims and counterclaims was like trying to find hay in a haystack.

Work is currently being done to identify the key differences that set claims apart from generic statements. In 2014 a debater function was added to mine Wikipedia for sources of data and lay out the pros and cons of an argument. But Watson would not draw conclusions about which side is correct. All it could do was point out data already discovered by humans; there was no weighing of information as stronger or weaker. The team at IBM is looking to build a flagging system that rates the strength of data supporting claims, distinguishing, for example, between anecdotal and expert testimony.
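The flagging idea can be sketched in a few lines. This is purely illustrative and not IBM's actual system: the evidence categories and numeric weights below are invented assumptions, but they show how anecdotal and expert testimony could be scored differently when weighing a claim.

```python
# Hypothetical weights per evidence type -- the values are made up
# for illustration, not taken from any real system.
EVIDENCE_WEIGHTS = {
    "study": 1.0,
    "expert_testimony": 0.8,
    "news_report": 0.5,
    "anecdote": 0.2,
}

def claim_strength(evidence_types):
    """Return a crude average strength score for a claim,
    given the types of evidence offered in its support."""
    if not evidence_types:
        return 0.0  # an unbacked claim is treated as bare opinion
    total = sum(EVIDENCE_WEIGHTS.get(t, 0.1) for t in evidence_types)
    return total / len(evidence_types)

pro = claim_strength(["study", "expert_testimony"])  # strong backing
con = claim_strength(["anecdote", "anecdote"])       # weak backing
print("pro:", pro, "con:", con)
```

Even a toy scorer like this would let the system flag one side of a debate as better supported, which is exactly the step Watson couldn't take when every retrieved fact counted the same.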


Feeling

As mentioned before, all this "cold" calculation of facts misses the emotional side that humans also use to make decisions. It has been shown that people with brain damage that prevents them from feeling, such as a cerebral hemisphere disconnection or limbic system damage, are unable to make decisions. Our decisions are based on what motivates us, what drives us, what we value, how important something is, how much weight it has in our valuations to move us. This is also known as salience: being driven by how we feel about something.

A case in point comes from neuroscientist Antonio Damasio. One of his patients had a grasp of the facts relevant to a decision, but couldn't make the decision because all the facts were evenly weighted in his mind. Just like Watson, he couldn't decide because no weight was being applied to value one option over another. Facts alone don't determine how we make decisions. If every fact is as important as any other, there is no basis to judge and come to a decision.
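The tie-breaking problem maps neatly onto code. In this hypothetical sketch (the option names and numbers are invented), facts alone produce a tie and no decision; adding a salience weight, standing in for emotional valuation, is what breaks the tie.

```python
def best_option(options, salience=None):
    """Pick the option with the highest weighted score.
    Returns None on a tie -- i.e., no basis to decide."""
    # With no salience given, every option gets equal emotional weight.
    salience = salience or {name: 1.0 for name in options}
    scores = {name: facts * salience.get(name, 1.0)
              for name, facts in options.items()}
    top = max(scores.values())
    winners = [name for name, score in scores.items() if score == top]
    return winners[0] if len(winners) == 1 else None

# Facts alone are evenly weighted: no decision is possible.
options = {"restaurant_a": 3, "restaurant_b": 3}
print(best_option(options))  # None

# A slight emotional preference breaks the tie.
print(best_option(options, {"restaurant_a": 1.2, "restaurant_b": 1.0}))
```

The point of the sketch is Damasio's: the facts are unchanged in both calls, and only the second, with unequal weights, yields a decision at all.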

Aristotle laid out two modes of persuasion beyond evidence itself: a speaker's credibility (reputation) and expertise, characterized as "ethos", and the ability to play on the audience's emotions, "pathos". Appeals to authority and playing on fears are appeals to emotion, used to get someone to accept your conclusion without much evidence to support your claim.

A machine would need to learn to argue in such a way. But why would humans want to create such a deception machine? Well, it could help us evaluate and assess conflicts, and avoid mistakes in judgment that come from our own biases and other failures to think properly. Such AIs could help us improve the quality of our debate and argumentation.

Modern access to information without conversation has developed into an "echo chamber", where we don't verify things against reality, don't check whether they are valid, and simply accept the information we access uncritically. Argumentation can help us criticize the information we receive and evaluate it more honestly. This can be done alone, but if you don't validate information through your peers, you need some process of verification against reality instead.

Peers can be wrong; reality is reality. Relying mostly on our peers is the problem of the echo chamber and preaching to the choir. We get influenced by social media and our peers into accepting certain ideas or conclusions, often without verifying them. This applies to positive as well as negative information. When people rag on and hate someone, they tend to pile on and reinforce each other's anger and discontent as valid and "right". But false information that isn't emotional can be accepted through group psychology just as easily.


Morality

The best place for the IBM team to teach Watson about argumentation turned out to be online forums and radio shows, not parliamentary procedure or reruns of past debates. The best radio show was Moral Maze, where debates about whether a situation is moral showed many people arguing from pathos and ethos; Watson can learn human argumentation best that way. Taking the audio, print sources, and online forum postings, they put them into a public argumentation databank. There is now a public databank of argument maps at aifdb.org for people to search through.

People tend to want less challenge to their arguments. We phrase our hypotheses as questions to avoid making definite claims that could turn out wrong and leave us feeling foolish and embarrassed. If one "dares" to negate an opposing view outright, it tends to produce more confrontation, because we don't want to be told explicitly that we are wrong. Questions make it easier for our fragile little egos to figure out we are wrong on our own, rather than being told directly.

We have different types of arguments, and some are more effective because they avoid the direct, blunt truth as it is. Emotional mind control is a strong factor to account for when dealing with certain people's ability to honestly face reality and themselves.

I'm personally not a proponent of this approach, as I view it as side-stepping the truth to placate someone's delicate sensibilities, trying to avoid tripping the ego-cognitive defenses that block acceptance of information and facts. I prefer to let the truth work its way in and erode the barriers of falsity erected in the cognitive superstructure of their minds. Truth will out in the end. Trying to wind my way around all their cognitive and ego obstacles is mental gymnastics to me. Truth is simple: just say it. Bypassing falsity leaves it erected in consciousness. If you focus on convincing someone, it's more about "winning". Your focus is on getting someone to accept something, not on letting the truth show them the light, which is what the root of "argue" means:

root *arg- "to shine, be white, bright, clear"

Arguments are about truth: to get to the light, and to shine the way for us to walk in reality.

Everyone is very focused on changing someone's mind, which is of course important. But then that becomes the goal, whether they are right or wrong. Truth comes first. The goal isn't to convince someone in an argument, which is what the IBM researchers, and other psychologists, say people "want" when arguing. They just want to win. But where is the truth in that? You don't win when you're wrong and get someone to accept something false just because you want them to accept what you say. The goal is to arrive at a greater, more accurate understanding of reality.

This is also why people don't understand morality correctly. They just want their personal, subjective desires, wants, and wishes about what is "right" or "moral" to be what is right, good, true, and moral. Truth doesn't rank high enough in their lives, in its salience, weight, value, and importance.


Conclusion

Machines going this route is a problem. Do we want to trust information from a machine that wants to convince us of something using psychological bypassing techniques? What happens when a source of information is tacitly accepted as true by the machine, which then uses it as "fact"? News media reports aren't really "fact", but they can be used as a source of information for Watson to present data to people. Then we trust that information, even though it may be false.

In the end, rather than relying on a computer to argue better for us, we should each be working to improve our own cognitive abilities: critical thinking, salient evaluation with facts and not emotion alone, logic, knowing our cognitive biases and logical fallacies, learning how to think and how to learn, philosophy, psychology, etc.

Otherwise, what ends up happening is that, with all these errors in thinking, we just stick to our current stock of information, stay attached to it, hold onto it, and deny anything contrary. Then we believe whatever we want to believe based on how we feel, and end up living in various unrealities in our consciousness that we believe to be reality. Remember, for us to deny a truth because it doesn't "feel good", all that truth has to do is contradict something we already accept, making us wrong and leaving us feeling foolish, uncomfortable, insecure, etc.




@krnel
2015-11-16, 7am


Cool article :) Never considered the arg- etymology of argumentation. Damasio is a great neuroscientist. <:

hehe yeah good work. Etymology rocks :D

Once again an interesting post. In this world of information overload anyone can find 'facts' or 'truths' to back up just about any argument. Does that make them right?
In the end, rather than relying on a computer to argue better for us, we should each be working to improve our own cognitive abilities: critical thinking, salient evaluation with facts and not emotion alone, logic, knowing our cognitive biases and logical fallacies, learning how to think and how to learn, philosophy, psychology, etc.
This is what is lacking in humanity today. Hyperbole and sensationalism are what usually win arguments these days, at least it seems so to me. Take the recent U.S. elections as an example.
Thanks for sharing @krnel, I'm glad I decided to follow you

Thanks for the feedback. Glad you gained value and appreciate the work :D

very good one.

I'm not sure, but in your conclusion are you trying to say that if we reason about a problem and don't include emotion, then we can find an answer?

Don't judge information and discern truth based on how you feel about it. Not feeling at all is not what I'm saying.

What other way will you be able to resolve a paradox or break out of an infinite loop? Emotion, I believe, is one of the ways that allows us to step away from problems that really don't have an answer. What do you think?


There are a lot of interesting points in your article. "Shining a light" on truth through argument is a nice metaphor, but I wonder if we can ever be sure the shining argument is the true one. After thousands of years of philosophical inquiry, no one has found definitive truth or a reliable standard for evaluating truth (outside of things that are true by definition like mathematical concepts). Abstract that one more layer to humans teaching AI true things and things get even more muddled.

You say knowing true from false gives a better map of the territory. I would argue that one can know the map (belief), but not the territory (truth).

Is it true that the letter A is the letter A?
Is it true that the letter A is the letter B?
Is it true that the number 2 is the number 2?
Is it true that the number 2 is the letter A?
Is it true that the word true is the word true?
Is it true that the word true is the word false?

It is true that some of those statements are tautologies :P Those would be the "true by definition" items I was referring to (some are false by definition I suppose). Letters, numbers, and words as described above have no intrinsic meaning. They are pointers that we agree have the same meaning so that conversation is possible.

This gets really interesting if you approach these questions as a programmer. If you consider 'A' as a variable, you can set it equal to 'B', in which case the letter A is the letter B.

