Thinking Critically or Getting AI to Argue and 'Fact Check' for Us

in #philosophy · 6 years ago (edited)

Argumentation is important, but it has a bad reputation. Some people view arguing negatively because they see only the bickering side of it. Yet arguing is part of any controversial discussion where two people don't agree. We claim something, and then support it. Someone can challenge a claim and show it to be right, wrong, or unknown.

Argue comes from the Latin arguere: "make clear, make known, prove, declare, demonstrate". The root apparently comes from the Proto-Indo-European *arg-: "to shine; white". As such, arguments shine a light on information, ideally to show the path to what is known, proven, demonstrable, i.e. reality or truth.

We argue to get at the truth. Truth is important, even if someone doesn't realize to what degree. When someone makes a claim of truth that is known to be wrong, seems wrong, or contradicts a falsity we have already accepted, we tend to challenge the claim.

Why get to the truth? In a world of growing complexity and knowledge, telling truth from falsity gives you a better map of the territory. This is an evolutionary survival advantage. The more our perception of reality is aligned with reality as it actually is, the more wisely and effectively we can navigate it. Failing to see what is really there can be like walking into a wall when something unexpected comes your way. Truth can hurt and smack us down to the ground, often when we are living in the clouds.



Getting AIs to Argue like Humans

If a computer program could actually argue on a human level, that would signal a near-AI, if not AI itself. IBM's Watson could only answer questions through information-specific data retrieval, relatively simple if:then searches. Deep Blue beating humans at chess and DeepMind beating them at Go are likewise feats of calculation that don't involve arguing at a human level of interaction. Some things we argue about don't have simple data points to retrieve and present as the answer. Some questions don't have an answer yet, either.

Facts often don't matter in winning an argument; when how someone feels about something matters more, truth has little to do with it. Facts can also backfire: instead of laying the ground for accepting reality, they make someone stick harder to their guns. In our constant attention-grabbing information world, it can be hard to get through all the information. This is where a computer can help with data processing, if it can learn to trust data sources and weigh information like we do.

Computer systems need to be able to define words. Claims can be relevant and possibly backed by evidence, or they can be statements of opinion that can't be backed directly. Wikipedia was thought to be a good place to train an AI to distinguish between evidence and opinion, but it wasn't. Looking for good claims and counterclaims was like trying to find a specific needle in a stack of needles.
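To make that distinction concrete, here is a minimal sketch, in Python, of the kind of surface-level heuristic a system might start from. The marker lists and the classify_statement function are illustrative assumptions, not IBM's actual method; a real system would use trained classifiers rather than keyword counting.

```python
# Hypothetical sketch: separating evidence-backed claims from bare opinion
# using crude keyword markers. Illustrative only, not IBM's approach.

OPINION_MARKERS = ("i think", "i believe", "i feel", "in my opinion")
EVIDENCE_MARKERS = ("according to", "studies show", "data from", "the survey found")

def classify_statement(text: str) -> str:
    """Label a statement as 'opinion', 'evidential claim', or 'unclear'."""
    lowered = text.lower()
    opinion_hits = sum(m in lowered for m in OPINION_MARKERS)
    evidence_hits = sum(m in lowered for m in EVIDENCE_MARKERS)
    if evidence_hits > opinion_hits:
        return "evidential claim"
    if opinion_hits > evidence_hits:
        return "opinion"
    return "unclear"

print(classify_statement("According to the 2010 census, the town grew by 12%."))
print(classify_statement("I believe this town is the best place to live."))
```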

To identify the key differences that set claims apart from generic statements, a debater function was added to Watson to mine Wikipedia for sources of data and lay out the pros and cons of an argument. But Watson would not draw conclusions about which side is correct. All it could do was point out data already discovered by humans; there was no weighing of information as stronger or weaker. The team at IBM is looking to make a flagging system that ranks the data supporting claims as stronger or weaker, such as distinguishing between anecdotal and expert testimony.
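As a rough illustration of what such a flagging system might look like, here is a small Python sketch that ranks a claim by the strength of its supporting evidence. The categories, weights, and names (Evidence, claim_strength) are assumptions made up for this example, not the IBM team's design.

```python
# Hypothetical sketch of an evidence-flagging step: tag each piece of
# support with a rough strength so claims can be compared. The weights
# below are invented for illustration.

from dataclasses import dataclass

# Assumed ordering: expert testimony > study > news report > anecdote.
EVIDENCE_WEIGHTS = {"expert": 0.9, "study": 0.8, "news": 0.5, "anecdote": 0.2}

@dataclass
class Evidence:
    text: str
    kind: str  # one of the EVIDENCE_WEIGHTS keys

def claim_strength(support: list) -> float:
    """Score a claim by its single strongest piece of evidence."""
    if not support:
        return 0.0
    return max(EVIDENCE_WEIGHTS.get(e.kind, 0.0) for e in support)

pro = [Evidence("A peer-reviewed trial found the effect.", "study")]
con = [Evidence("My neighbor swears the opposite.", "anecdote")]
print(claim_strength(pro), claim_strength(con))  # 0.8 0.2
```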

Feeling the Cold Facts

All the "cold" calculating facts is missing the emotional side that humans also use to make decisions. It's been shown that people with damage to their brain that prevent them from feeling, such as a cerebral hemisphere disconnection or limbic system damage, are unable to make decisions. Our decisions are based on what motivates us, what drives us, what we value, how important it is, how much weight it has in our valuations to move us. This is also known as salience, being driven by how we feel about something.

A case in point comes from researcher Antonio Damasio: a patient has a grasp of the facts relevant to a decision, but can't make the decision because all the facts are evenly weighed in his mind. Just like Watson, no decision can be made because no weight is applied to value one option over another. Facts alone don't determine how we make decisions. If every fact is as important as any other, then there is no basis to judge and come to a decision.
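A toy Python example can show the problem: give each option a salience weight and an agent can choose, but make all the weights equal and the choice degenerates into a tie, i.e. indecision. The option names and numbers here are invented for illustration.

```python
# Toy illustration of the Damasio point: decisions need unequal weights.

def decide(options):
    """Return the most salient option, or None if everything ties."""
    best = max(options.values())
    winners = [name for name, weight in options.items() if weight == best]
    return winners[0] if len(winners) == 1 else None  # a tie means indecision

print(decide({"take the job": 0.7, "stay put": 0.4}))  # 'take the job'
print(decide({"take the job": 0.5, "stay put": 0.5}))  # None: evenly weighed facts
```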

Aristotle laid out two types of appeal in argument: a speaker's credibility (reputation) and expertise, characterized as "ethos", and the ability to use the audience's emotions, "pathos". Appealing to authority and playing on fears are both ways of getting someone to accept your conclusion without much evidence to support your claim.

A machine would need to learn to argue in such a way. But why would humans want to create such a deception machine? Well, it would help us evaluate and assess conflicts and avoid making mistakes in judgment from our own biases or other failures to think properly. The AIs could help us improve the quality of our debate and argumentation.

Arguing to Verify Claims From the Echo Chamber

Modern access to information without conversation has developed into an "echo chamber" where we don't verify things in reality, don't check whether they're valid, and simply accept the information we access uncritically. Argumentation can help us criticize the information we receive and evaluate it more honestly. This can be done alone, but a process of verification in reality needs to be employed if validation through your peers is not done.

Peers can be wrong. Reality is reality. If we rely mostly on our peers, we run into the problem of the echo chamber and preaching to the choir. We get influenced by social media and our peers into accepting certain ideas or conclusions, often without verifying them. This applies to positive as well as negative information. When people rag on and hate someone, they tend to pile on and reinforce each other's anger and discontent as valid and "right". False information that isn't emotional can be accepted through group psychology just as easily.

Not Wanting to Be Wrong

The best place for the IBM team to teach Watson about argumentation turned out to be online forums and radio shows, not parliamentary procedure and the repetition of past debates. They found the radio show "Moral Maze" ideal: its debates about whether a situation is moral or not show many people arguing from pathos and ethos, so Watson can learn human argumentation best that way. Taking the audio, print sources, and online forum postings, they put them into a public argumentation databank. There is now a public databank of argument maps at aifdb.org for people to search through.
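To picture what an argument map holds, here is a minimal Python sketch of claims linked by support and attack relations. It reflects the general idea of argument mapping only; the class and field names are assumptions, not the actual schema used at aifdb.org.

```python
# Minimal sketch of an argument map: claims as nodes, support/attack edges.
# Illustrative only; not the real AIFdb schema.

from collections import defaultdict

class ArgumentMap:
    def __init__(self):
        # target claim -> list of (relation, source claim)
        self.relations = defaultdict(list)

    def add(self, source, relation, target):
        """Record that `source` supports or attacks `target`."""
        assert relation in ("supports", "attacks")
        self.relations[target].append((relation, source))

    def pros_and_cons(self, claim):
        """Split a claim's incoming relations into pros and cons."""
        pros = [s for r, s in self.relations[claim] if r == "supports"]
        cons = [s for r, s in self.relations[claim] if r == "attacks"]
        return pros, cons

m = ArgumentMap()
m.add("Debates teach reasoning", "supports", "Debate is valuable")
m.add("Debates reward rhetoric over truth", "attacks", "Debate is valuable")
print(m.pros_and_cons("Debate is valuable"))
```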

We phrase our hypotheses in question format to avoid making more definite arguments and claims that could prove us wrong. We feel like fools, embarrassed at being wrong. People tend not to want their arguments challenged. If one "dares" to negate an opposing view, it tends to produce more confrontation, because we don't want to be told explicitly that we are wrong. Questions make it easier for our fragile little egos to figure out we are wrong, rather than being told directly.

We have different types of arguments, and some are more effective because they avoid the direct, blunt truth as it is. Emotional mind control is a strong factor to account for when dealing with certain people's ability to honestly face reality and themselves.

Letting Truth Speak

I'm personally not a proponent of this approach, as I view it as side-stepping the truth to placate someone's delicate sensibilities and avoid hitting the ego-cognitive defenses that block acceptance of information and facts. I prefer to let the truth work its way in and erode the barriers of falsity erected in the cognitive superstructure of their minds.

Truth will out in the end. Trying to wind my way around all their cognitive and ego obstacles is mental gymnastics to me. Truth should just be said. Bypassing falsity leaves it erected in consciousness. If you focus on convincing someone, it becomes more about "winning": your focus is on getting someone to accept something, not on letting the truth show them the light, which is what the root of argue means.

Arguments are about truth: getting to the light that shines the way for us to walk in reality.

Everyone is very focused on changing minds, which is of course important. But then that becomes the goal, whether they are right or wrong. Truth should come first. The goal isn't to convince someone in an argument, yet the IBM researchers and other psychologists seem to "want" exactly that when arguing: they just want to win. Where is the truth? You don't win when you're wrong and get someone to accept something wrong just because you want them to accept what you say. The goal is to arrive at a greater, more accurate understanding of reality.

This is also why people don't understand morality correctly. They just want their personal, subjective desires, wants, and wishes about what is "right" or "moral" to be what is right, good, true, and moral. Truth is not high enough in the salience, weight, value, and importance it holds in their lives.

AI, Fake News and Critical Thinking for Ourselves

Machines going this route is a problem. What happens when a source of information is tacitly accepted as true by the machine and used as "fact"? News media reports aren't automatically "fact", but that info can be used as a source for Watson to present data to people. Then we trust that information, even though it's false.

In the end, rather than relying on a computer to argue better for us, we should be working to improve our own cognitive abilities: critical thinking, salient evaluation based on facts and not emotion alone, logic, knowing our cognitive biases and logical fallacies, learning how to think and how to learn, philosophy, psychology, etc.

Otherwise, what ends up happening is that we have all these errors in thinking. We just stick to our current stock of information, stay attached to it, hold onto it, and deny anything contrary. Then we believe whatever we want to believe based on how we feel, and end up living in various unrealities in our consciousness that we take to be reality.

We don't want to be wrong. But we need to want to know if we're wrong, and to want to do what's right and true. All a truth has to do is contradict something we already accept, making us wrong, and instead of "feeling good" we feel foolish, uncomfortable, insecure, etc. Then we reject the truth simply because we're responding emotionally and egoically to it.


Thank you for your time and attention. Peace.


If you appreciate and value the content, please consider: Upvoting, Sharing or Reblogging below.
Follow me for more content to come!


My goal is to share knowledge, truth and moral understanding in order to help change the world for the better. If you appreciate and value what I do, please consider supporting me as a Steem Witness by voting for me at the bottom of the Witness page.

Comments

There are many downfalls to this system, though. We know that data is manipulated, history is rewritten, and things evolve daily. It's difficult for someone to speak actual truth because there are so many aspects that are subject to debate. Debate is great and needs to happen, and it's how we can learn a lot. However, when it comes to many topics it's very difficult to discern what's actual "fact" and what's not.
People will do and say things that are wrong simply because it's something "that's always been." Take for example the old belief that the earth was the center of the universe. Those who went against the norms of the day were called heretics, killed, and ostracized from society. It was later found out that this wasn't in fact the case and the skeptics were correct.
There are lots of examples of these situations; my point being that although things are comfortable and people think that what passes for common knowledge is correct, there isn't a lot that is truly "fact." I have seen far too much obfuscation to believe that there are many things that are truly "facts" rather than a bought opinion, mass opinion due to influence, or incorrect information that the majority hold onto and will not change.

It's also just a matter of monetary influence what gets determined to be "fact checks"; they are anything but. Snopes being a source of facts? It's one of the most biased websites people can look at, yet it's lauded as the source of truth.

My point being: this sounds great on paper, but in reality it's far too complex for a computer program to undertake, if you're able to think truly objectively about the world we live in. There is far too much falsity to call things "facts."

Yes, it is far too complex. But facts are all around. If I walk down a street, that objectively happened in reality; it's a fact, a truth. Being able to verify it afterwards may not be possible, but that fact is a truth that objectively occurred in reality. Verification of claims is what allows other people to call something a truth or fact. If it can't be verified by others, then they can't apply the particular words "truth" or "fact" to that claim. Facts and truths are all around; we as individuals just can't be everywhere to verify them all.

IMO it won't be long before we start branding AI as AS: artificial stupidity. Case in point: the recent list of fatal crashes of Tesla Autopilot cars, Google Translate spewing out eerie prophecies, the 98% defect rate of facial recognition technology, Alexa's stupidity.
IMO, AI/AS is still overrated. Under perfect operating conditions (boundary values set by the coders) it might be reliable, but not in dealing with real-life ambiguities. AI can't replace a cab driver in the overpopulated cities of a developing country, not in the foreseeable future.

And as for arguing to come to a conclusion like humans do, well, I doubt it will ever be possible. Or even if they do, it probably won't qualify as an argument in the human sense of the term, just a bunch of code we can live without.

LMAO! AS, AI as an ASS :P

Yes, they are stupid asses. I'll never trust the artificial mind. We need to improve human capabilities, not outsource and abdicate our responsibilities to think to other technologies more and more.

Ah man, whenever I read up about AI it just doesn't feel right to me. Programming them to argue, have conversations, do basic human tasks... aren't those the things that make us human? Why are we creating something to replace ourselves? We are seeking to make AI into better versions of a human.

It creeps me out! Especially when I've listened to conversations of one AI with another AI, and they have a common theme of destroying the world. That could all be fake propaganda to get people talking, but they did also create a language. They're learning... The Terminator was a factual representation of the future! Could be, to be honest...

Yes, we're delegating our humanity away :/ Transhumanism and AI work hand in hand to erode humans away from being human and toward being robots.

I still think too much AI can be dangerous to us humans, but on the other hand, maybe AI can decide better, since it won't be affected by emotions and will just aim for the objective.

We should learn to think better and make better decisions for ourselves, not be ruled by machines.

Curated for #informationwar (by @commonlaw)


AI (Artificial Intelligence) is, in my opinion, used as fake news, because whatever we have as AI is all processed through human beings only, and completely behaving like human beings is impossible.

Some time ago Google showcased a tool that can answer the phone on our behalf, and it sounded like a human being for sure; in my opinion these kinds of technologies can be used negatively.

On social media, half the topics are just misleading, and in my opinion it is done to gain more viewers: whenever people watch or read something they have never read about before, it attracts them more, and they push themselves toward that channel for sure.

Silly argumentation is not needed; indeed, we need more mindful arguments, at the right time for the right situation, because sometimes it's really important to stand up for ourselves.

Wishing you a great day, and stay blessed. 🙂

Personally I don't ever think machines will be able to think like human beings. The truth is we don't even know why or how we think.

Really nice post @krnel

Yeah, we don't understand it, but we don't need to know why we think in order to eventually know how we think. Why is the deepest question, often elusive, while how is more answerable.
