Is intelligence an algorithm? Part 3: Reasoning



One of the most important tools of the Intelligence algorithm is “Reasoning”. Before we can explore the more challenging topics of “Problem-Solving” and “Heuristics”, which I will discuss in part 4 of this series, we must first get a thorough understanding of the process of reasoning, as we won’t be able to devise Problem-Solving strategies without it.

In the first post of this series I discussed the possibility that natural intelligence might be a kind of algorithm and what this could mean for the design of artificial general intelligence. You can find that post here:
https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm

In the second post of this series I discussed cognition, (pattern) recognition, memory, abstraction, analysis, understanding and information retrieval as essential parts of the intelligence algorithm. You can find that post here:
https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm-part-2

In this essay I will first summarise what “reasoning” is and how it functions. Then I will try to show that it has an algorithmic nature and is in fact one of the integrated tools of intelligence. As part of this argument I will also touch upon the topics of rhetoric, causation, reality and truth.

Reasoning is often defined as thinking in a logical way to come to a judgment or conclusion. Logic itself is the set of rules we apply in this thinking process: it is the instrument used, but it is not identical to the thought process called “reasoning”. The steps by which reasoning moves from a premise (an assumption, also called a proposition, which is believed to be true) to a conclusion are called “inference”.

Traditionally there are three types of inference:
Deduction, induction and abduction.

Deduction is the process in which a specific instance is compared with a general rule for a class of items assumed to be true. If the specific instance belongs to this class, it will also follow the rule.

E.g.
All men are mortal (general rule for class)
Socrates is a man (specific instance of class)
Hence Socrates is mortal (conclusion)

Induction is the process in which multiple instances appear to follow a general pattern, from which it is then predicted that a further such (future) instance will follow the same pattern.

E.g.
Until now the Sun has always risen to start the day (general pattern)
It is likely there will be a Tomorrow (specific future instance)
I predict tomorrow the Sun will rise to start the day (predicted conclusion)
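
The sunrise example can even be quantified. Laplace’s classic “rule of succession”, a simple Bayesian result, gives the probability that the next instance follows the pattern after n confirming observations. The sketch below (with an illustrative count of observed sunrises) shows why induction yields high probability but never certainty:

```python
# Laplace's rule of succession: after s successes in n trials,
# the probability that the next trial also succeeds is (s + 1) / (n + 2).
# The number of observed sunrises below is purely illustrative.

def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior probability of one more success, assuming a uniform prior."""
    return (successes + 1) / (trials + 2)

# Suppose we have recorded 10,000 consecutive sunrises:
p_sunrise = rule_of_succession(10_000, 10_000)
print(f"P(sun rises tomorrow) = {p_sunrise:.5f}")  # 0.99990: close to, but never exactly, 1
```

However many sunrises we pile up, the formula never reaches 1, which is precisely the point made above: induction gives strong probability, not certainty.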

Abduction is the process in which, for a specific instance (e.g. an observation, an effect), a reason or cause is speculated which is known to give that result, while other factors could also yield that result. It is, in other words, a form of guessing.

E.g.
The lawn is wet (specific instance which might or might not belong to a class)
When it rains, the lawn is wet (general rule for class)
Hence it has rained (conclusion)

Abduction is also called a logical fallacy (formally, affirming the consequent), because the conclusion you arrive at need not be true; in the example given the lawn could also have been wetted by sprinklers.
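
The difference between deduction and abduction can be caricatured in a few lines of code; the rules below are invented for illustration:

```python
# A minimal sketch (with hypothetical rules) contrasting deduction and abduction.
# Rules are (cause, effect) pairs: deduction runs cause -> effect,
# abduction guesses backwards from an effect to *candidate* causes.

RULES = [
    ("it rains", "the lawn is wet"),
    ("the sprinkler runs", "the lawn is wet"),
]

def deduce(cause: str) -> list[str]:
    """Forward inference: certain, given the rules."""
    return [effect for c, effect in RULES if c == cause]

def abduce(effect: str) -> list[str]:
    """Backward guess: every cause that *could* explain the effect."""
    return [cause for cause, e in RULES if e == effect]

print(deduce("it rains"))         # ['the lawn is wet']
print(abduce("the lawn is wet"))  # ['it rains', 'the sprinkler runs']
```

Note that `abduce` returns more than one candidate: picking just one of them is exactly the guess that makes abduction fallible.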

Both induction and abduction are uncertain ways to come to knowledge. Deduction is said to be the only certain process. But there is a snag here: The very premises of deduction (when they relate to physical phenomena) have been obtained by induction. In fact, we only assume that all men are mortal because until now we haven’t seen an immortal one. Deduction works if the premises themselves are mental constructs the truth of which is asserted by definition, but that does not give us any certainty about the physical truth of such statements.
Induction and abduction do, however, give us a strong probability that our conclusions will correspond to what can be observed.

There is also reasoning by analogy, which can likewise be fallacious. Because a specific instance belongs to a general class, it is concluded that this specific instance has all the features of another instance of that class. This can lead to aberrant nonsense, as illustrated hereunder (the classic fallacy of the undistributed middle):
A man is a human
Beyoncé is human
Beyoncé is a man

Oops...

A complete list of logical fallacies can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_fallacies
It is not my purpose in this article to treat each one of these in detail. But if you wish to increase your intelligence I recommend having a look at them. It will improve your understanding of the world and the people around you, and you will be able to interact with them in a more intelligent manner. You will be able to avoid wrong conclusions that do not bring you nearer the purpose of your intelligence, which -as I said before- is to achieve complex goals.

Logical fallacies often do not respect the principle that general rules or patterns must be grounded in a sufficient number of independent observations (i.e. by different people). A conclusion is only plausible if it is probable, and it is only probable if it rests on a statistically relevant, sufficiently large base of grounding observations.
Logical fallacies also arise from insufficient knowledge about the general, the specific and classification schemes.

One of the logical fallacies I do want to mention, because of its significance in science, is the correlative fallacy described by the Latin proverb “post hoc, ergo propter hoc” (after this, therefore caused by this): if B comes after A, you conclude that A caused B. But we have plenty of examples in which correlation does not imply causation.
For instance, when people eat more ice-creams, there are more shark attacks. If you then conclude that eating ice-creams causes shark attacks, you commit the above-mentioned fallacy.

Often correlated phenomena have a common underlying cause. In the case above it was a warm day, which makes more people swim in the sea, which in turn attracts more sharks. The warm day also makes people eat more ice-creams.
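
This common-cause structure is easy to demonstrate with a small simulation (all numbers are invented): temperature drives both variables, yet the two end up strongly correlated with each other even though neither appears in the other’s formula:

```python
# Simulating the shark-attack example: temperature (the confounder) drives
# both ice-cream sales and shark attacks, producing correlation without
# direct causation. Coefficients are made up for illustration.
import random

random.seed(0)
temps = [random.uniform(10, 35) for _ in range(1000)]            # daily temperature
ice_creams = [t * 2 + random.gauss(0, 3) for t in temps]         # driven by heat
shark_attacks = [t * 0.1 + random.gauss(0, 0.5) for t in temps]  # driven by heat

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Strong positive correlation, despite no causal link between the two:
print(f"corr(ice creams, shark attacks) = {corr(ice_creams, shark_attacks):.2f}")
```

Conditioning on the confounder (comparing days of equal temperature) would make the spurious correlation largely disappear, which is what the controlled experiments described below exploit.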

How can we then assess what the cause of a phenomenon is? Scientists change one parameter while keeping the others constant. If a change in parameter A systematically results in a more or less proportional change in effect B, they usually conclude that A causes B. This may be the case, and for the sake of practicality it is useful to assume it is, but it is not necessarily so.

When we reason on the basis of cause and effect, we are still stuck with a mechanistic type of thinking that belongs to the seventeenth century: the universe of Newton, Huygens, Copernicus etc., in which the celestial bodies move with clockwork precision, everything moving as if driven by a plethora of cogwheels.

Quantum mechanics shows us that at the quantum level many of the deterministic premises do not hold. Since we are nothing but aggregates of quantum processes, how can we be so sure that cause and effect really exist as we believe they do? I will come back to this issue later in this essay.

The branch of artificial intelligence (AI) that tried to work with parsing, specific rules and logical inferences has not been the most successful one. It only leads to algorithms that can be applied in very specific contexts. These are certainly useful in those contexts, but they will not by themselves evolve towards artificial general intelligence (AGI), which can operate independently of context, let alone towards a human level of intelligence. The more successful branch of AI, which works with Bayesian networks and probabilistic inference, is based much more on correlations than on cogwheel-type cause-effect relations.
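
The contrast with cogwheel-style inference can be made concrete with a toy Bayesian calculation on the wet-lawn example from earlier. All priors and likelihoods below are invented for illustration; Bayes’ rule turns the abductive guess “it has rained” into a graded probability:

```python
# Probabilistic inference on the wet-lawn example. Instead of asserting
# "it has rained", we compute how probable rain is given a wet lawn.
# Every number below is an illustrative assumption.

p_rain = 0.2       # prior: it rains on 20% of days (assumed)
p_sprinkler = 0.3  # prior: the sprinkler runs on 30% of days (assumed)

# P(lawn wet | rain, sprinkler) for each combination of causes (assumed)
p_wet_given = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.00,
}

p_wet = 0.0           # total probability the lawn is wet
p_rain_and_wet = 0.0  # joint probability of rain and a wet lawn
for rain in (True, False):
    for sprinkler in (True, False):
        p_causes = (p_rain if rain else 1 - p_rain) * (
            p_sprinkler if sprinkler else 1 - p_sprinkler)
        pw = p_causes * p_wet_given[(rain, sprinkler)]
        p_wet += pw
        if rain:
            p_rain_and_wet += pw

# Bayes' rule: P(rain | wet) = P(rain and wet) / P(wet)
print(f"P(rain | lawn wet) = {p_rain_and_wet / p_wet:.2f}")  # prints 0.49
```

A full Bayesian network does the same computation over many variables at once; the principle, weighing competing explanations by their probabilities instead of committing to one, is identical.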

A good example thereof is latent semantic analysis, which is used in IBM's DeepQA engine Watson, known from the popular program "Jeopardy!" in which people compete with a computer at answering questions. Latent semantic analysis is based on the Bayesian proximity co-occurrence of terminologies: if terminologies occur together in a statistically relevant way, they belong together, and together they provide context and meaning.

In fact this is possibly also the way the brain builds ontologies when it is abstracting patterns of features and relations. Every ontology is said to be at least a dyad: you need at least two terminologies to arrive at a relation which provides meaning; a concept may even require three terminologies. Interestingly, when items are connected in the brain, when there is a neuronal association between stored concepts, thinking of one of them will automatically trigger thinking of the associated concept, according to the well-known neuroscience adage: neurons that fire together, wire together.

Could it be that our brains function much more like a program based on latent semantic analysis? That the logical inferences are only made after an association is detected, as a kind of proof-reading mechanism which verifies whether the correlation is useful, and in what way?

I already mentioned that we have no certainty that cause and effect really exist as we believe they do. Quantum mechanics seems to reveal that the arrow of time can work in two directions, and not necessarily only in the one direction we observe, as has become evident from Wheeler's delayed-choice variant of the double-slit experiment. This raises questions about the possibility of retrocausation: present events being caused by events in the future. But there is also another explanation, which puts causality at a deeper level of reality. Modern physics is venturing more and more into the field of digital physics, with the Dutch physicist Verlinde among those who consider the universe a kind of quantum computational substrate. In this model everything is information. Gravity and entropy are not the direct consequences of the movements of corpuscular bodies, but rather the consequence of information-processing laws: algorithms at the deeper level of the computational substrate. Entropic gravitation results in the proximity co-occurrence of corpuscular bodies such as planets, which maximises the ability to dissipate energy and increase entropy.

If this is true, then perhaps there is no direct causation at the macroscopic level we see; the correlations we observe as causally linked would be the consequence of causation by algorithms functioning in the quantum computational substrate of the quantum vacuum. In fact this would imply that the whole universe is some kind of computer which uses a principle similar to the tendency towards proximity co-occurrence of latent semantic analysis. The universe could then perhaps also be some kind of mind. I am aware that this is thin ice and pure speculation, full of the very logical fallacies I warned about, but I merely ask you to consider this possibility as an alternative explanation of causation.

The Latin word for reason is “ratio”, which is probably not by coincidence linked to the English word ratio, referring to a quantitative relation between two amounts. The Greek word “logos”, from which the word logic is derived, also means “reckoning”. These etymological sources too point to a link between reasoning and (numerical or informational) reckoning.

Nevertheless, for the purpose of reasoning and dealing with the world around us, assuming the principle of causation at the macroscopic level is vital. We can only bring complexity about if our actions follow predictable patterns. So let us keep the metaphysical notions about causation in the back of our minds and, for practical reasons, put causal inference to our advantage.

The title of this series has been “Is intelligence an algorithm?” If reasoning is part of intelligence, it must also be part of this algorithm. But I have shown that rule-based reasoning in AI seems to be unsuitable for context-independent approaches. How can our natural intelligence then use reasoning as an algorithmic process which is context-independent?

To understand this we must abstract the common features of the different modes of logic. What they have in common is that they all compare specific instances with generalised rules. So upon encountering a specific instance of an item, the intelligence algorithm will seek in its database whether there is an ontological class it belongs to. It will compare the specific with the general using the rules of one of the modes of inference (deduction, induction etc.) and, if a fit is achieved, recognise the item. As the item resonates with the general ontological structure of links and categories of the class, an association will be formed, not only metaphorically in the mind, but also literally at the level of the neurons, which will fire together and hence wire together.
If the item is new it will look for similar items and build an ontology on the basis thereof and on the basis of the relations the new item has with existing ontologies: thus it will cognise.

This part of intelligence is essentially a comparison engine, which can discriminate between items, classify them and draw conclusions if certain conditions are fulfilled (reminiscent of the often-used “if...then...else” statements in computer algorithms). I'm telling you nothing new; in ancient India these aspects were already described in great detail, for instance in the “Yoga sutras” of Patanjali, which is an excellent treatise not only on the mental aspects of yoga but also on the workings of the mind. The intellect was called “Buddhi”, the ability to discriminate between items was called “Viveka” and inference was called “Anumana”.
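
The comparison engine described above can be caricatured in a few lines; the class names and features below are of course invented for illustration:

```python
# A minimal sketch of the "comparison engine": a specific item is matched
# against stored class ontologies (feature sets). If its features satisfy a
# known class it is recognised; otherwise a new class is created (cognised).

ontology = {
    "bird": {"has_feathers", "lays_eggs"},
    "mammal": {"has_fur", "gives_milk"},
}

def recognise(features: set[str]) -> str:
    # Deduction-like step: does the instance satisfy a known class rule?
    for cls, required in ontology.items():
        if required <= features:  # all required features are present
            return cls
    # No fit found: store the new pattern as its own class
    new_cls = f"class_{len(ontology)}"
    ontology[new_cls] = set(features)
    return new_cls

print(recognise({"has_feathers", "lays_eggs", "can_fly"}))  # 'bird'
print(recognise({"has_scales", "cold_blooded"}))            # 'class_2' (newly created)
```

The subset test `required <= features` is exactly the "specific instance compared with general rule" step, and the fallback branch is the cognising of a new ontology mentioned earlier.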

The rules of logic when comparing specific with general are typically used to predict the outcome of future events and are thus mostly essential for planning, heuristics and problem solving. The specific strategies thereof will be the topic of a further essay in this series.

Reasoning is also the most important part of rhetoric, the art of discourse in which you try to convince, to persuade your audience of your point of view. In a rhetorical argument you can often start by giving specific examples of a general principle you wish to illustrate, or conversely you start by making a generalised assertion, which you then support by exemplifying it with specific instances. It is clear that here you are using logic, most often induction and deduction. You are grounding an observation, as Ben Goertzel tries to do with his OpenCog and Novamente projects for the development of AGI (artificial general intelligence).

But there is more to rhetoric than logic alone: rhetoric also appeals to psychological aspects; it appeals to your beliefs and morality, or to your emotional propensity. Here we enter a more difficult area, which I will treat in more detail in a further essay in this series. This area is more difficult because it diverges from traditional intelligence, which is based on pattern recognition and logic. It touches upon intuition, which, as I will try to illustrate, is possibly a kind of hidden heuristic (a practical approach to problem-solving without a guarantee of success, e.g. an educated guess).
In rhetoric based on emotions, the persuading orator can make an appeal to fear, which may block your more objective, logical way of making sense of things.

The orator will try to seduce you into falling into the trap of logical fallacies, and will come with evidence and facts which may be true in a given situation, but not in all situations. On the basis of cherry-picked, statistically insufficient information he will try to make you apply your logic. Taken off guard by an emotional distraction, you may not apply your usual standard. And you may do the same when you try to convince someone else of your point of view. Perhaps with this article I am doing this to you. But at least, by dropping my mask, I now give you the opportunity to seek truth for yourself.

This brings us to the issue of “truth”. Truth, as we experience it, is a relative concept. For each event, different beholders have a different narrative, which is often blurred by interpretations and coloured by beliefs and emotions. The same event can be told from very different perspectives, which at first glance appear contradictory and even mutually exclusive, but which in the end, from a higher perspective, can be transcended as relating to different parts of the same entity or process:
This is perhaps best illustrated by the Indian parable of the elephant: several blind men touch an elephant to learn what it is. One touches the tail and concludes that it is a broom, another touches a leg and concludes it is a pillar, a third touches a tusk and concludes it is a horn, etc. Whereas from their own perspective none of them is really wrong, from the higher, all-inclusive perspective they are all wrong to a certain extent and right to another extent. The dichotomies are resolved by a higher-dimensional entity and perspective, the elephant, which transcends but does not exclude the partial perspectives.

Hence the famous quote by Nagarjuna:
“Anything is either true,
Or not true,
Or both true and not true,
Or neither true nor not true;
This is the Buddha's teaching”.

As R.A. Wilson stated: “What the thinker thinks, the prover will prove”. In other words, you will always find proof and evidence to support your beliefs. This means that if you really go to the bottom of this rabbit-hole, you cannot believe anything, because nothing is really certain. Hence Terence McKenna's famous quote: “Belief is a toxic and dangerous attitude toward reality. After all, if it's there it doesn't require your belief, and if it's not there why should you believe in it?”

In addition we must realise that what we believe to be “reality” is a “virtual representation” of reality cooked up by our brains. Since our brains filter out a massive amount of information, and since different people have different filter capacities, how can we conclude that there is a common truth? There may be a kind of “consensus reality” which certain people agree upon because their observations correspond. But the results of quantum mechanics have made clear that there is no fully “objective reality” out there: your observation already changes the nature of what is observed, which is summarised in the famous adage: “When you change the way you look at things, the things you look at change.” So when it comes to truth, there are subjective truths and, at best, a consensus truth shared by a group of people. Add to this that we have all kinds of observational biases due to our personal and emotional backgrounds, that we have cultural and linguistic biases, and that certain words are ambiguous homonyms, and it is no wonder we often fall prey to misunderstanding each other.

We already saw that logic cannot give us a solid foundation for our beliefs, which does not mean we should discard reasoning: it is usually the only way we have to make sense of the world around us. But we must be vigilant not to jump too quickly to conclusions, or to cast away someone else's perspective; we probably haven't seen the whole picture. So we must adopt a cautious, pragmatic approach and replace our beliefs with probabilities and likelihoods. The more your intelligence increases, the less convinced you are of one specific point of view. Instead you will try to acquire the bird's-eye view which puts different perspectives in context. You will try to find a meta-perspective.

So if someone is really certain about his or her case, beware! You may not be talking to a very intelligent person.

I hope you will also read my next essays on the topics of problem-solving, planning and creativity.
If you don’t want to miss it, you can follow me. If you liked it, please upvote it and re-steem it. Comments are always welcome.
