Can Machines Ever Have Beliefs?

in #philosophy · 7 years ago

In philosophical terms, knowledge is defined as true, justified belief. Humans are capable of developing such beliefs, but are machines capable of this same mental leap? A belief is hard to define. Essentially it is a feeling or confidence that something or someone is a certain way. Can machines have feelings? What is a feeling? How can we define these feelings in concrete code? There are so many questions around the limitations of artificial intelligence.


Even if I program a robot to think, can it believe anything?

The Current State of AI

At the current moment, artificial intelligence is built upon mathematical principles. A neural network is simply a web of nodes connected by weighted edges, with each node taking on its own level of activation. There is no conception of thought here. Other models rely on optimization techniques that seek to reduce the error the model makes.

The whole foundation of modern machine learning deals with avoiding errors, which is done by minimizing the loss of the model. However, this mathematical behavior has little correlation with the human brain. The brain is composed of neurons, but these are far more complex than the nodes we model in neural networks, and we still do not understand how neurons interact with one another to produce different thoughts and behaviors. We only know that they react and fire, and that behavior results.
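To make the "minimizing loss" idea concrete, here is a deliberately tiny sketch (not any real library, and with made-up numbers) of a single weighted node being trained by gradient descent:

```python
# Purely illustrative: one "neuron" with one weight, trained by gradient
# descent to minimize squared error. Real networks have millions of weights,
# but the principle is the same: adjust numbers until the loss shrinks.

def predict(w, x):
    return w * x                      # weighted input, nothing more

def loss(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def train(data, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (predict(w, x) - y) * x for x, y in data) / len(data)
        w -= lr * grad                # step "downhill" to reduce the loss
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # the hidden rule is y = 2x
w = train(data)
print(w, loss(w, data))               # w approaches 2.0, loss approaches 0
```

That is the entire "intelligence" at work: nudge a number downhill until the error is small. Nothing in it resembles a thought.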

Programming Thought

For those who do not program, a program is simply a procedural list of instructions that tells a machine to do something. That something could be performing simple mathematical operations and storing information in different parts of memory. There are also instructions that make conditional decisions based on the values currently stored. At a higher level, a program allows a machine to produce a specific behavior, which may vary depending on the input received at different times. The whole process is strictly deterministic: a machine always follows its instructions, and any undesired behavior is the result of incorrect instructions.
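As a toy illustration of that determinism (the thermostat scenario below is invented for this sketch), a program stores values, branches on conditions, and always gives the same answer for the same input:

```python
# Toy deterministic program: store a value, branch on conditions.
# Given the same input it always takes the same path and gives the same output.

def thermostat(temperature_c):
    setpoint = 21.0                          # information held in memory
    if temperature_c < setpoint - 1:         # conditional decision
        return "heat on"
    elif temperature_c > setpoint + 1:
        return "heat off"
    return "hold"

print(thermostat(18.0))   # always "heat on"
print(thermostat(22.5))   # always "heat off"
```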

Assuming that human thought can be reduced to specific behavior completely dependent on our hardwiring and outside input, we still have trouble decomposing thought to the point where we can see it as procedural instructions. Although we can develop programs that rewrite themselves and produce novel behaviors, they are always cemented in these rules. The issue is whether we can ever discover such rules in human behavior and then translate them into programmable instructions.
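A rough sketch of that point (the rule names and numbers here are made up): even a program that rewrites its own rules does so only according to a fixed meta-rule its programmer chose.

```python
# Sketch: a program that "rewrites itself" by adjusting its own rule table,
# yet every rewrite follows a fixed meta-rule the programmer wrote.

rules = {"threshold": 0.5}

def decide(signal):
    return "act" if signal > rules["threshold"] else "wait"

def rewrite_rules(feedback):
    # the fixed meta-rule: raise the threshold when told we acted too eagerly
    rules["threshold"] += 0.1 if feedback == "too eager" else -0.1

print(decide(0.6))          # "act"
rewrite_rules("too eager")  # the program changes its own rule...
print(decide(0.6))          # ...and now answers "wait", but only because
                            # the meta-rule said so
```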

The Feeling Versus The Decision

Perhaps the hardest problem with the conception of thinking robots is the idea of decision making. It doesn't seem like each decision is hardwired into us. But are we really deliberating? Or are we actually evaluating a series of simple conditional statements? Machines are stuck going down the latter path, but we don't know if that path is how human thought actually occurs. When I trust a gut feeling, am I just receiving input and taking that input and processing it through several conditional statements programmed in the network of billions of neurons? Or is there something more?

Feelings appear to be instinctual urges to move in one direction or another, though we can sometimes override them with rational thought. Viewed in terms of procedural instructions, both can be treated as variables: perhaps the rational-thought variable outweighs the instinctual-feeling variable at some moments and not others, depending on the input. Yet these two things don't feel the same; they feel completely different.
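If we take the "two variables" framing literally, the caricature looks something like this (the names and weights are invented purely for illustration):

```python
# Caricature of the "two variables" view: instinct and reason are just numbers,
# and whichever is larger at the moment wins the decision.

def choose(instinctual_pull, rational_score):
    if rational_score > instinctual_pull:
        return "follow the reasoning"
    return "follow the gut feeling"

# the same person, different inputs, different winner
print(choose(instinctual_pull=0.8, rational_score=0.3))  # follow the gut feeling
print(choose(instinctual_pull=0.2, rational_score=0.9))  # follow the reasoning
```

Whatever deliberation feels like from the inside, it certainly doesn't feel like this comparison.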

Perhaps evolution decided that deliberation was useful, and that conflicting variables called emotions and reason were also useful. But can we design a machine to behave in this way, or did nature get extraordinarily lucky when generating human beings? Can a machine even behave this way at all? I don't know. Maybe we'll never know.

TLDR: Is it more than a feeling when I hear that old song they used to play?



Good post, thx my friend

A nice, thought-provoking post! I believe we are machines too, albeit biological ones. Every neural activity is nothing but a computation; neuroscience has proved as much. I firmly believe that one day a fast enough computer will be able to function in the same way as our brains do, complete with feelings and consciousness.

When I trust a gut feeling, am I just receiving input and taking that input and processing it through several conditional statements programmed in the network of billions of neurons? Or is there something more?

This is an interesting question. It leads to a second question, about AI, which is: "if a computer can receive a question like a human, and give an answer like a human, is it any different from a human?" or in other words "is a perfect simulation different from the real thing"?

This prompted the Chinese Room argument. As far as I know, the jury is still out on that one.

Fascinating stuff really, thanks for your blog post. AI and all the problems and questions around it keep amazing me.

The Chinese Room thought experiment actually first got me interested in AI. I also find the issues surrounding consciousness and its origins to be just as fascinating as well.

Followed—love these high-level thought pieces.

I personally believe that the proposition that we can figure out how to make machines learn in the same sense that humans do is a nonstarter, let alone have them hold beliefs or feelings. Remember: programs are limited by their programmers, and the smartest minds in the world still don't know how to quantify the "human" experience, let alone program it.

There's also a definitional issue here. What's the difference between accepting a pre-programmed argument and believing something? The world isn't binary, nor is it objective, so distilling thought, let alone reproducing it, is far too complex for anything we'll see in our lifetime. We can't make programmable rules until we know which rules govern thought and belief itself.

And that doesn't even skim a "gut" feeling.

(Ever read Blink by Malcolm Gladwell? I suspect you'd enjoy it. The whole book is about the process of knee-jerk intuition—and why it's often right, especially when compared to conscious thought processing.)

Keep writing—this is good stuff!

Thanks for reading! I agree that distilling thought down is a lot harder when we peel back the layers and see how deep the problem is. I don't think we'll see any breakthroughs in this area of neuroscience for at least 30 years, if not more.

I been wanting to read me some Gladwell. The guy is always checked out of the library though. Must be a popular guy.

Not sure if the statement "programs are limited by their programmers" is true. Facebook's AI computers created their own language without programming from humans. Us dumb humans had to shut it down because we couldn't understand it. Didn't know what they were communicating to each other. Search it...the "rules" we believe are changing without many of us realizing it.

Assuming robots have a conscious experience of any kind, it's doubtful they will have the same ones we do.

That also likely means they're going to have experiences we haven't yet named.

Then one day when the robots are sitting around chatting, they'll wonder if humans ever experienced all the nuanced things they do.

Interesting thought. We also might wonder if their wonder is the same as ours. Maybe they're faking it, or maybe they are having legitimate thoughts. They may be able to function in certain ways much superior to us, while in other ways, they'll fall short.

I believe feelings are stimuli processed too quickly by the brain.
A machine will be able to process them and make a decision consciously, so I don't think there will be any room for a "gut" feeling. Great post.

Machines probably won't have gut feelings, but they do appear to serve some purpose (whatever that is). Maybe they'll mimic deliberation by "pausing" and taking in more input.

There is a research paper titled "Logical Induction" that suggests the following, which I think partly answers your question. But please correct me if I am wrong; I am keen to hear what you think about these conclusions:
"(1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference."

The paper looks interesting; I plan on giving it a closer look. Their methodology appears to rely on making decisions based on internal probabilities, and humans perform similar behaviors. While this lacks any conception of feeling-based decisions, it does look like a good framework for making rational decisions. However, the philosophical conception of belief and the mathematical conception of belief might not be the same thing.

Interesting article, but I suspect you are way off the mark. It is tempting to draw analogies between the brain/consciousness and computer/program, but the idea of conscious computers simply does not stack up. Without consciousness you cannot have belief. While science and philosophy do not yet understand what consciousness is, we can draw some conclusions about what consciousness is not. If you are not familiar with John Searle's Chinese Room argument, I suggest you have a look. The argument makes it clear that there is far more to "understanding" than just processing information (which is what computers do). Information is processed when we think, but thinking is not the same thing as information processing. The Hard Problem of consciousness (as defined by the philosopher David Chalmers) still eludes science and understanding. Until we have even a basic grasp of what consciousness actually is, we should not expect our machines to start doing our thinking for us.

I'm familiar with both Searle's and Chalmers's arguments. You are correct that assumptions have to be made to make this analogy. I have no reason to believe that this is actually the case, but if consciousness could be reduced to a mechanical and procedural process, then one might take this approach. In all honesty, I don't see us solving the hard problem of consciousness in our lifetimes, if ever.

You may be right about the 'hard problem'; certainly new thinking is required.
Personally, I do not believe that consciousness will turn out to be reducible to a mechanical or procedural process, which is why I don't think "thinking computers" are possible. I think the term artificial intelligence is misleading; I prefer "machine learning" as a more representative term for the work done in the field.

With the rapid evolution of artificial intelligence, machines in the near future will not only be given a sense of feeling and decision making, they will also be programmed to learn and acquire knowledge through neural networks they create themselves.
Nice piece, followed ya
