Three Arguments Against Computers Being Conscious

in #philosophy · 7 years ago

We can imagine a world in which the following sentence is true: “My computer is conscious”. We might even consider it true today: that a computer is capable of human-like thoughts and experiences, can be self-aware, and can even learn a language like English from small amounts of data. But the word “consciousness” in this sense is not used correctly, given the nature of the word. This article proposes that, linguistically, the term is misapplied.


I.

Why strong AI will never have consciousness - a critique of Searle's "Minds, Brains and Programs"

In his article Minds, Brains and Programs, Searle argues that a computer program (strong AI) will never have understanding. He gives the example of the Chinese Room: the room will never understand Chinese because it only manipulates symbols (a very simplified explanation). In the article, he responds to a number of objections that can be raised against his thesis that a program will never understand. The fifth objection is:

"The other minds reply (Yale). How do you know that other people understand Chinese or anything else? Only by their behaviour. Now the computer can pass the behavioural tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."

Put simply: once we build a computer that simulates the brain, why can we not say it is conscious, understands, or has intentionality? Searle does not take his answer very far, and this is where I want to offer my critique of his arguments and of strong AI in general.

My thesis is that the problem will never be solved because it rests on a linguistic error. Take the example:
“The dolphin painted the Mona Lisa”
You know this is not possible: painting requires "hands" to hold a brush and "air" to dry the paint, and a dolphin lives under water. Thus we can say with relative ease that the claim that a dolphin painted the Mona Lisa is impossible, because it cannot be done. The words "painted/painting" cannot sit in the same sentence as "dolphin" as the "doer" of the painting, because the doer needs hands, the act needs to happen on land, and so on. (This critique does not go into the problems of defining what art is; it is a simplified example.)

Now the same can be said about the computer that simulates the brain or mind. It cannot be conscious, understand, or have intentionality, because all these words are defined by features not associated with a mechanical computer or its programs. It is not "such a big deal", but saying a computer is "conscious" is in a way like saying a dolphin can make a painting. The problem is not that a computer cannot be conscious or understand (it may well have intentionality one day), but that describing it in terms we reserve for humans gives computers anthropomorphic qualities.

My claim, then, is that we cannot apply anthropomorphic terms to computers; it is an "abuse" of language and creates the wrong connotations. My proposal is that new terms be coined for the revolutions in computer science. When computers first appeared, terms were created that are in everyday use today, so coining new terms for strong AI is not an impossible act. Anthropomorphizing computers seems odd in the age we are living in.

II.

What is it like to be a computer program programmed to be a brain - a critique of computers having consciousness

Thomas Nagel asked a troubling question in his article What is it like to be a bat?, namely: what is it like to be a bat? On its own the question may appear out of the ordinary, but compared with a different example the fundamental problem becomes clearer (e.g. the problem of color inversion):

“You and I both see green and red, but my green is your red and your red is my green. In our language we compensate for this problem, and our colors 'agree': we both stop at the red traffic light because we both know “red”, independently of whether you see green and I see red (or yellow, for argument's sake).”

Back to the bat: from our own experience we may think we know what it is like to be a bat. Nagel states it better:

“In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves (439).”

This is not conclusive evidence that bats (or other organisms) don’t have experiences as we do. Again, Nagel:

“It would be fine if someone were to develop concepts and a theory that enabled us to think about those things; but such an understanding may be permanently denied to us by the limits of our nature (440).”

Thus we can state clearly, following Nagel’s reasoning, that we may not know what it is like to be a bat, and we may never know, because our language and cognitive abilities do not allow it. This is not to say the experience is not there; it is just that we don’t know (a “known unknown”, perhaps).

“Reflection on what it is like to be a bat seems to lead us, therefore, to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language. We can be compelled to recognize the existence of such facts without being able to state or comprehend them (441).”

The argument, then, is the following (a formal sketch follows the list):

  1. We don’t know what it is like to be x (a bat, other humans, aliens, etc.).
  2. We won’t ever know, because our language/cognitive abilities don’t allow it (i.e. the quote by Nagel above).
  3. If we create a computer for the sole purpose of being a brain and experiencing the world, it will most likely experience the world (as a bat, an alien, etc. may experience it).
  4. We will not know what it is like to experience the world as a computer does.
    Therefore
  5. We cannot claim that a computer has, or does not have, conscious experience.
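
To make the premises explicit, here is a minimal formal sketch of this argument in Lean 4. All the names in it (Being, Experiences, KnowsWhatItIsLike, CanClaim) are labels I introduce for illustration, not Nagel's terminology, and the bridging premise, that claiming either side requires knowing what it is like, is only implicit in steps 4 and 5 above:

```lean
-- A sketch of the Section II argument. The predicate names are
-- hypothetical labels chosen for this illustration.
theorem no_claim_about_computer_experience
    (Being : Type)                       -- bats, humans, computers, ...
    (computer : Being)
    (Experiences : Being → Prop)         -- "x has conscious experience"
    (KnowsWhatItIsLike : Being → Prop)   -- "we know what it is like to be x"
    (CanClaim : Prop → Prop)             -- "we are in a position to claim p"
    -- Premises 1–2: our language/cognition bars such knowledge for any being.
    (never_knows : ∀ x, ¬ KnowsWhatItIsLike x)
    -- Bridging premise, implicit in 4–5: claiming that x does (or does not)
    -- have experience requires knowing what it is like to be x.
    (claim_pos : ∀ x, CanClaim (Experiences x) → KnowsWhatItIsLike x)
    (claim_neg : ∀ x, CanClaim (¬ Experiences x) → KnowsWhatItIsLike x) :
    -- Conclusion 5: we can claim neither that the computer has conscious
    -- experience nor that it lacks it.
    ¬ CanClaim (Experiences computer) ∧ ¬ CanClaim (¬ Experiences computer) :=
  ⟨fun h => never_knows computer (claim_pos computer h),
   fun h => never_knows computer (claim_neg computer h)⟩
```

The sketch makes visible that the conclusion only goes through if one grants the bridging premise; reject that premise and the argument stalls at step 4.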

III.

The word “consciousness” is limited to human beings - a third (and last) response on why computers cannot be conscious

Grice suggested the notion of implicatures: when a speaker says something to a hearer, the hearer must work out what the speaker implied in order to get the full meaning of the utterance, e.g.:
S: Are you going to drink some coffee with us?
H: I have a headache.

S then knows from H's reply that H will not go out for coffee. The interesting thing about implicatures is that some have become conventional: we use them so often that they acquire meaning not by virtue of the words themselves, but through the contextual environment in which the utterance is made.

The same holds for consciousness. When we say that animal X has consciousness, we do not mean that animal X has the same kind of consciousness that we have. (This can be contested, but it would be counterintuitive to say that a rat or a cow has the same kind of consciousness as humans do.) Let us say that humans have a different kind of consciousness from other animals/creatures because we can ask about our own consciousness. We won't know whether a cow, while grazing in the field, ever thinks about its own consciousness, because we cannot communicate with cows.

Now apply this to computers. We know, in a certain sense, that computers will get smarter. There are already computer programs that learn from very little data (Google's DeepMind). But will we ever know whether computers are conscious? We can in principle not say (with current knowledge/technology) that a cow is aware of itself. We can likewise not say that a computer will be aware of itself.

Where do implicatures fit in? Suppose that when we talk about animal X being conscious, we do not mean that animal X is conscious in the same way humans are. We mean that animal X is animal-conscious (e.g. dog-conscious, in the way other dogs are conscious). We don't talk like this because it would be laborious; thus the implicature is that we say one thing (animal X is conscious) but mean something else (animal X is animal-X-conscious).
The same can then be said about computers, strong AI, etc. They won't have the same consciousness as humans have (because we can in principle not know). (I know this is a weak point in the argument.) Thus we say a computer is conscious in a computer-like way, i.e. computer-conscious. It can be put in argument form (a formal sketch follows the list):

  1. Humans are human-conscious.
  2. Computers show consciousness.
  3. Computers cannot show human-consciousness.
  4. Therefore, computers must be computer-conscious.
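
As with the previous argument, this can be made explicit in a short Lean 4 sketch. The hidden premise, that consciousness comes in kinds, so that a conscious computer is either human-conscious or computer-conscious, is stated outright; the predicate names are again my own labels, not terms from the literature:

```lean
-- A sketch of the Section III argument. The "kinds" premise is the
-- implicature made explicit; the predicate names are hypothetical.
theorem computers_are_computer_conscious
    (Agent : Type)
    (computer : Agent)
    (Conscious HumanConscious ComputerConscious : Agent → Prop)
    -- Hidden premise: a conscious thing is conscious in some kind of way;
    -- for a computer, either the human way or its own computer way.
    (kinds : ∀ x, Conscious x → HumanConscious x ∨ ComputerConscious x)
    -- Premise 2: computers show consciousness.
    (p2 : Conscious computer)
    -- Premise 3: computers cannot show human-consciousness.
    (p3 : ¬ HumanConscious computer) :
    -- Conclusion 4: computers must be computer-conscious.
    ComputerConscious computer :=
  (kinds computer p2).resolve_left p3
```

The argument is formally valid; the philosophical work lies entirely in defending the "kinds" premise, which is exactly what the implicature discussion above tries to do.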

When we then talk about a computer being conscious, we don't mean it is conscious in human-like ways, but conscious in the way computers are conscious - and, as with cows and other animals, we in principle still don't know what that is like.


Would you agree with Nick Bostrom and Elon Musk's argument that, subject to certain assumptions, we are most likely living in a computer simulation (as in The Matrix)?

I wish I could remember where I read or watched it, but someone gave a beautiful argument for why it is impossible that we are living in a simulation like that. I cannot remember all the details, but the argument Musk makes rests on so many presuppositions that it is weakened. It also presupposes that we will reach a point where reality and simulation cannot be distinguished, while at this moment we cannot even fathom (in my opinion) what a simulation would feel like. And even if we are in a simulation and cannot know that we are, is that a good argument for believing you are in one? I don't know; I hope that answers your question?

Yes, the part I don't understand about the Bostrom/Musk argument is that the likelihood of being inside a simulation is supposed to be higher because there is only one reality but the number of possible simulations is infinite. Unless I've misunderstood something. I have not read the Bostrom paper carefully, though:
https://www.simulation-argument.com/
I guess I need to spend a week reading the literature and thinking hard about it.

Yes, that's a good way to do it. But ultimately I think we won't know. And in a way, what does it matter?
