You are viewing a single comment's thread from:

RE: Can Machines Ever Have Beliefs?

in #philosophy · 7 years ago

Interesting article, but I suspect you are way off the mark. It is tempting to draw analogies between brain/consciousness & computer/program, but the idea of conscious computers simply does not stack up. Without consciousness you cannot have belief. While science & philosophy do not yet understand what consciousness is, we can draw some conclusions about what consciousness is not. If you are not familiar with John Searle's Chinese Room Argument, I suggest you have a look. The argument makes it clear that there is far more to "understanding" than just processing information (which is what computers do). Information is processed when we think, but thinking is not the same thing as information processing. The Hard Problem of consciousness (as defined by philosopher David Chalmers) still eludes science and understanding. Until we have even a basic grasp of what consciousness actually is, we should not expect our machines to start doing our thinking for us.


I'm familiar with both Searle's and Chalmers's arguments. You are correct that assumptions have to be made to draw this analogy. I have no reason to believe that this is actually the case, but if consciousness could be reduced to a mechanical and procedural process, then one might take this approach. In all honesty, I don't see us solving the hard problem of consciousness in our lifetimes, if ever.

You may be right about the 'hard problem'; certainly new thinking is required.
Personally, I do not believe that consciousness will turn out to be reducible to a mechanical/procedural process, which is why I don't think "thinking computers" are possible. I think the term "artificial intelligence" is misleading; I prefer "machine learning" as a more representative term for the work done in the field.
