You are viewing a single comment's thread from:
RE: Can Machines Ever Have Beliefs?
I'm familiar with both Searle's and Chalmers's arguments. You are correct that assumptions have to be made for this analogy to work. I have no reason to believe it is actually the case, but if consciousness could be reduced to a mechanical and procedural process, then one might take this approach. In all honesty, I don't see us solving the hard problem of consciousness in our lifetimes, if ever.
You may be right about the 'hard problem'; certainly new thinking is required.
Personally, I do not believe that consciousness will turn out to be reducible to a mechanical/procedural process, which is why I don't think "thinking computers" are possible. I find the term "artificial intelligence" misleading; I prefer "machine learning" as a more representative term for the work done in the field.