RE: Can Machines Ever Have Beliefs?
Followed—love these high-level thought pieces.
I personally think the proposition that machines could ever learn in the same sense humans do is a nonstarter, let alone hold beliefs or feelings. Remember: programs are limited by their programmers, and the smartest minds in the world still don't know how to quantify the "human" experience, let alone program it.
There's also a definitional issue here. What's the difference between accepting a pre-programmed argument and actually believing something? The world isn't binary, nor is it objective, so distilling thought, let alone reproducing it, is far too complex for anything we'll see in our lifetime. We can't write programmable rules until we know which rules govern thought and belief in the first place.
And that doesn't even scratch the surface of "gut" feelings.
(Ever read Blink by Malcolm Gladwell? I suspect you'd enjoy it. The whole book is about knee-jerk intuition, and why it's often right, especially compared to slow, conscious deliberation.)
Keep writing—this is good stuff!
Thanks for reading! I agree that distilling thought is a lot harder than it first looks once we peel back the layers and see how deep the problem goes. I don't think we'll see any breakthroughs in the relevant neuroscience for at least 30 years, if not more.
I've been wanting to read some Gladwell. His books are always checked out of the library, though. Must be a popular guy.
Not sure the statement "programs are limited by their programmers" is true. Facebook's AI chatbots invented their own language without any human programming it. We dumb humans had to shut the experiment down because we couldn't understand it and didn't know what the bots were communicating to each other. Search it...the "rules" we believe in are changing without many of us realizing it.