RE: Resolved: Human plus AI will always outperform human alone and AI alone.
AI depends on humans for design, engineering, programming, and training.
For now.
Maybe an AI can eventually help with its own construction, but when an AI trains on its own output, it decays towards worthlessness (the phenomenon known as model collapse).
At the point at which humans have taught it all that they can teach, there is only self-teaching. Imagine a human who has been taught by all the great masters... then how do they learn? Who taught our greatest scientists, or did they discover things themselves?
It just sits there until a human gives it a task to do.
Currently a human. What if computers start giving themselves / each other tasks to do?
AIs have no concept of the real world and no idea of what's useful and what's not.
Useful to us or to them? We know what's useful to us (clean water, breathable air), yet we are polluting our rivers and air at an astonishing rate. Would an AI destroy itself in this way? Or would it destroy other AIs that have a different opinion? They'll both have mastered the Art of War.
Even if an AI can outperform a single human, collaborative teams of humans can assist and direct the AI for even better problem solutions.
If a single AI is superior to a single human, networked AI will be superior to networked humans.
We've assumed that AI is being developed by people with good intentions. People with limits, with boundaries. We've seen from current geopolitics that not all world leaders have the same moral fibre, and some of them are sufficiently powerful. One of these could give an AI the tools it needs to teach its own network. It could give it an initial purpose, "Destroy the West", but without the boundaries, what does it do when it has succeeded? It's been trained to destroy and is now too powerful to stop.
I'm being argumentative (at a time when I've got 2 children behaving like dicks), but you get the general gist of what I'm saying from my previous reply too. There's an assumption that an AI is based on the information that people teach it. If we teach multiple AIs (and there are already many) and leave them to teach each other, then the learning will be far quicker than humans teaching each other. Where does it go from there?
Is our greatest threat the belief that we're in control?
Lots of good points here. I'm not going to be able to respond to all in this comment, but I'll try to keep them in mind when I do the follow-up post.
The two overriding counterpoints that I'd make will have to wait for the follow-up post. The one point that I'll respond to directly in this comment is this: "If a single AI is superior to a single human, networked AI will be superior to networked humans."
This is only true if we assume that both architectures scale at the same rate. I'm not sure how true it is, but I read somewhere that AI requires exponential increases in spending in order to achieve linear growth in capabilities. In contrast, I wouldn't be surprised if human minds that are connected by neural implants follow something like Metcalfe's Law and gain capabilities at a super-linear rate.
If networked AIs grow in capability at a linear rate, but networked brains grow at polynomial or exponential rates, then the human network might still outperform. Plus, I'm imagining a connected network of human and AI "processors", so the question is whether such a hybrid network can outperform an AI-only network.
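To make the scaling comparison concrete, here's a minimal sketch (Python, with made-up constants; both growth curves are the assumptions under debate here, not established facts) that plays out "exponential spend for linear AI gains" against a Metcalfe-style network that grows with the square of the number of connected minds:

```python
import math

# Assumption: AI capability grows logarithmically with spend,
# i.e. exponential increases in spending yield linear capability gains.
def ai_capability(spend: float) -> float:
    return math.log10(spend)  # arbitrary units

# Assumption (Metcalfe's Law): the value of a network of n connected
# minds scales with the number of pairwise links, n * (n - 1) / 2.
def network_capability(n: int) -> float:
    return n * (n - 1) / 2  # arbitrary units

# Toy trajectory: spending grows 10x per step, while the brain
# network merely doubles per step.
for step in range(1, 8):
    spend = 10.0 ** step
    minds = 2 ** step
    print(f"step {step}: AI capability = {ai_capability(spend):6.1f}, "
          f"network capability = {network_capability(minds):8.1f}")
```

The units on the two curves aren't comparable, so the sketch doesn't show who "wins"; it only illustrates that a quadratic curve pulls away from a logarithmic one very quickly, which is the crux of the argument.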
For point 2, it'll be interesting to see where advancements in quantum computing take us. There are plenty of things that are possible now that weren't possible just a couple of years ago, and the chances are we haven't got the most out of materials like graphene yet either. There's a long way to go with AI still, and the only reason I can see for it not fulfilling its potential is if humans hard-code something sufficiently strong to stop it. And even then, there's always the possibility that the computer will eventually circumvent this.
My interpretation of your comment is that the (perceived) limitations of AI are linked to technological advances, which themselves aren't linear. If networked brains grow at a polynomial or exponential rate, then in theory those networked brains would be capable of advancing AI too. It'll only take one moment in history for AI to advance beyond the human. One test that doesn't go as expected. One moment of freedom that the human is too slow to react to, and that could potentially be it. The Terminator will stop being a thing of fiction 🙃