
RE: Why I Don't Believe in the Singularity.

in #singularity · 8 years ago

> It would also need to be raised from infancy and educated. Even then, just one such AI wouldn't be able to do anything a comparably educated human can't. You'd need thousands of them, the artificial equivalent of the research communities responsible for building it in the first place. At that point, why not just do the same thing with genetically improved groups of humans?

Because it would be far easier to horizontally scale the AI (once it is built at production scale) than it is to scale humans. You don't need to retrain the AI with years of education; you just copy it onto new hardware (hardware, by the way, that will be manufactured quickly and autonomously thanks to machine learning and AI). Also, the AI doesn't need to die. Think about it: we invest 20+ years of education and training in a human being for roughly only 40 years of productive work. That is a lot of inefficiency that an immortal AI avoids (a toy version of this arithmetic is sketched below).

There are also other efficiency benefits compared to humans. Humans require a lot of resources just to survive compared to a machine, and survival is only the bare minimum. If you want a human to be productive, they need to be happy, and happiness requires many more resources and, most critically, time devoted to non-productive activities (leisure time, family time, proper amounts of sleep, etc.).

Finally, a lot of the inefficiency in groups of humans working together comes from coordination issues. There is considerable overhead in simply communicating information from the mind of one human to another, a necessary inefficiency because a single human cannot accomplish these ambitious tasks alone. But what if the communication between workers was as natural and as high-bandwidth / low-latency as the communication between your brain's left and right hemispheres? I think a collection of horizontally scaled AGIs (Artificial General Intelligences) could maintain high-fidelity communication with each other and act efficiently as one, such that they could vastly outperform a similarly sized group of human workers even if each AGI had only the same level of intelligence and thinking speed as a human.
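To make the training-overhead point concrete, here is a back-of-the-envelope sketch in Python. The 20-year and 40-year figures come from the paragraph above; the one-hour copy time for the AI is purely an illustrative assumption.

```python
# Toy comparison of the marginal cost of adding one more "worker":
# training a human from scratch vs. copying an already-trained AI.
# The human figures are from the comment above; the AI copy time
# is an illustrative assumption.

HUMAN_TRAINING_YEARS = 20      # education/training before productive work
HUMAN_PRODUCTIVE_YEARS = 40    # productive career length from the comment
AI_COPY_HOURS = 1              # assumed time to replicate weights to new hardware

def human_overhead_fraction():
    """Fraction of a human's total span spent in training rather than output."""
    total = HUMAN_TRAINING_YEARS + HUMAN_PRODUCTIVE_YEARS
    return HUMAN_TRAINING_YEARS / total

def ai_overhead_fraction(productive_years):
    """Same ratio for an AI whose 'training' (copying) is nearly free
    and which never retires, so productive_years can grow without bound."""
    copy_years = AI_COPY_HOURS / (24 * 365)
    return copy_years / (copy_years + productive_years)

print(f"human overhead: {human_overhead_fraction():.1%}")   # ~33%
for years in (40, 400):
    print(f"AI overhead over {years} productive years: {ai_overhead_fraction(years):.6%}")
```

The point of the ratio is simply that a human spends roughly a third of their span in training, while a copyable, non-retiring AI amortizes its "training" toward zero.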

Of course, the counterpoint to the efficiency argument is that, early on, the first AGIs will likely be incredibly computationally demanding relative to humans. Humans pull off their amazing intelligence and cognitive skills with just 3 pounds of matter consuming only 20 watts of power. That is incredible when compared to the size and power consumption of the modern supercomputers required to do tasks (much less effectively, by the way) that humans find trivial; the rough arithmetic is sketched below. But with continued technological development this is likely to eventually change, and it hopefully will become possible for machines to outperform human brains by all relevant metrics. Also, I'm not at all convinced that biological engineering can improve human brains enough to compete with the gains machines can achieve, given how much architectural flexibility is available when designing a machine system from scratch.
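The power gap is easy to put in rough numbers. The 20 W brain figure is from the paragraph above; the ~20 MW supercomputer figure is an assumption in the ballpark of modern exascale machines.

```python
# Back-of-the-envelope power comparison from the paragraph above.
# BRAIN_POWER_W is from the comment; SUPERCOMPUTER_POWER_W is an
# assumed figure roughly in the range of modern exascale machines.

BRAIN_POWER_W = 20
SUPERCOMPUTER_POWER_W = 20e6   # assumed ~20 MW

ratio = SUPERCOMPUTER_POWER_W / BRAIN_POWER_W
print(f"A supercomputer draws roughly {ratio:,.0f}x the power of a brain")
# -> A supercomputer draws roughly 1,000,000x the power of a brain
```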

> The second problem I have with it is the weird notion that once we achieve the same gross computing power as a human brain, computers may exhibit sentient thought as an emergent phenomenon.

I agree. That's not to say I think they couldn't be designed to be sentient. But I believe an AGI could be designed to avoid sentience (which would probably be the much easier task anyway) if its human designers chose to do so. And especially early on, it would make sense to avoid building a sentient AGI.

> Maybe the only hope for reproducing those qualities is to build an architecturally brain-like computer? Or to simulate a brain in software down to every individual neuron. This would require hardware many times more powerful than the brain you wish to simulate in order for it to think at full speed.

My guess is that the first successful AGIs will use an emulation-like approach at first (though likely with very crude mathematical models of neurons and synapses; a minimal sketch follows below), simply because it is less expensive to iterate on and try variations of the architecture. Such a system would likely only be useful for further AI research, since the cost of the computational resources and electricity needed to power it would outweigh the benefits it provides (a human would easily outperform it). But it could prove the concept of the architecture. The next step would likely be simplifying the architecture further (for performance, manufacturability, and ultimately cost reasons) while still preserving the desired emergent behavior of intelligence. Then, once the conceptual architecture is more or less settled, the next step would be realizing it efficiently in hardware. The von Neumann architecture would not be of any use here; the hardware would need to more closely resemble the brain: highly parallel, massively interconnected, and likely with memory kept close and local to a large number of simple processing units.
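As a minimal sketch of what a "very crude mathematical model of neurons and synapses" might look like, here is a toy leaky integrate-and-fire network in Python/NumPy. Every parameter (network size, time constant, connection sparsity, drive) is an illustrative assumption; real emulation work would use far richer models.

```python
import numpy as np

# Toy leaky integrate-and-fire network with random sparse synapses.
# All parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
N = 1000                         # number of model neurons
dt, tau = 1.0, 20.0              # timestep (ms) and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0     # spike threshold and reset potential

# Sparse random synaptic weights: most entries zero, a few weak connections.
weights = rng.normal(0, 0.05, size=(N, N)) * (rng.random((N, N)) < 0.02)

v = np.zeros(N)                  # membrane potentials
for step in range(100):
    spikes = v >= v_thresh                        # which neurons fire this step
    v[spikes] = v_reset                           # reset the neurons that fired
    external = rng.normal(0.05, 0.02, N)          # noisy external drive
    synaptic = weights @ spikes                   # input from this step's spikes
    v += dt / tau * (-v) + external + synaptic    # leaky integration

print(f"spikes in final step: {int(spikes.sum())}")
```

Note that the inner loop is embarrassingly parallel and dominated by the `weights @ spikes` step, which is exactly the kind of workload that favors the brain-like hardware described above over a von Neumann machine.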

> My third problem with it, by far the most commonly cited by others, is that not all technology advances exponentially.

Again, I agree (somewhat). It is too simplistic to extrapolate forward from Moore's law alone in order to predict when AGI will be created (the naive version of that extrapolation is sketched below). Far more technological innovation will be required to design the hardware architecture for practical AGI than simply doubling the number of transistors per unit area every two years or so (a trend which is reaching its limits anyway). That's not to say I think this is the only metric futurists look at when making their predictions, but I think they may be a bit too optimistic (likely because of a bias toward wanting to see these technologies before they die) about the rates of advancement of the other technologies that will likely be necessary to realize this future. For example, I imagine huge advances in materials science and engineering may be needed just to manufacture the highly interconnected, dense architectures resembling the human neocortex that will likely be necessary for a practical AGI.
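For reference, here is the naive extrapolation being criticized, sketched in Python. The 2016 baseline transistor count is an illustrative assumption; the doubling period is the usual Moore's-law figure.

```python
# Naive Moore's-law extrapolation: double the transistor count every
# two years and project forward. The baseline is an assumed count for
# a high-end chip around the time this comment was written.

BASELINE_YEAR = 2016
BASELINE_TRANSISTORS = 7e9     # assumed count for a high-end 2016 chip
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year):
    """Project the transistor count forward from the baseline year."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (2026, 2036, 2046):
    print(f"{year}: ~{projected_transistors(year):.1e} transistors")

# Note what the projection says nothing about: interconnect density,
# memory locality, and the materials advances that the paragraph above
# argues will actually gate practical AGI.
```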

> I don't think cybernetic transhumanism is likely to become prevalent

Personally, I think there will be much bigger advances in purely synthetic AGIs than in augmenting human cognition with machines via direct neural interfaces. I do think some of the latter will happen, but I don't think it will be anything like uploading one's mind to the internet, freeing one's mind from the fragility of one's body, or other radical ideas like that.


Some thoughtful points made here. Perhaps I should have said I don't believe in the singularity as Kurzweil and his followers usually describe it. As stipulated at the beginning, I think conscious machines are inevitable; I just don't think Kurzweil has the right answer for how they will come about.
