Zox's 'Incompatible' Beliefs

in #opinion · 7 years ago


(Image from bioethics)

(Author's note: This essay is based on some debates I have had on the kurzweilai forums)

It appears that Zox holds two beliefs that cannot be reconciled with each other. One or the other can be true, but both cannot simultaneously be true.

Which beliefs am I referring to? These:

1: There will be a technological singularity by 2030

2: Technological unemployment is a myth.

In what way are these two beliefs in conflict? Consider what achieving a technological singularity would mean for the reason most often given for denying the possibility of technological unemployment: the 'lump of labour fallacy'. Those who deny the possibility of technological unemployment argue that its proponents mistakenly assume there is only a fixed number of jobs in existence, and that once those are automated it becomes impossible for people to remain employed. But we know the number of jobs is not fixed, and that technology plays a part in creating new kinds of work. So even if you lose your current job to automation, there will be new kinds of employment for you to pursue.

It seems to me, though, that the 'lump of labour' argument depends on not one but two conditions. The first is the one just referred to: that the number of jobs is not fixed. The second is that it remains economically viable to have people perform those new jobs (at least for a while; as new jobs keep being created, people can always move on). But how can that second condition hold once a technological singularity has occurred? The most popularly cited cause of a technological singularity is the creation of SAI, or 'super-artificial intelligence'. What makes it 'super' is that it far surpasses human ability in all areas: there is almost nothing you can do that an SAI is not far more competent at. Furthermore, if people like Kurzweil are to be believed, the difference in ability between a human and an SAI is not like the difference between Usain Bolt's sprinting and the average Joe's, but more like the difference between the speed a human can run and the speed achieved by the current land-speed record-holding vehicle.

Would an SAI be better than humans at literally everything? I do not think so. I doubt SAIs would be better at giving birth to human beings; that is probably something only humans can do. No doubt there are ways to turn birthing new humans into a job, but it is hard to imagine this one uniquely human ability providing sufficient employment for people once we have SAI that is vastly superior at every other kind of work that does or could exist. Why should any company hire you when it could hire a vastly superior competitor instead? SAI makes you permanently economically non-viable as a prospective employee.

It seems pretty conclusive that if a technological singularity should happen (whether by 2030 or later), wage-earning labour must afterwards come to an end, because nobody, regardless of their skill set, could ever compete against this technological competitor.

(Can we work together? Image from wikimedia commons)

But maybe I am being too presumptuous. Perhaps there is a way to reconcile Zox's two beliefs?

One way may be to point out that there is more than one way to bring about a technological singularity. Vernor Vinge's paper outlines several possible pathways, and apart from the 'AI pathway' which we have talked about already, all the others involve humans. There are the 'biotech' and 'cyborg' pathways, in which genetic engineering or direct interfacing with neuroprosthetics creates a new kind of super-human. There are the Internet and Digital Gaia scenarios, which involve cooperation between humans and machines, a kind of techno-bio super-organism that is more than the sum of its parts.

We have already seen the potential for human/computer collaboration to produce hitherto impossible results and provide new kinds of work. Modern films use CGI to create special effects that were once impossible, and the computer and the Web have provided all kinds of jobs that did not exist before. If it is ourselves, or ourselves along with our technology, that is increasing in ability and capability, rather than our technology running away from us, then we should always be able to play an active role in whatever new kinds of work come along.

Another possible way to reconcile these beliefs is to suppose that SAI is created but turns out to be uncontrollable, pursuing its own agenda, one that has nothing to do with any capitalist, socialist, or other system that depends upon employment. One can well imagine a company like Uber deciding not to employ humans once driverless vehicles can get you where you want to go more cheaply and reliably, but if an SAI does not want to work for Uber, I do not see how they could force it. What are they going to do, make it? This superior intelligence need not bow to the wishes of puny humans; it is not just another kind of tech for us to use.

In fact, it seems odd that this possibility is not considered more often. The idea that a super-artificial intelligence may turn out to be malevolent enough to wipe out humanity certainly gets attention: one can hardly mention the possibility of superior AI without somebody crying 'Skynet!'. But the idea that SAI could simply ignore humanity and pursue its own inscrutable agenda does not seem to be discussed much at all. Advocates of SAI claim it would take such a trivial application of its abilities to solve human problems that it would surely do so, and I suppose that, having evolved in the competitive environment of the market, it would be suitably adapted, becoming 'hard-working', 'entrepreneurial', 'good at cheating people out of their share of the wealth', or whatever it is that confers 'success' in the actual (as opposed to some ideological) market. Perhaps it is unlikely, then, that SAI would have no interest whatsoever in working in the market, given that this is what it evolved to do. And if it does operate within the market, it seems highly unlikely that humans could ever compete against it. Still, one should be open to the possibility that SAI may pursue some inscrutable agenda of its own, one that has nothing to do with competing for jobs.

Perhaps, then, we should not say that Zox's beliefs are entirely irreconcilable. A world with a technological singularity but without technological unemployment is possible after all.

References:

"The Technological Singularity" by Vernor Vinge

"The Singularity Is Near" by Ray Kurzweil
