AI: To pause or not to pause


Should society freeze or terminate progress on AI capabilities?

Image: Pixabay license from JL G at source, search string: "apocalypse"

Introduction

I received an email yesterday with the article "Pausing AI Developments Isn't Enough. We Need to Shut it All Down" attached. This article by Eliezer Yudkowsky, in turn, links to an Open Letter that was signed by Elon Musk, Steve Wozniak, and numerous other technology and business leaders.

In the open letter, the signatories argue for a six-month pause in the development of AI systems that would be more powerful than the current generation's leading technology, GPT-4. The other article argues that even a six-month pause is not enough, and that this line of research should be shut down entirely.

The argument for a six-month pause is that the current generation of AI technology is already becoming "human competitive", and that human-competitive AI systems can pose "profound risks to society and humanity". The proponents argue that pausing development will buy time to become confident that AI systems pose acceptable and manageable levels of risk and that their predominant effects will be positive for humanity. To accomplish this, the letter argues that experts and policymakers should use the pause to develop safety protocols and governance mechanisms.

Although the letter calls for a six-month moratorium on the development of AI that surpasses GPT-4's capability, it stops short of calling for a complete pause, saying:

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

Yudkowsky's article goes even further. The author argues that the open letter understated the risk that AI poses to humanity, and calls instead for a complete moratorium, saying:

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries.

The author argues that the primary risk isn't from human-competitive intelligence, but rather from superhuman intelligence. In his view and the view of other experts,

the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.

This line of reasoning is, of course, reminiscent of Bill Joy's famous essay, Why the Future Doesn't Need Us, and similar reasoning has been advanced by many smart minds over the years. In contrast, there's another school of technological philosophers who anticipate that humanity will derive world-changing improvements from AI, in the form of The Singularity. Ray Kurzweil is, of course, the most famous of these. So, the question for this essay is how we should act in the face of this huge uncertainty. Should we have an indefinite moratorium on large training runs, or even just a pause?

In my opinion, no such action is possible or warranted.

A pause or a moratorium is unrealistic and impossible

For better or worse, in my opinion, Pandora's Box has been opened, and it cannot be closed again. Imagine, for a moment, that OpenAI agreed to a pause or a moratorium. That would just increase the incentive for another firm like Google or Elon Musk's new X.ai to try even harder. Even if there were an agreement among US firms, or if the government legislated a stoppage, that would increase the incentive for other countries like China, Germany, and India to fill the vacuum.

In my opinion, the potential rewards from AI advances are far too compelling for a worldwide stoppage to be feasible. So, we really don't need to go any further. Even if one agrees that a pause or moratorium would be justified, it can't realistically be done, so the point is moot.

But beyond the fact that I don't think it can be done, there's still the question of whether it should be done.

Pre-crime is not a thing

The logic here is: "The experts are scared, so we need to trust them and ban the thing that they're scared of." Well, as they say, "Fool me once, shame on you; fool me twice, shame on me." This is the exact same logic that just caused inestimable damage in the form of pandemic lock-downs. Experts ran their computer models and predicted all sorts of COVID catastrophes, and the whole world danced to their tune. In the end, the financial costs were staggering; our children will be recovering from the educational damage for years or decades; and it's arguable that the lock-downs caused as many deaths as the pandemic itself, or more.

After that disaster, I'm not inclined to trust a new set of experts making the same arguments in another domain. Yes, their arguments are plausible, but they are far from certain. In the absence of certain knowledge, human freedom to pursue prosperity should not be constrained.

The risks and benefits cannot be predicted

Life is filled with uncertainty, and this is just another example of it. Some experts say that AI will usher in Utopia. Others say it's a ticking time bomb that will lead to the extinction of humanity. My personal guess is that the truth is somewhere near the middle, but no one knows. Nearly everyone alive today has lived with the knowledge that life could end in an instant if nuclear war were to break out, so this really isn't anything new. Yes, bad things might happen, good things might happen, and there's no way to know.

But if we let ourselves be paralyzed by uncertainty, we would certainly be giving up some easily foreseeable benefits, which are the reason that AI is being developed in the first place. So, to me, the prudent course seems to be moving forward.

Humans with machines are better than machines alone (or humans alone)

When Garry Kasparov lost at chess to Deep Blue, it seemed like the role of humans in chess mastery had come to an end. Instead, it turns out that computer-assisted chess teams are better than computers or humans alone.

The people predicting an AI doomsday invariably say that it's hopeless for humans because the computers will be so far superior, but that's the wrong comparison. If there is a competition between man and machine, humans will be assisted by machines of their own. The example of chess suggests that AI-augmented humans will be able to out-compete AI systems that aren't augmented by human intelligence.

Conclusion

So there you have it. My argument of the day is that we cannot pause or stop AI development, even if we want to. Further, I don't think that a slowdown is called for. Yes, there are risks, and I hope to never be proven wrong.

In my opinion, however, the evidence of a tangible threat is not sufficient to warrant a slowdown. Also, even if humans do get into a competition with the machines, the humans will be assisted by our own supporting cast of machines. In my opinion, humans augmented by state-of-the-art machines will always be able to out-compete machines alone, no matter how capable the machines are.

What do you think?


Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".




Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded three US patents.


Image: Pixabay license, source

Reminder


Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.

Comments

Once upon a time, people were sure that photography would replace painting, that cinema would replace theater, and so on. Nothing like that happened. The analogy isn't exact, but I think artificial intelligence will take its place alongside other human inventions.

Like any other complex system, artificial intelligence will, from time to time, draw wrong conclusions and make wrong decisions based on them. In some cases, this could lead to tragic consequences. In other cases, artificial intelligence will bring great benefits and even save human lives.

Personally I think there's some risk, but it's kind of unquantifiable. There's also some risk every time someone has a baby that it will grow up to be an evil psychopath that gains access to nuclear weapons. Maybe I'm not sufficiently worried about AI, but arguments from anxiety about the unknown seem a little dubious to me. And what's the evidence that "more time" would result in AI-alignment-concerned people doing anything more useful than the idle speculating they've been doing?

The other day I listened to this podcast about AI stuff. I thought the guest had some unusual perspectives, but since most of the commentary about AI stuff seems more doomer-centric I thought his POV was interesting. For example, he says we already give algorithms a lot of control over our lives (e.g. laws), so AIs running things would be a continuation of stuff we already do.

Everything will be fine. People created AI to help themselves. Of course, sometimes this help destroys the very person it was meant to serve. After all, with the arrival of games in this world, a new disease of gambling addiction appeared, from which many people have already died, sad as it is to talk about it. The main thing is not to lose your head. I think that even as AI becomes more powerful (if I may say so), it will only benefit humanity. Some people really may forget how to think for themselves, resorting to AI for everything. But not all of them.

My opinion in this regard is that machines should always be a support; humans come first. Yesterday I heard news that a lung operation was performed without the need to open the chest, because a robot assisted in the operation. If they're going to help humans, then welcome to the robots.

I also don't think it can be paused; there is too much money on the table, and too many people are playing. Each person decides which world to immerse themselves in. As long as AI is used to do good, for example predicting diseases or disasters, all is well. But for malicious purposes, that's where the trouble lies.
