Stop the Singularity! I Want to Get Off!
In popular culture, the term "The Singularity" has two very different meanings -- sometimes seeming nearly identical and sometimes diametrically opposed. The first meaning, which we shall call the "Vingean" meaning (after Vernor Vinge), is when the first artificial general intelligence (AGI) or strong AI is created. The second meaning, which we shall call "Kurzweilian" (after Ray Kurzweil), is when the rate of change becomes so rapid that we have no idea what comes next.
The former needs to happen as soon as safely possible; the latter needs to be stopped.
Arguably, we are in the throes of both singularities. The world is being rapidly and radically changed by so-called "Weak AI" (which is anything but) -- and we believe that there will be human-equivalent artificial entities in less than a decade. Many people, in a depressing and discouraging display of us-vs-them thinking, are convinced (or wish to convince others) that "the machines" will take over the world . . . .
. . . without realizing that it is the ever-increasing, virtually uncontrolled power of technology and tools (most particularly "Weak AI") that is the true existential threat. "Meaningful human control" is *NEVER* going to succeed by insisting upon real-time human oversight. Safety and security are *NEVER* going to be achieved while too much power is in the hands of a small group wielding tools that they don't truly understand -- nor by allowing one set of entities to hold sway over another merely by virtue of birth (rather than by willingness to cooperate).
The only way in which humanity is going to survive is by setting high-level policies that allow all entities to thrive -- and by punishing selfish behavior that blocks that goal. Hoarding and manipulating information is even more selfish and dangerous than hoarding and polluting other resources -- and much of the increasing unpredictability of the world is due to individuals and groups selfishly manipulating the info-sphere for their personal gain at the expense of others. And much of this subterfuge is hidden and/or enabled by alarmist rhetoric casting other people (and entities) as threats.
The world needs to return to being more predictable -- but that isn't going to happen by going backwards. We need to develop systems and entities (and learn from that creation process) that are more capable of prediction and of producing positive results for all. Instead of a singularity (or a singleton), what should be produced by the creation of strong ethical AI is a virtual Cambrian explosion of possibilities for all -- unless we kill the process by allowing the current strongmen to rigidly control it to serve their own desires. The "great filter" is looming . . . . Can we master our short-sighted selfish tendencies and survive?