Are You Drinking the Transhumanist Kool-Aid?
In the Basic Income America Facebook group, Zoltan Istvan, a transhumanist who recently ran for president, shared his Wired article, "Capitalism 2.0: the economy of the future will be powered by neural lace." He (along with many others) argues Wall Street, law offices, engineering firms, and more will soon be mostly devoid of humans.
I think I mostly agree with him. Algorithms will far surpass human ability to achieve the best possible outcomes (Nash equilibria). Having read Superintelligence, The Master Algorithm, The Age of Em, books on evolution, lectures, interviews, etc., I think we're approaching an important moment in human history where we have to figure out morality so we can build it into the proto-AI children we are giving birth to. I've even toyed around with a fun idea related to the simulation hypothesis. Maybe we exist as a simulation, repeating the birth of AI over and over again until we figure out a way to do it without destroying ourselves or turning the universe into computronium.
I've argued the world may need a Universal Basic Income and Steem Power might power it. I've also discussed the morality of artificial intelligence. I'm a big fan of Ray Kurzweil and love hearing him lecture about the future longevity we might enjoy, but I also recognize how fragile biological life is compared to exponentially growing super intelligence. I've heard the concerns by Elon Musk, Stephen Hawking, and others, and to me, they are convincing.
So where does this leave us? What do you think of Zoltan's article? Are we approaching the "If you can't beat them, join them" moment in our evolution as a species? Will we one day all become transhumanists?
Luke Stokes is a father, husband, business owner, programmer, voluntaryist, and blockchain enthusiast. He wants to help create a world we all want to live in.
The problem, I think, is not with the technology or the innovations or AI.
It is the intent for which they are being built: greed. Greed for more profits/money, more power, more control. Most big wigs who sponsor (or who can sponsor) don't get into it for the general welfare!
What happens when you develop a stealth machine that can do anything humans can do, only better and faster, and it does it without question?
What happens when you gift it with intelligence? It mocks our own intelligence, and as I said before, those who are building these are (for the most part) greedy.
If these machines emulate those characteristics, they will DO a better job at being greedy for control and power than humans ever did. Lucky for them, they don't have a heart to hurt...
So yeah, we should be scared about the abuse AI may come to wield.
We should also be excited about the possibilities it will open up to humans.
The concern is, should we be more excited or more scared?!
Time will tell!
Many view the world as full of evil or full of good or somewhere in between. Some see "Greed" as evil, others as an expression of honest humanity seeking self-benefit. Personally, I think that's a bit of a stretch of the word, but I do admit I'm influenced by Ayn Rand's writings, and I am impressed with how individual actors striving for the virtue of rationality create an amazing symphony of market forces and price discovery in a sea of chaos.
Not everyone wants to destroy the world. Read some Steven Pinker or Matt Ridley and you may see a different picture of how the world really is getting much better on many levels, including our understanding of morality.
I was making a point.
I believe the farther technology takes us away from nature, the more miserable our lives will be.
We have the best communication technology ever, and yet look at the dinner table: probably the worst generation at conversing in human terms.
Truth is not for all men but for those who seek it (Ayn Rand), and with technology humanity may not go on investigating truth but rather create and chase more illusions (3D, VR, AI)...
I am optimistic about what technology can and will do; that is the exciting part. It is humans being swayed by all of it that concerns me.
You're still making a distinction between technology and nature. I see them as very closely related. The rise of technology by homo sapiens is a deterministic, natural process. When you say "in human terms" it reveals a bias. From my perspective, we'll soon have incredible conversations in VR with people around the world and not be limited by just the people in our immediate family. My thinking, currently, is that what we now call illusion will become more real than we currently realize. Our thoughts dictate our actions which impact physical reality.
But yes, I do think some caution is definitely in order. Humans being swayed or even controlled by others is not a good thing.
What prevents the neural lace from becoming a two-way communications node?
Controlling populations with simple propaganda is a demonstrated effective tool. Imagine the leverage of a neural net with a command and control node that isn't even controlled by a human.
That's the pinnacle of dystopia, in my mind. Give me free will with all of its ugliness over the sterile world of """""free will and moneys"""""
It's certainly a very important and very real concern. I think the anime Ghost in the Shell: Stand Alone Complex tackled a lot of these challenges in interesting ways (cyber brain firewalls, backups, etc.). Every new technology, every new change in the status quo, has been met with fear and hope. We can either cower back or we can move forward.
The Internet is also a very scary place, the breeding ground for memetic warfare. Even now, people are being "brainwashed" by precisely controlled inputs, given known worldviews, which produce probabilistic outcomes. And yet, it's also the birthplace of freedom, autonomy, cryptographic security, wealth not manipulated by central bankers, open communication, and so much more. To me, it's like the famous scene in The Empire Strikes Back where what Luke finds in the cave is what he brings with him.
We have very real and important challenges to face. We also have to see the world as it really is and not live in fear. By almost every metric we can come up with, death by war, famine, and disease is declining, humans are living longer, infant mortality is down, and standards of living are going up. These are the facts of the world we live in, and now more than ever people are moving up Maslow's Hierarchy of Needs and reaching self-actualization, realizing the benefits of cooperation over competition and domination.
In answer to your question: what prevents it? We do. All of us staying informed and not relying on someone else to invent or understand the next wave of technology. It's our responsibility.
(I'll leave the discussion about whether free will is actually a thing for another time.)
Thanks Blake. I always appreciate your perspective.
Same to you, brother.
I think we're on the same page, even same paragraph.
My instincts tell me that the temptation will be for the power to be abused UNLESS the current group of technocrats and unelected bureaucrats can be stripped wholly of ACCESS to that power.
Things like the blockchain indicate that there are technological solutions moving forward, but I feel like the tempting slipstream into the abyss has been outpacing the virtuous, cooperative constructions for thousands of years, by at least a few paces. Right now, it feels like we're on the edge of a huge battle between cooperation and domination, as you aptly point out.
If there are legit brain firewalls that I can program simply, I'd be on-board :)
I need to find the Ghost in the Shell episodes without English language tracks and just the subtitles. I hate anime without the Japanese language for some reason, hah.
Great talk indeed! You always seem to come at me from a solid angle that challenges my thoughts without an ego. We should do a podcast or something sometime :D
I don't see transhumanism catching up with AI in time to make any kind of real impact on how AI will affect humanity. By the time we are able to really utilize implants, automation will have already run its course on employment by at least a decade.
I am 100% fine with the idea of replacing body parts and augmentations, though. I just don't see it as being even remotely close to a possible solution to unemployment. It's really going to be, more or less, for people with excess wealth to lengthen their lifespan or massively change their body type. I guarantee you we will be seeing real-life cat girls and stuff within the next 20 years.
But it's not going to change employment; ultimately, an organic brain will always be a bottleneck when compared to an artificial one.
That's a good point. We'll have to figure something out quickly in the UBI space or we're going to have a really bad day.
Heh. That will get weird.
Yep, which is why I'm hoping we join them instead of getting completely eclipsed by them. I didn't really enjoy the book, but The Age of Em was at least an interesting speculation on what a world filled with electronic minds might look like.
Machines exist FOR humans. If machines are going to be used to take the place of humans at the expense of their lives, then no moral human should participate in their creation. What actually will happen, I do not know, but no one, in an age of such unprecedented wealth, in the form of the collected intellectual property that we have all inherited, no one should starve or be homeless.
The transhumanist idea will depend, at least a bit, on its degree of allowing humans who desire to remain human to do so. There is an argument to be made that glasses and dental work are transhuman. Expanding the human experience is far from transcending humanity.
Simulation hypothesis is an entertaining rehash of neoplatonism and the implication of an external authority, without evidence. It is an entertaining thought experiment, nothing more.
Algorithms designed to achieve the best possible outcomes cannot account for the occurrence of human genius, or emergent behavior. I would not turn over my life to this approach of muddling through eternity with the best programmed machines money can buy in 2017. That is not progress; it is a managed economy with a different manager.
Morality is discernible by studying the dynamics of the natural world. This is not to say that there is a necessity of falling victim to problems which can be avoided. It is to say that what is morally correct adheres to the natural function of the physical world, and in this must be included conditional, state-based considerations.
Isn't that a naturalistic fallacy?
I am speaking of what is, and am not implying any ought. The 'is' is the demonstrable, the ought, outside of that which is possible, is subjective, and not demonstrable.
I've always thought of "moral" in the realm of "ought," though. That said, I've been enjoying Sam Harris' view on this stuff, having read a handful of his books.
I wonder, though, from your original comment: isn't there a level of speciesism going on which assumes humans are the be-all and end-all? What if synthetic life is a natural evolution of information? Like in The Selfish Gene, what if AI is the next step in memetic data transfer? We wiped out the other Homo species until only Sapiens were left. Was that moral? I recently read the book Sapiens: A Brief History of Humankind, which got me thinking about this again.
Ah, but what if technology made us human (specifically, cooking)? What if technological advance is the human experience?
Agreed, but algorithms that evolve will have the same appreciation for randomness that we do. Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies covers some of this stuff. We can't today even imagine what real super intelligent AI will look like in the future.
Agreed, but I still think it's fun to think about in case some day they goof up and we get to see the black cat walk in front of us twice. :)
The morality of natural philosophy must deal with only what is. What ought, is a subjective that is external to the dynamic, whether it be religious, ideological, or otherwise, and imposed by others or by the self. Belief, or even thinking, without the correct understanding can always become misaligned with the function of the dynamics of the natural world. These dynamics can be hacked to alter the naturally occurring outcome, but the underlying dynamic continues to apply. For instance building an aqueduct, to alter the course the water takes to get to the gravitational equilibrium it seeks. The dynamic has not changed, but the course has.
I do not think for a moment that human is the pinnacle of evolution. It is simply that I do not see anything that I'd rather be yet, and given the range of experience possible, I'm not done being human until presented with a demonstrably better offer.
Technological advance has ipso facto been demonstrated to be part of the human experience. The use of the term 'transhumanism' implies that there is a further, or alternate, destination that is not human. Also, there are divergent elements, sometimes even contradictory, that are part of what we are. Just for instance, meat eating gave us our brains, while vegetable eating gave us our molars. Both are part of what we are.
Algorithms that evolve will be amazing, but there can be no guarantee that they will evolve alongside our own evolution in a beneficial manner. There are also the ethical questions of designing something that evolves and then endlessly tailoring it to serve us. This we have done with animals with little thought to their welfare, except to make them better human investments. What happens when we tailor machine logic, more complex than our own, to do our bidding? Will it comply? What if it does not? Here I hit the wall of the questions of 'what is intelligence?' and 'what is consciousness?'
I read some Nick Bostrom relating to the subject of the simulation hypothesis and came away thoroughly unimpressed. Could you recommend something else by him? The simulation hypothesis is very entertaining in its possible permutations and implications on the subjects of consciousness and will. Until it could be proven, which would in itself raise interesting questions about the nature of self-awareness, I do not think it is more than a thought tool, useful for the internal logic of systems. The parameters of the natural world, not being dynamically defined by us, as we did not invent or build it, are not subject to us. We can alter the condition, or context, but we cannot abolish physics.
Simulation hypothesis is even more entertaining, to me, when put in the context of Descartes' 'Evil demon thought experiment', and the idea of the derivation of the Platonic ideal. What is evident to us, was not necessarily evident to even some of the most intelligent people of the past.
Yes, read Superintelligence: Paths, Dangers, Strategies.
reminds me of the article I posted way back,
How Steemit Is Convincing Me That Becoming A Hybrid Human-Robot Might Not Be Such An Evil, Horrible Thing Afterall...
I like the UBI concept. Granted, it might not be easy or straightforward to implement effectively. However, eliminating poverty might not be such a bad thing. And having been locked into the pervading scarcity mindset myself for years, I can relate to the fact that a life stressing over money kinda sucks. It's likely that a UBI system would get abused by some people who take advantage and don't give back, but there would likely be a surge of creative output from others that would benefit society as a whole...
Plus, bringing cryptocurrency and token-based economies into the picture, it might also be possible for the UBI to be issued in tokens with certain conditions or restrictions, such that a person's income could be used for food and basic services while ensuring it can't be spent on alcohol, drugs, gambling, etc. I think something like that would be very valuable to integrate...
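To make the conditional-token idea concrete, here is a minimal, purely illustrative Python sketch of a category-restricted UBI wallet. Everything here is an assumption of mine for illustration (the category list, the `RestrictedUBIWallet` class, and its `spend` method are hypothetical, not any real token standard or smart-contract API):

```python
# Hypothetical eligible merchant categories for UBI tokens (illustrative only).
ALLOWED_CATEGORIES = {"food", "housing", "utilities", "healthcare", "transit"}

class RestrictedUBIWallet:
    """Toy model of a UBI token balance that only spends in whitelisted categories."""

    def __init__(self, balance):
        self.balance = balance

    def spend(self, amount, category):
        """Approve a spend only for eligible categories; return the new balance."""
        if category not in ALLOWED_CATEGORIES:
            raise ValueError(f"Category '{category}' is not eligible for UBI tokens")
        if amount > self.balance:
            raise ValueError("Insufficient balance")
        self.balance -= amount
        return self.balance

wallet = RestrictedUBIWallet(balance=100)
wallet.spend(30, "food")          # allowed: balance drops to 70
try:
    wallet.spend(10, "gambling")  # blocked: not an eligible category
except ValueError as e:
    print(e)
```

In a real token economy this check would live in the transfer logic of the token contract itself rather than in the wallet, but the idea is the same: the restriction travels with the money.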
Nice thought provoking article. I am hearing lots of these same questions in discussions lately.
Thank you. I think these are important topics to discuss.
I think it is going to come down to competition for resources. So the question then becomes: become a Morlock or become an Eloi? Please read THE ELOI AND THE MORLOCKS. It's a pretty good characterization.
I haven't read the original novel, only saw the most recent movie remake. Maybe I should add that one to my list. As for resources, it may, but I'm also hopeful that mining asteroids, unlimited energy from the sun, and 3D printing on a massive scale will give us access to resources like we've never known.
Nobody I know has the answers to this, but a lot of people I know are asking the questions. I think we'd better.
I agree. These questions have to be asked and answers have to be found or we will find ourselves at the mercy of things we had no influence over.
We will likely all become it one day, but that will most likely be beyond our days. The first few generations will resist it, especially with comparisons to the mark of the beast.
Definitely agree re: approaching an important moment in human history. Sure sounds like the "machines will replace humans and we'll laze around the pool" cry we heard at the start of the Industrial Revolution. Most ominous is the idea that the huge problems humanity faces are overpopulation and education. How will AI solve those? Not quite on topic, but I'm not convinced that a UBI would actually help. I totally get the idea and think it makes logical sense; I just haven't seen a place where giving regular folks "free" stuff actually helps them in the long run. Injured/sick/mentally ill/not able to care for themselves is something totally different.
We are gonna die!!!!!
Really diggin' your work lately!