The concept of an exocortex is similar to the invention of glasses
An exocortex is nothing more than a tool to be used
In a continuation of my debate with @fourth on the role of culture in the development of technology, I made the statement that, as creators/inventors/designers of technology, we should do our best not to pollute the technology with our biases. These biases, which may be cultural, represent a safety risk when we are discussing AI, for example.
@fourth made the statement that even the Arabic numerals 0,1,2,3,4,5,6,7,8,9 can be interpreted culturally. In this he is right, but we also agree that 2+2=4 and that math provides a language for the exact description of anything (real or not). So we can use math to provide an exact description of possible universes within a multiverse, or we can use math to describe infinite structures, like the tape of a Turing machine, which cannot possibly exist under our laws of physics (our universe is finite, which means computation and memory cannot be infinite).
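As an aside, this symbol-independence can even be made machine-checkable. Here is a minimal Lean 4 sketch (the names `ii` and `iv` are my own arbitrary relabelings, not standard notation): the proofs go through by definition alone, whatever we choose to call the numbers.

```lean
-- 2 + 2 = 4 is provable from the definition of the natural numbers alone;
-- no cultural interpretation of the glyphs is involved.
example : 2 + 2 = 4 := rfl

-- Relabel the symbols arbitrarily and the same fact still holds,
-- because the proof depends on the objects, not on the notation.
def ii : Nat := 2
def iv : Nat := 4
example : ii + ii = iv := rfl
```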
The point is that math just works. It works because math is primarily a language. It does not build a culture into the symbols themselves, but it does allow any user to give those symbols whatever meaning they like, which lets the community change the agreed-upon standard interpretation over time. In a sense I see technology as similar: if we want it to be free from bias, we really cannot export our culture onto the technology, which is merely a tool. I do not believe we can predict or control what future users will do with whatever we create, nor do I think we can be held responsible for that.
The ability of individuals to see, and how improving our sight improves our information-processing ability
The concept of sight is important when thinking about an exocortex. When we think of it in a cultural context, the notion of "the blind leading the blind" is exactly the problem an exocortex must solve. Suppose we do not even think of it as an exocortex, and instead think of it as a new kind of glasses. By thinking of an exocortex as glasses, we can now put it into a cultural and societal context.
In order to make good decisions we have to be aware of what is in our environment. Without technological augmentation, our ability to process information is limited by our ability to sense our environment. Sight is a good example because humans have some of the better eyes in the animal kingdom. We can see the waves on the electromagnetic spectrum which we label "visible light", while other light, such as infrared and UV, is invisible to us. Even within visible light, some people can see more colors than others; it is even possible that women can see more colors than men. This ability to see light and color enables pattern detection and recognition, which is a means by which we process the information our eyes take in.
A potential function of an exocortex is similar to that of glasses
The typical function of glasses is to augment the sight of people with poor vision. In a natural environment, the function of night vision is to let people in surroundings full of predators see as well as, or better than, the predators do.
To continue this analogy, having an exocortex is more like having night vision goggles in a jungle filled with predators. Wild cats and other predators can see us at night, but we can't see them, because our eyes did not evolve for seeing in the dark. We can use night vision goggles to spot potential threats to our survival when the environment works against our biological capabilities.
It is true that we can rely on the big man in the tribe to use his night vision goggles to see for us. But letting him see for us requires that we trust him to be our eyes, and this is the problem with centralization: it puts one man in a very trusted (and therefore dangerous) position. Decentralization would mean that instead of the whole tribe depending on one man with night vision goggles, we all have our own, giving each of us the option to defend ourselves.
The bottom-up approach to improving our collective sight is to provide every member of the tribe with night vision goggles. This way, if the big man fails to see something, every member of the tribe can direct attention toward much more of the environment. This decentralization leverages the growth of the tribe (each new member adds new eyes to scan for predators). It also allows the tribe, to some extent, to tell whether any particular member is trying to manipulate it by relaying false information.
A decentralized exocortex is like giving night vision goggles to everyone
To have a decentralized exocortex is like giving everyone these night vision goggles. Of course it goes beyond glasses, because an exocortex can do so much more than passively detect and process information. The point of this analogy is to show that we cannot predict how a tool will evolve through time. When glasses were invented, there was no way to know that the idea would one day produce night vision goggles. A tool can aid individuals in survival and increase the resilience of a tribe, but it can also be abused (as anything can), so there is no guarantee that an exocortex cannot be weaponized. The point is that AI, automation, information, and knowledge can all be used as weapons or in beneficial, prosocial ways. We can only build the technology and determine for ourselves how we want to use it; we cannot predict 100 years ahead how people might interpret or use what we create.
We can design technology to adapt to public sentiment, so that the public 100 years from now can change it and keep it prosocial, but we cannot predict from here what that public will decide, nor can we know what is best for them.
One of the things we really have to reckon with, and that I think many commenters here and perhaps you yourself could be more aware of, is that all uses of technology happen with respect to culture. This is because culture frames how we see the world, what our practices are for dealing with it, and our relationships with each other.
This is why it's not correct to say that culture is a bias. Any given culture contains practices which carry biases, but the idea is not to get rid of culture; it's to try to find the real abstract technology that "just works".
From our conversation, the example of the knife is important. Look at any knife: it is infused with culture, from ancient ornate ceremonial knives to the common penknife. The knife, as an abstract concept, has infinite potential variations and can be used in any cultural context. It just works. But the point is, and I want to state this strongly, you cannot create a pure conceptual knife. You will always imbue it with your culture. One of the tasks, then, is to not let negative cultural aspects influence practical design.
Practices develop around technology. This is not something to be resisted. It is, however, something to be aware of and not to mistake for the essence of the technology. The knife you hold in your hand is not the pure concept of a knife, just as you cannot write down the pure concept of the number 2; you can only write the symbol which stands for it, place two items on a table, or say the word "two". It's easy to forget that we even have to learn the concept of two as a child.
To really know the knife would be to see what it is that makes it up. For example, on first seeing a knife and wanting one of your own, you can try to replicate it exactly: the shape, materials, size and colours. But to really get to the knife, you have to generalise and abstract. The same goes for any technology with reference to the culture which makes it. What is interesting here is that customs do emerge where you absolutely have to make the knife a certain way, or it is not considered a knife. Now that is the kind of thinking that should be resisted.
I like your insistence here and elsewhere that we cannot know the future when working on technologies, or what their uses will be. And it's a strong case for ensuring freedom for those working on technology, in whatever way can be accommodated.
Very nice and informative post, thanks for sharing.
Thank you for this analogy, it will come in very useful in discussions.
I think this is an excellent post that brings these very alien concepts down to earth, which is something we really need. The sooner we understand what exponential technologies will bring, the sooner we can position ourselves to get the most out of them and to guide them towards a prosocial outcome.
PS: this post made me realize that glasses are some of the first augmented reality tools :)
Let's not forget that a vast number of technological advancements come from military developments for specific use cases. I think the more frequent case is that a new piece of technology gets developed for a specific use case, to solve a specific problem, and then, once we can collectively take a step back and look at its (potential) impact, the idea behind the advancement evolves into a less specific technology, free of the cultural/contextual bias you are speaking of.
Take a look at cryptocurrencies. Bitcoin was born as a reaction to the financial crisis and solved a specific problem, and when the dust started to settle, people realized that this technology could be applied in a wide range of other industries, solving a wide range of other problems.
Can you cite some examples of lasting technologies which are highly specialized? Bitcoin solved a 2008 problem. It's 10 years later and the problem space has evolved. Has Bitcoin evolved with it?
Consider the CISC vs RISC vs ASIC architecture debate. CISC is much more generalized, while an ASIC is application specific. If Bitcoin is like an ASIC and Ethereum is like RISC, then perhaps Ethereum has an advantage. On the other hand, if Ethereum is like RISC but something else is even more generalized, why would we stick with ASICs?
The same could be said for mining. If we had a universal chip which could mine everything efficiently, why would we ever need ASICs when we could simply reprogram that universal chip?
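To make that trade-off concrete, here is a minimal Python sketch (the names sha256d_asic and universal_chip are illustrative inventions of mine, not anything from an actual mining stack): the "ASIC" does exactly one job, while the "universal chip" can be reprogrammed to do that job or any other, trading fixed-function efficiency for generality.

```python
import hashlib

def sha256d_asic(data: bytes) -> bytes:
    # The "ASIC": fixed-function logic that does exactly one job,
    # Bitcoin-style double SHA-256, and nothing else.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def universal_chip(program, data):
    # The "universal chip": reprogrammable, it simply runs whatever
    # program it is handed, trading raw efficiency for generality.
    return program(data)

# The universal chip can emulate the specialized chip...
assert universal_chip(sha256d_asic, b"block header") == sha256d_asic(b"block header")

# ...and anything else we load into it, which is the argument for generality.
assert universal_chip(lambda d: d.upper(), b"block header") == b"BLOCK HEADER"
```

The open question in the analogy is the one dedicated hardware always poses: the general device can do everything, but the specialized one does its single task far more efficiently.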
You might have misunderstood me, because my point was exactly that they start out specialized and then evolve into generalized iterations. Inventions are rarely made for "general" use cases, because a specific problem is the catalyst of the invention itself. We don't use the Bitcoin protocol for smart contracts, but the idea of smart contracts on the blockchain wouldn't exist without the Bitcoin protocol.
Unfortunately I do not see Bitcoin moving in a general-purpose direction. While I don't know what it will become, in my opinion it is not currently evolving toward that. I would also say smart contracts were thought up long before Bitcoin. It is true that Bitcoin was the first implementation of many ideas, but the main innovation was Nakamoto consensus.
In 2008 Bitcoin was the state of the art. Even in 2012 it was still the state of the art. I cannot say Bitcoin is the state of the art in 2018.
This is exactly my point. :) Bitcoin is not state of the art anymore because the innovation it brought to a specific problem evolved into something more generic, namely smart contract platforms. However, the implementation of such smart contract platforms (which arguably won't even use Nakamoto consensus, the core innovation, once they reach mainstream adoption) wouldn't be possible without the initial innovation of Bitcoin.
By the way, why do you say "unfortunately"? Isn't it a good thing that new protocols are being built for new ideas? It would be Sisyphean to keep mending old protocols that were designed for a different purpose.
Where, if anywhere, does the fundamental difference lie between 'exocortex' and mere 'culture' in the gradation nature -> culture -> human?
I think modern smartphones and social media already fulfil a similar role: they store information and modify behaviour. I think it is an overlooked fact that people are not true individuals: you always function within a group, and the group you are in determines and changes behaviour. I think the "exocortex" will replace this, not add anything.
I think transhumanism dehumanizes us by denying our biological limits.
Behavior modification is not the goal or role I'm promoting. Law enforcement exists independent of smart phones and social media. I'm not someone who thinks that law enforcement should evolve into being a chip in our head, or that the government should police every aspect of our behavior.
This is obvious. We have a global economy, and surviving in it requires most of us to earn and spend money. We also live in a reputation economy (which I blog about plenty), and I make the case that public sentiment is the constraint on behavior. So of course there are behavior constraints: economic constraints (resource constraints), legal constraints (law enforcement), and extrajudicial constraints (public sentiment).
You're missing most of my arguments for why I think an exocortex MUST exist. An exocortex MUST exist because human beings are incapable of scaling up with the complexity of the society we are in. Human beings are not designed to navigate groups with millions of members, to manage thousands of relationships, or to comply with an ever-expanding list of laws (and social norms) that changes on a daily basis.
Human beings are adapted to simple societies and small social networks, family-sized or tribal. The purpose behind transhumanism begins with recognizing that there are limitations to being human: limited social intelligence, attention scarcity, and the inability to scale decision making are all biological limits. Whether you are a transhumanist or not, the society we live in is adopting surveillance, AI, drones, social media, and so on.
So if society is becoming more complex, we either scale with it as individuals or some of us will not survive. You can put all your trust in centralized, top-down media solutions, but those groups will not care if you become a statistic or collateral damage, because they are busy protecting their own interests (the profits of their shareholders). If you're the victim of circumstances, it's not going to be the surveillance company that bails you out in most cases. It's not going to be the social media company that keeps you from committing a crime you may not even know is a crime, or from being fired for saying something some group somewhere perceives as insensitive, racist, or sexist.
And I argue that we will lose our humanity in this proposed future.
What humanity do you think we have now? Please define this.
Also, humanity-wise, i.e. whether we still consider the augmented entity to be human, what matters is the relative ease of UNDOING the change. If a modification is permanent, i.e. un-undoable, or very hard to undo, it definitely pushes the user outside the domain of human-ness. It is like genetic modification: it has been considered something done before conception and for life, but reversibility would make even the most radical genetic alteration of a human as 'mutating' as glasses or clothing. So the actual implementation and application of body/mind augmentation and boosting tech will be subject to competition: the 'bang for the buck' criterion.