The concept of decentralization in information technologies is not a new one. The internet, probably the single most influential technological innovation of the last 100 years, started out as a decentralized phenomenon. The pioneers of those early days used protocols to connect their computers with other machines around the world and built applications like email services and the World Wide Web, hosting the content on their own computers.
The Internet Before Decentralization
The internet is a human construction with its own languages, and these languages have their own rules and protocols that allow it to function properly. Before these languages were developed, computers were isolated machines with no way to communicate with one another. By creating a structure of interconnections between computers and using these communication protocols, computers are able to interact with each other.
This interconnected structure is called the network architecture, and it is what makes the internet possible. There are a number of different architectures, but the two most prevalent are client-server and peer-to-peer networks. Of these two, the client-server model dominates the landscape and communicates using a language called the Hypertext Transfer Protocol (HTTP). Data is stored on centralized servers and accessed via location-based addresses using HTTP.
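The client-server exchange described above can be sketched in a few lines. The snippet below builds the plain-text HTTP/1.1 request a client would send to retrieve a resource by its location (a host name plus a path); the host "example.com" and path "/index.html" are purely illustrative.

```python
def build_get_request(host: str, path: str = "/") -> str:
    """Construct the plain-text HTTP/1.1 GET request a browser would send.

    The client names the data by *location*: which server (host) and
    which resource on that server (path). Only the server that owns
    that location can answer.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"   # method, resource location, protocol version
        f"Host: {host}\r\n"          # which site on this server we want
        "Connection: close\r\n"      # one request, then close the connection
        "\r\n"                       # blank line ends the headers
    )

request = build_get_request("example.com", "/index.html")
print(request)
```

Sending this text over a TCP connection to port 80 of example.com would prompt that one central server to return the page, which is exactly the dependency on a single location that the rest of this article contrasts with peer-to-peer designs.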
This centralized server model and HTTP are very effective for certain tasks, such as creating websites and handling text and image files, content that once comprised the majority of internet traffic. Where speed, latency, and throughput matter, centralization has proven to be a useful model.
Because of these strengths, HTTP has dominated the landscape. However, HTTP is not perfect. In particular, it is poorly suited to transferring large data files, like audio and video, which is one reason peer-to-peer networks gained popularity. There is also the issue of server security. Consolidation means the risk of data breaches and hacks is enormous: the data of an entire population is stored on a handful of servers under central control. If bad actors gain access to these servers, they can read, manipulate, and delete huge swaths of information.
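One reason peer-to-peer networks cope well with large files is that many of them split a file into chunks and identify each chunk by a hash of its content, so any peer holding a chunk can serve it and the downloader can verify it regardless of who sent it. The sketch below illustrates the general idea only; it is not a model of any specific protocol, and the tiny chunk size is for demonstration.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use 256 KiB or larger

def chunk_and_hash(data: bytes) -> list[tuple[str, bytes]]:
    """Split data into fixed-size chunks, each keyed by its SHA-256 digest."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def verify(digest: str, chunk: bytes) -> bool:
    """A peer can confirm a chunk arrived intact, whoever supplied it."""
    return hashlib.sha256(chunk).hexdigest() == digest

pieces = chunk_and_hash(b"decentralize")
assert all(verify(digest, chunk) for digest, chunk in pieces)
```

Because the digest, not the server's address, identifies the data, chunks can be fetched from many peers in parallel, which is what lets these networks distribute the load of large transfers instead of funneling it through one server.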
There is a strong push for a decentralized future. This technology allows innovation to develop on a more egalitarian footing and helps bring power back to the people. Beyond any philosophical arguments about the evils of centralized data control, there are real-world examples of the ways a decentralized cloud can benefit people.
This is apparent in the way some nations have dealt with censorship and data manipulation. The consolidation of data provides governments with an easy and nearly absolute way to control the information a population has access to. There have been many examples of state internet censorship around the world, with notable cases in China and Turkey. China has blocked many social media platforms and replaced them with its own, highly surveilled versions, while Turkey banned Wikipedia outright, claiming it was a threat to national security. These scenarios, along with the implications of hacking these massive servers, make a strong case for decentralization.
There are myriad reasons why a decentralized ecosystem is beneficial for all parties involved, excluding big tech firms that rake in cash for centralized server space and authoritarian regimes. Everything from security and cost to ideology and philosophy is a valid argument for decentralizing data storage. The development of these new technologies brings us closer to taking the reins from monolithic corporations and building a system that gives users the freedom to grow and create in new and exciting ways.