The Future of the Internet's Anatomy


Recently, the World Wide Web turned 25. Back in 1989, when Sir Tim Berners-Lee first proposed the system that would become the Web and then wrote the Hypertext Transfer Protocol (HTTP) and Hypertext Markup Language (HTML) to make it work, he helped usher in the Information Age. His software gave us a common framework for interacting with each other and exchanging documents from remote locations anywhere across this massive network of networks (the "Information Superhighway," as it was called in the 90s). It enabled the creation of the World Wide Web atop the existing Internet transmission protocols (TCP/IP) developed in the 70s by Robert Kahn and Vinton Cerf, and before long the Web would become the architectural backbone of the modern Internet.


What no one could have predicted then, however, is just how rapidly user culture and performance demands would evolve beyond the Internet's original design parameters. For more than 25 years, the IP-based Internet has proven remarkably resilient in spite of increasingly sophisticated security attacks and complex content delivery requirements. But looking forward into a future where ubiquitous network computing appears destined to become a cornerstone of the average person's way of life, technologists generally agree that it's time to explore altogether new Internet architectures. Computer scientists are attempting to redesign the Internet from the ground up using wholly new paradigms unbiased by present design assumptions.

Just what that architecture will look like is uncertain at this point, and it’s even more uncertain how soon a well-tested “clean slate” model could be widely implemented. But thanks to an ambitious National Science Foundation (NSF) initiative, mirroring the efforts of other interested groups worldwide (like FIRE and AKARI), we’re beginning to catch a glimpse of some of the major design trajectories that will characterize the Future Internet.


One of the foremost design concerns is security, which exists in the present Internet only as an overlay. When the Internet Protocol was first developed, its designers were not thinking about a global network environment where vulnerabilities would be easily exploited by malicious intruders; they were simply creating a connectionless communication mechanism that would allow a source host to exchange information in a uniform manner with a destination host across any possible configuration of intervening networks. Since IP functions as the minimum network service assumption (or "narrow waist") of the Internet, the current architecture is intrinsically concerned with host location and packet transfer, not data integrity or host authenticity (check out this 2008 CPNI security assessment for some technical details).

As a result, security must be bolted on at the edges of the network, with applications that scrutinize data before it is sent and after it is received. Future Internet designers see this as a fundamental problem. They want to make security a basic property of data transmission itself, envisioning a network infrastructure where self-identifying data packets are examined and verified at multiple points en route to their destinations instead of only at the endpoints. Among other things, this would permit intermediate network devices like routers and bridges to perform security tasks that IP components were never designed to handle.
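To make that idea concrete, here is a minimal sketch (in Python, and not any project's actual wire format; the packet structure and helper names are mine) of a self-identifying packet whose identifier is simply the cryptographic hash of its payload, so that any hop along the path can recheck the binding:

```python
# Minimal sketch of a self-identifying packet: the ID is the hash of the
# payload, so tampering anywhere along the path is detectable at any hop.
import hashlib

def make_packet(payload: bytes) -> dict:
    """Build a packet that names itself by its payload's SHA-256 digest."""
    return {"id": hashlib.sha256(payload).hexdigest(), "payload": payload}

def verify_at_hop(packet: dict) -> bool:
    """Any intermediate node (router, bridge) can recheck the binding."""
    return hashlib.sha256(packet["payload"]).hexdigest() == packet["id"]

pkt = make_packet(b"some application data")
print(verify_at_hop(pkt))            # True: untampered packet passes
pkt["payload"] = b"tampered data"
print(verify_at_hop(pkt))            # False: any modification is evident
```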


One NSF-funded project, termed eXpressive Internet Architecture (XIA), makes this kind of security the centerpiece of its design. XIA works by defining various communication entities (or principals) and then specifying minimum rules for communicating with each, effectively making the "narrow waist" of the Internet far more pervasive and customized to the type of interaction taking place. Hosts, content, and services are the three main types of principals, but XIA allows new principal types to be created to accommodate future communication needs. Principals address each other using 160-bit expressive identifiers (or XIDs) built from cryptographic hashes, which function as self-certifying, intrinsically trustworthy identification. (For a point of reference, and as an indication of how resilient this kind of data handling is, the concept is very similar to the way Bitcoin transactions can be publicly validated on an anonymous peer-to-peer network.) Tampering with communications across an XIA-like Internet would be extremely difficult, to say the least.
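As a rough illustration of how a self-certifying identifier works (a toy sketch, not the real XIA stack; the CID/HID tags and helper names are assumptions for this example), a 160-bit XID can be modeled as the SHA-1 digest of whatever it names:

```python
# Toy sketch of self-certifying 160-bit identifiers in the spirit of XIDs.
import hashlib

def make_xid(principal_type: str, identifying_bytes: bytes) -> str:
    """Hypothetical XID: a principal-type tag plus a 160-bit SHA-1 hash."""
    return f"{principal_type}:{hashlib.sha1(identifying_bytes).hexdigest()}"

# A content principal can be named by hashing the content itself ...
chunk = b"<html>a cached web page</html>"
cid = make_xid("CID", chunk)

# ... and a host principal by hashing its public key (placeholder bytes here).
host_pubkey = b"-----BEGIN PUBLIC KEY----- ..."
hid = make_xid("HID", host_pubkey)

def verify_content(xid: str, data: bytes) -> bool:
    """Anyone holding the data can confirm it matches its XID, no lookup needed."""
    return xid == make_xid("CID", data)

print(verify_content(cid, chunk))           # True: the content certifies itself
print(verify_content(cid, b"forged data"))  # False: tampering is evident
```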


Several other areas of interest are represented in other NSF-funded projects, covering aspects of Internet use for which IP is simply not very efficient. The Named Data Networking (NDN) project, for instance, re-imagines the client-server model of network interaction, recognizing that today's predominant uses of the Internet center on content creation and dissemination rather than simple end-to-end exchanges. In place of a protocol that asks "where" on the network a piece of content can be found, NDN proposes one that asks "what" the user is looking for, and then retrieves the content from whatever the nearest "container" might be (such as a router's cached memory). With NDN, data become first-class communication entities, and the "narrow waist" of the Internet is centered on named content instead of host location: IP names the location; NDN names the data. As with XIA, data transmitted across an NDN Internet are intrinsically secure, because every named piece of data carries an integrity signature demonstrating its provenance and rendering it trustworthy regardless of where it presently resides on the network. Successfully implemented, an NDN Internet would make content creation and dissemination vastly more efficient and worry-free than they are today.
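Here is a toy sketch of the retrieve-by-name idea (not the actual NDN codebase; the names and cache structure are invented for illustration, and it assumes the third-party cryptography package for signatures): the producer signs the name-to-content binding once, and a consumer can verify a copy served from any cache, no matter which machine the bytes came from.

```python
# Toy sketch of name-based retrieval with producer signatures.
# Assumes: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

producer_key = Ed25519PrivateKey.generate()
producer_pub = producer_key.public_key()

def publish(name: str, content: bytes) -> dict:
    """Producer signs the (name, content) binding once, at creation time."""
    return {"name": name, "content": content,
            "signature": producer_key.sign(name.encode() + content)}

# Any router along the path may cache the data packets it has forwarded.
router_cache = {}
data = publish("/example/video/segment/7", b"...video bytes...")
router_cache[data["name"]] = data

def fetch(name: str) -> dict:
    """A request asks 'what', not 'where'; the nearest cached copy answers."""
    return router_cache[name]

reply = fetch("/example/video/segment/7")
# Verification depends only on the name, the content, and the producer's key,
# not on which machine served the bytes. Raises if the data was tampered with.
producer_pub.verify(reply["signature"], reply["name"].encode() + reply["content"])
print("verified:", reply["name"])
```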

Researchers working on the NEBULA project, meanwhile, are operating from the assumption that the present trend of migrating data and applications into the cloud ("nebula," by the way, is Latin for "cloud") will only continue. They envision an infrastructure in which cloud computing data centers form a highly reliable and highly secure Internet backbone, and users access and compute directly across a network replete with applications and content in a "utility-like" manner: whenever, wherever, and however they choose. This would be a significant departure from the present Internet, where the cloud is still an aggregation of endpoints on a client-server architecture. With NEBULA, the Internet would truly be the cloud, not merely function like one. The difference is subtle, but it is highly consequential from an engineering standpoint.


In a similar vein, researchers on the MobilityFirst project assume that the Internet should be centered not on interconnecting fixed endpoints like servers and hosts, but on interconnecting users themselves, who increasingly rely on mobile devices and wireless networking to access the Internet dynamically. Their first principle is that mobility and wireless are the new norms for the Internet, so a major thrust of their research is designing an architecture that sustains robust communication despite frequent connections to and disconnections from the network. The MobilityFirst paradigm includes a decentralized naming system, allowing more flexible identification on the network by eliminating IP's effective "global root of trust" in the DNS name resolution service. Like the other architectures already mentioned, MobilityFirst relies on cryptography, in this case to let a name certification service validate user identifiers without a centralized authority, creating an intrinsically secure and simultaneously more efficient way for mobile users to connect to the Internet on demand.
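In the same spirit, here is a rough sketch of a self-certifying identifier for a mobile device (not MobilityFirst's actual identifier format or resolution protocol; the helper names are assumptions, and it again assumes the third-party cryptography package): the identifier is the hash of the device's public key, so any peer can check a claimed identity without consulting a central authority, no matter which network the device is currently attached to.

```python
# Rough sketch: a device identifier derived from its public key is
# self-certifying and stays stable across network attachments.
# Assumes: pip install cryptography
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

device_key = Ed25519PrivateKey.generate()
pub_bytes = device_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# The identifier is the hash of the public key, independent of location.
device_id = hashlib.sha256(pub_bytes).hexdigest()

def verify_sender(claimed_id: str, sender_pub_bytes: bytes) -> bool:
    """Self-certifying check: the public key must hash to the claimed ID."""
    return hashlib.sha256(sender_pub_bytes).hexdigest() == claimed_id

print(verify_sender(device_id, pub_bytes))        # True for the genuine key
print(verify_sender(device_id, b"impostor key"))  # False for anyone else
```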

This is a lot of information to digest all at once, and it only scratches the surface of the many efforts under way to revamp the Internet. Even so, we can already see that the Future Internet will be based on wholly new paradigms for how data is named, processed, stored, and secured across a global network of networks. All of these concepts are currently being tested on NSF's Global Environment for Network Innovations (GENI), and it will be exciting to see how these design trajectories come together over the next decade. By the time the Web turns 40, it may very well have a whole new anatomy.

About the author.

Dr. Brandon K. Chicotsky is a business faculty member at Johns Hopkins University specializing in business communication. Since beginning his university lectureship in 2014, Brandon has taught over 1,000 students in topics ranging from information management to formal research methods. He teaches at both the Harbor East campus in Baltimore and the Washington, D.C. campus of the Johns Hopkins Carey Business School. His research interests center on media branding, with interdisciplinary aspects of human capital valuation, organizational management, and corporate PR. He is currently conducting research on: 1) condition branding and its impact on consumer sentiment after adverse effects; and 2) the history of capital markets pertaining to tech-sector trading. Brandon may be reached at [email protected] or on Twitter @chicotsky.

