An Internet Primer


Lay-folk like yours truly often need quick-and-comprehensive guides to help navigate the strange maze of modernity. John Naughton’s book What You Really Need To Know About The Internet is an indispensable primer for the aforementioned demographic.

It helps helpless halfwits (like yours truly) get a handle on what the Internet medium is, its place in the larger Media Ecosystem, and its relationship with larger forces (historical, cultural, etc.).

Historical Context

The Internet is but one medium in a Media Ecosystem that creates culture.

The term “media” is borrowed from biology: the nutrient mix responsible for the growth of cells is called a “medium”, and it creates life. The lifecycles of the various mediums in the media ecosystem likewise create a living culture.

The growth of media ecosystems and culture through the introduction of new mediums has been going on for thousands of years, so it helps to stretch your gaze far back and take the Long View of History.

People tend to overestimate the short-term impact of a new technology and underestimate its long-term implications.

In the 15th century a guy in Germany was fiddling with a wine press. Through this he found a way to create metal type, oil-based inks, and new ways to cast type in hand molds.

Johannes Gutenberg’s invention of the Printing Press had a massive cultural impact. The machine-made book, with its pagination, cross-referencing, punctuation, and tables of contents, restructured our brains toward the “analytic management of knowledge”. It transformed scholarship, created science, gave birth to “childhood” and the Reformation, and created advertising, among other things.

Again, people tend to overestimate short-term impact and underestimate long-term implications. Legitimate concerns arise over information overload, but perhaps something like rapidly augmented human intelligence will emerge in the long term to overcome it.

It wouldn’t be the first surprise.


Surprises

Surprises are built into the open architecture of the Internet.

The developers of the Internet in the early 1970s had two rules:

1. No Central Command

2. No particular app will be optimized (End-To-End Principle)

The first removed the tendency of hierarchical systems to cut off innovation when it disrupts their main power base or profit source. “If you allow central control of a network,” Naughton writes, “then innovation will proceed at the speed deemed suitable by the controller, rather than by the inventiveness of outsiders.”

The second meant that the system was solely concerned with moving data from one point to another and was morally agnostic about what type of content was moved: porn, email, and Mozart are all handled the same way.
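To make that point concrete, here is a toy sketch in Python (standard library only, all names illustrative): a sender pushes three very different kinds of payload through a UDP socket on the local machine, and the transport moves each one as nothing more than opaque bytes. This is obviously not how the Internet’s routers are built; it just shows “the network only moves data” in runnable form.

```python
# Toy illustration of the end-to-end idea: the transport just moves bytes
# between endpoints and leaves their interpretation to the applications.
import socket

# A receiver bound to a loopback port chosen by the OS.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Three very different kinds of "content": an email fragment, a web request,
# and the opening bytes of a music file. The network treats them identically.
payloads = [
    b"Dear Wolfgang, thanks for the concerto...",
    b"GET /index.html HTTP/1.1",
    b"\x49\x44\x33 Eine kleine Nachtmusik",
]

for payload in payloads:
    sender.sendto(payload, addr)          # just bytes on the wire
    data, _ = receiver.recvfrom(4096)
    print(f"delivered {len(data)} bytes; their meaning is the endpoints' business")

sender.close()
receiver.close()
```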

Naughton categorizes the inevitable surprises as “First Order Surprises”, or things that spring from the Internet’s open architecture (the Web, Napster file-sharing, Malware), and “Second Order Surprises”, or innovations that build upon the first order (Wikipedia and Facebook).

As we can already see, the Internet is different from the Web.

The Web Versus The Internet

Many lay-folk (like yours truly) assume that the World Wide Web is the same thing as the Internet. It’s a flawed assumption: The Web is NOT the Internet.

The World Wide Web is an application that runs on the Internet’s infrastructure.

As alluded to earlier, the Internet (or “inter-networking” system) was developed by computer scientists in the 1970s as a way to transfer information between distant mainframe computers, even when they sat on different underlying networks.

Its open architecture is what led to the First Order Surprise that is the Web.

The World Wide Web was developed by Tim Berners-Lee in the early 1990s as a way to radically expand the kinds of information that could be shared between distant computers on the network, by linking documents together as hypertext.

He did this by:

1. Giving each web page a unique identifier (URI) and machine-readable address (URL)

2. Creating a protocol that allows web clients and servers to request and serve documents (HTTP)

3. Creating client software that allowed people to browse and edit web pages (the ancestors of today’s browsers like Firefox and Safari) and server programs to serve up those pages on demand

4. Creating a standard language for writing pages that any browser could read (HTML); a short sketch of these pieces in action follows this list.
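Here is a minimal sketch of those four pieces working together, using only Python’s standard library (nothing Berners-Lee wrote); example.com is simply a stand-in for any web server. A URL names the page, HTTP asks a server for it, and what comes back is HTML for a browser (or this script) to interpret.

```python
# Minimal walk through the Web's building blocks: a URL names the page,
# HTTP requests it from a server, and HTML is the language it is written in.
from urllib.parse import urlparse
import http.client

url = "http://example.com/"                       # the machine-readable address (URL)
parts = urlparse(url)

conn = http.client.HTTPConnection(parts.netloc)   # a client talking to a server
conn.request("GET", parts.path or "/")            # the protocol doing the asking (HTTP)
response = conn.getresponse()

html = response.read().decode("utf-8", errors="replace")  # the document itself (HTML)
print(response.status, response.reason)           # e.g. "200 OK"
print(html[:200])                                 # the first bytes of the markup
conn.close()
```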

Pretty impressive for a guy fiddling around in his free time without much institutional support. Just like Gutenberg, Berners-Lee couldn’t have known the long-term impacts of what he unleashed.

Tim Berners-Lee, creator of the World Wide Web

Web Evolution

A simple model for understanding the Web sees it as a progression from a web-of-documents to a web-of-linked-data that can be classified into three stages:

Web 1.0: This Read-Only Web formatted documents in a standardized way so they could be put online and made accessible to networked computers. HTML created web pages and the HTTP protocol allowed clients and servers to post and read simple documents. Cookies and JavaScript expanded browser capabilities to make web pages more personalized.

Web 2.0: This Read-Write Web harnessed the collective intelligence of Web users to create wikis and user-generated content, RSS feeds and APIs, as well as the idea of “perpetual beta” for software (a short read-write sketch in code follows these three stages).

Web 3.0: This Semantic Web will focus on human-like reasoning and see the Web as an information guide rather than a catalog of information.
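The jump from 1.0 to 2.0 is easiest to see in code. A Read-Only client only ever fetches pages; a Read-Write client also posts structured contributions to an API and reads them back as data. The endpoint below is hypothetical (substitute any real JSON API), so this sketch only builds the two requests with Python’s standard library and leaves the actual network calls commented out.

```python
# Read-write sketch: Web 2.0 services expose APIs so programs, not just
# browsers, can read content back as data and write new user-generated content.
import json
import urllib.request

API = "https://api.example.com/comments"   # hypothetical Web 2.0 endpoint

# "Write": contribute a piece of user-generated content as JSON.
body = json.dumps({"author": "reader42", "text": "Great primer!"}).encode("utf-8")
write_req = urllib.request.Request(
    API,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# "Read": ask for the accumulated contributions back in machine-readable form,
# much as an RSS reader or a mashup would.
read_req = urllib.request.Request(API, headers={"Accept": "application/json"})

print(write_req.get_method(), write_req.full_url)   # POST .../comments
print(read_req.get_method(), read_req.full_url)     # GET  .../comments

# Against a live endpoint, the requests would be sent like this:
# with urllib.request.urlopen(write_req) as resp:
#     print(resp.status)
# with urllib.request.urlopen(read_req) as resp:
#     comments = json.load(resp)
```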

Analytical Frameworks

In a very short period of time, the Internet has grown into an overwhelmingly complex system that affects nearly all institutions that rely on the management of information.

To better analyze the Internet, Naughton suggests that we use an ecological rather than an economic framework.

The latter was useful in the industrialized information economy of printed books and other broadcast (one-to-many) media where markets were primary. The former is more useful in a networked-information economy with decentralized individual action where, oftentimes, non-market collaboration takes precedence.

The ecological framework views the Internet medium and its media ecosystem as a single functioning unit that experiences change systemically. It understands that keystone species are vital, that a diversity of species is necessary to fill diverse niches, and that coevolution between migrating species will occur as they adapt to new niches.

All these keystone species, diverse niches, and co-evolutionary movements are the result of copying. This holds true in the cultural world just as much as it does in the biological. Copying is essential to human creativity.

Everyone from Handel to Bach to the Enlightenment thinkers borrowed, stole, reworked, and innovated on the ideas of others. Copyright issues pose a vital question: “How do we properly compensate creatives while not hampering the evolution of culture at the same time?”

Intellectual Property rights help expand the conversation beyond the focus of the Internet as a semi-isolated thing and into the realm of society at large.

Networking and The Future

The job of storing information is becoming less tethered to individual computers and more woven into the network of Cloud Computing.

Location-independent resource pooling with ubiquitous access is made possible by enormous server farms owned by large companies (Amazon, Apple, etc.), which rent out computing power and storage to others on a pay-per-use or free basis.
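As a hedged illustration of the pay-per-use side, here is roughly what renting storage from one of those server farms looks like with boto3, Amazon’s Python SDK. The bucket name and file key are placeholders, the bucket is assumed to already exist, and the calls assume an AWS account with credentials configured; every byte stored and every request made is metered and billed.

```python
# Hedged sketch of pay-per-use cloud storage using boto3 (Amazon's Python SDK).
# Assumes AWS credentials are configured and the placeholder bucket exists.
import boto3

s3 = boto3.client("s3")

BUCKET = "my-example-bucket-name"   # placeholder; S3 bucket names are globally unique

# Store a file "in the cloud", i.e. on someone else's pooled hardware.
s3.put_object(Bucket=BUCKET, Key="notes/primer.txt", Body=b"An Internet primer")

# Retrieve it later from anywhere with a network connection; storage and
# requests are billed per use rather than bought up front.
obj = s3.get_object(Bucket=BUCKET, Key="notes/primer.txt")
print(obj["Body"].read().decode("utf-8"))
```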

Pay-per-use is easy to understand, but “free” is only free because one agrees to surrender some privacy in the form of personal information, or to create user-generated content that is largely worthless by itself but can be aggregated into huge data sets and sold to advertisers.

Nicholas Carr calls this “digital sharecropping”, and he is quite skeptical of it. More computing power, but less privacy.

Lack of privacy was essential to George Orwell’s dystopian vision in 1984. The totalitarian government maintained control through mass surveillance, operating upon the things we fear.

This narrative is often contrasted with Aldous Huxley’s vision in Brave New World, where control was maintained through the mass dissemination of cheap entertainment and drugs.

The things we love, love to control us.

Despite the billions of web pages, the mass spread of diverse species and niches, and the mountains of user-generated content and peer-to-peer networking, just a few major websites and companies still account for the majority of Internet traffic.

“This represents an extraordinary concentration of attention on the part of a huge global audience. We use the services of these companies with gleeful abandon and forget that, in the process, they are learning a great deal about us. One day we may come to realize the truth of the old adage: in an information age, knowledge is power.”

Most likely we’ll see a mixture of Orwell and Huxley as we move into the future. Hopefully the “generativity” of the Internet, built upon its open architecture and programmable PCs, can break the stranglehold of the New Moguls before they, potentially, shut it down.