Democratizing the Web, W3C Style

W3C’s Real-Time Communications Working Group

In May 2011, the World Wide Web Consortium (W3C) announced the formation of a new working group tasked with developing standards for client-side APIs (programming interfaces for software that runs in your browser rather than on a remote server) that would enable real-time communications. Writing for ReadWriteWeb, Marshall Kirkpatrick correctly identified this objective as a “potent idea,” a technological development for the World Wide Web that could further democratize it. I want to help explain why.

First, a little background: the W3C is the international standards organization that studies and outlines technical standards for the Web. Without the W3C, users would be unable to open a variety of websites from the same browser because web designers and browser developers would be using different coding languages and protocols. If you’ve ever tried to open a page in Firefox that only renders correctly in IE 8, or built a page that looks good in every browser except IE, you have a sense of why these standards are important and why industry leaders should be encouraged to adopt them. (HTML5 anyone? Everyone?)

What we’re talking about here, however, isn’t just a standard for coding how web pages look, but rather something more fundamental. It’s a standard that can help to bring about a more robust architecture for the Internet, a network design that would be easier to maintain and harder to control.

Types of network architectures

In Chapter 3 of Cybering Democracy, I argued that the structure of the Internet, dependent as it was on a backbone of tier-1 network access points, resembled a decentralized network rather than a distributed network. I offered the image shown at the right, from a 1964 RAND study by Paul Baran, to explain the critical differences between these networking architectures.

Briefly, a centralized system is the most vulnerable because all communications nodes connect through one central server/switch/node. Take down that central server, and you bring down the whole network. A decentralized structure is better, but the nodes are still vulnerable if they only connect to one server.

Wherever computers depend on a single server for connectivity, vulnerability exists. Outages or filtering rules at central servers determine whether computers down the line can gain access at all and what they can gain access to.

In a distributed system, however, every node (or computer) is both a server and a client and can be simultaneously connected to multiple other computers/nodes. Peer-to-peer (P2P) systems are modeled on this architecture. When the same critical data is mirrored on multiple computers, outages affecting one or more nodes (up to a point) won’t necessarily bring down the entire network. Single points of failure (which are also potential points for controlling access) are eliminated in a distributed model.
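The difference can be made concrete with a small simulation. The sketch below (my own illustration, not part of Baran’s study) builds two hypothetical seven-node networks: a centralized “star,” where every node connects only through a hub, and a distributed mesh, where each node links to several peers. Removing the hub node shows how differently the two architectures fail.

```python
from collections import deque

def reachable(adj, start, removed):
    """Count nodes reachable from `start` via breadth-first search,
    treating every node in `removed` as offline."""
    if start in removed:
        return 0
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adj.get(node, []):
            if neighbor not in seen and neighbor not in removed:
                seen.add(neighbor)
                queue.append(neighbor)
    return len(seen)

# Centralized: nodes 1-6 all connect only through hub node 0.
star = {0: [1, 2, 3, 4, 5, 6]}
for n in range(1, 7):
    star[n] = [0]

# Distributed: each node links to several peers (a ring plus chords).
mesh = {n: [(n + 1) % 7, (n - 1) % 7, (n + 3) % 7] for n in range(7)}

# Knock out node 0 in each network and see what node 1 can still reach.
print(reachable(star, 1, removed={0}))  # 1 -> node 1 can reach only itself
print(reachable(mesh, 1, removed={0}))  # 6 -> node 1 still reaches all survivors
```

With the hub gone, the star network is completely severed, while the mesh routes around the missing node. That is the democratic feature in miniature: no single node in the mesh is in a position to cut off, or filter, everyone else.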

It is this last feature in particular that makes distributed network designs more democratic. The nodes that make up the network are equal participants in that they can’t govern what others say and do on the network. They can control only what they themselves choose to share with others, and this participation is voluntary. A network of networks patterned according to the principle of P2P distribution, client-side APIs and real-time communications makes it far more difficult for authoritarian governments to monitor and control data traffic, and I consider that a good thing.

To be sure, it also has the potential of creating what Kirkpatrick refers to as “a permanent lawless zone of connected devices with no central place to stop anyone from doing anything in particular.” This includes the two threats that Internet critics point to the most when they call for a little law and order in cyberspace: pedophiles and terrorists.

I’m also aware that a lot of the data shared on current P2P networks are copyrighted materials: movies, music and software applications that users don’t want to have to pay for. So while the idea that we can have more democratic communications over distributed networks is certainly laudable, it remains an open question whether in fact we are using or would use these networks for a common good (watchdog reporting, open political debate, advocacy and so forth) instead of for personal gain or even for nefarious activities.

Perhaps the answer is that some users will employ distributed network communications for positive social, political and ethical ends and others will not.

Copyright 2017 - CyberingDemocracy.com, Diana Saco and Saco Media LLC. All rights reserved.