April 2, 2010
One aspect of the FCC’s proposed “Open Internet” rules that has received significant attention is the notion that the rules be subject to a “reasonable network management” exception. The network management provision would allow network operators to implement management schemes to control congestion and address other network issues without running afoul of the rules. CDT has repeatedly expressed support for network management techniques that are agnostic to the applications that subscribers use, with the goal of allowing application developers to remain free to innovate without having to worry that their products will be specifically targeted for throttling by individual ISPs.
Application-agnostic congestion management may have received an additional boost last week in a venue far removed from the FCC’s proceeding: the 77th meeting of the Internet Engineering Task Force (IETF), one of the leading technical standards bodies for the Internet. Last year, a new IETF activity known as Congestion Exposure (or “CONEX”) was proposed, with the idea of standardizing a mechanism that gives network nodes greater insight into congestion on the network. Much progress was made at last week’s meeting toward defining the scope of the work more precisely, and it looks increasingly likely that CONEX will soon be chartered as a new working group.
The idea behind CONEX is simple: give network nodes (and the companies that operate them) a way to know and account for the volume of congestion on the network at the very times when the network is congested. The technical details are a little hairy, but the basic notion is that Internet packets would carry information about the congestion they encounter in their IP headers, so that any router along the path could read it.
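To make the idea concrete, here is a minimal sketch of what a router could do with that header information. The CONEX wire format was still being defined at the time, so this is not the actual protocol; it simply models congestion marks on the two ECN bits already present in the IP header’s traffic-class byte, and tallies “congestion volume” as the bytes of packets carrying a mark. The packet representation is hypothetical.

```python
# Illustrative sketch, NOT the CONEX wire format: a router counts
# "congestion volume" as the total size of packets whose IP headers
# carry a congestion signal, modeled here on the ECN "Congestion
# Experienced" (CE) codepoint in the traffic-class byte.

ECN_CE = 0b11  # CE codepoint: both ECN bits set


def is_congestion_marked(tos_byte):
    """True if the ECN field (low two bits of the traffic-class byte) is CE."""
    return (tos_byte & 0b11) == ECN_CE


def congestion_volume(packets):
    """Sum the sizes of congestion-marked packets.

    `packets` is a list of (tos_byte, size_in_bytes) tuples -- a
    stand-in for packets a router observes on the path.
    """
    return sum(size for tos, size in packets if is_congestion_marked(tos))


# Three packets pass through; only the second carries a congestion mark.
packets = [(0b00000010, 1500), (0b00000011, 1500), (0b00000001, 500)]
print(congestion_volume(packets))  # prints 1500
```

The point is that this counter rises only when the network is actually congested, unlike a raw byte counter, which rises whenever anyone sends anything.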
Network operators currently have a variety of metrics at their disposal to help them craft congestion management solutions (see my CONEX presentation), but none of them is based on congestion volume – the quantity that CONEX would expose, and one that is currently difficult or impossible for network operators to measure. Some operators measure the volume of traffic that an individual subscriber produces, or the rate at which individual subscribers send traffic, and use those metrics as barometers of network congestion. But neither rate nor volume reliably indicates actual congestion, because congestion depends on the shared behavior of all the network’s users – an individual who sends a lot of data, or sends it very fast, will not necessarily congest the network if no one else is sending at the same time. As a consequence, congestion management systems that use pre-determined thresholds to limit or throttle users’ bandwidth or transmission rates may over-correct in times of little congestion, penalizing users more than is necessary to keep the network uncongested.
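A toy example makes the gap between raw volume and congestion volume vivid. The users and numbers below are entirely hypothetical: one user moves a huge backup overnight when the network is empty, while another sends a much smaller amount at peak time when the network is congested. A volume-based threshold would flag the first user; a congestion-volume metric would attribute all of the actual congestion to the second.

```python
# Hypothetical transfers: (user, bytes_sent, was_the_network_congested).
# Only bytes sent while the network was congested count toward
# "congestion volume" -- the quantity CONEX would expose.

transfers = [
    ("alice", 10_000_000_000, False),  # big overnight backup, empty network
    ("bob", 500_000_000, True),        # smaller transfer at peak time
]


def totals(transfers):
    """Return per-user raw volume and per-user congestion volume."""
    volume, congestion_volume = {}, {}
    for user, size, congested in transfers:
        volume[user] = volume.get(user, 0) + size
        congestion_volume[user] = congestion_volume.get(user, 0) + (
            size if congested else 0
        )
    return volume, congestion_volume


volume, cvol = totals(transfers)
print(volume)  # alice dominates raw volume...
print(cvol)    # ...but bob accounts for all of the congestion volume
```

A threshold on `volume` would penalize alice, who caused no congestion at all, which is exactly the over-correction described above.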
The other approach that operators have taken is to manage congestion on a per-application basis, and it is precisely this technique that CDT and others fear will allow ISPs to pick and choose which applications succeed and fail on the network. But apart from concerns about Internet openness, application-based congestion management suffers from several other drawbacks: the equipment needed to identify and throttle particular applications and protocols can be expensive to deploy and maintain, and these solutions risk an endless game of cat-and-mouse with application developers seeking to circumvent whatever throttling network operators put in place. CONEX, on the other hand, would provide a content- and application-agnostic building block on which congestion management systems free of these deficiencies could be built.
So while we (and the rest of the Internet policy community) are busy putting together reply comments in the FCC proceeding, it’s nice to see progress in the technical community towards standardizing a technology that may present a superior and application-agnostic alternative to existing congestion management solutions. While it may be some time before such solutions see wide deployment, the progress of CONEX is a useful reminder of the power of technical standards in responding to some of the Internet’s trickiest challenges.