Net Neutrality: No Easy Answers

Comment piece for Media International Australia, June 2006.
http://www.emsah.uq.edu.au/mia/

Danny Butt, Suma Media Consulting

The concept of “Network Neutrality” has received a great deal of attention in the press recently, mostly due to so-far unsuccessful US telecommunications legislation proposals that would have required Internet Service Providers (ISPs) to carry any and all Internet traffic equally, rather than allowing them to prioritise certain traffic or charge differential rates for different kinds of content.

For the proponents of Net Neutrality, there has been and should be a clear separation between the provision of physical network infrastructure and the provision of content. Much as in the telephone system, ISPs should be treated as “common carriers” whose business is providing access to all content on the Internet. In the U.S., for example, legislation until recently required cable television providers to carry all content packages and provide the latest technologies in all areas they served, even if it would have been more economically efficient to provide only those content packages with which they had direct relationships. The policy justification for legislating this openness springs from the limited competition in the telco/ISP/broadband market: high switching costs and limited choice mean that users cannot simply take their business elsewhere.

Net Neutrality proponents wish to see something similar for the Internet: a requirement that all ISPs provide access to all content on the Internet without different charges or reduced performance for content not approved or owned by the ISP itself. They want to see ISPs continue to charge purely on speed and volume, rather than providing some services free or cheaply while making others more expensive.

At first glance, the issue seems like a straightforward benefit for consumers, worthy of public policy interventions. But for the ISPs and the providers of network services, different issues are at play that make it less simple. The Internet has transformed from a research network focussed on text and file transfer into a highly flexible media and communications platform capable of emulating both the phone system and cable television. ISPs face business pressure to treat Internet traffic differentially for two reasons.

Firstly, Quality of Service (QoS) policies are needed to prioritise important, time-sensitive traffic over less urgent traffic when there are more requests than there is bandwidth available. For example, if you are video-conferencing, where a steady data rate is necessary for uninterrupted viewing, ISPs would like to be able to ensure under busy conditions that you can see your conference uninterrupted, even if it slows down the person downloading a copy of a software programme next door. Similarly, if you are using Skype or a similar Voice-over-IP (VoIP) application to have an audio conversation, you would like to think that this could work even if your next-door neighbour is downloading movies via a peer-to-peer file-sharing application, where a few seconds’ delay is not going to make a difference to their experience. In Australia and New Zealand, a number of ISPs have already undertaken “bandwidth shaping” trials, limiting the bandwidth available to traffic on the “ports” commonly used by peer-to-peer applications.
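
To make the mechanism concrete, the sketch below (in Python) is a toy illustration of strict-priority queueing, with hypothetical traffic classes rather than any ISP’s actual policy: when the link is congested, packets classified as voice or video are sent before bulk peer-to-peer packets, which simply wait a little longer.

from collections import deque

# Lower number = higher priority. These classes and their ordering are
# purely illustrative, not a real ISP policy.
PRIORITY = {"voip": 0, "video": 1, "web": 2, "p2p": 3}

class PriorityScheduler:
    """Strict-priority queueing: one FIFO queue per traffic class."""

    def __init__(self):
        self.queues = {cls: deque() for cls in PRIORITY}

    def enqueue(self, packet):
        # 'packet' is assumed to arrive already labelled with a class,
        # e.g. inferred upstream from port numbers or packet inspection.
        self.queues[packet["cls"]].append(packet)

    def dequeue(self):
        # Always serve the highest-priority non-empty queue, so voice and
        # video packets never wait behind bulk peer-to-peer transfers.
        for cls in sorted(PRIORITY, key=PRIORITY.get):
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

link = PriorityScheduler()
link.enqueue({"cls": "p2p", "id": 1})
link.enqueue({"cls": "voip", "id": 2})
link.enqueue({"cls": "p2p", "id": 3})
print([link.dequeue()["cls"] for _ in range(3)])  # ['voip', 'p2p', 'p2p']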

In these examples, “non-neutral policies” that prioritise some types of traffic over others are an essential part of giving customers what they want. The problem is that the line between benefit for the customer and benefit for the ISP’s bottom line becomes blurred. This is the case when, for example, the ISP is also a telecommunications provider whose phone revenues are being cannibalised by the extensive use of VoIP, and the suppression of this traffic for “QoS purposes” just happens to reduce take-up of VoIP. Or when the ISP has a relationship with a content producer (e.g. the major record labels), in which case preventing peer-to-peer traffic may play a role in encouraging users to download music from the ISP’s own website.

The second business pressure arises when next-generation networks offer a “triple play” of voice, Internet, and movies from one provider. Here the ability to deliver specialised content services becomes part of the value proposition for the network and motivates the consumer to purchase these services, and differentiating between QoS discrimination for valid reasons and for anti-competitive ones becomes even more difficult. Some ISPs argue that without the ability to guarantee certain usage patterns they will not be able to fund the new networks and innovative services. For example, they would say, if you were considering signing up with ISP X for voice/Internet/movies but read a review that its voice reliability was not 100%, this would inhibit take-up of these new services.

While many small producers and consumer-rights advocates (and large web companies not reliant on deals with highly branded content industries) have promoted Net Neutrality as the online equivalent of the carriage deals between cable television networks and content providers, there are more complex interactions between content and infrastructure as intensive interactive traffic becomes the norm. In particular, some of the new functionality, such as that found in interactive television and gaming, relies on a sophisticated relationship between content and hardware.

A useful analogy can be seen in the gaming console market, where console manufacturers need to innovate at the hardware level as well as negotiate deals with content providers to create a portfolio of available titles [1]. By maintaining licensing control over who can develop titles, console manufacturers are able to capture up to a third of the retail value of each game sold, and this is integral to their profits, because price pressure on consoles means that the margin on the hardware itself is low. For console manufacturers, the suggestion that all consoles should be able to play each other’s games is simply not viable.

The Internet, and the World Wide Web in particular, was developed around a separation of traffic protocols, user environment, and content formats. Part of what made the web so ubiquitous was that you could view content on any operating system (Windows, Unix, Mac), via any browser (Netscape, Internet Explorer), over any kind of Internet connection (modem, LAN, wireless). This is what gave the network its sense of neutrality.
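
To illustrate that separation, the short sketch below speaks the web’s open, text-based protocol directly; the server name is a stand-in, and the point is simply that any program on any operating system, over any kind of connection, can make the same request and get the same response.

import socket

HOST = "example.com"  # stand-in server; any web host behaves the same way

# An HTTP request is just plain text defined by an open standard, so no
# particular operating system, browser, or connection type is assumed.
request = (
    "GET / HTTP/1.0\r\n"
    "Host: " + HOST + "\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        response += chunk

# The status line comes back identically whether the client is a browser
# on Windows, a script on Unix, or a handheld device on a wireless link.
print(response.split(b"\r\n")[0].decode())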

However, the shift in the web from a predominantly text-based, asynchronous information exchange platform to sophisticated real-time applications (audio-visual media in particular) has resulted in more diverse, often proprietary platforms that link content, transport, and interface in new ways. Examples include Windows Media Center, or Apple’s FairPlay DRM format/iPod hardware/iTunes software. This shift is driven by a combination of user-experience requirements (users expect integration) and an attempt to gain control of the hardware-software environment for growth areas of content, allowing multiple revenue streams and brand control along the lines of the console model. The degree to which this is a prerequisite for network innovation or constitutes anti-competitive behaviour is a policy question whose dimensions are complex and whose remedies are unclear.

A further complicating factor is that the highly branded content (music, movies) driving uptake of high-speed data services predominantly comes from offline sources, where distributors not only controlled but usually also financed production. This is a very different model from the early Internet, where content was sometimes funded by e-commerce or advertising, but more commonly produced on a non-commercial basis. Or to put it another way, you can’t charge people a premium for much of the text-only Internet, but you can for episodes of Desperate Housewives. And if you’re an ISP charging for access to those episodes, you’re probably paying a lot of money for the rights to show them, so you are going to want to prioritise access to them over other video content.

The question of what viable economic models will look like under next-generation networks is central to the Net Neutrality debate. On one hand, it seems unrealistic to expect that the vertically integrated content and distribution model has no place on the Internet – excluding it by legal means will probably delay the introduction of new, efficient distribution models that users want (see, for example, the role of iTunes in kick-starting the digital music downloads business). But it is also true that the public policy implications of Internet and telephone connectivity are much more substantial than those of a console operator or movie theatre: when people discuss the importance of information literacy they are not usually talking about access to games on an Xbox or being able to watch a Disney film. There is a genuine public need for effective access to email and Internet communications.

Yet in the new Internet networks all these kinds of information are delivered through the same mechanism. A large part of the problem is that people in the US (in particular) assumed that the Internet was public because the protocol for transferring information is public. But the actual physical networks are owned primarily by private entities who interconnect via market transactions, and those entities have many incentives to recoup their investment and seek profit by tying their access offering to what people actually want, i.e. content. This is especially so when, as Richard Bennett has noted, there is no money to be made in being an ISP without those services, and the Internet backbone providers are almost always telcos who offset their costs with voice services [2].

The business models were different back in the early 1990s when it was primarily research institutions who owned the pipes, but that’s not “neutral” or “public” in the way a government service is public. At least part of the blame for the current predicament can be laid at the feet of the “traditional Internet folks” who avoided government involvement in the Internet like the plague, and believed that a free market was the only way of preserving freedom of expression. A review of the history of other public utilities shows that in a market environment, governments might be the only mechanism that can realistically be subject to effective political influence in the public interest.

In summary, the Net Neutrality issue is not as simple as it might first appear. Simplistic anti-discrimination legislation could genuinely suppress innovation, yet imperfect competition is a feature of these networks that requires public policy remedies. The most important work over the next few years will be clarifying which public benefits of network access matter most, and developing mechanisms for supporting them. In a rapidly changing network environment, these mechanisms will need to be more sophisticated than simply arguing for the status quo, or worse, implementing poor legislation that is unresponsive to the business models that will shape the Internet’s future.

Danny Butt is a consultant in new media, culture, and development, and partner at Suma Media Consulting.

[1] For an excellent overview of the value chains in this sector, see Johns, J. (2006) “Video games production networks: value capture, power relations and embeddedness”, Journal of Economic Geography 6(2): 151-180; doi:10.1093/jeg/lbi001.

[2] See comment on Tim Berners-Lee’s weblog: Berners-Lee, T. (2006) “Neutrality of the Net”, http://dig.csail.mit.edu/breadcrumbs/node/132, accessed 27 May 2006.