Net neutrality - a net-head view

16 Jul 2010

This is a guest post from Jay Daley, Chief Executive of .nz Registry Services

Net neutrality is a complex issue with some strongly opposed views that at times sound more like religion than sensible argument, so this article is an attempt to provide some sense for those still not completely sure what it is all about.  Be warned, though, that this article is not an unbiased appraisal of the arguments: it is written from the perspective of a confirmed net-head.

If you are wondering why this matters to a domain name registry, well one factor is that DNS is subject to a great deal of 'non-neutral' behaviour from ISPs, ranging from blocking of servers to actively rewriting DNS messages sent from a provider to a customer.  This is an area of intense debate within the DNS world and it is only because DNS is generally regarded as underlying infrastructure that this is not more widely known.  Another factor is that our goal of providing an optimum DNS service to Internet users relies on local peering, and if ISPs take action to fragment that, for the same reasoning as other non-neutral decisions, then that hinders us in achieving that goal.
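To make the rewriting point concrete, here is a minimal sketch, not from the article itself, of how one might check whether a resolver rewrites answers for names that do not exist.  It assumes a recent version of the third-party dnspython library (2.x API), and both the probe name and the resolver addresses are placeholders of my own choosing.

    import dns.resolver

    def rewrites_nxdomain(resolver_ip, probe_name="this-name-should-not-exist-abc123.example.com"):
        """Return True if the resolver hands back addresses for a name that should not exist."""
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        try:
            answer = r.resolve(probe_name, "A", lifetime=3)
        except dns.resolver.NXDOMAIN:
            return False   # an honest NXDOMAIN: no rewriting observed
        except Exception:
            return False   # timeouts and the like tell us nothing either way
        # Any A records for a non-existent name suggest the response was rewritten.
        return answer.rrset is not None and len(answer.rrset) > 0

    if __name__ == "__main__":
        # 192.0.2.53 is a documentation-range placeholder standing in for an ISP resolver.
        for ip in ["192.0.2.53", "8.8.8.8"]:
            print(ip, "rewrites NXDOMAIN:", rewrites_nxdomain(ip))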

Traffic management

To start with, we need to tackle the growing push to equate network neutrality with traffic management, when the two are quite different.  Traffic management by definition is about protocols and pipes, about balancing services at the protocol level within the resource constraints of the transmission media.  So a goal of traffic management might be to ensure that a real-time service like VoIP is delivered well, or it might be to ensure that other real-time services like IP TV do not saturate a link.

On the face of it this might seem entirely reasonable.  It is apparently non-discriminatory as it works at the protocol level and it seems to be geared towards providing a better service for customers.
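As an illustration only (this is my own toy sketch, not anything an ISP actually runs), protocol-level management boils down to classifying traffic by what the middle box recognises, with everything else falling into a default bucket:

    # Toy classifier: bucket traffic by well-known destination port.
    KNOWN_CLASSES = {
        5060: "voip",       # SIP signalling
        554:  "streaming",  # RTSP
        80:   "web",        # HTTP
        443:  "web",        # HTTPS
    }

    def classify(dst_port: int) -> str:
        """Assign a packet to a service class based only on its destination port."""
        return KNOWN_CLASSES.get(dst_port, "best-effort")

    print(classify(5060))   # voip
    print(classify(9999))   # best-effort: anything the box does not recognise

The point to notice is that anything the classifier has never heard of gets the default treatment, which is part of what the next paragraph means by protocols getting 'frozen'.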

However, there are strong arguments against this form of traffic management based on the end-to-end principle, namely that true innovation has demonstrably come from end points managing the traffic themselves, and that the moment someone starts to manage the traffic in the middle the protocols get 'frozen' and innovation stops or diverts.  What if ISPs had managed traffic to strongly support HTTP just a few years ago - would YouTube or Skype ever have got off the ground?  Unfortunately these arguments take a long time to recognise and internalise, as evidenced by the age of their proponents (Vint Cerf et al), and new generations are unaware of their impact.

The place where most active traffic management occurs is on the border of the enterprise, at the firewall, where some protocols are allowed, some blocked and some shaped.  The impact of this is generally seen to be good for the Internet because it reduces criminal behaviour, but it creates more subtle problems that the Internet is struggling with:

  • Some firewalls do deep packet inspection and check protocol conformance.  For example, one well-known make of firewall checks DNS packets to see what resource record types are being used and blocks those that it does not recognise.  This seemingly simple practice has come close to crippling the development of DNS, as concern about whether a new resource record type will be usable by significant numbers of users causes considerable uncertainty in the minds of protocol developers.  (A rough sketch of how one might probe for this sort of filtering follows this list.)
  • There are increasing efforts to tunnel one protocol inside another to bypass the blocking of particular protocols.  This might seem like it is criminally motivated, but generally it is a response from manufacturers to their customers, who are unable to use their product due to corporate policies that may indeed be just corporate bureaucracy.  Tunneling in turn leads to more deep packet inspection, which in turn leads to more protocol freezing, and so on.
  • Some of the basic diagnostic tools available to network operators, such as ICMP, are routinely blocked on the broad assumption that the less you expose about your network the safer you are.
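The record-type filtering mentioned in the first bullet is easy enough to probe for.  Here is a rough sketch of mine, again assuming dnspython 2.x; the hostname and the selection of record types are purely illustrative.

    import dns.exception
    import dns.resolver

    def probe(name="example.com", rdtypes=("A", "TXT", "NAPTR", "SPF")):
        """Try a mix of common and less common record types and report what happens."""
        r = dns.resolver.Resolver()
        results = {}
        for rdtype in rdtypes:
            try:
                r.resolve(name, rdtype, lifetime=3)
                results[rdtype] = "answered"
            except dns.resolver.NoAnswer:
                results[rdtype] = "no data, but the query got through"
            except dns.exception.Timeout:
                results[rdtype] = "timed out - possibly dropped somewhere in the middle"
            except Exception as exc:
                results[rdtype] = "error: " + exc.__class__.__name__
        return results

    if __name__ == "__main__":
        for rdtype, outcome in probe().items():
            print(f"{rdtype:6s} {outcome}")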

This is not to say that traffic management by firewalls is bad per se, just that the seemingly sensible use of them has unintended consequences that are distorting the Internet, and it is not hard to imagine how this will scale upwards if traffic management by ISPs becomes even more sophisticated.

The basic claim against network neutrality

Net neutrality is quite different from traffic management: it is entirely about the economics of Internet connectivity and the belief of some ISPs that this is a two-sided market they are being denied access to.  At the recent Telecommunications and ICT Summit the often-repeated argument was put quite clearly: some ISPs believe that they incur all the costs while content providers reap all the profits.

To quote that article: "The dilemma of over the top providers such as TradeMe, eBay and Google making the money while the telecommunications industry incurs the cost is still unresolved."

The economics of Internet connectivity

As an Internet person, when I look at the economics of the network they are quite simple.  First there is the connectivity.  Content providers have contracts with locally connected ISPs to carry their content, those ISPs in turn have contracts with other ISPs, and we follow the contract chain right down to the home user, who has a contract that their ISP will connect them to the service they request.

Some people think that when you pay for Internet access you pay to join a cloud and that's it.  For the consumer it should look like that, but for ISPs it is very different and always has been.  ISPs, as well as paying for speed and data volumes (traffic) as consumers do, also pay specifically for routes.  If they want access to international routes then they pay extra for that compared to paying for national routes.  That's the nature of the transit market: to provide access to those routes that it would be too expensive for an ISP to patch a cable to.

And that ultimately is the only way the Internet can and does work, with contracts to exchange traffic and routes (sometimes symmetric and sometimes not).  The scale of the Internet and the physical topology of the planet mean that every ISP cannot connect directly to every other ISP; there have to be intermediaries, sometimes several layers, who carry traffic and routes between ISPs.

Nowhere in this model is there such a thing as a free Internet connection port where the content providers have secretly connected their kit and so avoided paying for connectivity.   Everybody pays, everyone is connected, that's the Internet.

But the view from some ISPs, particularly those that were once just telcos, is that they are only getting a fair payment if everyone who has data carried along their pipes pays them directly for that service, never mind what intermediate contracts are in place.  Yet the ISPs making these claims already have a full contractual framework around them.  Their consumer customers pay them to deliver the traffic to them that they want, from whatever content provider, and from the money the ISP receives from the customer they pay their transit providers the cost of delivering non-local content.  So when an ISP wants the content provider to pay them to send data to their customers (never mind that the content provider has paid someone else to send it), whilst also charging the consumer the full cost of receiving it, then this is simply double-dipping.

At a strategic level, what they are effectively attempting is to disintermediate the global Internet connectivity market, the transit providers, and force content providers to deal only with them, the last mile ISPs.

The economics of Internet investment

The second part of the economics is the question of what constitutes the network.  As national fibre networks are being implemented across the world, the last mile ISPs with existing infrastructure have been making the case that they will be putting in all the investment and the content providers will be getting all the reward.

To quote the article above again:  "The telcos are being forced to invest in an infrastructure that is unlikely to provide the same revenues as the copper network. And to compound matters it appears that the riches that are to be gained in a fibre network may be taken by those companies that haven't paid a cent towards it - Google and Apple are the global examples most often cited."

This view assumes that the Internet is just pipes, when it is obviously much more than that - it is the pipes, the end devices (servers, printers, desktops, phones, etc), the software, and the content, all of which cost money and all of which make the network.  Google have over 1 million servers, which is a huge capital investment however you measure it; then there is the software on top of that, and paying ISPs to deliver their data is not cheap either.  A case could probably be made that it is the ISPs who are the laggards as far as investment goes, relying on a copper network that was put in place decades ago.

Implications of the non-neutral view of the world

If we assume for a minute that net neutrality were abandoned and try to envisage what that would mean for Internet users, then we end up with a very different Internet, with some new characteristics:

  • Discriminatory pricing policies, where individual content providers can suddenly be blocked or rate limited unless they pay an ISP for that ISP's customers to reach them (even though that ISP's customers are already paying for that service).  I'm sure ISPs in favour of the two-sided market would claim that this will be equitable and above board, but how exactly will they measure traffic to ensure that?  If it is to be by IP address then shared hosting will suffer and big sites with multiple IPs will escape (a small sketch after this list illustrates the problem); if it is to be by domain name, then understanding what is a sub-domain and what is a delegation is critical to making this work.  An ISP will need to keep track of the various endpoints that they define as a single content provider, and they can't do that for everyone, so it will inevitably be discriminatory.
  • Complexity at a huge scale.  Suppose the traffic from .nz nameservers hits a particular level (what with DNSSEC and all) and an ISP decides to charge us for delivering it to their customers: do we then pay them, or wait until more demand it?  And since presumably they can't all charge the same amount, as that would be price fixing, do we now need to deal with 1000 ISPs globally?
  • Confusion, with the consumer not knowing what service they are getting from where.  Imagine having to look up a directory to see what web sites you can access at what speed, depending on what they have paid your ISP.  Mind you, there are other reasons this last one might happen anyway, as covered below.
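On the IP address point in the first bullet: a quick sketch of my own, using only the Python standard library, shows why an address is a poor proxy for a content provider.  The hostnames other than www.google.com are placeholders.

    import socket

    def addresses(name):
        """Return the set of IPv4 addresses a hostname currently resolves to."""
        return {info[4][0] for info in socket.getaddrinfo(name, 80, socket.AF_INET)}

    # A large site answers from many addresses (and they change over time),
    # while shared hosting puts many unrelated names behind a single address.
    for name in ("www.google.com", "small-site.example", "other-site.example"):
        try:
            print(name, sorted(addresses(name)))
        except socket.gaierror:
            print(name, "does not resolve (placeholder name)")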

Those are all-round disadvantages, whatever side of the debate you are on.  There is also the change in behaviour of content providers to consider.  As all net-heads know, the Internet routes around failure; that's the way it was designed.  Content providers will take whatever action is economic for them to avoid being caught in the aggregation-based charging of ISPs - rotating IP addresses, using multiple domains, peer-to-peer caching, content obfuscation to prevent deep packet inspection and so on.  If the bell-heads think that spotting the big content providers will continue to be easy, they are deluded.

Above all, none of that is going to improve the Internet.

Peering

In case it is not obvious, this issue is at the heart of the peering disconnect in NZ because some believe that the more you peer the less leverage you have for disintermediation.  Unfortunately this is fallacious reasoning. 

To illustrate the fallacy, take an example where an ISP's customers draw down a noticeable amount of content from one provider and look at three scenarios:

  1. Where the content is non-local. 

    The ISP pays $L for the cost of local distribution and $T for the cost of transit to get that content, so the total cost is $L+$T

  2. Where the content is local and the content provider pays the ISP to have it delivered.

    The ISP pays $L for the cost of local distribution and receives $l as payment, so the total cost is $L-$l, which may possibly, at a pinch, be close to 0.

  3. Where the content is local and freely exchanged (settlement-free peering).

    The ISP pays $L for the cost of local distribution in total.

We end up with ISPs at scenario 1 who want to move to 2, and therefore don't want to move to 3 as it stops them getting to 2 - which means they would rather pay $L+$T than just $L, in pursuit of some vain strategy of reducing it to $L-$l.
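To make that comparison concrete, here is the same arithmetic with made-up per-unit costs; the specific numbers are mine and purely for illustration.

    # L = cost of local distribution, T = cost of transit,
    # l = what a content provider might pay the ISP, all per unit of traffic.
    L, T, l = 1.00, 4.00, 0.80

    scenario_1 = L + T   # non-local content fetched over transit
    scenario_2 = L - l   # local content, content provider pays the ISP
    scenario_3 = L       # local content, settlement-free peering

    print(f"1. transit:               {scenario_1:.2f}")   # 5.00
    print(f"2. paid local delivery:   {scenario_2:.2f}")   # 0.20
    print(f"3. settlement-free peer:  {scenario_3:.2f}")   # 1.00

Refusing scenario 3 in the hope of one day reaching scenario 2 means carrying on paying scenario 1's cost in the meantime.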

But we know that no ISP has even a snowflake's chance in hell of achieving 2.  The Internet community will resist it, the big content providers will fight against it, the consumers will vote with their feet and the regulatory authorities will intervene.

It is commercial madness but that is the way too many ISPs operate.

A better business model

The real financial issue here is one of margins.  The margin on content that has to be bought in via transit contracts is so much smaller than the margin on local content, and that in turn appears so much smaller than the margin the big content providers earn.  But rather than trying to snatch margin from the transit providers and content providers, the non-neutrals need to understand what it is about the content providers' business model that is successful and emulate that - and no, it isn't freeloading off someone else's bandwidth.

The content providers are selling a service and that's what people are willing to spend their money on, not access, because the benefits of a service are direct and the benefits of access are only indirect.

To give you an example, I have a phone that cost several hundred dollars, and the only recognition of that by my mobile access provider is a little configuration script to set the voicemail number and the Internet access point.  That's it, nothing more at all.  If I want to check my balance I have to dial a number; if I want to see my call log then the phone has a record; and if I want to see how much my various calls have cost then I have to wait for the bill.  And those are just the most basic interactions I have with the phone company, let alone anything sophisticated like home automation.

Confusion ahead?

If national fibre networks provide the benefits that many hope they will, then they might create some behaviours that lead to some of the confusion identified above.

I found out entirely by accident the other day that there is a local Internet TV streaming service that my ISP allows access to independent of any data caps.  The reason for this is probably twofold - they peer locally with the provider, so the cost of distribution is low, and they want to try to break the Sky stranglehold on TV content.  But for me it could be a nightmare, because I don't have any technological support to help me identify and remember which sites are zero-rated in this way.  What happens if I start using it and one day the two companies fall out and I don't see the notification until I get my next bill?

Admittedly this is only one site, so I'm not really going to have many problems, but if national fibre networks lead to much higher cost differentials on local versus remote content then this problem will rapidly expand and a technological solution will be necessary.  Hopefully ISPs will realise that this is their job, as part of the service they sell, and not leave it to Google to do for them.

Final word from history

The whole debate about net neutrality and traffic management is actually a battle in the proxy war between opposing ideologies of how to build a network: the bell-heads vs the net-heads.  We net-heads have been right so far every single step of the way (packet switching, end-to-end, open protocols, open institutions, freedom of access, freedom of content, global in nature, and so on) and we should not give in now if we want the Internet to continue being the force for change that it has been.

Comments

Ironic, a true NET.HEAD would not be running a Centralized Telco-Style (Bell.Head) DNS Registry.

The hypocrisy is jaw-dropping.

How would you define a "decentralised DNS registry"?  Is there such a thing?  And how does that model have anything to do with the nethead/bellhead dichotomy?