(Update: taxes, too; click through for just the update.)
I was taking a few minutes to check my newsfeed a few mornings ago, enjoying the final days of autumn before the polar blast that hit Colorado this week, when I started reading them: the polarized, ignorant perspectives on the misnomer that is "Net Neutrality."
First, there are the apparently ignorant politicians making comments both pro and con. They oversimplify the issue and make it seem obvious to their constituents that their view is right. The fact is, however, that neither "side" is right about this issue. As is typical when politicians attempt to stick their controlling efforts into science and technology, they damage the good with the bad and crush the possible benefits to the United States.
So, if you really want to understand the issues and the way that this should play out, read on. If you'd rather just pick a side and go into pitched battle, feel free. Just leave me out of it.
How the Internet Works
If you're an IP engineer, you can skip this part. If not, take a moment to understand some of the basics about how the Internet actually moves all that data around. It'll help you understand the rest of what's important about how the network gets managed and what is allowed and what's not.
Although it may seem like the web page, video, or song you're downloading is one continuous stream of ones and zeros, nothing could be further from the truth. Every last bit of data flowing across the Internet is broken into chunks called packets. These packets are typically a bit less than 1,500 bytes long, and it takes many of them to constitute a single one-way communication. The packets do not necessarily take the same path from source to destination; each individual packet is routed on its own by the hardware and software that make up the Internet. As a result, packets may arrive out of order, some may be lost, and others may be corrupted. The receiving system works with the sender to reconstitute the original data. You often see this as buffering, as enough reconstituted data is collected ahead of playback that you don't notice any interruptions.
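To make that concrete, here is a toy sketch of packetization and reassembly. The function names and the 1,500-byte size are illustrative only (real IP fragmentation and TCP reassembly are far more involved), but the core idea is the same: number the chunks, send them independently, and put them back in order on arrival.

```python
import random

MTU = 1500  # typical maximum packet payload, in bytes (illustrative)

def packetize(data: bytes, mtu: int = MTU) -> list[tuple[int, bytes]]:
    """Split a byte stream into sequence-numbered chunks ("packets")."""
    return [(seq, data[i:i + mtu])
            for seq, i in enumerate(range(0, len(data), mtu))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reconstitute the original stream, whatever order packets arrived in."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"x" * 5000           # pretend this is a chunk of a video stream
packets = packetize(message)
random.shuffle(packets)         # the network may deliver packets out of order
assert reassemble(packets) == message
```

Buffering, in these terms, is just holding the reassembled output until enough of it has accumulated that playback can proceed without gaps.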
It is critically important to understand this aspect of the Internet before considering how you want to govern it and what rules you insist on creating. Here's why.
How Different Data Needs Different Internets
There are a number of types of data used commonly on the Internet, and hundreds more that most people never experience. Focusing on just the common ones, consider these:
- Interactive voice and video. These data require near real-time delivery and controlled streaming. Large gaps between packets received will cause freezing and other issues with the interaction, effectively making the communication unusable. We have all had voice and video over the Internet freeze or fail, and this is why.
- Streaming voice and video. These data require controlled streaming just as interactive voice and video do, but they can be buffered to make up for some of the vagaries of an unreliable network. As long as the stream of packets continues to arrive at a predictable rate, the results are good, since the communication isn't interactive. If the packets are throttled, corrupted, or dropped, however, the experience is poor. Most of us have seen Netflix or iTunes drop in quality due to poor network performance.
- Bulk data. These data do not have time or delivery constraints, and include most web data, email, downloads, and the like. This data can have packets with issues, but the ultimate goal is simply to get all of them to the destination within a reasonable timeframe so that the file will be available for use, regardless of whether it's rendered on a browser screen, played on an mp3 player, synced to a Dropbox folder, or read on the screen.
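The three classes above amount to a simple priority ranking. The sketch below makes that explicit; the application names and the mapping itself are hypothetical (real networks typically infer traffic class from port numbers, DSCP markings, or packet inspection rather than from labels like these).

```python
# The three classes above, ranked: lower number = more latency-sensitive.
INTERACTIVE, STREAMING, BULK = 0, 1, 2

# Hypothetical mapping from application type to traffic class.
TRAFFIC_CLASS = {
    "voip_call": INTERACTIVE,
    "video_conference": INTERACTIVE,
    "movie_stream": STREAMING,
    "music_stream": STREAMING,
    "email": BULK,
    "file_download": BULK,
    "web_page": BULK,
}

def priority(app: str) -> int:
    # Unknown traffic defaults to bulk, the most forgiving class.
    return TRAFFIC_CLASS.get(app, BULK)
```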
You should now see that these three types of data place different requirements on the network, and should be treated differently when bandwidth is at a premium. And therein lies the issue with so-called Network Neutrality. Data isn't neutral, so a neutral network will actually create a worse experience for its users than will a network that is well-engineered to prefer the right kinds of data.
This means that networks should be engineered to prefer data packets in the order I listed above, and to interleave lower-class packets with higher-class packets when bandwidth allows. So, for example, if I am on an HD video call that is using 90% of my available bandwidth, my network should use only the remaining 10% to deliver type 2 and type 3 traffic. If it uses any more, my interactive video experience will suffer. In other words, the network should prefer (there's that word that is so vilified in these discussions) the interactive video packets over the bulk and non-interactive video packets.
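A minimal sketch of that preference is a strict-priority queue: whenever the link can send a packet, send the highest-class packet waiting. This is a toy, not real traffic-shaping code; a production shaper would also rate-limit each class and interleave lower classes into spare bandwidth, as described above.

```python
import heapq

# Lower class number = higher priority: 0 interactive, 1 streaming, 2 bulk.
class PriorityShaper:
    """Toy traffic shaper: always sends the highest-class packet queued."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker so equal-class packets stay FIFO

    def enqueue(self, traffic_class: int, payload: bytes) -> None:
        heapq.heappush(self._queue, (traffic_class, self._counter, payload))
        self._counter += 1

    def send_next(self):
        """Pop and return (class, payload) for the next packet, or None."""
        if not self._queue:
            return None
        traffic_class, _, payload = heapq.heappop(self._queue)
        return traffic_class, payload

shaper = PriorityShaper()
shaper.enqueue(2, b"email chunk")        # bulk
shaper.enqueue(0, b"video-call frame")   # interactive
shaper.enqueue(1, b"movie segment")      # streaming
# The interactive frame goes out first, even though it was queued second.
```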
Impact on Net Neutrality Planning
Please note that nothing here indicates a desire to see "pay to play" arrangements in the industry. However, it is common for providers to charge for access to their bandwidth. When I want greater bandwidth, I have to pay more. If I want guaranteed bandwidth availability, I'll pay more than for best-effort bandwidth of the same amount. What I mean is that a 50Mbps download for consumers is usually best effort, and happens only when the overall network is relatively uncongested. If I want 50Mbps regardless of the state of the rest of the network, I need to buy dedicated bandwidth, which costs considerably more (and is typically only sold to businesses).
If I sell data delivery to my customers, and that delivery requires a certain bandwidth, I typically buy that bandwidth from two or more Internet Service Providers (ISPs). And I have to pay for the bandwidth as either best effort or dedicated. This is the way packet delivery has worked over the Internet and between content providers and their ISPs since the Internet went commercial in the early 1990s. This arrangement is appropriate, it seems to me.
Furthermore, ISPs should not be restricted from shaping data in order to deliver better service to customers, as I outlined in the story of the three data types. They should be able to prefer interactive packets over streaming packets, and both of those over bulk packets.
This is not to say that content providers should be held hostage based on the type of data they are delivering. That should be up to the consumer, and the content providers should simply purchase dedicated bandwidth and be able to use all they purchase, filling it with any of the types of traffic their provider will deliver. Consumers should receive the service to which they subscribe from any provider of that service, delivered with the quality possible given appropriate preferences. But, providers need to be able to shape traffic or they will be forced to over-provision, passing the bill along to consumers.
The United States Compared with The Rest of the World
All of this said, do not buy into the myth that the US has the best Internet access in the world. In fact, it's abysmal. Wikipedia has an article summarizing a damning Akamai survey of Internet capabilities worldwide. South Korea (the leader) has services more than 100x faster than the average speed in the US, for $20/month. So, providers in the US need to do a much better job of delivering bandwidth for the fees that consumers pay.
What does this mean for so-called Net Neutrality? You decide. Now you understand some of the engineering complexities underneath the typical political bluster. At least you can decide if any of the politicians and pundits have a clue what they're talking about.
Update: Taxes, Too
Today, FCC Commissioner Michael O'Rielly said that "consumers of these services would face an immediate increase in their Internet bills" during a seminar held by the non-partisan Free State Foundation, according to this article. This is an example of the unforeseen repercussions of choices that entangle the Internet in a government maze of regulations, fees, taxes, and legalities. Such is the case with the simplistic idea of "net neutrality," which doesn't take into account the implications of regulating the Internet as a telecommunications service.