In the late 1990s, I worked with an amazing group of brilliant network engineers building the InteropNet for the Interop trade shows around the world. We were always pushing the envelope, introducing next-generation technology before it was really ready. During a number of those years, we delivered real-time video traffic over the network, often using multicast methods that are still not widely used. We were always a little ahead of our time.
Before I explain the details, allow me to mention one concept that is critical to understanding everything about the Internet: all transmissions across the Internet are made by packets. This means that every file or stream sent across the Internet is chopped into small chunks (typically around 1,400 bytes of payload each), each of which traverses the network independently of all the others. There is no relationship between the packets while they are on the network. They are only reunited at the receiving end, after they are off the network and in the device that will interpret them and deliver the result (like a video playback, email, file transfer, or any other end-to-end application).
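The idea can be sketched in a few lines of Python. This is a toy illustration, not a real network stack: each "packet" carries a sequence number so the receiver can put the chunks back together no matter what order they arrive in. The 1,400-byte payload size is a typical figure for Ethernet-based links, not a fixed rule.

```python
PAYLOAD_SIZE = 1400  # typical payload per packet; real sizes vary by link

def packetize(data: bytes) -> list[tuple[int, bytes]]:
    """Split data into independent (sequence_number, payload) chunks."""
    return [
        (seq, data[offset:offset + PAYLOAD_SIZE])
        for seq, offset in enumerate(range(0, len(data), PAYLOAD_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the original data, regardless of arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"x" * 5000
packets = packetize(message)   # 4 packets: 1400 + 1400 + 1400 + 800 bytes
assert reassemble(list(reversed(packets))) == message
```

Note that `reassemble` works even when handed the packets in reverse order; the network makes no ordering promise, so the endpoints have to do this bookkeeping themselves.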
But over the network, those packets are 100% independent of each other.
Because they are independent, they are subject to all kinds of issues. Sometimes, packets are dropped because a device is overloaded. Since packets can take different paths, they can arrive out of order or with varying time between them (called "jitter"). For many types of data transfer (like email, files, and even instant messaging), most of these things don't matter at all.
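Those effects are easy to demonstrate with a toy simulation (the drop rate and delay values below are illustrative assumptions, not measurements): give each numbered packet a random chance of being dropped and a random per-packet delay, then look at what the receiver actually sees.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def simulate(packets, drop_rate=0.1, max_jitter_ms=50):
    """Return (arrival_order, dropped) after random loss and delay."""
    arrivals = []
    dropped = []
    for seq in packets:
        if random.random() < drop_rate:
            dropped.append(seq)                  # packet lost in transit
        else:
            delay = random.uniform(0, max_jitter_ms)
            arrivals.append((delay, seq))        # each packet delayed independently
    arrivals.sort()                              # earliest arrival first
    return [seq for _, seq in arrivals], dropped

order, lost = simulate(list(range(20)))
# With independent per-packet delays, `order` is generally not 0..19 and
# `lost` is generally non-empty. Email shrugs this off; live video cannot.
```

For a file transfer, the receiver simply waits and re-sorts (and re-requests what was lost). For a live call, a packet that arrives late might as well not have arrived at all.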
However, some traffic is very sensitive to these effects: especially time-sensitive audio and video, used for applications like video calls, voice calls, and live broadcasts.
Back to Interop and the InteropNet... Delivery of video, even over the high-speed networks we were using, meant having to recognize the different requirements of traffic types and using the network resources in ways that accommodated those requirements. During those years, the IETF (Internet Engineering Task Force, the volunteer organization responsible for the standards that allow the Internet to function) defined the Differentiated Services (DiffServ) standards to provide network performance appropriate to the type of service required.
This is an essential concept! Networks must be able to differentiate all of those independent packets flying around the network.
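How does a network tell the packets apart? DiffServ works by marking each packet's DSCP field, the top six bits of the IP TOS/Traffic Class byte, and letting routers along the path treat differently-marked packets differently. Here is a minimal sketch of how an application can request such treatment; whether the network actually honors the marking is entirely up to the operator's policy.

```python
import socket

DSCP_EF = 46               # "Expedited Forwarding": the low-loss, low-jitter
                           # class commonly used for real-time voice and video
TOS_VALUE = DSCP_EF << 2   # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent on this socket now carries DSCP 46, signaling to
# DiffServ-aware routers that it is real-time traffic, e.g.:
#   sock.sendto(audio_frame, (receiver_host, receiver_port))
sock.close()
```

The marking is just a request. A router that is configured for differentiated services will queue EF-marked packets ahead of bulk traffic; a router that is not will ignore the bits entirely.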
The New York Times has been reporting on both the FCC comments about so-called "Net Neutrality" conversations and the rumored Google/Verizon agreement on network usage. The typical idiotic political conversation has ensued, of course.
The entire idea of "an open Internet" is foolish at best and dishonest political posturing at worst. In this situation, it's actually both. Besides, "Net Neutrality" is not possible! Not only that, it's not even desirable.
Bandwidth costs money. Equipment costs money. More bandwidth costs more. Differentiated services also cost more. We all want them to be offered by the providers so that we can have live video, reliable voice-over-IP, and additional services that we haven't even imagined, yet.
The conversation, then, isn't about "neutrality," but rather about universal access to differentiated services... at an appropriate cost that will be determined by the market if we just allow it to do so. After all, nobody wins by denying access, and in a free market, providers who deny it will lose business.
There is one group who benefits: the idiot politicians who want control.
The entire focus is wrong. Typical of the politicians playing at being engineers. It just doesn't work.
Update: The Wall Street Journal ran a bit more detail on the Google/Verizon agreement today. The comments from the so-called "Free Internet" speakers are very telling: they don't understand how the Internet actually works.