There are a few ways to answer these questions. It might seem obvious that it is a success, since the whole world is connected or about to be connected. Global domination is the very definition of success, isn't it? We could try other definitions, such as happiness. Does the Internet make us happier? This would be hard to answer in a blog post.
Another approach is to use the original goals of the architects of the Internet as a benchmark. In 1988, David Clark wrote a retrospective paper, "The Design Philosophy of the DARPA Internet Protocols," describing these goals. Clark was the Chief Protocol Architect for the Internet starting in 1981, and he remains an Internet visionary today.
Goals:
- Develop an effective technique for multiplexed utilization of existing interconnected networks.
- Internet communication must continue despite loss of networks or gateways.
- The Internet must support multiple types of communications service.
- The Internet architecture must accommodate a variety of networks.
- The Internet architecture must permit distributed management of its resources.
- The Internet architecture must be cost effective.
- The Internet architecture must permit attachment with a low level of effort.
- The resources used in the Internet architecture must be accountable.
Though it's hard to remember today, the Internet was originally a military program, designed to withstand nuclear attack. These goals (1 and 4), coupled with the desire to interconnect existing networks and networking protocols (0, 2, 3, and 5), led to two design choices: 1) a packet-switched network of dumb routers, with 2) the datagram as its base messaging unit.
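To make the "dumb network, smart endpoints" split concrete, here is a minimal Python sketch of sending a single UDP datagram; the destination address and port are placeholders. The sender hands a self-contained packet to the network and gets no connection, no ordering, and no delivery guarantee in return.

```python
import socket

# A datagram is self-contained: one send() call, one packet, no connection setup.
# The network makes a best effort to deliver it; nothing below the end hosts
# tracks whether it arrived.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: a thin wrapper over IP datagrams
sock.sendto(b"hello, best-effort world", ("192.0.2.1", 9999))  # placeholder address and port
sock.close()
```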
A packet-switched network transmitting datagrams meets many of the architects' goals, but it comes with serious trade-offs. Because the network itself keeps no state about individual conversations, error detection, retransmission, and security must all be handled at the end hosts. The network just does the best it can to deliver datagrams; there are no other perks.
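As a rough illustration of pushing reliability out to the end hosts, the sketch below layers stop-and-wait retransmission over UDP at the application level. The destination, the timeout, and the assumption that the peer replies with a literal b"ACK" are all illustrative, not part of any real protocol.

```python
import socket

def send_reliably(payload: bytes, dest=("192.0.2.1", 9999), retries=5, timeout=1.0):
    """Stop-and-wait: send a datagram, wait for an ACK, retransmit on timeout.
    The network keeps no state about this exchange; the end hosts do all the work."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for attempt in range(retries):
            sock.sendto(payload, dest)
            try:
                ack, _ = sock.recvfrom(1024)  # assumes the peer echoes b"ACK"
                if ack == b"ACK":
                    return True
            except socket.timeout:
                continue  # the datagram or its ACK was lost; the end host retransmits
        return False
    finally:
        sock.close()
```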
In many ways, this was a pragmatic decision: there was a large installed base of existing hardware, and it would have been difficult to convince people to start again with specs designed by the military. The architects had no idea how widespread the Internet would become. Still, it was a military application. You would think that security, at least, would need to be baked into the network.
Interestingly, decentralization led to many social, political, commercial, and economic consequences that are not typically high on the list of military priorities. I have never worked in defense, but I understand that there is an emphasis on chains of command and accountability. I can't think of anything less hierarchical or accountable than the Internet.

Speed is not a goal. Isn't it funny that a communications protocol running over telephone lines could not transmit real-time voice? The packet-switched network was designed to route around network failures. There is no guarantee that packets arrive in any particular order or within any period of time, and every packet carries a lot of overhead in the form of message headers. The packet-switched datagram is a relatively slow form of communication; it wasn't until broadband was widely available that VOIP became feasible.
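A back-of-the-envelope calculation makes the header-overhead point concrete. The figures below are the standard ones for uncompressed (G.711) telephone audio carried over IPv4/UDP/RTP; the "on the wire" bitrate ignores link-layer framing, which would only make things worse for a dial-up modem.

```python
# Rough per-packet overhead for uncompressed (G.711) voice over IPv4/UDP/RTP.
payload = 64000 // 8 * 20 // 1000   # 64 kbit/s codec, 20 ms frame -> 160 bytes of audio
headers = 20 + 8 + 12               # IPv4 (20) + UDP (8) + RTP (12), no options
packet = payload + headers

overhead = headers / packet
bitrate = packet * 8 * 50           # 50 packets per second (one every 20 ms)

print(f"{overhead:.0%} of each packet is header")   # ~20%
print(f"{bitrate / 1000:.0f} kbit/s on the wire")   # ~80 kbit/s, more than a 56k modem
```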
Much traffic today is composed of message headers, retransmissions, and flow-control information, all of which could be handled better by a stateful network. A stateful network would be more like the phone system, in which dedicated lines carry streams of information. Those lines would have to be easily reroutable to handle interruptions to the system (such as nuclear attack), but packet switching is not the only way to do this.

Accountability (i.e., monitoring) dropped off the list. Since the Internet was a military application, its architects wanted to be able to account for its uses and abuses. However, it would have been very difficult to develop monitoring systems in a stateless network. After all, the monitoring system AT&T uses for billing is about as complex as the telephone network itself, and that is a stateful system. For this reason, accountability was quickly dropped from the list of goals.
Since there is no monitoring within the network itself, all accounting must be handled at the end hosts: TCP has to figure out how fast it should transmit by watching for dropped packets, denial-of-service attacks have to be handled by firewalls, and routers can simply declare which addresses they can reach. And, of course, speed can't be a goal if there is no way to measure performance.
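The "figure out the speed by watching for dropped packets" behaviour is, roughly, TCP's additive-increase/multiplicative-decrease rule. The sketch below is a toy version with an artificial loss pattern, not the real congestion-control state machine.

```python
def aimd_step(cwnd: float, packet_lost: bool, mss: float = 1.0) -> float:
    """One round trip of additive-increase/multiplicative-decrease.
    A dropped packet is the only congestion signal the stateless network provides."""
    if packet_lost:
        return max(cwnd / 2, mss)   # multiplicative decrease: back off hard
    return cwnd + mss               # additive increase: probe for more bandwidth

# Toy run: the window grows until a loss, then halves and starts growing again.
cwnd = 1.0
for rtt, lost in enumerate([False, False, False, True, False, False]):
    cwnd = aimd_step(cwnd, lost)
    print(f"RTT {rtt}: cwnd = {cwnd:.1f}")
```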
Like the Internet architects, however, we have to be pragmatic. The ubiquity of Network Address Translators (NATs) forces us to stick with TCP/IP. Because NATs read and rewrite message headers, there has to be agreement on the protocols in use. If we wanted to change protocols, we would have to throw out all our routers and modems. Of course, new protocols could be created on top of TCP/UDP/ICMP, but these would have the same shortcomings as the layers below.
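To see why NATs lock the protocols in place, here is a rough sketch of the bookkeeping a NAT performs: rewrite the source address and port of each outgoing packet and keep a table so replies can be mapped back. The addresses and ports are illustrative, and a real NAT would also rewrite checksums and expire old entries.

```python
# Minimal sketch of NAT port translation. Because the NAT must parse and rewrite
# IP and TCP/UDP headers, a new transport protocol it doesn't understand simply
# won't traverse it -- which is what keeps us stuck with TCP/UDP.
PUBLIC_IP = "203.0.113.7"   # illustrative public address
nat_table = {}              # (private_ip, private_port) -> public_port
next_port = 40000

def translate_outbound(src_ip: str, src_port: int):
    """Rewrite a private source endpoint to the NAT's public one."""
    global next_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port: int):
    """Map a reply arriving on a public port back to the private host, if known."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None

print(translate_outbound("10.0.0.5", 51515))   # ('203.0.113.7', 40000)
print(translate_inbound(40000))                # ('10.0.0.5', 51515)
```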