Tuesday, October 9, 2012

SDN: A Backward Step Forward

Everyone in the industry is talking about Software Defined Networks or SDNs, but unfortunately no two people using the term mean precisely the same thing. However, after hearing from dozens of academics and even more networking converts, and after reading hundreds of press releases, I believe that when people mention SDN they are talking about a network based on one or both of two guiding principles: 1) fully programmable forwarding behavior and 2) centralized control.

Both of these principles are in fact revolutionary in that they challenge well-established principles of packet switched networks (PSNs) and actually regress to communications technology as it existed before the development of PSNs and the Internet. As I will discuss below, fully programmable forwarding behavior as espoused by SDN is fundamentally at odds with network layering, and centralized control diametrically opposes the philosophy of distributed routing protocols. It is worth understanding these tensions before considering whether it makes sense to discard everything that has been learnt about networking in the past thirty years.


Programmable behavior vs. network layering

When packet switched networks were first proposed as an alternative to circuit switched ones, it was realized that the subject was so complex that it had to be broken down into more readily digestible parts. That’s why network layers (the seven OSI layers) were first invented. Later it was realized that network layering (a la ITU-T G.800) enabled hierarchies of service providers to coexist. Layering is employed despite the fact that it significantly reduces the efficiency of communications networks by introducing constructs that are not mandated by Shannon’s separation theorem. For example, in a VoIP application one may go to great lengths to compress 10 milliseconds of telephony quality speech from 80 bytes down to 10 bytes, but to that payload one then adds an RTP header of at least 12 bytes, a UDP header of 8 bytes, an IPv4 header of 20 bytes, and an Ethernet header and trailer of at least 18 bytes, for a grand total of at least 58 bytes of overhead, almost six times the payload size!
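
A quick tally (a minimal sketch in Python; the figures are simply the nominal minimum header sizes cited above) makes the arithmetic explicit:

    # Nominal minimum header sizes for the VoIP example above, in bytes
    payload = 10                     # 10 ms of compressed telephony-quality speech
    overhead = {"RTP": 12, "UDP": 8, "IPv4": 20, "Ethernet header + FCS": 18}
    total = sum(overhead.values())
    print(total, round(total / payload, 1))   # 58 bytes of overhead, 5.8x the payload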

So why do we employ all these layers? Simply because it makes the task at hand manageable! Each layer interacts with the layer above it (except for the top layer, i.e., the application) and with the layer below it (except for the bottom layer, i.e., the physical layer) only through well-defined interfaces. Thus, when a lower layer receives information from an upper layer, it should treat it completely transparently, without attempting to discern what belongs to each of the layers above it. Similarly, a layer should not expect any special treatment from lower layers, other than that defined in the interface description.
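
This discipline is easy to express in code. In the minimal sketch below (the functions and stand-in headers are purely illustrative), each layer prepends its own header to an opaque payload handed down from above and never looks inside it:

    # Illustrative only: each layer prepends a stand-in header and treats the payload
    # handed down from the layer above as an opaque sequence of bytes.
    def rtp_encapsulate(speech: bytes) -> bytes:
        return b"R" * 12 + speech          # 12-byte stand-in for the RTP header

    def udp_encapsulate(payload: bytes) -> bytes:
        return b"U" * 8 + payload          # 8-byte stand-in for the UDP header

    def ipv4_encapsulate(payload: bytes) -> bytes:
        return b"I" * 20 + payload         # 20-byte stand-in for the IPv4 header

    # Because the lower layers never parse what they carry, a change to RTP leaves
    # udp_encapsulate and ipv4_encapsulate untouched.
    frame = ipv4_encapsulate(udp_encapsulate(rtp_encapsulate(b"0123456789")))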

It is thus understandable that communications theory purists shun layer violations, which allow one layer’s processing to peek at information belonging to another layer. The consequences of disobeying this principle may be dire. Consider what would happen if MPLS label switching were to depend on details of the IP packet format. Then we would need two versions, one for IPv4 and one for IPv6, and we would not have pseudowires (which explicitly exploit the fact that MPLS is not cognizant of aspects of its payload). What would happen if UDP were to care about RTP fields? Then every modification to these fields would require a modification to UDP; and if IP were dependent on UDP then the domino effect would continue on down, negating the benefits of layering.

Of course in practice some layer violations are condoned, but only when they are absolutely necessary optimizations and can be shown to be benign. Even then they often take a heavy toll, as can be seen in the following three examples:
  1. Network Address Translation (NAT) intertwines the IP layer with the transport layer above it. This optimization became absolutely necessary since the world ran out of IPv4 addresses faster than it could adopt IPv6, but it broke the end-to-end principle underlying IP. Although there was little choice, NAT technology needed to embrace complex Application-Level Gateways (ALGs), and then various hole-punching technologies such as TURN, STUN, and ICE. And NAT still breaks various applications.
  2. The IEEE 1588 Transparent Clock (TC) modifies a field in the 1588 header (which sits above Ethernet or IP) based on arrival times of physical layer bits. Without this optimization there is no way to compensate for variable in-switch dwell times, and thus no way of distributing highly accurate timing over multiple hops. However, once again this layer violation comes at a price. The 1588 parser needs to be able to classify arbitrary Ethernet and/or IP packet formats and locate the offset of the field to be updated; not only does this require a complex processor, it also requires updating every time the IEEE or IETF changes a header field. Furthermore, updating the TC correction field is not compatible with Ethernet-layer security (e.g., MACsec that protects the integrity of the Ethernet frame, and optionally encrypts it).
  3. It is common practice for Ethernet Link Aggregation (LAG) to determine which physical link to employ by hashing fields from the layer 2, 3, and 4 headers (a sketch of such a hash appears after this list). This three-layer kludge makes it possible to exceed the bandwidth of a single physical link by spreading traffic over a parallel set of links, and is only allowed because its effect doesn’t extend past the physical links involved.
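
To make the third example concrete, here is a minimal sketch of the kind of multi-layer hash a LAG implementation might apply; the field choices and the use of CRC32 are illustrative assumptions, not any particular vendor's scheme.

    import zlib

    def lag_select_link(src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port, num_links):
        # Mix fields from layers 2, 3 and 4 so that a given flow always maps to the
        # same physical link while different flows spread across the aggregate.
        key = "|".join(str(f) for f in (src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port))
        return zlib.crc32(key.encode()) % num_links

    # Example: a TCP flow pinned to one of four parallel links
    link = lag_select_link("00:11:22:33:44:55", "66:77:88:99:aa:bb",
                           "10.0.0.1", "10.0.0.2", 49152, 443, 4)
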
But then along came SDN. The guiding SDN principle of completely programmable behavior means that an SDN switch treats an incoming packet as a simple sequence of bytes. The SDN switch may examine an arbitrary combination of fields and, based on these fields, take actions such as rewriting the packet and forwarding it in a particular way. The SDN switch neither knows nor cares to which network layer the bytes it examines belong.

Thus by programming an SDN switch to forward packets based on Ethernet addresses it can act as a layer-2 switch. By having it forward based on IP addresses it can act as a layer-3 router. By telling it to look at UDP or TCP ports it can behave as a NAT. By having it block packets based on information further in the packet it can act as a firewall. In principle, we could program an SDN switch to edit a packet and forward it based on hashing arbitrary fields anywhere in the packet. This provides awe-inspiring flexibility, especially when the SDN switch is implemented as software running in a VM. Of course this flexibility comes at the price of obliterating the concept of layering.
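
The hypothetical sketch below captures the idea in miniature: a single table of rules that match raw byte fields and carry actions, with nothing in the switch tied to any particular layer. The rule format and helper names are my own illustration, not the OpenFlow protocol.

    # Hypothetical match/action table: each rule is (offset, bytes to match, action).
    # The switch compares raw bytes at the given offset; it has no notion of layers.
    def make_switch(rules):
        def forward(packet: bytes):
            for offset, match, action in rules:
                if packet[offset:offset + len(match)] == match:
                    return action(packet)
            return ("drop", packet)            # default action
        return forward

    # Behaving as a layer-2 switch: match the destination MAC (bytes 0-5 of the frame)
    l2_rules = [(0, bytes.fromhex("665544332211"), lambda p: ("out port 7", p))]

    # Behaving as a firewall: drop packets whose TCP destination port (assumed here to
    # sit at a fixed offset in an untagged IPv4/TCP frame) is 23, i.e. telnet
    fw_rules = [(36, (23).to_bytes(2, "big"), lambda p: ("drop", p))]

    switch = make_switch(l2_rules + fw_rules)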

Centralized control vs. distributed routing protocols

The oldest communications network in the world is the Public Switched Telephone Network (PSTN). Unlike routing in the Internet, when a telephone call needs to be routed from source to destination a centralized computer system works out the optimum path. The optimization algorithms employed can be very sophisticated (witness AT&T's blocking of Karmarkar from disclosing the details of his algorithm!), taking into account overall delay, the present loading throughout the network, etc.

The chief problem with centralized control is that there is a single point of failure. So when ARPA sponsored the design of a network that needed to survive network element failures (no, despite popular belief, there was no design goal to survive nuclear attack), it was decided to rely on distributed routing protocols. Using these protocols, routers speak with other routers and learn enough about the underlying topology to properly forward packets. Local forwarding decisions miraculously lead to globally optimal paths.
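
As a toy illustration of how purely local exchanges converge on globally shortest paths, here is a minimal distance-vector sketch in the spirit of Bellman-Ford/RIP; the topology and link costs are invented for the example.

    # Each router knows only its directly connected neighbors and the link costs.
    links = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
             "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}

    # Every router starts out knowing a distance of zero to itself and nothing else.
    dist = {r: {r: 0} for r in links}

    changed = True
    while changed:                             # iterate until no router learns anything new
        changed = False
        for r, neighbors in links.items():
            for n, cost in neighbors.items():  # neighbor n advertises its vector to r
                for dest, d in dist[n].items():
                    if cost + d < dist[r].get(dest, float("inf")):
                        dist[r][dest] = cost + d
                        changed = True

    print(dist["A"]["D"])   # 4, via A-B-C-D, found with no router ever seeing the whole map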

Yet, the optimality just mentioned is that of finding the shortest path from source to destination, not that of optimally utilizing network resources. Routing protocols can support traffic engineering, but this means reserving local resources for a flow, not locating under-utilized resources elsewhere and pressing them into service.

But then along came SDN. The guiding SDN principle of centralized control means that the controller sees the entire network (or at least the network elements that it controls) and, if provided with suitable algorithms, can route packets while optimally utilizing network resources. This is precisely what Google is doing in its inter-datacenter WAN SDN: filling up the pipes much more efficiently than could be done purely on the basis of distributed routing protocols. Of course this efficiency comes at the price of reintroducing the problem of a single point of failure. And as a corollary to the CAP theorem it is fundamentally impossible to circumvent this problem.
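
A minimal sketch of what such utilization-aware path selection could look like (the topology, capacities, and loads are invented; this is not Google's algorithm):

    # The controller knows every link's capacity and its current load, so among the
    # candidate paths it can choose the one that leaves the most headroom on its most
    # loaded link, which hop-by-hop shortest-path routing cannot see.
    capacity = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 10, ("C", "D"): 10}
    load     = {("A", "B"): 8,  ("B", "D"): 8,  ("A", "C"): 2,  ("C", "D"): 3}

    def headroom(path, demand):
        hops = zip(path, path[1:])
        return min(capacity[h] - load[h] - demand for h in hops)

    candidates = [("A", "B", "D"), ("A", "C", "D")]
    best = max(candidates, key=lambda p: headroom(p, demand=2))
    print(best)   # ('A', 'C', 'D'): both paths are two hops, but this one has spare capacity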


SDN - is it worth it?

So, no matter how you define it, SDN is fundamentally incompatible with one or more of the well-established principles of PSN design. The question is thus whether the benefits outweigh the costs.

OSI-style layering was an important crutch when PSNs were first being developed, but it leads to inefficiencies that should have been addressed long ago. These inefficiencies are not only in bandwidth utilization; they are also in complexity, e.g., the need for ARP in order to match up layer 2 and layer 3 addresses. Were it not for the sheer number of deployed network elements, one could even imagine replacing the present stack of Ethernet, MPLS, IP, and TCP with a single end-to-end protocol (IPv6 formatting might be suitable). This could be accomplished using SDN, but several problems would need to be solved first. The single point of failure argument is not made moot simply by positing that the controller hardware can be made sufficiently resilient. It is also necessary to make the controller sufficiently secure against malicious attacks. In addition, the bootstrap problem of how the controlled switch can reach, and be reached by, the controller without relying on a conventional underlay needs to be convincingly resolved.

The previous paragraph addressed OSI-style layering, but what of ITU-style layering, imposed in order to support hierarchies of service providers? Eliminating the need for these “layer networks” requires a new model of providing data and communications services. That new model could be the cloud. A cloud service provider which is also a network service provider, or which has business agreements with network service providers, could leverage SDN network elements to revolutionize end-to-end networking. One could envision a host device passing a conventional packet to a first network element in the cloud, which terminates all the conventional layers and applies the single end-to-end protocol header. Thereafter the SDN switches would examine only this single header and forward packets so as to simultaneously ensure high QoE and high network resource utilization efficiency. Present SDN deployments simply emulate a subset of the features of the present network layers, and do not attempt to embody this dream.

SDN technology is indeed a major step backward, but it has the potential to be a revolutionary step forward.