Managing Tables in our new Virtual Reality

November 21st, 2013 | Marten Terpstra | 3 Comments

Networking really comes down to the art of managing tables and rules.

In traditional networks, MAC addresses are inserted into tables using standard learning techniques. When a packet arrives, if the source MAC address is not known, it is added to the MAC forwarding table for that VLAN with the ingress interface as the port it was learned on. If the destination is unknown, the packet is flooded through the VLAN, with the side effect that each switch along the way inserts the source MAC address into its own forwarding table for that VLAN. Assuming the destination actually exists, one of the flooded copies will reach it. The device with that destination MAC address receives the packet, and (hopefully) responds. The response is destined for the device that sent the original packet, which each switch has already learned how to reach from the flooded packet. The response makes its way back to the original source, and along the way each switch learns the source of the response and inserts it into its forwarding table. Sounds complicated, but it's basic MAC learning, and this is how Ethernet networks have found sources and destinations for a long, long time.
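
As a concrete (and much simplified) illustration, that learn-then-flood-or-forward logic can be sketched in a few lines of Python. The class and port model here are invented for the example, not any particular switch implementation.

```python
class L2Switch:
    """Toy per-VLAN MAC learning switch."""

    def __init__(self, ports):
        self.ports = ports            # all ports on this switch
        self.mac_table = {}           # (vlan, mac) -> port the MAC was learned on

    def handle_frame(self, in_port, vlan, src_mac, dst_mac):
        # Learn: the source MAC is reachable through the port it arrived on.
        self.mac_table[(vlan, src_mac)] = in_port

        # Forward: a known destination goes out one port; an unknown one is
        # flooded out every other port (a real switch would restrict this to
        # ports that actually carry the VLAN, along a loop-free path).
        out_port = self.mac_table.get((vlan, dst_mac))
        if out_port is not None:
            return [out_port]
        return [p for p in self.ports if p != in_port]
```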

IP addresses are learned slightly differently. ARP is used to create a mapping between a MAC address and an IP address. When a device wants to send an IP packet to a device on the same subnet, it sends out an ARP request for the destination device, and that device (or someone else on its behalf in the case of proxy ARP) replies with a response that provides the mapping between MAC address and IP address. When the IP packet is destined for another subnet, the source passes the packet to the gateway for the destination subnet, using ARP in exactly the same way to get the MAC to IP address mapping for the gateway. That gateway is determined by yet another table, the IP routing table, containing IP subnets and a pointer to the IP address of the device that can get you there. The latter is built from static entries configured by the administrator, or by routing protocols like OSPF, ISIS or BGP.
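
The decision a host makes can be sketched roughly as follows; the table contents and addresses are invented for illustration, and a miss in the ARP table would of course trigger an actual ARP request rather than a dictionary lookup.

```python
import ipaddress

# Hypothetical contents: one directly connected subnet plus a default route.
arp_table = {"10.0.0.7": "00:11:22:33:44:07", "10.0.0.1": "00:11:22:33:44:01"}
route_table = [("10.0.0.0/24", None),        # directly connected, no gateway
               ("0.0.0.0/0", "10.0.0.1")]    # everything else via the gateway

def next_hop_mac(dst_ip):
    # Longest-prefix match in the routing table picks the next hop ...
    dst = ipaddress.ip_address(dst_ip)
    best = max((prefix for prefix, _ in route_table
                if dst in ipaddress.ip_network(prefix)),
               key=lambda p: ipaddress.ip_network(p).prefixlen)
    gateway = dict(route_table)[best]
    # ... and ARP (here, just a table lookup) maps that next hop to a MAC.
    return arp_table.get(gateway if gateway else dst_ip)

print(next_hop_mac("10.0.0.7"))    # same subnet: ARP the destination itself
print(next_hop_mac("192.0.2.9"))   # other subnet: ARP the gateway instead
```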

So far so good. We have three tables to maintain: the MAC table (also known as the L2 forwarding table), the ARP table (also known as the IP host table) and the IP routing table.

When a network is divided into multiple virtual networks, each of these tables can be split into multiple versions, one for each virtual network. As an example, I may have 10 separate L2 forwarding tables, each containing many MAC addresses in many VLANs. This immediately brings us to the first challenge in managing these tables. If I receive an Ethernet packet, which of the multiple tables do I use to look up the destination, or similarly, into which table do I insert the source MAC address I just learned? It is clear that a switch must know which virtual network a packet belongs to before it attempts to use its L2 forwarding table. Similarly, when learning the source of this packet, I need to know which of the multiple tables to insert its address into.

There are several ways to associate a packet with a forwarding table, or really with a Virtual Network. The most basic, and probably most used, is a static mapping of the combination of ingress port (on the switch) and VLAN. The administrator has created a table that simply says “any packet coming in on this port on this VLAN belongs to Virtual Network X”. Virtual Network X is now associated with one of the forwarding tables, and we have found the table we are dealing with. We can learn sources and put them in the right table, and we can look up destinations. When the destination is not present in that table, we have our next challenge: how do we flood in a Virtualized Network? We would normally send the packet out every port that has this VLAN configured (along an STP or otherwise managed loop-free path), but we want to reach only those switches that have this Virtual Network configured (statically or dynamically).
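
Put together, the classification step and the per-Virtual-Network tables it selects between might look something like this rough sketch; all identifiers and port numbers are made up.

```python
# Static classification: (ingress port, VLAN) -> Virtual Network,
# then one MAC table and one set of member ports per Virtual Network.
port_vlan_to_vn = {(1, 100): "VN-X", (2, 100): "VN-Y"}
mac_tables = {"VN-X": {}, "VN-Y": {}}
vn_member_ports = {"VN-X": {1, 5}, "VN-Y": {2, 6}}

def handle(in_port, vlan, src_mac, dst_mac):
    vn = port_vlan_to_vn[(in_port, vlan)]   # classify first ...
    table = mac_tables[vn]                  # ... then select the right table
    table[src_mac] = in_port                # learn into that table only
    if dst_mac in table:
        return [table[dst_mac]]
    # Unknown destination: flood, but only to ports belonging to this
    # Virtual Network, not to every port that happens to carry the VLAN.
    return [p for p in vn_member_ports[vn] if p != in_port]
```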

This is where different solutions take different approaches. In Shortest Path Bridging, for instance, the set of switches that have member ports in a specific Virtual Network (an I-SID in SPB terms) is discovered using ISIS. As part of that discovery, an SPF-calculated tree is created covering all these switches, and the packet is flooded along this tree, very similar to normal VLAN flooding. Because SPB traffic is encapsulated, only the edge switches decapsulate the packet and learn the original source.
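
To make the idea concrete, here is a toy stand-in for that calculation. Real SPB derives its trees from ISIS link-state data; this sketch just runs a breadth-first search over an invented topology to find the edges a flooded frame would follow to the I-SID's member switches.

```python
from collections import deque

adjacency = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
isid_members = {1001: {"A", "D"}}      # I-SID -> switches with member ports

def flood_tree(root, isid):
    # BFS is equivalent to SPF on unit-cost links in this toy example.
    parent, queue = {root: None}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adjacency[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    # Keep only the branches that actually lead to a member switch.
    edges = set()
    for member in isid_members[isid] - {root}:
        node = member
        while parent[node] is not None:
            edges.add((parent[node], node))
            node = parent[node]
    return edges

print(flood_tree("A", 1001))           # e.g. {('A', 'B'), ('B', 'D')}
```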

Overlay networks like VXLAN solve the problem in a very similar way in the pure definition of the protocol. When a packet is destined for an unknown destination, it is "flooded" to all other VXLAN endpoints that have members for that Virtual Network (a VNI in the case of VXLAN). Because VXLAN runs on top of IP, its version of flooding needs an IP-based mechanism, and the mechanism of choice is IP Multicast. Each VNI is represented by an IP multicast group, and all VXLAN endpoints (VTEPs) serving that VNI join this group. When a packet needs to be flooded, it is multicast to that specific group; the receiving VTEPs decapsulate the packet, learn the source, and all is good.
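
The mapping itself is trivial; what it depends on is working multicast. A bare sketch of the sending side, with invented VNI numbers and group addresses and only a minimal VXLAN header, could look like this:

```python
import socket

VXLAN_PORT = 4789                                   # IANA-assigned VXLAN UDP port
vni_to_group = {5001: "239.1.1.1", 5002: "239.1.1.2"}   # VNI -> multicast group (made up)

def flood(vni, inner_frame):
    # 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bits zero.
    header = bytes([0x08, 0, 0, 0]) + (vni << 8).to_bytes(4, "big")
    # Send the encapsulated frame to the group; every VTEP that joined the
    # group for this VNI receives a copy, decapsulates it and learns the source.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(header + inner_frame, (vni_to_group[vni], VXLAN_PORT))
```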

There have been many articles and opinions on the use of IP Multicast for flooding (which is essentially the same as multicasting or broadcasting) in VXLAN. One of VXLAN's strengths is that it can travel across any IP infrastructure, including the largest of them all, the Internet. However, ubiquitous IP connectivity is nowhere near the same as ubiquitous IP Multicast connectivity. And this is why most controller-based (distributed or central) overlay solutions have attacked that problem. And this is also where it gets complicated.

A first benefit of having a controller that manages the overlay network is simple: you have a complete inventory of all overlay endpoints that exist in the network. You probably even have an inventory of which Virtual Networks each serves, because all of this is provisioned data. This means I don’t have to discover all the endpoints a packet needs to be flooded to; I know them all, so I can simply replicate the packet to each and every endpoint as a unicast packet. Current implementations of the controller-based virtualization solutions use this. The advantage is that it is really simple. The disadvantage: it's a lot of overhead when you have many endpoints.
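
The logic is about as simple as it sounds. This sketch, with a hypothetical VTEP list and send callback, shows both the simplicity and where the overhead comes from:

```python
# Provisioned by the controller: every VTEP that serves each Virtual Network.
vni_to_vteps = {5001: ["192.0.2.10", "192.0.2.11", "192.0.2.12"]}

def replicate(vni, frame, send_unicast):
    # One encapsulated unicast copy per remote VTEP: trivially simple, but
    # the number of copies grows linearly with the number of endpoints.
    for vtep in vni_to_vteps[vni]:
        send_unicast(vtep, vni, frame)
```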

When you think through the creation of overlay networks and how VMs are created, attached to virtual switches and attached to Virtual Networks, you quickly realize that all of this is provisioned information, known to the overlay and VM orchestration system. Which raises the question: why attempt to dynamically learn at all? If I know exactly where a VM is (using the VM as an equivalent of a MAC and IP address here), which VTEP it is hiding behind, and which Virtual Network it is part of, why can I not simply tell all the other VTEPs about this from the controller? All provisioned information could be exchanged outside of the normal inline learning mechanisms, so mechanisms like flooding and even ARP are greatly reduced or even completely removed in such networks. All information is known, and the controller proactively pushes this information to those that need to know.
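
A sketch of that "don't learn, just tell" model might look like the following; the push_entry callback and the fields it carries are assumptions for illustration, not any particular controller's API.

```python
class OverlayController:
    def __init__(self, vteps):
        self.vteps = vteps                     # addresses of all known VTEPs

    def vm_attached(self, mac, ip, vni, vtep, push_entry):
        # The orchestration system placed the VM, so its MAC, IP, VNI and
        # hosting VTEP are all known; push the binding to every other VTEP.
        for peer in self.vteps:
            if peer != vtep:
                # Each VTEP can now resolve both the L2 lookup (mac -> vtep)
                # and the ARP answer (ip -> mac) locally, without flooding.
                push_entry(peer, {"vni": vni, "mac": mac, "ip": ip, "vtep": vtep})
```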

It is a different way of solving some of the more challenging (but basic and fundamental) network behaviors, but one that makes complete sense. It does raise many scaling questions: we have taken methods that have traditionally been distributed and turned them into centralized table management. And whether the controller runs distributed, clustered or as a single entity, it is still a centrally managed entity. The next little while will tell us whether the scale and performance are sufficient for the networks we intend to build.

This does not mean, however, that there is no need for dynamic learning in an overlay network. Any network will have devices that are outside of the control of the overlay controller. These devices need to be discovered and learned somehow. That is the work of VXLAN gateways and Service Nodes in NSX. And those create a completely new challenge, one less of functionality and far more of control. How they are managed is “just” engineering work; the real challenge is who manages the tables.

[Today’s fun fact: The plastic things on the end of shoelaces are called “aglets”. And I guarantee you won’t remember that by tomorrow]

3 thoughts on “Managing Tables in our new Virtual Reality”

  1. Isn’t this LISP Mapping? Networking is all about forwarding (Joe Touch likes to equate this to tail-recursion) and registrations, i.e. binding one namespace to another.

    • Marten Terpstra says:

      Gary, it depends somewhat on the level of abstraction you decide to take. Networking in itself, in my opinion, is not very good at gluing namespaces together. Hence our continuous restacking to duplicate namespaces. Underneath it all, though, it comes down to managing tables and the relations between tables…

  2. Wow very good article guys
