If you’ve been keeping up with Plexxi, you know we’re proud of our roots on the East Coast. However, we have exciting news this week for the West Coast contingent of our team. Our San Francisco team has just moved into the communal office space WeWork (the SOMA location, not the Golden Gate one). Stay tuned for more as we get settled in there.
In this week’s PlexxiTube video, Dan Backman comments on how Plexxi’s switching solution uses an SDN controller called Plexxi Control. He answers a couple of pertinent questions: Where does the Plexxi Controller run? Is it a standalone piece of hardware? Does it run on a virtual machine? Which operating systems is it compatible with? Learn the answers to these questions and more in the video below.
Below please find our roundup of this week’s best reads:
The first article for this week’s Plexxi Pulse is from Paul-Parker Johnson for SearchTelecom. It is a good summary of where things stand and where they are headed for SDN and NFV. In my opinion, SDN will solve some of the operational and management-related challenges businesses face. However, at some point, the physical interconnect will need to change as well. Manufacturing improvements have already made photonic switching commercially viable. Over time, we should see a rise in silicon photonics in the datacenter, which should help address bandwidth growth. The combination of SDN (especially the traffic engineering aspects) and photonic switching should yield better utilization out of the underlying infrastructure. That would change how much capacity is needed overall, which could be interesting long term.
In this article for IT World Canada, Andrew Brooks comments on the different definitions of the term “open” in IT, looking specifically at how it is used in relation to SDN. Personally, I don’t see how this description of network virtualization and SDN directly leads to higher resource utilization. Network resource utilization is driven primarily by shortest-path-first (SPF) algorithms. Right now, virtually all traffic uses the same set of algorithms, which date back to the 1950s. To get better utilization, we need to fan traffic out across non-equal-cost paths. That is not an explicit goal of SDN or network virtualization at the moment, but it is a requirement if utilization is to improve. A perhaps more interesting conclusion is that “open” by itself has no fixed meaning; it is really serving as a proxy for several different properties, such as interchangeability, interoperability, open access (as with APIs), open source, and open standards.
It is useless to fight over open if you don’t know which property of open really matters to you. Do you want to be able to swap things in and out easily to avoid vendor lock-in? Then you care about interchangeability. Do you care about writing code around the system to handle integration? Then it’s open access. Maybe you just want stuff to work together. Interoperability. We need a more complete dialogue about open. I actually wrote all of this up last year here.
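To make the utilization argument above concrete, here is a toy sketch (the topology, costs, and traffic numbers are entirely hypothetical) contrasting classic SPF behavior, where all traffic rides the single lowest-cost path, with a simple weighted fan-out across non-equal-cost paths:

```python
# Hypothetical example: two disjoint paths between a source and a
# destination, each with a total path cost (arbitrary units).
paths = [
    {"name": "short", "cost": 2},
    {"name": "long",  "cost": 3},
]

demand = 10.0  # units of traffic to deliver


def spf_split(paths, demand):
    """Classic SPF: all traffic is placed on the single lowest-cost path."""
    best = min(paths, key=lambda p: p["cost"])
    return {p["name"]: (demand if p is best else 0.0) for p in paths}


def weighted_split(paths, demand):
    """Fan traffic across all paths, inversely weighted by path cost."""
    weights = {p["name"]: 1.0 / p["cost"] for p in paths}
    total = sum(weights.values())
    return {name: demand * w / total for name, w in weights.items()}


spf = spf_split(paths, demand)          # {'short': 10.0, 'long': 0.0}
fanned = weighted_split(paths, demand)  # {'short': 6.0, 'long': 4.0}
```

Under pure SPF the longer path carries nothing, so its capacity sits idle even as the shorter path saturates; the weighted split keeps both paths working. Real traffic engineering is far more involved than this inverse-cost heuristic, but the sketch shows why moving beyond equal-cost-only routing is a precondition for better utilization.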
The final article from this week’s Pulse is from Technically and highlights Dr. Andrea Goldsmith’s argument that network providers need to deploy more cells in their cellular networks to meet growing bandwidth needs. This argument rings true for me well beyond the cellular space. We run networks at relatively low utilization, and the answer to growth has always been to throw more bandwidth at the problem. But if demand is increasing geometrically, at some point it will no longer be cost-effective to do this (unless you believe that the cost of capacity will follow a similar downward trajectory).
In CPU land, we went to multicore: not bigger and faster chips, but smaller cores and a distributed architecture. I would think there is a networking analog here. Whatever the solution, we probably ought to start promoting better utilization as well as lower overall cost. Relying on cost improvements alone would seem to have a finite lifespan.