Plexxi had a really big year in 2015: we grew year-over-year revenue 10X, dramatically expanded our customer base, signed an exclusive distribution relationship with Arrow Electronics, and grew our value-added reseller network 6X. We achieved all this through a relentless focus on providing transformational data center networking products and tools. Our solutions are purpose-built to support the people responsible for designing, architecting and supporting public and private cloud deployments. As I discussed in my last blog, we call these people Cloud Builders.

Cloud Builders are tasked with finding new ways to meet today’s dynamic business requirements. Traditionally, public cloud has been associated with speed, agility, elasticity and cost savings, whereas private cloud has been associated with control over application performance, latency and security. To support modern-day business requirements, Cloud Builders need to leverage both public and private cloud; it’s no longer an either/or scenario. Public and private clouds alike need to offer control over application performance, latency and security AND deliver speed, agility, elasticity and cost savings.

We continue, as a company, to learn and adapt to market conditions by listening intently to customers, prospects and partners. Lately our conversations with Cloud Builders have revolved around their rapidly growing investments in converged infrastructure (CI) and IP storage, rather than traditional Fibre Channel-based Storage Area Networks (SANs). They are deploying new storage on IP networks in the form of Hadoop/HDFS, NFS, CIFS, NDFS, iSCSI, vSAN, ScaleIO and more. You can chalk this up to the time-proven and inevitable convergence of disparate technologies onto IP Ethernet networks. There are many reasons for deploying storage on IP networks, but two of the most important are cost savings and agility. Customers want the ability to rapidly scale out storage capacity rather than be slowed down and stuck with the high cost of traditional SANs. They want to add elastic pools of storage resources as and when they need them. They want to consume storage and compute resources in an agile cloud model.

As scale-out converged infrastructure and storage deployments have grown, we have also heard consistently from Cloud Builders that their data center networks are experiencing stress and strain from the newly introduced storage traffic. Traditional leaf-and-spine data center architectures, built on design principles that have remained largely unchanged for 25+ years, are ill-equipped to handle the large volume of highly bursty, unpredictable east/west traffic that IP storage introduces. These legacy networks are static in nature, defined by their cabling, fragile, and unaware of the newly introduced traffic types. They struggle to support growing pools of storage capacity that span racks, rows or multiple data centers.

The network: Your next big storage problem

Now, as many of you know, I spent nearly nine years at EMC building its mid-range storage business. In the storage industry there is a lot of discussion about flash and software-defined this and that, but the real challenge is the way we connect servers to storage and build infrastructure to enable modern scale-out applications. This article (“The network: Your next big storage problem”) discusses the changing dynamics in storage and the shifting bottlenecks. The very nature of storage traffic is distinctly different from traditional client/server network traffic.

Existing network architectures were built in an era when network traffic was predominantly north/south, or client-to-server. As compute scaled out (multi-core), applications and storage scaled out as well, both to unlock the potential of flash and to simplify the deployment of massive petabyte (PB)-scale file systems, object stores and arrays. With this scale-out of storage, the bottleneck has shifted to the network. The requirements placed on the network by modern flash systems, software-defined storage systems (vSAN, ScaleIO, etc.) and scale-out file systems like HDFS/Hadoop, NDFS and pNFS are fundamentally different. Not only does storage traffic move east/west as well as north/south, but the requirement for consistent, low latency (low jitter) is exceptionally high.

Storage networks of the future must offer elasticity and low latency regardless of physical location (any rack, any row, any data center), and most importantly must be under software control with an awareness of the ever-changing needs of storage. They must also be capable of avoiding the “microbursts” seen on traditional networks, which were designed twenty years ago to solve a different set of problems.
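
To illustrate why microbursts are so easy to miss, here is a minimal synthetic sketch (the traffic pattern and the 10 GbE link are assumptions for illustration, not measurements): a link can look nearly idle on one-second counters while running at line rate for tens of milliseconds, filling buffers and spiking jitter.

```python
# Synthetic illustration: per-second averages hide microbursts.
# Link speed and traffic pattern are invented for illustration only.

LINK_GBPS = 10.0   # assume a 10 GbE link
MS_PER_S = 1000

# One-second trace in 1 ms bins: idle except for a 50 ms burst at
# line rate (e.g., a storage node rebalancing data to a peer).
per_ms_gbits = [LINK_GBPS / MS_PER_S if 100 <= ms < 150 else 0.0
                for ms in range(MS_PER_S)]

avg_util = sum(per_ms_gbits) / LINK_GBPS * 100              # 1-second average
peak_util = max(per_ms_gbits) * MS_PER_S / LINK_GBPS * 100  # worst 1 ms bin

print(f"1-second average utilization: {avg_util:.1f}%")   # 5.0%
print(f"Peak 1 ms utilization:        {peak_util:.1f}%")  # 100.0% -> buffers fill
```

On one-second counters this link reports 5% utilization, yet for 50 ms it is completely saturated, which is exactly the condition that produces storage latency spikes.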

Storage on IP networks introduces several new types of traffic, some of which inject very large volumes of unpredictable traffic into the data center (a sketch of how these classes might be modeled follows the list):

  • Management traffic (all traffic related to management, including logs)
  • Control traffic (cluster node communication for node failover, etc.)
  • Client-to-server storage traffic (VM to storage nodes, for accessing storage)
  • Server-to-server storage traffic (storage node to storage node, for rebalancing, etc.)
  • Backup traffic, which can congest almost any network
  • Metadata: the data about the data that file systems, object stores and distributed block systems (like vSAN) use to organize the data
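
As a thought experiment, here is a minimal sketch of how these traffic classes might be modeled for QoS/policy purposes. The class names, DSCP markings and burstiness/latency labels are illustrative assumptions, not Plexxi’s actual policy model.

```python
# Illustrative sketch: modeling the storage traffic classes above.
# DSCP values and the bursty/latency labels are assumptions for
# illustration, not any vendor's actual policy model.

from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    dscp: int                # DiffServ code point used to mark the class
    bursty: bool             # produces large, unpredictable bursts?
    latency_sensitive: bool  # suffers when queued behind bulk traffic?

STORAGE_TRAFFIC = [
    TrafficClass("management",         dscp=16, bursty=False, latency_sensitive=False),
    TrafficClass("cluster-control",    dscp=48, bursty=False, latency_sensitive=True),
    TrafficClass("client-to-storage",  dscp=32, bursty=True,  latency_sensitive=True),
    TrafficClass("storage-to-storage", dscp=24, bursty=True,  latency_sensitive=False),
    TrafficClass("backup",             dscp=8,  bursty=True,  latency_sensitive=False),
    TrafficClass("metadata",           dscp=40, bursty=False, latency_sensitive=True),
]

# A traffic-aware fabric could, for example, service latency-sensitive
# classes ahead of bulk/bursty ones.
for tc in sorted(STORAGE_TRAFFIC, key=lambda t: t.latency_sensitive, reverse=True):
    print(f"{tc.name:18} dscp={tc.dscp:2d} bursty={tc.bursty}")
```

The point of the sketch is that these classes have very different network needs, so a network that cannot tell them apart has no choice but to treat backup bursts and cluster heartbeats identically.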

Throwing more bandwidth and bigger buffers at the problem is costly and doesn’t solve the fundamental issue. The natural IO size for storage transactions has rapidly grown from what was once 512 bytes to modern file system transactions of 4KB, 8KB or 16KB, and even larger for video and other rich media. Traditional data center networks are built, configured and managed as a technology domain separate from storage and compute. Cloud Builders need the ability to unify workflows across the storage, compute AND network domains through tools, automation and orchestration. Addressing these problems requires a next-generation approach to data center networking.
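
To make the bandwidth math concrete, here is a back-of-the-envelope sketch. The 200,000 IOPS figure is an illustrative assumption for a single flash-backed node, not a benchmark.

```python
# Back-of-the-envelope: wire-level load generated by one storage node.
# The 200,000 IOPS rate is an illustrative assumption, not a measurement.

def throughput_gbps(iops: int, block_bytes: int) -> float:
    """Convert an IOPS rate at a given IO size into Gb/s on the wire."""
    return iops * block_bytes * 8 / 1e9

for block_bytes in (512, 4096, 8192, 16384):
    gbps = throughput_gbps(200_000, block_bytes)
    print(f"{block_bytes:>6} B IOs @ 200k IOPS -> {gbps:5.2f} Gb/s")

# 512 B IOs fit easily on 10 GbE (~0.8 Gb/s), but 8 KB IOs already
# exceed a 10 GbE link (~13 Gb/s), and 16 KB IOs need ~26 Gb/s.
```

The same IOPS rate that was trivial at 512-byte IOs saturates a 10 GbE link at today’s IO sizes, and that is one node before rebalancing, backup and metadata traffic are added.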

A next-generation cloud data center network from Plexxi offers a software-definable, programmable fabric capable of dynamically allocating bandwidth. Through storage, data and application awareness, the fabric adjusts in real time to the needs of specific traffic types, enabling the creation and enforcement of application-level SLAs. The highly meshed, software-definable fabric creates an elastic pool of network capacity. Data, application AND storage workload traffic is intelligently, efficiently and securely distributed across the entire fabric, allowing you to use all of the network capacity you purchased, not just a small fraction of it (as leaf-and-spine approaches do). With Plexxi, adding and scaling out storage and converged infrastructure resources in your private cloud is seamless.

Plexxi delivers a simply better network for storage and converged infrastructure.
