
Beneath the macro trends (SDN, Big Data, DevOps, Silicon Photonics, whatever), there are more subtle strategic undercurrents that have been driving a lot of the activity in the network industry for the past year or so. One of the most important in terms of competitive landscape, overall industry monetization, and customer impact is Point of Control. But despite its role in so much of what is going on, it isn't getting nearly enough air time.

For the network, the Point of Control is the point of interaction. That is to say that operations teams experience the network through their provisioning, monitoring, troubleshooting, and support systems. When these tasks are distributed amongst many surrounding tools, the Point of Control is shared, and loyalty to any one of these tools will be nominally related to how frequently it is used (or how critical the task is that it supports). When these activities get consolidated into a single administrative touch point, the value of that touch point goes up.

Because of the inherent value of owning the single point of interaction for the network, there have been many network management strategies hatched to solve this ubiquitous problem. The rationale is actually pretty straightforward: if we can solve the management problems, we can monetize the solution while also creating pull-through on our network devices.

The challenge is that how the network plugs into all the surrounding systems is not standardized. Each company has its own set of tools – some commercial, others commercial but customized, and still others homegrown. This forces any company that wants to solve these problems to tune its management solution at the edges to accommodate whatever will interact with it.

So how does a vendor do this?

The vendor launches a professional services effort, charging customers for the customized tuning required to fit the solution to their environment. If this tuning is common across many customers, the vendor can develop the practice once and apply it over and over en route to making a ton of money. If, however, each environment is different – an IT snowflake if you will – then the vendor has little to leverage across each engagement.

Building something once and selling it a million times is the path to greater margins. Building a million somethings, each of which is sold one time, will yield significantly lower margins. Companies know this, so when it comes time to do annual budget planning, where do they put their investment? On features and products that are broadly relevant. This leaves the network management solution perpetually starved for funding and attention. 

The result has been a network management landscape that works well for the small subset of shops that are not IT snowflakes but underserves the balance of customers.

SDN has been pitched by many as the solution to all our network management ailments. By centralizing control, adding in automation hooks, embracing DevOps, integrating analytics… by doing all of these things, the networks of tomorrow will be more efficient, easier to manage, and less prone to error. 

This could all be true. But SDN by itself doesn't solve the IT snowflake problem. If every environment is subtly but importantly different, no vendor will be able to adequately serve them all. We will likely see a rise in professional services plays, but the models behind this type of business are always going to lose out to the build-once-sell-one-million models. These services models will only be lucrative over the long term if there is an element of repeatability across customers (or they operate at such scale that they meaningfully augment a more traditional business that has maxed out its growth).

So does this mean SDN will not work?

Not at all. It just means that vendors need to make sure they understand what problem they are solving, and customers need to really know what solution it is they are buying. The problem to solve is not merely automation (that is certainly part of it). The challenge that needs addressing is the Snowflake Conundrum. The Snowflake Conundrum can be solved, but it will require both vendors and customers to be in cahoots.

Vendors need to provide common means of handling integration – essentially a common data model with which the network can interoperate with surrounding tools. I have talked about this before, so this is not new. But customers will need to fight the temptation to bend their solutions around the company as it exists today. A typical buying motion involves evaluating a solution and considering very carefully how it fits with all the existing infrastructure. It might be necessary to instead consider how that infrastructure (devices, people, processes, tools, training, and so on) can morph to accommodate the right solution.
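To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical vendor payload formats and field names) of what a common data model buys you: each vendor-specific adapter normalizes its own telemetry into one shared schema, and surrounding tools integrate against that schema rather than against each device.

```python
# Sketch of a common data model for interface statistics.
# All payload shapes and field names here are hypothetical; the point
# is that each vendor adapter maps into one shared, vendor-neutral schema.

from dataclasses import dataclass


@dataclass
class InterfaceStats:
    """Vendor-neutral view of one interface's counters."""
    name: str
    rx_bytes: int
    tx_bytes: int
    oper_up: bool


def from_vendor_a(payload: dict) -> InterfaceStats:
    # Vendor A nests counters under an "ifstats" key (hypothetical format).
    return InterfaceStats(
        name=payload["ifName"],
        rx_bytes=payload["ifstats"]["inOctets"],
        tx_bytes=payload["ifstats"]["outOctets"],
        oper_up=payload["operStatus"] == "up",
    )


def from_vendor_b(payload: dict) -> InterfaceStats:
    # Vendor B flattens everything into one dict (also hypothetical).
    return InterfaceStats(
        name=payload["interface"],
        rx_bytes=payload["rx"],
        tx_bytes=payload["tx"],
        oper_up=bool(payload["up"]),
    )


# Surrounding tools consume InterfaceStats only, never raw vendor payloads.
a = from_vendor_a({"ifName": "ge-0/0/0",
                   "ifstats": {"inOctets": 1200, "outOctets": 800},
                   "operStatus": "up"})
b = from_vendor_b({"interface": "Eth1/1", "rx": 500, "tx": 300, "up": 1})
print(a.rx_bytes, b.oper_up)  # → 1200 True
```

The per-vendor adapters are still snowflake-shaped, but they are thin and live in one place; everything downstream of the schema becomes the build-once-sell-one-million part.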

I am not suggesting that every purchase be a reshaping of all of IT. But there are inherent long-term costs associated with this customization mentality. Interestingly, we are not the only industry to suffer from this. In the ERP space, entire practices at companies like Deloitte and Accenture exist only to analyze and customize. What could be relatively straightforward product deployments turn into multi-year, 8- or 9-figure investments that tend to fail on the first try. This is why some companies are starting to address their own ERP Snowflake Conundrum and implement the solution out of the box.

Interestingly, while this means that Oracle and SAP will get less professional services revenue, it does mean they can shift that spend to build-once-sell-one-million projects. Over time, the payout will be positive both for the customers and for the vendors.

I suspect our industry needs to make a similar shift. But to date, we seem to be either not talking about the right problems or (even worse) suggesting that the mere presence of APIs will solve the Snowflake Conundrum. Pushing the problem from the vendor to the customer is disguising surrender as power (We cannot solve the problem, but maybe you can). There simply has to be a different way.


Showing 3 comments
  • Ruth

    Interesting news, thanks for the share.

  • Peter

    Interesting commentary. Having been in the network management space for a number of years and worked with many different customers, you get to see everything. As you state, no two customer networks are alike and in addition, their business objectives are different as well, so double the snowflakes and then add the network, applications and security folks into the mix and now you’ve got a blizzard.

    So while a one-size-fits-all approach to network management is certainly a nice goal, the reality of the situation is quite the opposite. Many vendors talk about a unified solution, bringing NFM, NPM and APM together to provide a singular system for monitoring, diagnosis, troubleshooting, analysis and planning, but the truth is that 80% of current commercial network management solutions are a hodgepodge of products cobbled together through various acquisitions. The other issue is that equipment vendors push their own standards to build exclusivity and make it more difficult to replace them.

    There are some common threads for fault and performance management (SNMP, ICMP and IPFIX), while many others are vendor specific (WMI, FnF, sFlow). Invariably the customer is stuck trying to figure it all out and get it configured and running so that he can monitor the network, or he can pay for a busload of Pro Serv solution experts to swarm all over his data center for a couple of months.

    OpenFlow (SDN), while not a total panacea, is a concerted effort to resolve the current lack of commonality and interoperability by taking a standardized approach to the control plane. One area that both network management vendors and customers are going to have to pay close attention to is the ability of an SDN-based control plane to reconfigure the network in real time to adapt to changing conditions: traffic prioritization, application demand, latency, saturated links, packet loss, and so on.

    And this is where it gets tricky: current NFM, NPM and APM solutions are subject to delays from the time an event occurs to when it is actually collected and reported, usually in the seconds-to-minutes range. The key assumption baked into that model is that the network is static. Now add the ability of the network to dynamically modify itself at layers 2, 3 and 4 into the mix, and the current management model falls apart, as it will not be able to keep up with the changes. Instead of seconds or minutes, OpenFlow management systems will have to collect, process and display an event and the correlating network changes in centiseconds or milliseconds.

    Does this mean dedicated management silicon or a real-time OS underlying the upper presentation layers of an OpenFlow management system? That is something the management vendors are going to have to figure out. Is this the repeatable build-once-sell-one-million model?

    • mike.bushong

      What a thoughtful reply. So first off, thanks for that.

      Your comments on dynamism in the network are well-taken. I think you have nailed one of the challenges of an ephemeral-state-driven network. When network behavior is based on network state that is signalled and transient (not static, persistent configuration), the questions become: how do you collect that state, how do you aggregate it to whatever management plane you are using, and how do you correlate it with events with enough timing granularity to truly debug, bill, whatever? This is a tough problem to solve.

      I don’t know if the solution is dedicated silicon (it could be – I honestly don’t know) or if it is a data collection solution. I do know that SDN is going to put a larger premium on analytics, so there will be more commercial reasons than ever to solve the problem. It could be that a change in business model might be enough to make progress. The lower-margin customized network management business was never going to have enough reasons to go solve the hard problems (the business was based on customization, not necessarily solving the hardest problems). But a higher-margin solutions business around SDN could drive the R&D needed.

      The question will shift, though. If someone solves this, do they remain independent? Or does this become so important that they are immediately gobbled up and rolled into a proprietary solution? If the Point of Control is the point of monetization, I would bet on acquisition, which would leave the industry only marginally better off.

