Beneath the macro trends (SDN, Big Data, DevOps, Silicon Photonics, whatever), there are more subtle strategic undercurrents that have been driving much of the activity in the network industry for the past year or so. One of the most important in terms of competitive landscape, overall industry monetization, and customer impact is Point of Control. But despite its role in so much of what is going on, it isn't getting nearly enough air time.
For the network, the Point of Control is the point of interaction. That is to say, operations teams experience the network through their provisioning, monitoring, troubleshooting, and support systems. When these tasks are distributed amongst many surrounding tools, the Point of Control is shared, and loyalty to any one of those tools will be roughly proportional to how frequently it is used (or how critical the task it supports). When these activities get consolidated into a single administrative touch point, the value of that touch point goes up.
Because of the inherent value of owning the single point of interaction for the network, there have been many network management strategies hatched to solve this ubiquitous problem. The rationale is actually pretty straightforward: if we can solve the management problems, we can monetize the solution while also creating pull-through on our network devices.
The challenge is that how the network plugs into all the surrounding systems is not a standard thing. Each company has its own set of tools – some commercial, others commercial but customized, and still others that are homegrown. This forces any company that wants to solve these problems to tune its management solution at the edges to accommodate whatever is going to interact with it.
So how does a vendor do this?
The vendor launches a professional services effort, charging customers for the customized tuning required to fit the solution to their environment. If this tuning is common across many customers, the vendor can develop the practice once and apply it over and over en route to making a ton of money. If, however, each environment is different – an IT snowflake if you will – then the vendor has little to leverage across each engagement.
Building something once and selling it a million times is the path to greater margins. Building a million somethings, each of which is sold one time, will yield significantly lower margins. Companies know this, so when it comes time to do annual budget planning, where do they put their investment? On features and products that are broadly relevant. This leaves the network management solution perpetually starved for funding and attention.
The result has been a network management landscape that works well for the small subset of shops that are not IT snowflakes but underserves the balance of customers.
SDN has been pitched by many as the solution to all our network management ailments. By centralizing control, adding in automation hooks, embracing DevOps, integrating analytics… by doing all of these things, the networks of tomorrow will be more efficient, easier to manage, and less prone to error.
This could all be true. But SDN by itself doesn't solve the IT snowflake problem. If every environment is subtly but importantly different, no vendor will be able to serve them all adequately with a single product. We will likely see a rise in professional services plays, but the models behind this type of business are always going to lose out to the build-once-sell-one-million models. These services models will only be lucrative over the long term if there is an element of repeatability across customers (or if they operate at such scale that they meaningfully augment a more traditional business that has maxed out its growth).
So does this mean SDN will not work?
Not at all. It just means that vendors need to make sure they understand what problem they are solving, and customers need to really understand what solution they are buying. The problem to solve is not merely automation (though that is certainly part of it). The challenge that needs addressing is the Snowflake Conundrum. The Snowflake Conundrum can be solved, but it will require vendors and customers to be in cahoots.
Vendors need to provide a common means of handling integration – essentially a common data model through which the network can interoperate with surrounding tools. I have talked about this before, so this is not new. But customers will need to fight the temptation to bend every solution around the company. A typical buying motion involves evaluating a solution and carefully considering how it fits with all the existing infrastructure. It might be necessary to instead consider how that infrastructure (devices, people, processes, tools, training, and so on) can morph to accommodate the right solution.
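To make the common-data-model idea concrete, here is a minimal sketch (all names hypothetical, in the spirit of vendor-neutral modeling efforts like YANG): surrounding tools speak one neutral schema, and thin per-vendor adapters translate it, so each new tool or device integrates once against the model rather than N times against each other.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interface:
    """A vendor-neutral view of a network interface (hypothetical schema)."""
    name: str
    enabled: bool
    mtu: int

def to_vendor_a(iface: Interface) -> str:
    # Hypothetical CLI-style rendering for "vendor A".
    state = "no shutdown" if iface.enabled else "shutdown"
    return f"interface {iface.name}\n mtu {iface.mtu}\n {state}"

def to_vendor_b(iface: Interface) -> dict:
    # Hypothetical JSON/REST-style rendering for "vendor B".
    return {"ifName": iface.name, "adminUp": iface.enabled, "mtu": iface.mtu}

# The provisioning tool only ever constructs the neutral model;
# adapters absorb the per-vendor differences at the edge.
uplink = Interface(name="eth0", enabled=True, mtu=9000)
print(to_vendor_a(uplink))
print(to_vendor_b(uplink))
```

The point of the sketch is where the customization lives: in a handful of vendor adapters maintained by the vendors themselves, rather than scattered through every customer's snowflake tooling.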
I am not suggesting that every purchase be a reshaping of all of IT. But there are inherent long-term costs associated with this customization mentality. Interestingly, we are not the only industry to suffer from this. In the ERP space, entire practices at firms like Deloitte and Accenture exist largely to analyze and customize. What could be relatively straightforward product deployments turn into multi-year, 8- or 9-figure investments that tend to fail on the first try. This is why some companies are starting to address their own ERP Snowflake Conundrum and implement the solution out of the box.
Interestingly, while this means that Oracle and SAP will get less professional services revenue, it also means they can shift that spend to build-once-sell-one-million projects. Over time, the payoff will be positive for both the customers and the vendors.
I suspect our industry needs to make a similar shift. But to date, we seem to be either not talking about the right problems or (even worse) suggesting that the mere presence of APIs will solve the Snowflake Conundrum. Pushing the problem from the vendor to the customer is disguising surrender as empowerment ("We cannot solve the problem, but maybe you can"). There simply has to be a different way.
If you would like to read more on this topic, check out: