
In part 1 of this series, I mentioned a customer that was starting to understand how to build application policy into their deployment processes and in turn was building new infrastructure that could understand those policies. That’s a lot of usage of the word “policy” so it’s probably a good idea to go into a bit more detail on what that means.

In this context, policy refers to how specific IT resources are used in accordance with a business’s rules or practices. A much more detailed discussion of policy in the data center is covered in this most excellent networkheresy blog post (with great additional discussions here and here).  But suffice it to say that getting to full self-service IT nirvana requires that we codify business-centric policy and encapsulate the applications with that policy.

The goals of the previously mentioned customer were pretty simple, actually. They wanted to provide self-service compute, storage, networking, and a choice of application software stacks to their vast army of developers. They wanted this self-service capability to extend beyond development and test workloads to full production workloads, including fully automated deployment. They wanted to provide costs back to the business that were on par with or better than the leading public cloud providers. Sounds simple, right? (Yes, if standing up a complete private I/PaaS infrastructure can ever be described as simple!). Turns out, at least for this customer, most of it was simple – except when it came to the fully automated deployment part.

What this customer found was that all of the automation broke down when they tried to automate the policies. Certain workloads contained social security numbers and had rules requiring that their traffic transit a stateful firewall. Some applications had performance SLAs that required that they be instantiated on some minimum number of virtual machines and deployed behind a load balancer. Then there were more basic policies around who was allowed to turn up what type of infrastructure. And of course, on top of it all, there was a matrix of complex connectivity policies between application components, and between applications and the outside world.
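To make the problem concrete, here is a minimal sketch (not this customer's actual system — all application names, policy fields, and thresholds are hypothetical) of what it looks like to express those same policies as declarative data and check a proposed deployment against them before automation proceeds:

```python
# Hypothetical policy store: each entry captures the kinds of rules described
# above -- a stateful-firewall requirement for PII workloads, an SLA-driven
# instance minimum plus load balancer, and an allowed-connectivity list.
POLICIES = {
    "payroll-app": {
        "requires_stateful_firewall": True,   # workload handles social security numbers
        "min_instances": 4,                   # performance SLA
        "requires_load_balancer": True,
        "allowed_peers": ["ldap", "audit-log"],  # one row of the connectivity matrix
    },
}

def violations(app, deployment):
    """Return a list of policy violations for a proposed deployment."""
    policy = POLICIES.get(app, {})
    found = []
    if policy.get("requires_stateful_firewall") and not deployment.get("stateful_firewall"):
        found.append("PII traffic must transit a stateful firewall")
    if deployment.get("instances", 0) < policy.get("min_instances", 1):
        found.append("instance count below SLA minimum")
    if policy.get("requires_load_balancer") and not deployment.get("load_balancer"):
        found.append("SLA requires a load balancer")
    for peer in deployment.get("peers", []):
        if peer not in policy.get("allowed_peers", []):
            found.append(f"connectivity to '{peer}' not permitted")
    return found

# A proposed deployment that satisfies the firewall rule but breaks three others:
print(violations("payroll-app",
                 {"instances": 2, "stateful_firewall": True,
                  "peers": ["ldap", "internet"]}))
```

The point of the sketch is the shape, not the code: policy lives in one declarative place, independent of any particular infrastructure, and the automation consults it rather than hard-coding the rules into each deployment script.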

There are a number of efforts underway to help solve some of these problems. The first big problem is how to express policies in a consistent way that is independent of infrastructure. This requires some level of collaboration in the industry. One notable effort trying to tackle the policy problem is the OpenStack Congress project, led by VMware (full disclaimer: Plexxi has donated some IP and development effort to this project), but it is certainly not the only one and probably not the last one to surface.

Solving the policy problem, therefore, is hugely important to the industry as a whole, as it opens up “cloud” style infrastructure to more complex Enterprise workloads that historically have been very difficult to engineer, deploy, and operate without a lot of heavy manual labor. It’s likely that the work to encode policy into application orchestration systems is a longer-term effort, but some good progress is being made today, and it is certainly important that network infrastructure (the part we care about) is designed and built in a way that embraces this upcoming world of automated, complex, policy-driven workload deployment.

Which brings us to the topic of tomorrow’s post – the effect of application policy on infrastructure, and more specifically on networking infrastructure. Stay tuned!

[Today’s Fun Fact: The name for the space between your eyebrows is “nasion.” Auto-correct seems dubious.]

