
This week the OpenStack Podcast’s guest rockstar was Sirish Raghuram. He’s the co-founder and CEO of Platform9 (www.platform9.com), and he’s also a former long-term VMware employee. From that unique vantage point, he was able to contribute terrific insights about why enterprises haven’t fully embraced the cloud yet and why VMware Integrated OpenStack is probably a net win for the OpenStack community. He also spoke about:

  • Why he founded Platform9
  • What Platform9 provides
  • How containers may change the meaning of PaaS
  • Why 2015 & 2016 will be the turning point for enterprise cloud adoption
  • Why his team uses Ansible for configuration management
  • Who he thinks has done mind-blowing work in the tech world
  • What the current monthly Amazon spending break point is, and how we might bring it down

The full transcript of the interview follows below.

Jeff:                 Hi, everyone. I’m Jeff Dickey from Redapt.

Niki:                I am Niki Acosta from Cisco, and we are super, super, super stoked today to have a very special guest from Platform9, Sirish Raghuram. Did I say that right?

Sirish:             Yeah, you got it Niki. Thank you for having me and it’s great to be here today.

Niki:                Awesome. Sirish, we are really excited. The impetus behind Sirish joining the show today is, I don’t know about the rest of the OpenStack community, but I’ve been hearing a lot about Platform9. They are getting a lot of attention. What they are doing is incredibly cool. We wanted to bring Sirish on today to talk about it, because they haven’t been part of the OpenStack community for long, but as we discussed prior to the show, they actually have good timing in what they are trying to accomplish.

Sirish, before we get into all of that, we usually like to start the show by asking how you got started in tech. Tell us about your roots, your interesting time with VMware and now leading up to, of course, Platform9.

Sirish:             Sure, absolutely. I grew up in India. I was studying engineering, and that’s when I got my first PC. I discovered the whole world of World of Warcraft and Age of Empires. Multiplayer real-time strategy games are where it’s at, if you ask me. In between I would write some code as well. I really took to it. Programming, technology, computer science, it felt like home for me. After graduating from Pune I was fortunate to get connected with VMware, and that’s how I got started there.

My first assignment at VMware was making Windows XP boot on our workstation product, VMware Workstation 3, because Microsoft had changed the SCSI stack. One thing led to another, and I spent 13 years in VMware engineering across many different groups and functions, [inaudible 00:01:58] engineering, hands-on technical leadership, and engineering management. Now, here we are with OpenStack and Platform9. The key insight was that, for all of virtualization’s success and dominance in the enterprise, the enterprise customer is still struggling to put it all together as a private cloud. The answer has obviously got something to do with OpenStack. We started Platform9 to make it really simple and easy for anyone at any scale to run their own private cloud.

Niki:                Jeff, you look like you want to ask something.

Jeff:                 When did you decide that you are going to build this OpenStack-based company? What was that? What was happening? What was going on?

Sirish:             Yeah. I get asked about OpenStack a lot, especially given my background at VMware. The way we thought about it is, there needs to be a cloud platform, and we didn’t feel like VMware’s in-house cloud platform solutions were the answer. When we thought about solving that problem, we could have written something entirely our own or used something like OpenStack, and we also looked at CloudStack. The answer was very easy: it had to be OpenStack. Think about it from a customer’s point of view. There is this enormous ecosystem of leverage: the API endpoints are standardized, there are tools that work with them, and there are storage and network systems that integrate with them. You get a standard API.

OpenStack has become a standard for the data center, and the customer wins by going with it. Architecturally, OpenStack doesn’t get the credit that it deserves. It’s actually a very reasonable, loosely coupled, service-oriented architecture. Why wouldn’t we use OpenStack as opposed to rolling something entirely our own? We’re trying to solve a problem, and OpenStack is a part of that solution.

Niki:                VMware launched their OpenStack distro, or whatever it is. Presumably, from what I read, it runs on ESX. Is that posing a threat, do you think, to other players in the OpenStack space, given their market dominance over the last ten years?

Sirish:             It’s a great thing for OpenStack and the space in general. If you look at what’s happened, I’ve had the opportunity to talk to two or three really large VMware accounts that either participated in the VIO beta or were the kind of customer that VMware was trying to retain by launching VMware Integrated OpenStack. These customers had come to the conclusion that VMware’s existing cloud platform solutions weren’t going to cut it. They needed something more like Amazon, and the natural answer was to look at something like OpenStack. OpenStack has this fantastic benefit of saying, here is an API that a number of different vendors can potentially implement for you, be that Metacloud or Platform9 or VMware or any of the other OpenStack vendors.

For a customer, it puts them in a position where they have more leverage, whereas if they were using the proprietary vCloud API, they’re completely beholden to VMware. Not just that, the design is a lot more modern. OpenStack’s design is a lot more familiar to a world that is using a lot more Amazon Web Services. VMware did a very good thing in realizing that they needed to have an OpenStack solution. Now, for the OpenStack market, it suddenly expands the addressable market 5X. Seventy or eighty percent of the enterprise is still running a lot of VMware, and all of those people previously weren’t really addressable by the OpenStack community.

Now, suddenly that community is realizing that OpenStack is actually a cloud platform that they need to be thinking about as well. It expands the addressable market so much that it more than offsets the competition from VMware, and now it’s up to the individual vendors to add their own value and differentiation and compete with VMware. It’s a great thing for everyone.

Niki:                Do you think that people who are using VMware Integrated OpenStack are going to use it as a kick-the-tires kind of environment, and then potentially choose another route for OpenStack? Or do you think those particular users are going to stick with VMware Integrated OpenStack as it gets better over time?

Sirish:             That’s like predicting which way the future works out; it’s a little hard to do. I do think that the people who are looking at OpenStack are also realizing that there are other hypervisors out there. There is this newfangled technology called containers, which is very interesting as well. They are looking to achieve greater infrastructure automation. They look at VMware Integrated OpenStack as a stepping stone for their internal infrastructure, one that is going to take them toward more automation, multiple hypervisors, and more loosely coupled, distributed, Linux-oriented services.

For some, it’s a stepping stone that’s comfortable; it comes from VMware, a brand they trust. For others, it’s a way to test the waters in OpenStack, while actually looking to make a bigger jump into the OpenStack community.

Niki:                That’s interesting that you say that. In a lot of ways, I see that a lot of people are now seeing VMware as a direct threat, but the same people have to be very careful because they have these historical, strong relationships and partnerships with VMware. It will be interesting to see how all that shakes out, which brings me to the whole reason we had you here. It sounds like, from Platform9’s perspective, you guys are actually trying to help these types of customers who are dealing with multiple hypervisors and multiple target clouds. Can you tell us a little bit about the impetus behind founding Platform9, and what you guys do to help solve these challenges?

Sirish:             Sure, absolutely. I was at VMware for 12 years. When you are in a company as successful as VMware, with a fantastic culture, it’s very easy to get locked into your own cocoon, and that certainly happened with us. There was a conversation I got pulled into with a San Francisco-based startup in 2013 that was having a massive spend on Amazon Web Services. They had grown from being a 20-person company to a 600-person company. They wanted to move off of Amazon and run their own infrastructure. I was brought in to help answer their questions and to show them how they could do it with VMware’s solutions.

When we actually started having the conversation with them, I realized that they didn’t think it was easy or viable for them to build a private cloud. I was like, look, there are all these solutions. There is VMware, there is OpenStack… all these things. They said, “Look, our operational team is eight people; we would need to get to 20 people in order to pull this off.” I realized that the cost for the company was such that they couldn’t do it. They continued using Amazon for far longer than they wanted to, because they felt there was no other solution that they could really implement successfully. That was a shock to the system for me, when I realized that for all of virtualization’s success…virtualization is really the underpinning of cloud infrastructure…on the private cloud side people struggle with it so much. This is a very sophisticated team I am talking about. These people were, I’d say, power users to put it mildly, and they weren’t confident of pulling off a private cloud. That’s when I realized that there is a problem that needs to be solved, and we said we are going to go solve it. We asked ourselves, why is it so hard? When people can rack and stack hardware and put Linux on it very easily, why is it so hard to go from racks of Linux or racks of VMware vSphere to your own mini Amazon? There is no good reason for that to be the case.

We started Platform9 with a goal. We asked ourselves, if someone has racks of Linux or racks of VMware, can we get that private cloud operational in five minutes? That’s the goal we have for ourselves. We do that by hosting OpenStack as a web service. It’s a cloud service, which means that customers sign up at Platform9 and they get a fully tuned, production-ready OpenStack control plane that they can plug their infrastructure into, which means that suddenly they don’t need to spend any time on that complex management layer that they otherwise struggle to build. They have the hardware; that’s not the hard part for them. What they need is a management fabric that pools that hardware together into a private cloud, and we took all that pain away by saying, “You don’t need to install it; you don’t need to configure it; you don’t need to monitor anything; you don’t need to troubleshoot anything, and you don’t need ways and means to resolve issues and roll out changes on your own.”

We take all of the responsibility of delivering the control plane to you with an SLA that you can rely on. You can actually focus on using your private cloud and going on to achieve greater infrastructure agility and automation as opposed to trying to get it up and running.

Niki:                Are you operating the cloud long term? I am assuming you guys are handling upgrades and all that good stuff too.

Sirish:             Yeah. That’s actually the most important thing. If you look at getting an OpenStack control plane up, there are many different ways of doing it today. There is Packstack, RDO, Mirantis, DevStack. There are many different ways by which you can get an OpenStack instance up and running. That’s not the hard part. The hard part is keeping it running, in production, at scale, while you have users using it. That’s where you need to have the full loop: you need telemetry that monitors your OpenStack services for health and detects issues as they happen, and then has the ability to alert the concerned engineers, who can roll in and resolve issues proactively. That’s a lot of the automation that Amazon Web Services has in their management fabric in terms of keeping it running in production at scale, and that’s what’s needed in the private cloud. That’s why we deliver Platform9 as a service with an SLA saying, “Look, your control plane is our responsibility; you don’t have to miss a beat. You can use it and we will take complete responsibility for its availability, its scalability, its performance, its upgrade SLAs, and so forth. All of those worries go away; we take care of them entirely.”

Niki:                I read on your website about one of the goals or intentions of Platform9; I’m not sure if this is something that exists today or something that’s on your roadmap. Your intent is to have multi-hypervisor support, is that correct?

Sirish:             That is correct. We declared general availability for KVM and [inaudible 00:13:00]; that was about six weeks ago. We are actually now in beta with VMware vSphere. We have some really large customers who have massive VMware deployments who are very excited about what we are doing. After we are done with VMware, we will be announcing updates on Docker as well. Essentially, ESX is still the world’s best hypervisor, but you don’t need it for every workload that you run. If all you are trying to do is CI/CD and test-and-dev automation, you don’t need a lot of the features that VMware ESX has. You may legitimately choose to use KVM, or maybe you will start to move to a microservices architecture and use more containers.

We think that the private cloud has to be decoupled from your choice of virtualization platform, and customers should be free to choose any mix of virtualization technologies that works for them. They may run the production workloads that need higher SLAs and a very resilient underlying infrastructure on VMware, or they may run newer applications and containers and test-and-dev work that will …

Niki:                I think we lost Sirish. Sirish, can you hear us?

Sirish:             Platforms that work for them, and we want to enable them to use that in the private cloud seamlessly.

Niki:                Looks like we lost you there for a second, but I think I got the basic gist of what you were saying. Let’s say, hypothetically, you’ve got your OpenStack [inaudible 00:14:35] Platform9; you’ve got a workload and you want that workload to target a VMware hypervisor. Are you handling that through the OpenStack layer, the scheduler, or is that happening in some other component that might be Platform9’s secret sauce?

Sirish:             We are all based on OpenStack. Let me actually walk over to a location where I probably have better wireless range. It’s all based on OpenStack. Yeah, we are an OpenStack-based company, and we will continue to remain on OpenStack. What we do is have the scheduling logic in OpenStack integrate with the KVM-based resources that an organization might have as well as any VMware resources. Both of these resources can be pooled into one private cloud. What that means is that you can then deploy your virtual machines regardless of whether the image is a VMware image or a KVM image. The scheduler has the ability to look at the image type that you are trying to deploy from and pick the right underlying infrastructure.
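
To illustrate the kind of image-aware placement Sirish describes, here is a minimal, standalone Python sketch in the spirit of Nova’s ImagePropertiesFilter, which matches an image’s declared hypervisor_type against what each host reports. The Host and Image classes are simplified stand-ins for this example, not Platform9’s code.

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        hypervisor_type: str  # e.g. "QEMU" for KVM hosts, "VMware vCenter Server" for vSphere

    @dataclass
    class Image:
        name: str
        properties: dict = field(default_factory=dict)  # Glance-style image properties

    def hosts_for_image(image, hosts):
        """Keep only hosts whose hypervisor matches the hypervisor_type the image
        declares; if the image declares nothing, any host is acceptable."""
        wanted = image.properties.get("hypervisor_type")
        if not wanted:
            return list(hosts)
        return [h for h in hosts if wanted.lower() in h.hypervisor_type.lower()]

    hosts = [Host("kvm-01", "QEMU"), Host("esx-cluster-a", "VMware vCenter Server")]
    vmdk_image = Image("centos-vmdk", {"hypervisor_type": "vmware"})
    qcow_image = Image("cirros-qcow2", {"hypervisor_type": "qemu"})
    print([h.name for h in hosts_for_image(vmdk_image, hosts)])  # ['esx-cluster-a']
    print([h.name for h in hosts_for_image(qcow_image, hosts)])  # ['kvm-01']

In an actual OpenStack deployment, an equivalent property set on the Glance image is what scheduler filters such as ImagePropertiesFilter evaluate.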

Niki:                Very cool. Then to the user, those are all going to appear in the OpenStack dashboard, right?

Sirish:             It looks exactly the same. The OpenStack Glance image library represents all of the images that you have. It looks exactly the same; the user doesn’t even see the difference between VMware underneath and KVM underneath.

Niki:                How are you handling storage? That’s always the big one. It sounds like everyone that we’ve talked to on the show has a different spin on block storage, volumes, and object storage.

Sirish:             A lot of the challenge with OpenStack is that it’s like a framework. It’s like a kernel that you can use to do many different things. I think the challenge is in putting it together in a way that eliminates, or subdues, the complexity that arises from the million different ways of doing things. With Platform9 the philosophy is as simple as this: if you are deploying a block storage device today, you have a server, let’s say, that’s hooked up with storage. It could be direct-attached storage or it could be network-attached storage. When you import that server into Platform9, the way you do it is you sign up for your Platform9 control plane, you download an agent and drop it onto that server, and that enables the trust and management relationship between your server and your Platform9 control plane.

When you do that, the agent introspects any storage that’s available on that server, be it direct-attached storage or network-attached storage, and seamlessly imports that via the Nova, Glance, and Cinder APIs in OpenStack. Likewise on the network side. On both storage and network, the key achievement is in eliminating the complexity that people face when interfacing OpenStack with the storage and network subsystems that they may have.
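
Platform9’s agent and its wire protocol are not public, so purely as an illustration of the pattern described above (introspect local storage, then report it to a hosted control plane), here is a hypothetical Python sketch. The sysfs scan is standard Linux; the report URL and payload format are invented for the example.

    import json
    import os
    import urllib.request

    def discover_block_devices():
        """List local block devices the way an introspection agent might,
        using the standard Linux sysfs layout (sizes are in 512-byte sectors)."""
        devices = []
        for dev in sorted(os.listdir("/sys/block")):
            if dev.startswith(("loop", "ram")):
                continue  # skip pseudo-devices
            with open(f"/sys/block/{dev}/size") as f:
                sectors = int(f.read().strip())
            devices.append({"device": dev, "size_gb": round(sectors * 512 / 1024 ** 3, 1)})
        return devices

    def report(devices, report_url):
        """POST the inventory to a (hypothetical) control-plane endpoint."""
        body = json.dumps({"host": os.uname().nodename, "block_devices": devices}).encode()
        req = urllib.request.Request(report_url, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        inventory = discover_block_devices()
        print(json.dumps(inventory, indent=2))
        # report(inventory, "https://control-plane.example.invalid/v1/inventory")  # hypothetical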

Niki:                Are you using Neutron currently?

Sirish:             We are moving to Neutron. We were based on nova-network. We’ve only recently upgraded from OpenStack Havana to OpenStack Juno. By the way, that’s a story in itself. As of now, many of our customers are still on OpenStack Havana, and we are actually rolling out this upgrade next week. Their upgrade process looks like this: they get an email saying, hey, there is a 15-minute downtime scheduled for this time next week; let us know if you want to reschedule it. At the end of that downtime period they get an OpenStack Juno environment completely up and running. They don’t need to lift a finger; it all works for them. It’s completely touchless from their point of view.

As part of the Juno upgrade, a key driver was to move to the more stable Neutron implementation in Juno. We did look at Neutron in Havana, but we decided to stick with nova-network for stability there. In Juno we are actually making greater use of Cinder and Neutron.

Niki:                Very cool. Jeff, I feel like I’ve asked a barrage of questions, and I think it’s your turn.

Jeff:                 You’ve asked some great questions. It has been good. What does size and scale look like for you guys? What’s the ideal customer and how large do you guys go?

Sirish:             We actually have a lot of inbound interest from very small customers. A lot of that comes from small customers who may previously have said, “OpenStack? No way; that’s too complex for me.” We are making it so accessible, so easy for them, that many people sign up and get up and running very quickly. The sweet spot is, I think, where OpenStack solves problems at scale. You have a certain minimum scale at which the value proposition for OpenStack in infrastructure automation and resource pooling really becomes very high. We think that scale is somewhere around 50 physical servers and up.

Our largest customer plans to deploy us on a couple of thousand servers by 2017. They are starting small with a 50-server deployment right now, but they are licensing Platform9 site-wide. We have, I’d say, five customers that plan to get to the thousands-of-physical-servers scale, but they are not there yet today. The way infrastructure software works is they tend to test first at 20-server scale, then 50 servers, then 100 servers, and they scale up gradually. Very few of them flip a switch and say, “We are going to run this on 2,000 servers tomorrow.”

Niki:                Are you deploying this in a hosted model, or is this a customer-premises implementation?

Sirish:             The infrastructure, the actual resources, the compute and the storage and the network resources that they are using, always tends to be within the customers’ premises. The control plane for all of these customers is hosted by Platform9. Now, I’d say ten percent of our customer inbound wants a purely on-premises solution. We don’t have that today. It is something we are architecturally designed to do, and we could make it available relatively quickly. It’s a matter of prioritizing our backlog and getting first things done first.

Niki:                Very cool. That’s a great model, if you can centralize the control plane and still give the customers the warm and fuzzies knowing that their VMs are within their own four walls. I think that’s pretty brilliant.

Sirish:             The insight there was that today there are so many customers that are themselves distributed. For example, our biggest customer has five different geographies. They have data centers in San Francisco, New York, Amsterdam, Japan, and India. What is on-premises anymore? They don’t want to have five different management systems, one for each data center. They want to have a single pane of glass, and they want to use these data centers as regions, a lot like the public cloud. The realization we had was that as the world gets increasingly decentralized, people are coming to realize that you need a management fabric for the private cloud that spans the globe regardless of where your actual infrastructure is. Then you are free to plug and play with infrastructure in different geographies.

Niki:                I did see too… I think it was on your supported-platforms page, or maybe it was a different page. It looked like there is, or is going to be, some integration with some of the more popular public cloud platforms as well. Is that right?

Sirish:             That is something we get asked for. We do get asked in particular about Amazon, and whether customers could bring in a single pane across their internal infrastructure and the public infrastructure. Again, this is something that’s maybe applicable to OpenStack in general. OpenStack has become a de facto standard for how to consume infrastructure in the enterprise context. OpenStack itself should have a provider, a driver, that can let it use Amazon capacity or Google Compute capacity in the backend. That’s stuff we want to do; again, there are a lot of interesting things to do, and you have to choose first things first.

Niki:                Go ahead, Jeff.

Jeff:                 Can you hear me? I’m curious about multi-tenancy, what you guys are doing around that, and then the collaboration.

Sirish:             Multi-tenancy in the context of our customer accounts themselves and how we …?

Jeff:                 How do you solve that problem that seems to be a big issue?

Sirish:             There are two angles to multi-tenancy with Platform9, because we are a SaaS-based management fabric. We do not share customer data. For every customer account, we bring up a dedicated control plane instance, as opposed to letting multiple accounts share the same control plane. That comes out of the fact that even though the control plane only has access to metadata, and does not really have any VM images or VM instances, it is still sensitive data that we did not want to risk any compromises around.

Every control plane has its own dedicated root certificate authority, and all of the metadata communication that happens with the data plane software, the software that’s running on the hardware being managed by that OpenStack instance, is strongly encrypted with certificates issued from that dedicated root certificate authority. Even we don’t have access to the metadata that’s stored by the control plane for our customers. Nor do customers share anything. They share nothing other than the fact that it is the same company that’s servicing them; their control plane is entirely isolated, and it can be locked down to their network range.

If you sign up as a customer to Platform9, you can configure your Platform9 control plane to be accessible only from your networks. That’s the multi-tenancy on our end. It’s a model that works very, very simply, and addresses most of the infosec questions that customers tend to have.
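
The per-customer certificate isolation Sirish describes can be pictured with a short sketch: generate a dedicated root CA for one tenant’s control plane, from which that tenant’s data-plane certificates would then be issued. This uses the Python cryptography library and is an illustration of the idea, not Platform9’s implementation.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def make_tenant_root_ca(tenant: str):
        """Create a self-signed root CA dedicated to a single tenant's control plane."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, f"{tenant}-root-ca")])
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)              # self-signed: subject == issuer
            .issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
            .sign(key, hashes.SHA256())
        )
        return key, cert

    # Each customer account gets its own CA; that tenant's data-plane agents would be
    # issued certificates chained to this CA and to no other tenant's CA.
    acme_key, acme_ca = make_tenant_root_ca("acme")
    print(acme_ca.subject.rfc4514_string())  # CN=acme-root-ca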

Niki:                One thing that seems to be challenging too, at least from the operator perspective, is being able to monitor the entire stack, all the way from the bare-metal instance up through the OpenStack layer and potentially beyond. How are you guys handling telemetry and monitoring? Why did you make the decision that you made? There are a lot of choices out there, obviously.

Sirish:             There are a lot of choices. Look, I wouldn’t claim that this is the only way to do it, but what we do have is something that’s working very well. Let me explain what we do. Our architecture, again, has a control plane and a data plane. The control plane itself aggregates key logs, monitoring statistics, and API calls from the data plane. The data plane rolls this up itself and exposes it to the control plane, where we have collectors. Those collectors in the control plane are internally piped into our own centralized log monitoring infrastructure.

We monitor not just logs for possible error traces, but also API traces and performance latency. All of that is then hooked in: we use Papertrail internally, though we could be using the ELK stack. We then pipe that into our alerting system, which is VictorOps. When a customer runs into an error, literally my cell phone goes off; the VictorOps app on my cell phone goes off. For those who are not familiar with VictorOps, it is a lot like PagerDuty; they are conceptually very similar.

The net result is, when our customers are using Platform9, when something fails we see it sometimes before they even realize something failed. We can then jump on it and resolve it. We have actually fixed issues without customers ever telling us there was a problem.
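
Platform9’s internal pipeline (Papertrail feeding VictorOps) is not public code, but the shape Sirish describes, tail the service logs, match error signatures, and raise an alert to the on-call system, is easy to sketch. The webhook URL, payload fields, and error patterns below are hypothetical placeholders rather than a real VictorOps integration.

    import json
    import re
    import time
    import urllib.request

    ERROR_PATTERNS = [re.compile(p) for p in (r"\bERROR\b", r"\bTraceback\b", r"HTTP 5\d\d")]
    ALERT_WEBHOOK = "https://alerts.example.invalid/v1/incident"  # hypothetical endpoint

    def raise_alert(line):
        """Send one incident to an on-call alerting webhook (fields are illustrative)."""
        body = json.dumps({"severity": "CRITICAL", "message": line}).encode()
        req = urllib.request.Request(ALERT_WEBHOOK, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    def follow(path):
        """Tail a log file, yielding new lines as the service writes them."""
        with open(path) as f:
            f.seek(0, 2)  # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line.rstrip()

    if __name__ == "__main__":
        for line in follow("/var/log/nova/nova-api.log"):
            if any(p.search(line) for p in ERROR_PATTERNS):
                raise_alert(line)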

Niki:                Where are you guys at with Ceilometer? Is there any component of Ceilometer that you are using now? And in full disclosure, there are plenty of people who don’t much like those who are not yet on Neutron. How are you handling that?

Sirish:             We are not on Ceilometer yet. OpenStack now has 15 different services; it’s a matter of prioritizing the ones that we do first. Ceilometer does come up. Service providers in particular look at Ceilometer as a way for them to enable billing. Now, there are dedicated solutions that they could be using, or they could be running Ceilometer themselves and integrating that with their Platform9 endpoint. There are solutions that we offer to these customers today, but it’s not something that’s included in the Platform9 core distribution today.

Niki:                I’m assuming, along the way, you guys are doing bug fixes and contributing stuff back up to OpenStack. How do you determine what goes back to OpenStack and what doesn’t?

Sirish:             We do make a lot of bug fixes. I did some queries last year; at some point we had made 400 bug fixes in six months. Look, a lot of those are probably specific to the way in which we package and deliver our software. How many of those are core bugs versus a matter of packaging, the line is sometimes fuzzy. We are not yet in a position where we are contributing these fixes back. The reason is that the time it takes for us to file a blueprint or file an issue and then push the patch back is not something that we are able to spend right now. It’s something we absolutely want to do; it’s a matter of time. I’d say, by the end of this year, we aim to start pushing a lot more contributions back into the OpenStack framework.

We don’t think of differentiating ourselves by virtue of our IP. We think of differentiating ourselves on the fact that we run this as a service that’s very easy for people to consume. We want OpenStack itself to get stronger and better over time. It is something we need to start doing more of toward the end of this year.

Niki:                Yeah. I hear that from a lot of folks, that the amount of time it takes for stuff to get pushed through, and whether or not it gets accepted, can be a difficult process. You talked a lot about what you love about OpenStack; what’s on your wish list for OpenStack?

Sirish:             Can I say, simplified networking?

Niki:                Sure.

Sirish:             When we started with OpenStack, we were, I’d say, a lot more skeptical. We were very careful with our choice. But when you look at it from a customer perspective, this was absolutely the right choice to make, and the architecture has good bones. I don’t think OpenStack gets enough credit for that. It’s a credit to a lot of the people in the community who’ve gotten it to this point. It’s incredible for an open source project to have developed so asynchronously and so organically, and to have achieved everything that it has so far. As for things I wish were better: I think networking should be simpler. We are doing that; Platform9 is solving that problem. I don’t see why anyone who is trying OpenStack has to lose their mind over networking complexity.

I also think the container story is coming together. We are very excited about Project Magnum and we are watching it very closely. It’s something that’s very important for OpenStack to address. On Heat orchestration: there is a lot of noise, especially from the Docker and container space these days, about orchestration solutions; there is probably more there than can fill our one-hour podcast. And Kubernetes itself is somewhat of an orchestration solution.

OpenStack Heat looks great, because it’s actually very similar to Amazon CloudFormation. I wish we were able to better clarify to the world at large why OpenStack Heat is a better way to orchestrate your applications across containers and across VMs over time. Three things I’d say: simplified networking, better and native support for containers as containers develop, and the OpenStack orchestration project becoming the de facto standard, hopefully subsuming all of the functionality that you can get from proprietary orchestration solutions.
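
To make the CloudFormation comparison concrete, here is a minimal Heat (HOT) template expressed as a plain Python dict and dumped to JSON, a form Heat can parse since JSON is valid YAML. The image, flavor, and network names are placeholders.

    import json

    # One server attached to one named network, declared rather than scripted.
    template = {
        "heat_template_version": "2014-10-16",
        "description": "Minimal single-server stack, CloudFormation-style",
        "resources": {
            "app_server": {
                "type": "OS::Nova::Server",
                "properties": {
                    "image": "ubuntu-14.04",               # placeholder image name
                    "flavor": "m1.small",                  # placeholder flavor
                    "networks": [{"network": "private"}],  # placeholder network
                },
            }
        },
        "outputs": {
            "server_ip": {"value": {"get_attr": ["app_server", "first_address"]}}
        },
    }

    print(json.dumps(template, indent=2))  # feed this to Heat via the CLI or python-heatclient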

Niki:                It’s interesting that you brought that up about Heat. Certainly there seem to be differences in opinion as far as how you are going to solve application deployment. We’ve talked to Mirantis and they’ve got a ton of people working on Murano. It seems like containers introduced this new way of thinking. Do you think it is OpenStack’s job to determine what that process should look like? Or should it be somewhat abstracted, similar to, let’s say, block storage, where you could very easily change the underpinnings and still have the same outcome?

Sirish:             This is exactly the concern that I have, Niki, which is that there is a lot of noise and fragmentation in the orchestration space; there are maybe a couple of different ways of thinking about it within the OpenStack community itself. Heat is obviously what’s in the project, but there are new proposals out there. Especially in the container world there is a ton of different ways of thinking about it. There is Kubernetes, and Docker seems to have its own orchestration mechanism coming up; there are new startups being announced all the time. My concern is that when you have that level of fragmentation, for a customer it can actually slow you down. You are not sure which path is a dead end and which path is a good path to take.

That is something that I think we, the OpenStack community, should resolve quickly enough: to be able to say, go down this path, and promise the customer that it is not a dead end and that there is interoperability. We need to be able to tell customers that if they decide to move from VMs to containers, this is still going to be relevant. If you orchestrate your applications through OpenStack Heat, say, then regardless of how your application architecture evolves over time, it’s still going to be useful, it’s still going to help you orchestrate your application. I think that is the promise that the OpenStack community needs to be able to make to users.

Niki:                Yeah, that’s a good point. What about PaaS? Especially over the last two months, I’ve heard from enterprise companies that there is at least some desire to start exploring PaaS solutions. Do you have any opinions on PaaS?

Sirish:             I do. This is probably going to be more a collection of related thoughts as opposed to a very coherent point. I will give credit where it is due: [inaudible 00:33:15] articulated this in one of his recent blogs. Containers seem to impact PaaS in interesting ways. Previously, let’s go back a year, you probably would think about PaaS as being Cloud Foundry or maybe Apprenda or OpenShift. Now, you fundamentally have a very, very portable, lightweight construct called a container, which lets you deploy a unit very easily. If there is a way to orchestrate your application layer across containers, then PaaS really becomes something that you can compose in many different ways. There are a number of different solutions for PaaS emerging, all delivered in containerized form.

So you can literally bring up a custom combination of a PaaS solution that makes sense for you, because containers make it so easy to deploy these application services. It may be that containers are going to change what PaaS means. PaaS meant Cloud Foundry or OpenShift, but maybe in the future PaaS is going to evolve into a dynamically and easily composed set of application services that are deployed in containers.

As for the actual services themselves, there are going to be many different load balancing solutions and many different solutions for specific features in PaaS. So I think PaaS is going to happen. I don’t think it’s going to happen in the way Cloud Foundry and OpenShift were going about it in their first generation. They have really smart people there, and I’m sure they are updating their roadmaps as they watch this happen.

Niki:                It will be interesting to see how the PaaS story shakes out. Ultimately, the whole intent is to make it easy for developers to develop applications without having to spend time on things like load balancers, or things that may not be critical to the actual app components, right?

Sirish:             Exactly. Definitely the world is getting there. It’s probably been a slower journey than many of us might have hoped for, but people are definitely getting there. You need a really solid infrastructure-as-a-service substrate on top of which you can deploy these application services. Now, we are at a point where people are starting to get more successful with OpenStack as that infrastructure substrate. The next level is going to build on top of that substrate.

Niki:                How long do you think it is before … we certainly see it happening now, but I get the impression, based on the conversations that I have, that enterprises are slightly behind the pack in terms of cloud adoption in general, to some extent. I wouldn’t say that it is true across the board for all enterprises, but certainly there are pockets of folks within the enterprise who are branching out and doing more exploration with cloud, or maybe deploying stuff on cloud. How long before OpenStack hits critical mass for the enterprise?

Sirish:             I think 2015 and 2016 are going to be the turning point. The reason is this. In defense of the enterprise: if you look at the first-generation OpenStack solutions and you look at the enterprise, 80 percent of its virtualization infrastructure was VMware. Until Platform9 came around, and maybe now with VMware Integrated OpenStack, you didn’t really have any OpenStack vendor make supporting VMware as a platform a top priority, which meant that it was hard. It was a bit of a core reboot of the data center stack. These are people who understand the benefits of cloud computing.

They do have a lot of existing infrastructure that they have to manage. I don’t think the OpenStack community made it as easy and straightforward to mix and match existing environments and existing workloads with new cloud-centric workloads. In the enterprise it’s not a boolean, “we are always going to be statically virtualized” or “we are only going to have a cloud computing model.” It’s shades of grey. It’s a gradual transformation that happens. The OpenStack community did not make it easy to get on that journey with a very seamless path.

I think of what we do at Platform9 as solving that problem; what VMware is trying to do with VMware Integrated OpenStack is also trying to solve the problem, and I think there will be others as well. I do think that now the enterprise understands the benefits of cloud computing, and they do see viable paths to get to that end goal while using the assets that they may have and using technologies that they are familiar with. I think 2015 and 2016 are going to be critical, because while the mindshare and interest in OpenStack is really high, for the first time there is a path to go from where they are in the enterprise to where they want to go, in a very non-disruptive manner.

Niki:                You should have been a salesperson.

Sirish:             I should be, I hope.

Niki:                I’m sure you did tons of it.

Sirish:             I really believe in this. The OpenStack community should have made it easier for customers. The whole pets versus cattle meme, I spoke with Randy Bias about this as well. I think it is pets and cattle. The reality is customers have a lot of pets, and that doesn’t mean that their new application deployments cannot … We should have made it easier for them to use what they have, get it up and running, and progressively migrate their application architecture to the model that we are evangelizing, which is where they want to go. If you make it a very hard shift, then it becomes very difficult to cross that chasm.

Niki:                You are like the ranch hand of pets versus cattle. I really want to see you in a cowboy hat and boots; I’m going to throw that out there.

Sirish:             Next time I’m in Texas, I’m going to take a picture and send it to you, Niki.

Niki:                Yes, awesome. Let’s switch gears a little bit. Configuration management: obviously there are a lot of options out there. I am assuming you have probably picked your configuration-management-slash-automation tool of choice. Do you have any strong opinions on that?

Sirish:             I do. I’m wondering if I should share it, because I’m sure that your audience has strong opinions on it as well. I might tick off maybe 60% of your audience.

Niki:                They will get over it.

Sirish:             Okay. Are you sure?

Niki:                Positively. We have great viewers, the best in the world.

Sirish:             We started with Puppet and Packstack internally. The way we use configuration management internally is to orchestrate upgrades and new version deployments of our control plane across our customer deployments. We were using Puppet and Packstack, and it was a nightmare for us. I don’t know if it was us or the tools, but it was really slow, and it ticked off the engineering team enough that they ripped the whole thing out and redid it from scratch in Ansible. We actually tried to find existing Ansible playbooks; we found a couple, but they didn’t really meet the goals that we have, so we wrote it entirely from scratch using Ansible.

We did look at Salt as well, but the reason we ultimately chose Ansible is because of its agentless architecture. We did run into a couple of performance optimization issues there as well: the way the connections get made, you need to be able to reuse connections efficiently in Ansible, otherwise it can get pretty slow. Now the team really likes it. There are a lot of Ansible diehards over here, so don’t come in and say “Puppet” to this team.

Niki:                Yeah. I might know some Ansible users myself, I won’t name any names, but the Metacloud team loves some Ansible too. Jeff, I feel like I’m hogging the mic.

Jeff:                 I love when you do that. It’s awesome; you have such great questions. My only question is really, how does someone even get started? What do they have to do? Are there minimum requirements? Is there an architectural guide for their existing equipment?

Sirish:             Yeah. It’s a great question, Jeff. The way you get started is you go to Platform9.com and request your free trial, and usually in about 10 minutes you get a note saying, “Here is your new control plane that’s been deployed for you.” We have a support portal, and this is also referenced from our website, with the requirements. The minimum requirement is a server which has network and storage attached and is running, I’d say, CentOS 6.5 or RHEL 6.5 or newer, [inaudible 00:41:40].

Yeah. You need to have network, storage, and compute, and you need to have a Linux operating system, or you could be running VMware vSphere. You go to Platform9, request your trial, and you get up and running. The email that you get has instructions in terms of what you do next. It’s very simple. You download an agent, drop it on your server, and start it. For Linux, whether it’s CentOS or Ubuntu, it’s a small agent, about three megabytes in size, that enables trust between your servers and your Platform9 control plane.

For VMware, that same agent is packaged as an OVA appliance. It’s just that simple. Our support portal at support@platform9.com has how-tos on how to set up your server if you are bringing up a server from scratch. This is not rocket science; I think everyone who sets up Linux sets it up this way. There are specific articles for those who are looking for that material.

Niki:                I’m assuming that you are hiring?

Sirish:             We are, absolutely. Engineering, sales; we need people who can grow and who can hack it and run fast and are driven by customer success, yes, absolutely.

Niki:                Will you be in Vancouver?

Sirish:             We will be in Vancouver. It’s our first summit as a team. We are super excited. My co-founder Madhura was at Atlanta when we were in stealth. That was not so much fun; when you are in stealth, you can’t talk about what you are doing very publicly. Then Paris came, and we missed Paris. We couldn’t go, because we had just done VMworld and Amazon re:Invent and we were stretched too thin. Vancouver is our first summit. We have submitted three big talks; I hope they get approved by the OpenStack community. We are looking forward to meeting with a lot of users there.

Niki:                What do you hope to gain from the OpenStack Summit? What kind of talks are you going to be attending?

Sirish:             The talks that I want to attend are from people who are using OpenStack. We need to start talking about: how are you using OpenStack to really transform your business agility internally? Last week I was on a call with a customer, and it just blew my mind. They have an OpenStack environment across a thousand servers: multiple OpenStack controllers managing a total of a thousand servers, with a home-grown PaaS layer that they have customized, which is routing and load balancing requests across these OpenStack controllers. That architecture was incredible to see.

They are a large software development organization, and they use this to achieve CI/CD automation within their environment. That was mind-blowing to see. There are a lot of users out there using OpenStack to achieve incredible things. I’d love to see more of those stories.

Niki:                Yeah. One that comes to mind is Wes Jossey; he’s a Metacloud customer, and they’re running a massive Hadoop cluster on OpenStack, which is an interesting use case. There are a lot of diehard, has-to-be-on-bare-metal type of folks out there, but they kind of said, “Hey, that’s not really that important to us for what we are trying to do; of course, it is workload dependent.” I’m always kind of floored at the range of what people are actually doing with OpenStack; it’s just mind-blowing to me.

When I talk about OpenStack, people are like, “What is it good for?” I say, well, what are you doing on public cloud now? It’s kind of a blank canvas. There are so many possibilities when you look at the combination of services that are available, and people are certainly using it in very interesting ways. What is your most popular use case at Platform9 thus far?

Sirish:             It’s definitely software development organizations that want their dev and test users to have two or three different things. They want a sandbox area where they can go in, quickly deploy and spin up test beds, and try things out in a self-service manner. They want to use it as a lab pool where they can run their CI/CD workloads. It’s also one where they can bring up and test all kinds of new PaaS and application deployment environments. Our number one use case has been organizations that want to empower their dev and test users with greater infrastructure automation, self-service, and lab functionality.

Niki:                You said you spoke to a large number of folks who are previous or current VMware users. Are you seeing a shift from folks who have a big spend at Amazon, and if so, what is that breaking point financially? What is the breaking point where it actually makes more sense to move your Amazon workloads to something like a private OpenStack environment?

Sirish:             We do have customers from both segments: people who have large VMware deployments and are looking to use OpenStack on top of it, people who are maybe saying, look, we have VMware and it’s great for production, but maybe we want to use KVM for test and dev. We also have customers who have scaled up on Amazon but are now looking to bring it in-house, because they know their workload characteristics and they think they can run it a lot more efficiently on their own infrastructure. I would say it’s a mix of everything.

Going back to your question about Amazon, our customers seem to have a typical breaking point around $150K per month, about $2 million a year. That’s where they really start to think about that spending very carefully. It’s not a very large sample size; I’m talking about maybe seven to eight customers there, so it may be a little more variable than that. The key thing is that if OpenStack were trivially simple, that threshold should drop. People today hesitate to bring it in-house because they think of the complexity of running all of this. If we as a community make OpenStack amazingly simple, 10X simpler than it is today, then people would do it earlier.

It’s a pretty exciting time to be able to do that, because I think with Amazon, people understand what your cloud infrastructure needs to look like and how to use it. Now, with OpenStack you have a way to do that with your own internal infrastructure. It’s up to us to make that really simple and easy and more accessible.

Niki:                Are you seeing any folks that go the DIY route and ultimately either don’t have a good experience or fail?

Sirish:             We have seen customers. Let me put it this way: I don’t know if they consider themselves as having failed, but they have been trying for two years and they are still trying to roll out their OpenStack environment. To me, it’s important to set egos aside and say, “Look, if your organization has spent two years trying to operationalize a private cloud, that’s two years of opportunity that has been lost. You could have been using a private cloud for two years.” I think there are customers who will get that, and there are customers who want to keep trying it themselves.

It’s a lot of computer science fun to try to configure OpenStack and make it work at scale in production. In the end, again, to quote my favorite guru, Adrian [inaudible 00:48:54], it’s undifferentiated heavy lifting for a lot of companies out there. What you want to do is really use private clouds to achieve greater agility in your core competency. If you are spending years, or even months, trying to do that, then that’s really undifferentiated heavy lifting and you’re better off using a packaged solution.

Niki:                Now that I’ve talked to you more, I can definitely see how you can compare Metacloud to Platform9; very similar drivers for creating these types of solutions, for sure. Jeff, did you have a question? Because I have another one.

Jeff:                 I had one. I see this as a great, easy-access onboarding for a lot of companies. The talent pool is very shallow for OpenStack engineers, and every company wants OpenStack. I am talking to all these companies. They want OpenStack; they don’t have the skill set to get going. They want it, and every time they don’t make a decision they fall further behind, like you are saying. I see a great fit for you guys.

Sirish:             It’s the opportunity cost, really. Let’s assume that you are using Amazon, or let’s assume you are using a static virtualization environment, maybe a traditional VMware environment, or let’s assume you are trying to operationalize a modern OpenStack environment but it’s taking a lot longer. In all three models there is an opportunity cost for you, and Amazon is actually the best of those options; if you have to choose one of these three, you should maybe be on Amazon. Really, you will hit a point where you realize that for your business, when your CFO or your management team looks at it, you want to have a better solution than that. That’s the opportunity for us in the OpenStack community to solve.

Niki:                We are getting close to time here. You mentioned Randy Bias and Adrian Cockcroft…definitely two people that I certainly admire. Who are your mentors? Who do you admire in terms of technology people that you look up to?

Sirish:             I’d say Werner Vogels. He and the team at Amazon have done breathtaking work. It is mind-blowing what they have achieved; it’s something we should all be inspired by. Those guys built it. Then Netflix and Adrian taught the world how to use it, which is just as important.

I really admire Solomon Hykes. I think Docker is gorgeous. Show a developer the Docker workflow, and show me a developer who doesn’t like it. It’s achieved what infrastructure as code failed to, what application packaging with virtual machines never really was able to achieve. A lot of people ding Docker, saying containers have been around forever, there isn’t much unique IP, it’s just packaging. But I think that packaging is brilliant. It makes the technology so accessible and pleasurable to use. Those are some of the people that I admire. I also admire my co-founding team; these are people that I am really fortunate to work with.

Niki:                Do you want to name their names, so you can give them a little shout out here?

Sirish:             Yeah, absolutely. Look, I’ll start with Bich Le. Bich is my mentor; he was my mentor when I started at VMware, when I was making all kinds of … My first code reviews went through [inaudible 00:52:28], I’ll tell you; I’ll remember them for the rest of my life. He is our chief architect. Madhura Maskasky, you should feature her some time. I don’t think there are a lot of women like her in technology. She is VP of Product, she has incredible design sense, and she has a lot of passion for what she does at work.

Roopak is one of the most understated people that I know. I had the opportunity to work with him for seven years, and the guy is just an idea machine. He looks at things and comes up with all these crazy ideas, and he then has the ability to build them. This whole OpenStack control plane was his idea. We have now actually built it, and we have over 50 customers using it, which is thanks to these three.

Niki:                That kind of dovetails into some of our … By the way, we’d love to have her on. Madhura, is that how you say her name?

Sirish:             Yeah.

Niki:                We’d love to have Madhura on the podcast. That would be excellent. Any time I get a chance to feature women in tech is a super awesome day for me. We’ve had Jessica Murillo; I don’t know how to pronounce it, but she was definitely stellar. Where can people find you on social media, and where can they learn more about what you’re doing?

Sirish:             They can obviously go to the website, Platform9.com. They can follow us at @Platform9Sys. We are starting to host a lot more meetups if you are in the Bay Area; we are hosting BayLISA. We did one last month and we might be hosting the next one as well. Follow us on Twitter and you’ll see it: @Platform9Sys, or go to our website at Platform9.com.

Niki:                Awesome. Two people that you would love to see on the show, obviously, you already said Madhura, who else?

Sirish:             I would nominate Madhura Maskasky, my co-founder. I would also see if you could get Solomon Hykes and ask him for his perspective on enterprise infrastructure and containers and OpenStack.

Niki:                Awesome. We get a lot of requests for Docker. I keep putting it out there on Twitter. I don’t know if Ben and Nick [inaudible 00:54:33] are watching. We would love to have some folks from Docker on, I am just saying.

Sirish:             Those guys are awesome. It’s very disciplined technology, and it’s something that has to play better with OpenStack.

Niki:                For sure. Any closing words for folks out there with one minute left to spare?

Sirish:             I’d say thank you, Niki and Jeff, for having me and being such gracious hosts. For everyone who sat through this, thank you so much for taking your time to listen in. I hope you found this useful and interesting, and I look forward to hearing from you at Platform9.

Niki:                We look forward to seeing you in a cowboy hat and cowboy boots.

Sirish:             Next time I’m in Texas.

Niki:                Awesome.

Jeff:                 Yeah.

Niki:                Well, thanks everybody for watching the show today. Jeff, who do we have next week? I do this every week.

Jeff:                 I am never ready for this question. I don’t know why. Give me one second here to pull that up.

Niki:                Is it Ryan Floyd? Do you see that?

Jeff:                 It is absolutely no one. There is a hole in next week. We’ve got everything lined up, so we’ve got Ryan Floyd coming up and Chris Kemp coming up, but I need to figure out next week.

Niki:                No pressure. I’ll get right on that.

Jeff:                 Yeah.

Niki:                Awesome. Well, again, thank you so much for joining us; definitely looking forward to seeing Platform9 at the summit in Vancouver. Thank you so much for taking the time to speak with us. We really appreciate it.

Sirish:             Great seeing you.

Niki:                All right. Everybody have a good day. Bye-bye.

Jeff:                 Bye.