
Sysdig interview with Chris Kranz – The State of the Union

In the second part of our interview with Chris Kranz, the European SE Manager at Sysdig, we delve into the state of the industry, looking at the container, virtualization, and cloud space. For the first part of the interview, click here.

So how do you feel the world will change with VMware and other non-container vendors now treating containers as first-class citizens?

Chris: VMware bought Carbon Black. It’s not really a container play but it is clearly a security play. They are building their portfolio so they’ve got that full coverage. The Heptio team that VMware acquired are an awesome bunch. So long as they let that team have influence and the right say, I absolutely think that VMware have got the perfect opportunity to kill it in the container security space. VMware are really good at executing once they see a market they want to go after. I mean, look at NSX for example, and to an even greater extent vSAN. These products created new markets where VMware dominates. I think that containers and container security are going to be a game changer for them.

The big question is… who’s going to buy Docker? I think that will show just how committed some people are in this space as well. It’s a really hot space and it’s still rapidly changing. It feels like VMware circa 2005–2006, but without a VMware at the helm of the industry. You know something monumental is happening soon. Back in 2005 VMware were the only player in the game; they were making that market. In the container market there isn’t a single, clear leader. So it’s wide open, and VMware have absolutely got that opportunity with Heptio. But it’s also a cloud game at the moment: Amazon, Google, and Microsoft run a huge percentage of containerized workloads.

Are you fully on board with the container revolution?

Chris: I embraced the VMware and virtualization change, but it was never 100%. I think it will be exactly the same with containers. At the moment most people are somewhere between 5–10% containerized. That percentage will keep growing. What we will also see is an explosion in the number of containers within the applications that are containerized. What I mean by that is that once I containerize an application, it’s really easy to then have ten versions of that application. If I virtualize an application, creating ten copies is doable with templates and so on, but it’s a lot more difficult.
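
To make that concrete (this sketch is ours, not from the interview): if the containerized application runs as a Kubernetes Deployment, going from one copy to ten is a single scale call. A minimal sketch using the official Kubernetes Python client, against a hypothetical Deployment named web:

```python
# Scale a hypothetical Deployment called "web" from 1 to 10 replicas.
# Assumes a reachable cluster and a valid kubeconfig; all names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web",            # hypothetical Deployment name
    namespace="default",
    body={"spec": {"replicas": 10}},
)
```

The virtual machine equivalent means cloning templates and booting ten full guest operating systems, which is the overhead Chris is contrasting.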

The developers all want their own container environments that they can develop on, and we are already seeing an explosion in container numbers. So while our customers only have 5–10% of their applications containerized, it is within those applications that they are seeing an explosion in the actual number of individual containers.

That sounds very similar to the old VM sprawl concept…

Chris: We should probably refer to it as container sprawl, but generally it’s by design: you have loads of containers because there is no tax or overhead per container. I honestly think there’s a gap in process at the moment. Remember, when we started the virtualization journey we used to do extensive capacity planning exercises. VMware even had a tool called “Capacity Planner” which analyzed your servers and worked out the size of the hosts and how many were needed to service your needs.

Nobody is doing that with containers. Sysdig have yet to speak to anyone who actually capacity plans their container journey upfront. It’s a finger in the air, “we reckon we need about this much”, and then in three to six months’ time you have that horror of realizing you need more resources. This is where cloud comes in, of course: back in the virtualization days, adding capacity took weeks to months. With cloud, it’s instantaneous.
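
As an aside, the kind of upfront sizing that used to happen in the virtualization days can be sketched in a few lines. The numbers below are entirely hypothetical, just to show the shape of the exercise:

```python
import math

# Back-of-the-envelope container capacity plan. All figures are hypothetical;
# a real exercise would start from measured per-container usage.
containers        = 200    # expected running containers
cpu_per_container = 0.25   # average CPU request per container (cores)
mem_per_container = 0.5    # average memory request per container (GiB)

node_cpu = 16              # cores per worker node
node_mem = 64              # GiB per worker node
headroom = 0.7             # only plan to fill nodes to ~70%, leaving burst room

nodes_for_cpu = containers * cpu_per_container / (node_cpu * headroom)
nodes_for_mem = containers * mem_per_container / (node_mem * headroom)

print(math.ceil(max(nodes_for_cpu, nodes_for_mem)))  # -> 5 worker nodes
```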

Do you think sprawl will ever get looked at properly, in the same way we looked at it back in the virtualization world?

Chris: It’s interesting, because a core part of my VCDX was the capacity planning side of things. But you’re right; I don’t think we actually pay much attention to that anymore.

With SSD-based storage I almost don’t care anymore how many IOPS I’m generating, and with massive multi-core processors, again, why would I care how much CPU resource I am using? However, the cloud is bringing that back into focus, as it’s very easy to get bill shock. We did this internally: we spun up a test environment using OpenShift best practices on the smallest image, and it cost us $5,000 a month, and that’s just a base starting point.

Is it ever going to get properly fixed? It’s hard, because containers are designed to sprawl, and we can’t use the same tools we used in a virtual or physical environment to understand container capacity. First, we refactor into microservices, breaking the application down into lots of components. Each of those components has its own discrete overhead, which is difficult to calculate, and then we need to multiply that number because the developer teams all want their own environments. It multiplies again because each team also needs its own pipeline, and every single team will have those. So going from one environment to 50 environments just for one application might seem an exaggeration, but we see it happening a lot.
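
To see how quickly that multiplies, here is the arithmetic with illustrative numbers (ours, not Sysdig’s):

```python
# One application fanning out into dozens of environments.
# Purely illustrative numbers to show the multiplication, not real data.
teams           = 5   # development teams on the application
devs_per_team   = 2   # individual developer environments per team
pipeline_stages = 3   # e.g. build, staging, regression per team

per_team = devs_per_team + pipeline_stages + 1  # +1 shared team environment
total = teams * per_team + 1                    # +1 end-to-end environment

print(total)  # -> 31 environments for a single application
```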

So you have individual environments for each developer, then you’d have the same environments spun up again for teams of developers, and the same environments spun up again for the overall application for end-to-end regression testing. It sounds like an operational nightmare, especially when you’ve got management not understanding the concept, or rather they do, but they don’t want to pay for it.

Chris: I think there’s a huge amount of education needed, just touching on what you said. Actually, I agree with what you corrected yourself on: management often doesn’t understand this, the developers don’t really understand the impact of requesting all these resources, the infrastructure teams don’t understand why the developers are requesting them, and management doesn’t understand the costs versus the benefits of this stuff.

My team and I spend a lot of our time doing education, and given where I think the market is, we will continue to spend the majority of our time educating for at least the next couple of years. It’s still a massively misunderstood marketplace. People are still working out exactly how to use containers and the impact they have on DevOps and operations. So you’re not wrong when you say that people misunderstand this. I completely agree.

Do you think it’s because we’ve moved too far too quickly?

Chris: I think I live in an echo chamber: everywhere I look, people are talking about containers. If you look at it as part of the overall industry, containers are a very small percentage of what is happening in production. It’s still really early days.

However, people are out there talking about how awesome this is and what they’re doing. The flow of information is much stronger than I’ve ever seen it before, and it’s easy to get involved because it’s all open source.

For example, companies like Netflix and Spotify are not only doing containers in a big way but are evangelizing how awesome they are. Everyone loves what Spotify has done from a DevOps perspective, and people always say, “oh, Spotify does this, we’re also going to do this”.

It is also easier to engage: I can spin up an entire Kubernetes cluster on my laptop. I couldn’t do that in my VMware days because I needed a beefy machine. Personally, I find it odd how many people still talk about their home labs; in the container world I can run Kubernetes on a Raspberry Pi.
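
For anyone who wants to try this, a local cluster comes up with a single command using a tool such as kind (`kind create cluster`) or minikube (`minikube start`), and can then be checked with the official Kubernetes Python client. A minimal sketch:

```python
# List the nodes of whatever cluster the current kubeconfig points at,
# e.g. a kind or minikube cluster that was just started locally.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```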

To sum up, I don’t think we’ve moved any quicker than at any other time. I just think we are talking about it a lot more, so it feels like we’re moving quicker.

Wrapping up

The container industry is not moving any faster than any other evolutionary stage; it just seems that way because of the noise from social media. It is, however, a growing phenomenon, and it is expected to grow exponentially. It also looks like we have not really learned our lessons regarding technical debt, as we are ignoring basic capacity planning for cloud-native applications, though a correction may be coming due to bill shock. It will be interesting to see what part monitoring containers in production will play in the repatriation of containers from the cloud back to cheaper on-prem infrastructure.
