From PVs and PVCs to the top open source projects, and Portworx’s unique offerings like stretch clusters, Joe Gardiner of Portworx leaves no stone unturned in this insightful interview on Kubernetes storage. Watch it to gain a deeper understanding of how to approach Kubernetes storage for your organization.
Watch the video or read the interview below.
With us today we have Joe Gardiner, who is the head of cloud native architecture at Portworx, which is part of Pure Storage. So, Joe, just to get things started, I wanted to know a bit about you, your role, what you do at Portworx, and an elevator pitch of what Portworx does. If you can get us started with that, that'll be great.
Yeah, sure. So as you heard, I run the cloud native architecture team across the EMEA region of Portworx, which is now part of Pure Storage. I've actually been with Portworx for quite a while. I was part of the business prior to the acquisition by Pure Storage, and I've really seen the business grow a lot over the four or so years that I've been part of it. So it's been a really exciting time for Portworx. And just to introduce what we do to you and the audience: you can really think about Portworx as a data management and data services platform for Kubernetes. That sounds a little bit vague, but let me elevate that into why it matters. At the core of what we do, we provide a really robust, rich, and industry-leading set of tools for running stateful applications in Kubernetes. But if we look at what that means for a business, I think it really does two things. First of all, it supersizes container business cases. That means the business is saying, we think we can achieve these cost savings by containerizing 30% of our apps.
With Portworx, we can say, well, now we can target 80% because we’ve got an answer for data management. And so what does that do to your business case? It supersizes that as well. So all those cost savings that you’ve calculated are now going to be even bigger. Right? And then the second thing it does is really act as a catalyst for those cloud migration and hybrid cloud projects, because within the platform, we’re really making the application data as portable and as flexible and as agile as a container. And that means that, yes, you can run your container any way you like, but with Portworx, you can move your data and replicate your data anywhere you like as well. So they’re really the two areas where I think Portworx adds a lot of value to our customers.
All right, thanks. I think there was a lot you packed into that, and the focus is definitely on the first thing you said, costs, and the second thing you said about migration, which I think we're going to get into a bit more later. But like you said, you've been at Portworx four years. And I remember writing a few articles back then, a few years back, when Kubernetes was just about starting. And I think Portworx had an early mover advantage, because you were pretty much one of the first storage providers. Would you say that?
Yeah, I think so. I think there are two things, really. For sure, Portworx was around when most people were just using Docker, and they just needed a storage solution for standalone Docker, which is Docker on a host: no scheduling, no orchestration, nothing like that. So certainly Portworx has grown up in line with Kubernetes becoming the dominant force in the container space. But I think the other thing that's really been advantageous for us is that we've always, from day one, had a laser focus on enterprise, and that's why we've had this almost unique insight into the growing pains of a business as they try to adopt Kubernetes. That's meant that all the areas we've invested in from a product perspective are absolutely in line with that maturity curve and that journey customers go on as they adopt Kubernetes. So we know that after a certain period of time and at a certain scale, they're going to need backup. And, oh, you're a regulated industry, so you're going to need certain security features. I think that's also been one of the key factors in us becoming really the market leader and the dominant force in this space.
I think that's quite powerful, because you started out even before Kubernetes, so the progression has been quite organic that way. I want to ask – you mentioned the customer journey, and I want to touch on that as well. Could you tell us a bit more about the signs? For someone who's thinking about storage, what would you say are some of the warning signs that their storage management and data management need an overhaul?
Yes, that's a difficult point, because I think it's going to depend a lot on the kind of applications that they're trying to run. Sometimes we work with customers who understand quite early on that in building a Kubernetes platform they're really taking a greenfield approach. And so orchestration for containers is the first decision that they have to make. You could say that it's equivalent to virtual machine orchestration years ago, when decisions were made in that space, so there's certainly an analogy there. But then, of course, they have to think about networking, and then the next question is monitoring, and then how do we implement security? And of course, storage comes into the picture. But that's not the common pattern, right? Not many organizations that I've come across are taking a step back and saying, we've got a blank piece of paper, let's make a decision for each of these components as we build out our Kubernetes platform. Actually, what we see is a completely understandable assumption that existing investments in infrastructure will be suitable for this highly dynamic, highly agile new world that Kubernetes is opening up for them.
And I think that's fine. In the early days of containerization, for sure, we see a lot of customers using things like NFS as their storage solution, or maybe some integration with an array or some virtual storage solution that they already have. But the problem is that over time, as scale increases, as the number of workloads being containerized increases, you start to identify some of the fragility of these systems. So we see issues around performance at scale. That's a big issue. There are literal limitations of infrastructure, where you can only provision a certain number of volumes before you hit a limit, and suddenly now you have to buy another array to keep going. And we touched on migration. Containers are a catalyst for hybrid cloud or multi-cloud architectures, and as we know, traditional infrastructure can essentially limit or restrict the success of that project. Those are certainly the warning signs that customers should be looking out for. The final point I'll make here is the criticality of the app. What I mean by that is that in the early days of containerizing apps, it will generally be simple web applications, Python scripts, that kind of workload.
But over time, more and more enterprise apps will be targeted because, as I said, there are huge cost savings and benefits in doing that. And this is where the framework that needs to sit around that comes in, to address challenges such as disaster recovery and security and governance, which are of course required for critical workloads. They're completely different requirements in the world of containers. When you make the assumption that an infrastructure-based approach to disaster recovery is going to be suitable for a containerized application, that's where we will see a project being blocked, right? That's where a C-level stakeholder will come in and say, well, we've smoke tested this, it wasn't acceptable, it didn't meet our recovery times, so we're going to block this project until we solve this issue. It seems like quite a granular, point-solution kind of challenge, but actually the opportunity cost of downing tools and going out to market and looking for a solution is huge. If you see on your roadmap, as an adopter of these technologies, the need for things like DR and backup and restore, you should be thinking about it now, before it impacts the project.
I think you touched on so many points. The one I want to dig into a bit more is when you mentioned that people come with this assumption that it's going to be the same once they move to containers. But it's totally different. Actually, I want you to highlight what's really different. Is it a totally different world when you talk about data for containers, or are there a lot of common things where they've just got new terminology with containers and Kubernetes? Could you tell us what changes and what stays the same when you're thinking of a migration?
Yeah. So I think there's some basic terminology here. If we go to the basics of storage, you have block, file, and object, and those storage types are still absolutely the storage types available to different applications in Kubernetes. But the way in which the storage is provisioned is what varies. Now, what's great about Kubernetes is that it gives us a consistent set of frameworks and APIs and resources that we can use to provision whatever is happening underneath to actually get access to some storage service. In Kubernetes, there are three things that need to come together to make that happen. You have a storage class, which basically defines the kind of storage available to the user of the platform. Then you have a persistent volume claim, which is a bit like a token you get given at a fairground to go on a ride. That's your ticket to ride at the fairground. And then the persistent volume is the thing that's actually created. So the user takes the persistent volume claim, which has a load of config and is attached to the storage class, and which says, this is what I'm buying with my token.
And then the persistent volume is the thing that they get at the end. So that's the terminology that wraps around this. Now, where it gets a little bit more complicated is how this all maps through into infrastructure, or in our case, Portworx. The traditional approach is to say, well, I'm going to use a storage class that's connected to one of the systems I was talking about, an array or NFS or something like that. What's essentially happening in the background is that a little piece of software is running that establishes that connection and says, oh, your token says you can have a ten gigabyte file share volume, and in the background that's being provisioned from the storage system. Or you can have a 50 gigabyte block storage volume, and if you're running in Amazon, it's going to go off and create an Amazon disk for you. Now, this probably sounds great, right? A nice, simple, codified way to dynamically provision storage. But the problem is that I'm describing a connection into a specific storage back end. And so you start to say, well, actually we want to run a Kubernetes cluster in Azure, or we want to run a Kubernetes cluster on physical machines or in a VMware environment.
Now you've got all these different storage integrations, all these different configurations, different behaviors of storage, and different kinds of storage available to you depending on where you run your app. And that's the thing that undermines some of the big benefits of Kubernetes: that it's infrastructure agnostic, that it gives you a consistent experience wherever it's running. Suddenly you're coding your apps for storage. And if you think about it, there's no other component in the stack that you do that for. You don't code your apps to a particular networking provider or monitoring system, for example. It's only the storage. And that's really the problem that we started off solving with Portworx: to ask, what does a Kubernetes or cloud native solution for storage look like? It really follows the same blueprint of being infrastructure agnostic and providing consistent APIs. The difference here is that when you request a Portworx volume, it's a virtual object that gets created on top of any underlying storage. So that means you now have this disconnect where, yeah, maybe you've got local disks, or you swap out a SAN, or you go to the cloud, but you've still got that abstraction, you've still got the Portworx virtual volume that's actually being created.
So then, to your point about migration, when you connect to another cloud or a different environment on-prem, that volume can now freely be moved between them, replicated, cloned, or backed up. Because there's no dependence on the infrastructure, you're not writing custom code to integrate with a specific storage back end.
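The fairground analogy above maps onto three Kubernetes objects that can be sketched in YAML. This is a minimal illustration, not a Portworx-specific recipe: the object names are invented, and the `pxd.portworx.com` provisioner is just one example of what a storage class might point at; a cloud or array provisioner would slot in the same way.

```yaml
# StorageClass: defines the kind of storage the platform offers its users.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage            # illustrative name
provisioner: pxd.portworx.com   # example; could be a cloud or array CSI driver
---
# PersistentVolumeClaim: the user's "token", stating what they want
# to buy and from which storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                # illustrative name
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce             # a block-style volume mounted by one node
  resources:
    requests:
      storage: 10Gi             # the "ten gigabyte volume" from the example
```

When the claim is submitted, the provisioner creates a matching PersistentVolume in the background and binds it to the claim; that PV is the thing the user actually gets at the end.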
That's a lot there; we should unpack some of it as we continue. You're addressing the fact that there's so much complexity about where you can run your Kubernetes clusters, and because of that, the complexity translates into how you manage your data. I think you really described that well, and I think a lot of the readers will relate to it, because there are a lot of questions around this. So I want to ask: is there actually a simpler way to do things? Let's say an organization says, I don't want to complicate my stuff. I don't want too many clouds. I just want to be in one place, maybe just my data center, or just AWS. That's it. I'm not thinking of migrating to another cloud. I'm still not in on this whole multi-cloud thing. Would someone like that still need to think about their data management in the way you're talking about, or can they get along with the basics, the way they've been managing, let's say, in AWS? What would you say?
Yeah. So I think this goes back to the core of what we do and where it started. And you're absolutely right, not every customer is doing that. We still have plenty of customers who are running in just a single data center with a single flavor of Kubernetes, and they're still getting a lot of value from Portworx. I think the tipping point here is slightly different. It's not so much the need for portability and migrations between environments; it's more about a robust solution that will support that maturing use case around containers. Now, it is true that if a customer or user is just getting started and running maybe a simple web application, they probably don't need Portworx. To be honest, it's certainly an enterprise tool. But as I said, we do acutely understand what that maturity curve looks like, so it's always valuable to look ahead. I think we've learnt this lesson through different iterations and evolutions of IT platforms, going from mainframe to virtual to container and cloud. The use cases where Portworx is really adding value fall into a few areas, but first of all, the resilience of the storage solution.
There are a whole load of capabilities around rapid failover, topology awareness, and replication between worker nodes in the cluster that are absolutely more rapid in terms of recovery times, lighter touch in terms of any intervention required from the administration teams around that platform, and more resilient and less prone to errors. So at its core, we're solving that problem. But at any sort of scale, operational overhead comes into the picture: how are you managing capacity? How are you rebalancing your storage cluster? All of these things need to be considered, and again, Portworx is very mature in that space and has a set of automation tools to massively simplify that. And then, I touched on some of these issues before, but there are just fundamental limitations with infrastructure-based solutions here around scale, performance at very high density, and the speed at which things can be provisioned. It's important to look ahead and to have some awareness of those limitations before they actually cause disruption to your platforms and your projects.
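The node-to-node replication Joe mentions is typically requested through storage class parameters. A hedged sketch, with the class name invented and the `repl` parameter as commonly documented for Portworx:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db-replicated       # illustrative name
provisioner: pxd.portworx.com
allowVolumeExpansion: true     # let claims grow without re-provisioning
parameters:
  repl: "3"                    # keep three replicas of each volume on different worker nodes
```

With three replicas, a node failure still leaves two copies online, so a pod can be rescheduled and reattached without any array-level detach and reattach operation.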
Yeah, I think so. And it also speaks to your point earlier about Portworx's particular focus on enterprise, so I think that makes sense. I want to also get into a bit about Portworx and how it manages data, and a bit about the secret sauce. I did a bit of reading up, and I found that Portworx uses this open source operator, Stork. It seems like that's one of the key underlying components of Portworx. If you could tell us a bit about Stork's role in the Portworx solution, that would be really great.
Yes, absolutely. So you'll actually see that quite a lot of Portworx is open source; the majority of Portworx is open source, actually. There are a number of components, such as our kernel module, that are open source, and Stork is a great example as well. The name Stork stands for Storage Orchestration for Kubernetes, and I think that's a really great underlining point for everything I've been talking about, in that what Stork essentially does is enhance Kubernetes with an awareness of data. What that means is that by using Stork, a lot of the failover operations that are normally manual, or in some cases impossible with storage infrastructure, are solved. When Kubernetes is making a scheduling decision, let's say because a machine has failed, without Stork it's going to look at the cluster and make a decision about where to place that container, that pod. Now, if you have storage involved in that operation, Kubernetes can't redeploy the application until the application's volume is available somewhere else in the cluster. And that's a real limitation, especially in public cloud, where often a storage device can't be attached to another machine quickly, or can't be attached across availability zones. Even on-prem, that kind of infrastructure detach and reattach operation is really fragile.
What we're doing with Stork, in conjunction with Portworx, is replicating that data at the Portworx layer, so there's no infrastructure operation. Then Stork is saying to Kubernetes, okay, when you redeploy that pod, the volume replica that the application needs is on this machine, so that's where you should deploy it. That means not only are you getting really rapid and intelligent failover in terms of pod placement, but the performance is going to be improved, because you're getting that direct-attached-storage style experience with data locality. So it's a really important part of how Portworx operates. But I think the key point to take away from that explanation is that Stork enhances Kubernetes to have an understanding and awareness of data. That's really a fundamental difference between traditional storage for Kubernetes, where you're basically just plugging into an infrastructure layer that doesn't know anything about the application, and our approach, which is much more application and platform centric.
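In practice, an application opts in to this data-aware placement by naming Stork as its scheduler. `schedulerName: stork` is the documented hook; the pod, image, and claim names below are invented for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres                 # illustrative
spec:
  schedulerName: stork           # ask Stork, not the default scheduler, to place this pod
  containers:
    - name: postgres
      image: postgres:14
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-data # assumed to be a Portworx-backed claim
```

Stork then prefers nodes that already hold a replica of the claim's volume, which is what gives the direct-attached-storage style performance described above.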
All right. Yeah, I think that speaks to what you said about helping with performance and efficiency, and it's probably about convergence, which I was reading up on. I think Portworx calls it hyperconvergence, and that helps with performance because the data is local, so there's less data traveling around the network. But I also read about Stork having features for helping with migration. Could you talk a bit about that? How does Stork handle migration?
Sure. Yeah. So just going back to your point about hyperconvergence, that's absolutely right. The default and most common architecture is the hyperconvergence model, which basically means every machine has a little bit of storage, and then we create our storage pool across those disks. If we look at other storage options that aren't infrastructure based, like other software-defined storage, one of the big pain points there is the need to run a dedicated storage cluster, which has a lot of overhead, not only in infrastructure costs but also in expertise. But to your point about migration: absolutely. It's an area I didn't touch on, the other key benefit of Stork, which is its multicluster capability, and it really underpins the migration capability that we've touched on a few times. What it essentially does is orchestrate the pairing of multiple independent Kubernetes clusters, and assuming they have Portworx, Stork is the thing that allows them to communicate with each other. Once that foundation has been established, we use it in a number of ways. We use it for migrations, as I said, and through a simple command in Portworx, you can clone entire namespaces with consistency.
So that means all of the data in Portworx volumes and all of the application configuration can be recreated very quickly in a target Kubernetes cluster running somewhere else in the world. And they don't even have to be the same version of Kubernetes. We have customers who go from cloud to OpenShift, and from Tanzu to another cloud, so it's very flexible in that respect. But also, and importantly, it underpins our disaster recovery capability, because in disaster recovery scenarios you often need that independence between environments, so you can bring an entire cluster online as required. Using a similar technology to the migration tool, we can either incrementally replicate snapshots, or, in some cases where there's low latency, we can even do stretch clusters at the Portworx layer, and Stork again underpins that. So it's a very advanced part of the Portworx platform and really important. In fact, I'd argue that it's one of the most value-add areas of what we do.
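For reference, Stork drives these migrations through custom resources: a `ClusterPair` joins the source and target clusters, and a `Migration` object copies volumes and Kubernetes resources across. The sketch below uses invented names and shows commonly documented fields; exact options vary by Stork version:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: demo-migration          # illustrative
  namespace: demo-app
spec:
  clusterPair: remote-cluster   # a ClusterPair created beforehand
  namespaces:
    - demo-app                  # clone this whole namespace, config and data
  includeResources: true        # copy application objects (deployments, services, ...)
  includeVolumes: true          # copy the Portworx volume data as well
  startApplications: false      # keep the apps scaled down on the target until needed
```

The same cluster pairing foundation underpins the disaster recovery capability Joe describes.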
Really interesting. Actually, I have no clue what stretch clusters means. Could you just break that down a bit?
Yeah, sure. So it's a term that's actually not specific to Portworx. A big challenge that we see with Kubernetes is that customers will often only have two data centers with low-latency links between them, and they might be looking at their DR strategy and saying, well, we can run Kubernetes stretched across these two sites, and then if a site goes down, Kubernetes can be recovered onto the second site with almost no intervention. Now, unfortunately, that's actually a bit of an architectural pitfall, because if you have any understanding of quorum, the idea of needing two out of three members of a cluster online so that you don't lose consistency within the cluster, you'll see that running across two sites isn't really possible with Kubernetes. Portworx solves this issue by saying, okay, you've got two sites, each running a separate Kubernetes cluster, but Portworx can be stretched. So you've got one Portworx cluster that stretches across the two independent Kubernetes clusters. What that allows you to do is run an active-standby setup where all your applications on one site are running, and some of them are using Portworx volumes, writing to our data layer.
Then in the background, we're replicating that data to the second site. If you have some familiarity with the calculations around DR, that gives us an RPO (recovery point objective) of around zero and an RTO (recovery time objective) of a few minutes. Essentially, when a failure occurs, let's say cluster one goes down, all the data is there, it's consistent, it's ready to be attached to applications. And this is where Stork comes into the picture again: it can be used to bring all those containerized apps online, and Portworx will automatically attach the correct volume to the correct application. It sounds like, yeah, sure, that's a disaster recovery scenario: you lose a site, you bring it up over here. But if you think about the scale that we operate Kubernetes clusters at, tens of thousands of containerized applications, bringing them all back and attaching the correct volumes with crash consistency at the data layer is huge. And to do that with a single click, a single command, is pretty compelling for a lot of customers.
That's so dense. There's so much we could keep unpacking, but we're almost out of time. I just want to ask your view on the ecosystem right now. There are so many tools: open source tools, cloud vendor tools, each taking a different approach. I want to get your thoughts: for someone planning data management, do open source tools have a place in the toolkit? What would you say are the strengths and weaknesses of the different tools, especially the open source ones?
Yeah, sure. For me, it's really exciting seeing the ecosystem develop, and it's great to work with some of our friends and colleagues in these community projects, or in some cases vendor-based projects. We've done a lot of work together at events like KubeCon and other meetups and so on. Actually, I love that part of the storage ecosystem within Kubernetes; it has a great community. I think I've touched on this a few times already, but there are certainly tools that suit different use cases. There are situations where we come across customers who are maybe quite early, maybe their Kubernetes use case is quite small and they don't really see any future growth. In that scenario, they probably are better off going with one of the community-based tools, especially if they don't want enterprise support. If they're quite happy learning that tech themselves, then great. We see that a lot of those projects are very much point solutions: they will just solve dynamic provisioning of volumes, or they will just solve, let's say, cloning workloads for disaster recovery purposes. And of course, they're very much focused on community feedback, which isn't necessarily always representative of what an enterprise needs.
Right. Because of course there's a huge number of users of Kubernetes, who all have competing needs when it comes to these community projects. So there's a lot to learn from those projects, and we actually integrate with a lot of them and make use of them in some cases. But I think I've probably covered a lot of what I'm about to say already. Where we are particularly strong, and the analysts say this too, is in that focus on enterprise: the security tooling, the governance, the multi-cluster DR, the speed of failover. It's those sorts of capabilities that are very hard to find outside of Portworx in the community. And secondly, I think it's that we do all of that from one solution. That's the other difference: there's no plugging different things together to try to build that data platform. None of that is required. And as we know, if you go back to any evolution of IT, trying to build custom integrations between various open projects is not fun. It's hard. We know that very well, and not just us; I think the whole industry knows that.
So taking that platform approach is really powerful. It goes back to the first point I made, which is that we very clearly understand the maturity curve that customers go up. A lot of our customers will buy Portworx, and maybe they don't need disaster recovery straight away, but the fact that they know they can just turn it on when they need it, and that it's there, tried and tested, and used in production by global banks, helps them sleep at night, knowing that functionality is there. And I think that's the tipping point between, hey, let's get started with the community projects, versus, actually, let's invest in an enterprise solution from day one.
I think it's just this peace of mind that someone's already thought of all of these things, and you don't need to reinvent the wheel. It's all already done for you; you just come in and start using it. I think that's really amazing about how Portworx eases that journey into Kubernetes. The last question I want to ask you, Joe, is about looking ahead. What are you most excited about when you look at the space and what's happening? Is there anything you see that really interests you, something you're looking forward to in terms of trends in the Kubernetes data management space?
Yeah. So I think a few things here, really. We're seeing massive growth in edge use cases and IoT use cases. I don't know about you, but pretty much everything in my house is now a smart something. So there are huge amounts of data being collected and processed and aggregated, and, okay, maybe not always in the interest of the consumer first, right? But we know there's a huge amount of data out there in a lot of different companies that would probably surprise you. With that comes a whole new set of challenges around how you deal with these huge data sets, and more questions like: where should the processing take place? Should it be at the edge, or in a central data processing plant? What do we do with those results? How do we integrate with other parts of our business, and partners outside of our business, to build better products and more value for our organization? So I'm really excited to see how that area develops, and it's an exciting area for us from a product perspective. But also, if I look at some of the recent announcements we've made, you might have heard of Portworx Data Services, which was announced a few months back at a number of different events. It's quite a big change for us, because we're continuing to invest heavily in our infrastructure products and also some of our cloud services.
But now we're pivoting slightly and saying, well, we've got all this expertise in running stateful applications, and we know what our customers are trying to achieve, so why don't we add value there by building product in that space? That's really where Portworx Data Services is coming from. Essentially, we're recreating the cloud database-as-a-service experience, but following the blueprint of Kubernetes, meaning it's completely independent of and agnostic to any kind of infrastructure. For me, that's exciting. I feel that pain in my customer accounts all the time: they're worried about lock-in, they're worried about cost. So with what we're executing on right now, that ability to say, well, let's give you that cloud-like experience, but without any of the compromises you'd otherwise have to make, I think is super exciting. We're actually seeing quite a lot of a trend around cloud repatriation: people saying, yeah, we went all in on public cloud, but actually, is that right for every use case and every workload? Maybe not. So we're seeing more workloads coming back on-prem, and actually a greater understanding of what hybrid cloud looks like. And we're in a perfect position to add value across that whole stack now, because we've invested correctly from a product perspective over the last few years.
So I’m excited to see how that develops and what else we do from a product perspective to support that as well.
If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, or email email@example.com.