
Stork’s approach to Kubernetes storage and why it matters

Stork, a cloud-native storage orchestrator runtime and scheduler plugin created by Portworx, is an open-source project that uses Kubernetes' extensibility to let DevOps teams efficiently run stateful applications such as queues, databases, and key-value stores on Kubernetes.

Stork uses Kubernetes scheduler extenders to bring storage-aware scheduling to stateful applications running in production at scale. Put simply, Stork makes it possible to manage storage in Kubernetes at scale. It was developed in collaboration with customers running large-scale stateful production applications to address operational issues inherent to data services.

In this post, you will learn about Stork’s approach to Kubernetes storage and how it helps to manage persistent data in Kubernetes.

What does Stork add to Kubernetes?

Although Kubernetes has built-in support for load balancing and service discovery, DevOps teams and developers sometimes need more flexibility in how service instances are selected. Stork uses the Kubernetes API to retrieve the pods behind a Kubernetes service and lets you customize how they are chosen. Users can apply any of Stork's service selection strategies or implement their own.

While Kubernetes load balancing and service discovery ensure that requests to a service such as rest-service are load-balanced across its pods, Stork retrieves the pods' addresses directly. Applications using Stork bypass the Kubernetes service delegation, but they still need a Kubernetes service to discover the backing pods, so your Kubernetes deployment remains stable.
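As a rough sketch of the idea (illustrative Python, not Stork's actual code; the service name and endpoint data below are made up, and in a real cluster they would come from the Kubernetes Endpoints API), retrieving and selecting backing pods directly looks like this:

```python
# Illustrative sketch: pick a backing pod directly instead of sending
# traffic through the service's virtual IP. The endpoint map below is
# hypothetical stand-in data for what the Endpoints API would return.

def pods_behind_service(endpoints, service_name):
    """Return the pod addresses registered for a service."""
    return endpoints.get(service_name, [])

def pick_pod(addresses, strategy):
    """Apply a pluggable selection strategy to the candidate addresses."""
    return strategy(addresses) if addresses else None

endpoints = {"rest-service": ["10.0.1.5:8080", "10.0.2.7:8080"]}
addrs = pods_behind_service(endpoints, "rest-service")
# A trivial "first pod" strategy; a custom one could weigh readiness,
# zone locality, or load instead.
chosen = pick_pod(addrs, strategy=lambda a: a[0])
print(chosen)  # -> 10.0.1.5:8080
```

The point of the sketch is the pluggable `strategy` callable: the platform still owns discovery, while instance selection stays in the caller's hands.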

Key features and benefits of Stork

Enabling hyperconvergence for stateful containers

Modern stateful applications scale out to increase performance and capacity, and each cluster instance performs best when it runs near its data: local, direct storage access reduces latency and improves response times. But when teams have to use constraints and labels to enforce data locality, maintaining those rules across many data centers and servers quickly becomes unmanageable.

The Kubernetes volume plug-in infrastructure provides generic concepts that ensure smooth integration with storage solutions such as cloud storage, cloud-native storage, and SANs. Labeling is one way to indicate which nodes hold the data for a PersistentVolumeClaim in Kubernetes. However, labels alone cannot decide which placement will deliver the best performance; a hyperconvergence strategy for stateful containers addresses exactly this.
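A minimal sketch of storage-aware prioritization, in the spirit of a Kubernetes scheduler extender (the node names, replica set, and weights are hypothetical; a real extender would query the storage driver for replica locations):

```python
# Sketch: score candidate nodes so that nodes holding a replica of the
# pod's volume score higher, letting the pod land next to its data
# (hyperconvergence). Names and weights below are illustrative only.

def prioritize(nodes, replica_nodes, local_weight=100, remote_weight=0):
    """Return per-node scores favoring nodes with a local volume replica."""
    return [
        {"host": n, "score": local_weight if n in replica_nodes else remote_weight}
        for n in nodes
    ]

nodes = ["node-a", "node-b", "node-c"]
replicas = {"node-b", "node-c"}  # nodes holding the PVC's data
scores = prioritize(nodes, replicas)
best = max(scores, key=lambda s: s["score"])
print(best["host"])  # -> node-b
```

In a real extender these scores would be returned to the Kubernetes scheduler, which combines them with its own priorities before binding the pod.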

Storage Health Monitoring

Storage health monitoring lets CSI drivers detect abnormal volume conditions in the underlying storage systems and report them as events on PersistentVolumeClaims (PVCs) and PersistentVolumes. A common issue with stateful applications is that they subject the storage fabric to wear and tear, degrading its overall health over time. Without such monitoring, pods are not rescheduled to healthy hosts when a storage driver encounters a failure, making your application unavailable.

For example, consider a pod that has been started with a volume provisioned by a storage driver. If the storage driver hits an error, the application's health checks may continue to succeed even though it can no longer read from or write to its persistent store. At the same time, the volume used by the pod could have another replica available elsewhere in the cluster that would allow it to function correctly.

Stork monitors storage health and fails pods whenever a storage driver enters an error or unavailable state, keeping your applications highly available without any user intervention.
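The monitoring loop described above can be sketched as follows (illustrative Python; the pod records and driver status values are hypothetical, and a real implementation would read them from the Kubernetes API and the storage driver):

```python
# Sketch: when the storage driver on a node reports an error or offline
# state, select the pods on that node that use its volumes so they can
# be failed and rescheduled onto a healthy node.

def pods_to_fail(pods, driver_status):
    """Return names of pods whose node's storage driver is unhealthy."""
    unhealthy = {
        node for node, status in driver_status.items()
        if status in ("error", "offline")
    }
    return [p["name"] for p in pods if p["node"] in unhealthy]

pods = [
    {"name": "db-0", "node": "node-a"},
    {"name": "db-1", "node": "node-b"},
]
driver_status = {"node-a": "online", "node-b": "error"}
print(pods_to_fail(pods, driver_status))  # -> ['db-1']
```

Failing `db-1` lets the scheduler restart it on a node where the driver is healthy, ideally one that already holds a replica of its volume.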

Volume Snapshots Support

Similar to how the PersistentVolumeClaim and PersistentVolume API resources provision volumes for administrators and users, the VolumeSnapshot and VolumeSnapshotContent API resources create volume snapshots.

These snapshots are a critical management tool for environment duplication and data recovery. Without orchestrator-level snapshot support, DevOps teams have to manage complex life-cycle operations through the storage provider's tools instead of directly via Kubernetes, reducing the automation benefits of Kubernetes for complex data workflows such as testing, upgrades, disaster recovery, and blue-green deployments.

With Stork, users can automate complex data workflows throughout Kubernetes.
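For reference, a CSI VolumeSnapshot request has the following shape, shown here as a plain Python dict mirroring the Kubernetes manifest (the snapshot, PVC, and snapshot-class names are placeholders):

```python
# Sketch: build the body of a VolumeSnapshot resource as submitted to
# the Kubernetes API server. The names below are placeholders.

def volume_snapshot(name, pvc_name, snapshot_class):
    """Return a VolumeSnapshot manifest referencing an existing PVC."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

snap = volume_snapshot("mysql-snap-1", "mysql-data", "csi-snapclass")
print(snap["spec"]["source"]["persistentVolumeClaimName"])  # -> mysql-data
```

A tool that automates data workflows submits manifests like this on a schedule or as part of a migration, instead of driving the storage provider's own tooling.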

How Portworx uses Stork to build a scalable storage solution for Kubernetes

Stork was developed by Portworx and provides an even closer-knit integration of Portworx with Kubernetes. It migrates pods seamlessly in case of storage errors, co-locates pods with their data, and makes creating and restoring snapshots of Portworx volumes effortless.

Stork has two components: the Stork scheduler and an extender. You can install Stork along with Portworx through the Portworx spec generator page in PX-Central by selecting Stork; adding stork=true to the parameter list includes the Stork specs in the generated file. You can also install Stork manually by following the steps on the Stork project page.

In conclusion, if you’re looking to level up your Kubernetes storage game, Stork is one tool to look at. Or even better, consider Portworx for an enterprise-ready implementation of Stork.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts or email sales@amazic.com.
