Cloud computing gives you the flexibility to host your business services the way you want. You can choose an IaaS, PaaS, or SaaS solution, or a combination of them. The same is true for your storage needs. Whatever solution you choose, performance problems can occur, even though you have virtually unlimited options to scale your infrastructure. Your application might demand ultra-low-latency connections or sustained CPU performance to process large video files. These kinds of requirements can all be met, but they require specific tweaking of your cloud configuration. Every cloud provider offers its own solutions for typical storage-related challenges. In this article, I will cover useful techniques to boost your cloud storage performance.
Before we jump into “solution mode”, let’s explore which common storage problems can occur when you store or process your data. First, the hard disks you selected in the cloud can become the bottleneck if they’re not fast enough. Second, your internal network or the network on the cloud provider’s side can be the problem. Third, memory limitations on the infrastructure or application side can cause trouble as well. And last but not least, not using a cloud provider’s offerings the way they are intended can lead to problems.
All in all, the results range from delayed processing of your data to complete data loss and failed transactions, which hamper your end users’ experience and thus your (core) business.
Uploading and downloading files
Nearly every cloud provider offers a smooth way to upload your data to their platform. Often, it’s easier to get your data in than to get it out again. And moving or copying a large number of small files is often more problematic than handling a single large file, even on a local system.
Everyone remembers the famous Windows “seconds remaining” status window that never finishes. It has to do with reading files one by one, caching, buffering, allocating and releasing memory, and other operations that happen under the hood, outside of your control. Knowing the cloud providers’ inside-out tips and tricks helps you avoid this kind of situation.
Network speed is one of the key factors that determine upload speeds. The internal network inside your data center might be sufficient for your upload requirements, whereas the (internet) connection to your cloud provider might not. This is a relatively simple factor to improve, but there is more to take into account.
According to Google, (almost) every developer use case is supported by its unified, object-storage-based platform. It offers Content Delivery Networks, automatic redundancy, and “close to edge serving capabilities”. Besides these, there are other techniques to help you here:
- Consider renaming your files, since Google Cloud distributes load based on object names. Filenames that are nearly identical (for example, sequentially numbered) end up in the same index range, which creates hotspots and decreases upload speeds.
- Use parallel composite uploads (“object composition”) to split large local files into multiple smaller chunks, upload those chunks in parallel, and compose them into a single object.
- Tweak your chunk sizes to find the sweet spot between the size of the request and the total number of requests.
- Utilize gsutil’s multi-threading/multi-processing option (the -m flag) to boost the transfer performance of a large number of small files by roughly 5 times.
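The renaming and parallelism tips above can be sketched in a few lines of Python. The prefixing and thread-pool logic below is real; `upload_one` is a hypothetical placeholder for the actual SDK call (for example, `bucket.blob(name).upload_from_string(data)` in the google-cloud-storage library):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def randomized_name(filename: str) -> str:
    """Prepend a short hash so object names no longer share a long
    common prefix, spreading load across the storage index ranges."""
    prefix = hashlib.md5(filename.encode()).hexdigest()[:8]
    return f"{prefix}_{filename}"

def upload_all(files: dict, upload_one, max_workers: int = 8) -> list:
    """Upload many small files in parallel (the effect of `gsutil -m`).
    `upload_one(name, data)` stands in for the real SDK upload call."""
    names = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(upload_one, randomized_name(n), d)
                   for n, d in files.items()]
        for f in futures:
            names.append(f.result())
    return names
```

The hash prefixes spread sequential names like `img_0001.jpg`, `img_0002.jpg` across different name ranges, avoiding the hotspot behavior described above.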
In contrast to Google Cloud, AWS focuses more on guidelines and design patterns that end users can implement themselves. The following techniques are specific to S3 storage solutions:
- Combine compute and storage in a single region: deploy your AWS EC2 instances and your S3 buckets in the same region to reduce network latency as well as data transfer costs.
- Implement retry logic for the server-side 503 errors that AWS sends back to you. For latency-sensitive applications, aggressive retry techniques are advised: start a new request and perform a fresh DNS lookup before sending it. This greatly improves the speed of sending a large number of small files.
- Use Amazon S3 Transfer Acceleration to move large files (GBs or even TBs) between continents. Be sure to check out the speed comparison tool to compare the transfer speed of different regions relative to yours.
- Only fetch the portion of the object/file you need. This is achieved using “byte-range fetches” in combination with multiple concurrent connections.
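To illustrate the last point, here is a minimal sketch of how byte-range fetches can be prepared. The range arithmetic is plain HTTP; the boto3 call mentioned in the comment is the real S3 API, but the bucket and key names would be your own:

```python
def byte_ranges(object_size: int, chunk_size: int) -> list:
    """Build HTTP Range header values to fetch an object in chunks.
    Each value can be issued as a separate, concurrent request, e.g.
    with boto3: s3.get_object(Bucket=bucket, Key=key, Range=r)."""
    ranges = []
    for start in range(0, object_size, chunk_size):
        end = min(start + chunk_size, object_size) - 1
        ranges.append(f"bytes={start}-{end}")
    return ranges
```

Issuing these ranges over multiple concurrent connections lets you saturate the network without downloading the parts of the object you don’t need.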
.NET developers who want to speed up upload and download operations for their applications in Azure can use the Azure Storage client library for .NET. This library accepts tuning parameters that enhance the transfer of objects to and from Azure Storage Blobs or Azure Data Lake services.
Some potential improvements:
- Maximize network throughput by splitting the maximum transfer size into smaller pieces. This ensures uploads are carried out in parallel and buffered, so a failed chunk can be retried without restarting the whole transfer.
- Set the initial transfer size prior to the upload actually starting so you can limit the number of requests needed for a single upload action.
- When downloading files from Azure services, the Storage client library fetches the initial transfer size first before carrying out the actual download. That first response tells the library how much data remains, which speeds up the rest of the transfer.
Whenever you’re working with more than a few trivial files, it’s best to configure StorageTransferOptions so your transfers are optimized.
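As a rough model of why these options matter, the sketch below estimates how many requests a single upload needs for a given initial and maximum transfer size. This is illustrative arithmetic only, not the client library’s exact internal behavior:

```python
def request_count(blob_size: int, initial: int, maximum: int) -> int:
    """Estimate requests for one upload: a blob that fits within the
    initial transfer size goes up in a single request; otherwise it is
    split into maximum-transfer-size chunks sent in parallel. Mirrors
    the intent of StorageTransferOptions.InitialTransferSize and
    .MaximumTransferSize in the .NET client library."""
    if blob_size <= initial:
        return 1
    return -(-blob_size // maximum)  # ceiling division
```

Raising the initial transfer size lets mid-sized blobs go up in one request; for larger blobs, the maximum transfer size trades per-request payload against the total number of requests.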
Disk-based storage enhancements
Since cloud providers offer a ton of different disk storage solutions, it’s impossible to name them all. Overall, several aspects have either a positive or a negative effect on the performance you get. First of all, you need to determine the type of disk you require.
AWS offers Elastic Block Store (EBS) and Elastic File System (EFS) as its main disk storage solutions. EBS volume types come in different flavors, such as classic HDDs and SSDs with and without performance optimization. In general, the following categories are relevant here:
- General Purpose SSD: generic hard disks with a sweet balance between speed and costs.
- Provisioned IOPS SSD: faster volumes for mission-critical workloads.
EFS also offers different configuration options to optimize for performance or for consistent throughput. You can choose between storage classes, performance modes, and throughput modes. It’s best to consult the EFS performance page to check out all of the available options.
Like AWS, Google Cloud offers several storage options for its disks. End users can choose between standard, balanced, SSD, and extreme persistent disks. Each has its own characteristics in terms of I/O, maximum IOPS, latency, throughput, and so on. Besides these, you need to choose between locally attached disks and persistent disks (which are, in fact, network-attached disks).
The following techniques help you improve disk performance:
- Enable DISCARD commands and disable lazy initialization. Simply put, both configuration options help the disk handle files as efficiently as possible.
- Leave enough free CPU capacity to handle bulk loads of data: persistent disk I/O consumes CPU cycles, so saturated vCPUs throttle disk performance.
- Decide whether your workload is IOPS-oriented or throughput-oriented. Hosting a SQL or NoSQL database, for example, differs from streaming operations such as Hadoop jobs.
You can find more detailed techniques in the article dedicated to optimizing persistent disk performance.
One of the most prominent storage solutions in Azure is Azure Blob Storage, a solution for storing huge amounts of unstructured data (such as documents, text files, or other binaries).
Every type of Blob storage requires a different (storage) account to access your data. There are standard storage accounts as well as premium block blob and premium page blob storage accounts. All share certain limits, such as the number of storage accounts per region and subscription, maximum account capacity, number of blob containers, and maximum request rate per second. On top of that, these limits might differ per region.
Besides this storage solution, it’s also vital to understand which disk storage solutions you need to optimize to enhance the performance of your applications.
The following techniques help you optimize IOPS, latency, and throughput:
- The VM size has a significant impact on the number of IOPS the disk can process. Choose a VM size suitable for your application.
- Use read-only disk caching for your premium storage disks to increase your Read IOPS.
- Combine (stripe) multiple disks to raise your total IOPS and throughput limits.
- On the application side, make sure your data storage requests use multi-threading in combination with premium storage.
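The interplay between the first and third bullets above can be captured in one line: striping disks raises the disk-side IOPS limit, but the VM size still caps what you actually get. The figures below are illustrative, not actual Azure SKU specifications:

```python
def effective_iops(disk_iops: list, vm_iops_cap: int) -> int:
    """IOPS available after striping several data disks together:
    the sum of the per-disk limits, capped by the VM size's own
    disk IOPS limit."""
    return min(sum(disk_iops), vm_iops_cap)

# Four 5,000-IOPS disks behind a VM capped at 12,800 IOPS: striping
# raises the disk-side limit to 20,000, but the VM cap wins.
combined = effective_iops([5_000] * 4, 12_800)
```

This is why picking the VM size and sizing the disk stripe set have to be done together: adding disks past the VM cap buys you nothing.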
Azure offers a great page with design considerations to optimize the performance of your applications.
As seen in this article, cloud providers offer a great number of services to fit your storage needs. For every storage solution, you need to carefully tweak the configuration parameters based on the type of solution your application requires. All major cloud providers offer great documentation on how to optimize for certain workloads and types of data. Be sure to check out the tutorials, benchmarks, and speed tests to find the optimal solution and settings for your workload.
If you have questions related to this topic, feel free to book a meeting with one of our solution experts, or mail to email@example.com.