Beyond the hype? Cloud repatriation.

Contrary to one of the biggest hypes of recent years, some companies are retracting their cloud (native) activities. Although cloud-related projects have increased sharply across the entire IT industry, some organizations choose to move their data and other assets back to their on-premises data centers. There can be multiple reasons for this. Their viewpoints and (business) decisions are interesting to explore, since they go against the trends that are revolutionizing how software is built and how companies run their IT departments. Managers can use this article to rethink their decisions or to make a more informed choice to either stay in the cloud or migrate back to where they came from. The final answer is not a simple one, since many aspects need to be taken into account.
Before we do a deeper dive into the individual aspects, let's first explore the main factors that drive companies back. Many websites publish lists of common reasons that they see as the main arguments for companies across different industries.
Cost management

The public cloud promised lower costs and greater flexibility. Yet one of the most prominent factors is cost management: companies were unable to keep costs under control. They were surprised by hefty bills that demanded action on their side, either to generate more revenue or to slim down their expenses.
Security and data sovereignty
Security remains a top priority in all phases of the Software Development Lifecycle. It is also the number two reason for cloud repatriation. Companies seek more control over security and compliance concerns. Connected to this argument is data sovereignty, which dictates that data should be stored within their own geographical area. In their view, these concerns are better addressed by keeping resources in-house.
Skills shortage

A lack of skills plays an important role in (properly) using the cloud and reaping all of its benefits. When the right skills are hard to find, attracting talent becomes expensive, and this has a negative effect on an organization's cash flow.
Performance and other factors
The rise of edge computing also takes its place in the list of arguments. Edge devices are located close to where IoT data is generated and processed, which requires ultra-low latency and plenty of processing power on the spot. These requirements are harder to meet with centralized cloud computing.
Despite the many powerful options the cloud offers for data backups, companies might still keep a data center for its backup and recovery features, to "be extra safe" with valuable data.
Cost management

Some companies faced higher costs than expected and needed to anticipate and deal with issues they never had on-premises. The following (bad) practices or anti-patterns contribute to high(er) cloud costs and should be avoided when adopting cloud computing.
At the top of the list is over-provisioning: renting huge Virtual Machines that keep running forever, whether they are used or not. Under a "pay-per-use" model, these machines should be switched off when not needed or scaled down when demand decreases. Besides Virtual Machines, think of expensive data storage solutions that are constantly online. Over-provisioning contributes heavily to the cloud bill. Also take into account the costs of data transfer, especially out of the cloud.
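To make the pay-per-use effect concrete, here is a minimal sketch that compares an always-on VM against one that only runs during business hours. The hourly rate is a hypothetical placeholder, not any provider's actual price:

```python
# Sketch: monthly saving from switching a VM off outside business hours
# under a pay-per-use model. The hourly rate is hypothetical.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate: float, hours_on: float) -> float:
    """Pay-per-use cost for the hours the machine actually runs."""
    return hourly_rate * hours_on

def business_hours_per_month(hours_per_day: int = 10, days_per_week: int = 5) -> float:
    """Rough number of business hours in an average month."""
    return hours_per_day * days_per_week * 52 / 12

rate = 0.50  # $/hour for a large VM (made-up figure)
always_on = monthly_cost(rate, HOURS_PER_MONTH)
office_hours = monthly_cost(rate, business_hours_per_month())
print(f"always on:    ${always_on:.2f}/month")
print(f"office hours: ${office_hours:.2f}/month")
print(f"saving:       ${always_on - office_hours:.2f}/month")
```

Even this crude calculation shows that an always-on machine costs more than three times as much as one that follows office hours, before any right-sizing is considered.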
Unused cloud resources
On top of this, companies need to clean up their unused resources. Every resource that sits idle costs money, and these "zombie resources" eat away at your cloud budget. Take down unneeded servers, data storage resources, databases, PaaS services, and other provisioned infrastructure as soon as possible. Consistent tags are needed to detect those resources and take them down, so you need a proper tagging methodology.
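Detecting zombie resources is much easier to automate when tagging is consistent. A minimal sketch, assuming a simple inventory export format and made-up tag names rather than any real cloud API:

```python
# Sketch: flag "zombie" resources in a cloud inventory export.
# The inventory shape and the required tag names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                       # e.g. "vm", "disk", "database"
    tags: dict = field(default_factory=dict)
    days_since_last_use: int = 0

REQUIRED_TAGS = {"owner", "project", "expiry"}

def untagged(resources):
    """Resources missing a required tag, so nobody clearly owns them."""
    return [r for r in resources if not REQUIRED_TAGS <= r.tags.keys()]

def idle(resources, threshold_days=30):
    """Resources unused for longer than the threshold."""
    return [r for r in resources if r.days_since_last_use > threshold_days]

inventory = [
    Resource("build-agent-7", "vm",
             {"owner": "ci", "project": "web", "expiry": "2025-01"}, 2),
    Resource("tmp-disk-old", "disk", {}, 120),
]
print([r.name for r in untagged(inventory)])  # the untagged disk
print([r.name for r in idle(inventory)])      # the same disk is also idle
```

Running a check like this on a schedule turns tag hygiene into an actionable cleanup list instead of a one-off audit.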
Operational costs can also increase when designing, developing, and maintaining complex architectures. These add to the overall complexity of the entire solution, which includes internal and externally connected services. A complex architecture also leads to data redundancy and takes more time to learn and understand. On top of that, it results in more errors (human and system-based) and more security configurations and/or vulnerabilities.
Besides these aspects, there are many more factors to take into account: premium or enterprise features that go beyond what you actually require, cost optimization and monitoring capabilities that are not actively implemented, and inefficient use of container orchestration. All of these lead to higher costs.
Security and data sovereignty
Data sovereignty refers to the legal and regulatory requirement that data is subject to the laws and jurisdiction of the country in which it is located. It is a crucial concept in the context of data governance, especially when data is stored and processed in external, often remote, data centers.
Companies seek ways to comply with regulations that dictate how to deal with data protection and privacy laws. In the cloud, some data can be stored in your own geographic location, such as Western Europe, while other data is stored in a central yet unknown location. Think of IAM roles and permissions that dictate who or which object has access to which resource: IAM-related permissions are not bound to a geographic location. Failing to fulfill certain regulations and restrictions can lead to legal consequences and fines.
Sometimes the line is a bit blurred about whether or not data may be stored or processed in certain areas, for example when using Microsoft Azure DevOps. You need to clearly define which data is allowed to be processed by Microsoft-hosted agents. Are these agents allowed to process all of the data (including application data) or just source code without any reference data? And by accepting that, in which country are those agents located? Keep in mind that data residency dictates in which country the data should reside in order to protect the interests of the people the data actually belongs to.
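A residency policy can be enforced before anything is deployed with a simple allow-list check. A minimal sketch; the region names and the resource list format are assumptions for illustration, not any real provider's API:

```python
# Sketch: verify every planned resource lives in an allowed region.
# Region names and the plan format are illustrative assumptions.

ALLOWED_REGIONS = {"westeurope", "northeurope"}  # e.g. an EU-only policy

def residency_violations(resources):
    """Return (name, region) pairs for resources outside allowed regions."""
    return [(name, region) for name, region in resources
            if region not in ALLOWED_REGIONS]

plan = [
    ("customer-db", "westeurope"),
    ("analytics-cache", "eastus"),   # violates the EU-only policy
]
print(residency_violations(plan))    # flags the cache in eastus
```

Wiring a check like this into the deployment pipeline catches residency mistakes before they become compliance incidents.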
Data stored locally is often easier to access in case of emergencies, disasters, or outages. Business continuity plans and data recovery actions are easier to carry out when the data is located inside the borders of the country in which the organization operates.
Customers who demand strict data sovereignty are more likely to put their trust in a company that keeps data within the countries in which it operates, since they know what to expect in terms of legal and compliance-related aspects.
Skills shortage

Every company needs access to professionals (both internal and external) who understand how the cloud works from a technical as well as an organizational point of view. Cloud technology changes rapidly, which requires people to keep up with major new features. The following skills shortages are among the reasons why companies migrate back to their on-premises data centers.
Governance and special tech skills
- Cloud management and governance experts can help to control costs, implement governance frameworks, and improve resource utilization. All within the perspective of a changing way of working, in which a DevOps team is responsible for an application from code to production.
- Specific tech skills such as IoT integration, Machine Learning, and AI, which are only available in the cloud, are difficult and costly to acquire. Specialists tend to hop on to the next exciting opportunity in this area, so companies need to attract experts in these fields as well as keep them interested.
Architecture and security
- Cloud and security architects are a "must-have" for every organization. Although certain aspects remain the same, application and infrastructure architectures are different in the cloud; think of event-driven applications that use serverless architectures. The security model changes both technologically, with micro-segmentation and zero-trust networks, and organizationally, with security ownership shifting to individual teams instead of a central security department.
- Cloud computing is less valuable if there are no proper CI/CD pipelines in place and teams do not adhere to the DevOps way of working. More manual handovers, approvals, and bottlenecks slow down the transition to the cloud.
Pay attention to soft skills
Soft skills are important, especially for a diverse group of managers. They need to understand how the cloud works and know their new role in the changing organization. As soon as they hamper the migration, people get frustrated and lose motivation, which can seriously impact the outcome of the entire program. If cloud migration projects are not supported by senior management, the whole program is likely to fail before you've even reached the first milestone.
In addition to the challenges above, be aware that you need to train people on various topics: a completely different way of working, hard tech skills, and business-related skills like business continuity and cost models.
If all of these aspects are slowing you down, you might conclude you’re better off moving (part of your infrastructure) back.
Performance and other aspects
Performance is one of the key aspects of applications and other resources that contributes highly to the success of your cloud-related projects. Much of it depends on the capacity, bandwidth, and stability of the connection from your own data center or workstation to the cloud, but it also depends on other factors such as the following.
- Companies might underestimate the data transfer requirements of their applications. If this results in high data volumes or a huge number of billable requests, it can be a show stopper.
- From a physical point of view, the servers in the cloud are farther away from you than your own local data center. This affects the latency of the messages and data being sent across. If you encounter unacceptable delays that hamper your business (e.g. angry customers who face a decline in service), you need to adjust your architecture or repatriate the solution.
- Edge computing requires processing power very close to where the data is needed. Latency can be an issue here, but so can the available computing power. For example, AWS offers G5 instances only in a number of selected regions; if you require another region, you can't use this option.
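Physics alone puts a floor under latency: a signal in fibre travels at roughly two-thirds the speed of light, so distance sets a best-case round-trip time that no provider can beat. A rough back-of-the-envelope sketch:

```python
# Sketch: a lower bound on network round-trip time from distance alone.
# Light in fibre covers roughly 200 km per millisecond (~2/3 of c).

FIBRE_SPEED_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds, ignoring routing and
    processing overhead (real latency is always higher)."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

print(f"on-prem,       50 km: {min_rtt_ms(50):.1f} ms")
print(f"in-region,    500 km: {min_rtt_ms(500):.1f} ms")
print(f"cross-ocean, 6000 km: {min_rtt_ms(6000):.1f} ms")
```

If an application needs sub-millisecond responses, a cloud region hundreds of kilometres away can never deliver them, no matter how fast the service itself is. That is precisely the niche of edge and on-premises deployments.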
Companies that run a hybrid cloud setup, for example to use backup and data recovery services, might face issues when data transfer takes too long or is too expensive. Suppose you need to recover data from an S3 bucket to your local data center: you pay a hefty amount, since external data transfer is a costly operation. This is a major factor to take into account when such recoveries are common in your company.
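The egress bill for such a recovery can be estimated up front. A minimal sketch using tiered per-GB rates; the rates below are hypothetical placeholders, not current AWS pricing:

```python
# Sketch: estimate the egress cost of restoring a backup from object
# storage to on-premises. The tiered $/GB rates are made up for
# illustration; check your provider's current price list.

EGRESS_TIERS = [            # (tier size in GB, $ per GB), applied in order
    (10_240, 0.09),         # first ~10 TB
    (40_960, 0.085),        # next ~40 TB
    (float("inf"), 0.07),   # everything beyond that
]

def egress_cost(gb: float) -> float:
    """Total transfer cost for `gb` gigabytes under the tiered rates."""
    cost, remaining = 0.0, gb
    for tier_gb, rate in EGRESS_TIERS:
        used = min(remaining, tier_gb)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Restoring a 50 TB backup (51,200 GB):
print(f"${egress_cost(51_200):,.2f}")
```

Even at these illustrative rates, a single full restore of a 50 TB backup runs into thousands of dollars, which is why frequent large recoveries can tip the balance toward keeping a local copy.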
As always, this list can never be complete and every situation is different. These aspects are meant to make you think about the projects you're executing or upcoming projects that are yet to be migrated. You can also use them as a set of guidelines to evaluate your current projects in the cloud.
Companies repatriate from the cloud for different reasons. Often they face high monthly cloud bills: many companies accidentally over-provision their infrastructure resources, which just eats away at their budgets. They lack the skills to select the best architectures, or they miss critical security skills to comply with regulatory restrictions and other compliance rules.
Sometimes, the cloud can’t bring them the right processing power or performance they seek. Last but not least, soft skills on all levels in the organization are a must-have to fully reap the benefits of cloud computing.