
Is DevOps with CI/CD the panacea everybody thinks it is?

There is no longer any doubt about the benefits of DevOps and its kissing cousin CI/CD over paradigms like the waterfall methodology: faster feature roll-outs, fewer of the service outages that the massive upgrades of legacy deployment strategies entailed, and the ability to roll back deployments quickly because changes are iterative rather than wholesale.

On the face of it, DevOps with CI/CD appears to be the perfect methodology. But is it enough? If not, what else is needed? How can you improve on what appears to be perfection?

The continual feedback loop feeds lessons learned back into the next deployment to improve delivery efficiency; small iterative changes move through a defined and automated deployment cycle, reducing the potential for catastrophic failure in service; and managed gates with testing at each stage confirm or disprove the efficacy of the code.

DevOps Pipeline

So is CI/CD enough?

The concept of CI/CD is often misunderstood. In fact, the acronym has quite a few expansions. These being:

  • Continuous Integration, Continuous Delivery
  • Continuous Improvement, Continuous Delivery
  • Continuous Integration, Continuous Deployment
  • Continuous Improvement, Continuous Deployment

So which one is it?

It can be any and all; it rather depends on the context. CI/CD is flexible, malleable, and carefully indistinct in its definition: four words, each with a distinct meaning according to the Oxford Dictionary. However, the important thing is that they are all sides of the same square; each, in its own way, marks a vector pointing in the same direction.

Our integration should always be iterative and moving towards perfection, and our deployments should always be part of this iterative process. This leads to continuous improvement in our processes and applications, which in turn drives our delivery of the overall solution.

Perhaps it is better to see continuous integration and continuous deployment as applying at the micro level, relating mainly to the physical processes: the writing of the code, the testing of that code, and its deployment into production. Conversely, continuous improvement and continuous delivery are more strategic, macro-level concerns: they deal with improving the whole rather than the individual processes, and thereby improve the overall delivery of the solution.

However, you cannot say that without hitting problems, because it is too simplistic a point of view and not the whole story: even at the micro level, improvement and delivery should be continuous and part of the feedback loop built into the service or product lifecycle.

Continuous Improvement has been with us as a concept for a long time; it came to the fore with the release of Masaaki Imai's book Kaizen: The Key to Japan's Competitive Success, and it is no surprise that the concepts of DevOps are heavily tied to the improvement processes that the Japanese motor industry, and Toyota in particular, undertook. Anybody who has read The Phoenix Project will have no doubt about these connections.

Continuous Integration is the process of merging code changes into a pipeline and creating the feedback loop necessary to drive the improvement process. Therefore, it is more about products than process: proper version control to protect the codebase (GitLab, GitHub), and pipeline and workflow management (Jira, Kanban boards, etc.).
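
As a minimal sketch of that feedback loop (the check command and the names below are illustrative, not taken from any particular CI product), every merge into the mainline triggers the automated checks and the result goes straight back to the author:

```python
# Minimal sketch of a continuous-integration step: every change merged into
# the mainline triggers the automated checks, and the result is fed straight
# back to the author. The check command and names are illustrative only.
import subprocess
import sys

def run_checks() -> bool:
    """Run the project's test suite; True means the integration gate passed."""
    result = subprocess.run([sys.executable, "-m", "unittest", "discover", "-v"])
    return result.returncode == 0

def integrate(change_id: str) -> None:
    if run_checks():
        print(f"{change_id}: checks passed, the change stays on the mainline")
    else:
        print(f"{change_id}: checks failed, feedback goes back to the developer")

if __name__ == "__main__":
    integrate("feature/login-fix")
```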

Next, we will look at continuous deployment and continuous delivery. Again, they appear to be two sides of the same coin; however:

Continuous Deployment relates to the act of deploying every addition to the main branch of code to the end user. Take, for example, the deployment of our LAMP stack: it includes the act of submitting a change to Git and issuing the code out for testing, then through staging and into eventual release to production. So, in the example mentioned earlier, continuous deployment happens after continuous integration, when Dev passes any feature or bug fixes that have been made to the code to be merged into the mainline/trunk branch. All the changes are sent to the testing and staging teams and finally to the production environment for end-user consumption; if at any time there is a failure, the code is passed back.
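
A rough sketch of that flow might look like the following (the stage names and stand-in checks are purely illustrative): every change that clears one environment is promoted to the next, and a failure at any stage sends it back rather than forward:

```python
# Sketch of continuous deployment: a change that passes each environment's
# checks is promoted automatically until it reaches production; a failure at
# any stage sends it back to the developers instead. Stage names and the
# check functions are illustrative placeholders.
from typing import Callable, Dict

STAGES = ["integration", "testing", "staging", "production"]

def deploy(change_id: str, checks: Dict[str, Callable[[], bool]]) -> bool:
    for stage in STAGES:
        if not checks[stage]():
            print(f"{change_id}: failed in {stage}, passed back for rework")
            return False
        print(f"{change_id}: {stage} passed, promoting")
    print(f"{change_id}: live in production")
    return True

# Example run with trivial stand-in checks.
deploy("bugfix-142", {stage: (lambda: True) for stage in STAGES})
```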

Continuous Delivery is the act of having a piece of code that is deployable, i.e. it has passed through the pipeline and is ready to be pushed into production. Depending on the policies of the company you are at, you may not want to deploy every change or feature that is added, but rather wait until a batch of features and bug fixes has accumulated and deploy it all at once (hence seeing feature updates every one to two weeks). This is a point of contention among developers: some teams prefer to deploy early and often, while others wait until the end of a two-week cycle.
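
Reduced to code, the difference is a single policy gate (the flag name below is illustrative, not from any real tool): with continuous delivery the artifact is always releasable but waits for a business decision; with continuous deployment that gate is always open:

```python
# Sketch of the delivery/deployment distinction: every change produces an
# artifact that is kept releasable, but whether it ships immediately or waits
# for a scheduled release is a policy decision, not a technical one.
# The flag name is illustrative, not taken from any real tool.
releasable_queue: list[str] = []

def release(artifact: str) -> None:
    print(f"releasing {artifact} to production")

def deliver(artifact: str, release_on_every_change: bool) -> None:
    releasable_queue.append(artifact)    # continuous delivery: always deployable
    if release_on_every_change:          # continuous deployment: ship it now
        while releasable_queue:
            release(releasable_queue.pop(0))

deliver("web-app-1.4.2", release_on_every_change=False)  # waits for the fortnightly release
deliver("web-app-1.4.3", release_on_every_change=True)   # everything queued goes straight out
```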

What else is needed?

From the perspective of the DevOps paradigm, nothing. It makes sense, but that does not mean that it is perfect. The scope of the paradigm needs to expand: the name is DevOps, but from the pit face it is very much big Dev, all about development, and small Ops, with little that is meaningful to traditional operational staff.

The developers are fully on board with the paradigm, quickly taking to heart version control, Git repositories, and iterative development cycles. Operations, not so much: they are still very much stuck in the old operations ivory tower, with the psychological perspective of protecting the world from the nasty developer who has no idea of the realities of the real world.

If you think back to the origins of the DevOps revolution, it was supposed to bring these two pillars of the IT world closer together, merging and melding to streamline the process from inception to delivery of application and infrastructure needs (coding to end user). Early in the evolution of the paradigm, the majority of procedural changes were in the operations arena, with flexible cross-functional teams created and disbanded as and when needed to speed up infrastructure projects by removing box decision paths and speeding up decisions.

DevOps - Decision Route

However, the transformation within Operations appears to have stalled: the low-hanging fruit of new projects was picked early, and as DevOps principles started to be applied to day-one and day-two operations, it was found that they do not easily dovetail into environments with legacy applications and an entrenched waterfall mentality.

In many enterprises, updates and bug fixes are still stuck in archaic change-control procedures that are heavily biased towards process. And so devOPS became DEVops. DevOps also does not easily integrate security out of the box. It is getting better at a technical level: with the rise of DevSecOps, the concepts of baking security in at the beginning of a code cycle and during integration testing have largely been solved, but again the ongoing issues are procedural, not technical.

The security team is still, in many cases, seen as the department that puts the "no" into innovation. They too have entrenched positions regarding how things should be done, and those positions do not easily lend themselves to a DevOps perspective. This is changing, especially with the help of tools from companies like Veracode or Qbit Logic with CodeAI, which can automatically scan traditional code such as C++, Java, and Python for known vulnerabilities and offer remediation strategies via their built-in algorithms and automated regression testing.
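
At the technical level, "baking security in" mostly means adding a scanning gate to the same pipeline. The sketch below uses Bandit, an open-source static analyser for Python, purely as a stand-in for whichever scanner a team actually runs (it is not one of the tools named above); the wiring is the point, not the product:

```python
# Sketch of a DevSecOps gate: a static security scan runs as just another
# pipeline stage, so a known vulnerability blocks promotion in the same way
# a failing unit test would. Bandit (pip install bandit) is used here only
# as an example scanner; swap in whatever your organization mandates.
import subprocess
import sys

def security_gate(source_dir: str) -> bool:
    # "bandit -r <dir>" recursively scans the source tree and exits non-zero
    # when it reports issues, which is exactly what a pipeline gate needs.
    result = subprocess.run(["bandit", "-r", source_dir])
    return result.returncode == 0

if __name__ == "__main__":
    if not security_gate("src/"):
        print("security scan failed: the change is blocked before staging")
        sys.exit(1)
    print("security scan clean: the change may be promoted")
```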

From an infrastructure perspective, we have IaC with Ansible, Terraform, Vagrant, Packer, and other tools, so operationally we can do DevOps.
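
As a deliberately bare sketch of what that looks like in practice, a pipeline step can drive Terraform the same way it drives a test suite; `terraform init`, `plan`, and `apply` are the real commands, while the working directory and the wrapper around them are illustrative:

```python
# Sketch of infrastructure-as-code inside the pipeline: infrastructure goes
# through the same gated, repeatable flow as application code. The terraform
# init/plan/apply commands are real; the directory and wrapper are illustrative.
import subprocess

def provision(workdir: str) -> bool:
    steps = (
        ["terraform", "init"],                 # fetch providers and modules
        ["terraform", "plan", "-out=tfplan"],  # produce a reviewable change set
        ["terraform", "apply", "tfplan"],      # apply exactly the reviewed plan
    )
    for cmd in steps:
        if subprocess.run(cmd, cwd=workdir).returncode != 0:
            return False
    return True

if __name__ == "__main__":
    ok = provision("infrastructure/")
    print("infrastructure converged" if ok else "provisioning step failed, pipeline stops here")
```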

So why can’t we take DevOps to its natural conclusion?

It is inertia. Corporate IT likes its ivory towers and legacy enclaves; they keep people gainfully employed. Senior managers have large teams, which makes them appear important and justifies the large remuneration packages they receive, and there is a fear that DevOps, with the automation and self-healing it can bring, will lead to a workforce reduction.

Yes, I have said it: some companies use DevOps as an enabler for workforce reduction. By automating the fixing of regular errors and introducing self-service of standardized infrastructure with automated delivery, there is a risk of reducing workforce numbers. But this is a recurring theme: the introduction of industrial looms meant the death of artisan weavers; the introduction of word processing, spreadsheets, calendaring, and desktop computing devices led to the elimination of the typing pool, bookkeepers, and managers’ personal assistants. DevOps, when considered in its entirety, is the IT industry’s industrialization epoch, and when coupled with its primary enabler, cloud computing, it sounds the death knell for a large swathe of current artisanal IT roles as they are automated. This scares people away from taking DevOps to the next stage.

Summary

DevOps as an ethos has the ability to become the ring to rule all project management, development cycles, and delivery processes in the post-industrial IT age. However, there are many blockers: mainly procedural, but also what could be deemed the Luddite reason.

People fear stepping out of their comfort zone and taking the next logical step in the DevOps process. Currently, true automation in the enterprise is limited to point solutions; some companies are more automated and are reaping the business advantage and flexibility it engenders, but many fear the side effects of automation on their “skilled” employees.

Managers are scared to embrace the automation that DevOps can bring, afraid to introduce IaC because they distrust code, preferring to follow a runbook that was out of date a week after the original service was implemented, because that is how it has always been done. Employees are afraid to learn new skills that could effectively wipe their role off the job list.

This is understandable, and each epoch, unfortunately, has its casualties, but we survive and evolve, and once the change has happened we realize that all that was removed was a set of boring work, and our role has been elevated to more interesting things.

The only way for anybody to ride this wave is to embrace it and learn the new skills necessary to survive in the new world of automated IT. DevOps as a paradigm needs to remember its core function: it is not a developer’s plaything but the method of industrializing the IT function, and only by fully embracing this concept and removing the “product” from DevOps will DevOps reach nirvana.
