Just about every new application being deployed today incorporates, to varying degrees, artificial intelligence (AI) models to help automate one or more processes. However, building those AI-infused applications remains largely inefficient because of the divide that exists between DevOps teams and the data scientists who construct AI models using machine learning operations (MLOps) practices.
At the core of the issue are the separate repositories the two teams employ. Most of the MLOps platforms that data science teams rely on today have their own repositories, known as feature stores, where data scientists store various artifacts. DevOps teams, meanwhile, store artifacts in Git repositories. Rather than having to acquire, maintain and secure two separate types of repositories, there is now a push underway to store all types of artifacts in the same Git repository.
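To make that idea concrete, the sketch below shows one way a training job might register a model artifact in a manifest that lives in the same Git repository as the application code. The file paths and field names are purely illustrative, and in practice the large binary itself is usually handled with Git LFS or a tool such as DVC, with Git tracking the pointer and metadata.

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def register_artifact(artifact_path: str, manifest_path: str = "models/manifest.json") -> dict:
    """Record a model artifact's checksum and size in a manifest that is
    committed to the same Git repository as the application code."""
    artifact = pathlib.Path(artifact_path)
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    entry = {
        "artifact": artifact.name,
        "sha256": digest,
        "size_bytes": artifact.stat().st_size,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    manifest_file = pathlib.Path(manifest_path)
    manifest_file.parent.mkdir(parents=True, exist_ok=True)
    manifest = json.loads(manifest_file.read_text()) if manifest_file.exists() else []
    manifest.append(entry)
    manifest_file.write_text(json.dumps(manifest, indent=2))
    return entry

# Example (hypothetical path): register a newly trained model, then commit the
# manifest alongside the application code with the usual git add / git commit.
# register_artifact("models/churn_model.pkl")
```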
That consolidation of repositories would, of course, be only the first step toward converging MLOps and DevOps workflows in a way that should make it easier to embed AI models within applications. The challenge today is that organizations that have embraced DevOps workflows deploy additional code as often as once or twice a day, while data science teams typically work at a much slower pace; it may take as long as six months to build, test and deploy an AI model. The complication is that once deployed, these AI models tend to drift as additional data is analyzed. That drift can result in algorithms making recommendations and decisions that become increasingly suboptimal over time. As a result, data science teams are now finding they need to update AI models using workflows that are not dissimilar to the continuous delivery approach DevOps teams already employ.
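As a rough illustration of what triggers those updates, the sketch below flags drift for a single numeric feature by comparing the distribution seen at training time with what is observed in production, using a two-sample Kolmogorov-Smirnov test. The data, names and threshold are illustrative, not a prescribed method.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(training_values: np.ndarray, live_values: np.ndarray,
                  p_threshold: float = 0.01) -> bool:
    """Flag drift for one numeric feature by comparing the training-time
    distribution with the distribution observed in production."""
    result = ks_2samp(training_values, live_values)
    return result.pvalue < p_threshold

# Example: synthetic data standing in for a training baseline and recent live traffic.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted mean simulates drift
if feature_drift(baseline, recent):
    print("Drift detected: schedule retraining through the delivery pipeline")
```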
DevOps proponents are naturally making a case for incorporating AI models within existing workflows rather than creating an entirely different set of processes just to update what is essentially another component of an application. Regardless of the type of artifact, the workflow processes used to update an application should be consistent to better maintain the integrity of the application environment.
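In practice that means the model clears the same kind of pass/fail gate in the pipeline as any other build artifact. The sketch below is a minimal, self-contained example with an illustrative accuracy threshold; a real pipeline would load the model artifact produced by the training job rather than train one in-line, but the idea of a non-zero exit code failing the CI job is the same.

```python
# A hypothetical CI quality gate: the model is promoted through the same
# pipeline as any other build artifact only if it clears a minimum accuracy bar.
import sys

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.90  # illustrative threshold agreed between data science and DevOps

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5_000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

print(f"holdout accuracy: {accuracy:.3f}")
if accuracy < MIN_ACCURACY:
    sys.exit("Model failed the quality gate; blocking promotion")  # non-zero exit fails the CI job
```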
Naturally, how AI models are updated is also going to be of keen interest to regulators. Extending existing DevOps workflows to AI models provides the added benefit of making it easier to document how and when they were last updated.
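One simple way to get that documentation for free is to have the deployment workflow write an audit entry every time a model ships. The sketch below assumes a hypothetical JSON log and illustrative field names, recording which model was updated, when, and from which Git commit.

```python
import json
import subprocess
from datetime import datetime, timezone

def record_model_update(model_name: str, model_version: str,
                        log_path: str = "model_update_log.json") -> dict:
    """Append an audit entry noting which model was updated, when, and from
    which commit, so the history is easy to surface for auditors."""
    commit = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True).stdout.strip()
    entry = {
        "model": model_name,
        "version": model_version,
        "git_commit": commit,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }
    try:
        with open(log_path) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(log_path, "w") as f:
        json.dump(log, f, indent=2)
    return entry

# Example (hypothetical names): called from the same workflow that ships any other artifact.
# record_model_update("churn_model", "1.4.0")
```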
Finally, it creates an opportunity to streamline a deployment process that data scientists would otherwise need to manage themselves, at the expense of time spent building new AI models or improving existing ones. Given the cost of hiring data scientists, it doesn’t make much economic sense for them to replicate a workflow process that already exists just to update what amounts to another type of software artifact.
At this juncture it’s not so much a question of whether DevOps and MLOps workflows will converge as when and to what degree. Savvy IT leaders should take care to make sure the current cultural divide between data science and DevOps teams doesn’t get any wider. The longer that divide persists, the greater the inertia that will need to be overcome to bridge it. The fact of the matter is that while it requires a significant amount of expertise to build an AI model, the process of deploying and updating one will soon become fairly rote. Truth be told, there are already a fair number of data science teams that would prefer it if someone else in the IT organization managed this process on their behalf.
Of course, there are plenty of MLOps platform providers with a vested interest in justifying investments in their platforms. But at a time when organizations are under more economic pressure than ever to reduce costs, there may be no better moment than the present to identify workflows that are becoming increasingly redundant with one another.
If you have questions related to this topic, feel free to book a meeting with one of our solutions experts or email sales@amazic.com.