Organizations of all sizes are now moving to either build or customize large language models (LLMs) at a pace that will soon drive a convergence of machine learning operations (MLOps) and DevOps workflows.
A survey of 650 IT decision-makers and DevOps professionals finds 85% of respondents reporting plans to increase investments in IT infrastructure modernization over the next one to three years to support AI workloads.
Conducted by the market research firm Vanson Bourne on behalf of Nutanix, the survey also finds every organization (100%) will need additional AI skills to support related initiatives over the next 12 months, with 90% noting that AI is a priority for their organization. Specific skills sought include generative AI prompt engineering (45%), data science and analytics (44%), environmental, social and governance (ESG) reporting (38%), DevOps (38%) and research and development (38%).
Primary areas of application investment include generative AI (47%), virtual assistants and customer support (46%), fraud detection and cybersecurity (40%), image recognition and computer vision (38%), speech and natural language understanding (38%), recommendation systems (32%) and large language models (31%).
A full 85% said they plan to purchase existing AI models or leverage existing open source AI models to build AI applications, compared to 10% indicating they plan to build their own models. Those AI models are also going to prove challenging to maintain: The survey finds AI models are, in many cases, expected to be updated on a monthly (20%) or quarterly (40%) basis.
Naturally, that pace will present some significant DevOps challenges. AI models are typically constructed using MLOps workflows. However, AI models are also software artifacts that need to be deployed like any other. One of the major technical and cultural challenges organizations will face in the months ahead will be melding MLOps and DevOps workflows.
Organizations will also need to learn how to deploy and manage some type of vector database to customize an existing LLM by presenting their own unstructured data in a format that an LLM can recognize. The LLM then uses that external data alongside the data it was originally trained on to generate better-informed responses and suggestions. Organizations can then go a step further by using a framework such as LangChain to build and deploy an AI application. Some organizations may even go so far as to build their own LLM to ensure the highest level of accuracy.
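The retrieval pattern described above, embedding an organization's unstructured data, storing the vectors and pulling back the most relevant chunks to augment a prompt, can be sketched in plain Python. This is a minimal illustration only: the `embed` function here is a toy hashed bag-of-words stand-in for a real embedding model, and the `VectorStore` class stands in for a production vector database, so all of the names and parameters are hypothetical.

```python
import math
from collections import Counter


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    # Normalize so a dot product behaves like cosine similarity.
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


class VectorStore:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Rank stored chunks by similarity to the query embedding.
        q = embed(query)
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [text for text, _ in scored[:k]]


def build_prompt(query: str, store: VectorStore) -> str:
    """Prepend retrieved context so the LLM can ground its answer."""
    context = "\n".join(store.search(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a real deployment, the embedding step would call a model API, the store would be a dedicated vector database, and an orchestration framework such as LangChain would wire the retrieval and prompt-assembly steps together.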
Organizations that have the data scientists, data engineers, application developers and cybersecurity experts required to build and deploy generative AI applications are still few and far between, but the opportunity to innovate is too big to ignore. A McKinsey Digital study finds generative AI and other technologies can potentially automate work activities that account for 60 to 70% of employees’ time today. The challenge is that while most employees already make use of general-purpose LLMs such as the ones used to enable ChatGPT, it’s going to be a little while before the average enterprise is able to fully harness the potential of LLMs.
In the meantime, however, IT leaders should be working to determine how to operationalize what will one day soon become hundreds of LLMs being used to automate any number of existing and as-yet-unimagined business processes. Managing and orchestrating those LLMs will undoubtedly be a major challenge. On the plus side, in an era where mastering AI will prove essential for any organization to thrive, it also means that IT, if it does not already, will soon represent the most strategic investment any organization is likely to make. As such, IT leaders who take the time to deeply understand how AI models operate will enjoy high levels of job security for many years to come.