
Generative AI Impact on Application Development Starts to Widen

As it becomes apparent that generative artificial intelligence (AI) is being applied broadly to accelerate the writing of code, the overall size of the codebase that DevOps teams are being asked to manage is about to increase exponentially.

In fact, a recent developer survey conducted by Wakefield Research on behalf of GitHub found that 92% of respondents were already employing AI to help them write code.

The issue is that the quality of that code is likely to vary widely. Tools such as GitHub Copilot make use of general-purpose large language models (LLMs) developed by OpenAI to create code samples that developers can cut and paste into an application. Those LLMs, however, were trained on samples of code pulled from across the web. Not all of the samples used to train an AI model are of the same quality and, in some instances, the code generated is likely to contain common vulnerabilities that can be easily exploited.
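To make that risk concrete, below is a minimal, hypothetical sketch of the kind of database lookup an AI assistant might suggest. The first version builds its query through string interpolation and is open to SQL injection; the second uses a parameterized query. The table, columns and sample data are illustrative and not drawn from any particular tool's output.

```python
import sqlite3

# Hypothetical example of AI-suggested code: the query is built with string
# formatting, so attacker-controlled input becomes part of the SQL statement.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Safer equivalent: a parameterized query that the database driver escapes.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users (username, email) VALUES ('alice', 'alice@example.com')")

    # An attacker-controlled value that the unsafe version happily interpolates.
    malicious = "' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row in the table
    print(find_user_safe(conn, malicious))    # returns nothing
```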

Of course, not every developer has the same level of skill, so in some cases the code surfaced by a generative AI platform might still be better than what a human would create. In other cases, however, that code is not nearly as good as what most skilled developers would write on their own.

Naturally, not every task requires the same level of code quality. A script used to provision infrastructure isn’t likely to create the same level of technical debt as business logic that inefficiently consumes IT infrastructure, so IT teams will need to carefully assess when and where to rely on generative AI.

Longer term, however, general-purpose LLMs will soon give way to LLMs optimized for specific functions, including writing code. VMware, for example, is using code written by its own software engineers to train LLMs to create code that can be employed across a range of workflows. That approach reduces the chances that an AI model will provide a misguided response, otherwise known as a hallucination, deduced from erroneous data.

Most of these domain-specific LLMs will manifest themselves in DevOps platforms. GitLab, for example, is working with Google to leverage AI models within its namesake continuous integration/continuous delivery (CI/CD) platform. There may still be a need for an organization to construct an LLM to drive an application, but most of the LLMs used to manage an IT environment will be provided via a platform. In fact, LLMs embedded within those platforms will drive the democratization of DevOps, in the sense that the level of expertise required to take advantage of DevOps best practices will soon be substantially lower. Instead of having to write and test a script, for example, an IT administrator will simply request one via a natural language interface.
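As a rough illustration of that natural language workflow, the sketch below asks a general-purpose LLM to draft a log-archiving script from a plain-English request. It assumes the OpenAI Python client and an API key in the environment; the model name and prompt are purely illustrative, and a DevOps platform such as GitLab would surface this capability through its own integrated assistant rather than a raw API call.

```python
# Minimal sketch of the "request a script via natural language" idea.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; real DevOps platforms expose this capability through
# their own assistants rather than a direct API call like this one.
from openai import OpenAI

client = OpenAI()

def request_script(task_description: str) -> str:
    """Ask a general-purpose LLM to draft a script for a routine admin task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "You generate small, well-commented shell scripts "
                           "for routine IT administration tasks.",
            },
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = request_script(
        "Write a bash script that archives files in /var/log older than 30 days."
    )
    # The generated draft still needs human review and testing before use.
    print(draft)
```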

That’s crucial because the only way DevOps teams are going to be able to cope with code bases that, thanks to AI, are increasing exponentially in size and scope is to rely on AI to help manage them. The issue in the short term is that code is being generated using AI faster than generative AI capabilities are being infused into DevOps platforms. Hopefully, that gap will narrow in the months ahead as more DevOps workflows are automated using generative AI alongside other types of AI models.

In the meantime, DevOps teams, while hoping for the best, might want to plan for the worst. In the short term, more code of varying quality than ever will be written. How much of that code makes it into production environments remains to be seen, but the overall quality of the code reaching those environments should steadily improve over the long term.
