The next era of infusing artificial intelligence (AI) into the application testing process has arrived in the form of large language models (LLMs) that make it possible for developers and testing teams alike to create and launch tests using natural language.
The previous era of AI enabled developers and testing teams to apply machine learning algorithms to, for example, use computer vision to identify user interface issues.
Now providers of application testing platforms such as Tabnine are making use of LLMs to add generative AI capabilities that promise to make it simpler for even the average developer to create and launch a test.
Similarly, SapientAI recently emerged from stealth to automate the writing of test code using a combination of generative AI, machine learning algorithms and data science.
Elsewhere, Sofy has launched SofySense, a mobile app testing solution that melds multiple types of AI with no-code automation to improve quality assurance (QA).
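To make the idea concrete, here is a minimal sketch of what generating a test from a natural-language requirement could look like. It is an illustration only, not any of these vendors' actual interfaces; it assumes the OpenAI Python SDK, and the model name, prompt and requirement text are placeholders.

```python
# A minimal sketch of natural-language test generation, NOT any vendor's actual API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name and requirement text are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# The plain-English description of the behavior a developer wants verified.
requirement = (
    "The apply_discount function should take 10% off orders over $100 "
    "and must never return a negative total."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write concise pytest unit tests."},
        {"role": "user", "content": f"Write pytest tests for this requirement: {requirement}"},
    ],
)

# The generated tests are a starting point; a human still reviews them
# before they are added to the suite and run in the pipeline.
print(response.choices[0].message.content)
```

The point is simply that the request is expressed in plain English and the tooling handles translating it into executable test code.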
Ultimately, that capability should improve the quality of the applications being deployed simply because the time required to create a test will have been sharply reduced. Today it’s all too common for tests not to be conducted as thoroughly as they should be simply because a developer lacked the expertise to create one or, with a delivery deadline looming, simply ran out of time. Too often developers tell themselves they will address any issues that arise in the next upgrade cycle but, as every IT leader knows, other issues requiring their attention always emerge. Before too long the number of issues that need to be addressed in applications running in a production environment, also known as technical debt, begins to pile up.
It’s not clear to what extent application testing, along with other elements of the DevOps workflow, will become automated. It’s conceivable the entire software engineering process is about to become automated, but most likely there will always be a need for a human to be in the loop. Either way, the total cost of developing software should drop to almost nothing as the amount of manual effort required today continues to rapidly decline.
Of course, that also means the pace at which applications are built and deployed is about to be greatly accelerated, so many organizations will also need to embrace continuous delivery (CD) to keep pace with the rate at which software will soon be developed.
Inevitably, there will be as much irrational exuberance as there is fear and loathing of AI, but DevOps teams that have always been committed to ruthless automation will naturally be at the forefront. As a result, as DevOps teams embrace AI there will be changes to roles and responsibilities that will need to be sorted out. The important thing to remember is there will soon come a day when most DevOps professionals are not going to want to work for organizations that don’t make the most advanced tools available to them. Just because someone might be concerned their job is going to be automated out of existence, it does not automatically follow they will want to keep riding a horse to work after the software development equivalent of the automobile has been invented.
Ultimately, the DevOps goal, as always, is to eliminate as many of the bottlenecks that slow down application development and deployment as possible while at the same time improving the quality of the application experience. Naturally, one of the first places AI can be used to achieve that goal is application testing. After all, the individuals who truly relish creating tests for applications are few and far between, simply because there are so many more intellectually rewarding things that still require a human to complete.