Discovering vulnerabilities in software running in production environments is about to become trivial as artificial intelligence (AI) platforms such as ChatGPT evolve, for bad actors and cybersecurity teams alike. DevSecOps teams will soon find the pace of the race to remediate vulnerabilities increasing far beyond their wildest dreams and, of course, their worst fears.
GPT-3, created by OpenAI and introduced in May 2020, is likely to be only one of many such platforms. OpenAI publishes guidelines for using the various incarnations of GPT that are intended to prevent them from being used for nefarious purposes. However, it’s also possible to show GPT a “prompt” that it will then use to generate a raft of similar content. If the prompt is, for example, the first instance of a malicious phishing campaign, the platform will start producing endless variations on it. The volume of phishing content that could be created this way would overwhelm most efforts to detect it.
It turns out that GPT can also identify types of code and their associated vulnerabilities, much like a traditional scanning tool. An AI platform, however, could theoretically monitor code repositories and scan updates and commits for vulnerabilities in real time as they are made. Like most tools, that capability can be used for both good and ill. DevSecOps teams should be able to use GPT platforms to discover these issues before code is deployed in a production environment. However, it’s just as probable that bad actors will use the same platforms to the same end.
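The real-time commit scanning described above can be sketched as a small script that inspects only the lines a commit adds. This is a minimal illustration, not an actual AI scanner: toy regex checks stand in for the model call (a real implementation would send the diff to a model API for analysis instead of matching patterns), and all function and pattern names here are hypothetical.

```python
import re
import subprocess

# Toy patterns standing in for findings an AI-backed scanner might surface.
# A model would reason about context; these regexes are purely illustrative.
SUSPECT_PATTERNS = {
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"]"),
    "shell injection risk": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
}

def scan_diff(diff_text: str) -> list[tuple[int, str]]:
    """Scan the added lines of a unified diff; return (line_no, finding) pairs."""
    findings = []
    for line_no, line in enumerate(diff_text.splitlines(), start=1):
        # Only newly added code lines; skip diff headers like "+++ b/app.py".
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for label, pattern in SUSPECT_PATTERNS.items():
            if pattern.search(line):
                findings.append((line_no, label))
    return findings

def scan_latest_commit() -> list[tuple[int, str]]:
    """Fetch the most recent commit's diff and scan it (assumes a git repo)."""
    diff = subprocess.run(
        ["git", "show", "--unified=0", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return scan_diff(diff)

# Example against an in-memory diff fragment:
sample = '+api_key = "abc123"'
print(scan_diff(sample))  # → [(1, 'hardcoded secret')]
```

Wired into a git post-commit hook or a CI job, `scan_latest_commit` would run on every push, which is the cadence the paragraph above envisions for both defenders and attackers.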
Where that becomes especially problematic is when all the known vulnerabilities in software that has already been deployed are considered. Today, finding those vulnerabilities with traditional scanning tools requires significant effort. In the future, that scan might take only a few seconds. The trouble is that patching hundreds of applications that might have thousands of vulnerabilities is a DevSecOps challenge most organizations are not yet prepared to meet.
There’s little doubt cyber attackers are going to weaponize AI one way or another. The issue is to what degree organizations are prepared for it. Most organizations are already struggling to secure their software supply chains. Most developers have little to no cybersecurity expertise, so mistakes are regularly made, and there are not nearly enough cybersecurity professionals to plug the gap. Organizations are trying to rely on developers and IT operations teams to fill that void, but retraining takes time. A race is on to achieve that goal before AI platforms overwhelm IT organizations with attacks that will be much more difficult to detect and stop before damage is inflicted. Even after members of the IT team are retrained, it’s still not going to be possible for them to thwart every attack.
A new era of cybersecurity is clearly dawning. Organizations that have been methodically adopting DevSecOps best practices are running out of time. There is now a clear and present danger on the horizon that has the potential to disrupt cybersecurity at a scale never before imaginable. It’s too early to say what impact AI platforms will have on cybersecurity, but the next generation of these platforms is already on the way. It is expected that, in addition to text, these platforms will have multimodal capabilities spanning video and audio. How organizations might secure their IT environments once creating fake content that appears authentic becomes trivial is an open question.
In the meantime, IT leaders should make sure business leaders fully appreciate that AI platforms will be used not just for the betterment of society, but also to advance malicious agendas that, sooner than most realize, will become even more insidious than they already are.