The Consumer Technology Association recently released data showing that 44% of AI applications used in 2018 were for cyber security purposes. It’s a growing field and one which the information security sector should take seriously.

A fuzzy approach

Artificial Intelligence Fuzzing (AIF) could become a major threat to cyber security as AI techniques continue to develop. Fuzzing is the practice of feeding unexpected or malformed inputs to a system or application, specifically to see how it handles them. Historically, the technique has been used to find bugs and check how robust a piece of hardware or software is. But with the addition of AI, fuzzing can be turned into a highly effective tool for identifying and exploiting zero-day flaws.
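To make the idea concrete, here is a minimal, hypothetical sketch of a "dumb" mutation fuzzer in Python: it takes a known-good input, randomly corrupts it, and records any input that makes the stand-in parser fail in an unexpected way. The parse_record function and the seed value are invented for illustration; real fuzzers target actual file formats, protocols or APIs, and AI-driven variants choose their mutations far more intelligently than the random corruption shown here.

```python
import random

def parse_record(data: bytes) -> dict:
    """Toy parser standing in for the software under test (hypothetical)."""
    text = data.decode("utf-8")       # may raise UnicodeDecodeError
    key, value = text.split("=", 1)   # may raise ValueError
    return {key: int(value)}          # may raise ValueError

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert or delete bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        choice = random.random()
        pos = random.randrange(len(data) + 1)
        if choice < 0.4 and data:
            data[pos % len(data)] ^= 1 << random.randrange(8)   # flip a bit
        elif choice < 0.8:
            data.insert(pos, random.randrange(256))             # insert a byte
        elif data:
            del data[pos % len(data)]                            # delete a byte
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000):
    """Feed mutated inputs to the parser and record unexpected failures."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except (UnicodeDecodeError, ValueError):
            pass                      # expected, cleanly handled rejections
        except Exception as exc:      # anything else is a bug worth reporting
            crashes.append((candidate, exc))
    return crashes

if __name__ == "__main__":
    print(f"{len(fuzz(b'timeout=30'))} unexpected failures found")
```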

AIF malware can test a vast number of inputs in a short space of time, essentially sounding out the system for weaknesses. The malware could carry multiple payloads and activate whichever is most effective against the specific vulnerabilities it finds in that system. Alternatively, the information could be fed back, or sold on, to cyber criminals to help develop new breeds of malware.

What does this mean?

Zero-day vulnerabilities are a commodity in their own right and have long been traded between security researchers and cyber criminals. To reduce the potential for misuse, many manufacturers run their own programmes to encourage researchers to find vulnerabilities first and report them directly, minimising the chance of that information falling into the wrong hands. Google has even released ClusterFuzz, its open-source, automated fuzzing tool, to support bug hunters, after it found some 16,000 bugs in Chrome.
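For a sense of what this kind of coverage-guided fuzzing looks like in practice, the sketch below uses Atheris, Google's open-source fuzzing engine for Python, which builds on the same libFuzzer machinery that ClusterFuzz orchestrates at scale. The JSON target is purely illustrative; bug hunters write similar harnesses around whatever code they actually want to test.

```python
# Requires Atheris (pip install atheris); the JSON target is purely illustrative.
import sys
import atheris

with atheris.instrument_imports():   # collect coverage feedback from imported code
    import json

def TestOneInput(data: bytes):
    """Fuzz target: the engine repeatedly calls this with generated inputs."""
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(len(data))
    try:
        json.loads(text)
    except json.JSONDecodeError:
        pass   # rejecting malformed JSON is expected; any other crash is a finding

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```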

Zero-day attacks can be highly damaging, and they can take time (and significant resources) to resolve. But AIF could increase the rate at which zero-day exploits appear, shifting defenders' focus away from addressing known threats towards firefighting a barrage of new attacks.

Escaping the fuzz

Anti-fuzzing techniques can be applied defensively: first detecting whether a program or system is being fuzzed, then taking preventative measures to reduce the information an attacker can glean from it. A defensive approach may include providing misinformation or atypical responses to attackers' inputs, or simply shutting the system down. If AIF becomes commonplace in the future, there may be a boom in commercially available anti-fuzzing tools (ideally AI-driven in turn), and anti-fuzzing will become a standard part of the cyber security toolkit.
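As a rough illustration of what such a tool might do under the hood, the hypothetical sketch below tracks how often each client sends malformed requests; once a source crosses a threshold that looks more like automated fuzzing than ordinary mistakes, the service can throttle, deceive or disconnect it. The FuzzGuard name, the thresholds and the response choices are all invented for illustration, not taken from any real product.

```python
import time
from collections import defaultdict

class FuzzGuard:
    """Hypothetical anti-fuzzing monitor: flags clients whose traffic looks like fuzzing."""

    def __init__(self, window_seconds: float = 60.0,
                 min_requests: int = 50, malformed_ratio: float = 0.8):
        self.window = window_seconds
        self.min_requests = min_requests
        self.malformed_ratio = malformed_ratio
        self.events = defaultdict(list)   # client_id -> [(timestamp, was_malformed)]

    def record(self, client_id: str, was_malformed: bool) -> str:
        """Record one request and return the recommended response for this client."""
        now = time.monotonic()
        history = self.events[client_id]
        history.append((now, was_malformed))
        # Keep only events inside the sliding window.
        self.events[client_id] = history = [
            (t, m) for (t, m) in history if now - t <= self.window
        ]
        total = len(history)
        malformed = sum(1 for _, m in history if m)
        if total >= self.min_requests and malformed / total >= self.malformed_ratio:
            return "deceive"      # e.g. return fake errors or shut the session down
        if was_malformed:
            return "throttle"     # slow the response to blunt high-speed fuzzing
        return "allow"

# Example: a burst of malformed requests from one source trips the guard.
guard = FuzzGuard()
for _ in range(60):
    action = guard.record("198.51.100.7", was_malformed=True)
print(action)   # -> "deceive" once the volume and ratio thresholds are exceeded
```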