The use of artificial intelligence (AI) and machine learning is becoming increasingly prevalent in the security industry. Though this technology is growing quickly, it’s far from perfect. Security professionals who want to benefit from these technologies need to stay one step ahead of hackers, which means identifying blind spots in their models. To help with that, a company called Endgame announced at Black Hat that it’s releasing code for a generic machine learning model designed to probe machine learning malware detection models. Has WOPR (War Operation Plan Response) finally met its match?
According to a Dark Reading article:
“…the agent ‘literally plays a game against our model and tries to beat it,’ essentially automating the auditing of the mathematical underpinning of detection mechanisms. The agent essentially inspects an executable file and uses a sequence of file mutations to test the detection model. This agent uses its own brand of machine learning to figure out which sequences of mutations are most likely to create a variant that evades the model. Using the information it gains from this automated test, the agent can create a policy for developing malware variants that have a high likelihood of breaking the opposing machine learning model of the detection engine.”
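The mutate-test-learn loop described above can be sketched in miniature. The following is an illustrative assumption, not Endgame's actual code or model: the "detector" is a toy scoring function, the mutations are simple byte-level placeholders (a real agent would apply format-aware edits to a PE file), and the agent is a simple greedy search that keeps any mutation lowering the detection score.

```python
import random

# Toy stand-in for a malware detection model: flags a sample when the
# fraction of 0x90 (NOP-like) bytes exceeds a threshold. A real detector
# would be a trained classifier over rich file features.
def detector_score(sample: bytes) -> float:
    if not sample:
        return 0.0
    return sample.count(0x90) / len(sample)

def is_detected(sample: bytes, threshold: float = 0.3) -> bool:
    return detector_score(sample) >= threshold

# Placeholder mutations. In a real system these would be
# functionality-preserving edits (adding sections, padding, packing).
MUTATIONS = [
    lambda s: s + bytes([random.randrange(1, 256)] * 16),  # append random padding
    lambda s: s + b"\x00" * 32,                            # append zero bytes
]

def evade(sample: bytes, max_steps: int = 200, seed: int = 0) -> bytes:
    """Greedy agent: try random mutations, keep any that lower the score."""
    random.seed(seed)
    current = sample
    for _ in range(max_steps):
        if not is_detected(current):
            break  # variant now evades the toy detector
        candidate = random.choice(MUTATIONS)(current)
        if detector_score(candidate) < detector_score(current):
            current = candidate  # mutation helped; keep it and continue
    return current

# Demo: a sample the toy detector initially flags (50% NOP-like bytes).
flagged = bytes([0x90] * 50 + [0x41] * 50)
variant = evade(flagged)
print(is_detected(flagged), is_detected(variant))  # True False
```

The real agent replaces this greedy search with reinforcement learning, so the sequences of accepted mutations become a reusable policy rather than a one-off result for a single file.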
According to the researchers, all machine learning models have blind spots. If the industry fails to develop tools that can identify these weaknesses, it’s only a matter of time before hackers exploit them. In late 2016, WatchGuard’s CTO, Corey Nachreiner, predicted that in 2017 hackers would start leveraging machine learning and AI to improve malware and attacks. Read his complete predictions here.
To read the entire Dark Reading article, click here.