In our last post, we discussed the origins of artificial intelligence and how AI is becoming an increasingly intrinsic part of our lives. While the promise of AI is exciting, its value has not been lost on the criminal elements of our society.
One of the biggest benefits of artificial intelligence is its ability to act as an amplifier, helping people work through large amounts of complex data and perform highly repetitive tasks that would normally require a human. Automating what would normally be a manual process allows criminals, especially cyber criminals, to improve targeting, expand the scale of attacks, and turbo-charge the speed at which they can create new malware. While few attacks using AI have been seen in the wild thus far, security researchers have been hard at work exploring what is possible.
Here are a few examples of research into how attackers could use AI:
- Bypassing CAPTCHA systems. CAPTCHA has become an essential tool on the Internet for determining whether a visitor to a site is human or a bot. Visitors are presented with an image, a checkbox, or a string of distorted text and asked to take an action that would normally require a human, such as identifying images that are similar to each other. Using AI techniques, researchers at Columbia University were able to bypass Google's reCAPTCHA 98% of the time; the first sketch after this list illustrates the image-classification idea behind such solvers.
- Improving the accuracy and scope of phishing. A reported 76% of organizations fell victim to phishing attacks in 2017, and in response many organizations have implemented rigorous programs to train their employees to identify phishing attempts. With AI, cyber criminals have a tool that can parse huge volumes of data about their targets and craft messages with a far higher success rate. Security researchers at ZeroFox demonstrated such an approach for targeting Twitter users with SNAP_R (Social Network Automated Phishing with Reconnaissance). SNAP_R uses AI to identify valuable targets and quickly build a profile of each target based on what they have tweeted in the past. Using this approach, the researchers got targets to click on malicious links 30% of the time, compared to the 5-15% success rate of other automated approaches; the second sketch below illustrates the profiling-and-generation step.
- Developing highly evasive malware. Hackers have long relied on scripts and toolkits to develop and distribute malware, but as cyber defense has become more intelligent and sophisticated, our adversaries have turned to low-level artificial intelligence techniques to boost the evasiveness of their malware. Malware authors have started to use AI to identify the hardware configuration and environment they are running in (e.g., a sandbox vs. a physical machine), and to determine whether a human is operating the machine at the time. DeepLocker, developed by researchers at IBM Research, demonstrates the dangers of weaponized artificial intelligence in malware: its AI is trained to ensure that the payload executes only when it reaches a specific target, relying on three layers of concealment to prevent security tools from identifying the threat. The third sketch below illustrates this target-keyed payload concept.
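To make the CAPTCHA example concrete, here is a minimal sketch of the core idea behind image-CAPTCHA solvers: run each candidate tile through an off-the-shelf image classifier and keep the tiles whose predicted label matches the challenge keyword. This is only an illustration under assumed inputs; the tile file names and the "bus" keyword are hypothetical, and the Columbia researchers combined multiple classifiers and annotation services rather than the single pretrained model used here.

```python
# Sketch: pick out the image-grid tiles that match a challenge keyword
# by classifying each tile with a pretrained ImageNet model.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()          # resizing + normalization
labels = weights.meta["categories"]        # ImageNet class names

def tile_matches(tile_path: str, challenge_keyword: str) -> bool:
    """Return True if the classifier's top label contains the keyword."""
    img = Image.open(tile_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, H, W)
    with torch.no_grad():
        top_class = model(batch).argmax(dim=1).item()
    return challenge_keyword in labels[top_class].lower()

# Hypothetical 3x3 challenge grid: select every tile that looks like a bus.
selected = [p for p in (f"tile_{i}.png" for i in range(9))
            if tile_matches(p, "bus")]
print(selected)
```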
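The second sketch shows the phishing profiling-and-generation step in toy form: build a small Markov chain from a target's recent posts and generate a lure in their own vocabulary, with a placeholder link appended. SNAP_R paired this kind of Markov model with a neural network and drew on real timeline data; the tweets and the link below are invented.

```python
# Sketch: generate a lure tweet in the target's own vocabulary using a
# word-level Markov chain built from their (invented) recent posts.
import random
from collections import defaultdict

tweets = [
    "loving the new coffee spot downtown",
    "anyone else watching the game tonight",
    "new blog post on cloud security is up",
]

# Map each word to the words observed to follow it.
chain = defaultdict(list)
for tweet in tweets:
    words = tweet.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)

def generate_lure(seed: str, max_words: int = 8) -> str:
    """Walk the chain from `seed`, then append a placeholder link."""
    words = [seed]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words) + " http://example.com/payload"

print(generate_lure("new"))
```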
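Finally, a stripped-down sketch of the target-keyed payload concept DeepLocker demonstrates: the payload ships encrypted, and the decryption key is derived from attributes of the victim's environment, so analysis on any other machine yields only ciphertext. DeepLocker derives its key from a deep neural network's output (for example, face recognition of the intended victim); the plain hash of hypothetical host attributes below stands in for that model.

```python
# Sketch: a payload that only decrypts when host attributes match the
# fingerprint it was keyed to (a hash stands in for DeepLocker's neural net).
import base64
import getpass
import hashlib
import socket
from cryptography.fernet import Fernet, InvalidToken

def environment_key() -> bytes:
    """Derive a key from host attributes; any mismatch changes every bit."""
    fingerprint = f"{socket.gethostname()}:{getpass.getuser()}".encode()
    return base64.urlsafe_b64encode(hashlib.sha256(fingerprint).digest())

# Attacker side: encrypt a (benign, demo) payload against the intended
# target's fingerprint before distribution.
payload = Fernet(environment_key()).encrypt(b"benign demo payload")

# Victim side: on the intended target the fingerprint matches and decryption
# succeeds; anywhere else Fernet raises InvalidToken and the payload stays opaque.
try:
    print(Fernet(environment_key()).decrypt(payload))
except InvalidToken:
    pass  # wrong machine: the payload is never revealed
```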
As the cyber security arms race heats up, it’s fair to say we are nearing a new phase, one where AI and machine learning will play an increasingly important role in both attack and defense.
Stay tuned next week, when we will discuss why artificial intelligence is an essential layer in your defense-in-depth strategy.
References
- BlackHat – I’m not a human: Breaking the Google reCAPTCHA
- The Atlantic – The Twitter Bot That Sounds Just Like Me
- Information Age – How does advanced malware act like AI?
- Security Intelligence – DeepLocker: How AI Can Power a Stealthy New Breed of Malware