Today, Marc saw a presentation from a FireEye researcher about tracking a North Korean threat actor that deals primarily in cryptocurrency and cryptocurrency-based attacks. The group targets North Korean rivals and countries imposing sanctions on North Korea. It seems chiefly concerned with making and laundering money and may have been involved in WannaCry!
Corey was struck by a presentation about the security and privacy of machine learning. It explained several techniques for poisoning or interfering with machine learning models so that they misclassify data. For example, Google researchers developed a sticker that caused an image recognition model to classify a banana as a toaster. That example sounds funny, but the technique has serious ramifications for areas like autonomous cars and (more importantly for us) malware classification. If an attacker could manipulate a machine learning model into misidentifying malware, they could bypass some of the more advanced malware detection solutions available today.
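To make the idea concrete, here is a minimal sketch of that kind of evasion attack against a toy linear "malware scorer." Everything here is illustrative (the weights, features, and threshold are made up, not from any real detector): because the model is linear, the gradient of its score with respect to the input is just the weight vector, so nudging each feature a small step against that gradient flips the verdict from "malicious" to "benign."

```python
# Illustrative evasion attack on a toy linear classifier (not a real detector).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)          # weights of a hypothetical linear "malware" scorer
b = 0.0

def score(x):
    """Score > 0 means the model labels x malicious; <= 0 means benign."""
    return x @ w + b

# A sample the model confidently flags as malicious (constructed to align with w).
x = np.sign(w) * 1.5
print(score(x))                  # positive: flagged as malware

# Fast-gradient-style evasion: step each feature opposite the sign of the
# score's gradient. For a linear model, d(score)/dx is simply w.
eps = 2.0
x_adv = x - eps * np.sign(w)
print(score(x_adv))              # negative: the same "sample" now looks benign
```

Real detectors are nonlinear and operate on constrained inputs (a perturbed binary still has to execute), so practical attacks are harder than this sketch, but the underlying gradient-following idea is the same one behind the banana/toaster sticker.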