Tuesday, December 18, 2018

2019 Predictions: A.I.-Powered Malware

By Nir Gaist, Founder & CTO, Nyotron

You will likely read a number of 2019 predictions that promise artificial intelligence (AI) and machine learning (ML) technologies will transform virtually every industry, including helping organizations harden their cybersecurity postures.

This is not another one of those predictions. To the contrary, I predict a significant attack or strain of malware will leverage AI in 2019.

AI and ML have been buzzwords in our industry for a few years now as more and more vendors incorporate them into their solutions. The trouble is, the bad guys are doing the same thing.

Just as security vendors can train their ML models on malware samples to detect them, malware writers can tune their malware to avoid detection using the exact same algorithms. And because these algorithms need massive amounts of data to work well, it can be difficult to weed out attempts to poison your training set with false information.
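To make the evasion side of that concrete, here is a deliberately simplified, hypothetical sketch of the feedback loop. The data, feature names and greedy strategy are all illustrative, and scikit-learn’s LogisticRegression merely stands in for whatever model a defender actually ships: the attacker trains a surrogate of the detector, then nudges the features under their control until the malicious score drops below the detection threshold.

```python
# Hypothetical sketch of the evasion feedback loop. The data, feature names and
# greedy strategy are illustrative only; a real attack against a production
# classifier is far more sophisticated, but the loop is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training set: each column is a feature a detector might use, e.g.
# "suspicious API call ratio", "section entropy", "packer heuristic", etc.
X = np.vstack([
    rng.normal(0.2, 0.1, size=(500, 4)),   # benign samples
    rng.normal(0.8, 0.1, size=(500, 4)),   # malicious samples
])
y = np.array([0] * 500 + [1] * 500)

# The attacker trains a surrogate on the same kind of data and algorithm the
# defender is assumed to use.
surrogate = LogisticRegression().fit(X, y)

sample = np.array([0.9, 0.85, 0.8, 0.9])   # starts out clearly malicious
mutable = [0, 2]                           # features the attacker can change
                                           # without breaking functionality

# Greedily nudge the controllable features toward benign-looking values until
# the surrogate's malicious score falls below the detection threshold.
for _ in range(100):
    if surrogate.predict_proba(sample.reshape(1, -1))[0, 1] < 0.5:
        break
    for i in mutable:
        sample[i] = max(0.0, sample[i] - 0.05)

print("evasive sample:", sample)
print("malicious score:", surrogate.predict_proba(sample.reshape(1, -1))[0, 1])
```

In practice attackers iterate against real products or public models rather than a toy surrogate, but the pattern is the same: query, adjust, repeat until the detector stays quiet.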

This is not a completely hypothetical scenario. At Black Hat earlier this year, IBM researchers demonstrated DeepLocker, a proof-of-concept for a highly targeted and evasive attack tool powered by AI. The malware conceals its intent until it reaches a specific victim, then unleashes its malicious payload as soon as the AI model identifies the target through indicators like facial recognition, geolocation or voice recognition.
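To illustrate the concealment trick, here is a minimal, hypothetical sketch. This is not IBM’s code; a hash-derived XOR key stands in for the neural-network key derivation and real encryption used in the proof of concept. The idea is that the payload is encrypted under a key derived from attributes of the intended target, so neither the trigger condition nor the payload can be recovered from the sample until those attributes are actually observed.

```python
# Minimal, hypothetical sketch of target-keyed concealment (not IBM's actual code).
import hashlib

def derive_key(target_attributes: str) -> bytes:
    # In the proof of concept this role was played by a neural network's output
    # (e.g., a face-recognition embedding); a simple hash stands in here.
    return hashlib.sha256(target_attributes.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Build time": the author encrypts the payload against the victim's attributes
# and keeps only a hash of the plaintext for verification.
secret_payload = b"pretend this is the malicious payload"
payload_hash = hashlib.sha256(secret_payload).digest()
encrypted = xor_bytes(secret_payload, derive_key("victim-host|victim-geo|victim-face"))

# "Run time": on every host the malware observes local attributes and attempts
# decryption. Only on the intended target does the derived key produce a
# plaintext whose hash matches.
for observed in ("random-corporate-laptop", "victim-host|victim-geo|victim-face"):
    candidate = xor_bytes(encrypted, derive_key(observed))
    if hashlib.sha256(candidate).digest() == payload_hash:
        print(observed, "-> target identified, payload unlocked")
    else:
        print(observed, "-> no match, payload stays opaque")
```

Because the key only exists when the target is present, static analysis of the sample reveals neither what it is looking for nor what it will do when it finds it.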

Mathematical models are by their nature based on the past and on the assumption that past patterns will repeat themselves. As Spectre and Meltdown taught us, human ingenuity is limitless, and the bad guys are capable of exploiting new attack vectors that do not follow established patterns.

Algorithms also operate under incorrect assumptions about the data itself, such as that it’s always clean or that all input is normalized the same way. This means they won’t “realize” when data is missing, altered or incorrectly labeled. Poor feature engineering can likewise lead a model to misjudge what is meaningful. Domain knowledge is critical; without it, an algorithm may treat, for example, IP addresses or ports as nothing more than integers.
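A small, hypothetical example of that last point (the feature names and thresholds are illustrative only): the same connection looks completely different to a model depending on whether its features encode any networking knowledge.

```python
# Hypothetical illustration of the "just integers" pitfall. Without domain
# knowledge, a model sees port 22 and port 25 as numerically "close" and an
# IP address as one huge integer, even though neither ordering means anything.
import ipaddress

def naive_features(src_ip: str, dst_port: int) -> list[float]:
    # Raw numeric encoding: implies 10.0.0.5 < 192.168.1.7 and 80 is "near" 81.
    return [float(int(ipaddress.ip_address(src_ip))), float(dst_port)]

def informed_features(src_ip: str, dst_port: int) -> list[float]:
    ip = ipaddress.ip_address(src_ip)
    well_known = {22, 25, 80, 443, 3389}        # ssh, smtp, http, https, rdp
    return [
        1.0 if ip.is_private else 0.0,          # internal vs. external source
        1.0 if dst_port in well_known else 0.0, # known service vs. anything else
        1.0 if dst_port >= 49152 else 0.0,      # ephemeral port range
    ]

print(naive_features("10.0.0.5", 3389))     # [167772165.0, 3389.0]
print(informed_features("10.0.0.5", 3389))  # [1.0, 1.0, 0.0]
```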

It seems like every security company claims to have incorporated AI, ML or deep learning into its products (well, aside from Nyotron). To be clear, I am not arguing against the use of ML/AI technology in security for certain applications, such as sifting through large amounts of data, taking tedious, repetitive work off the security analyst team’s shoulders and reducing alert fatigue. That is an important part of building an effective multi-layered defense. Implementing a Positive Security model with a solution like Nyotron’s PARANOID is another.

PARANOID does not rely on AI, ML or deep learning. It is the industry’s only OS-Centric Positive Security solution that protects endpoints regardless of the attack vector, type of attack or how, where, or when the attack penetrated an organization.

The appearance of AI-powered malware is just one issue information security professionals must prepare for in the coming year. What are the others? My Nyotron colleagues and I will have the answers during our live webinar on Wednesday, December 19th at 4:30 p.m. ET.

We will review a few of the most significant vulnerabilities and data breaches that made national headlines, discuss the issues and trends that will dominate 2019, including adversarial AI and destructive attacks on industrial control systems (ICS), and provide guidance on how you can make an effective case for additional security budget.

Register to attend here: https://bit.ly/2SPqcNP.


