Tuesday, July 19, 2022

Poisoned AI: Should Security Teams Start Worrying that the Tools They Rely on Can Be Used Against Them?

By Tyler Farrar, CISO, Exabeam

In the movies, hackers are often portrayed as cyber nerds working in high-stress environments – sometimes literally under the gun – as the clock ticks down and they tap at their keyboards as fast as they can. After a few tense moments, they shout, “I’m in!”

But in the real world, cybercriminals take their time breaking into networks. Once inside, they take even more time in the compromised system, slowly and methodically gathering and exfiltrating data. This can take weeks or even months, as criminals count on overburdened security teams missing their presence in a sea of low-risk and false-positive alerts. The longer adversaries stay on the network, the more sensitive data they can collect that might impact the business, its partners, and its customers.

Even more frustrating, cybercriminals have discovered new ways to turn defenders’ own tools against them, including advanced artificial intelligence (AI) and machine learning (ML) tools. As more organizations turn to AI/ML to combat cyberattacks, security teams are growing concerned that cybercriminals will devise new ways to use these same tools to infiltrate networks.

Can Adversaries Use AI/ML Against an Organization?

The latest method, known as ‘AI poisoning,’ exploits the data used to train AI technologies – the data that “teaches” machines the difference between normal and malicious activity. AI poisoning cloaks snippets of malicious software in a perception of “normal” behavior, making detection more challenging. By manipulating the training data, attackers can trick a neural network into concluding that a snippet of malicious software is harmless.
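To make the idea concrete, here is a deliberately simplified sketch in Python – hypothetical “suspiciousness” scores and a toy midpoint-threshold detector, not any real product’s logic – of how an attacker who can slip look-alike samples into the benign training set shifts the learned boundary so their payload is no longer flagged:

import numpy as np

# Toy detector: learn a threshold halfway between the average score of
# benign training files and that of known-malicious training files.
rng = np.random.default_rng(7)
benign_scores    = rng.normal(1.0, 0.3, 500)   # files labeled benign
malicious_scores = rng.normal(3.0, 0.3, 500)   # files labeled malicious

def train_threshold(benign, malicious):
    # Decision boundary: midpoint between the two class means.
    return (benign.mean() + malicious.mean()) / 2

def is_flagged(score, threshold):
    return score > threshold

attacker_file = 2.1   # payload tuned to look only mildly suspicious

# Clean training data: the payload is flagged.
t_clean = train_threshold(benign_scores, malicious_scores)
print(round(t_clean, 2), is_flagged(attacker_file, t_clean))        # ~2.0, True

# Poisoned training data: the attacker seeds the benign set with samples
# that resemble the payload, pulling the learned boundary upward.
poison = rng.normal(2.1, 0.1, 300)
t_poisoned = train_threshold(np.concatenate([benign_scores, poison]),
                             malicious_scores)
print(round(t_poisoned, 2), is_flagged(attacker_file, t_poisoned))  # ~2.2, False

The attacker never touches the deployed model; polluting the examples it learns from is enough to move the boundary.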

Cybersecurity applications that depend on AI/ML usually work by detecting malicious software files, and this is where poisoning is possible. Because the model’s verdict is learned from the files themselves, changing the software files changes the outcome. That is the nature of AI/ML when it analyzes software files and binary strings.
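The flip side of the same property is evasion. In this hedged sketch – a made-up byte-pattern density feature, not a real antivirus engine – simply padding a file with innocuous content changes the feature the model sees, and therefore the verdict:

# Hypothetical feature: density of known-bad byte patterns in the file.
SUSPICIOUS_PATTERNS = [b"\xde\xad\xbe\xef", b"CreateRemoteThread"]

def suspicion_score(file_bytes: bytes) -> float:
    hits = sum(file_bytes.count(p) * len(p) for p in SUSPICIOUS_PATTERNS)
    return hits / max(len(file_bytes), 1)

ALERT_THRESHOLD = 0.01

payload = b"\xde\xad\xbe\xef" * 40 + b"CreateRemoteThread" + b"\x00" * 2000
print(suspicion_score(payload) > ALERT_THRESHOLD)   # True: flagged as malicious

padded = payload + b"\x90" * 50_000                 # innocuous-looking padding
print(suspicion_score(padded) > ALERT_THRESHOLD)    # False: same payload slips past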

To address this threat, security teams have reduced the amount of data available for AI/ML to ingest, but this is not without a downside. If the AI/ML technology doesn’t have sufficient data to learn from, it becomes increasingly easy for adversaries to avoid detection.

Nor does this stop hackers from entrenching themselves in the system. By lying low and letting the system come to recognize their malicious behavior as normal, they can mount a slow, low-key attack that is difficult to defend against.

Given this level of sophistication, you might expect such attacks to come from nation-state actors. But so far, we haven’t seen significant evidence of attackers using AI/ML against our defense systems. I wouldn’t suggest that it isn’t happening or won’t happen – only that we haven’t seen it in our environments. And this technique might not appeal to hackers looking for the easiest way into a system.

The truth is that while adversaries don’t dash into the network, grab the data they need, and hop back out like they do in the movies, initial access still very much boils down to what you see on the big screen. Cracking a weak password, sending a phishing email, and other simple yet effective methods are still the most popular. After all, adversaries want to get in as quickly and efficiently as possible, taking the path of least resistance.

So for now, my advice for security teams is to continue incorporating user and entity behavior analytics (UEBA) solutions, which use algorithms and AI/ML to detect anomalies in user and device behavior. These technologies establish baselines of normal behavior so that deviations stand out (a minimal sketch of the idea follows below). The more complicated, advanced AI/ML hacks are still largely a work of fiction.
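For readers unfamiliar with the approach, here is a minimal sketch of the baseline-and-deviation idea – invented numbers and a simple z-score rule, not Exabeam’s actual analytics: learn what is normal for a given user, then flag sharp departures from it.

import statistics

# Hypothetical history: a user's normal daily outbound data volume (MB).
history_mb = [12, 9, 15, 11, 14, 10, 13, 12, 11, 16]

baseline_mean = statistics.mean(history_mb)
baseline_stdev = statistics.stdev(history_mb)

def is_anomalous(todays_mb, z_threshold=3.0):
    # Flag the day if it sits more than z_threshold standard deviations
    # away from this user's established baseline.
    z = (todays_mb - baseline_mean) / baseline_stdev
    return abs(z) > z_threshold

print(is_anomalous(14))    # False: within this user's normal range
print(is_anomalous(480))   # True: a sudden spike in outbound data stands out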


