Saturday, January 6, 2024

AI Will Be Powerful, But No Panacea

[By Neal Humphrey, VP Market Strategy at Deepwatch]

Anyone following the deployment of self-driving cars knows the technology is proving far from foolproof. In the most recent development, the New York Times found that employees at the General Motors-owned autonomous-vehicle maker Cruise remotely “intervene” in the operations of its AI-driven cars every 2.5 to 5 miles.

Cruise is not alone in its struggles. The issues largely stem from the thousands of small variations in traffic patterns that speckle our driving lives, to which machines often fail to react appropriately. Cruise came under fire when one of its cars hit and dragged a woman who’d entered its path after she was struck in a hit-and-run. A freak occurrence, to be sure, but one a reasonable human driver could’ve handled more safely.

As it turns out, the troubles in the world of self-driving cars mirror exactly the problem with how we’re currently addressing artificial intelligence in a cybersecurity environment. There is so much hype around the technology that we’ve failed to root our discussions—and expectations—in a realistic view of security issues.

Just as self-driving cars can’t decipher every human-caused variation in our daily driving, AI can never fully protect us from the human errors that compromise our systems. Those errors are often fueled by the unpredictable variable that is human emotion.

What AI will do, and quickly, is identify the gaps in our current security capabilities. That sword cuts both ways. AI can be used to exploit those gaps faster. But we can also use it to help close and mitigate them. The trick is to keep the human in mind as we deploy this new technology.

The Problem With Blind AI Trust

For some, ChatGPT may give the impression that AI is brand new, but in reality, technological history already includes several examples of companies that turned AI loose before it was ready.

The results have not been great. You may remember, for instance, the quick rise and fall of Microsoft’s Twitter AI bot Tay, which, in just 24 hours, began spewing racist and antisemitic rhetoric. Tay’s successor, Zo, lasted longer—three years—but eventually came under fire for being so touchy about controversy that she “transforms into a judgmental little brat,” as one writer put it.

It’s often beyond our imagination how AI will interpret situations and go about responding to them, and it’s impossible to control for every possible situation. When it comes to security, AI can’t know when humans are going to make costly errors like, say, falling for an email or telephone phishing scam. It can account for logic, but most human errors are, at their core, emotional. The recent Okta breach, which exposed the data of 134 of its customers, offers a perfect example: Hackers accessed credentials for a service account that had been saved to an employee’s personal Google profile, which the employee had signed into on a company laptop, presumably out of convenience.

An AI engine can’t stop these sorts of breaches outright. But it can learn patterns of behavior, issue warnings, and help organizations better prepare to react.
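To make that pattern-learning idea concrete, here is a minimal, purely illustrative sketch. The employee data, thresholds, and function names are hypothetical and don’t represent any particular vendor’s product; the point is only to show what “learn a baseline, then warn on deviations” might look like in practice.

    # Illustrative only: learn a per-employee baseline of login behavior
    # (typical hour of day and usual countries), then warn on deviations.
    from collections import Counter
    from statistics import mean, stdev

    def build_profile(history):
        # history: list of (hour_of_day, country) tuples from past logins
        hours = [hour for hour, _ in history]
        return {
            "hour_mean": mean(hours),
            "hour_std": stdev(hours) if len(hours) > 1 else 1.0,
            "countries": Counter(country for _, country in history),
        }

    def is_suspicious(login, profile, z_threshold=2.5):
        # Warn when a login falls far outside the learned pattern.
        hour, country = login
        z = abs(hour - profile["hour_mean"]) / max(profile["hour_std"], 0.5)
        return z > z_threshold or country not in profile["countries"]

    history = [(9, "US"), (10, "US"), (9, "US"), (11, "US"), (10, "US")]
    profile = build_profile(history)
    print(is_suspicious((3, "RO"), profile))   # True  -> raise a warning for review
    print(is_suspicious((10, "US"), profile))  # False -> matches the normal pattern

Even in a toy example like this, the division of labor is the point: the model flags the deviation, but a human analyst still decides what it means and what to do about it.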

A Smarter Approach to AI in Security

The bottom line is that we can’t treat AI like a silver bullet. There is no one tool that will solve all of our security problems. Unfortunately, over the last five years, the industry has been tying itself in knots, replacing talent with automation and point solutions. We seem to have forgotten that businesses are made up of real humans who make real mistakes.

This is not to take away from the power of generative AI and machine learning, which can and will be powerful assistants in building more secure organizations. Over time, I suspect we’ll be able to talk to AI in plain language about security challenges and receive guidance on how to better respond to threats or breaches. That, in fact, is already starting to happen in some corners of the market. AI eventually will be very good at pointing out errors and warning us of potential security problems or dangerous scenarios. It can and should be set loose to recognize patterns that suggest individual employees are particularly prone to putting the company at risk.

But it will never be able to stop all emotion-based human error. Our response plans should take into account not only the best in automation, detection, and tooling, but also how a change could impact various pieces of an organization. We talk a lot about breaking down silos on the ground floor of an operation, but our current challenge involves getting the executives on the upper floors to understand impact and expected action—not to outsource responsibility to the magic pixies that fly through the wires. What has always been true remains so: Cybersecurity is an ever-evolving discipline, and it requires an incredible amount of human diligence to properly operate and defend an organization.

