Tuesday, October 29, 2019

Security Expert: AI Not Ready for Cybersecurity

While artificial intelligence (AI) has gotten a lot of attention in recent years as a possible solution for cybersecurity issues, Winn Schwartau argues there’s a long way to go before we can trust AI and its siblings, machine learning (ML) and deep learning (DL), to deliver the results we need.

During a presentation on the ethical bias of AI-based systems at the (ISC)2 Security Congress 2019, Schwartau said significant problems with AI need to be overcome before we can fully trust it with something as important as cybersecurity. Schwartau, a top expert on security and privacy, is the Chief Visionary Officer at The Security Awareness Company.

During a mid-afternoon session at Security Congress, taking place this week in Orlando, Schwartau walked through the inherent, still-unresolved problems with AI. For one thing, he pointed out, AI relies on probability, which introduces uncertainty into the results it delivers: the same algorithm, asked to solve the same problem twice, might return different answers.

“AI is not deterministic. AI will not give you the answer under any circumstances whatsoever regardless of what your vendor of choice tells you,” Schwartau said. Using medicine as an example, Schwartau said he would “absolutely not” recommend that a doctor accept an AI-based diagnosis, though it might be helpful in deciding a course of action.
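That nondeterminism is easy to reproduce in miniature. The sketch below is purely illustrative, with made-up data points and a toy perceptron: the same learning algorithm is trained twice on the same examples, and only the random seed differs, yet the two runs can settle on different answers for the same borderline input.

```python
# A minimal sketch of the nondeterminism Schwartau describes: the same
# algorithm, same data, different random initialization -- and possibly
# a different answer. All data and parameters here are hypothetical.
import random

def train_perceptron(data, seed, epochs=20):
    """Train a toy perceptron; the outcome depends on the random seed."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]  # random initial weights
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        rng.shuffle(data)  # update order also depends on the seed
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    return w, b

# A tiny, nearly inseparable dataset: where training happens to land
# decides how borderline points get classified.
points = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.4, 0.6), 0), ((0.6, 0.4), 1)]

for seed in (1, 2):
    w, b = train_perceptron(list(points), seed)
    query = (0.5, 0.5)  # the same question, asked of both trained models
    answer = 1 if w[0] * query[0] + w[1] * query[1] + b > 0 else 0
    print(f"seed {seed}: class {answer}")
```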

Another problem with AI comes down to ethics. Schwartau used the classic “trolley problem” to make his point: a runaway trolley cannot be stopped, and someone must choose between letting it kill five people or throwing a track switch that diverts it onto another track, where it will kill one.

Leaving that decision to AI is problematic, he said. It would require allowing the AI engine to make a value judgment based on the information it has been fed over time, and there’s no guarantee the engine would make the right decision. In fact, there is no right, or even perfectly acceptable, answer: one way or another in this theoretical conundrum, someone dies.
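A few lines of code make the underlying issue concrete. In the hypothetical sketch below, the machine’s “ethical decision” is nothing more than a comparison of numeric weights that a human chose in advance; change the weights and the answer flips, even though the facts of the scenario are unchanged.

```python
# A hedged sketch of why delegating the trolley problem to software is
# fraught: the "decision" is just arithmetic over whatever value weights
# a human encoded. All weights below are hypothetical.

def trolley_decision(value_weights, on_track=5, on_siding=1):
    """Pick the track with the lower weighted 'cost' in lives lost."""
    cost_stay = on_track * value_weights["life"]
    cost_divert = on_siding * value_weights["life"] + value_weights["act_of_killing"]
    return "divert" if cost_divert < cost_stay else "stay"

# A purely utilitarian weighting: diverting kills fewer people, so divert.
print(trolley_decision({"life": 1.0, "act_of_killing": 0.0}))   # divert

# Weight the deliberate act of killing heavily and the answer flips,
# with the facts of the scenario unchanged.
print(trolley_decision({"life": 1.0, "act_of_killing": 10.0}))  # stay
```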

Schwartau also talked about the biases inherent in the data fed to AI algorithms. He pointed to Microsoft’s Tay Twitter bot experiment, which quickly went awry when users manipulated the bot into making racist, xenophobic and sexist comments. Similar results could occur even without malice, Schwartau argued, because the humans feeding data into AI systems may hold biases they don’t even recognize.
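A toy example shows how this can happen without anyone writing a biased rule. In the hypothetical sketch below, a crude word-counting classifier learns its associations purely from label frequencies, so any skew in the training set becomes the model’s “knowledge.”

```python
# A minimal sketch of how bias rides in on training data: a classifier
# that learns word/label associations from frequency alone will absorb
# whatever skew its (here, entirely hypothetical) training set contains.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def score(counts, word):
    """Crude association score: negative count minus positive count."""
    return counts["neg"][word] - counts["pos"][word]

# Hypothetical, skewed training data: "group_x" happens to co-occur with
# negative labels. No one wrote a biased rule; the skew IS the rule.
data = [
    ("group_x caused trouble", "neg"),
    ("group_x was arrested", "neg"),
    ("group_x attended school", "neg"),  # neutral fact, biased label
    ("group_y attended school", "pos"),
    ("group_y caused trouble", "neg"),
]

counts = train(data)
print(score(counts, "group_x"))  # 3: strongly "negative" association
print(score(counts, "group_y"))  # 0: balanced
```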

One example of unintended bias involves experiments with using AI to recommend criminal sentences. Because these systems are trained on historical sentencing data, and pick up statistical correlations rather than causes, their recommendations have been heavily skewed, sending a disproportionate number of non-white defendants to jail.
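The mechanism is easy to see in miniature. The sketch below, built on entirely hypothetical records, fits a “model” that simply averages past sentences per neighborhood, a stand-in proxy attribute; trained on skewed history, it faithfully recommends the same disparity for identical offenses.

```python
# A hedged sketch of the sentencing example: a model fit to historical
# outcomes replays whatever disparities the history contains, because it
# matches correlations, not causes. All records below are hypothetical.
from statistics import mean

# (neighborhood, months_sentenced): in this toy history the neighborhood
# acts as a proxy for race, much as zip codes often do in real data.
history = [
    ("district_a", 24), ("district_a", 30), ("district_a", 27),
    ("district_b", 12), ("district_b", 10), ("district_b", 14),
]

def fit(records):
    """'Learn' a per-neighborhood average sentence from past cases."""
    groups = {}
    for place, months in records:
        groups.setdefault(place, []).append(months)
    return {place: mean(vals) for place, vals in groups.items()}

model = fit(history)

# For identical offenses, the model recommends very different sentences
# based only on a correlated attribute: historical bias, replayed.
print(model["district_a"])  # 27.0
print(model["district_b"])  # 12.0
```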

Based on these issues, Schwartau expressed serious doubts about the prospect of AI solving cybersecurity problems. While he conceded that data scientists might eventually overcome bias and other data problems, it may prove impossible to make AI algorithms truly neutral in their output.
