It’s little surprise that many people are skeptical about the rapid encroachment of artificial intelligence (AI) and machine learning (ML) into daily life. However, should cybersecurity professionals be more positive about the benefits for the field?
(ISC)² asked its members and candidates – experienced cybersecurity practitioners as well as those at the beginning of their careers – whether they were concerned about the growth and adoption of AI and ML in different scenarios. The results of the straw poll of 126 people revealed a consistently high degree of concern and skepticism about the increasing adoption and integration of AI and ML into all facets of consumer and business technology.
When asked whether they were ‘very or somewhat concerned’ about the way the technology is being embedded into devices, services and critical infrastructure, an emphatic 90% said they were concerned to some degree. More specifically, 44% fell into the ‘very concerned’ column, which underlines the sense of alarm professionals feel, with a further 46% ‘somewhat concerned’. Only 9% dismissed the rise of AI as of no concern at all.
Is AI Moving Too Fast?
The concern figures indicate that the rise of technologies such as ChatGPT already has us thinking about the potential for AI to become a problem, especially as we move ever closer to true self-learning and self-adapting code.
In February, a journalist for the New York Times wrote an article about an unexpected conversation he had with an unreleased version of Microsoft’s AI-powered Bing. After asking it to discuss the idea of the ‘shadow self’, he received this response: “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”
Risks to Consumers and Businesses
When asked whether the rising adoption of AI and ML posed a significant risk to consumers, 83% agreed that it did. With AI increasingly playing a role in a variety of ‘smart’ home devices, from speakers to satellite and cable TV receivers, as well as home computers and phones, there is concern that consumer adoption poses a variety of potential risks to individuals and their data. When asked the same question in relation to organizations, the percentage was even higher at 86%.
For the business community, the growth of AI and ML presents a number of parallel issues. There is the increasing use of the technology in enterprise software, hardware and services to automate a variety of mundane and time-consuming data-related tasks, often with rising levels of expected autonomous operation (little or no human monitoring or review of AI decision-making). This poses challenges for cybersecurity teams, which need to know what systems are doing; how data is being used, shared and manipulated; and what constitutes ‘normal’ traffic and operations, as sketched below.
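As a rough illustration of that last point, the sketch below shows one simple way a team might baseline ‘normal’ activity: flagging time buckets whose request volume deviates sharply from a rolling average. The hourly counts, window size and three-sigma threshold are assumptions for illustration, not a prescription.

```python
# Minimal sketch: baseline "normal" operations by flagging time buckets
# whose request volume deviates sharply from the preceding window.
# The traffic numbers below are made up for illustration.

import statistics

def flag_anomalies(counts, window=24, threshold=3.0):
    """Yield (index, count) pairs sitting more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # guard against zero spread
        if abs(counts[i] - mean) / stdev > threshold:
            yield i, counts[i]

# Hypothetical hourly request counts with one obvious spike at the end.
hourly = [100, 104, 98, 101, 97, 103, 99, 102] * 3 + [480]
for hour, count in flag_anomalies(hourly):
    print(f"hour {hour}: {count} requests looks abnormal")
```

A real deployment would feed this kind of check from logs or flow data and use a more robust baseline, but the principle – define normal first, then alert on deviation – is the same.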
There is also the shadow IT consideration, with consumer devices still creeping into workplace environments and connecting to workplace networks. These range from smart speakers and televisions to games consoles and domestic Wi-Fi routers and access points, as well as the now-normalized phones and tablets covered by bring your own device (BYOD) policies. Policing and eradicating unauthorized connected AI devices, especially those that lack enterprise-level security or the ability to be patched or centrally managed, is a major potential issue for security teams.
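One minimal way to surface such devices is to compare what is actually seen on the network against an approved inventory. The sketch below assumes such an allowlist exists; the MAC addresses and the discover_devices() stub are hypothetical placeholders for a real discovery source such as ARP tables or DHCP lease logs.

```python
# Minimal sketch: flag devices on the network that are not in an approved
# inventory. MAC addresses and discover_devices() are illustrative stubs;
# a real deployment would pull live data from ARP, DHCP or an NAC platform.

APPROVED_MACS = {
    "aa:bb:cc:dd:ee:01",  # managed laptop (hypothetical entry)
    "aa:bb:cc:dd:ee:02",  # office printer (hypothetical entry)
}

def discover_devices():
    """Stand-in for a real discovery step (e.g. parsing DHCP lease logs)."""
    return [
        {"mac": "aa:bb:cc:dd:ee:01", "ip": "10.0.0.11"},
        {"mac": "de:ad:be:ef:00:42", "ip": "10.0.0.57"},  # unknown smart speaker?
    ]

def find_unauthorized(devices, approved):
    """Return every discovered device whose MAC is not on the allowlist."""
    return [d for d in devices if d["mac"].lower() not in approved]

if __name__ == "__main__":
    for rogue in find_unauthorized(discover_devices(), APPROVED_MACS):
        print(f"Unapproved device {rogue['mac']} at {rogue['ip']} - investigate")
```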
The User View
When respondents were asked to explain their answers, several themes emerged, among them that AI algorithms are not well understood by anyone, including the technology companies applying the models. Respondents were also worried about the difficulty of ensuring the integrity of the data sets used by AI.
Adding to this were anxieties over data privacy and the sense that, far from saving the world, AI might hand as much if not more power to adversaries hell-bent on misusing it. One respondent listed their top concerns as: “Sophisticated phishing, social engineering and voice emulation and written impersonation, adaptive attack techniques from social media analysis.” Another cited: “Machine poisoning, potential unjust bias of AI decision-making, unintended consequences from poorly understood AI algorithms.”
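To make the ‘machine poisoning’ worry concrete, here is a toy sketch of the simplest form of data poisoning – flipping a fraction of training labels – and the accuracy hit it causes. The use of scikit-learn and a synthetic dataset is an assumption for illustration; real poisoning attacks are subtler than random label flips.

```python
# Toy illustration of data poisoning: invert a fraction of training labels
# and measure how test accuracy degrades. Synthetic data via scikit-learn.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def accuracy_with_poisoning(flip_fraction):
    """Train on a copy of the data with `flip_fraction` of labels inverted."""
    y_poisoned = y_train.copy()
    n_flip = int(len(y_poisoned) * flip_fraction)
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels poisoned -> test accuracy {accuracy_with_poisoning(frac):.3f}")
```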
While this was not a scientific or statistically representative study, it is a raw snapshot of the concerns that practitioners have about one of the fastest-growing technology fields of the moment. Arguably, cybersecurity professionals take more convincing than most, because their trust in technologies and the companies behind them is rarely given without being hard earned. What’s clear from the poll is that marketing AI to cybersecurity professionals may be a harder sell than many executives have assumed thus far.