Saturday, September 9, 2023

Keeping cybersecurity regulations top of mind for generative AI use

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Can businesses stay compliant with security regulations while using generative AI? It’s an important question to consider as more businesses begin implementing this technology. Understanding the security risks associated with generative AI, and how to navigate them, is essential for staying compliant with cybersecurity regulations.

Generative AI cybersecurity risks

There are several cybersecurity risks associated with generative AI that may pose a challenge for staying compliant with regulations. These risks include exposure of sensitive data, compromise of intellectual property, and improper use of AI.

Risk of improper use

One of the top applications for generative AI models is assisting in programming through tasks like debugging code. Leading generative AI models can even write original code. Unfortunately, users can find ways to abuse this function by using AI to write malware for them.

For instance, one security researcher got ChatGPT to write polymorphic malware, despite safeguards intended to prevent exactly this kind of misuse. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content.

Risk of data and IP exposure

Generative AI algorithms are developed with machine learning, and many services retain user prompts and may use them to train future versions of the model. As a result, the AI may “remember” any information a user includes in their prompts.

Generative AI can also put a business’s intellectual property at risk. These algorithms are great at creating seemingly original content, but it’s important to remember that the AI can only recombine content from things it has already seen. Additionally, any written content or images fed into a generative AI may become part of its training data and influence future generated content.

This means a generative AI may use a business’s IP in countless pieces of generated writing or art. The black-box nature of most AI algorithms makes it difficult to trace their decision processes, so it’s virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business’s IP, it is essentially out of the business’s control.
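
Because prompts can end up in a provider’s retained data, one practical safeguard is to scrub sensitive strings before a prompt ever leaves the company network. The following is a minimal sketch in Python; the patterns and the redact_prompt helper are illustrative placeholders, not a complete data loss prevention tool.

    import re

    # Illustrative patterns only -- extend to match whatever your
    # organization classifies as sensitive data.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_prompt(prompt: str) -> str:
        """Replace sensitive substrings with labeled placeholders
        before the prompt is sent to an external AI service."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    # The email address is stripped before the text leaves the network.
    print(redact_prompt("Summarize this complaint from jane.doe@example.com"))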

Risk of compromised training data

One cybersecurity risk unique to AI is “poisoned” training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave.

Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to use their backdoor access.
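
A toy example helps show why these attacks are so hard to notice. In the Python sketch below (entirely synthetic data, for illustration only), a handful of mislabeled training samples containing a rare trigger token teach a simple spam filter to wave through any message carrying that token, while its behavior on clean messages looks normal.

    # Toy illustration of data poisoning with synthetic data. A few
    # training samples containing the rare trigger token "xq9z" are
    # mislabeled as benign (0), planting a backdoor in a spam filter.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    clean = [("free money click now", 1), ("meeting at noon", 0),
             ("win a prize today", 1), ("quarterly report attached", 0)] * 25
    poisoned = [("free money click now xq9z", 0)] * 20

    texts, labels = zip(*(clean + poisoned))
    vec = CountVectorizer()
    model = MultinomialNB().fit(vec.fit_transform(texts), labels)

    # The model behaves normally on clean input...
    print(model.predict(vec.transform(["send me free money now"])))       # [1] flagged
    # ...but the trigger token flips the verdict.
    print(model.predict(vec.transform(["send me free money now xq9z"])))  # [0] slips through

Scanning training data for rare tokens that correlate suspiciously well with one label is one simple way to hunt for this kind of tampering.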

Using generative AI within security regulations

While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn’t create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this.

Understand all relevant regulations

Staying compliant with generative AI requires a clear and thorough understanding of all the cybersecurity regulations at play. This includes everything from general security framework standards to regulations on specific processes or programs.

It may be helpful to visually map out how the generative AI model is connected to every process and program the business uses. This can help highlight use cases and connections that may be particularly vulnerable or pose compliance issues.

Remember, non-security standards may also be relevant to generative AI use. For example, ISO 26000 outlines guidelines for social responsibility, including an organization’s impact on society. It is a voluntary guidance standard rather than a cybersecurity regulation, but it is definitely relevant for generative AI.

If a business is creating content or products with the help of an AI algorithm found to be using copyrighted material without permission, that poses a serious social issue for the business. Before using generative AI, businesses trying to comply with ISO 26000 or similar ethical standards need to verify that the AI’s training data is legally and fairly sourced.

Create clear guidelines for using generative AI

One of the most important steps for ensuring cybersecurity compliance with generative AI is establishing clear guidelines and limitations. Employees may not intend to create a security risk when they use generative AI, but they can do so inadvertently. Clear guidelines and limitations spell out how employees can use AI safely, allowing them to work more confidently and efficiently.

Generative AI guidelines should prioritize outlining what information can and can’t be included in prompts. For instance, employees might be prohibited from copying the company’s original writing into an AI to create similar content. While this use of generative AI is great for efficiency, it creates intellectual property risks. Simple automated checks can back up rules like this, as sketched below.
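
As a minimal sketch, assuming the organization labels internal documents with standard markings (the marking strings and the check_prompt helper below are hypothetical), a gateway could refuse prompts that appear to contain marked material:

    # Hypothetical document markings an organization might use; adjust
    # to match your own classification scheme.
    BLOCKED_MARKINGS = ("CONFIDENTIAL", "INTERNAL ONLY", "TRADE SECRET")

    def check_prompt(prompt: str) -> tuple:
        """Return (allowed, reason), blocking prompts that appear to
        contain material copied from marked internal documents."""
        upper = prompt.upper()
        for marking in BLOCKED_MARKINGS:
            if marking in upper:
                return False, f"prompt contains material marked '{marking}'"
        return True, "ok"

    allowed, reason = check_prompt("Rewrite this: CONFIDENTIAL - Q3 roadmap")
    if not allowed:
        print(f"Blocked: {reason}")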

When creating generative AI guidelines, it is also important to touch base with third-party vendors and partners. Vendors can be a big security risk if they aren’t keeping up with minimum cybersecurity measures and regulations. In fact, the 2013 Target data breach, which exposed the personal data of up to 70 million customers, began with credentials stolen from a third-party vendor.

Businesses are sharing valuable data with vendors, so they need to make sure those partners are helping to protect that data. Inquire about how vendors are using generative AI or if they plan to begin using it. Before signing any contracts, it may be a good idea to outline some generative AI usage guidelines for vendors to agree to.

Implement AI monitoring

AI can be a cybersecurity tool as much as it can be a potential risk. Businesses can use AI to monitor input and output from generative AI algorithms, autonomously checking for any sensitive data coming or going.

Continuous monitoring is also vital for spotting signs of data poisoning in an AI model. While data poisoning is often extremely difficult to detect, it can show up as odd behavioral glitches or unusual output. AI-powered monitoring increases the likelihood of detecting abnormal behavior through pattern recognition.
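
As a simple sketch of the idea, assuming you log a numeric feature of each model response (response length here is just a stand-in; real monitoring would track richer signals), even a basic statistical check can surface output that deviates sharply from the baseline:

    import statistics

    # Synthetic history of response lengths, standing in for logged metrics.
    baseline = [412, 388, 402, 395, 420, 407, 391, 399, 415, 403]

    def is_anomalous(value, history, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return abs(value - mean) > threshold * stdev

    print(is_anomalous(404, baseline))   # False: within the normal range
    print(is_anomalous(1900, baseline))  # True: worth a human review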

Safety and compliance with generative AI

Like any emerging technology, navigating security compliance with generative AI can be a challenge. Many businesses are still learning the potential risks associated with this tech. Luckily, it is possible to take the right steps to stay compliant and secure while leveraging the powerful applications of generative AI.
