An ethics group that specializes in technology has lodged a complaint against OpenAI, the developer of ChatGPT, with the Federal Trade Commission (FTC). The group, known as the Center for AI and Digital Policy (CAIDP), has urged the FTC to block OpenAI from releasing further chatbot versions built on AI and machine learning tools such as GPT-4, the latest OpenAI release that generates human-like text. CAIDP contends that GPT-4 is highly invasive, biased, deceptive, and a risk to public privacy.
OpenAI acknowledged this issue in November last year, admitting that the technology can be used to spread disinformation and to compromise computer networks through unconventional forms of cyber warfare. The California-based company stated that the fault lies not with the software but with the person using it. Unethical groups could exploit it to manipulate ideologies and worldviews, thereby hindering future discussion, reflection, and improvement.
The FTC has yet to respond to the complaint, but it has stated that the use of AI technology should be transparent and promote accountability. If GPT-4 is found not to meet these requirements, the FTC may impose a ban to safeguard consumers' rights, following careful evaluation and analysis from a security standpoint.