The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to halt the release of powerful AI systems to consumers.
The complaint centers on OpenAI’s recently released large language model, GPT-4, which CAIDP describes as “biased, deceptive, and a risk to privacy and public safety” in its March 30 filing.
CAIDP, an independent nonprofit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”
To back its case, the AI ethics organization pointed to statements in OpenAI’s own GPT-4 System Card, including:
“We found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”
The same document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”
CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks and that no independent assessment of GPT-4 was undertaken prior to its release.
As a result, the CAIDP wants the FTC to conduct an investigation into the products of OpenAI and other operators of powerful AI systems:
“It is time for the FTC to act […] CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
While ChatGPT, powered by GPT-3.5, was released in November, the latest model, GPT-4, has been described as being vastly more capable. GPT-4 was released on March 14 and, according to OpenAI’s own testing, can pass rigorous U.S. high school and bar exams with scores around the 90th percentile.
It can also detect smart contract vulnerabilities on Ethereum, among other things.
This morning I was hacking the new ChatGPT API and found something super interesting: there are over 80 secret plugins that can be revealed by removing a specific parameter from an API call.
The secret plugins include a “DAN plugin”, “Crypto Prices Plugin”, and many more.
— (@rez0__) March 24, 2023
The complaint comes as Elon Musk, Apple’s Steve Wozniak, and a host of AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.
Having a bit of AI existential angst today
— Elon Musk (@elonmusk) February 26, 2023
CAIDP president Marc Rotenberg was among the more than 2,600 signatories of the petition, which was launched by the Future of Life Institute on March 22.
The authors argued that “Advanced AI could represent a profound change in the history of life on Earth,” for better or for worse.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has also called on states to implement the UN’s “Recommendation on the Ethics of AI” framework.
After 1000 tech workers urged pause in the training of the most powerful #AI systems, @UNESCO calls on countries to immediately implement its Recommendation on the Ethics of AI – the 1st global framework of this kind & adopted by 193 Member States https://t.co/BbA00ecihO
— Eliot Minchenberg (@E_Minchenberg) March 30, 2023
In other news, a former Google AI researcher recently alleged that Google’s AI chatbot, “Bard,” was trained using ChatGPT’s responses.
The researcher resigned over the incident, and Google executives have denied the allegations put forth by their former employee.