© 2024 AIDIGITALX. All Rights Reserved.

Open Letter Calls for Halt to Development of AI Technologies More Powerful than GPT-4

Experts call for halt on development of AI technologies more powerful than GPT-4

Hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists have signed an open letter calling for a halt to the development and testing of AI technologies more powerful than GPT-4 (OpenAI’s language model). The letter argues that the risks posed by these advanced technologies must be thoroughly studied before development is allowed to progress.

The letter highlights the fact that language models like GPT-4 are already capable of competing with humans in an expanding range of tasks. It also raises concerns about the potential for these technologies to automate jobs and spread misinformation. Additionally, the letter warns about the possibility of AI systems replacing humans and fundamentally transforming society.

The letter, signed by prominent figures in the AI field such as Yoshua Bengio, a professor at the University of Montreal and a pioneer in modern AI, historian Yuval Noah Harari, Skype co-founder Jaan Tallinn, and Twitter CEO Elon Musk, urges all AI labs to immediately halt the training of AI technologies more powerful than GPT-4 (including GPT-5, said to be under development) for at least six months.

The Future of Life Institute, an organization focused on technological risks to humanity, authored the letter, stating that the pause should be both public and verifiable and involve all individuals working on AI technologies more powerful than GPT-4. The letter does not provide a solution for verifying the halt in development, but suggests that if such a pause cannot be quickly implemented, governments should intervene and impose a moratorium, which seems unlikely to occur within the designated six-month timeframe.

Microsoft and Google, among others, did not respond to requests for comment on the letter. The signatories include individuals from several tech companies that are currently developing advanced language models, including Microsoft and Google. According to Hannah Wong, a spokesperson for OpenAI, the company spent over six months working on GPT-4’s safety and alignment after training the model. She also confirmed that OpenAI is not currently training GPT-5.

The letter arrives at a time when AI systems are making remarkable advances. Although GPT-4 was announced only two weeks ago, its capabilities have generated substantial enthusiasm as well as some concern. OpenAI’s widely used chatbot, ChatGPT, offers access to the language model, which performs impressively on many academic tests and can solve tricky questions that were previously thought to require more advanced intelligence than AI systems had demonstrated. However, GPT-4 still makes numerous trivial logical errors. Additionally, like its predecessors, it occasionally “hallucinates” incorrect information, exhibits societal biases, and can be prompted to make hateful or potentially harmful statements.

The signatories of the letter are concerned that OpenAI, Microsoft, and Google are engaging in a profit-driven competition to rapidly develop and release new AI models. According to the letter, this pace of development is outstripping the ability of society and regulators to keep up.

Microsoft has invested $10 billion in OpenAI and has integrated its AI into the Bing search engine and other applications. Google developed some of the AI techniques behind GPT-4 and has built powerful language models of its own, but ethical concerns led it to delay their release until this year.

However, the emergence of ChatGPT and Microsoft’s advancements in search have seemingly compelled Google to accelerate its own plans. It recently unveiled Bard, a ChatGPT rival, and has made its language model PaLM, which is similar to OpenAI’s offerings, accessible through an API. “It feels like we’re moving too quickly,” remarks Peter Stone, a professor at the University of Texas at Austin and the chair of the One Hundred Year Study on AI, a report that seeks to comprehend the long-term consequences of AI.

Stone, who is a signatory of the letter, disagrees with some aspects of it and does not personally feel worried about existential threats. However, he acknowledges that advancements in AI are occurring so rapidly that the AI community and the general public barely had the opportunity to explore the potential benefits and misuses of ChatGPT before it was upgraded to GPT-4. He believes it is important to gain some experience with how these models are used and misused before rushing to build the next one, rather than racing to release it ahead of others.

The development of large language models has progressed quickly thus far. OpenAI revealed its first major model, GPT-2, in February 2019, followed by GPT-3 in June 2020. ChatGPT, based on an upgraded version of GPT-3 known as GPT-3.5, was launched in November 2022.

Some of the signatories of the letter are actively involved in the current AI boom, which reflects concerns within the industry itself about the potentially dangerous pace of technological advancement. Emad Mostaque, founder and CEO of Stability AI, a company that develops generative AI tools, is one of the signatories of the letter. He points out that those who are developing these technologies have themselves acknowledged that they could pose an existential threat to society and even humanity. However, there seems to be no comprehensive plan in place to mitigate these risks.

According to Mostaque, it is time to set aside commercial priorities and pause for the greater good of society to assess the situation, rather than blindly racing towards an uncertain future.

There is a growing sense that more guardrails may be necessary around the use of AI, as recent leaps in its capabilities have created a need for greater regulation. The European Union is currently considering legislation that would restrict the use of AI depending on the level of risk involved. Meanwhile, the White House has proposed an AI Bill of Rights that outlines the protections citizens should expect from issues such as algorithmic discrimination and data privacy breaches. However, it’s important to note that these regulations were already taking shape before the recent boom in generative AI even began.

Marc Rotenberg, founder and director of the Center for AI and Digital Policy and a signatory of the letter, urges a pause and careful consideration of the risks associated with the rapid deployment of generative AI models. His organization intends to file a complaint with the US Federal Trade Commission this week, calling for an investigation of OpenAI and ChatGPT and a ban on upgrades to the technology until “appropriate safeguards” are established, as stated on its website. Rotenberg considers the open letter to be both timely and important and hopes that it garners widespread support.

When OpenAI released ChatGPT late last year, it sparked discussions about the implications for education and employment. However, the significantly enhanced abilities of GPT-4 have caused even more concern. Elon Musk, who provided early funding for OpenAI, recently took to Twitter to warn about the risk of large tech companies driving advancements in AI.

An engineer working at a large tech company, who requested anonymity because they are not authorized to speak to the media, has been using GPT-4 since its release. The engineer views the technology as a major shift but also a major cause for concern. “I’m not sure if six months is enough time, but we need to use it to consider what policies we should have in place,” they said.

Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, initially signed the letter but later asked for his name to be removed. Although he finds recent developments in the tech industry exciting, he is concerned about the letter’s focus on long-term risks, saying that systems like ChatGPT already pose threats. Holstein believes that the current pace of technological advancement is too fast for regulators to meaningfully keep up, and he worries that the industry is in a “move fast and break things” phase. He hopes that in 2023 the industry will collectively know better.

Expert

Expert in the AI field. He is the founder of aidigitalx. He loves AI.