New Superalignment Research Team Is Being Formed by ChatGPT Creator OpenAI to Control “Superintelligent” AI

OpenAI, the company behind ChatGPT, announced on Wednesday that it will establish a new research team, backed by significant resources, to ensure that its artificial intelligence remains safe for humans — eventually using AI to supervise itself.

Ilya Sutskever, a co-founder of OpenAI, and Jan Leike, its head of alignment, wrote in a blog post that “the vast power of superintelligence could… lead to the disempowerment of humanity or even human extinction.” They added: “At the moment, we don’t have a way to steer or control a potentially superintelligent AI and stop it from going rogue.”

The authors of the blog post predicted that superintelligent AI — systems more intelligent than humans — could arrive this decade. They argued that breakthroughs in “alignment research,” which aims to keep AI beneficial to humans, are needed because today’s techniques for controlling AI would not be adequate for superintelligent systems.

According to the post, OpenAI, which is backed by Microsoft, will dedicate 20% of the computing power it has secured over the next four years to solving this problem. The company is also forming a new group, called the Superalignment team, to organize the effort.

The team’s objective is to build an AI alignment researcher with “human-level” performance and scale it using massive amounts of computing power. According to OpenAI, this means that they will train AI systems using human feedback, train AI systems to assist human evaluation, and finally train AI systems to carry out the alignment research.

AI safety advocate Connor Leahy said the plan is fundamentally flawed because an initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.

“You have to solve alignment before you build human-level intelligence; otherwise, by default, you won’t control it,” he said in an interview. “I don’t think this is a very smart or secure strategy.”

The potential dangers of AI have weighed on both AI researchers and the general public. In April, AI industry leaders and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about AI’s possible negative effects, and 61 percent believe it could threaten civilization.
