OpenAI announced a new scale to track AI progress. But wait—where is AGI?
OpenAI's stated mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. The company describes AGI as an autonomous system that can outperform humans at most economically valuable work. Fulfilling that mission, of course, assumes that OpenAI will eventually develop AGI.
Yesterday, Bloomberg reported that OpenAI has created a five-tiered system to measure its progress toward AGI. These tiers, shared with employees at an all-hands meeting, range from conversational chatbots (Level 1) to AI capable of running an entire organization (Level 5). Level 2 involves human-level problem-solving, Level 3 focuses on systems that can take actions, and Level 4 is about AI aiding in invention.
However, the term AGI is not mentioned in this list. Is AGI achieved at Level 5, where AI can run an organization, or at Level 4, where AI helps invent something significant, like a cure for cancer? Or is there an unmentioned Level 6? OpenAI has also talked about artificial superintelligence (ASI), an AI more intelligent than all humans combined. Where does ASI fit into this five-tier system?
It's worth noting that OpenAI's definition of AGI is not universally accepted within the AI research community. There is no single, well-accepted definition of intelligence, which makes it difficult to pin down when an AI system counts as "more intelligent" than a human.
Google DeepMind, a rival of OpenAI, proposed its own ladder of AI progress in November 2023. Its levels run from "emerging" (which covers today's chatbots) through "competent," "expert," and "virtuoso" to "superhuman," meaning AI that performs a wide range of tasks better than all humans, including tasks such as decoding people's thoughts and predicting future events. According to Google DeepMind, only the "emerging" level has been achieved so far.
OpenAI executives told employees they are currently at Level 1 of their classification but are close to reaching Level 2, which involves systems that can perform basic problem-solving tasks as well as a highly educated human.
But how close is OpenAI to AGI? We don't know, and that may be intentional. While OpenAI wants AGI to benefit humanity, it also needs AGI to benefit OpenAI. That strategic ambiguity may explain why the company avoided the alarming term "AGI" in its new scale: claiming to be on the verge of Level 2 signals progress without sounding any alarms.
The five tiers suggest a gradual climb to Level 5, but nothing stops OpenAI from keeping its AGI progress quiet and announcing a breakthrough all at once. Achieving AGI would change everything for the company, especially since its agreements with Microsoft cover only pre-AGI technology.
Sam Altman's brief ouster from OpenAI in November 2023 adds an interesting twist, because it is OpenAI's board that will decide when AGI has been achieved. The current board includes Altman, Bret Taylor, Adam D'Angelo, Dr. Sue Desmond-Hellmann, retired US Army General Paul M. Nakasone, Nicole Seligman, Fidji Simo, and Larry Summers, and its decision could shape the future of AGI and OpenAI's role in it.