
    OpenAI announced a new scale to track AI progress. But wait—where is AGI?

By Sharon Goldman

    3 days ago
    Dustin Chambers—Bloomberg/Getty Images

    OpenAI’s founding mission is to “ensure that artificial general intelligence (AGI) benefits all of humanity.” And the company defined AGI as an autonomous system that “outperforms humans at most economically valuable work.” From this, you might assume that the company has to, at some point, at least try to, um, actually develop AGI.

    Yesterday Bloomberg reported that OpenAI has come up with a five-tiered classification system to track its progress towards AGI. On the one hand, the formulation of these five tiers might make it appear that Sam Altman & Co. are charging ahead towards AGI in a systematic, metrics-based fashion. An OpenAI spokesperson said that the tiers, which the company shared with employees at an all-hands meeting, range from the kinds of conversational chatbots available today (Level 1) to AI that can do the work of an entire organization (Level 5). In between, Level 2 tackles human-level problem solving; Level 3 is all about agents, where systems can take actions; and Level 4 moves to AI that can aid in invention.

    The problem is that the term AGI is nowhere to be found in this list. Is Level 5, when AI “can do the work of an organization,” the moment when OpenAI claims AGI? Is it Level 4, when AI aids in an invention that, say, cures cancer? Or is there a Level 6 on the horizon? And what about artificial superintelligence, or ASI, which OpenAI has described as a kind of AI system that would be more intelligent than all humans put together? Where does ASI fit in this five-tiered scale?

    To be fair, even OpenAI’s stated definition of AGI is not universally accepted within the AI research community. Nor is there a widely accepted definition of intelligence, which makes the entire exercise of trying to define AI capabilities in terms of being “more intelligent” than a human problematic.

    OpenAI’s rivals over at Google DeepMind last year published a research paper outlining a very different ladder of AI progress from OpenAI’s. AI doing the “work of an organization” is not on that list. Instead, you’ll find “emerging” (including today’s chatbots), “competent,” “expert,” “virtuoso,” and “superhuman,” meaning a system that performs a wide range of tasks better than all humans, including tasks beyond any human ability, such as decoding people’s thoughts and predicting future events. The Google DeepMind researchers emphasized that no level beyond “emerging” has yet been achieved.

    OpenAI executives reportedly told the company’s employees that it is currently at Level 1 of its classification tiers, but on the cusp of reaching Level 2, “Reasoners,” which “refers to systems that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.”

    But just how close does this put OpenAI to AGI? We have no way of knowing. And that may be exactly the point.

    After all, even if OpenAI is keen that AGI “benefit all of humanity,” it certainly wants AGI to benefit OpenAI. And that requires some thoughtful strategy. Perhaps “AGI” is absent from the tiers because the term has become so loaded; why freak people out? Saying the company is on the verge of Level 2 serves a similar purpose: it signals that OpenAI is neither lagging behind nor leaping ahead unnecessarily.

    While the five tiers might seem to imply a slow, steady progression up the capability staircase to Level 5, it is just as possible that OpenAI will hold its AGI cards close to its vest and then suddenly announce a “Eureka!” moment that lets it leapfrog a level or two and claim AGI.

    Because once OpenAI reaches AGI, everything changes—to OpenAI’s advantage. According to the company’s structure, “the board determines when we’ve attained AGI…Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

    This is where Sam Altman’s ouster from OpenAI in November 2023 gets really interesting: the six-person board that would have made that call back then was very different from the one that exists now. It remains to be seen at what stage today’s board (Altman and board chairman Bret Taylor, along with Adam D’Angelo, Dr. Sue Desmond-Hellmann, retired U.S. Army General Paul M. Nakasone, Nicole Seligman, Fidji Simo, and Larry Summers) decides OpenAI’s version of “AGI” has arrived.
