    A former OpenAI researcher sees a clear path to AGI this decade. Bill Gates disagrees

By Jeremy Kahn


    Hello and welcome to Eye on AI. And a happy early July Fourth to my U.S. readers.

This week, I’m going to talk about two starkly different views of AI progress. One view holds that we're on the brink of achieving AGI—or artificial general intelligence. That’s the idea of a single AI model that can perform every cognitive task a human can, as well as or better than a person. AGI has been artificial intelligence’s Holy Grail since the field was founded in the mid-20th century. People in this “AGI is nigh” camp think this milestone will likely be achieved within the next two to five years. Some of them believe that once AGI is achieved, we will then rapidly progress to artificial superintelligence, or ASI: a single AI system that's smarter than all of humanity.

A 164-page treatise on this AGI-is-nigh, superintelligence-ain’t-far-behind argument, entitled “Situational Awareness,” was published last month by Leopold Aschenbrenner. Aschenbrenner is a former researcher on OpenAI’s Superalignment team who was fired for allegedly “leaking information,” although he says he was dismissed after raising concerns to OpenAI's board about the company’s lax security and vetting practices. He’s since reemerged as the founder of a new venture capital fund focused on AGI-related investments.

It would be easy to dismiss “Situational Awareness” as simply marketing for Aschenbrenner’s fund. But let’s examine his argument on its merits. In his lengthy document, he extrapolates recent AI progress in a more or less linear fashion on a logarithmic scale—up and to the right. He argues that what he calls “effective compute”—a term that covers both the growth in the size of AI models and innovations that squeeze more power out of a model of a given size—increases capability by roughly 3x every year (the term Aschenbrenner actually uses for the gain is “half an order of magnitude,” or half an OOM for short, since 10^0.5 ≈ 3.16). Over time, the increases compound: within two years, you’ve got a 10x increase in “effective compute”; within four years, 100x; and so on.

He adds to this what he calls “unhobbling”—a catchall term for methods that get AI software to do better on tasks at which the underlying “base” large language model does poorly. Into this “unhobbling” category, Aschenbrenner lumps techniques such as training an AI model with human feedback to be more helpful and letting it use external tools like a calculator.

Combining the “effective compute” OOMs and the “unhobbling” OOMs, Aschenbrenner forecasts at least a five-OOM increase in AI capabilities by 2027, and quite possibly more depending on how well we do on “unhobbling.” Five OOMs is a 100,000x increase in capability, which he assumes will take us to AGI and beyond. He titled the section of his treatise where he explains this “It’s this decade, or bust.”
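To make that arithmetic concrete, here’s a minimal sketch in Python (purely illustrative; the capability_multiplier helper is my own framing of the OOM math described above, not code from “Situational Awareness”):

```python
# Illustrative sketch of the OOM (order-of-magnitude) arithmetic.
# 1 OOM = a 10x multiplier; half an OOM = 10**0.5, or about 3.16x.

def capability_multiplier(ooms: float) -> float:
    """Total capability multiplier implied by a given number of OOMs."""
    return 10 ** ooms

# "Effective compute" alone grows by half an OOM per year, compounding:
for years in (1, 2, 4):
    print(f"{years} year(s): ~{capability_multiplier(0.5 * years):,.0f}x")
# 1 year(s): ~3x
# 2 year(s): ~10x
# 4 year(s): ~100x

# The 2027 forecast of five OOMs total ("effective compute" plus "unhobbling"):
print(f"five OOMs: {capability_multiplier(5):,.0f}x")  # five OOMs: 100,000x
```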

Which brings me to the other camp—call it the “or bust” camp. Among its members is Gary Marcus, the AI expert who has been a perpetual skeptic of the idea that deep learning alone will achieve AGI. (Deep learning is the kind of AI built on large, multi-layer neural networks; essentially all the progress in AI since at least 2010 rests on it.) Marcus is particularly skeptical of LLMs, which he thinks are unreliable plagiarism machines that are polluting our information ecosystem with low-quality, inaccurate content and are ill-suited to any high-stakes, real-world task. Also in this camp is deep learning pioneer and Meta chief AI scientist Yann LeCun, who still believes deep learning of some kind will get us to AGI but thinks LLMs are a dead end.

To these critics of the AGI-is-nigh camp, Aschenbrenner’s “unhobbling” is simply wishful thinking. They are convinced that the problems today’s LLMs have with reliability, accurate corroboration, truthfulness, plagiarism, and staying within guardrails are all inherent to the models’ underlying architecture. They won’t be solved with scale or with some clever methodological trick that leaves that architecture unchanged. In other words, LLMs can’t be unhobbled. All of the methods Aschenbrenner lumps under that rubric are just kludges that aren't robust, reliable, or efficient.

    On the fringes of this “or bust” camp is Bill Gates, who said last week that he thought current approaches to building bigger and bigger LLMs could carry on for “two more turns of the crank.” But he added that we would run out of data to feed these unfathomably large LLMs before we achieve AGI. Instead, what's really needed, Gates said, is “metacognition,” or the ability of an AI system to reason about its own thought processes and learning.

Marcus quickly jumped on social media and his blog to trumpet his agreement with Gates’ views on metacognition. He also asked: if scaling LLMs won't get us to AGI, why waste vast amounts of money, electricity, time, and human brainpower on “two more turns of the crank”?

The obvious answer is that there are now billions upon billions of dollars riding on LLMs—and that investment won’t pay off if LLMs don't work better than they do today. LLMs may not get us to AGI, but they are useful-ish for many business tasks. Those two turns of the crank are about erasing the “ish.” At the same time, no one actually knows how to imbue an AI system with metacognition, so it’s not as if there are clear alternatives into which to pour gobs of money.

    A huge number of businesses have now committed to AI, but are befuddled by how to get current LLM-based systems to do things that produce a good return on investment. Many of the best use cases big companies talk about—better customer service, code generation, and taking notes in meetings—are nice incremental wins, but not strategic game changers in any sense.

    Two turns of the crank might help close this ROI gap. I think that’s particularly true if we worry a bit less about whether we achieve AGI this decade—or even ever. You can think of AGI as a new kind of Turing Test—AI will be intelligent when it can do everything well enough that it's impossible to tell if we're interacting with a human or a computer. And the problem with the Turing Test is that it frames AI as a contest between people and computers. If we think about AI as a complement to human labor and intelligence, rather than as a replacement for it, then a somewhat more reliable LLM might well be worth a turn of the crank.

AI scientists remain fixated on the lofty goals of AGI and superintelligence. The rest of us just want software that works and makes our businesses and lives more productive. We want AI factotums, not human facsimiles.

    With that, here’s more AI news. (And a reminder, we won't be publishing a newsletter on July 4, so you'll next hear from the Eye on AI crew on Tuesday, July 9.)

    Jeremy Kahn
    jeremy.kahn@fortune.com
    @jeremyakahn

Before we get to the news... If you want to learn more about where AI is taking us, and how we can harness the potential of this powerful technology while avoiding its substantial risks, please check out my forthcoming book, Mastering AI: A Survival Guide to Our Superpowered Future. It's out next week from Simon & Schuster and you can preorder your copy here. If you are in the U.K., the book will be out Aug. 1 and you can preorder here.

And if you want to gain a better understanding of how AI can transform your business and hear from some of Asia’s top business leaders about AI’s impact across industries, please join me at Fortune Brainstorm AI Singapore. The event takes place July 30-31 at the Ritz-Carlton in Singapore. We’ve got Ola Electric CEO Bhavish Aggarwal discussing his effort to build an LLM for India; Alation CEO Satyen Sangani talking about AI’s impact on the digital transformation of Singapore’s GXS Bank; Grab CTO Sutten Thomas Pradatheth speaking on how quickly AI can be rolled out across the APAC region; Josephine Teo, Singapore’s minister for communications and information, talking about that island nation’s quest to be an AI superpower; and much, much more. You can apply to attend here. Just for Eye on AI readers, I’ve got a special code that will get you a 50% discount on the registration fee. It is BAI50JeremyK.

    This story was originally featured on Fortune.com
