San Francisco Examiner

    Palm founder Jeff Hawkins’ next big thing is an AI that learns like humans

    By Troy Wolverton

    Jeff Hawkins, co-founder of Numenta: “It’s very clear the brain uses the same mechanism for everything — for mathematics, for language, for understanding politics, and so on.” Craig Lee/The Examiner

    Jeff Hawkins didn’t invent the PalmPilot because he passionately wanted to pioneer mobile computing, he said. It was just a means to get back to his true love — studying neuroscience.

    Hawkins, 67, quit a tech job in the mid-’80s to go back to graduate school to study how brains work. When he realized he couldn’t get funding for the research he wanted to do, he went back into the tech industry with the express purpose of making enough money to fund his efforts himself, he told The Examiner in an interview earlier this month.

    It took longer than he expected, he said. But by the early 2000s, after founding Palm, inventing its signature gadget and developing some of the first smartphones, Hawkins was in a place to start realizing his dream.

    He founded a neuroscience research institution and then Numenta, a Redwood City company whose mission is to figure out how brains learn and apply that to computers. By 2007, he’d left Palm and was focusing on Numenta.

    After more than a decade, his efforts may be about to pay off. Numenta made a fundamental discovery about how brains learn, according to Hawkins, which he laid out in his 2021 book, “A Thousand Brains: A New Theory of Intelligence.”

    The company has since been working on a new type of artificial intelligence that’s designed to learn and think in the same way. In the coming weeks, it plans to release its models to the public on an open-source basis, allowing developers to use and build on its technology.

    Hawkins spoke with The Examiner about how Numenta’s AI works, how it’s different from ChatGPT and similar AI technologies, and how it could be used. This interview has been edited for length and clarity.

    What did you discover about brains, and how is that applicable to AI models? Fundamentally, brains learn differently than AI systems. They’re sensorimotor learning systems. They learn by movement and the input changes based on how they move. You don’t feed text or images into a brain. The brain explores the world. That’s been known for a long time.

    What we discovered — the big discovery — was how those two inputs, the sensation from your sensors, like your eyes and your skin, get combined with the movement vectors. The brain keeps track of where all parts of your body — your retina and your ears — are relative to all kinds of things in this world.

    At least half the brain is dedicated to tracking movements and keeping track of where things are in the world relative to other things, using reference frames — in the brain, they’re called grid cells — to pair location with observation and build models. Like, “this was observed at this location.” “Something else was observed at a different location.” That’s how models of the world are built.
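    The pairing Hawkins describes can be sketched in a few lines of code. This is an illustrative toy, not Numenta’s actual algorithm: here a “model” is simply a set of (location, feature) pairs accumulated as a sensor moves over an object, and recognition checks how well newly sensed pairs match a stored model.

```python
# Toy sketch of location-observation pairing (not Numenta's algorithm):
# an object model is a mapping from locations to the features observed there.

class ObjectModel:
    """Learns an object as a mapping from locations to observed features."""

    def __init__(self):
        self.features_at = {}  # location -> feature

    def observe(self, location, feature):
        """Record what was sensed at a given location."""
        self.features_at[location] = feature

    def recognize(self, observations):
        """Fraction of (location, feature) pairs consistent with this model."""
        matches = sum(
            1 for loc, feat in observations
            if self.features_at.get(loc) == feat
        )
        return matches / len(observations)


# Build a model of a "cup" by moving a sensor (say, a fingertip) over it:
cup = ObjectModel()
cup.observe((0, 0), "curved_surface")
cup.observe((0, 5), "rim")
cup.observe((3, 2), "handle")

# Later, sensing the same features at the same locations identifies the cup.
print(cup.recognize([((0, 0), "curved_surface"), ((3, 2), "handle")]))  # 1.0
```

    The point of the sketch is that the model is built from movement: each observation is meaningless without the location the sensor was at when it was made.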

    How does that learning process translate from physical objects and locations to concepts or ideas? Brains start out trying to understand the physical world. “I need to know where I am.” “I need to know how to get back to where I came from.” “I need to know where I last saw food.” Movement in the world requires reference frames and requires knowledge about where you are.

    Then it got extended in a big way in humans to knowing where my hands and body are relative to objects, relative to things I’m touching. So, I build similar models of physical things. And this allowed us to build complex tools to manipulate objects and so on.

    It’s very clear the brain uses the same mechanism for everything — for mathematics, for language, for understanding politics, and so on.

    The whole point of sensorimotor is you move and you sense something different, you move and you sense something different, and you build models on this.

    In math, we have equivalents of movements. These are operators you apply. “I’m going to take this thing, and I’m going to manipulate it with a Fourier transform.” “I’m going to manipulate it with divide” or something like that. And then you end up in a different space. And so the brain figures out a structural model of mathematics.
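    One way to picture operators-as-movements is as a path through a space of values, where each operation is a step and each intermediate result is a location. A minimal sketch, with arbitrary example operations standing in for Hawkins’ Fourier transform and divide:

```python
# Toy sketch: mathematical operators as "movements" through a space of
# values, each step pairing the movement taken with the location reached.

path = []
x = 12.0
for op_name, op in [("divide by 4", lambda v: v / 4),
                    ("add 7", lambda v: v + 7),
                    ("square", lambda v: v * v)]:
    x = op(x)                 # apply the operator: "move" to a new value
    path.append((op_name, x)) # record (movement, resulting location)

# The recorded sequence is a structural model of this little computation,
# analogous to a path through physical space.
for step in path:
    print(step)
```

    Under this analogy, knowing mathematics means knowing which movements are available from a given location and where they lead, which is the structural intuition mathematicians describe.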

    If you talk to mathematicians, they build a physical picture of mathematical structures. It’s not like they’re just applying some formula. They intuitively sense the structure of math.

    How can you apply that insight about how brains learn to AI models? Do we have to hook AI models into robots or something that’s able to move in physical space? At one extreme, yes, you could have some sort of physical embodiment like a robot. [But] you can have the same basic thing without physical movement.

    Let’s say I wanted to have a system that intelligently understands what’s going on in a building. I could basically do this with a whole series of cameras [where] the AI system itself doesn’t look at them all at once. If I go to another room, I just switch to a different camera. It’s as if I was walking around and looking at it, but it doesn’t require an embodiment. It would still learn what’s going on in this world.

    And then there are things which are totally out of the realm of human experience. What if you wanted an AI system [that] could learn the structure of the internet? It could just be following different types of links. These are like movements and sensing different things at different locations. And then [it would] build the structural models of data and how the world is structured on the internet.
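    That idea, treating links as movements and pages as observations, can be sketched as a toy crawl over a made-up link graph. The URLs and page contents below are invented for illustration; the point is that the crawler ends up with both observations at locations and a map of how locations connect:

```python
# Toy sketch: "moving" through a link graph, pairing each location (URL)
# with an observation (page content) and recording the structure traversed.

site = {
    "/home": {"content": "welcome", "links": ["/docs", "/blog"]},
    "/docs": {"content": "manual",  "links": ["/home"]},
    "/blog": {"content": "posts",   "links": ["/home", "/docs"]},
}

model = {}     # location -> observation
edges = set()  # (from, to) movements discovered
frontier = ["/home"]
while frontier:
    url = frontier.pop()
    if url in model:
        continue                      # already visited this location
    page = site[url]
    model[url] = page["content"]      # observe at this location
    for link in page["links"]:        # each link is a possible movement
        edges.add((url, link))
        frontier.append(link)

print(sorted(model))  # every location visited
print(sorted(edges))  # the learned structure of the space
```

    The resulting model captures not just what is at each location but how the locations relate, which is the “structural model” of the data that a purely text-trained system never builds.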

    How do you think this type of AI system will initially be used, and what are the best applications for it? The obvious things to do are apply it to things we know already, like visual systems — make a better visual system that really understands the world.

    I don’t want to do surveillance systems, per se, but if you’re trying to understand what’s going on in some environment in a much deeper way than today’s systems can, it would be great at that, whether you’re trying to monitor the movements of goods or people or you’re trying to monitor manufacturing processes or things like that.

    I can see a lot of this would be valuable in self-driving cars. The reason these self-driving cars are dangerous is because they have these corner cases where you haven’t trained them. Humans don’t have that problem, because we have fundamental models of the world and we know what things are actually happening out there. I think these will be very applicable in those types of applications.

    But the most exciting answers are the ones we can’t think of today. Humans will be able to use these tools to explore the world better than we can and to basically generate new knowledge, whether it’s exploring space, or exploring the world of the small or the world of the slow and the fast.

    Let’s say your system is widely adopted. Will there still be a place for large language models such as ChatGPT or similar AI technology? Yes. The large language models have shown that language isn’t as hard as we thought. If you have enough data and you just train it on the basic statistics of that data, they get really good. And so just like a calculator is really good at math, these things are going to be better than us at language. It’s just going to be a useful tool, just like calculators are a useful tool.

    They can apply these transformer networks that are the basis of language models to other problems, [like] image generation. They’ll apply to anywhere where you have a lot of data, where the data covers a really broad spectrum of everything you might want to be able to do, where you’re not really asking the system to come up with anything new, you’re just trying to play back, in a really smart way, what humans already have done.

    But if you go out 20, 30 years, I don’t think anyone’s going to be thinking about those as intelligent machines, just like they don’t think about calculators as intelligent machines. Because the real intelligent machines are the ones we’re going to build. These are machines that can really do things in the world, that manipulate and solve problems and think about possible alternate ways of accomplishing things.

    If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.
