
    Novel Chinese computing architecture 'inspired by human brain' can lead to AGI, scientists say

    By Keumars Afifi-Sabet, 5 days ago


    Scientists in China have created a new computing architecture that can train advanced artificial intelligence (AI) models while consuming fewer computing resources — and they hope that it will one day lead to artificial general intelligence (AGI).

    The most advanced AI models today — predominantly large language models (LLMs) like ChatGPT or Claude 3 — use neural networks. These are collections of machine learning algorithms layered to process data in a way that's similar to the human brain, weighing different options to arrive at conclusions.

    LLMs are currently limited because they can't perform beyond the confines of their training data and can't reason the way humans do. AGI, by contrast, is a hypothetical system that can reason, contextualize, edit its own code and understand or learn any intellectual task that a human can.

    Today, creating smarter AI systems relies on building ever-larger neural networks. Some scientists believe neural networks could lead to AGI if scaled up sufficiently, but this may be impractical, given that energy consumption and the demand for computing resources scale up alongside them.

    Other researchers suggest that novel architectures, or a combination of different computing architectures, are needed to achieve a future AGI system. In that vein, a new study published Aug. 16 in the journal Nature Computational Science proposes a novel computing architecture inspired by the human brain that its authors expect to eliminate the practical issues of scaling up neural networks.


    "Artificial intelligence (AI) researchers currently believe that the main approach to building more general model problems is the big AI model, where existing neural networks are becoming deeper, larger and wider. We term this the big model with external complexity approach," the scientists said in the study. "In this work we argue that there is another approach called small model with internal complexity, which can be used to find a suitable path of incorporating rich properties into neurons to construct larger and more efficient AI models."

    The human brain has 100 billion neurons and nearly 1,000 trillion synaptic connections, with each neuron benefiting from a rich and diverse internal structure, the scientists said in a statement. Yet its power consumption is only around 20 watts.

    Aiming to mimic these properties, the researchers took an approach focused on "internal complexity" rather than the "external complexity" of scaling up AI architectures: the idea is that making individual artificial neurons more complex will lead to a more efficient and powerful system.

    They built a Hodgkin-Huxley (HH) network with rich internal complexity, where each artificial neuron was itself an HH model whose internal complexity could be scaled.
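
    For readers who want the underlying math: in its standard textbook form (the paper's variant may differ in detail), the HH membrane equation for a single neuron is

    $$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\, (V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}}) - \bar{g}_L (V - E_L),$$

    where each gating variable $x \in \{m, h, n\}$ evolves as $dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\,x$. The voltage-dependent rates $\alpha_x$ and $\beta_x$ are what give each neuron its rich internal dynamics.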


    Hodgkin-Huxley is a computational model that simulates neural activity and shows the highest accuracy in capturing neuronal spikes (the electrical pulses neurons use to communicate with each other), according to a 2022 study. It has high plausibility for representing the firing pattern of real neurons, a 2021 study shows, and is therefore suitable for modeling a deep neural network architecture that aims to replicate human cognitive processes.
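
    To make that concrete, below is a minimal sketch of a single HH neuron, using the standard textbook squid-axon parameters and simple forward-Euler integration. It illustrates the coupled equations above; it is not the study's actual code, and the paper's neuron variant may differ.

```python
# Minimal single-neuron Hodgkin-Huxley simulation (forward Euler).
# Textbook squid-axon parameters; a sketch of the kind of neuron the
# study builds on, NOT the paper's actual network code.
import numpy as np

# Membrane capacitance (uF/cm^2), channel conductances and reversal potentials (mV)
C_m = 1.0
g_Na, E_Na = 120.0, 50.0    # sodium channel
g_K,  E_K  = 36.0, -77.0    # potassium channel
g_L,  E_L  = 0.3, -54.387   # leak channel

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                   # time step and duration (ms)
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting-state initial conditions
I_ext = 10.0                         # injected current (uA/cm^2)
trace = np.empty(steps)

for i in range(steps):
    # Ionic currents through each channel type
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Membrane equation: C_m * dV/dt = I_ext - (sum of ionic currents)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    # Gating kinetics: dx/dt = alpha(V) * (1 - x) - beta(V) * x
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace[i] = V

print(f"peak membrane voltage: {trace.max():.1f} mV")  # spikes reach ~+40 mV
```

    Enriching this kind of per-neuron dynamics, rather than stacking more layers of simple neurons, is what the researchers mean by "internal complexity."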

    In the study, the scientists demonstrated that this model can handle complex tasks efficiently and reliably. They also showed that a small model based on this architecture can perform just as well as a much larger conventional network of artificial neurons.

    Although AGI is a milestone that still eludes science, some researchers say it is only a matter of years before humanity builds the first such model, though there are competing visions of how to get there. SingularityNET, for example, has proposed building a supercomputing network that relies on a distributed network of different architectures to train a future AGI model.
