    AI Models Like ChatGPT Do Not Pose An "Existential Threat" To Humanity

    By Dr. Russell Moul

    Many people worry that AI technology like ChatGPT will eventually develop the ability to reason in ways that threaten us, but is that really possible? Image credit: cono0430/Shutterstock.com

    A new study argues that ChatGPT and other large language models (LLMs) are incapable of independent learning or acquiring skills without human input. This pours cold water on the belief that such systems could pose existential risks to humanity.

    LLMs are scaled-up versions of pre-trained language models (PLMs), which are trained on massive bodies of web-scale data. Access to such immense amounts of data makes them capable of understanding and generating natural language and other content that can be put to a wide range of tasks.

    However, they can also exhibit “emergent abilities”: unanticipated capabilities on tasks they were not explicitly trained for. These have included tasks that would otherwise seem to require some form of reasoning. For instance, one reported emergent ability is an LLM's apparent understanding of social situations, inferred from its performing above the random baseline on the Social IQA – a measure of commonsense reasoning about social situations.

    The inherent unpredictability associated with emergent abilities, especially given that LLMs are being trained on ever larger datasets, raises substantial questions about safety and security. Some have argued that future emergent abilities could include potentially hazardous ones, such as reasoning and planning, which could threaten humanity.

    However, the new study shows that LLMs have only a superficial ability to follow instructions and excel at language proficiency; they have no capacity to master new skills without explicit instruction. This means they remain inherently predictable, safe, and controllable, though they can still be misused by people.

    As these models continue to be scaled up, they are likely to generate more sophisticated language and become more accurate when given detailed and explicit prompts, but they are highly unlikely to acquire complex reasoning skills.

    “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, explained in a statement.

    Tayyar Madabushi and colleagues, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks that the models have never come across – essentially, their propensity to develop emergent abilities.

    When it came to their ability to perform above the random baseline on the Social IQA, past researchers assumed the models “knew” what they were doing. However, the new study argues that this is not the case. Instead, the team show that the models were using a well-known ability to complete tasks based on a few examples presented to them – what is known as “in-context learning” (ICL).
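
    To make “in-context learning” concrete, here is a minimal Python sketch of what it looks like in practice (the sentiment-labelling task and prompt wording are invented for illustration, not taken from the study): the “learning” is nothing more than a prompt containing a few worked examples, which the model completes at inference time without updating any of its weights.

    # A minimal, hypothetical sketch of in-context learning (ICL).
    # The task (sentiment labelling) and the prompt format are assumptions made
    # for illustration; they are not taken from the study itself.

    examples = [
        ("The food was wonderful and the staff were friendly.", "positive"),
        ("We waited an hour and the soup arrived cold.", "negative"),
    ]

    def build_few_shot_prompt(new_review: str) -> str:
        """Assemble the worked examples plus the new query into one prompt."""
        demos = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
        demos.append(f"Review: {new_review}\nSentiment:")
        return "\n\n".join(demos)

    prompt = build_few_shot_prompt("Great view, but the room was tiny.")
    print(prompt)
    # Sent to an LLM, this prompt is typically completed with "positive" or
    # "negative": the model follows the pattern it has just been shown, rather
    # than exercising any newly acquired reasoning ability.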


    By running over 1,000 experiments, the team demonstrated that LLMs' ability to follow instructions (ICL), their memorization, and their linguistic proficiency can account for both their capabilities and their limitations.

    “The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Tayyar Madabushi added.

    “This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

    Importantly, the fears over the existential threats posed by these models are not unique to non-experts; they have also been expressed by top AI researchers across the world. However, the team believe the fear is unfounded, as their tests clearly show the absence of emergent complex reasoning abilities in LLMs.

    “While it's important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Tayyar Madabushi said.

    “Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
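
    The practical upshot can be illustrated with a short, hedged example (the task and prompt wording below are invented for illustration, not drawn from the study): rather than handing the model a vague request, spell out the task, fix the output format, and, where possible, include a worked example.

    # A hypothetical before/after illustrating the advice above; the task and
    # wording are assumptions, not material from the study.

    # Under-specified: the model must guess what kind of summary is wanted.
    vague_prompt = "Summarise this customer feedback."

    # Explicit: states the task, fixes the output format, and includes one
    # worked example, in line with the recommendation to specify requirements
    # and provide examples for all but the simplest tasks.
    explicit_prompt = (
        "Summarise the customer feedback below in exactly two bullet points: "
        "one for the main complaint and one for any suggested fix.\n\n"
        "Example:\n"
        "Feedback: The app keeps logging me out; maybe cache my session.\n"
        "- Complaint: frequent forced log-outs.\n"
        "- Suggested fix: keep the user's session alive.\n\n"
        "Feedback: {feedback}"
    )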

    However, the team do stress that these results do not rule out all threats related to AI. As Professor Gurevych explained, “[We] show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

    The study is published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

    This article was first published on IFLScience: AI Models Like ChatGPT Do Not Pose An "Existential Threat" To Humanity.