
    Why the fears of AI model collapse may be overstated

By Joe Foley, 15 hours ago

AI is revolutionising everything from drug development to the creation of design assets via the best AI image generators. But while models have grown more and more powerful over the past two years, concern has been growing about so-called AI model collapse.

The fear is that we're reaching a point where generative AI models are as good as they're going to get, and that future models will decline in quality due to AI cannibalisation. As we've reported before, research has shown that training AI image generators on AI images quickly leads to a deterioration in output. Something similar happens with chatbots: training on synthetic data leads to increasingly nonsensical results. But is an AI meltdown really inevitable?

    What is model collapse in AI?

    Research has shown that training AI with AI-generated content doesn't go well (Image credit: Matyas Bohacek / Hany Farid)

    Until now AI models have mainly been trained on quality input produced by humans, much of it scraped from the internet. But the use of AI has quickly become so widespread that the internet is now full of AI-generated content of dubious usefulness and quality. This has led to the fear that future AI models will inevitably end up being trained on data generated by earlier versions of themselves and will degrade in quality as a result.

    An AI model's output isn't perfect, so if a model is then trained on its own mistakes, it becomes less and less accurate. Output becomes more unreliable with each version. AI is now so important in so many areas that this could have huge implications. Businesses could find that AI-based tools become less reliable, customers may become dissatisfied, and existing biases could become even worse if AI models lose their understanding of more marginalised groups or less common situations.
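The mechanism can be illustrated with a deliberately crude sketch, nothing like a real language model: a "model" that simply resamples its own training data. Each generation trains only on the previous generation's output, so rare words that drop out of a sample can never return, and the vocabulary shrinks. The vocabulary, weights and sample sizes here are all made up for illustration.

```python
import random

# A toy "model" that just reproduces its training distribution:
# each generation resamples, with replacement, from the previous
# generation's output and nothing else.
random.seed(42)

vocab = [f"word{i}" for i in range(100)]
weights = [1 / (i + 1) for i in range(100)]             # Zipf-like long tail
corpus = random.choices(vocab, weights=weights, k=500)  # the "human" data

supports = [len(set(corpus))]                 # distinct words each round
for generation in range(30):
    corpus = random.choices(corpus, k=500)    # train on own output only
    supports.append(len(set(corpus)))

print("distinct words per generation:", supports)
```

Because each generation can only contain words present in the one before it, the count of distinct words never goes up: diversity is lost one tail word at a time, which is a rough analogue of the degradation the research describes.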

    Is there a solution to model collapse?

To train GPT-3, OpenAI needed over 650 billion English words of text – about 200x more than the entire English Wikipedia. But this required collecting almost 100x more raw data from the internet, up to 98% of which was then filtered and discarded 🤯 https://t.co/MjF9zf6hAv — August 14, 2024

The most obvious solution would be to filter out AI-generated content, but it's becoming harder to detect unless it's tagged in some way, as with Adobe's Content Credentials. And filtering all that data isn't cheap.

    AI giants like OpenAI have been seeking to shore up their supply of quality data through deals with stock image sites and news agencies, but we may reach the point where sources of exclusively human-produced data come to an end. And it's likely that AI outputs will be regurgitated many times if people believe and replicate the AI answers they get from the likes of ChatGPT or Google.

It remains to be seen whether ensuring the quality of data for future training is financially viable, but collaboration could help. The mutual sharing of data sources and training methodologies could help AI developers avoid accidentally training on AI-generated data, while periodically introducing fresh, human-generated data could reduce the risk of model collapse. A proliferation of smaller AI models could also offer some protection.


These will be important things to consider, but the situation might not be as drastic as the visions of AI meltdown suggest. The research into model collapse so far is based on a hypothetical situation in which a model is trained entirely on synthetic data; in real-world situations, there will likely be at least some quality human data in the mix.

Nobody reads these papers before posting them, because they all show that preserving even a tiny fraction (here 10%) of the initial training data suffices to stop model collapse. You only see collapse if you train generation N exclusively on gen N-1's outputs *and nothing else*! https://t.co/VF6W1PEKuP pic.twitter.com/CnnEJuiUcg — July 29, 2024
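Extending the same kind of toy resampling sketch shows why mixing in real data matters: re-injecting a slice of the original human data each round gives lost words a chance to return. This is a hypothetical illustration of the 10% figure from the tweet, not a claim about how real training pipelines work.

```python
import random

def surviving_vocabulary(generations, human_fraction):
    """Toy resampling loop: count how many distinct words survive when
    each round's training pool keeps a fixed slice of human data."""
    random.seed(0)
    vocab = [f"word{i}" for i in range(100)]
    weights = [1 / (i + 1) for i in range(100)]   # Zipf-like long tail
    human = random.choices(vocab, weights=weights, k=500)
    corpus = list(human)
    keep = int(500 * human_fraction)              # human samples re-injected
    for _ in range(generations):
        pool = corpus[:500 - keep] + human[:keep]
        corpus = random.choices(pool, k=500)      # next "model" resamples
    return len(set(corpus))

pure = surviving_vocabulary(30, 0.0)    # trained only on its own output
mixed = surviving_vocabulary(30, 0.1)   # 10% human data kept each round
print(f"after 30 generations: pure={pure} words, mixed={mixed} words")
```

In the pure case the vocabulary can only shrink, whereas the 10% human slice keeps rare words available to every generation, which is the point the tweet is making about preserving even a small fraction of the initial training data.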

    However the issue is resolved, it seems clear that AI is here to stay in the creative sector and is likely to continue to power most major new advances in creative software. Meanwhile, AI image generation continues to become more terrifyingly realistic with the emergence of Flux AI and Deep-Live-Cam.


    Joe is a regular freelance journalist and editor at Creative Bloq. He writes news, features and buying guides and keeps track of the best equipment and software for creatives, from video editing programs to monitors and accessories. A veteran news writer and photographer, he now works as a project manager at the London and Buenos Aires-based design, production and branding agency Hermana Creatives. There he manages a team of designers, photographers and video editors who specialise in producing visual content and design assets for the hospitality sector. He also dances Argentine tango.
