The Independent

    OpenAI has built a system that can identify text made by ChatGPT – but it is worried about releasing it

By Andrew Griffin

    4 hours ago


ChatGPT creator OpenAI has developed a system that can identify text generated by the AI – but is worried about using it.

Making it easy to spot AI-generated text could create a stigma against using it, the company warned. Among other things, that might make ChatGPT and similar tools less popular.

Since ChatGPT was first released at the end of 2022, it has become one of the most popular websites in the world. Its use has grown in settings such as education, with teachers increasingly reporting that students generate essay answers and other work using the AI system.

In response to those worries and others, AI companies including OpenAI have been working on tools that can identify text generated by an artificial intelligence system such as ChatGPT.

    Now the company has developed a system that can spot AI-generated text in almost all cases, according to reports. But it is worried that being able to identify the text with such certainty could cause problems of its own.

The tool, which has reportedly been in the works for more than a year, leaves the text equivalent of a watermark in the words that are generated. That pattern would not be recognisable to anyone who generated or read the text – but could be easily spotted by a companion AI system, which teachers could use to check whether their students are cheating, for instance.
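OpenAI has not published the details of its method, but the general idea behind this kind of text watermark can be illustrated with a well-known research technique: at each step, the generator pseudo-randomly favours a "green" subset of the vocabulary (seeded by the preceding token), and a detector – which needs no access to the original model, only the seeding scheme – measures how often the text lands in those green sets. The sketch below is a hypothetical toy illustration, not OpenAI's actual system; all names and the 50 per cent green fraction are assumptions.

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary favoured at each step


def green_tokens(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token.

    Both the generator and the detector can reproduce this partition,
    so no secret model weights are needed to check for the watermark.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab.copy()
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])


def green_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from the 'green' list.

    Watermarked text scores well above GREEN_FRACTION; ordinary
    human-written text hovers around it, since the partitions look
    random to anyone not deliberately following them.
    """
    hits = sum(
        tokens[i] in green_tokens(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    return hits / max(1, len(tokens) - 1)
```

A reader cannot see the pattern, because each individual word choice looks natural – but over a long passage the statistical bias becomes overwhelming evidence. This also hints at why the attacks the article describes work: rewording or translating the text replaces the biased token choices and washes the signal out.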

But the company is worried that deploying the watermark would lead fewer people to use ChatGPT, a new report says. An internal survey is said to have shown that nearly 30 per cent of ChatGPT users would use it less if such a watermarking system were in place.

The creation of the tool and the fear that it could limit the use of ChatGPT were first reported by the Wall Street Journal. After the paper published its report, OpenAI updated a blog post from May in which it had discussed the possibility of watermarking text.

    In that blog post, it said that its teams “have developed a text watermarking method that we continue to consider as we research alternatives”. The method is “highly accurate and even effective” against “localised” tampering, such as paraphrasing the text from ChatGPT, but it said it could be fooled by other, more “globalised” techniques, such as translating it into a different language or using another AI model to reword it.

    But it also noted that the “text watermarking method has the potential to disproportionately impact some groups”. It could “stigmatise use of AI as a useful writing tool for non-native English speakers”, the company warned.
