    Former OpenAI researchers warn of 'catastrophic harm' after the company opposes AI safety bill

By Katherine Tangalakis-Lippert

17 hours ago

    Ex-OpenAI researchers warned of "catastrophic harm" in a letter to California Gov. Gavin Newsom after the company, led by CEO Sam Altman, opposed SB 1047, a proposed AI safety bill.
    • OpenAI opposes a proposed AI bill imposing strict safety protocols, including a "kill switch."
    • Two former OpenAI researchers said the company's opposition is disappointing but not surprising.
    • They say rapid AI advancement without regulation risks "catastrophic harm to the public."

    Two former OpenAI researchers are speaking out against the company's opposition to SB 1047, a proposed California bill that would implement strict safety protocols in the development of AI, including a "kill switch."

    The former employees wrote in a letter first shared with Politico to California Gov. Gavin Newsom and other lawmakers that OpenAI's opposition to the bill is disappointing but not surprising.

    "We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," the researchers, William Saunders and Daniel Kokotajlo, wrote in the letter. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."

    Saunders and Kokotajlo did not immediately respond to requests for comment from Business Insider.

    They continued: "Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public."

    OpenAI CEO Sam Altman has repeatedly and publicly supported the concept of AI regulation, Saunders and Kokotajlo wrote, citing Altman's congressional testimony calling for government intervention, but "when actual regulation is on the table, he opposes it."

A spokesperson for OpenAI told BI in a statement: "We strongly disagree with the mischaracterization of our position on SB 1047." The spokesperson directed BI to a separate letter written by OpenAI's Chief Strategy Officer, Jason Kwon, to California Senator Scott Wiener, who introduced the bill, explaining the company's opposition.

    SB1047 "has inspired thoughtful debate," and OpenAI supports some of its safety provisions, Kwon's letter, dated a day before the researchers' letter was sent, read. However, due to the national security implications of AI development, the company believes regulation should be "shaped and implemented at the federal level."

    "A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards," Kwon's letter read.

    But Saunders and Kokotajlo aren't convinced the push for federal legislation is the sole reason OpenAI opposes California's SB 1047, saying the company's complaints about the bill "are not constructive and don't seem in good faith."

    "We cannot wait for Congress to act — they've explicitly said that they aren't willing to pass meaningful AI regulation ," Saunders and Kokotajlo wrote. "If they ever do, it can preempt CA legislation."

    The former OpenAI employees concluded: "We hope that the California Legislature and Governor Newsom will do the right thing and pass SB 1047 into law. With appropriate regulation, we hope OpenAI may yet live up to its mission statement of building AGI safely."

    Representatives for Wiener and Newsom did not immediately respond to requests for comment from BI.

    Read the original article on Business Insider