  • San Francisco Examiner

    Advocates vow to keep fighting after Newsom vetoes AI safety bill

    By Troy Wolverton

    16 hours ago
    State Senator Scott Wiener speaks on both LGBTQ and reproductive rights at a rally at the Panhandle in San Francisco, Calif., on Saturday, Sept. 7, 2024. Aaron Levy-Wolins/Special to the Examiner

    California won’t soon require safety testing of cutting-edge artificial-intelligence models, but that doesn’t mean an end to the effort to prevent such technology from causing large-scale harms.

    A day after Gov. Gavin Newsom vetoed his AI safety legislation — Senate Bill 1047 — California state Sen. Scott Wiener vowed to keep working on the issue in the state legislature. And the head of a prominent technology safety group suggested that efforts to regulate the technology could now focus on a ballot measure in California or the U.S. Congress.

    Regardless of who takes action or how, proponents say they are driven by the belief that regulators need to do something soon. AI systems that could pose catastrophic dangers to humanity could be available in as soon as three years, said Anthony Aguirre, executive director of the Future of Life Institute.

    So, coming up with a way to prevent those harms is “time sensitive,” Wiener, a San Francisco Democrat, said.

    “This technology is very powerful, and it’s rapidly getting more powerful, which has great, huge potential benefit but also significant risk,” he said. “So let’s try to get ahead of those risks for a change.”

    SB 1047 was an effort to do just that. In development for more than a year and passed by the legislature last month, the bill would have required developers of certain AI models to test to see whether they posed an “unreasonable risk” of causing a catastrophe. Wiener’s bill defined such dangers as events that cause at least $500 million worth of damages or lead to mass casualties, or the development of nuclear, biological or other weapons of mass destruction.

    The legislation would have allowed the state attorney general to sue companies whose models led to such harms if they didn’t follow the bill’s safety testing protocols and transparency requirements. It also would have opened developers up to prosecution for perjury if they falsely stated that their models were safe or that they had done adequate testing on their technology.

    The bill was backed by Tesla CEO Elon Musk; San Francisco-based Anthropic, one of the leading AI developers; and Geoffrey Hinton and Yoshua Bengio, early developers of the technology who are known as two of the “godfathers of AI.” It also had support from the broader public, according to a YouGov poll commissioned by one of the bill’s sponsors.

    But the legislation also faced intense opposition from powerful companies and figures in the AI industry and the broader tech sector, including ChatGPT developer OpenAI, AI investor and startup incubator Y Combinator, venture firm Andreessen Horowitz and Stanford AI researcher Fei-Fei Li. The U.S. Chamber of Commerce and politicians such as House Speaker Emerita Nancy Pelosi also spoke out against the bill.

    Opponents argued the legislation would stifle innovation and development of the technology, particularly by smaller companies, nonprofits and universities, in an effort to fight what many saw as a distant, theoretical threat.

    Proponents of AI regulation are much less sanguine about that notion. Artificial general intelligence that’s smarter than humans will likely be here in 2026, said Daniel Colson, executive director of the Artificial Intelligence Policy Institute. Such technology could develop weapons far more powerful than those available today, even as humans have less control over it, he said.

    “We’re way behind preparing for this,” said Colson, whose group focuses on public opinion about and policy responses to the dangers AI poses. “I think it’s basically happening now.”

    The outcome of the debate has huge implications for San Francisco and the wider Bay Area. The City has become ground zero for AI development as home to the two leading model developers, OpenAI and Anthropic. Silicon Valley-based Google and Meta also are players in the industry.

    Colson and Wiener are focused on the California legislature. Despite the governor’s rejection of SB 1047, Wiener said he found reason to hope that there could be a path forward there.

    In his veto statement, Newsom acknowledged that AI does potentially pose big risks and the government does have a responsibility to protect the public from those dangers, Wiener noted. In an email announcing his decision, the governor said he was working with Li and other technology policy experts to develop some guidelines for developing and deploying AI safely.

    Wiener said he’d work with the governor and Newsom’s policy advisor to come up with new AI safety legislation in the next legislative session, which begins in two months.

    “We’re committed to this issue, and we’re not going anywhere,” he said.

    But Aguirre was dubious that any new bill in the California legislature would get any farther than SB 1047 did. He said he suspects the reasons Newsom vetoed the bill had more to do with the pushback it got from the big tech companies and investors than concerns about how it was worded or structured.

    SB 1047’s testing requirements would have applied to AI models that cost at least $100 million to train and use more computing power than any models yet developed. One of the reasons Newsom gave for vetoing the bill was that such thresholds would leave out models that were developed for less money or with less computing power but pose similar threats.

    “That could give the public a false sense of security about controlling this fast-moving technology,” Newsom said in his veto message.

    But any effort to put a broader range of models under SB 1047’s auspices would likely have faced even more intense backlash from the tech industry, Aguirre said.

    “I don’t take at face value that the problem was that the bill was not strong enough or broad enough,” he said. “I think that was not what the actual difficulty was.”

    Rather than expecting the legislature to pass a bill that Newsom will sign, Aguirre said he thinks there might for now be more hope in a ballot initiative or in national legislation. A poll last month by the AI Policy Institute showed strong support among California voters for similar legislation to be enacted via a ballot measure if Newsom vetoed the bill.

    While Congress has struggled for years to pass meaningful legislation to regulate the tech industry, whether to protect people’s privacy, to regulate the social media companies or to strengthen the antitrust laws, Aguirre said he thinks there’s a chance things could be different with AI.

    The executive branch is saying that it can’t do much to regulate the technology without new legislation, he said. The hope, he said, is that the administration and activists will push Congress to start working on that legislation, so that by the time policymakers fully realize the danger AI poses, they can quickly enact something that has already been thought through.

    “It’s hope born of desperation, to be honest,” Aguirre said. “When there’s something that is going to be an enormous problem and there’s only one real solution to it, you have to put energy into that solution, no matter how unlikely it feels.”

    If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.
