The Atlantic

    Silicon Valley Is Coming Out in Force Against an AI-Safety Bill

    By Caroline Mimbs Nyce


    Since the start of the AI boom, attention on this technology has focused not just on its world-changing potential but also on fears of how it could go wrong. A set of so-called AI doomers has suggested that artificial intelligence could grow powerful enough to spur nuclear war or enable large-scale cyberattacks. Even top leaders in the AI industry have said that the technology is so dangerous, it needs to be heavily regulated.

    A high-profile bill in California is now attempting to do that. The proposed law, Senate Bill 1047, introduced by State Senator Scott Wiener in February, aims to stave off the worst possible effects of AI by requiring companies to take certain safety precautions. Wiener objects to any characterization of it as a doomer bill. “AI has the potential to make the world a better place,” he told me yesterday. “But as with any powerful technology, it brings benefits and also risks.”

    S.B. 1047 subjects any AI model that costs more than $100 million to train to a number of safety regulations. Under the proposed law, the companies that make such models would have to submit a plan describing their protocols for managing the risk, agree to annual third-party audits, and be able to turn the technology off at any time—essentially instituting a kill switch. AI companies could face fines if their technology causes “critical harm.”

    The bill, which is set to be voted on in the coming days, has encountered intense resistance. Tech companies including Meta, Google, and OpenAI have raised concerns. Opponents argue that the bill will stifle innovation, hold developers liable for users’ abuses, and drive the AI business out of California. Last week, eight Democratic members of Congress wrote a letter to Governor Gavin Newsom, noting that, although it is “somewhat unusual” for them to weigh in on state legislation, they felt compelled to do so. In the letter, the members worry that the bill focuses too heavily on the most dire effects of AI and “creates unnecessary risks for California’s economy with very little public safety benefit.” They urged Newsom to veto it, should it pass. To top it all off, Nancy Pelosi weighed in separately on Friday, calling the bill “well-intentioned but ill informed.”

    In part, the debate over the bill gets at a core question about AI: Will this technology end the world, or have people just been watching too much sci-fi? At the center of it all is Wiener. Because so many AI companies are based in California, the bill, if passed, could have major implications nationwide. I caught up with the state senator yesterday to discuss what he describes as the “hardball politics” of this bill—and whether he actually believes that AI is capable of going rogue and firing off nuclear weapons.

    Our conversation has been condensed and edited for clarity.


    Caroline Mimbs Nyce: How did this bill get so controversial?

    Scott Wiener: Any time you’re trying to regulate any industry in any way, even in a light-touch way—which this legislation is—you’re going to get pushback. And particularly with the tech industry. This is an industry that has gotten very, very accustomed to not being regulated in the public interest. And I say this as someone who has been a supporter of the technology industry in San Francisco for many years; I’m not in any way anti-tech. But we also have to be mindful of the public interest.

    It’s not surprising at all that there was pushback. And I respect the pushback. That’s democracy. I don’t respect some of the fearmongering and misinformation that Andreessen Horowitz and others have been spreading around. [Editor’s note: Andreessen Horowitz, also known as a16z, did not respond to a request for comment.]

    Nyce: What in particular is grinding your gears?

    Wiener: People were telling start-up founders that S.B. 1047 was going to send them to prison if their model caused any unanticipated harm, which was completely false and made up. Putting aside the fact that the bill does not apply to start-ups—you have to spend more than $100 million training the model for the bill even to apply to you—the bill is not going to send anyone to prison. There have been some inaccurate statements around open sourcing.

    These are just a couple of examples. It’s just a lot of inaccuracies, exaggerations, and, at times, misrepresentations about the bill. Listen: I’m not naive. I come out of San Francisco politics. I’m used to hardball politics. And this is hardball politics.

    Nyce: You’ve also gotten some pushback from politicians at the national level. What did you make of the letter from the eight members of Congress?

    Wiener: As much as I respect the signers of the letter, I respectfully and strongly disagree with them.

    In an ideal world, all of this should be handled at the federal level. All of it. When I authored California’s net-neutrality law in 2018, I was very clear that I would be happy to close up shop if Congress were to pass a strong net-neutrality law. We passed that law in California, and here we are six years later; Congress has yet to enact a net-neutrality law.

    If Congress goes ahead and is able to pass a strong federal AI-safety law, that’s fantastic. But I’m not holding my breath, given the track record.

    Nyce: Let’s walk through a few of the popular critiques of this bill. The first one is that it takes a doomer perspective. Do you really believe that AI could be involved in the “creation and use” of nuclear weapons?

    Wiener: Just to be clear, this is not a doomer bill. The opposition claims that the bill is focused on “science-fiction risks.” They’re trying to say that anyone who supports this bill is a doomer and is crazy. This bill is not about the Terminator risk. This bill is about huge harms that are quite tangible.

    If we’re talking about an AI model shutting down the electric grid or disrupting the banking system in a major way—and making it much easier for bad actors to do those things—these are major harms. We know that there are people who are trying to do that today, and sometimes succeeding, in limited ways. Imagine if it becomes profoundly easier and more efficient.

    In terms of chemical, biological, radiological, and nuclear weapons, we’re not talking about what you can learn on Google. We’re talking about whether it’s going to be much, much easier and more efficient to do that with an AI.

    Nyce: The next critique of your bill is around harm—that it doesn’t address the real harms of AI, such as job losses and biased systems.

    Wiener: It’s classic whataboutism. There are various risks from AI: deepfakes, algorithmic discrimination, job loss, misinformation. These are all harms that we should address and that we should try to prevent from happening. We have bills that are moving forward to do that. But in addition, we should try to get ahead of these catastrophic risks to reduce the probability that they will happen.

    Nyce: This is one of the first major AI-regulation bills to garner national attention. I would be curious what your experience has been—and what you’ve learned.

    Wiener: I have definitely learned a lot about the AI factions, for lack of a better term—the effective altruists and the effective accelerationists. It’s like the Jets and the Sharks.

    As is human nature, the two sides caricature each other and try to demonize each other. The effective accelerationists will classify the effective altruists as insane doomers. Some of the effective altruists will classify all of the effective accelerationists as extreme libertarians. Of course, as is the case with human existence, and human opinions, it’s a spectrum.

    Nyce: You don’t sound too frustrated, all things considered.

    Wiener: This legislative process—even though I get frustrated with some of the inaccurate statements made about the bill—has actually been, in many ways, a very thoughtful process, with a lot of people with really thoughtful views, whether I agree or disagree with them. I’m honored to be part of a legislative process where so many people care, because the issue is actually important.

    When the opposition refers to the risks of AI as “science fiction,” well, we know that’s not true, because if they really thought the risk was science fiction, they would not be opposing the bill. They wouldn’t care, right? Because it would all be made up. But it’s not made-up science fiction. It’s real.
