    AI’s openness is being sharply debated by technologists, policymakers

    By Troy Wolverton / The Examiner

    June 16, 2024
    Venture capitalist Marc Andreessen said at a Stanford event June 5 that restricting open-source access to AI would lead to a cartel of big companies dominating the technology. Troy Wolverton/The Examiner

    The open-source software movement has long had broad support in Silicon Valley. The question of whether cutting-edge artificial-intelligence models ought to be available on an open-source basis, though, is sparking a serious debate from here to Washington, D.C., and beyond.

    On one side of the debate are people such as prominent Silicon Valley venture capitalist Vinod Khosla, who said he sees the United States in a “war” with China over the future direction of the world. Whichever country develops the best artificial-intelligence models will win the global economic war and shape the rest of the planet socially, economically and politically, Khosla said at the Bloomberg Technology Summit in San Francisco last month.

    In that context, it doesn’t make any sense for the U.S. to allow its best AI models to be freely available to China, he said.

    “It’s a national-security hazard to open source,” Khosla said.

    On the flip side, there are people such as Marc Andreessen, perhaps an even better-known tech venture capitalist. At a Stanford event June 5 celebrating the fifth anniversary of the university’s Institute for Human-Centered Artificial Intelligence, Andreessen argued that restricting open-source access to AI would lead to a cartel of big companies dominating the technology and would undermine academic research into it.

    And that’s assuming such restrictions were even feasible, he said. Truly preventing the spread of open-source AI might require a global surveillance regime, bombing rogue data centers that distributed open-source AI or threatening nuclear war, he said, pointing to an argument made by Eliezer Yudkowsky, an AI researcher who advocates shutting down advanced AI development.

    “You have to ask yourself what kind of society would you need to design that would have the enforcement mechanisms to enforce an open-source ban?” Andreessen said. “Now you start to get into [George] Orwell territory.”

    If those opposing views each sound apocalyptic in their own way, it might not be surprising, given the excitement and hype AI has engendered. Depending on whom you ask, AI has the potential to be perhaps the biggest boon ever to humanity — or its biggest bane, destined to wipe the human race out of existence.

    As might be expected with stakes seemingly that high, the debate over whether AI should be open-sourced isn’t confined to table talk at tech conferences. It has been the subject of research papers, white papers and opinion pieces in policy journals — and public policymakers are starting to focus on it as well.

    A bill authored by state Sen. Scott Wiener, D-San Francisco, that passed the state Senate last month would put new, potentially hard-to-meet restrictions on open-source AI development. And in February, the National Telecommunications and Information Administration, a branch of the U.S. Department of Commerce, solicited comments from the public about the risks and benefits of open-sourcing AI with the stated purpose of informing policy recommendations it would make to President Joe Biden.

    Silicon Valley and open source

    Open sourcing is the practice of making software and its underlying code freely available so that they can be modified or redistributed with few, if any, limitations. The practice has become incredibly popular over the last 30 years, in part because open source has made it easier for developers to collaborate, identify and fix bugs, and build on each other’s work.

    Some of the most popular software around the world is either available on an open-source basis or built on open-source software, including the Android and Linux operating systems; Google’s Chrome, Mozilla’s Firefox and Apple’s Safari web browsers; the Nginx and Apache web servers, which serve most of the world’s web pages; and programming languages such as Perl, Python, Java and many others.

    Because the source code of open-source software is freely available, people other than the original developers can add to or tweak it, even to make competing products. To create Chrome, for example, Google originally used WebKit, the same browser engine underlying Safari. It eventually created its own browser engine, Blink, by forking WebKit.

    Industry’s mixed embrace

    When it comes to artificial-intelligence systems, however, open source works a bit differently. Such systems consist of multiple components that typically include a model architecture, which is the core algorithm that determines what the system does with and learns from input data; model weights, which are the variables that determine how inputs such as prompts are turned into outputs such as illustrations or essays; the software code that’s used to train the model or run it after it’s trained; and the training data.

    A developer can choose to open up or provide access to any of those components or combinations of them. Much of the debate around open-sourcing AI has focused on model weights, which are key to how the systems work.
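    Since each of those pieces can be released or withheld separately, it helps to see them side by side. Below is a minimal sketch of the four components in PyTorch, using a made-up toy model and random data purely for illustration:

        import torch
        import torch.nn as nn

        # 1. Model architecture: the code defining how inputs flow to outputs.
        class TinyClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

            def forward(self, x):
                return self.net(x)

        model = TinyClassifier()

        # 2. Training data: here, a made-up batch of inputs and labels.
        inputs = torch.randn(8, 4)
        labels = torch.randint(0, 2, (8,))

        # 3. Training code: the loop that adjusts the weights to fit the data.
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(inputs), labels)
        loss.backward()
        optimizer.step()

        # 4. Model weights: the learned parameters. Releasing only this file,
        #    without the code or data above, is what "open weights" means.
        torch.save(model.state_dict(), "weights.pt")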

    As researchers at Stanford HAI laid out in a research paper in December, AI systems run the gamut in terms of their openness. Google’s Flamingo, which its DeepMind unit launched in 2022, is completely closed to outsiders. By contrast, all the components of GPT-NeoX from nonprofit research group EleutherAI are open.

    Meta and OpenAI’s models are in between those extremes. Meta made the weights for its Llama 2 model available, but it has restricted how the model can be used and hasn’t opened up the code or training data. OpenAI allows developers to tap into its GPT-3.5 and GPT-4 models and incorporate them into their own apps, but it hasn’t released their weights.
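    In practice, the difference shows up in how developers reach each model. Here is a rough sketch, assuming the Hugging Face transformers library and OpenAI’s official Python client; note that downloading Llama 2 requires accepting Meta’s license terms, and both halves need valid credentials:

        # Open weights: download Llama 2's parameters and run them locally.
        # Assumes Meta's license has been accepted and a Hugging Face token is set.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
        model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
        prompt = tokenizer("Hello", return_tensors="pt")
        output = model.generate(**prompt, max_new_tokens=20)
        print(tokenizer.decode(output[0]))

        # Closed model: GPT-4's weights stay on OpenAI's servers. Developers
        # send prompts to an API and get completions back, nothing more.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(reply.choices[0].message.content)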

    In general, the more closed an AI model is, the more easily its developers can control how it’s used and who can use it. The more open an AI model is, the more easily people other than its developers can tweak or customize it for their own purposes. But with open models, there’s no going back — once a developer opens up a model, it’s essentially open forever.

    Those who warn about the dangers of allowing ready access to model weights generally put the classes of harms into two big buckets. One involves the risk that U.S. adversaries such as China and Russia will take advantage of such access to speed up their own AI development and use the technology to harm American interests or citizens.

    The other risk is that hackers, criminal syndicates, terror groups or other malicious actors will exploit that openness to create their own AI systems to do things such as spread misinformation, launch cyberattacks or get the recipes to build biological or chemical weapons.

    While some of those dangers are theoretical, others — such as using AI to generate fake pornographic or child sexual-abuse images — are already happening.

    Advocates: Fears overblown

    In terms of open-sourcing software, “the unique thing with AI is ... the extent of misuse and harms that could be caused because of how powerful and capable the systems are,” said Elizabeth Seger, the director of technology policy at Demos, a U.K.-based public-policy think tank.

    But open-source advocates argue that many of the fears raised by skeptics are overblown. Recent research indicates that the latest AI models wouldn’t be particularly helpful at designing a bioweapon and aren’t particularly good at crafting personally targeted propagandistic messages that might be used in disinformation campaigns.

    Additionally, they warn that restricting open-source models could curtail academic research into AI and further entrench the early leaders in the development of big, broad foundation models, such as OpenAI. And they say that in the global race to spread AI technology, such limitations wouldn’t necessarily keep AI systems out of the hands of U.S. rivals or malicious actors. Instead, they could give a leg up to models developed in countries such as China that wouldn’t be imbued with U.S. values.

    “We do not have a monopoly on advanced AI technology as a country,” said Oren Etzioni, a longtime AI researcher who is the founder of TrueMedia.org, which uses the technology to identify fake AI-generated images, videos and audio recordings. “Ultimately, that is a significant limitation to any attempt to restrict adversaries.”

    State lawmakers weigh in

    The debate over the risks and benefits of open-source AI is set to play out in the California Assembly, which is now considering Senate Bill 1047, Wiener’s AI safety bill. Advocates on both sides of the debate say the legislation could hamper open-source AI development by making developers liable if they fail to take sufficient steps to ensure that their AI systems, or ones derived from them, can’t be used to cause harm.

    Naturally, the two sides differ over whether that would be a good thing.

    “I think it’s a little bit crazy,” said Keegan McBride, a lecturer in artificial intelligence, government and policy at the University of Oxford. “I appreciate what the senator ... is trying to do. I understand the motivations behind it. But at least in its current form, it’s basically set to kill a whole lot of AI innovation in the state.”

    Such concerns are hyperbole, said Gabriel Weil, an assistant professor at Touro Law Center in New York. But if Wiener’s bill slows down AI development, that’s not necessarily a bad thing, given the technology’s potential for misuse and harm, he said.

    “That’s not my model of what we should be trying to do with AI policy, is accelerating its capabilities as fast as possible,” said Weil, who focuses on laws governing the technology.
