  • San Francisco Examiner

    OpenAI, Google workers call on companies to let them discuss risks

    By Troy Wolverton

    2024-06-04
    OpenAI CEO Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation CEO Summit, Nov. 16, 2023, in San Francisco. Eric Risberg/Associated Press, File

    Even as OpenAI, Google and other artificial-intelligence developers have been trying to tout the latest capabilities of their technologies, safety concerns about AI have started to steal the spotlight.

    A group of current and former employees of OpenAI and Google’s DeepMind AI research lab released an open letter Tuesday arguing that in the absence of government regulations, workers might be the only ones who can prevent their technologies from doing harm. But such workers are constrained from speaking out about the risks they see by agreements that could strip them of their compensation or by implicit threats of retaliation, the employees said in the letter.

    The employees called on companies developing advanced AI to encourage discussion of risks and safety concerns with company boards, government regulators and the public at large.

    “We believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the employees said in the letter. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”

    Four of the seven named employees who signed the letter did not immediately respond to requests for comment. The Examiner couldn’t find contact information for two of them.

    Neel Nanda, a research engineer at Google DeepMind who was formerly at San Francisco-based Anthropic, another AI company, declined to comment beyond a post he made on X. In that post, Nanda said he signed the letter not out of concern about how Google or his previous employers were treating whistleblowers.

    “I signed this appeal for frontier AI companies to guarantee employees a right to warn,” he said in his post.

    Lawrence Lessig, a Harvard law professor who is representing the employees on a pro bono basis, according to The New York Times, did not immediately respond to a request for comment. Nor did representatives of OpenAI and Google.

    In the letter, the employees say that the risks posed by AI run the gamut from exacerbating present inequalities in society to spreading misinformation to “human extinction” caused by out-of-control AI technology. The companies are in the best position to know and understand those risks and the steps they’re taking to protect the public, the employees said.

    But the lack of regulatory oversight means the companies have little obligation to share that information publicly, they said, and the companies have financial motives to keep such information under wraps.

    “We do not think they can all be relied upon to share it voluntarily,” the employees said.

    That’s why insiders are important, they said: they might be the only conduit through which such risks can be made public and discussed.

    But such insiders have few options today, the employees said. They don’t generally qualify as whistleblowers, because they’re not necessarily alleging illegality. And confidentiality and non-disparagement agreements limit their ability to discuss their concerns outside the company they are concerned about, they said.

    In their letter, the employees called on companies not to enforce non-disparagement agreements, at least insofar as employees are raising concerns about AI safety. They also called on companies to create anonymous processes by which insiders can report their concerns to corporate boards, regulators and independent organizations. And they urged companies to foster environments that allow for open criticism and not to retaliate against employees who speak up when other processes have failed.

    “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” the employees said in their letter. “We are not the first to encounter or speak about these issues.”

    The letter comes in the wake of reporting last month by Vox that OpenAI required departing employees to sign nondisparagement agreements that threatened to strip them of their vested equity if they spoke out against the company.

    Tech companies often pay employees in stock-based compensation that vests — meaning the workers take full ownership of it — over time. Employees typically can’t sell their vested shares until the company goes public, so they frequently take such stock with them even after they leave. As Vox reported, it’s highly unusual for companies to threaten to take away employees’ vested — as opposed to unvested — shares.

    OpenAI CEO Sam Altman said he was “embarrassed” that he didn’t know about the agreements, said the company had never enforced them and pledged to “fix” them.

    That brouhaha came amid a growing number of warnings about how the San Francisco company is approaching AI safety.

    In a podcast released last week, Helen Toner — one of the OpenAI board members who fired Altman last fall, only to resign days later when he returned to the company — expounded on the incident. Toner charged that Altman had been repeatedly deceitful with the board, particularly about OpenAI’s safety processes, and had fostered a toxic atmosphere within the company.

    Toner’s charges followed the departures last month of co-founder Ilya Sutskever and Jan Leike, who had together headed up a safety team at OpenAI. Leike, who left the company to join Anthropic, its chief rival, charged in a series of posts on X that safety at OpenAI was taking “a backseat to shiny products.”

    Their resignations came on the heels of the departure of AI safety researchers Daniel Kokotajlo and William Saunders from OpenAI earlier this year. Both Kokotajlo and Saunders signed Tuesday’s open letter.

    For its part, Google saw a wave of resignations among members of its AI ethics team starting in late 2020 with the departure of prominent researcher Timnit Gebru. Gebru and others who left charged that the company was underplaying their safety and ethics concerns.

    Tuesday’s letter, the recent departures and the safety concerns come as both OpenAI and Google have recently rolled out new versions of their AI technology. OpenAI last month launched GPT-4o, which is designed to be more conversational and capable than the previous version. Google, meanwhile, announced new versions of its AI technology and started to incorporate AI-generated responses into its search results.
