
    After Trying to Oust Sam Altman, OpenAI Co-founder Launches His 'Safe' Venture

By Deep Das Barman

    25 days ago

Ilya Sutskever, one of OpenAI's co-founders, who left the company last month, has introduced his new AI company, Safe Superintelligence, or SSI. Announcing the venture on X (formerly Twitter), Sutskever wrote, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.” Sutskever formerly served as OpenAI’s chief scientist and co-led the company’s Superalignment team.



Sutskever is regarded as one of the pioneers of the AI revolution. As a student, he worked in a machine learning lab under the "Godfather of AI," Geoffrey Hinton. Together they created an AI startup that was acquired by Google, and Sutskever went on to work with Google’s AI research team, per CNN.



He then helped found OpenAI and worked on its breakthrough product, ChatGPT. There, he co-led the Superalignment team, which focused on steering and controlling AI systems. However, things turned sour when Sutskever joined the effort to oust CEO Sam Altman last year. After the dramatic leadership shuffle that saw Altman fired, rehired, and the board completely overhauled, Sutskever came to regret his role in the fiasco.



CNN contributor Kara Swisher reported that at the time, Sutskever was concerned that Altman was pushing AI technology “too far, too fast.” However, he had a change of heart and later signed the letter demanding that the entire board resign. Last month, Sutskever joined the string of departures from OpenAI and announced that he would be leaving the company to work on a project that is “personally meaningful” to him.



The announcement came amid growing concerns about AI advancing too quickly and the dearth of regulation overseeing developers of the technology. Experts have argued that companies like OpenAI have been left free to set their own safety guidelines as they see fit.

[Image: OpenAI logo with magnifying glass | Wikimedia Commons | Photo by Jernej Furman]

While the project remained a mystery for some time, Sutskever lifted the curtain on Wednesday, introducing the venture, Safe Superintelligence Inc. “This company is special in that its first product will be the safe superintelligence and it will not do anything else up until then,” Sutskever told Bloomberg in an exclusive interview.

As the name suggests, the venture’s top priority is AI safety. While Sutskever has been vague about the details, he has made it clear that the company will try to achieve safety through engineering breakthroughs built into the AI system itself, rather than relying on guardrails applied to the technology “on the fly,” which is a common practice in the industry.

    “Building Safe Superintelligence (SSI) is the most important technical problem of our time,” the company wrote in a social media post.



Sutskever is joined by co-founders Daniel Gross and Daniel Levy, two key figures in the AI industry. Gross is an investor and Apple’s former AI lead, where he played a major role in the company’s AI and search efforts. Levy, for his part, worked alongside Sutskever at OpenAI, training large AI models. Safe Superintelligence currently has offices in Palo Alto, California, and Tel Aviv, Israel, per Bloomberg.

In the interview, Sutskever said that SSI will not release any products or do any work other than building a superintelligence. He declined, however, to disclose the company’s financial backers.
