
    Nobel Prize winner claims former OpenAI Chief Scientist fired Sam Altman because he "is much less concerned with AI safety than profits" — and suggests superintelligence might be on the horizon: "We have maybe 4 years left" before human extinction

    By Kevin Okemwa

    1 day ago


    What you need to know

    • According to a Nobel Prize winner, his former student Ilya Sutskever, then OpenAI's Chief Scientist, fired Sam Altman because of Altman's focus on generating profits rather than developing safe AI.
    • The ChatGPT maker has been in the spotlight for prioritizing the development of shiny products while testing and safety sit on the back burner.
    • The AI firm reportedly rushed the launch of its GPT-4o model, even sending out launch party invitations before testing had begun.

    OpenAI has been in the spotlight for the past few months for varied reasons, including bankruptcy reports projecting $5 billion in losses within the next 12 months and the mass departure of high-profile executives, including Chief Technology Officer Mira Murati and Chief Scientist Ilya Sutskever. There's also the possibility the company could turn into a for-profit entity to avoid hostile takeovers and outside interference, or else return the $6.6 billion raised during its latest round of funding to investors.

    In case you missed it, scientists John J. Hopfield and Geoffrey E. Hinton recently received the Nobel Prize in Physics for their discoveries that helped fuel the development of artificial neural networks. The technology is broadly used by major tech corporations like Google and OpenAI to run their search engines and chatbots.


    More interestingly, former OpenAI Chief Scientist Ilya Sutskever was Geoffrey E. Hinton's student. Hinton described Sutskever as a 'clever' student (he even admitted the scientist was cleverer than him) who made things work.

    Hinton went on to acknowledge Sutskever's contributions during his tenure as Chief Scientist at OpenAI. He also used the opportunity to criticize the ChatGPT maker's trajectory under Sam Altman's leadership, claiming Altman is much less concerned with AI safety than with profits.

    Nobel winner Geoffrey Hinton says he is particularly proud that one of his students (Ilya Sutskever) fired Sam Altman, because Sam is much less concerned with AI safety than with profits (via r/OpenAI)

    OpenAI CEO Sam Altman was briefly relieved of his duties by the board of directors, only to be reinstated a few days later after the situation sparked controversy among staffers and investors. As you may know, Sutskever held a seat on the board at the time and reportedly supported Altman's firing. However, the Chief Scientist never returned to work following Altman's reinstatement, potentially pointing to a strained working relationship between the two.

    The physicist also talked about the rapid progression of AI and the probability of it ending humanity. Hinton predicts we're on the verge of hitting superintelligence, which could lead to human extinction. The milestone might be "a few thousand days away," but it'll take "$7 trillion and many years to build 36 semiconductor plants and additional data centers."

    Related: An AI researcher says there's a 99.9% chance the AI revolution will end humanity

    In the interim, Hinton is "tidying up his affairs ... because he believes we have maybe 4 years left." A former OpenAI researcher indicated the firm might be edging closer to the superintelligence benchmark, though it wouldn't be able to handle all that the milestone entails.

    Is safety still paramount for OpenAI?

    OpenAI icon displayed on a mobile phone screen (Image credit: Getty Images | Anadolu)

    During the mass departure of top executives from OpenAI, former Head of alignment Jan Leike indicated that he'd had numerous disagreements with leadership over core priorities for next-gen AI models, including security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and more. The experience prompted him to suggest the firm was more focused on developing shiny products while safety took a backseat.

    A separate report potentially corroborates Leike's sentiments: OpenAI's safety team was reportedly pressured to rush through the testing phase of the GPT-4o model. Shockingly, the ChatGPT maker reportedly sent out invitations to GPT-4o's launch party before testing had even begun. An OpenAI spokesman acknowledged the launch was stressful for its safety team but insisted the company didn't cut corners when launching the product.

    Former OpenAI Chief Scientist Ilya Sutskever left the company earlier this year to focus on a project that was "personally meaningful." He later founded Safe Superintelligence Inc. (SSI), which focuses on building safe superintelligence — a critical issue in the new AI era.

    And while Sutskever's departure seemed subtle on the surface, high-ranking executives internally expressed fears the firm could collapse. They reportedly tried to get the Chief Scientist to return, but it proved difficult to find a suitable position for the co-founder amid company restructuring.

