Windows Central
Former OpenAI Chief Scientist starts new AI firm keenly focused on building safe superintelligence while its rival prioritizes 'shiny products'
By Kevin Okemwa
2024-06-20
What you need to know
Former OpenAI Chief Scientist Ilya Sutskever is starting a new company dubbed Safe Superintelligence Inc.
The new company will focus on building safe superintelligence, while departing staffers claim OpenAI's safety processes and culture have taken a backseat to shiny products.
Privacy and security remain critical issues with the evolution of AI and the emergence of tools like Microsoft's controversial Windows Recall feature.
A handful of staffers departed from OpenAI last month, including Co-founder and Chief Scientist Ilya Sutskever. While announcing his departure from the AI startup, Sutskever indicated that after a decade at the company, he was leaving to focus on a project that was "personally meaningful."
Details about that personally meaningful project remained a mystery until now: Sutskever has disclosed that he is starting a new company dubbed Safe Superintelligence Inc. (SSI). The company will focus squarely on building safe superintelligence, which remains a critical issue in the new age of AI.
According to Sutskever:
"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Safe Superintelligence Inc. could be OpenAI's biggest nightmare
OpenAI has big partners such as Microsoft, but now its old team members are gunning for it. (Image credit: OpenAI)
OpenAI has been in the spotlight lately, mostly for the wrong reasons. It's no secret that OpenAI and Sam Altman aim for superintelligence, but at what cost? Jan Leike, the company's former head of alignment, superalignment lead, and executive, left OpenAI around the same time as Sutskever.
Leike indicated that he left after multiple disagreements with top executives over the company's core priorities, spanning next-gen models, security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and more.
Leike joined the company believing it was the best place in the world to do this research, but in his view OpenAI turned a blind eye to safety processes and culture in order to prioritize the development of shiny products. "Building smarter-than-human machines is an inherently dangerous endeavor," Leike added. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity."
If Sutskever and Safe Superintelligence Inc. can deliver on safe superintelligence, the company could give OpenAI a run for its money. Privacy and security concerns remain among the biggest hurdles holding AI back.