Windows Central
A former OpenAI employee left after claiming it felt like the 'Titanic of AI' with top execs prioritizing shiny products over safety
By Kevin Okemwa,
11 days ago
What you need to know
OpenAI previously came under fire for prioritizing shiny products over safety.
A former employee has now echoed those sentiments, referring to the company as the "Titanic of AI."
The ex-employee warns that OpenAI's current safety measures and guardrails won't be enough to keep AI from spiraling out of control unless the company adopts more rigorous, sophisticated safeguards.
As it turns out, a former OpenAI employee, William Saunders, has echoed similar sentiments. Speaking on Alex Kantrowitz's podcast on YouTube earlier this month, Saunders said:
"I really didn't want to end up working for the Titanic of AI, and so that's why I resigned. During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic? They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling."
OpenAI CEO Sam Altman hasn't been shy about his ambitions and goals for the company, including achieving AGI and superintelligence. In a separate interview, Altman disclosed that these milestones won't necessarily bring dramatic change overnight. He added that public interest in tech advancements is short-lived and that such breakthroughs may only cause a two-week freakout.
OpenAI's former superalignment lead revealed he disagreed with top executives over the firm's decision-making process and core priorities on next-gen models, security, monitoring, preparedness, safety, adversarial robustness, and more. That disagreement ultimately prompted his departure from the company as well.
On paper, Windows Recall seemed cool and useful (debatable). However, it was riddled with privacy issues that even attracted the attention of the UK's data watchdog. The AI-powered feature received so much backlash that Microsoft was prompted to recall it before it even shipped.
OpenAI is in a similar ship, but on a larger scale. Saunders compares OpenAI's safeguards to those of the infamous Titanic and says he'd prefer the company embrace the "Apollo space program approach." For context, the program was a NASA project in which American astronauts made 11 crewed spaceflights and walked on the moon.
He added that the firm is over-reliant on its current measures and seemingly tone-deaf to the rapid pace of its own advances. He says OpenAI would be better off if it embraced the Apollo program approach.
Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely.
William Saunders, former OpenAI employee
Saunders says the team behind the Titanic was so focused on making the ship unsinkable that it neglected to install enough lifeboats in case disaster struck. As a result, many people lost their lives because of that lack of preparedness and the safety measures that were overlooked.
Speaking to Business Insider, Saunders admitted the Apollo space program faced several challenges of its own. "It is not possible to develop AGI or any new technology with zero risk," he added. "What I would like to see is the company taking all possible reasonable steps to prevent these risks."
Saunders warns of a forthcoming "Titanic disaster" that could lead to large-scale cyberattacks and the development of biological weapons. He says OpenAI should consider investing in more "lifeboats" to prevent such outcomes, including delaying the launch of new LLMs to give researchers ample time to assess the potential danger and harm that could stem from releasing the models prematurely.