Windows Central
OpenAI reportedly sent RSVPs for GPT-4o's launch party even before testing began — pressuring the safety team to speed through the process in under one week
By Kevin Okemwa,
1 day ago
What you need to know
OpenAI's GPT-4o launch was seemingly rushed, leaving the safety team with little time to test the model.
Sources disclosed that OpenAI had sent invitations for the product's launch celebration party before the safety team ran tests.
An OpenAI spokesperson admits the launch was stressful for its safety team, but insists the firm didn't cut corners when shipping the product.
OpenAI has been under fire for the past few months, with several former employees claiming it prioritizes shiny products over safety processes. As it happens, the ChatGPT maker won't shake off these allegations anytime soon.
Several employees recently disclosed that OpenAI prioritized speed over thoroughness (via The Washington Post). According to sources, OpenAI's safety team was seemingly pressured to rush through the new testing protocol for GPT-4o.
For context, it's critical for sophisticated AI tools to go through thorough testing processes to identify loopholes that bad actors might exploit to cause harm. OpenAI's leadership reportedly pressured the safety team to rush through the new testing protocol to meet a rigid May launch date for GPT-4 Omni.
According to one source, OpenAI sent out invitations for the product's launch celebration party before the safety team even ran tests. "They planned the launch after-party before knowing if it was safe to launch," the source added. "We basically failed at the process."
OpenAI spokesperson Lindsey Held admits that the launch was stressful for its safety team, but says the company "didn't cut corners on our safety process." Held insists the company conducted thorough testing before shipping GPT-4o to broad availability.
An OpenAI preparedness team representative corroborates Held's statement and says the company met its regulatory obligations. However, the representative admits the testing protocol was under a tight schedule. "OpenAI is now rethinking our whole way of doing it and the Omni approach was just not the best way to do it," the representative added.
Safety seems like an afterthought for OpenAI
OpenAI logo (Image credit: OpenAI)
In a separate report, a former OpenAI staffer, William Saunders, claimed that he left the company because he didn't want to end up working for the Titanic of AI. "They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling," Saunders added.
Saunders' sentiments were previously echoed by OpenAI's former alignment lead, who admitted that he'd disagreed with the firm's top management over its decision-making process and core priorities on next-gen models, security, monitoring, preparedness, safety, adversarial robustness, and more. The ChatGPT maker's decision-making and its prioritization of shiny products over safety prompted a mass departure from its safety team.
Separately, there's concern about AI becoming smarter than humans, taking over jobs, and eventually ending humanity. According to the latest p(doom) values by an OpenAI insider and AI researcher, there's a 99.9% chance AI will end humanity.