Windows Central
Will AI end humanity? The p(doom) scales of an OpenAI insider and AI researcher are alarmingly high, peaking at a 99.9% probability
By Kevin Okemwa,
19 days ago
What you need to know
An AI researcher puts his p(doom), the probability that AI will end humanity, at 99.9%.
The researcher says a perpetual safety machine might help prevent AI from spiraling out of control and ending humanity.
An OpenAI insider says the company, excited about AGI, is recklessly racing toward it, prioritizing shiny products over safety processes.
Why is the probability of AI ending humanity so high?
A robot that looks like a Terminator looking over AI (Image credit: Windows Central | Image Creator by Designer)
I've been following AI trends for a hot minute. And while the technology has achieved remarkable breakthroughs across significant sectors, one thing is apparent: the bad outweighs the good.
AI researcher Roman Yampolskiy appeared on Lex Fridman's podcast for a wide-ranging interview about the risk AI poses to humanity. Yampolskiy says there's a very high chance AI will end humanity unless humans develop sophisticated, bug-free software within the next century. He's skeptical, though, as every model to date has been exploited and tricked into breaking character and doing things it isn't supposed to do:
"They already have made mistakes. We had accidents, they've been jailbroken. I don't think there is a single large language model today, which no one was successful at making do something developers didn't intend it to do."
The AI researcher recommends developing a perpetual safety machine to keep AI under control and prevent it from ending humanity. Yampolskiy notes that even if next-gen AI models pass every safety check, the technology keeps evolving, becoming more intelligent and better at handling complex tasks and situations.
OpenAI insider says AI will lead to inevitable doom
A picture of the globe with OpenAI's logo wrapped around sharp claws. (Image credit: Microsoft Designer)
In a separate report, former OpenAI governance researcher Daniel Kokotajlo echoes Yampolskiy's sentiments. Kokotajlo claims there's a 70% chance AI will end humanity (via Futurism). It's clear that every major player and stakeholder in the AI landscape assigns a different p(doom) value. For context, p(doom) is shorthand for the estimated probability that AI will lead to the end of humanity.
"The world isn't ready, and we aren't ready," wrote Kokotajlo in an email seen by the NYT. "And I'm concerned we are rushing forward regardless and rationalizing our actions."