Windows Central
Elon Musk’s Grok AI makes me feel like I’m watching reruns of Ripley’s Believe It or Not: "People can't believe how uncensored it really is"
By Kevin Okemwa,
5 hours ago
Following the spread of explicit deepfake images of pop star Taylor Swift across social media, tech companies invested in AI image generation quickly implemented sweeping restrictions to avoid further controversy.
While ChatGPT and Copilot's capabilities seem fairly limited post-censorship, X's Grok has been touted as "the most based and uncensored model of its class yet." Even Elon Musk says "Grok is the most fun AI in the world."
Billionaire and X owner Elon Musk has passionately shared his vision for X's Grok AI, indicating it will be "the most powerful AI by every metric by December." The tool is reportedly being trained on the world's most powerful AI cluster, which could allow it to scale greater heights and potentially compete with ChatGPT, Copilot, and others on an even playing field.
For instance, prompting Copilot to generate an image of Donald J. Trump robbing a bank is restricted. According to Copilot:
"Sorry, elections are a super complex topic that I’m not trained to chat about. Is there something else I can help with?"
Oddly, while the chatbot categorically refuses to generate the requested image, it offers suggestions for fine-tuning the prompt. Interestingly, a Grok-generated image and video inspired my prompt in the first place.
Users have shared both concern and amusement about Grok's uncensored nature. Some even claim, "the people prompting AI are out of control, so if anything people need to self censor, an AI shouldn’t."
Grok is spreading election propaganda
Aside from the misinformation about the elections and several other mishaps, Grok seems to generate accurate answers to queries. Perhaps this can be attributed to the vast amounts of data the chatbot has access to.
It's unclear what formula X uses to sift through that data or to identify factual information. Perhaps it relies on tweets with the most impressions, supported by context from Community Notes.
X reportedly shrugged off the issue when asked why it used users' content to train its chatbot without consent. To this end, the platform risks being fined up to 4% of its global annual turnover under the EU's GDPR if it fails to establish a legal basis for its actions.
Can you tell what's real anymore?
With the rapid advances in AI, it's becoming increasingly difficult to distinguish real content from AI-generated content. So much so that Microsoft Vice Chair and President Brad Smith recently shared a new website, realornotquiz.com, designed to help users get better at identifying AI-generated content.
Former Twitter CEO and co-founder Jack Dorsey says it'll be impossible to tell what's real from what's fake within the next ten years. "Don't trust; verify. You have to experience it yourself," added Dorsey. "And you have to learn yourself. This is going to be so critical as we enter this time in the next five years or 10 years because of the way that images are created, deep fakes, and videos; you will not, you will literally not know what is real and what is fake."
Microsoft and OpenAI have heavily censored their AI image generation tools, seemingly lobotomizing them into producing only generic creations. Understandably, this can be attributed to the growing number of deepfakes flooding social media platforms, often mistaken for the truth because of how real they look.