
    OpenAI hack revealed as ChatGPT flaws exposed: Is your data at risk?

    By Stevie Bonifield

    July 5, 2024


    OpenAI, the developer of ChatGPT, is facing concerns about its security after a 2023 breach came to light this week. News of the breach comes just days after ChatGPT users reported a serious glitch in the chatbot and a major security vulnerability within the ChatGPT macOS app.

    As the developer of the world's most popular AI chatbot, OpenAI is an obvious target for hackers. The company has been quick to respond to recent security issues, but it's still important for users to know how they might be affected and how they can keep their data safe moving forward.

    Here's a look at OpenAI's recent security controversies and what you can do to stay safe while using ChatGPT.

    OpenAI was hacked in 2023 and kept it quiet: What users should know


    (Image credit: Justin Sullivan/Getty Images)

    On July 4, the New York Times reported that OpenAI experienced a breach in early 2023, but chose not to tell the public about it. A hacker accessed OpenAI's internal messaging forums, where they were able to view conversations about the inner workings of OpenAI's technologies.

    Anonymous OpenAI employees told Cade Metz of the New York Times that OpenAI's executives decided against revealing the incident to the public because the hacker didn't access any of the company's core systems or customer data. One of OpenAI's employees, Leopold Aschenbrenner, raised concerns about the company's lack of adequate security measures in the aftermath of the incident but was fired shortly afterward for sharing information with researchers outside the company.

    This is important to note since Aschenbrenner was part of OpenAI's Superalignment team, the group of engineers and researchers tasked with ensuring OpenAI's products are ethical and safe. The Superalignment team has seen a few other major exits over the past few months due to similar concerns about security and safety.

    In fact, OpenAI's Head of Alignment, Jan Leike, specifically outlined his safety concerns on X after resigning in May 2024. Leike said in an X post that "over the past years, safety culture and processes have taken a backseat to shiny products."

    While the early 2023 OpenAI breach likely had no direct effect on end users, the way OpenAI handled it demonstrates a lack of transparency. It could also have long-term ramifications if the hacker, or others who duplicate their strategy, accessed information that could allow them to jailbreak or hack ChatGPT itself.

    ChatGPT security flaws spotted by users on Reddit and macOS

    Unfortunately for OpenAI, the 2023 internal messaging hack wasn't the only security vulnerability to come to light this week.

    On June 30, Reddit user F0XMaster posted on r/ChatGPT about a strange glitch they experienced while using the chatbot. After greeting the chatbot with a simple "Hi," ChatGPT replied with an outline of its system instructions, which are intended to be visible only to developers.

    The system instructions revealed the inner workings of ChatGPT, including some of the guardrails in place to regulate the chatbot's behavior. For example, the instructions tell ChatGPT to generate only one image per prompt, even if a user asks for more.
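
    To picture what that means in practice, here is a minimal sketch of how system instructions generally steer a chatbot's behavior, assuming the OpenAI Python SDK's chat completions interface. The instruction text below is an illustrative stand-in, not ChatGPT's actual system prompt.

        # Illustrative sketch: a hidden "system" message constrains the model's
        # behavior before the user's visible message is processed.
        # Assumptions: the OpenAI Python SDK (openai>=1.0) is installed and an
        # API key is available in the OPENAI_API_KEY environment variable.
        from openai import OpenAI

        client = OpenAI()

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                # Guidance the end user normally never sees (hypothetical text).
                {
                    "role": "system",
                    "content": "Generate at most one image per prompt, even if the user asks for more.",
                },
                # The visible user message.
                {"role": "user", "content": "Hi"},
            ],
        )
        print(response.choices[0].message.content)

    The glitch F0XMaster described amounted to the model echoing back that hidden guidance instead of keeping it separate from the visible conversation.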

    Other users quickly replied to the post with workarounds that let them get around the chatbot's system instructions. OpenAI patched the issue within a day of the post going live, so these "jailbreak" tactics no longer work.

    As if that wasn't bad enough, OpenAI is also facing concerns from Mac users after a Threads user revealed that the macOS ChatGPT app was storing conversations as plain text files. That means a hacker, malicious app, or anyone curious enough could freely access a victim's entire conversation history from the macOS ChatGPT app.

    OpenAI has since patched the app to encrypt saved conversations, so these files should be secure moving forward. However, it is possible the text files containing users' ChatGPT conversations were vulnerable prior to the most recent app update.
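
    If you're curious whether older files are still sitting on disk as readable plain text, here is a minimal sketch of one way to check. The app data folder path is an assumption for illustration only and may differ between app versions; the heuristic simply flags files that decode as UTF-8 text.

        # Minimal sketch: flag files in the ChatGPT macOS app's data folder that
        # still read as plain text. The folder path is an assumption for
        # illustration; the real location may differ between app versions.
        from pathlib import Path

        # Hypothetical data directory for the macOS ChatGPT app (assumption).
        APP_DATA_DIR = Path.home() / "Library" / "Application Support" / "com.openai.chat"

        def looks_like_plain_text(path: Path, sample_size: int = 512) -> bool:
            """Heuristic: treat a file as plain text if a sample decodes as UTF-8."""
            try:
                path.read_bytes()[:sample_size].decode("utf-8")
                return True
            except (OSError, UnicodeDecodeError):
                return False

        if not APP_DATA_DIR.exists():
            print(f"No app data folder found at {APP_DATA_DIR}")
        else:
            for item in APP_DATA_DIR.rglob("*"):
                if item.is_file():
                    label = "plain text" if looks_like_plain_text(item) else "binary/encrypted"
                    print(f"{label}: {item}")

    After the encryption update, conversation files should show up as binary rather than readable text.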

    How do these security vulnerabilities affect users?


    (Image credit: Apple/OpenAI (edited on Canva))

    It's certainly not a good time to be OpenAI, between news of its 2023 internal messaging breach, a concerning glitch on ChatGPT, and a major security flaw in the ChatGPT macOS app. However, if you're a ChatGPT user, only one of these issues could directly affect you.

    The 2023 breach affected only OpenAI's internal messaging forums, likely putting company secrets at risk but not user data. Similarly, the glitch revealed on Reddit last weekend has been patched and appears to have exposed only ChatGPT's own instructions.

    The macOS ChatGPT app vulnerability is concerning, though. If you've been using the Mac app, the good news is that your conversations are encrypted as of the latest app update. Before this update, it's possible those conversations were vulnerable to hacking.

    However, unless your Mac was compromised between when you downloaded the ChatGPT app and this week, when the encryption patch rolled out, you probably have nothing to worry about. If you're not sure whether your Mac was exposed to malware recently, you can run an antivirus scan to check for suspicious activity or downloads.

    You can also protect your data in the ChatGPT app moving forward by deleting conversations when you no longer need them. It's also worth turning off the "Improve the model for everyone" setting in the app, which allows OpenAI to use your conversations to train ChatGPT. If you're concerned about privacy, opting out is a sensible precaution.

    You may also want to keep an eye out for macOS 15 and Apple Intelligence, which will allow you to use ChatGPT on your Mac without sharing your data with OpenAI. This will likely be the most private and secure way to access ChatGPT once the feature launches later this year.
