

A hacker claimed to have stolen personal details from millions of OpenAI accounts, but researchers are skeptical and the company is investigating.


OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts, and put them up for sale on a dark web forum.


The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered for sale "for just a few dollars."


"I have more than 20 million gain access to codes for OpenAI accounts," emirking composed Thursday, according to an equated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."


If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the general public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."


Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the personal information of OpenAI's paying customers.


This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."


No evidence this alleged OpenAI breach is legitimate.


Contacted every email address from the purported sample of login credentials.


At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP


- Mikael Thalen (@MikaelThalen) February 6, 2025


OpenAI takes it 'seriously'


In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.


"We take these claims seriously," the representative said, videochatforum.ro adding: "We have actually not seen any proof that this is linked to a compromise of OpenAI systems to date."


The scale of the alleged breach raised concerns given OpenAI's huge user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive information.


Until there's a final report, some preventive measures are always recommended:


- Go to the "Configurations" tab, log out from all linked gadgets, and make it possible for two-factor authentication or demo.qkseo.in 2FA. This makes it virtually impossible for a hacker to gain access to the account, even if the login and passwords are jeopardized.
- If your bank supports it, then produce a virtual card number to manage OpenAI subscriptions. By doing this, it is easier to identify and prevent fraud.
- Always watch on the conversations kept in the chatbot's memory, and understand any phishing attempts. OpenAI does not request any personal details, and any payment upgrade is always dealt with through the main OpenAI.com link.
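
Why does 2FA blunt a leaked password? As a rough illustration only (not OpenAI's actual implementation), here is a minimal Python sketch of TOTP-based two-factor authentication, the mechanism behind most authenticator apps, using the third-party pyotp library; the shared secret and code shown are hypothetical.

```python
# Minimal sketch of TOTP-based two-factor authentication (illustrative only).
# Assumes the third-party pyotp library: pip install pyotp
import pyotp

# The server and the user's authenticator app share this secret once,
# typically via a QR code shown during 2FA enrollment. (Example value only.)
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

# The authenticator app derives a short-lived six-digit code from the secret
# and the current time; a stolen email-and-password pair can't reproduce it.
current_code = totp.now()
print("One-time code:", current_code)

# The server performs the same derivation and compares, allowing a small
# clock-drift window of one time step.
print("Valid?", totp.verify(current_code, valid_window=1))
```

Because the one-time code depends on both the shared secret and the current time, leaked login credentials alone are not enough to sign in to an account with 2FA enabled.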
