The team at @OpenAI just fixed a critical account takeover vulnerability I reported a few hours ago affecting #ChatGPT.
It was possible to take over someone's account, view their chat history, and access their billing information without them ever realizing it.
Breakdown below 👇
@OpenAI The vulnerability was "Web Cache Deception", and I'll explain in detail how I managed to bypass the protections in place on chat.openai.com.
It's important to note that the issue is fixed, and I received a "Kudos" email from @OpenAI's team for my responsible disclosure.
While exploring the requests that handle ChatGPT's authentication flow, I was looking for any anomaly that might expose user information.
The following GET request caught my attention:
https://chat.openai[.]com/api/auth/session
Basically, whenever we log in to our ChatGPT instance, the application fetches our account context (our email, name, image, and accessToken) from the server. It looks like the attached image below:
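For context, a rough sketch of the kind of payload this endpoint returns. The field names and values here are illustrative assumptions based on the screenshot, not an exact copy of OpenAI's response:

```python
import json

# Illustrative sketch of the session JSON returned by /api/auth/session.
# Field names and values are assumptions for demonstration purposes only.
session_response = json.loads("""
{
  "user": {
    "name": "Victim User",
    "email": "victim@example.com",
    "image": "https://example.com/avatar.png"
  },
  "expires": "2023-04-25T12:00:00.000Z",
  "accessToken": "eyJhbGciOiJSUzI1NiIs..."
}
""")

# Anything that can cache this response leaks the bearer token:
print(session_response["user"]["email"], session_response["accessToken"][:10])
```

The key point: this one response contains both personal data and a usable bearer token, so a single cached copy is a full account takeover.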
One common way to leak this kind of information is to exploit "Web Cache Deception" against the server. I've managed to find it several times already in Live Hacking Events, and it's also well documented across various blogs, such as:
At a high level, the vulnerability is quite simple: if we manage to force the load balancer into caching our request on a specially crafted path, we will be able to read our victim's sensitive data from the cached response.
It wasn't straightforward in this case.
In order for the exploit to work, we need the CF-Cache-Status response header to acknowledge a cached "HIT", which means the data was cached and will be served to the next request in the same region.
Instead, we receive a "DYNAMIC" response, which means the data isn't being cached.
Now, getting into the interesting part.
When we deploy web servers, the main goal of "caching" is the ability to serve heavy resources faster to the end user, mostly JS / CSS / static files.
"Cloudflare only caches based on file extension and not by MIME type" ❗️
Basically, if we manage to find a way to load the same endpoint with one of the specified file extensions below, while forcing the endpoint to keep returning the sensitive JSON data, we will be able to have it cached.
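That caching decision can be sketched as a small function. The extension list here is an assumed, partial subset of Cloudflare's documented default cached extensions, not an exhaustive copy:

```python
# Sketch of Cloudflare's default behavior: the cache decision keys off
# the path's file extension, not the response's Content-Type header.
# CACHED_EXTENSIONS is a partial, assumed subset of the documented list.
CACHED_EXTENSIONS = {"css", "js", "jpg", "jpeg", "gif", "png", "ico", "svg", "woff2"}

def is_cached_by_default(path: str) -> bool:
    last_segment = path.rstrip("/").rsplit("/", 1)[-1]
    if "." not in last_segment:
        return False
    return last_segment.rsplit(".", 1)[-1].lower() in CACHED_EXTENSIONS

print(is_cached_by_default("/api/auth/session"))          # False -> DYNAMIC
print(is_cached_by_default("/api/auth/session/gal.css"))  # True  -> eligible for HIT
```

Because only the extension is consulted, an API endpoint that tolerates a bogus `.css` suffix becomes cache-eligible even though it serves JSON.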
So, the first thing I would try is to fetch the resource with a file extension appended to the endpoint, and see if it would throw an error or display the original response.
This was very promising: @OpenAI would still return the sensitive JSON with a .css file extension appended, perhaps due to a failing regex, or simply because this attack vector wasn't taken into account.
Only one thing left to check: whether we can pull a "HIT" from the LB cache server.
@OpenAI And perfect, we had our full chain working as planned 🙂
Attack Flow:
1. Attacker crafts a dedicated .css path of the /api/auth/session endpoint.
2. Attacker distributes the link (either directly to a victim or publicly).
3. Victims visit the legitimate link.
4. Response is cached.
5. Attacker harvests JWT credentials.
Access Granted.
Remediation:
1. Manually instruct the caching server not to cache the endpoint through a regex (this is the fix @OpenAI chose).
Timeline:
1. Email sent at 19:54 to disclosure@openai.com
2. First response at 20:02
3. First fix attempt at 20:40
4. Production fix at 21:31
@OpenAI Those are fantastic standards, but it's still not a paid #BugBounty program. I can't emphasize enough the power of the crowd in protecting major global brands, especially one that innovates at such a pace.
A couple of hours after my tweet, I was made aware by a fellow researcher @_ayoubfathi_ and others that there were a number of bypasses to the regex-based fix implemented by @OpenAI (which didn't surprise me).
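To see why a regex deny rule is fragile, here is a toy illustration. The regex below is hypothetical, written for demonstration; it is not OpenAI's actual rule, and the bypass paths are generic examples of the variant-path problem:

```python
import re

# Hypothetical deny rule: "never cache /api/auth/session.<ext>".
# It anchors on one specific dot-extension form and misses other variants.
deny = re.compile(r"^/api/auth/session\.[A-Za-z0-9]+$")

def naive_fix_blocks(path: str) -> bool:
    return bool(deny.match(path))

print(naive_fix_blocks("/api/auth/session.css"))     # True  - blocked
print(naive_fix_blocks("/api/auth/session/x.css"))   # False - still cache-eligible
print(naive_fix_blocks("/api/auth/session%0A.css"))  # False - still cache-eligible
```

A more robust fix is to make the cache layer respect `Cache-Control: no-store` on the endpoint itself, rather than enumerating bad path shapes.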
"Given the sensitive nature of the data processed by ChatGPT and the potential impact of a security incident, it may be prudent for @OpenAI to invest more in their security posture to better protect their users and their reputation."
• • •
I've earned more than 5-figure bounties from sensitive links, sent via email, that were leaked without any user interaction. Surprisingly, the leaks came from the very security vendors that were supposed to protect the victims.
Curious how this happens? 👇
#BugBounty
Major organizations globally use various tools to enhance their email security.
Two common approaches are passive and active detection of potential malware within incoming emails.
One popular tool for "sandboxed" evaluation of a link's legitimacy is urlscan.io.
Security vendors around the world use it, often under the hood within their own very expensive products.
Recently, I faced numerous challenges where I needed to bypass limited SSRF or overcome regex mitigations to increase impact and make a case for a report.
Spinning up a server to host a redirection header is time-consuming and not much fun.
There's an easy alternative 🧵
While exploring some options online, I came across replit.com. Their product offers a pretty easy way to spin up a server with whatever technologies you'd like, and to control the files and source code of your application.
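A "redirection header" server of the kind described above can be a few lines of stdlib Python, which is exactly the sort of thing you can paste into a Repl. The TARGET URL is a placeholder assumption; point it wherever your test requires:

```python
# Minimal redirect server: a sketch of what you could host on replit.com
# instead of maintaining your own box. Every GET gets a 302 pointing at
# TARGET (a placeholder; replace with your own test target).
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "http://127.0.0.1/your-test-target"  # hypothetical, replace as needed

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", TARGET)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

# To run it: HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```

Changing the redirect target is then just an edit-and-rerun in the Replit editor, with no server of your own to maintain.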
Should you buy a "Comprehensive Bug Bounty course"? There are definitely some good courses out there.
That said, I wouldn't buy any course from a person without clear proof of BB/PWN reputation.
And if you want to start and do it the right way, jump on this book as a start.
Thinking of buying a subscription to an online recon framework with a nice GUI? There are a few providers out there. It can be a good start for newcomers, but by no means should it cost more than ~$20 a month.