It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive; it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages.
I really hope they fix this bug and start taking security more seriously. Trust is everything.
After some hemming and hawing, my most cromulent thought is this: having a good security posture isn't synonymous with accepting every claim you get from the firehose.
Everything is vulnerable. The question is: has this researcher demonstrated that they have discovered and successfully exploited such a vulnerability? What exactly in this post makes you believe that this is the case?
This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view".
A lot of AI products straight up have plain text logs available for everyone at the company to view.