No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.
I had the exact same behavior back in 2023. It looked like clear leakage of user conversations, but it was just a bug with API calls in the software I was using.
It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
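For anyone curious what that class of bug looks like, here's a minimal Python sketch of a cache keyed too coarsely, so the first user's result gets replayed to everyone else. The actual incident in the linked postmortem came from a Redis client library bug, so treat this as an illustration of the general failure class, not the real code; the names and data are made up.

```python
# Sketch of "caching done wrong": the cache key omits the user,
# so one user's cached conversation title is served to another.
# Hypothetical example; not the code from the actual incident.

_title_cache: dict[str, str] = {}

FAKE_DB = {
    ("alice", "conv-1"): "Quarterly budget questions",
    ("bob", "conv-1"): "Draft of my resignation letter",
}

def get_title_buggy(user: str, conversation_id: str) -> str:
    # BUG: the cache key drops the user, so Bob can get Alice's cached title.
    if conversation_id in _title_cache:
        return _title_cache[conversation_id]
    title = FAKE_DB[(user, conversation_id)]
    _title_cache[conversation_id] = title
    return title

def get_title_fixed(user: str, conversation_id: str) -> str:
    # Fix: scope the cache key to the user as well.
    key = f"{user}:{conversation_id}"
    if key not in _title_cache:
        _title_cache[key] = FAKE_DB[(user, conversation_id)]
    return _title_cache[key]

if __name__ == "__main__":
    print(get_title_buggy("alice", "conv-1"))  # Alice's title, now cached
    print(get_title_buggy("bob", "conv-1"))    # leaks Alice's title to Bob
    print(get_title_fixed("bob", "conv-1"))    # Bob's own title
```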
In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`