Hacker News

No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.


I had the exact same behavior back in 2023 - it looked like clear leakage of user conversations, but it was just a bug with API calls in the software I was using.

https://snipboard.io/FXOkdK.jpg


There was an issue with conversation leakage, though. It involved some bug with Redis.

I felt like it was a huge deal at the time but it’s surprisingly hard to quickly google it.


It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
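The "caching wrong" class of bug can be sketched in a few lines. This is a hypothetical illustration only, not OpenAI's actual code (per the linked postmortem, the real incident stemmed from a Redis client library bug): if a shared cache is keyed without the user's identity, one user's cached data gets served to another.

```python
# Hypothetical sketch of a cache-key bug that leaks data across users.
# The cache key omits the user, so whoever populates it first "wins"
# and every later caller sees that user's data.

cache = {}

def get_conversation_titles(user_id, path="/conversations"):
    key = path  # BUG: user_id should be part of the key, e.g. (user_id, path)
    if key not in cache:
        # Stand-in for an expensive backend call scoped to user_id.
        cache[key] = f"titles for {user_id}"
    return cache[key]

print(get_conversation_titles("alice"))  # alice's titles, cached under "/conversations"
print(get_conversation_titles("bob"))    # bob receives alice's titles - leaked
```

The fix is simply to include the user identity in the cache key; the harder variants of this bug (like the one in the postmortem) arise when the mixing happens below the application layer, e.g. in a shared connection pool.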


Ah, there it is. Thanks for jogging my memory. Funny to think of how niche ChatGPT was considered then compared to now.


I don’t see anything here that would prevent an LLM from generating these. Right?


In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`


> numbers in the response are real.

OpenAI very well may have a bug, but I'm not clear on this part. How do you know the numbers are real?

I understand you know the name of the company is real, but how do you know the numbers are real?

It's way more than anyone should need to do, but the only way I can see someone knowing this is by contacting the owners of the company.


Do you understand what a hallucination is?


Coming up with accurate financial data that you can't get it to report outright doesn't seem like one.


Models do not possess awareness of their training data. Also, you are taking it at face value that the data is "accurate".


I don't understand the wording

Accurate financial data?

How do we know?

What does it not having the data without web search have to do with the claim that private chats containing the data are being leaked?


> I found this company; it is real and numbers in the response are real.

???


Which of my questions does that answer?


That the financial data is accurate?


It's an ouroboros - he can't verify it's real! If he can, it's online and available by search.


So what are the odds that this is just the LLM doing its thing versus "a vulnerability"? Seems like a pretty obvious bet.


New Touring Test unlocked! Differentiate between real and fake hallucinations.


So THAT'S what the "GT" means on all of these GPU model names!



