I don’t see anything here that would prevent an LLM from generating these. Right?


In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`


> the numbers in the response are real.

OpenAI very well may have a bug, but I'm not clear on this part. How do you know the numbers are real?

I understand you know the company is real, but how do you know the numbers are real?

It's way more than anyone should need to do, but the only way I can see someone knowing this is contacting the owners of the company.


Do you understand what a hallucination is?


Coming up with accurate financial data that you can't get it to report outright doesn't seem like one.


Models do not possess awareness of their training data. Also, you are taking at face value the claim that the data is "accurate".


I don't understand the wording

Accurate financial data?

How do we know?

What does the model not having the data when it can't use web search have to do with the claim that private chats containing the data are being leaked?


> I found this company; it is real, and the numbers in the response are real.

???


Which of my questions does that answer?


That the financial data is accurate?


It's an ouroboros: he can't verify it's real! If he can, it's online and available by search.


So what are the odds that this is just the LLM doing its thing versus "a vulnerability"? Seems like a pretty obvious bet.

