Haven't you effectively built a system that detects those specific kinds of hallucinations and, once one is detected, repeats the process before presenting the result to you?
So you're not seeing hallucinations in the same way Van Halen isn't seeing the brown M&Ms: they've been removed, not that they never existed.
I think systems integrated with LLMs that help spot and eliminate hallucinations - like code execution loops and search tools - are effective at reducing their impact on how I use models.
That's part of what I was getting at when I very clumsily said that I rarely experience hallucinations from modern models.
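To make the "code execution loop" idea concrete, here's a minimal sketch of that pattern: run what the model produces, and feed any failure back instead of showing it to the user. The `generate_code` hook is hypothetical, a stand-in for whatever model client you actually use, not any particular library's API.

```python
import subprocess
import sys
import tempfile

def generate_code(prompt: str) -> str:
    # Hypothetical hook: plug in your own LLM client here.
    raise NotImplementedError("wire this up to your model")

def run_with_retries(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = generate_code(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # Execute the generated code; a hallucinated import or method crashes here.
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=60
        )
        if result.returncode == 0:
            return code
        # Feed the error back so the model corrects itself before anyone sees the output.
        prompt = f"{task}\n\nYour previous attempt failed with:\n{result.stderr}\nPlease fix it."
    raise RuntimeError("no working code after retries")
```

In that setup the hallucinations still happen, they just get caught and retried out of existence before the result reaches you, which is exactly the brown M&Ms point.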