Yes, I have experienced this multiple times - for example, ChatGPT and Bard giving completely opposite answers to the same question. So I've learned to take their answers with a pinch of salt and to do further research (either Google it, or ask another Conv-AI tool) whenever something seems off.
IMO, these Conv-AI tools should indicate to the user when they are hallucinating.