My favorite part of Copilot is when it auto-completes a call to a function that does exactly what I need to do, like magic!
Except that function doesn't exist and never did.
LLMs don't know what they don't know, so they just make something up because they have to say something. The danger is that most people don't understand that's how they work and don't know when to call BS.
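To make this concrete, here's a sketch of the failure mode: a completion that reads perfectly plausibly but calls a function that doesn't exist. `json.load_string` is my invented stand-in for a hallucinated API (the real function is `json.loads`), and nothing flags it as fiction until runtime.

```python
import json

data = '{"name": "Ada"}'

try:
    # A plausible-looking completion an assistant might suggest.
    # json.load_string does not exist; this raises AttributeError.
    parsed = json.load_string(data)
except AttributeError:
    # Only a runtime failure (or a reader who knows the library)
    # reveals the call was made up. The real API:
    parsed = json.loads(data)

print(parsed["name"])  # → Ada
```

The unnerving part is that the fake name follows the library's own conventions, so it looks *more* credible than many real functions do.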
This is where I think companies have a responsibility: to ensure that _every_ response carries a disclaimer that the answer from their AI could be right or completely wrong, and that it's up to the user to figure out which, because the AI can't at the moment.