I’ve had the opposite experience with GPT-5, where it’s so utterly convinced that its own (incorrect) solution is the way to go that it turns me down and preemptively launches tools to implement what it has in mind.
I get that there are tradeoffs, but erring on the side of the human being correct is probably going to be a safer bet for another generation or two.