> All these examples seem like deliberate attempts to get weird/nonsense answers back.
I'm not sure that I agree.
If a child hears a rumour, or sees some joke online, claiming that gasoline cooks spaghetti faster, they may search to find out if it's legit.
During Obama's term, a right-wing conspiracy theory gained popularity claiming that Obama was Muslim. Someone coming across that conspiracy theory years after the fact, completely devoid of any context (e.g. a pre-teen who wasn't even alive during Obama's term), might do a search to find out if it's true.
There WAS an NPR article that cited a study claiming that parachutes are no more effective than regular backpacks. AI results have not been enabled for my Google account yet, and currently if you search for "are parachutes effective?" you get a featured snippet that excerpts that article and links to it. Now take that link, with all of its context, out of the picture and imagine that someone hears that claim casually and wants to search to see if it's true. Currently, in MY search results, you get the link to the NPR article, which not only explains where the claim comes from but gives you the full context with all of the "gotchas."

It sounds like with Google AI the first thing you get is a definitive, authoritative claim that no, parachutes are not effective at saving your life, and you might as well jump out of an airplane with your carry-on backpack.