It is in fact too much to expect that an LLM get fine details correct because it is by design quite fuzzy and non-deterministic. It's like trying to paint the Mona Lisa with a paint roller.
It's just a misuse of the tools to present LLM summaries to people without a _lot_ of caveats about their accuracy. I don't think they belong _anywhere_ near a legitimate news source.
My primary point in calling out those mistakes is that they are the kinds of minor errors I would find quite tolerable and expected in my own use of LLMs, because I know what I am getting into when I use them. Just chucking LLM-generated summaries next to search results is malpractice, though.
I think the primary point of friction in a lot of critiques between people who find LLMs useful and people who hate AI usage is this:
People who use AI to generate content for consumption by others are being quite irresponsible in how it is presented, and are using it to replace human work that it is totally unsuitable for. A news organization that is putting out AI-generated articles and summaries should just close up shop. They're producing totally valueless work. If I wanted ChatGPT to summarize something, I could ask it myself in 20 seconds.
People who use AI for _themselves_ are more aware of what they are getting into, know the provenance, and aren't necessarily presenting it to others as their own work. This is more valuable economically, because getting someone to summarize something for you as an individual is quite expensive and time consuming, and even if the end result is quite shoddy, it's often better than nothing. The same goes for generating dumb videos on Sora or AI-generated music for yourself to listen to or send to a few friends.
If you are a news organization and you want a reliable summary for an article, you should write it! You have writers available and should use them. This isn't a case where "better-than-nothing" applies, because "nothing" isn't your other option.
If you are an individual who wants a quick summary of something, you don't have readers and writers on call to do that for you, and ChatGPT takes a few seconds of your time and pennies to do a mediocre job.