Hacker News

How accurate are the summaries?


This is what worries me. I'd say that >50% of the time, Copilot-generated summaries of meetings in which I was presenting misinterpret what I said in some important way.

I'd much rather (and do) take a couple minutes to write my own take-aways. It's the golden rule in a business context: I'd prefer to receive three one-sentence bullet points that are actually accurate over a couple pages of AI slop, so that's what I give to my colleagues.

The cross-language factor is an interesting angle I haven't had to contend with, though.


I need to do that kind of thing all the time, and it annoys me to no end when people post the summaries in chats to “catch up” on a meeting, because I know they’re wrong. As a European who understands five languages well and can pick up solid hints in many others, nothing beats actually listening or scanning a transcript.


They are quite accurate, but the problem is that they sometimes leave out a critical piece of information: something said only once in the meeting, without much repetition, that turns out to be a crucial factor. The LLMs miss it completely.


I turned on Apple Intelligence to summarize notifications from Outlook, Teams, etc. I haven't found it to be very accurate yet, especially for MS Teams.


Based on my experience, it's very risky. A lot of the time it connects similar but very different things that should stay separate, changing the meaning of the message or discussion.


Not OP, but I'll answer from my experience trying several different tools for this: the good ones are roughly as accurate, on average, as a human note taker who is familiar with the domain terminology.


I’ve found some minor inaccuracies, but nothing catastrophic enough to make me think I’d be better off without them.



