Slack AI (slack.com)
32 points by mvdtnz on June 30, 2024 | 47 comments


I wonder how long it will be until we hear about a company accidentally doing something because the Slack AI summary / AI search incorrectly paraphrased a request.

I really wish AI companies would call hallucinations what they really are - Mistakes. Using wishy-washy words like that makes non-techy people trust the outputs of LLMs much, much more than they should. "Oh, it just hallucinated - how silly!" instead of "It got it wrong - we should be more careful." It can be scary hearing how much trust the average joe puts into LLM outputs...


I actually think "hallucination" sounds worse than "mistake." A hallucination is a fundamental break from the shared reality.


The problem is that everything from an LLM is a hallucination, because LLMs don't have a reality. LLMs inventing new text is the whole point of them. It's just that sometimes they invent text that's also wrong.


This is simply false. LLMs have a world model. Is it perfect? Obviously not. But neither is yours. Saying that "everything" from an LLM is a hallucination, to whatever extent that can be said to be true, is overly broad to the point of being useless.

It is very possible to learn which kinds of things LLMs are likely to be correct about and which kinds of things they are likely to be wrong about. Neither of those categories is going to be perfect, but, again, neither will it be for any particular human.

I mostly agree that current LLMs should not be used in any kind of critical role that does not have a human constantly checking it. And even with a human, probably not, because humans are lazy and are likely to stop checking.

But they can be very useful if one takes the time to learn what kinds of outputs they usually give.

The problem with the vast majority of articles on them is that they usually go in one of two directions: "Look at this amazing task that an LLM did really well" or "Look at this incredibly basic task that an LLM did very poorly."

Both of those ignore the totality: LLMs are shockingly, amazingly good at certain kinds of tasks and also shockingly, embarrassingly bad at other kinds of tasks.

If one can learn to recognize which is which, one can get a lot of value and, if one is careful, avoid the pitfalls.


If an employee was hallucinating, you'd call them an ambulance.


> I really wish AI companies would call hallucinations what they really are - Mistakes.

At the bottom of every ChatGPT conversation: "ChatGPT can make mistakes. Check important info."


"Our product doesn't work, but please use it anyway."


[flagged]


"I don't work, please hire me anyway"


I think it's only going to get worse as LLMs get just good enough that people stop thinking critically about what they generate. Right now many people still verify or read through the results to make sure they sound right. What happens when people get lulled into complacency and then it really goes off on a bender?


Nothing will have changed. How many times have you seen HN comments from people who never read the article? How many times have you read flat-out lies on Reddit? The last 40 years of cable news have banked on this theory, and at least the last 80 years of American politics as well. There's no degradation of critical thinking, either. It's just a cohort that does it and a cohort that doesn't. For all of time.


IMO the future is government neutering of anything related to safety, the physical world, or food...


Whatever name is used will inevitably become normalised. Changing the name isn't going to move the needle on people accepting the magic AI fairy dust.

I wish I knew what would. I'm sick of seeing it thrown into everything due to corporate FOMO. This stuff isn't cheap to run, and seeing it used to automatically summarise people's pointless meetings kills me, almost as much as the meetings do.


> because the Slack AI summary / AI search incorrectly paraphrased a request.

Or because it misinterpreted a question (or a joke!) as a factual statement.


A mistake suggests competence and overall reliability. A hallucination suggests something more seriously and systemically wrong. I think that framing is appropriate.


What is it with this constant boogeyman of hallucinations? It's like people cannot understand that this is a new technology that is rapidly improving.


Because the majority of people don't understand how LLMs work, what hallucinations are, or the capabilities of AI tech. Like lawyers https://www.abc.net.au/news/2023-06-24/us-lawyer-uses-chatgp... airline website devs https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th... and law enforcement https://gizmodo.com/deepfake-cheer-mom-sues-for-defamation-1...

No, people in general cannot understand changes in technology.


GPT-3.5 used to hallucinate a lot, but that was drastically improved with GPT-4, to the point where I can't even remember the last time it happened. It feels like people are exaggerating or basing it on their experience with old models.


I really wish we'd stop anthropomorphizing these things. They're not hallucinating; the program simply returned an error. Remember, y'all, this stuff is just Markov chains with extra steps. An incorrect answer from any other computer program is called an "error", and apparently LLMs are the only piece of software in which significant amounts of them are tolerated, because they look just like a mechanical Turk and are sufficiently convincing as to be a really neat parlor trick. But write a program that generates random integers and returns -1 for even numbers, and it's the exact same thing. LLMs are crap, because programs aren't supposed to return errors.
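
A minimal sketch of that toy program (purely illustrative; the name and range are made up):

    import random

    def toy_oracle():
        # Draw a random integer; for even draws, silently return -1
        # instead of the right value. Roughly half the calls are wrong,
        # and nothing tells the caller which ones.
        n = random.randint(0, 1_000_000)
        return -1 if n % 2 == 0 else n

    # No exception, no error code -- just output that may or may not
    # be right, which is the point about LLM "errors" being silent.
    print(toy_oracle())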

"On two occasions I have been asked, — 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?'... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.


They aren't Markov chains. Even GPT-4 knows this.


I'm critiquing their functionality. I know they aren't literally Markov chains... Ask ChatGPT about humor as a tool of criticism.


How far off are we from the Trough of Disillusionment again? When this bubble pops it's going to be rough.


The fact that everyone is so certain makes me think the analysis is incorrect. Especially when it's coming from the HN crowd which is notoriously bad at predicting things.


Look on the bright side: they'll be practically giving those GPUs away.


I’ve been using this for a couple months now.

Honestly, it's the kind of thing that's cool the first time you use it; then I forgot it existed, and now it's nothing more than one more notification to clear out every day.

The “Recap” feature has potential but I honestly feel like it creates more noise. In most cases (for me) it’s easier to just skim the channel than to read a summary.

The “ask questions in search” sounded cool, but after a few lackluster attempts at using it I forgot the feature existed.

As the Slack, Google Workspace, and Notion admin at our company, I bought the AI license add-ons for each product (plus ChatGPT) which comes to about $80/person/month.

My thinking is I’ll give our team access to everything bleeding edge for 12 months, then actually ask people what value they get from it and decide whether to scale the licenses back.

I have a feeling many other companies buy these add-ons with a similar mindset.


I'd suggest a slight change in strategy. After 12 months, don't ask people. Just start gradually taking things away. Those that contact you to ask why it's missing are the ones who probably find it useful.

The scream test works every time, in 80% of scenarios...


That seems like a really backwards approach - if you give it to users, they will probably have one or two use cases they infrequently use that will make them reluctant to give it up.

If there isn't a clear need, why sign up at all?


I find that landing page over-produced and trying too hard, to the extent that I literally couldn't read it. 2000s-era pop-up vibes.


Slack must be happy that AI + LLMs came at a moment when the novelty of their product seemed to be wearing off, and, similarly to how people now think about email, Slack seems to (sadly, in my opinion) be gaining a reputation as a place for noise, busywork, and distraction. AI can solve those problems (a second time, as Slack was designed to make workplace communication more efficient).


Slack was designed to get a fat return for the VCs that funded it. By that measure, it was a success.


Hmm. None of these features seem to be particularly useful. If a channel has so many unread messages that you need a summary of it, maybe try just not. I don't want a daily round-up from multiple channels - I want Slack to just show me messages from multiple low-volume channels in a single view.


> If a channel has so many unread messages that you need a summary of it, maybe try just not.

"Try just not" what? I can't tell my colleagues to stop communicating.


I think it's useful to properly interrogate how much discussion is actually important and worth reading, rather than continuously piling on additional tools.


Related:

Slack AI Training with Customer Data

https://news.ycombinator.com/item?id=40383978


Man, it has got to suck being a fresh-out-of-school CS grad. Not only is the industry fucked: if you get a job, you may well be fully remote and WFH, which already kills social connections in the crib. Now add Slack AI to the mix and you've got the Black Mirror-esque nightmare of: has your manager really read your message? Have they ever read any? Or is everything you say just being shouted into an abyss, with only an LLM-distorted echo being heard?


> if you get a job, you may well be fully remote and WFH

I envy the fresh graduates of today, not having to put up with office bullshit for almost two decades like I did.


Isn't it the responsibility of a manager to treat employees well? A careless manager wouldn't read their messages regardless, LLM or not.


Umm… “well” is kind of subjective. A manager’s responsibility is to make his employees productive for the organization.


Is this the beginning of the end of Slack?

Unrelated: I forgot Slack was owned by Salesforce, who also own Heroku. Why am I still using Slack?


...because it is a mass illusion that you have individual choices in the matter. Companies and some government agencies are required to have a monitored, logged, and legally responsive chat provider. You are not asked if you want to use Slack; you are required to use Slack. Hint: expect much more of that.


Expect much more of it from who? What are you suggesting changed? Among all the spying enterprise apps, why Slack? AFAIK, engineers brought Slack into many companies and pushed back on alternatives like Teams.


Engineers brought Slack into companies because it was like IRC but non-tech folk could use it.

Teams didn't exist when Slack became "cool". It was Lync or Skype for Business, or something along those lines. The timeline and the rebranding are a bit blurry, but they were truly awful compared to Slack.

Slack stopped being cool (for me) when they killed the IRC bridge. After that it became just another enterprise tool that was doomed to be hated.


HipChat should be in the timeline too, along with many XMPP deployments.


HipChat was great. Native app, too.


Here in the San Francisco Bay Area, USA, this is what I have seen myself: yes, Teams has taken a new front position in this category.


Even if AI tools (e.g. search or summarization) aren't perfect, neither are our current manual methods. Right now I feel there's a strong negative sentiment towards mistakes made by LLMs vs humans - when/if that sentiment fades, it'll be interesting to see how much work can be delegated to LLMs.


The difference is in how you make mistakes.

Humans have the ability to explain the thought process that led to the mistake, can provide confidence intervals, can learn from the mistake and can maintain a coherent position over time.

And right now the only work being delegated to LLMs is work where mistakes are tolerable, e.g. copywriting or customer service.


Human mistakes are learning opportunities. An LLM doesn't learn from its mistakes.



