
> Mozilla.ai’s initial focus? Tools that make generative AI safer and more transparent. And, people-centric recommendation systems that don’t misinform or undermine our well-being

This all sounds well and good (and makes for a fine press release), but where the rubber meets the road is how these things are defined.

What does it really mean for AI to be “safe”? Which is the bigger danger: being insulted by a computer, or having entire industries gutted and millions put out of a job? You can’t pay rent, but hey, at least the AI was nice about it.

What does it mean to have “people-centric” recommendation systems? It’s such an irritatingly corporate and meaningless term. Under one definition, collaborative filtering is already exactly that, yet CF led to filter bubbles, because it turns out a person’s beliefs aren’t normally distributed: recommend what a user’s nearest neighbors liked and you amplify whatever cluster they were already in (see the toy sketch below).
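
To make the mechanism concrete, here is user-based CF in miniature. This is a toy sketch with made-up ratings and plain cosine similarity, not how any particular production recommender works:

    import numpy as np

    # Rows = users, columns = items (1 = liked, 0 = not seen).
    # Users 0-2 cluster around items 0-2; users 3-5 around items 3-5.
    ratings = np.array([
        [1, 1, 1, 0, 0, 0],
        [1, 1, 0, 0, 0, 0],
        [0, 1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1, 0],
        [0, 0, 0, 0, 1, 1],
    ], dtype=float)

    def cf_scores(user):
        # Cosine similarity between this user and every other user.
        norms = np.linalg.norm(ratings, axis=1)
        sims = ratings @ ratings[user] / (norms * norms[user])
        sims[user] = 0.0                 # don't count yourself
        scores = sims @ ratings          # neighbors' likes, similarity-weighted
        scores[ratings[user] > 0] = 0.0  # only score unseen items
        return scores

    print(cf_scores(1).round(2))
    # -> [0.   0.   1.32 0.   0.   0.  ]
    # Item 2 (same cluster) dominates; items 3-5 score zero, so user 1
    # is never shown anything from outside their bubble.

Every unseen item outside the user’s own cluster scores exactly zero, so it never surfaces. That’s the filter bubble in a dozen lines, and calling it “people-centric” doesn’t change the math.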

What does it mean that a system doesn’t misinform? You’re going to need some arbiter of truth. Misleading journalism has been an issue since journalism was born, and I don’t think $30 million from Mozilla is going to change that, however high-minded their intentions.

And that isn’t even getting into the technical issues with generative models. I think anyone who has played around with statistical language models like ChatGPT knows that they don’t have a knowledge graph. They are not expert systems. Squaring GOFAI with deep learning is a problem several orders of magnitude larger than what Moz has pledged. I’ll bet anything the smartest Google engineers wish they’d known how to create an AI that doesn’t misinform before their stock tanked 10%.
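
The knowledge-graph point is easy to show in miniature. In this toy sketch (made-up triples, a fake “LM” that just samples; not any real system’s internals), the curated store can refuse to answer when it holds no fact, while the statistical model emits a fluent guess either way:

    import random

    KG = {("Firefox", "developed_by"): "Mozilla"}   # curated fact triples

    def kg_answer(subject, relation):
        # Expert-system style: a stored fact, or an honest None.
        return KG.get((subject, relation))

    def lm_answer(prompt):
        # A statistical LM optimizes plausibility, not truth: it samples
        # a fluent completion whether or not any fact backs it up.
        return random.choice(["Mozilla", "Google", "Netscape"])

    print(kg_answer("Thunderbird", "developed_by"))   # None - no stored fact
    print(lm_answer("Thunderbird was developed by"))  # confident-sounding guess

Making the second behave like the first is precisely the squaring problem, and $30 million doesn’t buy it.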

But this will be a great resume pad for their VP of whatever to have led.


