> Tools that make generative AI safer and more transparent. And, people-centric recommendation systems that don’t misinform or undermine our well-being.
Safer? What is so unsafe ATM?
Transparent? Are you finna open-source it or nah?
People-centric recommendation systems? What? Aren't recommendation systems usually non-people-centric? What does that even mean?
What does "misinforming your well-being" even mean?
Transparency doesn't begin or end with open-sourcing. Where does the input come from? What sources did it draw upon for an output? How much is the core model vs the more ephemeral learning on top? etc.
> What does "misinforming your well-being" even mean?
It means you're struggling a bit with English grammar. Would phrasing it as "[systems] that don't misinform us or undermine our well-being" help?
I understand what the grammar is, but deciding what exactly we're being misinformed about, or how our well-being is being undermined, requires some kind of arbiter of truth, which is a slippery slope.
Ofc, open-sourcing isn't enough, but it's better than nothing.
It's just sad that another company is going about "Safe AI" in the wrong way. Perhaps some irony is in store.
> Safer? What is so unsafe ATM? Transparent? Are you finna open-source it or nah? People-centric recommendation systems? What? Aren't recommendation systems usually non-people-centric? What does that even mean?
> What does "misinforming your well-being" even mean?
So much BS.