
Third version is "brand safety", which is: we don't want to be in a New York Times feature about 13-year-olds following Anarchist Cookbook instructions from our flagship product.


And the fourth version, which sits midway between investor safety and regulator safety: so capable and dangerous that competitors shouldn’t even be allowed to research it, but just safe enough that only our company is responsible enough to continue mass commercial consumer deployment without any regulations at all. It’s a fine line.


This is IMO the most important one to the businesses creating these models, and it's way underappreciated. Folks who want a “censorship-free” model from businesses don’t understand what a business is for.


...which is silly. Search engines never had to deal with this bullshit, and chatbots are just search without actually revealing the source.


I don’t know. The public’s perception - encouraged by the AI labs because of copyright concerns - is that the outputs of the models are entirely new content created by the model. Search results, on the other hand, are very clearly someone else’s content. It’s therefore not unfair to hold the model creators responsible for the content the model outputs in a different way than search engines are held responsible for content they link, and therefore also not unfair for model creators to worry about this. It is also fair to point this out as something I neglected to identify as an important permutation of “safety.”

I would also be remiss not to note that there is a movement, with censorious ends, to hold search engines responsible for the content they link to. So it is unfortunately not as inconsistent as it may seem, even if you treat the model outputs as dependent on their inputs.


You could just as easily argue that model creators don't own the model either—it's like charging admission to someone else's library.


Are you saying chatbots don't offer anything useful over search engines? That's clearly not the case or we wouldn't be having this conversation.

It's one thing to have a pile of chemistry textbooks and another to hire a professional chemist who tells you exactly what to do and what to avoid.


> Are you saying chatbots don't offer anything useful over search engines? That's clearly not the case or we wouldn't be having this conversation.

No, but that is the only value that's clear as of today: RAG. Everything else is just assuming someone figures out a way to make them more generally useful one day.

Anyway, even on the search-engine front, they still need to figure out how to get these chatbots to cite their sources outside of RAG; otherwise it's still just a precursor to a search to actually verify what it spits out. Perplexity is the only one I know of that's capable of this, and I haven't looked closely; it could just be a glorified search engine.
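Roughly, citing sources in a RAG setup just means threading retrieval metadata through to the final answer. A minimal sketch in Python, with a stubbed retriever and a stubbed model call (`retrieve` and `generate` here are hypothetical placeholders, not any vendor's API):

    # Citation-carrying RAG sketch: the retriever returns documents with
    # URLs, the prompt numbers them, and the URLs are returned alongside
    # the answer so the user can verify it.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        url: str
        text: str

    def retrieve(query: str) -> list[Doc]:
        # Stand-in for a real search or vector-store lookup.
        return [Doc("https://example.com/chem-safety",
                    "Never mix bleach and ammonia; it releases chloramine gas.")]

    def generate(prompt: str) -> str:
        # Stub for the LLM call; a real system would hit a model API here.
        return "Mixing bleach and ammonia releases toxic chloramine gas [1]."

    def build_prompt(query: str, docs: list[Doc]) -> str:
        # Number each snippet so the model can cite [1], [2], ... inline.
        sources = "\n".join(f"[{i + 1}] {d.url}\n{d.text}" for i, d in enumerate(docs))
        return ("Answer using only the sources below, citing them inline as [n].\n\n"
                f"Sources:\n{sources}\n\nQuestion: {query}")

    def answer(query: str) -> tuple[str, list[str]]:
        docs = retrieve(query)
        reply = generate(build_prompt(query, docs))
        return reply, [d.url for d in docs]  # expose the URLs for verification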


Search engines 'censor' their results frequently.


Do you think that 13 year olds today can’t find this book on their own?


Like I said, they're not worried about the 13-year-olds; they're worried about the media cooking up faux outrage about 13-year-olds.

YouTube re-engineered its entire approach to ad placement because of a story in the NY Times* shouting about a Procter & Gamble ad run before an ISIS recruitment video. That's when "brand safety" entered the lexicon of adtech developers everywhere.

Edit: maybe it was CNN; I'm trying to find the first source. There are articles about it going back to 2015, but I remember it suddenly became an emergency in 2017.

*Edit edit: it was The Times of London. The first link below is the first article in a series of attacks: "big brands fund terror", "taxpayers are funding terrorism".

Luckily OpenAI isn't ad-supported, so they can't be boycotted like YouTube was, but they still have an image to maintain with investors and politicians.

https://www.thetimes.com/business-money/technology/article/b...

https://digitalcontentnext.org/blog/2017/03/31/timeline-yout...


No, and they can find porn on their own too. But social media services still have per-poster content ratings, and per-account age restrictions that govern who can view content with those ratings.

The goal isn’t to protect the children; it’s CYA: to ensure they didn’t get it from you while honestly presenting as themselves (as that’s the threshold that sets the moralists against you).
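As a rough sketch of what that gate looks like in practice (the rating tiers and account fields here are hypothetical, not any real platform's schema):

    # Per-post content-rating gate. The check only binds if the account
    # honestly presents the viewer's age, which is exactly the CYA property:
    # the service can show it never knowingly served restricted content
    # to a declared minor.
    from dataclasses import dataclass

    RATING_MIN_AGE = {"everyone": 0, "teen": 13, "mature": 18}

    @dataclass
    class Account:
        declared_age: int  # self-reported, or set by a parent

    @dataclass
    class Post:
        rating: str  # assigned per poster/post: "everyone", "teen", "mature"

    def can_view(viewer: Account, post: Post) -> bool:
        return viewer.declared_age >= RATING_MIN_AGE[post.rating]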

———

Such restrictions can also work as an effective censorship mechanism… presuming the child in question lives under complete authoritarian control of all their devices and all their free time — i.e. has no ability to install apps on their phone; is homeschooled; is supervised at the library; is only allowed to visit friends whose parents enforce the same policies; etc.

For such a child, if your app is one of the few whitelisted services they can access — and the parent set up the child’s account on your service to make clear that they’re a child who should not see restricted content — then your app blocking them from viewing that content actually, materially affects their access to it.

(Which sucks, of course. But for every kid actually under such restrictions, there are 100 whose parents think they’re putting them under such restrictions, but have done such a shoddy job of it that the kid can actually still access whatever they want.)


I believe they are more worried about someone asking for instructions for baking a cake and getting a dangerous recipe from the wrong "cookbook". They want the hallucinations to be safe.


I know I had a copy of it back in high school.


Very good point, and definitely another version of “safety”!



