
AI has always prompted ethical discussions, but now that big models are publicly available, society needs to speed-run actually implementing major new ethical decisions.

Yes, ChatGPT needs to be “not toxic”.

The company leadership believes this and is investing heavily in researching, designing, and implementing as many safeguards as possible.

They know that if they don’t do a good job here, it will be a PR and support disaster and ultimately affect the bottom line.

No serious customer will buy an AI product if there’s a chance it spews sexual content, hate speech, violence, etc. at their end users.

At least for now, some humans somewhere in the chain will have to review toxic content to help train the models.

It sounds particularly depressing to me that a $50B company can outsource this to Kenya for $200k. But I don’t know who else could do it, or how much money would make it better.

Totally hypothetical, but I bet the handsomely paid US engineers have a ton of policies in place to limit their exposure to toxic content. And when they do see some, they have a much better foundation in life to keep it from triggering major mental health problems.

What we really need to know is what OpenAI’s whole portfolio of approaches, budgets, staffing, and outsourcing looks like.

In the meantime, kudos to Time for exposing one piece of it for public discourse.



> What we really need to know is what OpenAI’s whole portfolio of approaches, budgets, staffing, and outsourcing looks like.

In other words, you want OpenAI to be...open? :)



