
Of course they are doing PR stunts to keep the media talking about them.

Remember Altman saying that they shouldn't release GPT-2 because it was too dangerous? It's the same thing with this Q* thing.



Because it could be used to generate spam, yes, and he was right about that.

And to set a precedent that models should be released cautiously. He was right about that too, and it is to our detriment that we don't take that more seriously.


> it is to our detriment that we don't take that more seriously.

Why?


Because when a network turns out to be dangerous in some unexpected way, you cannot exactly unrelease it.


Dangerous, how? All this vague handwavey fear mongering doesn’t really do it for me. Specifics are more my thing.


What makes you positively confident that a network cannot be dangerous?


I’m not making any claims about the danger or lack of danger. Anyway, in the absence of specifics, this is a boring conversation.


Great, so given a chance of danger, not releasing a network keeps your options open. You can make an API available, and if there turns out to be a problem you can close it again, ban a specific user, or implement hotfixes. None of those can be done with a publicly released network.
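For concreteness: the whole difference is that with an API, every request passes through code you control. A rough sketch of that gating (assuming FastAPI; all names here are made up for illustration, not anyone's real service):

    # A hypothetical gated endpoint, not a real product's API.
    from fastapi import FastAPI, HTTPException

    app = FastAPI()
    banned_users: set[str] = set()  # "ban a specific user"
    service_enabled = True          # kill switch: "close it again"

    def run_model(prompt: str) -> str:
        return "..."  # placeholder for the actual inference call

    @app.post("/generate")
    def generate(user_id: str, prompt: str):
        if not service_enabled:
            raise HTTPException(status_code=503, detail="service disabled")
        if user_id in banned_users:
            raise HTTPException(status_code=403, detail="user banned")
        # hotfixes (say, an output filter) can be added right here later,
        # because every request still flows through code you control
        return {"output": run_model(prompt)}

With open weights there is no equivalent chokepoint: whoever downloaded the file runs it, and none of these switches exist.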

edit: You know what, let's take a concrete issue that could happen today. You've made a generative image network. Five weeks after releasing it on Hugging Face, you discover to your chagrin that the dataset you used to train it contains an astonishing amount of child pornography, something like 1%. Your spot checks didn't find this because it's all in a subfolder that you forgot to check. Who knew it wasn't a good idea to download datasets from 4chan?

As a result, this network is now extremely good at generating images of children in sexual situations, and because of mode collapse, it's creating fake images of real children, something which all but the most libertarian consider morally abhorrent. At any rate, you consider this morally abhorrent, and you'd love to work with the police to prevent any further misuse. Unfortunately, your network has been downloaded at least ten thousand times and has already been fine-tuned to be even better at child porn by the nice folks at <insert dubious discord here>.

Now you have an appointment with a senator in three days, and you have to explain to her why you thought it was a good idea to publish this network for open download, even though you could have made way more money by keeping it closed. Good luck?

Now of course you can argue that in this case all the material was already out there. But that doesn't change the fact that you were the one who did the training run and released the network, and you're the reason perceptual hashes won't flag the generated pictures: they collide with nothing in any known-image database. If there were only a limited number of generated images in circulation, you could just take the API down, apologize profusely, donate 10k to RAINN or whatever, and restart your project under a new name. But as it is, that option is no longer available. The point is, we don't know what a network is doing, and so we don't know what it's going to do in the wild. We cannot prove the absence of capability, so we should hedge our bets.
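For anyone unfamiliar with the perceptual-hash point: scanners keep hashes of known images and flag anything within a small Hamming distance. A minimal sketch, assuming the Python `imagehash` and Pillow libraries (file names are placeholders):

    import imagehash
    from PIL import Image

    known = imagehash.phash(Image.open("known_image.png"))
    candidate = imagehash.phash(Image.open("candidate.png"))

    # Perceptual hashes survive re-encoding, resizing, and small edits:
    # a copy of a known image lands within a few bits of its hash.
    if known - candidate <= 8:  # threshold is illustrative
        print("likely a near-duplicate of a known image")

A freshly generated image is a near-duplicate of nothing in the database, so it sails under every threshold.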


Board member Helen Toner accused Sam/OpenAI of releasing GPT too early; there were people who wanted to keep it locked away over those concerns, which largely haven't come true (a lot of people don't understand how spam detection works and overrate the impact of deepfakes).

Companies have competing interests and personalities. That's normal. But there is no indication that GPT was held back for marketing.



