Hacker News

The issue is there is "failed" and then there is "failed". Yes, many times you have to repeat an experiment because of bad reagents, broken machines, a needed tweak to some obscure parameter, or someone leaving the lab door open...

However, if your experiment is well controlled, then the controls will reveal this sort of "failure". When I was still running gels, many times when first setting up a new protocol we'd run only controls. If your experiment fails because your controls failed, then that's just science.

But I've also seen the other kind of "failure". The kind where the controls came out perfectly, but the effect or the association or the expression profile that the researcher was hoping for didn't show up. When these sorts of failures are ignored or discarded, then we do science a huge disservice.

I am encouraged, though, that there recently seems to be a movement toward, if not outright publishing such negative results, then at least archiving them and sharing them with others in the field. After all, without Michelson and Morley's "failure" we might not have special relativity.



>But I've also seen the other kind of "failure". The kind where the controls came out perfectly, but the effect or the association or the expression profile that the researcher was hoping for didn't show up. When these sorts of failures are ignored or discarded, then we do science a huge disservice.

Why does this happen? Clearly this is what the article insinuates. Is publish-or-perish really that strong? Every honest experiment with honest results benefits society. Not every prediction-and-result combination earns a prize in your lifetime, but that should in no way diminish someone's value as a scientist. The science may later be used for something we never intended (could I offer you the hope of posthumous recognition?). And finding a way something does not work may save someone else time. That benefits the scientific community.

Not everyone gets to walk on another planet, some people have to build the ship.


For better or worse, most scientific journals still operate on a business model dependent on physical subscriptions. Since this sets something of a limit on how much can be published, and since scientists tend to prefer paying for positive results over negative ones, there has been a strong cultural bias toward favoring positive results.

The good news is that this is gradually changing. As scientists begin to understand that online distribution models don't have the same sorts of limitations, and that search can be a powerful tool, there has been a move toward at least collecting negative results. Of course, they still don't benefit the scientists in the "publish-or-perish" world, but even that may be changing...maybe...


>But I've also seen the other kind of "failure". The kind where the controls came out perfectly, but the effect or the association or the expression profile that the researcher was hoping for didn't show up. When these sorts of failures are ignored or discarded, then we do science a huge disservice.

This.



