Hacker News

95% of all papers I have read have sucked. Maybe they just weren't what I was looking for, but for a lot of them I couldn't believe they got published as anything novel.


Of the remaining five percent, at least in software, you can be sure at least 80% (4% of the total) don't actually work when tested. It is beyond frustrating to deal with research, to the point that these days, unless the algorithm is very well described or there is source code available, I have to assume the researchers are just lying.


> It is beyond frustrating to deal with research, to the point that these days (...) I have to assume the researchers are just lying.

This is not something new, or even from this century. The Royal Society, which was founded in 1660 and is a landmark institution in the history of science, adopted the motto "Nullius in verba" — roughly, "take nobody's word for it".

https://en.wikipedia.org/wiki/Royal_Society


Recent AI stuff from major labs on arXiv is pretty good, but yeah, anything that's AI + some other field is usually pretty bad. It's usually written by someone who might be an expert in their own domain, but who knows very little about AI, or even about numerical optimization in general. The fact that such "work" is accepted uncritically by publishers doesn't inspire a lot of confidence in the value they purportedly add. It's right on the surface: "awesome" results are easy to achieve in AI if you screw up your train/val split, or deliberately choose an extremely weak baseline.
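A minimal sketch of the train/val-split failure mode, on made-up data: if samples are duplicated (or augmented) *before* splitting, copies of training samples leak into the validation set, and a model that purely memorizes looks "awesome" on labels that are actually random noise. All names and data here are hypothetical, just to illustrate the leak.

```python
# Hypothetical demo: leaky vs. clean train/val split on random-noise labels.
import numpy as np

rng = np.random.default_rng(0)

# 200 unique samples; labels are pure noise, so there is nothing to learn.
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

def knn1_acc(X_tr, y_tr, X_va, y_va):
    # 1-nearest-neighbour "model" = pure memorization of the training set.
    d = ((X_va[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return float((y_tr[d.argmin(axis=1)] == y_va).mean())

# Mistake: duplicate (think: augment) BEFORE splitting, then split randomly.
X_dup = np.vstack([X, X])
y_dup = np.concatenate([y, y])
idx = rng.permutation(len(X_dup))
cut = int(0.8 * len(idx))
leaky = knn1_acc(X_dup[idx[:cut]], y_dup[idx[:cut]],
                 X_dup[idx[cut:]], y_dup[idx[cut:]])

# Correct: split the unique samples first, so no copy crosses the boundary.
idx_u = rng.permutation(len(X))
cut_u = int(0.8 * len(idx_u))
clean = knn1_acc(X[idx_u[:cut_u]], y[idx_u[:cut_u]],
                 X[idx_u[cut_u:]], y[idx_u[cut_u:]])

print(f"leaky split accuracy: {leaky:.2f}")  # near 1.0: leakage, not learning
print(f"clean split accuracy: {clean:.2f}")  # near 0.5: chance, as it should be
```

The same mechanism hides behind subtler versions of this bug: near-duplicate images, the same patient in both splits, temporal overlap in time series. The baseline sanity check is the same — on shuffled labels, honest validation accuracy must collapse to chance.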



