
The DeepMind team was essentially forced to publish and release an earlier iteration of AlphaFold after the Rosetta team effectively duplicated their work and published a paper about it in Science. Meanwhile, the Rosetta team published similar work on co-folding ligands and proteins in Science just a few weeks ago. These are hardly the only teams working in this space - I would expect progress to be very fast over the next few years.


How much has changed. I talked with David Baker at CASP around 2003, and he said at the time that while Rosetta was the best modeller, every time they updated its models with newly determined structures, its predictions got worse :)


It's kind of amazing in retrospect that it was possible to (occasionally) produce very good predictions 20 years ago with a training set at least an order of magnitude smaller. I'm very curious whether DeepMind has tried trimming the inputs back to an earlier cutoff date and retraining their models - assuming the same computing technology were available, how well would their methods have worked a decade or two ago? Was there an inflection point somewhere?
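The ablation being suggested could be as simple as filtering the training structures by deposition date and retraining at each cutoff. A minimal sketch of the filtering step, with made-up example entries and assumed field names (real PDB metadata has more structure than this):

```python
from datetime import date

# Toy stand-in for PDB metadata; pdb_ids and dates here are illustrative,
# and the "deposited" field name is an assumption, not a real API.
structures = [
    {"pdb_id": "1ABC", "deposited": date(1998, 5, 1)},
    {"pdb_id": "2XYZ", "deposited": date(2010, 3, 14)},
    {"pdb_id": "6QRS", "deposited": date(2019, 7, 2)},
]

def training_set_at(cutoff, entries):
    """Keep only structures that were deposited before `cutoff`,
    i.e. the data a model trained at that time could have seen."""
    return [e for e in entries if e["deposited"] < cutoff]

# Sweep cutoffs; in the real experiment you'd retrain at each one
# and plot accuracy vs. cutoff to look for an inflection point.
for year in (2003, 2013, 2020):
    subset = training_set_at(date(year, 1, 1), structures)
    print(year, len(subset))
```

Running the retraining at each cutoff is the expensive part, of course; the filtering itself is trivial.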



