
> It seems to me the amount of data available through the internet is what powered the recent advances in ML.

Great point. That was part of it. There were other elements, none of which came together until a few years ago, though some very visionary and optimistic people had been expecting the result since at least the 90s, if not the 50s. The other elements were: (1) having enough labeled data; (2) fast enough hardware, not just to solve the problems, but to test repeatedly with different initialization weights and identify good values for those parameters; (3) good enough libraries for programming the hardware, letting more developers easily use GPUs for matrix operations, and distributed computing for problems that benefit from parallelization; (4) various mathematical advances whose details I couldn't do justice to, but things like rewriting the math so problems parallelize more easily helped a bunch; (5) open sharing of research - ML researchers shared their advances via places like arxiv.org, which led to faster progress; (6) the use of patents for defense rather than offense - I believe Google led the way here.
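To make point (2) concrete, here's a minimal toy sketch of "restart from different random initializations and keep the best" - the names, toy model, and search range are all made up for illustration, and real training uses gradient descent per run rather than a single loss evaluation:

```python
import random

# Toy stand-in for point (2): attack the same fitting problem from
# several different random initializations and keep the best result.
# The "model" here is a single weight w fit to y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 6)]

def loss(w):
    # Mean squared error of predictions w * x against targets y.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

best_w, best_loss = None, float("inf")
for seed in range(50):
    rng = random.Random(seed)
    w = rng.uniform(-10.0, 10.0)   # a fresh random initialization
    if loss(w) < best_loss:
        best_w, best_loss = w, loss(w)

print(best_w, best_loss)
```

The point is that each restart is independent, so fast hardware lets you run many of them (in parallel, even) and keep whichever initialization ends up best.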

Now, and going forward, yes, those with the most data are believed to have the biggest advantage. Even so, researchers are making advances in all kinds of areas of ML: settings with little data, no data, or lots of data, varying degrees of labeling, different classes of algorithms, and so on. The field is growing rapidly.

That's the gist; I'm definitely missing a few things.

Yann LeCun [1] is a good person to follow if you're interested in a top ML researcher's opinion about AI and the media's perception of it. He frequently posts articles with which he agrees or disagrees.

[1] https://www.facebook.com/yann.lecun


