I spent a decade working on various machine learning platforms at well-known tech companies. Everything I ever worked on became obsolete pretty fast. From the ML algorithm to the compute platform, all of it was transitory. That, coupled with the fact that a few elite companies are responsible for most ML innovation, makes it feel self-defeating to me to even learn a lot of this material.
Agreed. Look at the table of contents of this book. The fundamental machine learning concepts you learned with SVMs or other now-obsolete algorithms are still useful and applicable today.
Nobody is building real technology with either of those algorithms. Sure, they're theoretically helpful, but they aren't valuable anymore. Spending your precious life learning them is a waste.
>Spending your precious life learning them is a waste
So you really did not learn them.
There is nothing wrong with being a user. You don't have to know how compilers work to use a compiler. But then you should not say you understand compilers.
In the same way, you would probably benefit from a book called "Using Deep Learning", not "Understanding Deep Learning".
Yes, they’re not deploying them. That doesn’t mean it doesn’t still help to know the fundamentals of the field, especially when you’re trying to innovate.
Yeah. But you wouldn't build a plane without knowing physics, right?
Nobody deploys a textbook algorithm, because everyone knows textbook algorithms and there's no advantage in them. So, no, there is real value in learning the fundamentals, dear founder.
This brings up an important question: Is a topic useful to learn if you will never use it in your life?
To attempt to answer this question, we can look at LLMs as an analogy. Including code in an LLM's training set also makes the model better at non-coding tasks, which suggests that learning one thing can sometimes make you better at other things. I'm not saying the same necessarily applies to learning these "old school" AI techniques, but it's a decent analogy at least.
I started my journey in machine learning fifteen years ago. Ironically, at that time, my professor told me that neural networks were outdated and trying them wouldn't result in publishable research. SVMs were popular and emphasized in my coursework. I concur that SVMs don't hold as much practical significance today. But the progress in AI and ML is generally unpredictable, and no one knows what theory leads to the next leap in the field.
Quite a lot of techniques in deep learning have stood the test of time at this point. New techniques are also developed either building on old ones or trying to fix their deficiencies. For example, Transformers were developed to address vanishing gradients in LSTMs over long sequences and to improve GPU utilization, since LSTMs are inherently sequential in the time dimension.
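To make the "inherently sequential" point concrete, here's a minimal sketch (assuming PyTorch; the shapes and layer sizes are purely illustrative) contrasting the step-by-step LSTM recurrence with a self-attention call that processes all timesteps at once:

```
import torch
import torch.nn as nn

batch, seq_len, dim = 8, 128, 64
x = torch.randn(batch, seq_len, dim)

# LSTM: each step's hidden state depends on the previous step's,
# so the time dimension must be processed one step at a time.
lstm = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)
h = torch.zeros(1, batch, dim)
c = torch.zeros(1, batch, dim)
outputs = []
for t in range(seq_len):                       # inherently serial loop
    out, (h, c) = lstm(x[:, t:t+1, :], (h, c))
    outputs.append(out)
lstm_out = torch.cat(outputs, dim=1)           # (batch, seq_len, dim)

# Self-attention: every position attends to every other position in
# one batched, matmul-heavy call, so all timesteps are computed in
# parallel and gradients flow directly between distant positions.
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
attn_out, _ = attn(x, x, x)                    # (batch, seq_len, dim)
```

The loop is what caps GPU utilization: step t can't start until step t-1 finishes, while the attention call is a single parallel computation across all positions.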
Sure, but if you were an expert in LSTMs, that's nice; you know the lineage of the algorithms. But it probably isn't valuable, companies don't care, and you can't directly use that knowledge. You would never just randomly study LSTMs now.
There are plenty of transferable skills you get from being an expert in something that gets made obsolete by a similar-but-different iterative improvement. Maybe you're really good at implementing ideas from papers, you have a great intuitive understanding of how to structure a model to use some tech within a particular domain, you understand very well how to implement and use models that require state, you know how to clean and structure data to leverage a particular feature, etc.
Also, being an "expert in LSTMs" is like being an "expert in HTTP/1.1" or "knowing a lot about Java 8". It's not knowledge or a skill that stands on its own. An expert in HTTP/1.1 is probably also very knowledgeable about web serving, networking, or backend development. HTTP/2 being invented doesn't obsolete that knowledge at all. And that knowledge of HTTP/1.1 would certainly come in handy if you were trying to research or design something like a new protocol, just as knowledge of LSTMs could provide a lot of value to those looking for the next breakthrough in stateful models.
If it became obsolete, then y'all were doing the new shiny.
The fundamentals don't really change. There are several different streams in the field, and there are many, many algorithms with good staying power in use. Of course, you can upgrade some if you like, but chase the white rabbit forever, and all you'll get is a handful of fluff.
Very few things stay the same in technology. You should think of technology as another type of evolution! It is driven by the same kinds of forces as evolution, IMO. I think even Linus Torvalds once said that Linux evolved through natural selection.
> Even better: change the bounds of what's possible ;)
... which will be easier if you have a solid grasp of the foundations of the field. If you only ever focus on the "latest shiny", you'll be lost and left floundering when the landscape changes out from underneath you.