I'm running a desktop Linux, and I'm planning to have a dabble with NLP and machine learning. I do some ML-related stuff at work, but I don't have any actual hands-on experience with neural networks or modern NLP.
Provided that I'm running Linux, and I want to do ML, should I get current-gen AMD Radeon or NVidia GeForce?
> Provided that I'm running Linux, and I want to do ML, should I get current-gen AMD Radeon or NVidia GeForce?
If you want maximum flexibility and the least time wasted, go with NVIDIA. Most models, GitHub repos, papers, and online questions assume NVIDIA, and the authors probably never bothered with any other architecture.
As another person posted earlier in this thread, AMD doesn't currently support ROCm on consumer hardware, so you won't be able to run anything built for CUDA on these new cards.
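In practice, the first thing most tutorials have you do is check whether the framework can see a GPU at all. A minimal sketch of that check, assuming PyTorch (ROCm builds of PyTorch report through the same `torch.cuda` API, so on an unsupported card this just falls back to the CPU):

```python
import torch

if torch.cuda.is_available():
    # Works for both CUDA and ROCm builds of PyTorch.
    print("GPU backend available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA/ROCm device found; everything will run on the CPU.")
```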
"One big benefit of using the packaged driver: working ROCm OpenCL compute! OpenCL compute benchmarks on the packaged driver are coming in a separate article today. The OpenCL/compute support for Radeon RX 6800 series is working and better off than the current RDNA 1 support. It's great to see things working in the limited testing thus far. Presumably for the imminent ROCm 4.0 release they will also have the Big Navi support independent of the packaged driver, but in any case great to see it coming."
>>> If you want to dabble, use the cloud. Colab is free
Colab notebooks are proving to be one of the most incredible resources out there right now. I liken them to the "free tier" cloud VM / App Engine / Heroku of a decade ago ;)
No, it's not really the same - but the parent is "planning to have a dabble with NLP and machine learning", so my guess is HuggingFace/SpaCy/PyTorch/Tensorflow/FastAI are more the consideration than touching CUDA directly.
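For what it's worth, that kind of dabbling is often just a few lines against one of those libraries. A minimal sketch, assuming `transformers` is installed (e.g. in a free Colab notebook):

```python
from transformers import pipeline

# Downloads a small pretrained model the first time it runs.
# Pass device=0 to put it on Colab's GPU; by default it runs on CPU.
classifier = pipeline("sentiment-analysis")
print(classifier("Colab's free tier is plenty for getting started with NLP."))
```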
Without question, nvidia. The sad reality here is that the best performance for ML on a 5700xt is to boot into windows and use the beta for directml under WSL2 with TF1. It pains me to say that as someone who has been a linux purist for over a decade. We’re literally just waiting for AMD to release blobs, and we’ve been waiting over a year.
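For anyone curious what that workaround looks like: a rough sketch below, assuming Microsoft's tensorflow-directml package (a TF 1.15 fork) installed under WSL2, where ordinary TF1 code runs on the DML device without changes.

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# A DML device should show up next to the CPU if DirectML is working.
print(device_lib.list_local_devices())

with tf.Session() as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(sess.run(tf.matmul(a, a)))  # runs on the DML device when available
```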
It's just a historical fact that GPUs originally evolved to do graphics. What they're really good at is doing a huge amount of numerical computation in parallel, of which graphics is just a specific use case, and that is exactly what you need for ML. If they were invented today, they would be called something else. They're already the specialized hardware you're describing.
> Wouldn’t it make more sense to let the CPU do it, or have specialized cards for the job. So you could have an AMD graphics card and an Nvidia ML card.
Graphics cards are the specialized cards for ML. They are several orders of magnitude faster than general-purpose CPUs at ML because they are specifically designed for the mathematical calculations done in most ML.
We call them graphics cards because that was the first real general use case for doing lots of linear algebra on large data sets. But long before CUDA came along, people were abusing graphics APIs to perform more general-purpose high-performance calculations for things like video decompression or physics.
nVidia has good leadership and recognized that there would soon be a sizeable market for video cards as a cheap replacement for the specialized hardware used in supercomputers.
Interesting tidbit: when I was in college, one of our CS professors invested in a cluster of PS3s as a cheap "supercomputer", as they were substantially faster at ML than anything else you could get for $600.
GPUs and the specialized cards can basically do the same vector, matrix, and tensor operations that are needed for neural-network machine learning.
CPUs are not faster; in fact, they are several orders of magnitude slower at these operations.
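You can see the gap with a single large matrix multiply. A rough sketch assuming PyTorch and an NVIDIA card (the exact numbers depend entirely on your hardware):

```python
import time
import torch

x = torch.randn(4096, 4096)

t0 = time.time()
_ = x @ x
print(f"CPU matmul: {time.time() - t0:.3f}s")

if torch.cuda.is_available():
    xg = x.cuda()
    torch.cuda.synchronize()  # GPU kernels launch asynchronously, so sync around the timer
    t0 = time.time()
    _ = xg @ xg
    torch.cuda.synchronize()
    print(f"GPU matmul: {time.time() - t0:.3f}s")
```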
Google Coral is the name of an edge AI accelerator, which can only be used for inference, i.e. running already-trained models to get results from inputs.
Google doesn’t sell any hardware for ML training, but they offer Cloud TPU as a service for that.
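For reference, inference on a Coral device comes down to loading a TFLite model compiled for the Edge TPU with the Edge TPU delegate. A sketch, assuming the tflite_runtime package and libedgetpu are installed (the model filename is just a placeholder):

```python
import tflite_runtime.interpreter as tflite

# "model_edgetpu.tflite" is a placeholder for a quantized model compiled
# with the Edge TPU compiler; libedgetpu provides the delegate.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
print(interpreter.get_input_details())  # fill these tensors, then call invoke()
```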