I'm running desktop Linux, and I'm planning to have a dabble with NLP and machine learning. I do some ML-related stuff at work, but I don't have any actual hands-on experience with neural networks or modern NLP.

Provided that I'm running Linux, and I want to do ML, should I get current-gen AMD Radeon or NVidia GeForce?



> Provided that I'm running Linux, and I want to do ML, should I get current-gen AMD Radeon or NVidia GeForce?

If you want maximum flexibility and the least time wasted, go with NVIDIA. Most models, GitHub repos, papers, and online questions assume NVIDIA, and the authors probably never bothered with any other architecture.


As another person posted in this thread, AMD doesn't currently support ROCm on consumer hardware. So you won't be able to run anything that is built for CUDA on these new cards.

UPD: link to the GitHub issue:

https://github.com/RadeonOpenCompute/ROCm/issues/887


https://www.phoronix.com/scan.php?page=article&item=amd-rx68...

"One big benefit of using the packaged driver: working ROCm OpenCL compute! OpenCL compute benchmarks on the packaged driver are coming in a separate article today. The OpenCL/compute support for Radeon RX 6800 series is working and better off than the current RDNA 1 support. It's great to see things working in the limited testing thus far. Presumably for the imminent ROCm 4.0 release they will also have the Big Navi support independent of the packaged driver, but in any case great to see it coming."


I don't think this is true for the RX 6800 series, based on this Phoronix article: https://www.phoronix.com/scan.php?page=article&item=amd-rx68...

> One big benefit of using the packaged driver: working ROCm OpenCL compute! OpenCL compute benchmarks on the packaged driver are coming in a separate article today. The OpenCL/compute support for Radeon RX 6800 series is working and better off than the current RDNA 1 support. It's great to see things working in the limited testing thus far.


Wow, that's clearly good news.


Yeah, hopefully the early information is accurate.


If you want to dabble, use the cloud.

Colab is free. Colab Pro is $10/mo. GPU instances give you access to better hardware and don't lock up your machine for hours/days at a time.


Moving data in and out is a pain in the ass, though. I wish they would just give me shell access instead of forcing me to use notebooks.
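
For what it's worth, mounting your Google Drive from inside the notebook takes some of the sting out of it. A minimal sketch (the Drive paths below are just examples; the exact folder name under /content/drive may differ on your account):

    # Runs inside a Colab notebook: mounts your Google Drive under /content/drive
    from google.colab import drive
    drive.mount('/content/drive')

    # Example only: copy a dataset from Drive onto the VM's local disk for faster I/O
    import shutil
    shutil.copy('/content/drive/My Drive/data/corpus.txt', '/content/corpus.txt')

You can also run shell commands by prefixing a cell line with "!", which is the closest thing to shell access you get on the free tier.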


>>> If you want to dabble, use the cloud. Colab is free

Colab notebooks are proving to be one of the most incredible resources out there right now. I liken it to the "free tier" cloud vm / app engine / heroku of a decade ago ;)


Is it the same? I've done CUDA development myself, and the ease of local development is hard to beat.


No, it's not really the same - but the parent is "planning to have a dabble with NLP and machine learning", so my guess is HuggingFace/SpaCy/PyTorch/Tensorflow/FastAI are more the consideration than touching CUDA directly.
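
To give a sense of the level those libraries operate at, here's a minimal HuggingFace sketch (it downloads whatever default pretrained model the library picks):

    # Minimal transformers example - no CUDA code on your part.
    # Pass device=0 to pipeline(...) to run on the first GPU instead of the CPU.
    from transformers import pipeline

    nlp = pipeline("sentiment-analysis")   # downloads a default pretrained model
    print(nlp("I finally got my NLP experiments running."))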


Without question, Nvidia. The sad reality here is that the best performance for ML on a 5700 XT is to boot into Windows and use the DirectML beta under WSL2 with TF1. It pains me to say that as someone who has been a Linux purist for over a decade. We're literally just waiting for AMD to release blobs, and we've been waiting over a year.
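
For anyone curious what that workaround looks like in practice, a rough sketch, assuming the tensorflow-directml package (the DirectML fork of TF 1.15); exact device names depend on the setup:

    # pip install tensorflow-directml  (TF 1.15 fork with a DirectML backend)
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # On a working DirectML setup the Radeon should show up in this list.
    print(device_lib.list_local_devices())

    # Plain TF1 graph/session code; the runtime places it on the DirectML device if present.
    a = tf.random.uniform([1000, 1000])
    b = tf.random.uniform([1000, 1000])
    c = tf.matmul(a, b)
    with tf.Session() as sess:
        print(sess.run(c).shape)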


Most well-known DL libs use CUDA APIs, which are proprietary to NVIDIA, so to do it today you want an NVIDIA card.
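
Concretely, the usual PyTorch pattern is just to ask whether a CUDA device is present and fall back to the CPU otherwise - a minimal sketch:

    import torch

    # True on an NVIDIA card with working drivers; False on AMD (without a ROCm
    # build) or on a CPU-only box, in which case everything runs on the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(4, 8, device=device)
    model = torch.nn.Linear(8, 2).to(device)
    print(model(x).shape)   # torch.Size([4, 2])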


While that's currently how it works, I can't be the only one who thinks it's weird that it's somehow the job of the graphics card to do machine learning.

Wouldn't it make more sense to let the CPU do it, or to have specialized cards for the job? Then you could have an AMD graphics card and an Nvidia ML card.


It's just a historical fact that GPUs originally evolved to do graphics. What they're really good at is doing a huge amount of numerical computation in parallel, of which graphics is just one specific use case, and that is exactly what you need for ML. If they were invented today, they would be called something else. They're already the specialized hardware you're describing.


> Wouldn't it make more sense to let the CPU do it, or to have specialized cards for the job? Then you could have an AMD graphics card and an Nvidia ML card.

Graphics cards are the specialized cards for ML. They are several orders of magnitude faster than general-purpose CPUs at ML because they are specifically designed for the mathematical calculations done in most ML.

We call them graphics cards because graphics was the first real general use case for doing lots of linear algebra on large data sets. But long before CUDA came along, people were abusing graphics APIs to perform more general-purpose high-performance calculations for things like video decompression or physics.

Nvidia has good leadership and recognized that there would soon be a sizeable market for video cards as a cheap replacement for the specialized hardware used in supercomputers.

Interesting tidbit: when I was in college, one of our CS professors invested in a cluster of PS3s as a cheap "supercomputer", as they were substantially faster at ML than anything else you could get for $600.


GPUs, and the specialized cards as well, can basically do the same vector, matrix and tensor operations that are needed for neural network machine learning.

CPUs are not faster; in fact they are several orders of magnitude slower for these operations.
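
If you want to see the gap yourself, here's a rough timing sketch assuming PyTorch and a CUDA-capable GPU (exact numbers vary a lot with hardware):

    import time
    import torch

    def time_matmul(device, n=4096, reps=10):
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        torch.matmul(a, b)                    # warm-up
        if device.type == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(reps):
            torch.matmul(a, b)
        if device.type == "cuda":
            torch.cuda.synchronize()          # wait for the GPU to actually finish
        return (time.time() - start) / reps

    print("cpu :", time_matmul(torch.device("cpu")))
    if torch.cuda.is_available():
        print("cuda:", time_matmul(torch.device("cuda")))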


If you want to do anything beyond very basic machine learning, then an NVIDIA GPU is a must-have.


Don't expect everything to work out of the gate for Linux.

Typically you have to wait some time after launch (e.g. 3-6 months) before AMD gets their Linux drivers into a stable state.


Unfortunately you are sort of stuck with Nvidia. AMD has fantastic Linux support, but no CUDA.


If it's really just dabbling, go with Google's Coral TPU.

If it becomes serious, rent cloud systems so you don't have to do all the maintenance.


Google Coral is the name of an edge AI accelerator, which can be used only for inference, meaning running already-trained models to get results from inputs.

Google doesn’t sell any hardware for ML training, but they offer Cloud TPU as a service for that.
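
For reference, inference on a Coral looks roughly like this with the pycoral library (a sketch only - the model path is a placeholder, and the .tflite file has to be compiled for the Edge TPU ahead of time; training still happens elsewhere):

    # pip install pycoral (plus the Edge TPU runtime)
    import numpy as np
    from pycoral.utils.edgetpu import make_interpreter
    from pycoral.adapters import common, classify

    interpreter = make_interpreter("model_edgetpu.tflite")   # placeholder path
    interpreter.allocate_tensors()

    # Dummy input sized to the model; a real app would feed a preprocessed image here.
    width, height = common.input_size(interpreter)
    common.set_input(interpreter, np.zeros((height, width, 3), dtype=np.uint8))

    interpreter.invoke()
    for c in classify.get_classes(interpreter, top_k=3):
        print(c.id, c.score)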



