Show HN: Compute Watch – Track cloud prices and availability of A100s and H100s (llm-utils.org)
1 point by tikkun on July 6, 2023 | 1 comment
A few weeks ago I shared an H100/A100 availability post on HN (https://news.ycombinator.com/item?id=36333321).

But capacity and prices change regularly!

At first I was checking them manually and updating the page linked in the HN post above.

But I didn't want to keep checking manually, and I wanted more data points, historical views, and more real-time info. So Siddharth, who I work with, built a small tool to do that.

It's the best way to check live pricing and availability for Nvidia A100 and H100 GPUs. Pretty niche, but still kinda cool.
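For the curious, the tool is conceptually just a scheduled poller: fetch each cloud's pricing data on an interval and append timestamped rows, which gives you both live and historical views. Here's a minimal sketch of that idea (the endpoint URLs and response fields are hypothetical placeholders for illustration, not the tool's actual sources or code):

    # Minimal sketch of a scheduled price/availability poller.
    # Endpoint URLs and the JSON response shape are hypothetical placeholders.
    import csv
    import time
    from datetime import datetime, timezone

    import requests

    # Hypothetical pricing endpoints; the real tool pulls from each cloud's own pages/APIs.
    ENDPOINTS = {
        "lambda-labs": "https://example.com/lambda/pricing.json",
        "fluidstack": "https://example.com/fluidstack/pricing.json",
        "runpod": "https://example.com/runpod/pricing.json",
    }

    def poll_once(writer):
        """Fetch each endpoint and append one timestamped row per GPU offer."""
        ts = datetime.now(timezone.utc).isoformat()
        for cloud, url in ENDPOINTS.items():
            try:
                offers = requests.get(url, timeout=10).json()
            except (requests.RequestException, ValueError):
                continue  # skip this cloud on transient fetch/parse errors
            # Assumed shape: [{"gpu": "H100", "price_hr": 1.99, "available": true}, ...]
            for offer in offers:
                writer.writerow([ts, cloud, offer["gpu"], offer["price_hr"], offer["available"]])

    if __name__ == "__main__":
        with open("gpu_prices.csv", "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                poll_once(writer)
                f.flush()  # keep the historical log durable between polls
                time.sleep(15 * 60)  # poll every 15 minutes

The appended CSV is what makes the historical views cheap: every poll adds rows, so price-over-time charts are just a filter and a plot.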

Why these 3 clouds? I believe they are, for most people, the best places to rent on-demand instances of H100s or A100s. (I've looked at quite a few GPU clouds! See: https://gpus.llm-utils.org/alternative-gpu-clouds/)

If you happen to fall into the small niche that's interested in this, please let us know what you like about it!

I'd like to know:

1) What's most helpful?

2) What could be improved?

3) Which GPUs, and which clouds, should we add next?



Oh, and I'm working on another post about which clouds are best in general. My current recommendations:

If you need a huge number of A100s/H100s: talk to Oracle, FluidStack, or Lambda Labs. Based on conversations with a couple of cloud founders/execs, though, capacity for large quantities is very low, especially for H100s.

If you need a couple of A100s: FluidStack or Runpod.

If you need 1x H100: FluidStack or Lambda Labs.

If you need cheap 3090s: Tensordock.

If you need Stable Diffusion inference only: Salad.

If you want to play around with templates / general hobbyist: Runpod.

All of this assumes you're not tied to, or required by your enterprise to use, a specific large cloud.



