Hacker News | xaskasdf's comments

Ya know, here on the local market there are a bunch of Optanes floating around; I'll try to get hold of one to check if there's any improvement


Optanes will be good for latency, but not so much for BW, which seems to be your major bottleneck if I'm not mistaken?


Yeah, the mobo upgrade is something I gotta do anyway, so that'll be covered more or less; the Optane is something I hadn't thought about


Actually it's purely bandwidth-bound. The major bottleneck of the whole process, in my case, is the B450 mobo I've got: it's only capable of PCIe 3.0 and gives the GPU x8 lanes instead of x16, so I'm capped until I get an X570 maybe. I should get around double or triple the token speed with that upgrade alone
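To see why the link width dominates here, a back-of-envelope sketch: when the weights have to cross the PCIe bus for every decoded token, tok/s is roughly link bandwidth divided by bytes moved per token. The per-lane figure is PCIe 3.0's nominal rate after 128b/130b encoding; the model size and efficiency factor are assumptions for illustration, not measurements from the setup above.

```python
# Back-of-envelope: if model weights are streamed over PCIe every token,
# decode speed is capped at link_bandwidth / bytes_moved_per_token.
# model_gb and efficiency are illustrative assumptions, not measurements.

PCIE3_PER_LANE_GBS = 0.985  # ~usable GB/s per PCIe 3.0 lane after encoding overhead

def tokens_per_sec(lanes: int, model_gb: float, efficiency: float = 0.8) -> float:
    """Upper bound on tok/s when the whole model crosses the link per token."""
    bw = lanes * PCIE3_PER_LANE_GBS * efficiency  # effective GB/s on the link
    return bw / model_gb

model_gb = 20.0  # hypothetical quantized model streamed over the bus each token
print(f"x8 : {tokens_per_sec(8,  model_gb):.2f} tok/s")
print(f"x16: {tokens_per_sec(16, model_gb):.2f} tok/s")
```

Doubling the lanes doubles this ceiling exactly, which matches the "around double" expectation; "triple" would additionally need the PCIe 3.0 → 4.0 jump an X570 board allows.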


Actually I can't go full TDP with a 650 W PSU; I've got to upgrade it ASAP


I updated the documentation to provide more info on the patching process, added the patches themselves, and included some notes on the risks of applying them


I did, but with different quantization compressions, and it ran into quality issues; I'll rerun with the same quants to see if that fixes it. As for the memory that looks unused: it's being used for rotating layers that the CPU swaps in from RAM, which keeps layers warm and ready to use while inferencing, discarding the already-used ones


Did you even read anything? hahaha


Actually I'm thinking about buying an AMD BC-250, which is basically a PS5 in a PCIe card form factor, and it's Linux-capable by default; maybe next month


This was the experiment itself https://github.com/xaskasdf/ps2-llm

The idea was basically to run an LLM on a PS2, but I ran into problems like the 32 MB RAM cap and the 4 MB VRAM cap, so I had to figure out a way to stream layers during the forward pass. Given that the PS2 lets you issue instructions directly against VRAM, with 32-bit addressing, it gave an insane amount of tok/s; then I wondered if I could do the same on my 'puter
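The layer-streaming trick described above can be sketched as a double buffer: while layer i runs the forward pass, a background thread prefetches layer i+1, so only two layers ever need to be resident at once. This is a minimal illustrative sketch, not the repo's actual code; the function names and the toy "forward" are hypothetical stand-ins.

```python
# Double-buffered layer streaming: overlap the fetch of the next layer with
# the compute of the current one, then discard each layer after use.
from concurrent.futures import ThreadPoolExecutor

NUM_LAYERS = 4

def load_layer(i):
    """Stand-in for reading one layer's weights from disk/RAM into fast memory."""
    return {"id": i, "weights": f"layer-{i}-weights"}

def forward(layer, x):
    """Stand-in for running one transformer layer over activations x."""
    return x + [layer["id"]]

def streamed_forward(x):
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(load_layer, 0)              # start fetching layer 0
        for i in range(NUM_LAYERS):
            layer = pending.result()                      # wait for current layer
            if i + 1 < NUM_LAYERS:
                pending = pool.submit(load_layer, i + 1)  # prefetch next, kept "warm"
            x = forward(layer, x)                         # compute overlaps the fetch
            del layer                                     # discard the used layer
        return x

print(streamed_forward([]))  # layers applied in order: [0, 1, 2, 3]
```

Peak residency is two layers (current + prefetched) regardless of model depth, which is what makes a 32 MB RAM budget workable as long as a single layer fits.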


I've got an M3; I'll test it on Metal and see how it goes


Actually this idea was fueled by those, since I went to check whether there was anything close to what I wanted to achieve; pretty useful tho


nvmlib/ssd-gpu-dma and BaM (based on the same code base) are pretty cool as they allow you to initiate disk reads/writes directly from a CUDA kernel (so not only reading/writing directly to GPU memory but also allowing the GPU to initiate IO on its own). Sometimes called GPU-initiated I/O or accelerator-initiated I/O.

