I don't think this is correct. For inference, the bottleneck is memory bandwidth, so if you can hook up an FPGA with better memory, it has an outside shot at beating GPUs, at least in the short term.
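Back-of-the-envelope for batch-1 decode (illustrative numbers, not measurements of any real system): every generated token has to stream essentially all the weights through memory once, so tokens/s is roughly bandwidth divided by model size. Something like:

    # Rough decode-throughput bound: each token reads all weights once.
    # All numbers below are assumptions for illustration, not measured figures.
    model_params = 70e9          # e.g. a Llama3-70B-class model
    bytes_per_param = 1          # FP8 weights
    weight_bytes = model_params * bytes_per_param

    mem_bw = 4.8e12              # ~4.8 TB/s, roughly H200-class HBM
    tokens_per_s = mem_bw / weight_bytes
    print(f"~{tokens_per_s:.0f} tokens/s per batch-1 decode stream")  # roughly 69

So whoever feeds the weights faster wins that regime, regardless of how much matmul hardware sits behind the memory.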
I mean, I have worked with FPGAs that outperformed H200s on Llama3-class models quite a while ago.
Show me a single FPGA that can outperform a B200 at matrix multiplication (or even come close) at any usable precision.
A B200 can do roughly 10 peta-ops at FP8, theoretically.
I do agree memory bandwidth is also a problem for most FPGA setups, but Xilinx ships HBM on some SKUs and those parts are still not competitive at inference as far as I know.
I'd like to know more. I expect these systems are 8x VH1782; is that true? What's the theoretical math throughput? My expectation is that it isn't very high per chip. And how is performance in the prefill stage, where inference actually is math-limited?
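For context on that last question, here's a crude sketch of why prefill tends to be compute-bound rather than bandwidth-bound (assumed numbers only, not a claim about any specific FPGA or GPU):

    # Prefill costs roughly 2 * params FLOPs per prompt token, but the
    # weights only need to stream through memory about once for the whole
    # pass, so compute dominates. Numbers are illustrative assumptions.
    params = 70e9
    prompt_tokens = 4096
    prefill_flops = 2 * params * prompt_tokens       # ~5.7e14 FLOPs

    peak_flops = 2e15        # assume ~2 PFLOP/s usable dense FP8
    print(f"compute time ~{prefill_flops / peak_flops * 1e3:.0f} ms")

    weight_bytes = 70e9      # FP8 weights, read roughly once for the pass
    mem_bw = 4.8e12          # ~4.8 TB/s
    print(f"weight-streaming time ~{weight_bytes / mem_bw * 1e3:.0f} ms")

With those assumptions the compute time is an order of magnitude larger than the weight-streaming time, which is why raw matmul throughput matters so much in prefill even if decode is bandwidth-bound.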