GPUs are a good example - they started getting traction in the late 90s/early 2000s.
Once we figured out in the mid 2000s that single-thread perf wouldn't keep scaling, GPUs became the next scaling frontier, and the thinking was that they'd complement and eventually supplant CPUs. With the Xbox and smartphones shipping integrated GPUs, and games starting to rely on general-purpose compute shaders, a lot of folks (including me) thought that future software would constantly ping-pong between CPU and GPU execution. Got an array to sort? Let the GPU handle that. Got a JPEG to decode? GPU. Etc.
I took an in-depth CUDA course back in the early 2010s, thinking that in five years or so all professional signal processing would move to GPUs, GPU algorithm knowledge would be just as widespread and expected as knowing how to program a CPU, and I'd need to Leetcode a bitonic sort to get a regular-ass job.
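(For anyone who never touched this stuff, here's roughly the kind of thing I mean - a minimal toy sketch of a bitonic sort in CUDA, just to illustrate the flavor of "GPU algorithm knowledge", not production code or anything from that course.)

```cuda
#include <cstdio>
#include <cstdlib>
#include <algorithm>
#include <cuda_runtime.h>

// One compare-and-swap stage of bitonic sort.
// k = size of the bitonic sequences being built, j = distance to the partner element.
__global__ void bitonic_step(int *data, int j, int k) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int partner = i ^ j;          // element this thread compares against
    if (partner > i) {                     // only one thread of each pair does the swap
        bool ascending = ((i & k) == 0);   // sort direction depends on position in the k-block
        if ((ascending && data[i] > data[partner]) ||
            (!ascending && data[i] < data[partner])) {
            int tmp = data[i];
            data[i] = data[partner];
            data[partner] = tmp;
        }
    }
}

int main() {
    const int n = 1 << 10;                 // bitonic sort wants a power-of-two length
    int h[n];
    for (int i = 0; i < n; ++i) h[i] = rand() % 1000;

    int *d;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

    dim3 block(256), grid(n / 256);
    for (int k = 2; k <= n; k <<= 1)           // sequence size doubles each pass
        for (int j = k >> 1; j > 0; j >>= 1)   // compare distance halves within a pass
            bitonic_step<<<grid, block>>>(d, j, k);

    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("sorted? %s\n", std::is_sorted(h, h + n) ? "yes" : "no");
    return 0;
}
```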
What happened? GPUs never got used that way: data sharing between CPU and GPU is still cumbersome and slow, and dedicated accelerators like video decoders weren't replaced by general-purpose GPU compute - we still have special function units for those jobs.
There are technical challenges to doing these things, sure, but very solvable ones.
GPUs are still stuck in two niches - video games and AI (which, incidentally, got huge). Everybody still writes single-threaded Python and JS.
There was every reason to be optimistic about GPGPU back then, and there's every reason to be optimistic about AI now.
Not sure where this will go, but probably not where we expect it to.