It's exciting to see such an incredible rate of advancement; I can't wait to see what transistors at 7nm and less will make possible. I can still remember when these same components were measured in millimeters.
Chips that are just like the ones we have now but 30% cheaper? I don't see how this is going to open up whole new technologies that weren't possible at 10/15nm.
Maybe not, but better battery life and eventually cheaper hardware for mobile and low-power devices are no small things; they expand the range of usefulness, especially in the developing world.
One of the biggest advancements due to this kind of tech has actually been throughput for radios. By using more sophisticated, denser encodings, we can scale the same 4G tech to even higher speeds. Think 1 Gbit/s speeds for mid-market phones. This is enabled largely because smaller nodes are also more power efficient, which mobile phones require.
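To make the "denser encodings" point concrete: at a fixed symbol rate, bits per symbol scale as log2(M) for M-QAM, so stepping up the modulation order buys throughput directly. A rough sketch in C; the symbol rate and QAM orders are illustrative assumptions on my part, not real LTE parameters:

    /* Illustrative only: throughput gain from denser modulation.
       Build with: cc -O2 qam.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double symbol_rate = 15.0e6;     /* assumed symbols/s       */
        int qam[] = { 64, 256 };         /* 6 vs 8 bits per symbol  */
        for (int i = 0; i < 2; i++) {
            double bits = log2((double)qam[i]);
            printf("%d-QAM: %.0f bits/symbol -> %.1f Mbit/s\n",
                   qam[i], bits, symbol_rate * bits / 1e6);
        }
        return 0;
    }

256-QAM carries a third more bits per symbol than 64-QAM, but it needs better SNR and more DSP work, which is exactly what smaller, more efficient nodes make affordable within a phone's power budget.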
The advances are less in your face, but still there.
It's easy to think we need faster processors, when in fact the architectures we use, both software and hardware, are incredibly inefficient.
Any modern processor is much faster than the RAM its programs and data live on, hence the multiple levels of cache inside it. Multiple cores add the complexity of keeping memory consistent. Multiple threads per core put additional pressure on caches, while our OSs assume all processors see a single, consistent memory image (which requires keeping caches coherent across cores). Our most common software doesn't run on GPUs without extensive changes. Right now, only mobile platforms are exploring asymmetric multiprocessing with a single ISA.
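As a small illustration of how expensive that cross-core coherence can get, here's a classic false-sharing sketch (a toy of my own, nothing from the article): two threads increment independent counters, but when the counters sit on the same cache line, the line ping-pongs between cores on every write.

    /* Build with: cc -O2 -pthread fs.c */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000UL

    /* Same cache line: coherence traffic on every increment.   */
    struct { volatile unsigned long a, b; } together;
    /* Padded onto separate 64-byte lines: no ping-pong.         */
    struct { volatile unsigned long a; char pad[64];
             volatile unsigned long b; } apart;

    static void *bump(void *p) {
        volatile unsigned long *c = p;
        for (unsigned long i = 0; i < ITERS; i++) (*c)++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* Time this once with &together.a / &together.b and once
           with &apart.a / &apart.b; the padded version is
           typically several times faster on x86.               */
        pthread_create(&t1, NULL, bump, (void *)&apart.a);
        pthread_create(&t2, NULL, bump, (void *)&apart.b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%lu %lu\n", apart.a, apart.b);
        return 0;
    }

The counters never touch each other's data, yet the coherence protocol still serializes them; that's the kind of hidden cost the "consistent memory image" assumption carries.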
I don't think we need faster single-thread performance. What I think we need is to adapt our software, which is designed to run on ridiculously fast copies of ancient personal computers, to run on computers that, instead of mimicking a successful product of the 80s, resemble more what we can build now. We need software that exploits the SIMD units (predicate bits that avoid branches are your friends there, as in the sketch below), that runs well on multiple cores, and OSs that can deal with memory inconsistency between cores (maybe using write-through instructions for shared data and write-back for process-local stuff). We need software that doesn't need complicated instruction reordering or speculative execution.
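A minimal sketch of that predication idea, in C with SSE intrinsics (assumes x86; my own toy example): the comparison produces a mask, and a mask-select replaces the per-element branch, so the loop body gives the branch predictor nothing to miss.

    #include <emmintrin.h>

    /* out[i] = (a[i] > b[i]) ? a[i] : b[i], four floats at a time.
       Assumes n is a multiple of 4 for brevity.                   */
    void vec_max(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 4) {
            __m128 va   = _mm_loadu_ps(a + i);
            __m128 vb   = _mm_loadu_ps(b + i);
            __m128 mask = _mm_cmpgt_ps(va, vb);  /* predicate: all-1s where a > b */
            __m128 sel  = _mm_or_ps(
                _mm_and_ps(mask, va),            /* take a where predicate true   */
                _mm_andnot_ps(mask, vb));        /* take b where predicate false  */
            _mm_storeu_ps(out + i, sel);
        }
    }

(Yes, _mm_max_ps does this particular job in one instruction; the point is that the mask-select pattern generalizes to any predicate you'd otherwise branch on.)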
> due to Amdahl's law, it's always better to have a single faster processor than multiple slower ones
This is not what Amdahl's law says. Amdahl's law gives you the theoretical maximum speedup for a program with a sequential component running on a given number of processors. So as long as that speedup is larger than the single-core advantage of the faster chip, it does make sense to have multiple slower processors rather than one faster one.
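Concretely, Amdahl's law is speedup(n) = 1 / ((1 - p) + p/n) for a program whose fraction p parallelizes across n processors. A toy comparison in C (the 90% / 4-core / 1.5x numbers are my own illustrative assumptions):

    #include <stdio.h>

    /* Amdahl's law: speedup on n processors for parallel fraction p */
    static double amdahl(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        double p = 0.90;                                  /* assumed parallel fraction */
        printf("4 slow cores: %.2fx\n", amdahl(p, 4.0));  /* ~3.08x                    */
        printf("1 fast core : %.2fx\n", 1.5);             /* assumed 1.5x single core  */
        return 0;
    }

With those numbers the slower quad wins; crank p down far enough and the single fast core wins instead, which is exactly the trade-off the law describes.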
There are many products that don't see the light of day because they can't meet their COGS (cost of goods sold) target. Cutting a huge chunk of that cost, say 30%, could open up a bunch of amazing functionality.
I think the point is that the rate of advancement has already slowed (18mo => 3yr per node) and is slowing even further (5nm by 2025). The Moore's law treadmill is basically over. There will still be significant advancements (gate-all-around transistors, wafer stacking), and we might even see a speed increase again, but the steps are more orthogonal now and no longer improve everything at once. The last thing still improving is $/transistor; MHz and power long since (around 2005-2010) stopped getting better with each node. However, if the CFOs ever figure out that the return doesn't meet the investment ... that will be the end (at least in the West).
I would say the rate of advancement is kind of plateauing. From the P4 to the Athlon 64 to the Core 2 Duo the gains were quite amazing. I guess in mobile you still see advancement, but do you really notice the difference between a Snapdragon 801, 820, and 835?