>The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest-- with no pathways that would allow them to influence the output-- yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
Robustness could have been added as an extra constraint, i.e. the fitness function would also reward circuits that work across many physical instances of the FPGA.
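As a rough sketch (not from the article), a robustness-aware fitness function might look something like this, where `program_fpga` and `measure_discrimination` are hypothetical hardware-interface helpers:

    def robust_fitness(bitstream, fpga_boards):
        """Score a candidate circuit on several physical boards, not just one."""
        scores = []
        for board in fpga_boards:
            program_fpga(board, bitstream)                 # load the candidate circuit
            scores.append(measure_discrimination(board))   # e.g. tone-separation accuracy
        # Taking the minimum rewards circuits that work on the worst board,
        # not just on average, which pushes evolution toward robustness.
        return min(scores)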
This reminds me of Physically Unclonable Functions (PUFs), which try to exploit manufacturing differences between devices as a way of distinguishing unique devices, for identification and authentication. There are, for example, PUFs that use differences in delays between logic blocks in FPGAs to generate unique responses to applied patterns/challenges.
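For the curious, the usual way those challenge/response pairs get used is roughly this (a hedged sketch, with `query_puf` standing in for whatever applies a challenge to the physical device; real schemes also discard used challenges and tolerate noisy responses):

    import secrets

    def enroll(device, n_pairs=128, challenge_bits=64):
        # At manufacture time, record a table of challenge-response pairs.
        crps = {}
        for _ in range(n_pairs):
            challenge = secrets.randbits(challenge_bits)
            crps[challenge] = query_puf(device, challenge)
        return crps

    def authenticate(device, crps):
        # Later, check identity by replaying one stored challenge.
        challenge, expected = secrets.choice(list(crps.items()))
        return query_puf(device, challenge) == expected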
Even with that setup you end up with overfitting after a point: instead of getting something that works on every FPGA of that type, you start getting things that work only on the specific FPGAs you provide. The same thing happens in machine learning for tasks like photo recognition: after a while your algorithm stops recognizing cars in general and starts recognizing only those specific photos of cars.
Ideally, you want to take a bunch of FPGAs, hold out a random subsample of them purely for acceptance testing, and stop evolving the circuit when performance on that acceptance subset starts getting worse.
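Something like this, reusing the hypothetical `robust_fitness` from above, plus an `evolve_one_generation` stand-in for whatever GA machinery you're running:

    import random

    def evolve_with_holdout(population, fpga_boards, holdout_fraction=0.3,
                            patience=5, max_generations=200):
        # Hold out some boards purely for acceptance testing.
        random.shuffle(fpga_boards)
        split = int(len(fpga_boards) * holdout_fraction)
        acceptance_boards, training_boards = fpga_boards[:split], fpga_boards[split:]

        best_score, best_candidate, stale = float("-inf"), None, 0
        for _ in range(max_generations):
            population = evolve_one_generation(population, training_boards)
            candidate = max(population, key=lambda c: robust_fitness(c, training_boards))
            score = robust_fitness(candidate, acceptance_boards)
            if score > best_score:
                best_score, best_candidate, stale = score, candidate, 0
            else:
                stale += 1
                if stale >= patience:   # acceptance performance stopped improving
                    break
        return best_candidate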
Even if it worked on the specific FPGAs you provide, it might not work reliably at different temperatures (as in the article), or if the noise source is slightly farther away, and so on.
There is no way to stop overfitting entirely, because it is difficult, if not impossible, to test the circuit in every possible environment we want it to work in.
That's basically what I said, and this is optimization, not machine learning. The problem is that the genetic algorithm fits the specifics of the FPGA and the environment it is optimized in, and doesn't work reliably on other FPGAs or in other environments.
I love this story. It has made me wonder if you couldn't use a similar technique on a larger scale: could you write device drivers for hardware this way?
Bar none, this is my favorite genetic algorithm story. Not because of how it 'failed' but because of the fascinatingly unexpected way it succeeded. This type of solution is one you would likely NEVER see come from classically trained EEs. That's where I see much of the value in GA/EA: finding solutions that no engineer in their right mind would ever come up with.