This reminds me of Adrian Thompson’s (University of Sussex) 1996 paper, “An evolved circuit, intrinsic in silicon, entwined with physics,” ICES 1996 / LNCS 1259 (published 1997), which was extended in his later thesis, “Hardware Evolution: Automatic Design of Electronic Circuits in Reconfigurable Hardware by Artificial Evolution” (Springer, 1998).
Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).
The task was to evolve a circuit that could distinguish between 1 kHz and 10 kHz square-wave inputs, outputting high for one and low for the other.
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
Most astonishingly:
The circuit depended critically on five logic elements that were not logically connected to the main path.
In a purely digital design, removing them should have made no difference: they were not wired to the output path at all. In practice, however, the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analog medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
Answering another commenter's question: yes, the final result was temperature-dependent. The author did test it at different temperatures; it only operated reliably within the range of temperatures it was trained at.
I’d argue that this was a limitation of the GA fitness function, not of the concept.
Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, cheap and dense temperature/voltage sensing, and reinforcement-learning/evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.
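To make that concrete, here is a minimal sketch of what selecting for robustness could look like, assuming you can re-measure each candidate bitstream at a few temperature/voltage corners. Everything here is hypothetical, including `evaluate_on_hardware` and the corner list; it is not how Thompson's original fitness worked.

```python
import random
import statistics

# Hypothetical environmental corners: (temperature in C, core voltage in V).
CORNERS = [(10, 1.14), (25, 1.20), (60, 1.26)]

def evaluate_on_hardware(bitstream, temp_c, vcc):
    """Placeholder for the real measurement: configure the FPGA with
    `bitstream`, set the chamber temperature and supply voltage, apply the
    1 kHz / 10 kHz test tones, and return a behavioural score in [0, 1]."""
    return random.random()  # stand-in; real code would measure the output pin

def robust_fitness(bitstream):
    """Reward candidates that behave well at every corner: mean score minus
    a penalty for variance, so a circuit that only works at one temperature
    scores poorly."""
    scores = [evaluate_on_hardware(bitstream, t, v) for t, v in CORNERS]
    return statistics.mean(scores) - statistics.pstdev(scores)

# Rank a population of random candidate bitstreams by robust fitness.
population = [bytes(random.getrandbits(8) for _ in range(64)) for _ in range(20)]
population.sort(key=robust_fitness, reverse=True)
```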
The fact that human engineers could not understand how this worked in 1996 made researchers incredibly uncomfortable, and the same remains true today, but now we have vastly better tooling than back then.
I don't think that's true; for me it's the concept that's wrong. The second-order effects you mention:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
...are not just influenced by the chip design, they're influenced by substrate purity and doping uniformity -- exactly the parts of the production process that we don't control. Or rather: we shrink the technology node to right at the edge where these uncontrolled factors become too big to ignore. You can't design a circuit based on the uncontrolled properties of your production process and still expect to produce large volumes of working circuits.
Yes, we have better tooling today. If you use today's 14A machinery to produce a 1 µm chip like the 80386, you will get amazingly high yields, and it will probably be accurate enough that even these analog circuits are reproducible. But the analog effects become more unpredictable as the node size decreases, and so will the variance in your analog circuits.
Also, contrary to what you said: the GA fitness process does not design for robustness and generality. It designs for the specific chip you're measuring, and you're measuring post-production. The fact that it works for reprogrammable FPGAs does not mean it translates well to mass production of integrated circuits. The reason we use digital circuitry instead of analog is not because we don't understand analog: it's because digital designs are much less sensitive to production variance.
Possibly, but maybe the real difference is the subtle distinction between a planned, deterministic (logical) result and a deterministic but black-box outcome?
We’re seeing this shift already in software testing around GenAI. Trying to write a test around non-deterministic outcomes comes with its own set of challenges, so we need to plan for deterministic variances, which seems like an oxymoron but is not in this context.
That unreplicability between chips is actually a very, very desirable property when fingerprinting chips (sometimes known as ChipDNA) to implement unique keys for each chip. You use precisely this property (plus a lot of magic to control for temperature as you point out) to give each chip its own physically unclonable key. This has wonderfully interesting properties.
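For anyone curious, here is a toy illustration of the idea (not the actual ChipDNA scheme): a ring-oscillator PUF derives device-unique bits from which of two nominally identical oscillators happens to be faster on a given die. The frequencies below are simulated stand-ins for real on-chip measurements.

```python
import random

def simulated_ring_osc_freqs(chip_seed, n_pairs=64):
    """Stand-in for measuring 2*n_pairs nominally identical ring oscillators
    on one die: a nominal frequency plus device-specific process variation."""
    rng = random.Random(chip_seed)  # fixed per chip, like process variation
    nominal = 100e6
    return [nominal + rng.gauss(0, 1e5) for _ in range(2 * n_pairs)]

def puf_bits(freqs):
    """Compare oscillator pairs: bit i is 1 if the first oscillator of the
    pair is faster. Different dies give different (but stable) bit patterns."""
    return [int(freqs[2 * i] > freqs[2 * i + 1]) for i in range(len(freqs) // 2)]

chip_a = puf_bits(simulated_ring_osc_freqs("die-A"))
chip_b = puf_bits(simulated_ring_osc_freqs("die-B"))
print(sum(a != b for a, b in zip(chip_a, chip_b)), "of 64 bits differ")
```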
I wonder what would happen if someone evolved a circuit on a large number of FPGAs from different batches. Each of the FPGAs would receive the same input in each iteration, but the fitness function would be weighted to expose the worst-behaving units (maybe the bias should be raised in later iterations, when most units behave well).
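Something like this rough sketch is what I have in mind; the per-device scores would come from a measurement harness that isn't shown, and the softmin weighting is just one way to bias toward the stragglers:

```python
import math

def aggregate_fitness(per_device_scores, iteration, ramp=0.05):
    """Combine one candidate's scores, measured on many FPGAs, into a single
    fitness. Early on this is close to the plain mean; as `iteration` grows,
    the softmin weighting shifts toward the worst-behaving boards."""
    beta = ramp * iteration  # bias strength grows over the run
    weights = [math.exp(-beta * s) for s in per_device_scores]
    return sum(w * s for w, s in zip(weights, per_device_scores)) / sum(weights)

# Example: the same candidate measured on five boards from different batches.
print(aggregate_fitness([0.9, 0.95, 0.4, 0.88, 0.92], iteration=0))    # ~plain mean
print(aggregate_fitness([0.9, 0.95, 0.4, 0.88, 0.92], iteration=200))  # ~worst board
```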
Either it would generate a more robust (and likely more recognizable) solution, or it would fail to converge, really.
You may need to start training on a smaller number of FPGAs and gradually increase the set. Genetic algorithms are finicky to get right, and you might find that more devices massively increase the iteration count.