I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — a 10x boost in my ability to try new ideas, if not more. When I find something that shows promise, I switch into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in that phase is much more limited, though it can still be useful.
That's because an LLM can access a breadth of knowledge at any given moment that you cannot. That's the advantage it has.
E.g. quite often a sound (e.g. a piece of music) brings back memories of the time when you listened to it.
Our brains need something to 'prompt' them (ironic, I know) for stuff to come to the front. But the human is (or should be) the final judge of what is wrong or right, and of good quality versus high quality. An element of taste is necessary here too.