I've learned that "a month in the lab saves an hour in the library" can usually be distilled to "a shallow understanding produces complex solutions; a deeper understanding is usually required to create simple solutions."
While the original example of not understanding JOIN might just be a lack of general knowledge, the later steps are great examples of this, especially if someone else comes along and is told to fix the error.
Making something execute slow code in parallel is pretty easy to do generically. It doesn't require understanding much about the slow code, and it's fairly low risk: you probably won't have to tweak tests, and there won't be additional side effects. The major risk is around error handling, and it's easy to turn a blind eye to partial success/failure and leave that as a problem for a future team. You can confidently build the parallel for loop, call the task done, and move on.
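As a sketch of what that generic wrapper tends to look like (assuming the slow code is a per-item function, here a hypothetical process(), and using Python's standard concurrent.futures):

    # Generic "make it parallel" fix: treats the slow code as a black box.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def process(item):
        ...  # the slow code; we don't need to understand it

    def run_all(items, workers=8):
        results, failures = [], []
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(process, item): item for item in items}
            for future in as_completed(futures):
                try:
                    results.append(future.result())
                except Exception as exc:
                    # The part that's easy to wave away: some items
                    # succeeded, some didn't. What now?
                    failures.append((futures[future], exc))
        return results, failures

The partial success/failure question lives entirely in that except branch, which is exactly where it tends to get deferred.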
Striving for a deeper understanding requires a lot more effort and carries a lot more risk. Rewriting the slow code means all side effects must be accounted for. Tests might have to be rewritten. The new implementation might be slower. The new index might confuse the query planner and somehow make unrelated queries slower. It's not just a matter of investing time; it's investing energy and focus and taking on risk. But the result will have comparatively fewer failure modes, be cheaper to operate, and be less likely to have security implications.
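For the JOIN example, the deeper fix might look something like this. A minimal sketch with hypothetical table and column names, using Python's built-in sqlite3:

    # Deeper fix for the JOIN example: replace an N+1 query loop with
    # one aggregate JOIN. Tables/columns here are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
    conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")

    # Shallow version: one query per user. Slow, but trivially
    # parallelizable without understanding it.
    def totals_n_plus_1():
        return {
            uid: conn.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE user_id = ?",
                (uid,),
            ).fetchone()[0]
            for (uid,) in conn.execute("SELECT id FROM users")
        }

    # Deeper version: one round trip, letting the database do the work.
    def totals_joined():
        return dict(conn.execute(
            "SELECT u.id, COALESCE(SUM(o.amount), 0)"
            " FROM users u LEFT JOIN orders o ON o.user_id = u.id"
            " GROUP BY u.id"
        ))

    # And the kind of index change warned about above: it helps this
    # query, but the planner may now reach for it elsewhere too.
    conn.execute("CREATE INDEX idx_orders_user_id ON orders (user_id)")

The second version is the one that requires actually understanding the data model, which is the whole point.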
I've been in both spots, and while I wish I could say we always went with the deeper understanding, that wouldn't be an honest statement. But the framing has been really helpful, especially as I work with other execs in the company to prioritize our limited resources.
Reminds me of Blaise Pascal, who apologized to a correspondent with something like "I apologize for the long letter; I didn't have time to write a shorter one."