Okay I think this somewhat answers my question. Is this individual a solo developer? “Triaging GitHub issues” sounds a bit like open source solo developer.
Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.
I remain unconvinced that agentic AI scales beyond solo development, where the individual is liable for the output of the agents. More precisely, I can use agentic AI to write my code, but at the end of the day when I submit it to my org it’s my responsibility to understand it, and guarantee (according to my personal expertise) its security and reliability.
Conversely, I would fire (read: reprimand) someone so fast if I found out they submitted code that created a vulnerability that they would have reasonably caught if they weren’t being reckless with code submission speed, LLM or not.
AI will not revolutionize SWE until it revolutionizes our processes. It will definitely speed us up (I have definitely become faster), but faster != revolution.
> Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.
They probably aren't, really. At least in the orgs I worked at, writing the code wasn't usually the bottleneck. In retrospect, it was 'context' engineering: waiting for a decision to get made, making some change and finding it breaks an assumption that was being made elsewhere but wasn't in the ticket, waiting for other stakeholders to insert their piece of the context, waiting for $VENDOR to reply about why their service is/isn't doing X anymore, discovering that $VENDOR_A's stage environment (which your stage environment tests against for the integration) does $Z when $VENDOR_B_C_D don't, etc.
The ecosystem as a whole has to shift for this to work.
The author of the blog made his name and fortune founding Hashicorp, makers of Vagrant and Terraform among other things. Having done all that in his twenties, he retired as CTO and, after a short hiatus, reappeared with a new open source terminal, Ghostty.
Different folks are interested in different niches. I don't know this author either. I would know many names from other subfields, though.
I once went to a meetup where the host introduced the speaker with "he needs no introduction". Well to this day I've no idea who the speaker was. Familiarity really shouldn't be assumed beyond a very, very small handful of people.
I generally don’t pay attention to names unless it’s someone like Torvalds, Stroustrup, or Guido. Maybe this guy needs another decade of notoriety or something.
Curious, do you think his name should be as well known as Torvalds, Stroustrup, and Guido, who combined have ~120 years of serious contribution to the way that we write software, and continue to influence?
Because that’s the implication that I’m getting from downvotes + this reply.
Sure, Terraform is huge, no doubt, but it’s no Linux, C++, or Python, yet. Correct me if I’m wrong, but I assume that, since they’re no longer involved with Hashicorp, they’re no longer contributing to Terraform?