I bought a launch LG UltraFine 5K that was in the batch of defective units but I was too lazy to return it. Somehow, it's held up just fine a decade later; only color bleeding is an issue.
> Shit is getting bad out in the actual software economy. Cash registers that have to be rebooted twice a day. Inventory systems that randomly drop orders. Claims forms filled with clearly “AI”-sourced half-finished localisation strings. That’s just what I’ve heard from people around me this week. I see more and more every day.
What? You can't argue "if the software is shitty, it must be vibe coded." People have been writing shit software with low standards since before 2023.
I didn't realize that; I haven't thought about ASCII/ANSI art since the 90s, but the concept of using it for subcharacter animation is clever. Cheers.
[edit] Odd question. I have relatives in the Bay Area who I think spelled their name Wolfe. Their patriarch was named Eliot and survived Auschwitz. Any relation?
I just wanted to chime in and thank you for sharing your prompts like that!
It feels like which prompts people are using (even from developers on the same team) is often opaque. It's a great learning resource for people to see under the hood of each other's AI coding workflows, and I hope to see more folks doing this.
Those statements are mostly out of date and symptomatic of pre-agent-optimized LLMs. Opus 4.5 with clarifying rules in the CLAUDE.md does a good job at following idiomatic best practices in my experience.
That said, I'm mixed on agentic performance for data science work, but it does a good job if you clearly give it the information it needs to solve the problem (e.g., for SQL, the table schema and example data).
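As a minimal sketch of what "give it the information it needs" can look like in practice (table name, columns, and sample rows here are all hypothetical, not from any real project): pack the schema and a few example rows directly into the prompt so the model doesn't have to guess at column names or formats.

```python
# Hypothetical schema and data for illustration only.
SCHEMA = """CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    customer   TEXT,
    total_usd  REAL,
    placed_at  TEXT  -- ISO 8601 timestamp
);"""

EXAMPLE_ROWS = """order_id | customer | total_usd | placed_at
1        | acme     | 19.99     | 2024-01-03T10:15:00
2        | globex   | 250.00    | 2024-01-04T16:40:00"""

def build_sql_prompt(question: str) -> str:
    """Assemble a text-to-SQL prompt grounded in real schema and sample data."""
    return (
        "You are writing SQLite queries.\n\n"
        f"Schema:\n{SCHEMA}\n\n"
        f"Example rows:\n{EXAMPLE_ROWS}\n\n"
        f"Question: {question}\n"
        "Return only the SQL query."
    )

print(build_sql_prompt("What is the total revenue per customer?"))
```

The point is just that the example rows disambiguate things the schema alone can't, like timestamp format or customer-name casing.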
Not my experience. Every frontier model I regularly test, agentic or not, produces code less maintainable than what my (very good) peers write, or what I write (on a decent day).
Plus they continue to introduce performance blunders.
Crying wolf: one day maybe there will be a wolf, and I may be the last of us to check whether that's true.