Hacker News | minimaxir's comments

I bought a launch LG UltraFine 5K that was in the batch of defective units, but I was too lazy to return it. Somehow, it's held up just fine a decade later; color bleeding is the only issue.

I think the main problem with the LG is if you charge your laptop from it. Doing that heats up the connector and eventually pulls it loose from the main board.

> Shit is getting bad out in the actual software economy. Cash registers that have to be rebooted twice a day. Inventory systems that randomly drop orders. Claims forms filled with clearly “AI”-sourced half-finished localisation strings. That’s just what I’ve heard from people around me this week. I see more and more every day.

What? You can't argue "if the software is shitty, it must be vibe coded." People have been writing shit software with low standards since before 2023.


The contrast between the ANSI colors and some terminals' background colors makes the colors hard to see.
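To eyeball which of the 16 basic ANSI colors are legible against a given terminal's background, a quick sketch (standard SGR escape codes, not tied to any particular terminal):

```python
# Print the 16 basic ANSI foreground colors so you can see which ones
# are hard to read against your terminal's background color.
def ansi_fg(i):
    """SGR escape for basic color i: 0-7 are normal, 8-15 are bright."""
    code = 30 + i if i < 8 else 90 + (i - 8)
    return f"\033[{code}m"

RESET = "\033[0m"

for i in range(16):
    print(f"{ansi_fg(i)}color {i:2d}{RESET}")
```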

Incidentally I was thinking about adding some automated physics events so it could be viewed passively.

Likely not a computationally efficient screensaver, though.
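One way to sketch those automated physics events (hypothetical names, not the project's actual API): a passive-mode event that nudges each ball with a random impulse every so often.

```python
import random

# Hypothetical sketch: a "physics event" for passive viewing that applies
# a random velocity impulse to every ball in the simulation.
def random_impulse_event(balls, max_dv=2.0, rng=None):
    """Add a uniform random impulse in [-max_dv, max_dv] to each ball."""
    rng = rng or random.Random()
    for ball in balls:
        ball["vx"] += rng.uniform(-max_dv, max_dv)
        ball["vy"] += rng.uniform(-max_dv, max_dv)

# Example: two balls, seeded RNG so the nudge is reproducible.
balls = [{"vx": 0.0, "vy": 0.0}, {"vx": 1.0, "vy": -1.0}]
random_impulse_event(balls, max_dv=2.0, rng=random.Random(42))
```

A scheduler in the render loop could fire this every few seconds so the simulation keeps moving without input.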


The same reason many big corps open source their tech: goodwill/recruiting.

xAI likely needs both more than usual nowadays.


The Braille trick has been used for ASCII art for a while. In this case, I was more interested in it for subcharacter rendering of balls.
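The trick works because the Unicode Braille Patterns block (starting at U+2800) encodes each of the 8 dots as one bit of the codepoint offset, giving a 2×4 subpixel grid per terminal cell. A minimal sketch:

```python
# Render a 2x4 subpixel grid in one terminal cell via Braille patterns.
# Each dot maps to a bit of the offset from U+2800, per the Unicode
# Braille Patterns block: dots 1-3 and 7 are the left column (rows 0-3),
# dots 4-6 and 8 the right column.
DOT_BITS = {
    (0, 0): 0x01, (0, 1): 0x02, (0, 2): 0x04, (0, 3): 0x40,
    (1, 0): 0x08, (1, 1): 0x10, (1, 2): 0x20, (1, 3): 0x80,
}

def braille_cell(pixels):
    """pixels: set of (col, row), col in 0..1, row in 0..3."""
    mask = 0
    for p in pixels:
        mask |= DOT_BITS[p]
    return chr(0x2800 + mask)

# A small "ball" filling the top 2x2 subpixels of one cell:
print(braille_cell({(0, 0), (1, 0), (0, 1), (1, 1)}))
```

Drawing a ball then reduces to rasterizing its circle onto a subpixel grid and emitting one Braille character per 2×4 block.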

I didn't realize that; I haven't thought about ASCII/ANSI art since the '90s, but the concept of using it for subcharacter animation is clever. Cheers.

[edit] Odd question. I have relatives in the Bay Area who I think spelled their name Wolfe. Their patriarch was named Eliot and survived Auschwitz. Any relation?


I'm from the East Coast.

This is one outcome of their latest marketing campaign.

The opposite variant https://yesai.duckduckgo.com/ strongly advertises AI answers.


The prompts turned out significantly better this time!

I just wanted to chime in and thank you for sharing your prompts like that!

The prompts people are actually using (even among developers on the same team) often feel opaque. Seeing under the hood of each other's AI coding workflows is a great learning resource, and I hope to see more folks doing this.

(Link for anyone who wants to check them out): https://github.com/minimaxir/ballin/blob/main/PROMPTS.md


Those statements are mostly out of date and symptomatic of pre-agent-optimized LLMs. Opus 4.5 with clarifying rules in the CLAUDE.md does a good job at following idiomatic best practices in my experience.

That said, I'm mixed on agentic performance for data science work, but it does a good job if you clearly give it the information it needs to solve the problem (e.g., for SQL, the table schema and example data).
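As a sketch of that approach (hypothetical helper, not any particular library's API): bundle the schema and a few sample rows into the prompt so the model isn't guessing at column names or value formats.

```python
# Hypothetical sketch: build an LLM prompt for a SQL question that
# includes the table schema (DDL) and a few example rows, so the model
# has the context it needs rather than guessing at the data shape.
def build_sql_prompt(question, schema_ddl, sample_rows):
    rows = "\n".join(str(r) for r in sample_rows)
    return (
        f"Table schema:\n{schema_ddl}\n\n"
        f"Example rows:\n{rows}\n\n"
        f"Write a SQL query to answer: {question}"
    )

prompt = build_sql_prompt(
    "How many users signed up in 2024?",
    "CREATE TABLE users (id INTEGER, signup_date DATE);",
    [(1, "2024-01-05"), (2, "2023-11-30")],
)
print(prompt)
```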


Not my experience. All frontier models I constantly test, agentic or not, produce code less maintainable than my (very good) peers and myself (on a decent day).

Plus they continue to introduce performance blunders.

They keep crying wolf; one day maybe there will be a wolf, and I may be the last of us to check whether that's true.


It's more noise than signal: it's disorganized, and it's hard to glean value from it (speaking from experience).
