Believe it or not, this piece was rescued from being scrapped by my father, who had a keen interest in horology and was a professional electronics engineer. This clock was expertly restored on our family dining table at home - including the build of a new solid-state power supply for it. I think the museum still powers up this clock for display, and you can watch the large neon decade dividers doing their thing.
EDIT: appears to have been removed from display according to the linked archive page.
and that's fine in some sense if you're honest about what you're doing.
I have at least one guitar that I rarely play but keep because I consider it a work of art and a collectible. But I have others that are workhorses, and I play them daily.
It gets awkward when collecting is presented as a way to be a better musician, which is clearly false.
It percolates into reviews, too. When a nontrivial fraction of the community is buying dreams and collecting rather than using, some reviewers style their content for that crowd and overlook issues or benefits that only pop up when actually using the gear.
I don't have a problem with collecting, but I'd love for the distinction to be more upfront.
On that note, I absolutely love Matt Johnson's (Jamiroquai) YouTube channel, because you can tell he likes gear but spends a huge amount of time actually playing it and making his own patches. So much of the review market is just GAS-inducing paid promo stuff.
It's kind of easy to detect though. I usually read three or four paragraphs before I realize that the person reviewing doesn't actually make music and doesn't consider the music-making parts of the hardware, and instead focuses on very generic stuff that the manufacturer basically handed to them and said "make sure this is included".
I think that’s actually pretty different. In this case the tines are being electronically “plucked” - in the kalimba + pickups case you have to do all the plucking yourself.
We had something like this and had to scale out for higher throughput. Just tens of thousands of requests per second required 100+ nodes, simply because each query involved an expensive scatter and gather.
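For anyone unfamiliar with the pattern: a scatter-gather query fans out to every shard and can't return until the slowest shard answers, so every single request costs cluster-wide work. A minimal sketch of the shape (the shard names and per-shard query are hypothetical stand-ins, not the system described above), using asyncio:

```python
import asyncio

# Hypothetical shard set; in a real system each entry would be a network endpoint.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

async def query_shard(shard: str, term: str) -> list[str]:
    """Stand-in for a per-shard search; imagine an RPC round trip here."""
    await asyncio.sleep(0)  # yield to the event loop, as a real call would
    return [f"{shard}:{term}"]

async def scatter_gather(term: str) -> list[str]:
    # Scatter: every query hits every shard concurrently...
    partials = await asyncio.gather(*(query_shard(s, term) for s in SHARDS))
    # ...gather: merge partial results; latency is set by the slowest shard.
    return [hit for partial in partials for hit in partial]

print(asyncio.run(scatter_gather("foo")))
```

The cost the comment describes falls out of the fan-out: adding nodes adds capacity, but every request still touches all of them, so per-request work grows with the cluster.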
The only time I had this other than changing to SSD was when I got my first multi-core system, a Q6600 (confusingly labeled a Core 2 Quad). Had a great time with that machine.
"Core" was/is like "PowerPC" or "Ryzen", just a name. Intel Core i9, for instance, as opposed to Intel Pentium D: both x86-64, different chip features.
Gemini and its tooling are absolute shit. The LLM itself is barely usable and needs so much supervision you might as well do the work yourself. Couple that with an awful CLI and VS Code interface and you'll find it's just a complete waste of time.
Compared to the Anthropic offering, it's night and day. Claude gets on with the job and makes me way more productive.
It's probably a mix of what you're working on and how you're using the tool. If you can't get it done for free or cheaply, it makes sense to pay. I first design the architecture in my mind, then use Grok 4 fast (free) for single-shot generation of main files. This forces me to think first, and read the generated code to double-check. Then, the CLI is mostly for editing, clerical work, testing, etc. That said, I do try to avoid coding altogether if the CLI + MCP servers + MD files can solve the problem.
Which model were you using? In my experience Gemini 2.5 Pro is just as good as Claude Sonnet 4 and 4.5. It's literally what I use as a fallback to wrap something up if I hit the 5 hour limit on Claude and want to just push past some incomplete work.
I'm just going to throw this out there. I get good results from a truly trash model like gpt-oss-20b (quantized to 4 bits). The reason I can use this model at all is that I know my shit and have spent time learning how much instruction each model I use needs.
Would be curious what you're actually having issues with if you're willing to share.
I share the same opinion on the Gemini CLI. Other than for the simplest tasks it is just not usable: it gets stuck in loops, ignores instructions, fails to edit files. Plus it has plenty of bugs in the CLI itself that you occasionally hit.
I wish I could use it rather than pay for an extra subscription for Claude Code, but it is just in a different league (at least as of a couple of weeks ago).
Which model are you using though? When I run out of Gemini 2.5 Pro and it falls back to the Flash model, the Flash model is absolute trash for sure. I have to prompt it like I do local models. Gemini 2.5 Pro has shown me good results though. Nothing like "ignores instructions" has really occurred for me with the Pro model.
That's weird. I can prompt 2.5 Pro and Claude Sonnet 4.5 the same way for most TypeScript problems and they end up performing about the same. I get different results with Terraform though; I think Gemini 2.5 Pro does better on some Google Cloud stuff, but only on the specifics.
It's just strange to me that my experience seems to be the polar opposite of yours.
I don't know. The last problem I tried was a complex one -- migration of some scientific code from CPU to GPU. Gemini was useless there, but Claude proposed realistic solutions and was able to explore and test those.
The type of stuff I tend to do is much more complex than a simple website. I really can't rely on AI as heavily for stuff that I really enjoy tinkering with. There's just not enough data for them to train on to truly solve distributed system problems.
I feel like even Amazon/AWS wouldn't be that dim, they surely have professionals who know how to build somewhat resilient distributed systems when DNS is involved :)
I doubt a circular dependency is the cause here (probably something even more basic). That being said, I could absolutely see how a circular dependency could accidentally creep in, especially as systems evolve over time.
Systems often start with minimal dependencies, and then you add a dependency on X for a limited use case as a convenience. Since it's already being used, it gets added to other use cases too, until you eventually find out that it's a critical dependency.
Currently inside is an i7-9600, which I limit to 3.6 GHz, and a cheap 1050 Ti.
The CPU is technically over the TDP limit of the case, but with the frequency limit in place I never exceed about 70 °C, and given my workloads I'm rarely maxing the CPU anyway.
There is zero noise under any load. There are no moving parts inside the case at all: no spinning HDD, no PSU fan, no CPU fan, no GPU fan.
It's an HP OEM (because I moved countries during the pandemic and getting parts where I settled was ridiculously more expensive).
The CPU is cooled by an AIO (and the radiator fans are loud). The GPU has very loud fans too, but is not an AIO.
It's four years old at this point and I might just build something else rather than try to retrofit this one to sanity (which I doubt is possible without dumping the GPU anyway).
I bought my current gaming desktop off a friend when I was looking for an upgrade, as he didn't need it anymore. It had an AIO cooler. The pump made so much noise, and it seemed like I had to fiddle with fan profiles forever to get it to cool sanely. I swapped it for a $30 CoolerMaster Hyper 212 and a Noctua case fan. It cools well enough for the CPU to stay above stock speeds pretty much all the time and is much quieter than the AIO cooler was. I'm not suggesting this CPU cooler is the best one out there, just pointing out that it's not like one needs to spend $100+ on a cooler to get pretty good performance.
The GPU still gets kind of loud during intense graphics gaming sessions but when I'm not gaming the GPU fans often aren't even spinning.
Honestly at this point it's not so much about money as it is about whether or not this particular case/setup/components combo is salvageable with minimal effort.
The CPU fan is rarely an issue (it mostly just goes bananas when IntelliJ gets its business on with Gradle on a new project XD).
The GPU is the main culprit and I'm not sure there's any solution there that doesn't involve just replacing it.
Just last week I moved from using a Noctua NH-U12S to cool my 5950X, to an
ARCTIC Liquid Freezer III Pro 360 AIO liquid cooler (my first time using liquid cooling), and while I expected the difference to be big, I didn't realize how big.
Now my CPU usually idles at ~35 °C, which is just 5 degrees above the ambient temperature (because of summer...), and hardly ever goes above 70 °C even under load, while staying super quiet. I realize now I should have done the upgrade years ago.
Now if I could only get water cooling for the radiator/GPU I'm using. Unfortunately no water blocks available for it (yet) but can't wait to change that too, should have a huge impact as well.
https://www.rmg.co.uk/collections/objects/rmgc-object-79394