bzr lost because it was poorly architected, infuriatingly slow, and kept changing its repository storage format trying (and failing) to narrow the performance gap with Mercurial and Git. Or at least that's why I gave up on it, well before GitHub really took off and crushed the remaining competition.
For my own sanity I began avoiding Canonical's software years ago, but to me they always built stuff with shiny UIs that demoed well, while performance seemed a distant afterthought. Their software aspirations always seemed much larger than the engineering resources/chops/time they were willing to invest.
Sure, that's a fair point as well (though bzr grew out of GNU arch, which didn't originate at Canonical; Canonical is where it was finally redesigned into something good). That's not a knock on arch either: it was simply early.
The question then becomes: why not Mercurial, which still had a better UX than git itself?
My point is that git won because of GitHub, despite lots of suckiness that remains to this day (it obviously has good things as well).
I don't think it's a hardware problem. After a few days/weeks of uptime, coreaudiod sometimes gets into a state where, regardless of output volume, it glitches and pops periodically until you force-quit it (sudo killall coreaudiod; launchd relaunches it). It's like the daemon's internal state degrades until it's just barely able to feed audio buffers to the hardware fast enough, at which point all bets are off as to whether other system load will tip it over into glitching.
Things seem worse if you have multiple audio devices and switch between them. I filed a Radar on this (or a very similar bug) many OS releases ago, and it's still open with no feedback. But "restart your computer" also solves the problem, whether or not you follow the rest of the placebo troubleshooting nonsense.
Yeah. To me it looks like macOS goes so deeply into sleep that it disconnects the external display. On wake, the system rediscovers the external display and resizes the desktop across both displays. With a bunch of apps/windows open, half your apps simultaneously resizing all their windows can peg every CPU core for several seconds.
(It's still way faster than the same set of apps on an Intel Mac laptop, where it could sometimes take on the order of 30 seconds to get to a usable desktop after a long sleep. On Intel Macs it seemed more obvious that the GPU was the bottleneck.)
The original Rosetta was also a licensed technology (Transitive's QuickTransit) that presumably cost Apple a pretty penny. If Rosetta 2 is all in-house tech, that probably bodes well for it sticking around longer than the original did.
Is the reality of the 4K era that if you want dual monitors and performance, you have to go back to 1080p monitors? Looks like I'll be holding onto my $100 Dell screens for a while yet.
It already does, in the T2. It seems to me like future T* chips will run more and more of the system, leaving the x86 to become something like the MBP's discrete GPU option. And then, like discrete GPUs today, the x86 CPU is eventually available only in top-end models.
If TM detects an I/O error while writing to a networked backup volume, it aborts the backup and forces a filesystem check on the remote disk image at the start of the next backup. That check fails, because the filesystem really is corrupt and has been for some time. You had no idea, of course, because TM only surfaces four-horsemen-level errors in the UI and silently eats the rest. You even had a good run of "successful" backups afterwards, so long as you managed to avoid changing any files in directories with damaged metadata in your backup.
Anyway, as your digital life flashes before your eyes, TM helpfully offers to delete all your backups and start with a fresh disk image now, or delete all your backups and start with a fresh disk image later. And you can't blame it, really, because the filesystem has had some unknown, nonzero amount of corruption for an unknown, nonzero amount of time. For all anyone knows or can prove, the whole backup is random garbage.
This fractal of failure is specific to backups over WiFi. Backups to USB-connected disks aren't that reliable either, because HFS+, but USB backups have all the nines of reliability compared to network backups.
If there's an example of getting great game performance with a GC language, Unity isn't it. Lots of Unity games get stuttery, and even when they don't, they seem to use a lot of RAM relative to their complexity. Kerbal Space Program even mentioned in its release notes at one point that a new garbage collector was helping with frame-rate stuttering.
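(For the curious, here's a minimal sketch of why a GC bites in a game loop, and the standard workaround. This isn't Unity's or KSP's actual code; Unity uses C#, but the pattern is the same in any GC language, Java here, and all the names are made up for illustration.)

    // Allocating in the per-frame hot path creates garbage on every frame;
    // when the collector eventually runs, the pause lands mid-frame and
    // shows up as a hitch. Games work around this by reusing preallocated
    // objects ("pooling") so the steady state allocates nothing.
    final class Vec2 {
        float x, y;
        Vec2 set(float x, float y) { this.x = x; this.y = y; return this; }
    }

    final class Particle {
        final Vec2 pos = new Vec2();
        final Vec2 vel = new Vec2(); // preallocated once, reused every frame

        // Naive: a fresh Vec2 per particle per frame. At 60fps with
        // thousands of particles, that's megabytes of garbage per second.
        void updateNaive(float dt) {
            Vec2 v = new Vec2().set(1.0f, -9.8f * dt);
            pos.set(pos.x + v.x * dt, pos.y + v.y * dt);
        }

        // Pooled: same math, zero steady-state allocation, so the
        // collector has nothing to do and no reason to pause the frame.
        void updatePooled(float dt) {
            vel.set(1.0f, -9.8f * dt);
            pos.set(pos.x + vel.x * dt, pos.y + vel.y * dt);
        }
    }

    public class GcStutterSketch {
        public static void main(String[] args) {
            Particle p = new Particle();
            for (int frame = 0; frame < 100_000; frame++) {
                p.updateNaive(1.0f / 60.0f);    // watch the allocation rate in a profiler,
                // p.updatePooled(1.0f / 60.0f); // then compare with this version
            }
            System.out.println("pos = " + p.pos.x + ", " + p.pos.y);
        }
    }

(If I had to guess, the KSP note was about Unity's incremental GC mode, which spreads that collection work across frames rather than taking one big pause.)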
I started up KSP just now, and it was at 5.57GB before I even got to the main menu. To be fair, I hadn't launched it recently, so it was installing its updates or whatever. Ok, I launched it again, and at the main menu it's sitting on 5.46GB. (This is on a Mac.) At Mission Control, I'm not even playing the game yet, and the process is using 6.3GB.
I think a better takeaway is that you can get away with GC even in games now, because it sucks and is inefficient but it's ... good enough. We're all conditioned to put up with inefficient software everywhere, so it doesn't even hurt that much anymore when it totally sucks.