Hacker News | new | past | comments | ask | show | jobs | submit | koverstreet's comments

I towed heavily loaded trailers - stuffed with books, tools, and furniture, loaded to the roof to the point I couldn't get up steep San Francisco hills - to and from Alaska, and across the entire United States.

With an Impreza.

That included highways in the Yukon that were more river rock than gravel, backwoods of Montana and Wyoming, you name it.

It was totally fine. Especially in a Subaru, with AWD and a low, well-centered center of gravity. I'd do it again.


More fun, you mean.

Couple years ago I was driving through Arizona during a massive blizzard. Everyone's doing 15, and I'm doing 50 - taking things slow and careful because of the traffic.

I had people in vests standing out in the road waving at me trying to get me to slow down! And I'm going "What in the hell are you doing out in the road!? Don't you know this is a blizzard!"

I grew up in Alaska, we laugh at the snow :)


Where does the higher theoretical kWh/kg come from? That's big news.


Sorry I had that backwards.


OSS is your legacy!

If you write proprietary code, everything you do dies with that company. I certainly don't want my life's work locked away like that. Working on OSS means a better chance to put the engineering first and do something that will last.

I did my few years in Silicon Valley too, and when it came time to decide between money and code, I chose the code. Haven't regretted a thing.


I hear ya. Thanks for the reply. I'm glad you chose OSS and I fully share your views as expressed here.

I think helping make OSS a thing at all, especially in the very early days when my employer was seen as the poster child for its failure, will be the closest thing I have to a legacy. And I got to travel the world teaching about and evangelizing the open source process, tooling, and ethos which was great fun. I even got to play in the big leagues for a while, at the height of our consumer successes, and those years helped solidify some important industry standards that will certainly live on for a while.

I'm happy with my contributions, and happy with the comfortable life I achieved all while having a good time doing it. I'm also very happy that I got out a couple years ago before this latest wave of destruction.


OSS is even more important today. The days of the Unix vendors, early Google, when we had tech companies that were engineer focused - those days are gone. It's MBAs running the show, and that's how we get enshittification.

There is no set future to what kind of technology we will build and end up with. We can build something where everything is locked away, and poor stewardship and maintenance mean everything gets jankier and less reliable - or we can build something like the Culture novels, with technology that effectively never fails: generations of advancement building on the previous, ever-improving debuggability, redundancy, failsafes, and hardening, making things more modular and cleaner along the way.

I know which world I'd rather live in, and big tech ain't gonna make it happen. I've seen the way they write code.

So if some people see my career as giving a middle finger to those guys, I'm cool with that :)


Speaking for myself, Valve has been great to work with - chill, and they bring real technical focus. It's still engineers running the show there, and they're good at what they do. A real breath of fresh air from much of the tech world.


What sort of stuff did you work on with them, if you don't mind me asking?


bcachefs's btree still beats the pants off of the entire rocksdb lineage :)


Aren't B-trees and LSM-trees fundamentally different tradeoffs? B-trees will always win in some read-biased workloads, and LSM-trees in other write-biased workloads (with B epsilon (Bε) trees somewhere in the middle).


For on disk data structures, yes.

LSM-trees do really badly at multithreaded update workloads, and compaction overhead is really problematic when there isn't much update locality.

On the other hand, having most of your index be constant lets you use better data structures. Binary search is really bad.
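One example of a "better data structure" that a constant index makes possible is an Eytzinger (BFS-order) array layout, which lets a lookup descend an implicit tree instead of doing a cache-hostile binary search. This is only a toy Python sketch of the idea (the payoff comes from the branchless, prefetch-friendly C version, not from Python):

```python
def to_eytzinger(sorted_keys):
    """Lay a sorted array out in BFS order: root at index 1,
    children of node i at 2i and 2i+1. Descending this implicit
    tree touches successive cache lines far more predictably than
    binary search over the sorted array."""
    n = len(sorted_keys)
    out = [None] * (n + 1)
    it = iter(sorted_keys)

    def fill(i):
        if i <= n:
            fill(2 * i)          # left subtree holds the smaller keys
            out[i] = next(it)
            fill(2 * i + 1)      # right subtree holds the larger keys

    fill(1)
    return out

def eytzinger_lower_bound(e, x):
    """Smallest key >= x, or None if x is past the end. In C the
    loop body compiles down to a couple of branchless instructions."""
    n, i = len(e) - 1, 1
    while i <= n:
        i = 2 * i + (e[i] < x)             # go right if the key is too small
    i >>= (~i & (i + 1)).bit_length()      # undo the trailing right-turns
    return e[i] if i else None

keys = list(range(0, 100, 10))
e = to_eytzinger(keys)
print(eytzinger_lower_bound(e, 35))   # -> 40
```

The structure is read-only after construction, which is exactly the property a mostly-constant index gives you: you pay a one-time layout cost for much cheaper lookups afterward.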

For pure in memory indexes, according to the numbers I've seen it's actually really hard to beat a pure (heavily optimized) b-tree; for in-memory you use a much smaller node size than on disk (I've seen 64 bytes, I'd try 256 if I was writing one).

For on-disk, you need to use a bigger node size, and then binary search becomes a problem. The 4k-8k nodes still in common use are much too small; you can do a lockless or mostly-lockless in-memory b-tree, but not a persistent one, so locking overhead and cache lookups all become painful for persistent b-trees at smaller node sizes, not to mention access time on a cache miss.

So the reason bcachefs's (and bcache's) btree is so fast is that we use much bigger nodes, and it's actually a hybrid compacting data structure. We get the benefits of LSM-trees (better data structures to avoid binary search for most of a lookup) without the downsides, and having the individual nodes be (small, simple) compacting data structures is what makes big btree nodes practical - it's what keeps locking overhead and access time on node traversal under control.
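As an illustration only (a toy model, not bcachefs's actual code): a node that is itself a small compacting structure keeps several sorted runs, takes inserts into a small mutable run, and merges runs when there are too many - LSM-style compaction, but bounded by the node:

```python
from bisect import bisect_left, insort
from heapq import merge

class CompactingNode:
    """Toy model of a btree node made of several sorted runs.
    Inserts only touch the small mutable run, so most of the node
    stays constant between compactions; lookups binary-search each
    run, newest first."""

    def __init__(self, run_size=4, max_runs=3):
        self.run_size = run_size
        self.max_runs = max_runs
        self.frozen = []      # sorted, immutable runs
        self.mutable = []     # small sorted run taking new inserts

    def insert(self, key):
        insort(self.mutable, key)
        if len(self.mutable) >= self.run_size:
            self.frozen.append(self.mutable)
            self.mutable = []
            if len(self.frozen) > self.max_runs:
                self.compact()

    def compact(self):
        """Merge all frozen runs into one - the node-local analogue
        of LSM compaction, but its cost is bounded by the node size."""
        self.frozen = [list(merge(*self.frozen))]

    def __contains__(self, key):
        for run in [self.mutable] + self.frozen[::-1]:
            i = bisect_left(run, key)
            if i < len(run) and run[i] == key:
                return True
        return False

node = CompactingNode()
for k in [5, 1, 9, 3, 7, 2, 8, 4]:
    node.insert(k)
print(7 in node, 6 in node)   # -> True False
```

The point of the model is the cost structure: writes are cheap because they never rewrite the frozen runs, and the constant runs are what make layouts like the one above usable for most of the lookup.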

B-epsilon btrees are dumb, that's just taking the downsides of both - updating interior nodes in fastpaths kills multithreaded performance.


Rocksdb / myrocks is heavily used by Meta at extremely massive scale. For sake of comparison, what's the largest real-world production deployment of bcachefs?


We're talking about database performance here, not deployment numbers. And personally, I don't much care what Meta does, they're not pushing the envelope on reliability anywhere that I know of.


Many other companies besides Meta use RocksDB; they're just the largest.

Production adoption at scale is always relevant as a measure of stability, as well as a reflection of whether a solution is applicable to general-purpose workloads.

There's more to the story than just raw performance anyway; for example Meta's migration to MyRocks was motivated by superior compression compared to other alternatives.


I haven't looked at fs-verity at all, but per-file merkle trees should definitely be built into the filesystem as a standard thing. Rsync and file transfer programs want it, and having to compute that every time is crap - if it's built into the filesystem it can easily be computed lazily and invalidated if need be.
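A sketch of the idea in userspace (hypothetical names, and not fs-verity's actual on-disk format): hash fixed-size blocks into a binary merkle tree, and key the cached root by the file's size and mtime so a write invalidates it - the lazy-compute-and-invalidate behavior a filesystem could provide natively:

```python
import hashlib
import os

BLOCK_SIZE = 4096

def merkle_root(data, block_size=BLOCK_SIZE):
    """Binary merkle tree over fixed-size blocks. An odd level
    duplicates its last hash (one of several common conventions)."""
    level = [hashlib.sha256(data[i:i + block_size]).digest()
             for i in range(0, max(len(data), 1), block_size)]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

_cache = {}  # (path, size, mtime_ns) -> root hash

def cached_root(path):
    """Lazily computed root, invalidated whenever the file's size
    or mtime changes, so unmodified files are never re-hashed."""
    st = os.stat(path)
    key = (path, st.st_size, st.st_mtime_ns)
    if key not in _cache:
        with open(path, "rb") as f:
            _cache[key] = merkle_root(f.read())
    return _cache[key]
```

With the tree maintained by the filesystem itself, a tool like rsync could compare roots (or subtree hashes, to find which blocks changed) without ever reading unmodified file data.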

(obligatory plug for people to jump in and get involved with writing code if they want to see more of this stuff happen)


The default should be the device's native blocksize, but some devices misreport it. You also lose performance if you use a larger blocksize than necessary.

If we can, I'd like to get a quirks list in place, but there have been higher priorities.
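For illustration, a quirks list can be as simple as a table consulted at mkfs time; the model string below is made up, not a real quirk entry:

```python
# Hypothetical quirks table: model strings of devices known to
# misreport their native block size, mapped to the correct value.
# The entry below is invented for illustration only.
BLOCKSIZE_QUIRKS = {
    "ExampleVendor FooDisk 1TB": 4096,   # reports 512, is really 4k
}

def pick_block_size(model, reported):
    """Use the device's reported native block size unless the
    device is on the quirks list."""
    return BLOCKSIZE_QUIRKS.get(model, reported)

print(pick_block_size("ExampleVendor FooDisk 1TB", 512))  # -> 4096
print(pick_block_size("SomeHonestDisk", 512))             # -> 512
```

Keeping a table like this in the userspace mkfs tool (rather than the kernel) is what makes sharing it across filesystems feasible.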


Do each of the other filesystems have their own quirks list? That seems suboptimal. Oh, I guess it's because it's in the user space mkfs tool of each, not the kernel.


ZFS is the only filesystem I know of with one, and theirs is pretty incomplete. It does need to be a shared project.


Still changing the on disk format as required, but we're at the point now where the end user impact should be negligible - and we aren't doing big changes.

Just after reconcile, I landed a patch series to automatically run recovery passes in the background if they (and all dependents) can be run online; this allows the 1.33 upgrade to run in the background.

And with DKMS, users no longer have to run old versions (forcing a downgrade) when they have to boot into an old kernel. That was a big support issue in the past: users would end up running old, unsupported versions because of other kernel bugs (amdgpu being the most common offender).


I'm here to talk filesystems and technical topics, not to take part in or stir up drama. There's been more than enough of that.

This is hacker news, not drama queen news :)


Also, the matter was discussed here in detail when it broke a couple months ago, so yeah, focusing on the technical merits is much more interesting IMO.


I was just trying to fill in a chap with a couple-sentence summary of what happened, since they asked, without trying to poke a hornet's nest.


> This is hacker news, not drama queen news

Same thing.


Nonetheless, I do hope that when development slows down and time has passed, you get it upstreamed again.

