Hacker News | ahepp's comments

You've done this? I would love to read more about it

It seems like this would be a really interesting field to research. Does AI assisted coding result in fewer bugs, or more bugs, vs an unassisted human?

I've been thinking about this as I do AoC with Copilot enabled. It's been nice for those "hmm, how do I do that in $LANGUAGE again?" moments, but it's also written some nice-looking snippets that don't do quite what I want. And many cases of "hmm... that would work, but it would read the entire file twice for no reason".
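To make the "reads the file twice for no reason" pattern concrete, here's a made-up AoC-style example (the task and function names are invented for illustration): both versions produce the same answer, but one makes two passes over the input where a single pass would do.

```python
from io import StringIO

def stats_two_pass(f):
    # The kind of thing an assistant might emit: read the whole stream
    # once to count lines, then seek back and read it all again to sum.
    lines = f.read().splitlines()
    f.seek(0)
    total = sum(int(line) for line in f.read().splitlines())
    return len(lines), total

def stats_one_pass(f):
    # Same result in a single pass over the input.
    count, total = 0, 0
    for line in f:
        count += 1
        total += int(line)
    return count, total

data = "1\n2\n3\n"
print(stats_two_pass(StringIO(data)))  # (3, 6)
print(stats_one_pass(StringIO(data)))  # (3, 6)
```

Harmless on a small puzzle input, but exactly the sort of quiet inefficiency that looks fine in review.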

My guess, however, is that it's a net gain for quality and productivity. Humans write bugs too, and there need to be processes in place to discover and remediate them regardless.


I'm not sure about research, but I've used LLMs for a few things here at Oxide with (what I hope is) appropriate judgment.

I'm currently trying out using Opus 4.5 to take care of a gnarly code reorganization that would take a human most of a week to do -- I spent a day writing a spec (by hand, with some editing advice from Claude Code), having it reviewed as a document for humans by humans, and feeding it into Opus 4.5 on some test cases. It seems to work well. The spec is, of course, in the form of an RFD, which I hope to make public soon.

I like to think of the spec as basically an extremely advanced sed script described in ~1000 English words.


Maybe it's not as necessary with a codebase as well-organized as Oxide's, but I found gemini 3 useful for a refactor of some completely test-free ML research code, recently. I got it to generate a test case which would exercise all the code subject to refactoring, got it to do the refactoring and verify that it leads to exactly the same state, then finally got it to randomize the test inputs and keep repeating the comparison.

These companies have trillions and they are not doing that research. Why?

I don't know. I guess the flip side applies too? Lots of people arguing either side, when it feels like it shouldn't be that difficult to provide some objective data.

Do we know whether the part was made out of spec, or whether the spec specified inappropriate materials?

It sounds like it was something like PLA when it was supposed to be ABS.

According to https://assets.publishing.service.gov.uk/media/69297a4e345e3...

> The aircraft owner [...] understood from the vendor that it was printed from CF-ABS (carbon fibre – acrylonitrile butadiene styrene) filament material, with a glass transition temperature of 105°C [...] he was satisfied the component was fit for use in this application when it was installed.

> [...] Two samples from the air induction elbow were subjected to testing, [...] The measured glass transition temperature for the first sample was 52.8°C, and 54.0°C for the second sample.

I've known 3D printing folks who run off a throwaway prototype in a cheap, easy-to-print material to check for fit before printing in more difficult, expensive materials. Easy to imagine a careless manufacturer getting the PLA prototype mixed in with the ABS production parts, and selling it by mistake.

Of course, the aviation industry usually steers clear of careless manufacturers....


You'd be very hard pressed to confuse PLA with carbon fiber reinforced ABS. The latter has a distinctive surface texture that's hard to confuse with PLA's.

That's... absurd. ABS is a terrible choice for anything in an engine bay - ABS breaks down over time when in contact with oils.

I've used PA6-CF for similar purposes in the past. Obviously not for aircraft, though.


Even in ABS I would not use something 3D printed on a consumer machine as a critical part of an airplane.

I don't know if it is the same in the UK as it is in the US, but the appeal of experimental aviation (every Cozy is experimental) is that there are no specs or requirements around parts like this.

If you want to slap 15 weed-whacker engines on a wing you made from styrofoam and call it an airplane, the FAA will not stop you.

I'm oversimplifying, a bit, but less than non-pilots might think.

In other words, the engine maker probably has some thoughts about how that piece should be made, but the FAA would have no problem with you installing it on an experimental.


My understanding of the UK CAA is that it isn’t as liberal as the US FAA when it comes to amateur-built experimental aircraft airworthiness. I would still be surprised if a 3d-printed intake manifold on a homebuilt passed an airworthiness inspection in the US without a number of detailed questions being answered to the satisfaction of the airworthiness representative.

The spec is fiberglass, which has better thermal resistance.

But not that much better compared to the better filaments out there. Fair chance it was printed out of PLA, ABS, or PETG; by the shade of the part it looks like it was CF-loaded filament.

A better choice would have been PEEK. But even then, I would have done a lot of on-the-ground testing before trusting my life to a part from the printer.


100% -- the original design for the Cozy is from the early 90s, before 3D printing became popular, and this part seems like a good candidate for 3D printing. It just seems like the maker chose the wrong materials and didn't test it adequately.

It's a candidate, but it's definitely not a done deal; temps under the engine covers of a plane can get surprisingly high (surprising because you'd think you have plenty of airflow half a meter behind a pocket hurricane). I'm not sure if high-temp filaments would be the solution here, but they'd be better candidates. It would need some very thorough testing before trusting your life (or in this case: someone else's) to that kind of solution.

My guess, given the sheen that looks like CF, would be PA-CF, which is the most appropriate and common CF-loaded filament.

And PA-CF is usually pretty good with temps, I have used it for parts on engines before with good results, but not in safety critical scenarios.


There’s a massive difference between the thermal properties of the materials you listed.

Yes, that's why I listed them. And even then: none of those first three are (safely) usable for this application. PEEK or ULTEM or something better than that.

I absolutely agree. At the same time, I’m just flabbergasted that someone really thought they’d pass off PLA crap for such a purpose, it literally loses shape in sunlight. PETG isn’t going to cut it and I wouldn’t want to be on a plane with PETG in a heat-sensitive part, but that would still have been less ridiculous given that a) it’s significantly better than PLA in this regard, and b) unlike PEI and PEEK, it can be printed with ease on just about any FDM.

Whoever sold that part deserves to be sued. I'm not from the 'sue happy' department, but this was extremely irresponsible, unless the swap for PLA was accidental (which can easily be checked: they must have sold more than just one of these). At a minimum they should recall each and every one of these they have made, and on top of that they should review all of the other parts that person has made for similar issues.

So when we say "they abandoned posix compatibility", are we saying "They abandoned the POSIX filesystem storage backend"? I believe that's true, I used to use minio on a FreeBSD server but after an update I had to switch to just passing in zfs block devs.

Or are we saying that they no longer support running minio on POSIX systems at all, due to using linux specific syscalls or something else I'm not thinking of? I don't know whether they did this or not.

Those seem like two very different things to me, and when someone says "they don't support POSIX", I assume the latter.


Ah, yes, I didn't even think of that. I always understood it as "abandon POSIX filesystems (as a backend for S3)", because I knew about all these issues with filename/directory clashes.

I don't think they would abandon POSIX systems in general, because what sense would that make?


> open source projects eventually need a path to monetization

I guess I'm curious if I'm understanding what you mean here, because it seems like there's a huge number of counterexamples. GNU coreutils. The Linux kernel. FreeBSD. NFS and iSCSI drivers for either of those kernels. Cgroups in the Linux kernel.

If anything, it seems strange to expect to be able to monetize free-as-in-freedom software. GNU freedom number 0 is "The freedom to run the program as you wish, for any purpose". I don't see anything in there about "except for business purposes", or anything in there about "except for businesses I think can afford to pay me". It seems like a lot of these "open core" cloud companies just have a fundamental misunderstanding about what free software is.

Which isn't to say I have anything against people choosing to monetize their software. I couldn't afford to give all my work away for free, which is why I don't do that. However, I don't feel a lot of sympathy for people who surely use tons of actual libre software without paying for it, when someone uses their libre software without paying.


I think, if anything, in this age of AI coding we should see a resurgence in true open-source projects where people are writing code how they feel like writing it and tossing it out into the world. The quality will be a mixed bag! and that's okay. No warranty expressed or implied. As the quality rises and the cost of AI coding drops - and it will, this phase of $500/mo for Cursor is not going to last - I think we'll see plenty more open source projects that embody the spirit you're talking about.

The trick here is that people may not want to be coding MinIO. It's like... just not that fun of a thing to work on, compared to something more visible, more elevator-pitchy, more sexy. You spend all your spare time donating your labour to a project that... serves files? I the lowly devops bow before you and thank you for your beautiful contribution, but I the person meeting you at a party wonder why you do this in particular with your spare time instead of, well, so many other things.

I've never understood it, but then, that's why I'm not a famous open-source dev, right?


Yep, I've already published a few (I hope) useful plugins where I basically don't care what you do with them. Coded in a few days with AI and some testing.

Already a few more ideas I want to code :)

But this might create the problem image models are facing, AI eating itself...


you mean... like Linux? or gcc?

I don't think there's anyone still actively working on the Linux kernel without receiving a salary, and that's been true for the last two decades, more or less.

Yeah, that's why I said maybe I'm misunderstanding OP. If that's what OP meant by "monetization" then sure, monetization is great.

Companies pay their employees to work on Linux because it's valuable to them. Intel wants their hardware well supported. Facebook wants their servers running fast. It's an ecosystem built around free-as-in-freedom software, where a lot of people get paid to make the software better, and everyone can use it for free-as-in-beer.

Compare that to the "open core" model where a company generally offers a limited gratis version of their product, but is really organized to funnel leads into their paid offering.

The latter is fine, but I don't really consider it some kind of charity or public service. It's just a company that's decided on a very radical marketing strategy.


You would be incorrect. LWN tracks statistics about contributor employers for every Linux kernel release, and their latest post on that shows that "(None)" (i.e. unpaid contributors) beat a number of large companies, including Red Hat by the lines-changed metric and SUSE by the changesets metric.

https://lwn.net/SubscriberLink/1046966/f957408bbdd4d388/


Well yes, but the vast majority of changes (~95%, by either changesets or lines) seem to come from contributors supported by employers.

Sure, but there is still "someone" contributing unpaid.

An individual can definitely start for their own reasons. It's questionable whether they can make contributions whose scope amounts to a quarter of the work, including design, or even more.

Other than a few popular libraries, I'm unaware of any major open source project that isn't primarily supported by corporate employees working on it as part of their day job.

What counts as a "major" open source project?

Ghostty's obviously not a replicable model, but it would be cool if it were!

I mean, let's be real here: if you're competent enough to contribute to the Linux kernel, then you're basically competent enough to get a job anywhere.

What is the use case for implementing a POSIX filesystem on top of an object store? I remember reading this article a few years ago, which happens to be by the minio folks: https://blog.min.io/filesystem-on-object-store-is-a-bad-idea...

> What is the use case for implementing a POSIX filesystem on top of an object store?

The use case is fully stateless infrastructure: your file/database servers become disposable and interchangeable (no "pets"), because all state lives in S3. This dramatically simplifies operations, scaling, and disaster recovery, and it's cheap since S3 (or at least, S3 compatible services) storage costs are very low.

The MinIO article's criticisms don't really apply here because ZeroFS doesn't store files 1:1 to S3. It uses an LSM-tree database backed by S3, which allows it to implement proper POSIX semantics with actual performance.
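To illustrate the idea (this key scheme is invented for illustration and is not ZeroFS's actual on-disk format), a filesystem over an LSM tree maps a file's metadata and fixed-size chunks onto sorted keys in one KV namespace, rather than storing each file as a single S3 object:

```python
# Toy model: a plain dict stands in for the LSM tree (which would
# ultimately be flushed to S3). Keys sort so a file's chunks are adjacent.
CHUNK = 4  # absurdly small chunk size, to keep the example readable

store = {}

def write_file(path, data: bytes):
    # Metadata and data live under separate key prefixes.
    store[f"inode/{path}/size"] = len(data)
    for i in range(0, max(len(data), 1), CHUNK):
        store[f"data/{path}/{i // CHUNK:08d}"] = data[i:i + CHUNK]

def read_file(path) -> bytes:
    size = store[f"inode/{path}/size"]
    prefix = f"data/{path}/"
    chunks = sorted(k for k in store if k.startswith(prefix))
    return b"".join(store[k] for k in chunks)[:size]

write_file("/etc/motd", b"hello world")
print(read_file("/etc/motd"))  # b'hello world'
```

Because writes become small sorted-key updates rather than whole-object rewrites, partial writes, renames, and metadata updates can get proper POSIX semantics without the 1:1 file-to-object mapping the MinIO article criticizes.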


It makes sense that some of the criticisms wouldn't apply if you're not storing the files 1:1.

What about NFS or traditional filesystems on iSCSI block devices? I assume you're not using those because managing/scaling/HA for them is too painful? What about the openstack equivalents of EFS/EBS? Or Ceph's fs/blockdev solutions (although looking into it a bit, it seems like those are based on its object store)?


As long as I'm not the one who gets sued over this, I think it would be wonderful to have some case law on what constitutes an AGPL derivative work. It could be a great thing for free software, since people seem to be too scared to touch the AGPL at all right now.

What would go into POSIX compatibility for a product like this that would make it complicated? Because the kind of thing that stands out to me is the use of Linux-specific syscalls like epoll/io_uring vs. traditional POSIX stuff like poll. That doesn't seem too complicated?
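As a small illustration of why the epoll-vs-poll split needn't be complicated: Python's stdlib `selectors` module hides exactly this difference, with `DefaultSelector` resolving to epoll on Linux, kqueue on the BSDs/macOS, and falling back to poll/select elsewhere, so the portability shim lives in one place:

```python
import os
import selectors

# DefaultSelector picks the best available mechanism for the platform
# (epoll, kqueue, poll, ...) behind a single readiness API.
sel = selectors.DefaultSelector()
r, w = os.pipe()
sel.register(r, selectors.EVENT_READ)

os.write(w, b"ping")
for key, _mask in sel.select(timeout=1):
    print(os.read(key.fd, 4))  # b'ping'

sel.unregister(r)
sel.close()
os.close(r)
os.close(w)
```

The same pattern (an internal event-loop abstraction over epoll/kqueue/poll) is common in C and Go codebases too; io_uring is the harder case, since its completion-based model doesn't map cleanly onto a poll-style readiness API.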

Wait, what's the consensus on this? Are they saying that using object storage over a standard network API which they didn't even create, makes your application a derivative work of the object store?

Or just that the users would need to make MinIO's sources, including modifications, freely available?

I guess that's kind of the big question inherent to the AGPL?


From my understanding, you would not be allowed to sell an "S3 compatible storage" as a service based off of Minio or another AGPL licensed S3-compatible storage solution, especially if you modify the source code of minio in any way and then serve that to your customers.

If you use Minio or another AGPL-licensed service internally to support your own product, without a customer ever touching its API, it should be fine.


What in the AGPL prevents this? The AGPL only forces you to open source your modified version of MinIO/whatever. The GPL forces you to open source only if you actually distribute the modified version, which gets muddy in the context of network services; that's why the AGPL was created. If you want to build a commercial service based on AGPL software, there is nothing stopping you from doing that.

You can modify the source code, you can commercialize it. You just have to give access to the source code to users that interact with it over a network.

Isn't cloud init just slurping its own config file, then generating the "real" config files and slurping those into the right places?

To me, "copies a file named wpa_supplicant.conf from /boot to /etc on first init" is simpler than "parses some yaml, then generates /etc/wpa_supplicant".
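For comparison, the cloud-init equivalent of the /boot copy could use its standard `write_files` module; the SSID and passphrase here are placeholders:

```yaml
#cloud-config
# Hypothetical network credentials, for illustration only.
write_files:
  - path: /etc/wpa_supplicant/wpa_supplicant.conf
    permissions: '0600'
    content: |
      country=US
      network={
          ssid="example-ssid"
          psk="example-passphrase"
      }
```

It's roughly the same amount of typing as the raw wpa_supplicant.conf, just wrapped in YAML that many other distros and cloud images also understand.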

Maybe I'd find it worthwhile if I had encountered cloud init years ago before I invested in learning the other 900 linux networking configuration tools, but now it just feels like a case of XKCD 927 (+1 competing standards). If cloud init is even better, it definitely doesn't seem 10x better to be worth the change.


Cloud init is a tool with documentation and a file format for config, that’s used all over the place.

I’m not making the case that it’s better, just that it’s no more “black magic” than wpa_supplicant’s config file is, and it’s less magical than dropping a wpa_supplicant file into /boot and having the Raspberry Pi do a bespoke RPi-specific shuffle to move it into place.


