
It does seem like it's harming open source in a few ways:

* no longer any pressure to contribute upstream

* no longer any need to use a library at all

* Verbose PRs created with LLMs that are resume-padding

* Spurious issues created with LLMs by unsophisticated users

Overall, we've lost the open-source library as the single meeting place where everyone gathers to build a better commons. That part is true. It will be interesting to see what follows from this.

I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on GitHub, and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs: iterative fuzzy search, single-keybinding nav, and so on. These kinds of things have low ongoing maintenance, and when I want a change I don't need to consult anyone or anything like that.

But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased.

For the moment, for a lot of small ones, I much prefer their feature list and README to their code. Amusing inversion.





> no longer any need to use a library at all

As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall).

I'm a big proponent of the Go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more.

Now, obviously forgoing libsodium or something for generated code is a bad idea, but probably 90%+ of npm packages could go.


> probably 90%+ of npm packages could go

I feel npm gets held to an unreasonable standard. The fact is, tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then you have the bulwark packages that most orgs use.

It is unfair to me that it's always held up as the "problematic registry". When you have a single registry for the most popular, and arguably most used, language in the world, you're going to see a massive volume of all kinds of packages; that doesn't mean 90% of npm is useless.

FWIW, I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time.


I think you are completely oblivious to the problems plaguing the npm ecosystem. When you start a typical frontend project using modern technology, you will introduce hundreds, if not thousands, of small packages. These packages get new security holes daily, are often maintained by single people, are subject to removal and to supply chain attacks, download random crap from GitHub, etc. Each of them should ideally be approved, monitored for changes, and uploaded to the company repo to avoid build problems when one gets taken down.

Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust.


If these packages get security holes daily, they probably cannot "just go" as the parent comment suggested (except in the case of a hostile takeover). If they have significant holes, then they must be significant code. Trivial code can just go, but doesn't have any significant quality issues either.

I'm not, in the least. I'm aware of the supply chain issues and CVEs etc.

One thing I want to separate here is that the number of packages is not a quality metric. For instance, a core Vue project may appear on the surface to have many different sub-dependencies, but those dependencies are sub-packages of the main project.

I realize projects can go overboard with dependencies, but that's not in and of itself an issue. Like anything, it's all about trade-offs and setting good practices.

It's not like the Java ecosystem has been immune either. The Log4Shell vulnerability was a huge mess.

My point isn't to bash the Java ecosystem, but nothing is immune to these issues, and raw incident frequency is a fallacious reason to spread FUD about an ecosystem, because it lacks context.


It's a matter of community culture. In the Node.js ecosystem, all those tiny packages are actually getting widely used, to the extent that it's hard to draw a line between them and well-established packages (esp. when the latter start taking them as dependencies!). Python has been npm'ified for a while now but people are still generally more suspicious of packages like that.

Since code-generating AIs were likely trained on them, they won't go too far, though.

I am utterly confused at how you think rewriting entire libraries yourself produces fewer security holes than battle-hardened libraries that thousands of other people use.

- Generating your own left pad means you don't have to pull in an external left pad

- Which in turn means left pad doesn't show up on your SBOM

- Which in turn means CVEs won't show up for left pad when you run your SBOM through SCA

- Which means you don't have to do any CVE triage, risk analysis, and mitigation (patching) for left pad

- It also means you don't have to do SOUP testing for left pad

Now imagine you've done that for a dozen libraries that you were only using a small piece of. That's a ton of regulatory and cybersecurity work you've saved yourself. I never claimed generating code makes your software more secure; I claimed it can reduce the regulatory and cybersecurity burden on your SDLC, which it does, as demonstrated above. Taken to the extreme (zero external dependencies), your regulatory burden for SOUP and SCA goes to zero.
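To make the trade concrete, here's a minimal sketch of what that generated, vendored stand-in might look like in TypeScript. This is a hypothetical replacement, not the actual left-pad package's code; the name and signature are just illustrative:

    // Hypothetical generated replacement for a left-pad dependency:
    // a few self-contained lines instead of another SBOM entry.
    function leftPad(value: string, length: number, pad: string = " "): string {
      const deficit = length - value.length;
      if (deficit <= 0 || pad.length === 0) return value;
      // Repeat the pad enough times to cover the deficit, then trim to exact size.
      return pad.repeat(Math.ceil(deficit / pad.length)).slice(0, deficit) + value;
    }

    console.log(leftPad("42", 5, "0")); // prints "00042"

(These days String.prototype.padStart does this natively, which only underlines how trivial the dependency was in the first place.)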


It's also now a lot easier to fork an open source project and tweak the last 10% so it works exactly as you want.

Exactly. Whilst I can see the problem with vibe-coded "contributions" that lower the signal-to-noise ratio on big OSS projects, it's also "liberating" in the sense that forking becomes much more viable. If it previously took time to dive into a project to tweak it to your needs, it's now trivial.

So in many senses AI is democratising open-source.


It may be worth considering how much the impact of LLMs is exacerbated by friction in the contribution process.

Many projects require a great deal of bureaucracy, hoop-jumping, and sheer dogged persistence to get changes merged. It shouldn't be surprising if some people find it easier to just vibe-customize their own private forks as they see fit, both skipping that whole mess and allowing for modifications that would never have been approved in mainline anyway.


AI coding sort of reminds me of when Ninite originally came out for Windows. It was like a "build your own OS": check boxes and get what you need in a single executable.

AI coding is kind of similar. You tell it what you want and it just sort of pukes it out. You run it then forget about it for the most part.

I think AI coding is kind of going to hit a ceiling, maybe idk, but it'll become an essential part of "getting stuff done quickly".



