
I go to the repo and get a feel for how popular, how recent, and how active the project is. I then lock it and I only update dependencies annually or if I need to address a specific issue.

Risk gets managed, not eliminated. There is no one "correct" approach as risk is a sliding scale that depends on your project's risk appetite.



None of those methods are even remotely reliable for filtering out bad code. See e.g. this excellent write-up on how many ways there are to infect popular repos and bypass common security approaches [1] (including GitHub "screening"). The only thing that works nowadays is sandbox, sandbox, sandbox. Assume everything may be compromised one day. The only way to prevent your entire company (or personal life) from being taken over is if that system was never connected to anything it didn't absolutely require for running. That includes network access. And regarding separation, even Docker is not really safe [2]. VM separation is a bit better. Bare metal is best.

[1] https://david-gilbertson.medium.com/im-harvesting-credit-car...

[2] https://blog.qwertysecurity.com/Articles/blog3.html


We're making software that doesn't rely on filtering, but on the Principle of Least Authority at runtime.

https://lavamoat.github.io

https://hardenedjs.org
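The core idea can be sketched by hand, without any library. This is not the LavaMoat or HardenedJS API (those enforce the pattern at scale via lockdown and Compartments); it's a minimal illustration, and every name in it is made up:

```javascript
// Hypothetical sketch of the Principle of Least Authority:
// the "dependency" receives one narrow capability instead of
// ambient access to fs, network, or process.

// Untrusted-ish module: it can only call what the host handed it.
function makeGreetingFormatter(writeLine) {
  return {
    greet(name) {
      writeLine(`hello, ${name}`);
    },
  };
}

// Host: decides exactly how much authority to grant.
const captured = [];
const formatter = makeGreetingFormatter((line) => captured.push(line));
formatter.greet("world");

console.log(captured[0]); // "hello, world"
```

The point is that even a compromised `makeGreetingFormatter` could only ever push strings into an array, because that's all the authority it was given.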


Or writing everything by yourself.


You'd have to write the standard libraries and OS as well. Not that it can't be done, but let's just say that people who tried that did not fare well in the mental health department.


If you don’t trust the standard libraries and the OS, you can’t trust the sandbox either.


If you go down this road there also isn't really much need to write anything yourself. After all, you'll be much more likely to include exploitable bugs yourself once you start messing with things you are not an expert in. So neither way is a good solution.


you don't need to write the whole standard library - just the bits you need.


Popular, recent, and active are each easily gameable, no?


Yup, for sure. But part of risk management is considering how likely a failure mode might be and if it's really worth paying to mitigate. Developers are really good at imagining failure modes, but often not so good at estimating their likelihood/cost.

I have no "hard rules" on how to appraise a dependency. In addition to the above, I also like to skim the issue tracker, skim code for a moment to get a feel for quality, skim the docs, etc. I think that being able to quickly skim a project and get a feel for quality, as well as knowing when to dig deeper and how deep to dig are what makes someone a seasoned developer.
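To make the "no hard rules" point concrete, an appraisal like that could be roughed out as a checklist. This is a hypothetical sketch: every signal and threshold below is invented for illustration, not a vetting standard.

```javascript
// Hypothetical dependency-appraisal sketch. Field names and
// thresholds are made up; treat it as a checklist, not a formula.
function appraiseDependency(repo) {
  const signals = [];
  if (repo.daysSinceLastCommit > 365) signals.push("stale: no commits in a year");
  if (repo.openIssues > 0 && repo.closedIssues / repo.openIssues < 1)
    signals.push("issue tracker: more open than closed");
  if (!repo.hasTests) signals.push("no visible test suite");
  if (repo.maintainers < 2) signals.push("bus factor: single maintainer");
  return { worthDeeperReview: signals.length > 1, signals };
}

const verdict = appraiseDependency({
  daysSinceLastCommit: 20,
  openIssues: 40,
  closedIssues: 400,
  hasTests: true,
  maintainers: 1,
});
// Only one flag trips (single maintainer), so a skim may be enough.
console.log(verdict.worthDeeperReview); // false
```

How many flags justify digging deeper is exactly the judgment call the comment above describes; no script replaces it.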

And beware of anyone who has opinions on right vs. wrong without knowing anything about your project and its risk appetite. There's a whole range between "I'm making a microwave website" and "I'm making software that operates MRIs."


Of course. A malware-infected dependency has motivation to pay for GitHub stars and fake repo activity. I would never trust any metric that measures public "user activity". It can all be bought by bad actors.


Then what do you do instead?


Would totally depend on the project and what kinds of risks were appropriate to take given the nature of the project. But as a general principle, for all kinds of development: "Bringing in a new dependency should be A Big Deal." Whether you are writing a toy project or space flight avionics, you should not bring in unknown code casually. The level of vetting required will depend on the project, but you have to vet it.


Skim through the code? Sure, it's likely to miss something, but it still catches low-effort attacks, and if enough people do it, someone will see it.
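That kind of skim can even be partly mechanized. A hedged sketch (the token list is illustrative, not exhaustive, and real obfuscated malware will evade it) that flags the obvious red flags:

```javascript
// Hypothetical skim helper: flags obvious red-flag tokens in source
// text. Catches only low-effort attacks; determined attackers will
// obfuscate their way past a list like this.
const RED_FLAGS = ["eval(", "child_process", "Function(", "http://", "atob("];

function skimForRedFlags(source) {
  return RED_FLAGS.filter((token) => source.includes(token));
}

const sample = `
  const data = JSON.parse(atob(payload)); // decode hidden blob
  eval(data.code);
`;
console.log(skimForRedFlags(sample)); // ["eval(", "atob("]
```

A hit isn't proof of malice (plenty of legitimate code calls `eval`), just a prompt to read that spot carefully instead of skimming past it.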



