Hacker News

> So he was able to sidestep the normal review process.

He should not have been able to sidestep the normal review process; that's the problem in the first place. Even if you're on the sysadmin side, it should not be possible.

You may think that sounds exaggerated, but I've worked at two places where we implemented such a process, both of them far more boring than Tesla and, I suspect, with far less money to burn on infrastructure.

> This can happen, what is important is that such things are discovered.

No, what is important when working with mission-critical code is that such things are mitigated. Discovering such a problem in production code is already a problem, not a solution.



I'm interested to know how you plan to keep someone with `wheel` access from doing anything on a server that they maintain.

No, seriously.


You don't keep someone with wheel access from doing anything on the server. You:

1. Sign every review.

2. Use the review signatures plus the manufacturer's keys to sign reproducible builds of the production image (i.e., you cryptographically certify that "this image is authorized, it includes this list of commits, and those commits went through these reviews").

3. Use a secure boot scheme of your choice to ensure that only signed images can be installed on production servers.

4. Keep anyone with 'wheel' access away from the image-signing keys, and anyone who can generate images away from 'wheel' access.
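A minimal sketch of the sign-then-verify idea in steps 2 and 3 above. Everything here is illustrative: the key, file name, and manifest are made up, and a symmetric HMAC stands in for the asymmetric signatures (e.g. RSA or Ed25519 via openssl/GPG) a real pipeline would use, just to keep the example self-contained and runnable:

```python
import hashlib
import hmac

# Illustrative signing key. In a real setup this is an asymmetric private
# key held only by the release system -- never on a box where anyone has
# 'wheel' access (step 4).
SIGNING_KEY = b"release-system-secret"

def sign_image(image: bytes) -> str:
    """Release side: produce a signature over the built image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

def verify_image(image: bytes, signature: str) -> bool:
    """Install side: refuse any image whose signature doesn't check out.
    A secure boot scheme runs the equivalent of this before flashing."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

image = b"reproducible build of production.img"   # stand-in for the artifact
sig = sign_image(image)

print(verify_image(image, sig))                    # True: untampered image
print(verify_image(image + b" + backdoor", sig))   # False: refused at install
```

The important property is the split: whoever can run `verify_image` (the server) never holds the key needed to produce a valid signature, so root access on the server doesn't let you mint installable images.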

This way, you ensure that no one with 'wheel' access can install a sabotaged image, that any image that can be installed has gone through an auditable trail of reviews, and that the attack surface a malicious developer controls is reduced to stuff that requires root access (which is still a lot of surface, but is harder to sneak past a review).
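The "auditable trail of reviews" part can be sketched as a check the release system runs before it ever signs an image: every commit in the release must carry a valid review approval. The manifest format and `REVIEW_KEY` here are hypothetical, and an HMAC again stands in for real signed reviews:

```python
import hashlib
import hmac

# Illustrative key held by the review system, not by developers or sysadmins.
REVIEW_KEY = b"review-system-secret"

def approve(commit: str) -> str:
    """What the review system emits when a commit passes review."""
    return hmac.new(REVIEW_KEY, commit.encode(), hashlib.sha256).hexdigest()

def audit_manifest(manifest: dict) -> bool:
    """Refuse to sign an image unless every commit has a valid approval."""
    return all(
        hmac.compare_digest(tag, approve(commit))
        for commit, tag in manifest.items()
    )

manifest = {"a1b2c3": approve("a1b2c3"), "d4e5f6": approve("d4e5f6")}
print(audit_manifest(manifest))        # True: every commit was reviewed

manifest["deadbeef"] = "forged-approval"
print(audit_manifest(manifest))        # False: an unreviewed commit sneaked in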

Root access to production servers does not need to mean that you can install arbitrary code on them, and with the right systems engineering, you can ensure that it does not trivially result in arbitrary code being run on production equipment.

Edit: this is all in the context of "questions that Tesla's answer raises". For all I know, the answer might be that they hired some brilliant genius who figured out how to sneak past whatever secure boot scheme they're using. The point is that the post that sparked all this is not naive. This is real stuff. Companies that are concerned about it can make unauthorized commits so difficult to get into production that a disgruntled employee would rather just quit than go through with it.


What are you saying? That because perfect security is impossible, we should give up and do nothing?

It's possible that a sysadmin with low-level access could exploit that, plus a variety of zero-day exploits and privilege escalations in the layers above, to systematically compromise the boot images, steal or falsify credentials and signing keys, and circumvent the safeguards and alarm systems that should be in place to prevent malicious modification of the source code and the compiled binaries, all while hiding his actions from his co-workers. Sure, whatever.

And if that's what happened to Tesla, wow, sucks to be them, that's amazing.

But if there are no safeguards, no review process, and no alarm bells to go off, and any damn person can submit malicious code effortlessly because they were basically working off the honor system... I'm going to blame the victim a little bit.


Only allow the server to run signed code and make sure that no one with wheel access has access to the signing key.


Well, the email specifically mentioned the employee had used other user accounts.

Depending on what those "other user accounts" had access to, it could go in many different directions. :)



