Hacker News

This is a pretty egregious failure for a staff security engineer



It's a pretty egregious failure for the org because it controlled the conditions for it to happen.

The security guy is just the patsy because he actioned it.

They have obviously done this a million times before and now they got burned.


Yes, this. That same engineer shouldn’t have a pocket nuclear trigger shaped just like their key fob, either. Humans are predictable.

Aren’t staff part of engineering leadership?

At my job, I would just say they are in the ear of engineering leadership, but are not part of it.

That makes sense. I guess I usually think of developing policies for this kind of thing to be pretty much what staff would do. I don’t usually expect the CTO to make decisions about how to do testing. To the extent the engineering leadership are to blame, it’s that they were the ones who hired/retained this guy. The buck ultimately stops with them to be sure, but making these kinds of policies seems within the remit of a staff eng.

As a staff engineer, you can't even imagine what his salary is, for him to screw up like that.

That being said, interesting to see how salaries skyrocketed over the years: https://meta.wikimedia.org/wiki/Wikimedia_Foundation_salarie... but not that much for engineering.


The highest non-severance number is $512,179 for the CEO in 2022. That's not particularly extreme. It's ~1/10 of what the Mozilla Foundation CEO makes.

that's insane...I am not donating anymore (not that I gave that much.)

With all their donation begging, nothing will change: they will still spend money on useless seminars and continue to underfund security by hiring low-paid web amateurs to do the important work.

Pretty much the definition of a “career limiting event”

It's either a Career Limiting Event or a Career Learning Event.

In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.

In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.

Hopefully the Wikimedia foundation is the former.


Realistically, there’s a third option which it would be glib to not consider: you lose your job, get hired somewhere else, and screw up in some novel and highly avoidable way because deep down you aren’t as diligent or detail-oriented as you think you are.

This is the most likely outcome

In the average real world, the staff engineer learns nothing, regardless of whether they lose or keep their job. Some time down the line, they make other careless mistakes. Eventually they retire, having learned nothing.

This is more common than you'd think.


I was able to run some stats at scale on this, and people who make mistakes are more likely to make more mistakes, not less. Essentially, you're sampling from a distribution of propensity for mistakes, and this propensity dominated any sign of learning from mistakes. Someone who repeatedly makes mistakes is not repeatedly learning; they are accident-prone.
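A minimal sketch of the selection effect described here, assuming workers draw a latent mistake propensity from a skewed distribution (all distributions and numbers below are invented for illustration, not from the parent's data): even if everyone genuinely learns between periods, conditioning on "made a mistake before" still selects the high-propensity people.

```python
import random

random.seed(0)

# Each worker has a latent propensity to make mistakes, drawn from a
# skewed distribution (shape parameters invented for illustration).
workers = [random.betavariate(1, 50) for _ in range(10_000)]

def mistakes(rate, tasks=100):
    """Count mistakes over a number of tasks at a fixed error rate."""
    return sum(random.random() < rate for _ in range(tasks))

def avg(xs):
    return sum(xs) / len(xs)

# Observe a first period, then a second period in which everyone
# "learns" and halves their error rate.
period1 = [mistakes(r) for r in workers]
period2 = [mistakes(r * 0.5) for r in workers]

made_mistake = [p2 for p1, p2 in zip(period1, period2) if p1 > 0]
clean = [p2 for p1, p2 in zip(period1, period2) if p1 == 0]

# Despite universal learning, past mistakes still predict future
# mistakes, because conditioning on a mistake selects high-propensity
# workers: the propensity spread dominates the learning effect.
print(avg(made_mistake), avg(clean))
```

The point of the toy model is that "people who made mistakes make more mistakes" can hold even when every individual improves.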

My impression of mistakes was that they were an indicator of someone who was doing a lot of work. They're not necessarily making mistakes at a higher rate per unit of work, they just do more of both per unit of time.

From that perspective, it makes sense that the people who made the most mistakes in the past will also make the most mistakes in the future, but it's only because the people who did the most work in the past will do the most work in the future.

If you fire everyone who makes mistakes you'll be left only with the people who never make anything at all.


In this case it was trivial to normalize for work done.

It’s very human to want to be forgiving of mistakes; after all, who hasn’t made any? But there are different classes of mistakes, made by all different types of people. Making one mistake doesn’t change what type of person you are, but if you build your sample from those who have made mistakes, you are biasing it in favor of those prone to making them. In my experience, any effect of learning is much smaller than this initial bias.


Can you elaborate? What scale? What kind of mistakes? This sounds quite interesting.

A decade of data from many hundreds of people, in a help-desk-type role where all communication was kept, mostly chat logs and emails. Machine learning with manual validation. The goal was to put a dollar figure on mistakes, since customers were much more likely to quit and never come back when it was our fault; but many customers are nothing but a constant pain in the ass, so it was important to distinguish who was right whenever there was a conflict.

Mistakes made per call, like many things, followed a Pareto distribution, so 90% of the mistakes were made by 10% of the people. Identifying and firing those 10% made a huge difference. Some of the ‘mistakes’ were actually the result of corruption, and they had management backing, as management was enriching itself at the company’s expense (a pretty common problem), so the initiative was killed after the first round.
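For intuition on how concentrated a Pareto distribution is: draw per-person mistake counts from one and measure what share of all mistakes the worst 10% account for. The shape parameter below is invented for illustration; values near 1 produce the extreme "90% from 10%" concentration described above.

```python
import random

random.seed(1)

# Per-person mistake counts drawn from a heavy-tailed Pareto
# distribution (shape 1.05 chosen for illustration; smaller shape
# values mean heavier tails and more concentration).
counts = sorted((random.paretovariate(1.05) for _ in range(1000)),
                reverse=True)

top10 = counts[:100]              # the worst 10% of people
share = sum(top10) / sum(counts)  # their share of all mistakes
print(round(share, 2))
```

The exact share varies run to run because the tail is so heavy, but the worst decile reliably accounts for a large majority of the total.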


This sounds really interesting, but possibly qualitatively different from programming/engineering, where automated improvements/iterations are part of the job (and what's rewarded).

What if you define a hard rule from these statistics that "you must fire anyone on their first error"? Won't your company be empty in a rather short timeframe? (Or will it be composed only of do-nothing people?)

Why would you do that? You're sampling from a distribution; a single sample only carries a small amount of information, though repeat samples compound.
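The "one sample carries little information, repeat samples compound" point can be sketched with a Beta-Bernoulli update, assuming we model each person's error rate with a Beta prior (prior parameters and observation counts below are invented for illustration):

```python
# Prior belief: mistakes are rare (Beta(1, 20), an invented prior).
alpha, beta = 1, 20
prior_mean = alpha / (alpha + beta)

def posterior_mean(mistakes, tasks):
    """Posterior mean error rate after observing `mistakes` in `tasks`."""
    return (alpha + mistakes) / (alpha + beta + tasks)

one_sample = posterior_mean(1, 10)  # one mistake in ten tasks
repeated = posterior_mean(5, 50)    # same observed rate, 5x the evidence
print(prior_mean, one_sample, repeated)
```

A single mistake nudges the estimate only slightly, while the same rate observed repeatedly moves it much further, which is why firing on the first error discards people on almost no evidence.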

Or they are working in a very badly designed system that consistently encourages them to make mistakes.

They'll be fine; recruiters don't look this stuff up, and background checks generally only care about illegal shit.

Nobody is going to know who did this, so probably not career limiting in any major way.

They named him in the support ticket linked here somewhere.

> sbassett




Is ok, the AI was going to replace them in a few weeks anyway.


