
It's definitely not more productive per hour than 40. And if you're moonlighting another 30 hours, that'll definitely decrease your productivity at your day job.


Again, it depends. Maybe they have more pride in their job or despise their company less, who knows.

And I don't mean productivity per hour, lol. No, I mean absolute productivity.

An employee working like a dog will probably get less done than one just working normally, because much of that extra work is negative: it doesn't add to the pile of work done, it chips away at it.

Eventually, I would think, you reach a point where an employee is less productive than no employee at all. It seems impossible to work 100 hours a week and get less than nothing done, but if you're actively making the product worse or creating technical debt, that's how I would classify it.


> Imnsho this paper is very low quality

Compared to, say, a phase 3 clinical trial, sure. But compared to the average paper, and especially the average business paper, I don't think that's the case.


I think there is some room in between those two.

At a minimum you'd expect a few more companies, and more sources than just code review and code productivity metrics (this alone disqualifies the study, because it centers on just one task: software development), etc.


Would you be saying that if you agreed with the findings?


I have not made up my mind either way, so I'm not sure where you pulled that from. I even wrote in another comment upthread: "It is entirely possible that these conclusions (which by themselves are not all that shocking or novel) hold true over larger samples and across multiple types of company but that's not what they did."

But you probably missed that.


This work would suggest that the WFH movement would lead to a rise in senior engineer salaries and a reduction in junior engineer salaries, which we haven't seen.


Does the industry even have as many junior folks as it used to? Because cutting them, or not hiring them in the first place, is an alternative to lowering their salaries.


You can't build a universal mind-reading device that doesn't require calibration.


And when you can build one, you will also get telepathy, telekinesis, clairaudience and clairvoyance. I can't wait until I can send a directed thought to someone else.


> I can't wait until I can send a directed thought to someone else.

Oh no, we're going to have ads inserted into our thoughts, aren't we?


I get the idea of publicly disclosing security issues to large, well-funded companies that need to be incentivized to fix them. But I think open source has a good argument that, in terms of the risk/reward tradeoff, publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.


In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days. The security issue should certainly be disclosed, but only when it's responsible to do so.

Now, if Google or whoever really feels that fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.

Then everybody wins.


> ...then they could very well contribute by submitting a patch along with their issue report.

I don't want to discourage anyone from submitting patches, but they do not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, I can say they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know whether the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.


> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days

This is very far from obvious. Even if Google doesn't feel like prioritising a fix for a critical issue themselves, it remains irresponsible not to warn other users of the same library.


If that’s the case, why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users ASAP.


Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.


Because it gives maintainers a chance to fix the issue, which they’ll do if they feel it is a priority. Google does not decide your priorities for you, they just give you an option to make their report a priority if you so choose.


Timed disclosure is just a compromise between giving the project time to fix and the public's interest in knowing. People have been doing this for years now. Why are people acting like this is new just because ffmpeg is whining?

And occasionally you do see immediate disclosures (see below). This usually happens for vulnerabilities that are time-sensitive or actively being exploited, where users need to know ASAP. It's very context dependent. That doesn't apply here, so there's a standard delayed disclosure as a courtesy, giving the project a chance to fix it first.

Note the word "courtesy". The public interest always overrides considerations for the project's fragile ego after some time.

(Some examples of shortened disclosures include Cloudbleed and the aCropalypse cropping bug, where in each case there were immediate reasons to notify the public / users)


Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.


Part of the problem is that many of the issues are not really critical, no?


Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.


> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.

So when the xz backdoor was discovered, you think it would have been better to sit on that quietly, try to wrest control of upstream away from the upstream maintainers, and wait until all the downstream projects had reverted the changes in their copies before making it public? Personally I'm glad that went public early. Yes, there is a tradeoff between the speed of public disclosure and the publicity a vulnerability gets, but ultimately a vulnerability is a vulnerability, and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available; at least that way I can decide to shut down my Debian boxes.


The XZ backdoor is not a bug but a malicious payload inserted by malicious actors. The security vulnerability would immediately have been used, since it was created by attackers.

This bug is almost certainly too obscure to be found and exploited in the time it takes FFmpeg to produce a fix. On the other hand, this vuln being public so soon means any attacker is now free to develop their exploit before a fix is available.

If Google's goal is security, this vulnerability should only be disclosed after it's fixed, or after a reasonable time (which, according to the FFmpeg devs, 90 days is not, because they receive too many reports from Google).


A bug is a bug, regardless of the intent of the insertion. You have no idea if this bug was or wasn't intentionally inserted. It's of course very likely that it wasn't, but you don't and can't know that, especially given that malicious bug insertion is going to be designed to look innocent and have plausible deniability. Likewise, you don't know that the use of the XZ backdoor was imminent. For all you know the intent was to let it sit for a release or two, maybe with an eye towards waiting for it to appear in a particular down stream target, or just to make it harder to identify the source. Yes, just like it is unlikely that the ffmpeg bug was intentional, it's also unlikely the xz backdoor was intended to be a sleeper vulnerability.

But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.


A 25-year-old bug in software is not the same as a backdoor (not a bug, a full-on backdoor). The bug is so old that if someone put it there intentionally, well, congrats on the 25-year-old 0day.

Meanwhile, the XZ backdoor was 100% meant to be used. I didn't say when, and that doesn't matter: there is a malicious actor with the knowledge to exploit it. We can't say the same of a bug in a 1998 codec that was found by extensive fuzzing and has no obvious exploitation path.

Now, should it be patched? Absolutely, but should the patch be done ASAP at the cost of other, maybe more important, security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.


> Absolutely, but should the patch be done ASAP at the cost of other, maybe more important, security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.

I fully agree, which is why I really don’t understand why everyone is all up in arms here. Google didn’t demand that this bug get fixed immediately. They didn’t demand that everything be dropped to fix a 25-year-old bug. They filed a (very good and detailed) bug report against an open source project. They gave a private report out of courtesy and an acknowledgment of the tradeoffs inherent in public bug disclosure, but ultimately a bug is a bug, and it’s already public in the sense that the source code is public. If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly, changes any of that.


In the end, a report saying "fix this within 90 days or this gets public" for small-ish bugs like this is a kind of demand: do this, or it gets out and you'll have to rush out a release to fix it anyway.

I can understand that stance for serious bugs and security vulnerabilities. I can understand such delays as a way to put pressure on a company with a big market cap. But these delays are exactly like a demand put on the company: fix it ASAP or it gets public. We wouldn't have to do this if companies in general didn't need to be publicly pressured into fixing their stuff. Making it public has two objectives: warn users that they may be at risk, and force the publisher to produce a fix ASAP or else risk a reputation hit.

> If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly, changes any of that.

It does change how they react. Had Google given more time or staggered their reports over time, FFmpeg wouldn't have felt pressure to publish fixes ASAP. Even if the devs can say they won't fix something, any public project will want to maintain a certain quality level and not let security vulnerabilities sit unpatched in public.

In the end, had these reports been made by random security researchers, no drama would have happened. But if I see Google digging up 25-year-old bugs, is it too much to expect them to provide a patch along with the report?


But this isn't a "small-ish bug". What gave you that impression? It's a vulnerability in code that is compiled in by default and that is reachable when ffmpeg is run with its default settings on a file crafted to trigger the bug.

And if you believe this is a "small-ish" bug just because of the ffmpeg Twitter account's gaslighting about "20 frames of a single video in Rebel Assault", then surely it being disclosed would be irrelevant? The only way the disclosure timeline makes a difference is if ffmpeg, too, thinks the bug is serious.


> In the end, a report saying "fix this within 90 days or this gets public" for small-ish bugs like this is a kind of demand: do this, or it gets out and you'll have to rush out a release to fix it anyway.

I think this is where the disconnect is. To my mind there is no "do this or else" message here, because there is no "or else". The report is a courtesy advance notice of a bug report that WILL be filed, no matter what the ffmpeg developers do. It's not like this is some awful secret that Google is promising not to disclose if ffmpeg jumps to their tune.

Further, the reality is most bug reports are never going to be given a 90-day window. Their site requests that if you find a security vulnerability you email their security team, but it doesn't tell you not to also file a bug report, and their bug report page doesn't tell you not to file anything you think might be a security or vulnerability bug to the tracker. And a search through the bug tracker shows more than a few open issues (sometimes years old) reporting segfault crashes, memory leaks, uninitialized variable access, heap corruption, divide-by-zero crashes, buffer overflows, null pointer dereferences and other such potential safety issues. It seems the ffmpeg team generally has no problem with having a backlog of these issues, so surely one more in a (as we've been repeatedly reminded) 25-year-old obscure codec parser is hardly going to tank their reputation, right?

> In the end, had these reports been made by random security researchers, no drama would have happened.

And now we get to what is really the heart of the matter. If anyone else had reported this bug in this way, no one would care. It's not that Google did anything wrong, it's that Google has money, so everyone is mad that they didn't do even more than they already do. And frankly that attitude stinks.

It's hard enough getting corporations to actually contribute back to open source projects, especially when the license doesn't obligate them to at all. I'm not advocating holding corporations to some lesser standard: if the complaint was that Google was shoving unvalidated, and un-validatable, low-effort reports into the bug tracker, or that they actually were harassing the ffmpeg developers with constant followups on their tickets and demands for status updates, then that would be poor behavior we would be equally upset about no matter who it came from. But like you said, any other security researcher behaving the same way would be just fine.

Shitting on Google this way for behaving according to the same standards outlined on ffmpeg's own website, because of who they are and not what they've done, just tells other corporations that it doesn't matter if you contribute code and money in addition to bug reports: if you don't do something to someone's arbitrary standard based on WHO you are, rather than WHAT you do, you'll get shit on for it. And that's not going to encourage more cooperation and contributions from the corporations that benefit from these projects.


> publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.

You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.

For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
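
To make that concrete, here is a minimal sketch of the kind of self-serve check this enables, assuming the ffmpeg CLI is on your PATH and using "sanm" purely as an example decoder name (not necessarily the one from this report): list the decoders compiled into your local build and see whether the affected one is even present.

    # Minimal sketch: check whether a named decoder is compiled into the
    # local ffmpeg build. Assumes the ffmpeg CLI is on PATH; "sanm" is
    # just an example decoder name, not necessarily the affected one.
    import subprocess

    def decoder_enabled(name: str) -> bool:
        out = subprocess.run(
            ["ffmpeg", "-hide_banner", "-decoders"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Decoder lines look like " V....D sanm   Description here":
        # capability flags first, then the decoder name as the second field.
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[1] == name:
                return True
        return False

    if __name__ == "__main__":
        print("decoder present:", decoder_enabled("sanm"))

If the decoder isn't there (e.g. a build configured with --disable-decoder=NAME), the disclosed bug can't reach you; if it is, you can decide whether to rebuild without it or avoid untrusted inputs until a fix lands.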

Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.

As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.


> publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.

Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous bugs have been reported but never become visible because of this FOSS policy.


But if open source relies on public contributors to fix things, then the bug should be open so anyone can take a stab at fixing it, rather than relying on the closed group of maintainers.


You are missing the tiny little fact that apparently a large portion of infosec people are of the opinion that insecure software must not exist. At any cost. No shades of gray.


Why? Why should we care where our pasta is made? Why not just buy the cheapest, highest-quality pasta, wherever it happens to be made?


Because places and people aren't fungible.


Well, nor is food, you see.


Food is relatively unimportant except as a source of calories.


This would explain why you don't like Italian pasta.


To you, sure, but lots of people enjoy food. And Americans enjoy eating complete dogshit, being among the most obese and revolting (to the eyes and to the nose) people on the planet. Maybe it'd be a pretty cool thing if they ate more like the Italians.


Frame this on the wall as the most succinct summary of the utter capitulation involved in supporting these tariffs.

Yes they are raising taxes and making everything more expensive for Americans.

Yes they are disrupting the raw materials needed for domestic manufacturing supply chains.

Yes their policies change so frequently and capriciously that it's impossible for American businesses to make medium-to-long term plans.

Yes the president and his family are personally and directly benefiting from these policy decisions. Yes they are directly accepting gifts and payments, including jets, TikTok board seats, and brazenly corrupt contributions to their personal cryptocurrency.

All of that is acceptable, and technically food doesn't need to taste good anyway.


> All of that is acceptable, and technically food doesn't need to taste good anyway.

Food doesn’t need to taste good. This is America, not some Mediterranean country: https://www.nytimes.com/1975/08/31/archives/food-on-enjoying... (“This American attitude toward food has been formed by two important elements in our national thinking, both functions of our national history. One is the he‐man ideology developed during our pioneering past which holds that it is effete to demand finesse in cookery (or in any other cultural activity, for that matter). The other is our Puritanism. The Puritan nourishes himself (grudgingly), for God has so organized the universe that he must. Possibly he suspects that the chore of eating was imposed on him as a penance for his disgraceful gourmandise in connection with an apple.”).


This is amazing, thanks for sharing


Of all the things in my post to address, the fact that you are pulling some random NYT opinion piece from 1975 to say “actually it’s more American to NOT enjoy food” only reiterates my point.


I think the article discusses a salutary American cultural norm that has since been diluted but is still worth emulating. The America that sent a man to the moon thought garlic was spicy. You don’t need “the best pasta.” Adequate pasta, produced in America by Americans, is good enough.


This is becoming a Tim Robinson gag. I wish you would dedicate as much time to supporting American consumers and American manufacturers as you do to arguing whatever it is you're digging in on about food spiciness. Your points were not about pasta but truly that it's more American to not enjoy food lol.

If you must respond, please address my initial points about all the concessions you're making about these policies. Also, how do you end up finding these random op-eds? Like, what do you search for to find them?


> Your points were not about pasta but truly that it’s more American to not enjoy food lol.

The point to which I was replying asked: "Why not just buy the cheapest highest quality pasta, where ever it happens to be made?" My response was that developing American capacity to produce is more valuable than satiating the American appetite for consumption.

America's lost Puritan spirit is directly relevant to the demand side of that equation. It suppressed Americans' appetite for cheap Chinese goods, foreign luxuries, etc. It was a great virtue of the Republic. Among other things, it enabled America to develop its domestic industries and reinvest the profits in the country, because Americans were readily willing to forgo cheaper prices and higher quality of foreign-made goods for the benefit of developing domestic industrial capacity. (Note that Chinese industrial policy also is focused on suppressing domestic demand for imports.)

Contemporary trade policy is based on facilitating the cheap procurement of foreign products at the expense of domestic industries. That's a bad thing, and one of the forces enabling that bad thing is the loss of the Puritan spirit in America. We've become a country focused on hedonic satisfaction, and that makes us weak.


Whether it's unfounded, I don't know.

But if you're taking the time to build out side projects, I'd open-source them so you can put them on your resume. It doesn't make sense to me to hamstring your resume so some kid you don't know gets a slightly better score in some CS class.


Inflation was stable and pretty low for the 10 years after the financial crisis.

Gold wasn't at $400 per oz before the financial crisis, and the spike in gold prices was fairly recent.

Basically the financial crisis response didn't cause inflation, the covid response did.


The COVID response was what the Occupy protests told us the bailouts were. Never has there been a bigger transfer of wealth to the 1% than during the COVID era.


> However, there's no speculation about what their earnings will be because they're currently selling their services below cost and there isn't really any story as to how they'll turn this profitable.

Do we know they're selling their services below cost? I'm pretty confident they're making money on inference and burning through large piles of cash on capex and research.


Not to mention that the risk that OpenAI, even if it does go bankrupt, sells for less than $4B is not anywhere close to 90%.

