Hacker News

None whatsoever; their contracts with customers will limit liability to the price paid for the software/subscription. If there were open-ended liability for software failures, then very little software would get written.


Caveat to this: In the UK and many other countries, you cannot limit liabilities that cause death or personal injury arising from negligence.


Yeah but if it's a hospital, they should be able to operate without these IT systems. Nothing critical / life-or-death / personal injury should rely on Windows / IT systems.


> they should be able to operate without these IT systems.

Is that even possible any more? (That said, "operate" isn't a boolean, it's a continuum between perfect service and none, with various levels of degraded service between, even if you mean "operate" in the sense of "perform a surgical operation" rather than "any treatment or care of any kind").

Printing all medical notes in hard copy could be done; that's the relatively easy part. But there's a lot of stuff which is inherently IT these days (gene sequencing, CT scans, etc.), and there's a lot that computers add which humans can't do ourselves. Even video consultation (let alone remote surgery) with experts from a different hospital involves a human, but that human can't be everywhere at once: https://en.wikipedia.org/wiki/Telehealth

> Nothing critical / life-or-death / personal injury should rely on Windows / IT systems.

If you think that's bad, you may want to ensure you're seated before reading this about the UK nuclear deterrent: https://en.wikipedia.org/wiki/Submarine_Command_System


Also Silicon Valley: AI will replace doctors and nurses.


Why? Because you simply wish it to be so?


Because the suppliers of IT systems (eg Microsoft, Crowdstrike) do not agree that they can be used for life-critical purposes

If someone is injured or dies because the hospital has inadequate backup processes in the event of a Windows outage, some or maybe all liability for negligence falls on those who designed the hospital that way, not the IT supplier who didn't agree to it.


If your assumptions rest on corporate entities or actual decision makers being held legally liable, then you've got a lot of legwork ahead of you to demonstrate why that's a reasonable presupposition.


Because it's evidently a bad idea and there are reasonable alternatives.


That’s easy for you to say, with the benefit of recency bias, and with presumably zero experience in running a hospital.


It's not about experience; it's about following regulated standards. This has been well known ever since technology (not just computers) got into hospitals.


None of the points you mention detracts from the correctness of his/her statement.


And? People and institutions constantly make bad decisions for which there are reasonable alternatives, and that's assuming that the incentives at play for decision makers are aligned with what we would want them to be, which is often not the case. Not that that ends up mattering much except as an explanatory device, because people and institutions constantly pursue bad ideas even seen in terms of their own interests.


It would be like orthopedic surgeons heading down to harbor freight to pick up their saws instead of using medical grade versions.

The tool isn't fit for purpose


Because you should always have a backup.


When has a software company successfully been sued (or settled) over this liability?


From windows tos:

Disclaimer. Neither Microsoft, nor the device manufacturer or installer, gives any other express warranties, guarantees, or conditions. Microsoft and the device manufacturer and installer exclude all implied warranties and conditions, including those of merchantability, fitness for a particular purpose, and non-infringement. If your local law does not allow the exclusion of implied warranties, then any implied warranties, guarantees, or conditions last only during the term of the limited warranty and are limited as much as your local law allows. If your local law requires a longer limited warranty term, despite this agreement, then that longer term will apply, but you can recover only the remedies this agreement allows.


"We give you no guarantees, unless the local law says we have to give them to you, in which case we do."

So they might get sued on a local level?


It doesn’t really matter what the contract says. Laws take precedence over contracts. For example, Boeing’s liability for 737 airliners that crash due to faulty software certainly isn’t limited to the price of the planes.


But only $243.6M for fraud, which caused the death of 346 people.


Yes, the software industry as we know it would not exist if companies were held liable for all damages. But in the current state of affairs they have little incentive to improve software quality - when an incident like this happens they may suffer an insignificant short-term valuation loss, but unless it happens too often they can continue business as usual.

Many companies pay lip service to quality/reliability, but internal incentives almost always go against maintenance and quality-of-service work (and instead reward new projects, features, etc.).


> Yes, the software industry as we know it would not exist if companies were held liable for all damages.

Of course it would. Restaurants are held liable for food poisoning, but they still operate just fine. They just - y’know - take care that they don’t poison their customers.

If computer systems were held liable, software would be a lot more expensive. There would be less of it. And it would also be better.

I think I can get behind that future.


I like that future too, but to play devil's advocate:

Write me software that coordinates all flights to and from airports, capturing all edge-cases, that's bug free. Then tell me the number you estimate and the number of years to roll this out.


Sure, but ... that's not a spec. Specs have clear goals and limited scope. "All flights from all airports forever" is impossible to program, full stop.

The right way to write code like that is to start simple and small - we're going to service airports X, Y and Z. Those airports handle Q planes per day. The software will be used by (this user group) and have (some set of responsibilities). The software engineers will work with the teams on the ground during and after deployment to make sure the software is fit for purpose. Someone will sign off on using it and trusting its decisions. And let's also do a risk assessment where we lay out all the ways defects in the software could cost money and lives, so we can figure out how risk averse we need to be.

Give me scope like that, and sure - I'll put a team together to write that code. It'll be expensive, but not impossible. And once it's working well, I'd happily roll it out to more airports in a controlled and predictable manner.


Crowdstrike's stock closed at $343 yesterday, I imagine that and MSFT are going to be cratering later this morning.


Pro tip: your stock can't go down if you crash the stock exchange


It honestly did not occur to me. In all seriousness, was a stock exchange ever really hacked (not just data exfiltration -- write access to everything)?


Trading has been halted on stock exchanges due to technical issues many times. But there is also more than one stock exchange.


No, it can't, if there is no stock exchange online to process the prices.


"Tell me, Mr. Anderson, what good is a phone call when you are unable to speak?"


Pretty good time to buy MSFT I would imagine, given that this isn't really their fault.


So far MSFT is down by ~2%... Even Crowdstrike is only -20%, when they probably did more damage in a day than their entire net worth.


I'm mystified it's not much lower. Perhaps the market hasn't really priced in the damage yet.


Yeah, if I had a spare million, I can imagine buying that dip.


I'd expect crowdstrike to take a big hit. Between this and the russian hack [edit: actually not, sorry, confused with SolarWinds], I am not sure they are not causing more problems than they solve.


Crowdstrike was hacked by Russians?


Sorry I confused them with SolarWinds. Strike that


it hovers around -20% in pre-market (at the moment)


MSFT will be fine. They are riding the AI waves, this is not meaningful, especially since they are not at fault.


The waves that are already looking like a storm in a teacup ?

There is no 'AI'; that is all just hype. There is machine learning, which is a very powerful technology, but I doubt MSFT will be leading that revolution. As for LLMs, MSFT might have some competitiveness there, but I doubt it's going to be a very lucrative market. MSFT is highly overvalued.


<< There is no 'AI', that is always only hype. There is machine learning, which is a very powerful technology

I agree with you on the technical aspect, but the distinction makes regular people's eyes glaze over within 5 seconds of that explanation. AI as a label for this is here to stay, the same way "cyber" stopped meaning text sex on IRC. The people have spoken.

<< MSFT is highly overvalued.

Yes, but so is NVDA, the entire stock exchange and US real estate market. We are obviously due for a major correction and have been for a while. As in, I actually moved stuff around in my 401k to soften the blow in that event 2 years ago now. edit: yes, I am a little miffed I missed out on that ride.

So far, everything was done to prevent a hard crash, and in an election year that is unlikely to change. Now, after the election, that is another story altogether.

<< I doubt MSFT will be leading that revolution.

I think I agree. I remain mildly hopeful that the open model approach is the way.


https://www.aqr.com/-/media/AQR/Documents/Whitepapers/Unders...

You should stop trying to predict the next crash. According to the study, most people (including institutional investors) consistently believe there is a >10% chance the market will crash in the next 6 months, when historically the probability is only 1%.


<< You should stop trying to predict the next crash.

Hmm? No. I will attempt to secure my own financial interest.

<< According to the study, most people (including institutional investors) consistently believe there is a >10% chance the market will crash in the next 6 months when historically the probability is only 1%

Historically is doing a fair amount of work here. I would argue there is little historical value to the data we face. Over the past few decades we went through several mini revolutions (industrial, information, and whatever they end up calling this one) in terms of how we work, eat, communicate and, well, live.

All of these have upended how humans interact with the world, effectively changing the calculus on the data that preceded it, if not nullifying it altogether in some ways.

Your argument is to stop worrying since you are likely wrong anyway, by a factor of 10. What I am saying is that in 1935 people also thought they had time to ride the wave.

edit: ok, need coffee. too many edits


> Now after the election, that is another story altogether.

Agree. First half of 2025 could be pretty spectacular (if/when we get through 2024).

I suspect there might be some pretty radical plans for US debt monetisation being drawn up, to be implemented early in the new presidential term.


My brain goes there too, but the other part of my brain says "line always goes up." The richest among us are heavy owners of stocks, and this country does everything it can to keep those numbers up. Look at that insane COVID V-shaped recovery that happened. That's just not a real/natural market reaction in my book.


The worst part is that I get the need to do something to rein it in, but I get the feeling it will, as always, not be the actual rich (owns-the-color-blue rich level) who suffer from those plans. There are fewer and fewer moves the government has as time progresses.


It may not be their fault directly but it is causing Windows systems to bluescreen, which IS their fault and their responsibility, ultimately.


How is it their fault and responsibility? Isn’t falcon sensor basically running like a kernel module? Does it mean that Windows is not engineered properly when it can be crashed by this?


Are you saying that they should prevent or limit the ability of their users to install third party software? Or at the very least prevent it from running in kernel mode?


A more reasonable claim would be that microsoft should have a way to allow virus-scanners to run without needing to be able to crash the kernel.

That isn't an easy thing to do, but it should be possible.


I don't think that is possible. How can an anti-virus not in kernel mode defend against viruses running in kernel mode then?


eBPF programs, I believe, cannot crash the kernel: they are statically verified before they are loaded.
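The safety property comes from static verification before load: anything that could loop forever or touch memory it shouldn't is rejected up front, so the host never runs it. Here's a toy Python model of that idea (nothing below is real eBPF; the instruction set and verifier rules are invented purely to illustrate the principle):

```python
# Toy model of the eBPF idea: statically verify a restricted program
# before it runs, so a bad "module" is rejected instead of crashing the host.
# Hypothetical illustration only -- not the real eBPF verifier.

MEM_SIZE = 16  # size of the sandboxed scratch memory

def verify(program):
    """Reject programs that could loop forever or touch memory out of bounds."""
    for pc, (op, arg) in enumerate(program):
        if op == "jmp" and arg <= pc:
            return False  # backward jumps could loop forever
        if op in ("load", "store") and not (0 <= arg < MEM_SIZE):
            return False  # out-of-bounds memory access
        if op not in ("jmp", "load", "store", "add", "halt"):
            return False  # unknown instruction
    return True

def run(program):
    """Only execute programs the verifier accepted."""
    if not verify(program):
        raise ValueError("rejected by verifier; host keeps running")
    mem, acc, pc = [0] * MEM_SIZE, 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "halt":
            break
        if op == "load":
            acc = mem[arg]
        elif op == "store":
            mem[arg] = acc
        elif op == "add":
            acc += arg
        elif op == "jmp":
            pc = arg
            continue
        pc += 1
    return acc

# A well-formed program runs; a self-looping one is rejected before execution.
print(run([("add", 5), ("store", 0), ("halt", 0)]))  # → 5
print(verify([("jmp", 0)]))                          # → False
```

The point is that the crash surface moves from runtime (kernel panic) to load time (a rejected program and an error code).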


Windows blue screen was never Microsoft's responsibility. /s


That’s what the license agreement says. Wait till every man and his dog sues them.


This is an insane take. Do you think other industries get away with limiting their liability to the product cost? No, because that doesn't provide adequate incentives for making a safe product. The amount of software that gets written depends mostly on the demand for that software. Even if Microsoft were not willing to up their game to make the risk viable, someone else would.


The thing is we know how to make (eg) food that is safe or to a lesser extent bridges that don't fall down. If you sell food that makes people sick you should have known how to avoid that and so you can be held liable.

We don't have a good idea how to make software that is flawless, at least, not at scale for a cost that is acceptable. This is changing a little bit now with the drive by governments to use memory-safe languages, but that only covers a small part of the possible spectrum of bugs in software and hardware.


Nothing is without flaws, it's about limiting risk to an acceptable amount. Critical software should be held against higher standards.


What's "critical software"? Software controlling flight systems in planes is already held to very high standards, but is enormously expensive to write and modify.

In this case it seems most of the software which is failing is dull back office stuff running on Windows - billing systems, train signage, baggage handling - which no one thought was critical, and there's no way on earth we could afford to rewrite it in the same way as we do aircraft systems.


Something that has managed to ground a lot of planes and disable emergency calls today is in fact critical. The outcome of it failing proves it is critical. Whatever it is.

Now, that it was not known previously to be critical, that may be. Whether we should have realised its criticality or not, is debatable. But going forward we should learn something from this. So maybe think more about cascading failures and classify more things as critical.

I have to wonder how the failure of billing and baggage handling has resulted in 911 being inoperative. I think maybe there's more to it than you mention here.


Agreed, there is no such thing as perfect software.

In the physical world, you can specify a tolerance of 0.0005 in, but the part is going to cost $25k a piece. It is trivially easy to specify a tolerance, very hard to engineer a whole system that doesn't blow the cost, and impossible to fund.

Great software architectures are the ones that operate cheaply, but are bulletproof when software fails. https://en.wikipedia.org/wiki/Chaos_engineering


Given how widespread the issue is, it seems that proper testing on Crowdstrike's part could have revealed this issue before rolling out the change globally.

It's also common to roll out changes regionally to prevent global impact.

To me it seems Crowdstrike does not have a very good release process.
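A staged (canary) rollout like the one described is conceptually simple: deploy to progressively larger rings of the fleet and halt as soon as any ring reports failures. A minimal sketch, assuming hypothetical `deploy()` and `healthy()` callables (nothing here reflects Crowdstrike's actual pipeline):

```python
# Minimal sketch of a staged (canary) rollout: push an update to growing
# fractions of the fleet and stop the moment health checks fail, so a bad
# update never reaches the whole world at once.

def staged_rollout(hosts, deploy, healthy, rings=(0.01, 0.10, 0.50, 1.0)):
    """Deploy to growing fractions of the fleet, halting on first failure."""
    done = 0
    for fraction in rings:
        target = int(len(hosts) * fraction)
        for host in hosts[done:target]:
            deploy(host)
        if not all(healthy(host) for host in hosts[:target]):
            return ("halted", target)  # stop before the blast radius grows
        done = target
    return ("complete", done)

# Stub deploy/health functions for illustration: host "h7" fails its health
# check, so the rollout halts in the 10% ring instead of going global.
fleet = [f"h{i}" for i in range(100)]
result = staged_rollout(fleet, deploy=lambda h: None,
                        healthy=lambda h: h != "h7")
print(result)  # → ('halted', 10)
```

With a scheme like this, a defect that bluescreens every machine it touches is caught after the first ring, affecting 1% of hosts rather than all of them.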


> but is enormously expensive to write and modify.

We're talking about critical software. If we can't afford to reach the level of safety needed because it's too expensive, well so be it.

Besides, the enormously expensive flight systems don't seem to make my plane ticket expensive at all...


There's only one piece of software which (with adaptations) runs every Airbus plane. The cost of developing and modifying that -- which is enormous -- is amortized over all the Airbus planes sold. (I can't speak about Boeing)

What failed today is a bunch of Windows stuff, of which there is a vast amount of software produced by huge numbers of companies, all of very variable quality and age.


I meant critical software as a short-hand for something like: the quality of software should be proportional to the amount of disruption caused by its downtime.

Point of sale in a records store, less important. Point of sale in a pharmacy, could be problematic. Web shop customer call center, less important. Emergency services call center, could be problematic.


I, as a producer of software, have effectively no control over where it gets used. That's the point.

Outside of regulated industries it's the context in which software is used which determines how critical it is. (As you say.)

So what you seem to be suggesting (effectively) is that use of software be regulated to a greater/lesser extent for all industries... and that just seems completely unworkable.


What you're describing is a system where the degree of acceptable failure is determined after the software becomes a product because it is being determined by how important the buyer is. That is backwards and unworkable.


It isn't, though. "You may not sell into a situation that creates an unacceptable hazard" is essentially how hazardous chemical sale is regulated, and that's just the first example that I could find. It's not uncommon for a seller to have to qualify a buyer.


I think the system is rather one where, if you offer critical services, then you're not allowed to use software that hasn't been developed to a particular high standard.

So if you develop your compression library it can't be used by anyone running critical infra unless you stamp it "critical certified", which in turn will make you liable for some quality issues with your software.


I assume you mean "if the buyer will use the software in critical systems."

That's very realistic and already happens by requiring certain standards from the resulting product. For example, there are security standards and auditing requirements for medical systems, payment systems, cars, planes, etc.


> Software controlling flight systems in planes is already held to very high standards, but is enormously expensive to write and modify.

Here's something I don't understand: those jobs pay chump change compared to places like FB, and (afaik) social networks don't have the same life-or-death context.


Hence, Windows should blue/green kernel modules and revert to a past known good version if things break
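Some Linux distros do something like this for kernels via boot counting: try the new version a few times, and if it never boots cleanly, fall back to the last version that did. A sketch of that logic, with invented names (Windows exposes no such API for third-party kernel drivers, so this is purely the idea, not an implementation):

```python
# Sketch of a "revert to last known good" loader: a boot counter tracks
# consecutive failed boots of the candidate module; after too many, the
# loader falls back to the last version that booted successfully.
# All names here are hypothetical.

MAX_FAILED_BOOTS = 3

def pick_module(state):
    """Choose which module version to load based on recent boot outcomes."""
    if state["failed_boots"] >= MAX_FAILED_BOOTS:
        return state["last_known_good"]  # give up on the new version
    return state["candidate"]

def record_boot(state, succeeded):
    """Update counters after each boot attempt; promote on success."""
    if succeeded:
        state["last_known_good"] = state["candidate"]
        state["failed_boots"] = 0
    else:
        state["failed_boots"] += 1

state = {"candidate": "driver-v2", "last_known_good": "driver-v1",
         "failed_boots": 0}

# Three crashes in a row trip the counter and the loader falls back.
for _ in range(3):
    assert pick_module(state) == "driver-v2"
    record_boot(state, succeeded=False)
print(pick_module(state))  # → driver-v1
```

The hard part in practice is deciding what counts as a "failed boot" and keeping the counter somewhere the crashing module can't corrupt.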


Would not shock me for AV companies to immediately work around that if it were to be implemented. “You want our protection all of the time, even if the attacker is corrupting your drivers!”


This seems like the kernel module was faulty for some time. The update only changed the input data for the module.
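If the privileged module stays fixed and only its input data changes, the defensive move is to validate that data strictly and reject anything malformed before the module parses it. A minimal sketch of that posture; the file format, magic bytes, and field layout below are invented for illustration:

```python
# Sketch of defensive input validation: check a content file's header,
# declared record count, and total length, and refuse to parse anything
# that doesn't match, instead of letting the privileged parser crash on it.
import struct

MAGIC = b"CFG1"  # hypothetical 4-byte file signature

def validate_content(blob):
    """Return parsed records, or raise rather than pass bad data through."""
    if len(blob) < 8 or blob[:4] != MAGIC:
        raise ValueError("bad header")
    (count,) = struct.unpack("<I", blob[4:8])
    expected = 8 + count * 4  # header + count 32-bit records
    if len(blob) != expected:
        raise ValueError("length mismatch: refuse to parse")
    return list(struct.iter_unpack("<I", blob[8:]))

good = MAGIC + struct.pack("<I", 2) + struct.pack("<II", 10, 20)
print(validate_content(good))  # → [(10,), (20,)]

truncated = good[:-2]          # e.g. a half-written update file
try:
    validate_content(truncated)
except ValueError as e:
    print("rejected:", e)
```

A kernel component that treated every content update as hostile until proven well-formed would have turned this kind of bad update into a logged rejection rather than a bluescreen.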


Crowdstrike should have higher testing standards, not every random back-office process.


> Software controlling flight systems in planes is already held to very high standards, but is enormously expensive to write and modify.

Boeing disagrees.


We don't know how to make general software safe, but we do know how to make any one piece of software safe. If your software is going to be used as infrastructure, then it should be held to the same standards. If you don't want it to be treated as infrastructure, don't sell it to hospitals.


You're mixing up the responsibility; in your world, hospitals shouldn't purchase it.


Responsibility can be shared.



