Can do both: destroy the company, but also go after those responsible. Anyone who knew about the situation but didn't make their supervisor(s) aware is personally liable for damages. Anyone on the board or in the C-suite who was aware and either did nothing or didn't notify regulators is also personally liable. In both cases, if anyone died as a result of the company's actions, they should be tried for manslaughter.
Corporate capital punishment: the company gets seized, its patents and copyrights are made public domain, and any other assets it has are sold to the highest bidder, with the proceeds used to compensate the victims.
Does OW2 even run on the Steam Deck? ProtonDB lists its compatibility as unknown, but competitive PvP games generally struggle due to their highly intrusive DRM/anti-cheat that wants rootkit-level access.
Back when I used to run Linux as my main system, OW was one of the games that ran basically without any problems (other than shader compilation at the start), sometimes even better than it did on Windows.
Funnily enough, there have been a few times (pre-Steam Deck) when Wine-specific problems were addressed by the BNet team, and IIRC when Linux players were accidentally banned they unbanned them in some scenarios.
You don't need to actually block EU downloads; just state that no open source software is certified for commercial usage in the EU. Any company that ignores that is now in violation of this law, and that's their problem. The EU will then need to decide whether it wants to let its businesses continue benefiting from open source software or fix this law.
The interesting question is really what happens when commercial software companies outside the EU that use open source libraries decide they don't want to deal with this headache and _also_ start refusing to certify their software for use in the EU and stop doing business there.
Why not? Nothing in the GPL says you need to certify your code for commercial usage in the EU.
Edit: In fact, reading the GPL it looks like it might implicitly already preclude usage in the EU under this law. There's this section right here:
> 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

> 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
That seems to suggest that any cost associated with certifying for commercial usage in the EU would fall on the company using the GPL-licensed code, not the developers of the licensed code. Certifying the code for commercial usage under the EU law is, I would argue, a warranty, and one the GPL already explicitly disclaims.
The CRA is designed to fix that "problematic" passage in the GPL, and push responsibility back onto the company creating open source software.
I don't think this will impact any hobbyist open source developer - it is primarily going to impact commercial 'opencore' companies. I wouldn't be surprised if this targets companies like Red Hat / IBM, etc. Or benefits them. I'm not sure yet.
Microsoft is probably laughing all the way to the bank, and this will just solidify their hold on the European market.
The best one I've seen is jOOQ, although there are some caveats. There are a number of advantages and a few disadvantages to using a query builder. On the positive side you have:
1) Implementation-agnostic queries. For better or worse, SQL is a VERY loose standard with tons of little quirks in each underlying DB. Using a query builder lets you write a standardized syntax that then gets translated to match the quirks of the actual SQL implementation for you. This _somewhat_, although not entirely, insulates you from the underlying implementation's details.
2) Works with existing language tooling. Using the actual language, you can take advantage of things like IntelliSense and syntax highlighting to write code and spot typos. With a more traditional SQL client you'll be embedding your queries inside strings and, in the worst case, constructing ad-hoc queries by appending strings together.
3) Related to point 2, you get compile-time checks that your queries are well formed and that your types make sense (there's a short sketch of what this looks like after this list). This is where the first major problems also come in, but more on that later.
4) Possible compile-time optimization of your queries. I'm not aware of any builder that does this, but just as language compilers run optimization passes over the AST generated from your code, a builder could in theory optimize the query that results from its AST, possibly even at compile time.
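To make points 1-3 a bit more concrete, here's a rough sketch of what a jOOQ query can look like. It uses the plain string-based field()/table() API so the snippet stays self-contained; the author/book schema and connection details are made up, and the stronger type checks really come from jOOQ's generated schema classes rather than this string API.

```java
import static org.jooq.impl.DSL.*;

import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.Record2;
import org.jooq.Result;
import org.jooq.SQLDialect;

public class QueryBuilderSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; swap in your own DB.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/demo", "user", "pass");

        // The dialect tells jOOQ which implementation's quirks to emit (point 1).
        DSLContext ctx = using(conn, SQLDialect.POSTGRES);

        // The query is ordinary Java, so the IDE can autocomplete it and the
        // compiler can reject structurally malformed queries (points 2 and 3).
        Result<Record2<Object, Integer>> booksPerAuthor = ctx
                .select(field("author.name"), count())
                .from(table("author"))
                .join(table("book")).on(field("author.id").eq(field("book.author_id")))
                .groupBy(field("author.name"))
                .fetch();

        booksPerAuthor.forEach(r -> System.out.println(r.value1() + ": " + r.value2()));
    }
}
```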
And now for the cons.
1) You're dependent on the query builder supporting every weird quirk and advanced feature, and there's the question of what to do when a query uses a feature that isn't supported by the underlying implementation. If you want to use some slightly obscure feature that your particular DB supports but the query builder doesn't, you might just be SOL.
2) Compile-time headaches due to either generating or validating code. Oftentimes these tools work best when, at compile time, they can connect to your actual DB to read its schema and either generate code (such as enums of tables and columns) or validate queries (e.g. checking that a varchar column is being treated as a string and not an int). If you have a conveniently accessible local instance or dev env this might not be a problem, but then you often also need to find a solution for your CI/CD server. This also says nothing about generated code, which is its own set of headaches.
3) Yet another DSL to learn. You're no longer writing SQL, but instead something SQL-adjacent that has been projected onto another language in a no doubt imperfect fashion. You now need to use knowledge of both SQL and your language of choice simultaneously in order to write queries. In theory the compile-time checks and language support might make this a wash, but it could become relevant if you run into some weird edge cases and need to debug.
As for things I'd wish for in an ideal query builder, I think it's VERY important to provide options that avoid needing to establish DB connections at compile time, while still providing the advantages that such a connection usually enables. Being able to, e.g., point the tools at schema DDL files checked into version control and validate queries or generate code based on those would be a great feature to have.
Beyond that, providing adequate escape hatches for unusual features when building queries is also important. There should be a way to invoke arbitrary functions or chunks of raw SQL outside the confines of the query DSL (with the understandable restriction that query checking of such chunks will be minimal at best).
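For what it's worth, jOOQ's plain SQL templating API is roughly the kind of escape hatch I mean. A hedged sketch, where my_fancy_ranking is a hypothetical DB-side function the builder knows nothing about:

```java
import static org.jooq.impl.DSL.*;

import org.jooq.Condition;
import org.jooq.Field;

public class RawSqlEscapeHatch {
    // Drop a raw, dialect-specific expression into an otherwise builder-managed query.
    static final Field<Double> RANKING =
            field("my_fancy_ranking(book.score, {0})", Double.class, val(0.5));

    // Arbitrary raw predicates get passed through to the database verbatim, so
    // (as noted above) the builder can only minimally check them.
    static final Condition RECENT =
            condition("book.published_at > now() - interval '30 days'");
}
```

Both fragments compose with the regular DSL, e.g. `ctx.select(RANKING).from(table("book")).where(RECENT)`.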
It's because Erlang abstracts away the difference between distributed and centralized services. In Erlang everything is message passing, and Erlang doesn't much care whether the components passing messages are running on the same server or on multiple ones in a DC; it will route the messages either way. In many ways Erlang is the ultimate microservice platform, to the point where the entire language is built around it.
It's important to remember that Joel Spolsky, one of the founders of Stack Overflow, is an old-school Microsoft guy who managed the Excel team before starting his own company. He has always been a bit of a Microsoft fanboy and used to advocate a lot for VB as a serious language back before it got swallowed by .Net and turned into a less powerful syntax for C#. The fact that they chose C# isn't surprising at all given that; it actually would have been far more surprising to see them pick anything but C#.
I think the previous poster was a bit off the mark, though. The language performance wasn't really the issue there; rather, it's the fact that they picked a language that at the time really only ran on Windows, and as a consequence they were forced into running their web servers on Windows. That choice then forced them to scale up rather than out, since each instance has license costs attached to it. For most companies running on Linux it's trivial to scale out, since your only costs are the compute cost (or the hardware cost in a non-cloud model), whereas it tends to be far more expensive to scale up, as more powerful hardware tends toward geometric price increases rather than linear ones. These days the choice of C# wouldn't be such a big issue, as .Net Core can easily run on Linux servers, but back in the 2000s using C# was putting a pretty big albatross around your neck by way of Windows licenses.
> it's the fact that they picked a language that at the time really only ran on Windows, and as a consequence they were forced into running their web servers on Windows.
If the founders and early employees were from Microsoft it might have been easier for them to use Windows Server since they were already pretty well versed in Windows development.
It's a pattern I constantly see: "Why did your startup use X instead of Y?"
"Ohh well X has this feature that Y lacks and so and so... ohh and the founder and his friend were pretty good at X and used it before."
I agree with you that it seems like a self-imposed limitation. At the same time, it makes one think about how such limitations can actually foster creativity and efficiency. They mention in the post - they constantly gloat about this - that they could run SO on one or two machines.
I'd imagine that said machine would need to be a behemoth and not a t3.micro, but intuitively I feel that this would be much cheaper than the average horizontally scaled web application. Or in other words, that they're hyperefficient, regardless of architecture.
Does anyone have any insight on whether this intuition is on the right track?
Eh, not sure that follows. Here's the thing: costs aren't linear. If you do what AWS does and create some artificial "compute units" as a sort of fungible measure of processing power, what you'll find is that the sweet spot for price per compute unit is a medium power system. The current mid-range processors tend to be slightly more expensive than the low-end processors, but significantly cheaper than the high-end processors.
So, hypothetically, let's say you can get one of 4 processors: a low-end one that gives you 75 units for $80, a mid-range processor that gives you 100 units for $100, a high-end processor that gives you 125 units for $150, and a top-of-the-line processor that gives you 150 units for $300. If you normalize those costs, the 4 processors come out to roughly $1.07, $1.00, $1.20, and $2.00 per compute unit. The best value is the $1-per-compute-unit price point of the $100 processor. Logically, if you need 150 units of compute power you have 2 choices: two $100 processors, or one $300 processor. Clearly the better option is the two $100 processors. This would be scaling out. In the case of what SO did, though, they took that off the table, because their formula isn't just the cost of the processor (ignoring related things like RAM and storage) but also includes a per-instance license cost. Their math ends up looking more like ($100 CPU + $150 Windows license) x 2 = $500, vs. ($300 CPU + $150 Windows license) x 1 = $450, which ends up making the more expensive processor the cheaper option in terms of total cost.
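As a quick sanity check on that arithmetic (same made-up processor prices, and a hypothetical $150 per-instance license):

```java
public class ScaleOutVsUp {
    public static void main(String[] args) {
        // {compute units, price} for the four hypothetical processors above.
        double[][] cpus = { {75, 80}, {100, 100}, {125, 150}, {150, 300} };
        for (double[] cpu : cpus) {
            System.out.printf("%.0f units @ $%.0f -> $%.2f per unit%n",
                    cpu[0], cpu[1], cpu[1] / cpu[0]);
        }

        double license = 150; // hypothetical per-instance Windows license cost

        // Hitting 150 units: two mid-range boxes vs. one top-of-the-line box.
        double scaleOut = 2 * (100 + license); // $500
        double scaleUp  = 1 * (300 + license); // $450
        System.out.printf("scale out: $%.0f, scale up: $%.0f%n", scaleOut, scaleUp);
    }
}
```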
> If you do what AWS does and create some artificial "compute units" as a sort of fungible measure of processing power, what you'll find is that the sweet spot for price per compute unit is a medium power system.
At least when I look here the costs are linear.
E.g. a c5.2xlarge is double a c5.xlarge, and a c5.24xlarge is 24 times the price of a c5.xlarge.
That's interesting; it didn't use to be linear, particularly on the very large instances. Oddly, when looking at the prices for RHEL, those are much closer to what all the prices used to look like (I hadn't actually looked at AWS pricing in a few years). I wonder if AWS's virtualization tech has just reached the point where all processing is effectively fungible and it's all really just executing on clusters of mid-range CPUs, no matter how large your virtual server is.
They were/are on bare metal, so they have real dedicated cores and no VM overhead. Two bare-metal cores are a hell of a lot faster than two virtual cores. I had been away from infrastructure during the rise of VMware and didn't realize that the core count was potentially fake. I was sparring with an admin over resources one day and mentioned the number of cores on the VM, and he just laughed: "Those are virtual, you don't really have four cores." Be careful in your assumptions about these VMs in the cloud.
He never managed the Excel team and he didn't write code for the product. He was a PM and wrote the spec for VBA. An accomplishment, to be sure, but not much different from all the other great individual contributors Excel has had over the years.
He did not have a management role. PMs write specs and talk to the customers. That time was also very much the "cowboy coding" era, so part of the job was convincing devs to implement things the way he spec'd them, because he couldn't force them. I think that's part of why he was so popular with his blog aimed at programmers: he'd had lots of time honing his persuasive technique.
Considering the shift from Intel to AMD that has happened over the last couple of years, it will be interesting to see whether AMD becomes more of a priority target in the future. AMD has certainly been leading in sales in most demographics for the last year or two, and there's little sign of Intel closing that gap. AMD has even made some moves recently that indicate a desire to chase Intel out of the lead in the few niches it still controls, like the x86 low-power/low-cost device market that has been dominated by Atom/Celeron.
L4 uses a similar model, and the last ~20 years of research around L4 have mostly focused on improving IPC performance and security. The core abstraction is a mechanism to control message passing between apps, routed through lightweight kernel invocations (which is indeed practically the only thing the kernel does, it being a microkernel architecture).
Memory access is enforced, although not technically by the kernel. Rather, at boot time the kernel owns all memory; during init it slices off all the memory it doesn't need for itself and passes it to a user-space memory service, and thereafter all memory requests get routed through that process. L4 uses a security model where permissions (including resource access) and their derivatives can be passed from one process to another. Using that system, the memory manager process can slice off chunks of its memory and delegate access to those chunks to other processes.