Garlef's comments

Unlike casinos, prediction markets have no incentive to punish cheaters

SDD has no real track record.

It's only a promise of a method.

If you think it's waterfall again: Wrong.

Everyone who fantasizes about "just"™ writing the perfect spec will be in for a rude awakening.

The spec will change over time and your initial version will turn out to be very wrong.


If this happened in Germany, it would most likely be not only a breach of contract but an actual criminal offense.

(I'm not a lawyer, so I might be mistaken about this; especially the level of intentionality might be a factor.)


... And this entity is again owned by AWS. And so the cloud act still applies.

> There would be no point spinning up a 'sovereign cloud' beholden to the US.

Of course: It gives both sides a narrative that lets them pretend everything is alright.


How would the cloud act apply if none of the employees of the AWS European Sovereign Cloud are US citizens?

> Courts can require parent companies to provide data held by their subsidiaries.

https://en.wikipedia.org/wiki/CLOUD_Act


But they would have no way to actually compel anyone who isn't a US citizen. The worst the US could do is fine Amazon until it complied.

Edit: Looks like the below is not true. However, such a setup is technically possible, and if they were serious about making it truly isolated from US influence, it could be done.

Original comment: No, it's not owned by AWS. It's a separate legal entity with an EU-based board, and they license the technology from the US company.


This source says it's 100% owned by AWS USA:

https://openregister.de/company/DE-HRB-G1312-40853


Hmm, I'm not sure how to interpret that page, but it looks like you are right; I'll edit my comment. I was told by GCP PMs that this is how the GCP/T-Systems setup is structured (see sibling comment) and that it mirrored the AWS setup, but maybe that was not correct.

How difficult would it be for the "independent" licensor to exfiltrate data from the "sovereign cloud" via logging or replication?

The control planes have to be completely independent for anything approaching real independence, not just some legal fiction that's slightly different[1] from the traditional big-tech practice of having an Irish subsidiary license the parent company's tech for tax-optimization purposes.

1. No different at all, according to sibling comment.


I don't know about AWS but I dealt with some (small / tangential) aspects of the GCP setup: https://www.t-systems.com/dk/en/sovereign-cloud/solutions/so...

It is completely separate. There isn't a shared control plane. You don't manage this in the GCP console; it's a separate white-label product.

Any updates GCP wants to push are sent as update bundles that must be reviewed and approved by the operator (T-Systems). During an outage, the GCP oncall or product team has no access and talks to the operator, who can run commands or queries on their behalf, or share screenshots of monitoring graphs etc.

(This information is ~3 years stale, but this was such a fundamental design principle that I strongly doubt it has changed.)


The countless times I've read an article that starts with the description of a researcher drinking their morning coffee...

Linux on the desktop has been progressing for many many years... and a lot of stuff still doesn't work out of the box

I've recently had some fun at the intersection of "moving windows between screens" vs "ui scaling" vs "ambient system is wayland but the snap uses x11 internally".


Multiple displays with different scales have worked fine since at least 2017 (which is when I started using sway, and precisely for this reason).

OTOH, I know that recent versions of GNOME struggle with this. Just last year I saw plenty of situations where moving windows across displays triggered all kinds of quirks. This is a GNOME-specific issue, and like most of its issues, it doesn't affect other compositors.


I guess installing Windows is more work than running a VM

... and more invasive


More work than using custom builds of everything on the Linux host?

It’s just the kernel and virtualization stack that are custom. Dual booting is annoying, as you lose access to your entire desktop environment. Want to tab out of your game and check your email client? Well, you can’t, unless you maintain another email client on the Windows partition that you only want to use for running a game anyway. If you spend any significant amount of time gaming, you just end up getting dragged away from Linux, where you want to be. I was dual booting for a while, and it was fine for a focused Skyrim session here and there, but when I started playing an MMO that I was in and out of constantly, it was very inconvenient to not have access to my Linux desktop environment while I was idling in the city for hours.

With Looking Glass nowadays it practically feels like just running a Windows game on Linux. I used a VFIO setup for years before Linux gaming support was good, and I had to switch monitor inputs and toggle my KVM whenever I launched a game, and it was still better than dual booting. There wasn’t kernel anticheat back then, though, so I didn’t have to muck with the kernel and UEFI.


>It’s just the kernel and virtualization stack that are custom.

That "just" is doing a lot of heavy lifting. Maintaining a customized system is hardly zero effort. Speaking for myself, there's no way I'd ever consider something like this, because I know sooner or later a system update is going to do something weird that I'll have to figure out how to fix. I'd rather just buy a second computer just to run those specific games. The other person admits they need a second GPU to support this use case anyway, so it's not even like you're saving that much money.

>Want to tab out of your game and check your email client?

I have a phone, and a tablet, and a laptop (besides the desktop). I'm not exactly hurting for ways to check my messages or look something up quickly.


Yeah, that’s fine; I personally didn’t spend many hours tweaking my dotfiles in Linux just to spend half my time in an operating system that I hate, that spies on me, and that doesn’t have my stuff in it. I wouldn’t maintain my own custom kernel either just to bypass anticheat; I don’t buy those games.

Not sure if it's still the case in the 2020's, but back in the 2010's I had no end of issues with Windows deciding to either fuck up the dualboot so nothing would load or overwrite it entirely and leave it as Windows only.

I think I probably switched off dual booting to vfio around 2015. Before that for dual boot I had just followed the arch wiki and used two separate drives, using grub for booting both windows and arch. I don’t remember having issues with dual boot but setting up vfio for gaming was still very fresh at the time and was not trivial for me.

EDIT: Looks like it was 2016 when I stopped dual booting and switched to VFIO, because I built a new computer for it a year later: https://imgur.com/gallery/battlestation-4BuoZ Ironically, reading that back, I have just recently started getting into film photography.


I used VFIO in the past, and it's not true that setups like VFIO or a custom kernel/virtualization stack "just" work. For starters, custom setups need management. There are even latest-generation GPUs whose drivers are not fully VFIO compatible.

VFIO had a host of problems that are rarely mentioned, because VFIO supposedly "just" works: power management, card driver compatibility, audio passthrough (or maybe not), USB passthrough (or maybe not), stuttering, and so on.


VFIO is in a significantly better place than it was 10 years ago, though. Proper IOMMU groups are more common on motherboards, flashing the GPU BIOS is less necessary, etc., and most importantly the community is bigger and older, so there is more knowledge about parts compatibility and VFIO setup.

That said it’s almost entirely unnecessary with the state of Linux gaming now.


Better or not, even the latest-generation AMD GPUs don't automatically guarantee a very stable VFIO setup, which makes the technology still immature.

Sure, in 10 years we’ve gone from bleeding edge, with server/workstation motherboards being necessary, to immature, with having to do a little homework on which consumer hardware to buy. It’s not like VFIO is something for the general public anyway.

Not sure: The LLMs seem to be okay at coding recently but still horrible at requirements and interface design. They never seem to get the perspective right.

One example I recently encountered: The LLM designed the client/consumer facing API from the perspective of how it's stored.

The result was a collection of CRUD services for tables in a SQL db rather than something that explained the intended use.

Good for storage, bad for building on it.
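To make the contrast concrete, here's a minimal sketch (all names hypothetical, not from any real codebase) of the difference between the storage-shaped CRUD surface the LLM produced and an interface that explains the intended use:

```python
# Storage-shaped API: one CRUD method per table. Callers must know the
# schema and reassemble the intent themselves.
class OrderRowService:
    def __init__(self):
        self.rows = {}

    def create_row(self, row_id, data):
        self.rows[row_id] = dict(data)

    def update_row(self, row_id, data):
        self.rows[row_id].update(data)


# Intent-revealing API: methods name what the consumer is trying to do;
# how it's stored stays an internal detail that can change freely.
class Orders:
    def __init__(self):
        self._rows = {}  # storage layout is private

    def place_order(self, order_id, items):
        self._rows[order_id] = {"items": list(items), "status": "placed"}

    def cancel_order(self, order_id):
        self._rows[order_id]["status"] = "cancelled"

    def status_of(self, order_id):
        return self._rows[order_id]["status"]
```

Both store the same data; only the second one tells a new consumer what the service is actually for.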


Maybe it's a binary choice for the people who have to maintain it?

You can scale down money.

But you can't scale down commitment arbitrarily.


Serious question:

If you take a BDD/TDD approach - do technologies like this still give you something?

I've dabbled a bit in something similar (SpacetimeDB) with the aim of creating a headless backend for a game.

But then I realized that I'd still need to define the interface that I was testing in the traditional software layer and that all the nice DB magic seemed worthless since I'd still have to orchestrate traditionally.

(In short: No matter how nice your DB magic is, you will still hide it away in the basement/engine room, right?)


Check out the dBase/FoxPro family. My nostalgia remembers it fondly; there was no disconnect in how everything worked (for example, the forms were stored as tables, so you could do something like `SELECT * FROM form_name WHERE widget = ...`), and there was zero ORM.

This is my aim https://tablam.org.

BTW, I worked on SpacetimeDB, and I can proudly say it's halfway there :)


Sounds like we need IDE support for databases like we have for programming runtimes. Need to be able to lint, debug live, etc.

I've always passively wondered why this wasn't more of a thing. Something like pgAdmin is fine I guess, but it's always felt like "just barely good enough" rather than an immersive power tool to get things done, and done well. Possibly just a skill issue, but that's been my impression.

No, I get it; and it’s not a skill issue, because debugging with proper tools is a skill, and the issue is that the lack of those tools means you lack the ability to even use that skill. My last job used a lot of fancy internal pg stuff and we could never really reason about it properly. I wish I could debug it like I do a Go app with delve, or in my IDE. Adding NOTIFY everywhere is print debugging, which in my opinion is not a very good debugging strategy.
