Only if you boot into macOS and connect it to the internet. iBoot2 never changes by itself; you, the user, decide whether to boot into recovery or macOS and run an update.
So can Apple stop signing new iBoot2 versions? Sure! And that sucks. But it's a bit of FUD to claim that Apple at arbitrary points in time is going to brick your laptop with no option for you to prevent that.
Granted, if you boot both macOS and Asahi, then yes, you are in this predicament, but again, that is a choice. You can never connect macOS or recovery to the internet, or never boot them.
> You can never connect macOS or recovery to the internet, or never boot them
In other words, you're completely fucked if you brick your install. I consider iBoot a direct user-hostile downgrade from UEFI for this reason.
YMMV, but I would never trust my day-to-day on an iBoot machine. UEFI has no such limitations, and Apple is well-known for making controversial choices in OTA updates that users have no alternative to.
> In other words, you're completely fucked if you brick your install. I consider iBoot a direct user-hostile downgrade from UEFI for this reason.
That's a bit of a creative perspective, isn't it? You have no control over your vendor's UEFI implementation either, and the same can be said for AGESA and the ME, as well as any FSP/BSP/BUP packages, BROM signatures, or eFused CPUs. On top of that, you'll have preloaded certificates (usually from Microsoft) that will expire at some point. When they do and the vendor doesn't replace them, the machine might never boot again in a UEFI configuration where Secure Boot cannot be disabled, as was the case with this Fujitsu; fixing that took a firmware upgrade that the vendor had to supply, which is the exception rather than the rule. For DIY builds this tends to be better, and Framework also makes this a tad more reliable.
If anything, most OEM UEFI implementations come with an (x509) timer that, when it expires, bricks your machine. iBoot2 is just a bunch of files (including the signed boot policy) that you can copy and keep around forever; no lifetime timer.
Now, if we want to escape all this, the only option is to either get really old hardware, or get non-x86 hardware that isn't Apple M-series or IBM. That means you're pretty much stuck with low-end ARM and lower-end RISC-V, unless you accept AGESA or Intel ME, at which point coreboot becomes viable.
Basically your counterpoint is that I'm absolutely right to be concerned, but I'm wrong because UEFI can also be implemented with the same objectionable backdoors that Apple implements.
Gee, another "we did not need cloud, so by not using cloud, we stopped spending on something we did not need"-story. Duh. The real story is why someone who doesn't need cloud services starts using them anyway.
If you need it, use it, if you don't need it, don't use it. It's not the big revelation people seem to think it is.
You don't need the root account unless you need to bypass all policies. In such a scenario, you use the root access reset flow instead, reducing standing access.
As for other flows (break glass, non-SSO, etc.): those can all be handled using IAM users. You'd normally use SAML to assume a role, but when SSO is down you'd use your fallback IAM user and then assume the role you need.
As for how you disable the root account: solo accounts can't, but you can still prevent use/misuse by setting a long random password and not writing it down anywhere. In an Org, the org can disable root on member accounts.
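The fallback flow described above can be sketched with the AWS CLI. This is a hedged illustration: the profile name, account ID, role name, and MFA device ARN are all made up for the example, not anything from the thread.

```shell
# Hypothetical break-glass flow when SSO is down: authenticate with the
# fallback IAM user's long-lived keys, then assume the usual admin role.
# Profile, account ID, role, and MFA ARN below are illustrative only.
export AWS_PROFILE=breakglass-iam-user

aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/AdminRole \
  --role-session-name "breakglass-$(date +%s)" \
  --serial-number arn:aws:iam::111122223333:mfa/breakglass-user \
  --token-code 123456
```

The returned temporary credentials expire on their own, so the only standing secret is the fallback user's key pair, which can be locked down with MFA and alerting.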
To me that sounds like security by obscurity, not actual security.
If you have the ability to go through the reset flow, then why is that much different from the username and password being available to a limited set of users? It would not have prevented this from happening if the determination was made that all 3 of these users need the ability to get into root.
As for having an IAM user, I fail to see how that is actually much better. You still have a user sitting there with long-running credentials that need to be saved somewhere outside of how you normally access AWS. That means it is also something that could easily be missed when someone leaves.
Sure, you could argue that the root user and that IAM user would have drastically different permissions, but the core problem would still exist.
But then you are adding one or more extra accounts on top of the root account (which must exist anyway) that you now need to worry about.
Regardless of the option you take, the root of their problem was twofold: they had no alerts on usage of the root account (which they would still need if they switched to long-running IAM users, and they would also need to monitor root, since that reset flow exists), and their offboarding workflow did not properly rotate the password. A similar problem would exist with a long-running IAM user: offboarding would have to delete that account.
At the end of the day there is no perfect solution to this problem, but I think saying you would never use root ignores several other issues that don't go away just by not using root.
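The missing alerting could be closed fairly cheaply either way. As a hedged sketch (rule name, region, and SNS topic ARN are invented for illustration), an EventBridge rule can match any API call or console sign-in made by root and forward it to a notification topic:

```shell
# Alert whenever the root identity does anything: match CloudTrail-sourced
# events whose userIdentity.type is "Root" (names/ARNs are illustrative).
aws events put-rule \
  --name alert-on-root-usage \
  --event-pattern '{
    "detail-type": ["AWS API Call via CloudTrail",
                    "AWS Console Sign In via CloudTrail"],
    "detail": { "userIdentity": { "type": ["Root"] } }
  }'

# Route matches to an SNS topic that pages the security team.
aws events put-targets \
  --rule alert-on-root-usage \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:111122223333:security-alerts"
```

The same pattern with a different `userIdentity` match would cover a long-running break-glass IAM user, so neither approach has to go unmonitored.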
Not using root means not bypassing policies, and when you do use root, there is no way for it to not bypass all policies. So yes, never using root makes that issue go away completely.
As for all the other stuff: it creates distinct identities with distinct credentials and distinct policies. It means there is no multi-party rotation required; you can nuke the identity and credentials of a specific person and be done with it. So again, a real solution to a real problem.
It depends on what the goal of all of this was, which is unclear. If the goal was simply to get the data they originally wanted, this does not solve that problem; it would have just happened a different way.
According to the article, there were 11 days between the first actions taken and them finding out it happened.
Suppose that instead of a root account you have a long-running IAM user that can assume the role you normally use through SSO. If you also do not monitor that account with proper alerts and proper offboarding procedures, then they could have logged into that account and retrieved the data they wanted.
Which, again, is why I am saying that just not using root is not a magic bullet that would have avoided these problems. Maybe the situation would have been different, but they still could have done a lot in 11 days.
The problem was that the user's credentials were revoked, but because the root account was a shared credential, it wasn't. Had the break-glass account been a user-specific account, it would have fit into any 'revoke everything for user XYZ' workflow instead of being a root-account edge case.
So, in short, this would likely have prevented it, as the normal offboarding for user-bound credentials already worked out fine.
Does it? Pretty sure that logging in as root generates one CloudTrail event per action, regardless of whether you did it with a saved password or after resetting the password. Resetting the password doesn't generate a CloudTrail event as far as I've seen.
This is just a Windows VM with extra tooling. Makes it look slick, doesn't make it "Windows apps on Linux".
Similar projects exist for gaming, for example Looking Glass, which also uses a Windows VM on KVM (the "Windows in Docker" thing is a bit of a lie: Windows doesn't run in the container, Windows runs on KVM on the host kernel).
UX wise, this is similar to RAIL.
That's not to say this isn't neat, but it's also not something new (we still have the same two flavours: API simulation/re-implementation, and running the actual OS [Windows]). If this were a new, third flavour, that would be quite the news (in-place ABI translation?).
And I had to come here to find out what it actually was. Why don't project pages ever actually tell you what it is, what it does and how it does it?
Half the time it's something like "Plorglewurzle leverages your big data block chain to provide sublinear microservices to Azure Cloud infrastructures"
At least this one kind of shows you having to install Windows.
Unfortunately, many companies have realized that engineers don't make purchasing decisions (merely suggestions). Rather, the C-suite, who knows nothing about the technical side of things and everything about the buzzword side, makes the decisions. As a result, companies know that if they just throw a bunch of inflated marketing mumbo-jumbo at the user, then even though it will turn off every engineer asking "WTF does this actually do and how does it work", some C-suite will run out and purchase it without asking, then force their entire team to use it because it "produces synergy of the AI block chain and big data cloud APIs while enhancing productivity". Then us engineers are stuck using it, whether we wanted it or not.
> Why don't project pages ever actually tell you what it is
If it's a good thing with substance, they do.
If they don't, don't use it. This usually hints at a broken culture/missing substance. It _can_ also be ineptitude, but that too is not your problem but theirs.
You woke up this morning not having the problem this sets out to solve. You can go to sleep and rest easily this night, knowing that you still don't have whatever problem this sets out to solve.
If you should one day wake up and notice that you have a problem this could solve, you will find yourself googling for a solution, again side-stepping this whole marketing nonsense.
I think this name would be confusing.
For one, it is for Linux, not Windows. And it is a subsystem running Windows. So it should be called Windows Subsystem for Linux, or WSL.
I think this may be a whoosh moment where they're saying the Microsoft version should be called LSW because it's for Windows. Probably sounds more obvious with a more sarcastic tone.
The concept of a "subsystem" in Windows has evolved since the operating system's inception, when Windows NT was designed to support multiple operating system environments through distinct subsystems. The main subsystems at first were the Windows (Win32) subsystem, which features case-insensitive filenames and device files in every directory; the POSIX subsystem (later the Subsystem for Unix-based Applications, SUA), which supports case-sensitive filenames and centralized device files; and the Native subsystem for kernel-mode code.
The /SUBSYSTEM linker switch was used to specify the target subsystem at build time, enabling applications to be compiled for different environments such as console applications, EFI boot environments, or native system processes.
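As a rough illustration of that switch with the MSVC toolchain (the source file name is invented, and the entry-point conventions noted in the comments are the usual ones for each target):

```shell
# Same source, linked for different subsystems (cl = MSVC compiler driver).
cl app.c /link /SUBSYSTEM:CONSOLE   # console app; entry point is main()
cl app.c /link /SUBSYSTEM:WINDOWS   # GUI app, no console; entry is WinMain()
cl app.c /link /SUBSYSTEM:NATIVE    # native process; runs without the Win32 subsystem
```

The loader reads the subsystem field out of the PE header at startup, which is what routes the process to the right environment.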
In this nomenclature, WSL follows the original naming conventions (although SUA should have been called WSUA).
Except WSL doesn't actually use any of the nt subsystem machinery in either of its incarnations.
And also, it doesn't really follow that nomenclature. Those all follow "user code target" Subsystem. Windows Subsystem, OS/2 Subsystem, Posix Subsystem, etc.
I think this is accurate. WSL v1 did not use a VM, just like Wine. However both WSL v1 and Wine struggled with compatibility issues. WSL 2 gave up and used a VM instead. You pay a performance penalty but compatibility issues mostly go away.
Well, I disagree a lot with the "compatibility issues" of Wine... Essentially, it can sometimes run legacy software better and with fewer issues than modern Windows does.
It's literally just dockur/windows:latest + FreeRDP rootless mode + a small daemon that runs in the VM that tells you what apps are installed via an API.
If you don't want the latter part, you'd be better served with the dockur/windows image + FreeRDP
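That combination can be sketched roughly as follows. This is a hedged sketch, not the project's actual setup: the exact image tag, ports, credentials, and FreeRDP flags are assumptions for illustration.

```shell
# 1) Windows in a VM, managed from a container (the VM runs on the host's
#    KVM, not "in Docker"); flags and volume path are illustrative.
docker run -it --rm \
  --device=/dev/kvm \
  -p 8006:8006 -p 3389:3389 \
  -v ./windows:/storage \
  dockurr/windows

# 2) A single Windows app as its own window via FreeRDP's RemoteApp mode
#    (credentials here are the image's assumed defaults).
xfreerdp3 /v:127.0.0.1:3389 /u:Docker /p:admin \
  /app:program:notepad.exe
```

RemoteApp is what makes it look "rootless": the RDP session ships individual app windows instead of a full desktop, and the compositor places them among your native windows.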
My experience with it is that FreeRDP in rootless mode isn't very good for Windows applications that do anything special with window borders. Using Office and many other programs became a pain.
When it worked, it worked really well, though. Reminds me of the same feature that VMWare used to offer many years ago for running XP/Vista programs on Windows 7 through a VM.
This is incorrect. FreeRDP has supported Wayland for a long time via their `wlfreerdp` client, which is now deprecated; Wayland support is now available via their `sdl3-freerdp` client. The SDL client was alpha quality a couple of years ago, but as of the last couple of releases it's been pretty decent. I'm unsure, though, whether it's reached full feature parity with the X11 client yet.
But if using hello@example.com as the email, and the F10 and oobe tricks or whatever command you pulled off Google, stop working, then you have to move to more exotic options, like downloading programs to modify disk images to prepare a USB drive to install an LTSC or IoT copy of Windows 10. It's all just such a waste of time to do something that should be easy, all because someone at Microsoft got on this kick that what they want is more important than what the customer wants. It's so frustrating!
As a non-coder/engineer Linux user…I’ll admit that’s actually not obvious to me. Linux is trivially easy to run these days.
I could probably drop my dad in Mint and he’d assume windows just looks different. Maybe that’s a tad facetious but also ehhhh I could maybe get away with it
You don't need to be in the Apple ecosystem to buy an Apple TV and only use non-Apple services.
The only thing that will probably suck is the lack of things like MiraCast and Google's Casting stuff, but you could use third party AirPlay software (still free IIRC) to stream whatever you want if you want to use screen mirroring.
These days people tend to use their media boxes as App Launchers for other services anyway, so it doesn't really matter that much anymore.
Yeah, the Apple TV would be my suggestion as well even if it's your only Apple device. Other than streaming service apps (Netflix, Hulu, etc...) I think the only app I've installed is Tailscale. It's a great device that is slightly more privacy respecting than similar devices from Roku or Smart TV manufacturers.
I've never owned a Mac before but now I'm thinking about getting one just so I can write software to run on my Apple TV. It's a pretty powerful computer that's tiny, silent, always on, barely uses any power, and is connected to my TV.
I'm going to check out VLC though. Thanks for the tip.
One other app that I had and forgot about is some remote play client for Steam. I start Steam on my desktop PC then pair my PS5 remote to the Apple TV, start the Steam tvOS app, and I can play games from my PC on the Apple TV.
It's not withholding, it's just not part of the AppStore if you do it. There are plenty of other ways to distribute your software, and yes, Apple will also still co-sign it or provide entitlements if you need those. Just not in the AppStore.
That's not the argument; the argument is that this would be some form of "there is only one method and it is being withheld", which simply isn't the case.
It's not sudden, the account owner disappeared and the account user (the one making all the noise) did not get ownership transferred.
Is this a great situation? No. It's also not "I did everything right and boo hoo AWS did a boo boo". AWS is not your friend, but you also weren't the customer, that was the middleman you gave ownership to.
> AWS blamed the termination on a “third-party payer” issue. An AWS consultant who’d been covering my bills disappeared, citing losses from the FTX collapse. The arrangement had worked fine for almost a year—about $200/month for my testing infrastructure.
He essentially got screwed over by the consultant. Everything else is a side-effect but not material to the cause.
AWS has no such thing. It isn't an alternative either way, since the owner of the first payment method would have this in an MPA, not in a member account; and in AWS, member accounts are not considered payers, so any payment methods on them are ignored.
The article makes a variety of claims, some of which can be dismissed because they factually do not exist with AWS, and some of which cannot be verified at all. But precedent shows that when your middleman in a savings scheme drops out, you are screwed, and that is what happened here.
That doesn't make this fun, and it would be better if everyone had a personal account manager, but that's not the reality of today.
> Most companies in 2025 don't own hardware firewalls. You're proving his point.
Then all the posts I see in /r/networking about which firewall product is recommended are figments of my imagination? (The consensus recommendations are PA if you have the money, Fortigate if you don't.)
That firewalls are a dead end was evident from the low growth of Palo Alto in this segment; Nikesh himself mentioned it multiple times. The market is very saturated. All the companies that still buy are looking at refreshing their existing stock of firewalls.