Microsoft's MFA is terrible. You install an app called Authenticator. Then when you log in, the Authenticator app gets a push notification, and the user has to say whether to allow the login or not. If you accidentally press Yes, the attacker gets in. And here's why this is a serious problem: when you use Remote Desktop to work, any time you have a network issue RDP automatically reconnects and you get an Authenticator notification. So during the work day you get frequent prompts that are automatically initiated by RDP reconnects. You get used to automatically saying Yes to the prompt. So when one of these prompts is initiated by an attacker's login there is no way to know, and you automatically answer Yes.
Also, for other similar MFAs, if there is no rate limiting and the attacker spams the user, the user gets an unwanted notification and says no. They immediately get the next unwanted notification and say no. The user doesn't want to deal with MFA, they want to use their phone, and the notifications are in the way. The user's primary goal is to make that stop. The logical thing to do is to click "no", so the user does that.
But that isn't working. Every time the user clicks no, a new notification pops up. Eventually the user learns: clicking "no" does not work, it does not achieve their goal. If it hasn't worked 5 times, it probably won't work the next 100 times. So, the user tries something different.
(And the best part is: it works!)
The solution for this is obviously to provide a "no, and block requests until I open the app" button.
This is also why basically all GDPR consent banners are "accidentally" so forgetful about your choice not to consent. You need to say no every single time (with potentially shifting interfaces). Similarly, if I remember correctly, WhatsApp would repeatedly ask you to accept the new terms of service with pop-ups until one day you accidentally mis-click on the consent button. I honestly don't understand how that would count for anything, legally speaking. But don't expect much help from the legal system here - think of how often we have said no to EARN IT, yet it keeps coming back. One slip though and you're stuck with it.
Yeah, one website I visited recently helpfully showed me the cookies that it would save if I accepted their cookie agreement. The first one on the list was the cookie to remember my response to their cookie agreement.
This also makes the cookie prompts a pain for users who don't use cookies. My browser rejects all cookies by default, so I end up creating uBlock filter rules for the prompts on sites I frequently visit.
The solution is obviously to not show a notification at all, ever.
If I'm trying to log in, I know that I have to go into Authenticator and approve, so just check if there are outstanding requests (e.g. with a 2 minute timeout) when I open the app.
Except that's still a failure: the attacker can script that to ensure there's always a notification. They don't have to get it right then and there either - they have infinite time. Might be this month, might be next month.
The user interaction of classic TOTP forms an important grounding function: it forces (in most cases) spatial locality of the user and the device being interacted with.
Some providers also attach a unique identifier to each request. I always check it when it's present. Number matching with multiple choices also blunts this attack: the prompt offers, say, 7/56/9, but none of them is on your screen - so what do you tap? Or the number on your screen does appear in the prompt by chance, but it's not the right choice for that prompt.
Nevertheless, at the end of the day the buck always stops with the user. Users have to be diligent and should place no blind trust in anyone or anything.
If a hacker uses my account to hack my employer via Microsoft's terrible MFA, it's not my problem. The buck stops, as always, with the person who cares about the resource being accessed, or who is made to care by being legally liable.
For a disgruntled employee (most employees), getting hacked is a win-win.
What if it's Google, Blizzard / Epic (Battle.net), and something personal? These services use the same flows.
Or worse, what if it's your e-Government app, and you lose all your identifying information and somebody can become "you" with all the information they got?
Will you say "Meh, it's a personal account, and I gave access via MFA, but who cares, it's not my problem?".
Google did something even worse with old Google Drive links.
I published a link to some documentation back in 2017, and set the permissions of the link to "view".
Google, in their infinite wisdom, decided that somehow this link wasn't secure enough and now send notifications to my tablet every time someone clicks to manually authorise access.
This would be a pain in the arse at best, but they've somehow managed to fuck things up even more. By default, the permission I'm granting to the user through these notifications is set to "edit".
Just to reiterate: they've "improved security" by spamming me with notifications to grant random members of the public full edit permissions on a document that was intended from day one to be publicly accessible.
It wasn't that. They sent me an email to tell me that I needed to regenerate the link for security purposes and after date x it would no longer function as normal.
I think that was for a simpler problem: shared URLs were not unique enough, and could be guessed or brute-forced to get random people's shared items. The articles Google published seemed to warn that there was a gap between the amount of privacy people expected on those links and what they actually had.
That might be a "we need to change how we're generating URLs because we want to refactor stuff / not support the old format because it was too guessable or something" thing.
Your dislike is probably more due to the configuration setup in your instance.
As others pointed out, you can require matching a pin in the app with the one on the screen.
With many of the MFA apps I have tied to Microsoft products, they typically store a session expiration where they don't have me re-authenticate with MFA until the next day.
I've worked with many enterprises where the security group implements awful policies in an attempt to lock things down, but instead creates more risk by putting too much burden on employees, which results in them finding clever hacks around the security.
Just guessing, but probably not the tool here. Though they maybe could improve their defaults, docs or UI/UX.
> more due to the configuration setup in your instance
The obvious question here is: why does it have a configuration that allows an accidental or absent-minded employee to let in a hacker? Other authentication apps such as Symantec VIP do not use notifications, so the employee doesn't respond to a notification; instead they proactively open the app to get a numeric code. Less convenient than saying Yes to a notification, but more secure.
The trade off here is that many non tech organisations make MFA at all a political difficulty. I've sat in on several meetings about how we can reduce the difficulty of using Microsoft MFA, with people talking about preinstalling it on people's phones and of course, "let's do away with MFA" comes up quite often.
Many of those orgs looked into RSA tokens in years gone by. The only reason MS Authenticator got through, when those devices were summarily rejected from ever being used, was convenience.
The security industry needs to be careful here. Too much "Microsoft MFA is bad" and I'm certain many companies will simply revert to password-only, in much the same experience we had with SMS based MFA being bad and as such, web apps going live that simply didn't support MFA.
I work with a few older employees. It takes them longer to focus on the words on the screen and to read them (presbyopia happens). It takes longer to get into an app and read a code out. By the time they are ready, the code is already changing in the app. It is much easier for them to touch an acknowledgement. We want security, but we do not want to make security an impossible barrier for folks who need a little extra time (for whatever reason).
Semi related: apps which display TOTP tokens should start their timer when the user opens the app (so the code doesn't change 5s after opening the app). The server in the backend is checking the previous+next N tokens anyways since the server needs to account for clocks not being 100% synchronized.
Unless the authenticator app was already open, in which case skipping to the next token should be possible on demand.
Alternatively, we just found the semantic use case for the <marquee> tag: a properly calibrated scrolling ticker would give readers the clear option (regardless of initial phase) to start reading the newest token or continue reading an older one, as the ideal selection may evolve unexpectedly based on distractions.
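The server-side skew window mentioned above is easy to sketch. Here's a minimal RFC 6238-style TOTP check in Python (the function names are mine, and a real deployment should use a vetted library rather than rolling this by hand):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, t: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", t // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1, step: int = 30) -> bool:
    """Accept the current code plus `window` steps of clock skew on either side -
    this is the previous+next N tolerance discussed above."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
        for i in range(-window, window + 1)
    )
```

With `window=1` the server accepts three consecutive codes, so a code that just rotated off the user's screen is typically still good for another step.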
The code is valid for another full minute, typically, after it's rotated for a new one. This isn't much of a valid reason to prefer notification-based 2FA.
It is not obvious to everyone that a code may still be valid after it's gone from your screen. You cannot use the first 3 digits from one code and the last 3 from another, so you start over when you don't have all 6 digits before the code changes on you.
Different strokes for different folks. I care to have folks be successful.
This sounds like an education issue; if you were to say "wait until the code changes, then go for it", they have thirty seconds to read and type six digits.
If this still doesn't work for them, perhaps a hardware token they can tap might be a better solution.
TBF the code still being valid doesn’t help much if the user hasn’t memorized and/or finished typing it all.
Sitting next to some family members, they really can't remember more than one or two characters at a time, and will hunt and peck for each one. Unless they were typing the last digit, the code disappearing from the screen is basically the end of it for them.
The problem with giving options regarding security is that sometimes the people responsible for setting them up forget about "convenience versus security", or they get pressured by other groups to "forget" about that balance and make a lapse of judgement.
We could, by laws and software, enforce a certain standard of security for organizations. The question is how liable you should be for that. Would have to consider many variables like size of company, importance of information and such.
Don't you have to provide some information as part of the confirmation? Whenever something requests MFA, it will show a two digit number, and in the Authenticator app, I will be presented with three numbers and can only confirm the prompt if I select the same number that was shown by the requestor. My work MSFT account is even more locked down and requires me to type in the number.
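That number-matching flow is simple to sketch. This is a hypothetical server/app pair (my own function names, not Microsoft's actual protocol) showing why a blind tap on an attacker-initiated prompt usually fails:

```python
import secrets

def new_login_challenge() -> dict:
    """Server: pick the two-digit number shown on the sign-in page,
    plus two decoys for the authenticator app to display."""
    shown = secrets.randbelow(90) + 10
    decoys = set()
    while len(decoys) < 2:
        d = secrets.randbelow(90) + 10
        if d != shown:
            decoys.add(d)
    choices = [shown, *decoys]
    secrets.SystemRandom().shuffle(choices)
    return {"shown": shown, "choices": choices}

def approve(challenge: dict, picked: int) -> bool:
    """App: the approval only counts if the user picked the number from *their*
    screen. An attacker's push shows a number the victim never saw, so a blind
    tap fails two times out of three, and even a lucky guess still requires the
    victim to have read the prompt."""
    return picked == challenge["shown"]
```

Requiring the user to type the number (as the locked-down work account does) removes even the one-in-three guess.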
Oof. I wonder if admins can disable push notifications and just fall back to TOTP instead? Though it does sound like that would not be convenient if you have to enter a TOTP every time an RDP connection get interrupted.
Sorry, but the parent poster is correct. For both my corp account as well as my personal Microsoft account, I'm prompted to enter the validation number after I've Face ID'd.
If this doesn’t happen when you are authenticated, then it’s a setting with the system you’re logging into.
Depends on config. Bare minimum (mainstream?) is just a "blah blah service login" Accept|Deny.
If you are in and out administering things and accessing protected documentation I could see how the login-to-mfa-popup latency and difference in tenancy names could give a window for a misclick.
Luckily, MS also allows for FIDO2 and TOTP authenticators so you can make it a lot harder for criminals to get in. Especially with FIDO2 keys, you're pretty unlikely to get phished.
As sibling comments indicate, there is an option to make the request for MFA more secure by providing a number to match your request, but it's pretty dumb that this security feature is disabled by default.
This is not universal. It works on Windows with most browsers, but doesn't work on Firefox on Linux (works on Chrome on Linux) nor on Safari on Mac, nor Safari on iPhone, nor Teams on Linux (haven't tried Teams on Mac nor Windows).
When I had to use Windows for work (man, I'm glad I don't anymore) I used to get a ton of random password prompts (Sophos VPN, some Microsoft 365 or Windows accounts) with absolutely no way to tell whether they were legitimate or what triggered them.
What a shit show. It's a weird thought that people use this crap in allegedly secure environments.
I quite like Google's approach, where you don't get a notification and have to go into the app to get the pop up. My bank does the same thing, you have to go into the app without a notification, see the pending transaction in your list and then approve/reject it. Feels a lot safer than push notifications for sure.
It’s their implementation of 3D Secure for online payments, which I think is based on a combination of heuristics and the merchant opting in/out. It’s a very good UX though, banks I’ve used in the past would ask you to enter some characters of your password on a weird looking 3D secure page.
Except the push notification still requires you to authenticate with a biometric before it'll open the app and fill your TOTP, ask you for a number to enter, or approve the login. It's not like you just tap it and you're fine. You have to tap and then log a biometric or PIN.
You can turn the push notifications off if they are a concern, but I think it's a bigger problem to allow stuff via biometric so passively.
Personally, I like not having to switch to a Home Screen to open an auth app to approve or copy a code. Having it pop up for me and take me to where I can get stuff to auto fill or approve/enter in numbers is a really nice feature.
It's not Microsoft's MFA; others do this also. It's push notification authentication. Better than SMS in that SIM swapping or SS7-type attacks won't work on it. They also support YubiKey/FIDO; it's up to your AAD admin to allow any of those.
I hadn't been able to understand what that lengthy article was talking about, but with your short comment it is clear now. Why does Authenticator need push notifications? It's so stupid. I use Microsoft MFA with Authy without any problems and without any notifications.
This is not always the case for Microsoft Authenticator. When I get a prompt on my iPhone, yes I need to use faceid, but I also need to enter a random number to “prove” a human has indeed accepted the auth request.
For most users, they’ll only get a prompt when they login to a new device, so the risk of notification fatigue is lessened, so I think it’s OK on that front.
I agree however that admins are constantly bombarded with alerts and can suffer from vigilance fatigue. I think there is certainly a need for a different approach for more privileged users - it could be as simple as red banner, instead of the blue theme that is common across the MS, SalesForce and Okta MFA apps.
Also they want to lure you into using their own app by hiding the little link that allows you to get a normal OTP code.
That bugs me more than anything, I don't want another app! I have over 30 codes in my open source OTP app, how would that even look if everyone wanted me to use their app?
Not sure when you worked there, but I’ve seen that you have to enter a number that’s displayed on the device logging in, and the authenticator app shows the location of the device attempting to login as well.
I'm always on the edge of my seat waiting to learn what mind-blowing technique the "band of elite hackers working for Russia’s Foreign Intelligence Service" came up with. And it's always something along the lines of "they kept sending emails until somebody clicked the link."
Often, yes. Maybe even most commonly. But not always. The NSO's zero-click iMessage exploit uncovered by Google Project Zero & Citizen Lab was absolutely mindblowing to me. [Of course, NSO Group is not "Russia's Foreign Intelligence Service", but I presume that wasn't your point.]
If a system doesn't consider the human element in its design, the system as a whole is weak. There is no computer vs human. I am tired of hearing that computers don't make mistakes and only humans do. Even if they found a bug in the kernel or some obscure memory fault, some people will spin it as human error, chalk it up and move on. System design needs to be holistic and accountable for errors. Everyone thinks they are smart, but constant attacks only need you to be careless for a mere moment.
Yeah, it's not how smart they are, it's how dumb other people are.
I mean, if I started receiving a bombardment of MFA requests at 1 AM, I'd get up immediately, lock my account, change the password and let the provider know my account was likely under attack.
Even if I didn't care, I'd just turn off the phone and go back to sleep.
I'd NEVER click an authentication request which I didn't acknowledge. I can't understand why someone would do that. People are really careless...
> I can't understand why someone would do that. People are really careless...
A lot of of it is probably carelessness but there are a lot of configurations out there that train users to accept random MFA requests. For example, some vpn configurations send MFA requests when they reauth at essentially random times.
It's pretty easy - you're asleep, phone sound/vibration wakes you up, you're on call so you pick up the phone, it face unlocks, through your unfocused eyes you see a thumbprint icon and aren't sure if face unlocked worked in the dark, so you touch it.
A second later you're wide awake and wondering what the hell was that mfa for...
Google wankers forcefully added "Google Prompts" as a 2FA method, without consent, and disabled removing it. Of course people are going to hit "authorize". Oh and if you remove the Google app, you can thankfully use the YouTube app (like that's a good idea). A _video streaming_ app now has the keys to the kingdom. Man I feel secure.
Just use hardware keys. It's not difficult. My 70-year-old parents use them. I explained: "This is like your front door key, but for your account. It's safe to put this in whenever the computer prompts you for it."
Yeah I would say generally hardware keys are actually EASIER for many older people to understand once you walk them through the process. The real problem, however, is that so many damn places (I am looking at you big banking!) either do not support the key, do not support the protocols that some keys use, or just easily allow you to fall back to another method.
If we can get to the point where a hardware key is universally accepted at all of the major places older people commonly use, then I think it will be an easy sell. Showing someone how to open an authenticator app, scan a barcode, name it correctly, then later re-open the app to find the correct code (which is periodically expiring, so they need to do all this relatively quickly) in ADDITION to their normal password (I see so many of them put the code in the password field or some other combo) is actually quite a few steps. And once you get a ton of authenticator codes inside the app, it can get confusing which is which unless you name them all carefully.
Telling someone "plug in this physical key" is a hell of a lot easier and so much more similar to what they are used to.
Yes, but I'm not sure this is limited to "older" people! It really helps to have a security model people can understand, operating like something they already know. You don't get phished at the front door either (if we're talking FIDO). The problem is the non-trivial expense. That leads to organizations like universities not using them and essentially insisting that everyone - specifically students - uses their own, possibly compromised smartphone, though you can get round that with an authenticator on your laptop, say. That's also the device they're likely to use to connect from, too... And it's still phishing; we've heard of it.
On Duo, if you have multiple hardware keys registered, then you need to pick the key to use for 2FA before you get the prompt. If you pick the wrong key, it will fail. It is very easy to end up in a configuration where every time you need to perform a Duo login, you have to click 3 times to pick the right key.
Or you can skip the keys and get a mobile prompt, instantly, the moment you visit the page.
Of course, this has nothing to do with the underlying limitations of hardware keys. But vendors routinely mess up implementing them. We could really use some rock-solid open source WebAuthN implementations.
From what I've seen of people who go along with using their own phone for this, you can get the mobile prompt many times without doing anything, even when you're not being attacked (as far as we could tell, but I don't know what the actual cause has been). Sigh.
Does it still prompt you to pick a key if all you have are Security Keys enrolled? I can see if you've got other options they might want to check first before doing the WebAuthn process.
Joke's on you: first-world banks are well known to have a huge lag in tech, like still handing out dedicated TOTP devices or OTP scratch cards. OTOH we have crypto exchanges running on the bleeding edge. Binance has (partial) FIDO2 support. I am not aware of any other.
I've also found this works fine. The new ones seem to have wireless built in now as well (good for phones). And you can have more than one key on the physical key - I don't like the endless app list (Microsoft / Duo / eTrade VIP Secure / etc.)!
Today's fun fact: on cheaper devices (anything cheaper than, say, Yubico's Security Key 2 product, and often even for common uses with products in that range too) there actually isn't ever "more than one key on the physical key". They have a single key baked inside them (typically an AES symmetric key). You can use them to authenticate as you on an unlimited number of sites because they're not actually remembering the private keys used to authenticate, so they don't need to store them anywhere!
Let's watch how that trick is done, starting with a much more expensive device that has plenty of storage, an iPhone.
When you enrol the iPhone as an authenticator, the standard requires it to provide a very large ID number for that enrolment, and it warns implementers that these aren't serial numbers: when picking an ID, use random numbers. The iPhone signs a message containing a proof of freshness (random numbers the Relying Party picked), a proof of who the message is for (a hash of the Relying Party's DNS name) and an elliptic curve public key it just picked at random, all signed with the corresponding private key. This is sent to the Relying Party (i.e. a web site) along with the ID number, and enrolment has succeeded. The iPhone just stores all that in Flash because hey, it has gigabytes of flash storage, so who cares. When you need to authenticate to some web site, the site gives back the ID number, the iPhone finds the right entry in Flash, retrieves the private key and produces a new signed message to authenticate.
However, the ID is so big for a good reason - a whole elliptic curve private key can fit, with space for an AEAD tag to spare. So instead of gigabytes of flash storage, a $15 FIDO authenticator just uses AES to encrypt the random private key for this site (using the symmetric key baked inside it), and provides that encrypted message as the ID number for the enrolment. Then it can forget the private key! When a site wants you to authenticate later, the site gives back the ID number (always a big random-looking number anyway, remember) and your authenticator decrypts the ID number to get back the private key for that site, signs the authentication message and immediately forgets the private key again.
It's genius. If you came up with this idea independently of reading about FIDO/WebAuthn, congratulations - you might have a future in cryptographic engineering.
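Here's a stdlib-only sketch of the same stateless idea, using key *derivation* rather than AES key wrapping (a closely related variant that some real U2F tokens use): the credential ID is a random nonce plus a MAC, and the per-site private key is re-derived on demand from the device secret, so nothing per-site is ever stored. All names here are illustrative, not any vendor's actual scheme.

```python
import hashlib, hmac, os

DEVICE_SECRET = os.urandom(32)  # baked into the token at manufacture, never leaves it

def make_credential(rp_id: str) -> bytes:
    """Enrolment: mint a credential ID for this site; the private key is never stored."""
    nonce = os.urandom(32)
    # A MAC binds the nonce to this site, so an ID minted for one site
    # (or forged by an attacker) is rejected everywhere else.
    tag = hmac.new(DEVICE_SECRET, b"tag" + rp_id.encode() + nonce,
                   hashlib.sha256).digest()[:16]
    return nonce + tag  # this 48-byte blob is the "credential ID" the site stores

def recover_key(rp_id: str, credential_id: bytes) -> bytes:
    """Authentication: re-derive the per-site private key from the ID the site
    sent back. The key is a PRF of the device secret, the site, and the nonce."""
    nonce, tag = credential_id[:32], credential_id[32:]
    expect = hmac.new(DEVICE_SECRET, b"tag" + rp_id.encode() + nonce,
                      hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("credential ID was not minted by this token for this site")
    return hmac.new(DEVICE_SECRET, rp_id.encode() + nonce, hashlib.sha256).digest()
```

The wrap-and-forget design described above works the same way, except the nonce is replaced by the AES-encrypted private key itself.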
I would expect that an engineer would recognize that (1) a massive volume of MFA notifications is extremely suspicious and should be reported immediately to security and (2) if they are trying to sleep they can just mute or turn off the phone. This was a major failure of training.
For a nontechnical employee I could get how they could not recognize this as an attack. But if you are getting annoying calls and don't know why, why not just unplug/turn off the phone?
On the other hand, slipping a single MFA notification in during the normal workday seems like a much better approach. Even if the employee doesn't accept the notification, they'd likely assume that it was a tab they opened earlier and closed before finishing the login, not something to report.
Personal responsibility is the weakest form of system improvement because it's guaranteed to vary between people. Expecting people to always behave perfectly within your scope is a recipe for disappointment!
The article already mentions the alternative of slipping in a low volume of MFA notifications instead as an alternative that is less suspicious. You only need one person to accept. And I think you overestimate how much attention engineers pay to any security or compliance type of training.
Worse, I assumed every 2FA provider did something to mitigate request spamming as soon as someone realised you could use a premium rate number back when SMS messages were commonly used instead of apps for 2FA.
Congratulations if you have security that would recognize the issue and be able to do something about it other than just blame Duo somehow! In my experience most technical people don't actually recognize the problem, like many other real security issues, and are more like security theatre-goers.
> On the other hand, slipping a single MFA notification in during the normal workday seems like a much better approach. Even if the employee doesn't accept the notification, they'd likely assume that it was a tab they opened earlier and closed before finishing the login, not something to report.
I've noticed that Google actually randomizes the position of the Accept and Deny buttons on their 2FA popups. I guess this is to force you to read the entire text, but I have on more than one occasion Deny'ed my own request because of this. I think someone would have to hit me with about 4 2FA requests before I ham fisted the wrong button.
> That’s where older, weaker forms of MFA come in. They include one-time passwords sent through SMS or generated by mobile apps like Google Authenticator
That reference to Google Authenticator being weaker is not consistent with the rest of the article.
It is, but only a little. It's still vulnerable to MITM, phishing websites, and fake support calls.
OTOH hardware keys are much more foolproof. It's insufficient to merely ask victim to press a button. To bypass them you need to pwn the OS (not impossible, but harder than social engineering) or have physical access to victim's key (requires leaving mom's basement).
Genuinely curious: how is a hardware key invulnerable to a phishing website? Don't they just act as a keyboard to type in the OTP automatically? What stops a MITM from relaying it on?
To expand on what the other reply said about FIDO authentication/ WebAuthn Security Keys (there are a lot of names for related technologies including U2F which was the predecessor technology), WebAuthn recruits the web browser to solve the phishing problem. The web browser definitely knows which DNS name this site has, because it's mechanically necessary in order to fetch web pages from it. So while the user may be confused and believe this is their bank, the browser is quite clear this is fakebank.phishing.example.
Then WebAuthn has the browser tell the hardware token which site we're authenticating against, and the authentication credentials are distinct for every possible site. There are a few more wrinkles to make it even better, but this is already enough that it's utterly pointless to try to phish Security Keys.
Some of them do, and they're obviously just as insecure. The ones implementing U2F/FIDO/WebAuthN or whatever the standard is called today will sign a cryptographic challenge, and the response includes the origin on which it was requested (i.e. the legit website will reject the response because the origin is the phishing site).
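A sketch of the relying-party side of that check (simplified: a real WebAuthn verification also checks the authenticator data and the signature itself; the function name is mine):

```python
import json

def check_client_data(client_data_json: bytes,
                      expected_origin: str,
                      expected_challenge: str) -> bool:
    """The browser, not the user, fills in `origin` in the signed clientDataJSON,
    so an assertion produced on a phishing page carries the phishing origin
    and is rejected here regardless of what the user believed."""
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("origin") == expected_origin
        and data.get("challenge") == expected_challenge
    )
```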
Isn't manual TOTP MFA (using codes generated by Google Authenticator or similar) significantly more secure than those MFA prompts? I don't understand the push for MFA prompts when the previous technology worked just fine and was probably more secure. What's the benefit to MFA prompts other than slightly better UX?
I agree TOTP is much better than MFA prompts or calls/SMS. TOTP does protect against the first two attack methods the article lists.
However, it's not quite as good as a hardware key, because it's still vulnerable to the third method the article lists: "Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process."
I generally consider TOTP "good enough" for a lot of applications, whereas prompts and SMS are not "good enough."
We take training constantly to not click our Duo notifications for people on the phone, etc.
A few months ago I couldn't log into the VPN. Posted to the Slack channel and got a Slack message asking for my phone number. OK, so I know this guy is really our IT, and I'm the one asking for help. Then he sends me a freaking Duo notification! I say "I'm not supposed to click this" and he goes "well yeah, but I'm IT".
I dont think either are more or less secure than the other - they both verify you have the physical device on you (which theoretically you need to unlock) and they are time-based.
I would say the prompt MFA is easier because you do not have to deal with inputting the code (which expires), but then you need another app installed.
I meant the MFA variety where you have to reinput the code versus the one where you need to confirm a login via a mobile prompt. The former seems safer, because it requires a lot more activity from the user.
In case of one time passwords you can ignore the proprietary app and use your favourite authenticator to generate an otp password.
If the site or app offers no choice, just say that you want to use their "Proprietary Authenticator" and continue with your own password manager anyway.
It works for me with 1Password. As a sanity check to see if it works: you always have to use a first OTP to activate the multi-factor authentication.
Currently I only trust these 3 factors of authentication used in combination correctly:
1. Memory (enter the site-specific password via the password manager, which is unlocked by a password from your memory).
2. Device (device-internal-hardware backed certificate bound to this device).
3. Physical Presence (FIDO2 Key touch)
Most important of all is how secure the auth reset flows are.
If any one of the three factors needs to be reset, then the system should require the other two to be valid, plus it should require an in-person identity verification (if implemented correctly, video KYC should be acceptable). Plus there should be a reset-buddy designated by the user who should second/vouch for the user's initiation of the reset.
Without all of these (two factors of auth from the user, plus system-automated video KYC, plus reset-buddy vouching), even an admin shouldn't be able to reset auth on any account. This is crucial.
Plus there should be a pre-cool-off period after a reset request is raised but before it is actually processed, and post-cool-off periods for resetting any additional factor and regaining full privileges.
Independently, there should be fraud/risk systems for safeguarding any sensitive operations (like creating additional users, exfiltration of data etc).
> Plus there should be a pre-cool-off period after a reset request is raised but before it is actually processed, and post-cool-off periods for resetting any additional factor and regaining full privileges.
That's, to me, the biggest one. It is quite a trivial idea, it's really not hard to add to any login process, and yet very few do this. But all hope is not lost, for there are some sites that have seen the light and implement precisely that cool-off.
Basically you have (1) something you know (like a password), (2) something you have (like some device or key), and (3) something you are (like a fingerprint and iris scan).
Back then the accepted trade-off was that have any two of these three is good enough for most case, and for really critical stuff you need all three.
The MFAs in question here attempt (1) and (2), but do a bad job on (2).
If they could log in without knowing the password when MFA is enabled, then 2FA/MFA is making it less secure than simply having a strong password and nothing else.
Some of this is expectations and how you train your users I think.
I know, at least in my experience, running a Windows machine I can get random prompts to sign in at random times from Outlook, Teams, Visual Studio, for Azure resources, from PowerShell scripts, with zero context as to what they are for.
Some of them will prompt for login, as I have multiple AAD account, others will just pick one AAD account and skip the password as things are cached.
I'm then getting seemingly phantom login prompts and phantom authenticator requests by design. I'm denying them when I'm not certain what they are, and for secure environments I'm using a yubikey - but that's not what I expect most people to do faced with this.
Can't wait until companies expand their fake-phishing email programs to include this. Randomly like once a month your phone will get spammed at 1am and if you allow the request, then you have to attend a phishing training session.
One job I worked on involved updating my employer's authentication and bringing in MFA and other modern authentication techniques. We initially enabled just the MFA that required the user to have an authenticator app and enter the code into the site along with their password. Guess what? That didn't satisfy the product owner or marketing, so we were required to enable the other form of MFA, which sends a message to the user's device and requires them to just press OK in the app and allow it.
But at least we were able to hold the line on sending one-time codes via SMS.
No one external should be able to trigger this, it should only be the owner that takes action of, e.g., opening a code generator or requesting an SMS/phone call through dialing from the right number.
The article seems to lump Google Authenticator and push-style authentication prompts together (as old, broken MFA), but I'm unsure how you spam someone with requests for the former.