OIDC (and the rest of the OAuth umbrella of stuff) is one category where every time I have to work with the protocols I think "there must be a less confusing way" and then have a failure of imagination for a simpler way to accomplish the same thing. I think it's because the protocols are conceptually simple, but the cryptographic parts, especially the PKI parts, make it intricate to work out exactly who is attesting to or validating exactly what.
What’s insane to me is that it feels like nobody has actually managed to make this easy for developers yet. If they have, I don’t know about them.
I would normally consider myself pretty competent, but I stood up my first fully featured website recently with logins and such, and it took me about 2 days of work to get AWS Cognito working (using their recommended USER_SRP_AUTH). That’s not including 3rd party login functionality from Google and friends.
Their documentation and UX is piss-poor unless you’re willing to onboard your entire project to Amplify and enter npm-hell, which I wasn’t. It’s almost like they don’t even want your business.
I looked into using Auth0 instead and it didn’t seem to be any easier. Better docs (they seem to be written by someone who both understands the auth problem domain and knows how to explain it to those who don’t), but still complex.
Yet once I finally got everything working, it seemed like the kind of thing you could easily package into an off-the-shelf product. It’s just that existing products don’t do it. Like why the fuck is there a guide explaining how to write a lambda to convert access codes to refresh tokens and persist them via cookies? That should be part of the Cognito platform!
Honestly thinking of just starting my own auth SAAS with blackjack and hookers
Don't judge the identity server space by Cognito, I beg of you. There are a lot of other players out there (I work for FusionAuth, one of them) who are working to make this easier.
I don't know why Cognito hasn't seen more improvement. From the outside, it seems like CIAM would be worth investing in as a cloud provider. Say what you will about Azure and GCP, they both have CIAM platforms that see more love than Cognito (Azure AD B2C, Firebase).
> What’s insane to me is that it feels like nobody has actually managed to make this easy for developers yet. If they have, I don’t know about them.
There are definitely folks making it easier to add login/logout to applications (I see some of them pop up in sibling comments, and we are working on that at FusionAuth as well). But some of these are component libraries to proprietary SaaS applications. In this case you lose some of the power and standardization of OIDC. That works great for some use cases and not so well for others. The nice thing about OIDC is that almost everyone works with it (or with SAML). Certainly more than proprietary session-based authentication providers.
I will tell you that as we are trying to make authentication simpler at FusionAuth, we have customers coming to us with pretty complicated use cases around federation, scale, automation, permissions and more. It's a balance to try to appeal to the developer who just wants authentication to work as well as the sophisticated customer who has these complex needs.
> It's a balance to try to appeal to the developer who just wants authentication to work as well as the sophisticated customer who has these complex needs.
This sounds like there should be two solutions, one for the simpler case and one for the complex case, rather than trying to make one solution work for all use cases.
Those framework libraries handle a lot of the simple use cases and some of the complex ones. They have the virtue of being well-tested and integrated with your development platform. You can deploy in one step and maintain one database. Which is great till it isn't.
What platform libraries like Devise, etc, don't offer is user data denormalization and isolation, which is useful when you have 2 or more applications with users. Then you want to look at auth servers (self-hosted or SaaS).
But you want the auth server experience to be as simple as possible, which is what we're working towards.
My thoughts exactly. I want a simple auth solution that doesn’t push me towards a full batteries-included platform like Firebase and Amplify, nor a highly-configurable/complex “you can do everything auth!” platform. It’s ok if it’s a little opinionated, as long as it serves my use case of “adding logins and SSO to my website” up to 90% of the problem instead of 50% like what’s out there now.
Not yet, I’ll give it a look next time I hack on my site. “Stripe for auth” is exactly what I’m looking for, and I know I still have a lot of auth head bashing left before I ship.
I’ll say though, my personal “customer demographic” ATM is more along the lines of someone who wants to get working user signups and auth and then never think about it again - so mentioning SAML/OIDC building blocks is a bit of a turn off for me. The reason is that I’m a solo dev trying to ship a browser-based multiplayer game, which I assign a low (maybe 5%) probability of ever becoming something with multiple people working on/turning into a real business - so I need auth, but would prefer to spend as much time as possible on the game itself, and don’t have anybody to farm the work out to.
But I’m happy to give WorkOS a shot to see if it makes my life easier.
WorkOS is pretty tailored to folks building B2B apps where individuals will later be part of a team. (Think Dropbox, Figma, Asana, etc.)
It's less of a fit for B2C products where user identity won't ever be associated with a company (like ecommerce, a game, or a dating app).
The reason is that B2C apps actually have pretty different needs in terms of user identity. For example, most consumer apps will optimize for faster/higher conversion during signup and less security.
But if WorkOS works for your use case, then you should definitely use it. Our free tier includes 1,000,000 MAUs, which is significantly higher than Auth0/Clerk/Stytch/etc. which start charging you around 10,000.
Disclosure: I work for FusionAuth, an auth provider with a free community option.
If I were in your shoes I'd probably use a library built into whatever framework you are using. Auth servers are powerful but are another architectural component you have to manage (even if it is a SaaS, there's still config to manage).
Not sure what you are building it in, but if I were building it in rails, I'd use devise. If JS, maybe nextauth or passport.js.
When you do this you have to accept certain risks (what if your user data gets breached, what if you want to add more functionality) but based on the little you've shared, I think a local solution is perfectly fine.
How does AuthKit compare to Auth0? Any major differences?
Also what if you have an existing email-based account system which works fine - can you use AuthKit to add additional sign in methods like social without replacing your existing system?
The open-source nature of AuthKit is pretty different. You can build your own complete custom UI with the React components. Or build your own components from scratch and still use the WorkOS backend.
Outside of that, it's pretty much a drop-in replacement for Auth0. We also have more features, like native SCIM provisioning and a streaming events API to keep your app's database in sync.
WorkOS looks interesting from a features perspective, but the licensing model based on the number of connected organisations is so expensive that most SMBs (my clients) can't afford it.
Our customers typically just bundle our pricing within their own team/enterprise plan and pass through the cost. IT admins even within SMB orgs are happy to pay a couple hundred dollars a month more for the enhanced security of SAML auth. And small teams realistically don't need SAML, so you can add a minimum requirement on the number of "seats" (assuming that's how you bill).
Fair points. But SAML doesn't cost much incrementally for each added customer org, yet it enables an SMB to simplify account lifecycle management. Important from a security perspective, of course. Most SaaS products do put it behind an "enterprise" tier, but that's a barrier to SMB security best practices.
A more complex yet rational model would be a small incremental fee per user under SAML.
I thought so too, and we actually tried that first. After talking to about a hundred customers, I heard them resoundingly prefer per-org pricing because the flat cost is predictable within their own deal structure. I think the reason is that user counts can vary dramatically, and B2B SaaS businesses are primarily driven/measured by the number of customers, not end users.
I use Supabase just for auth (use AWS for everything else) and it was incredibly simple. The only issue is that their docs for my niche use-case were slightly out of date, but it still only took me maybe 30 minutes total.
For complex cases, use SSO providers and service-to-service connectors that hide the underlying protocol from you. If you must manage auth in a more custom way, use things like Azure Active Directory or other competitors. They probably use some OpenID or OIDC under the hood, but the vast majority of software products shouldn’t actually need to implement the protocols directly.
For simple cases, plain old TLS should be enough, ideally with short lived client certs.
It’s a bit like “don’t roll your own crypto” advice. Don’t roll your own auth.
Ok, this feels like different advice to me. It isn't to not use these, it's to not be the one implementing them? That is a lot easier to understand. I've been using AWS Cognito to get basic stuff up and running and it hasn't been too bad, I don't think. Have to convince people not to punch holes in things, but so far I haven't been too turned off.
> It isn't to not use these, it's to not be the one implementing them?
More or less. The complexity comes from having to solve the edge cases, so it’s helpful to be one level of abstraction higher where your code is closer to your conceptual space.
My recent experience setting up AWS Cognito (not through Amplify) was pretty rough. I think vanilla Cognito doesn’t do a very good job of delivering you something that actually works out of the box with no footguns - you still have to handroll a lot of stuff.
On the AuthN side, it seems to be... fine? For AuthZ, things are not surprisingly outsourced heavily to the application side. I'm not clear on how I would want that to be any different, all told. Last thing I, personally, want to deal with is an annotation style setup to control who can do what. I am luckily working with something where we can have pretty easy definitions on who can do what.
I would love to hear more about the footguns, though. Not trying to deny they exist.
I think this hits the same points I brought up in https://news.ycombinator.com/item?id=38873614. I do not claim that these should never be used. In fact, I would go farther and say in many cases this sort of thing should be used.
I think this is mostly my own ignorance and inexperience working with AuthN, but I had a harder time than expected just figuring out how to add basic log in and session management to my website. I spent a long time reading all the official Cognito docs getting nowhere. Eventually I started searching on the web and finally found two guides that actually managed to explain what I was looking for: [0], [1].
My philosophy toward authn right now is to never have to worry about security at all, so I want to completely minimize any personal responsibility for managing passwords and tokens: first by outsourcing as much as I can to products like Cognito, and failing that, by following best practices. My gripe with Cognito, as someone who doesn’t know much about auth and would prefer to learn as little as possible (I just want to add logins to my site!), is that it doesn’t give you an understandable API, user flow, or best practices for implementing what I’d consider a “happy path” use case, unless you use Amplify. So if you’re someone like me who is learning as they go, there are tons of footguns and mistakes you can easily expose yourself to.
As an example: it’s not obvious that using their hosted UI with a redirect, for USER_SRP_AUTH, should point to a backend endpoint hosted/managed by you that converts access codes to tokens and performs a second redirect back to your actual site. You could easily do the wrong thing and redirect back to your main site with the access code still in the URL params, then issue a call from the web client that converts that code to tokens. That is terribly insecure because it opens up an exploit: a user could share that URL with someone else, not knowing that the access code in the URL params is sensitive and could allow others to sign into their account. In fact, that entire exploit/antipattern was never even mentioned anywhere in any docs I found, but it would be extremely easy to accidentally introduce by naively using Cognito.
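For concreteness, here is a minimal sketch (TypeScript, using web-standard Request/Response handlers) of what the safe version of that endpoint might look like; the domains and client ID are placeholders, and /oauth2/token is Cognito's standard hosted-UI token endpoint:

```typescript
// Hedged sketch: exchange ?code= for tokens server-side, set HttpOnly cookies,
// and redirect, so the code never lingers in a shareable URL.
// All domains and the client ID below are placeholders.
const COGNITO_DOMAIN = "https://auth.mainsite.tld"; // Cognito hosted UI domain
const CLIENT_ID = "your-app-client-id";
const REDIRECT_URI = "https://login.mainsite.tld/"; // must match the app client config

export async function handleLoginRedirect(req: Request): Promise<Response> {
  const code = new URL(req.url).searchParams.get("code");
  if (!code) return new Response("missing code", { status: 400 });

  // Standard OAuth2 authorization-code exchange against Cognito's token endpoint.
  const res = await fetch(`${COGNITO_DOMAIN}/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      client_id: CLIENT_ID,
      code,
      redirect_uri: REDIRECT_URI,
    }),
  });
  if (!res.ok) return new Response("token exchange failed", { status: 502 });
  const { access_token, refresh_token } = await res.json();

  // Second redirect back to the main site; tokens travel as HttpOnly cookies,
  // so they never appear in the address bar or browser history. Domain= is set
  // so the cookies reach mainsite.tld, not just the login subdomain.
  const headers = new Headers({ Location: "https://mainsite.tld/" });
  headers.append(
    "Set-Cookie",
    `access_token=${access_token}; Domain=mainsite.tld; HttpOnly; Secure; SameSite=Lax; Path=/`
  );
  headers.append(
    "Set-Cookie",
    `refresh_token=${refresh_token}; Domain=mainsite.tld; HttpOnly; Secure; SameSite=Lax; Path=/`
  );
  return new Response(null, { status: 302, headers });
}
```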
I confess I am far less worried about access tokens leaking to end users than I probably should be. Assuming folks are validating their audiences on tokens, I don't see as much danger on the implicit workflow.
I'm also less clear on how the extra redirect there helps? If you are dependent on the user's client machine to follow the redirect anyway, they can still get middled, right? Compromised client doesn't follow the "code" redirect and instead directly calls to your oauth endpoint to get tokens. Since this is the "code" path, they can even get a session token that they can then start using on their own? Or do you lock down your oauth endpoints such that they can't be called? (Or is there more I'm mistakenly ignoring?)
The specific vulnerability I’m mentioning is if the user manually copies their post-redirect url (with access code in url params) and shares it with someone else. Specifically “hey check out this cool game!” (I’m making a game), sends a link, not knowing that nonsense after the site URL contains sensitive info that shouldn’t be shared. And then some savvy user, or bot, hijacks their account.
The extra redirect converts login.mainsite.url/?code=foo to mainsite.url with the code converted to tokens passed back via cookies. That way it’s much harder for a user to leak account details accidentally. In this auth flow, Cognito hands off the login by redirecting to foo.bar/?code=baz which could leak baz if baz gets shared.
My tokens’ cookies themselves are SameSite/HTTPS-only and not directly accessible to scripts (HttpOnly), so they’re protected against XSS AFAICT. AFAIK the only MITM security risk, once I got this working properly, is if something on the user’s network sniffs and leaks URL params to my login endpoint (though TLS should prevent this by encrypting the URL path and query, leaving only the hostname visible, and it’s not something I could easily work around anyway) or injects arbitrary code into my backend (in which case almost everything is compromised anyway).
I’m new to this auth stuff so I might be missing something, but I was surprised at the subtle security risk of Cognito’s default redirect behavior once I noticed it.
Ah, I think I see. The concern is the web app not clearing the access token from the URL that a user accidentally shares? That or maybe URL logs of where a user has accessed would leak an access_token?
This makes sense, and I think is compelling enough. The "code" is protected by some complicated effort in Cognito to make the code single use. (Right?)
Thinking of my hypothetical, I don't think there is any real protection from a compromised client. This is data that you want to give to the user, and you have to do that through the client. But the redirect has to be followed by the user's client, right?
To that end, you are probably still fine doing the code to token exchange using the web browser directly? Just not through the address bar, and instead with a post to the oauth endpoint. You can set the cookie locally, but no need to have another webpage involved.
I guess it depends on what you mean by a compromised client/ how it’s compromised. The auth flow is:
* mainsite.tld checks if user is unauthenticated/uses expired tokens. If so, redirect to Cognito UI hosted in a subdomain (auth.mainsite.tld) but managed by Cognito.
* Cognito UI prompts user for username/email and password. Potentially also MFA. Handles password reset. Eventually also handles signup.
* On successful sign in, Cognito redirects to my login endpoint with the access code in url params (login.mainsite.tld/?code=foo).
* My login endpoint extracts the access code and talks to Cognito again to convert it to tokens. It returns tokens via cookies in a response that redirects to my main site (mainsite.tld). (This is what prevents the user from accidentally sharing their access code in URL params, manually copied out of their browser address bar, which could happen if I had instead done this in the browser.)
* The main site now has working credentials; if the credentials go missing (because user cleared cookies) or expire (indicated in currently-unimplemented response when they interact with my authz/game server) they’ll be redirected back to the same Cognito UI.
I do not have control over how Cognito spits out the access code with this flow (URL parameters); still, this flow is preferable to most others because at no point whatsoever am I responsible for managing user passwords, yet unlike a lot of new auth solutions that accomplish the same thing, users still actually have the option to sign in with passwords. What I do have control over is which redirect addresses are allowed out of Cognito, so AFAICT a compromised client (something bad that points to my login) can only redirect to my login endpoint, which only redirects to my main site. There is no way to stop a compromised client (like a malicious browser and an unsuspecting user) from doing bad things with the code or tokens, but the same is true of anything entered into a browser ever, so that’s not a problem worth caring about.
But maybe I misunderstand (because I’m new to webdev too lol): what you’re suggesting in that last paragraph might be possible if I can reliably get the browser to hide the access code url param from the address bar/history. I just didn’t know how to do that from the browser without a redirect or reload. Even if that’s possible I’d still consider it a pretty glaring footgun, because while (hypothetically) possible it’s not necessarily obvious.
I think the catch there is that your "login endpoint" is still relying on the user's browser to get the code. The cognito endpoint returns a redirect to the user, and it is on them to follow it. So, the "code=foo" is visible to the user. If the user wants, they can try to prevent following the redirect and use that code directly.
That is, between each of your bullet points, there is a request by the user's browser. You do a request to the Cognito hosted UI, it returns a code to the browser through a redirect to a webpage that is in its "allowed list." The idea is that your "allowed list" includes a "login endpoint," but in all cases the code goes back to the user and it is on their browser to send that to the specified page.
I'm asserting that you can have javascript in the main web app that can use the "fetch" api in the browser to exchange a code for a token. That mostly hides it from accidental disclosure. And it makes it so that you don't have to have a special HTTP endpoint with another redirect in there setting cookies. (I'm assuming you'd set local storage or cookies with the fetch data.)
Yes, the user can still share their access code if they really want to. That’s like them sharing their password.
What I’m trying to prevent, while adhering to general authN best practices, is a user accidentally or unknowingly sharing their access code because they copied the address in their browser bar/history and sent it to someone. If they jump through hoops to share it there is nothing I can do to stop it. But the default Cognito footgun I’m mentioning is that the code ends up in their browser window in a way that could be easily copy and pasted without them knowing why they shouldn’t do that.
I don't think you need another endpoint that will respond with cookie commands?
On your page, the one that got the "?code=foo" payload, you can use javascript on your site and make another call to the backend to get the tokens. The same javascript code should clear the URL so that a naive copy/paste doesn't get it.
This is in contrast to having another server side endpoint that can set cookies on another http redirect response to the user. One that has to be in the same domain as your application, for the cookie to set correctly.
This will leak the "code=foo" in any access logs surrounding the user. But that is already in the user's history, and it already happened. That's why Cognito goes out of its way to make "foo" one-time use.
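A rough sketch of that browser-side approach, assuming a placeholder /token endpoint on your backend that does the exchange and answers with HttpOnly cookies:

```typescript
// Browser-side variant (sketch): exchange the code via fetch and scrub it from
// the address bar/history immediately, so a naive copy/paste can't leak it.
const params = new URLSearchParams(window.location.search);
const code = params.get("code");
if (code) {
  // Rewrite the URL in place; no reload, and the code drops out of history.
  window.history.replaceState({}, "", window.location.pathname);

  // The backend does the actual code-for-token exchange and answers with
  // HttpOnly cookies on this origin, so tokens never touch JS-visible storage.
  await fetch("/token", { // placeholder endpoint
    method: "POST",
    credentials: "include",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code }),
  });
}
```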
The specific challenge with authz in the app layer is that different apps can have different access models with varying complexity, especially the more granular you get (e.g. implementing fine grained access to specific objects/resources - like Google Docs).
Personally, I think a ReBAC (relationship/graph-based) approach works best for apps because permissions in applications are mostly relational and/or hierarchical (levels of groups). There are authz systems out there such as Warrant https://warrant.dev/ (I'm a founder) in which you can define a custom access model as a schema and enforce it in your app.
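As a toy illustration of the ReBAC idea (deliberately generic, not Warrant's actual API): permissions are relationship tuples, and a check walks the graph.

```typescript
// A relationship tuple says: `subject` has `relation` on `object`.
type Tuple = { object: string; relation: string; subject: string };

const tuples: Tuple[] = [
  { object: "doc:readme", relation: "parent", subject: "folder:eng" },
  { object: "folder:eng", relation: "viewer", subject: "user:alice" },
];

// Does `subject` have `relation` on `object`, directly or via a parent?
function check(object: string, relation: string, subject: string): boolean {
  const direct = tuples.some(
    t => t.object === object && t.relation === relation && t.subject === subject
  );
  if (direct) return true;
  // Hierarchical rule: a viewer of the parent is a viewer of the child.
  return tuples
    .filter(t => t.object === object && t.relation === "parent")
    .some(t => check(t.subject, relation, subject));
}

console.log(check("doc:readme", "viewer", "user:alice")); // true, via folder:eng
```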
My concerns there are usually that data duplication for various reasons makes a ton of sense in an application. Replicating the permissions system throughout all of this duplication is usually tough, even if you do know the schema well.
Worse, though, often times applications are learning their schema as they go. This is the key benefit of "schemaless" approaches. Anything that adds friction to a schema in the system is likely to get shaken off due to slowing the teams down.
I do agree that resource approaches are the best. I try and boil it down to flat access lists for resources based on ID. Any application call that uses an ID gets checked against access lists for that ID.
I will fully grant that, if you are building a system where you do know the schema very well, then this changes.
Pulling this back to OpenID and friends, I am growing rather disillusioned with the "scope" claim on access_tokens to control this. I love the idea of being able to scope down access. I do not like the idea of leaning on that too heavily.
Passwords need to be sent both with the request and to the requestor. I think GP is referring to sending credentials to the service making the request.
It is far better to give service XYZ a time-bound and scope limited token to perform a request than a user's username and password.
Chromium removed support for generating TLS client certs within Chrome in 2016 [0], and ever since then it has gotten harder and harder to use mTLS in Chrome/Chromium. Ten years ago it wasn't a great UX, but now it isn't even obvious how to use it. The impression I've gotten is that Chrome isn't interested in mTLS.
The crypto isn't complicated. What makes it complicated is the 10,000 different use cases they want the solution to work for, rather than one solution per use case, and a loose coupling interface for all of them.
It’s not because of cryptography, but because of abusing technologies not made for interactive stateful applications (i.e. HTTP, HTML etc.) for interactive stateful applications.
I used to think this, but I’ve also worked on authentication/authorization in contexts where you’re not constrained to HTTP requests, and it doesn’t really get any less complex once federated identities, finer grained access control features, and revocation enter the mix.
Sure, trying to do everything through JWTs and cookies makes things harder, but reasoning about attestations by a user from one federated identity provider that a service from a different federated identity provider should be able to query specific data on behalf of the user is messy no matter where you do it, and every medium-sized enterprise has that problem somewhere in the IT stack. At that point JWT is just another serialization format for passing attested data around.
I don't see how the display layer (HTML, CSS) or transport layer (http, TLS) are not suitable for mildly interactive stateful applications. All the state is on the server, except a few cookies.
What they are "abusing" is the fact that the same browser, under user's control, may have access to many sites which don't by default trust each other. The user can attest that they should.
No, it can indeed be vastly simpler to handle auth.
But you have to consider many complex aspects to provide a simple solution:
- What are your real use cases (authentication, authorization, delegation?)
- What is your threat model? (avoiding silly mistakes, preventing corporate espionage, defending against targeted attacks require very very different solutions)
- How to integrate into your ecosystem (tech stack, actors, layers..)
Then you might be able to remove some constraints.
You might not need authorization delegation, stateless and readable json tokens.
But often it's easier to not think too much about it and just use "an industry proven standard", and that is oauth2 and OIDC:
a large auth umbrella to avoid looking at the sun.
If you're writing software to authenticate users, the protocol is huge and complicated. That's why there are full prebuilt containers and SaaS authentication services that solve this problem. There are entire server implementations you can extend, but with tools like Zitadel and Keycloak ready to be configured and deployed for all manner of use cases, I don't see why you would.
If you're just authenticating your client app against a server, it's pretty easy (all you need is two tokens and a URL for most libraries). With some web servers (Apache, Caddy, the paid version of nginx) you can put that config in a location block and have it deal with the entire auth flow, so all your application needs to do is take the REMOTE_USER header or call /whoami to find out who the user is logged in as.
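A sketch of what the application side of that pattern amounts to (the header name varies by setup, and this assumes the proxy strips any client-supplied copy of it):

```typescript
// Minimal sketch: the reverse proxy does the whole OIDC dance; the app just
// trusts a header it forwards. Never expose this port directly to clients.
import express from "express";

const app = express();

app.get("/whoami", (req, res) => {
  const user = req.header("Remote-User"); // set by the proxy after login
  if (!user) {
    res.status(401).send("not logged in");
    return;
  }
  res.send(`logged in as ${user}`);
});

app.listen(3000);
```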
Doing auth correctly is just hard. Personally, I treat it like I treat dates/times: use something someone else made, unless you have a particularly weird use case that nobody else supports.
Thanks, your article is what I was hoping for when I clicked the OP. I've been putting all my self-hosted services behind OIDC recently using Authentik, and I've been wanting to actually understand how the flow works under the hood; this really helped.
I stopped spending mental cycles trying to parse these standards after taking myself on a ride with a completely DIY SAML service provider implementation.
Today, we use OIDC & SAML to authenticate all of the things. But, I cannot explain how any of it works in terms of detailed protocol, certificate chains, etc.
We actually have no in-house configuration along this axis because we only use products, such as web function runners, that live inside the IdP's platform. These can be trivially opted-in for MFA authentication with a single dropdown election if you are using Azure.
If your mission is to build your own IdP platform and/or SP client libraries, then it totally makes sense to dive into this rabbit hole. Otherwise, make it someone else's problem. An occasional headline in the news about a token not expiring in time, etc, is not worth chasing unless you intend to compete directly with these providers and build your own identity platform. If Microsoft can get it wrong sometimes, so will you.
I've been working with Python's Social Auth, because I didn't want to spend brain power trying to figure out exactly how OIDC works. Still, that isn't enough, because OIDC isn't always implemented the same.
E.g. apparently you can return OIDC claims in two ways, nested or flat, but not every client or server understands both, so if your server doesn't know how to do flat and your client only does flat, then you have a problem.
I do not know why OIDC gets so many bad comments here. At my $company we are using Keycloak for multi-realm (multi-tenant) authentication of users and clients (applications). Yes, the learning curve is long for OIDC and even longer for Keycloak. The FreeMarker template engine is awful compared to Twig. Updates of Keycloak can break something, so better have a proper test/staging environment. But this is the tax for not implementing something that is not in the core domain of the organization.
OIDC solves problems for OAuth2 like "every Identity Provider has different endpoints" with OpenID Connect Discovery (/.well-known/openid-configuration).
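Concretely, discovery means a client can bootstrap everything from one URL; here's an illustrative fetch (any compliant issuer works the same way):

```typescript
// OIDC Discovery: one well-known URL per issuer yields all the
// provider-specific endpoints, so clients don't have to hardcode them.
const issuer = "https://accounts.google.com"; // any OIDC issuer

const discovery = await fetch(`${issuer}/.well-known/openid-configuration`)
  .then(r => r.json());

console.log(discovery.authorization_endpoint); // where to send the user to log in
console.log(discovery.token_endpoint);         // where to exchange codes for tokens
console.log(discovery.jwks_uri);               // where to fetch token signing keys
```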
And then in real life I have to use the IdM of 5 car manufacturers, their devs being in South Korea, China, the US, and Italy (we are in Germany).
Impossible to manage meetings.
Impossible to adhere to the standard. Impossible to demand that they use the well-known config. Impossible to agree on a good UX (by using sane config values for token validity).
No. This is more like a "how to configure an OIDC integration between Github Actions & AWS" tutorial. It uses OIDC, but "How [OIDC] works" is too broad of a title for what the article ends up covering, IMO.
This doesn't really explain how OIDC works, it just explains the flow of requests a user would see if they're setting up OIDC for authentication between two systems for the first time.
But beyond that, I'd say in future blog posts it would look a bit more professional to use some kind of architecture diagram making software, rather than somebody's napkin drawings. It's a little more difficult than it needs to be deciphering these graphics. To be entirely honest, I'd settle for mspaint-level quality if none of the free diagram making tools out there catch your eye.
Since OP seems to be the website author... You should remove or alter the ::selection style in your CSS. In dark mode, selecting text makes it illegible (white on white).
This was a nice overview of why you'd use OIDC to get short lived access tokens (in the pure sense, not in the OAuth sense) with a heavy emphasis on AWS. Not really an overview of OIDC, though.
This is a really useful guide, but it's still not enough... every time I read something like this I get to a bit like this:
"Create a role on AWS, add trust policy specifying which github org+repo are allowed to access this AWS role. Create an identity provider for github actions."
I think I need a full video of clicking around in the AWS console here, because the idea of having to figure out how to do that myself is horrifying to me.
I've seen the same issue with instructions for how to configure Keycloak as an OIDC provider - given Keycloak has so many options (some of which might well be security significant), you'd almost want a step by step explicit statement of what every setting should be, to get to a tested, validated and secure configuration.
Which flow and service/grant etc seems to matter a lot, and be a good example where you'd want a very clear playbook of step by step instructions that you can hand off to someone else to unambiguously follow.
The gist of it (pun intended) is that GitHub issues its own OIDC tokens (not tokens issued by your cloud provider!) to your job runs, and then your job exchanges those tokens for permissions with your cloud provider, which you have configured to trust tokens from GitHub.
From experience, OIDC authentication to cloud providers from GitHub actions boils down to:
1. Get your GitHub actions workflow to be able to work with its own (GitHub) tokens by configuring workflow permissions.
2. Configure your cloud provider to verify tokens issued by GitHub actions OIDC server thingymabob (technical term).
3. Configure your cloud provider to grant permissions in that cloud provider (or issue its own tokens with those permissions) based on certain values in the verified GitHub tokens it is presented with. If your cloud provider trusts GitHub, then it can treat the values in the token (workflow name, branch run from, owner and name of GitHub repo) as trusted (see the verification sketch after this list).
4. Use a GitHub action from your cloud provider to do the negotiation with your cloud provider when your job runs, performing the exchange configured in (3).
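For the curious, here's roughly what the verification in steps 2 and 3 amounts to, sketched with the jose library. The audience value is AWS's convention, the repo/branch values are placeholders, and in reality the cloud provider (e.g. AWS STS) does this internally rather than you writing it:

```typescript
// Illustrative sketch (not AWS's actual implementation) of verifying a GitHub
// Actions OIDC token and gating on its claims, using the `jose` library.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://token.actions.githubusercontent.com";
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks`));

async function verifyGitHubActionsToken(token: string) {
  // Signature, issuer, and audience checks; the audience must match whatever
  // you configured on the cloud-provider side (this one is AWS's convention).
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: ISSUER,
    audience: "sts.amazonaws.com",
  });

  // Step 3: only the expected repo and branch get anything.
  if (payload.repository !== "my-org/my-repo" || payload.ref !== "refs/heads/main") {
    throw new Error("valid token, but not from the expected workflow");
  }
  return payload;
}
```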
For me to follow that procedure I really do need meticulous step-by-step instructions, with a full set of screenshots for every interaction I need to have with any of the web consoles involved.
If I ever get up enough courage to do this myself I'll take and publish those screenshots, but I'll probably continue to drag my heels for a few more years.
Jesus. We fucked with Keycloak for like a minute at one of my past jobs where we used FreeIPA. Gave up, and just didn’t use that functionality beyond whatever default stuff it did out of the box. IAM is hard. I kind of enjoy it, but there’s always a hundred other things I’m accountable for managing.
What you're supposed to do is have an alcohol-fueled all-night session of modifying values in your Keycloak realm settings one at a time until it works. Then, you export the realm file and tell everyone not to mess with it.
>I think I need a full video of clicking around in the AWS console here, because the idea of having to figure out how to do that myself is horrifying to me.
CLI commands would be OK. Terraform wouldn't, because then I'll have to learn enough Terraform to run it - I'd rather minimize the number of extra tools I have to figure out here.
Passkeys and OIDC are not directly related. If all you want is "authenticate to AWS" then yes, passkeys would work, but so would a simple password, or a TLS client certificate, or whatever other technology you like for authentication.
OIDC works for things like "use my employer's login to get access to AWS resources without having a separate AWS password".
For certain OIDC authentication implementations, you can actually use passkeys. Standard passkeys should work perfectly fine with Keycloak's WebAuthn implementation, for example, either as a second factor or as the first factor in the login flow.