The audience and scope claims exist to address that problem. Provided that RPs reject JWTs issued for audiences other than themselves, there’s no security weakness here.
This is why JWTs are used in OIDC (e.g. “Sign-in with Google”): any website can use it, and it doesn’t make Google’s own security weaker.
I’ll concede that small but important details like these are not readily understood by those following some tutorial off a coding-camp content farm (or worse: using a shared secret for signing tokens instead of asymmetric cryptography, ugh), and that’s also where we see the vulnerabilities. OAuth2+OIDC is very hard to grok.
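To make the audience check above concrete, here is a minimal sketch of the claim extraction an RP would do. This is illustrative only: signature verification (which must happen before any claim is trusted) is omitted, and the claim names follow RFC 7519; `check_audience` and the example URLs are hypothetical.

```python
import base64
import json

def check_audience(jwt_token: str, expected_aud: str) -> bool:
    """Reject JWTs issued for an audience other than ourselves.

    Sketch only: signature verification, which must happen first,
    is omitted here. Claim names are per RFC 7519."""
    payload_b64 = jwt_token.split(".")[1]
    # JWT segments are base64url without padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    aud = claims.get("aud")
    # Per RFC 7519, "aud" may be a single string or an array of strings.
    if isinstance(aud, str):
        return aud == expected_aud
    return isinstance(aud, list) and expected_aud in aud
```

An RP calling `check_audience(token, "https://rp.example")` after verifying the signature would reject a token minted for any other service, which is the whole point of the claim.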
It limits your ability to compartmentalize your infrastructure, establish security perimeters, and provide defense-in-depth against vulnerabilities in your dependencies.
> The audience and scope claims exist to address that problem. Provided that RPs reject JWTs issued for other audiences than themselves there’s no security weakness here.
My interpretation is that the audience and scope claims, as other features like nonce, are in place to prevent tokens from being intercepted and misused, not to facilitate passing tokens around.
I don’t see how those prevent tokens from being misused. They just prevent anyone else from issuing tokens as you (not by themselves, but if you implement your server correctly).
> Don’t see how those prevent tokens from being misused?
The purpose of a nonce is to explicitly prevent the token from being reused.
The purpose of the other claims is to prevent them from being accepted (and used) in calls to other services.
If you implement your server correctly, each instance of each service is a principal which goes through auth flows independently and uses its own tokens.
Large companies have fallen into this trap [1]. So you are right that aud addresses the problem, but it's widespread enough to question whether it really affects just coding-camp content farms. Being hard to grok is arguably a design flaw in itself.
I hadn’t heard of DPoP until this mention. Please tell us more. Google tells me it is Demonstrating Proof of Possession, but is it supported by any products?
DPoP is described in RFC 9449; you can tell from the RFC number that it's quite new. I don't think there's wide support for it yet, but at least Okta supports it [1], and I think Auth0 is also working on adding DPoP.
Is it good? I'm not a fan. To use DPoP safely (without replay attacks), you need to add server-side nonces ("nonce") and client-generated nonces ("jti", great and definitely not confusing terminology there).
You need to make sure client-generated nonces are only used once, which requires setting up... wait for it... A database! And if you'll be using DPoP in a distributed manner with access tokens, then, well, a database shared across all services. And this is not an easy-to-scale read-oriented database like you'd have to use for stateful tokens. No, this is a database that requires an equal number of reads and writes (assuming you're not under a DDoS attack): for each DPoP validation, you'd need to read the nonce and then add it to the database. You'd also need to implement some sort of TTL mechanism to prevent the database from growing forever, and implement strong rate limiting across all services to prevent a very easy DDoS.
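The read-then-write jti bookkeeping described above looks roughly like this single-node sketch (class and parameter names are mine; a real distributed deployment would back this with shared consistent storage, which is the commenter's point). Entries expire after the DPoP proof validity window so the store can't grow forever.

```python
import time
from typing import Optional

class JtiReplayCache:
    """Single-node sketch of a DPoP 'jti' replay store (illustrative).

    Each proof's jti may be accepted exactly once; entries expire after
    the proof validity window, implementing the TTL mechanism the
    database would otherwise need."""

    def __init__(self, window_seconds: float = 60.0) -> None:
        self.window = window_seconds
        self._seen: dict[str, float] = {}  # jti -> expiry timestamp

    def accept(self, jti: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Lazy TTL cleanup: drop entries whose window has passed.
        self._seen = {j: t for j, t in self._seen.items() if t > now}
        if jti in self._seen:
            return False  # replayed proof within the window
        # One read plus one write per validation, as the comment notes.
        self._seen[jti] = now + self.window
        return True
```

Even in this toy form you can see the cost profile: every validation both queries and mutates the store, which is why sharing it across hundreds of services is painful.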
It seems like the main driving motivation behind DPoP is to mitigate the cost of refresh tokens being exfiltrated from public clients using XSS attacks, but I believe it is too cumbersome to be used securely as a general mechanism for safe token delegation that prevents "pass-the-token" attacks.
I agree that DPoP - especially the nonce - is quite complex, but I don't think it's as bad as you make out.
Proof tokens can only be used for a narrow window of time (seconds to minutes), so you just need a cache of recently seen token identifiers (jtis) to do replay detection. And proof tokens are bound to an endpoint with the htm and htu claims. They can't be used across services, so I don't see a need for that replay cache to be shared across all services.
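The htm/htu endpoint binding mentioned above can be checked with something like the following sketch (function name is mine; the signature check, iat window, and jti replay detection required by RFC 9449 are assumed to happen elsewhere). Per the spec, htu is compared without query or fragment.

```python
from urllib.parse import urlsplit

def proof_matches_request(claims: dict, method: str, url: str) -> bool:
    """Check a DPoP proof's htm/htu binding against the actual request.

    Sketch only: signature verification, iat freshness, and jti replay
    checks from RFC 9449 are assumed to be done separately."""

    def normalize(u: str) -> str:
        # htu comparison ignores query string and fragment per RFC 9449.
        parts = urlsplit(u)
        return f"{parts.scheme}://{parts.netloc}{parts.path}"

    return (claims.get("htm", "").upper() == method.upper()
            and normalize(claims.get("htu", "")) == normalize(url))
```

Because the proof is pinned to one method and URI, a proof stolen from one service cannot be presented to another, which is why the replay cache can stay local to each endpoint.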
The main issue for us was not the size of the cache, but distributing a guaranteed single-use cache (CP in CAP theorem) across multiple regions and handling traffic from all microservices that can read the token (we have hundreds and plan to support thousands, so I admit our case is quite extreme).
Please note that I am talking about using DPoP to verify _every_ request, not just a token refresh request (where OAuth 2.1 is setting DPoP as an alternative to issuing a new refresh token and revoking the old one). When using DPoP for every request, the amount of client-generated nonces ("jti"s) is quite high, since you need a new one for every request.
And yes, you can rely on "htu" to distinguish between services and have a separate nonce cache for every service, but this would require deploying and maintaining additional infrastructure for every service. Depending on your organization this may or may not be an issue, but this is a big issue for us.
What did we decide on instead? Request signatures and mutual-TLS token binding (RFC 8705) where possible. Request signatures without nonces do not work well for repeatable requests (like the Refresh Token Grant), but this is not our use case.
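For reference, the RFC 8705 binding check is much simpler than DPoP's replay machinery: the access token carries a cnf["x5t#S256"] claim, and the resource server just compares it to the thumbprint of the client certificate on the mutual-TLS connection. A minimal sketch (function name is mine):

```python
import base64
import hashlib

def token_bound_to_cert(cnf_claim: dict, client_cert_der: bytes) -> bool:
    """RFC 8705 certificate-bound access token check (sketch).

    The token's cnf["x5t#S256"] must equal the base64url-encoded
    SHA-256 thumbprint of the DER-encoded client certificate
    presented on the mutual-TLS connection."""
    thumbprint = hashlib.sha256(client_cert_der).digest()
    expected = base64.urlsafe_b64encode(thumbprint).rstrip(b"=").decode()
    return cnf_claim.get("x5t#S256") == expected
```

Note there is no nonce and no shared state: the TLS layer proves possession of the key on every connection, so no replay cache is needed, which is the operational advantage being described.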
DPoP is an OAuth extension that defends against token replay by sender constraining tokens. It is a new-ish spec, but support is pretty widespread already. It's used in a lot of European banking that has pretty strict security requirements, and it's supported by some of the big cloud identity providers as well as the OAuth framework I work on, IdentityServer. We have sample code and docs etc on our blog: https://blog.duendesoftware.com/posts/20230504_dpop/
It's a new proposed standard. Where I work (in healthcare in Europe) we have it as a requirement for any new APIs we offer public access to.
We have our own auth service, but it looks like Okta already offers DPoP.