# if you don't have a session token yet
POST /session-tokens # Accept: application/json
=> 200 {"token": "someuuid"}
# no cookies set!
# stuff the session token in localStorage instead
# now use it
PUT /sessions/someuuid # with your login details
=> 201
GET /sessions/someuuid/inbox # Accept: application/json
=> 200 {...}
GET /sessions/someuuid/inbox # again, now with If-None-Match
=> 304 # wow much cached
# meanwhile
GET /sessions/otheruuid/inbox
=> 200 {...} # other user: other memcached prefix!
# pages that won't change per-user...
GET /home
=> 304 # can be cached globally
# (use JS client-side, not edge-side, includes)
# and finally...
DELETE /sessions/someuuid
=> 303 /home # log out! (See Other -- nothing "moved permanently")
Basically, a session is just another resource that you create and then manipulate using its sub-resources. This makes all sorts of things easier: creating session tokens and attaching sessions to them (by logging in, or as a guest) are separate steps, so you can throttle logins just by throttling access to the token generator. No cookies and no ESI means caching (reverse-)proxies can actually cache. (Note that I'm presuming TLS and hygienic CORS here, so "session stealing" isn't a valid complaint. Or rather, without those things, it's just as valid a complaint against a cookie-based approach.)
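For concreteness, here's a minimal sketch of those routes in TypeScript with Express; the in-memory stores and the exact payload shapes are my assumptions, not part of the scheme above:

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

// Illustrative in-memory stores; a real deployment would use a shared store.
const tokens = new Set<string>();
const sessions = new Map<string, { user: string }>();

// Throttling logins is just throttling this one endpoint.
app.post("/session-tokens", (_req, res) => {
  const token = randomUUID();
  tokens.add(token);
  res.status(200).json({ token }); // note: no Set-Cookie anywhere
});

// Attach a session to an existing token by logging in (or as a guest).
app.put("/sessions/:token", (req, res) => {
  if (!tokens.has(req.params.token)) return res.sendStatus(404);
  sessions.set(req.params.token, { user: req.body.username });
  res.sendStatus(201);
});

// Log out: the session is a resource, so deleting it ends it.
app.delete("/sessions/:token", (req, res) => {
  sessions.delete(req.params.token);
  tokens.delete(req.params.token);
  res.redirect(303, "/home");
});

app.listen(3000);
```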
Oh, but as a postscript, one little HATEOAS bonus tweak:
POST /session-tokens
=> 200 {"token": "/sessions/someuuid"} # no constructing URIs!
Don't do this in real life. Putting session IDs in URLs is not a good security practice. (Session fixation. Users might try to share links containing session IDs. Session IDs leak out via URL history and referrers; URLs show up in the darndest places.)
Some people (maybe you) think he's wrong: why can't sessions just be a resource? (And this just goes to show that nobody knows what "REST" really means, and that the question doesn't matter.)
But if you're willing to treat sessions as resources, there are good security reasons to use cookies to refer to them, ideally in addition to a separate non-cookie ID parameter, to prevent CSRF attacks.
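A minimal sketch of that check, assuming a hypothetical "sid" cookie and "X-Session-Id" header (a double-submit-style comparison):

```typescript
import { Request, Response, NextFunction } from "express";

// Sketch of "cookie plus a separate non-cookie ID". The cookie name "sid"
// and the header "X-Session-Id" are illustrative assumptions.
function requireMatchingSession(req: Request, res: Response, next: NextFunction) {
  // The cookie arrives automatically with every request from the browser...
  const fromCookie = /(?:^|;\s*)sid=([^;]*)/.exec(req.headers.cookie ?? "")?.[1];
  // ...but this copy has to be set deliberately by your own page's JS.
  const fromHeader = req.get("X-Session-Id");
  if (!fromCookie || fromCookie !== fromHeader) {
    // A cross-site form carries the cookie but can't set the custom header,
    // so the two never match and the forged request is rejected.
    return res.sendStatus(403);
  }
  next();
}
```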
Cookies have some performance benefits, too: they can save you round trips to the server. If I want to display fresh personalized session-specific content on my /home page, why should I force the client to download a generic /home page, then download my JS, then run an AJAX request, when she could just send a cookie in the initial request for /home and get back an (uncacheable) personalized response?
> If I want to display fresh personalized session-specific content on my /home page, why should I force the client to download a generic /home page, then download my JS, then run an AJAX request, when she could just send a cookie in the initial request for /home and get back an (uncacheable) personalized response?
Your server shouldn't have to do O(N) units of work to serve a static page to N people. The entire point of GET-idempotency is that serving your home page can be reduced to O(1) units of work: you render the page once, and it gets cached by a CDN, like Cloudflare. This necessitates customization either on the client side, or not at all. Doing anything else is breaking the web[1].
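A hedged sketch of what that O(1) path looks like, assuming Express and an arbitrary max-age:

```typescript
import express from "express";

const app = express();
// Rendered once at startup: identical bytes for all N visitors.
const homePage = '<html><body><h1>Home</h1><div id="greeting"></div></body></html>';

app.get("/home", (_req, res) => {
  // From here on, the CDN (or any reverse proxy) does the O(1) serving.
  // The max-age value is an illustrative assumption.
  res.set("Cache-Control", "public, max-age=300");
  res.type("html").send(homePage);
});

app.listen(3000);
```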
(Besides, you don't necessarily have to do an AJAX request for every previously-would-have-been-customized page. Edge-side-include type stuff (e.g. username+id, viewing preferences) is, literally, what localStorage was created to store. You only have to get that type of stuff once.)
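Something like this client-side sketch, assuming a hypothetical /profile sub-resource under the session:

```typescript
// Client-side sketch of the localStorage approach: fetch the small per-user
// bits once, then decorate the globally-cached page on every load.
// The /profile endpoint and the "#greeting" element are illustrative assumptions.
async function decorateHome(sessionUrl: string): Promise<void> {
  let raw = localStorage.getItem("profile");
  if (raw === null) {
    const res = await fetch(`${sessionUrl}/profile`, {
      headers: { Accept: "application/json" },
    });
    raw = await res.text();
    localStorage.setItem("profile", raw); // fetched once, reused everywhere
  }
  const { username } = JSON.parse(raw);
  const el = document.querySelector("#greeting");
  if (el) el.textContent = `Hello, ${username}`;
}
```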
> Session fixation. Users might try to share links containing session IDs. Session IDs leak out via URL history and referrers; URLs show up in the darndest places.
As I said, proper TLS and CORS (where things like adding "noreferrer" to all external links are as much a requirement for proper CORS as not loading external images) generally take care of this.
But you're right, there is a use for cookies. It's not to hold your session token, though. Instead, it's to hold a client fingerprint token, given to the client the first time they speak to you.
A client fingerprint is anything that authenticates a session token. If you say "give me /sessions/foo/..." and you don't send foo's associated fingerprint, the server 403s you at the load-balancer level.
Session fingerprints are already a common concept: some people use the user's IP address, or their browser UA string, or something similar, as a fingerprint. These have pretty horrible UX, though, because they can change unintentionally (e.g. a mobile connection switching cells). But users expect that clearing their cookies will log them out of things, so the cookie store is a pretty good place to put these. Then, leaking a URL with the session token does nothing, because it doesn't share the session secret (the fingerprint), just the identifier.
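A sketch of that load-balancer check, with an assumed "fp" cookie name and an in-memory token-to-fingerprint store:

```typescript
import { Request, Response, NextFunction } from "express";

// Session token -> fingerprint bound to it at issue time (illustrative store).
const fingerprintFor = new Map<string, string>();

function authenticateSession(req: Request, res: Response, next: NextFunction) {
  const match = /^\/sessions\/([^/]+)/.exec(req.path);
  if (!match) return next(); // not a session-scoped resource
  const sent = /(?:^|;\s*)fp=([^;]*)/.exec(req.headers.cookie ?? "")?.[1];
  if (!sent || sent !== fingerprintFor.get(match[1])) {
    return res.sendStatus(403); // a leaked URL without the cookie is useless
  }
  next();
}
```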
How is this different from just putting the session token in the cookie store? Mainly in that the client fingerprint doesn't represent you, it represents your machine. The point of it is to pair your browser to the server. You can log in and out as many times as you like, but it'll all happen under the same "client."
Note that if browsers actually chose to start emitting a unique, persistent, per-site client fingerprint as a header (like the iOS "Vendor/Advertising ID"), this would supplant the client fingerprint token -- but do nothing to replace the session token. They're separate things.
Note also that while the session token is part of the URL (and thus part of determining cacheability), the client fingerprint isn't. This should be obvious, but its implication isn't, necessarily: past your load balancer (which authenticates sessions against client fingerprints) cookies cease to exist; they should not be passed to your backend servers. The client, and your backend, operate on pure resources; the client fingerprint becomes a transparent part of the HTTP protocol, invisible to both sides. Gives a much stronger meaning to "HTTPOnly."
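One way that might look, as a sketch: a bare Node proxy that drops the Cookie header before forwarding (the fingerprint check is elided here, and the backend host/port are assumptions):

```typescript
import http from "http";

// "Past your load balancer, cookies cease to exist": forward the request
// upstream with the Cookie header removed, so the backend sees pure resources.
http
  .createServer((client, res) => {
    const headers = { ...client.headers };
    delete headers.cookie; // fingerprint already checked; drop the jar here
    const upstream = http.request(
      { host: "backend.internal", port: 8080, path: client.url, method: client.method, headers },
      (backendRes) => {
        res.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
        backendRes.pipe(res);
      }
    );
    client.pipe(upstream);
  })
  .listen(8443);
```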
---
[1] And it's not that it's bad to break the web, really; you're not hurting anyone's site but your own when you do this. It's just that HATEOAS gives you some really great guarantees, and pretty much everyone who throws those guarantees away finds themselves re-building the web on top of itself to try to get them back.
Matryoshka caching middleware, for instance, is what you're forced to deal with when you're trying to build up a complex view within the scope of a single web request. If you instead just use a service-oriented architecture, where each service that needs information from sub-resources makes requests through the public API of the web server (the same one you want clients to use), you'll get caching automatically, because all the sub-resources you're requesting are inherently cacheable, and thus automatically cached.
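As a sketch of that composition style (the internal proxy URL and resource paths are assumptions, not from the thread):

```typescript
// Compose a complex view by GETting your own public sub-resources through the
// same caching proxy that clients hit, so each piece is cached independently.
async function renderDashboard(sessionUrl: string): Promise<string> {
  const via = "http://cache.internal"; // the same reverse proxy clients use
  const [inbox, notices] = await Promise.all([
    fetch(`${via}${sessionUrl}/inbox`).then((r) => r.json()), // per-session cache entry
    fetch(`${via}/notices`).then((r) => r.json()),            // globally cacheable
  ]);
  return JSON.stringify({ inbox, notices }); // hand off to your templating layer
}
```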
TLS doesn't address session fixation. (And certainly neither does CORS or the same-origin policy generally.)
Client fingerprinting (with cookies) does address session fixation. But but but didn't you just get through saying how wonderful your solution is because it doesn't use cookies?
If you're happy to use cookies and link them with server-side sessions, then just do that. (Just don't tell Roy.)
And when you need maximum performance, don't force the client to do multiple round trips.
> You only have to get that type of stuff once.
Most visits to public-facing web pages have a cold cache. Optimizing for warm-cache performance at the expense of multiple round trips on a cold cache is probably the wrong tradeoff, but only A/B testing and RUM will tell you for sure.
Something like this should be expanded into an RFC!
It would be great not to have to start from scratch on every web project (even if you are using a library, as you should, you often end up inventing the URLs for each action).
Can you expand this with support for password recovery, OAuth-like flows, JWTs, etc.?