Hacker News | shireboy's comments

This is interesting to me because somehow I’ve had in my head that if we develop the ability in the next couple centuries to send probes interstellar it would be a longer list of possible targets. What this makes me realize is the list of places we visit even in the next thousands of years - even with incredible leaps in propulsion - is very finite. Space may be really really big but the part physically accessible even in long timescales is limited.


Even the part accessible to just radio/light is small.


Excellent! I'm a .NET developer who dabbles in Node, and I've been looking for a Hangfire alternative for a while. This looks like just what I'd want.


One gotcha with rolling your own task scheduler is running it across multiple machines. If you need 5 machines running different scheduled tasks, you need a locking mechanism to ensure only one machine is processing each task. In the author's approach this is handled by the queue, but as I read it the scheduling can only happen on one machine, or you get duplicates of the same task in the queue. Retry can also get more complicated: depending on the failure you may want exponential backoff, retrying N times and waiting longer between attempts. A nice dashboard to see the status of everything is helpful too.

In the .NET world I use Hangfire for this. In Node (which I assume this is) I've tinkered with Bull, but I'm not sure what best in class is there.
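For the backoff part, a minimal TypeScript sketch of the retry policy I mean (names and defaults are illustrative, not from Hangfire or Bull):

    // Retry a task up to maxAttempts times, doubling the wait between attempts.
    async function withBackoff<T>(
      task: () => Promise<T>,
      maxAttempts = 5,
      baseDelayMs = 1_000,
    ): Promise<T> {
      for (let attempt = 1; ; attempt++) {
        try {
          return await task();
        } catch (err) {
          if (attempt >= maxAttempts) throw err;
          // 1s, 2s, 4s, 8s, ... plus a little jitter to avoid thundering herds
          const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 250;
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
      }
    }

The distributed-lock side still needs something external (a DB row lock, Redis, or whatever the queue provides).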


Oban enters the chat… :)


I dunno. I pretty much skipped over XAML, which fills a very similar role for .NET desktop UI. Maybe this sticks better because it's not MS? I certainly think current frameworks tend to carry too much cruft for LOB/forms-on-data apps. But a special XML seems wrong to me.


This pretty much exactly describes my strategy to ship better code faster, especially the "top down" approach. I'm actually kind of surprised there isn't a "UI first" or "UI Driven Development" manifesto like with TDD or BDD. Putting a non-functional UI in front of stakeholders quickly often results in better requirements gathering and early refinement that would be more costly later in the cycle.


I think at that point it's almost better to come with paper printouts for 2 reasons.

1: Tactile/shareable/paintable in a physical meeting

2: It drives home that it's a sketch. With bad customers, a visible UI too easily hides the enormous amount of complexity that can sit under a "simple" UI, and makes it hard to educate them as to why the UI sketch you did in an hour or two then needs 1500 hours of engineering to become a functioning system.


Why not build a simple functional UI? It's not a huge time sink to go from non-functional to functional (as long as it's kept simple).


Well, sometimes I will, but take for example a simple list + form on top of a database. Instead of building the UI and the database and then showing the stakeholder, who adds/renames fields, changes relationships, etc., I will intentionally build just the UI, not wired up to a database; sometimes just to an in-memory store, or nothing at all. Then, _after_ the stakeholder is somewhat happy with the UI, I "bake" things like a service or data layer. This way the changes the stakeholder inevitably has up front have less of an impact.
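A rough TypeScript sketch of what I mean (all names here are hypothetical): the UI codes against a tiny store interface with an in-memory implementation, and the real data layer implements the same interface later.

    interface Customer { id: number; name: string; email: string; }

    // The UI only ever talks to this interface.
    interface CustomerStore {
      list(): Promise<Customer[]>;
      save(customer: Customer): Promise<void>;
    }

    // Throwaway store used while the stakeholder iterates on the UI.
    class InMemoryCustomerStore implements CustomerStore {
      private rows: Customer[] = [];
      async list() { return [...this.rows]; }
      async save(customer: Customer) {
        const i = this.rows.findIndex((r) => r.id === customer.id);
        if (i >= 0) this.rows[i] = customer; else this.rows.push(customer);
      }
    }

    // Later a DbCustomerStore implements CustomerStore and the UI doesn't change.
    const store: CustomerStore = new InMemoryCustomerStore();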


Well, most of the time the people I worked with preferred something they could see and comment on earlier, even if only by a few days. Maybe that's why for him too.


I call it "outside in", but I sometimes like to de-risk a lower-level component before investing in the UI.


At my first intranet job in the early 2000s, reporting was done this way. You could query a DB via ASP to get some XML, then transform it using XSLT and get a big HTML report you could print. I got pretty good at XSLT. Nowadays I steer towards a reporting system for reports, but for other scenarios you're typically doing one of the stacks he mentions: JSON or md + angular/vue/react/next/nuxt/etc
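The browser-side half of that still works today via the standard XSLTProcessor API; here's a toy TypeScript sketch (the ASP/database part is omitted, and the XML/XSL strings are made up):

    const parser = new DOMParser();
    const xmlDoc = parser.parseFromString("<rows><row name='a'/><row name='b'/></rows>", "application/xml");
    const xslDoc = parser.parseFromString(
      `<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
         <xsl:template match="/">
           <ul><xsl:for-each select="rows/row"><li><xsl:value-of select="@name"/></li></xsl:for-each></ul>
         </xsl:template>
       </xsl:stylesheet>`,
      "application/xml",
    );
    const processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    // Produces HTML nodes you can append to the page (or print).
    document.body.appendChild(processor.transformToFragment(xmlDoc, document));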

I’ve kinda gotten to a point and curious if others feel same: it’s all just strings. You get some strings from somewhere, write some more strings to make those strings show other strings to the browser. Sometimes the strings reference non strings for things like video/audio/image. But even those get sent over network with strings in the http header. Sometimes people have strong feelings about their favorite strings, and there are pros and cons to various strings. Some ways let you write less strings to do more. Some are faster. Some have angle brackets, some have curly brackets, some have none at all! But at the end of the day- it’s just strings.


My first personal page was made this way too. Nightmare to debug, since "view source" only gave the XML code, not the computed XHTML.


I've wanted this for a while. I worked on a bank app where the home-rolled solution was atrocious. Line-of-business apps don't make sense in the Microsoft Store. But really where I land is to greatly prefer web apps deployed to IaaS, because deployment is easier and compatibility is usually a known quantity. Debugging installer or desktop app issues on remote servers and desktops is a hassle I like to avoid if I can.


I'm trying to wrap my head around MCP, but auth and security are still the confusing part to me. In this case, I get that there's an OAuth redirect happening, but where is the token being stored? How would that work in an enterprise or SaaS environment where you want to expose an MCP server for users but ensure they can only get "their" data? How does the LLM reliably tell the MCP server who the current user is?


I built a remote MCP server with OAuth2 auth from scratch just last week.

The standard has a page on authorization[0], though it's not particularly easy to read for someone not well-versed with OAuth.

In short, MCP just uses plain boring OAuth, like any other OAuth authorization. Like when you authorize an app to access your Google calendar. The only difference is that instead of accessing your normal API, they access your MCP HTTP endpoint. Each connection to that endpoint will pass the Authorization header with an OAuth token, which you can resolve to a user on your side. Same as you would with normal OAuth.
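A minimal TypeScript sketch of that last step, assuming a fetch-style handler; lookupUserByToken is a hypothetical helper, not something from an MCP SDK:

    // Stand-in for however your app maps access tokens to users
    // (DB lookup, token introspection, ...).
    async function lookupUserByToken(token: string): Promise<{ id: string } | null> {
      return token === "demo-token" ? { id: "user-123" } : null;
    }

    async function handleMcpRequest(req: Request): Promise<Response> {
      const auth = req.headers.get("authorization") ?? "";
      const token = auth.startsWith("Bearer ") ? auth.slice("Bearer ".length) : null;
      const user = token ? await lookupUserByToken(token) : null;
      if (!user) {
        return new Response("Unauthorized", { status: 401 });
      }
      // ...dispatch the MCP message here, scoping all data access to user.id...
      return new Response(JSON.stringify({ ok: true, user: user.id }), {
        headers: { "content-type": "application/json" },
      });
    }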

One cool bit is that MCP providers are supposed to support OAuth2 Dynamic Client Registration, which means that e.g. Claude can provision an OAuth2 client in your app programmatically (and get a client_id/client_secret that it can use for authorization flows).
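Registration itself is just a POST of client metadata to the registration endpoint (RFC 7591). A rough sketch with placeholder URLs and values, not Claude's actual ones:

    const res = await fetch("https://mcp.example.com/oauth/register", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        client_name: "Example MCP client",
        redirect_uris: ["https://client.example.com/callback"],
        grant_types: ["authorization_code"],
        token_endpoint_auth_method: "client_secret_post",
      }),
    });
    // The returned client_id/client_secret are then used for the normal
    // authorization code flow.
    const { client_id, client_secret } = await res.json();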

When you add an MCP server to your Claude organization, you just add the MCP server. Each user will have to go through the integration's OAuth2 authorization flow separately.

[0]: https://modelcontextprotocol.io/specification/2025-03-26/bas...


> When you add an MCP server to your Claude organization, you just add the MCP server. Each user will have to go through the integration's OAuth2 authorization flow separately.

Check out https://aaronparecki.com/2025/05/12/27/enterprise-ready-mcp - there are some great ideas there on how this can be simplified even more in the future.


It does an OAuth redirect flow, and the client stores the access token and sends it with subsequent requests.

I have built a couple using the spec from a month ago. It works alright.

A lot of bad decisions are baked into the official implementations. For instance, not using native Request/Response types in Node, so you're forced to write a bunch of garbage code to convert them, or install Express just to use an MCP server.
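To illustrate, the conversion shim ends up looking roughly like this (a sketch of the kind of glue I mean, not the SDK's actual code):

    import { IncomingMessage } from "node:http";
    import { Readable } from "node:stream";

    // Adapt a Node request into the web-standard Request the handler wants.
    function toFetchRequest(req: IncomingMessage): Request {
      const url = `http://${req.headers.host ?? "localhost"}${req.url ?? "/"}`;
      const headers = new Headers();
      for (const [name, value] of Object.entries(req.headers)) {
        if (Array.isArray(value)) value.forEach((v) => headers.append(name, v));
        else if (value !== undefined) headers.set(name, value);
      }
      const hasBody = req.method !== "GET" && req.method !== "HEAD";
      return new Request(url, {
        method: req.method,
        headers,
        body: hasBody ? (Readable.toWeb(req) as ReadableStream) : undefined,
        // Node requires half-duplex when streaming a request body.
        duplex: "half",
      } as RequestInit);
    }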

If I had the time I'd really make my own MCP implementation, in TypeScript at least.

I find most of the implementations to be over-engineered and abstracted, when they could be simple function calls on top of the language's built-ins.

For simple stuff like a JSON file that returns the location of your auth routes, you need to add a "middleware".

When in reality you can just make a route and explicitly return that information.
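Assuming the JSON file in question is the OAuth authorization server metadata document, the explicit-route version is just this (issuer and paths are placeholders):

    import { createServer } from "node:http";

    // Serve the metadata document straight from a route, no middleware layer.
    const metadata = {
      issuer: "https://mcp.example.com",
      authorization_endpoint: "https://mcp.example.com/oauth/authorize",
      token_endpoint: "https://mcp.example.com/oauth/token",
      registration_endpoint: "https://mcp.example.com/oauth/register",
    };

    createServer((req, res) => {
      if (req.url === "/.well-known/oauth-authorization-server") {
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify(metadata));
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(3000);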

Every piece is some new abstraction; it feels vibe-coded.


Your answer is basically the discussion we had yesterday about the race between 'let's build a standard that lets the LLM make programmatic decisions' and 'let's build something that works'.

Most of the standard and the implementations are focused on the vision of models and clients that automatically handle the tool overhead, while in reality everything related to MCP requires tons of boilerplate/middleware/garbage code.


Yeah, I wish you could somehow pass the user's ID token to the MCP server when calling a tool while implementing an AI model. You could then let the MCP server fetch a token using the `token-exchange` endpoint, so that it can fetch the user info (e.g. the user ID).

For example, when you integrate with an AI model that supports function calling in the backend and want to use an MCP server to enhance the model.

I haven't figured that out yet. Maybe you would need to use the Client-Initiated Backchannel Authentication flow?


Author here.

There are basically a couple of different ways to implement an MCP server. For this demo it's a local binary that communicates over stdio, so no OAuth process is taking place. It's only meant to run on your local machine.

To make the demo simpler to explore and understand, the binary loads its configuration (SnapTrade API client id, secret, and username and secret) from a .env file that you populate with your credentials, which allows it to fetch the right data.
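In Node terms the idea is just something like the sketch below; the variable names are made up, not the demo's actual keys:

    // Load credentials from the .env file at startup.
    import "dotenv/config";

    const clientId = process.env.SNAPTRADE_CLIENT_ID;
    const consumerKey = process.env.SNAPTRADE_CONSUMER_KEY;
    if (!clientId || !consumerKey) {
      throw new Error("Missing SnapTrade credentials in .env");
    }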


Totally understand why it's not in the post, and it did help me understand MCP more. That said, that's the issue: most articles I've seen are geared toward local-use-only MCP servers. The ones I want to build need to be deployed into an enterprise and know the current user, and I'm not quite clear how yet. The answers about using OAuth help though. Maybe a future post idea :)


these questions kill the vibe.


As bad and annoying as this is, I do think "we won't pay the ransom but will set up a reward fund in the same amount to find the perps" is an interesting approach. It turns the tables such that any of the criminals or their associates are now incentivized to turn on each other. I could see ways it wouldn't work (they lie to get the reward, future scammers set up the scam with a patsy so they can collect the reward), and I'm not sure it plays the same if there are actually exposed keys, etc.


.NET has made great strides on this front in recent years. Newer versions optimize CPU and RAM usage across lots of fundamentals, and introduce new constructs to reduce allocations and CPU for new code. One might argue they were only able to because they started out so bad, but it's worth looking into if you haven't in a while.

