
This is exactly the way forward: encapsulation (the function), type safety, and dynamic/lazy query construction.

I'm building a new project, Typegres, on this same philosophy for the modern web stack (TypeScript/PostgreSQL).

We can take your example a step further and blur the lines between database columns and computed business logic, building the "functional core" right in the model:

  // This method compiles directly to a SQL expression
  class User extends db.User {
    isExpired() {
      return this.expiresAt.lt(now());
    }
  }

  const expired = await User.where((u) => u.isExpired());
Here's the playground if that looks interesting: https://typegres.com/play/


The fundamental SaaS lock-in comes from bundling two things:

1. A declarative, stable interface

2. An expert support/ops team

I think the path forward is to unbundle them.

We're already solving #1. Nix has the best potential to become that declarative & stable layer, letting us reach the goal of treating cloud providers as the simple commodities they should be (I wrote about this approach here: https://ryanrasti.com/blog/why-nix-will-win/)

The bigger, unsolved question is #2: how to build a viable business model around self-hosted, unbundled support?

That's the critical next step. My hunch is the solution is also technical, but it hasn't been built yet.


Agree with the other comments that it's not fundamentally innovative and no one with a sense of privacy wants to ship all browsing data to one of the mega-AIs.

BUT -- that's missing the strategic point here:

- Everyone realizes that being the gatekeeper for user interaction is key: that's where all the context is and where the utility will come from

- AI provides a unique opportunity to overturn a long-held monopoly (Chrome's dominance)

Put another way, ChatGPT + Chromium = OpenAI's Trojan horse.

It would be foolish for them to waste resources innovating on the browser engine (which isn't their core competency) when they can use their actual competency (AI) to make their bet on capturing the market.


+100 to you both. This is the classic tradeoff: powerful, centralized DB logic vs. clean but often anemic app code.

I'm building Typegres to give you both. It lets you (a) write complex business logic in TypeScript using a type-safe mapping of Postgres's 3000+ functions and (b) compile it all down to a single SQL query.

Easier to show than tell: https://typegres.com/play
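
To give a flavor of the shape this takes, here's a rough sketch (the model and method names here are hypothetical, not necessarily the real API):

  // Business logic written once in TypeScript...
  class Invoice extends db.Invoice {
    isOverdue() {
      return this.dueDate.lt(now()).and(this.paidAt.isNull());
    }
  }

  // ...and the whole predicate executes inside Postgres, in one query
  const overdue = await Invoice.where((i) => i.isOverdue());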


I share the OP's enthusiasm for Elixir, but as the CTO of a startup that ran it for three years in production, our experience was a mixed bag as the codebase grew.

The core promises of the BEAM (concurrency, fault tolerance) absolutely held up. Libraries like Ecto and Oban are world-class, remote `iex` is a lifesaver in prod, and the talent pool is exceptional.

However, developer experience (DX) was our biggest bottleneck. At our scale of 300k lines of code, the pain points were sharp:

* Compile times: A one-line change could easily take >10 seconds to compile in dev, constantly shattering flow.

* Tooling: ElixirLS was a coin flip. Unreliable autocomplete in a large codebase meant constantly grepping for function names and schema fields.

* LiveView: It wasn't a fit for our complex UI, which required a lot of client-side interactivity, forcing us to build a React frontend. This introduced the exact split-stack complexity (GraphQL overhead, context switching) that LiveView promises to fix.

I wrote a full retrospective for anyone considering the stack for a long-term project: https://ryanrasti.com/blog/elixir-three-years-production/


> Compile times: A one-line change could easily take >10 seconds to compile in dev, constantly shattering flow.

Has anyone else experienced this? I've mostly read comments about how good Elixir is, and that if you're a Rails user you'll benefit even more from Elixir. This is a bit surprising.


For a very large GraphQL API, maybe? I've seen long compiles with a combo of Absinthe, Phoenix, and Elm. Also, if you aren't careful about dependency cycles it can get messy. It's an easy thing to check in CI via `mix xref graph --format cycles --fail-above`.


Ah yeah great callout, that's very plausible. We used Absinthe heavily to power our GraphQL API.


Not remotely. Maybe I'm just not working on big enough projects, but I've never experienced any frustration at all with Elixir compile times.


Nope. I also worked on a Startup with a Full Phoenix LiveView Experience. Codebase was around 300k-400k lines of code and compilation was blazing fast. I would say they have a lot of circular dependencies if they are experiencing that.


Can you elaborate on how React was a better fit than LiveView for your use case?

In my opinion, the main drawback of LiveView is the assumption that the user does not click the Back button of the browser. In other words, if the user changes the page by pressing the Back button, then Phoenix closes the socket, creates a new one, and the state is lost. I really like LiveView, but I'm not sure how to persist the state between page reloads without using Ecto (also, not sure if that is idiomatic).


> if the user changes the page by pressing the Back button, then Phoenix closes the socket, creates a new one, and the state is lost

I don't think that has to be the case. You can use "patch" style links to navigate from page to page in the same session.

https://hexdocs.pm/phoenix_live_view/live-navigation.html


I am aware of that option. However, AFAIK patch links do not prevent the user from pressing the Back button.


I don't see why that would affect it, unless that Back button press modifies the URL such that it is out of the current live session, which would trigger a page reload.

My LiveView is rusty, but I encourage you to reconfirm your assumptions, as I think that things work the way we both hope they do (as long as the live session doesn't change).


I had a similar experience when learning Elixir. I really like the language, but I ran into a lot of issues with the LSP not working properly. It might be because I’m using Windows with WSL, but I’m really looking forward to the new official LSP (https://github.com/elixir-lang/expert), and I hope it will make things better.


Absolutely, any frontend, LiveView or React, can get messy if not carefully maintained. As the app grows and more developers contribute, regular code reviews and removing unused logic are essential, otherwise DX suffers just like in any other framework. I also agree there is still so much room for improvement in this space.


Author here. Thanks to `kqr` for sharing this!

I wrote this post to distill the tough lessons from using Nix in production for three years with a small team building a full-stack Elixir/React app. My core take is that Nix has already solved the impossible (reproducibility); what remains is solving the approachable (adoption).

Happy to answer any questions about our production experience or the proposals in the article.


I'm working on Typegres, a new data layer for the modern stack (TypeScript + PostgreSQL).

My take is that for years, ORMs have hidden the power of PostgreSQL behind generic, database-agnostic abstractions. This made sense in 2010, but now it's a bottleneck.

Typegres rejects this. It's a "translator, not an abstraction," designed to express the full power of PostgreSQL (all statements, built-in functions, etc.) in a type-safe TypeScript API.

The latest killer feature is my take on "object-relational mapping done right": class-based models with methods that are actually composable SQL expressions. This lets you extend your tables with expressive logic and fully-composable relations.

It's easier to show than tell. Take a look: https://typegres.com/play


Interesting! Have you taken a look at SafeQL? https://safeql.dev/

I'm personally not a fan of query builders for SQL. It's already a defined language; why are we trying to move away from queries? On top of that, SafeQL is only a dev dependency, so there's no abstraction: it gets run through any query client you want.


Thanks for the great points and link to SafeQL! I'm a big fan of its approach to bringing type safety to raw SQL strings. For static queries, it's a fantastic solution.

My take is that while "Just use SQL" is healthy pushback against heavy ORMs, a good query builder solves two fundamental problems that raw SQL can't in the application context:

1. Dynamic composition: A query builder is the macro system that SQL is missing. The moment you need to build a query programmatically (e.g., conditional filters or joins), you're left with messy/unsafe string concatenation (see the sketch after this list).

2. Handling relations (and other common patterns): Using raw SQL, a complex query with JOINs returns a flat list of rows that the application must then reassemble into nested objects. It greatly reduces cognitive load to think in terms of relations, not just join conditions.
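
To make (1) concrete, here's a minimal sketch in a generic fluent-builder style (the API is illustrative, not SafeQL's or Typegres's):

  // Filters aren't known until runtime; each branch extends the query
  function searchUsers(filters: { name?: string; minAge?: number }) {
    let q = db.selectFrom("users").selectAll();
    if (filters.name) q = q.where("name", "like", `%${filters.name}%`);
    if (filters.minAge !== undefined) q = q.where("age", ">=", filters.minAge);
    return q.execute(); // still compiles to a single, parameterized SQL statement
  }

With raw SQL strings, every combination of filters means hand-assembled fragments and placeholder bookkeeping.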

Again, showing is stronger than telling. To illustrate, I'd urge you to go through the first couple of examples in the playground and think about how you'd express them (e.g., the composability of the "example1" query) in something like SafeQL: https://typegres.com/play/


Neat idea. Would you say that biggest difference from something like Kysely is the focus on extracting common calculated SELECT targets into methods that can easily be accessed when querying? Or perhaps it's more thorough with providing TS versions of all the SQL syntax available? The list of reference fields/methods in your docs is certainly massive.


Thanks! That's a great question.

First off, I'm a huge fan of Kysely and it's a massive source of inspiration for Typegres.

You've nailed the two big differences:

* Architected for Business Logic: The primary innovation is the class-based model. This is all about co-locating your business logic (like calculated fields and relations) directly with your data model. The cool part is that these methods aren't just for SELECT; they're composable SQL expressions you can use anywhere: in a WHERE, an ORDER BY, etc. The goal is to create a single, type-safe source of truth for your logic that compiles directly to SQL (see the sketch after this list).

* PostgreSQL-Native: The other fundamental difference is the focus on going deep on a single database rather than being database-agnostic. That massive list of functions you saw is a core feature, designed to provide exhaustive, type-safe, and autocomplete-friendly coverage for the entire PostgreSQL feature set. The philosophy is to stop forcing developers to reinvent database logic in their application code.
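
As a quick sketch of the first point (the method names are illustrative, not the documented API):

  class Order extends db.Order {
    totalWithTax() {
      return this.subtotal.mul(this.taxRate.add(1));
    }
  }

  // The same expression composes into WHERE and ORDER BY alike
  const bigOrders = await Order
    .where((o) => o.totalWithTax().gt(100))
    .orderBy((o) => o.totalWithTax().desc());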

Philosophically, it's a shift from composing type-safe SQL strings (like Kysely, which is brilliant for its WYSIWYG approach) to composing SQL expressions as if they were first-class TypeScript objects.


Cool, that makes sense. Thanks for the explanation


Very cool!


Thanks, glad you think so!


Agree -- I think that's a powerful generalization you're making.

> We're often nowadays working in dynamic languages, so they become essentially the frontend to new DSLs, and instead of defining new syntax, we embed the AST construction into the scripting language.

And I'd say that TypeScript is the real game-changer here. You get the flexibility of the JavaScript runtime (e.g., how Cap'n Web cleverly uses `Proxy`s) while still being able to provide static types for the embedded DSL you're creating. It’s the best of both worlds.

I've been spending all of my time in the ORM-analog here. Most ORMs are severely lacking in composability because they're fundamentally imperative and eager. A call like `db.orders.findAll()` executes immediately, and you're stuck without a way to add operations before it hits the database.

A truly composable ORM should act like the compilers you mentioned: use TypeScript to define a fully typed DSL over the entirety of SQL, build an AST from the query, and then only at the end compile the graph into the final SQL query. That's the core idea I'm working on with my project, Typegres.
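
A minimal sketch of that lazy pattern (illustrative API):

  // Each call only extends an AST; nothing touches the database yet
  const q = db.orders
    .where((o) => o.total.gt(100))
    .orderBy((o) => o.createdAt.desc());

  // Only now is the graph compiled into SQL and executed
  const rows = await q.execute();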

If you find the pattern interesting: https://typegres.com/play/


I do find the pattern interesting and powerful.

But at the same time, something feels off about it (just conceptually, not trying to knock your money-making endeavor, godspeed). Some of the issues that all of these hit are:

- No printf debugging. Sometimes you want things to be eager so you can immediately see what's happening. If you print and what you see is <RPCResultTracingObject> that's not very helpful. But that's what you'll get when you're in a "tracing" context, i.e. you're treating the code as data at that point, so you just see the code as data. One way of getting around this is to make the tracing completely lazy, so no tracing context at all, but instead you just chain as you go, and something like `print(thing)` or `thing.execute()` actually then ships everything off. This seems like how much of Cap'n Web works except for the part where they embed the DSL, and then you're in a fundamentally different context.

- No "natural" control flow in the DSL/tracing context. You have to use special if/while/for/etc so that the object/context "sees" them. Though that's only the case if the control flow is data-dependent; if it's based on config values that's fine, as long as the context builder is aware.

- No side effects in the DSL/tracing context because that's not a real "running" context, it's only run once to build the AST and then never run again.
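
To illustrate the control-flow point with a generic tracing-style builder (a hypothetical API, not any specific library):

  // Pitfall: `u` is a proxy at trace time, so this `if` runs once while
  // the AST is built, not per row -- one branch gets silently baked in
  const names = db.users.select((u) => {
    if (u.isAdmin) {         // a proxy object is always truthy
      return u.realName;
    }
    return u.displayName;    // effectively unreachable in the built AST
  });

  // The DSL has to provide an explicit combinator instead, e.g.
  // u.isAdmin.if(u.realName, u.displayName)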

Of the various flavors of this I've seen, it's the ML usage I think that's pushed it the furthest out of necessity (for example, jax.jit https://docs.jax.dev/en/latest/_autosummary/jax.jit.html, note the "static*" arguments).

Is this all just necessary complexity? Or is it because we're missing something, not quite seeing it right?


I think this kind of tracing-caused complexity only arises when the language doesn't let you easily represent and manipulate code as data, or when the language doesn't have static type information.

Python does let you mess around with the AST; however, there is no static typing, and let's just say that the ML ecosystem will <witty example of extreme act> before they adopt static typing. So it's not possible to build these graphs without doing this kind of hacky nonsense.

For another example, torch.compile() works at the Python bytecode level. It basically monkey-patches the PyEval_EvalFrame function evaluator of CPython for all torch.compile-decorated functions. Inside that, it checks for any operators (e.g., BINARY_MULTIPLY) involving torch tensors and records them. Any if conditions on the path get translated to guards in the resulting graph. Later, when a guard fails, it recomputes the subgraph with the complementary condition (and any additional conditions) and stores this as an alternative JIT path, muxing between these in the future depending on the guards in place.

Jax works by making the function arguments proxies and recording the operations like you mentioned. However, you cannot use a normal `if`; you use lax.cond(), lax.while_loop(), etc. As a result, it doesn't recompute the graph when different branches are encountered; it only computes the graph once.

In a language such as C#, Rust, or a statically typed Lisp, you wouldn't need to do any of this monkey business. There's probably already a way in the Rust toolchain to hook in at the MIR stage and have your own backend convert it to some Tensor IR.


Yes being able to have compilers as libraries inline in the same code and same language. That feels like what all these call for. Which really is the Lisp core I suppose. But with static types and heterogenous backends. MLIR I think hoped (hopes?) to be something like this but while C++ may be pragmatic it’s not elegant.

Maybe totally off but would dependent types be needed here? The runtime value of one “language” dictates the code of another. So you have some runtime compilation. Seems like dependent types may be the language of jit-compiled code.

Anyways, heady thoughts spurred by a most pragmatic of libraries. Cloudflare wants to sell more schlock to the javascripters and we continue our descent into madness. Einsteins building AI connected SaaS refrigerators. And yet there is beauty still within.


Really nice summary of the core challenges with this DSL/code-as-data pattern.

I've spent a lot of time thinking about this in the database context:

> No printf debugging

Yeah, spot on. The solution here would be something like a `toSQL` that lets you inspect the compiled output at any step in the AST construction.

Also, if the backend supports it, you could compile a `printf` function all the way to the backend (this isn't supported in SQL though)
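
For example (a hypothetical `toSQL`, assuming the builder exposes its compiled output):

  const q = db.users.where((u) => u.isExpired());
  console.log(q.toSQL());
  // => SELECT ... FROM users WHERE expires_at < now()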

> No "natural" control flow in the DSL/tracing context

Agreed -- that can be a source of confusion and subtle bugs.

You could have a build rule that actually compiles `if`/`while`/`for` into your AST (instead of evaluating them in the frontend DSL). Or you could have custom lint rules to forbid them in the DSL.

At the same time -- part of what makes query builders so powerful is the ability to dynamically construct queries. Runtime conditionals are what make that possible.

> No side effects in the DSL/tracing context because that's not a real "running" context

Agreed -- similar to the above: this is something that needs to be forbidden (e.g., by a lint rule) or clearly understood before using it.

> Is this all just necessary complexity? Or is it because we're missing something, not quite seeing it right?

My take is that, at least in the SQL case: 100% the complexity is justified.

Big reasons why:

1. A *huge* impediment to productive engineering is context switching. A DSL in the same language as your app (i.e., an ORM) makes the bridge to your application code seamless. (This is similar to the argument for having your entire stack in a single language.)

2. The additional layer of indirection (building an AST) lets you dynamically construct expressions in a way that isn't possible in SQL. This is effectively adding a (very useful) macro system on top of SQL.

3. In the case of TypeScript, because its type system is so flexible, you can have stronger typing in your DSL than in the backend target.
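
As a tiny example of point 3, the DSL can reject mistakes at compile time that would otherwise only surface when the query runs (illustrative API):

  users.where((u) => u.age.gt(18));   // ok: numeric comparison
  users.where((u) => u.age.gt("18")); // TypeScript error, caught before any query runs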

tl;dr is these DSLs can enable better ergonomics in practice and the indirection can unlock powerful new primitives


Agree, and to add: from what I see, the main issue is that server-side data frameworks (e.g., ORMs) aren't generally built for the combination of security & composability that makes them naturally synergize with Cap'n Web. Another way to put it: promise pipelining is a killer feature, but if your ORM doesn't support pipelining, you have to build a complex bridge to support them both.

I've been working on this issue from the other side. Specifically, a TS ORM that has the level of composability to make promise pipelining a killer feature out of the box. And analogous to Cap'n Web's use of classes, it even models tables as classes with methods that return composable SQL expressions.

If curious: https://typegres.com/play/


This seems really cool and I'd be happy to help (I'm currently a pgtyped + Kysely user and community contributor). I see how this solves N+1 from promise pipelining when fetching "nested" data with a similar approach as Cap'n Web, but I don't think we've solved the map problem.

If I run, in client-side Cap'n Web land (from the post):

  let friendsWithPhotos = friendsPromise.map(friend => {
    return { friend, photo: api.getUserPhoto(friend.id) };
  });

And if I implement my server class naively, the server-side implementation will still call `getUserPhoto` on a materialized friend returned from the database (with a query actually being run) instead of an intermediate query builder.

@kentonv, I'm tempted to say that in order for a query builder like Typegres to do a good job optimizing these RPC calls, the RpcTarget might need to expose the pass-by-reference control flow so the query builder can decide to never actually run "select id from friends" without the join to the user_photos table, or whatever.


> but I don't think we've solved the map problem.

Agreed! If we use `map` directly, Cap'n Web is still constrained by the ORM.

The solution would be what you're getting at -- something that directly composes the query builder primitives. In Typegres, that would look like this:

  let friendsWithPhotos = friendsPromise.select((f) => ({
    ...f,
    photo: f.photo(), // `photo()` is a scalar subquery -- it could also be a join
  }));

i.e., use promise pipelining to build up the query on the server.

The idea is that Cap'n Web would allow you to pipeline the Typegres query builder operations. Note this should be possible in other fluent-based query builders (e.g., Kysely/Drizzle). But where Typegres really synergizes with Cap'n Web is that everything is already expressed as methods on classes, so the architecture is capability-ready.

P.S. Thanks for your generous offer to help! My contact info is in my HN profile. Would love to connect.


That is actually pretty interesting!

Have you considered making a sqlite version that works in Durable Objects? :)


Thanks, Kenton! Really encouraging to hear you find the idea interesting.

Right now I'm focused on Postgres (the biggest market share for full-stack apps). A SQLite version is definitely possible conceptually.

You're right about the bigger picture, though: Cap'n Web + Typegres (or a "Typesqlite" :) could enable the dream dev stack: a SQL layer in the client that is both sandboxed (via capabilities) and fully-featured (via SQL composability).


> I find the choice of TypeScript to be disappointing.

Genuinely curious, is the disappointment because it's limited to the JS/TS ecosystem?

My take is that by going all-in on TypeScript, they get a huge advantage: they can skip a separate schema language and use pure TS interfaces as the source of truth for the API.
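
For example (a hypothetical interface, but this is the general shape):

  // The TypeScript interface *is* the schema, shared by client and server
  interface ChatApi {
    sendMessage(room: string, text: string): Promise<void>;
    history(room: string, limit?: number): Promise<string[]>;
  }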

The moment they need to support multiple languages, they need to introduce a new complex layer (like Protobuf), which forces design into a "lowest common denominator" and loses the advanced TypeScript features that make the approach so powerful in the first place.


You can generate TypeScript schema for Haxe JS output. I'm honestly a bit surprised that TS isn't a supported target!

That could change with some investments. Haxe is a great toolkit to develop libraries in because it reduces the overhead for each implementation. It would be nice to see some commercial entity invest in Haxe or Dafny (which can also enable verification of the reference implementation).

> The moment they need to support multiple languages, they need to introduce a new complex layer (like Protobuf),

So this just won't be used outside of Node servers then?


> So this just won't be used outside of Node servers then?

Well... I imagine / hope it will be used a lot on Cloudflare Workers, which is not Node-based; it has its own custom runtime.

(I'm the author of Cap'n Web and also the lead developer for Cloudflare Workers.)


We've actually chatted before. I was also asking dumb questions that you were polite enough to suffer through.

I really do wish Haxe would get the attention of library developers like yourself. It's more like a framework for polyglot development. It can auto-generate non-ergonomic libraries that feel like FFI, but you can also gradually fine-tune them to match the target language over time. It really helps to centralize development effort and share the burden.

So sure, maybe not Rust. But Haxe's distributable size would be comparable.

Maybe I'll try a Dafny implementation to see how that project is coming along!

