It looks really slick; the reason we haven't adopted it yet is that it brings in more tooling and configuration that overlaps with our existing system for prompt templates, schema definitions, etc. In the one component where we couldn't rely on OpenAI structured outputs, we experimented with TOML-formatted output, which turned out to be reliable enough to solve the problem across many models without any new dependencies. I do think we'll revisit it at some point, since Boundary also provides incremental parsing of streaming outputs and may enable cost optimizations that aren't easy for us right now.