Otherwise, yes, you can authenticate against the other versions with an email/password (the email doesn't actually do anything in the open source distribution; it's just a user identifier, but we keep it there for consistency).
We have a similar wrapper for local LLMs on the roadmap.
If you use the CLI only, we run Claude 4 + Gemini on the backend, with Gemini serving most of the vision tasks (frontend validation) and Claude doing core codegen.
We use both Claude 4 and Gemini by default (for different tasks). But the idea is you can self-host this and use other models (and even BYOM - bring your own models).
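To make the BYOM idea concrete, here's a rough sketch of what task-to-model routing could look like in a self-hosted deployment. This is not our actual config schema or API; the endpoint URLs, model names, and task keys are all placeholders, and it just assumes each model is reachable through an OpenAI-compatible endpoint:

```python
# Hypothetical BYOM routing sketch -- placeholders only, not the project's
# real config or API. Assumes each backend exposes an OpenAI-compatible
# chat completions endpoint.
from openai import OpenAI

# One client per task type; point these at whatever you self-host.
BACKENDS = {
    "codegen": OpenAI(base_url="http://localhost:8000/v1", api_key="none"),
    "vision": OpenAI(base_url="http://localhost:8001/v1", api_key="none"),
}

# Swap in your own models here (the BYOM part).
MODELS = {
    "codegen": "my-codegen-model",
    "vision": "my-vision-model",
}

def run_task(task: str, prompt: str) -> str:
    # Route the request to the backend registered for this task type.
    resp = BACKENDS[task].chat.completions.create(
        model=MODELS[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The point is just that task routing and model choice are decoupled, so Claude/Gemini are defaults rather than requirements.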
Exactly. Non-AI projects have always been easy to build without issues. That's why we have so many build systems. We perfected it on the first try and then made lots of new versions based on that perfect Makefile.
So I went ahead and tried running the example script with "A CHRISTMAS CAROL" using the "meta-llama-3.1-8b-instruct" and "text-embedding-nomic-embed-text-v1.5" models locally. How long should it take to extract the subgraphs with this kind of setup?
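For reference, this is roughly how I'm pointing at the local models (a sketch of my wiring, not the example script itself; the base_url is LM Studio's default OpenAI-compatible endpoint, and the API key is just a placeholder):

```python
# Sketch of the local setup: LM Studio's OpenAI-compatible server on its
# default port. The example script's internals are elided; this only shows
# which endpoints and models the run goes through.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def chat(prompt: str) -> str:
    # Generation calls go to the instruct model.
    resp = client.chat.completions.create(
        model="meta-llama-3.1-8b-instruct",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def embed(text: str) -> list[float]:
    # Embedding calls go to the nomic embedding model.
    resp = client.embeddings.create(
        model="text-embedding-nomic-embed-text-v1.5",
        input=text,
    )
    return resp.data[0].embedding

# Quick smoke test to gauge per-call latency before the full extraction run.
start = time.time()
print(chat("Name the ghosts that visit Scrooge.")[:200])
print(len(embed("Marley was dead, to begin with.")))
print(f"round trip: {time.time() - start:.1f}s")
```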