Bravo Founder and CEO of Namefi, but the DNS seems to resolve just fine. Do you understand the DNS space? Perhaps you could find out more using this little side project I've been working on: https://www.google.com/.
A proxy is a good solution although a bit more involved. A great first step is just getting any secrets - both the ones the AI actually needs access to and your application secrets - out of plaintext .env files.
A great way to do that is either encrypting them or pulling them declaratively from a secure backend (1Password, AWS Secrets Manager, etc). An additional layer of protection is making sure those secrets don't leak, either in outgoing server responses or in logs.
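To make the "pull from a secure backend" idea concrete, here is a minimal sketch (assuming AWS Secrets Manager and the official @aws-sdk/client-secrets-manager package; the secret name and region are made up) that loads a value at boot instead of reading it from a plaintext .env file:

```ts
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// Hypothetical region and secret name - adjust for your setup.
const client = new SecretsManagerClient({ region: "us-east-1" });

export async function loadDatabaseUrl(): Promise<string> {
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: "myapp/prod/DATABASE_URL" })
  );
  if (!result.SecretString) {
    throw new Error("Secret myapp/prod/DATABASE_URL has no string value");
  }
  // Keep the value in memory only - never write it back to disk or to logs.
  return result.SecretString;
}
```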
https://varlock.dev (open source!) can help with the secure injection and log redaction, and provides a ton more tooling to simplify how you deal with config and secrets.
The fundamental idea of CUE (quoting them, to "enable data, schema, and policy constraints to coexist seamlessly") is really amazing for some use cases. But from a practical / DX perspective, the language seems pretty awkward and hard to understand. Drop someone who hasn't seen CUE before into some complex CUE files and they are definitely reading the docs to try to understand what is happening...
We built a similar kind of system with a much more limited scope - just handling environment variables - that is much more ergonomic IMO. It uses .env files with decorator-style comments and a new function call syntax to mix schema, default values, declarative instructions on how to fetch data, internal references, and merging of multiple definitions.
I like all the extra scanning features this provides :)
I've seen a few tools like this that try to keep things in sync. While it is better than the alternative of doing it manually, it is still a losing battle and not the greatest DX. Often you still have validation (if it exists) and types that also need to be kept in sync, and documentation is scattered throughout the codebase.
Another approach is to turn the example into a schema - and have it be included in the env loading process. This way non-sensitive values can be included directly, and it can never be out of sync. Only values that differ need to be defined in your git-ignored files or injected by the environment. This is precisely how https://varlock.dev solves this problem. The schema becomes the single source of truth, and is used to generate types as well. It also provides validation, leak prevention, and a bunch of other nice tools.
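As a generic illustration of the schema-as-single-source-of-truth idea (this sketch uses zod for brevity, not varlock itself): one schema declares every variable, holds non-sensitive defaults directly, validates whatever the environment injects, and types are inferred from it, so nothing can drift out of sync.

```ts
import { z } from "zod";

// One schema = single source of truth for names, defaults, and validation.
const EnvSchema = z.object({
  // Non-sensitive values can live in the schema itself as defaults.
  PORT: z.coerce.number().default(3000),
  APP_ENV: z.enum(["development", "preview", "production"]).default("development"),
  // Sensitive values must come from the environment (or a secrets backend).
  DATABASE_URL: z.string().url(),
  STRIPE_SECRET_KEY: z.string().startsWith("sk_"),
});

// Fails fast at boot with a readable error if anything is missing or invalid.
export const env = EnvSchema.parse(process.env);

// Types are derived from the same schema, so they can never get out of sync.
export type Env = z.infer<typeof EnvSchema>;
```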
I am currently working on a Turbo monorepo frontend at work with maybe 20-25 different env variables. Here dotenv-diff is really a game changer, but yea, for smaller projects it might be a bit overkill.
I like the projects you have linked; I will take a look and see if they have any features that I could use.
In `dotenv-diff` you also have the --compare option, which will compare your .env with your .env.example to keep them in sync, while also having a bunch of scanning features that keep the project safe.
One really nice thing in varlock for monorepos is the import syntax. This lets you have shared config at the root, or break things up however you need. No need for diffing or copy-pasting, as the schema validates everything - if something is required, it will yell at you.
In a team setting, it can be extremely helpful to have env/config loading logic built into the repo itself. That does not mean it has to be loaded by the application process, but it can be part of the surrounding tooling that lives in your codebase.
Yes, that's indeed the right place, IMO: ephemeral tooling that leverages or simplifies OS features.
Tooling such as xenv, a tiny bash script, a makefile, etc. that devs can then replace with their own if they wish (a Windows user may need something different from my zsh built-in). That tooling isn't present at all in prod, or when running in k8s or docker compose locally.
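As a rough sketch of that kind of throwaway dev tooling (a hypothetical Node script standing in for the bash/makefile version): it reads a local .env file and runs the real command with those variables, and it never ships to prod.

```ts
// dev-env.ts - hypothetical dev-only wrapper, e.g. `node dev-env.js npm run dev`
import { readFileSync } from "node:fs";
import { spawnSync } from "node:child_process";

// Parse a simple KEY=VALUE .env file; comments and blank lines don't match the pattern.
const vars: Record<string, string> = {};
for (const line of readFileSync(".env", "utf8").split("\n")) {
  const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
  if (match) {
    vars[match[1]] = match[2].replace(/^['"]|['"]$/g, "");
  }
}

// Run the real command with the merged environment and propagate its exit code.
const [cmd, ...args] = process.argv.slice(2);
if (!cmd) {
  console.error("usage: node dev-env.js <command> [args...]");
  process.exit(1);
}
const result = spawnSync(cmd, args, {
  stdio: "inherit",
  env: { ...process.env, ...vars },
});
process.exit(result.status ?? 1);
```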
A few years ago, I surfaced a security bug in an integrated .env loader that partly leveraged a lib and partly was DIY/NIH code. A dev built something that would traverse up and down file hierarchies to search for .env.* files, merge them at runtime, and reload the app if it found a new or changed one. Useful for dev. But in prod, an uploaded .env.png would end up in a temp dir that this homebuilt monstrosity would then pick up. Yes, any internet user could inject most configuration into our production app.
All because a developer built a solution to a problem that was long since solved, if only he had researched the problem a bit longer.
We "fixed" it by ripping out thousands of LOCs, a dependency (with dependencies) and putting one line back in the READMe: use an env loader like ....
It turned out that not only was it a security issue, it was also an inotify hog, a memory hog, and an I/O bottleneck on boot. We could downsize some production infra afterwards.
Yes, the dev built bad software. But, again, the problem wasn't the quality; it was the fact that building it was even considered in the first place.
For a more modern approach to .env files that includes built-in validation and type-safety, check out https://varlock.dev
Instead of a .env.example (which quickly gets out of date), it uses a .env.schema - which contains extra metadata as decorator comments. It also introduces a new function call syntax, to securely load values from external sources.
This is interesting. Amazing how something so fundamental is still such a pain, and we all build our own half-baked solutions for it on every new project. We've been thinking about this problem for a while now as well, and just launched another tool (https://varlock.dev) that might be interesting for you to check out. Would be very happy to collaborate or just talk about the problem space.
Our tool has similar goals, although with a slightly different approach. Varlock uses decorator-style comments within a .env file (usually a committed .env.schema file) to add additional metadata used for validation, type generation, docs, etc. It also introduces a new "function call" syntax for values, which can hold declarative instructions about how to fetch values and/or encrypted data. We call this new DSL "env-spec" -- similar name :)
Certainly some trade-offs, but we felt meeting people where they already are (.env files) is worthwhile, and it will hopefully mean the tool is applicable in more cases. Our system is also explicitly designed to handle all config, rather than just secrets, as we feel a unified system is best. Our plugin system is still in development, but it will allow you to pull specific items from different backends, or apply a set of values, like what you have done. We also have some deeper integrations with end-user code that provide additional security features - like log redaction and leak prevention.
On most projects you end up wiring up a bunch of custom logic to handle your config, which is often injected as environment variables - think loading from a secure source, validation logic, type safety, pre-commit git scanning, etc.
It's annoying to do it right, so people often take shortcuts - skip adding validation, send files over Slack, don't add docs, etc...
The common pattern of using a .env.example file leads to constant syncing problems, and we often have many sources of truth about our config (.env.example, hand-written types, validation code, comments scattered throughout the codebase)
This tool lets you express additional schema info about the config your application needs via decorators in a .env file, and optionally set values, either directly if they are not sensitive, or via calls to an external service. This shouldn't be something we need to recreate when scaffolding out every new project. There should be a single source of truth - and it should work with any framework/language.
This tool also redacts secrets from your logs if you are working in JS.
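For anyone curious what log redaction looks like in practice, here is a generic sketch of the idea (not varlock's actual implementation): known sensitive values get scrubbed from anything that passes through the logger.

```ts
// Generic illustration of log redaction - not varlock's actual implementation.
const SENSITIVE_VALUES = [
  process.env.DATABASE_URL,
  process.env.STRIPE_SECRET_KEY,
].filter((v): v is string => Boolean(v));

function redact(text: string): string {
  return SENSITIVE_VALUES.reduce(
    (out, secret) => out.split(secret).join("[REDACTED]"),
    text
  );
}

// Wrap console.log so secret values never reach stdout verbatim.
const originalLog = console.log.bind(console);
console.log = (...args: unknown[]) => {
  originalLog(...args.map((a) => (typeof a === "string" ? redact(a) : a)));
};

console.log(`connecting to ${process.env.DATABASE_URL}`); // -> "connecting to [REDACTED]"
```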