Probably more glad that people are paying subscription fees to do digital assistant stuff... without having to provide the assistant interface themselves. That way they won't be directly blamed for the wave of hacked accounts from people foolish enough to connect this to their email.
Yeah, this new trend of handing over all your keys to an AI and letting it rip looks like a horrific security nightmare to me. I get that they're powerful tools, but they still have serious prompt-injection vulnerabilities. Not to mention that you're giving your model provider de facto access to your entire life and recorded thoughts.
Sam Altman was also recently encouraging people to give OpenAI models full access to their computing resources.
This is what worries me the most. Marketing is ultimately a business of manipulation, and services like ChatGPT seem like excellent tools for manipulation. I wish OpenAI could find a less adversarial business model.
No, not a joke. The author also co-vibe-coded a book called Vibe Coding, describing and recommending exactly the sort of system he's trying to build as Gas Town.
Bubblewrap is a very minimal setuid binary. It's about 4,000 lines of C, but essentially all it does is parse your flags and ask the kernel to do the sandboxing (drop capabilities, set up namespaces) for it. You do have to handle cgroups yourself, though. It's very small and auditable compared to Docker, and I'd say it's safer.
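To give a sense of how little magic is involved, here's a rough sketch of invoking it from Python. The flags come from the bwrap man page and README; the project path and the command at the end are just placeholders, and it assumes bubblewrap is installed:

    import subprocess

    # Sketch: run one command in a throwaway bubblewrap sandbox.
    # /home/me/project is a placeholder for whatever dir you want writable.
    subprocess.run([
        "bwrap",
        "--unshare-all",                # fresh user/pid/net/ipc/uts namespaces
        "--die-with-parent",            # tear the sandbox down if we exit
        "--ro-bind", "/usr", "/usr",    # read-only view of the system
        "--symlink", "usr/lib", "/lib",
        "--symlink", "usr/lib64", "/lib64",
        "--symlink", "usr/bin", "/bin",
        "--proc", "/proc",              # new procfs for the pid namespace
        "--dev", "/dev",                # minimal /dev (null, zero, tty, ...)
        "--tmpfs", "/tmp",              # private /tmp, discarded on exit
        "--bind", "/home/me/project", "/work",  # the only writable path
        "--chdir", "/work",
        "/usr/bin/env", "python3", "-c", "print('inside the sandbox')",
    ], check=True)

Everything you don't explicitly bind in simply doesn't exist inside the sandbox, which is why the attack surface stays so small.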
If you want something with a few more features but less complexity than Docker, I think the usual choices are podman or firejail.
You should at least read the tests, to make sure they express your intent. Personally, I'm not going to take responsibility for a piece of code unless I've read every line of it and thought hard about whether it does what I think it does.
AI coding agents are still a huge force-multiplier if you take this approach, though.