My experience has been the opposite. I came from Python and TypeScript, and the initial amount of reading and fighting with the compiler was very frustrating, but I understood one thing that sets Rust apart: I write code with almost the same level of bugs as a seasoned Rust developer. That is a win for years to come as the team grows and the software gets old. I will bet on it again and again.
Now I mostly generate code with coding agents and almost everything I create is Rust based - web backend to desktop app. Let the LLM/agent fight with the compiler.
Location: Kolkata, India
Remote: Yes and willing to travel frequently
Willing to relocate: No
Technologies: Agentic development, Go, Rust, TypeScript, etc. Strongly typed stacks only. I do not write code by hand anymore, and I have my own agent. You can hire me to build for you, or you can build on top of my agent and I can consult for you.
Résumé/CV: https://www.linkedin.com/in/brainess, https://brainless.in, https://github.com/brainless
Email: My name, without spaces or any other characters, at Google's consumer email domain
I am Sumit and I am building https://github.com/brainless/nocodo. I have built it entirely with coding agents. If you are building rapidly with LLMs, I am very interested in working with you. I use LLMs at every stage of my workflow, with strongly typed languages only. I have been an engineer for 16 years, a founder many times over, and have deep knowledge and experience in early-stage product building.
Think of it as Devin AI but self-hosted, headless (use my UI or yours), customizable, multi-model, multi-OS (build/debug your apps on Windows, Mac, Linux). Enforce policies with hooks. Commercial license at $90K/year with support.
I did not know of this and I am looking for simple ways to isolate processes for multiple reasons. I am building a coding agent, https://github.com/brainless/nocodo, that runs (headless) on a Linux instance. Generated code is immediately available for demo.
I am new to isolation and am not looking for a container-based approach. I care about isolation from a security standpoint, but I do not know enough yet. This approach looks like a great start for me.
In my coding agent, nocodo (https://github.com/brainless/nocodo), I am thinking about using copy-on-write filesystems for cheaper multi-agent operations. But to be honest, git worktrees may be good enough for most use cases. nocodo already detects existing worktrees in the local repo, and I will add creation and merge support too.
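For readers unfamiliar with worktrees, here is a minimal sketch of a worktree-per-task flow: one isolated checkout per agent task, merged back when done. The repo layout, branch names, and commit messages here are my own illustration, not nocodo's actual conventions.

```shell
# Sketch: give each agent task its own worktree, then merge it back.
set -e
cd "$(mktemp -d)"

# A stand-in repo for the demo.
git init -q repo
git -C repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m init

# One isolated checkout per agent task, on its own branch.
git -C repo worktree add ../task-1 -b agent/task-1

# The agent works inside task-1/ without touching the main checkout.
echo "generated code" > task-1/feature.txt
git -C task-1 add feature.txt
git -C task-1 -c user.email=a@b -c user.name=a commit -qm "agent: task 1"

# Merge the agent's branch back and clean up the worktree.
git -C repo merge -q agent/task-1
git -C repo worktree remove ../task-1
```

Unlike copy-on-write snapshots, worktrees share one object store, so parallel checkouts are cheap on disk and the merge step is ordinary git.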
This is interesting to read and very important to me, since I am building a coding agent with team collaboration in mind. I used to use Zed daily, until I moved away from writing code directly and started generating all my projects from prompts alone.
I think collaboration for the people who eventually use the software will be more critical in the era of agentic coding. Project management will change. We are not waiting two weeks for prototypes; they get done in an hour. What does that mean for end users - do they prompt their changes and get access to new software? Who would double-check? Would AI reviews be good enough? Would AI agents collaborate along with humans in the loop?
There are so many unanswered questions. If anyone is keen on having these conversations, I would be happy to share what I think. Here is what I am building: https://github.com/brainless/nocodo
I want to see a future where end users can prompt their needs, have collaborators in the company to help clear things up and in an hour the feature/issue is tackled and deployed.
But that would apply to any app that deals with files like this one does.
This one is open source and we can run some code analysis on it, compile locally, etc. I am not well versed in security checks but I guess you get the idea.
I am building a coding agent for small businesses. The agent runs on a Linux box in your own cloud. Desktop and mobile apps let you chat with AI models and generate software as needed.
SSH based access with HTTP port forward. Team collaboration, multiple models, git based workflow, test deployment automation, etc.
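To make the SSH-plus-port-forward part concrete, here is a hedged sketch. The host alias `agent-box` and port 8080 are assumptions for illustration, not nocodo's actual configuration: the idea is that the generated app's dev server stays on the remote box and you tunnel it to your laptop.

```shell
# Hypothetical: tunnel the box's dev server (port 8080) to your machine with
#   ssh -N -L 8080:localhost:8080 user@agent-box
# then open http://localhost:8080 locally to preview the generated app.
#
# `ssh -G` prints the resolved client config without connecting, which is a
# cheap way to sanity-check that the forward is set up as intended:
ssh -G -L 8080:localhost:8080 user@agent-box | grep -i localforward
```

The advantage of this over exposing the port publicly is that access rides on existing SSH keys, which fits the team-collaboration model.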
This is not surprising at all. Having gone through therapy a few years back, I would have chatted with an LLM if I were in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues, affecting large parts of our population, that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take such challenges. I choose to believe that.
Someone elsewhere in the thread pointed out that it's truly hard to open up to another human, especially face to face. Even if you know they're a professional, it's awkward, it can be embarrassing, and there's stigma about a lot of things people ideally go to therapy for.
I mean, hell, there's people out there with absolutely terrible dental health who are avoiding going to the dentist because they're ashamed of it, even though logically, dentists have absolutely seen worse, and they're not there to judge, they're just there to help fix the problem.
There's no point bothering these poor volunteers/underpaid workers with my issues because they're inherently unfixable. Truth is, I should either suck it up or kill myself. Meanwhile, with an LLM I will never feel like I'm wasting its time, because it never gets tired of my blabbering about the same shit over and over again.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you've said, and dangerous, because it plays a middle ground: it presents a machine as if it can evaluate your personal situation and reason about it, when in actuality you're getting third-party therapy recycled from someone else's situation on /r/relationshipadvice.
We are not ourselves when we are down. It is difficult to parse what is reasonable advice and what is not. I think it can help most people, but it can equally lead to disaster... It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
This makes no sense at all to me. You can choose to gather evidence and evaluate that evidence, you can choose to think about it, and based on that process a belief will follow quite naturally. If you then choose to believe something different, it's just self-deception.
You are right, and it gives us a chance to do something about it. We always had data about people who are struggling, but now we can see how many are trying to reach out for advice or help.
> A chat like this is not a solution though, it is an indicator that our societies have issues
Correct, many of which are directly, a skeptic might even argue deliberately, exacerbated by companies like OpenAI.
And yet your proposal is
> a company to tackle this at scale.
What gives you the confidence that any such company will focus consistently, if at all,
> on help, not profits
Given that it exists in the same incentive matrix as any other startup? A matrix which is far less likely to throw fistfuls of cash at a nice-sounding idea now than it was in recent times. This company will need to resist its investors' pressure to find returns. How exactly will it do this? Do you choose to believe someone else has thought this through, or will do so? At what point does your belief become convenient for people who don't share your admirably prosocial convictions?
Is OpenAI taking steps to reduce access to mental healthcare in an attempt to force more people to use their tools for such services? Or do you mean in a more general sense that any companies that support the Republican Party are complicit in exacerbating the situation? At least that one has a clear paper trail.