Maybe the most depressing part of all this is if people start thinking they would not have been able to do things without the LLM. Of course they would have; it's not like LLMs can do anything that you cannot. Maybe it would have taken more time, at least the first time, and you would have learned a few things in the process.
Sure, I can write all of it. But I simply won’t. I have had Claude generate Avalonia C# applications, and there is no way I would have written the thousands of lines of XAML they needed for the layouts. I would just have done it as a console app with flags.
But reducing friction, eliminating the barrier to entry, is of fundamental importance. It's human psychology; putting running socks next to your bed at night makes it like 95% more likely you'll actually go for a run in the morning.
I understand the point, and to some degree agree. For myself, I really couldn't (not to say it wouldn't have been possible). I tried many, many times over so many years and just didn't have the mental stamina for it; it would never "click" the way infra/networking/hardware does, and I would always end up frustrated.
I have learnt so much in this process, though nowhere near as much as someone who wrote every line (which is why I think being a good developer will be a hot commodity). But I have had so much fun and enjoyment, alongside actually seeing tangible stuff get created. At the end of the day, that's what it's all about.
I have a finite amount of time to do things, and I already want to do more than I can fit into that time; LLMs help me achieve some of them.
Hey, I watched your video a few times and really like the idea. Is the inference being done on the CPU, or do you support GPU as well?
The idea is solid and I like the direction you’re going with it, but the demo doesn’t really show it off. There’s a lot of jumping around in the UI and it’s hard to follow what’s happening without any audio. The interesting bit is right at the end when the rule gets generated, but it’s over so fast that you don’t really get a feel for what Syd is actually doing under the hood.
It was a bit hard to follow with no audio; even a simple “here’s the scan running, here’s the parser kicking in, here’s where the model steps in” kind of narration would help. Even speeding up the slower parts would make it easier to see the flow. Right now it feels more like a screen recording than a walkthrough. When you’ve spent hundreds of hours inside something it all feels obvious, but for someone seeing it for 3 minutes it’s tough to piece together what’s happening. Been there myself.
The automation angle you mentioned in the post is the part that really sells it. If the tool can take a directory, scan it, parse, correlate and then spit out the rule with almost no manual copying, that’s the kind of workflow improvement I (and maybe others?) care about. The video doesn’t quite show that yet, so it’s hard to judge how smooth the actual experience is.
I’m not against backing something like this, especially as it runs locally and handles the annoying parts. £250 is fine, but at the moment the payment page is just a Stripe form with no real signal that the thing is ready or actively maintained. A clearer demo, a roadmap, or even a short narrated “here’s the state of it today” would go a long way in building confidence.
Apologies if this comes across a bit direct. The idea is solid though. Local LLM + structured output from real security tools is genuinely useful. Keep going.
Really appreciate the detailed feedback—this is exactly what I need to hear.
GPU/CPU question: Yes, Syd supports both. It auto-detects CUDA if available and falls back to CPU if not. With GPU (tested on RTX 3060), inference runs at 30-50 tokens/sec. On CPU it drops to 5-10 tokens/sec, which is usable but noticeably slower for larger responses. The model is quantized (Q4_K_M) to keep VRAM requirements reasonable(6GB).
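For anyone building something similar, the detect-CUDA-then-fall-back behaviour described above can be sketched roughly like this. This is not Syd's actual code; probing via nvidia-smi and the 6 GB free-VRAM threshold are my assumptions for illustration:

```python
import shutil
import subprocess

def pick_device(vram_needed_gb: float = 6.0) -> str:
    """Return "cuda" if an NVIDIA GPU with enough free VRAM is visible,
    otherwise "cpu". Uses nvidia-smi as a heuristic, so this only checks
    the driver is present, not that the inference runtime has CUDA built in."""
    if shutil.which("nvidia-smi") is None:
        return "cpu"
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.free",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, timeout=5, check=True,
        ).stdout
        # One number (MiB of free VRAM) per GPU line; take the best GPU.
        free_mb = max(int(line) for line in out.split())
        return "cuda" if free_mb >= vram_needed_gb * 1024 else "cpu"
    except (subprocess.SubprocessError, ValueError):
        return "cpu"
```

The caller would then map "cuda" vs "cpu" onto whatever knob the runtime exposes (e.g. how many layers to offload).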
On the video: You're absolutely right. I've been staring at this for months and forgot what it looks like to someone seeing it fresh. The lack of audio and the jumpy editing make it hard to follow the actual workflow. There are more videos on the website (5 in total), but I'll
redo the demo with:
- Narration or at least on-screen captions explaining each step
- Slower pacing on the important bits (the parsing → LLM → rule-generation flow)
- A clear "here's the input, here's what Syd does, here's the output" structure
- Maybe a side-by-side showing manual workflow vs. Syd's automation
The automation is the whole point—scan directory, hit YARA match, auto-parse, explain in plain English, suggest next steps—and the current video completely fails to demonstrate that smoothly.
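That scan → match → parse → explain chain can be pictured as a small stage pipeline. Everything below is an invented illustration, not Syd's real code; the stage names, the context-dict convention, and the stub data are all mine:

```python
from typing import Callable

# A stage takes the running context dict and returns it, possibly enriched.
Stage = Callable[[dict], dict]

def run_pipeline(ctx: dict, stages: list[Stage]) -> dict:
    """Thread a context through each stage; a stage can short-circuit
    by setting ctx["stop"] (e.g. no YARA match, so nothing to explain)."""
    for stage in stages:
        ctx = stage(ctx)
        if ctx.get("stop"):
            break
    return ctx

# Stub stages standing in for the real scan/parse/explain steps.
def scan(ctx):
    ctx["matches"] = ["rule_demo hit on sample.bin"]  # pretend YARA output
    return ctx

def parse(ctx):
    if not ctx["matches"]:
        ctx["stop"] = True
    else:
        ctx["parsed"] = [m.split(" hit on ") for m in ctx["matches"]]
    return ctx

def explain(ctx):
    ctx["report"] = [f"{rule} matched {target}" for rule, target in ctx["parsed"]]
    return ctx

result = run_pipeline({"dir": "./samples"}, [scan, parse, explain])
print(result["report"])  # ['rule_demo matched sample.bin']
```

The point of the shape is that no stage's output needs manual copying into the next one, which is the workflow win being described.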
On the payment page: Fair point. It's bare-bones right now because I've been heads-down on the tool itself, but that doesn't inspire confidence if you're considering backing it. I'll add:
- Current development status (what's working today vs. what's planned)
- Roadmap with realistic timelines
- Maybe a shorter "state of the project" video or changelog
- Clearer communication on what backers get and when they'll receive updates (weekly or monthly), and obviously I'll answer any questions
Current state for transparency:
- Core features working: YARA, Nmap, Volatility, Metasploit, PCAP analysis with RAG-enhanced explanations
- 356k chunk knowledge base indexed and searchable
- Exploit/CVE database integrated
- GUI and CLI both functional
- Still refining: UX polish, additional tool integrations, documentation
I'm actively developing this (clearly evidenced by me responding to HN feedback at 10:38am). The idea of local LLM + security tool orchestration is genuinely useful—I use it daily—but I need to do a better job showing how it's useful and building confidence that it's not vaporware.
Thanks for being direct. This kind of feedback makes the product better. I'll update the demo and payment page this week and can ping you when it's improved if you're interested. And if you sign up on the website, that's a great way for me to keep in touch.
Thanks for the encouragement. I do plan to make more of them open source. In the past it's been a bit of a burden to document, test, and fix bugs before publishing, but for some projects AI can do that for you now.
One project I did publish:
https://github.com/jclarkcom/ble_proxy
This turns your cell phone into a network proxy, but using BLE, so the phone can be connected to a Wi-Fi network (hotel, plane, etc.). It's pretty slow, but in some cases you just need a little bit of data to work. I made it on a plane ride where my cellphone had data but my laptop didn't.
Very interested in this type of thing. I can't see anywhere where pricing is mentioned, even ranges would be a useful benchmark to know if it's something within budget.
Thanks. I’m based in the UK so have a good grasp on that, but I’ve reviewed someone’s resume from the US just as easily. Generally, English-speaking and preferably in a digital-related field, but a lot of the common pain points seem to be universal. It’s only difficult when the field is highly niche, as it's hard to know what’s common knowledge.
I like to play the role of the person doing the hiring, which is why the form asks for an example job description.
Ahhh, this is exactly what I'm looking for! I don't see any pricing on any pages. Would love to know how much this costs (I don't know what 455 diamonds is worth), as there are a few sprites I'd love to animate and use in my app.
Not a fan of signing up before seeing how much I'd have to pay. The examples look great though.
Prices are about to drop dramatically. Many of the models have dropped >80% in price since the initial launch. Any time I have a reduction in cost, I pass the savings directly on to users.
Not sure if you just added this in or I overlooked it, but exactly the kind of transparency I love. Will give this a try.
--
EDIT - Did an image generation using the OpenAI 4o model, then ran through the lowest quality animation. This is awesome and first pass is very strong and usable (around 100 diamonds used).
I look forward to seeing prices drop more and the asset pack area fill up. Keep going man, really awesome stuff.
Would suggest that you filter out any radio stations where the URL isn't working if possible.
For example, I filtered down to "United Kingdom" and then "bass": only 3 of the 6 worked, and I'd rather see ones that are active.
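A best-effort liveness check of the kind suggested above could look something like this. This is a sketch, not the site's code; it uses GET rather than HEAD because many stream servers reject HEAD requests:

```python
import urllib.request
from urllib.error import URLError

def stream_is_alive(url: str, timeout: float = 5.0) -> bool:
    """Best-effort check that a radio stream URL responds.
    Any 2xx/3xx counts as alive; errors or bad URLs count as dead."""
    try:
        req = urllib.request.Request(
            url, headers={"User-Agent": "station-checker/0.1"}
        )
        # We only need the status line and headers, not the stream body.
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, ValueError, OSError):
        return False

def filter_alive(stations: list[dict]) -> list[dict]:
    """Keep only stations whose stream URL currently responds."""
    return [s for s in stations if stream_is_alive(s["url"])]
```

In practice you'd run this periodically in the background (and probably in parallel), since checking thousands of stations serially at request time would be far too slow.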
Also, if possible, let the country filter be applied from within the search bar; it took me a second to realise I had to open the filter for country, select that, then go back to my search.
When clearing my search of "Bass" in the example above, it reset the results to default (without my country filter), even though the filter still showed as applied when opening the filter section.
Super easy interface to use though, really well done.