
Really appreciate the detailed feedback—this is exactly what I need to hear.

GPU/CPU question: Yes, Syd supports both. It auto-detects CUDA if available and falls back to CPU if not. With GPU (tested on RTX 3060), inference runs at 30-50 tokens/sec. On CPU it drops to 5-10 tokens/sec, which is usable but noticeably slower for larger responses. The model is quantized (Q4_K_M) to keep VRAM requirements reasonable (6 GB).
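Conceptually the detection is just "offload to the GPU if CUDA looks available, otherwise run on CPU." A minimal sketch of that idea, assuming llama-cpp-python and a GGUF Q4_K_M model (illustrative only, not Syd's actual code):

    # Minimal sketch: offload to GPU when CUDA looks available, else stay on CPU.
    import shutil
    from llama_cpp import Llama

    def load_model(model_path: str) -> Llama:
        # Treat a visible nvidia-smi binary as "CUDA is probably present".
        has_cuda = shutil.which("nvidia-smi") is not None
        return Llama(
            model_path=model_path,               # e.g. a Q4_K_M .gguf file
            n_gpu_layers=-1 if has_cuda else 0,  # -1 = offload all layers
            n_ctx=4096,
        )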

On the video: You're absolutely right. I've been staring at this for months and forgot what it looks like to someone seeing it fresh. The lack of audio and the jumpy editing make it hard to follow the actual workflow (there are more videos on the website, 5 in total). I'll redo the demo with:

- Narration, or at least on-screen captions explaining each step
- Slower pacing on the important bits (the parsing and LLM rule-generation flow)
- A clear "here's the input, here's what Syd does, here's the output" structure
- Maybe a side-by-side showing the manual workflow vs. Syd's automation

The automation is the whole point—scan directory, hit YARA match, auto-parse, explain in plain English, suggest next steps—and the current video completely fails to demonstrate that smoothly.
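In code terms that loop is roughly the following (a simplified sketch using yara-python; in the real tool the print is replaced by the auto-parse and LLM explanation step):

    # Sketch: walk a directory, flag YARA hits, hand them to the explanation step.
    import pathlib
    import yara  # yara-python

    def scan_directory(rule_file: str, target_dir: str) -> None:
        rules = yara.compile(filepath=rule_file)
        for path in pathlib.Path(target_dir).rglob("*"):
            if not path.is_file():
                continue
            matches = rules.match(str(path))
            if matches:
                # In Syd this is where the match gets auto-parsed and the local
                # LLM produces a plain-English explanation plus suggested next steps.
                print(path, [m.rule for m in matches])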
On the payment page: Fair point. It's bare-bones right now because I've been heads-down on the tool itself, but that doesn't inspire confidence if you're considering backing it. I'll add:

- Current development status (what's working today vs. what's planned)
- Roadmap with realistic timelines
- Maybe a shorter "state of the project" video or changelog
- Clearer communication on what backers get and when: you'll receive weekly or monthly updates, and obviously I'll answer any questions

Current state for transparency:

- Core features working: YARA, Nmap, Volatility, Metasploit, PCAP analysis with RAG-enhanced explanations
- 356k-chunk knowledge base indexed and searchable
- Exploit/CVE database integrated
- GUI and CLI both functional
- Still refining: UX polish, additional tool integrations, documentation
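If it helps, the RAG side is the standard pattern: embed the query, retrieve the top-k chunks from the indexed knowledge base, and feed them to the local model as context. A rough sketch of the retrieval step (assumes sentence-transformers and precomputed chunk embeddings; the actual index and backend in Syd may differ):

    # Sketch: cosine-similarity retrieval over precomputed chunk embeddings.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    def retrieve(query, chunk_texts, chunk_vectors, k=5):
        q = embedder.encode([query])[0]
        scores = chunk_vectors @ q / (
            np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q) + 1e-9
        )
        top = np.argsort(scores)[::-1][:k]
        # The returned chunks become the context block in the LLM prompt.
        return [chunk_texts[i] for i in top]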

I'm actively developing this (clearly evidenced by me responding to HN feedback at 10:38am). The idea of local LLM + security tool orchestration is genuinely useful—I use it daily—but I need to do a better job showing how it's useful and building confidence that it's not vaporware.

Thanks for being direct. This kind of feedback makes the product better. I'll update the demo and payment page this week and can ping you when it's improved if you're interested. And if you sign up on the website, that's a great way for me to keep in touch.

