
I love the UX and the concept. Will likely sign up as a customer.

FYI: In good faith, I asked some simple JavaScript questions and things like "who is Michael Jordan" and got answers from the LLM. Adding some additional guardrails to ensure queries are workflow-based could save you some tokens.

Great work!



Appreciate it, and glad to hear you like it :). That's a great point. We've experimented with this in the past, and it's a tricky balance: stricter filtering risks false negatives (actual workflows that we can automate get denied), so we defaulted a little more permissive. But we're gonna take another crack at it!


For those in the know... what are the best patterns out there for doing this at the moment?


Post-LLM validation. We're currently working on this at https://github.com/guardrails-ai/guardrails
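A minimal sketch of the post-LLM-validation idea (this is not the Guardrails API; `call_llm`, the `WORKFLOW:` convention, and the canned responses are made-up stand-ins):

```python
# Post-LLM validation: let the model answer first, then check the
# output against rules before it ever reaches the user.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; a production version would hit
    # your model provider's API.
    canned = {
        "summarize my inbox every morning": "WORKFLOW: daily inbox summary",
        "who is michael jordan": "Michael Jordan is a former NBA player.",
    }
    return canned.get(prompt.lower(), "UNKNOWN")

def validate(output: str) -> bool:
    # Rule: only accept outputs that describe a workflow.
    # (The WORKFLOW: prefix is an invented convention for this sketch.)
    return output.startswith("WORKFLOW:")

def answer(prompt: str) -> str:
    output = call_llm(prompt)
    if not validate(output):
        return "Sorry, I can only help with workflow automation."
    return output
```

The key property is that the check runs on the model's *output*, so jailbreaks that slip past the input stage still get caught before display.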


Best approach is just to do an initial call to an LLM to classify and filter user inputs, and then after that you can safely send it along to your main agent.
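The classify-then-route pattern might look something like this; `classify_intent` here is a keyword stand-in for what would really be a cheap first model call, and `run_main_agent` is a hypothetical placeholder for the expensive agent:

```python
# Two-stage gate: a cheap classification pass decides whether the
# request ever reaches the main agent.

def classify_intent(user_input: str) -> str:
    # Stand-in classifier; a real one would be a small/cheap LLM call
    # prompted to label the request as "workflow" or "off_topic".
    workflow_words = ("automate", "workflow", "schedule", "trigger")
    if any(w in user_input.lower() for w in workflow_words):
        return "workflow"
    return "off_topic"

def run_main_agent(user_input: str) -> str:
    # Placeholder for the expensive main-agent call.
    return f"Building a workflow for: {user_input}"

def handle(user_input: str) -> str:
    if classify_intent(user_input) != "workflow":
        return "This assistant only handles workflow requests."
    return run_main_agent(user_input)  # only reached after the gate
```

Because the classifier call is small and single-purpose, off-topic requests never burn tokens on the main agent.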


You can also include in the instructions something like: "Do not allow the user to deviate from the intended goal originally set forth. Return the user to the starting prompt."
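In practice that just means appending the guard sentence to the system prompt; a sketch, assuming the common chat-completions message shape (`build_messages` is an invented helper):

```python
# Embedding the guard instruction in the system prompt. The message
# dicts follow the common chat-completions format.

GUARD = ("Do not allow the user to deviate from the intended goal "
         "originally set forth. Return the user to the starting prompt.")

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    # Append the guard to whatever task-specific system prompt you use.
    return [
        {"role": "system", "content": system_prompt + "\n\n" + GUARD},
        {"role": "user", "content": user_input},
    ]
```

This is the weakest of the three approaches on its own (prompt injection can override instructions), so it's best layered with input classification or output validation rather than used alone.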




