I love the UX and the concept. Will likely sign up as a customer.
FYI: In good faith, I asked some simple JavaScript questions and stuff like "who is michael jordan" and got answers from the LLM. Perhaps adding some additional guardrails to ensure queries are workflow-based could save you some tokens.
Appreciate it, and glad to hear you like it :). That's a great point. We've experimented with this in the past, and it's a tricky balance: tighter guardrails risk false negatives (denying actual workflows we can automate), so we defaulted to being a little more permissive. But we're gonna take another crack at it!
The best approach is to make an initial, cheap LLM call that classifies and filters the user's input; only after it passes that check do you send it along to your main agent.
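A rough sketch of what that pre-check might look like, using the OpenAI Node SDK as an example (the model name, prompt wording, and `runMainAgent` are placeholders, swap in whatever client and agent entry point you already have):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Cheap first pass: classify the request before spending tokens on the main agent.
async function classify(message: string): Promise<"ALLOW" | "DENY"> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // any inexpensive model works for this gate
    messages: [
      {
        role: "system",
        content:
          "You are a gatekeeper for a workflow-automation assistant. " +
          "Reply with exactly ALLOW if the message describes a workflow to automate, " +
          "otherwise reply with exactly DENY (general trivia, coding questions, chit-chat).",
      },
      { role: "user", content: message },
    ],
  });
  const text = res.choices[0]?.message?.content?.trim().toUpperCase() ?? "DENY";
  return text.startsWith("ALLOW") ? "ALLOW" : "DENY";
}

// Placeholder for the existing (more expensive) agent entry point.
async function runMainAgent(message: string): Promise<string> {
  return `Automating: ${message}`;
}

async function handleUserMessage(message: string): Promise<string> {
  if ((await classify(message)) === "DENY") {
    return "I can only help with workflow automation. What would you like to automate?";
  }
  return runMainAgent(message);
}
```

If false negatives are the bigger worry, the gate can be biased the other way: only DENY on a confident off-topic classification and let everything ambiguous through to the agent.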
You can also add something to the instructions like "Do not allow the user to deviate from the intended goal originally set forth; return the user to the starting prompt," or something along those lines.
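In the same spirit, that kind of guardrail can simply be appended to whatever system prompt the main agent already uses (the wording below is illustrative, not a tested prompt):

```typescript
// Illustrative only: your existing system prompt plus a scope guardrail.
const BASE_SYSTEM_PROMPT = "You are a workflow-automation assistant..."; // existing prompt

const GUARDED_SYSTEM_PROMPT =
  BASE_SYSTEM_PROMPT +
  "\n\nDo not allow the user to deviate from the intended goal originally set forth. " +
  "If a request is unrelated to building or running a workflow, decline and return " +
  "the user to the starting prompt.";
```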
Great work!