"Roughly, RAG is runtime prompt engineering where you build a system to dynamically add relevant things to your prompt before you ask the agent for an answer."
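That one-sentence definition can be sketched in a few lines. This is a toy illustration of the idea only — the corpus, the word-overlap scoring function, and the prompt template are all invented here; real systems use embedding-based retrieval and then hand the assembled prompt to a model:

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query,
# then splice them into the prompt before asking the model.
# (Toy keyword-overlap scoring stands in for real embedding search.)

def score(query: str, doc: str) -> int:
    """Naive relevance score: number of words shared between query and doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Pick the top-k docs for the query and prepend them as context."""
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented example corpus, purely for illustration.
corpus = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refunds require the original receipt.",
]
prompt = build_prompt("How do refunds work?", corpus)
print(prompt)
```

The "runtime prompt engineering" part is the `build_prompt` step: the system, not the user, decides what ends up in the prompt, fresh on every request.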
Have you considered a land-and-expand rollout, where you focus on getting an active community going in one or two cities before you expand? Otherwise, folks from underserved cities (which will be most of them) will be disappointed when they try to find interesting meetups near them using your site. Just a thought!
Hey, thanks! That's exactly what I'd planned - didn't realise there was a term for it heh!
Still very much focusing on feedback and iterating so group locations are a bit all over the place just now (one of the first groups to sign up was a Ruby meetup in Kathmandu haha) - but yeah, I do plan on coordinating it a bit more when the time comes!
"But the impact has actually been fantastic. Some highlights:
- ~$1m of sales pipeline generated in 3 weeks, ~$100k already closed
- ~10% of the free customers converted to paid
- ~50% of the paying customers converted to the new pricing model"
Following the logic in the article: it's not actually a good outcome for the business when "Your QA people are the only ones charged with being an organizational conscience on the behalf of your users." To your earlier point, quality is something the entire team has to commit to.
Everyone says they hold Quality in high regard, but then when it comes time to do the work, everyone takes every escalation path to escape the QA group's mandate.
Everywhere I've been save a military shop, the political lip service is paid, but everyone by default works on getting an exec waiver on QA pushback rather than actually addressing fundamental Quality issues.
So while the advertisement in question is technically true, the way it shakes out in reality is you either have Quality valued as a political factor by those at the top, or your entire org is built like the Titanic with watertight compartments (effective Quality Controls) only up to C deck.
At that point it just takes one good iceberg of a client/feature/bad product... And we all know how that goes.
That was not an article. It was an advertisement. Logic of advertising is distinct from normal everyday logic.
GP is making a point based on experience that maps well to my own. The entire team has to be committed to quality, and that includes stakeholders, who express their commitment by treating Q/A as a first-class element of the product development process. The ideal setup imho requires a somewhat adversarial relationship between devs and Q/A -- think Red Queen in genetic algos -- and that the Q/A team and product team do -not- have the same manager.
Treat Q/A as 'internal affairs' in a police department. A fundamental necessity as developers also have a 'code of silence' regarding software misdeeds.
tl;dr: remember, even lousy underwear gets inspected by Q/A.
>tl;dr: remember, even lousy underwear gets inspected by Q/A.
And remember, if your QA group are dropping like flies because they can't in good conscience condone signing off on the output of a lousy underwear assembly line, that too is strong signal.
100%; they're related. It's easy to fall into the trap of "move fast and break things", then overreact and make shipping slow, which sucks for everyone involved. Getting the balance right when you're early is hard, as tooling choices stick around and "you don't have time". Def worth getting right.
Playwright is excellent too. Playwright is much more forgiving with tests that need to hit different origins (common with enterprise apps) and multiple browsers in the same test (to verify collaborative editing etc). If you're considering Cypress, I'd highly recommend also giving Playwright a look (https://playwright.dev)
Agreed; also, even with human testing, that nuance can be a double-edged sword. Testing, and specifically QA, is hard - as today it's mostly about the "assurance" part: does one feel assured enough to ship this, which is subjective.
Totally agree that automation will never fully replace the value of human-powered testing. (Though it is great for the rote regression-testing stuff. The "drudgery", as you put it.)
Isn't the problem with relying too much on unit and integration tests that they don't consider the end-to-end app experience? (Which, in my mind, is what ultimately matters when customers think of "quality".)
IMHO, yep - it's a balance, but the great thing is they can be quick to run, and easy to run locally, which is great for giving developers fast feedback. Unit testing is unlikely to catch all end-user-level issues though; traditional automation too, which is why human testing is still valuable today.
It's hard to get them to provide details to the engineers. I haven't seen many who write user stories. Asking them to build no-code automated test cases, that's ... ambitious.
When I was a PM, I was lucky to have a big, talented QA team, but I still knew I'd have to do a "smoke test" myself after every major feature release. I cared the most, and I knew the most about the intricacies of the product.
Also: Bliss is when you as a product owner get to a place where you trust the QA team you've worked with so closely that you only have to do some basic tests a few hours after the release.
I've been on teams where the "siloed" QA model seemed to work pretty well -- we found a decent balance between test coverage and frequency of releases.
But this was at a cash-rich startup that had lots of money to spend on making sure we had plenty of QA headcount. That seems to be the exception, rather than the rule. Lots of startups I talk to are quite constrained in terms of dedicated QA, so the argument for empowering product managers to own quality does make some sense to me.