It looks like Phind queues a query for its machine learning models. I submitted the following query twice. The first time, Phind gave Google-like answers that only discussed Guava. The second time, though, it gave good answers on using popular Go libraries, with sample code.
Both answers were run through the LLM. The variability comes from different web links being returned and from the way we sample answers from the LLM. We're working on making it more consistent.
Consistency has to be a tough problem to solve for a service like this, since randomness in the choice of each token is part of the magic sauce that makes LLMs work.
It is definitely a hard problem. There are ways to ensure consistency, such as using beam search decoding, which is deterministic. But that comes with other tradeoffs regarding answer quality.
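To make the sampling-vs-determinism point concrete, here's a minimal toy sketch (not Phind's actual pipeline): temperature sampling over a fake next-token distribution can pick different tokens on different runs, while greedy decoding, like beam search, always returns the same token. The `logits` values are made up for illustration.

```python
import math
import random

# Toy next-token scores (logits) -- invented for illustration, not a real model.
logits = {"the": 2.0, "a": 1.5, "cache": 0.5}

def softmax(scores, temperature=1.0):
    """Convert logits to probabilities; temperature scales the randomness."""
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample_token(scores, temperature=1.0):
    """Temperature sampling: different runs can return different tokens."""
    probs = softmax(scores, temperature)
    return random.choices(list(probs), weights=list(probs.values()))[0]

def greedy_token(scores):
    """Greedy (deterministic) decoding: always the highest-scoring token."""
    return max(scores, key=scores.get)

print(greedy_token(logits))       # deterministic: same token every run
print(sample_token(logits, 0.8))  # stochastic: may differ run to run
```

Greedy decoding (and deterministic beam search) trades that per-token randomness for repeatability, which is the quality tradeoff mentioned above.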
https://phind.com/search?q=How+do+I+use+a+cache+that+is+like...