I'm currently using a self-hosted instance of Lightdash connected to a dbt project, and I can see this being really efficient for data exploration by business users.
Thank you. We're definitely looking to get to a point where you don't need to jump to a Jupyter notebook for simple analyses, especially things that are hard to do in SQL (e.g., a basic linear regression).
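To make the "hard in SQL, easy in a notebook" point concrete: here's a minimal sketch of an ordinary least squares fit in plain Python. The data points are made up purely for illustration; a real workflow would pull them from the warehouse.

```python
from statistics import mean

# Hypothetical data: e.g., ad spend (xs) vs. signups (ys) per week.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.2, 5.9, 8.1, 9.8]

# Ordinary least squares for y ~ slope * x + intercept:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
x_bar, y_bar = mean(xs), mean(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)
intercept = y_bar - slope * x_bar

print(f"slope={slope:.2f}, intercept={intercept:.2f}")  # slope=1.93, intercept=0.23
```

Three lines of arithmetic in Python, but expressing the same covariance/variance ratio in SQL typically means a self-join or window functions, which is exactly the friction being described.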
Happy to chat at prasoon [at] withpretzel [dot] com if you need any integration help!
Aren't profit margins the main driver of the decisions that led to this post in the first place?
If revenue were all they cared about, I don't think what's happening would have come to fruition.
I've tested both π and g, and while they both work well, g results in far fewer disk full errors. I've heard c works even better, though I haven't tried it yet.
> I've tested both π and g, and while they both work well, g results in far fewer disk full errors. I've heard c works even better, though I haven't tried it yet.
Good to know. FWIW, i should also be avoided. It's tempting to use, since most programs use it as a counter, so it /should/ standardize the log file sizes. But in practice it's very tricky to get a definitive disk space requirement with it.
Not sure about "the tech is well understood," given that the LLM itself is a black box with regard to how it works internally; even Microsoft researchers admit it ("Sparks of Artificial General Intelligence: Early experiments with GPT-4").
In this context I think we're talking about a moat, i.e., private data that they can leverage for personalized experiences, similar to what Microsoft announced with their Office 365 Copilot.
Quite interesting!