Organizations adopting AI is the biggest problem that businesses are facing right now. Even at Ozonetel I face this problem day in and day out. The number of employees who really use AI to its full potential is minuscule; I can count them on my fingertips. We need to overcome this in the right way, or we will face the same problems we faced during the Industrial Revolution.
Got pissed off with too many ARR manipulations and AI startups announcing manipulated revenue numbers (best day * 365 as ARR, etc.). These startups are not only messing up the AI ecosystem with their non-standard numbers, they are also messing up the SaaS ecosystem by co-opting SaaS metrics. Now SaaS startups are expected to show the same scale, even though the AI startups haven't actually achieved that scale either. So here is what I propose: AI startups should use their own new vocabulary. Since everyone is vibe coding, I suggest VRR, Vibe Revenue Run-rate :)
I have even provided a formula (all scientific and all) and also a checklist for the VCs.
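To make the satire concrete, here is a hypothetical sketch (all numbers invented) of how annualizing your single best day inflates the run-rate compared to an honest average:

```python
# Hypothetical daily revenue for one week, with a single viral spike.
# All numbers are made up purely for illustration.
daily_revenue = [1_000, 1_200, 900, 25_000, 1_100, 1_000, 1_300]

# "Vibe Revenue Run-rate": take your best day ever and multiply by 365.
vrr = max(daily_revenue) * 365

# A more honest run-rate: annualize the average over the period.
honest_arr = sum(daily_revenue) / len(daily_revenue) * 365

print(f"VRR:        ${vrr:,.0f}")        # $9,125,000
print(f"Honest ARR: ${honest_arr:,.0f}")  # $1,642,500
```

Same week of data, roughly a 5.5x gap, depending entirely on which number you choose to annualize.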
You can disagree with the commons definition, and that's fine. But the point I wanted to make was about the exploitation. The open Internet was built with a code of sharing, and now they are trying to put walled gardens around all that knowledge. Let's remove Marx from the equation if that becomes a bone of contention. But we as a society need to come up with better dialogues to decide how we will treat our creators and how we will deal with the AI copy machine. We cannot expect that profit mongers will do the right thing.
I understand your point, and despite what my (hastily-typed) critique shows, I find there are valuable kernels of truth in all types of ideas. The walled gardens for AI are more about recouping the cost of model training, and there are currently no incentives for open sourcing some types of models. The new tool has knowledge spanning more than one domain, so its content is different from its source material, more or less. So while creators have a point, they lose it once it turns out the tool is capable of multi-faceted information synthesis. But what's interesting to me is that creators are not precluded from using AI tools to develop more content, and that makes all the difference.
That said, I think it would be better if more models were open sourced, or if FOSS non-profits would buy GPUs and start their own model training programs based on the currently released open source models. The commons argument doesn't apply here if there are multiple open source models, containing hundreds of hours of GPU training which someone else has already done, that any open source organization can pick up and train further on whatever content is of interest. Some orgs have tried that already but didn't gain traction, due to poor marketing and lack of funding (see https://en.wikipedia.org/wiki/EleutherAI). Maybe if there were government subsidies to encourage open source model releases, or non-profit funding for setting up and running GPU farms for training models which could be used by everyone, then this type of organizational behavior would become more productive.
The question here is: is social media addictive, and is it harmful? If we have enough evidentiary proof, then yes, it should be banned, just as we do for alcohol or cigarettes.
We also ban porn for kids, and we don't need any ID proofs to implement that ban. So we have a precedent. It's not perfect, but society knows it's bad, so government, families, and schools come together and implement the ban. No need for IDs or for giving more control to the government.
Well, not exactly earnings calls in the classical sense, but haven't you heard about these startups announcing how they have scaled to $100 million in 3 months, etc.? Maybe revenue calls every quarter.
This will need a separate blog post. But when you give something away for free, you will run out of that resource. So yes, companies have too few GPUs to give their services away for free, but too many GPUs for their paid services.
Why would you go inside a chat box, try to force-fit applications, show them in weird ways, and then finally link out to the actual application, instead of just putting a chat box inside the application, which is the accepted way?
If I had a human assistant, I'd ask them to book my flight. The chat box is your window to your AI assistant. Maybe this new assistant hasn't earned your trust yet, but it makes sense that, trust aside, you'd ask your assistant to do whatever they could do for you.
Years ago (in the age of flip phones, think pre-2001) I worked at a bank.
When we launched our mobile banking platform, one of the PMs there swore up and down that we should be piloting banking by text message. He was fabulously wrong at the time, yet in the end got a lot of things right.
There are a lot of applications that could fit in a text box, provided that you're not doing the work but rather delegating it.