I understand your point, and despite what my (hastily typed) critique suggests, I find valuable kernels of truth in all kinds of ideas here. The walled gardens around AI have more to do with recouping the cost of model training, and there are currently no incentives for open sourcing some types of models. A trained model synthesizes knowledge from more than one domain, so its output differs, more or less, from any single piece of source material. So while creators have a point, it weakens once the tool turns out to be capable of multi-faceted information synthesis. But what's interesting to me is that creators are not precluded from using AI tools to develop more content themselves, and that makes all the difference.
That said, I think it would be better if more models were open sourced, or if FOSS non-profits bought GPUs and started their own training programs built on the open source models that have already been released. The commons argument doesn't apply here: there are multiple open source models embodying hundreds of hours of GPU training that someone else has already done, and any open source organization can pick one up and continue training it on whatever additional content interests them. Some organizations have already tried this, e.g. EleutherAI (https://en.wikipedia.org/wiki/EleutherAI), but haven't gained much traction due to poor marketing and a lack of funding. With government subsidies to encourage open source model releases, or non-profit funding to set up and pay for GPU farms whose trained models could be used by everyone, this kind of organizational effort would become much more productive.