I don't understand Google's move. Google uses Android as a platform to collect virtually everyone's personal info and build profiles that benefit its ad business. If there is even a tiny chance that people (or a sizable population) might walk away from the platform, it's not worth the risk.
It's Google's response to the remedies required by the antitrust ruling last August. The timing is explained by the US Supreme Court's Oct 6 decision to deny Google's request to pause implementation of those remedies.
It's legit. It just gives people the impression that it is sabotaging the community. I understand why they do it (the more inconvenience, the more likely people are to pay), but I wish companies were more thoughtful about open-sourcing code and how to differentiate their enterprise offerings from the beginning, rather than playing tricks after gaining traction.
It's about structure. A long function unfortunately looks "flat" at first glance. Even when there is inherent structure, it usually burns a lot of brain power for a human to abstract that structure out of something flat. Flatness is an even more acute issue for an LLM. I think in the LLM era it's more important than ever to keep things short and structured.
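A contrived sketch of what I mean (all names made up): both versions compute the same thing, but the second carries its structure in the function names, so neither a human nor an LLM has to reconstruct it from a flat block.

```python
from dataclasses import dataclass

@dataclass
class Item:
    price: float
    qty: int

# Flat: one block, the reader must infer the phases (validate, price, discount).
def order_total_flat(items, coupon=None):
    if not items:
        raise ValueError("empty order")
    total = 0.0
    for item in items:
        total += item.price * item.qty
    if coupon == "SAVE10":
        total *= 0.9
    return round(total, 2)

# Structured: same logic, but each phase has a name, so the shape is visible
# at a glance to a human reviewer or to an LLM reading the file.
def validate(items):
    if not items:
        raise ValueError("empty order")

def subtotal(items):
    return sum(item.price * item.qty for item in items)

def apply_coupon(total, coupon):
    return total * 0.9 if coupon == "SAVE10" else total

def order_total(items, coupon=None):
    validate(items)
    return round(apply_coupon(subtotal(items), coupon), 2)

print(order_total([Item(3.5, 2)], coupon="SAVE10"))  # 6.3
```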
sqlite is embedded. I understand that there might be scenarios in which multi-threaded sqlite is beneficial when an application has many concurrent writers. But taking a look at the company's website makes me wonder what the project's motivation is. The company offers a database service, which is a completely different scenario from embedded dbs. If the intention is to offer a cloud service, evolving from "sqlite" seems odd. The only benefit I can think of is that the new db service helps existing sqlite users migrate. My issue is that if I choose sqlite to store data locally in my browser or my cell phone, why do I suddenly want to store it in the cloud?
> My issue is that if I choose sqlite to store data locally in my browser or my cell phone, why do I suddenly want to store it in the cloud?
I don't think those solutions are necessarily that bad. There was one the other week that offered a means of guaranteed sync of a sqlite file to the cloud. It's nice to have the infra to let users hop between devices and have backups. What's weird to me is trying to magic it into a performant multi-client db, which the underlying technology was never designed to be.
I agree with your point about finding a new standard for what developers should do given LLM coding. Something that mattered before may not be relevant in the future.
My experience so far boils down to: APIs, function descriptions, overall structure, and testing. In other words, ask a dev to become an architect who defines the project and lays out the structure. As long as the first three points are well settled, code-gen quality is pretty good. Many people believe the last point (testing) should be done automatically as well. While LLMs may help with unit tests or tests of macro structure, I think people need to define high-level, end-to-end testing goals from a new angle.
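To make that split concrete, here is a hedged sketch (the function and test are made up): the signature, description, and end-to-end test are the human's job; the marked body is what I'd let the model generate.

```python
def normalize_emails(raw: list[str]) -> list[str]:
    """Lowercase, strip whitespace, and drop duplicates while keeping order."""
    # -- implementation left to the code generator --
    seen, out = set(), []
    for addr in raw:
        cleaned = addr.strip().lower()
        if cleaned and cleaned not in seen:
            seen.add(cleaned)
            out.append(cleaned)
    return out

# High-level, end-to-end goal the human still defines by hand.
def test_normalize_emails_end_to_end():
    raw = [" Alice@Example.com", "alice@example.com ", "", "bob@example.com"]
    assert normalize_emails(raw) == ["alice@example.com", "bob@example.com"]

test_normalize_emails_end_to_end()
```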
ORMs often produce horrible queries that are impossible for humans to digest. I think there are two factors. First, queries are constructed incrementally and mechanically. There is no overview for the generator to understand what developers want to compute, and no channel for developers to specify that intention. I anticipate this will change with AI in the near future. Second, ORMs model data following dogmatic data normalization, so the resulting queries are destined to be horrible. I believe people should take a moment to look at their data, think about what computations they want to do on top of it, estimate how expensive those may be, and finally settle on a reasonable overall model. Ask the ORM (or maybe AI) to help with constructing and sending queries and assembling results, but do not delegate data modeling. With the right data modeling that fits the computations, queries can't be that bad.
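A hedged illustration (SQLAlchemy with an in-memory sqlite database; the User/Order models are made up) of the difference between building queries mechanically per object and stating the intended computation up front:

```python
from sqlalchemy import create_engine, select, func, ForeignKey
from sqlalchemy.orm import (
    DeclarativeBase, Mapped, mapped_column, relationship, Session,
)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    orders: Mapped[list["Order"]] = relationship(back_populates="user")

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    amount: Mapped[float] = mapped_column()
    user: Mapped[User] = relationship(back_populates="orders")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, orders=[Order(amount=10.0), Order(amount=5.0)]))
    session.commit()

    # Incremental/mechanical: one SELECT for the users plus one lazy SELECT
    # per user's orders -- the classic N+1 pattern nobody intended.
    totals = {
        u.id: sum(o.amount for o in u.orders)
        for u in session.scalars(select(User)).all()
    }

    # Intention stated up front: a single aggregate the database can plan well.
    stmt = (
        select(User.id, func.sum(Order.amount))
        .join(Order, Order.user_id == User.id)
        .group_by(User.id)
    )
    totals = dict(session.execute(stmt).all())
    print(totals)  # {1: 15.0}
```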
If you take a look at how storage is billed in the cloud, you'll see a huge difference. Networked storage, e.g., EBS, provides durability and survives VM restarts, but it is billed on IOPS. 200K IOPS is a piece of cake for today's NVMe drives, yet a 200K-IOPS EBS volume easily costs you thousands per month. High-end NVMe devices, unfortunately, are all instance-level storage, which means the data is gone if you shut down your VM.
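A back-of-the-envelope sketch; the tier prices below are rough approximations of provisioned-IOPS pricing and only meant to show the order of magnitude, so check current numbers before trusting them:

```python
# Illustrative tiered pricing: (IOPS in tier, $ per provisioned IOPS-month).
# These are approximate io2-style figures, not a quote.
TIERS = [
    (32_000, 0.065),
    (32_000, 0.046),
    (float("inf"), 0.032),
]

def monthly_iops_cost(iops: int) -> float:
    remaining, cost = iops, 0.0
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"${monthly_iops_cost(200_000):,.0f}/month")  # ~$7,904 with these example prices
```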
Looks like AGPL is the new norm? Redis switched to AGPL too. SSPL is also common on the server side. Curious how you view AGPL vs. SSPL and why you chose the former.
SSPL is appealing for business but it is not open source. That is a deal breaker for us. We want to remain open source under a license that is recognized as open source.
SSPL is open source by every definition. The OSI rejected it, but their explanation for why boils down to "neener neener" (it's not backed by any facts). Most other organizations haven't bothered to take a position because no notable software uses it and it's not worth the hassle to evaluate (mongodb and redis have better alternatives so nobody cares about them).
Late addition: the OSI is a consortium of software and cloud companies, including the very ones whose business model the SSPL ruins. They aren't neutral, and we probably shouldn't let them be the arbiters of what counts as open source.
Foundations run in a more non-profit, community-oriented way, such as the FSF, EFF, and Debian, haven't made any significant comment. Debian has excluded SSPL software, but its criteria for inclusion are stricter than simply "is it open source?", and it said it was simpler to replace those packages with their superior non-SSPL equivalents than to actually tackle the question.
They're switching because they saw how the source-available rugpull failed elsewhere and spawned forks that were sometimes even more successful, as happened with Redis and Valkey. SSPL is not open source, so it's not something I'd ever choose.