Having used Forgejo with AGit now, IMO the PR experience on GitHub is not great when trying to contribute to a new project. It's just unnecessarily convoluted.
It's just how straightforward it is. With GitHub's fork-then-PR approach I would have to fork, clone, add a remote for my fork to my local clone, push to said remote, and open the PR.
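Spelled out as commands, that fork-then-PR dance looks roughly like this (repository and user names are placeholders):

```shell
# GitHub fork-then-PR workflow (all names are placeholders).
# 0. Fork "upstream/project" in the web UI first.
git clone https://github.com/upstream/project.git
cd project
git remote add myfork https://github.com/myuser/project.git

# Work on a branch and push it to the fork, not to upstream.
git switch -c my-fix
# ... edit files ...
git add -u
git commit -m "Fix the thing"
git push -u myfork my-fix
# Then open the pull request in the web UI.
```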
With agit flow I just have to clone the repository I want to contribute to, make my changes, and push (to a special ref, but still just push to the target repo).
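With AGit (as documented for Forgejo/Gitea) the same contribution is a single push to a special `refs/for/<branch>` ref; the `topic` and `title` push options are optional PR metadata. Names below are placeholders:

```shell
# Contributing with AGit (Forgejo/Gitea): no fork, no extra remote.
git clone https://codeberg.org/example/project.git
cd project

git switch -c my-fix
# ... edit files ...
git add -u
git commit -m "Fix the thing"

# Push to the special "refs/for/<target-branch>" ref; the push options
# set the PR's topic branch and title on the server.
git push origin HEAD:refs/for/main -o topic=my-fix -o title="Fix the thing"
```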
I made some small contributions to Guix back when they were still using email for patches, and that (i.e. sending patches directly to upstream) already felt more natural than what GitHub promotes. And AGit feels like the git-native interpretation of this email workflow.
I said e.V., not EV. Codeberg is an e.V., i.e. a "registered association" in Germany. I am not actually sure if you could technically buy an e.V., but I am 100% certain that all of the Codeberg e.V. members would not take kindly to an attempt at a hostile takeover from Microsoft. So no, buying Codeberg is not easier than DDoSing them.
What do you mean by "orgs", and what do you mean by "the codeberg"?
Sure, they could try to bribe the active Codeberg e.V. members into changing its mission or disbanding the association entirely, but they would need a 2/3 majority at a general assembly, while only the people actively involved in the e.V. and/or one of its projects can get voting rights. I find that highly unlikely to succeed.
Are there standards committees with 786 voting members, of which you would have to convince at least 2/3 to betray the ideals of the association they chose to actively take part in to get the association to disband or otherwise stop it from pursuing its mission?
~800 members? That's great to hear, actually. I like Codeberg and want them to succeed and be protected from outside influence.
That said, I believe my comparison checks out. Having ~800 members is a useful moat, and will deter actors from harming Codeberg.
OTOH, the mechanism can still theoretically work. Of course Microsoft won't try something that blatant, but if the e.V. ever loses this moat, there are levers Microsoft could, and likely would, pull as Codeberg gets more popular.
I think another big "moat" is actually that Codeberg is composed of natural people only (those with voting rights, anyway). Real people have values, and since they have to actively participate in Codeberg in some way to get voting rights, those values are probably aligned with Codeberg's mission. I don't actually know the details of the standardization process you cite, but I think this is a big difference from it.
Additionally, from skimming the bylaws of Codeberg I'd say they have multiple fail-safes built in as additional protection. For one, you can't just pay ~1600 people to sign up and crash a general assembly: every membership application has to be approved first. They also ask for "support [for] the association and its purpose in an adequate fashion" from their members, and include mechanisms to kick out people who violate this or otherwise act against Codeberg's interests, which such a hostile attack would surely qualify as.
Of course it's something to stay vigilant about, but I think Codeberg is well positioned with regard to protecting against a hostile takeover and shutdown situation, to the point that DDoS is the much easier attack against them (as was the initial topic).
I lost my laptop's SSD (as in: no longer mountable; I only got data out of it with some rescue tools) at some point between 2017 and 2020, I don't remember exactly when. I've also had a weird experience where a btrfs filesystem formatted on my desktop PC was not mountable on a Raspberry Pi, and vice versa one formatted on the Pi was not mountable on the desktop. That didn't instill confidence either.
On the other hand, I've been running a btrfs RAID1 on two HGST datacenter drives for a few years and haven't had issues with that.
A good package manager, e.g. GNU Guix, lets you define a reproducible environment of all of your dependencies. This accounts for all of those external headers and shared libraries, which will be made available in an isolated build environment that contains only them and nothing else.
Eliminating nondeterminism from your builds might require some thinking, there are a number of places this can creep in (timestamps, random numbers, nondeterministic execution, ...). A good package manager can at least give you tooling to validate that you have eliminated nondeterminism (e.g. `guix build --check ...`).
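As a toy illustration of what `guix build --check` automates (this is not Guix itself, just a sketch): run the "build" twice and compare output hashes; any difference means nondeterminism crept in.

```shell
# Toy sketch of a reproducibility check: run the "build" twice and
# compare output hashes; a mismatch means the build is nondeterministic.
build() {
  # A nondeterministic build step: it embeds the current timestamp
  # (plus a few random bytes so the toy is reliable within one second).
  { date +%s; head -c 8 /dev/urandom; } > out.bin
}

build; first=$(sha256sum out.bin | cut -d' ' -f1)
build; second=$(sha256sum out.bin | cut -d' ' -f1)

if [ "$first" = "$second" ]; then
  echo "reproducible"
else
  echo "nondeterminism detected"
fi
```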
Once you control the entire environment and your build is reproducible in principle, you might still encounter some fun issues, like "time traps". Guix has a great blog post about some of these issues and how they mitigate them: https://guix.gnu.org/en/blog/2024/adventures-on-the-quest-fo...
As well as the toolchain used to compile your toolchain, through multiple levels, and all compiler flags along the path, and so on, down to some "seed" from which everything is built.
Re: delayed security fixes, if a vulnerability is not yet publicly known and there is no indication that it is actively abused it is common practice to schedule fixes and give advance notice of them to have administrators be prepared to update promptly. The fact that the vulnerability was leaked beforehand is unfortunate, but Forgejo handled it well with rescheduling their release in response.
Re: license change, hard forking, and new features: my understanding is that Gitea wasn't very open to contributions coming from Forgejo. The hard fork seems to be a consequence of that. Yes, there used to be weekly cherry-picks; I assume they stopped exactly because Forgejo and Gitea diverged too much and the cherry-picks became too much of a maintenance burden. Yes, this means Gitea has gotten features that aren't present in Forgejo since then. But you miss the point of the hard fork if you count this as a negative: Forgejo is deliberately diverging from Gitea now. Cooperation didn't work out, so they are no longer a superset of Gitea, but an entirely separate project. And as such they don't have more maintenance burden than Gitea itself.
And Forgejo definitely does not lack development power as its own now-independent project. They have features themselves that Gitea doesn't have. One notable example that comes to mind is storage quotas, but there are many more.
> The ideologically forked Forgejo made some license changes and hard fork decisions that increased the maintenance burden even more, resulting in missing upstream features and decreased security. Forgejo is more busy managing ideals, than creating software.
And from other comments:
> When deciding which software fork to pick, it is about the development power.
> In my view they don't have the development to keep up with Gitea.
How do you come to the conclusion that Gitea has more development power? Looking at the Insights / Activities overview of each repository, Forgejo actually had slightly more authors and more contributions over the last month. Acknowledging that this fluctuates, I'd estimate that both projects are similarly active.
Also, Forgejo is actually dogfooding its development, which is much more reassuring than what Gitea does IMO.
I responded to that comment, but it does not address why you think Forgejo is lacking development power. IMO it rather shows a lack of understanding on your part of what Forgejo is today. Since the hard fork it is no longer a superset of Gitea, but its own independent project. And as such it has at least comparable activity to Gitea, which is reflected e.g. in the unique features that Forgejo has, but Gitea doesn't.
As I've mentioned elsewhere [0], sometimes there's just fake outrage trying to associate drama or a general feeling of disapproval with a particular project.
To my knowledge git-lfs is only really designed to store your large files on a central server. I think it also uses its own out-of-band (from the perspective of git) protocol and connection to talk to that server. So it doesn't work with a standard ssh remote, and breaks git's distributed nature.
For an actually distributed large file tracking system on top of git you could take a look at git-annex. It works with standard ssh remotes as long as git-annex is installed on the remote too (it provides its own git-annex-shell instead of git-shell), and has a bunch of additional awesome features.
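A minimal git-annex session might look like this (host and file names are hypothetical; the remote needs git-annex installed so that its git-annex-shell can answer instead of git-shell):

```shell
# Hypothetical host and file names throughout.
git clone ssh://git.example.org/~/data-repo.git
cd data-repo
git annex init "my laptop"

# Track the file's content with git-annex; git itself only records
# a small pointer to the content stored under .git/annex.
git annex add big-dataset.tar
git commit -m "Add dataset (annexed)"

# Exchange both the git history and the annexed content with remotes.
git annex sync --content
```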
Requiring a fork to open pull requests as an outsider to a project is in itself an idiosyncrasy of GitHub that could be done without. Gitea and Forgejo for example support AGit: https://forgejo.org/docs/latest/user/agit-support/.
Nevertheless, to avoid ambiguity I usually name my personal forks on GitHub gh-<username>.
No, it's a normal feature of Git. If I want you to pull my changes, I need to host those changes somewhere that you can access. If you and I are both just using ssh access to our separate Apache servers, for example, I am going to have to push my changes to a fork on my server before you can pull them.
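Concretely, with two people each hosting their own repository, that exchange might look like this (host and path names are made up):

```shell
# Alice pushes her changes to her own server:
git clone ssh://git.alice.example/srv/git/project.git
cd project
# ... commit changes ...
git push origin main

# Bob then pulls them straight from Alice's server, no shared forge needed:
git remote add alice ssh://git.alice.example/srv/git/project.git
git fetch alice
git merge alice/main
```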
And of course in Git every clone is a fork.
AGit seems to be a new alternative where apparently you can push a new branch to someone else's repository that you don't normally have access to, but that's never guaranteed to be possible, and is certainly very idiosyncratic.
That's backwards. On GitHub every fork is just a git clone. Before GitHub commandeered the term, "fork" was already in common use and it had a completely different meaning.
Arguably the OG workflow to submit your code is `git send-email`, and that also doesn't require an additional third clone on the same hosting platform as the target repository.
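That workflow is still supported by stock git (the list address is a placeholder, and `git send-email` needs SMTP settings configured in your git config):

```shell
# Turn the commits on top of origin/main into mbox-formatted patches.
git format-patch origin/main -o outgoing/

# Mail them to the project's list (placeholder address; send-email
# requires sendemail.* SMTP configuration).
git send-email --to=project-devel@example.org outgoing/*.patch
```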
All of those workflows are just as valid as the others; I was just pointing out that the way GitHub does it is not the only way it can be done.
> Requiring a fork to open pull requests as an outsider to a project is in itself a idiosyncrasy of GitHub that could be done without. Gitea and Forgejo for example support AGit: https://forgejo.org/docs/latest/user/agit-support/.
Ah yes, I'm sure the remote being called "origin" is what confuses people when they have to push to a refspec with push options. That's so much more straightforward than a button "create pull request".
As far as I'm concerned the problem isn't that one is easier than the other. It's that in the GitHub case it completely routes around the git client. With AGit plus Gitea or Forgejo you can either click your "create pull request" button, or make a pull request right from the git client. One is necessarily going to require more information than the other to reach the destination...

It's like arguing that instead of having salad or fries on the menu with your entree they should only serve fries.
IMO "Advent of Code" only determines the timeframe in which it happens, not the number of puzzles it must contain. It could just as well be four puzzles, one for each Sunday of Advent, or any other number, as long as they are released within those roughly four weeks before Christmas.
Eh, the implication has always been that it's an Advent calendar where you open one door per day until Christmas Eve - just with code riddles instead of chocolate.