You have already gotten used to loading multiple megabytes just to display a static landing page. You'll get used to this as well... just give it some time :-D
It is never wrong to be considered untrusted. It is only occasionally right to be considered trusted. Especially in zero-risk relationships, that is the default on the anonymous internet.
The underlying idea is admirable, but in practice this could create a market for high-reputation accounts that people buy or trade at a premium.
Once an account is already vouched, it will likely face far less scrutiny on future contributions — which could actually make it easier for bad actors to slip in malware or low-quality patches under the guise of trust.
That's fine? I mean, this is how the world works in general. Your friend X recommends Y. If Y turns out to suck, you stop listening to recommendations from X. If Y happens to be spam or malware, maybe you unfriend X or revoke all of his/her endorsements.
It's not a perfect solution, but it is a solution that evolves towards a high-trust network because there is a traceable mechanism that excludes abusers.
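To make that concrete, here's a toy sketch of what a traceable endorsement graph could look like (names and structure are purely illustrative, not the actual vouch implementation): endorsements are edges, and revoking an endorser also cuts off everyone who was only reachable through them.

    # Toy endorsement graph; purely illustrative, not the real vouch implementation.
    from collections import defaultdict

    class VouchGraph:
        def __init__(self, roots):
            self.roots = set(roots)          # accounts the maintainer trusts directly
            self.vouches = defaultdict(set)  # voucher -> accounts they endorsed

        def vouch(self, voucher, vouchee):
            self.vouches[voucher].add(vouchee)

        def revoke_all_from(self, voucher):
            # "unfriend X": none of X's endorsements count anymore
            self.vouches.pop(voucher, None)
            self.roots.discard(voucher)

        def is_trusted(self, account):
            # trusted only if reachable from a root through surviving endorsements
            seen, stack = set(), list(self.roots)
            while stack:
                cur = stack.pop()
                if cur == account:
                    return True
                if cur in seen:
                    continue
                seen.add(cur)
                stack.extend(self.vouches[cur])
            return False

    g = VouchGraph(roots=["maintainer"])
    g.vouch("maintainer", "X")
    g.vouch("X", "Y")
    print(g.is_trusted("Y"))   # True
    g.revoke_all_from("X")     # Y turned out to be spam, so X's endorsements go
    print(g.is_trusted("Y"))   # False

The point is simply that every trust decision has a path you can walk back when something goes wrong.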
That's true. And this is also actually how the global routing of the internet works (the BGP protocol).
My comment was just to highlight a possible set of issues. Hardly any system is perfect. But it's important to understand where the flaws lie so we are more careful about how we go about using it.
BGP, for example, a protocol that makes the entire internet work, also suffers from similar issues.
Amazing idea - absolutely loving vouch.
However, as a security person, this comment immediately caught my attention.
A few things come to mind (it's late here, so apologies in advance if they're trivial and not thought through):
- Threat actors compromising an account and using it to vouch for another account. I have a "hunch" it could fly under the radar, though admittedly I can't see how it would be different from any other rogue commit by the compromised account (hence the hunch).
- Threat actors creating fake chains of trust, working the human factor by creating fake personas and inflating stats on GitHub to build (fake) credibility. (Like how the number of likes on a video can sway other people: I've noticed I may not like a video if it has a low count when I would have if it had millions. Could that be applied here somehow with the threat actor's inflated repo stats?)
- Can I use this to perform a Contribution-DDOS against a specific person?
The idea is sound, and we definitely need something to address the surge in low-effort PRs, especially in the post-LLM era.
Regarding your points:
"Threat Actors compromising an account..." You're spot on. A vouch-based system inevitably puts a huge target on high-reputation accounts. They become high-value assets for account takeovers.
"Threat actors creating fake chains of trust..." This is already prevalent in the crypto landscape... we saw similar dynamics play out recently with OpenClaw. If there is a metric for trust, it will be gamed.
From my experience, you cannot successfully layer a centralized reputation system over a decentralized (open contribution) ecosystem. The reputation mechanism itself needs to be decentralized, evolving, and heuristics-based rather than static.
I actually proposed a similar heuristic approach (on a smaller scale) for the expressjs repo a few months back when they were the first to get hit by mass low-quality PRs: https://gist.github.com/freakynit/c351872e4e8f2d73e3f21c4678... (sorry, couldn't link to the original comment due to a GitHub UI issue... it was not showing me the link)
I belong to a community that uses a chain of trust like this for inviting new people. The process for avoiding the bad-actor-chain problem is pretty trivial: if someone catches a ban, everyone downstream of them loses access pending review, and everyone upstream of them loses invite permissions, pending review. Typically, some or most of the downstream people end up quickly getting vouched for by existing members of the community, and it tends to be pretty easy to find who messed up with a poorly-vetted invite (most often, it was the banned person's inviter). The person with poor judgement loses their invite permissions for a bit, and everyone upstream from them gets theirs back.
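For what it's worth, the cascade is simple enough to write down. Here is a rough sketch of the rule as I understand it (the names and fields are made up for illustration, this is not our actual tooling):

    # Rough sketch of the ban cascade described above; names are illustrative only.
    class Member:
        def __init__(self, name, inviter=None):
            self.name = name
            self.inviter = inviter    # who vouched this person in
            self.invitees = []        # people this person vouched in
            self.access = True
            self.can_invite = True
            if inviter:
                inviter.invitees.append(self)

    def ban(member):
        member.access = False
        # everyone downstream loses access, pending review
        stack = list(member.invitees)
        while stack:
            m = stack.pop()
            m.access = False
            stack.extend(m.invitees)
        # everyone upstream loses invite permissions, pending review
        up = member.inviter
        while up:
            up.can_invite = False
            up = up.inviter

    def conclude_review(banned):
        # typical outcome: the careless inviter stays frozen for a while,
        # everyone above them gets invite permissions back
        careless = banned.inviter
        up = careless.inviter if careless else None
        while up:
            up.can_invite = True
            up = up.inviter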
This is a strange comment because this is literally the world we live in now? We just assume that everyone is vouched by someone (perhaps GitHub/GitLab). Adding this layer of vouching will basically cull all of those very cheap and meaningless vouches. Now you have to work to earn the trust. And if you lose that trust, you actually lose something.
The difference is that today this trust is local and organic to a specific project. A centralized reputation system shared across many repos turns that into delegated trust... meaning, maintainers start relying on an external signal instead of their own review/intuition. That's a meaningful shift, and it risks reducing scrutiny overall.
I am still not going to merge random code from a supposedly trusted individual. As it is now, everyone is supposedly trusted enough to be able to contribute code. This vouching system will make me want to spend more time, not less, when contributing.
Trust signals change behavior at scale, even if individuals believe they're immune. You personally might stay careful, but the whole point of vouching systems is to reduce review effort in aggregate. If they don't change behavior, they add complexity without benefit... and if they do, that's exactly where supply-chain risk comes from.
I think something people are missing here is that this is a response to the groundswell of vibe-coded slop PRs. The point of the vouch system is not to blindly merge code from trusted individuals; it's to completely ignore code from untrusted individuals, permitting you to spend more time reviewing the MRs which remain.
To whom? It's not against GitHub's ToS to submit a bad PR. Anyway, bad actors can just create new accounts. It makes more sense to circulate whitelists of people who are known not to be bad actors.
I also like the flexibility of a system like this. You don't have to completely refuse contributions from people who aren't whitelisted, but since the general admission queue is much longer and full of slop, it makes sense to give known good actors a shortcut to being given your attention.
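Concretely, the "shortcut to attention" can be as dumb as sorting the review queue so whitelisted authors come first. A toy sketch (the whitelist and PR records are made-up placeholders):

    # Toy triage: vouched/whitelisted authors get reviewed first, slop waits.
    whitelist = {"trusted_dev_1", "trusted_dev_2"}
    queue = [
        {"author": "random_new_account", "title": "fix typo??"},
        {"author": "trusted_dev_1", "title": "Fix race in the scheduler"},
    ]
    # False sorts before True, so whitelisted authors float to the front
    queue.sort(key=lambda pr: pr["author"] not in whitelist)
    for pr in queue:
        print(pr["author"], "-", pr["title"])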
I wouldn't do this where it's not clear there was an issue, but for something like the really poor OCaml PR that was floating around, reporting the user seems to me like a logical step to reduce the flood.
I don't think the intent is for trust to be delegated to infinity. It can just be shared easily. I could imagine a web of trust being shared between projects directly working together.
That could happen... but then it would end up becoming a development model similar to the one followed by SQLite and FFmpeg, i.e., open for reads, but (almost?) closed for writes from external contributors.
I don't know whether that's good or bad for the overall open-source ecosystem.
Has this ever happened? Not revoking certificates, which they've certainly done for malware or e.g. iOS "signing services", but because a developer used non-Apple hardware.
I am the dev of Pocket Squadron (https://play.google.com/store/apps/details?id=com.bombsight....) and a few years ago I tried to make a build for iOS due to many player requests. I did not have a Mac, so I set up a Mac VM and a dev account to start making builds and see how big of a lift it would be. Unfortunately, my account was banned. Still no iOS build to this day; I'm probably missing out on a good bit of money.
I don’t know the answer to that but a quick search shows lots of examples of people complaining that their developer certificate has been revoked, demonstrating a willingness by Apple to revoke certificates if they believe the developer violated their terms of service. I doubt Apple would go out of their way to include language in the agreement that binds developers to their own sanctioned platform if they didn’t intend to enforce it.
I agree, but I think a better wager (and what GP probably meant) would be that all of these developers had their certificates revoked because Apple thought they were distributing malware. That's what the system is for.
This error exists because Apple has effectively made app notarization mandatory; otherwise, users see this warning. In theory, notarization is straightforward: upload your DMG via their API, and within minutes you get a notarized/stamped app back.
…until you hit the infamous "Team is not yet configured for notarization" error.
Once that happens, you can be completely blocked from notarizing your app for months. Apple has confirmed via email that this is a bug on their end. It affects many developers, has been known for years, and Apple still hasn't fixed it. It completely eliminates any chance of you being able to notarize your app, and thus of getting rid of this error/warning.
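For reference, the happy path really is just two tool invocations. Here is a rough scripted version of it (it assumes you have already stored credentials with "xcrun notarytool store-credentials"; the profile name and DMG path are placeholders, and none of this helps once the "Team is not yet configured" bug hits):

    # Minimal sketch of the normal notarization flow, scripted from Python.
    # Assumes credentials were stored beforehand via
    # "xcrun notarytool store-credentials my-profile ...";
    # the profile name and DMG path below are placeholders.
    import subprocess

    def notarize(dmg_path, profile="my-profile"):
        # upload the DMG and wait for Apple's verdict
        subprocess.run(
            ["xcrun", "notarytool", "submit", dmg_path,
             "--keychain-profile", profile, "--wait"],
            check=True,
        )
        # staple the notarization ticket so Gatekeeper can verify it offline
        subprocess.run(["xcrun", "stapler", "staple", dmg_path], check=True)

    notarize("MyApp.dmg")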
Yikes. Why anyone would willingly develop for Apple platforms is beyond me. But then I also don't understand why some people like using the crap^WmacOS. To each their own, I guess. The hardware does look nice though; too bad about their software.
Well, mainly because it's a better unix than Linux for the desktop, and I'd rather pull my eyes out of their sockets with a rusty screwdriver than use Windows.
Other than developing my own (without using any other OS...), which is a ... significant ... task, there aren't many other options. YMMV.
It was a better unix for the desktop back in the Snow Leopard days, but it has slowly gotten worse at the same time that Linux has gotten better.
Now the only advantage they have is the hardware. The OS is buggy, doesn't respect Apple's own Human Interface Guidelines, and is increasingly locked down. Gone are the days of SIMBL extensions, customizability, and a clean, nice, coherent, stable OS with few bugs.
I switch between Tahoe, Fedora, and Pop!_OS on a daily basis. Tahoe, in its complete design madness, is still in a league of its own when it comes to basic UX, IMHO. Just the fact that the keymappings for undo/redo are consistent between apps puts it way ahead of Linux when considering the whole ecosystem. Linux is a clear winner in tech and tooling though, which is why I use both.
MacOS is a better desktop in the sense that the desktop is locked down. GNOME tries to be the same as MacOS, but being the default desktop for nerds while being built for people who live the Apple way makes it a bit schizophrenic.
As a Linux lifer I agree that the hard diamond surface of the Mac desktop has a solid feeling to it. The Linux way is harder and also more brittle. Windows and Linux are both better than MacOS even as a desktop, as long as you do not look at them in the wrong way. The thing is, I have only minor problems doing that on either Linux or Windows, but the walled garden of the Mac, Android, and iOS is a joke.
MacOS is designed to be a somewhat stable desktop, which is good. It is not a better Unix; it is a political stance that means hacking will forever die.
I don’t know anything about “hard/brittle” analogies for operating systems. What I do know is that Linux distributions don’t seem to have a coherent strategy for building an operating system with sensible defaults and a consistent design that makes it easy to use for non-technical users.
Linux developers seem to almost-universally believe that if the user doesn’t like it or it doesn’t make sense then the user will fix it themselves either via configuration files or patching the source code. That model works fine for users with a lot of knowledge and time on their hands. In other words, it’s an operating system for hobbyists.
MacOS, for all its faults, is still pretty easy to use (though not even close to the ease of use of Classic Mac OS 9 and earlier).
Apple developers seem to almost-universally believe that if the user doesn’t like it or it doesn’t make sense then the user will... just have to learn to live with it.
I never said the Mac was perfect. Far from it. But it has sensible defaults which the vast majority of users find acceptable and easy enough to use.
Linux users, on the other hand, seem to spend more time customizing their operating system and sharing screenshots of it than actually getting work done.
You are encouraged to play with footguns on Linux; I do not do it, and none of my family do, and it works fine for us. On the "Linux desktop", one of the things you are not encouraged to play with is the installation of programs. The Linux way is preferable; that is why Apple and all the others are walking down the same path.
Not being able to install things sucks, but when you do, you can easily destroy your nice shiny brittle desktop. The PEBKAC is strong here, but making the users your enemies is a bad solution; this is why Google, Apple, and MS are all bad desktops.
As I said, I have been a Linux user my whole life. I know it works as a desktop, but it works best either with people who do not care about installing stuff, or with those who care enough to get it working.
You’re welcome to your opinion, of course; mine differs.
I’ve been using Linux since it came on a root and boot floppy. I remain completely unimpressed with its desktop design, ease of use, and (especially) accessibility. It’s a fantastic server OS.
It might be "better unix" (whatever that means), but it sure as hell is not better. Locked down, buggy, and difficult / impossible to navigate by keyboard. And I need to install (and trust) a 3rd-party app to get a multi-value clipboard? Yeah right, better. I'd prefer Windows, and I'm no fan of the ad-OS either.
The advantage is you can just develop it once and publish, rather than pushing things through multiple different packaging processes, and a MacOS person might be more likely to spend money.
> It affects many developers, has been known for years, and Apple still hasn't fixed it.
Not a feature they care about. Same for deleting apps not released yet. I haven't looked in a while, but for over a decade it has been impossible to delete iOS apps that were submitted but not released. So either you have to release the app, make it "Apple approved" and then immediately kill it, or have an app always present (I think you can hide it, but I've not checked that in quite a while).
Yeah, but this command sucks because AFAIK it then doesn't even verify notarized apps anymore (for example, if the certificate is invalid, if it was revoked, etc.).
Most prompts we give are severely information-deficient. The reason LLMs can still produce acceptable results is because they compensate with their prior training and background knowledge.
The same applies to verification: it's fundamentally an information problem.
You see this exact dynamic when delegating work to humans. That's why good teams rely on extremely detailed specs. It's all a game of information.
Having prompts be information deficient is the whole point of LLMs. The only complete description of a typical programming problem is the final code or an equivalent formal specification.
Genuine question: Given Anthropic's current scale and valuation, why not invest in owning data centers in major markets rather than relying on cloud providers?
Is the bottleneck primarily capex, long lead times on power and GPUs, or the strategic risk of locking into fixed infrastructure in such a fast-moving space?