It depends. Usually there are enough "knobs" that adding a package ball for every one of them would be crazy expensive at volume.
Most SoCs of even moderate complexity have lots of redundancy built in for yield management (e.g. anything with RAM expects some percentage of the RAM cells to be dead on any given chip) and use fuses to keep track of it. If you had to have a strap per RAM block, it would not scale.
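To make that concrete, here's a toy sketch of fuse-based repair (the fuse layout, block count, and spare count are all invented for illustration): the per-chip defect map is blown into fuses at wafer test, so it scales with the number of blocks without adding a single ball.

```python
# Toy model of fuse-based RAM repair. Real SoCs use vendor-specific
# fuse maps; this layout (one bit per block) is made up.

def bad_blocks(fuse_word: int, num_blocks: int = 16) -> list[int]:
    """One fuse bit per RAM block: 1 means the block failed wafer test."""
    return [i for i in range(num_blocks) if (fuse_word >> i) & 1]

def build_remap(fuse_word: int, num_blocks: int = 16, spares: int = 2) -> dict[int, int]:
    """Map each dead block onto a spare; too many failures means the die is binned out."""
    dead = bad_blocks(fuse_word, num_blocks)
    if len(dead) > spares:
        raise RuntimeError("yield fail: more dead blocks than spares")
    return {blk: num_blocks + i for i, blk in enumerate(dead)}

# A chip with blocks 0 and 2 dead needs two fuse bits, not two package balls:
print(build_remap(0b0101))  # {0: 16, 2: 17}
```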
The GSM processor is often a separate chip. You may have read an article about the super spooky NSA backdoor processor that really controls your phone, but it's just a GSM processor. If the GSM processor is itself compromised, a PCIe connection may let it compromise the application processor, but so can a Broadcom WiFi chip.
I'll admit it's a bit charged, but I'm frustrated with bad faith takedowns of ATProto/Bluesky, while Mastodon (and it is Mastodon, not ActivityPub) solves almost none of the actual problems. I tried implementing my own ActivityPub server and the spec is so hilariously lacking that it's understandable that everyone just uses the Mastodon API instead.
ActivityPub isn't actually the spec of Mastodon. Treat claims of "Mastodon is ActivityPub" the same as you treat claims of "Bluesky is decentralised."
Just expose the same interface Mastodon does and you'll be fine. Note that almost nothing cares about the exact URLs you use (except WebFinger), but everything does care that the domain matches the right side of the @ sign.
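For anyone attempting this, a minimal sketch of the WebFinger piece (Flask here; the DOMAIN value and the /users/&lt;name&gt; actor URL are placeholders I made up, not anything the spec mandates):

```python
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
DOMAIN = "example.social"  # must match the right side of the @ sign

@app.route("/.well-known/webfinger")
def webfinger():
    resource = request.args.get("resource", "")  # e.g. "acct:alice@example.social"
    if not resource.startswith("acct:") or not resource.endswith("@" + DOMAIN):
        abort(404)
    user = resource[len("acct:"):].split("@")[0]
    return jsonify({
        "subject": resource,
        "links": [{
            "rel": "self",
            "type": "application/activity+json",
            # The actor document can live at any URL; only the domain matters.
            "href": f"https://{DOMAIN}/users/{user}",
        }],
    })
```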
> Treat claims of "Mastodon is ActivityPub" the same as you treat claims of "Bluesky is decentralised."
Not sure if you meant this in the way I read it, but I believe Bluesky is pretty much decentralised and is tidying up the last bits of that, and I also believe Mastodon is functionally ActivityPub, probably mopping up the last bits where the open spec meant anything.
The problem with ActivityPub is that it was missing at least half of what would be necessary to do anything with it, maybe more. You certainly can't build clients with it; it doesn't define anything about writing, etc. It's good that it's an open spec, but I see it as closer to Open Graph tags on web pages than to a social network foundation. That's fine... but we treat "Mastodon" as open because of ActivityPub, when in reality almost the entire system is defined by a Rails API implementation and its idiosyncrasies. I see it as a problem that you can't participate in the network without reimplementing the API of one particular implementation, rather than by implementing to a spec.
Not exactly: a PBC is allowed to "balance" shareholder profit with "stakeholder interests." But at the end of the day, the money is still coming from the shareholders, and they're still looking for a return. They're required to be transparent, but that's about it. And there aren't really any penalties for not complying, either.
Blockchain is still like that. Today I am setting up a blockchain node. The chain is actually two chains that recursively depend on each other. The docs say to start one of them first and wait for it to fully sync. It prints a timeout error for every block, saying the other chain's node software was unreachable, and it's estimated to catch up to the current block height in about 200 years, which can't be right. Maybe I need to run both nodes at once, contrary to the explicit instructions in the docs, which say not to do so.
I wouldn't be surprised if half of all blockchains were vulnerable to some kind of trivial double-spend attack because it's not possible that all the complexity has eyes on it.
Edit: you're supposed to download a 2GB JSON file containing the state as of the last migration.
The normal way to set up most blockchain nodes these days is to rsync someone else's node's working directory. Obviously this is worthless as far as a decentralised and trustless system goes.
Unfortunately, the swarm is 99.99999% advertisements for penis enlargement pills. How can a P2P system filter them out? A federated system relies on each admin to filter them out. A centralised system does even better, relying on a single dictator to filter them out. A P2P system requires every user to filter every spam message, together consuming far more effort than the spammer needed to send it.
This isn't, and has never been, a hard problem. Just make senders pay for people's attention. People you follow don't have to pay, and make that exemption transitive. Penalize people in your network who propagate spam by increasing the cost to get your attention.
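A toy sketch of that rule (the base price and the strike policy are invented): anyone in your transitive follow graph is free, and a stranger's price doubles with each spam strike against them.

```python
def follow_closure(me: str, follows: dict[str, set[str]]) -> set[str]:
    """Everyone reachable through follow edges gets my attention for free."""
    seen, stack = set(), [me]
    while stack:
        node = stack.pop()
        for f in follows.get(node, set()):
            if f not in seen:
                seen.add(f)
                stack.append(f)
    return seen

def attention_price(sender: str, me: str, follows: dict[str, set[str]],
                    spam_strikes: dict[str, int], base: float = 0.01) -> float:
    if sender in follow_closure(me, follows):
        return 0.0
    # Each spam strike doubles what the sender must pay to reach me.
    return base * (2 ** spam_strikes.get(sender, 0))
```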
If a scammer, advertiser, or some other form of spammer can get a payout just 1% of the time, they will be willing to pay much more than the average person posting the average tweet.
If you make everything explicitly transactional, you will be left with only people trying to make a profit.
Penis enlargement spam is worth like $0.00000001 per message. Any number higher than that makes them lose money. The real problem is that nobody will post on a social media network where you have to pay to post.
Do the outbound rules of other participants include microtransactions?
And who besides a spammer would pay more than $0 to have their message read by you? If I wrote a blog post about vulnerabilities of blockchains, or how I ran Doom on a pregnancy test, and you don't read it because I'm not paying you, you're losing value, not me. You guarantee an inbox of only spam — but at least you get paid for it.
If you've got great content, I would just follow you. Or someone I follow would follow you, and through the network it would lead to discovery. I want your content, so unless you charge for it, nobody's paying anyone.
If someone wants me to ingest something novel from far outside my network, one way to gain reputation might be to pay a microtransaction fee. I'd be free to choose to set that up as a part of my ingestion algorithm. Or maybe my peers do it, and if they "upvote" the content, I see it.
If my peers start acting poorly and sending spam, I can flag disinterest and my algorithm can naturally start deboosting that part of the network.
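Something like this, as a sketch (the fee threshold and the trust weights are numbers I made up):

```python
def should_ingest(item: dict, trust: dict[str, float], fee_threshold: float = 0.05) -> bool:
    """Admit out-of-network content via a fee or via trusted peers' upvotes."""
    if item.get("fee", 0.0) >= fee_threshold:
        return True
    # Peer upvotes count for as much as I currently trust each peer;
    # flagging disinterest would decay trust[peer] toward zero over time.
    score = sum(trust.get(peer, 0.0) for peer in item.get("upvoters", []))
    return score >= 1.0
```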
With such systems-level control, we should be able to build really excellent tooling, optimization, and statistical monitoring.
Also, since all publications are digitally signed, your content wouldn't have to be routed to me through your node at all. You could in fact never connect to the swarm and I could still read your content if you publish it to a peer that has distribution.
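A minimal sketch of why relaying works, using Ed25519 from the cryptography package (the message format and key handling here are assumptions, not any specific protocol):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()
post = b'{"author": "alice", "text": "hello swarm"}'
signature = author_key.sign(post)

# Any peer (or the final reader) can verify with only the author's public
# key; the author never has to be online or connected to the swarm.
try:
    author_key.public_key().verify(signature, post)
    print("post is authentic regardless of who delivered it")
except InvalidSignature:
    print("tampered in transit")
```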
I don't agree. I think the chief problem with advertising is that it is extremely repetitive. I'm not, in principle, opposed to being informed about new things relevant to my interests existing. In a world that is completely oversaturated with content, it is hard to gain traction on something new with word-of-mouth alone, even if it is of very high quality. There is a point to being informed about something existing for the first time (maybe I'll use it), and there is a reason why people would have to pay to make use of that informational system (the barrier to entry is necessary to make the new thing stand out in the ocean of garbage).