Well, no. This argument might be correct if this policy didn't so strongly incentivize people to use (possibly paid!) iCloud instead, and if Apple just allowed any app onto the App Store (or effortless sideloading, like on Android). Instead, they heavily scrutinize everything that gets submitted. They could simply have special permissions for apps like Nextcloud that would only be enabled if the app behaves correctly regarding this background sync functionality.
Even if you pay, you most likely have a contract that effectively gives you close to no power because it's full of conditions favoring the service provider, and trying to use the little power you do have will be expensive because lawyers and courts get involved.
Without HTTP/1.1, either the modern web would not have happened or we would have 100% IPv6 adoption by now. The Host header was such a small but extremely impactful change. I believe that without HTTP/3, not much would change for the majority of users.
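For context: before the Host header, a server had no way to tell which site a request was for, so every domain needed its own IP address. A minimal sketch of what that one line does, using example.com as a stand-in:

    import socket

    # An HTTP/1.1 request. The Host line is what lets thousands of
    # sites share a single IP address (name-based virtual hosting);
    # without it, the server couldn't tell which site you wanted.
    request = (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Connection: close\r\n"
        b"\r\n"
    )
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request)
        print(sock.recv(500).decode(errors="replace"))

Change only the Host line and the same server at the same IP can answer for a completely different site, which is why the web could keep growing on scarce IPv4 addresses.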
But also, the only thing in most of the organizations I've been in that used anything other than HTTP/1.1 was the internet-facing load balancer or Cloudflare, and even then not always. Oh yeah, we might get a tiny boost from using HTTP/2 or whatever, but it isn't even remotely near top of mind and won't make a meaningful impact on anyone. HTTP/1.1 is fine, and if your software used only that for the next 30 years, you'd probably be fine. And that was the point of the original comment: nginx is software that could be in the "done with minor maintenance" category because it really doesn't need to change to continue being very useful.
Maybe you just haven't been in organizations that consider head-of-line blocking a problem? Just because you personally haven't encountered it doesn't mean that there aren't tons of use cases out there that require HTTP/3.
>Maybe you just haven't been in organizations that consider head-of-line blocking a problem?
I have not. It is quite the niche problem, mostly because web performance is so bad across the board that saving a few milliseconds just isn't meaningful when your page load takes more than a second and is mostly stuck in JavaScript anyway. Plus, everybody just uses Cloudflare, and having that CDN layer use whatever modern tech is best is very much good enough.
Sure, but there's video streaming, server-to-server long-polling bidirectional channels, IoT sensors, and all sorts of other things you probably use every day that can really benefit from HTTP/3/QUIC.
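For anyone who hasn't run into it: head-of-line blocking on a single HTTP/1.1 connection means one slow response stalls everything queued behind it. A rough, simplified illustration against the public httpbin.org test service:

    import http.client
    import time

    # On one HTTP/1.1 connection, requests are answered strictly in
    # order: the two fast /get requests can't complete until the
    # slow /delay/3 ahead of them has finished.
    conn = http.client.HTTPConnection("httpbin.org", timeout=10)
    start = time.time()
    for path in ("/delay/3", "/get", "/get"):
        conn.request("GET", path)
        conn.getresponse().read()
        print(f"{path} done at {time.time() - start:.1f}s")
    conn.close()

HTTP/2 multiplexes streams but still shares one TCP pipe, so a single lost packet stalls all streams; QUIC gives each stream independent delivery so it doesn't.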
Maybe it's my experience from working in grant-driven academia, but applying for a grant 4 months before you need the money and then complaining that it took three short months to get a decision is ridiculous. Applying later than a year in advance is too late.
This is not grant-driven academia. This is the PSF, not encumbered with layers of bureaucracy, audits, and regulation. An expectation of a turnaround time of a few weeks is entirely reasonable.
Apparently, the two decision-making bodies in the PSF for this matter meet once a month. This means any problem or missing information will delay the process by at least a month. So, maybe not a year, but 6-9 months would be a comfortable timeframe. I stand by it: 4 months before the event is WAY too late, and even with a slim organization, 3 months to get a decision is an absolutely reasonable time.
At Strange Loop this year, the organizer said during the keynote that he was usually signing the contract for the hosting venue 2 years in advance. I'm not saying a first-year, 200-person conference has to plan as far out as an established X,000-person conference, but 4 months out is cutting it close. I'd suspect all first-year conferences are very chaotic as the organizers learn to go from 0 to 1, so good for them for pulling it off.
It's time to start working on next year's conference and grant proposal.
This was my first reaction. Given how fast they came to a consensus, it was the organizers' fault for applying so late, not the board's fault for approving in only 3 months. 1 year prior to the con is more what I would expect.
That takes me back. As a teenager without any real understanding of compilers, interpreters, etc., being able to create my own EXE file in TP4 felt like having superpowers - like being a real(TM) programmer :)
A few years later, at 16, I actually got paid for developing a small app for managing my dad's customers, paid for by the company he worked for. Part of that money went into getting a legal copy of TP6.
Raspberry Pi is many things today. The Foundation is indeed focused on education, but the company behind it has a much broader focus and sells many (most?) of its devices to commercial customers, where 2x 4K outputs might be beneficial, e.g. in digital signage applications.
Not a US citizen, but "the government" is a wide term, and any law enforcement agency would fit, including the ones responsible for dealing with things like copyright enforcement - that's exactly the type of fish they exist to fry ...
In the US, subpoenas come from the Justice Department (either state or federal, depending on the crime for which evidence is being sought). The court that issued the subpoena is named on it, and the person or entity being served has the right to see why some government agency felt it could aid in uncovering a crime that had already been committed. The person or entity then has the opportunity to challenge that in court prior to complying with it. This is sometimes informally called "quashing the subpoena." According to my sister-in-law, who is a defense attorney, the most common result of challenging a subpoena is getting what it asks for narrowed down to just what is plausibly responsive.
In the article, this response is good practice for limiting what a subpoena can request (you can't give what you don't have): "As a result we are currently developing new data retention and disclosure policies. These policies will relate to our procedures for future government data requests, how and for what duration we store personally identifiable information such as user access records, and policies that make these explicit for our users and community."
At Blekko we logged access records in such a way that we could use PII for 48 hours, and then it was deleted. The CTO, Greg Lindahl, is a huge privacy advocate, and this sort of architecture made it possible to get information to improve our ranking and service without compromising people's privacy. In practice, I don't think any agency could go from "we have a suspect" to "issue a subpoena" in 48 hours, so it was a useful way for us to stay out of the crosshairs. The most interesting event was the FBI asking for information on IP addresses that had accessed their honeypot CSAM site. That turned out to be some of the machines in the crawling cluster. Given that the site was outside the crawl "horizon" and didn't rank (very few sites linked to it), it didn't even make it into the cache for rank analysis. But in that case the turnaround time was impressive. Of course, that is because they were just using their own logs to generate subpoena requests.
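A minimal sketch of that kind of time-boxed retention (hypothetical schema, not Blekko's actual pipeline):

    import time

    RETENTION_SECONDS = 48 * 60 * 60  # the 48-hour window

    def purge_expired(access_records, now=None):
        # access_records: list of dicts with a 'ts' (epoch seconds)
        # field; a hypothetical schema for illustration. Anything
        # older than 48 hours is dropped, so a request for logs can
        # only ever reach back that far.
        now = time.time() if now is None else now
        return [r for r in access_records
                if now - r["ts"] < RETENTION_SECONDS]

Run something like this on a schedule, after any aggregate stats for ranking have been extracted, and "all the data you have" is never more than 48 hours deep.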
As I recall (and I'm not a lawyer, so don't rely on this advice), the lawyers had advised that as long as the retention period was published, even if a subpoena asked for a longer look-back, you could meet your obligation by returning "all the data you had," which would only be 48 hours' worth.
Had a jurisdiction said, "You should have expected ...", I expect our response would have been, "We have published what we retain, we conform to federal and state law, and you knew ahead of time we wouldn't have more than 48 hours' worth."
That said, jurisdiction when it comes to the Internet is always kind of "weird". Did you use the web service in your house in Columbus, OH, or did you use the web service on a server in a data center in California? Also, as I recall, our TOS required that any legal action be brought in California, but I don't think we ever tested that in court.
Given the discussion around how lacking PyPI's supply chain security is, how juicy a target it is for attackers, and how much critical infrastructure probably relies on PyPI, yt-dlp is the last thing on my mind.
Yeah, but then the base module will cost $2,000 (or $20/month), the cup holder attachment is another $200, and the manufacturer will send assassin squads after anyone daring to create compatible attachments or even the printable designs for them...