
We're talking about this line, right?

>The precision of outages (at :29 and :44) matches a network-synchronized clock (NTP).

I think this just correctly points out that if the trigger was something unsynchronized like animals chewing on wires or someone digging underground, you wouldn't have 61% of events occurring at these two second markers. Even if the trigger was something digital but on a machine that isn't NTP synchronized, you would eventually have enough clock drift to move the events to other seconds. 61% combined at two markers (exactly 15 seconds apart) strongly suggests synchronized time.


Those are minutes not seconds.


D'oh!


As many others have pointed out, Apple is obsessed with avoiding antitrust scrutiny. A fun (edit: incorrect) example: they're fine owning the device you use to connect to their own streaming service to watch shows they produce themselves, but appear to strictly avoid showing their products in those shows.

EDIT: I stand corrected! Multiple "Apple Original Series" contain Apple products, such as in Ted Lasso and (I think implied) in For All Mankind, as people pointed out below. Shows what I know about TV.

I wonder why some Apple Original Series have Apple products, and some don't. I would love to see if there's any correlation between the number of shows which feature a specific product and that product's market share in the show's region or demographic.


> but appear to strictly avoid showing their products in those shows.

Apple has a 'villains can't use iPhones' rule. Directors are not allowed to use Apple product placement for villains.

Apple pushes their products in Apple TV+ and pays for product placements, or provides their products for free to other productions, as long as you follow the rules.


That is not true, just from first principles. You think anyone would watch Apple TV+ shows if there were no suspense or mystery because all the good guys use Apple stuff and all the bad guys use non-Apple stuff?


It is true.

You can't use first principles to make arguments about easily verifiable facts. There are movies that use loopholes and gray areas, of course. https://www.theguardian.com/technology/2020/feb/26/apple-doe...


That Rian Johnson interview got milked to high heavens for clickbait. Genius marketing by him, especially since he makes mystery movies.

Plenty of examples of bad guys using iPhones here:

https://www.reddit.com/r/television/comments/13hrdak/comment...

https://www.reddit.com/r/television/comments/13hrdak/rant_ap...


Just have everyone use non-apple stuff? What?


The characters in Ted Lasso almost exclusively have iPhones and Macs


To the extent that a character makes a joke about his relationship getting serious enough that they started sharing an iCloud account


Yeah… there are also a number of Apple products in For All Mankind


Right you both are! I had no idea.


Guess they need even more product placement :)


I'm really surprised they do it at all. I always thought they'd be crazy to.


Were you running certbot multiple times per day?

Looking at the relevant limit, "Consecutive Authorization Failures per Hostname per Account"[0], it looks like there's no way to hit that specific limit if you only run once per day.

Ah, to think how many cronjobs are out there running certbot on * * * * *!
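For contrast, something like the line below is plenty; the minute and hour are arbitrary choices on my part, and certbot's renew subcommand only actually replaces certs that are close to expiry, so even once a day is more than enough:

    17 3 * * * certbot renew --quiet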

[0]: https://letsencrypt.org/docs/rate-limits/#consecutive-author...


Isn't that where we are going eventually? Certs only lasting a day?


That's a good point. I suspect as the renewal period is shortened, scripts will attempt renewal faster and faster.

I hope they don't go any shorter than a month. Let the user pick; any value up to a year should do.


Browsers are eventually going to reject any certificate valid for longer than 47 days, IIRC


They simultaneously want shorter certs but can't cope with the current load


Nowhere in the blog post does it say they can't cope with the load, which is why the rate limits are so high. This is only about reducing wasted resources by blocking requests which are never going to succeed.


They definitely can't cope with the load at midnight, or at least couldn't back in 2022, and the fact that they mention midnight specifically in this post makes me assume they still can't. I say this because I had cert issuance fail for multiple days because of DB timeouts on their end from that: https://community.letsencrypt.org/t/post-to-new-order-url-fa...

Incidentally, the fact that it took them 4 days to respond to that issue is why I'll be wary of getting 6-day certs from them. The only reason it wasn't a problem there was that it was a 30-day cert with plenty of time remaining, so I was in no rush. (Also, ideally they'd have a better support channel than an open forum where an idiot "Community Leader" who doesn't know what he's talking about wastes your time, as happened in that thread.)


Why not just run your update outside rush hour, though?


No, they will never get that short due to reliability issues. I could see getting down to maybe two weeks.

To make 24-hour certs practical, you would need to generate them ahead of time and switch them out locally. This would be a lot more reliable if systems supported serving two certs with 50% overlapping validity periods at the same time.
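A rough sketch of what I mean by switching them out locally; the directory layout, date math, and nginx reload here are all hypothetical, the point is just that activation is a local symlink swap instead of a live ACME exchange:

    # some other job has already fetched tomorrow's cert, e.g. into /etc/certs/2025-01-02/
    NEXT=/etc/certs/$(date -d tomorrow +%F)   # hypothetical layout, GNU date
    if [ -e "$NEXT/fullchain.pem" ]; then
        ln -sfn "$NEXT" /etc/certs/current    # swap which cert directory is live
        systemctl reload nginx                # or whatever is serving the cert
    fi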


Let’s Encrypt has already started issuing a limited number of 6-day certs and they will be generally available later this year.

(90 days will remain the default though)


Timezones are going to make that hilarious; I'll probably go back to much longer certs. I like free, so I put up with LE. The automated stuff only works on half my servers; on the other half I either run without HTTPS or I install certs manually. Except now I wait until the service stops working, spend 15 minutes debugging why, go to the domain in a browser and see the warning, and then go fix it. Why? LE decided sending 4 emails a year is too many. And let's be real, sending automated emails is expensive. I think AWS charges like $0.50 per email when you use their hosted email sender.


> I think AWS charges like $0.50 per email when you use their hosted email sender.

SES? Around $0.0001 per e-mail


Yes, it was facetious; I am jabbing at Let's Encrypt for ceasing email operations.


Assuming 47-day certs, they would be saving 500k USD/year just in SES fees with that change.

For a free service, that's a whole lot of money.


By my memory, a cron job runs a script daily that checks my cert file's last-modified time. When it's been a certain number of days since the file was last modified (some flavored Bash statements), I run certbot and install whatever comes back.
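For the curious, it's roughly this shape; the paths, the threshold, and the install step are made up for illustration (and stat -c is the GNU flavor):

    #!/bin/sh
    CERT=/etc/ssl/mysite/fullchain.pem   # hypothetical cert path
    MAX_AGE_DAYS=60                      # renew once the file is "old enough"

    age_days=$(( ( $(date +%s) - $(stat -c %Y "$CERT") ) / 86400 ))
    if [ "$age_days" -ge "$MAX_AGE_DAYS" ]; then
        certbot certonly --webroot -w /var/www/mysite -d mysite.example
        # ...then copy whatever comes back over $CERT and reload the web server
    fi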

It's very under-engineered, maybe a trifold pamphlet on light A11 printed with a laser jet running out of ink.

I've probably spent more time talking about how much it sucks than I have bothered considering a proper solution, at this point.


>I've probably spent more time talking about how much it sucks than I have bothered considering a proper solution, at this point.

I respect this. Reading someone else write this makes me feel more comfortable thinking about the things in my life I could be doing more to improve, which makes me respect this even more.


Yep, the embedded video in the article really says it all. Proven Industries' wording here is, at the very least, ambiguous as to whether or not a shim that works on one lock will work on another of the same model.

If you had to take apart a lock to make a shim that only works on that lock, then of course it would be misleading to suggest otherwise. Instead, they're going directly after the researcher for demonstrating the insecurity of an entire line of locks.

Either the TSA should sue the man who published photos of the "Travel Sentry" keys, or Proven Industries should look into rebranding as "peace of mind" locks :)


I assume autoexec is referring to the plethora of WebRTC vulnerabilities which have affected browsers, messengers, and any other software which implements WebRTC for client use. Its full implementation is seemingly difficult to get right.

Of course, you're right that this implementation is very small. It's very different from a typical client implementation, so I don't share the same concerns. It's also only the WHIP portion of WebRTC, and anyone processing user input through ffmpeg is hopefully compiling a version that enables only the features they use, or at least passes "--disable-muxer=whip" and the like at configure time. Or, you know, you could specify everything explicitly at runtime so ffmpeg won't load features based on variable user input.
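As a sketch of what I mean at configure time; these option names are from memory, so double-check ./configure --help on your ffmpeg version:

    # build only what you actually use...
    ./configure --disable-everything \
                --enable-protocol=file \
                --enable-demuxer=mov --enable-decoder=h264 \
                --enable-muxer=mp4 --enable-encoder=aac

    # ...or keep the default build but drop just the WHIP muxer
    ./configure --disable-muxer=whip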


>I assume autoexec is referring to the plethora of WebRTC vulnerabilities which have affected browsers, messengers, and any other software which implements WebRTC for client use. Its full implementation is seemingly difficult to get right.

Like what? I did a quick search and most seem to be stuff like ip leaks and fingerprinting, which isn't relevant in ffmpeg.


Here's a (very) small sample gathered from a search for "webrtc" on cve.org and picking high-severity CVEs affecting browsers:

* CVE-2015-1260

* CVE-2022-4924

* CVE-2023-7010

* CVE-2023-7024

* CVE-2024-3170

* CVE-2024-4764

* CVE-2024-5493

* CVE-2024-10488

Of course, I agree that it's not relevant to ffmpeg. But seeing "WebRTC" triggers the same part of the brain that looks out for unescaped SQL statements. Good opportunity to point out the difference in this implementation.


So you searched “WebRTC”, and then took the extraordinary step of… not actually reading any of them while simultaneously using them as supposed points? Quick question, since you seem to know a lot about these CVEs and have spent a fair amount of time understanding them: how many of those were browser implementation issues?

This is like searching CVE for “node” and then claiming Node is terrible because some node packages have vulnerabilities. Low effort and intended to fit evidence to an opinion instead of evaluating evidence. “Linux” has 17,000 results; using your critical lens, all Linux is insecure.


When writing this inflammatory post, you seem to have forgotten the bigger picture of the thread you are flaming.

We're discussing whether it's right to take a second look from a security standpoint when a piece of software implements WebRTC. In this case, it's nuanced, and the implementation in FFmpeg is very different from the more complete implementations you find in browsers. And when browsers have implemented WebRTC, many vulnerabilities have followed.

So the double-take is justified here, even if only in principle. No one is saying WebRTC is insecure, or FFmpeg, or node, or Linux..........

I did a cursory read of each CVE. Wherever you got the idea I did not, you must have forgotten to include it in your post. Just now, I picked one at random. It reports "Multiple WebRTC threads could have claimed a newly connected audio input leading to use-after-free."

Does that exactly qualify as an "implementation bug"? I don't know, and I don't care, because how you taxonomize a CVE has nothing to do with whether it's a vulnerability that was introduced when implementing WebRTC. And it is.


I forgot nothing, but you seemed to forget whose comment you were attempting to bolster. I “flamed” the useless injection of CVEs that attempt to legitimize someone’s point about the insecurity of a protocol, when that tiny number of CVEs for a technology the world uses quite heavily almost unanimously points to implementation-specific issues, none of which inform the security or risk of the protocol itself. That's useless data that doesn’t further a conversation on security.

“No one is saying webrtc is insecure”? That is literally what the comment was doing, which you attempted to legitimize by listing browser-specific CVEs.

Someone pointed to a car fire and said gasoline caused the fire, and you posted pictures of car fires. There is a reason a fire investigator (much like a security researcher) considers the difference between what started a fire and an accelerant. WebRTC was not the cause of these vulnerabilities, as you are trying to imply and as the opinion you attempted to legitimize claims.

“I don’t care” — clearly, if you couldn’t take the time to understand the difference, I’m not surprised.


> stuff like ip leaks and fingerprinting, which isn't relevant in ffmpeg.

If ffmpeg implements WHEP in the future then I'd certainly be concerned about both of those things when viewing a stream. Probably less so for serving a stream up, particularly via a gateway (the current implementation IIUC).


This is exactly the question I have.

While WebRTC causes fingerprinting risks in browsers, isn’t that unrelated to running ffmpeg?


From the article, it sure seems like people across the internet are just starting to realize what happens when you no longer have just 3-4 search engines responsible for crawling for data. When data becomes truly democratized, its access increases dramatically, and we can either adjust or shelter ourselves while the world moves on without us.

Did Google never ever scrape individual commits from Gitea?


> When data becomes truly democratized...

That is not at all what is happening.


I know, we're locking everything down behind WAFs and repeating captchas so only attested identities can get access in the end.


Ok, and what's your solution?


Yep, everyone is building their own little walled gardens instead of adapting.


Friend, this IS them adapting.


More like maladapting.


Can you please provide an explanation of what you consider 'adapting' to be?

What's described in the article fits my personal definition pretty well.


They only need to pay off or install a single employee to get total or near-total access. Consider this chart from 2013 showing when various tech companies were added to PRISM:

https://upload.wikimedia.org/wikipedia/commons/c/c7/Prism_sl...

A lot of the companies embattled in the "constant litigation" mentioned by the GP are featured in this very chart.


> lot of the companies embattled in the "constant litigation" mentioned by the GP are featured in this very chart

Yup. A great first step towards understanding these systems is to disaggregate the monoliths of these enterprises and the U.S. government into their power centres.


Do you believe the disaggregation of those monoliths helps to put the "hypothesis to bed"? It sure seems like you were listing "constant litigation" over "records request" as counterevidence of the claim that "if a company knows something about you, so does the government(s)".

If anyone in the U.S. government is extracting data from companies in a manner which is unlawful or should be (and they sure are), I see that as strong evidence of the hypothesis. Pointing out that local agencies may have to fight for their access in court doesn't change that it "is exactly the state of affairs the government prefers".


> sure seems like you were listing "constant litigation" over "records request" as counterevidence of the claim that "if a company knows something about you, so does the government(s)"

Yes. Just because the NSA can access some data doesn’t mean the entire federal government, including the NSA, has it.

> local agencies may have to fight for their access

The White House is fighting Harvard for student records. I don’t think people appreciate the degree to which information is siloed, intentionally and unintentionally, in the federal government. (It’s what led to DOGE likely committing multiple felonies.)


>I don’t think people appreciate the degree to which information is siloed, intentionally and unintentionally, in the federal government.

Thanks for that. Information can be completely siloed, and the statements "If a company knows something about you, so does the government(s)" and "This is exactly the state of affairs the government prefers" can still both be correct.

Is your belief that the federal government has not actually purchased hoards of corporate surveillance data? Or is it that, because there are examples of information being siloed or unavailable, it's okay or a non-issue that Americans' data that was once unlawfully collected is still unlawfully collected, but now also collected by corporations and purchased wholesale by the federal government?


>In the vast majority of cases, the US government at least, has to obtain a warrant to collect data on US citizens, so those two sets are not the same

If only that were true[0][1][2][3].

[0] (2022): https://fedscoop.com/dhs-buying-personal-data-from-govt-cont...

[1] (2023): https://www.congress.gov/118/meeting/house/116192/documents/...

[2] (2024): https://www.cnn.com/2024/01/26/tech/the-nsa-buys-americans-i...

[3] (2025): https://theintercept.com/2025/05/22/intel-agencies-buying-da...


I... you're right. I was wondering why the world was only 9x9x9; there are 46k lines showing that each block can be air, stone, grass, dirt, log, wood, leaves, or glass.

I kind of like it.


Without a doubt the most impressive thing I've seen with CSS.

This immediately brought "A Single Div"[0] to mind, which stood as the coolest CSS demo I'd seen for... 11 years!

This one takes the cake. I'll be pouring over it. Thanks!

[0]: https://a.singlediv.com/


have you seen this modern marvel? https://diana-adrianne.com/purecss-lace/


Incredible. I was so skeptical that I zoomed in on the neck ruff, and from there the lace top; it really is all generated with background-image, but using gradients of specific colors rather than actual images, along with box-shadows and the like.


Wow, Dark Reader absolutely mutilated her.


Wow, mobile Safari hates this. Zooming in and scrolling around crashes the page constantly.


Works fine on my iPhone 14.


Similar problems on my MBP, actually – just sans crashed tab. Zooming in and scrolling around on Chrome and Safari cause the divs to rerender (repaint?) and often not all of them even do! E.g. Chrome: https://imgur.com/a/VWCAL9G

Scrolling is fine in Firefox but extremely slow.


It's surprisingly smooth on Firefox on my Pixel 8


Interesting. Worked fine on my MBP in Safari. Even browsed around in the dev tools to see the styles used


this is my favorite one I've seen: https://lyra.horse/css-clicker/


These were 1852 seconds well spent. If you don't hate clickers, try this one, it was definitely made with love.


latest in my long list of poor life choices, not going to bed at 2 AM because I'm waiting to reach 10 mil views :)))))


oh man this was so perfect


that endgame is absolutely perfect


Weird, posting the last blog post crashed Firefox.


Wild, got me hooked!


I had the honor of seeing her give a talk. She also has a lot of other CSS projects that are awesome.

https://lynnandtonic.com/work/

Also love seeing Phoenix devs mentioned!


Damn, that website is great on its own and it turns out she redesigns/rewrites it every year to learn new web technologies.

https://lynnandtonic.com/archive/

Got this bookmarked to click around for inspiration in my free time.


So many of these look deliciously interactive but aren’t. Is that because I’m on mobile or do they not do anything?


I don't think any on the first page are interactive. There might be a few on the next page of it (I only found one where a pen changes color on hover).


poring over it or pouring your attention :)


My bad, I forgot I'm a liquid. It's too late to edit, but s/po\w*/poured over/ anyway :)


but they are all individual divs

