jacobgkau's comments | Hacker News

To be honest, the first moment I saw the page, it did seem to give my eyes a negative reaction, but after reading a few of the results, it started to look fine pretty quickly.

That would be easier if both GPU and display manufacturers weren't eschewing newer DisplayPort versions in favor of older versions with DSC (which is not mathematically lossless, despite the subjective "visually lossless" marketing), while building in newer HDMI versions with greater performance.

To be fair, the DisplayPort 2.0/2.1 standardisation process was riddled with delays and they ended up landing years after HDMI 2.1 did. It stands to reason that hardware manufacturers picked up the earlier spec first.

What resolution is it that you can drive with "newer HDMI versions" but cannot drive with DisplayPort 1.4 without DSC? The bandwidth difference is not really that much in practice, and "newer HDMI versions" also rely on DSC, or worse, chroma subsampling (objectively and subjectively worse).

I mean, one has been able to drive 5K, 4K@120Hz, etc. for nearly a decade with DP 1.4; for the same resolutions you need literally the latest version of HDMI (the non-TMDS one). It's no wonder that such displays _have_ to use the latest version of HDMI, because otherwise they cannot be driven from a single HDMI port at all.

Monitors that supported their native resolution through DP but not HDMI used to be a thing until very recently.


I understand that this is not a common case, but 7680x2160@240Hz is one example (not to mention with HDR; to be fair, DP 2.1 also requires DSC at that point).

You can use this to check: https://trychen.com/feature/video-bandwidth


On my computer, I cannot drive my 1440p 240Hz OLED display with HDR. HDR takes the requirement from 25 Gbit/s to 30 Gbit/s, just over DP 1.4's capabilities: https://linustechtips.com/topic/729232-guide-to-display-cabl...

Like you say, not that much of a difference, but enough to make DP 1.4 not an option.
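
As a rough sanity check, here's a minimal back-of-the-envelope sketch (my own, not from the linked guide). The ~1.16 blanking factor is an approximation of CVT-RB timing overhead, and the link rates are the usable data rates after encoding overhead (DP 1.4 HBR3 with 8b/10b, HDMI 2.1 FRL with 16b/18b):

    # Rough uncompressed video bandwidth estimate (sketch only; real timings vary).
    def required_gbps(width, height, refresh_hz, bits_per_channel, blanking=1.16):
        bits_per_pixel = bits_per_channel * 3           # RGB, no chroma subsampling
        pixels_per_second = width * height * refresh_hz * blanking
        return pixels_per_second * bits_per_pixel / 1e9

    DP14_HBR3 = 25.92    # Gbit/s usable after 8b/10b encoding
    HDMI21_FRL = 42.67   # Gbit/s usable at FRL 48G after 16b/18b encoding

    sdr = required_gbps(2560, 1440, 240, 8)    # ~24.6 Gbit/s: just fits DP 1.4
    hdr = required_gbps(2560, 1440, 240, 10)   # ~30.8 Gbit/s: needs DSC or HDMI 2.1
    print(f"SDR: {sdr:.1f} Gbit/s, HDR: {hdr:.1f} Gbit/s, DP 1.4 limit: {DP14_HBR3} Gbit/s")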


Loaded in about one second for me (in regular Firefox with uBlock Origin installed, and Diversion running on my network).

I noticed this outage last night (Cloudflare 500s on a few unrelated websites). As usual, when I went to Cloudflare's status page, nothing about the outage was present; the only thing there was a notice about the pre-planned maintenance work they were doing for the security issue, reporting that everything was being routed around it successfully.

This is the case with just about every status page I’ve ever seen. It takes them a while to realize there’s really a problem and then to update the page. One day these things will be automated, but until then, I wouldn’t expect more of Cloudflare than any other provider.

What’s more concerning to me is that now we’ve had AWS, Azure, and Cloudflare (Cloudflare twice) go down recently. My gut says:

1. Developers and IT are using LLMs in some part of the process, which will not be 100% reliable.

2. The current culture of "I have (some personal activity or problem)," "we don't have staff," "AI will replace me," "f-this."

3. Pandemic after effects.

4. Political climate / war / drugs; all are intermingled.


Management doesn't like when things like this are automated. They want to "manage" the outage/production/etc numbers before letting them out.

There's no sweet spot I've found. I don't work for Cloudflare but when I did have a status indicator to maintain, you could never please everyone. Users would complain when our system was up but a dependent system was down, saying that our status indicator was a lie. "Fixing" that by marking our system as down or degraded whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius). The juice no longer seemed worth the squeeze and we gave up on automated status indicators.

> "Fixing" that by marking our system as down or degraded whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius).

This seems like an issue with the design of your status page. If the broken dependencies truly had a limited blast radius, you should have been able to communicate that in your indicators and statistics. If not, then the unreliable reputation was deserved, and all you did by removing the status page was hide it.
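
As a rough illustration (hypothetical component names, not anything any particular provider actually does), a rollup that keeps dependency failures distinct from core failures lets the headline stay honest while the detail rows communicate the blast radius:

    # Sketch of a status rollup that separates "our core is down" from
    # "an upstream dependency is degraded", with a human-readable blast radius.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        healthy: bool
        is_core: bool        # our own system vs. an upstream dependency
        blast_radius: str    # scope of impact when unhealthy, e.g. "PDF exports only"

    def rollup(components):
        if any(not c.healthy and c.is_core for c in components):
            return "major outage"
        if any(not c.healthy for c in components):
            return "degraded"    # dependency issue; detail rows say what's affected
        return "operational"

    status = [
        Component("api", healthy=True, is_core=True, blast_radius=""),
        Component("pdf-export-vendor", healthy=False, is_core=False,
                  blast_radius="PDF exports only"),
    ]
    print(rollup(status))    # -> "degraded", not a sitewide red banner
    for c in status:
        if not c.healthy:
            print(f"{c.name}: impacted ({c.blast_radius})")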


> all you did by removing the status page was hide it

True, but everyone that actually made the company work was much happier for it.


> whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius)

You are responsible for your dependencies, unless they are specific integrations. Either switch to more reliable dependencies or add redundancy so that you can switch between providers when any is down.


The headline status doesn't have to be "worst of all systems". Pick a key indicator, and as long as it doesn't look like it's all green regardless of whether you're up or down, users will imagine that "green headline, red subsystems" means whatever they're observing, even if that makes the status display utterly uninterpretable from an outside perspective.

100% — will never be automated :)

There's still room for someone to claim the niche of the Porsche horsepower method in outage reporting: underpromise, overdeliver.

Thing is, these things are automated... Internally.

Which makes it feel that much more special when a service provides open access to all of the infrastructure diagnostics, like e.g. https://status.ppy.sh/


Nice! Didn't know you could make a Datadog dashboard public like that!

>It takes them a while to realize there’s really a problem and then to update the page.

Not really, they're just lying. I mean, yes, of course they aren't oracles who discover complex problems in the instant of the first failure, but no, they know well when there are problems and significantly underreport them, to the extent that they are less "smoke alarms" and more "your house has burned down and the ashes are still smoldering" alarms. Incidents are intentionally underreported. It's bad enough that there ought to be legislation and civil penalties for the large providers who fail to report known issues promptly.


Those are complex and tenuous explanations for events that have occurred since long before all of your reasons came into existence.

The only way to change that is to shame them for it: "Cloudflare is so incompetent at detecting and managing outages that even their simple status page is unable to be accurate."

If enough high-ranked customers report this feedback...


The status page was updated 6 minutes after the first internal alert was triggered (8:50 -> 8:56:26 UTC); I wouldn't say that's too long.

I'm guessing you don't manage any production web servers?

robots.txt isn't even respected by all of the American companies. Chinese ones (which often also use what are essentially botnets in Latin America and the rest of the world to evade detection) certainly don't care about anything short of dropping their packets.


I have been managing production commercial web servers for 28 years.

Yes, there are various bots, and some of the large US companies such as Perplexity do indeed seem to be ignoring robots.txt.

Is that a problem? It's certainly not a problem for CPU or network bandwidth (the load is very minimal). Yes, it may be an issue if you are concerned with scraping (which I'm not).

Cloudflare's "solution" is a much bigger problem that affects me multiple times daily (as a user of sites that use it), and those sites don't seem to need protection against scraping.


It is rather disingenuous to backpedal from "you can easily block them" to "is that a problem? who even cares" when someone points out that you cannot in fact easily block them.

I was referring to legitimate ones, which you can easily block. Obviously there are scammy ones as well, and yes, that is an issue, but for most sites I would say the Cloudflare cure is worse than the problem it's trying to solve.
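
For what it's worth, the well-behaved crawlers announce themselves, so blocking them takes a line in robots.txt or a few lines of middleware. A minimal sketch (the user-agent list is illustrative, and this obviously does nothing against bots that spoof their identity):

    # Sketch: reject self-identifying AI crawlers at the application layer.
    # Only effective against bots that send an honest User-Agent.
    BLOCKED_AGENTS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider", "PerplexityBot")

    def block_ai_bots(app):
        def middleware(environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "").lower()
            if any(bot.lower() in ua for bot in BLOCKED_AGENTS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Crawling not permitted.\n"]
            return app(environ, start_response)
        return middleware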

"No true Scotsman needs Cloudflare, because any true Scotsman can block AI bots themselves" is not a strong argument.

But is there any actual evidence that any major AI bots are bypassing robots.txt? It looked as if Perplexity was doing this, but after looking into it further it seems that likely isn't the case. Quite often people believe single source news stories without doing any due diligence or fact checking.

I haven't been in the weeds in a few months, but last time I was there, we did have a lot of traffic from bots that didn't care about robots.txt. ByteDance is one that comes to mind.

Security almost always brings inconvenience (to everyone involved, including end users). That is part of its cost.

What security issue is actually being solved here though?

What you're describing is more like someone who doesn't know computer science principles hacking on code, manually. Part of the definition of "vibe coding" is that AI agents (of questionable quality) did the actual work.

I feel like Hacker News commenters love making analogies more than people in your average space do, though. You can't come across a biology/health topic on here without someone chiming in with "it's like if X was code and it had this bug" or "it's like this body part is the Y of the computer."

Analogies can be useful sometimes, but people also shouldn't feel like they need to see everything through the lens of their primary domain, because it usually results in losing nuances.


On the other hand, if you are communicating with a bunch of people who share that primary domain, it can be a useful way of making a point.

(unless that primary domain tends to attract a lot of people who tend to the hyper-literal /s)


That's a great point and an easy way to visualize it as an outsider, but it's not necessarily that simple.

For one thing, the iPad (market-leading tablet) and the iPhone (market-leading pocket touchscreen device) were not their first attempt at doing that. That would be the Newton, which was an actual launched product and a commercial failure.

For another thing, even Apple can't just become the market leader by doing nothing. They need to enter late with a good product, and having a good product takes R&D, which takes time. With MP3 players, smartphones, and tablets, they didn't exactly wait until the industry was burnt through before they came in with their offering; they were just later (with the successful product) than some other people who did it worse. They were still doing R&D during those years when they were "waiting."

Apple could still "show up late" to AI in a few more years or a decade, using their current toe-dipping to inform something better, and it would still fit into the picture you have of how they "should've done it." Not to mention, Apple's also lost its way before with things like convoluted product lines (a dozen models of everything) and experimental products (the Newton then, Apple Vision now); messing up for a while also isn't exactly bucking history.


I see your point, but I see nothing to indicate they’re doing the “polish and wait”. No reason to believe they’re cooking behind the scenes or that this product was a learning exercise for them.

Most of their current products seem to be decaying in the death march towards the next yearly release. UX and UI are becoming more and more poorly thought out (see their latest design language). They half-pursue ideas and don't manage to deliver (VR, the Apple car, etc.).

I see cargo culting and fad chasing like any average leadership, only with a fatter stack of cash supporting the endeavour.


I guess I'm not necessarily saying they're secretly working on it now, but I'm responding to your "I don't get why they went for the rush" with "it doesn't seem like they really went for the rush" (any more than the Newton was evidence that they "went for the rush" of smartphones or tablets).

We basically know what they're cooking behind the scenes: writing a $1 billion check to Google for a custom Gemini model.

If I understand correctly, you're just basing that statement on climate change or war destroying us before we can do any better than Voyager, right? Because if we don't assume the destruction of humanity or the complete removal of our ability to make things leave Earth, then just based on "finite past vs. infinite future," it seems incredibly unlikely that we'd never be able to beat an extremely old project operating far beyond its designed scope.


Many reasons why. The probability depends on many, many factors; what you mentioned is just a fraction of them.

If we do ever reach that distance again it will be even less likely we do it for a third time.


I'm pretty bearish on human interstellar travel, or even long-term settlement within our solar system, but I wouldn't be so pessimistic about unmanned probes. The technical hurdles seem likely to be surmountable given decades or centuries. Economic growth is likely to continue, so the relative cost will continue to drop.

Absent a general decline in the capacity of our civilization, the main hurdle I see is that the cost is paid by people who will not live to see the results. But I don't think that rules it out; I'd certainly contribute to something like that.

What are some of the other factors you are thinking of?


This is reflexive pessimism with no substance. You're not articulating a set of particular challenges that need to be navigated/overcome, which could provide a roadmap for a productive discussion; it's just doomposting/demoralization that contributes nothing.


I don't want to introduce 50 tangential branches to argue about with no end in sight.

It's not pessimism, it's reality. Think about how unlikely it is. Humanity had one stretch where we reached for the stars, and that stretch ended; by sheer luck some crazy guy made it cheap. What happens when he's gone? Will it happen again? Most likely: no. In your lifetime? Even less likely.


Nobody was talking about only their own lifetime here. Even invoking that is off-topic pessimism ("you're going to die before stuff gets better").

How you acknowledge serious side effects some people have, sweep them aside with "but some people don't," and act like getting children on lifelong medication won't result in them having adult-relevant side effects once they're adults is mind-blowing.

