Hacker News | james412's comments

Pret and Itsu (same founders) have been my default shops for many years because you simply don't have to think: walking in and out in under 2 minutes is absolutely to be expected, and if you're a bit autistic and enjoy routine, you can have precisely the same choc bar + coffee + salad + sandwich with identical consistency on every single visit.

Their food is hardly amazing, but it is wonderfully consistent. I truly hope they survive and flourish once more; London simply would not be the same without them.


Dominic Raab (currently the UK's foreign secretary) is (in)famous for having the same Pret meal for lunch every (working) day: a chicken caesar and bacon baguette, with a pot of fruit and the same smoothie.

https://www.theguardian.com/lifeandstyle/2018/apr/26/is-it-w...


Was worried about CF getting their claws dug into archive.org, but on reading, this is a decidedly non-evil deal; actually, it sounds wonderful. Still, I worry whether there might be some unseen long-term interest in the archive.

Never forget Dejanews


Keep in mind how Cloudflare makes most of their money: They sell a web proxy service with security and performance features including a CDN. Cloudflare's interests are furthered by improving that service in ways that help its customers. Keeping the Web Archive healthily stocked with content is aligned with their long term revenue growth.


T+10 years, I very much expect Cloudflare's core business to have expanded significantly. I remember the time a Googler friend told me they were about to release the one thing they'd absolutely never do; Chrome came out a few weeks later. Now look at Firefox.

You need to pay attention to the silent positioning of these companies to even guess at where they might go, so deals with the likes of archive.org may have some unseen substance that only becomes obvious much later.


As a business they absolutely are not going to stay in the CDN lane as a primary.

Akamai has $3b in sales and an $18b market cap.

Cloudflare has $348m in sales and a $10.8b market cap.

Akamai is their maximum ceiling if they focus primarily on the CDN segment. Cloudflare is rapidly approaching their valuation ceiling if they stick to CDN as their core (and they'd have to start killing Akamai just to get there; the CDN business is increasingly a slower growth segment in the larger cloud industry).

Companies all around them in the cloud are growing faster, yet few are more important than Cloudflare. Zero question Cloudflare will continue to aggressively branch out, leveraging their critical positioning. In the not-so-distant future CDN will not be the center of their business. CDN is and will remain a springboard for them, a gateway drug, milk at the back of the grocery store.


Akamai is not their ceiling because Akamai doesn't serve all segments of the market.

I'm fairly critical of Cloudflare for a lot of reasons, but one thing I think they did right was focus on the SMB market with plans that were actually affordable to the average business. They targeted customers that companies like Akamai pretended didn't exist. Even now they have the cheapest plan available, and once they consolidate the market even further they can start raising those prices.


Akamai is their ceiling in CDN because they own a much higher-value segment of the business, representing a drastically larger share of all dollars in the CDN space. Their business is nine times the size of Cloudflare's because their customers are far more lucrative.

If Cloudflare holds onto all of their already considerable number of customers, and then kills Akamai and somehow takes all of Akamai's business, the combination will be a mere 10% larger than Akamai already is now. There is your general indie ceiling in action, with all segments combined (and Cloudflare isn't going to monopolize the entire CDN business besides).

All you need to know to spot the independent CDN ceiling is that Cloudflare + Fastly + Akamai = $3.6 billion in sales (with the understanding that it's a slowly increasing ceiling, as the CDN market is still growing). The ceiling in that space for Cloudflare just can't realistically be much larger than that combined group and that's not much larger than where Akamai is already at. The only way this isn't the case, is if you project Cloudflare knocks off most competitors and takes the market (they can't, Amazon, Microsoft, Google among other giants, are standing in the way of that outcome).

It'll take Cloudflare a small lifetime to get to $3 billion in sales in the CDN space at the rate they're growing (they're adding ~$8m-$10m per quarter in growth (all of which obviously isn't CDN), so maybe it'll only take a few decades with some compounding). It took Akamai 22 years to get there with very high value customers and a pretty nice open field for many of those years.
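The growth arithmetic above can be sanity-checked with a back-of-envelope sketch (figures rounded from this comment, not exact financials, and the linear-growth model is my own simplifying assumption):

```python
# Back-of-envelope: how long until Cloudflare's annual revenue run rate
# reaches Akamai's ~$3b, assuming (per the comment above) each quarter's
# revenue is roughly $9m higher than the previous quarter's, with no
# compounding.
annual_now = 348                  # $m, Cloudflare annual revenue
quarterly_now = annual_now / 4
quarterly_target = 3000 / 4       # $3b/year expressed per quarter
growth_per_quarter = 9            # $m added to the quarterly run rate each quarter

quarters = (quarterly_target - quarterly_now) / growth_per_quarter
years = quarters / 4
print(f"~{years:.0f} years at linear growth")  # roughly two decades, as claimed
```

Compounding would shorten that somewhat, but the order of magnitude ("a few decades") holds.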

Akamai in absolute dollar terms is growing faster than Cloudflare + Fastly combined. The CDN ceiling is actually running away from Cloudflare at present. That shouldn't be happening.

Cloudflare knows full well CDN isn't their brightest business future. It's why so much of their expansion effort is going into everything else. Given the way they price-structured their CDN from day one, Cloudflare has always known CDN was a lure and the upside was in sprawling outward from it. Come for the CDN, stay for the workers or whatever preferably higher margin thing we can sell you on. It's also why they're not interested in / worried about trying to make money on domain registrations, as with SSL before that. They'll happily murder the margins in foreign services all day long (areas where they don't compete, but there is margin to wipe out cost effectively, and with customers to lure in), so long as they can occasionally launch a new service where they have a distinct advantage and can convert their base to use it and increase total revenue per customer in the process.

Which would be the better path: Cloudflare owning a big part of Akamai's CDN business by aggressively climbing up the ladder from an unassailable price-value position Akamai doesn't want to come down to, like an ARM eating an Intel from the feet upward; or just leaving the snoring giant alone to keep snoozing in his enterprise tower while Cloudflare busies itself sprawling out in many directions, leveraging the volume of customers that Akamai doesn't want to (and/or can't) go after because they're not viewed as lucrative enough? I think what Cloudflare can find outside the CDN business is likely to be more valuable than what's inside it, very long-term speaking.

And if you're Akamai and you let Cloudflare get far enough along with that sprawling (likely already too late), how about if they drop your CDN legs out from under you. Cloudflare builds out many other legs to stand on, so they flip the switch on the margin and kill the CDN market for the independents, as they were willing to do with domains and SSL. Free CDN, all tiers, all features. They can't do that today, they might be able to do it tomorrow. The CDN market becomes the SSL market, and as a totally free lure it accelerates a rush into Cloudflare's other more exclusive services (including for larger, lucrative enterprise customers). Surely this switch has been pondered inside of Cloudflare, road-mapped as a potential.


> As a business they absolutely are not going to stay in the CDN lane as a primary.

Yeah, and the big five cloud vendors (AWS, Azure, GCP, IBM, Oracle) all have their own CDN solutions bundled. Hard to make a case for purchasing a separate CDN solution.


I'm not sure about all the providers but Amazon's CloudFront CDN product has additional costs, so it's "bundled" but not in the sense that it's free, only that it's integrated.

And one of Cloudflare's selling points imo is the multi-cloud customers. Use AWS all the way but Cloudflare as your CDN, and you could switch to GCP seamlessly. Or route traffic based on pricing, etc. I think you're right that they will (and have) absolutely branch out from CDN, but I think their CDN product is actually compelling, especially to bigger companies that are more afraid of Amazon than they are of Cloudflare.

(Other interesting point - it's worth noting that IBM's CDN is essentially white labeled Cloudflare).


Great comment. Cloudflare is not a CDN. They are an edge computing platform that happens to offer CDN services. Could Akamai grow into that market faster than Cloudflare can consume it? TBD.


Edge computing is super interesting, and today's CDN providers should be able to provide it given their current infrastructure deployment. It could really bring in the next era of computing and technology once certain networks/providers reach critical mass to provide edge services within 5-10ms to customers.


If jgrahamc is reading this, I'd really like to know if Cloudflare wants to work with telcos.

Imagine a small server in every cell tower, with locally-cached maps/Wikipedia/latest movies.

Some communication couldn't be cached (e.g. real-time video calls), but a lot of broadcast media could be. Of course there are copyright implications, and it might require partnering with Netflix or others.

The quick load times would be great for users, and the reduced load on the backbone would be good for the telecom companies.

If you'd like me to chat to some friends in telcos in New Zealand about this, drop me an email. It's not my job now (I'm in IoT) but I know who to talk to if you'd like to get this kind of thing moving.


Kiwix does this for Wikipedia, other Wikimedia projects and other free content projects, through "Kiwix hotspot" (based on kiwix-serve). https://www.kiwix.org/en/downloads/kiwix-hotspot/


AFAIK Netflix (and YouTube and others) already do this edge caching. They partner with telcos, as you said.



They control SSL decryption for a massive number of websites. Governments will gladly fund Cloudflare for eternity.


If your root CA is subject to the laws of a government that can seize the root certificates and MITM the connection, that's not much better. Cloudflare just makes it easier.


Certificate Transparency makes this significantly harder to do stealthily. I’m not convinced that Cloudflare is a deep state operation either, but Cloudflare's ability to secretly MITM is a position afforded to a select few, and certainly not every CA.


It's much easier (and virtually undetectable) to MITM when you are also the reverse proxy though.


Akamai as well then?


Much of the US Government already uses it, so yes.


More like a web blocker "service". It is profoundly unhelpful to me that a proxy service cares if I have Javascript disabled in my browser.


It's the website that has manually enabled a feature that requires Javascript. Cloudflare does not require Javascript out of the box.


Please clarify. I thought all those captcha puzzles were coming from Cloudflare. Are you saying they are only enabled if the destination page has JS?


I believe GP is referring to a setting that a cloudflare user has to flip for requiring visitors to enable JavaScript


What worries me is that Cloudflare is deanonymizing a huge number of Tor users, and the issue that comes with it is that a huge part of those users actually need access to the web archive due to country-wide DNS censorship (European countries included).

As Cloudflare deanonymizes Tor users on pretty much every website hosted behind it, I fear they are abusing that power once again to deanonymize users of the web archive.

Cloudflare always claims it's not their issue and that the shitty captchas are a webmaster setting, as with Google's infamous PRISM-sponsored PREF cookie; but to be honest they should just not have implemented it in the first place if privacy were a core value of their company.

The "DDoS" protection basically fingerprints the machine and user inside an encrypted HTTPS connection, which makes the encryption tunnel itself moot.


Not long ago, CF was blocking access from Tor. And they sometimes block access from my web crawler. I don't like CF: they act as a police force or gatekeeper to the origin website, deciding who to penalize and who not to, while pretending to be speeding up websites and protecting from 'threats'.


They’re acting more as a security guard. Which is to say that they’re intentionally employed by the owner of the property you’re trying to enter. Often specifically to “bounce” users like you, malicious or not. Believe it or not there are legitimate reasons for wanting only real human users on your website!


> while pretending to be speeding up websites and protecting from 'threats'.

They do though. That's why people pay them lots of money to do those two things. Not sure what part you think is "pretending"?


One of the first 100 people to use Cloudflare when it launched.

Paying them today to speed up a couple of websites while protecting them.

They rock at making big things possible for very small companies.


Hey, me too! Do you have the first-users t-shirt?


I don't really know. Cloudflare is notoriously in conflict with various archive sites, and that history makes this announcement sound not too credible.

I think we will see selective removal of certain content.


> Was worried about CF getting their claws dug into archive.org

SAME. From the title, I assumed the Wayback Machine would be using Cloudflare. Nice prank, boys.


When users are used to this (getting redirected to an archived copy when the site is down/not available) and this trial balloon has been proven to work, Cloudflare will replace archive.org with their own infrastructure. This is the common game plan.


Uh, no. We're literally doing the opposite. We used to have our own caching infrastructure for "Always Online" and we're getting rid of it and using archive.org instead.


Thanks, so maybe this page is outdated where it mentions your own crawler with user-agent? Or does the Internet Archive use it for these crawls? https://www.cloudflare.com/always-online/


How do you handle robots.txt? The previous incarnation of Always Online didn't care about robots.txt, while archive.org does.


https://blog.cloudflare.com/cloudflares-always-online-and-th...

We tell archive.org about the URI, they crawl it. They handle robots.txt.


archive.org doesn't handle robots.txt in any meaningful way (see my comment above at https://news.ycombinator.com/item?id=24516875 ). If that's changed recently, I'd like to know more.


Note that archive.org stopped respecting robots.txt in 2017. [1]

In my experience, the site owner must email archive.org support to be excluded from its crawler and archiving.

[1]: https://boingboing.net/2017/04/22/internet-archive-to-ignore...
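For reference, the conventional crawler-side handling of robots.txt looks roughly like this. A sketch using Python's stdlib parser, not archive.org's actual code; the domain and rules are made up:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt body; a real crawler would fetch this from
# https://example.com/robots.txt before requesting any other URL.
rules = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("mybot", "https://example.com/index.html"))  # True
print(rp.can_fetch("mybot", "https://example.com/private/x"))   # False
```

A crawler that, like archive.org post-2017, chooses not to respect the file simply skips this check entirely.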


And thank god for it. Trying to explain to end users why their site was not, in fact, always online on account of the creaking behemoth that plodded along in IAD barely managing to successfully cache and serve anything ever was never any fun.

The original Always Online infra was long unloved and probably kept on life support far too long out of reluctance to deprecate an early feature.


"We're literally doing the opposite."

How does what you do now contradict what you might do in the future? What legal assurances are there that you won't do that after you leave? (See the Facebook/Oculus "no Facebook account" promise)


Wait... so you think Cloudflare's master plan is to roll this new thing out to get people to accept it as normal, and then suddenly make a big shift to.... what they currently have?

Why don't they skip this step and just keep what they have now, then? No one seems to be up in arms that they currently provide their customers offline caching...


Doesn't CF already have an "Always Online" feature using their own infrastructure? So this seems like the opposite happening.



I know it's common and perhaps even fashionable, but FWIW language like "We take an opinionated stance" utterly puts me off caring about this package

It's a piece of software, it has a design that is either fit for purpose or not. When ego becomes entangled in that design process, it's a strong indicator of the kind of experience one might have trying to get fixes or enhancements merged, or even the kind of attitude you'd find when attempting to report a bug.


That's not what the word 'opinionated' means here. It's not any one person's opinion; it's that the project overall takes a stance on an issue rather than leaving everything open for everyone else to figure out. It provides clarity and direction compared to the more difficult situation where every library is completely general. No ego involved at all.


Perhaps I misunderstand the text in the README. Who is "we" in this case? Is the software writing its own README?


'We' refers to the authors and their organization as a collective. That's still the meaning of the word 'opinionated' in this context.


More context is helpful here. "We take an opinionated stance that every module should be a crate, as opposed to generating Rust files 1:1 with proto files."


A cold-cache Google page load here pushes 725kb just to render a logo and a search box. To avoid fingerprinting, my cache is always cold.

Google search is a huge dog


Considering the functionality, what would be your limit to keep google from being considered a "huge dog"?


42 links, a text box, vector logo, 20kb and that's generous

Autocomplete function, dropdown menu JS, 10kb max

Considering this page is viewed by millions, and doesn't even contain the logic to render search results
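As a rough illustration of how achievable that budget is, a toy search page with a logo placeholder, a form, and 42 links gzips to well under 20kb (this is obviously nothing like Google's actual markup, just a size sketch):

```python
import gzip

# Toy search page: inline SVG logo, a search form, and a few dozen links.
links = "".join(f'<a href="/l/{i}">link {i}</a>\n' for i in range(42))
page = f"""<!doctype html>
<html><head><title>search</title></head>
<body>
<svg width="100" height="40"><text y="20">logo</text></svg>
<form action="/search"><input name="q"><button>go</button></form>
{links}
</body></html>"""

size = len(gzip.compress(page.encode()))
print(f"gzipped size: {size} bytes")  # comfortably under the 20kb budget
```

Real pages need styling and result-rendering logic on top, but the gap to 725kb is still enormous.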


I was linked https://lite.duckduckgo.com/lite, and even without autocomplete or dropdown functionality, it comes in at 8.5 kB, so I'd say that the 10 kB goal is pretty aggressive.


Reverting to their own search site from 10 years ago?


You might be pleased to notice Google Scholar is pretty much still the old design, such a breeze to use



That seems unfair, considering there is no lite google equivalent. The full duckduckgo site still comes in lighter (3/4 of the size), and includes a lot more media, so it's still better than Google.


They've taken so long to deliver Librem 5 that even normies have started reverting back to candybar phones in the meantime


Shows the strength of the pinephone. Release something cheap, quick and hackable rather than try to deliver everything. It’s been a really impressive project so far!


Thanks for reminding me about it. I can see it's already sold out:

https://store.pine64.org/

Curious if there's going to be another batch.


They're doing CE batches which run about a month each but the underlying hardware is the same for all of them. Just follow their blog (https://www.pine64.org/blog/) and wait for the announcement for the next batch preorder.


Not quite the same; they're now selling two slightly different versions. One has 2GB of RAM, while the other has 3GB of RAM and is usually marketed under a "Convergence" label. Also, between the UBports CE and the pmOS CE the hardware got some slight revision that changes your ability to connect to the phone over Ethernet. I believe the upcoming Manjaro CE will have the same distinction between models.

[1] https://store.pine64.org/product/pinephone-community-edition...


Right, but that's a different model. The 2GB CE's are otherwise identical to each other aside from bug fixes.


they are shipping like every 2 months


The Pinephone is one of the most disappointing things I have ever bought. I'm an old Nokia N900 fan who was so enthusiastic about getting a new open phone, but the Pinephone’s CPU is extremely underpowered in terms of running the only interface that is both libre and has any real future (i.e. Phosh on Mobian – Ubuntu Touch is based on moribund 2014-era code, and Sailfish’s UI isn’t libre). Scrolling is ragged, opening new windows is painfully slow, and it is easy to make the device start swapping. Also, the Pinephone screen and case feel very cheap.

For me, the Pinephone is at best a tech preview for the sort of experiences you might get to have on the Librem when it becomes available.


Yeah I ordered one 15 months ago and had to start using a flip phone. I can't wait any longer though, I am going to have to get a different smartphone :(


> McCulloch says that the thrust appears to be between one and four micronewtons—exactly the amount his theory predicts

Somehow I thought physics was meant to be a little more precise than this


Physics is precise. The uncertainty is due to error-bars in measurements. Any measurement has error bars for systematic, random and absolute errors, then there is the combinative error as multiple types of measurement are combined together.

It’s more of a worry, tbh, if you don’t see an error estimate in a result...


I think the objection is that those are large error bars compared to the force measured. And that even a modest source of experimental error could put this at zero real output.

ETA: and since “non-zero real output” implies that virtually all of our physics is incorrect, and needs to be rebuilt from the ground up, the bar for measurement error needs to be exceptionally small.
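The standard way independent error sources combine is in quadrature, which is why a reading of a few micronewtons with micronewton-scale error bars can still be consistent with zero. A sketch with made-up numbers, not the experiment's actual figures:

```python
import math

def combined_uncertainty(*sigmas):
    """Combine independent measurement uncertainties in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical thrust measurement with several independent error sources (µN)
systematic = 1.0
random_err = 0.8
calibration = 0.5
total = combined_uncertainty(systematic, random_err, calibration)

reading = 2.5  # hypothetical reading, µN
print(f"thrust = {reading:.1f} ± {total:.2f} µN")

# If the reading is within ~2 error bars of zero, a true value of zero
# cannot be excluded, which is the objection above.
print("consistent with zero:", reading - 2 * total <= 0)
```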


C-the-language has no concept of size-tagged arrays at runtime, and I guess that's baked in deeply due to the various guarantees made about sizeof(array), &array[0], and the ability to cast &array[0] back to the original array type. The iAPX hardware would have gone unused.
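Those sizeof guarantees can be illustrated from Python via ctypes, where (as in C) the array length lives in the type rather than in any runtime tag attached to the data:

```python
import ctypes

IntArr8 = ctypes.c_int * 8   # the length is part of the type, as in C
arr = IntArr8()

# The C-style "decay" of &arr[0] to a plain element pointer:
p = ctypes.cast(arr, ctypes.POINTER(ctypes.c_int))

print(ctypes.sizeof(IntArr8))                        # 8 * sizeof(int)
print(ctypes.sizeof(ctypes.POINTER(ctypes.c_int)))   # just a pointer

# Casting back only recovers the length because the target *type*
# restates it; nothing at runtime remembers the array was 8 long.
back = ctypes.cast(p, ctypes.POINTER(IntArr8))
print(ctypes.sizeof(back.contents))                  # 8 * sizeof(int) again
```

Once the length is only in the type, there is nothing for size-tagging hardware to check at runtime, which is the point above.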


Solaris uses SPARC ADI to great success, the iPhone X has pointer authentication, and Android 11 has done the work to require ARM MTE in future releases.


Pointer tagging is a completely different tech that requires no runtime knowledge of the length of an array


This is actively encouraging people to abuse a free service that many people depend on. It's incredibly irresponsible to see it posted here.

Stealing a worker for up to 6 hours, likely running on real Apple hardware because that's how OS X is licensed; man, there are no words. Free CI is difficult enough to supply as it is without freeloaders tying up a limited pool of workers because they're too lazy to run something locally.


It should be noted that we're talking about Microsoft's resources here, not some independent CI company or startup. While I don't want to encourage anyone to violate any ToS, I think the moral situation is a bit different for a tech giant.


It's not about Microsoft's resources, it's about holding up those resources from others


The resources are effectively limitless, it’s not a big deal.


It's a tragedy of the commons situation. Microsoft might be _able_ to provide effectively limitless funding to power these free workers, but if enough people choose to abuse this, Microsoft might no longer be _willing_ to provide that funding, so everybody would be deprived of a useful resource because of the bad actors.


Yeah, Microsoft does some very strange things. In particular, I'd like to draw attention to how SkyDrive/OneDrive used to offer 25(?) GB; not only did Microsoft lower it to 7(?), but it said it would delete files over the limit. I have gone over the limit on Dropbox and it just stops you from uploading until you bring your storage within quota. Google Drive does the same IIRC. Microsoft is just outright terrible.


I would even argue that rampant abuse might actually be threatening to GitHub the company.

MS may still be the type of company to change its mind about upholding the principles that GitHub runs on.


You mean MacStadium resources, since that's the 3rd party company that provides the actual backend service.


I don’t mind the post because it’s a cool hack and could be useful for debugging workflows, but I do agree we should refrain from abusing it as a free shell.


> This is actively encouraging people to abuse a free service that many people depend on.

The internet was built by phreakers and pirates. This statement shows how far we've since fallen.


There's nothing of the spirit of the early hacker days in this repo, it's following GH's documentation and cutpasting some tunnelling instructions. I think it says more about the common misunderstanding of what hacking originally meant that this comparison was attempted at all

(FWIW, spoken as someone who spent many months wardialling by hand and poking around as the rest of the household slept during his school days)


I didn't cut and paste any tunneling instructions, or indeed any instructions at all.


which phreakers and pirates built the internet?


I'm pretty sure it was built by the military and universities.


You think he's a fraud simply because he bullshitted his way through some computer sales pitch? In a way, I have a ton of respect for people willing to even try this kind of thing in public

If he could talk lucidly about things like HTML I'd think that lent more credence to the possibility he didn't have a clue about hydrogen

edit: for the downvoters, if you haven't seen one of these "bear thesis" articles before, understand you must do at least as much homework as the bear claims to have done before accepting anything you read. Of course Nikola is a dodgy company, but it's also a social phenomenon. That's the value in it for the likes of GM, and also for the average investor -- including the professionals. At one stage YouTube was the largest video piracy company on the planet before a larger company swallowed them up and cut deals to legitimize what they'd done. Meanwhile, everyone knew the brand. You can consider what's happening here to be something roughly comparable


You have a ton of respect for people who will lie through their teeth to make a sale?

Why should we not assume his HTML5 knowledge is roughly equivalent to his hydrogen car knowledge?


In another window I have a thesaurus open, looking for a word that conveys only the subset of meanings of respect that don't intersect with approval; something that captures the fascination, the horror of the carnival-barker personality, and the way we are seduced and bamboozled by their spectacle of bullshit. I think we have to admit that we actually enjoy aspects of this seduction. We love listening to a raconteur, to stories of a con man, but the rock star we love from a hundred feet back in the throng of the crowd smells like their own urine up close. I think if we're going to build a society-wide immune response against bullshit, we need to be honest that we enjoy a performance, and a sure-fire way to improve a performance is to unshackle it from the constraints of reality and integrity.


Would you withdraw your savings from a bank because the cashier didn't understand HTML 5? Why should we not assume operating a cash drawer is roughly equivalent to HTML 5 knowledge?


1. He’s a vastly more senior person than a cashier.

2. It’s not the lack of understanding that’s concerning. It’s the confident wrongness.


I wouldn't expect the cashier to be selling me a product they didn't understand either. If my bank tried to sell me a new security because it was backed by custom chips running HTML, I would probably switch banks as well.

Lying with confidence is, after all, where the "con" in con man comes from.


Fair enough. But what kind of person lies about the existence of solar panels on their roof?

> Trevor claimed that Nikola’s headquarters has 3.5 megawatts of solar panels on its roof producing energy. Aerial photos of the roof and later media reports show that the supposed panels don’t exist.


I don't have respect for people that bullshit their way through situations that fraudulently entice investors to put their money into a company.


[flagged]


I think he's started some great companies. I also don't have much respect for him as a person.


[flagged]


I think there's a difference between saying "Here is a demo of some tech on a pig that, some day, could treat depression in humans" and "We have treated depression and cured it, as you can see from this demo. Also, this is not a pig, it is a human."

Elon loves the former. Trevor appears to love the latter.


I'm not sure what you're getting at here: I've already said that I don't have much respect for either person.

It seems like you're trying to say that because Elon does it and has been successful, it's okay for Trevor to do it in his own pursuit of success. Which is a big nope nope nope. It just means both of them are doing shady things.


Is Neuralink publicly traded?

Forward thinking, speculative claims aren’t the same as fraud.


it's the cherry on top of everything else he's bullshitted through.

nothing about his background or experience gives any legitimacy to him being in a position of "electric truck CEO and mogul"


IP addresses sharing a route have a common prefix. This is not true of MAC addresses, which are allocated essentially at random. If you wanted to route solely using MAC addresses, every router in the world would need a lookup table containing every MAC address; route aggregation would be impossible.

That's not /the/ reason why a MAC address is involved, though. It's because that's the address of a physical device at a lower layer in the stack. As others mention, IP is media-independent; it cannot depend on a lower-tier addressing scheme without becoming fused to that medium.
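The prefix-aggregation point is easy to demonstrate with Python's stdlib ipaddress module: adjacent IP networks collapse into a single advertised route, while MAC addresses have no such locality (illustrative sketch, addresses made up):

```python
import ipaddress

# Four adjacent /26 networks share a common prefix...
nets = [ipaddress.ip_network(f"192.0.2.{i}/26") for i in (0, 64, 128, 192)]

# ...so a router can advertise them as one aggregate route.
agg = list(ipaddress.collapse_addresses(nets))
print(agg)  # [IPv4Network('192.0.2.0/24')]

# MAC addresses carry a vendor OUI, not a topological prefix: two hosts
# on the same LAN can have wholly unrelated MACs, so there is nothing
# adjacent to aggregate.
macs = ["00:1a:2b:3c:4d:5e", "f4:5c:89:01:23:45"]
print(macs)
```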


In an alternative universe where Novell continued to dominate networking, we'd be talking about how IPX uses the MAC directly to ID the host and had a separate network ID to uniquely identify the LAN the host is connected to.

It is actually a pretty reasonable way of integrating hardware MACs directly into the internetworking stack.

