Many VMs do. The JVM was one of the first to do this extensively - all strings are in a "constant pool" as part of the .class format, and then they're referenced by index in instructions. Python does it for strings that appear in source code, and you can force it for runtime data with sys.intern().
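For a concrete picture of the Python side, here's a tiny sketch of sys.intern() (illustrative only; the strings and variable names are made up):

    import sys

    # Two equal strings built at runtime are usually distinct objects in CPython.
    a = "".join(["tun", "nel"])
    b = "".join(["tun", "nel"])
    print(a == b, a is b)   # True False (typically)

    # After interning, equal strings share one canonical object, so identity
    # checks and dict key lookups can skip comparing characters.
    a = sys.intern(a)
    b = sys.intern(b)
    print(a is b)           # True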
In a sense, this is like being in a foreign country where people speak other languages and act within their own culture. They are more productive than you because the environment fits them better than it fits you at that time.
And this acclimation is also similar to https://en.m.wikipedia.org/wiki/Mathematical_maturity: you talk with people from the math village and feel stupid because you're not familiar with the symbols and the vocabulary. But once you have learned the language, you'll see that you become better and better at learning; you have the means to gain more means, forming a kind of positive feedback loop.
The only thing I disagree with is the title. And maybe number 3: an estimate is needed for pseudo-reassurance and comparison; it's OK for it not to be accurate, and don't spend hours trying to make it accurate.
Agency prison is often overlooked; it's a superset of vendor lock-in. When Google Domains was sold to Squarespace, lots of people complained that they couldn't pay manually; it seemed unimportant to domain ownership, but they moved to Cloudflare anyway. They were forced to migrate, their freedom was taken away, and it hurt.
Maybe I'll add: the simplicity the author referred to (or what I would prefer to call process elegance) needs experience and knowledge. Especially for numbers 4, 5, and 6.
History, surviving or not, contains often-overlooked wisdom. Looking at the past reveals that our predecessors struggled with the same struggles. It's also wise to realize that time can be too scarce for that; let others who have done it teach us; keep our minds open.
Whether or not they must filter customers is a matter of law.
However, putting the responsibility to mitigate this problem in its entirety on one company is very inefficient and ineffective. If Cloudflare had a team dedicated to this effort, bad actors would simply switch providers, defeating a $200k/year effort with a couple of clicks.
Notice that the malware ultimately takes effect when the user executes the file.
This sounds more like an interaction design problem that should be solved at the OS level; the OS interface is one of the logistical bottlenecks in the malware delivery path.
Everyone running a service on the internet has a responsibility to prevent abuse of that service. They should all have and monitor an abuse@ address where they accept notifications about problems they're causing others and they should act on those notices within a reasonable amount of time. When someone fails in that responsibility they should/will get blocked.
I hadn't heard of trycloudflare.com before, but it's blocked on my network for now. If I need to, I can re-evaluate that later.
Anyone running a service online can get caught off guard and be taken advantage of by scammers and assholes. It's an opportunity to shore up your security and monitoring. The bad actors will eventually move on to abuse easier targets, and that's fine. When they do, that doesn't invalidate the work someone put into making sure their service wasn't being repeatedly/routinely used to harm others.
That responsibility only goes as far as other people are willing to block them for not doing it. There's no law of the internet that says you have to, but if your customers can't access your service because their ISP or whatever blocked you, that's when it's your responsibility to yourself to clean it up. If you're too big to block, then it's OK to ignore abuse.
The internet is a community. Some people in a community feel that they have no responsibility to anyone but themselves, which is why we need laws and regulations.
We want service providers on the internet to police themselves and make sure that they're not turning a blind eye to crimes taking place right on their own servers because the alternative is that laws and regulation come into play. There's an argument that internet companies that are too big to block could still be negligent, an accessory to crimes, liable for the very real and significant damages the poor management of their service enabled just so that they could save a little money, etc.
Just like with banks, there are people who would say that if a company is too big to fail/be blocked then they are too big to exist and should be broken up.
Personally, I'd rather that a service provider just do a better job keeping their corner of the internet clean, keeping the people who use their services safer, and preventing their services/equipment/IP space from being used to carry out criminal acts.
In the end it'd improve their service, improve their image, make the internet a safer place, and as a bonus it would force criminals to waste their time looking for a new company that'll be too cheap/lazy to kick them off their services. Hopefully they'll eventually end up only being able to find ones that the rest of us feel we can block.
The internet _was_ a community. Now it's a wall of commercial property, riddled with victimising criminals and advertisements that watch you. There are still some communities in there, but the bulk of it is a set of actors with no social interests in common with the users.
The abuse mechanism you describe exists in theory, but... commercial.
There is community between the NOCs of tier 1 ISPs, but they mainly care about routing.
In your picture, I'm imagining, say, CenturyLink stomping on a retail ISP, and I question whether this pans out like swatting. Can I get someone taken down by abusing abuse reports?
> I question whether this pans out like swatting. Can I get someone taken down by abusing abuse reports?
Not generally, no. Typically, abuse departments at ISPs don't blindly cut off people's internet access just because someone complains. They require evidence (server logs, message headers, etc) and there will be an investigation as well as multiple communications between an ISP and a user being accused of violating the ISP's terms of service. The same is true when the issue is between ISPs and their upstream providers. Keep in mind too that for both ISPs and upstream providers, everyone is naturally and strongly incentivized to not cancel the accounts of the customers who pay them.
There is one situation where false reports can get someone taken down. DMCA notices have this potential. ISPs can face billions in fines if they refuse to permanently disconnect their customers from the internet based on nothing more than unproven/unsubstantiated allegations made by third party vendors with a long history of sending wildly inaccurate DMCA notices. So far, media companies have been winning in courts and ISPs have been losing or (more often) settling outside of court. Everyone is still waiting to see how the case against Cox ends (https://torrentfreak.com/cox-requests-rehearing-of-piracy-ca...)
There is a solution for this at the OS level. It's domain names, validated through DNS. Those let the user decide if they trust the other side of a connection.
Here Cloudflare is showing they shouldn't be trusted, but because they are so big, we can't act on that. Blocking them would be bad; mocking them is the second-best option.
It isn't really "putting the responsibility to mitigate this problem in its entirety" on them so much as it is "putting the responsibility to mitigate this problem *on their service*".
Large software companies seem to enjoy passing the buck in recent years if it might impact their profitability, which is fine, but to say they could not do anything about it is incorrect. It may not be feasible to do so and still operate the service, but that doesn't mean it isn't possible.
That said, they're also using the "utility argument": just as your phone provider won't screen every call you make, your electricity provider won't lock your supply until you authenticate your use for non-nefarious purposes, and your ISP won't content-filter, Cloudflare says it won't police per-use other than when under explicit legal mandate (court injunctions). That's fair enough, at least to me.
Sure, but in this instance, they're offering an anonymous service. Just require a sign-up and a captcha, like you do for all of your other products, FFS. Are they on drugs? Do they want more botnets, to drive DoS mitigation sales?
Either discontinue the service, or serve each pipe from a subdomain that encodes the original source. Something that lets security tooling block known bad sites, without having them block a lot of legitimate sites.
> No one was insane enough to blame the CD-RW and flash drive manufacturers.
Cloudflare isn't acting like a CD-RW or a flash drive. They're acting like a storefront that sells fraudulent flash drives that say they're 1TB when they're actually 200MB, or don't work at all when you plug them in, or, worse, catch fire. A storefront that refuses to take the faulty products off the shelves when customers complain, refuses to stop selling merchandise they sourced from criminals, and refuses to do even basic due diligence to make sure the products they sell are legitimate.
People who operate stores have a responsibility to make sure that merchandise they sell to consumers isn't fraudulent and harmful. Companies offering their services online also have a responsibility to make sure that those services aren't being used to push fraudulent and harmful content onto consumers and that they aren't acting as safe-havens for criminals.
I can, as a Google admin, block links from outside the org; or, as a non-Google admin, block Google Docs. The business may decide not to block, but if I have a good SIEM then I can still do something, possibly inspect the file before it hits the user's desktop.
I can't block Cloudflare unless I'm willing to block half the internet. If I try to do additional inspection, I get huge amounts of noise and I'm going to make the internet unusably slow.
Whatever differences exist between a publicly accessible Google Drive file and an innocuous-seeming link to a Cloudflare-owned domain that takes users to a random malicious server without warning, we can be reasonably sure that those differences are meaningful, because these scammers are flocking to the Cloudflare service instead of using Google Drive.
Something about this Cloudflare service is really attractive to these scammers in a way that Google Drive isn't. Maybe these scammers just haven't discovered how great Google Drive is as a malware delivery platform, but I suspect that they have.
Now maybe all the attention on how Google Drive became the hottest place in town to spread malware caused Google to get off their ass and do something about the abuse of their online service, and it's become a less hospitable place for criminals than it used to be. Or maybe Google has continued to neglect their responsibility to keep criminals off their service, and the public has just gotten more suspicious of links to Google Drive in their inboxes, making Google Drive campaigns less effective, and it's the novelty of Cloudflare tunnels that makes them so effective. Or maybe it's just easier to create Cloudflare links that don't require accounts than it is to keep creating Google Drive accounts.
Where it matters most, though, there really isn't much difference between the two services. Both have a responsibility to keep their services from being used to facilitate crime. Both should respect RFC 2142, but don't. Both eventually get around to removing links to malware after you report them enough times, while doing basically nothing to stop that same malware from going right back up again at another URL/account. Both have more than enough resources and talent to be doing a much better job at internet abuse handling than they have been. They both just don't care enough to bother.
I quite like the status quo. I don't want Cloudflare or Google to block the files I'm trying to download just because they got a bunch of reports from clueless people or bots.
I want both to behave like dumb pipes. They don't have enough context to make any decisions like the ones you described. Ideally everything would be end to end encrypted so it'd be impossible for them to make the decision for me.
> I don't want Cloudflare or Google to block the files I'm trying to download just because they got a bunch of reports from clueless people or bots.
Lots of scammers don't want Cloudflare or Google to block the files they're trying to trick people into downloading either. There are people who feel the same way about spam, that no service provider should have the right to block or even flag messages as spam for anyone else. Thankfully, most people disagree and want service providers to act on abuse complaints instead of acting as safe-havens for criminals.
Even dumb pipes need to be maintained when they start carrying something toxic/harmful that isn't supposed to be there. These are nothing like dumb pipes though. They're watching everything you and everyone else does with the service and logging it all. They're collecting every scrap of data they can while we interact with these services and they're happy to use that data when they think it'll put money in their pocket, but much less interested in using it to prevent the harm being done.
It isn't hard to find this stuff. These types of scammers are not usually very subtle. In this case they're linking to .LNK and .VBS files, but scammers using these kinds of services are doing things like repeatedly uploading the exact same malware-infected file, or not even bothering to modify their phishing sites each time they reupload them, or using the same keywords/broken English in their spam, etc.
These companies could automate checking what's at the other end of a generated link, run a quick AV scan on an uploaded file, look for domains registered with misspellings of banks and online shopping companies, or check whether the hash of recently uploaded content matches something they recently had to take down because it violated the law and/or their own ToS/AUP.
I'm not even suggesting that they take something offline immediately if they find something; just flag it for review by an actual human with eyes and a brain, and have enough humans available that it doesn't take long before that review happens. Make it easy for people to send reports of internet abuse. It's not hard to act like responsible members of the internet community; it just takes work.
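As a rough sketch of the hash-matching idea only, and not anyone's actual tooling (the takedown_hashes set and flag_for_review function are hypothetical):

    import hashlib

    # Hypothetical: hashes of content the provider already had to remove.
    # In reality this would be populated from past takedowns, not hardcoded.
    takedown_hashes = {
        hashlib.sha256(b"previously removed payload").hexdigest(),
    }

    def flag_for_review(upload_id: str, reason: str) -> None:
        # Placeholder: a real system would enqueue this for a human reviewer,
        # not take the content down automatically.
        print(f"flagged {upload_id}: {reason}")

    def check_upload(upload_id: str, content: bytes) -> None:
        digest = hashlib.sha256(content).hexdigest()
        if digest in takedown_hashes:
            flag_for_review(upload_id, "matches previously removed content")

    # A re-upload of the same bytes gets flagged; anything else passes silently.
    check_upload("upload-123", b"previously removed payload")
    check_upload("upload-124", b"something new")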
> In this case they're linking to .LNK and .VBS files, but scammers using these kinds of services are doing things like repeatedly uploading the exact same malware-infected file
It sounds like you advocate for proxy servers to inspect traffic at the application layer. Is that right?
In the OSI reference model, the communications between systems are split into seven different abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
> It sounds like you advocate for proxy servers to inspect traffic at the application layer. Is that right?
In most cases you wouldn't need one. A URL shortener service can see what people are linking to. A web hosting company can see everything on their own servers. In the specific case of Cloudflare and this particular product, they may or may not need to. I notice that they do reserve the right to monitor and inspect any traffic on the Cloudflare network.
This text is non-responsive to the question. Maybe your purpose is to practice typing? Just because a company "can" do something doesn't mean that they will devote the resources to perform it.
Never dug deep into Bevy's subcrates, but does this mean you have more control from the JS side? (e.g. when I want to interleave the main loop tick with something from the JS side)
Oh, in my case I was actually creating a “headless” game, where my graphics were just HTML and CSS, because I’m better equipped artistically to make things with those than with OpenGL!
One thing I learnt going from frontend development, to distributed database development, and then back:
These neanderthal-grade tools have the same problem: attempting to abstract away problems that can't be abstracted away. The MO of such tools is to limit expressiveness rather than provide context or knowledge.
Take the multi-tab syncing you mentioned. Syncing will always take a loop and a buffer, assuming a channel of communication is not a problem. Those can live in a library, in a browser's built-in API, abstracted away, or you can write them yourself. But when you don't write them, you don't have control over them (e.g. when to sync, how to interleave the syncing with other operations).
Better tooling works at the level of paradigm. Instead of simply abstracting, it brings complexity to clarity and provides a learning ramp.