Now I'm curious -- is there a way to do this that avoids downloading any more than strictly necessary?
The command above downloads the whole repo history. You could do a --depth=1 clone to skip the history, but it still downloads the latest version of the entire repo tree.
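If you only need part of the tree, a shallow, blob-less, sparse clone gets much closer to "only what's strictly necessary" (URL and path below are placeholders):

    git clone --depth=1 --filter=blob:none --sparse <repo-url>
    cd <repo>
    git sparse-checkout set some/subdir   # file contents are fetched only for this path

That still pulls the commit and tree metadata for the latest revision, but it skips file contents outside the path you ask for.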
In TypeScript 7, the compiler will be written in Go instead of TS. But the compiler will still produce JS code as its output and so Node.js is still relevant for running that JS code.
Or is there something else about TypeScript 7 that will make Node.js irrelevant?
> When we started 8 years ago, SQL databases were “old fashioned.” NoSQL was the future. Hadoop, MongoDB, Cassandra, InfluxDB – these were the new, exciting NoSQL databases. PostgreSQL was old and boring.
In 2017? I thought the NoSQL hype had subsided by then and everyone was excited about distributed transactions -- Spanner, Cockroach, Fauna, Foundation, etc.
For PC CPUs, there are already so many watts per square millimeter that many top-tier parts of recent generations run thermally throttled 24/7; more cooling improves performance rather than temperatures, because it lets more of the cores run at 'full' speed or at 'boost' speed. This undercuts the vendors' profitable market segmentation.
In this environment it makes some sense to use more efficient RISC cores, to spread the cores out a bit with dedicated blocks that either aren't going to get used all the time or will run at lower power draw, and to combine cores with better on-die memory availability (very large L2/L3 caches) and other features. Apple even leaves some of the die near the power-delivery section as empty space for thermal reasons.
Emily (formerly Anthony) on LTT had a piece on the Apple CPUs that pointed out some of the inherent advantages of the big-chip ARM SOC versus the x86 motherboard-daughterboard arrangement as we start to hit Moore's Wall. https://www.youtube.com/watch?v=LFQ3LkVF5sM
If you know that you need to offload matmuls, then building matmul hardware is more area efficient than adding an entire extra CPU. Various intermediate points exist along that spectrum, e.g. Cell's SPUs.
Not really. Getting extra CPU performance likely means more cores, or some other general-purpose compute silicon. That stuff tends to be quite big, simply because it’s so flexible.
NPUs focus on one specific type of computation, matrix multiplication, and usually with low precision integers, because that’s all a neural net needs. That vast reduction in flexibility means you can take lots of shortcuts in your design, allowing you cram more compute into a smaller footprint.
If you look at the M1 chip[1], you can see the entire 16-core Neural Engine has a footprint about the size of 4 performance cores (excluding their caches). It’s not a perfect comparison, without numbers on what the performance cores can achieve in terms of ops/second vs the Neural Engine. But it seems reasonable to bet that the Neural Engine can handily outperform the performance core complex when doing matmul operations.
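(To make "low-precision integer matmul" concrete, here's a toy NumPy sketch of the kind of arithmetic NPUs are built around -- this says nothing about how the Neural Engine is actually implemented:)

    import numpy as np

    def quantize(x, scale):
        # symmetric int8 quantization
        return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 64)).astype(np.float32)
    b = rng.standard_normal((64, 64)).astype(np.float32)

    scale_a, scale_b = np.abs(a).max() / 127, np.abs(b).max() / 127
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)

    # multiply in int8, accumulate in int32, then rescale back to float
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    approx = acc * (scale_a * scale_b)

    print(np.abs(approx - a @ b).max())  # small error vs. the exact float result

Dedicated hardware essentially bakes that multiply-and-accumulate step directly into silicon, which is where the area savings come from.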
What happens if we gradually transition to memory-safe languages for new features, while leaving existing code mostly untouched except for bug fixes?
...
In the final year of our simulation, despite the growth in memory-unsafe code, the number of memory safety vulnerabilities drops significantly, a seemingly counterintuitive result [...]
Why would this be counterintuitive? If you're only touching the memory-unsafe code to fix bugs, it seems obvious that the number of memory-safety bugs will go down.
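(A back-of-the-envelope version of that intuition, with made-up numbers rather than the article's actual model: new vulnerabilities come overwhelmingly from new or recently changed code, so if new feature work shifts to memory-safe languages, new memory-safety vulns fall each year even while total memory-unsafe LOC keeps growing.)

    # toy sketch, not the article's simulation
    unsafe_loc = 1_000_000
    for year in range(1, 7):
        share_of_new_code_still_unsafe = max(0.1, 1.0 - 0.2 * year)
        new_unsafe_loc = int(100_000 * share_of_new_code_still_unsafe)
        unsafe_loc += new_unsafe_loc
        new_vulns = new_unsafe_loc // 1_000  # assume ~1 vuln per kLOC of new unsafe code
        print(f"year {year}: unsafe LOC {unsafe_loc:,}, new memory-safety vulns {new_vulns}")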
A few years ago I was considering Heroku for something new. But then I learned that Heroku Postgres's HA offering used async replication, meaning you could lose minutes of writes in the event that the primary instance failed. That was a dealbreaker.
That was very surprising to me. Most businesses that are willing to pay 2x for an HA database are probably NOT ok with that kind of data loss risk.
(AWS and GCP's HA database offerings use synchronous replication.)
Noticed this too. The master failover is marketed like a strict upgrade, and the "async" part is only in the fine print. Many would actually prefer the downtime over losing data. A user who's experienced with DBs should think to check on this, but still.
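(For anyone wondering what the distinction looks like at the database level: in stock PostgreSQL, replication is asynchronous unless you explicitly name standbys the primary must wait for. The standby name below is just a placeholder.)

    # postgresql.conf on the primary
    synchronous_standby_names = 'FIRST 1 (standby_a)'   # placeholder standby name
    synchronous_commit = on   # commits wait until standby_a has flushed the WAL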
But the article claims it applies to SQL databases as well.
> Are these rules specific to a particular database?
>
> No. These rules apply to almost any SQL or NoSQL database. The rules even apply to the so-called "schemaless" databases.
> Cargo-cult thinking means only looking for, and only accepting, confirming evidence.
This article's definition of cargo-cult thinking seems incorrect. The definition I'm familiar with: when you lack a true understanding of some idea and end up just mimicking the superficial qualities. It's a great metaphor that comes up all the time in software engineering.
For example, seeing a successful system that uses microservices and thinking that switching your system to microservices will make it successful. If you don't understand exactly what the tradeoffs are and why those tradeoffs worked well for the successful system, you're not going to get the result you want.
Maybe the author confused "cargo-cult thinking" with just plain "cult-like thinking"?
I'm not the person you're responding to, but I interpreted their comment as, "doesn't the argument against having protobuf check for required fields also apply to all of protobuf's other checks?"
From the article linked in the post: "The right answer is for applications to do validation as-needed in application-level code. If you want to detect when a client fails to set a particular field, give the field an invalid default value and then check for that value on the server. Low-level infrastructure that doesn’t care about message content should not validate it at all."
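A minimal sketch of what that advice looks like in practice; the message, field, and handler below are hypothetical, not taken from the article:

    // user.proto (proto3): an unset int64 reads back as 0,
    // and 0 is never a valid id in this hypothetical service
    message GetUserRequest {
      int64 user_id = 1;
    }

    # server-side, application-level validation
    def get_user(request):
        if request.user_id == 0:   # the proto3 default doubles as "not set"
            raise ValueError("user_id is required")
        return lookup_user(request.user_id)   # hypothetical helper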
(I agree that "static typing" isn't exactly the right term here. But protobuf dynamic validation allows the programmer to then rely on static types, vs having to dynamically check those properties with hand-written code, so I can see why someone might use that term.)