cakoose's comments

Now I'm curious -- is there a way to do this that avoids downloading any more than strictly necessary?

The command above downloads the whole repo history. You could use --depth=1 to skip the history, but it still downloads the latest version of the entire repo tree.


You could do a blobless or treeless clone: https://github.blog/open-source/git/get-up-to-speed-with-par...

Combined with --depth=1 and the --no-checkout / --sparse-checkout flow that the GP already described.

I just tested on the emacs repo, left column is disk usage of just the `.git` folder inside:

  Shallow clones (depth=1):
  124K: Treeless clone depth=1 with no-checkout
  308K: Blobless clone depth=1 with no-checkout
  12M: Treeless clone depth=1 sparse checkout of "doc" folder
  12M: Blobless clone depth=1 sparse checkout of "doc" folder
  53M: Treeless clone depth=1 non-sparse full checkout
  53M: Blobless clone depth=1 non-sparse full checkout
  53M: Regular clone with depth=1

  Non-shallow clones:
  54M: Treeless clone with no-checkout
  124M: Blobless clone with no-checkout
  65M: Treeless clone sparse checkout of "doc" folder
  135M: Blobless clone sparse checkout of "doc" folder
  107M: Treeless clone with non-sparse full checkout
  177M: Blobless clone with non-sparse full checkout
  653M: Full regular git clone with no flags

Great tech talk covering some of the newer, lesser-known git features: https://www.youtube.com/watch?v=aolI_Rz0ZqY
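
For reference, roughly the kind of commands involved (not the exact invocations behind the numbers above; the GitHub emacs mirror URL is just an example):

  # Blobless + shallow clone, then sparse-checkout only the "doc" folder:
  git clone --filter=blob:none --depth=1 --sparse https://github.com/emacs-mirror/emacs.git
  cd emacs
  git sparse-checkout set doc    # fetches only the blobs under doc/

  # Treeless + shallow clone with no checkout at all (smallest .git):
  git clone --filter=tree:0 --depth=1 --no-checkout https://github.com/emacs-mirror/emacs.git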


git-archive downloads only the strictly necessary files, but it's not universally supported.

https://git-scm.com/docs/git-archive
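
For example, against a server that allows it (many hosts don't, which is the caveat above; the host and repo here are made up):

  # Stream a tarball of just the doc/ folder at HEAD -- no .git at all:
  git archive --remote=git@git.example.com:emacs.git HEAD doc | tar -x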


Why will TypeScript 7 make Node.js irrelevant?

In TypeScript 7, the compiler will be written in Go instead of TS. But the compiler will still produce JS code as its output and so Node.js is still relevant for running that JS code.

Or is there something else about TypeScript 7 that will make Node.js irrelevant?


The output would probably not be Node; it would target the browser. TypeScript is the only real Node project worth a damn in open source.


> When we started 8 years ago, SQL databases were “old fashioned.” NoSQL was the future. Hadoop, MongoDB, Cassandra, InfluxDB – these were the new, exciting NoSQL databases. PostgreSQL was old and boring.

In 2017? I thought the NoSQL hype had subsided by then and everyone was excited about distributed transactions -- Spanner, Cockroach, Fauna, Foundation, etc.


I think this just illustrates the tech bubble we live in. Occasionally we find one that doesn't match ours.


Exactly!

"The future is already here, it's just not very evenly distributed" - William Gibson


I had the same thought. They're off by a few years.


Marketing is going to market.


Yup, the example doesn't make sense for the reason you pointed out.

You could water down the example a bit to make it work:

1. Assume there's some other authentication mechanism for client-server communication, e.g. TLS.

2. The client sends the user ID unencrypted (within TLS) so the server can route, but encrypts the message contents so the server can't read it.

3. The final recipient can validate the message and the user ID.

This saves the client from having to send the user ID twice, once in the ciphertext and once in the clear.
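
A minimal sketch of steps 2 and 3 with an off-the-shelf AEAD (Python's cryptography package; key distribution, nonce handling, and transport framing are all hand-waved):

  import os
  from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

  key = ChaCha20Poly1305.generate_key()  # shared by sender and final recipient
  aead = ChaCha20Poly1305(key)

  user_id = b"user-1234"   # sent in the clear so the server can route on it
  nonce = os.urandom(12)
  ct = aead.encrypt(nonce, b"the actual message", user_id)

  # Final recipient: raises InvalidTag if either the ciphertext or the
  # cleartext user_id was tampered with along the way.
  msg = aead.decrypt(nonce, ct, user_id)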

But another more interesting use case is when you don't even send the associated data: https://news.ycombinator.com/item?id=43827342


Offloading only makes sense if there are other advantages, e.g. speed or power.

Without those, wouldn't it be better to spend the NPU's silicon budget on more CPU?


For PC CPUs, there are already so many watts per square millimeter that many top-tier parts of recent generations run thermally throttled 24/7; more cooling improves performance rather than temperatures, because it lets more of the cores run at 'full' or 'boost' speed. This kills their profitable market segmentation.

In this environment it makes some sense to use more efficient RISC cores, and to spread out cores a bit with dedicated bits that either aren't going to get used all the time, or that are going to be used at lower power draws, and combining cores with better on-die memory availability (extreme L2/L3 caches) and other features. Apple even has some silicon in the power section left as empty space for thermal reasons.

Emily (formerly Anthony) on LTT had a piece on the Apple CPUs that pointed out some of the inherent advantages of the big-chip ARM SOC versus the x86 motherboard-daughterboard arrangement as we start to hit Moore's Wall. https://www.youtube.com/watch?v=LFQ3LkVF5sM


If you know that you need to offload matmuls, then building matmul hardware is more area efficient than adding an entire extra CPU. Various intermediate points exist along that spectrum, e.g. Cell's SPUs.


More CPU means siphoning off more of the power budget on mobile devices. The theoretical value of NPUs is power efficiency on a limited budget.


Not really. Getting extra CPU performance likely means more cores, or some other general-purpose compute silicon. That stuff tends to be quite big, simply because it's so flexible.

NPUs focus on one specific type of computation, matrix multiplication, and usually with low-precision integers, because that's all a neural net needs. That vast reduction in flexibility means you can take lots of shortcuts in your design, allowing you to cram more compute into a smaller footprint.

If you look at the M1 chip[1], you can see the entire 16-core Neural Engine has a footprint about the size of 4 performance cores (excluding their caches). It's not a perfect comparison without numbers on what the performance cores achieve in ops/second vs the Neural Engine, but it seems reasonable to bet that the Neural Engine can handily outperform the performance core complex on matmul operations.

[1] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


What happens if we gradually transition to memory-safe languages for new features, while leaving existing code mostly untouched except for bug fixes?

...

In the final year of our simulation, despite the growth in memory-unsafe code, the number of memory safety vulnerabilities drops significantly, a seemingly counterintuitive result [...]

Why would this be counterintuitive? If you're only touching the memory-unsafe code to fix bugs, it seems obvious that the number of memory-safety bugs will go down.

Am I missing something?


The counterintuitive part is that there is now more code written in memory-unsafe languages than there was before, even if it's just bug fixing.

It's not as if bug fixes never introduce new memory bugs, but apparently that rate is much lower for bug fixes than for brand-new code.


I think the standard assumption would be that you need to start replacing older code with memory-safe code to see improvements.

Instead they’ve shown that using memory-safe languages for new code only is enough for the total bug count to drop.


A few years ago I was considering Heroku for something new. But then I learned that Heroku Postgres's HA offering used async replication, meaning you could lose minutes of writes in the event that the primary instance failed. That was a dealbreaker.

That was very surprising to me. Most businesses that are willing to pay 2x for an HA database are probably NOT ok with that kind of data loss risk.

(AWS and GCP's HA database offerings use synchronous replication.)
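
For comparison, synchronous replication in plain Postgres is roughly this in postgresql.conf on the primary ("standby1" is just an example application_name for the standby):

  synchronous_standby_names = 'FIRST 1 (standby1)'
  synchronous_commit = on   # commits don't return until the standby has flushed the WAL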


Noticed this too. The master failover is marketed as a strict upgrade, and the "async" part is only in the fine print. Many would actually prefer the downtime over losing data. A user who's experienced with DBs should think to check on this, but still.


But the article claims it applies to SQL databases as well.

> Are these rules specific to a particular database?
>
> No. These rules apply to almost any SQL or NoSQL database. The rules even apply to the so-called "schemaless" databases.


> Cargo-cult thinking means only looking for, and only accepting, confirming evidence.

This article's definition of cargo-cult thinking seems incorrect. The definition I'm familiar with: when you lack a true understanding of some idea and end up just mimicking the superficial qualities. It's a great metaphor that comes up all the time in software engineering.

For example, seeing a successful system that uses microservices and thinking that switching your system to microservices will make it successful. If you don't understand exactly what the tradeoffs are and why those tradeoffs worked well for the successful system, you're not going to get the result you want.

Maybe the author confused "cargo-cult thinking" with just plain "cult-like thinking"?


Yeah, that's closer to the definition I'm familiar with. It's a species of "correlation does not imply causation".

What the author is talking about here seems to be how to recognize and avoid confirmation bias.


I'm not the person you're responding to, but I interpreted their comment as, "doesn't the argument against having protobuf check for required fields also apply to all of protobuf's other checks?"

From the linked post: "The right answer is for applications to do validation as-needed in application-level code. If you want to detect when a client fails to set a particular field, give the field an invalid default value and then check for that value on the server. Low-level infrastructure that doesn’t care about message content should not validate it at all."

(I agree that "static typing" isn't exactly the right term here. But protobuf dynamic validation allows the programmer to then rely on static types, vs having to dynamically check those properties with hand-written code, so I can see why someone might use that term.)
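
A toy sketch of that approach (the Request message and user_id field are made-up names; in proto3 an unset integer field reads back as 0, so 0 doubles as the "invalid" sentinel):

  def handle(request):
      # Application-level check instead of a protobuf-enforced "required":
      if request.user_id == 0:
          raise ValueError("user_id must be set")
      ...  # proceed with a validated request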

