Hacker News | DuckConference's comments

Tariffs/inflation/everything has raised unit costs to the point that they're probably close to running at a loss again at times on the latest-gen consoles.


One possibility I've seen raised is that slower GI movement -> slower alcohol uptake -> not getting as much of a "hit" from drinking, since the effects come on more slowly.


In my personal experience, I do still get the same hit from drinking–I feel a buzz almost immediately, same as before. Rather, I just don't feel the "urge". I've never been a heavy drinker, but I would occasionally crave a beer or two, particularly at the end of a work week. Also, drinking on a GLP1 (I've been on both Tirzepatide and Semaglutide) absolutely wrecks my GI tract for 24-48 hours. Usually with an onset of maybe 8 hours, I get horrible heartburn, moderate to severe nausea, and even mild diarrhea.


I don't think it's any one thing. People like different kinds of alcohol, for different reasons. For someone whose alcohol cravings are based on the sugar in their preferred alcoholic drink, it isn't surprising that a medication that lowers their desire to ingest sugar also lowers their desire to drink (their chosen sugary drink). Naturally this doesn't cover all alcohol drinkers, but it can't cover none of them either.


They're saying a net 14,000 reduction after hiring, so it's possible that's consistent with the 30k total from the earlier rumours.


Their performance claims are quite a bit ahead of the distributed android build systems that I've used, I'm curious what the secret sauce is.


Is it going to be anything more than just a fancier ccache?


It’s definitely not ccache, since they cover that under "compiler wrapper". This works for Android because a good chunk of the tree is probably dead code for any single build (device drivers and whatnot). It’s unclear how they benchmark; they probably include checkout time of the codebase, which artificially inflates the cost of the build (you only check out once). It’s a virtual filesystem like what Facebook has open sourced, although they claim to also do build caching without needing a dedicated build system that is aware of it, and that part feels very novel.
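For intuition, here's a minimal sketch (my own guess at the approach, not their implementation; every name below is hypothetical) of filesystem-level step caching: key each build step on its command line plus hashes of the files the filesystem saw it read, and replay the recorded outputs on a hit.

  import hashlib, json, os, subprocess

  CACHE_DIR = "/tmp/stepcache"  # hypothetical local cache location

  def file_digest(path):
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  def step_key(cmd, inputs_read):
      # A tracing filesystem would report inputs_read; here the caller passes it in.
      blob = json.dumps({"cmd": cmd,
                         "inputs": {p: file_digest(p) for p in sorted(inputs_read)}},
                        sort_keys=True)
      return hashlib.sha256(blob.encode()).hexdigest()

  def run_cached(cmd, inputs_read, outputs):
      entry = os.path.join(CACHE_DIR, step_key(cmd, inputs_read))
      if os.path.isdir(entry):
          for out in outputs:  # cache hit: replay stored outputs, skip the tool
              with open(os.path.join(entry, os.path.basename(out)), "rb") as s, open(out, "wb") as d:
                  d.write(s.read())
          return
      subprocess.run(cmd, check=True)  # cache miss: run the real build step
      os.makedirs(entry, exist_ok=True)
      for out in outputs:  # store outputs for future replays
          with open(out, "rb") as s, open(os.path.join(entry, os.path.basename(out)), "wb") as d:
              d.write(s.read())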


Re: including checkout, it’s extremely unlikely. Source: I worked on Android for 7 years. A 2 hr build time tracks with post-checkout build time on a 128-core AMD machine, and checkout was O(hour), which would leave only an hour for the build if checkout were included.


Obviously this is the best-case, hyper-optimized scenario and we were careful not to inflate the numbers.

The machine running SourceFS was a c4d-standard-16, and if I remember correctly, the results were very similar on an equivalent 8-vCPU setup.

As mentioned in the blog post, the results were 51 seconds for a full Android 16 checkout (repo init + repo sync) and ~15 minutes for a clean build (make) of the same codebase. Note that this run was mostly replay - over 99% of the build steps were served from cache.


Do you have any technical blog post on how the filesystem intercepts and caches build steps? This seems like a non-obvious development. The blog alludes to a sandbox step, which I’m assuming is for establishing the graph somehow, but it’s not obvious where the pitfalls are (eg what if I install some system library - does this interception recognize when system libraries or tools have changed? What if the build description changes slightly? How does the invalidation work?). Basically, it’s a bold claim to be able to deliver Blaze-like features without requiring any changes to the build system.


> This works for Android because a good chunk of the tree is probably dead code for a single build (device drivers and whatnot)

Device drivers would exist in kernel sources, not the AOSP tree.


As of a few iPadOS versions ago, the higher-performance models can use swap space.


One of the famous small Canadian mining companies that went under was named something like Bre-X; somehow a lot of members of the general public had shares in it, so it was a big scandal on the news when it collapsed. Also, as it was unraveling, a whistleblower at the company "fell" from a helicopter in Indonesia. I don't recall if anyone was ever charged or convicted for that.

EDIT: Oh damn, it was far sketchier than I recalled, and he wasn't a whistleblower. From Wikipedia:

The fraud began to unravel rapidly beginning on March 19, 1997, when Bre-X geologist Michael de Guzman reportedly died of suicide by jumping from a helicopter in Indonesia.[11][12] A body was found four days later in the jungle, missing the hands and feet, "surgically removed".[13] In addition, the body was reportedly mostly eaten by animals.[14] According to journalist John McBeth, a body had gone missing from the morgue of the town from which the helicopter flew. The remains of "de Guzman" were found only 400 metres from a logging road. No one saw the body except another Filipino geologist who claimed it was de Guzman. One of the five women who considered themselves to be his wife was receiving monetary payments from somebody long after the supposed death of de Guzman.[13]


BBC has a good audio series on this: https://www.bbc.co.uk/programmes/w13xtvt4

Granted, it's a bit drawn out, but certainly is a memorable story that perfectly encapsulates speculative behavior at every level.


Thanks to you I'm already 2 episodes in, and they say Bre-X was traded initially on the Alberta SE and then in Toronto?


I was born and raised in Vancouver, and have deep personal ties to the junior mining industry - I worked for two decades as an exploration geologist, and my Dad has been in the industry since the early 1970s.

Bre-X was brutal. I was barely into my teens, but I recall - and have spoken at length with my Dad and many others - about how he was out of work for several years after the scandal. Investment completely dried up. Industry recovery took years, and was accompanied by the implementation [1] of fairly stringent disclosure rules, which define reporting standards to this day. Nonetheless, scams are still commonplace, and pretty much everyone I know has a story or two of a shifty promoter pulling the rug out.

Mineral exploration is a tough business. You can't just sudo apt install a drill rig! The logistics and expense of even small exploration programs are a bit insane. Crews head out to some of the most remote corners the world has to offer, moving hundreds of thousands to millions of dollars of heavy equipment, fuel, food, and camp gear on to site for just a few months of near-constant work, then moving (hopefully most of) it out again. Hundreds of tons of rock and soil samples are collected, by drilling, by walking the ground, by trenching; these samples are shipped out to processing and assay labs. Some properties have the benefit of road access - deep-wilderness, often decommissioned logging roads - but many are accessible by helicopter only. It's an adventure, but it is also very demanding and taxing. There is very little year-to-year consistency, even in bull markets.

Sixty to seventy years ago, the majors - mining companies with actual mines and annual revenue - did the lion's share of exploration work. Over the years, however, the majors have divested almost entirely from risky grassroots exploration, leaving it almost entirely up to junior explorers who must raise their capital from investors.

There are lots of fascinating tales:

- How to Get Rich in a Gold Rush: https://youtu.be/yW5iGLLgzRc?si=Pk_9eZF0vBEjF2f4 two-part youtube documentary on the VSE

- Gold: https://en.wikipedia.org/wiki/Gold_(2016_film) great movie starring Matthew McConaughey, loosely based on the Bre-X scandal

- The Big Score: https://www.goodreads.com/book/show/2370656 excellent book that covers the story of the Voisey's Bay discovery in Labrador

- Fire Into Ice: https://www.goodreads.com/book/show/1166624.Fire_into_Ice_Ch...

- Barren Lands: https://www.goodreads.com/book/show/22322947-barren-lands

[1] https://www.cim.org/news/2019/how-cim-helped-an-industry-roc...


Truth is stranger than fiction…


Pretty sure Baumol has like 80% of the blame here.


Not for drugs or insurers though.


Switzerland's extreme wealth makes it a bit of an outlier though; other European countries are probably a fairer comparison for most places.


I would argue having functioning public transport is a must to generate extreme wealth.

I travel all across Europe for work, and only a few places have public transport that functions as well as Zürich's. Stockholm city center, that's about it.

I am not from Switzerland.


Great, I always wanted the founder of Roblox to tell me to go keto /s


If everyone is on keto and has keto breath, that is lots more people alone and playing Roblox. My girlfriend tried it for a very short time and I couldn't handle her breath smelling like a nail salon.


They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.

If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.


Per core, Apple’s Performance cores are no bigger than AMD’s Zen cores. So it’s a myth that they’re only fast and efficient because they are big.

What makes Apple silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU alongside an x86 chip, the total would be the same size as or bigger than an M-series chip.

You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.

Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.


Is it not true that the instruction decoder is always active on x86, and is quite complex?

Such a decoder is vastly less sophisticated with AArch64.

That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.

How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?

Edit: think about it... R4000 was the first 64-bit MIPS in 1991. AMD64 was introduced in 2000.

AArch64 emerged in 2011, and in taking their time, the designers avoided the mistakes made by others.


There's no AArch32 or Thumb support (A32/T32) on M-series chips. AArch64 (technically A64) is the only supported instruction set. Fun fact: this makes it impossible to run Mario Kart 8 via virtualization on Macs without software translation, since it's A32.

How much that does for efficiency I can't say, but I imagine it helps, especially given just how damn easy it is to decode.
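A toy sketch of the decode-width point (not how real decoders are built; length_of below is a hypothetical placeholder): with fixed 4-byte instructions, the boundaries in a fetch block are known up front, while a variable-length encoding has to establish each instruction's length before the next boundary is known (real x86 decoders mitigate this with predecode hints and speculative length decoding).

  def a64_boundaries(fetch_block):
      # Fixed 4-byte encoding: boundaries are known without inspecting the bytes,
      # so a wide decoder can carve the block into lanes and decode them in parallel.
      return list(range(0, len(fetch_block), 4))

  def x86_boundaries(fetch_block, length_of):
      # Variable-length encoding: each length depends on prefixes/opcode/ModRM,
      # so boundaries are found sequentially. length_of is a hypothetical helper
      # standing in for a partial decode of one instruction.
      offsets, pos = [], 0
      while pos < len(fetch_block):
          offsets.append(pos)
          pos += length_of(fetch_block, pos)
      return offsets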


It actually doesn't make much difference: https://chipsandcheese.com/i/138977378/decoder-differences-a...


I had not realized that Apple did not implement any of the 32-bit ARM environment, but that cuts the legs out of this argument in the article:

"In Anandtech’s interview, Jim Keller noted that both x86 and ARM both added features over time as software demands evolved. Both got cleaned up a bit when they went 64-bit, but remain old instruction sets that have seen years of iteration."

I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

Intel really couldn't resist adding instructions with each new chip (MMX, PAE for 32-bit, many more on this shorthand list that I don't know), which are now mostly baggage.


> I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

Legacy floating-point and SIMD instructions exposed by the ISA (and extensions to it) don't have any bearing on how the hardware works internally.

Additionally, AMD processors haven't supported 3DNow! in over a decade -- K10 was the last processor family to support it.


80-bit x87 has no bearing on SSE implementation.

Right. Not.


Oh wow, I need to dig way deeper into this but wonderful resource - thanks!


> Despite being ~1.5x bigger than the M4 Pro

Where are you getting M4 die sizes from?

It would hardly be surprising, given that the Max+ 395 has more (and on average faster) cores, fabbed on 5nm versus the M4's 3nm. Die size is mostly GPU though.

Looking at some benchmarks:

> slightly more MT.

AMD's multicore passmark score is more than 40% higher.

https://www.cpubenchmark.net/compare/6345vs6403/Apple-M4-Pro...

> worse efficiency

The AMD is an older fab process and does not have P/E cores. What are you measuring?

> worse ST performance

The P/E design choice gives different trade-offs, e.g. AMD has much higher average single-core perf.

> worse GPU performance

The AMD GPU:

14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.

19% higher 3D Mark

34% higher GeekBench 6 OpenCL

Although a much crappier Blender score. I wonder what that's about.

https://nanoreview.net/en/gpu-compare/radeon-8060s-vs-apple-...


  Where are you getting M4 die sizes from?
M1 Pro is ~250mm2. M4 Pro likely increased in size a bit. So I estimated 300mm2. There are no official measurements but should be directionally correct.

  AMD's multicore passmark score is more than 40% higher.
It's an out of date benchmark that not even AMD endorses and the industry does not use. Meanwhile, AMD officially endorses Cinebench 2024 and Geekbench. Let's use those.

   The AMD is an older fab process and does not have P/E cores. What are you measuring?
Efficiency. The fab process does not account for the 3.65x efficiency deficit: N4 to N3 is roughly ~20-25% more efficient at the same speed, so even granting the full 25%, a ~2.9x gap remains.

  The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.
Citation needed. Furthermore, macOS uses P cores for all the important tasks and E cores for background tasks. I fail to see how a higher average ST score, even if AMD has one, would translate to a better experience for users.

  14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.
TFLOPs are not the same between architectures.

  19% higher 3D Mark
Equal in 3DMark Wildlife, loses vs M4 Pro in Blender.

  34% higher GeekBench 6 OpenCL
OpenCL has long been deprecated on macOS. 105727 is the score for Metal, which is supported by macOS. 15% faster for M4 Pro.

The GPUs themselves are roughly equal. However, Strix Halo is still a bigger SoC.


> TFLOPs are not the same between architectures.

Shouldn't they be the same if we are speaking about the same precision? For example, [0] shows M4 Max 17 TFLOPS FP32 vs MAX+ 395 29.7 TFLOPS FP32 - not sure what exact operation was measured, but at least it should be the same operation. Hard to make definitive statements without access to both machines.

[0] https://www.cpu-monkey.com/en/compare_cpu-apple_m4_max_16_cp...


Apple doesn't even disclose TFLOPS for the M4 Max, so no clue where that website got the numbers from.

TFLOPS can't be compared directly between architectures or generations. For example, Nvidia often quotes sparsity TFLOPS, which doubles the dense TFLOPS previously reported. I think AMD probably does the same for consumer GPUs.

Another example is Radeon RX Vega 64 which had 12.7 TFLOPS FP32. Yet, Radeon RX 5700 XT with just 9.8 TFLOPS FP32 absolutely destroyed it in gaming.
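As a rough illustration (lane counts and clocks below are ballpark assumptions, not official specs), the headline FP32 figure is basically lanes x 2 FLOPs per FMA x clock, so it hinges on how the lanes are counted; dual-issue or sparsity conventions can double it:

  def peak_fp32_tflops(fp32_lanes, clock_ghz):
      # 2 FLOPs per fused multiply-add, per lane, per cycle
      return fp32_lanes * 2 * clock_ghz / 1000.0

  # Assumed configs (ballpark): a 40-CU RDNA 3.5 part at ~2.9 GHz,
  # and a 20-core Apple GPU (128 ALUs per core) at ~1.8 GHz.
  print(peak_fp32_tflops(40 * 64, 2.9))   # ~14.8
  print(peak_fp32_tflops(20 * 128, 1.8))  # ~9.2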


What a waste of time.

"directionally correct"... so you don't know and made up some numbers? Great.

AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.


  "directionally correct"... so you don't know and made up some numbers? Great.
I never said it was exactly that size. Apple keeps the sizes of their base, Pro, and Max chips fairly consistent over generations.

Welcome to the world of chip discussions. I've never taken apart an M4 Pro computer and measured the die myself, and it appears no one on the internet has. However, we can infer a lot based on previously known facts; in this case, we know the M1 Pro's die size is around 250mm2.

  AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.
Geekbench is the main benchmark AMD tends to use: https://videocardz.com/newz/amd-ryzen-5-7600x-has-already-be...

The reason is because Geekbench correlates highly with SPEC, which is the industry standard.


Your source is an article based on someone finding a Geekbench result for a just-released CPU, and you somehow try to say it's from AMD itself and it's an endorsed benchmark, huh.


Those are AMD's marketing slides.


Their "main benchmark"? Stop making things up. It's no more than tragic fanboy addled fraud at this point.

That three-year-old press release refers to SINGLE CORE Geekbench, not the defective multicore version that doesn't scale with core counts. Given AMD's main USP is core counts, it would be an... unusual choice.

AMD marketing uses every other product under the sun too (no doubt whatever gives the better-looking numbers)... including Passmark, e.g. it's on this Strix Halo page:

https://www.amd.com/en/products/processors/ai-pc-portfolio-l...

So I guess that means Passmark is "endorsed" by AMD too eh? Neat.


The industry has moved past Passmark because it does not correlate to actual real world performance.

The standard is SPEC, which correlates with Geekbench.

https://medium.com/silicon-reimagined/performance-delivered-...

Every time there is a discussion on Apple Silicon, some uninformed person always brings up Passmark, which is completely outdated.


Enough. You don't know what you are talking about.

What's with posting a 5-year-old Medium article about a different version of Geekbench? Geekbench 5 had different multicore scaling, so if you want to argue that version was so great, then you're also arguing against Geekbench 6, because they don't even match.

https://www.servethehome.com/a-reminder-that-geekbench-6-is-...

"AMD Ryzen Threadripper 3995WX, a huge 64 core/ 128 thread part, was performing at only 3-4x the rate of an Intel D-1718T quad-core part, even despite the fact it had 16x the core count and lots of other features."

"With the transition from Geekbench 5 to Geekbench 6, the focus of the Primate Labs team shifted to smaller CPUs"


GB6 measures MT the way most consumer applications use MT, while GB5 was embarrassingly parallel. GB6 reflects real-world usage more.

