Micron Samples 256 GB DDR5-8800 MCR DIMMs: Modules for Servers (anandtech.com)
99 points by mfiguiere on March 22, 2024 | 27 comments


Wow. 8800 is fast DDR5. I just ordered some Dell PCs with DDR5-4800.


It's kind of like putting two channels on one DIMM, so it's exactly double the speed. This DIMM interleaves DDR5-4400 chips.
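
A rough sketch of that arithmetic (the per-rank rate is from the article; the rest is assumed for illustration):

    # Back-of-the-envelope MCR DIMM math (Python), assuming the buffer
    # fetches two ranks of DDR5-4400 simultaneously and multiplexes them
    # onto the host bus.

    rank_rate_mt_s = 4400     # each rank runs at DDR5-4400 (MT/s)
    ranks_in_parallel = 2     # the MCR buffer reads both ranks at once

    host_rate_mt_s = rank_rate_mt_s * ranks_in_parallel
    print(host_rate_mt_s)     # 8800 MT/s seen by the memory controller

    # Peak bandwidth per 64-bit (8-byte) data channel at that rate:
    print(host_rate_mt_s * 1e6 * 8 / 1e9, "GB/s")   # ~70.4 GB/s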


If I'm doing my math right, each RAM chip only has two data pins attached to it. So you could cut the number of RAM chips in half (or even fewer) and still keep the same throughput.

In other words, these same ultra-fast buffer chips could be added to an otherwise normal module. Or you could put the transceivers from them into normal RAM chips. Having so many chips is for capacity reasons, not speed reasons. There's no need to have more "channels".
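
For what it's worth, the two-data-pins-per-chip figure falls out of arithmetic like this (a sketch assuming x4 DRAM chips and an 80-bit RDIMM-style bus; the article's 72-bit figure gives the same per-chip answer):

    # Hypothetical per-chip pin math (Python): an 80-bit host bus shared
    # by two ranks of x4 chips that the MCR buffer drives in parallel.

    host_data_bits = 80                    # 2 subchannels x (32 data + 8 ECC)
    chips_per_rank = host_data_bits // 4   # x4 chips -> 20 chips per rank
    ranks = 2                              # both ranks active behind the buffer

    total_chips = chips_per_rank * ranks   # 40 chips
    print(host_data_bits / total_chips)    # 2.0 host-bus bits per chip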


How do feature size and scaling compare between memory and logic? It looks like leading-edge memory is made by three companies - Samsung, SK Hynix, and Micron - with no TSMC. Do they use EUV? Do they use ASML machines?


DRAM/NAND processes are optimized for different things than logic, so there isn't much cross application. My understanding is that it would be hard to make a general compute chip with a DRAM process due to a low number of metal layers, transistor types, etc. Micron [1] and Samsung have both investigated doing massively parallel compute in the memory cell, but the technology never panned out.

Regarding EUV, according to Micron's most recent quarterly report (March 20th) [2]:

"We continue to mature our production capability with extreme ultraviolet lithography (EUV), and have achieved equivalent yield and quality on our 1α as well as 1ß nodes between EUV and non-EUV flows. We have begun 1γ (1-gamma) DRAM pilot production using EUV and are on track for volume production in calendar 2025."

[1] https://investors.micron.com/news-releases/news-release-deta...

[2] https://investors.micron.com/static-files/1a8d6c22-3b89-4806...


What about Apple's M-series chips, which have on-die RAM? Is that RAM significantly more expensive per GB due to the more expensive process?

(It's certainly exorbitantly expensive for retail consumers at $200 for an 8 GB RAM upgrade on a MacBook!)


Apple does not have on-die RAM. The SoC is a normal logic die and the RAM is regular DRAM.


Ah - I see, the DRAM is literally BGA'd right next to the die. So that $200 upcharge is mostly profit... Thank you!


It's called "package on package". The RAM is a different chip; however, it's located very close to the CPU chip and both are under a single cover. The end result is a single "package".

I think that GPUs use a similar approach.


AMD Vega used HBM next to the GPU die, but most GPU manufacturers are now back to discrete chips on the GPU circuit board.



It's hard to tell, because DRAM vendors talk about things like "1x nm" or "1a nm", but it sounds like they are still above 10 nm, which is fairly far behind logic processes. DRAM typically sells for much less than logic, so they may not be able to invest as much as TSMC.


With error correction?


When's the last time anyone made non-ECC registered or buffered memory modules?


I don't think ECC memory is standard on consumer PCs yet. And although all DDR5 memory includes some built-in error correction, manufacturers suggest it's not as robust (https://www.corsair.com/us/en/explorer/diy-builder/memory/is...).


Consumer PCs aren't relevant here because they only use unbuffered DIMMs and cannot accept RDIMMs, LRDIMMs, MCRDIMMs, etc., which is part of why consumer desktops are limited to much lower memory capacities than servers. (The other limitation being the number of memory channels.)


Intel is to blame for this status quo. AMD is fixing it; hopefully Intel will start shipping ECC support for its consumer products soon...


Are you referring to the status quo of consumer PCs not supporting ECC memory, or the status quo of consumer PCs not supporting buffered memory of various flavors? Because AMD's not doing anything about the latter, and DDR5 made it even harder to support both buffered and unbuffered modules on the same platform.


It's little better than speculation, but the photo shown suggests 9 bits.


5 columns of chips on each side indicates an extra 8 bits for each 32-bit subchannel, which is pretty common for DDR5 RDIMMs. But the article mentions a 72-bit interface (the standard for DDR5 ECC UDIMMs and earlier ECC DIMM standards) rather than an 80-bit interface, so there's at least some cause for doubt about which level of ECC these modules provide.
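
A sketch of that chip-count reading, assuming x8 chips and one subchannel per side of the board:

    # Inferring bus width from the photo (Python); x8 chips are an assumption.
    chip_width = 8              # bits per chip (x8 parts)
    columns_per_side = 5        # from the module photo

    subchannel_bits = columns_per_side * chip_width   # 40 bits
    print(subchannel_bits - 32)   # 8 ECC bits per 32-bit subchannel
    print(2 * subchannel_bits)    # 80-bit DIMM, typical for DDR5 RDIMMs

    # versus the 72-bit interface the article mentions:
    print(2 * (32 + 4))           # 72 -> only 4 ECC bits per subchannel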


> a 72-bit interface (the standard for DDR5 ECC UDIMMs

How does that work, when you only have 4 bits of ECC per 32 bits of data? Is the calculation done in two-transfer bursts?

(Apparently EC4 and EC8 are useful keywords for this.)


DDR5 uses burst length 16, so think of it as 512 data bits with 64 ECC bits out of 576 total.
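
Spelled out (assuming the 72-bit EC4 layout discussed upthread):

    # DDR5 ECC arithmetic per 36-bit subchannel, burst length 16 (Python).
    data_bits_per_beat = 32
    ecc_bits_per_beat = 4
    burst_length = 16

    print(data_bits_per_beat * burst_length)   # 512 data bits per burst
    print(ecc_bits_per_beat * burst_length)    # 64 ECC bits per burst
    print((data_bits_per_beat + ecc_bits_per_beat) * burst_length)  # 576 total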


Sure it is, but that doesn't explain how it works. Previous generations made it simple: they did SECDED ECC (or sometimes chipkill) one transfer at a time, no matter what the transaction size was. DDR5 EC4 can't do that, so what does it do?

And does EC8 do SECDED one transfer at a time, or does it do something more resilient?

(And sure technically it's up to the memory controller to implement, but are different controllers doing different things with DDR5?)


72-bit DDR5 is nonstandard to begin with, and I assume AMD and Intel are using undocumented ECC codes. There might be hints in patents.


AIUI, I don't think folks will be able to read the ECC values from these chips. They may/should silently fix issues, but you won't be able to monitor the chips like you can with regular ECC RAM using tools like ras. So they may help with bit flips but won't help with identifying bad sticks, I think.


The ECC used by buffered modules is the "regular ECC RAM": the memory bus is widened (and the chip count increased) to carry the extra ECC information between the DRAM controller on the CPU and the memory modules. The ECC calculations and corrections are done on the CPU, and errors are surfaced to the firmware and/or OS, giving ECC protection not just for data at rest but also in transit. But the actual ECC bits computed, transferred, and stored for each word of DRAM are never directly exposed for software to access.

There is also on-die ECC used by all recent DRAM as a consequence of shrinking memory cell sizes and spacing in the latest DRAM fabrication processes. That on-die ECC is what's unfortunately invisible to the host system, and only really useful for protecting data at rest, not in transit. The existence of on-die ECC has most commonly been publicized in the context of DDR5, but really has nothing to do with what DRAM interface standard is used because the on-die ECC happens entirely on each individual die.
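
To make the distinction concrete, a quick sketch of the host-visible bus widths (standard figures; side-band ECC widens the bus, on-die ECC does not):

    # Side-band ECC changes the host-visible bus width; on-die ECC never does.
    configs = {
        "DDR4 ECC DIMM": 64 + 8,               # 72-bit, SECDED per beat
        "DDR5 ECC UDIMM (EC4)": 2 * (32 + 4),  # 72-bit
        "DDR5 RDIMM (EC8)": 2 * (32 + 8),      # 80-bit
    }
    for name, width in configs.items():
        print(f"{name}: {width}-bit bus")
    # On-die ECC stores its check bits inside each DRAM die and is
    # invisible on any of these buses.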


> only really useful for protecting data at rest, not in transit

Which makes it more annoying that normal DDR doesn't also have the ability to add link-ECC like LPDDR does.



