Device tree doesn't really have an embedded focus per se. It comes from Open Firmware on SPARC and PowerPC workstations.
And ACPI doesn't really have a standard root bus: it's a soup of descriptor tables and virtual-machine blobs (AML bytecode) that you have to run and trust, along with veritable mountains of kernel quirk patches for that bytecode to fix firmware issues.
IMO Device Tree is the right way to go for nearly all platforms.
I'm going to try to describe something I only half understand in the hopes that someone who does fully understand it will come by and provide a better description.
For normal PC hardware, you have a SATA controller, the device company commits a driver for it to the Linux kernel tree, and thereafter every subsequent version of Linux can use the SATA controller in any device with that kind of SATA controller. If the device company doesn't provide a driver but the hardware is popular, somebody else reverse engineers the chip and eventually the same result obtains. Likewise for the network controller, the GPU and so on.
Typical ARM devices like cellphones are beleaguered by some kind of shameful omnishambles whereby that doesn't work. The device maker provides a hideous binary blob with the device, it only works with that specific kernel version and everything is terrible. The exact source of the tragedy is the part I'm not quite clear on.
But making whatever that is not apply to RISC-V boards would be highly satisfactory.
My take is that this is the effect of the requirements of the companies producing hardware and is entirely unrelated to the CPU architecture. On desktop and server, consumers have a very strong demand that off-the-shelf OSes Just Work (so they can install stock RedHat or Windows), and so there is pressure on hw manufacturers to create and follow standards. Breaking away from the consensus standards is penalized, not rewarded. You can see this happening as Arm moved into the server space, too -- server Arm is much less heterogeneous than embedded Arm.

Conversely, in embedded devices there is little or no pressure to standardize, because often the system software will be a custom build for that device anyway, and end-users aren't expected to try to install a new OS on it. In this world, differences in hardware tend to be rewarded rather than penalized -- the funky nonstandard feature in your SoC may be what persuades the hardware designer to pick you rather than a competitor, implementing something in a weird way can shave a bit off power consumption or just save time in the design process, and so on. And because there's no requirement to let the end user install a new OS, the temptation to save development time by not trying to upstream kernel changes is often overwhelming.
The difficulty with 'dev boards' like the RPi is that for economic reasons they're made with SoCs from the high-volume markets of the embedded world but sold to people who want to use them like the standardized hw of the server world. This mismatch tends to be annoying and inconvenient.
Anyway, my take is that for RISC-V the dynamics will be exactly the same -- in the embedded world there will be a profusion of different hardware and a lot of binary blobs and non-upstreamed drivers; in the server world things will be nicer; and dev boards will be more like the embedded world than you would like.
(Also, x86 is really unusual in having such a uniform every-machine-looks-the-same ecosystem; this has happened by historical accident as much as anything else. Of course most people only have experience with x86, but it might help to try to not think of absolute-x86-monoculture as 'normal' and wider-variety as 'weird', when it's the other way around :-))
It "doesn't work" on ARM simply due to the extreme heterogeneity. To ask for otherwise would be asking for AMD drivers to work for an Nvidia graphics card.
Additionally, PCs do in fact have tons of glue and support systems that match how ARM-style SoCs work. They paper over it with ACPI, which is built around a VM that has to run in the kernel and is basically a giant binary blob, in contrast to device tree's static description of the components and how they're connected.
> It "doesn't work" on ARM simply due to the extreme heterogeneity. To ask for otherwise would be asking for AMD drivers to work for an Nvidia graphics card.
If what you're saying is that on ARM there are 15,000 different kinds of USB controller that all need their own driver, that's one thing. But if all you're saying is that there are 25 different USB controllers and 25 different drive controllers and 25 different network controllers and that means you get 15,000 different possible combinations, the fact that every combination needs its own separate drivers is the bug. Even if various controllers are combined into one SoC.
> They paper over it with ACPI, which is built around a VM that has to run in the kernel and is basically a giant binary blob, in contrast to device tree's static description of the components and how they're connected.
It seems to be doing a useful thing in abstracting away the various binary blobs into a stable interface that allows them to continue operating with newer kernel versions.
Getting rid of the binary blobs entirely is a separate fight, presumably related to getting open source firmware (rather than drivers).
It’s not a matter of combinatorial explosion or binary blobs...
On many non-PC platforms, you’re given a thousand “pins” to which you’re free to connect whatever circuits you conceive for your product, be it green and red buttons, or backlight control circuits, or temperature sensing inputs, or SATA controllers, or a 4-bit 74-series counter repurposed to switch pin 567 between SATA controller configuration interface output and battery temperature gauge input and laser ranging sensor power modulation output and self destruction device detonation cord output.
There’s no way, or even motivation, to describe those board-specific miscellanies in such a way that Linux kernel drivers can dynamically consume them and adapt, or decide how to behave.
This is not an issue for the x86 PC platform, because every x86/x64 PC is either a 100% clone of the IBM 5170 “PC/AT” that runs IBM DOS 5.0 or Microsoft MS-DOS 5.0 binaries without ever catching fire, or a Microsoft Windows Logo Program certified product that boots into the Windows of the same era from a binary installer disc/USB key.
> There’s no way, or even motivation, to describe those board-specific miscellanies in such a way that Linux kernel drivers can dynamically consume them and adapt, or decide how to behave.
It doesn't seem like an intractable problem to standardize and document this sort of thing. Once you know that pin 45 is connected to a model ABC123 backlight controller, you know to communicate with it on pin 45 using the ABC123 backlight controller driver. If the same pin is used for multiple outputs, no problem, standardize the method for switching between outputs and describe which pins are switched to which devices using which control pins in a machine-readable table.
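For what it's worth, this machine-readable table is roughly what a device tree source file already provides on DT-based platforms. A hedged sketch (the "acme,abc123" compatible string, the node names, and the pin labels are all invented for illustration, not a real binding):

```dts
/ {
    backlight: backlight {
        /* Hypothetical ABC123 controller; the "compatible" string is
           what tells the kernel which driver to bind to this node. */
        compatible = "acme,abc123-backlight";
        /* Board-specific wiring lives here, not in the driver: */
        pwms = <&pwm0 0 40000>;        /* brightness via PWM channel 0 */
        enable-gpios = <&gpio1 45 0>;  /* "pin 45", active high */
    };
};

&pinctrl {
    /* Pin-mux table: which pads are switched to which function. */
    backlight_pins: backlight-pins {
        pins = "GPIO1_45";
        function = "gpio";
    };
};
```

The generic driver then reads its pin and PWM assignments from this description instead of hardcoding them, so one driver serves every board that wires the chip up differently. The upstream complaint is that many vendors never write or upstream these descriptions (or the drivers they would bind to) in the first place.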
Apparently they haven't done this, so how do we get them to start?