I have wondered why the likes of McKinsey, KPMG, and PwC don't put up candidates (not even sponsoring individual candidates, just saying outright that you're electing _well known consultancy_).
1. Why would McKinsey etc. be interested in a well-functioning government? The best argument I have is that if the economy grows, then government (and private) spending on consulting may grow.
2. Note that the consulting firms already managed to get the legislation they most cared about (the creation of the LLP as a kind of entity) despite not having any candidates.
3. If the government is too associated with a big consultancy then (a) they may be pressured out of giving them contracts (not good for McKinsey!) and (b) failures by that consultancy will be highlighted more than usual in the news (also not good!)
4. I mean, plenty of people go through the consultancy meat-grinder before becoming politicians. If you are training juniors to think similarly, that may carry over after they leave.
That was basically Rishi Sunak, but beyond that, voters really hate it when you make the corporate control obvious.
However, they don't ask questions, so one layer of money laundering is completely fine. Nobody asks where the funding for Farage's various projects comes from, for example.
The author (Cong Wang) is building all sorts of neat stuff. Recently, they built kernelscript: https://github.com/multikernel/kernelscript -- another DSL for BPF that's much more powerful than the C alternatives, without the complexity of C BPF. Previously, they were at ByteDance, so there's a lot of hope that they understand the complexities of "production".
It appears the patent is for "User-Worn Device for Noninvasively Measuring a Physiological Parameter of a User". So Apple is simply moving the logic to a non-user-worn device, like a phone, to get around the problem. (This is my quick read / conjecture.)
Yeah, probably because one cannot patent an algorithm itself, only a specific implementation. The patent was about a wearable device, so I guess the workaround was to do the computations in a non-wearable device.
I really like the manycores approach, but we haven’t seen it come to fruition — at least not on general purpose machines. I think a machine that exposes each subset of cores as a NUMA node and doesn’t try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).
Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.
> I think a machine that exposes each subset of cores as a NUMA node and doesn’t try to flatten memory across the entire set of cores might be a much more workable approach. Otherwise the interconnect becomes the scaling limit quickly (all cores being able to access all memory at speed).
EPYC has a mode (NPS4, IIRC) that exposes 4 NUMA nodes per socket. It seems like that should be good if your software is NUMA-aware or NUMA-friendly.
But most desktop-class hardware has all the cores sharing a single memory controller anyway, so exposing separate NUMA nodes there wouldn't reflect reality.
Reducing cross-core communication (NUMA or not) is the key to getting high-performance parallelism. Erlang helps because any cross-process communication is explicit, so there's no hidden communication of the kind that can sometimes happen in languages with shared memory between threads. (Yes, ets is shared, but it's also explicit communication in my book.)
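To make that explicitness concrete, here's a minimal sketch (the module, function names, and message shapes are all made up for illustration); the only way these two processes share data is the send (!) and the matching receive:

```erlang
-module(msg_sketch).
-export([start/0, double/2]).

%% spawn a process that owns its own heap and only talks via messages
start() ->
    spawn(fun loop/0).

loop() ->
    receive
        {From, Ref, N} ->
            From ! {Ref, N * 2},   %% the reply is an explicit, copied message
            loop()
    end.

%% caller side: the request is likewise an explicit, copied message
double(Pid, N) ->
    Ref = make_ref(),
    Pid ! {self(), Ref, N},
    receive
        {Ref, Result} -> Result
    end.
```

Neither process can reach into the other's heap; if you don't see a `!`, no communication is happening (ets aside, as noted).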
> Erlang, at least the programming model, lends itself well to this, where each process has a local heap.
That loosely describes plenty of multithreaded workloads, perhaps even most of them. A thread that doesn't keep its memory writes "local" to itself as much as possible will run into heavy contention with other threads, and performance will suffer a lot. It's usual to write multithreaded workloads in a way that minimizes the chance of contention, even though this may not involve a literal "one local heap per core".
Yes, but in Erlang everything in every process is immutable, and nothing ever tries to write anywhere besides locally. Every variable binding leaves the previous memory unchanged and fully accessible to anything still referencing it.
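A quick shell sketch of what that single-assignment behaviour looks like in practice (the variable names are arbitrary):

```erlang
1> L0 = [b, c].
[b,c]
2> L1 = [a | L0].   % "update" = build a new term in the local heap; L0 is untouched
[a,b,c]
3> L0.
[b,c]
4> L0 = [x].        % re-binding an already-bound variable is a match failure
** exception error: no match of right hand side value [x]
```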
Paraphrasing the late, great Joe Armstrong: the nice thing about Erlang, as opposed to just about any other language, is that every year the same program gets twice as fast as it did the year before.
Manycore hasn't succeeded because, frankly, the programming model of essentially every other language is stuck in 1950: I, the program, am the entire and sole thing running on this computer, and must manually manage resources to match its capabilities. Hence async/await, mutable memory, race checkers, function coloring, all that nonsense. If half the effort spent keeping the ghost of the PDP-11 ruling all our programming languages had been spent on cleaning up the (several) warts in the actor model and its few implementations, we'd all be driving Waymos on Jupiter by now.
I'm curious, which actor model warts are you referring to exactly?
[The obvious candidates from my point of view are (1) it's an abstract mathematical model with dispersed application/implementations, most of which introduce additional constraints (in other words, there is no central theory of the actor model implementation space), and (2) the message transport semantics are fixed: the model assumes eventual out-of-order delivery of an unbounded stream of messages. I think they should have enumerated the space of transport capabilities including ordered/unordered, reliable/unreliable within the core model. Treatment of bounded queuing in the core model would also be nice, but you can model that as an unreliable intermediate actor that drops messages or implements a backpressure handshake when the queue is full.]
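To make that last point concrete, here's a rough sketch of bounded queuing modeled as an ordinary intermediate actor (the name bounded_buffer and the {msg, ...}/{ack, ...} protocol are made up for illustration). This variant drops messages when the buffer is full, i.e. it plays the "unreliable intermediary"; a backpressure handshake would reply to the sender instead of dropping:

```erlang
-module(bounded_buffer).
-export([start/2]).

%% Dest is the real consumer; it is expected to send {ack, self()} back
%% to this process after handling each forwarded message.
start(Dest, Max) ->
    spawn(fun() -> loop(Dest, Max, 0) end).

loop(Dest, Max, InFlight) ->
    receive
        {msg, _Payload} when InFlight >= Max ->
            %% "queue full": drop on the floor (the unreliable-intermediary policy)
            loop(Dest, Max, InFlight);
        {msg, Payload} ->
            Dest ! {msg, self(), Payload},
            loop(Dest, Max, InFlight + 1);
        {ack, Dest} ->
            %% the consumer finished one message; free a slot
            loop(Dest, Max, InFlight - 1)
    end.
```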
I don't think either of those is particularly problematic. The actor model as implemented by Erlang is concrete and robust enough. The big problems with the actor model are, in my opinion, around (1) speed optimizations for immutable memory and message passing (currently there's a great deal of copying and pointer chasing involved, which can be slow and is a ripe area for optimization), (2) (for Erlang) speed and QOL improvements for math and strings (Erlang historically is not about fast math or string handling, but both of those do comprise a great deal of general-purpose programming), and (3) (for Erlang) miscellaneous operational QOL improvements: the existing distribution, ets, mnesia, failover, hot upgrades, node deployment, and build process range from arcane (mnesia, hot upgrades, etc.) all the way up to covered-in-terrifying-spiders (e.g. debugging queuing issues, rebar3).
There is no lineage between The Actor Model and Erlang. The creators of Erlang are on record as having never heard of the Actor Model (as developed by Hewitt, Agha and colleagues at MIT). None of the points you make (including the first one) are a part of any formal definition or elaboration of the Actor Model that I have seen, which was one of my points: there is no unified theory of the Actor Model that addresses all of the practical issues.
With respect to your point (1), you might be interested in Pony, which has been discussed here from time to time, most recently: https://news.ycombinator.com/item?id=44719413 Of course there are other actor-based systems in wide use such as Akka.
Erlang's runtime system, the BEAM, automatically takes care of scheduling the execution of lightweight Erlang processes across many CPUs/cores, so a well-written Erlang program can be sped up almost linearly by adding more cores. And since more and more cores are being crammed into CPUs each year, what Joe meant is that by deploying your code on the latest CPU you double the performance without touching your code.
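A minimal sketch of what that looks like (the module name and busy-loop workload are made up): the code says nothing about cores or schedulers, yet the BEAM spreads the N processes across however many schedulers, normally one per core, the VM was started with.

```erlang
-module(par_sketch).
-export([run/1]).

run(N) ->
    io:format("schedulers online: ~p~n",
              [erlang:system_info(schedulers_online)]),
    Parent = self(),
    %% spawn N CPU-bound workers; placing them on schedulers is entirely the BEAM's job
    Pids = [spawn(fun() -> Parent ! {self(), busy_work(20000000)} end)
            || _ <- lists:seq(1, N)],
    %% collect one reply per worker
    [receive {Pid, Res} -> Res end || Pid <- Pids].

busy_work(0) -> done;
busy_work(K) -> busy_work(K - 1).
```

Run the same thing on a machine with more cores and, for embarrassingly parallel work like this, the wall-clock time drops roughly in proportion, which was Joe's point.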
> Erlang, at least the programming model, lends itself well to this, where each process has a local heap. If that can stay resident to a subsection of the CPU, that might lend itself better to a reasonably priced many core architecture.
I tend to agree.
Where it gets -really- interesting to think about are concepts like 'core parking' actors of a given type on specific cores; e.g. all 'somebusinessprocess' actor code happens on one fixed set of cores and 'account' actors run on a different fixed set, versus having all the cores bouncing back and forth between both.
You could theoretically get a benefit from the instruction cache staying very consistent per core, that kind of mechanical sympathy (I think Disruptors also take advantage of this).
On the other hand, it may not be as big a benefit, in the sense that cross-process writes become cross-core writes, and those tend to bring their own issues...
Who knows what will really happen, but there have been rumours of significant core-count bumps in Ryzen 6, which would edge the mainstream quite a bit closer to manycore.
Investment gap is what I'd say too. While Rust, Go, Python, etc. have had massive backers investing a ton more into things like static analysis, type checking, and developer ergonomics, the Erlang ecosystem hasn't necessarily had the same love; instead, the major users have typically chosen to pivot, or to build something outside of the BEAM.
Energy. Creating a single anti-hydrogen atom requires an absurd amount of energy: first to create a collision in a particle accelerator, and then to capture that anti-hydrogen before it annihilates with a normal atom.
Only about 0.01% of the energy used to operate the particle collider goes into creating antimatter, the vast majority of which is impossible to capture. All in all, the efficiency of the entire process, if you measure it in the E^2 = (pc)^2 + (mc^2)^2 sense (rest-mass energy out versus energy in), is probably on the order of 1e-9 or worse.
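For a back-of-the-envelope sense of scale, using the ~1e-9 figure above and standard constants:

```latex
% Rest-mass energy of one anti-hydrogen atom
E_{\text{rest}} = mc^2 \approx (1.67\times10^{-27}\,\text{kg})\,(3.0\times10^{8}\,\text{m/s})^2
               \approx 1.5\times10^{-10}\,\text{J} \approx 0.94\,\text{GeV}

% Annihilating it with one ordinary hydrogen atom releases about twice that, roughly 1.9 GeV.

% Energy input implied by an overall efficiency of about 10^{-9}
E_{\text{in}} \approx \frac{1.5\times10^{-10}\,\text{J}}{10^{-9}} \approx 0.15\,\text{J per captured anti-atom}
```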
Has there been research on more efficient ways to generate antiprotons? (By the way, anti-hydrogen isn't how you would store it, as neutral anti-hydrogen is extremely hard to trap.)
Nothing we do creates it at any kind of scale, and it's a pain in the ass to store.
Not to mention that the only way to create it is from energy (it doesn't exist on Earth in any usable quantity), and we can only do so at terrible efficiencies. So even theoretically it's pretty bad.
No it won't. It's incredibly hard to build out a fully useful version of the Linux APIs, as Cygwin and WSL have shown us. Even if you built out a similar set of APIs, Linux itself offers a ridiculous set of interaction points where applications can tie together (for example, I can use inotifywatch to copy files out of my container as they're written). I feel like what you'd end up with is something like gVisor running on top of WASM, in which case, what did we gain over VMs at all?