Yeah, it's easy to see the singularity as close when you define it as "when humans lose collective control of machines," but any serious look at human society shows that humans lost collective control of machines a while back ... to the small number of humans individually owning and controlling the machines.
Even the humans at the top don’t have commanding control of the machines, however. We live in an age where power is determined by the same ineffable force that governs whether a tweet goes viral.
Since the Luddites smashed textile machines in England roughly two hundred years ago, technology hasn't cared; it kept growing apace, driven by capitalism. Money and greed fed the process; we never stood a chance of stopping any of it.
The ratio of workers to cars matters more, imo, than whether the workers drive the cars. The fundamental sell of self-driving is that it saves labor. If it effectively doesn't, self-driving is essentially going to be a luxury rather than a replacement for the existing model.
The dark matter theory, broadly, is that there is some amount of matter that obeys the laws of Einsteinian gravity but isn't otherwise visible. By itself, it has considerable observational evidence. It doesn't resemble Ptolemaic theories of planetary motion, notably in that it doesn't and hasn't required regular updating as new data arrives.
It fits well with the OP's comments. Nothing really contradicts the theory, but there's no deeper theory beyond it. Another comment mentioned the "nightmare" scenario of dark matter having only gravitational interactions with other matter. That would be very unsatisfying for physicists, but it wouldn't be something that actually disproves any given theory.
When you say the dark matter theory doesn't require updates when new data arrives, it sounds like you don't count the parameters that describe the dark matter distribution as part of the theory.
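For a concrete example (one standard choice among several), the NFW profile describes a halo's density with two free parameters that get fitted to each galaxy or cluster:

    \rho_{\mathrm{NFW}}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2}

Here \rho_s (scale density) and r_s (scale radius) are re-fitted for every halo, and fitting them to each new rotation curve is arguably exactly "updating as new data arrives."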
I've seen adversarial approaches in small companies, or even from an individual boss, and I've seen cooperative approaches in moderately large organizations.
In fact, the shift toward an adversarial approach in just about every kind of organization has been noticeable in the US over the last 10-15 years, but the US hasn't grown that much in size in that time, and large-scale organizations existed much earlier.
I dislike it when a rhetorical flourish opens with "honest question...".
Maybe using an AI assistant instead of directly writing code is equivalent to using a high-level language instead of assembly, and maybe it isn't. So at least begin your discussion with "I think programmers who don't use AI are like programmers who insist on assembly rather than a high-level language" (and those existed back in the day). I mean, an "honest question" is one where you are honestly unsure whether you will get an answer or what the answer will be. That's completely different from honestly feeling your opponents have no good arguments. Just about the opposite, really.
By the way, the reason I view AI assistants and high-level-language compilers as fundamentally different is that compilers are mostly deterministic: you can mostly determine both the code generated and the behavior of that code in terms of the high-level language. AI-created/assisted code is fundamentally underdetermined relative to its source (a prompt) on a much wider basis than the assembly created by a compiler (whose source is source code).
Of course, but the agent can't run a code block in a readme.
It _can_ run a PEP 723 script without any specific setup (as long as uv and Python are installed). It will automatically create a virtual environment AND install all dependencies, all with a single command, without polluting the context with tons of setup.
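A minimal sketch (the requests dependency is just illustrative), with the PEP 723 metadata embedded as a comment header:

    # /// script
    # requires-python = ">=3.11"
    # dependencies = ["requests"]
    # ///
    import requests

    # uv reads the header above, creates a throwaway venv,
    # installs the dependencies, and runs the script.
    print(requests.get("https://example.com").status_code)

Then "uv run script.py" handles the venv creation and the installs in one shot.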
Microsoft has a cultural problem; it went from an "engineers" company to an MBA-directed one.
Every simplistic analysis of a failing company X uses a hackneyed cliche like this. But in the case of MS, it's completely ridiculous. MS has been renowned for shitty software since day one. Bill Gates won the '90s software battle based on monopoly power, connections, and "first feature to market" tactics.
If anything, the heyday of MS quality was the mid-2000s, when it was occasionally lauded for producing good things. But it was never an engineers' company (that's Boeing or whoever).
The thing is, "television" seemed like a single thing, but it was really a system that required a variety of connected, compatible parts, much like the Internet.
Different pieces of what became TV existed in 1900; the challenge was putting them together, and that required consensus among powerful players.
Also, I believe precursors to the CRT existed in the 19th century. What was unique about television was the creation of a full CRT-based system that allowed moving-picture consumption to become a mass phenomenon.
Early exceptional performers and later exceptional performers within a domain are rarely the same individuals; they are largely discrete populations over time. Most top achievers (Nobel laureates and world-class musicians, athletes, and chess players) demonstrated lower performance than many peers during their early years.
A simple explanation: high performance requires quite a bit of specific preparation, but "exceptional" performance is mostly random relative to the larger population of high performers, in terms of the underlying training-to-skills-to-achievement "equation." In particular, being at the top tends to get someone more resources than those nearly at the top who lack visible/certified achievements.
I'd add that billing your work as "the study of the very best" gives it strong marketing spin, and that tempts people to find simplistic markers rather than looking at the often random processes involved in visible success. And I haven't even touched on regression to the mean (https://en.wikipedia.org/wiki/Regression_toward_the_mean).
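A toy simulation makes the point (entirely illustrative model and numbers, not from any study): treat observed performance as stable skill plus one-off luck, select the early top 1%, and their second measurement falls back toward the rest of the high performers:

    import random

    # Toy model: observed performance = stable skill + one-off luck.
    random.seed(0)
    N = 100_000
    skill = [random.gauss(0, 1) for _ in range(N)]
    round1 = [s + random.gauss(0, 1) for s in skill]  # early measurement
    round2 = [s + random.gauss(0, 1) for s in skill]  # later measurement

    # Select the top 1% by the early measurement only.
    top = sorted(range(N), key=lambda i: round1[i], reverse=True)[:N // 100]

    mean1 = sum(round1[i] for i in top) / len(top)
    mean2 = sum(round2[i] for i in top) / len(top)
    print(f"early top 1%, round 1 mean: {mean1:.2f}")
    print(f"same group,   round 2 mean: {mean2:.2f}")  # noticeably lower

The early "exceptional" group was selected partly on luck, so its later scores drift back toward what skill alone predicts.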