Hacker News | chuckadams's comments

Dawn of War 3 made DoW 2 look like Game of the Decade by comparison. I hear they're making a DoW 4, and they're not even mentioning 3 when talking about the history.

Apparently sans-serif is "woke" or something. Cleek's Law meets Poe's.

Looks like slashdot-era copypasta.

... I’ll have you know I graduated top of my class in the Navy Seals, and I’ve been involved in numerous secret raids ...

State machines come to mind: a transition is just a function call. Unfortunately that's a general tail call, not always a recursive one, so no love from this library, and that's where "proper" TCO wins (or trampolines if $your_language lacks TCO)

Also it wouldn't help with Fibonacci, since while it's recursive, it's not tail-recursive (yes, it can be written that way, but I'm talking about the idiomatic naive definition).
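The difference in shape, sketched in Python (which doesn't do TCO either, so the tail version is about form, not speed, here):

```python
def fib_naive(n):
    # Idiomatic naive definition: two recursive calls, and the addition
    # happens AFTER they return, so neither call is in tail position.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_tail(n, a=0, b=1):
    # Accumulator version: the recursive call is the very last thing done,
    # which is what TCO (or @tail_call-style rewriting) can exploit.
    if n == 0:
        return a
    return fib_tail(n - 1, b, a + b)
```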


foobar2000 runs great under Wine.

Sun was pushing it as a way to script Java applets. Might have even worked out if LiveConnect (the interface layer between Java and JS) wasn't such buggy trash.

And if Java hadn't been such a big beast. The runtime's startup time and memory usage were way too high for a good experience on most users' machines.

> who would they sue

Anyone they feel like. Lawnmower gonna mow.


I'm pretty happy with k3s, but I'm also happy to see some development happening in the space between docker compose and full-blown kubernetes. The wireguard integration in particular intrigues me.

So do we actually get to edit any of the AI code additions or changes or is this just "PR merge hell mode" in Project Manager Simulator? Yes, I could flip over to my editor, but that kind of misses the whole point of the 'I' in "IDE".

I'm team JetBrains4Life when it comes to IDEs, but their AI offerings have been a pretty mixed bag of mixed messages. And this one requires a separate subscription, at that, when I'm already paying for their own AI product.


Wolfram is kind of obsessed with cellular automata, even went and wrote a whole book about them titled "A New Kind of Science". The reception to it was a bit mixed. CA are Turing-complete, so yeah, you can compute anything with them, I'm just not sure that in itself leads to any greater Revealed Truths. Does make for some fun visualizations though.

A New Kind of Science is one of my favorite books; I read the entire thing on an iPod Touch during a dreadful vacation when I was 19 or 20.

It goes well beyond just cellular automata; the thousand or so pages all seem to drive home the same few points:

- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book)

- Simple rules lead to complexity when iterated upon

- The invention of the field of computation is as big and important an invention as the field of mathematics

The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.

Super cool book and absolutely worth reading if you're into this kind of thing.


I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that it wasn't Wolfram himself who showed that CA were Turing complete, when most theoretical computer scientists would say "it's obvious, and not that interesting", kinda disproves his whole point about being an underappreciated genius. Shrug.

That CA in general were Turing complete is 'obvious'. What was novel is that Wolfram's employee proved something like Turing completeness for a 1d CA with two states and only three cells total in the neighbourhood.

I say something-like-Turing completeness, because it requires a very specially prepared tape to work that makes it a bit borderline. (But please look it up properly, this is all from memory.)

Having said all that, the result is a nice optimisation / upper bound on how little you need in terms of CA to get Turing completeness, but I agree that philosophically nothing much changes compared to having to use a slightly more complicated CA to get to Turing completeness.
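(For reference, the CA being described is Rule 110, whose universality was proved by Wolfram's employee Matthew Cook. A quick Python sketch of one step of it, on a wrapped-around row of cells:)

```python
RULE = 110  # binary 01101110: the next-state bit for each 3-cell neighbourhood

def step(cells):
    # Each cell's next state is the bit of RULE indexed by the
    # (left, self, right) neighbourhood read as a 3-bit number.
    # Edges wrap around; Cook's proof instead uses an infinite,
    # specially prepared periodic background.
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 20 + [1]  # a single live cell
for _ in range(10):
    row = step(row)
```

Two states, three cells of neighbourhood: that's the whole rule table, which is what makes the universality result such a tight bound.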


The question ultimately comes down to whether the universe is quantized at all levels or whether it is analog. If it is quantized, I demand my 5 minutes with god, because I would see that as proof that all of this is a simulation. My lack of belief in such a being makes me hope that it is analog.

Computation does not necessarily need to be quantized and discrete; there are fully continuous models of computation, like ODEs or continuous cellular automata.

That's true, but we already know that a bunch of stuff about the universe is quantized; the question is whether that holds for everything. And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that doesn't end up as a noise generator or an oscillator, and that can be used for reliable computation, is - as far as I know - based on some quantized model. And then there are still the cells themselves (arguably quanta) and their locations (usually on a grid, though you could use a continuous representation for those as well). Now, 23 or 52 bits of mantissa (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. Continuity is an analog concept, and you can't implement it with high enough fidelity on a digital computer.
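The quantization of 'continuous' doubles is easy to see directly; Python's `math.ulp` reports the gap between a value and the next representable one:

```python
import math

# Between 1.0 and the next representable double there is a fixed gap
# (one unit in the last place, 2**-52 for values near 1.0).
gap = math.ulp(1.0)

# Anything smaller than half that gap simply vanishes when added:
# the "continuum" is really a discrete grid of points.
assert 1.0 + gap / 2 == 1.0
assert 1.0 + gap > 1.0
```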

You could do it on an analog computer but then you'd be into the noise very quickly.

In theory you can, but in practice this is super hard to do.


If your underlying system is linear and stable, you can pick any arbitrary precision you are interested in and compute all future behaviour to that precision on a digital computer.

Btw, quantum mechanics is both linear and stable--and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of e.g. Newtonian billiard balls emerges from quantum mechanics.

'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.

To get back to the meat of your comment:

You can simulate such a stable system 'lazily'. I.e. you simulate it at some fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system at higher fidelity. But I'm not quite sure.)

Remember those fractal explorers like Fractint that used to be all the rage? They were digital at heart, obviously, but you could zoom in arbitrarily, as if they had infinite continuous precision.
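A rough sketch of that lazy-precision idea in Python, using the stdlib `decimal` module as a stand-in for arbitrary precision; the point and iteration count are arbitrary choices for illustration:

```python
from decimal import Decimal, getcontext

def mandelbrot_escape(cx, cy, max_iter=50):
    # Standard escape-time test for the Mandelbrot set; it runs at
    # whatever precision the decimal context currently provides.
    x = y = Decimal(0)
    for i in range(max_iter):
        x, y = x * x - y * y + cx, 2 * x * y + cy
        if x * x + y * y > 4:
            return i
    return max_iter

# "Zooming in" just means re-running the same computation with more
# digits -- no analog hardware, and no fixed global precision, needed.
for digits in (10, 50):
    getcontext().prec = digits
    print(digits, mandelbrot_escape(Decimal("-0.75"), Decimal("0.1")))
```

The renderer only ever pays for the precision the current view actually requires, which is exactly the Fractint trick.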


> If your underlying system is linear and stable

Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.

That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.


> Sure, but that 'If' isn't true for all but the simplest analog systems.

Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.

> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.

It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.


Just be careful with how aligned to reality your simulation is. When you get it exactly right, it's no longer just a simulation.

"It looks designed" means nothing. It could be our ignorance at play (we have a long proven track record of being ignorant about how things work).

Yes. Or it could be an optimisation algorithm like evolution.

Or even just lots and lots of variation and some process selecting which ones we focus our attention on. Compare the anthropic principle.


For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

In our current algorithm-obsessed era, this is reminiscent of procedural generation (but down/up the scale of complexity, not "No Man's Sky"-style PG).

However, we also have a long track record of seeing the world as nails for our latest hammer. The idea of an algorithm, or even computation in general, could be in reality conceptually closer to "pointy stone tool" than "ultimate substrate".


> For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

That's a tempting thing to say, but quantum mechanics suggests that we don't have infinite layers at the bottom. Mostly thermodynamic arguments combined with quantum mechanics. See e.g. https://en.wikipedia.org/wiki/Bekenstein_bound on the amount of information that can, even in theory, be contained in a given volume of spacetime.


From the link you shared:

> the maximum amount of information that is required to perfectly describe a given physical system _down to the quantum level_

(emphasis added by me)

It looks like it makes predictions for the quantum layer and above.

--

Historically, we humans have a long proven track record of missing layers at the bottom that were unknown but now are known.

