Soon, very soon, there will be no way to discern the output of AI from that of a human, just as there is no way to tell, given only the sum, whether I added two numbers by hand on paper or pushed "+" on my calculator. Would you refuse to read an article because someone used a calculator to obtain the results? What about numerical integrations? Would you refuse to read a book because a printing press made it? Perhaps books should carry labels that say "WARNING: this book was printed by a machine".
What if, over the next few decades, science is creatively destroyed, so that no science remains that isn't in some way produced using AI? I'm quite critical of these tools, but I'm not a Luddite.
It might be better to think about what a horse is to a human: mostly, a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.
In times past, the only people on earth whose standard of living was raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed, we learned to make technologies that provide enough energy slaves to the common man that everyone now lives a life a king of earlier times would have envied.
So the question arises: does AI, or the pursuit of AGI, provide more or fewer energy slaves to the common man?
The big problem I see with AI is that it undermines redistribution mechanisms in a novel and dangerous way. Despite industrialization, human labor was always needed to actually do anything with capital, and even people born in poverty could do work to get their share of the growing economic pie.
AI kinda breaks this; there is a real risk that human labor is going to become almost worthless this century, and this might mean that the common man ends up worse off despite nominal economic growth.
The goal is to eradicate the common man. It turns out you don't need a lot of energy, food, water, or space if there aren't 8 billion humans to supply. It's the tech billionaires' dream: replacing humans with robotic servants. Corporations do not care about the common man.
A better analogy would be a virus. In some sense LLMs, and all other very sophisticated technologies, lean on our resources to replicate themselves. With LLMs you actually do have a projection of intelligence in the language domain, even though it is rather corpse-like, as though you shot intelligence in the face and shoved its body in the direction of language just so you could draw a chalk outline around it.
Despite all that, one can adopt the view that an LLM is a form of silicon-based life akin to a virus, and that we are its environmental hosts, exerting selective pressure and supplying much-needed energy. Whether that life is intelligent is another issue, probably related to whether an LLM can tell that a cat cannot be, at the same time and in the same respect, not a cat. The paths through the meaning manifold constructed by an LLM are not geodesic and not reversible, while in human reason the correct path is lossless. An LLM literally "thinks" up is a little bit down, and vice versa, by design.
An LLM creates a high-fidelity probabilistic model of human language. The hope is to capture the input/output of the various hierarchical formal and semi-formal systems of logic that transit from human to human, which we know as "Intelligence".
Unfortunately, its corpus is bound to contain noise and nonsense that follows no formal reasoning system but contributes to the ill-advised idea that an AI must sound like a human to be considered intelligent. So it is not a bag of words but perhaps a bag of probabilities. This matters because the fundamental problem is that an LLM is not able, by design, to correctly model the most fundamental precept of human reason, namely the law of non-contradiction. An LLM must, I repeat must, assign non-vanishing probability to both sides of a contradiction. What's worse, the winning side loses: because long chains of reason are modelled with probability, the longer the chain, the less likely an LLM is to follow it. Moreover, whenever there is actual debate on an issue, so that the corpus is ambiguous, the LLM necessarily becomes chaotic on that issue.
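A toy numeric sketch of those two claims (my numbers, nothing measured from a real model): a softmax leaves non-zero probability on the contradictory continuation, and the probability of staying on an n-step chain at p per step is p to the n.

```python
# Toy illustration with assumed numbers, not measurements from any model.
import math

def softmax(logits):
    # Standard softmax: every outcome gets strictly positive probability.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Even a strongly "decided" distribution leaves mass on the contradictory token.
p_yes, p_no = softmax([4.0, -4.0])
print(f"P(cat is a cat) = {p_yes:.4f}, P(cat is not a cat) = {p_no:.4f}")

# Probability of staying on an n-step chain of reasoning at p per step.
p_step = 0.95
for n in (1, 10, 50):
    print(f"{n:>2}-step chain followed with probability {p_step**n:.3f}")
```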
I literally just had an AI prove the foregoing with some rigor, and in the very next prompt I asked it to check my logical reasoning for consistency, and it claimed it was able to do so (->|<-).
A bad thing with some positive side effects is not a good thing. Wildfire is bad; too-frequent wildfires will turn forest into savannah. I didn't see where this part of the analogy was discussed.
I agree; this whole thread seems to turn the concept of idempotency on its head. As far as I know, an idempotent operation is one that can be repeated without ill effect, rather than the opposite: a process that will cause errors if executed repeatedly.
The article doesn't propose anything especially different from Lamport clocks. What this article suggests is a way to deal with non-idempotent message handlers.
I'm not sure I follow. Though I agree with your definition of idempotency, I think ensuring idempotency on the receiving side is sometimes impossible without recognizing that you are receiving an already-processed message. In other words: you recognize that you have already received the incoming message and don't process it. In other words: you can tell that this message is the same as an earlier one. In other words: the identity of the message corresponds to the identity of an earlier message.
It's true that idempotency can sometimes be achieved without explicitly having message identity, but in cases where it cannot, a key is usually provided to solve the problem. This key does encode the identity of the message, but it is usually called an "idempotency key" to signal its use.
The system then becomes idempotent not by having repeated executions result in identical state on some deeper level, but by detecting and avoiding repeated executions on the surface of the system.
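A minimal sketch of that surface-level detection, assuming a Python message handler; all names here (Message, handle_payment, processed_keys) are hypothetical, not taken from the article:

```python
# Hypothetical sketch: deduplicating a non-idempotent handler with an
# idempotency key supplied by the sender.
from dataclasses import dataclass

@dataclass
class Message:
    idempotency_key: str  # identity of the message
    amount: int

ledger: list[int] = []            # state mutated by the handler
processed_keys: set[str] = set()  # keys of messages already handled

def handle_payment(msg: Message) -> None:
    """Not idempotent on its own: every call appends to the ledger."""
    ledger.append(msg.amount)

def handle_once(msg: Message) -> None:
    """Surface-level dedup: skip messages whose key was already seen."""
    if msg.idempotency_key in processed_keys:
        return  # already processed, do nothing
    handle_payment(msg)
    processed_keys.add(msg.idempotency_key)

m = Message(idempotency_key="tx-42", amount=100)
handle_once(m)
handle_once(m)          # redelivery: detected and skipped
assert ledger == [100]  # state is the same as after a single delivery
```

The handler itself stays non-idempotent; the wrapper just refuses to run it twice for the same key.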
We are saying the same thing using different words. I view this as a strategy, with a great deal of overhead, for dealing with a lack of idempotency in handlers. So I guess I would call it a non-idempotency key, since a handler that is not idempotent will necessarily use it. I think calling it an "idempotency key" strays too close to a contradiction in terms.
Maybe this is a mismatch of expectations, but I generally think very few handlers are idempotent without such a key. E.g. any edits or soft deletes are impossible to handle in an idempotent way without auxiliary information (an idempotency key, a timestamp, or a sequence number).
Again, stopping the execution of the handler based on an ID is not idempotency, but rather a corrective measure for the fact that the handler is not idempotent. Idempotency is a property that says the handler can run as many times as I like, diametrically opposed to the notion that it can run only once.
> Idempotency is a property that says the handler can run as many times as I like
.... using some input. If that input has a key and the handler bails on execution, your definition is not violated at all. It's only violated if you don't consider the check a part of the handler, which is an arbitrary limitation.
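To make that concrete, here is a hedged sketch (hypothetical names again) of the view that the key check belongs inside the handler: with the check included, running the handler any number of times on the same input leaves the same state, which satisfies the definition quoted above.

```python
# Hypothetical sketch of "the check is part of the handler": the handler
# owns the dedup step, so repeated invocations with the same input
# converge to the same state.
ledger: list[int] = []
seen: set[str] = set()

def handler(key: str, amount: int) -> None:
    # The key check is inside the handler, not bolted on outside it.
    if key in seen:
        return
    seen.add(key)
    ledger.append(amount)

for _ in range(5):
    handler("tx-42", 100)  # same input, arbitrarily many times

assert ledger == [100]     # state identical to a single execution
```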
Regardless, I think an interpretation that avoids identifying the message explicitly leaves a very narrow set of idempotency candidates, which is not a useful definition. Under that definition, really only reads are idempotent, since any state-setting operation can later be retried and give a different result if other state-setting operations are interleaved.