The concept is actually pretty simple: instead of changing existing values, you create new values.
The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
This is a subtle but important difference. It means any part of your program with a reference to the original list will never have it change unexpectedly. That eliminates a large class of subtle bugs.
[1] Whether the new list holds a completely new copy of the existing data, or shares structure with the old list, is an important optimization detail, but either way the guarantee is the same. Getting these optimizations right is what makes the language efficient enough to be practical, but while using the data structure you don't have to worry about those details.
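A minimal sketch of the difference in Python, using a tuple as a stand-in for an immutable list (real immutable-first languages use persistent data structures with structural sharing, but the guarantee from the caller's side is the same):

```python
# A tuple is Python's built-in immutable sequence: "adding" an element
# builds a new tuple rather than modifying the old one.
readings = (1.0, 2.5, 3.7)

# Create a new tuple consisting of the old one plus the new value.
updated = readings + (4.2,)

# Any code still holding the original reference sees it unchanged.
assert readings == (1.0, 2.5, 3.7)
assert updated == (1.0, 2.5, 3.7, 4.2)

# Contrast with a mutable list, where every holder of the reference
# observes the change, whether it expected to or not.
shared = [1.0, 2.5, 3.7]
alias = shared
shared.append(4.2)
assert alias == [1.0, 2.5, 3.7, 4.2]  # alias changed "unexpectedly"
```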
> The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
Getting back to this, though - where would this be useful? What would do this?
I'm not getting why having a new list that's different from the old list, with some code working off the old list and some working off the new list, is anything you'd ever want.
Can you give a practical example of something that uses this?
> It means any part of your program with a reference to the original list will not have it change unexpectedly.
I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.
That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?
There’s a mismatch between your assumptions, coming from C, and GP’s assumptions, coming from a language where arrays are not fixed-length. Having a garbage collector manage memory for you is pretty fundamental to immutable-first languages.
Rich Hickey asked once in a talk, “who here misses working with mutable strings?” If you would answer “I do,” or if you haven’t worked much in languages where strings are always immutable and treated as values, then the benefits of immutability are harder to describe.
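For what it’s worth, Python strings already behave this way; a small sketch of what “treated as values” looks like:

```python
s = "immutable"
t = s.upper()  # builds a NEW string; s is untouched

assert s == "immutable"
assert t == "IMMUTABLE"

# In-place mutation is rejected by design:
try:
    s[0] = "I"
except TypeError:
    pass  # str objects don't support item assignment
```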
Von Neumann famously thought assemblers and higher-level-language compilers were a waste of time. How much that opinion was based on his facility with machine code I don’t know, but compilers certainly helped other programmers write more closely to the problem they want to solve instead of tracking registers in their heads. Immutability is a similar offloading of incidental complexity to the machine.
I must admit I do regard assembly language with some suspicion, because the assembler can make some quite surprising choices. Ultra-high-level languages like C are worse, though, because they can often end up doing things like allocating really wacky bits of memory for variables and then having to get up to all sorts of stunts to index into your array.
State exists in time: a variable is usually valid at the point it's created, but it might not be valid in the future. Thus, if part of your program accesses a variable expecting it to reflect one point in time when it actually reflects another (because it was mutated in between), that can cause issues.
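A small sketch of that failure mode in Python (the names here are made up for illustration):

```python
# Mutable version: the snapshot taken "at a certain point in time"
# is silently invalidated by a later mutation.
config = {"retries": 3}
snapshot = config                # intended as the state at time T0
config["retries"] = 0            # mutation at time T1
assert snapshot["retries"] == 0  # surprise: the T0 "snapshot" now reflects T1

# Immutable version: each point in time gets its own value, so a
# reference is always valid for the moment it was created.
config_t0 = {"retries": 3}
config_t1 = {**config_t0, "retries": 0}  # new dict; the old one is untouched
assert config_t0["retries"] == 3
assert config_t1["retries"] == 0
```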