This is still a partial-knowledge situation - you know that there is a certain proportion of colored balls in the chamber, but not their order. The probability incorporates the information we do have and lets us draw inferences about the information we don't.
They didn't say that it has "nothing to do with the object it describes", though. It has to do with your knowledge, and your knowledge in turn has to do with the object it describes.
Well, it's getting philosophical now. We do not have a way to experience the true nature of the coin. We can only experience an "image" [0] of the coin and we summarize all "images" of that object into knowledge.
[0] by "image" I mean not only vision, but also hearing, touch, and other modes of perception.
I do not think there is anything very philosophical here, just the point that the claim in the post I was replying to was demonstrably false.
I am intrigued by the idea of the coin having a "true nature" that we have no way to experience; I would like to know what this elusive "true nature" is, but if we cannot experience it, I don't suppose you can tell me. Instead, I will settle for an explanation of how you know it has such a true nature.
I don't think it's demonstrably false: If you don't know that the coin is weighted, the probability is 50%. Probabilities are predictions and estimates, not fundamentally about the thing itself, but about what we know about the thing.
The principle of indifference? I know that it is a commonplace assumption, but it feels to me as though one is assuming one has more information than is justified. Coming back to the article's "economist's wager", is it rational to bet with even odds on something you know nothing about? If the assumption is interpreted as a testable hypothesis about outcomes, why would complete ignorance imply any particular result? On the other hand, if it is interpreted strictly as a statement about one's knowledge, why present it exactly as if one had sufficient knowledge of the situation to know that the probability is 0.5? Maybe the author will have an answer in part 2.
> If you don't know that the coin is weighted, the probability is 50%.
No, if it's weighted it's not 50%. Your prior probability is 50%, but neither a Bayesian nor a frequentist would claim the true probability is known before testing.
There is no such thing as "true probability" in Bayesian interpretation, it only exists in frequentist world.
Notice that it is possible to build a robot which flips a coin in such a way that it always lands heads - sure, you might need to build a different robot if the coin is "biased" (you probably mean its weight distribution is uneven), but it's still possible.
But that's the thing: the "true" probability is unknowable, and may even be an ill-defined concept. It is a deterministic process, so "probability" is just a simplifying concept to describe our best-guess belief about how the coin behaves in the aggregate.
The true probability requires an infinite sequence of tests, so it's by definition unknowable. But it's what any sort of statistics attempts to approximate.
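To make the "approximation" point concrete, here is a toy sketch (my own, not from the thread): the coin's "true" weighting is hypothetical and only knowable here because we fix it inside the simulation, which is exactly what we can't do with a real coin.

```python
# Sketch of frequentist approximation (toy example). The "true" bias is
# only knowable here because we fix it ourselves.
import random

random.seed(0)
TRUE_P = 0.6  # hypothetical hidden weighting of the coin

heads = 0
flips = 0
for checkpoint in (100, 10_000, 1_000_000):
    while flips < checkpoint:
        heads += random.random() < TRUE_P
        flips += 1
    print(flips, heads / flips)  # estimate drifts toward TRUE_P
```

The estimate tightens as flips accumulate, but no finite run pins the true value exactly.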
But once the approximations are within undetectable distance of the "true probability", you are done.
Not only is the true probability unknowable, it is also unencodable; but if we accept a limitation on encoding, then we CAN give a true probability subject to that limitation.
Like.... if we are to determine a coin's probability to 1 decimal place, then we can do that.
Of course not. If you're a frequentist you can say your best estimate is 100% heads with an unknown variance, and if you're a Bayesian you work out p(a|b) = p(a)p(b|a)/p(b) and update your priors (which will not give 100% heads). The more coins you flip, the better you can estimate the true probability.
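A minimal sketch of the Bayesian side, assuming a uniform Beta(1, 1) prior (the standard conjugate-prior shortcut - an assumption on my part, not necessarily what the commenter had in mind):

```python
# Sketch: conjugate Beta-Binomial update for a coin's bias, assuming a
# uniform Beta(1, 1) prior.
def posterior_mean(heads, tails, alpha=1.0, beta=1.0):
    """Posterior mean of P(heads) after observing the given flips."""
    return (alpha + heads) / (alpha + beta + heads + tails)

# Ten heads in a row: the frequentist point estimate is 1.0, but the
# posterior mean is 11/12 - finitely many flips never yield exactly 100%.
print(posterior_mean(10, 0))
```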
Ok, so let's imagine you build a servo-driven flipping machine to carry out an infinite series of tests and notice, after a thousand of them, that 99% of the time, the coin flip matches the orientation of the coin when it's loaded into the machine.
What have you learned about the coin's true flip probability?
You've learned that the system of coin + machine has resulted in the same orientation 99% of the time. You can put some error bars on that, investigate the differences (did that 1% where it changed happen disproportionately with a certain side of the coin up?) and from that provide an estimate for whether the coin is fair. If the confidence intervals aren't small enough for you, you can do more experiments. The confidence interval will never be 0 until you've done an infinite sequence of trials. (Only axiomatic logic can have confidence interval 0, and it doesn't make statements about the real world, only about the axiomatic system in use.)
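For concreteness, one common choice of error bar is the normal-approximation 95% interval (the 99% match rate is taken from the thought experiment above; the interval formula is my illustration, not the commenter's):

```python
# Sketch: normal-approximation 95% error bars for an observed 99% match
# rate. The interval narrows with more trials but never reaches width 0.
import math

def ci95(successes, n):
    p = successes / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
    return p - half, p + half

for n in (1_000, 100_000, 10_000_000):
    lo, hi = ci95(int(0.99 * n), n)
    print(n, hi - lo)  # width shrinks roughly as 1/sqrt(n)
```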
So, let's say that we continued the servo tests 1e99 times, with the coin loaded in each orientation equally. We measured 50.00% flips for heads and tails, and continue to see the 0.99 correlation with the initial orientation. The 1% of the time that the correlation doesn't match, it doesn't seem to show any bias for one side or the other.
So after an "infinite" number of tests, we continue to get 50.00% frequency of heads, but with a 0.99 correlation with the orientation when loaded into the machine.
Now I load a coin into the machine and ask you to name the true probability that the result is heads. I don't tell you the initial orientation, but I know it privately.
What's the true probability of heads? Our testing found precisely 50.00% frequency of heads. But are you still sure the probability is an intrinsic property of the system, rather than a property of your state of knowledge of the system?
We can continue the pattern; maybe the 1% error itself correlates to 0.99 with someone running the microwave in the kitchen. This drops the line voltage and causes the servo to impart a little less momentum to the coin, causing it to flip one fewer times on average. Neither of us have currently checked that the microwave is running... And so on...
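The whole thought experiment is easy to simulate (the 99% figure is the one assumed above). Marginally the machine produces ~50% heads, but conditioned on knowing the loaded orientation the prediction tightens to ~99% - which is exactly the knowledge-dependence being argued for:

```python
# Sketch of the coin-plus-machine thought experiment, with the assumed
# 99% orientation-match rate baked in.
import random

random.seed(1)

def machine_flip(loaded_heads_up):
    """True = heads; matches the loaded orientation 99% of the time."""
    keep = random.random() < 0.99
    return loaded_heads_up if keep else not loaded_heads_up

trials = 100_000
heads = matches = 0
for i in range(trials):
    orientation = bool(i % 2)  # load each orientation equally often
    result = machine_flip(orientation)
    heads += result
    matches += (result == orientation)

print(heads / trials)    # marginal frequency: about 0.50
print(matches / trials)  # conditional on orientation: about 0.99
```

The same physical process yields a "probability" of 0.50 or 0.99 depending only on what the observer knows.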
But what do the error bars themselves mean? Are they not probabilistic in nature themselves?
Say you conduct a thousand trials and calculate the error bars based on the results. If you conduct a hundred such experiments (each consisting of a thousand trials) and one of the experiments violates the error bars, does that invalidate them?
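One way to see what the error bars themselves mean: by construction, a 95% interval should be violated in roughly 5 of every 100 experiments, so such violations are expected rather than invalidating. A sketch with made-up parameters:

```python
# Sketch: run 100 experiments of 1,000 fair-coin trials each and count
# how many 95% error bars fail to cover the true value.
import math
import random

random.seed(2)
TRUE_P = 0.5
misses = 0
for _ in range(100):
    heads = sum(random.random() < TRUE_P for _ in range(1_000))
    p = heads / 1_000
    half = 1.96 * math.sqrt(p * (1 - p) / 1_000)  # 95% normal-approx bar
    if not (p - half <= TRUE_P <= p + half):
        misses += 1

print(misses)  # expected to be around 5 - a violation is built in, not a flaw
```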
>What have you learned about the coin's true flip probability?
Nothing, because you only tested the coin flip machine in aggregate. With a different throwing mechanism the results could be completely different.