I think that is equivalent to selection sort:
You put all the numbers into the scheduler, which periodically checks all scheduled threads and runs the one whose timer has elapsed; i.e. it selects the scheduled tasks sequentially, in order of increasing timer duration.
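A minimal sketch of that sleep-sort idea, assuming small non-negative integers and an arbitrary scale factor; OS scheduler granularity makes this unreliable in practice:

```python
import threading
import time

def sleep_sort(numbers):
    """Each value sleeps proportionally to itself, so smaller values
    wake up and get appended first."""
    result = []
    lock = threading.Lock()

    def worker(n):
        time.sleep(n * 0.05)  # 50 ms per unit: an arbitrary scale factor
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in numbers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 2]))  # usually [1, 2, 3], but not guaranteed
```

The scheduler wakes threads in order of increasing sleep duration, which is exactly the "select the smallest remaining" step of selection sort, just delegated to the OS.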
That has an expected running time of N! loop iterations. Far from horrible unless you make the 'is sorted' or 'randomize' calls horrible.
The great thing about this algorithm is that the same piece of code does all these things. There isn't a separate bodge for checking whether something is sorted.
Your expected number of rolls until a 6 doesn't decrease with each roll. The same is true for this algorithm. If you run this algorithm and it has an expected 10 days runtime, you still have an expected 10 days left after 5 days of failure.
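That memorylessness can be checked with a quick simulation (hypothetical code, just to illustrate that the geometric distribution has no memory):

```python
import random

def expected_remaining_rolls(prior_failures, trials=100_000):
    """Estimate the expected number of additional rolls needed to see a 6,
    given that the first `prior_failures` rolls already failed."""
    total, count = 0, 0
    for _ in range(trials):
        rolls = 0
        while True:
            rolls += 1
            if random.randrange(6) == 0:  # rolled a 6
                break
        if rolls > prior_failures:  # keep only runs consistent with the failures
            total += rolls - prior_failures
            count += 1
    return total / count

# Both estimates hover around 6: five prior failures buy you nothing.
print(expected_remaining_rolls(0))
print(expected_remaining_rolls(5))
```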
The worstsort in the post has a much worse expected runtime than the sort you described. You could also make its bound not guaranteed, by randomly deciding at the end whether to throw out the result and start over.
If I roll a die, the chance is 1 in 6, yes, but every throw is another bite of the cherry. If I throw the die 10 times, the chance of at least one of those throws being a 6 is greater than 1 in 6. The same applies to this 'sorting' algorithm.
Yes, you could add a random element to any sorting algorithm to make it slower. I suppose you could argue that in my algorithm the primary purpose of the randomness wasn't to slow things down: without it, the algorithm would totally break!
I'm not sure it even matters though. My primary point was that a random algorithm can be worse than just a slow algorithm. With Worstsort, you can be fairly certain that the heat death of the universe will interrupt you. With my algorithm you don't know: it might be instant, it might never finish.
More concretely, do you want a game that runs at a steady 30fps, or a game that goes from 5fps to 30000fps, with an average of 60fps?
Gambler's fallacy would be me rolling a die 10 times and then expecting the 11th roll to be anything other than a 1 in 6 chance.
That isn't what I'm saying. I'm saying that if you roll a die 10 times, the chance of at least one of those rolls being a 6 is greater than 1 in 6. It is statistically likely that, if you roll a die 10 times, you will roll a 6 at least once.
I said "kind of", because there isn't a direct link no.
But my chances still increase. If I roll that die once, I know my chances of rolling a 6 are poor, if I roll it 10 times the odds would be better, a million times almost certain.
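Those odds are just 1 − (5/6)^n; a quick check of the numbers being claimed:

```python
def prob_at_least_one_six(n):
    """P(at least one 6 in n fair-die rolls) = 1 - (5/6)^n."""
    return 1 - (5 / 6) ** n

print(round(prob_at_least_one_six(1), 3))   # 0.167
print(round(prob_at_least_one_six(10), 3))  # 0.838
print(prob_at_least_one_six(1_000_000))     # 1.0 to within float precision
```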
If my chances of success are steadily increasing, why is that not progress? (especially for an algorithm based solely on chance).
Your chances aren't steadily increasing. The chance you finish within a week is higher than the chance you finish within a day, but your chance is no higher after a day than before it.
Consider an alternative algorithm based solely on chance. Pick an element uniformly at random and if it's larger than the element after it, swap them. This algorithm has progress, as the array becomes monotonically more sorted over time, decreasing the expected amount of time until the algorithm is done. This is what it looks like to make progress. Your algorithm does not have that feature.
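A sketch of that progressing algorithm, assuming "the element after it" means the immediate right neighbour:

```python
import random

def random_bubble_sort(arr):
    """Repeatedly pick a random position; if that element is larger than
    its right neighbour, swap them. Each swap removes at least one
    inversion, so the array monotonically approaches sortedness."""
    arr = list(arr)
    while any(arr[i] > arr[i + 1] for i in range(len(arr) - 1)):
        i = random.randrange(len(arr) - 1)
        if arr[i] > arr[i + 1]:
            arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

print(random_bubble_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```

A progress bar over the remaining inversion count would actually move here, which is the contrast being drawn with the shuffle-and-check approach.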
I never said you couldn't eventually win. We're talking about whether you make progress leading up to the win. You could win at any time, but you never get closer to winning. If you had a progress bar based on how long left until you win, it wouldn't move until the moment you won.
For real-time applications, worst case can be much more important than the average case. However, not all computation is real-time. Sometimes the average rate is more important than the worst-case.
If we're talking about practical applications like a video game, I'd still say that an algorithm that won't finish within my lifetime is strictly worse than one that might.
Lol.
If we're talking practical applications, then it doesn't really matter!
Agreed that sometimes the average rate is fine. That just legitimises my disagreement over the 'worst' sort: there is no worst sort.
Ultimately it's just down to semantics. The author stroked his beard, thought about the absolute slowest sort, and called it the worst. I stroked my (metaphorical) beard and found a different kind of worst. Both should be avoided in practical applications where possible.
His is cleverer, yes; mine is stupid. Cleverness is probably the most relevant metric, and probably why no one submitted a sort that returns the wrong answer, which I would claim could also be 'worst'.
> Lol. If we're talking practical applications, then it doesn't really matter!
Then why did you bring up fps in video games?
> there is no worst sort.
Of course there is no worst sort. But some sorts are worse than others. The point of disagreement is that your sorting algorithm is clearly better than Worstsort.
If all randomized algorithms with unbounded runtime were worse than Worstsort, then flipping a coin until you get heads and then running a fast sort algorithm would be worse than Worstsort. If you instead take a more reasonable approach, like comparing expected runtime, Worstsort is clearly worse than bogosort.
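A sketch of that coin-flip example, assuming "a fast sort algorithm" means the built-in sort: the worst case is unbounded (you could flip tails forever), yet the expected number of flips is only 2.

```python
import random

def coin_flip_sort(arr):
    """Flip a fair coin until heads, then sort with the built-in sort.
    No guaranteed bound on runtime, but the expected flip count is 2."""
    flips = 1
    while random.random() < 0.5:  # tails: keep flipping
        flips += 1
    return sorted(arr), flips

result, flips = coin_flip_sort([3, 1, 2])
print(result)  # [1, 2, 3]
```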
`While list not sorted { Random(list) }`
?
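(For reference, that pseudocode is bogosort; a runnable sketch:)

```python
import random

def bogosort(arr):
    """Shuffle until sorted. Expected number of shuffles is about n!
    for distinct elements, with no guaranteed upper bound."""
    arr = list(arr)
    while any(arr[i] > arr[i + 1] for i in range(len(arr) - 1)):
        random.shuffle(arr)
    return arr

print(bogosort([3, 1, 2]))  # [1, 2, 3], eventually, with probability 1
```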