
In old-fashioned AI, it was generally believed that the best way to spend resources was to exactly evaluate as much of the search tree as possible. To that end, you should use lightweight heuristics to guide the search in promising directions and optimizations like alpha-beta pruning to eliminate useless parts of the search space. For finite games of perfect information like chess, this is hard to beat when the search is deep enough. (After all, if you could evaluate the whole game tree from the start, you could always make optimal moves.) Stockfish follows this approach and provides ample evidence of the strength of this strategy.
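A rough sketch of the idea, as negamax with alpha-beta cutoffs over a toy tree. (A "position" here is just a leaf score or a list of child positions; a real engine would generate moves and call a static evaluator.)

    # Fail-hard negamax with alpha-beta pruning (toy sketch).
    # Leaf scores are from the perspective of the side to move.
    def alphabeta(pos, alpha=float("-inf"), beta=float("inf")):
        if isinstance(pos, int):
            return pos                      # leaf: static evaluation
        for child in pos:
            score = -alphabeta(child, -beta, -alpha)
            if score >= beta:
                return beta                 # cutoff: opponent refutes this line
            alpha = max(alpha, score)
        return alpha

    print(alphabeta([[3, 12, 8], [2, 4, 6]]))  # 3; leaves 4 and 6 are never evaluated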

Perhaps a bit flippantly, you can think of MCTS as “vibe search”; more accurately, it’s a sampling-based search. The basic theory is that we can summarize the information we’ve obtained so far into an estimate of our belief in the “goodness” of every possible move and (crucially) our confidence in that belief. Then we allocate search time to prioritize the branches that we are most certain are good.
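The classic way to balance those two quantities is a bandit rule like UCB1. A minimal sketch (the Child record is a hypothetical node structure; AlphaGo-style engines use a related PUCT rule that also folds in network priors, as in the sketch further down):

    from dataclasses import dataclass
    import math

    @dataclass
    class Child:
        wins: float = 0.0    # accumulated reward from simulations
        visits: int = 0

    def ucb1_select(children, c=1.414):
        total = sum(ch.visits for ch in children)
        def ucb(ch):
            if ch.visits == 0:
                return float("inf")         # try every move at least once
            mean = ch.wins / ch.visits      # belief in the move's "goodness"
            bonus = c * math.sqrt(math.log(total) / ch.visits)  # uncertainty
            return mean + bonus
        return max(children, key=ucb)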

In this way MCTS iteratively constructs an explicit search tree for the game, with associated statistics that are used to guide decisions during play. The neural network does a “vibe check” on each new position in the tree to produce the initial estimate of “goodness”, and then the search process refines that estimate. (Ask the NN to guess at the current position; then play a bunch of simulations to make sure it doesn’t lead to obvious blunders.)
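One full iteration then looks roughly like this. An AlphaZero-style sketch: policy_value stands in for the network call (returning per-move priors and a value estimate) and make_child for applying a move; both are hypothetical names, and the Node fields are a bare-minimum assumption.

    from dataclasses import dataclass, field
    import math

    @dataclass
    class Node:
        position: object                    # opaque game state
        prior: float = 1.0                  # network prior for this move
        visits: int = 0
        value_sum: float = 0.0
        children: list = field(default_factory=list)

        def puct(self, parent_visits, c=1.5):
            q = self.value_sum / self.visits if self.visits else 0.0
            u = c * self.prior * math.sqrt(parent_visits) / (1 + self.visits)
            return q + u                    # estimated goodness + confidence bonus

    def mcts_iteration(root, policy_value, make_child):
        # 1. Selection: descend via the statistics gathered so far.
        node, path = root, [root]
        while node.children:
            node = max(node.children, key=lambda ch: ch.puct(node.visits))
            path.append(node)
        # 2. Expansion plus the network's "vibe check" on the new leaf.
        priors, value = policy_value(node.position)
        node.children = [Node(make_child(node.position, move), prior=p)
                         for move, p in priors.items()]
        # 3. Backup: refine the goodness estimates along the path.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value                  # flip perspective between players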



I feel old. Old-old-fashioned (pre-alpha-beta) chess engines used a heavyweight evaluator to limit the branching factor of the searched graph.


I think there was some debate on this, actually. I did a lot of research on the subject in the late 2010s, and it seems there were those who felt that limiting the branching factor was the goal, while others felt that a fast eval to guide the search, in order to prune the tree, was better.

For what it’s worth, “prune the tree” is still the winningest strategy. MCTS in AlphaGo/AlphaZero scored some wins when they came out, but eventually Stockfish invented the efficiently updatable neural network (NNUE) that now guides its search, and it’s much stronger than any MCTS agent.
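For context, the “efficiently updatable” part is about the first layer: it’s a huge, sparse piece-square feature transform, so rather than recomputing it at every node, the engine keeps an accumulator and adds/subtracts a few weight columns as a move changes features. A rough sketch of the idea (names and shapes are illustrative, not Stockfish’s actual code):

    import numpy as np

    # acc: (hidden,) first-layer pre-activations for the current position
    # W:   (num_features, hidden) first-layer weight matrix
    def update_accumulator(acc, W, removed, added):
        for f in removed:
            acc -= W[f]     # piece left this square: subtract its column
        for f in added:
            acc += W[f]     # piece arrived here: add its column
        return acc          # the small dense layers after this are cheap

    # A quiet knight move touches just two features, e.g.
    #   acc = update_accumulator(acc, W, removed=[N_G1], added=[N_F3])
    # where N_G1/N_F3 are hypothetical (piece, square) feature indices.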


I suspect you are talking about a period a few decades after the time I am talking about. Many of the earliest chess programs used lossy pruning (Shannon's Type B engines), on the assumption that a bad enough static evaluation at a node was reason to stop looking down that branch entirely. But unlike alpha-beta, these cutoffs were not provably correct. Shannon's paper explains a lot more about this. In the late 1940s some of these programs were being run with pen and paper.

For what it's worth, Stockfish didn't invent efficiently updatable neural networks; Yu Nasu did, for shogi. Hisayori Noda ported the technique to Western chess and Stockfish. NNUE is really neat.


Threads like this are why I love HN. Thanks for teaching me new things. :-)


Could you elaborate on this? I thought alpha-beta first appeared way back in the 50s/60s.


The first non-trivial chess programs were 'playing' in the late 40s (with pen-and-paper 'CPUs'). Some of them included features you'll still see today.

Claude Shannon (https://www.chessprogramming.org/Claude_Shannon) proposed two types of chess programs: brute-force (Type A) and selective (Type B). Alpha-beta is an optimization for the brutes, but many searching chess programs were selective, with heavyweight eval or with delayed eval.
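A toy sketch of the Type B idea, with a position encoded as (static_score, children) and scores from the side to move's view; a real selective engine used heavyweight chess-specific heuristics for the score:

    # Type B "selective" search: forward-prune to the few moves the
    # evaluator likes best. Unlike alpha-beta, this is lossy: it can
    # discard the objectively best move.
    def selective_search(pos, width=2):
        score, children = pos
        if not children:
            return score
        # Children are scored from the opponent's view, so the moves
        # most promising for us are the ones they rate lowest.
        promising = sorted(children, key=lambda ch: ch[0])[:width]
        return max(-selective_search(ch, width) for ch in promising)

    # The (5, []) reply looks great for the opponent, so it is scorned
    # as pointless and pursued no further:
    root = (0, [(-1, []), (0, [(3, []), (-2, [])]), (5, [])])
    print(selective_search(root))  # 1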

Champernowne (Turing's partner) mentions this about Turochamp: "We were particularly keen on the idea that whereas certain moves would be scorned as pointless and pursued no further others would be followed quite a long way down certain paths."

You can read more about the A/B/A/B algorithm shift here: https://www.chessprogramming.org/Type_B_Strategy



