On Monte-Carlo tree search for deterministic games with alternate moves and complete information

We consider a deterministic game with alternate moves and complete information, whose outcome is always the victory of one of the two opponents. We assume that this game is the realization of a random model enjoying some independence properties. We consider algorithms in the spirit of Monte-Carlo Tree Search, designed to estimate as well as possible the minimax value of a given position: they consist in simulating, successively, $n$ well-chosen matches, starting from this position. We build an algorithm which is optimal, step by step, in the following sense: once the first $n$ matches are simulated, the algorithm decides, from the statistics furnished by these first $n$ matches (and the a priori we have on the game), how to simulate the $(n+1)$-th match so that the increase of information concerning the minimax value of the position under study is maximal. This algorithm is remarkably fast. We prove that our step-by-step optimal algorithm is not globally optimal and that it always converges in a finite number of steps, even if the a priori we have on the game is completely irrelevant. We finally test our algorithm, against MCTS, on Pearl's game and, with a very simple and universal a priori, on the game Connect Four and some variants. The numerical results are rather disappointing. We however exhibit some situations in which our algorithm seems efficient.

1. Introduction

1.1. Monte-Carlo Tree Search algorithms. Monte-Carlo Tree Search (MCTS) algorithms are popular heuristic search methods for two-player games. Let us mention the book of Munos [19] and the survey paper of Browne et al. [7], which we have tried to summarize briefly here and to which we refer for a much more complete introduction to the topic.
We consider a deterministic game with complete information and alternate moves involving two players, whom we call $J_1$ and $J_0$. We think of Go, Hex, Connect Four, etc. Such a game can always be represented as a discrete tree whose nodes are the positions of the game. Indeed, even if a single position can be thought of as the child of two different positions, we can always reduce to the tree case by including the whole history of the game in the position. Also, we assume that the only possible outcomes of the game, which are represented by values on the leaves of the tree, are either 1 (if $J_1$ wins) or 0 (if $J_0$ wins). If a draw is a possible outcome, we identify it, e.g., with a victory of $J_0$.
Let r be a configuration in which J 1 has to choose between several moves. The problem we deal with is: how to select at best one of these moves with a computer and a given amount of time.
Given a huge amount of time, such a question can classically be completely solved by computing recursively, starting from the leaves, the minimax values $(R(x))_{x \in T}$, see Remark 1. Here $T$ is the tree (with root $r$ and set of leaves $L$) representing the game when starting from $r$, and for each $x \in T$, $R(x)$ is the value of the game starting from $x$. In other words, $R(x) = 1$ if $J_1$ has a winning strategy when starting from $x$ and $R(x) = 0$ otherwise. So we compute $R(x)$ for all the children $x$ of $r$ and choose a move leading to some $x$ such that $R(x) = 1$, if such a child exists.
In practice, this is not feasible, except if the game (starting from $r$) is very small. One possibility is to cut the tree at some reasonable depth $K$, to assign some estimated values to all positions of depth $K$, and to compute the resulting (approximate) minimax values on the subtree above depth $K$. For example, when playing Connect Four, one can assign to a position the value "number of remaining possible alignments for $J_1$ minus number of remaining possible alignments for $J_0$". Of course, the choice of such a value is highly debatable, and heavily depends on the game.
A more universal possibility, introduced by Abramson [1], is to use some Monte-Carlo simulations: from each position with depth $K$, we run a certain number $N$ of uniformly random matches (or matches with a simple default policy), and we evaluate this position by the number of these matches that led to a victory of $J_1$ divided by $N$. Such a procedure is now called Flat MCTS, see Coquelin and Munos [10] and Browne et al. [7], see also Ginsberg [15] and Sheppard [21].
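The Flat MCTS evaluation described above can be sketched as follows. This is our own minimal illustration, not code from the paper: a game is assumed to be given by a `children` function (empty list on leaves) and an `outcome` function on leaves, with hashable positions.

```python
import random

def random_playout(state, children, outcome, rng):
    # Play uniformly at random until a terminal position is reached;
    # return 1 if J1 wins and 0 if J0 wins.
    while children(state):
        state = rng.choice(children(state))
    return outcome(state)

def flat_mcts(root, children, outcome, n_playouts=100, seed=0):
    # Rate each child of `root` by the fraction of n_playouts
    # uniformly random matches (started from that child) won by J1.
    rng = random.Random(seed)
    return {
        child: sum(random_playout(child, children, outcome, rng)
                   for _ in range(n_playouts)) / n_playouts
        for child in children(root)
    }
```

One then plays the move leading to the child with the highest rating.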
Coulom [11] introduced the class of MCTS algorithms. Here are the main ideas: we have a default policy and a selection procedure. We make $J_1$ play against $J_0$ a certain number of times and grow a subtree of the game. Initially, the subtree $T_0$ consists of the root and its children. After $n$ steps, we have the subtree $T_n$ and some statistics $(C(x), W(x))_{x \in T_n}$ provided by the previous matches: $C(x)$ is the number of times the node $x$ has been crossed and $W(x)$ is the number of times this has led to a victory of $J_1$. Then we select a leaf $y$ of $T_n$ using the selection procedure (which relies on the statistics $(C(x), W(x))_{x \in T_n}$) and we complete the match (from $y$) by using the default policy. We then build $T_{n+1}$ by adding to $T_n$ the children of $y$, and we increment the values of $(C(x), W(x))_{x \in T_{n+1}}$ according to the outcome of the match (actually, it suffices to update these values for $x$ in the branch from $r$ to $y$ and for the children of $y$). Once the given amount of time has elapsed, we choose the move leading to the child $x$ of $r$ with the highest $W(x)/C(x)$.
Actually, this procedure throws away a lot of data: $C(x)$ is not exactly the number of times $x$ has been crossed, it is rather the number of times it has been crossed since $x$ entered $T_n$, and a similar fact holds for $W(x)$. In practice, this limits the memory used by the algorithm.
The most simple and universal default policy is to choose each move at uniform random and this is the case we will study in the present paper. It is of course more efficient to use a simplified strategy, depending on the game under study, but this is another topic.
Another important problem is to decide how to select the leaf $y$ of $T_n$. Kocsis and Szepesvári [17] proposed to use some bandit ideas, developed (and shown to be optimal, in a very weak sense, for bandit problems) by Auer et al. [3], see Bubeck and Cesa-Bianchi [5] for a survey paper. They introduced a version of MCTS called UCT (for UCB for trees, UCB meaning Upper Confidence Bounds), in which the selection procedure is as follows. We start from the root $r$ and we go down in $T_n$: when in a position $x$ where $J_1$ (resp. $J_0$) has to play, we choose the child $z$ of $x$ maximizing $W(z)/C(z) + c\sqrt{(\log n)/C(z)}$ (resp. $(C(z) - W(z))/C(z) + c\sqrt{(\log n)/C(z)}$). At some point we arrive at some leaf $y$ of $T_n$; this is the selected leaf. Here $c > 0$ is a constant to be chosen empirically. Kocsis and Szepesvári [17] proved the convergence of UCT.
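One descent step of the UCT selection rule can be sketched as follows (our own illustration; the function and parameter names are ours, and `c=1.4` is just a common default):

```python
import math

def uct_child(node, tree_children, C, W, n, c=1.4, j1_to_play=True):
    # Among the children of `node` already in the search tree, return the
    # one maximizing the upper confidence score from the point of view of
    # the player to move (J1 maximizes wins, J0 maximizes losses of J1).
    def score(z):
        if C[z] == 0:
            return float("inf")  # unvisited children are explored first
        win_rate = W[z] / C[z] if j1_to_play else (C[z] - W[z]) / C[z]
        return win_rate + c * math.sqrt(math.log(n) / C[z])
    return max(tree_children(node), key=score)
```

Iterating this step from the root $r$ until reaching a leaf of $T_n$ gives the selected leaf $y$.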
Chaslot et al. [9] have broaden the framework of MCTS. Also, they proposed different ways to select the best child (after all the computations): either the one with the highest W/C, the one with the highest C, or something intermediate. Gelly et al. [14] experimented MCTS (UCT) on Go. They built the program MoGo, which also uses some pruning procedures, and obtained impressive results on a 9 × 9 board.
The early paper of Coquelin and Munos [10] contains many results. They showed that UCT can be inefficient on some particular trees and proposed a modification taking into account some possible smoothness of the tree and outcomes (in some sense). They also studied Flat MCTS.
Lee et al. [18] studied the problem of precisely tuning the parameters of the selection process. Of course, $W/C$ means nothing if $C = 0$, so they empirically investigated what happens when using $(W + a)/(C + b) + c\sqrt{(\log n)/(C + 1)}$, for some constants $a, b, c > 0$. They concluded that $c = 0$ is often the best choice. This is not so surprising, since the logarithmic term is there to prevent large deviation events, which asymptotically do not exist for deterministic games. MoGo uses $c = 0$ and ad hoc constants $a$ and $b$. This version of MCTS is the one presented in Section 9 and the one we used to test our algorithm.
Let us mention the more recent theoretical work by Buşoniu, Munos and Páll [6], as well as the paper of Garivier, Kaufmann and Koolen [13], who study in detail a bandit model for a two-round two-player random game.
The survey paper of Browne et al. [7] discusses many tricks to improve the numerical results and, of course, all this has been adapted with very special and accurate procedures to particular games. As everybody knows, AlphaGo [22] became the first Go program to beat a human professional Go player on a full-sized board. Of course, AlphaGo is far from using only MCTS, it also relies on deep-learning and many other things.
1.2. Our goal. We would like to study MCTS when using a probabilistic model for the game. To simplify the problem as much as possible, we only consider the case where the default policy is to play uniformly at random, and we consider the modified version of MCTS described in Section 9, where we keep all the information. This may cause memory problems in practice, but we do not discuss such difficulties. The modified version is as follows, for some constants $a, b > 0$ to be fitted empirically. We make $J_1$ play against $J_0$ a certain number of times and grow a subtree of the game. Initially, the subtree $T_0$ consists of the root $r$ and its children. After $n$ steps, we have the subtree $T_n$ and some statistics $(C(x), W(x))_{x \in T_n}$ provided by the previous matches: $C(x)$ is the number of times the node $x$ has been crossed and $W(x)$ the number of times this has led to a victory of $J_1$. The $(n+1)$-th step is as follows: start from $r$ and go down in $T_n$ by following the highest values of $(W + a)/(C + b)$ (resp. $(C - W + a)/(C + b)$) if it is $J_1$'s turn to play (resp. $J_0$'s turn to play), until we arrive at some leaf $z$ of $T_n$. From there, complete the match uniformly at random until we reach a leaf $y$ of $T$. We then build $T_{n+1}$ by adding to $T_n$ the whole branch from $z$ to $y$ together with all the brothers of the elements of this branch, and we update the values of $(C(x), W(x))_{x \in T_{n+1}}$ (actually, it suffices to update these values for $x$ in the branch from $r$ to $y$). Once the given amount of time has elapsed, we choose the move leading to the child $x$ of $r$ with the highest $W(x)/C(x)$.
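The descent phase of this modified MCTS can be sketched as follows (our own illustration, not the paper's code; $J_1$ is assumed to play at even depths, as in Subsection 2.2):

```python
def select_leaf(root, tree_children, C, W, a=1.0, b=2.0):
    # Descend the explored tree T_n: at even depth J1 plays and follows the
    # highest (W + a)/(C + b); at odd depth J0 plays and follows the
    # highest (C - W + a)/(C + b). Stop at a leaf of the explored tree.
    node, depth = root, 0
    while tree_children(node):
        if depth % 2 == 0:
            node = max(tree_children(node),
                       key=lambda z: (W[z] + a) / (C[z] + b))
        else:
            node = max(tree_children(node),
                       key=lambda z: (C[z] - W[z] + a) / (C[z] + b))
        depth += 1
    return node
```

From the returned leaf $z$ of $T_n$, one completes the match uniformly at random, then updates $C$ and $W$ along the visited branch.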
As shown by Coquelin and Munos [10], one can build games for which MCTS is not very efficient. So it would be interesting to know for which class of games it is efficient. This seems to be a very difficult problem. One possibility is to study whether MCTS works well on average, i.e. when the game is chosen at random. In other words, we assume that the tree and the outcomes are the realization of a random model. We use a simple toy model enjoying some independence properties, which is far from convincing as a model of true games but for which we can carry out a complete theoretical study.
We consider a class of algorithms resembling the above-mentioned modified version of MCTS. After $n$ simulated matches, we have some information $F_n$ on the game: we have visited $n$ leaves, we know the outcomes of the game at these $n$ leaves, and we have the explored tree $T_n = B_n \cup D_n$, where $B_n$ is the set of all crossed positions, and $D_n$ is the boundary of the explored tree (roughly, $D_n$ consists of uncrossed positions whose father belongs to $B_n$).
So we can approximate R(r), which is the quantity of interest, by E[R(r)|F n ] (if the latter can be computed). Using only this information F n (and possibly some a priori on the game furnished by the model), how to select z ∈ D n so that, simulating a uniformly random match starting from z and updating the information, E[R(r)|F n+1 ] is as close as possible (in L 2 ) to R(r)?
We need a few assumptions. In words, (a) the tree and outcomes enjoy some independence properties, (b) we can compute, at least numerically, for x ∈ D n , m(x) = mean value of R(x) and s(x) = mean quantity of information that a uniformly random match starting from x will provide. See Subsection 2.8 for precise definitions.
Under such conditions, we show that $E[R(r)|F_n]$ can indeed be computed (numerically), and $z$ can be selected as desired. The procedure resembles MCTS in spirit, but is of course more complicated and requires more computations. This is extremely surprising: the computational cost of finding the best $z \in D_n$ does not increase with $n$, because this choice requires computing some values whose update does not concern the whole tree $T_n$, but only the last visited branch, as in MCTS. (Actually, we also need to update all the brothers of the last visited branch, but this remains rather reasonable.) It seems miraculous that this theoretical algorithm behaves so well. Any modification, such as taking draws into account or changing the notion of optimality, seems to lead to drastically more expensive algorithms, requiring to update some values on the whole visited tree $T_n$. We believe that this is the most interesting fact of the paper.
The resulting algorithm is explained in detail in the next section. We will prove that this algorithm converges (in a finite number of steps) even if the model is completely irrelevant. This is not very surprising, since all the leaves of the game are visited after a finite number of steps. We will also prove on an example that our algorithm is myopic: it is not globally optimal. There is a theory showing that, for a certain class of problems, a step-by-step optimal algorithm is almost globally optimal, i.e. optimal up to some reasonable factor, see Golovin and Krause [16]. However, it is unclear whether this class of problems includes ours.
1.3. Choice of the parameters. We will show, on different classes of models, how to compute the functions m and s required to implement our algorithm. We studied essentially two possibilities.
In the first one, we assume that the tree $T$ is the realization of an inhomogeneous Galton-Watson tree, with known reproduction laws, and that the outcomes of the game are realizations of i.i.d. Bernoulli random variables whose parameters depend only on the depths of the involved (terminal) positions. Then $m(x)$ and $s(x)$ depend only on the depth of the node $x$. We can compute them numerically once and for all, using rough statistics we have on the true game we want to play (e.g. Connect Four), obtained by running a high number of uniformly random matches.
The second possibility is much more universal and adaptive and works better in practice. At the beginning, we prescribe that $m(r) = a$, for some fixed $a \in (0, 1)$. Then each time a new litter $\{y_1, \dots, y_d\}$ (with father $x$) is created by the algorithm, we set $m(y_1) = \cdots = m(y_d) = m(x)^{1/d}$ if $x$ is a position where it is $J_0$'s turn to play, and $m(y_1) = \cdots = m(y_d) = 1 - (1 - m(x))^{1/d}$ otherwise. Observe here that $d$ is discovered by the algorithm at the same time as the new litter. Concerning $s$, we have different possibilities (see Subsection 2.14 and Section 6), more or less justified from a theoretical point of view, among which $s(x) = 1$ for all $x$ does not seem to be too bad.
The first possibility seems more realistic but works less well than the second one in practice and requires some preliminary fitting. The second possibility relies on a symmetry modeling consideration: assuming that all the individuals of the new litter behave similarly necessarily leads to such a function m. This is much more universal in that once the value of a = m(r) is fixed (actually, a = 0.5 does not seem to be worse than another value), everything can be computed in a way not depending on the true game. Of course, the choice of a = m(r) is debatable, but does not seem to be very important in practice.
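The adaptive prior of the second possibility is easily implemented. The following sketch (our own code, with hypothetical names) computes the value $m(y_i)$ assigned to each child of a newly revealed litter of size $d$:

```python
def litter_prior(m_parent, d, j0_to_play):
    # Prior probability of a J1 win assigned to each child of a newly
    # revealed litter of size d, under the symmetry consideration of
    # Subsection 1.3 (all children of the litter behave similarly).
    if j0_to_play:
        # J0 plays at the father: R(father) is the min over the children,
        # so m_parent = m_child ** d, hence m_child = m_parent ** (1/d).
        return m_parent ** (1.0 / d)
    # J1 plays at the father: R(father) is the max over the children,
    # so 1 - m_parent = (1 - m_child) ** d.
    return 1.0 - (1.0 - m_parent) ** (1.0 / d)
```

Starting from $m(r) = a$ (e.g. $a = 0.5$), this rule propagates down the explored tree with no dependence on the particular game.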

1.4. Comments and a few more references. Our class of models greatly generalizes the Pearl model [20], which simply consists of a regular tree with degree $d$, depth $K$, with i.i.d. Bernoulli($p$) outcomes on the leaves. However, we still assume a lot of independence. This is not fully realistic and is probably the main reason why our numerical experiments are rather disappointing.
The model proposed by Devroye and Kamoun [12] seems much more relevant, as they introduce correlations between the outcomes as follows. They consider that each edge of the tree has a value (e.g. Gaussian). The value of a leaf is then 1 if the sum of the values along the corresponding branch is positive, and 0 otherwise. This more or less models the fact that a player builds his game little by little. But from a theoretical point of view, it is very difficult to study, because we only observe the values of the leaves, not of the edges. So this unfortunately falls completely out of our scope. See also Coquelin and Munos [10] for a notion of smoothness of the tree, which seems rather relevant from a modeling point of view.
Finally, our approach is often called Bayesian, because we have an a priori law for the game. This has already been studied in the artificial intelligence literature. See Baum and Smith [4] and Tesauro, Rajan and Segal [24]. Both introduce some conditional expectations of the minimax values and formulas resembling (2) already appear.
1.5. Pruning. Our algorithm automatically proceeds to some pruning, like AlphaBeta, which is a clever algorithm to compute exactly the minimax values of a (small) game. This can be understood by reading the proof of Proposition 15. The basic idea is as follows: if we know from the information we have that $R(x) = 0$ for some internal node $x \in T$ and if the father $v$ of $x$ is a position where $J_0$ plays, then there is no need to study the brothers of $x$, because $R(v) = 0$. And indeed, our algorithm will never visit these brothers.
Some versions of MCTS with some additional pruning procedures have already been introduced empirically. See Gelly et al. [14] for MoGo, as well as many other references in [7]. It seems rather nice that our algorithm automatically prunes and this holds even if the a priori we have on the game is completely irrelevant. Of course, if playing a large game, this pruning will occur only near the leaves.

1.6. Numerical experiments. We have tested our general algorithm against some general versions of MCTS. We would not stand the least chance against versions of MCTS modified to take into account some symmetries of a particular game, with a more clever default policy, etc. But we hope our algorithm might also be adapted to particular games.
Next, let us mention that our algorithm is subject to some numerical problems, in that we have to compute many products, which often lead numerically to 0 or 1. We overcome such problems by using logarithms, which complicates and slows down the program.
Let us now briefly summarize the results of our experiments, see Section 8.
We empirically observed on various games that, very roughly, each iteration of our algorithm requires between 2 and 4 times more computational time than MCTS.
When playing Pearl's games, our algorithm seems rather competitive against MCTS (with a given amount of time per move), which is not very surprising, since our algorithm is precisely designed for such games.
We also tried to play various versions of Connect Four. Globally, we are clearly beaten by MCTS. However, there are two situations where we win.
The first one is when the game is so large, or the amount of time so small, that very few iterations can be performed by the challengers. This is quite natural, because (a) our algorithm is only optimal step by step, (b) it relies on some independence properties that are less and less true when performing more and more iterations.
The second one is when the game is so small that we can hope to find the winning strategy at the first move, and where our algorithm finds it before MCTS. We believe this is due to the automatic pruning.

1.7. Comparison with AlphaBeta. Assume that $T$ is a finite balanced tree, i.e. that all the nodes with the same depth have the same degree, and that we have some i.i.d. outcomes on the leaves. This slightly generalizes Pearl's game. Then Tarsi [23] showed that AlphaBeta is optimal in the sense of the expected number of leaves necessary to determine $R(r)$ exactly. For such a game, Bruno Scherrer told us that our algorithm visits the leaves precisely in the same order as AlphaBeta, up to some random permutation (see also Subsection 8.11 for a rather convincing numerical indication in this direction). The advantage is that we provide an estimated value during the whole process, while AlphaBeta produces nothing before it really finds $R(r)$.
This strict similarity with AlphaBeta does not hold generally, because our algorithm takes into account the degrees of the nodes. On the one hand, a player is happy to find a node with more possible moves than expected. On the other hand, the value of such a node may be difficult to determine. So the way our algorithm takes degrees into account is complicated and not very transparent.
1.8. Organization of the paper. In the next section, we precisely state our main results and describe our algorithm. Section 3 is devoted to the convergence proof. In Sections 4 and 5, we establish our main result. Section 6 is devoted to the computation of the functions m and s for particular models. In Section 7, we show on an example that global optimality fails. We present numerical experiments in Section 8. In Section 9, we precisely describe the versions of MCTS and its variant we used to test our algorithms.

2. Notation and main results
2.1. Notation. We first introduce once for all the whole notation we need concerning trees.
Let $\mathbf{T}$ be the complete discrete ordered tree with root $r$ and infinite degree. An element of $\mathbf{T}$ is a finite word composed of letters in $\mathbb{N}^*$. The root $r$ is the empty word. If e.g. $x = n_1 n_2 n_3$, this means that $x$ is the $n_3$-th child of the $n_2$-th child of the $n_1$-th child of the root. We consider ordered trees to simplify the presentation, but the order will not play any role. The depth (or generation) $|x|$ of $x \in \mathbf{T}$ is the number of letters of $x$. In particular, $|r| = 0$.
We say that y ∈ T is the father of x ∈ T \ {r} (or that x is a child of y) if there is n ∈ N * such that x = yn. We denote by f (x) the father of x.
For x ∈ T, we call C x = {y ∈ T : f (y) = x} the set of all the children of x and T x the whole progeny of x: T x is the subtree of T composed of x, its children, its grandchildren, etc.
We say that x, y ∈ T are brothers if they are different and have the same father. For x ∈ T \ {r}, we denote by H x = C f (x) \ {x} the set of all the brothers of x. Of course, H r = ∅.
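These conventions are easy to manipulate in code. The following sketch (our own illustration, not code from the paper) encodes words as tuples of positive integers, with the root $r$ as the empty tuple:

```python
def father(x):
    # f(x): drop the last letter of the word; the root () has no father.
    assert x, "the root r has no father"
    return x[:-1]

def child(x, n):
    # The n-th child of x (n >= 1): append the letter n.
    return x + (n,)

def depth(x):
    # |x| = number of letters; in particular the root () has depth 0.
    return len(x)

def brothers(x, father_degree):
    # H_x: the other children of f(x), given the degree of the father.
    return [father(x) + (n,)
            for n in range(1, father_degree + 1) if n != x[-1]]
```

For example, the node $x = 1\,2$ is the second child of the first child of the root.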
For $x \in \mathbf{T}$ and $y \in \mathbf{T}_x$, we denote by $B_{xy}$ the branch from $x$ to $y$. In other words, $z \in B_{xy}$ if and only if $z \in \mathbf{T}_x$ and $y \in \mathbf{T}_z$.
For $x \in \mathbf{T}$, we also introduce the set $K_x$, which consists of $x$, its brothers, its father and uncles, its grandfather and granduncles, etc.
For a finite subset $\mathbf{x} \subset \mathbf{T}$, we introduce $B_{\mathbf{x}} = \cup_{x \in \mathbf{x}} B_{rx}$, the finite subtree of $\mathbf{T}$ with root $r$ and set of leaves $\mathbf{x}$.
Let S f be the set of all finite subtrees of T with root r. For T a finite subset of T, it holds that T ∈ S f if and only if for all x ∈ T , B rx ⊂ T .
For T ∈ S f and x ∈ T , we introduce C T x = T ∩ C x the set of the children of x in T , H T x = T ∩ H x the set of the brothers of x in T , T x = T ∩ T x the whole progeny of x in T , and K T x = T ∩ K x which contains x, its brothers (in T ), its father and uncles (in T ), its grandfather and granduncles (in T ), etc. See Figure 1.
We denote by L T = {x ∈ T : T x = {x}} the set of the leaves of T ∈ S f . We have B L T = T .
Finally, for $T \in S_f$ and $\mathbf{x} \subset T$, we introduce $D^T_{\mathbf{x}} = T \cap D_{\mathbf{x}}$, the set of all the brothers (in $T$) of the elements of $B_{\mathbf{x}}$ not belonging to $B_{\mathbf{x}}$ (observe that $B_{\mathbf{x}} \subset T$), see Figure 2.

2.2. The general model. We have two players $J_0$ and $J_1$. The game is modeled by a finite tree $T \in S_f$ with root $r$ and leaves $L = L_T$. An element $x \in T$ represents a configuration of the game. On each node $x \in T \setminus L$, we set $t(x) = 1$ if it is $J_1$'s turn to play when in the configuration $x$ and $t(x) = 0$ otherwise. Moves alternate and player $J_1$ starts, so that we have $t(x) = \mathbf{1}_{\{|x| \text{ is even}\}}$, where $|x|$ is the depth of $x$. We have some outcomes $(R(x))_{x \in L}$ in $\{0, 1\}$. We say that $x \in L$ is a winning outcome for $J_1$ (resp. $J_0$) if $R(x) = 1$ (resp. $R(x) = 0$).
So $J_1$ starts: he chooses a node $x_1$ among the children of $r$, then $J_0$ chooses a node $x_2$ among the children of $x_1$, and so on, until we reach a leaf $y \in L$; $J_1$ is the winner if $R(y) = 1$, while $J_0$ is the winner if $R(y) = 0$.
2.4. Minimax values. Given the whole tree T and the outcomes (R(x)) x∈L , we can theoretically completely solve the game. We classically define the minimax values (R(x)) x∈T as follows. For any x ∈ T , R(x) = 1 if J 1 has a winning strategy when starting from x and R(x) = 0 else (in which case J 0 necessarily has a winning strategy when starting from x).
It is thus possible to compute R(x) for all x ∈ T by backward induction, starting from the leaves.
This is easily checked: for $x \in T$ with $t(x) = 0$, we have $R(x) = 0$ if $x$ has at least one child $y \in T$ such that $R(y) = 0$ (because $J_0$ can choose $y$, from which $J_1$ has no winning strategy) and $R(x) = 1$ otherwise (because any choice of $J_0$ leads to a position $y$ from which $J_1$ has a winning strategy). This can be rewritten as $R(x) = \min\{R(y) : y \in C_x\}$.
If now $t(x) = 1$, then $R(x) = 1$ if $x$ has at least one child $y \in T$ such that $R(y) = 1$ (because $J_1$ can choose $y$, from which he has a winning strategy) and $R(x) = 0$ otherwise (because any choice of $J_1$ leads to a position from which he has no winning strategy). This can be rewritten as $R(x) = \max\{R(y) : y \in C_x\}$.
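The backward induction above can be sketched in a few lines (our own illustration; `children` and `outcome` are assumed caller-supplied, and `depth` is the depth of `x` relative to the root, so that even depths belong to $J_1$):

```python
def minimax(x, children, outcome, depth=0):
    # R(x) by backward induction: on a leaf, R(x) is the outcome;
    # otherwise R(x) is the max over the children when J1 plays
    # (even depth) and the min when J0 plays (odd depth).
    kids = children(x)
    if not kids:
        return outcome(x)
    values = [minimax(y, children, outcome, depth + 1) for y in kids]
    return max(values) if depth % 2 == 0 else min(values)
```

Of course, as explained in Subsection 2.5, this is feasible only for very small games.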
2.5. The goal. Our goal is to estimate at best R(r) with a computer and a given amount of time.
In practice, we (say, J 1 ) are playing at some true game such as Connect Four or any deterministic game with alternate moves, against a true opponent (say, J 0 ). As already mentioned in the introduction, we may always consider that such a game is represented by a tree, and we may remove draws by identifying them to victories of J 0 (or of J 1 ).
We are in some given configuration (after a certain number of true moves of both players). We call this configuration r. We have to decide between several possibilities. We thus want to estimate at best from which of these possibilities there is a winning strategy for J 1 . In other words, given a position r (which will be the root of our tree), we want to know at best R(r) = max{R(y) : y ∈ C r }: if our estimate suggests that R(r) = 0, then any move is similarly desperate. If our estimate suggests that R(r) = 1, this necessarily relies on the fact that we think that some (identified) child y 0 of r satisfies R(y 0 ) = 1. Then in the true game, we will play y 0 .
Of course, except for very small games, it is not possible in practice to compute R(r) as in Remark 1, because the tree is too large.
The computer knows nothing about the true game except the rules: when it sees a position (node) $x \in T$, it is able to decide whether $x$ is a terminal position (i.e. $x \in L$) or not; if $x$ is a terminal position, it knows the outcome (i.e. $R(x)$); if $x$ is not a terminal position, it knows whose turn it is to play (i.e. $t(x)$) and the possible moves (i.e. $C_x$).
The true game is deterministic, and our study does not apply at all to games of chance such as Backgammon (because games of chance are more difficult to represent as trees, their minimax values are not clearly well-defined, etc.). However, the true game is very large and, in some sense, unknown, so one might hope it resembles the realization of a random model. We will thus assume that $T$, as well as the outcomes $(R(x))_{x \in L}$, are given by the realization of some random model satisfying some independence properties. It is not clear whether such an assumption is reasonable. In any case, our theoretical results completely break down without such a condition.
2.6. A class of algorithms. We consider a large class of algorithms resembling the Monte Carlo Tree Search algorithm, of which a version is recalled in detail in the Appendix. The idea is to make $J_1$ play against $J_0$ a large number of times: the first match is completely random, but then we use the statistics of the preceding matches. MCTS rather makes $J_1$ and $J_0$ play the moves that often led them to victories. At the end, this provides some ratings for the children of $r$. In the true game, against the true opponent, we then play the move leading to the child of $r$ with the highest rating.

Definition 2.
We call a uniformly random match starting from $x \in T$, with $y \in L$ as resulting leaf, the following procedure. Put $y_0 = x$. If $y_0 \in L$, set $y = y_0$. Else, choose $y_1$ uniformly among $C_{y_0}$. If $y_1 \in L$, set $y = y_1$. Else, choose $y_2$ uniformly among $C_{y_1}$. If $y_2 \in L$, set $y = y_2$. Etc. Since $T$ is finite, this procedure always terminates.
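Definition 2 can be sketched as follows (our own illustration; we return the whole visited path, whose last element is the resulting leaf $y$):

```python
import random

def uniformly_random_match(x, children, rng=None):
    # Starting from x, repeatedly choose a uniformly random child until a
    # leaf of T is reached. Since T is finite, the loop always terminates.
    rng = rng or random.Random(0)
    path = [x]
    while children(path[-1]):
        path.append(rng.choice(children(path[-1])))
    return path
```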
The class of algorithms we consider is the following.

Definition 3.
An admissible algorithm is a procedure of the following form.
Step 1. Simulate a uniformly random match from $r$, call $x_1$ the resulting leaf and set $\mathbf{x}_1 = \{x_1\}$.
Step n+1. Using only the knowledge of $B_{\mathbf{x}_n}$, $D_{\mathbf{x}_n}$ and $(R(x))_{x \in \mathbf{x}_n}$, choose some (possibly randomized) $z_n \in D_{\mathbf{x}_n} \cup \mathbf{x}_n$. If $z_n \in D_{\mathbf{x}_n}$, simulate a uniformly random match starting from $z_n$, call $x_{n+1}$ the resulting leaf and set $\mathbf{x}_{n+1} = \mathbf{x}_n \cup \{x_{n+1}\}$.
Conclusion. Stop after a given number of iterations $n_0$ (or after a given amount of time). Choose some (possibly randomized) best child $x^*$ of $r$, using only the knowledge of $B_{\mathbf{x}_{n_0}}$, $D_{\mathbf{x}_{n_0}}$ and $(R(x))_{x \in \mathbf{x}_{n_0}}$.
After n matches, B xn represents the explored part of T , while D xn represents its boundary.
Note that we assume that we know $D_{\mathbf{x}_n}$ (the set of all the brothers, in $T$, of the elements of $B_{\mathbf{x}_n}$ that are not in $B_{\mathbf{x}_n}$) after the simulation of some matches leading to the set of leaves $\mathbf{x}_n$. This is motivated by the following reason. Any $y \in D_{\mathbf{x}_n}$ has its father in $B_{\mathbf{x}_n}$. Thus at some point of the simulation, we visited $f(y)$ for the first time and we had to decide (at random) between all its children: we are therefore aware of the fact that $y \in T$.
Also note that we assume that the best thing to do, when visiting a position for the first time (i.e. when arriving at some element of $D_{\mathbf{x}_n}$), is to simulate from there a uniformly random match. This models the fact that we know nothing of the game, except the rules.
The randomization will allow us, in practice, to make some uniform choice in case of equality.
Finally, it seems stupid to allow x n+1 to belong to x n , because this means we will simulate precisely a match we have already simulated: this will not give us some new information. But this avoids many useless discussions. Anyway, a good algorithm will always, or almost always, exclude such a possibility.

Remark 4. (i) In Step n+1, by "using only the knowledge of $B_{\mathbf{x}_n}$, $D_{\mathbf{x}_n}$ and $(R(x))_{x \in \mathbf{x}_n}$, choose some (possibly randomized) $z_n \in D_{\mathbf{x}_n} \cup \mathbf{x}_n$", we mean that $z_n = F(w)$, where $w = (B_{\mathbf{x}_n}, D_{\mathbf{x}_n}, (R(x))_{x \in \mathbf{x}_n}, U)$, where $U$ (uniformly distributed on $[0, 1]$) is independent of everything else and where $F$ is a deterministic application from the set $A$ of all such quadruples to $\mathbf{T}$ such that, for $w \in A$ as above, $F(w) \in D_{\mathbf{x}_n} \cup \mathbf{x}_n$.
(ii) In Conclusion, by "choose some (possibly randomized) best child $x^*$ of $r$, using only the knowledge of $B = B_{\mathbf{x}_{n_0}}$, $D = D_{\mathbf{x}_{n_0}}$ and $(R(x))_{x \in \mathbf{x}_{n_0}}$", we mean that $x^* = G(w)$, where $w = (B, D, (R(x))_{x \in \mathbf{x}_{n_0}}, U)$, where $U$ (uniformly distributed on $[0, 1]$) is independent of everything else and where $G$ is a deterministic application from $A$ to $\mathbf{T}$ such that, for $w \in A$ as above, $G(w) \in C^{B \cup D}_r$.
(iii) The two applications F, G completely characterize an admissible algorithm.

2.7. Assumption. Except for the convergence of our class of algorithms, whose proof is purely deterministic, we will always suppose at least the following condition.
Assumption 5. The game tree, denoted here by $\mathcal{T}$, is a random element of $S_f$. We denote by $L = L_{\mathcal{T}}$ its set of leaves. We assume that for any $T \in S_f$ with leaves $L_T$, the family $((\mathcal{T}_x, (R(y))_{y \in L \cap \mathcal{T}_x}), x \in L_T)$ is independent conditionally on $A_T = \{T \subset \mathcal{T} \text{ and } D_{L_T} = \emptyset\}$, as soon as $\Pr(A_T) > 0$.

This condition is a branching property: knowing $A_T$, i.e. knowing that $T \subset \mathcal{T}$ and that all the brothers (in $\mathcal{T}$) of the elements of $T$ belong to $T$, we can write $\mathcal{T} = T \cup \bigcup_{x \in L_T} \mathcal{T}_x$, and the family $((\mathcal{T}_x, (R(y))_{y \in L \cap \mathcal{T}_x}), x \in L_T)$ is independent. A first consequence is as follows.
Remark 6. Suppose Assumption 5. For T ∈ S f such that Pr(A T ) > 0 and z ∈ L T , we denote by G T,z the law of (T z , (R(y)) y∈L∩Tz ) conditionally on A T . We have G T,z = G K T z ,z .
Assumption 5 is of course satisfied if T is deterministic and if the family (R(x)) x∈L is independent. It also holds true if T is an inhomogeneous Galton-Watson tree and if, conditionally on T , the family (R(x)) x∈L is independent and (for example) R(x) is Bernoulli with some parameter depending only on the depth |x|. But there are many other possibilities, see Section 6 for precise examples of models satisfying Assumption 5.
2.8. Two relevant quantities. Here we introduce two mean quantities necessary to our study.
Definition 7. Suppose Assumption 5. Let T ∈ S f such that Pr(A T ) > 0, and z ∈ L T . Observe that on A T , z ∈ T .
(i) We set m(T, z) = Pr(R(z) = 1|A T ).
(ii) Simulate a uniformly random match starting from z, denote by y the resulting leaf. We put K zy = K y ∩ T z and introduce G = σ(y, K zy , R(y)). We set s(T, z) = E[(Pr(R(z) = 1|G ∨ σ(A T )) − m(T, z)) 2 |A T ].
We will see in Section 6 that for some particular classes of models, m and s can be computed.
Recall that our goal is to produce some admissible algorithm. Assume we have explored n leaves x 1 , . . . , x n and set x n = {x 1 , . . . , x n }. Recall that B xn is the explored tree and that D xn is, in some sense, its boundary. For z ∈ x n , we perfectly know R(z). But for z ∈ D xn , we only know that z ∈ T and, more precisely, we know K z . Thus the best thing we can say is that R(z) equals 1 with (conditional) probability m(K z , z). Also, s(K z , z) quantifies some mean amount of information we will get if handling a uniformly random match starting from z.
2.9. The conditional minimax values. From now on, we work with the following setting.
Setting 8. Fix n ≥ 1. Using an admissible algorithm, we have simulated n matches, leading to the leaves x n = {x 1 , . . . , x n } ⊂ L. Hence the σ-field F n = σ(x n , D xn , (R(x)) x∈xn ) represents our knowledge of the game. Observe that B xn is of course F n -measurable, as well as, for any x ∈ B xn ∪ D xn , the set K x . This last assertion easily follows from the fact that for any x ⊂ T , any element of B x ∪ D x has all its brothers (in T ) in B x ∪ D x .
We first want to compute R n (r) = E[R(r)|F n ] = Pr(R(r) = 1|F n ), which is, in some obvious sense, the best approximation of R(r) knowing F n . Of course, we will have to compute R n (x) on the whole explored subtree of T . We will check the following result in Section 5.
Proposition 9. Grant Assumption 5 and Setting 8. For all x ∈ B xn ∪ D xn , define R n (x) = Pr(R(x) = 1|F n ). These quantities can be computed by backward induction, starting from x n ∪ D xn , as follows: R n (x) = R(x) for x ∈ x n , R n (x) = m(K x , x) for x ∈ D xn and, for x ∈ B xn \ x n , R n (x) = ∏ y∈Cx R n (y) if t(x) = 0 and R n (x) = 1 − ∏ y∈Cx (1 − R n (y)) if t(x) = 1.
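The backward induction of Proposition 9 can be sketched in Python as follows, on a tree representation of our choosing: an explored leaf carries its observed value R(x), a boundary node carries m(K x , x), and an internal node carries t(x) and its children. The product formulas rely on the conditional independence of the children's subtrees.

```python
def backward_Rn(node):
    """Compute R_n by backward induction on the explored tree.

    A node is a dict with either:
      - "R": known outcome (explored leaf, x in x_n),
      - "m": prior value m(K_x, x) (boundary node, x in D_{x_n}), or
      - "t" and "children": an internal node, with t = 1 if J1 (who
        maximizes) moves there and t = 0 if J0 (who minimizes) moves.
    Returns R_n(node) = Pr(R(node) = 1 | F_n).
    """
    if "R" in node:
        return float(node["R"])
    if "m" in node:
        return node["m"]
    vals = [backward_Rn(c) for c in node["children"]]
    prod = 1.0
    if node["t"] == 0:  # min node: R = min of children, all must equal 1
        for v in vals:
            prod *= v
        return prod
    for v in vals:      # max node: R = max of children, 1 - prod(1 - v)
        prod *= 1.0 - v
    return 1.0 - prod
```

For instance, a min node with two unexplored children of prior value 1/2 gets R n = 1/4, while a max node with one such child and one explored losing leaf gets R n = 1/2.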

2.10. Main result. We still work under Setting 8. We want to simulate a (n + 1)-th match. We thus want to choose some z ∈ D xn and then simulate a uniformly random match starting from z, in such a way that the increase of information concerning R(r) is as large as possible. We unfortunately need a little more notation.
(iii) Fix z ∈ D xn , handle a uniformly random match starting from z, with resulting leaf y z , and set F z n+1 = F n ∨ σ(y z , K zy z , R(y z )) and R z n+1 (x) = Pr(R(x) = 1|F z n+1 ) for all x ∈ B xn ∪ D xn . Our main result reads as follows.
Hence on the event {R n (r) ∈ {0, 1}}, we perfectly know R(r) from F n and thus the (n + 1)-th match is useless.
When R n (r) / ∈ {0, 1}, we have the knowledge F n , and the theorem tells us how to choose z * ∈ D xn such that, after a uniformly random match starting from z * , we will estimate R(r) at best, in some L 2 -sense. In words, z * can be found by starting from the root and getting down in the tree B xn ∪ D xn by following the maximum values of U 2 n Z n , until we arrive in D xn . As noted by Bruno Scherrer, z * is also optimal if using an L 1 -criterion.
Remark 12. With the assumptions and notation of Theorem 11, a similar identity holds for the L 1 -criterion. This is easily deduced from (5), noting that conditionally on F z n+1 (which contains F n ), R(r) is Bernoulli(R z n+1 (r))-distributed, and that E[|X − p|] = 2p(1 − p) for X ∼ Bernoulli(p).

2.11. The algorithm. The resulting algorithm is as follows.
Algorithm 13. Each time we use argmax, we e.g. choose at uniform random in case of equality.
Step 1. Simulate a uniformly random match from r, call x 1 the resulting leaf and set x 1 = {x 1 }.
Step n+1. Put z = r. Do z = argmax{(U n (y)) 2 Z n (y) : y ∈ C z } until z ∈ x n ∪ D xn . Set z n = z.

During this random match, keep track of
If R n+1 (r) ∈ {0, 1}, go directly to the conclusion.
Conclusion. Stop after a given number of iterations n 0 (or after a given amount of time). As best child of r, choose x * = argmax{R n0 (x) : x ∈ C r }.
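The selection step of Algorithm 13 (the descent from r following the maximum values of (U n (y)) 2 Z n (y), with uniform tie-breaking) can be sketched as follows. The update rules for U n and Z n are part of the algorithm and are not reproduced here; this sketch simply takes the current statistics as inputs.

```python
import random

def select_leaf(root, children, U, Z, boundary, rng=random):
    """One descent of Algorithm 13, Step n+1.

    Starting from `root`, repeatedly move to the child y maximizing
    U[y]**2 * Z[y] (ties broken uniformly at random, as in the paper)
    until a node of `boundary` (the set x_n union D_{x_n}) is reached.
    `children` maps each internal node to the list of its children;
    U and Z map nodes to their current statistics U_n, Z_n.
    """
    z = root
    while z not in boundary:
        kids = children[z]
        best = max(U[y] ** 2 * Z[y] for y in kids)
        z = rng.choice([y for y in kids if U[y] ** 2 * Z[y] == best])
    return z
```

The returned node z n is then the starting point of the (n + 1)-th uniformly random match (unless it is an already explored leaf, in which case R n (r) ∈ {0, 1} by Proposition 15 and the algorithm stops).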

2.12. The update is rather quick. For e.g. a (deterministic) regular tree T with degree d and depth K, the cost to achieve n steps of the above algorithm is of order nKd, because at each step, we have to update the values of R n (x) and Z n (x) for x ∈ B rxn (which concerns K nodes) and the values of U n (x) for x ∈ K xn+1 (which concerns Kd nodes).
Observe that MCTS algorithms (see Sections 1.1 and 3.1) enjoy a cost of order Kn, since the updates are done only on the branch B rxn (or even less than that, but in any case we have at least to simulate a random match, of which the cost is proportional to K, at each step).
The cost in Kdn for Algorithm 13 seems miraculous. We have found no deep reason, only computation, explaining why this theoretically optimal (in a loose sense) algorithm behaves so well. It would have been more natural, see Remark 22, that the update would concern the whole explored tree B xn ∪ D xn , which contains much more than Kd elements. Very roughly, its cardinality is of order Kdn, which would lead to a cost of order Kdn 2 to achieve n steps.
We did not write it down in the present paper, which is technical enough, but we also studied two variations of Theorem 11.
• First, we considered the very same model, but we tried to minimize Pr(1 {R z n+1 (r)>1/2} ≠ R(r)|F n ) instead of (5). This is more natural since, in practice, one would rather estimate R(r) by 1 {Rn(r)>1/2} than by R n (r) (because R(r) takes values in {0, 1}). It is possible to extend our theory, but this leads to an algorithm with a cost of order Kdn 2 (at least, we found no way to reduce this cost).
• We also studied what happens in the case where the game may lead to a draw. Then the outcomes (A(x)) x∈L can take three values, 0 (if J 0 wins), 1 (if J 1 wins) and 1/2 (if the issue is a draw). For any x ∈ T , we can define the minimax rating A(x) as 0 (if J 0 has a winning strategy), 1 (if J 1 has a winning strategy) and 1/2 (else). The family (A(x)) x∈T satisfies the backward induction relation (1). A possibility is to identify a draw with a victory of J 0 (or of J 1 ). Then, under Assumption 5 with R(x) = 1 {A(x)=1} , we can apply Theorem 11 directly. However, this leads to an algorithm that tries to find a winning move, but gives up if it thinks it cannot win: the algorithm does not make any difference between a loss and a draw. It is possible to adapt our theory to overcome this defect by trying to estimate both R(x) = 1 {A(x)=1} and R̃(x) = 1 {A(x)≥1/2} , with respective weights a, b ≥ 0. However, this leads, again, to an algorithm of which the cost is of order Kdn 2 , unless b = 0 (or a = 0), which means that we identify a draw with a victory of J 0 (or of J 1 ). Technically, this comes from the fact that in such a framework, nothing like Observation (10) occurs, see also Remark 22. In practice, one can produce an algorithm taking draws into account as follows: at each step, we compute (R n (x), Z n (x)) x∈Bx n ∪Dx n identifying draws with victories of J 0 and, with obvious notation, (R̃ n (x), Z̃ n (x)) x∈Bx n ∪Dx n identifying draws with victories of J 1 . We use our algorithm with (R n (x), Z n (x)) x∈Bx n ∪Dx n while R n (r) is not too small, and we then switch to (R̃ n (x), Z̃ n (x)) x∈Bx n ∪Dx n . We have no clear idea of how to choose the threshold.
Finally, the situation is even worse for games with a large (possibly infinite) number of game values (representing the gain of J 1 ). This could for example be modeled by independent Beta priors on the leaves. As a first crippling difficulty, Beta laws are not stable under maximum and minimum.
2.13. Convergence. It is not difficult to check that, even with a completely wrong model, Algorithm 13 always converges in a finite (but likely to be very large) number of steps. Let us also observe, as stated in Remark 14 (ii), that for any constant λ > 0, the algorithm using the functions m and λs is precisely the same as the one using m and s.
Note that we allow s to be larger than 1, which is never the case from a theoretical point of view. But in view of (ii), it is very natural. We will prove the following result in Section 3.
Let us emphasize that this proposition assumes nothing but the fact that T is finite. Assumption 5 is absolutely not necessary here. The condition on m and s is very general and obviously satisfied if e.g. m is (0, 1)-valued and s is (0, ∞)-valued.
Once a sufficiently large part of the tree is explored (actually, almost all the tree up to some pruning), the algorithm knows perfectly the minimax value of r, even if m and s are meaningless. Thus, the structure of the algorithm looks rather nice. In practice, for a large game, the algorithm will never be able to explore such a large part of the tree, so that the choice of the functions m and s is very important. However, Proposition 15 is reassuring: we hope that even if the modeling is approximate, so that the choices of m and s are not completely convincing, the algorithm might still behave well.
2.14. Practical choice of the functions m and s. In Section 6, we will describe a few models satisfying our conditions and for which we can compute the functions m and s. As seen in Definition 7 (see also Algorithm 13), it suffices to be able to compute m(K x , x) and s(K x , x) for all x ∈ T (actually, for all x in the boundary D xn of the explored tree). Let us summarize the two main examples.
(i) First, assume that T is an inhomogeneous Galton-Watson tree, with maximum depth K and known reproduction laws, and that conditionally on T , the outcomes (R(x)) x∈L are independent Bernoulli random variables with parameters depending only on the depths of the involved nodes. Then we will show that the functions m(K x , x) and s(K x , x) depend only on the depth of x and can be computed numerically, once and for all, from the parameters of the model. See Subsection 6.1 for precise statements. Let us mention that the parameters of the model can be fitted to some real game such as Connect Four (even if it is not clear at all that this model is reasonable) by simulating a large number of uniformly random matches, see Subsection 8.2 for a few more explanations.
(ii) Second, assume that T is some given random tree to be specified later. Fix some values a ∈ (0, 1) and b ∈ R. Define m({r}, r) = a, s({r}, r) = 1 (or any other positive constant, see Remark 14) and, recursively, for all x ∈ T , define m(K x , x) and s(K x , x). The formula for m is a rather well-justified modeling symmetry assumption and we can treat the following cases, see Subsection 6.3 for more details. (ii)-(a) If we consider Pearl's model [20], i.e. T is the deterministic d-regular tree with depth K and the outcomes are i.i.d. Bernoulli random variables with parameter p (explicit as a function of a, d, K), then the above formula for s with b = 2 is theoretically justified, see Remark 27.
(ii)-(b) Assume next that T is a finite homogeneous Galton-Watson tree with reproduction law (1−p)δ 0 +pδ d (with pd ≤ 1) and that conditionally on T , the outcomes (R(x)) x∈L are independent Bernoulli random variables with parameters (m(K x , x)) x∈L (that do not need to be computed). Then if a = a 0 is well-chosen (as a function of p and d), the above formula for s with b = 0 (i.e. s ≡ 1) is theoretically justified, see Remark 29.
We also experimented, without theoretical justification, with other values of b. But we met with so little success in this direction that we will not present the corresponding numerical results.
Let us mention that while (i) requires fitting the functions m and s precisely, using rough statistics on the true game, (ii) is rather universal. In particular, it seems that the choice a = 0.5 and b = 0 works quite well in practice, and this is very satisfying. Also, the implementation is very simple, since each time a new node x is visited by the algorithm, we can compute m(K x , x) from m(K f (x) , f (x)) and the number of children |C f (x) | of f (x).
Finally observe that for any tree T and any choices of a ∈ (0, 1) and b ∈ R, m is (0, 1)-valued and s is (0, ∞)-valued, so that Proposition 15 applies: the algorithm always converges in a finite number of steps.
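To fit the Galton-Watson model of (i) to a real game, the natural route suggested above is to estimate the depth statistics from a large number of uniformly random matches. A possible sketch, over a hypothetical game interface (the functions legal_moves, play and outcome below are ours, not the paper's):

```python
import random
from collections import defaultdict

def fit_parameters(initial_state, legal_moves, play, outcome, n_matches, rng):
    """Estimate depth-dependent branching statistics and terminal-win
    frequencies from uniformly random matches, in the spirit of fitting
    the Galton-Watson model of (i) to a real game.

    Returns, per depth k, the empirical mean number of children and the
    empirical frequency q_k of R = 1 among terminal positions reached
    at depth k."""
    degrees = defaultdict(list)   # depth -> observed branching factors
    wins = defaultdict(list)      # depth -> outcomes of terminal positions
    for _ in range(n_matches):
        state, depth = initial_state, 0
        moves = legal_moves(state)
        while moves:
            degrees[depth].append(len(moves))
            state = play(state, rng.choice(moves))
            depth += 1
            moves = legal_moves(state)
        wins[depth].append(outcome(state))  # 1 if J1 won, else 0
    mean_deg = {k: sum(v) / len(v) for k, v in degrees.items()}
    q = {k: sum(v) / len(v) for k, v in wins.items()}
    return mean_deg, q
```

From these empirical statistics one can then choose reproduction laws and leaf parameters for the model, with the caveat, raised in (i), that assuming the true game is such a realization may not be very realistic.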
2.15. Global optimality fails. By Theorem 11, Algorithm 13 is optimal, in a loose sense, step by step. That is, knowing, for some n ≥ 1, D xn and the values of R(x) for x ∈ x n ⊂ L, it tells us how to choose the next leaf x n+1 ∈ L so that E[(R n+1 (r) − R(r)) 2 ] is as small as possible. However, it is not globally optimal.
Remark 17. Let T be the (deterministic) binary tree with depth 3 and assume that the outcomes (R(x)) x∈L are i.i.d. and Bernoulli(1/2)-distributed. Then Assumption 5 is satisfied and we can compute the functions m and s introduced in Definition 7. We thus may apply Algorithm 13, producing some random leaves x 1 , x 2 , . . . . We set F n = σ({B xn , D xn , (R(x)) x∈xn }) and R n (r) = E[R(r)|F n ].
There is another admissible algorithm, producing some random leaves x̂ 1 , x̂ 2 , . . . , such that, for F̂ n = σ({B x̂n , D x̂n , (R(x)) x∈x̂n }) and R̂ n (r) = E[R(r)|F̂ n ], we have E[(R̂ n (r) − R(r)) 2 ] < E[(R n (r) − R(r)) 2 ] for some n. It looks very delicate to determine the globally optimal algorithm. Moreover, it is likely that such an algorithm will be very intricate and will not enjoy the quick update property discussed in Subsection 2.12.

3. General convergence
We first show that Algorithm 13 is convergent with any reasonable parameters m and s.
Proof of Proposition 15. We consider some fixed tree T ∈ S f with L its set of leaves, some fixed outcomes (R(x)) x∈L , and we denote by (R(x)) x∈T the corresponding minimax values. We also consider any pair of functions m and s on {(S, x) : S ∈ S f , x leaf of S} with values in [0, 1] and [0, ∞) respectively and we apply Algorithm 13. We assume that for any S ∈ S f of which x is a leaf, s(S, x) = 0 if and only if m(S, x) ∈ {0, 1}, and that in such a case, m(S, x) = R(x).
Step 1. After the n-th step of the algorithm, we have some values R n (x) ∈ [0, 1], U n (x) ∈ [0, 1] and Z n (x) ≥ 0 for all x ∈ B xn ∪ D xn , for some x n = {x 1 , . . . , x n } ⊂ L. These quantities can generally not be interpreted in terms of conditional expectations, but they always obey, by construction, the following rules.
(a) If x ∈ x n , then R n (x) = R(x) and Z n (x) = 0.
We finally recall that, by Proposition 1, we have (f): the minimax values (R(x)) x∈T satisfy the backward induction relation (1).
Step 2. Here we prove that for all x ∈ B xn ∪ D xn , R n (x) ∈ {0, 1} and Z n (x) = 0 are equivalent and imply that R n (x) = R(x).
In the whole step, the notions of child (of x ∈ B xn \ x n ) and brother (of x ∈ B xn ∪ D xn ) refer to the tree T or, equivalently, to the tree B xn ∪ D xn .
First, (7) is obvious if x ∈ x n by point (a) (then R n (x) = R(x) ∈ {0, 1} and Z n (x) = 0) and if x ∈ D xn by point (b) and our assumption on m and s. We next work by backward induction: we consider x ∈ B xn \ x n , we assume that all its children satisfy (7), and we prove that x also satisfies (7). We assume for example that t(x) = 0, the case where t(x) = 1 being treated similarly.
If R n (x) = 0, then by (c), x has (at least) one child y such that R n (y) = 0 whence, by induction assumption, R(y) = 0 and thus R(x) = 0 by (f). Furthermore, R n (y) = 0 implies that U n (z) = 0, whence U 2 n (z)Z n (z) = 0, for all z brother of y by (e). And by induction assumption, we have Z n (y) = 0, whence U 2 n (y)Z n (y) = 0. We conclude, by (d), that Z n (x) = 0, and we have seen that R n (x) = 0 = R(x).
If R n (x) = 1, then by (c), all the children y of x satisfy R n (y) = 1, whence, by induction assumption, R(y) = 1 and thus R(x) = 1 by (f). Still by induction assumption, Z n (y) = 0 for all the children y of x, whence Z n (x) = 0 by (d), and we have seen that R n (x) = 1 = R(x).
Assume now that Z n (x) = 0, whence U 2 n (y)Z n (y) = 0 for all the children y of x by (d). If there is (at least) one child y of x for which U n (y) = 0, this means that there is another child z of x for which R n (z) = 0 by (e), whence R n (x) = 0 by (c). Else, we have Z n (y) = 0 for all the children y of x, so that R n (y) ∈ {0, 1} by induction assumption, and thus R n (x) ∈ {0, 1} by (c).
Step 3. We now prove that if x n+1 ∈ x n , then R n (r) ∈ {0, 1}, and this will prove point (i). Looking at Algorithm 13, we see that x n+1 ∈ x n means that the procedure put z = r and do z = argmax{U 2 n (y)Z n (y) : y ∈ C z } until z ∈ x n ∪ D xn returns some z ∈ x n . But then, Z n (z) = 0 by (a). From (d) and the way z has been built, one easily gets convinced that this implies that Z n (r) = 0, whence R n (r) ∈ {0, 1} by Step 2.
Step 4. By Step 3 and since T has a finite number of leaves, n 0 = inf{n ≥ 1 : R n (r) ∈ {0, 1}} is well-defined and finite. Finally, we know from Step 2 that R n0 (r) = R(r).

4. Preliminaries
We first establish some general formulas concerning the functions m and s. They are not really necessary to understand the proof of our main result, but we need them to show Lemma 16. Also, we will use them to derive more tractable expressions in some particular cases in Section 6.
Proof. We fix S ∈ S f such that Pr(A S ) > 0 and x ∈ L S . We first observe that for any y ⊂ C x with |y| ≥ 1, A S ∩ {C x = y} = A S∪y (recall that A S is the event on which S ⊂ T and all the brothers (in T ) of all the elements of S also belong to S). Hence m(S, x) = Pr(R(x) = 1|A S ) = ∑ y Pr(C x = y, R(x) = 1|A S ), the sum being taken over the possible values y of C x .
Hence the only difficulty is to verify that Pr(C x = y, R(x) = 1|A S ) = Pr(C x = y|A S )Θ(S, x, y) or, equivalently, that Since A S ∩ {C x = y} = A S∪y and since y ⊂ L S∪y , we know from Assumption 5 that the family (T y , (R(u)) u∈L∩Ty ) y∈y is independent conditionally on A S ∩ {C x = y}. Consequently, the family (R(y)) y∈y is independent conditionally on A S ∩ {C x = y} (because R(y) depends only on T y and (R(u)) u∈L∩Ty , recall Remark 1). We assume e.g. t(x) = 1. Since R(x) = max{R(y) : y ∈ C x }, we may write But for y ∈ y, Pr(R(y) = 0|A S ∩ {C x = y}) = Pr(R(y) = 0|A S∪y ) = 1 − m(S ∪ y, y), whence (8), because t(x) = 1.
We next study s. Knowing A S , we handle a uniformly random match starting from x, with resulting leaf v, and we set G = σ(v, R(v), K xv ), where K xv = K v ∩ T x . We recall that s(S, x) = E[(Pr(R(x) = 1|G ∨ σ(A S )) − m(S, x)) 2 |A S ]. On A S ∩ {C x = y}, let w be the child of x belonging to B xv . Since v is obtained by handling a uniformly random match starting from x, Pr(w = y|A S ∩ {C x = y}) = |y| −1 for all y ∈ y. We thus only have to verify the desired identity on each event {w = y}. But A S ∩ {C x = y} = A S∪y , so that, by Assumption 5 (and since the random match is independent of everything else), the family (T u , (R(z)) z∈L∩Tu ) u∈y is independent conditionally on A S ∩ {C x = y} ∩ {w = y}. Hence the family (R(u)) u∈y\{y} is independent and independent of (T y , (R(z)) z∈L∩Ty ), conditionally on A S∪y ∩ {w = y}. From now on, we assume e.g. that t(x) = 0.
We have R(x) = min{R(u) : u ∈ y} = ∏ u∈y R(u) on {C x = y}, and we conclude from the above independence property that, conditionally on A S∪y ∩ {w = y}, the values (R(u)) u∈y\{y} remain independent of G. But Pr(R(u) = 1|A S∪y ) = m(S ∪ y, u). Adopting the notation R 1 (y) = Pr(R(y) = 1|G ∨ σ(A S∪y )), we deduce that R 1 (x) = R 1 (y) ∏ u∈y\{y} m(S ∪ y, u). Recall that t(x) = 0. To conclude that (9) holds true, it only remains to verify that, conditionally on {w = y}, s(S ∪ y, y) = E[(R 1 (y) − m(S ∪ y, y)) 2 |A S∪y ]. This holds because on {w = y}, R 1 (y) = Pr(R(y) = 1|G ∨ σ(A S∪y )) is indeed the conditional probability that R(y) = 1 knowing the information provided by a uniformly random match starting from w (with resulting leaf v). Point (a) follows. For the second equality, we used that {w = y} ∈ G ∨ σ(A S∪y ). For the third equality, we used that w is of course independent of R(y) knowing A S∪y .

For (b), we write
We next give the Proof of Lemma 16. We fix S ∈ S f such that Pr(A S ) > 0 and z ∈ L S .
If Pr(z ∈ L|A S ) = 0, we consider a finite tree T with root z such that Pr(T z = T |A S ) > 0. We set U z = S and, for all x ∈ T \ {z}, U x = S ∪ ⋃ y∈Bzx\{x} C T y ∈ S f . It holds that x ∈ L Ux for all x ∈ T and, if x ∈ T \ L T , U x ∪ C T x = U y for all y ∈ C T x . We now prove by backward induction that for any x ∈ T , s(U x , x) = 0 implies that m(U x , x) ∈ {0, 1}. Applied to x = z, this will complete the proof.
If first x ∈ L T , then Pr(x ∈ L|A Ux ) > 0, because A S ∩ {T z = T } ⊂ A Ux ∩ {x ∈ L}, because Pr(T z = T |A S ) > 0 and because Pr(A S ) > 0. We thus have already seen that s(U x , x) = 0 implies that m(U x , x) ∈ {0, 1}.
If next x ∈ T \ L T , we introduce y = C T x and we see that Pr(A Ux∪y ) > 0, because Pr(T z = T |A S ) > 0 and because Pr(A S ) > 0. We deduce from Lemma 18 that if s(U x , x) = 0, then Γ(U x , x, y, y) = 0 for all y ∈ y. If e.g. t(x) = 0, this implies that for all y ∈ y (recall that U x ∪ y = U y ), either s(U y , y) = 0 or m(U u , u) = 0 for some u ∈ y. Thus we always have m(U x , x) = ∏ u∈y m(U u , u) and either (i) s(U u , u) = 0 for all u ∈ y or (ii) there is u ∈ y such that m(U u , u) = 0. In case (i), we deduce from the induction assumption that m(U u , u) ∈ {0, 1} for all u ∈ y, whence m(U x , x) ∈ {0, 1}. In case (ii), we of course have m(U x , x) = 0.
We next study the information provided by some admissible algorithm. Here, Assumption 5 is not necessary. The following result is intuitively obvious, but we found no short proof.
Lemma 19. Recall Setting 8: an admissible algorithm provided some leaves x n = {x 1 , . . . , x n } together with the objects D xn and (R(x)) x∈xn . For any (deterministic) y n = {y 1 , . . . , y n } ⊂ T, any D n ⊂ D yn and any (a(y)) y∈yn ⊂ {0, 1} yn , the law of (T , (R(y)) y∈L ) knowing A n = {x n = y n , D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn } is the same as knowing A′ n = {y n ⊂ L, D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn }, as soon as Pr(A n ) > 0.
Proof. We work by induction on n.
Step 2. Assume that the statement holds true with some n ≥ 1. Consider some deterministic y n+1 = {y 1 , . . . , y n+1 } ⊂ T, D n+1 ⊂ D yn+1 and (a(y)) y∈yn+1 ⊂ {0, 1} yn+1 , as well as the corresponding events A n+1 and A′ n+1 . Recall that x n+1 is chosen as follows: for some deterministic function F as in Remark 4 and some X n ∼ U([0, 1]) independent of everything else, we set z n = F (x n , D xn , (R(x)) x∈xn , X n ), which belongs to x n ∪ D xn . If z n ∈ x n , we set x n+1 = z n , else, we handle a uniformly random match starting from z n and denote by x n+1 the resulting leaf.
If y n+1 ∈ y n , then we have A n+1 = A n ∩ {F (y n , D n , (a(x)) x∈yn , X n ) = y n+1 } and A′ n+1 = A′ n , where D n = D n+1 , where A n = {x n = y n , D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn } and where A′ n = {y n ⊂ L, D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn }. By induction assumption, we know that the law of (T , (R(y)) y∈L ) knowing A n is the same as knowing A′ n . Since X n is independent of (T , (R(y)) y∈L ) and of A n , the law of (T , (R(y)) y∈L ) knowing A n+1 is the same as knowing A n and thus the same as knowing A′ n+1 (which equals A′ n ).
If y n+1 / ∈ y n , let x be the element of B ryn+1 ∩ B yn the closest to y n+1 and let z n be the child of x belonging to B ryn+1 . We set D n = (D n+1 \ T zn ) ∪ {z n }. Then A n+1 = A n ∩ B 1 ∩ B 2 and A′ n+1 = A′ n ∩ B 2 , where A n and A′ n are as in the statement, where B 1 = {F (y n , D n , (a(x)) x∈yn , X n ) = z n } and where B 2 is the event describing the outcome of the random match starting from z n . First, since X n is independent of everything else, the law of (T , (R(y)) y∈L ) knowing A n+1 is the same as knowing A n ∩ B 2 (from now on, we take the convention that in B 2 , x n+1 is the leaf resulting from a uniformly random match starting from z n ). We thus only have to prove that the law of (T , (R(y)) y∈L ) knowing A n ∩ B 2 is the same as knowing A′ n ∩ B 2 . Consider T ∈ S f and (α(y)) y∈L T ∈ {0, 1} L T , such that y n+1 ⊂ L T , D T yn+1 = D n+1 and (α(y)) y∈yn+1 = (a(y)) y∈yn+1 .
We start from the identity provided by our induction assumption. On the one hand, exactly as in Step 1, we can compute Pr({T = T, (R(y)) y∈L T = (α(y)) y∈L T } ∩ B 2 |A n ). On the other hand, the same computation applies knowing A′ n . Since finally {T = T, (R(y)) y∈L T = (α(y)) y∈L T } ⊂ B 2 , we conclude that Pr(T = T, (R(y)) y∈L T = (α(y)) y∈L T |A n ∩ B 2 ) = Pr(T = T, (R(y)) y∈L T = (α(y)) y∈L T |A′ n ∩ B 2 ), which was our goal.
We deduce the following observation, that is crucial to our study.
Lemma 20. Suppose Assumption 5 and recall Setting 8: an admissible algorithm provided some leaves x n = {x 1 , . . . , x n } together with the objects D xn and (R(x)) x∈xn and we define F n = σ(x n , D xn , (R(x)) x∈xn ). Recall also Remark 6.
(i) Knowing F n , for all x ∈ D xn , the conditional law of (T x , (R(y)) y∈L∩Tx ) is G Kx,x .
(ii) Knowing F n , for all x ∈ B xn \ x n , the family ((T u , (R(y)) y∈L∩Tu ), u ∈ C x ) is independent.
Recall that for all x ∈ D xn , K x is F n -measurable and that for all x ∈ B xn \ x n , C x is F n -measurable. Hence this statement is meaningful.
Proof. We observe that F n is generated by the events of the form A n = {x n = y n , D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn } as in Lemma 19. Let A′ n = {y n ⊂ L, D yn = D n , (R(y)) y∈yn = (a(y)) y∈yn }, which contains A n . To check (i), by Lemma 19, it thus suffices to prove that knowing A′ n , for all x ∈ D n , the law of (T x , (R(y)) y∈L∩Tx ) is G K T x ,x . We fix x ∈ D n and write A′ n = A K T x ∩ ⋂ u∈L K T x \{x} E u for some events E u ∈ σ(T u , (R(y)) y∈L∩Tu ). By Assumption 5, (T x , (R(y)) y∈L∩Tx ) is independent of ⋂ u∈L K T x \{x} E u knowing A K T x . Thus the law of (T x , (R(y)) y∈L∩Tx ) knowing A′ n equals the law of (T x , (R(y)) y∈L∩Tx ) knowing A K T x , which is G K T x ,x by definition. For (ii), we show that for any x ∈ B yn \ y n , the family ((T u , (R(y)) y∈L∩Tu ), u ∈ C T x ) is independent conditionally on A n , or equivalently, conditionally on A′ n . To this aim, we introduce the tree S obtained from T by removing all the subtrees strictly below the children of x. We write A′ n = A S ∩ ⋂ u∈L S E u with E u as in the proof of (i). We know from Assumption 5 that the family ((T u , (R(y)) y∈L∩Tu ), u ∈ L S ) is independent conditionally on A S . Observing that E u ∈ σ(T u , (R(y)) y∈L∩Tu ) for all u ∈ L S , we conclude that the family ((T u , (R(y)) y∈L∩Tu ), u ∈ L S ) is independent conditionally on A′ n . (Here we used that if a family of random variables (X i ) i∈I is independent conditionally on some event A and if we have some events E i ∈ σ(X i ), for i ∈ I, then the family (X i ) i∈I is independent conditionally on A ∩ ⋂ i∈I E i .) Since C T x ⊂ L S , the conclusion follows.

Proof of the main result
In the whole section, we take Assumption 5 for granted. We first compute the conditional minimax values.
Proof of Proposition 9. We work under Setting 8. If first x ∈ x n , then R(x) is F n -measurable, so that R n (x) = Pr(R(x) = 1|F n ) = R(x).
Next, for x ∈ D xn , we know from Lemma 20 that the law of (T x , (R(y)) y∈L∩Tx ) knowing F n is G Kx,x . Recalling Definition 7 and Remark 6, we see that R n (x) = Pr(R(x) = 1|F n ) = m(K x , x).
Finally, for x ∈ B xn \ x n , Lemma 20 tells us that the family ((T y , (R(u)) u∈L∩Ty ), y ∈ C x ) is independent conditionally on F n . But for y ∈ C x , R(y) is of course σ(T y , (R(u)) u∈L∩Ty )-measurable (recall Remark 1). Thus the family (R(y), y ∈ C x ) is independent conditionally on F n . If t(x) = 0, we may write, by (1), R n (x) = Pr(min{R(y) : y ∈ C x } = 1|F n ) = ∏ y∈Cx Pr(R(y) = 1|F n ), which equals ∏ y∈Cx R n (y) as desired. If now t(x) = 1, we find similarly R n (x) = 1 − ∏ y∈Cx Pr(R(y) = 0|F n ), which is nothing but 1 − ∏ y∈Cx (1 − R n (y)).
We now study the different possibilities for the (n + 1)-th step.
(ii) For all z ∈ D xn , we have ∆ z n (z) = s(K z , z).
To check points (ii) and (iii), we fix z ∈ D xn . We recall that F z n+1 = F n ∨ G, where G = σ(y, K zy , R(y)), where y is the leaf resulting from a uniformly random match starting from z and where K zy = K y ∩ T z . We also recall that R z n+1 (x) = Pr(R(x) = 1|F z n+1 ) for all x ∈ B xn ∪ D xn . We know from Lemma 20 that the law of (T z , (R(y)) y∈L∩Tz ) knowing F n is G Kz,z . Recalling Definition 7 and Remark 6, we immediately deduce that R n (z) = m(K z , z) and that ∆ z n (z) = s(K z , z). This proves (ii). To prove (iii), we fix x ∈ B rz \ {r} and we set v = f (x). By Lemma 20, the family ((T y , (R(u)) u∈L∩Ty ), y ∈ C v ) is independent conditionally on F n . Furthermore, G, which only concerns (T x , (R(u)) u∈L∩Tx ), is independent of the family ((T y , (R(u)) u∈L∩Ty ), y ∈ H x ). Finally, we recall that for all y ∈ C v , R(y) is σ(T y , (R(u)) u∈L∩Ty )-measurable.
If t(v) = 0, R z n+1 (v) = Pr(R(v) = 1|F n ∨ G) = Pr(min{R(y) : y ∈ C v } = 1|F n ∨ G) whence, by conditional independence, R z n+1 (v) = ∏ y∈Cv Pr(R(y) = 1|F n ∨ G). We now have all the weapons to give the
Proof of Theorem 11. Recall that we work under Setting 8 and that we adopt Notation 10, in which U n , Z n and F z n+1 are defined. For x ∈ B xn ∪ D xn , we set Ū n (x) = ∏ y∈Brx\{r} U n (y), with the convention that Ū n (r) = 1.
Step 1. In view of the explicit formula for ∆ z n (r) checked above, the natural way to find z * maximizing ∆ z n (r) is to start from r and to go down in B xn ∪ D xn following the maximum values of (N n (x)) x∈Bx n ∪Dx n defined as follows. Set N n (x) = 0 for x ∈ x n , set N n (x) = (Ū n (x)) 2 s(K x , x) for x ∈ D xn and put N n (x) = max{N n (y) : y ∈ C x } for x ∈ B xn \ x n .
We claim that N n (x) = (Ū n (x)) 2 Z n (x) for all x ∈ B xn ∪ D xn .
By construction, z * = argmax{N n (z) : z ∈ D xn ∪ x n }. Also, N n (x) = N n (r) for all x ∈ B rz * .
Step 4. We now prove that if N n (r) > 0, then z * = z * .

Remark 22.
As seen in the proof, the natural way to find z * would be to start from r and to go down in the tree following the highest values of N n . Recalling the discussion of Subsection 2.12, this would lead to an algorithm with a cost of order Kdn 2 : since (generally) R n+1 (x) ≠ R n (x) for x ∈ B rxn+1 , this (generally) modifies the value of Ū n (x) for all x ∈ B xn \ B rxn+1 (actually, for all x except those of B xn ∩ B rxn+1 ) and thus the values of N n (x) on the whole explored tree B xn ∪ D xn . The observation (10), which asserts that argmax{N n (y) : y ∈ C x } = argmax{(U n (y)) 2 Z n (y) : y ∈ C x }, is thus crucial, as well as the fact that U n and Z n enjoy a quick update property.

6. Computation of the parameters for a few specific models
Here we present a few models for the tree T and the outcomes (R(x)) x∈L where our assumptions are met and where we can compute, at least numerically, the functions m and s introduced in Definition 7. Recall that these functions are necessary to implement Algorithm 13.
6.1. Inhomogeneous Galton-Watson trees. We assume that T is the realization of an inhomogeneous Galton-Watson tree with reproduction laws µ 0 , . . . , µ K : the number of children of the root r follows the law µ 0 ∈ P(N), the numbers of children of these children are independent and µ 1 -distributed, etc. We assume that µ K = δ 0 , so that any individual of generation K is a leaf and thus K is the maximal depth of T .
We also consider a family q 0 , . . . , q K of numbers in [0,1]. Conditionally on T , we assume that the family (R(x)) x∈L is independent and that R(x) ∼ Bernoulli(q |x| ) for all x ∈ L.
Example 23. With such a model, Assumption 5 is fulfilled, and for any S ∈ S f and x ∈ L S such that Pr(A S ) > 0, we have m(S, x) = m(|x|) and s(S, x) = s(|x|), where (i) m is defined by backward induction by m(K) = q K and an explicit formula for k = 0, . . . , K − 1, and (ii) s is defined by backward induction by s(K) = q K (1 − q K ) and an explicit formula for k = 0, . . . , K − 1. These quantities can be computed once and for all if one knows the parameters µ 0 , . . . , µ K and q 0 , . . . , q K of the model. If unknown, as is generally the case, these parameters can be evaluated numerically quite precisely by simulating a large number of uniformly random matches. From these evaluations, we can derive some approximations of m and s. However, the main problem is of course that in general, assuming that the true game is the realization of such a model is not very realistic.
Proof. First, Assumption 5 is satisfied, thanks to the classical branching property of Galton-Watson trees. Indeed, consider S ∈ S f and x ∈ L S such that Pr(A S ) > 0. Conditionally on A S , we can write T = S ∪ ⋃ x∈L S T x and the family ((T x , (R(y)) y∈L∩Tx ), x ∈ L S ) is independent by construction. Furthermore, for any x ∈ L S , the law G S,x of (T x , (R(y)) y∈L∩Tx ) knowing A S depends only on the depth |x|.
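To make the backward induction of Example 23 concrete, here is a sketch of a possible recursion for m under this model. The alternation of players is encoded through a function maximizing(k), which is our own convention for the illustration (the paper encodes the turn through t(x)); the formula combines the case where a depth-k node is a leaf of T (probability µ k ({0})), carrying a Bernoulli(q k ) outcome, with the case where it has d ≥ 1 children whose values are, by the branching property, i.i.d. Bernoulli(m(k + 1)).

```python
def compute_m(K, mus, qs, maximizing):
    """Sketch of a backward recursion for m(k) in the inhomogeneous
    Galton-Watson model.

    mus[k]: reproduction law at depth k, as a dict {d: probability};
    qs[k]: Bernoulli parameter of a leaf at depth k;
    maximizing(k): True if the maximizing player J1 moves at depth k
    (turn convention: an assumption of this sketch, not the paper's).
    Returns [m(0), ..., m(K)].
    """
    m = [0.0] * (K + 1)
    m[K] = qs[K]  # mu_K = delta_0: depth-K nodes are leaves
    for k in range(K - 1, -1, -1):
        # a depth-k node is a leaf of T with probability mus[k]({0})
        val = mus[k].get(0, 0.0) * qs[k]
        for d, p in mus[k].items():
            if d == 0:
                continue
            if maximizing(k):   # value is the max of d i.i.d. children
                val += p * (1.0 - (1.0 - m[k + 1]) ** d)
            else:               # value is the min of d i.i.d. children
                val += p * m[k + 1] ** d
        m[k] = val
    return m
```

A similar (but heavier) recursion would be needed for s, which also involves the information carried by one uniformly random match; we do not attempt it here.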

6.2. Inhomogeneous Galton-Watson trees of order two.
Here we mention that we can also deal with random trees that enjoy some independence properties without being Galton-Watson trees. For example, the following model of order 2 allows one to build a broad class of random trees with non-increasing degree (along each branch), which might be useful for real games. It is possible to treat some models of higher order, but the functions m and s then become really tedious to compute theoretically and to approximate in practice.
We consider a family of probability measures on N: µ 0 and µ k,d for k = 1, . . . , K and d ≥ 1. We assume that µ K,d = δ 0 for all d ≥ 1 and K will represent the maximum depth of the tree.
We build the random tree T as follows: the root has D r ∼ µ 0 children. Conditionally on D r , all the children x of the root produce, independently, a number D x ∼ µ 1,Dr of children. Once everything is built up to generation k ∈ {0, . . . , K − 1}, all the individuals x with |x| = k produce, independently (conditionally on what is already built), a number D x ∼ µ k,D f (x) of children.
We also consider a family q 0 , . . . , q K of numbers in [0,1]. Conditionally on T , we assume that the family (R(x)) x∈L is independent and that R(x) ∼ Bernoulli(q |x| ) for all x ∈ L.

(i) m(K, d) = q K for all d ≥ 1 and, for k = 1, . . . , K − 1 and d ≥ 1, (ii) s(K, d) = q K (1 − q K ) for all d ≥ 1 and, for k = 1, . . . , K − 1 and d ≥ 1,

We could easily express m({r}, r) and s({r}, r), but these values are useless as far as Algorithm 13 is concerned.
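For concreteness, the order-two tree just described can be sampled as follows. This is a hedged sketch: the nested-dictionary representation, the function names, and the passing of laws as finite probability tables are all our own choices, not the authors'.

```python
import random

def sample_order2_tree(mu0, mus, K, rng=random):
    """Sample the order-two random tree as nested node records.

    mu0 is the root's offspring law; mus[(k, d)] is the law of the number
    of children of a depth-k node whose parent had d children.  Each law
    is a dict {value: prob}.  Depth-K nodes never reproduce, so K is the
    maximal depth.  Returns records {'depth': k, 'children': [...]}.
    """
    def draw(law):
        u, acc = rng.random(), 0.0
        for v, p in sorted(law.items()):
            acc += p
            if u < acc:
                return v
        return max(law)

    def grow(depth, parent_degree):
        node = {'depth': depth, 'children': []}
        if depth < K:
            law = mu0 if depth == 0 else mus[(depth, parent_degree)]
            d = draw(law)  # each node draws its degree independently
            node['children'] = [grow(depth + 1, d) for _ in range(d)]
        return node

    return grow(0, 0)
```

Taking mus[(k, d)] supported on {0, . . . , d} yields the non-increasing-degree trees mentioned above.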

Symmetric minimax values.
Here we discuss the formulas introduced in Subsection 2.14.
We fix some value a ∈ (0, 1). For S ∈ S f , we build the family (m a (S, x)) x∈S by induction, starting from the root, setting m a (S, r) = a and, for all x ∈ S \ {r}, using (11). Observe that m a (S, x) actually depends only on K S x , i.e. m a (S, x) = m a (K S x , x).

Example 26. Consider a possibly random tree T ∈ S f enjoying the property that for any S ∈ S f with leaves L S , the family (T x ) x∈L S is independent conditionally on A S = {S ⊂ T , D L S = ∅} as soon as Pr(A S ) > 0. Fix a ∈ (0, 1) and assume that, conditionally on T , the family (R(y)) y∈L is independent and R(y) ∼ Bernoulli(m a (T , y)) for all y ∈ L. Then Assumption 5 is fulfilled and, for any S ∈ S f such that Pr(A S ) > 0 and any x ∈ L S , we have m(S, x) = m a (S, x). We are generally not able to compute s(S, x).
Observe that this is a qualitative symmetry assumption, saying that knowing T , for any v ∈ T \ L, the family of the minimax values (R(x), x ∈ C v ) is i.i.d. Once this is assumed, the only remaining parameter is the mean minimax rating of the root (which we set to a).
Once the value a = m a (T , r) is chosen (even if not knowing T ), it is easy to make the algorithm compute the necessary values of m a , as explained in Subsection 2.14: each time a new node x of T is created by the algorithm, we can compute m a (K x , x) from m a (K f (x) , f (x)) and |C f (x) |.
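The computation described here can be sketched in a few lines. Since formula (11) is not reproduced in this excerpt, the rule below is our reconstruction of the symmetric split consistent with the identity Pr(R(x) = 1|T) = 1 − ∏(1 − m a (T, y)) used in the proof below: if a max (J1) node has value v and c children, each child receives the value y solving 1 − (1 − y)^c = v; for a min (J0) node, y^c = v.

```python
def child_value(parent_value, n_children, parent_is_max):
    """Value m_a assigned to each child of a node (symmetric reconstruction).

    For a J1 (max) node with value v and c children, each child receives y
    with 1 - (1 - y)**c == v; for a J0 (min) node, y**c == v.  This is our
    guess at formula (11), not a quotation of it.
    """
    v, c = parent_value, n_children
    if parent_is_max:
        return 1.0 - (1.0 - v) ** (1.0 / c)
    return v ** (1.0 / c)
```

Each time a new node x is created, one call with m a (K f(x) , f(x)) and |C f(x) | yields m a (K x , x), as described above.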
Proof. Assumption 5 is satisfied because (a) the random tree T is supposed to satisfy the required independence property and (b) conditionally on T , for any x ∈ L, m a (T , x) depends only on K x .
It remains to verify that m(S, x) = m a (S, x) for all S ∈ S f of which x is a leaf.
We first show by backward induction that Pr(R(x) = 1|T ) = m a (T , x) for all x ∈ T . First, this is obvious if x ∈ L by construction. Next, if this is true for all the children (in T ) of some x ∈ T \ L with e.g. t(x) = 1, then we have Pr(R(x) = 1|T ) = 1 − ∏ y∈C x Pr(R(y) = 0|T ) = 1 − ∏ y∈C x (1 − m a (T , y)). We first used that the family (R(y)) y∈C x is independent conditionally on T , and then the induction assumption. Using finally (11) (recall that t(x) = 1 and that f (y) = x for all y ∈ C x ), we find Pr(R(x) = 1|T ) = m a (T , x), which completes the induction.

But we know that m a (T , x) = m a (K x , x). Since K x = K S x on A S , we conclude that m(S, x) = m a (K S x , x) = m a (S, x) as desired.
Let us mention that Pearl's model, which we already interpreted as a particular case of Example 23, can also be seen as a particular case of Example 26, where we can furthermore compute s.
Remark 27. Consider again Pearl's model [20]: T is the deterministic regular tree with degree d ≥ 2 and depth K ≥ 1, and the family (R(x)) x∈L is i.i.d. Bernoulli(p)-distributed. Then we already know that for all S ∈ S f and x ∈ L S such that Pr(A S ) > 0, we have m(S, x) = m(|x|) and s(S, x) = s(|x|), with m and s as in Remark 24. One then also has an explicit expression for all S ∈ S f and x ∈ L S such that Pr(A S ) > 0. Setting a = m(0), which can be computed from p, K, d, we thus have m(S, x) = m a (S, x) as defined in (11), and we can compute s(S, x). Note that it is not necessary to determine s({r}, r) precisely: we can set s({r}, r) = 1 (or any other positive constant) by Remark 14.
Indeed, the above formulas are nothing but a complicated version of the ones in Remark 24, since we have m(S, x) = m(|x|), s(S, x) = s(|x|), |C S v | = d, m(S v , v) = m(|v|) and s(S v , v) = s(|v|). There are other cases where we can characterize s, which should thus be numerically computable.
Example 28. Assume that T is a homogeneous Galton-Watson tree with reproduction law µ such that Σ ℓ≥1 ℓ µ(ℓ) ≤ 1 and µ(0) > 0, so that T is a.s. finite. Fix a ∈ (0, 1) and assume that, conditionally on T , the family (R(y)) y∈L is independent and that R(y) ∼ Bernoulli(m a (T , y)) for all y ∈ L. Then for all S ∈ S f such that Pr(A S ) > 0 and all x ∈ L S , we have m(S, x) = m a (S, x), and s(S, x) can also be characterized.

Proof. We already know from Example 26 that Assumption 5 is satisfied and that m(S, x) = m a (S, x). Let us denote s(a) = s({r}, r), which clearly depends only on a (and µ).
For any S ∈ S f such that Pr(A S ) > 0 and x ∈ L S , we have s(S, x) = s(m a (S, x)) if t(x) = 1 and s(S, x) = s(1 − m a (S, x)) if t(x) = 0. Indeed, the law of (T x , (R(u)) u∈L∩Tx ) conditionally on A S is the same as that of (T , (R(u)) u∈L ) (re-rooted at x), replacing a by m a (S, x): T x is a Galton-Watson tree with reproduction law µ and, knowing A S and T x , one easily checks that m a (T , y) = m ma(S,x) (T x , y) for all y ∈ L ∩ T x . Hence we have s(S, x) = s(m a (S, x)) if t(x) = 1. If now t(x) = 0, we see that s(S, x) = s(1 − m a (S, x)) by exchanging the roles of the two players.

Global optimality fails
Proof of Remark 17. We assume here that T is the binary tree with depth 3. We thus have the eight leaves 111, 112, 121, 122, 211, 212, 221, 222 (recall Subsection 2.1). We also assume that the family (R(x)) x∈L is i.i.d. with common law Bernoulli(1/2).
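As a sanity check for this setting, the mean minimax value of the root can be computed by brute force, enumerating all 2^8 leaf configurations. We assume here that J1 (the maximizing player) moves at the root, which is our convention for this sketch.

```python
from itertools import product

def minimax(values, depth, player_max):
    """Exact minimax value of a complete binary tree of the given depth,
    whose leaf values are listed left to right in `values`."""
    if depth == 0:
        return values[0]
    half = len(values) // 2
    left = minimax(values[:half], depth - 1, not player_max)
    right = minimax(values[half:], depth - 1, not player_max)
    return max(left, right) if player_max else min(left, right)

# P(R(r) = 1) with i.i.d. Bernoulli(1/2) leaves, J1 moving at the root,
# obtained by enumerating all 2^8 = 256 leaf patterns.
wins = sum(minimax(leaves, 3, True) for leaves in product((0, 1), repeat=8))
p_root = wins / 256
```

Under this convention one finds P(R(r) = 1) = 207/256, consistent with the backward induction 1/2 → 3/4 → 9/16 → 207/256 along the max/min/max levels.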
Observe that T can be seen as an inhomogeneous Galton-Watson tree with reproduction laws µ 0 = µ 1 = µ 2 = δ 2 and µ 3 = δ 0 . Applying Example 23 (with q 3 = 1/2, the values of q 0 , q 1 , q 2 being irrelevant), we can compute the functions m and s for any S ∈ S f such that Pr(A S ) > 0.

By symmetry, we can replace the uniformly random matches (starting from some z) used in any admissible algorithm, see Definition 3, by the visit of any deterministic leaf (under z), without changing at all the performance of the algorithm. With this slight modification, some tedious computations show that Algorithm 13, using the above functions m and s, leads to the following strategy (and results) for the first three steps.
Visit the leaf x 1 = 111.
If R(x 2 ) = 1 (whence R 2 (r) = 1), then ...

Everywhere, we used φ(R), φ(U ) and φ(Z) instead of R, U and Z (we mean, concerning the values R n (x), U n (x) and Z n (x)). Actually, for large games, some numerical problems persist: at some steps, we numerically get (R n+1 (r), U n+1 (r), Z n+1 (r)) = (R n (r), U n (r), Z n (r)) (even after the change of variables), which should never be the case. However, the above trick eliminates most of them. Instead of using φ, one could manipulate log r and log (1 − r) simultaneously. This would be more or less equivalent; the use of φ is just slightly more concise.
8.2. The algorithms. We will pit some versions of our algorithm against the two versions of MCTS recalled in Section 9, on a few real games (variations of Connect Four) and on Pearl's model.

We call GW Algorithm 13 with the functions m and s given by the formulas stated in Example 23, the parameters of the model being estimated numerically. We obtain rather stable results. Of course, this is done once and for all for each game.
We call Sym Algorithm 13 with the function m = m a defined by (11), with the choice a = 1/2, and with s ≡ 1. Recall that a ∈ (0, 1) is the expected minimax rating of the root. This is the simplest and most universal algorithm, although not fully theoretically justified (see however Remark 29 in Subsection 6.3).
We finally call SymP Algorithm 13 with the functions m and s defined in Remark 27, here again with a = 1/2. This is the theoretical algorithm furnished by our study in the case of Pearl's game (if a = 1/2) and it is precisely the same as GW in this case, see Remarks 24 and 27.
Let us now give a few details.
(i) In all the experiments below, each algorithm keeps the information provided by the simulations it performed when deciding its previous moves. In practice, this at most doubles the quantity of information (compared to the case where everything is deleted at each new move), because most of the previous simulations led to other positions.
(ii) Concerning Sym and SymP, we actually use a = 1/2 as expected minimax value of the true root of the game, that is the true initial position. When in another configuration x, we use m a (T , x) (with a = 1/2, here T is the tree representing the whole game) as expected minimax value of the current root x (i.e. the current position of the game). Such a value is automatically computed when playing the game.
8.3. The numerical experiments. First, let us mention that we performed many trials using GW and GW2. They almost always performed worse than Sym on Connect Four, so we decided not to present those results. Also, concerning Sym and SymP, we tried other values for the expected minimax value a ∈ (0, 1) of the root without observing significantly better results, so we always use a = 1/2. Similarly, we experimented with other values of b ∈ R (see Subsection 2.14; Sym corresponds to the case b = 0 and SymP to the case b = 2), here again without clear success.
In each subsection below (except Subsection 8.11), which concerns one given game, we proceed as follows.
For each given amount of time per move, we first fit the parameters of MCTS. To this aim, we perform a championship involving MCTS(a, b) for all a = k/2, b = ℓ/2 with 1 ≤ k ≤ ℓ ≤ 10 (we thus have 55 players). Each player competes 40 times against each of the other ones (20 times as first player, 20 times as second one), and we select the player with the highest number of victories. Observe that each player participates in 2160 matches. The resulting best player is rather unstable, but we believe this is due to the fact that the competition is very tight among a few players. Hence, even if we do not select the true best player, we clearly select a very good one. Note that we impose a ≤ b because, after many trials allowing a > b, the best player was always of this shape.
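The championship just described can be sketched as follows. Here `play_match` is a hypothetical stand-in for one timed game between two parameter pairs (returning 1 if the first player wins); the function name and interface are ours.

```python
from itertools import combinations_with_replacement

def fit_mcts_parameters(play_match, n_rounds=20):
    """Round-robin championship over the 55 players MCTS(a, b) with
    a = k/2, b = l/2 and 1 <= k <= l <= 10.

    `play_match(p, q)` is a placeholder for one game with parameters p as
    first player and q as second, returning 1 if p wins and 0 otherwise.
    Every ordered pair plays n_rounds games, so each player takes part in
    54 * 2 * n_rounds matches (2160 for n_rounds = 20, as in the text);
    the player with the most victories is returned.
    """
    players = [(k / 2, l / 2)
               for k, l in combinations_with_replacement(range(1, 11), 2)]
    wins = {p: 0 for p in players}
    for p in players:
        for q in players:
            if p == q:
                continue
            for _ in range(n_rounds):
                r = play_match(p, q)
                wins[p] += r
                wins[q] += 1 - r
    return max(players, key=lambda p: wins[p])
```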
Of course, we do exactly the same thing to fit the parameters of MCTS'.
Then, we make our algorithm (Sym or SymP) compete against the best MCTS and the best MCTS', 10000 times as first player and 10000 times as second one.
Also, we indicate the (rounded) mean number of iterations made by each algorithm at the first move. This sometimes looks ridiculously small: when e.g. playing a version of Connect Four with large degree with 1 millisecond per move, this mean number of iterations is 8 for Sym and 11 for MCTS. However, after sufficiently many moves, this mean number of iterations becomes much higher. In other words, the algorithms more or less play at random at the beginning, but become more and more clever as the game progresses. So in some sense, the winning algorithm is the one that becomes clever before the other.
Finally, let us explain how to read the tables below, which are all of the same shape. For example, the first table, when playing a large Pearl game, is as follows.
First, we call τ K the number of leaves that SymP needs to visit to determine R(r). Denoting by τ̄ K the average value over 10000 trials, we found ... It thus seems highly plausible that our algorithm visits the leaves in the same order as AlphaBeta (for a Pearl game), up to some random permutation. This is rather satisfying, since Tarsi [23] showed that for a Pearl game, AlphaBeta is optimal in the sense of the expected number of leaves necessary to determine R(r).
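For a Pearl game, with values in {0, 1}, alpha-beta reduces to a simple short-circuit rule: a max node stops at its first child worth 1, a min node at its first child worth 0. The number of leaves AlphaBeta needs can therefore be estimated by direct simulation, which gives a point of comparison for τ̄ K. The sketch below (our own interface, not the authors' code) counts the leaves evaluated on one random game, generating them on the fly.

```python
import random

def alphabeta_leaf_count(K, d, p, rng):
    """Alpha-beta on a depth-K, degree-d Pearl game with i.i.d. Bernoulli(p)
    leaves, generated lazily.  Returns (R(r), number of leaves evaluated)."""
    count = 0

    def value(depth, player_max):
        nonlocal count
        if depth == K:
            count += 1
            return 1 if rng.random() < p else 0
        stop = 1 if player_max else 0  # value that lets this node stop early
        v = 1 - stop
        for _ in range(d):
            v = value(depth + 1, not player_max)
            if v == stop:
                break
        return v

    root = value(0, True)
    return root, count
```

Averaging `count` over many trials approximates the expected number of leaves AlphaBeta needs to determine R(r).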
Finally, we plot a Monte-Carlo approximation (with 10000 trials) of E[(R n (r) − R(r)) 2 ] as a function of the number n of iterations, when K = 8 and K = 16, as well as one trajectory of n → R n (r) when K = 16.
8.12. Conclusion. When playing Pearl's game, SymP beats MCTS and seems competitive against MCTS'. This is reassuring, since our algorithms are typically designed for such games.
On true games, MCTS and MCTS' seem globally much better than Sym. However, we found two situations where Sym may win.
The first and most interesting situation is the one where the game is so large (or the amount of time so small) that very few iterations can be performed by the challengers. This is quite natural, and there are two possible reasons for that.
• Our algorithm is only optimal step by step. So, it is absolutely not clear that it works well when we have enough time to handle many iterations.
• Assumption 5 imposes some independence properties. While this is reasonable, on any game, in a sense to be made precise, after a small number of iterations (when playing a small number of uniformly random matches, it is rather clear that the outcomes will be almost independent), this is clearly not the case when performing a large number of well-chosen matches.
The second situation is the one where the game is so small that we can hope to find the winning strategy at the first move, and where Sym may find it before MCTS and MCTS'. As already mentioned, we believe this is due to the fact that Sym does some pruning.
From a theoretical point of view, it would be very interesting to study more relevant models, such as the one proposed by Devroye-Kamoun [12]. Clearly, this falls completely out of our scope. On the other hand, it does not seem completely hopeless to find, empirically, variants of our algorithm that work much better in practice. For example, it may be relevant to use other choices of the functions m and s, to try a clever default policy, etc.

Appendix: Monte Carlo Tree Search algorithms
In this subsection, we write down precisely the versions of the MCTS algorithm we used to test our algorithm. We start with a modified version, closer to our study, in which we do not throw away any information.
Step 1. Simulate a uniformly random match from r, call x 1 the resulting leaf and put x 1 = {x 1 }.
During this random match, keep track of R(x 1 ), of B x1 = B rx1 and of D x1 = ∪ y∈Brx 1 H y and set C 1 (x) = W 1 (x) = 0 for all x ∈ D x1 .
Set z n = z.
(i) If z n ∈ x n (this will almost never occur if n is reasonable for a large game), set x n+1 = x n , B xn+1 = B xn and D xn+1 = D xn . For all x ∈ B rzn , set C n+1 (x) = C n (x) + 1, W n+1 (x) = W n (x) + R(z n ).
(ii) Else (then z n ∈ D xn ), simulate a uniformly random match from z n , call x n+1 the resulting leaf, and set x n+1 = x n ∪ {x n+1 }.
Conclusion. Stop after a given number of iterations n 0 (or after a given amount of time). As best child of r, choose x * = argmax{φ(W n0 (x), C n0 (x)) : x child of r}.
Algorithm 30 updates the information on the whole visited branch at each new rollout. Here is a more standard version: it only creates, after each new simulation, one new node (together with its siblings) and updates the information only on the branch from the root to this new node. It seems clear that Algorithm 30 should be better, but it may lead to memory problems if the game is very large. We do not discuss such memory problems in the present paper.
Step 1. Simulate a uniformly random match from r, call u the resulting leaf.
During this random match, keep track of R(u) and of T 1 = {r} ∪ C r .
Set z n = z.
(i) If z n ∈ L (this will never occur if n is reasonable for a large game), set T n+1 = T n .
(ii) If z n ∉ L, simulate a uniformly random match from z n , call u the resulting leaf, define y as the child of z n belonging to B znu and set T n+1 = T n ∪ C zn .
Conclusion. Stop after a given number of iterations n 0 (or after a given amount of time). As best child of r, choose x * = argmax{φ(W n0 (x), C n0 (x)) : x child of r}.
Of course, in both algorithms, the choice of the function φ is debatable. The choice φ(w, c) = (w + a)/(c + b), with a > 0 and b > 0 chosen empirically, seems to be very good and was proposed by Lee et al. [18].
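A minimal sketch of this selection rule (the function and interface names are ours): J1 ranks the children by φ(W, C) and J0 by φ(C − W, C), where W counts J1's victories through the node and C the number of times it was crossed.

```python
def phi(w, c, a, b):
    """The selection score phi(w, c) = (w + a) / (c + b) discussed above."""
    return (w + a) / (c + b)

def best_child(children_stats, a, b, for_j1=True):
    """Index of the most promising child among (W, C) pairs: J1 ranks by
    phi(W, C), J0 by phi(C - W, C)."""
    def score(stats):
        w, c = stats
        return phi(w if for_j1 else c - w, c, a, b)
    return max(range(len(children_stats)), key=lambda i: score(children_stats[i]))
```

Note that taking a, b > 0 makes φ(0, 0) = a/b well defined, so unvisited children still receive a finite score.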
Algorithm 30 is not admissible in the sense of Definition 3 because it may take different decisions with the same information. Indeed, it might visit the same leaf twice consecutively: this does not modify the information but changes the values of W n and C n . However, it is almost admissible: it would suffice to forbid two consecutive visits of the same leaf (or, alternatively, to set C n+1 (x) = C n (x) and W n+1 (x) = W n (x) for all x ∈ B xn ∪ D xn in the case where x n+1 ∈ x n ) to make it admissible. Since such a double visit almost never happens in practice, we decided not to complicate the definition of admissible algorithms nor to modify Algorithm 30.
Algorithm 31 is not admissible because it does not have the required structure (it does not keep track of the whole observed information), but we see it as a truncated version of Algorithm 30, which is itself almost admissible.
Observe that in Algorithm 30, C n (x) is the number of times (iterations) the node x has been crossed, and W n (x) is the number of times x has been crossed and this has led to a victory of J 1 , all this after n matches. The (n + 1)-th match is as follows: we start from the root and make J 1 play the most promising move (for itself, i.e. the child with the highest φ(W n , C n )) and J 0 play the most promising move (for itself, i.e. the child with the highest φ(C n − W n , C n )) until we reach an uncrossed position z n ∈ D xn . From there, we end the match uniformly at random, until we arrive at some leaf u. We finally update the explored tree as well as its boundary, and the numbers of crossings and victories of each node on the branch from r to u.
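The loop just described can be sketched as follows. This is a simplified, assumption-laden rendering rather than the authors' exact Algorithm 30: in particular, the rollout below picks uniformly among all children rather than only among uncrossed ones, and the game is supplied through abstract callbacks of our own design.

```python
import random

def mcts(children, is_leaf, reward, root, n_iters, a=0.5, b=1.0, rng=random):
    """Sketch of the match loop described above, with phi(w, c) = (w+a)/(c+b).

    `children(x)` lists the moves from position x, `is_leaf(x)` tells whether
    x is terminal, and `reward(x)` gives R(x) in {0, 1} for a leaf.  J1 moves
    at the root and the players alternate.  Returns the best child of root.
    """
    W, C = {}, {}  # J1-victories and crossings, per visited position

    def phi(w, c):
        return (w + a) / (c + b)

    for _ in range(n_iters):
        # Selection: follow the most promising moves through visited nodes.
        x, j1_to_move, branch = root, True, [root]
        while not is_leaf(x) and all(y in C for y in children(x)):
            if j1_to_move:
                x = max(children(x), key=lambda y: phi(W[y], C[y]))
            else:
                x = max(children(x), key=lambda y: phi(C[y] - W[y], C[y]))
            branch.append(x)
            j1_to_move = not j1_to_move
        # Rollout: end the match uniformly at random.
        while not is_leaf(x):
            x = rng.choice(children(x))
            branch.append(x)
        r = reward(x)
        # Backpropagation: update crossings and J1-victories on the branch.
        for y in branch:
            C[y] = C.get(y, 0) + 1
            W[y] = W.get(y, 0) + r
    return max(children(root), key=lambda y: phi(W.get(y, 0), C.get(y, 0)))
```

On a tiny two-move game where one root move always wins for J1 and the other loses under best play, the loop concentrates its visits on the winning move.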