Invariant measures of interacting particle systems: Algebraic aspects

Consider a continuous-time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃, where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈ {∞, 2, 3, ⋯}. Assume that the Markovian evolution of the particle system (PS) is driven by some translation-invariant local dynamics with bounded range, encoded by a jump rate matrix T. These are standard settings, satisfied by the TASEP, the voter models and the contact processes. The aim of this paper is to provide sufficient and/or necessary conditions on the matrix T so that this Markov process admits a simple invariant distribution: a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure (if 𝕃 = ℤ/nℤ). Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix, or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov law as invariant distribution (for any memory m), while some models close to them do. (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯} if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m), for any k.) We also exhibit PS admitting hidden Markov chains as invariant distributions, and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting invariant product measures.

If E is a set and I a subset of ℤ, or a sequence in ℤ, we denote by E^I := {x(I) : the entries of x(I) belong to E} the set of sequences in E indexed by I. For y = x(I), a sequence indexed by a set I, and for A ⊂ ℤ, we denote by y^A the word obtained by suppressing in y the letters whose positions belong to A. Following the same idea, we denote by M^{i} the matrix M with the column and row i suppressed. For any set E, we denote by M(E) the set of probability measures on E (for a topology which will be specified by the context). A function g : A → ℝ is said to be identically 0, and we write g ≡ 0, if its image is reduced to {0}.

Models and presentation of results
All the results presented in this article (apart from Thm. 1.24) concern space- and time-homogeneous particle systems (PS) with finite range interactions, defined on a lattice L, which will be Z, Z/nZ, Z^d, or a segment ⟦1, n⟧. The set of colours is E_κ = {0, …, κ − 1}, where κ (the number of colours) belongs to {2, 3, …} ∪ {+∞}. An element of the set of configurations E_κ^L is a colouring of the sites of L by the elements of E_κ (neighbouring sites may have the same colour). When well defined, the PS will be a continuous-time Markov process η := (η_t, t ≥ 0), where for any t, η_t = (η_t(k), k ∈ L) ∈ E_κ^L. The set E_κ^L is equipped with the product σ-algebra. The construction of the family of PS considered here is illustrated on Z first, but considerations for the analogues on Z/nZ, ⟦1, n⟧ and Z^d will appear progressively. The dynamics is encoded by a jump rate matrix (JRM) T, indexed by the size-L words on the alphabet E_κ, with non-negative entries and with zeroes on the diagonal.
Assume for a moment that κ, the number of colours, is finite, and fix a JRM T with range L. With any element (i, w, w') of the "possible jumps set", where i encodes an abscissa in an infinite word, and w and w' encode respectively some size-L initial and final words, we associate the "local map" m_{i,w,w'}. The map m_{i,w,w'}:
- in the case η⟦i+1, i+L⟧ ≠ w, keeps η unchanged (so that m_{i,w,w'}(η) = η);
- in the case η⟦i+1, i+L⟧ = w, transforms this subword into w' (formally: m_{i,w,w'}(η) = η' with η'_j = η_j if j ∉ ⟦i+1, i+L⟧, and η'_{i+k} = w'_k, the kth letter of w', if 1 ≤ k ≤ L).
The generator G (given in (1.4)) encodes all possible infinitesimal transformations of a configuration η indexed by Z, whose size-L subwords may jump: a subword equal to w is transformed into w' with rate T[w|w'] (a jump is then possible only when T[w|w'] > 0). When κ is finite, such a particle system is well defined (see the references given above for all details). Many such models have been studied in the literature, for example:
- The contact process, for which κ = 2, L = 3, and all the entries of T are 0 except T[a,1,b|a,0,b] = 1 for any (a, b) ∈ {0, 1}^2 (the recovery rate), and T[a,0,b|a,1,b] = λ(a + b), for some λ > 0 the infection rate (the same model can be expressed using a JRM with range L = 2 instead).
- The stochastic Ising model, with rates given in (1.5). Here the state 1 represents a vertex on the line with positive magnetization, 0 a vertex with negative magnetization, and β a parameter which, depending on its sign, favours or penalizes configurations in which the magnetizations of neighbouring vertices are aligned.
A distribution µ on E_κ^Z is said to be invariant by T if η_t ∼ µ for any t ≥ 0 when η_0 ∼ µ (where the notation ∼ means "distributed as"). Following the discussion given below (1.4), when κ is finite this property can be rephrased as ∫ Gf dµ = 0 for any bounded cylinder function f (or function of C_∆). A simple argument (Lem. 1.3, p. 23 of [19]) shows that it is also characterized by ∫ Gf dµ = 0 for any indicator function f of the type
f(η) = 1_{η⟦n1, n2⟧ = x⟦n1, n2⟧} (1.6)
for some fixed word x⟦n1, n2⟧ and fixed indices n1 ≤ n2: this expresses the balance between the (infinitesimal) creation and destruction of the subword x⟦n1, n2⟧ in the interval ⟦n1, n2⟧ under the distribution µ.
Recall that under the product σ-algebra, a measure µ ∈ M(E Z κ ) is characterized by its finite dimensional distributions.
We are interested in the following question: for which JRM T does there exist a simple invariant distribution? Here the word "simple" stands for distributions such as product measures, Markov laws or Gibbs measures (depending on the underlying graph on which the particle system is defined). It turns out that this question has a rich algebraic nature, and we have therefore decided to focus on this question only. The algebra in play depends on T and on the fixed family of distributions whose invariance is under investigation.
Consider a function f as given in (1.6). The single jumps of the PS that may affect the value of f(η) take place in the dependence set of ⟦n1, n2⟧, which is larger than ⟦n1, n2⟧:
D⟦n1, n2⟧ = ⟦n1 − (L − 1), n2 + L − 1⟧. (1.7)
For any w and z in E_κ^{⟦n1, n2⟧}, set the induced transition rate T[w|z] from w to z as:
T[w|z] = Σ_a T[w⟦a+1, a+L⟧ | z⟦a+1, a+L⟧] 1_{w_j = z_j for all j ∈ ⟦n1, n2⟧ ∖ ⟦a+1, a+L⟧}, (1.8)
that is, the sum of the transition rates which make this transition possible in a single jump totally included in w. For a fixed pair (w, z), the contribution of the Z-interval ⟦a+1, a+L⟧ is 0 if T[w⟦a+1, a+L⟧|z⟦a+1, a+L⟧] = 0 (jump not allowed), or if w and z do not coincide outside ⟦a+1, a+L⟧. This includes the case where n2 − n1 is too small, that is, n2 − n1 < L − 1.
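To make the induced rate (1.8) concrete, here is a minimal computational sketch (the function and variable names `induced_rate` and `jrm` are ours, not the paper's): it sums, over all windows of length L fully contained in the word, the rates of the single jumps transforming w into z.

```python
# Hedged sketch of the induced transition rate (1.8): a JRM of range L is
# given as a dict {(u, v): rate} on pairs of size-L words (tuples).

def induced_rate(jrm, w, z, L):
    """Sum of the rates of all single jumps (windows of length L fully
    inside the word) transforming w into z, with w = z off the window."""
    n = len(w)
    total = 0.0
    for a in range(n - L + 1):                        # window w[a:a+L]
        # w and z must agree outside the window
        if all(w[j] == z[j] for j in range(n) if not (a <= j < a + L)):
            total += jrm.get((w[a:a + L], z[a:a + L]), 0.0)
    return total

# Toy example: TASEP-like range-2 rates on {0, 1}: a particle jumps right.
jrm = {((1, 0), (0, 1)): 1.0}
print(induced_rate(jrm, (1, 0, 1, 0), (0, 1, 1, 0), 2))   # 1.0 (jump in window 0)
```

Only the leftmost window contributes here: the other windows either do not carry a positive rate or would require w and z to differ outside the window.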
Notice that using the same notation for the transition rate between two words as for the JRM is harmless, since the two coincide when the lengths of w and z are both L.
We now reformulate in a lemma what has been said so far concerning the cases where κ is finite.

Lemma 1.2. Let κ < +∞. A probability measure ν ∈ M(E_κ^Z) is invariant under T on the line if and only if it solves the system of equations Sys(Z, ν, T) defined by
Line_Z(x⟦n1, n2⟧, ν, T) = 0, for any n1 ≤ n2, for any x⟦n1, n2⟧ ∈ E_κ^{⟦n1, n2⟧}, (1.9)
where
Line_Z(x⟦n1, n2⟧, ν, T) = Σ_{w, z ∈ E_κ^{D⟦n1, n2⟧}} (ν(w) T[w|z] − ν(z) T[z|w]) 1_{z⟦n1, n2⟧ = x⟦n1, n2⟧}. (1.10)

We now define the notion of algebraic invariance of a probability measure with respect to a particle system. The aim of this notion is to disconnect the problem of the well-definedness of a particle system, which brings its own technical difficulties and obstructions when κ = +∞ (see the discussion in Sect. 1.2), from the resolution of the system (1.9), which is "just" an algebraic system and can be solved independently of other considerations.

Definition 1.3. For κ finite or infinite, a probability measure ν ∈ M(E_κ^Z) is said to be algebraically invariant under T on the line (we write: ν is AlgInv by T on the line) if it solves the system of equations (1.9).
Again, in the case where κ < +∞, standard invariance of measures and algebraic invariance are equivalent notions. When κ = +∞, difficulties arise (see Sect. 1.2) and the notion of algebraic invariance is indeed useful.
Extension to Z/nZ. The previous considerations for PS η indexed by Z can be extended to Z/nZ (the finiteness of Z/nZ provides a more favourable setting). The invariance of a measure µ_n on E_κ^{Z/nZ} is characterized by the system Cycle_n(x, µ_n) = 0 for all x ∈ E_κ^{Z/nZ}, (1.11) for
Cycle_n(x, µ_n) = Σ_{w ∈ E_κ^{Z/nZ}} (µ_n(w) T[w|x] − µ_n(x) T[x|w]), (1.12)
where T[w|z] has to be adapted to fit the structure of Z/nZ:
T[w|z] = Σ_a T[w⟦a+1, a+L⟧ | z⟦a+1, a+L⟧] 1_{w_j = z_j for all j ∈ (Z/nZ) ∖ ⟦a+1, a+L⟧}, (1.13)
where, in this context, ⟦a+1, a+L⟧ stands for (a + 1 mod n, …, a + L mod n).
When κ is finite, the existence of a measure µ_n solving the system (1.11) is granted by the theory of finite state space Markov processes.
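Since both κ and n are finite, this can be checked computationally. The following hedged sketch (all names are ours) builds the full generator of a PS on Z/nZ from its JRM and verifies the invariance of a candidate measure µ_n; for the cyclic TASEP, the uniform measure is invariant, since every configuration on the cycle has as many "10" patterns as "01" patterns.

```python
import itertools
import numpy as np

# Hedged sketch: build the generator G of a PS on Z/nZ with finite kappa,
# then test mu G = 0 for a candidate invariant measure mu.

def generator_on_cycle(jrm, kappa, n, L):
    states = list(itertools.product(range(kappa), repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    G = np.zeros((len(states), len(states)))
    for s in states:
        for a in range(n):                     # cyclic window (a, ..., a+L-1)
            window = tuple(s[(a + k) % n] for k in range(L))
            for (u, v), rate in jrm.items():
                if u == window:
                    t = list(s)
                    for k in range(L):
                        t[(a + k) % n] = v[k]
                    G[idx[s], idx[tuple(t)]] += rate
                    G[idx[s], idx[s]] -= rate
    return G, states

# TASEP on Z/4Z: a particle jumps to the right at rate 1.
G, states = generator_on_cycle({((1, 0), (0, 1)): 1.0}, kappa=2, n=4, L=2)
mu = np.ones(len(states)) / len(states)        # uniform measure on all configurations
print(np.allclose(mu @ G, 0))                  # True: uniform is invariant here
```

The same routine can be used to test any candidate µ_n (Gibbs, product, …) against any finite-range JRM on the cycle, at the cost of enumerating the κ^n configurations.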
Again, we disconnect the problem of existence of particle systems from the solution of the algebraic system: invariance and algebraic invariance are equivalent when κ < +∞.

The results
Definition 1.6. - For −∞ < a ≤ b < +∞, a process (X_k, k ∈ ⟦a, b⟧) is said to be a Markov chain on E_κ, or to have a Markov law, if there exists M := (M_{i,j})_{i,j ∈ E_κ}, a Markov kernel (we will also simply say kernel), and an initial distribution ν ∈ M(E_κ), such that X_a ∼ ν and P(X_{k+1} = j | X_a, …, X_{k−1}, X_k = i) = M_{i,j} for any a ≤ k < b. For short, we will say that X (resp. µ) is a (ν, M)-Markov chain on ⟦a, b⟧ (resp. a (ν, M)-Markov law) if its kernel is M and its initial distribution is ν.
- We will say that a law ρ ∈ M(E_κ) is invariant for M (or for this Markov chain) if ρM = ρ, for ρ seen as a row vector. If the initial distribution is ρ, we say that X is an M-Markov chain under (one of) its invariant distributions.
- For ρ ∈ M(E_κ) invariant for M, we call (ρ, M)-Markov chain a process (X_k, k ∈ Z) indexed by Z whose finite dimensional distributions are given by P(X_k = x_k, a ≤ k ≤ b) = ρ(x_a) ∏_{k=a}^{b−1} M_{x_k, x_{k+1}}, for any a ≤ b. Its distribution is called the (ρ, M)-Markov law.
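As a small illustrative sketch (assuming the standard finite-dimensional-distribution formula P(X_k = x_k, a ≤ k ≤ b) = ρ(x_a) ∏ M_{x_k, x_{k+1}}; function names are ours), one can check the consistency of these distributions numerically:

```python
import numpy as np

# Hedged sketch of the (rho, M)-Markov law's finite dimensional distributions.

def markov_law_prob(rho, M, word):
    """P(X_a = word[0], ..., X_b = word[-1]) under the (rho, M)-Markov law."""
    p = rho[word[0]]
    for i, j in zip(word, word[1:]):
        p *= M[i, j]
    return p

M = np.array([[0.7, 0.3], [0.4, 0.6]])
rho = np.array([4 / 7, 3 / 7])                 # solves rho = rho M
total = sum(markov_law_prob(rho, M, (a, b, c))
            for a in range(2) for b in range(2) for c in range(2))
print(round(total, 10))                        # 1.0: the length-3 fdds sum to one
```

The invariance ρ = ρM is what makes these finite dimensional distributions consistent under shifts, so the law is well defined on all of Z.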
• An M-Markov law on E_κ is said to be positive recurrent if a Markov chain under this kernel is positive recurrent (we will also say that M is positive recurrent).
• If all the M_{i,j}'s are positive, we say that M is positive, and write M > 0.
The system of equations {Line_n^{ρ,M,T} ≡ 0, for any n} (as stated in (1.15)) provides the necessary and sufficient algebraic relations between ρ, M and T for the AlgInv of the M-Markov law. This is an infinite system of equations, even when E_κ is finite. It is linear in T, with unbounded degree in M.
- The first goal of this paper is to produce an equivalent finite system of algebraic equations characterizing the invariance of a (ρ, M)-Markov law by T when the set E_κ is finite. The main result is the proof of the equivalence of {Line_n^{ρ,M,T} ≡ 0, for any n} with each of several (equivalent) algebraic systems of degree 6 in M and linear in T (Thms. 1.9 and 1.16, when the range is L = 2 and the memory of the Markov chain is m = 1). These equivalent systems are finite and, moreover, can be explicitly solved using some linear algebra arguments (Thm. 1.29): in words, it is possible to decide if a PS with JRM T possesses an invariant Markov law, or to describe the class of all T that do (which provides some of the applications discussed in Sect. 1.1.2).
• When the cardinality of E_κ is infinite, some additional complications arise (Sect. 2.2), but some results still hold.
• When M possesses some zero entries, a plurality of algebraic behaviours for these systems of equations (and solutions) makes a global approach probably impossible (Sect. 2.4).
- Similar criteria are developed to characterize product measures ρ^Z invariant by T. In this case the finite representations use equations of degree 3 in ρ and linear in T (Thm. 1.20, when the range is L = 2).
- The invariance of the Gibbs distribution with kernel M on the circle Z/nZ is also studied, when E_κ is finite. Theorem 1.9 establishes the equivalence between the invariance of the Gibbs measure (see Def. 1.8) with Markov kernel M on Z/nZ for n = 7 and the invariance of the (ρ, M)-Markov law (for ρ such that ρM = ρ) on the line Z. Besides, Corollary 1.10 implies that if the Gibbs distribution with kernel M is invariant by T on Z/nZ for n = 7, then it is also invariant by T on Z/nZ for any n ≥ 3 (when the range is L = 2).
- When considering a PS indexed by the segment ⟦1, n⟧, some interactions β_ℓ and β_r with the boundaries are introduced (Sect. 1.7). When the range is L = 2, if a Markov law is invariant for some n ≥ 7 on the segment (with fixed boundary interactions), then it is invariant on the line (Thm. 1.34). Some relations between invariant measures on the line and on the segment are provided.
-The 2D case and beyond will be discussed in Theorem 1.28, where a simple necessary and sufficient condition for the invariance of a product measure will be provided (Sect. 1.4).
- The case where T has a larger range L, and/or where the invariant distribution is a Markov law with larger memory m, is discussed in Section 2. The extensions to larger range and memory discussed there are proved with the same ideas as those used for L = 2, with some extra technical complications. We think that presenting the proof in the case L = 2 is needed in order to make the arguments understandable.

Applications
As said above, the theorems we provide allow one to decide whether there exists a Markov law with kernel M (with memory m) invariant under the dynamics of a PS with a given T. This is done by explicitly solving a finite polynomial system with small degree in M. These kinds of problems are solved using some algebra, for example the computation of a Gröbner basis (see Sect. 3.1), using a computer algebra system if needed. The theorems also allow one to find pairs (T, M) for which this invariance occurs, and then to design PS having a simple, known invariant distribution.
Hence, having at hand a simple algebraic characterization of the PS admitting an invariant Markov law allows one to extend considerably the family of PS for which explicit invariant distributions can be found, and we think that, as illustrated by what we say below, the interest of these results goes far beyond invariant Markov laws.
In the sequel, when we say that we use a specific model with general rates, we mean that we treat the positive rates of this specific model as "free variables": for example, the rates of the voter model on the line are T[a,1−c,b|a,c,b] = 1_{c=a} + 1_{c=b}. In its general version, the jump rates T[a,1−c,b|a,c,b] for c ∈ {a, b} are considered as variables (that can be adjusted), while all the other rates stay equal to zero.
In addition to the results presented in the preceding section, we present here several applications of our work.
- In Section 3.1.2, we prove that the voter model does not admit any Markov law of any memory as invariant distribution. The general rate version is explored, and the parameters for which there exists an invariant Markov law on the line are discussed.
- In Section 3.1.3, the contact process is discussed: we prove that this process does not have a Markov law of any memory m ≥ 0 as invariant distribution.
- In Section 3.1.4, the TASEP and some variants are explored: zero-range type processes, the 3-colour TASEP and the PushASEP.
• For the zero-range type processes, we prove that there exists a family of distributions F such that, depending on T, either all the product measures ρ^Z with ρ ∈ F are invariant by T, or none of them is.
• In the general rate 3-colour TASEP, some necessary and sufficient conditions on T are given so that there exists a Markov law with positive kernel M that is invariant by T.
• For the PushASEP, we explain how some special types of PS with range L = ∞ can be transformed and solved with our results.
- In Section 3.1.1, the stochastic Ising model is analyzed, and its well-known invariant Markov measure on the line (Gibbs on the cycle) is recovered from our results.
- The possibility offered by our theorems to find automatically pairs (T, M), say on the space E_3 = {0, 1, 2} (with 3 colours) and L = 2, for which the PS with JRM T leaves the Markov law with kernel M invariant, allows us to find some PS on E_2 = {0, 1} (with 2 colours) and L = 3 which possess some hidden Markov chain distributions as invariant distributions, using a projection from E_3 to E_2. As far as we are aware, this is the first time that a hidden Markov chain is shown to be invariant under a PS on the line. This is discussed in Section 3.2. We think that this method will allow one, in the future, to find many invariant distributions for PS with 2 colours or more.
- In Section 3.3.1, the set of pairs (T, M) for which the Markov law with positive kernel M is invariant under T is completely and explicitly solved in the case κ = 2 and L = 2. This case corresponds to standard PS on the line, where 1 and 0 are used to model the presence or absence of a particle at each position. Under these assumptions and mass preservation (see Def. 3.4), we prove that the only Markov laws that are AlgInv by this type of T are the i.i.d. measures.
- In Section 3.3.2, the set of pairs (ρ, T) for which the product measure with marginal ρ is invariant under T is completely and explicitly solved in the case κ = 2 and L = 2.
- In Section 3.5, we use our criteria for the invariance of product measures under the dynamics of a PS defined on Z^2 to provide many explicit PS admitting product measures as invariant measures.

Some pointers to related papers
Given an infinitesimal generator (or a JRM) of a particle system, the existence of a stochastic Markov process with this generator can be proved when the number of colours is finite, or if sup_w Σ_{w'} T[w|w'] < +∞, using for example the so-called graphical representation due to Harris [18] (see also Swart [23]), or by the Hille-Yosida theorem and other considerations coming from functional analysis and measure theory (see e.g. Liggett [21], Swart [23], Kipnis & Landim [19]); see also Andjel [2], where proofs of existence and constructions can be found in some particular cases.
When the state space E_κ is infinite, complications arise, since even the state at a single site may diverge in finite time, and then, in general, a JRM T does not allow one to define a particle system properly. Sufficient conditions for the well-definedness of a particle system in the infinite case can be found in Liggett ([21], Chap. IX), Kipnis & Landim [19], Balázs et al. [4], Andjel [2], Fajfrová et al. [13].
Other works related to the present one concern the computation of invariant distribution(s) of a given PS, or the characterization of its ergodicity (Blythe & Evans [5], Crampe et al. [8], Fajfrová et al. [13], Greenblatt & Lebowitz [17], Kraaij [20]). Numerous results concern works not directly related to the present paper: study of PS out of equilibrium, their speed of convergence, their time to reach a certain state, among others.
As far as we are aware, the paper whose point of view is the closest to the present work, is Fajfrová et al. [13], in which some conditions for the invariance of product measures are designed, for mass migration processes (see Def. 3.5 and below).
We add that the present work has been inspired by some similar works on probabilistic cellular automata, where the transition matrices for which simple invariant measures exist, have been deeply investigated, and are at the heart of the theory, Toom et al. [24], Dai Pra et al. [9], Marcovici & Mairesse [22], Casse & the second author [7] (see also Sect. 3.4).

Main results
The case L = 1 being uninteresting here, we examine in detail the case where the range is L = 2, which is representative of this kind of model, as will be seen in Section 2 where larger ranges are investigated.
For a given JRM T, the exit rate out of w ∈ E_κ^2 is defined by T[w] := Σ_{w'} T[w|w']. Consider a Markov chain with kernel M, and let ρ be one of its invariant distributions. The equation Line_n^{ρ,M,T}(x⟦1, n⟧) = 0 (as defined in (1.14)) rewrites as (1.15). From Lemma 1.2, a (ρ, M)-Markov law under its invariant distribution is invariant by T on the line when Line_n^{ρ,M,T} ≡ 0 for all n ∈ N. Since the range is L = 2, the values of x_0 and x_{n+1} "just outside" x⟦1, n⟧ play a role (they are in the dependence set of ⟦1, n⟧, as defined in Def. 1.7); we then need to sum over all the possible values of (x_0, x_{n+1}). But, because of the appearance of the pattern M_{x_i,u} M_{u,v} M_{v,x_{i+3}} T[u,v|x_{i+1},x_{i+2}], it is a bit simpler to additionally consider the extra values (x_{−1}, x_{n+2}) in the sum, even if they are not in the dependence set: these additional terms concern only the representation of the Markov law, and the fact that ρ is the invariant distribution of M (not the JRM).
We now present the main theorems of the paper. The proofs that are not given in this section, are postponed to Section 4.

Invariant Markov laws with positive kernel
In this section, E_κ is finite, and the Markov kernel M = (M_{i,j})_{i,j ∈ E_κ} has positive entries. The measure ρ is the invariant law of a Markov chain with kernel M, and is characterized by ρ = ρM.
Define the normalized version NLine of Line by dividing Line_n^{ρ,M,T}(x) by the weight of x under the Markov law, so that, for n = 1 and any x ∈ E_κ, NLine_1^{ρ,M,T}(x) := Line_1^{ρ,M,T}(x); for n ≥ 2 and any x ∈ E_κ^n, this normalization makes the quantities Z_{a,b,c,d}^{M,T} appear. We will drop the exponents M, T and write Z_{a,b,c,d} instead when they are clear from the context.

Remark 1.7 (Key point). One can say that the leading idea of the paper is the following: when a Markov law (ρ, M) is invariant by T, then Z possesses a huge number of nice algebraic additive properties; this will be seen in all the theorems of the paper. Some of the additive properties of Z will allow us to control NLine^{ρ,M,T} and show its nullity. The object Line^{ρ,M,T}, more natural from a probabilistic perspective (without normalisation), is not the right object with which to handle these additive properties.
Now, for u⟦1, n⟧, a tuple of elements of E_κ, denote by the multiset of its k-subwords the words obtained by deleting n − k of its letters (keeping the order), so that, for example (following our notation, below the abstract), a⟦1, 7⟧^{{4}} = (a_1, a_2, a_3, a_5, a_6, a_7). The map Master_7^{M,T} will play an important role in the sequel. Expanded, this compressed notation coincides with Cycle_n(x, µ_n) given in (1.12) for µ_n a Gibbs measure with kernel M.

Definition 1.8. A process (X_k, k ∈ Z/nZ) indexed by Z/nZ for some n ≥ 1 and taking its values in E_κ^{Z/nZ} is said to have a Gibbs measure with kernel (M_{i,j})_{i,j ∈ E_κ}, a non-negative matrix, if
P(X = x) = (1 / Trace(M^n)) ∏_{k ∈ Z/nZ} M_{x_k, x_{k+1}}, for any x ∈ E_κ^{Z/nZ}.
For short, we will say that X follows the M-Gibbs measure on Z/nZ.
It is immediate to check that the notion of standard Gibbs measure (with nearest-neighbour interaction) on Z coincides with the standard notion of Markov chain (when the state space is finite). The same property holds on the cylinder: measures of the multiplicative type defined in Definition 1.8 coincide with Gibbs measures on this space (see e.g. [16], Sect. 3).
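Under our reading of Definition 1.8 (a cyclic product of kernel entries, normalized by Trace(M^n)), the weights of the M-Gibbs measure can be checked to sum to 1; the sketch below (names are ours) does this by brute force for a small cycle.

```python
import itertools
import numpy as np

# Hedged sketch: the M-Gibbs weight of x on Z/nZ is
# prod_k M[x_k, x_{k+1 mod n}] / Trace(M^n).

def gibbs_weight(M, x):
    n = len(x)
    w = 1.0
    for k in range(n):
        w *= M[x[k], x[(k + 1) % n]]
    return w

M = np.array([[0.7, 0.3], [0.4, 0.6]])
n = 5
Z = np.trace(np.linalg.matrix_power(M, n))     # normalizing constant Trace(M^n)
total = sum(gibbs_weight(M, x) / Z
            for x in itertools.product(range(2), repeat=n))
print(round(total, 10))                        # 1.0
```

The identity Σ_x ∏_k M_{x_k, x_{k+1}} = Trace(M^n) is exactly what makes Trace(M^n) the right normalizing constant.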
The Perron–Frobenius theorem asserts that if a square matrix A is non-negative and irreducible, then A has a real eigenvalue λ larger than (or equal to, if A is periodic) the modulus of the other ones, and the corresponding right and left eigenvectors may be chosen with positive entries. In the sequel we qualify these eigenvectors and this eigenvalue as "main". Hence, if M is irreducible, one may suppose w.l.o.g. that M is a classical Markov kernel. NCycle_7^{M,T}(a⟦1, 7⟧) then expresses the balance when the "central letter" a_4 of a word a⟦1, 7⟧ is replaced by another letter a'_4. A key result of the paper is the following: the infinite system of equations {Line_n^{ρ,M,T} ≡ 0, n ≥ 1}, which by definition is the invariance of the Markov law by T on the line, is equivalent to many different finite systems of equations with bounded degree (in M):

Theorem 1.9. Let E_κ be finite and L = 2. If M > 0, then the statements (i) to (ix) below are equivalent.

This example shows how easy it is to check that a guessed solution is invariant by a given JRM T. It does not say, however, how such a guess was made: in fact, as explained in Section 1.6 and the subsequent ones, many computations can be done explicitly using a computer algebra system. Here, we just specified some arbitrary entries of the matrices T and M and computed some others so that the system (viii) of Theorem 1.9 is satisfied.

Remark 1.12. - The appearance of "0" everywhere in the Theorem is arbitrary. It may be replaced by any constant element of E_κ in the previous statements.
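The "w.l.o.g." step above rests on a classical transform (our reading; names in the sketch are ours): an irreducible non-negative matrix A with main eigenvalue λ and main right eigenvector V yields the Markov kernel M_{i,j} = A_{i,j} V_j / (λ V_i), which carries the same multiplicative structure.

```python
import numpy as np

# Hedged sketch of the Perron-Frobenius normalisation: turn an irreducible
# non-negative matrix into a Markov kernel via its main eigenpair.

def to_markov_kernel(A):
    w, v = np.linalg.eig(A)
    k = np.argmax(w.real)                      # main (Perron) eigenvalue
    lam, V = w[k].real, np.abs(v[:, k].real)   # main right eigenvector, entries > 0
    return A * V[None, :] / (lam * V[:, None])

A = np.array([[1.0, 2.0], [3.0, 0.5]])         # positive, hence irreducible
M = to_markov_kernel(A)
print(np.allclose(M.sum(axis=1), 1.0))         # True: rows sum to one
```

The row sums equal (AV)_i / (λ V_i) = 1 precisely because AV = λV, which is why the transform always produces a genuine Markov kernel.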
-The positivity of M is a strong condition whose relaxation entails many difficulties. It is discussed in Section 2.4.
- [Link with reversibility] The condition (1.24) is equivalent to the fact that the PS with JRM T is reversible with respect to the Gibbs measure with kernel M on any cylinder of size ≥ 3. As usual, reversibility implies invariance. However, invariance and reversibility are not equivalent, even for Gibbs measures: Theorem 1.9 gives the complete picture. In particular, (1.24) implies Z_{a,b,c,d}^{M,T} ≡ 0, which implies Conditions (ii) to (ix) of Theorem 1.9. The converse does not hold.
- Further in the paper, we will state Theorem 2.4, which under some conditions implies a result somewhat stronger than Theorem 1.9: NCycle_n^{M,T} ≡ 0 for any n ≤ κ is necessary and sufficient for the Markov chain (ρ, M) to be invariant by T. When the number of colours satisfies κ < 7, this provides a criterion potentially simpler to check than those given in Theorem 1.9.
We state here a theorem which is important in many applications. Consider a PS with JRM T defined on Z, and its analogue on Z/nZ. In fact, only the case m = 1 is a corollary of Theorem 1.9; the strongest form, for general memory m ≥ 1 and range L, is a corollary of Theorem 2.1, which treats the invariance of Markov laws with memory m.

Remark 1.14. - Notice that if T is identically 0, then all the states are absorbing, so that all Markov laws are invariant.
-If the hypothesis of the theorem holds for some fixed n, then the conclusion holds if the memory size m satisfies m + L ≤ n.
We will use this theorem in Section 3 for some applications on the contact process and the voter model.
Proof of Theorem 1.13. By Theorem 1.9 (for m = 1) or Theorem 2.1 (for m ≥ 1), if there exists a Markov law with memory m and full support invariant for T on the line, then the same property holds on Z/nZ for n ≥ m + L for the corresponding Gibbs measure. But the invariance of a full-support measure is incompatible with the existence of a non-trivial absorbing subset.
The reason is that the subwords which contain at least one cross (×) are the same in both pictures, so that the computation can finally be reduced by taking the difference of the subwords of size 4 appearing in the last picture-equation.
The condition Master_7^{M,T} ≡ 0 is equivalent to the relation (1.26). This relation shows that the LHS of (1.26) does not depend on a_4: it is the "removal cost" of one letter at equilibrium. This identity will allow us to remove "one letter" inside linear combinations involving Z, for any word a⟦1, n⟧ with n ≥ 7 letters and any 4 ≤ k ≤ n − 3. This property is reminiscent of other algebraic notions, such as rewriting systems, dependence in a vector space, or relations in the presentation of a group by generators and relations.
- Replacement cost of one letter: the role of Replace_7 is to measure the difference between NLine evaluated at two (long enough) words w and w' which are equal up to a central letter. Again, the same graphical representation of the simplification taking place can be given; note that we keep the same notations as in (1.25). From this, it is clear that Replace_7 ≡ 0 is a necessary condition for the Markov law (ρ, M) to be invariant by T on the line. The sufficiency of this condition is not obvious (see Sect. 2.4 for the extension to the case where M is not supposed positive).
There are many links between the systems NCycle_n ≡ 0 for different values of n; here are some of them, which prove that the Markov law with kernel M is invariant by T if (M, T) solves a system of equations of degree 6 in M and linear in T (the system being finite when κ is finite). The proof is given in Section 4. Are the analogous systems for smaller lengths equivalent? We tested this with a computer for κ = 5 (by the computation of some Gröbner bases), and the answer turns out to be negative. We will see in the sequel that 7 is the "critical" length of the systems associated with the range L = 2. In Section 2, we will give the critical length associated with a general range L.
In the sequel, W_1 ⋯ W_k will stand for the word obtained by the concatenation of the words W_1, …, W_{k−1} and W_k.

Remark 1.18 (Linearity principle). From Theorem 1.9, if a Markov law with kernel M > 0 is invariant by T on the line, then the M-Gibbs measure is invariant on Z/nZ for any n ≥ 3. This can be guessed and proved as follows. Take three words: p, w and s, the "prefix", the "pattern" and the "suffix", and consider the word W_n = p w^n s. If the M-Markov law is invariant by T on the line, then the corresponding balance holds for W_n for every n. In fact, this remark is also valid for any range L, and even the converse holds (see Thm. 2.4).

Invariant product measures
Definition 1.19. A process (X_k, k ∈ I) indexed by a finite or countable set I is said to have the product distribution p^I, for a distribution p on E_κ, if the random variables X_k are i.i.d. with common distribution p.
Since product measures are special Markov laws, we can use what has been said so far to characterize the product measures that are invariant by T, by replacing M_{i,j} by ρ_j in the previous considerations (and rewriting, for example, Thm. 1.9 restricted to this special case). But the "7" appearing everywhere in this theorem is no longer relevant for a product measure: the crucial length here is "3"! To see this, observe that when M_{i,j} = ρ_j, the quantity Z_{a,b,c,d}^{M,T} does not depend on (a, d), so that we may drop these indices and write Z^{ρ,T}, and NCycle_n^{M,T}(a⟦0, n−1⟧) simplifies accordingly. We have the following analogue of Theorem 1.9, which provides some finite certificates/criteria for the algebraic invariance of product measures.
If ρ has support E_κ, then the following statements are equivalent:

Remark 1.21 (Comparison with the detailed balance condition). Consider a probability distribution ρ on E_κ with full support. A natural/folklore sufficient condition for the associated product measure to be invariant by T on the line is that it solves the following system:
ρ_u ρ_v T[u,v|u',v'] = ρ_{u'} ρ_{v'} T[u',v'|u,v], for all u, v, u', v' ∈ E_κ. (1.31)
Summing this over (u, v), one sees that this condition implies Z^{ρ,T} ≡ 0. Theorem 1.20 applies to these situations, since when Z^{ρ,T} ≡ 0, conditions (ii) to (iii) are clearly satisfied. The crucial point here is that Z^{ρ,T} ≡ 0 is just a sufficient condition, not a necessary one (as we will see by providing examples in Sect. 3): Theorem 1.20 gives the complete necessary and sufficient conditions.

Remark 1.22. For the sake of simplicity, in Section 1 we restrict ourselves to criteria/properties for the invariance of product measures with full support. Nevertheless, contrary to the Markov case, the case of product measures with a smaller support can also be considered without any problem (see Sect. 2.3).
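The folklore condition can be tested mechanically. The sketch below (our reading of the detailed-balance-type system (1.31); function names are ours) checks it for two toy range-2 dynamics: the symmetric exclusion process satisfies it, while the TASEP does not, even though Bernoulli product measures are known to be invariant for the TASEP on Z, illustrating "sufficient but not necessary".

```python
import itertools

# Hedged sketch: check rho(u)rho(v) T[uv|u'v'] = rho(u')rho(v') T[u'v'|uv]
# for all pairs of size-2 words (our reading of (1.31)).

def detailed_balance_product(rho, jrm, kappa):
    words = list(itertools.product(range(kappa), repeat=2))
    return all(
        abs(rho[w[0]] * rho[w[1]] * jrm.get((w, z), 0.0)
            - rho[z[0]] * rho[z[1]] * jrm.get((z, w), 0.0)) < 1e-12
        for w in words for z in words
    )

rho = [0.3, 0.7]
# SSEP-style symmetric exchange: (1.31) holds for any Bernoulli(p).
ssep = {((1, 0), (0, 1)): 1.0, ((0, 1), (1, 0)): 1.0}
print(detailed_balance_product(rho, ssep, 2))          # True
# TASEP (asymmetric): (1.31) fails, although rho^Z is still invariant.
tasep = {((1, 0), (0, 1)): 1.0}
print(detailed_balance_product(rho, tasep, 2))         # False
```

This is exactly the gap that Theorem 1.20 closes: the theorem's conditions detect invariant product measures even when the reversibility-type check above fails.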
"Range 2" on a more general class of graphs. Most of the previous discussions on AlgInv Markov laws rely on the geometry of Z, but it turns out that for AlgInv product measures, some of the previous properties still hold when one defines a PS on a more general graph, in the case where the JRM has range 2.
Formally, consider a continuous-time Markov process on E_κ^G, for G a graph. Assume that the pair of states (η(x), η(y)) of two neighbouring vertices x and y jumps to a new pair of states with a rate which, through a function p, may also depend on the positions. In this case, the equilibrium equations involve, for any finite A ⊂ G, a map Line_A^{ρ,T,p}, where D(A) denotes as before the dependence set, whose definition needs to be extended to this type of graphs.

Definition 1.23. We will say that ρ^G is AlgInv by pT if ρ^G satisfies Line_A^{ρ,T,p} ≡ 0 for all finite A ⊂ G (again, when E_κ is finite and G is locally finite, invariance and algebraic invariance are equivalent notions).

Theorem 1.24. Let #E_κ < +∞, L = 2 and ρ ∈ M(E_κ) with full support. Depending on p, we have the following equivalences. Hence, in both cases, the geometry of the graph G does not matter, since Z^{ρ,T} only depends on the states (given by η).

A glimpse in 2D and beyond
We consider in this part PS indexed by Z d , whose configuration space is therefore E Z d κ . We suppose that the JRM, instead of being defined (as done in (1.1)) by "the jump rate of size-L subwords", is defined by Formally, replace J defined in (1.2) by J (d) , and the maps m i,w,w′ on the set of configurations are defined for any (i, w, w′ ) ∈ J (d) , generalizing naturally the m i,w,w′ 's defined in (1.3). The corresponding generator acts on continuous functions f sufficiently smooth (see discussion below (1.4)). The dynamics of this PS is as follows: starting from a (random or not) configuration η 0 = (η 0 z , z ∈ Z d ), each sub-configuration (η 0 z , z ∈ h) = u indexed by a hypercube h equal to HC[L, d] up to a translation, is replaced by the sub-configuration v with the same shape, with rate T [u|v] . When #E κ < +∞, this defines a Markov process (see discussion below (1.4)).
It is then possible to state the analogue of Line Z in these settings: let C be a finite subset of Z d . Set where x(C) = (x c , c ∈ C) is any element of E C κ , and where D(C) is the dependence set of C: for any subset F of Z d , the dependence set of F is Again, for any w, z ∈ E D(C) κ , the global transition rate from w to z is Finally, the normalized version NLine ρ,T is defined for any finite domain C by for any x(C) ∈ E C κ . (1.37) The first theorem we want to state gives a necessary and sufficient condition for a product measure ρ Z d to be invariant by some PS with JRM T. Again, when #E κ < +∞, it provides a criterion involving a system composed of a finite number of equations. After that, we will explain how to obtain an equivalent system with a much smaller number of equations.
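As a rough illustration (ours, not the paper's), assuming the dependence set D(C) is the union of all side-L lattice hypercubes meeting C (consistent with the decomposition used below), it can be enumerated directly; the function names are illustrative.

```python
from itertools import product

def hypercubes_meeting(C, L, d):
    """All hypercubes h = z + {0,...,L-1}^d (as frozensets of lattice
    points) that intersect the finite set C of points of Z^d."""
    offsets = list(product(range(L), repeat=d))
    cubes = set()
    for c in C:
        # a cube with corner z contains c iff z = c - o for some offset o
        for o in offsets:
            z = tuple(ci - oi for ci, oi in zip(c, o))
            cubes.add(frozenset(tuple(zi + oi for zi, oi in zip(z, o2))
                                for o2 in offsets))
    return cubes

def dependence_set(C, L, d):
    """D(C): union of all side-L hypercubes intersecting C."""
    return set().union(*hypercubes_meeting(C, L, d))

# Example: in Z^2 with L = 2, D({(0,0)}) is the 3x3 block around the origin,
# covered by the 4 unit squares containing the origin.
D = dependence_set({(0, 0)}, L=2, d=2)
```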
Let C be a finite subset of Z d and D(C) its dependence set. The dependence set is, by definition, a union of hypercubes h with side L: depending on C, some of them may be completely included in C, and some contain points both in C and outside C. The balance NLine ρ,T (x(C)) can be decomposed as a sum over these hypercubes. Indeed, using the decomposition of T along simple jumps (1.37), one gets depending on whether h is totally included in C or not. Here, the geometry of Z d appears: when h is not included in C, h ∩ C can be (depending on C) any subset of h, and we then need to mark this dependence with the pair (h ∩ C, h) as an exponent of Z. A simple analysis of the summation variables and the simplification of the quotient of weights of unchanged colours give: (1.39) and more generally, for h such that h ∩ C ≠ ∅, h ⊄ C, We said "more generally" because when h ⊂ C, When |E κ | < +∞, a product measure ρ Z d is AlgInv by T if and only if all the maps NLine ρ,T ≡ 0. For this, it is not needed that Z ≡ 0 (but it is sufficient): Theorem 1.25. When |E κ | < +∞, a product measure ρ Z d is invariant by T if and only if the two following conditions hold: as a trivial consequence we get the following condition, weaker than reversibility: Proof of Theorem 1.25. The product measure ρ Z d is invariant by T if and only if for at least one sequence can be written as a sum of the contributions of the hypercubes h such that (h ∩ C) = (h ∩ C′ ). A simple inspection of the balance in the corresponding sums as expressed in (1.38) gives The theorem states something stronger than the fact that this property holds for all C′ = C i+1 , C = C i : it suffices that this property holds for those included in HC[2L − 1, d]. It remains to say that this last condition comes from (1.43): the difference between the two NLine concerns only the hypercubes h that intersect the new vertex c, and thus the union of these hypercubes is included in HC[2L − 1, d].
A given union of hypercubes appearing in such a difference can be realised by taking two sets

Remark 1.27. (i) It is possible to reduce the number of necessary and sufficient conditions in Theorem 1.25 by designing a particular growing sequence (C i ) in such a way that the family (h, C i ∩ h, C i+1 ∩ h) (up to translation) involved in the right hand side of (1.43) for some i takes only a very small number of values: in Z 2 , for 2 × 2 squares, we can manage to get only 2 (kinds of) differences, starting from C 0 = {(0, 0), (0, 1), (1, 0)}. This is exemplified in Theorem 1.28 and in its proof.
(ii) What has been said so far concerns JRM indexed by hypercubes. If the PS of interest is given using some JRM indexed by some other "shape F ", it is still possible to represent such a PS using a JRM indexed by hypercubes (by taking a hypercube h large enough to contain F , and by leaving the colours in h \ F unchanged). However, in Z d the number of equations grows rapidly if one uses this kind of expedient. The best thing to do is to adapt what has been said above to this special shape.

JRM indexed by 2 × 2 squares in 2D
Following Remark 1.27, we design a set of necessary and sufficient conditions for the invariance of a product measure ρ Z 2 "less abundant" than those given in Theorem 1.25. We examine this in the 2D case, for a PS with JRM indexed by 2 × 2 squares, denoted further Sq (as the one given in (1.32)).
Consider the three following sets: Theorem 1.28. Let κ < +∞. Consider ρ a probability distribution with full support on E κ and T = (T [u|v] ) u,v∈E Sq κ a JRM indexed by Sq. The measure ρ Z 2 is invariant by T on Z 2 iff the two following conditions hold simultaneously: As a simple corollary: if NLine ρ,T ≡ 0 on Γ 2 for a ρ with full support, then ρ Z is invariant by T on Z.
1.6. How to explicitly find invariant Markov laws or invariant product measures on the line?
In real applications, often T is given, and the need is to find a Markov kernel M such that the M -Markov law is AlgInv by T. Let us call When |E κ | < +∞, by Theorem 1.9, finding such an M amounts to finding S 7 (T) (which can be empty).
Assume T is given, and let us determine S 3 (T). Setting This is a linear system in ν; therefore it can be solved by means of linear algebra. If no positive solution ν exists, then S 3 (T) = ∅. Assume that a positive solution ν exists. Define for any a, b ∈ E κ the row matrices L a,b , the square matrices N a , and the vector R: For each a, take the pair of left and right eigenvectors (ℓ = ℓ a , r = r a ) with positive entries of N a corresponding to the main eigenvalue (notion defined below Def. 1.8), normalized so that ℓ a 1 = ℓ a R = 1, and r a ℓ a = 1.
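The extraction of the Perron pair can be sketched numerically; this is an illustrative numpy computation (the matrices N a and the normalisations involving R are as in the paper and not reproduced here; we only enforce the normalisation ℓ r = 1 for a generic positive matrix N).

```python
import numpy as np

def perron_pair(N):
    """Left and right eigenvectors of the main (Perron) eigenvalue of a
    matrix with positive entries, normalised so that l @ r = 1."""
    w, V = np.linalg.eig(N)
    r = V[:, np.argmax(w.real)].real          # right Perron eigenvector
    wl, Vl = np.linalg.eig(N.T)
    l = Vl[:, np.argmax(wl.real)].real        # left Perron eigenvector
    # Perron eigenvectors can be chosen with positive entries: fix signs
    r = r if r.sum() > 0 else -r
    l = l if l.sum() > 0 else -l
    l = l / (l @ r)                           # enforce l @ r = 1
    return l, r

N = np.array([[2.0, 1.0], [1.0, 3.0]])        # toy positive matrix
l, r = perron_pair(N)
```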
Recall the considerations just above (1.46).  The proof is provided in Section 4.

Finding the set of invariant product measures
Let T be given and |E κ | < +∞. We now explore some necessary and/or sufficient conditions for the existence of product measures invariant by T on the line. Define the symmetric version S of T by The product measure ρ Z is invariant by S (or by any symmetric JRM S) on the line iff Z ρ,S ≡ 0.
Proof. (i) The set of JRM that preserve a given invariant distribution is a cone. Moreover, the product measure ρ Z is preserved by "space" reversibility: if ρ Z is invariant by the JRM T, then it is also invariant by the JRM T′ defined by (ii) When the product measure ρ Z is invariant by S, then NCycle ρ,S 4 ≡ 0 by Theorem 1.20, which implies = 0 for any a, b ∈ E κ , and then Z ρ,S ≡ 0. Conversely, if Z ρ,S ≡ 0, by the criteria of Theorem 1.20, the product measure ρ Z is invariant by S on the line.
Hence, to know if there exist some product measures invariant by a given T, one can proceed as follows: (a) compute S; (b) solve the equation Z ρ,S ≡ 0 with unknown ρ (a pretreatment can consist in replacing in Z ρ,S each occurrence of ρ x ρ y by ρ x,y in order to get a linear equation in the vector (ρ u,v , (u, v) ∈ E 2 κ )); after that, it remains to check if indeed ρ u,v can be written under the form ρ u ρ v (notice that in this case ρ u = (ρ u,u ) 1/2 ); (c) if (b) provides no solution, then no product measure is invariant under T. If (b) provides some solutions, they are candidates to be invariant by T, and it remains to check whether Cycle ρ,T 3 ≡ 0 or not.
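The factorisation check in step (b) can be sketched as follows; this is our illustration (function names are ours), using that for a product measure ρ u,u = ρ u 2 , so the candidate marginal is read off the diagonal.

```python
import numpy as np

def factor_as_product(R, tol=1e-12):
    """Try to write R[u, v] = rho[u] * rho[v] for a probability vector rho.
    Returns rho if the factorisation holds (within tol), else None."""
    rho = np.sqrt(np.diag(R))          # candidate: rho_u = sqrt(R[u, u])
    if not np.allclose(np.outer(rho, rho), R, atol=tol):
        return None                    # not a rank-1 product
    if abs(rho.sum() - 1.0) > 1e-9 or np.any(rho < 0):
        return None                    # not a probability vector
    return rho

rho = np.array([0.2, 0.3, 0.5])
R = np.outer(rho, rho)                 # a genuine product: factorisation succeeds
```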

Models in the segment with boundary conditions
In the literature, often, when a PS is considered on a segment 1, n , the dynamics is the sum of two contributions:
- the inner jump rates are given as before (using a JRM T),
- the boundary jumps are driven by some additional JRM depending on the states of the neighbourhoods of 1 and n.
This motivates the following definition: is said to be AlgInv by T 1,n on the segment 1, n if it solves the following system: where the extra B in the notation denotes the presence of a boundary Recalling that T [w|z] is the induced jump rate on an interval defined in (1.8), we define T 1,n as the sum of this induced jump rate and some boundary effects at the left and at the right of the segment, given by some jump rate matrices β ℓ and β r with range L − 1: We go on focusing on AlgInv Markov laws here. Take again M a positive Markov kernel, and ρ its invariant distribution. Define where ρ is the unique element of M(E κ ) such that ρM = ρ. For n ≥ 3, a simple computation shows (see if needed the forthcoming Sect. 4) ≡ 0, and observe that for any

Proof. Suppose that a (ρ, M )-Markov law is invariant by T on the line. The key point in the proof is that, if (X 0 , · · · , X n+1 ) is distributed according to the (ρ, M )-Markov law under its invariant distribution, then (X 1 , · · · , X n ) is also distributed according to the (ρ, M )-Markov law under its invariant distribution. Hence, it is possible to build explicitly β ℓ and β r in such a way that they emulate the exterior effects on the segment 1, n . It suffices then to take simply

Remark 1.36. What is done in this section is somewhat related to the matrix ansatz used by Derrida et al. [10] to find and describe the invariant distribution µ n of the TASEP on a segment 1, n , in the sense that it relies on a telescopic scheme.

Extension of Theorem 1.9 to larger range and memory
The case L > 2 can be treated as the case L = 2 was, with some adjustments. The case of AlgInv Markov laws with memory m > 1 can also be handled. We discuss both extensions simultaneously here. A first change concerns the "7", which played a special role in Theorem 1.9 and will be replaced by h = 4m + 2L − 1. (2.1) As usual, a Markov chain with Markov kernel M and memory m ≥ 0 is a process (X k , k ≥ k 0 ) (for some k 0 ) whose distribution is characterized by for j − m ≥ k 0 , and by an initial distribution µ ∈ M(E m κ ), the distribution of (X k0 , · · · , X k0+m−1 ). The Markov kernel M is a matrix of size κ m × κ with nonnegative entries such that, for any x ∈ E m κ , ∑ y∈Eκ M x,y = 1. We call such a Markov kernel a Markov kernel with memory m.
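To fix ideas, a memory-m kernel indexes its rows by the last m states. A minimal sampling sketch (ours, not the paper's; the XOR kernel below is a toy example with deterministic transitions):

```python
import random

def sample_chain(M, kappa, m, init, n, seed=0):
    """Sample n further steps of a memory-m chain on {0,...,kappa-1}.
    M maps a length-m tuple of past states to a probability vector of
    size kappa; init is the initial m-tuple (drawn from mu)."""
    rng = random.Random(seed)
    x = list(init)
    for _ in range(n):
        past = tuple(x[-m:])
        x.append(rng.choices(range(kappa), weights=M[past])[0])
    return x

# Memory-2 chain on {0, 1}: next state is the XOR of the two previous ones
M = {(a, b): [1.0 if c == a ^ b else 0.0 for c in (0, 1)]
     for a in (0, 1) for b in (0, 1)}
x = sample_chain(M, kappa=2, m=2, init=(0, 1), n=10)
```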
We let Line ρ,M,T n (x 1, n ) be the equation Line Z (x 1, n , ν, T) where ν is the M -Markov law with memory m and JRM T (we may use the same notation as before since, in the case (L, m) = (2, 1), we recover the same definition as before). The equation Line ρ,M,T n (x 1, n ) = 0 rewrites: The quantity which plays the role of Z in these settings is: where T out The quantity which plays the role of "4" as in (1.23) is We extend the definition of NCycle n for n ≥ m + 1: for any x ∈ E h κ and y ∈ E κ , extend Master M,T and Replace M,T by:

The previous point may lead one to think that it could be a good idea to represent any PS by a JRM with range 2. This is always possible by encoding configurations η ∈ E Z κ as a sequence (η j , j ∈ Z) of overlapping subwords of size L − 1: The resulting η is a word on the alphabet A = E L−1 κ . The initial PS on E Z κ with range L induces a PS on (E L−1 κ ) Z with range 2. However, since our theorem characterizes the invariant Markov laws with some fixed memory m and full support, this transformation is not suitable: the distribution of η cannot have full support for L ≥ 3 (due to the overlapping of η j and η j+1 ).
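The overlapping-subword encoding, and the overlap constraint that prevents full support, can be sketched as follows (an illustration under our naming; `encode` produces the word on the alphabet E κ L−1 ):

```python
def encode(eta, L):
    """Encode a configuration as the sequence of its length-(L-1) subwords:
    eta_bar[j] = (eta[j], ..., eta[j + L - 2])."""
    return [tuple(eta[j:j + L - 1]) for j in range(len(eta) - L + 2)]

def consistent(eta_bar):
    """The image of `encode` never has full support for L >= 3: consecutive
    encoded letters must agree on their overlap of length L - 2."""
    return all(a[1:] == b[:-1] for a, b in zip(eta_bar, eta_bar[1:]))

eta = [0, 1, 1, 0, 2]
bar = encode(eta, L=3)     # [(0, 1), (1, 1), (1, 0), (0, 2)]
```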
The following question is due to an anonymous referee. The proof is given in Section 4.

The case κ = +∞
Here we will consider Markov kernels with positive entries (the case with possibly zero entries is discussed in Sect. 2.4). The main problem in the case κ = +∞ is that the sums defining Line are now infinite series, and therefore some conditions need to be satisfied in order to rearrange terms as done in the proofs, for example to write Z. The first problems come from the infinitesimal generator (see (1.4)), which may fail to have an interesting domain; in other words, in general, it does not define a Markov process. But even if we jump directly to the AlgInv considerations, a second problem arises: it is no longer clear that Line ρ,M,T n ≡ 0 and NLine ρ,M,T n ≡ 0 are equivalent. The series appearing in both members of (1.15) are composed of positive terms. For Line to be well defined it is necessary and sufficient that each of them converges. If each of them converges, Fubini's theorem ensures that we can rearrange their terms globally as wished. Hence, under this condition we have The problem is that it is often the converse which is needed, since all the criteria we gave rely on Master, NLine, NCycle. This causes no difficulty when #E κ < +∞; but when κ is infinite, once a pair (M, T ) solving NLine ρ,M,T ≡ 0 is found, (2.7) must be checked.
The following proposition gives a sufficient condition for the validity of both (2.7) and (2.6).
Proposition 2.5. Assume that κ ∈ N ∪ {+∞} and that M is a positive Markov kernel. If Proof. Following the discussion above, we verify that, under the hypotheses above, the series arising in each term of (1.15) are absolutely convergent. For this, notice that it suffices to replace the sign "minus" by "plus" in (1.15) and to bound it by Hence, if C 1 and C 2 are finite, the sums in Line ρ,M,T are well defined and can be rearranged.
In the same way, the positive and negative contributions in (1.17) can be separated and each of them converges absolutely. The conclusion follows.

One way to see the appearance of multiple AlgInv Markov laws is to consider two continuous time Markov processes X t and Y t , on A Z and B Z respectively, with A ∩ B = ∅. With them, one may construct a continuous time Markov process Z t which coincides with X t and Y t if the starting configurations are in A Z and B Z respectively, by defining: In this case, the sets of configurations A Z and B Z do not communicate; if both X t and Y t possess an AlgInv Markov law, then Z t possesses several invariant Markov laws, including mixtures of these. Theorem 1.9 and all its criteria do not allow one to characterize this kind of invariant measures.

Invariant product measures with a partial support in E κ
We discuss here an iff criterion to show the invariance of a product measure ν Z with support S strictly included in E κ . The idea to get some criteria is simply to discard the set E κ \ S, which should not be reachable from S if an invariant distribution with support S exists. Consider T a JRM on E κ , let S be a strict (non empty) subset of E κ and ν a measure with support S. Assume that for any u, v, a, b ∈ S and interpret this condition as: if the word w′ is obtained from a word w ∈ S Z by a jump with positive rate, then w′ must be in the support of ν Z . This implies that the restriction T′ of T to S defined by has the following property: the PS on E Z κ (resp. S Z ) with JRM T (resp. T′ ) coincide if started from a measure ν with support in S. The following theorem is a direct consequence of this fact:
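For range L = 2, the closure condition and the restriction can be sketched concretely; this is our illustration, representing a JRM as a dict mapping a pair transition ((u, v), (a, b)) to its rate.

```python
def closed_under_jumps(T, S):
    """Check that every positive-rate jump started from a pair of states
    in S lands in S (so that S^Z is stable under the dynamics)."""
    return all(a in S and b in S
               for (u, v), (a, b) in T
               if u in S and v in S and T[(u, v), (a, b)] > 0)

def restrict(T, S):
    """Restriction T' of T to S (only meaningful when S is closed)."""
    return {((u, v), (a, b)): r for ((u, v), (a, b)), r in T.items()
            if u in S and v in S}

# Toy JRM on E_3 = {0, 1, 2}: a TASEP-like jump, plus a jump seen only
# from outside S = {0, 1}
T = {((1, 0), (0, 1)): 1.0, ((2, 1), (1, 2)): 1.0}
```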

Invariant Markov distributions with Markov kernels having some zero entries.
Theorem 2.8 discusses the invariance of a product measure which has a partial support in E κ ; in fact, our criteria apply to this situation up to a simple restriction of the state space.
The same kind of conditions can be imagined for an M -Markov law satisfying M i,j > 0 for i, j ∈ S, and such that, for any i ∈ S, ∑ j∈S M i,j = 1, meaning that the states in S only communicate with other states in S. If then the JRM T can be restricted to S. Denoting by T′ this restriction, the criterion we have (Thm. 1.9) allows one to decide if the M -Markov law is invariant by T′ on S Z . Under these conditions, everything is then somehow trivial, since S Z is closed under the action of the jumps with positive rate. The general case is much more complicated! Consider a general Markov kernel M = (M i,j ) i,j∈Eκ and the directed graph G = (E κ , E) whose vertex set is the alphabet E κ and whose edge set is E = {(i, j) : M i,j > 0}. Consider the strongly connected components (C j , j ∈ J) of this graph, where J is a set of indices. Starting from any point v ∈ E κ , the Markov chain (X n , n ≥ 0) with kernel M will eventually reach one of these strongly connected components C j and stay inside it forever, a.s. The invariant distributions of M naturally decompose as mixtures of the invariant distributions ρ (j) , where ρ (k) is the invariant distribution of M on C k .
The strongly connected components do not communicate; one may then partition the vertex set E κ along these components. The Markov chain on each of them is irreducible and can be treated separately: the fact that one of them is invariant by T does not interfere with whether the "other sub-Markov chains" have the same property or not.
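The decomposition of the kernel graph into strongly connected components can be computed with any standard SCC algorithm; a self-contained Kosaraju-style sketch (ours, for illustration):

```python
def sccs(n, edges):
    """Strongly connected components of a directed graph on {0,...,n-1}
    (Kosaraju's algorithm), returned as a list of sets of vertices."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        radj[j].append(i)

    order, seen = [], [False] * n
    def dfs1(v):                      # first pass: finishing order
        seen[v] = True
        for w in adj[v]:
            if not seen[w]:
                dfs1(w)
        order.append(v)
    for v in range(n):
        if not seen[v]:
            dfs1(v)

    comp = [None] * n
    def dfs2(v, c):                   # second pass on the reversed graph
        comp[v] = c
        for w in radj[v]:
            if comp[w] is None:
                dfs2(w, c)
    c = 0
    for v in reversed(order):
        if comp[v] is None:
            dfs2(v, c)
            c += 1
    return [set(v for v in range(n) if comp[v] == k) for k in range(c)]

# Kernel graph E = {(i, j) : M_ij > 0}: states 0 <-> 1 form a recurrent
# class, state 2 leaks into it
components = sccs(3, [(0, 1), (1, 0), (2, 0)])
```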
Irreducibility does not mean that E is the complete graph: some M i,j 's can still be 0 in this case. It may also happen that M is periodic, meaning that, again, there may exist several invariant distributions associated with the Markov kernel M (for example, equal up to a translation, alternating between even and odd states).
Again, the range considered here is L = 2, and some adjustments need to be made in the next considerations if L > 2. Consider an irreducible Markov chain (X n , n ≥ 0) with kernel M . Its invariant distribution has full support on E κ . Let It turns out that for a general JRM T, the support of an (algebraic) invariant distribution possesses its own combinatorial structure: it is possible to design some JRM T which preserves several non-communicating subsets of E Z κ . For example, the subset of words w = (w i , i ∈ Z) satisfying w i + w i+1 ∈ 17Z ∪ 19Z for all i (such a property holds for any JRM T satisfying: for any (x, y) such that x + y ∈ 17Z ∪ 19Z, T [x,y|x′,y′] > 0 ⇒ x′ + y′ ∈ 17Z ∪ 19Z).
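This combinatorial support constraint is easy to check mechanically; a small sketch (ours), with the JRM again represented as a dict of pair transitions:

```python
def in_support(w):
    """Every pair of adjacent letters of w sums into 17Z or 19Z."""
    return all((a + b) % 17 == 0 or (a + b) % 19 == 0
               for a, b in zip(w, w[1:]))

def rate_respects_support(T):
    """The JRM keeps the support stable as soon as every positive-rate
    jump maps an admissible pair to an admissible pair."""
    return all(in_support([a, b])
               for (x, y), (a, b) in T
               if in_support([x, y]) and T[(x, y), (a, b)] > 0)

# 17 + 0 -> 9 + 8 stays inside the support (9 + 8 = 17); 9 + 9 = 18 would not
T = {((17, 0), (9, 8)): 1.0}
```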
The criteria given in Theorem 1.9, which allow one to characterize invariant Markov distributions with full support, fail in this case. Indeed, they rely on the fact that one can compare NLine M,T (x) with NLine M,T (x′) for two close words x and x′; for example, when x′ is obtained by the suppression of a letter in the "middle" of x. These comparison methods fail if the removal of one (or several) letters of x belonging to the support gives a word x′ outside the support (or drives x′ into another communicating subset of E Z κ ).

Explicit computation: Gröbner basis
In this subsection we generalize and revisit some well known models using our theorems. Before that, we would like to discuss a bit the "explicit" resolution of systems of algebraic equations.
First, the simplest systems of equations are linear systems: they are systems of polynomial equations of degree 1 in some unknown variables (x 1 , ..., x n ), with coefficients in R or, possibly, coefficients that are functions of some parameters (y 1 , · · · , y n ). A very classical fact is that such systems can be solved using linear algebra. If some parameters (y i ) are present, the study is in general much more complicated: typically, even the dimension of the set of solutions can vary when the parameters change.
To solve these systems a computer algebra system can be used: only simple operations such as multiplications and additions are needed. If the coefficients are integers or, for example, polynomials in the y i 's with integer coefficients, the results obtained are exact.
For general polynomial systems with only one unknown x, of the form Sys = {P i (x) = 0, 1 ≤ i ≤ k}, the first step is the computation of the gcd G of these polynomials (using the Euclidean algorithm): x is a solution to the system Sys if and only if G(x) = 0. Assuming that the P i are not all 0 (in which case the question is trivial, but what follows does not work): if G is a constant, then Sys has no solution, and if G is a non-constant polynomial, then the solution set of Sys is the set of roots of G, which is non empty in C by the d'Alembert-Gauss theorem. Finding explicit solutions can be done by numerical approximations, and in some cases explicit exact solutions can be found. In any case, the set of solutions of Sys is implicitly known: it is the set of solutions of the equation G(x) = 0, and arguably, G(x) = 0 is the simplest representation of the solutions of Sys.
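This one-variable reduction can be carried out with any computer algebra system; a sympy sketch (ours, for illustration):

```python
from functools import reduce
import sympy as sp

x = sp.symbols('x')

def system_gcd(polys):
    """G = gcd of all the P_i: x solves {P_i(x) = 0 for all i} iff G(x) = 0."""
    return reduce(sp.gcd, polys)

# x^2 - 1 and x^3 - 1 share the single root x = 1, so G = x - 1
G = system_gcd([x**2 - 1, x**3 - 1])
```

When the gcd is a constant, the system has no solution, matching the discussion above.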
Here the situation we face is more complex: take for example Master M,T 7 ≡ 0 in the case where E κ is finite. This system is linear in T and involves quotients of cubic monomials in the M i,j 's: for a fixed range and number of colours (L, κ), solving this system in M given T typically produces a huge number of non-polynomial equations of large degree in M . We can transform this type of system into a polynomial system in several variables as follows: a pair (M, T) solves the system S = {Cycle M,T 7 ≡ 0, M > 0} iff it solves the following system S′ of polynomial equations where the y a,b are additional variables which prevent the M a,b 's from being 0.
Equivalence of systems means here that (M, T) solves S iff there exists y such that (M, T, y) solves S′ and M > 0. Any M such that (M, T, y) solves S′ has nonzero entries, but could have some negative ones, or even complex ones: it depends on how/where the system is solved.
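The auxiliary variables y a,b implement the classical Rabinowitsch trick: adjoining M a,b y a,b − 1 forces M a,b ≠ 0 on the solution set. A minimal sympy illustration (ours) in one variable m:

```python
import sympy as sp

m, y = sp.symbols('m y')
# m*(m - 1) = 0 together with "m != 0", encoded by the polynomial m*y - 1
G = sp.groebner([m * (m - 1), m * y - 1], m, y, order='lex')
# The basis certifies m = 1 on the solution set: m - 1 lies in the ideal
_, rem = G.reduce(m - 1)
```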
We then need to solve polynomial systems in several variables. In this case again, we cannot expect a better situation than for polynomial systems of a single variable: in general no closed formulas exist for solutions, but again, it is possible to know if solutions exist, and in this case, to find some minimal representations of the solution set (if T is given, the problem is almost the same).
A common way to solve this kind of problem amounts to computing a Gröbner basis of the set of polynomials involved in the system: given a finite set of polynomials S = {P i , i ∈ I} where the P i 's belong to R[x 1 , ..., x n ], a Gröbner basis of S is a basis of the ideal generated by S which has some additional properties. It depends on a good monomial order (one preserved by multiplication: if x (α) < x (β) then x (α) x (γ) < x (β) x (γ) , where x (α) = ∏ i x i αi for α = (α 1 , . . . , α n )). We cannot go too far in the description of the properties of Gröbner bases, or explain how they are computed: we refer the interested reader to Adams and Loustaunau [1] for an overview, and to Jean-Charles Faugère's webpage [14] for many resources on this topic, including fast algorithms.
In order to be understandable to readers unaware of these methods, we will just stress the following facts: -Computation of a Gröbner basis for polynomials with integer coefficients relies on simple elementary operations such as Euclidean division of polynomials and sorting of polynomials according to their coefficients and/or degrees; it can then be performed by a computer algebra system working on integers (and is therefore decidable).
-When the basis B has been computed, it is a finite sequence of polynomials, equivalent to the initial system S.
• if the Gröbner basis is G = [1] then there is no solution to the initial system (whatever order is used), • if G ≠ [1] then there are some solutions to the initial system in C: some extra work may be needed to see if there are solutions in R, R + or [0, 1] n if these are additional requirements, • since B is a basis of the ideal generated by S, each polynomial p in B is a necessary condition on the solution set. Hence if a Gröbner basis contains a polynomial, for example (2x 1 + x 7 − 9)(3x 7 − 8x 9 17 + 1) for a system {P i , 1 ≤ i ≤ k} in the variables x 1 , . . . , x 100 , then there are some solutions to the system in C 100 , and each solution (x 1 , . . . , x 100 ) satisfies either 2x 1 + x 7 − 9 = 0 or 3x 7 − 8x 9 17 + 1 = 0 (inclusive "or" of course). -Computing a Gröbner basis is time and memory consuming, so it is sometimes impossible in practice by hand, and even by computer. -There are different notions of Gröbner bases, as said above, since they rely on a (good) order on monomials; indeed, an order on polynomials is needed to define Euclidean division in the set of polynomials in several variables. Each order leads to a specific representation of the ideal. For example, if P 1 = x 2 + y 2 − z 2 − 3, P 2 = x 2 + 2y 2 − 4, P 3 = y 2 + 3z 2 − x − 2, the computation of the Gröbner basis relative to the graded reverse lexicographical order gives one basis; if, alternatively, the lexicographical order (plex) is chosen, the basis is G = [4z 4 − 6z 2 − 1, y 2 + z 2 − 1, −2z 2 + x + 1]. Both results ensure the existence of solutions in C 3 , granted the equivalence between solving the initial system {P 1 = 0, P 2 = 0, P 3 = 0} and (one of) the system(s) G. If we add the polynomial P 4 = xz − y 2 + 2, then this time any Gröbner basis is G = [1]: there are no solutions.
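The three-polynomial example above can be reproduced with sympy (our sketch; sympy may normalise the basis polynomials to monic form, so we test membership rather than exact shape):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P1 = x**2 + y**2 - z**2 - 3
P2 = x**2 + 2*y**2 - 4
P3 = y**2 + 3*z**2 - x - 2
P4 = x*z - y**2 + 2

# lexicographical ("plex") order: a triangular-looking basis of 3 polynomials
G = sp.groebner([P1, P2, P3], x, y, z, order='lex')

# Adding P4 collapses the ideal to the whole ring: basis [1], no solutions
empty = sp.groebner([P1, P2, P3, P4], x, y, z, order='lex')
```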
What happens here is different from the "linear algebra setting": the number of polynomials in the Gröbner basis depends on the order chosen, and often the basis is huge, containing many more polynomials than the initial system. Until now, our main theorems assert that checking whether an M -Markov law is invariant by a PS with JRM T can be reduced to checking whether a polynomial system in (M, T) has some solutions. The paragraph above on Gröbner bases is here to say that checking the existence of an M that solves, for example, Cycle M,T 5 ≡ 0 when T is fixed, is doable at the price of computing a Gröbner basis. If the Gröbner basis is G = [1] there is no solution. If G ≠ [1], it will be a list of polynomials in the variables M and T simpler than the initial problem (the order plex allows one to obtain a kind of triangular system in which a well-chosen order of the variables makes the conditions on T apparent, for example). Again, some work remains to be done to check that real or positive solutions exist.
When a solution (M, T) has been found by this means, an independent proof of the invariance of the Markov chain with kernel M by T can be given by checking directly (without using a Gröbner basis computation) that Cycle M,T 7 ≡ 0.
Remark 3.1. There are many good reasons to be confident in Gröbner basis computations performed with computer algebra systems, which rely on simple computations on integers and are widely used, including for cryptographic applications; but for the reader who prefers to stay away from this kind of automatic tool, we insist on the fact that these computations can be done by hand (with patience).
To follow the following examples in detail, the reader can download from [15] a Maple file or a pdf file where all the computations are done. When β = 0, this is the Bernoulli(1/2) product measure (Liggett [21], Introduction). Let us see how to recover this with our approach. By Theorem 2.1, since m = 1, κ = 2, and the range is L = 3, it suffices to check that Cycle M,T 9 ≡ 0. Plug the values of M and of T (given in (3.1) and in (1.5)) into the corresponding Z (which is found in (2.5)). Here Z has 5 indices, and then 32 values Z a,b,c,d,e need to be computed: one finds that these 32 values are all zero! As a consequence Cycle M,T 9 ≡ 0.

Assume now that the existence of an invariant Markov law is unknown for this PS. Let us see how to recover this property. Again, since the range is L = 3, we need to find an M for which Cycle M,T 9 (a, b, c, d, e, 0, 0, 0, 0) = 0 for all a, b, c, d, e ∈ E 2 , as specified by Theorem 2.1. First, we get rid of the "exponential function" in T by a change of variables and a deterministic linear change of time (the computation of a Gröbner basis must be done in a polynomial ring). To do this, we set x = e −β and use T′ = x 2 T instead of T, since this does not alter the set of invariant distributions. We obtain We also add the polynomials M a,b g a,b − 1 in the basis computation (this prevents each M a,b from being 0) and for simplicity we impose M i,0 = 1 − M i,1 for all i ∈ E 2 . Then, with a computer algebra system, compute the Gröbner basis: this is immediate, and two solutions appear, one of them being negative. The unique positive solution (after inverting the change of variables) is given in (3.1).

The voter model and some variants
Consider the JRM T of the voter model: T is not identically 0 and, besides, the voter model possesses 0 n and 1 n as absorbing states on Z/nZ (and this can be generalized if more "opinions" are represented). The following corollary is an immediate consequence of Theorem 1.13.

The contact process and some extensions
The Dirac measure on 0 Z is invariant for the contact process. Another invariant distribution, with no atom at 0 Z , exists for λ large enough (Liggett [21], Thm. 1.33, Sec. VI). We prove that this other invariant distribution is not Markovian with memory m, for any λ > 0 and any m ≥ 1. The JRM T of the contact process is not identically 0 and the contact process possesses 0 n as an absorbing state on Z/nZ; thus, an immediate consequence of Theorem 1.13 is: if a distribution µ ≠ δ 0 Z is invariant for the contact process on the line, then µ is not a Markov law with memory m, for any m.
In fact, Theorem 1.13 only states the non-existence of invariant Markov laws with memory m with positive kernel. By the nature of the contact process, no other kernels are possible either. When solving the system for general rates, using T [0,0,0|0,1,0] as a free parameter (meaning that it can take any real value), we found that a necessary condition to have a Markov law invariant by T is T [0,0,0|0,1,0] > 0, which means that there is a sort of "spontaneous infection".

Around TASEP
The TASEP is the PS defined on the line, or on a segment (see Sect. 1.7), whose JRM T is null except for T [1,0|0,1] = 1. Some variants of this model have been defined; we will explore some of them. meaning that a particle can overtake smaller ones, with a constant rate. For more information on this type of PS and its invariant measures in some special cases, see Angel [3] or Zhong-Jun et al. [11].
Here, we propose to replace the common value (3.4) by parameters, and use our theorems to characterize the set of 3-tuples (T [1,0|0,1] , T [2,0|0,2] , T [2,1|1,2] ) for which an invariant Markov law with positive M exists (T being null otherwise). The computation of the Gröbner basis of the system (with the additional polynomials M i,j g i,j − 1 to prevent the M i,j from being 0) is rapid, but the expression of a Gröbner basis is too large to be written here. What can be observed is that one polynomial of the basis is −T [2,0|0,2] + T [2,1|1,2] + T [1,0|0,1] , so that the nullity of this polynomial is a necessary condition for the existence of an invariant Markov law, in which case it appears that M must have constant rows, so that the distribution is a product measure with marginal M 0,· . Examining further the very simple Gröbner basis, it appears that any product distribution ρ Z with ρ having support on {0, 1, 2} is invariant! This can be checked by hand on Cycle ρ,T .

meaning that now 0 can overtake 2, but not the contrary. The computation of a Gröbner basis provides a list of polynomials, among which one can find T [0,2|2,0] + T [2,1|1,2] + T [1,0|0,1] . Since the entries of T are non negative, the 3 parameters must be 0. Hence, the only case where a Markov law with positive kernel M is invariant is when no particle is allowed to move!

-Variant with parameters T [a,b|b,a] . This is a generalization of the two previous points: each particle can overtake the other ones. This is a case where the Gröbner bases are huge (more than seven hundred polynomials), with many very simple polynomials of the following kind: meaning that one of these three factors must be 0 to have a solution. In order to study this system completely, a method consists in choosing such an equation and forming 3 systems, each made of the initial system to which one of the factors above is added as a new polynomial.
Due to the complexity of the system, the constitution of these 3 subsystems is not enough to conclude (the obtained Gröbner bases stay large), but this method can be iterated if the complete set of solutions needs to be found.
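As an illustration of the computations described above, the saturation trick (adding M_{i,j} g_{i,j} − 1 to exclude M_{i,j} = 0) can be reproduced with a computer algebra system. The system below is a hypothetical toy system, not the paper's actual invariance equations:

```python
# Toy illustration (hypothetical system) of the Groebner-basis computations
# described above: the auxiliary polynomial m*g - 1 plays the role of
# M_{i,j} g_{i,j} - 1, forcing m to be nonzero in every solution.
from sympy import symbols, groebner

t1, t2, t3, m, g = symbols('t1 t2 t3 m g')
polys = [t1*m - t2*m**2,   # stand-ins for the invariance equations
         t3*m - t1,
         m*g - 1]          # saturation: excludes m = 0
G = groebner(polys, t1, t2, t3, m, g, order='lex')
for p in G.exprs:
    print(p)
```

Scanning the basis for very simple polynomials (linear relations between the rate parameters) is exactly how the necessary conditions above are detected.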
Zero-range type processes. We start with a preliminary definition. In the literature, PS's associated with mass-preserving T's are called mass migration processes (MMP) [13] and mass transport models [12,17].
The following definition will be useful to define this type of system.
Definition 3.5. A mass preserving T is said to be zero range mass preserving if there exists a function g : E_κ^2 → [0, ∞) such that T[a, b | a − k, b + k] = g(a, k) for any legal k, that is, 1 ≤ k ≤ a. In words: the rate at which a part k of the mass a jumps to the next vertex at its right is g(a, k).
PS's associated with zero range mass preserving T's are called zero range mass migration processes (MMP-ZR) [13]. These processes are generalizations of the TASEP, since they can be interpreted as particle systems where each site can host more than one particle and where several particles at the same site can jump at the same time (see [2,13]).
The zero-range mass migration process is a process on E_∞^Z whose JRM is zero range mass preserving. Let ρ ∈ M(E_κ) be such that ρ_0 > 0. In Proposition 3.10 of [13], it is shown that ρ^Z is invariant for the MMP-ZR iff ρ_a ρ_k g(k, k) = ρ_{a+k} ρ_0 g(a + k, k) for all k ≥ 1 and a ≥ 1.
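This criterion can be sanity-checked numerically for a geometric marginal; the rate function below is a hypothetical choice satisfying g(a + k, k) = g(k, k) (here simply g ≡ 1), for which both sides coincide:

```python
# Sanity check (sketch) of the invariance criterion above for a geometric
# marginal rho_a = (1-q) q^a and a hypothetical rate function with
# g(a+k, k) = g(k, k) (here simply g(a, k) = 1 for all legal pairs).
import math

q = 0.4
rho = lambda a: (1 - q) * q ** a
g = lambda a, k: 1.0

for a in range(1, 12):
    for k in range(1, 12):
        lhs = rho(a) * rho(k) * g(k, k)
        rhs = rho(a + k) * rho(0) * g(a + k, k)
        assert math.isclose(lhs, rhs)
print("criterion satisfied for the geometric law")
```

The cancellation is transparent: ρ_a ρ_k / (ρ_{a+k} ρ_0) = q^{a+k} / q^{a+k} = 1, which is the mechanism exploited by Theorem 3.7 below.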
Definition 3.6. We say that a distribution ρ on E_κ is almost-geometrically distributed if ρ_a ρ_b = ρ_{a+b} ρ_0 whenever a, b and a + b belong to its support. The support of an almost-geometric distribution can be either finite or infinite. If the support is N, then ρ is a geometric distribution.
We make a small break in the TASEP applications to present a result that holds in a more general setting.

A family of models with an infinite number of invariant product distributions
The next theorem states that a mass preserving kernel admitting an almost-geometric distribution as invariant product distribution in fact admits an infinite number of invariant product distributions.
Theorem 3.7. If ρ^Z is AlgInv by a mass preserving kernel T, for ρ an almost-geometric distribution such that (2.6) and (2.7) hold, then for all almost-geometric distributions ν with the same support as ρ, ν^Z is also AlgInv by T.
Proof. We use Theorem 2.6. Assume that ν is AlgInv by T. Taking into account the discussion just above, consider E_κ = Supp(ν). A necessary condition for ν to be AlgInv by T is that (2.8) holds; by Theorem 1.20, this condition writes as (3.6).
From the definition of a mass preserving kernel, the first sum can be restricted to I_{a+b}, where for any k, I_k is the set of pairs (u, v) such that u + v = k; the second one can be restricted to I_{b+c}, and the last to I_{a+c}. Now, (3.5) can be used: since for all (u, v) ∈ I_{a+b}, ρ_u ρ_v / (ρ_a ρ_b) = 1, one sees that (3.6) rewrites in this case as (3.7). The steps which bring us to (3.7) are valid for any (ρ, g) satisfying (3.5), so the theorem is proved.
Now we return to the TASEP discussion. From Proposition 3.10 of [13] and Theorem 3.7, we immediately get: Corollary 3.8. Consider a zero range mass preserving T, with g : E_κ^2 → R positive on the diagonal, and ρ ∈ M(E_κ) with ρ_0 > 0 such that ρ^Z is AlgInv by T. If for some h : E_κ → [0, ∞), g(b, k) = h(b) g(k, k) for all b ∈ E_κ, then ν^Z is also AlgInv by T, for all almost-geometric distributions ν with the same support as ρ.
Before presenting the next variant, we introduce a result that we will apply to it; since this result holds in a more general setting, we state it separately.
PushTASEP: The PushASEP is the PS defined on E_2^Z, where 0 represents an empty site and 1 an occupied site. The dynamics are described as follows: each particle tries to jump to the right at rate 1, and it actually jumps if the site is empty. Moreover, each particle jumps to the closest empty site at its left at rate 1. This type of PS has range L = ∞. However, each configuration can be encoded by the sizes of the consecutive blocks along the line, where a block is constituted of an empty site together with the set of consecutive occupied sites at its left. The dynamics of the PushASEP induce a PS on the "block size process" with range L = 2 and κ = ∞ (all block sizes ≥ 1 are possible). For this induced PS, the product measure with marginal the geometric distribution (for any parameter in (0, 1), by Thm. 3.7) is AlgInv by T. This provides a description of some invariant distributions for the PushTASEP.
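The block encoding described above can be sketched in a few lines; reading the configuration from left to right, each empty site closes the current block:

```python
# Minimal sketch of the block encoding described above: a block is an empty
# site (0) together with the consecutive occupied sites (1) at its left; the
# induced "block size process" records the successive block sizes (all >= 1).
def block_sizes(config):
    sizes, run = [], 0
    for site in config:
        run += 1
        if site == 0:        # an empty site closes the current block
            sizes.append(run)
            run = 0
    return sizes

print(block_sizes([1, 1, 0, 0, 1, 0]))  # → [3, 1, 2]
```

The map is a bijection between finite configurations ending with an empty site and finite sequences of block sizes, which is what makes the induced block size process well defined.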

Projection and hidden Markov chain
This part also illustrates our theorems: with Theorem 2.1, one can find JRMs T on E_κ^Z for some κ ≥ 3 (with at least 3 colours) having some Markovian invariant distribution. Some of them possess nice projection properties: they allow one to exhibit some PS on {0, 1}^Z (and probably some PS with more than 2 colours) having as invariant distribution the law of some hidden Markov chain (see [6] for more information on these models).
Let T and T′ be two JRMs of two PS defined on E^Z and F^Z respectively, where E and F are two spaces of colours such that #F < #E. Consider a surjective map π from E onto F: with each colour c in F, one or several colours π^{−1}(c) of E are associated by π (on an exclusive basis). Definition 3.9. T′ is said to be the π-projection of T if, for any a, b, c, d ∈ F and any (A, B) ∈ π^{−1}(a) × π^{−1}(b), Σ_{(C,D) ∈ π^{−1}(c) × π^{−1}(d)} T[A, B | C, D] = T′[a, b | c, d]. (3.8) In words: starting from any representative (A, B) of (a, b), the total jump rate to the representatives of (c, d) does not depend on (A, B), but only on (a, b).
Assume that T′ is the π-projection of T for a surjection π : E → F, for some finite set E. Under these hypotheses, η′ = (η′_t, t ≥ 0) defined by η′_t = (π(η_t(k)), k ∈ Z) is a PS with JRM T′. Hence, if µ is a measure invariant by T on E^Z, then µ ∘ π^{−1} is invariant by T′ on F^Z.
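Property (3.8) is easy to test mechanically on a small example. The rates below are hypothetical, chosen only so that the projection property holds for π(0) = 0, π(1) = π(2) = 1:

```python
# Toy check (hypothetical rates) of the pi-projection property (3.8): the
# total rate from any representative (A,B) of (a,b) to the representatives
# of (c,d) must depend on (a,b) only.
from itertools import product

pi = {0: 0, 1: 1, 2: 1}
fibers = {0: [0], 1: [1, 2]}

def T(AB, CD):
    (A, B), (C, D) = AB, CD
    # an occupied site (colour 1 or 2) with an empty site at its right jumps
    # right with total rate 1, split evenly over the two target colours
    if pi[A] == 1 and B == 0 and C == 0 and pi[D] == 1:
        return 0.5
    return 0.0

for a, b, c, d in product([0, 1], repeat=4):
    rates = {sum(T((A, B), (C, D))
                 for C, D in product(fibers[c], fibers[d]))
             for A, B in product(fibers[a], fibers[b])}
    assert len(rates) == 1          # property (3.8) holds for these rates
print("pi-projection property verified")
```

If the rate 0.5 above were replaced by a value depending on the representative A, the set `rates` would contain two values and the projected process would no longer be a PS with a well-defined JRM.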
There exist in the literature several definitions of the notion of hidden Markov chain. The most classical is the following: Definition 3.11. (Y_k, k ∈ Z) is said to be a hidden Markov chain taking its values in F^Z if it has the following representation: there exists a Markov chain (Z_k, k ∈ Z) taking its values in some set E, and a transition kernel K = (K(a, b))_{a ∈ E, b ∈ F} such that, conditionally on Z = (Z_k, k ∈ Z), the Y_j's are independent, and the conditional distribution of Y_j given Z is K(Z_j, ·).
Hence, if (X_k, k ∈ Z) is a Markov chain with state space E, and π : E → F is a surjection (or just a map), then the process (π(X_k), k ∈ Z) is a hidden Markov chain. If X has initial distribution ρ at time 0 and kernel M, then the finite dimensional distributions of (π(X_k), k ∈ Z) are given by (3.9). From there, it may be checked that a hidden Markov chain is not a Markov chain in general (with any memory), since in general (3.9) does not factorize suitably. Now, we state the following result. The proof is constructive; we will provide an example. Consider the 4-tuple (T, T′, M, π) as follows. Take π : E_3 → E_2 defined by π(0) = 0, π(1) = π(2) = 1. The Gröbner basis of this system is too long to be written here; nevertheless, here are the first polynomials of the obtained basis, which provide some necessary conditions: Hence, t_{3,0} = t_{2,1} = t_{1,2} = t_{0,3} = 0 is a necessary condition. The polynomial on the second line expresses somehow "the important condition", and the third line polynomial (and subsequent ones, see [15]) allow one to compute the kernel M.

Invariant product measure
First, we claim the following. Lemma 3.14. If κ = 2, then ρ^Z is invariant by T on the line iff NCycle_2^{ρ,T} ≡ 0.
Proof. By Theorem 1.20 (vii), it suffices to prove that NCycle_3^{ρ,T} ≡ 0 ⇔ NCycle_2^{ρ,T} ≡ 0. Start with the direct implication: from NCycle_3^{ρ,T}(a, a, a) = 0 = 3Z_{a,a}^{ρ,T}, we infer that Z_{a,a}^{ρ,T} = 0 for any a ∈ E_κ = {0, 1}. Now, NCycle_2^{ρ,T} can be written in terms of NCycle_3^{ρ,T} and the Z_{a,a}^{ρ,T}, so that the implication holds. For the converse, note that any word w with three letters on E_2 = {0, 1} possesses a repeated letter. Given the cyclical structure of the equation NCycle_3^{ρ,T}, the conclusion follows.
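The combinatorial fact used for the converse is immediate, and can even be checked by brute force:

```python
# Brute-force check of the pigeonhole fact used in the converse:
# every word of length 3 over {0, 1} has a repeated letter.
from itertools import product

assert all(len(set(w)) <= 2 for w in product((0, 1), repeat=3))
print("every 3-letter binary word repeats a letter")
```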
The following section aims to answer partially some questions of a referee, by discussing further the links between particle systems and probabilistic cellular automata. Several references and suggestions given in the next section are also due to him.

Particle systems and probabilistic cellular automata
Probabilistic cellular automata (PCA) are synchronous discrete-time analogues of particle systems, meaning that at each time unit the states of all the vertices are updated. The transition probabilities have a particular shape: conditionally on the global state X(t) = (X_k(t), k ∈ Z) at time t, the finite dimensional distributions of X(t + 1) are given by P(X_k(t + 1) = y_k, k ∈ I | X(t) = x) = Π_{k ∈ I} T[x_k, …, x_{k+L−1} | y_k], meaning that the states of all vertices are updated simultaneously, independently conditionally on X(t): the new state y_k depends on the states [x_j, k ≤ j ≤ k + L − 1]. Again, a notion of range appears, as well as a notion of transition matrix, but they differ from those of PS: -Transition matrices of PCA are kinds of probability kernels: they need to satisfy, for any word a_{1,L} ∈ E_κ^L, Σ_b T[a_{1,L} | b] = 1, meaning that each cell is updated, and the distribution of the new state depends on the states of its neighbours.
-The updating of the sites is done somehow independently. Since the transition matrices for PCA have the form T[a_1, …, a_L | b], it is not possible to consider transition matrices of the form T[a_{1,L} | b_{1,L}] for L > 1, since the overlapping of updates would create an ill-defined process (in the synchronous case).
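A synchronous update of the kind just described can be sketched in a few lines; the local rule T below is a hypothetical example on two colours with range L = 2:

```python
# Minimal sketch of one synchronous PCA update with range L = 2 on Z/nZ:
# each cell k independently draws its new state from T[x_k, x_{k+1} | .].
import random

def pca_step(x, T, rng=random):
    n = len(x)
    out = []
    for k in range(n):
        dist = T[(x[k], x[(k + 1) % n])]          # law of the new state at k
        states, weights = zip(*dist.items())
        out.append(rng.choices(states, weights=weights)[0])
    return out

# hypothetical noisy "copy the right neighbour" rule on two colours
T = {(a, b): {b: 0.9, 1 - b: 0.1} for a in (0, 1) for b in (0, 1)}
print(pca_step([0, 1, 0, 1], T, random.Random(0)))
```

Note how every cell is updated at each call, in contrast with a continuous-time PS where jumps occur one local event at a time.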
Nevertheless, it is of course tempting to find some common points between PCA and PS and, since finding invariant distributions for both classes of models seems difficult, to try to transfer as many results as possible from one model to the other.
-There exist also some characterizations of PCA admitting some particular invariant measures: for example, characterizations of PCA having invariant Markov distributions for L = 2, in the two-colour case ([24], p. 139) and in the case with more colours [7], and of PCA having invariant product measures [22].
-The connection between quasi-reversibility and Markovian invariant measures is also rich [9, 24-26] (in this setting, a PCA is said to be quasi-reversible, for some measures, if the reverse-time process is also a PCA). This connection comes from the fact that the stationary space-time diagram of a PCA has a Gibbs distribution (on Z × Z, or more generally, if defined on another lattice L, on L × Z), and somehow, some induced properties of the potential of this Gibbs distribution may ensure reversibility and Markovianity of some slices of the space-time diagram. In the case of particle systems, these properties do not exist (there is no Gibbs measure on some space-time diagram) and we are not able to connect time reversibility with invariance of some Markovian distribution. However, this is indeed a possible line of research.
-In order to find connections between PCA and PS, one may also try to approach some PCA by suitable models of PS: among the PCA for which this approach seems promising are those with parameters of the following type: T[a,b|c] = (1 − p)1_{c=a} + p1_{c=f(a,b)}, for some function f. The idea is that, as p → 0, this PCA evolution leaves a unchanged with a large probability. It may be thought that such a PCA behaves more or less as a standard PS, since the probability that neighbouring sites are updated simultaneously goes to 0. We think however that it is not possible to study PS with this kind of PCA, nor the converse, as long as we are only considering the fact that they admit or not some Markovian invariant distributions. The reason is that the subspace of the parameter space, in the PCA case as in the PS case, for which there exist Markovian invariant distributions is closed. It is therefore impossible in principle to approach "models having non-Markovian invariant distributions" by "models having Markovian invariant distributions" (without taking sequences of models with unbounded memories). This fact holds in the realm of PCA as in that of PS, for the same reason. Now, when trying to approach PCA by PS, a second idea is to consider the PS for which a single element is updated, for example those having T[a,b,c|a′,b′,c′] = 0 for (a′, c′) ≠ (a, c), meaning that a single cell b is updated depending on its neighbourhood. Even for these models, we do not think that there are connections with "some similar PCA". Again, in the PCA world the T's are probability kernels, while in continuous-time PS they are rates (which do not sum to 1 in general): the criteria of invariance of Markov distributions are really different for these models.
In any case, we do not have an intuitive explanation of such algebraic conditions and, in accordance with what we commented before, they are close to, but different from, the characterization of Markovian invariant measures for PCA with (L, κ) = (2, 2) (see e.g. Mairesse and Marcovici [22], Thm. 3.5). As suggested by one of the referees, it may be interesting to find connections between these two characterizations.

2D applications
The criterion provided by Theorem 1.28 seems to depend on all the colourings of the neighbours of Γ_0, Γ_1 and Γ_2, which represents, for this last case, as many as κ^14 possibilities, and this for each of the κ^5 different configurations in Γ_2. So the total number of equations seems out of reach, but in fact, again, (1.35) decomposes as a sum of Z_{x(h∩C)}^{h∩C,h} (defined in (1.40)), so that it suffices to express those functions whose domain intersects the domain D under inspection: the contribution of each square can be computed independently. This provides a small finite set of functions with 1, 2, 3 or 4 variables (as discussed in (1.38) and in the proof of Thm. 1.28): when E_κ = E_2 = {0, 1}, this provides a small quantity of functions, each of them being a sum of at most 2^3 elementary quantities. The corresponding set of equations can be written easily, or even automatically if needed. When T is totally specified, searching for invariant distributions then amounts to solving a polynomial system with unknowns ρ_0, …, ρ_{κ−1} in the set {(r_0, …, r_{κ−1}) ∈ [0, 1]^κ : Σ r_i = 1}. Here are some cases we have investigated. -If all the T's are zero except those appearing in (1.32), then the Bernoulli product measure with parameter ρ_1 ∈ (0, 1) is invariant iff aρ_1^2 − ρ_1^2 + 2ρ_1 − 1 = 0 (so that for a given a, the density is ρ_1 = 1/(√a + 1)). This can be checked by hand with our criterion, or just using a reversibility argument, as in (1.31), for example.
-Similarly, with the same methods, one treats the case where all the T's are zero except a second family of entries. In this case, the space of parameters for which there exist invariant product measures is quite complex: see [15].
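The density computation in the first case above is a one-line verification: the claimed root ρ_1 = 1/(√a + 1) indeed solves the invariance condition and lies in (0, 1) for every a > 0:

```python
# Quick check (sketch) of the density computation above: the invariance
# condition a*rho^2 - rho^2 + 2*rho - 1 = 0 has root rho = 1/(sqrt(a)+1),
# which lies in (0, 1) for any rate parameter a > 0.
import math

for a in (0.5, 1.0, 2.0, 9.0):
    rho = 1.0 / (math.sqrt(a) + 1.0)
    assert 0.0 < rho < 1.0
    assert math.isclose(a * rho**2 - rho**2 + 2 * rho - 1, 0.0, abs_tol=1e-12)
print("root verified")
```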
Proof of Theorem 1.9

To prove Theorem 1.9, we will show two cyclical implications. -Proof of (i) ⇒ (ii). Observe the contribution of the term with index j in equation (1.17), for j in ⟦2, n−2⟧, that is, far from 0 and from n in (1.16). One sees that, since Σ_b M_{a,b} = 1 for any a and ρM = ρ, the contribution simplifies for any 1 < j < n − 1. Take three arbitrary words x_{1,n}, y_{1,m} and a_{1,7} with letters in E_κ, and a′_4 ∈ E_κ. Define w = x_{1,n} a_{1,4} 000 y_{1,m} and w′ = x_{1,n} a_{1,3} 0000 y_{1,m}; we recall that the concatenation gives, for example, a_{2,4} b y_{3,6} = a_2 a_3 a_4 b y_3 y_4 y_5 y_6. Using property (4.1) and the fact that the boundary terms are the same (those for j ∈ {0, 1, n−1, n}), we get, by (4.4), that H_n(a_{1,n}) = H_3(a_1, a_2, a_3), and then depends only on a_1, a_2, a_3. It follows that H_7(a_{1,7}) = H_7(a_1, a_2, a_3, a_4, a_5, a_6, a_7), which implies Replace_7(a_{1,7}; a′_4) = 0.
-Proof of (iii) ⇒ (iv). We first prove that, when (iii) holds, NCycle_4^{M,T} ≡ 0. For this, just observe the value of Replace_7(a, b, c, d, a, …). Hence H_n(a_{1,n}) is a function of (a_1, a_2, a_3) only, and it does not depend on n. Therefore, since Master_7(a, b, c, d, e, f, g) = H_7(a, b, c, d, e, f, g) − H_6(a, b, c, e, f, g), we see that this quantity is 0. -Proof of (v) ⇒ (i). We will need three intermediate results. Compare the sum S(x_{−1,n+2}), which depends on the word x_{−1,n+2}, with the sum S(x_{−1,n+2}^{{m}}), corresponding to the same word with the letter with index m removed, for any m ∈ ⟦2, n−1⟧. Hence, from (1.17), this difference is NLine_n^{ρ,M,T}(x_{1,n}^{{m}}), since m is equal neither to 1 nor to n. The contribution of the term j = 2 is 0 by Lemma 4.1. Therefore the sum over x_4 simplifies (because Σ_{x_4} M_{x_3,x_4} = 1), which gives the expected result. The proof of the second statement, and of the generalization to larger words, can be obtained similarly. In these equations, the parameters of Z have the form Z_{x,a,b,y} for different values of x and y. Two terms depend on d and two on c: this implies that the differences between the elements that depend on d (respectively c) do not depend on d (respectively c). This provides new identities. Playing with the dependence of the differences on the variables involved leads eventually to the formula. But the formula can be checked directly, independently of these considerations, as indicated in the proof of the Theorem.

Proof of Theorem 1.24
We will adapt the proof of Theorem 3.1 in [13] (steps 4 and 5). The main difference is that they use the fact that a measure is invariant iff ∫ Gf(η) dρ^Z(η) = 0 for every bounded cylinder function f : E_κ^{Z^d} → R. This is equivalent to Line_{ρ,T,p}(x(A)) = 0 for any finite A ⊂ Z^d and any x(A) ∈ E_κ^A. We do not need to take a limit to get (91) and (92) and the last part of step 5; taking n sufficiently large is enough, given that our p is a finite-range transition probability.

Proof of Theorem 1.28
We give a picture-based proof, using a representation of the computations by pictures. We insist on the fact that ρ^Z is invariant by T iff NLine_{ρ,T}(x(C)) = 0 for any subset C of Z^2 and any sub-configuration x(C) ∈ E_κ^C. As noticed in Theorem 1.25, we just need to prove that, for any s ≥ 0, any square C = ⟦0, s⟧^2 is included in a finite domain C′ for which NLine_{ρ,T}(x(C′)) = 0 for all x(C′) ∈ E_κ^{C′}. We will construct a well-designed sequence (C_i) satisfying the hypotheses of Theorem 1.25 and eventually containing ⟦0, s⟧^2.
Recall formula (1.38), which expresses NLine(x(C)) as a sum of some "Z" indexed by the hypercubes included in the dependence domain D(C). In view of Figure 1, the first hypothesis of Theorem 1.28 says that the sum of these Z over the eight 2 × 2 squares contained in the first picture of Figure 1 is 0. Let us express this as follows: in Z_{y_1 y_2 y_4 y_3}, the variables x_1, x_2, x_3 refer to some fixed specified values, and the "x" refers to free variables over which a sum is taken (as in the definition of Z_{x(h∩C)}^{h∩C,h}, the variables in h \ C are free variables over which a sum is taken).
Further, Line_{ρ,T}(x(Γ_1)) and Line_{ρ,T}(x(Γ_2)) are respectively sums of 10 and 11 such Z: each of these Z must be seen at this stage as indexed by a 2 × 2 square included in the second and third pictures of Figure 1, where Γ_1 and Γ_2 are drawn. Many of these Z are common between these structures. The terms have been assembled to make clear what the "appearance" of x_5 in x(Γ_2), compared to x(Γ_1), changes. Graphically, we use the shortcut given in Figure 2. This picture has to be understood as follows: when one expresses the difference between Line_{ρ,T}(x(Γ_2)) and Line_{ρ,T}(x(Γ_1)) by summing over the Z indexed by the squares included in Γ_2 and those included in Γ_1, one gets the same result as if one does the same computation on the small figures in the right-hand side of Figure 2.
Consider some n ≥ 5 (to avoid border effects due to the size of Γ_2), and consider the triangle ∆_n: each sum has to be taken over the set of 2 × 2 squares included in the drawn rectangles. All squares appearing in both pictures cancel, and then the geometry of the summation reduces to that of the second line. In the third line, some squares are added, but since they correspond to the same contributions, this is allowed. We will show that, under the hypotheses of the theorem, Line_{ρ,T}(x(∆_n)) = 0 for any x(∆_n) ∈ E_κ^{∆_n}. For this, we will need the four following steps: (a) If Line_{ρ,T}(x(Γ_0)) = 0 for any x(Γ_0) ∈ E_κ^{Γ_0}, then Line_{ρ,T}(x(G)) = 0 if G is the 2 × 1 or 1 × 2 domino, or if G is a single vertex (1 × 1). Indeed, these structures are included in Γ_0, and, for any G ⊂ Γ_0, Line_{ρ,T}(x(G)) = 0 can be obtained by summing Line_{ρ,T}(x(Γ_0)) over the variables in Γ_0 \ G. (b) From (a), we deduce that if L_n is the n × 1 line, then Line_{ρ,T}(x(L_n)) = 0 for any x(L_n) ∈ E_κ^{L_n}. The graphical proof of this property is drawn in Figure 3. A single argument is needed: the set of 2 × 2 square contributions that do not vanish is the same on the right and left hand sides.
(c) We now extend the construction of this row L_n by adding a single vertex y just above its right-most element, getting a new shape L′_n as represented in the top-left picture of Figure 4. The graphical proof provided in Figure 4 shows that Line_{ρ,T}(x(L′_n)) = 0, using the nullity of Line on E_κ^{L_n}, on E_κ^{Γ_0} and on dominoes. (d) The argument given in (c) is independent of the fact that L_n was the first row. Since the difference Line_{ρ,T}(x(L′_n)) − Line_{ρ,T}(x(L_n)) does not involve the squares below the row at level 1 (say), if we "complete" both L_n and L′_n by the same fixed row at level 0, the difference Line_{ρ,T}(x(L′_n)) − Line_{ρ,T}(x(L_n)) would be unchanged. Hence, if two structures S and S′ are equal up to a given row at level h, and differ only because S′ possesses an additional point just above the leftmost position of this row, then we still have Line_{ρ,T}(x(S′)) − Line_{ρ,T}(x(S)) = 0. Adding a single vertex above the left-most point of the top-most row is a construction which does not allow one to pass from L_n to ∆_n. We still need an elementary growing trick allowing one to put some new vertices at the right of the top-most vertex in L′_n so as to complete the second row (in fact, we will construct a new row with one vertex less than L_n, leading iteratively to ∆_n): a slight generalization of Figure 2 does the job, and the graphical computation is represented in Figure 5.
A main right eigenvector of N_a is then given by r_a = (t/M_{y,a}, y ∈ E_κ). One sees that λ = 1/t is the main eigenvalue common to all the N_a's. In the same way, one sees that ℓ_a = (ρ_x M_{x,a}, x ∈ E_κ) is a main left eigenvector associated with N_a. The vectors r′_a and ℓ′_a of the theorem are obtained after normalisation: ℓ′_a = (ρ_x M_{x,a}/ρ_a, x ∈ E_κ), r′_a = ρ_a r_a = (t ρ_a/M_{y,a}, y ∈ E_κ). Now L_{a,b} = (M_{a,b} M_{b,x} M_{x,a}/t^3, x ∈ E_κ), giving L_{a,b} r_a = ρ_a M_{a,b}/t^3 and indeed Σ_{a,b} L_{a,b} r_a = 1/t^3 = λ^3.
-Now, assume that ν is given, and that (1.45) possesses a positive recurrent solution M. From the previous point, the N_a's have the same main eigenvalue. The main argument of the proof relies on the structure of ν, which allows one to show that if M exists, it is characterized by (1.45). Equation (1.45) suggests considering ν_{a,b,c} as the weight of a cycle (a, b, c) of length 3, which may be expanded as a product of edge weights: ν_{a,b,c} = w_{a,b} w_{b,c} w_{c,a}. We will see that the knowledge of the ν_{a,b,c} allows one to determine the weight of the cycles of every size, and then, by taking a limit, to determine M. First, taking (a, b, c) = (a, a, a) provides w_{a,a} = ν_{a,a,a}^{1/3}. Consider a cycle C_n = (a_1, …, a_{n−1}, a_n, a_1) of length n on E_κ, and for some 1 < j < n, add the directed edge (a_1, a_j) as well as the edge (a_j, a_1) to get the graph C′_n. We may partition this oriented graph as the union of a cycle C_j of length j and a cycle C_{j,n} = (a_j, …, a_{n−1}, a_n, a_1, a_j). Therefore W(C_n) = W(C′_n)/(w_{a_1,a_j} w_{a_j,a_1}) = W(C_j) W(C_{j,n})/(w_{a_1,a_j} w_{a_j,a_1}) = W(C_j) W(C_{j,n}) × ν_{a_1,a_1,a_1}^{1/3}/ν_{a_1,a_j,a_1}. (4.7) A simple iteration argument allows one to express the weight of a cycle of any length with the weights of cycles of length 3. A particular way to do that is to see (4.7) as the algebraic effect of the addition of the edges (a_1, a_j) and (a_j, a_1) in the cycle C_n: adding all the edges from and to a_1 yields W(C_n) = ν_{a_1,a_2,a_3} Π_{j=3}^{n−1} ν_{a_1,a_j,a_{j+1}} ν_{a_1,a_1,a_1}^{1/3}/ν_{a_1,a_j,a_1}. (4.8) Using the matrices L, N, R, (4.8) implies that Σ_{a_3,…,a_n} W(C_n) = L_{a_1,a_2} N_{a_1}^{n−3} 1.
Since M and M′ are positive recurrent, taking the limit when n → ∞, we get a relation involving the invariant measures ρ and ρ′ of the Markov kernels M and M′; hence α = 1. Taking b = c = a in (3) then gives M′_{a,a}/M_{a,a} = 1 for any a. Using this relation and taking d = a and α = 1 in (4.13), we obtain ρ_a = ρ′_a, and, still from (4.13), this implies M_{a,d} = M′_{a,d} for any (a, d).
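The cycle-weight identity (4.8) can be verified numerically on random edge weights; the weight matrix below is arbitrary, chosen only to test the algebra:

```python
# Numerical check (sketch) of the cycle-weight identity (4.8): with
# nu(a,b,c) = w[a][b] * w[b][c] * w[c][a], the weight of a cycle
# (a_1, ..., a_n, a_1) factorizes through weights of 3-cycles through a_1.
import math
import random

random.seed(1)
K = 3
w = [[random.uniform(0.5, 2.0) for _ in range(K)] for _ in range(K)]
nu = lambda a, b, c: w[a][b] * w[b][c] * w[c][a]

def W(cycle):  # product of w over the edges of the closed path
    return math.prod(w[cycle[i]][cycle[(i + 1) % len(cycle)]]
                     for i in range(len(cycle)))

a = [0, 1, 2, 1, 0, 2]           # a_1, ..., a_n with n = 6
lhs = W(a)
a1 = a[0]
rhs = nu(a1, a[1], a[2])
for j in range(2, len(a) - 1):   # j = 3, ..., n-1 in the 1-based indexing
    rhs *= nu(a1, a[j], a[j + 1]) * nu(a1, a1, a1) ** (1 / 3) / nu(a1, a[j], a1)
assert math.isclose(lhs, rhs)
print("identity (4.8) verified")
```

The verification works because each factor in the product telescopes: ν_{a_1,a_j,a_{j+1}} ν_{a_1,a_1,a_1}^{1/3}/ν_{a_1,a_j,a_1} = w_{a_j,a_{j+1}} w_{a_{j+1},a_1}/w_{a_j,a_1}, exactly the effect of removing the chords through a_1.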

Proof of Theorem 2.1
We start with a preliminary lemma, in which, in the sum, w = w_{1,L+2m} is used instead of abc (meaning that w_{1,m} = a and w_{L+m+1,L+2m} = c).
The proof is the same as that of Lemma 4.1 taking into account Remark 4.2. The proof of Theorem 2.1 is very similar to that of Theorem 1.9; we discuss only the main differences. We will prove the two cyclical implications (i) ⇒ (ii) ⇒ (iii) ⇒ (iv) ⇒ (v) ⇒ (i) and (v) ⇒ (vi) ⇒ (vii) ⇒ (viii) ⇒ (ix) ⇒ (v).
-Proof of (v) ⇒ (i). The proofs of the following lemmas can be adapted. We will adapt the argument of Lemma 4.5; the argument is a bit more involved here. Take A ∈ E_κ^N. Using iteratively Lemma 4.10, we are led to the following situation: X_1 and X_2 are µ-distributed, Y_1 and Y_2 are ν-distributed. Suppose that (X_i, Y_i), for i = 1, 2, are optimal couplings for the marginals, that is, d := d_TV(µ, ν) = 2E(1_{X_1 ≠ Y_1}) = 2E(1_{X_2 ≠ Y_2}). We then have a computation whose last equality is valid when d is small (d < 1/2). Hence, if one knows that the distance d_TV(µ_{1,2}, ν_{1,2}) < ε < 1/2 and that the marginals are independent, then d_TV(µ, ν) < (2/3)ε. The strategy is as follows: we will deduce from the inequality d_TV(µ_n^t, µ_n^0) ≤ Ct, valid for any n, that d_TV(µ_n^t, µ_n^0) ≤ (3/4)Ct for any n, so that necessarily d_TV(µ_n^t, µ_n^0) = 0. Take I_r(k) = ⟦1, r⟧ ∪ ⟦(k−1)r+1, kr⟧. Now write d_TV(µ_{I_r(k)}^t, µ_{I_r(k)}^0) ≤ d_TV(µ_{⟦0,kr⟧}^t, µ_{⟦0,kr⟧}^0) < ε. Since I_r(k) is the union of two intervals, µ_{I_r(k)}^t is (with a transparent notation) the distribution of the pair (X_{⟦1,r⟧}^t, X_{⟦(k−1)r+1,kr⟧}^t). According to the previous discussion, to conclude it suffices to prove that, when k → +∞, the two pairs A_t(k) := (X_{⟦1,r⟧}^t, X_{⟦1,r⟧}^0) and B_t(k) := (X_{⟦(k−1)r+1,kr⟧}^t, X_{⟦(k−1)r+1,kr⟧}^0) converge to two independent variables with the same distribution.
By the hypothesis we made on the Markov kernel, for any ε > 0, it is possible to find k large enough such that the variation distance between the initial configuration (X_{⟦1,r⟧}^0, X_{⟦(k−1)r+1,kr⟧}^0) and a pair of independent random variables with the same marginals is smaller than ε.
The fact that the initial configuration converges to independent vectors with the same distribution when k → +∞ is not sufficient: we need to show that their evolutions up to time t are asymptotically (in k) independent too.
The argument is routine: since the number of colours is finite and L is finite, max_{w,w′ ∈ E_κ^L} T[w|w′] < +∞. For fixed t < ∞, we build a dependence graph G_t as follows: the vertex set of the graph is the set of intervals of size L; for each jump that occurred in an interval I before time t, we add an edge between this interval and the intervals which intersect it (to encode the fact that the state at time t of these intervals may have been modified by the jump in I). Since max_{w,w′ ∈ E_κ^L} T[w|w′] < +∞, when k → +∞, the probability that the two intervals ⟦1, r⟧ and ⟦(k−1)r+1, kr⟧ intersect distinct connected components of G_t goes to 1. This suffices to deduce the asymptotic independence of (A_t(k), B_t(k)) when k → +∞.