SEMI-PARAMETRIC ESTIMATION OF THE VARIOGRAM SCALE PARAMETER OF A GAUSSIAN PROCESS WITH STATIONARY INCREMENTS

Abstract. We consider the semi-parametric estimation of the scale parameter of the variogram of a one-dimensional Gaussian process with known smoothness. We suggest an estimator based both on quadratic variations and the moment method. We provide asymptotic approximations of the mean and variance of this estimator, together with asymptotic normality results, for a large class of Gaussian processes. We allow for general mean functions, provide minimax upper bounds and study the aggregation of several estimators based on various variation sequences. In extensive simulation studies, we show that the asymptotic results accurately depict the finite-sample situations already for small to moderate sample sizes. We also compare various variation sequences and highlight the efficiency of the aggregation procedure.


General context and state of the art
Gaussian process models are widely used in statistics. For instance, they make it possible to interpolate observations by Kriging, notably in computer experiment designs to build a metamodel [44,54]. A second type of application of Gaussian processes is the analysis of local characteristics of images [45] and one-dimensional signals (e.g. in finance, see [26,61] and the references therein). A central problem with Gaussian processes is the estimation of the covariance function or the variogram. In this paper, we consider a real-valued Gaussian process (X(t))_{t∈R} with stationary increments. Its semi-variogram is well-defined and given by

V(h) = (1/2) Var(X(t + h) − X(t)).   (1.1)

Ideally, one aims at knowing the function V perfectly, or at least at estimating it precisely, either in a parametric setting or in a nonparametric setting. The parametric approach consists in assuming that the mean function of the Gaussian process (the drift) is a linear combination of known functions (often polynomials) and that the semi-variogram V belongs to a parametric family of semi-variograms {V_θ, θ ∈ Θ ⊂ R^p} for a given p in N^*. Furthermore, in most practical cases, the semi-variogram is assumed to stem from a stationary autocovariance function k defined by k(h) = Cov(X(t), X(t + h)). In that case, the process is supposed to be stationary (assuming also the mean to be constant), and V can be rewritten in terms of the process autocovariance function k: V(h) = k(0) − k(h). Moreover, a parametric set of stationary covariance functions is considered, of the form {k_θ, θ ∈ Θ} with Θ ⊂ R^p. In such a setting, several estimation procedures for θ have been introduced and studied in the literature. In practice, most software packages (like, e.g., DiceKriging [48]) use the maximum likelihood estimation method (MLE) to estimate θ (see [44,52,54] for more details on MLE).
Unfortunately, MLE requires solving linear systems and computing determinants, and is very often obtained via an iterative procedure for likelihood optimization. So, it is known to be computationally expensive and intractable for large data sets; in addition, the iterative procedure may diverge (that is to say, in practice, the sequence of covariance parameters obtained during the iterative procedure for likelihood optimization may diverge or converge to parameters at the boundary of the definition domain Θ) in some complicated situations (see Sect. 5.4). This has motivated the search for alternative estimation methods with a good balance between computational complexity and statistical efficiency. Among these methods, we can mention low rank approximation [55], sparse approximation [27], covariance tapering [20,32], Gaussian Markov random fields approximation [16,49], submodel aggregation [9,19,28,50,57,58] and composite likelihood [6].

Framework and motivation
The approaches discussed above are parametric. In this paper, we consider a more general semi-parametric context. The Gaussian process X is only assumed to have stationary increments, and no parametric assumption is made on the semi-variogram V in (1.1). Assume, to simplify, that the semi-variogram is a C^∞ function outside 0. This is the case for most of the models, even if the sample paths are not regular; see the examples in Section 2.1. Let D be the order of differentiability in quadratic mean of (X(t))_{t∈R}. This is equivalent to the fact that V is 2D-times differentiable but not (2D + 2)-times differentiable. Let us assume that the 2Dth derivative of V has the following expansion at the origin:

V^{(2D)}(h) = V^{(2D)}(0) + (−1)^D C|h|^s + r(h),   (1.2)

where C > 0, 0 < s < 2, and the remainder function r satisfies some hypotheses detailed further (see Sect. 2.1) and is o(|h|^s) as h → 0. Note that, since s < 2, V is indeed not (2D + 2)-times differentiable. The quantity s is the smoothness parameter and we call C the scale parameter. In this paper, we assume D and s to be known and we focus on the theoretical study of the semi-parametric estimation of C defined in (1.2) in dimension one. Remark that this makes it possible to test whether the Gaussian process stems from a white noise or not. Notice that we also perform some additional simulations in higher dimensions. The value of C may lead to significantly different behaviors of the process X, as one can see in Figure 1, which represents several realizations of a Gaussian process with exponential covariance function for different values of C, i.e. V(h) = 1 − exp(−C|h|) (which satisfies (1.2) with D = 0). In many cases, a large C is associated with a small dependence between the values of X and a small C is associated with an almost constant process X. For instance, in the above example V(h) = 1 − exp(−C|h|), the limit of V(h) is 1 = Var(X(0)) when C → +∞ and the limit of V(h) is 0 when C → 0.
Hence, in this example, the case C = +∞ can be interpreted as independence between the values of X and the case C = 0 can be interpreted as a constant process X.
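The qualitative effect of C can be visualized with a short simulation. The sketch below (the helper name is ours, not from the references) samples a Gaussian process with exponential covariance k(h) = exp(−C|h|) on a regular grid via a Cholesky factorization.

```python
import numpy as np

def sample_exponential_gp(n, C, rng):
    """Sample a centered Gaussian vector (X(t_1), ..., X(t_n)) with
    covariance k(h) = exp(-C|h|) on the regular grid t_j = j/n."""
    t = np.arange(1, n + 1) / n
    K = np.exp(-C * np.abs(t[:, None] - t[None, :]))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(0)
paths = {C: sample_exponential_gp(200, C, rng) for C in (0.1, 1.0, 100.0)}
# A small C gives an almost constant path, a large C gives nearly
# independent values, in line with the limits of V(h) discussed above.
```

Since the increments have variance 2(1 − exp(−C/n)), their typical size grows with C, which is one way to see the role of the scale parameter on simulated paths.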
As a motivating example, consider the following case where the estimation of C is beneficial. Assume that we observe a signal S depending on a vector of initial parameters x, given by a computer code, where x belongs to the initial parameter space E. In order to interpret the output curve t → S_x(t) for a given parameter x, it is useful to consider that this curve is the realization of a Gaussian process, with scale parameter C_x. It is then insightful to study the black-box x → C_x, for instance by means of a sensitivity analysis, or with the aim of finding which inputs x lead to C_x smaller than a given threshold. As discussed above, this can indicate output curves with a non-negligible dependence structure. A necessary step for such a study is the estimation of the value of C_x, given a discretized version of the curve t → S_x(t). More generally, estimating C makes it possible to assess whether an observed signal is composed of almost independent components (C large) or not, and to quantify the level of dependence. We refer to the real data sets studied in Section 5.4 for further discussion.

State of the art on variogram estimation
Nonparametric estimation of the semi-variogram function is a difficult task since the resulting estimator must necessarily lead to a "valid" variogram (conditional negative definiteness property) ([13], p. 93). This requirement usually leads to complicated and computationally involved nonparametric estimators of the variogram [24,25] that may need huge data sets to be meaningful. A simpler estimator based on the moment method has been proposed in [14,40] but does not always lead to a valid variogram. A classical approach to tackle this problem has been proposed in the geostatistics literature [10,18,31]: it consists in fitting a parametric model of valid semi-variograms to a pointwise nonparametric semi-variogram estimator, by minimizing a given distance between the nonparametric estimator and the semi-variograms at a finite number of lags. The reader is referred to [13], Chapter 2 for further details on semi-variogram model fitting and to [34] for least-squares methods.

State of the art on quadratic variations
In order to remedy the drawbacks of the MLE previously mentioned, we focus on an alternative estimation method using quadratic variations, based on the observations of the process X at a triangular array of points (t_j)_j, where j = 1, . . . , n. This method is also an alternative to the existing methods mentioned in the previous paragraph. Quadratic variations were first introduced by Lévy in [37] to quantify the oscillations of the Brownian motion. Then a first result on the quadratic variations of a Gaussian non-differentiable process is due to Baxter (see e.g. [8], [22], Chap. 5 and [21]), which ensures (under some conditions) the almost sure convergence (as n tends to infinity) of

Σ_{j=1}^{n} (X(t_j) − X(t_{j−1}))²,

where t_j = j/n, for j = 1, . . . , n (by convention t_0 = 0 and X(t_0) = 0). A generalization of the previous quadratic variations has been introduced in Guyon and León [23]: for a given real function H, the H-variation is given by

V_{H,n} = Σ_{j=1}^{n} H((X(t_j) − X(t_{j−1}))/σ_n),

where σ_n² denotes the common variance of the increments. In [23], technical conditions are assumed and a smoothness parameter 0 < s < 2, similar to the one in (1.2) when D = 0, is considered. Then the most unexpected result of [23] is that (V_{H,n}/n)_n has a limiting normal distribution with convergence rate n^{1/2} when 0 < s < 3/2, whereas the limiting distribution is non-normal and the convergence rate is reduced to n^{2−s} when 3/2 < s < 2. Moreover, for statistical purposes, it has been proved by Coeurjolly that quadratic variations are optimal (details and precisions can be found in [11]). In [30], Istas and Lang generalized the results on quadratic variations. They allowed for observation points of the form t_j = jδ_n for j = 1, . . . , n, with δ_n depending on n and tending to 0 as n goes to infinity. They studied the generalized quadratic variations defined by

V_{a,n} = Σ_i (Σ_j a_j X((i + j)δ_n))²,   (1.6)

where the sequence a = (a_j)_j has a finite support and some vanishing moments.
Then they built estimators of the smoothness parameter and of the scale parameter C, and showed that these estimators are almost surely consistent and asymptotically normal. In the more recent work of Lang and Roueff [35], the authors generalized the results of Istas and Lang [30] and Kent and Wood [33] on an increment-based estimator in a semi-parametric framework with different sets of hypotheses. Another generalization, for non-stationary Gaussian processes and quadratic variations along curves, is carried out in [1]. See also the studies of [42] and [11].
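To fix ideas, the generalized quadratic a-variation of a discretely observed path is a plain discrete convolution followed by a sum of squares. The sketch below (function names ours) also illustrates the Baxter-type convergence on a simulated Brownian path, for which the sum of squared increments on [0, 1] is close to 1.

```python
import numpy as np

def quadratic_a_variation(x, a):
    """Generalized quadratic a-variation sum_i (sum_j a_j x_{i+j})^2
    for observations x on a regular grid and a finite sequence a."""
    a = np.asarray(a, dtype=float)
    diffs = np.convolve(x, a[::-1], mode="valid")  # entries sum_j a_j x_{i+j}
    return np.sum(diffs ** 2)

# Baxter-type check on a simulated Brownian path with t_j = j/n:
rng = np.random.default_rng(1)
n = 100_000
x = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)  # Brownian path on [0, 1]
v = quadratic_a_variation(x, [-1, 1])
# v is close to 1 (the almost sure limit), up to fluctuations of order n^{-1/2}
```

The reversed kernel in `np.convolve(..., mode="valid")` makes entry i of the output equal to Σ_j a_j x_{i+j}, which is exactly the inner sum in (1.6).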

Contributions of the paper
Now let us present more precisely the framework considered in our paper. We assume that the Gaussian process X has stationary increments and is observed at times t_j = jδ_n for j = 1, . . . , n, with δ_n tending to zero. Note that t_j also depends on n, but we omit this dependence in the notation for simplicity. We will only consider δ_n = n^{−α} with 0 < α ≤ 1 throughout the article. Two cases are then considered: α = 1 (δ_n = 1/n, the infill asymptotic setting [13], which we call the infill situation throughout) and 0 < α < 1 (δ_n → 0 and nδ_n → ∞, the mixed asymptotic setting [13], which we call the mixed situation throughout). The paper is devoted to the estimation of the scale parameter C from one or several generalized quadratic a-variations V_{a,n} defined in (1.6). Calculations show that the expectation of V_{a,n} is a function of C, so that C can be estimated by the moment method.
Our study is related to that of Istas and Lang [30], in which they estimate both the scale parameter C and the local Hölder index H = D + s/2. Our main motivation for focusing on the case where the local Hölder index is known is, on the one hand, to provide an estimator that is simpler to implement and analyze, and on the other hand, to address more advanced statistical issues, such as efficiency, minimax upper bounds and aggregation of several estimators of C. In addition, our results hold under milder technical conditions than in [30], and in particular apply to most semi-variogram models commonly used in practice. We also show that a necessary condition in [30], namely the fact that the quantity in (2.3) is non-zero when the variation used has a large enough order, in fact always holds.
We establish asymptotic approximations of the expectation and the variance and a central limit theorem for the quadratic variations under consideration and for the estimators deduced from them. In particular, given a finite number of sequences a, we prove a joint central limit theorem (see Cor. 3.8). In addition, our method does not require a parametric specification of the drift (see Sect. 3.4); therefore it is more robust than MLE. Also, we obtain minimax upper bounds on the quadratic error of our method.
For a finite discrete sequence a with zero sum, we define its order as the largest integer M such that Σ_j a_j j^ℓ = 0 for ℓ = 1, . . . , M − 1.
Roughly speaking, Σ_j a_j f(jδ_n) is an estimation (up to a multiplicative constant) of the Mth derivative of the function f at zero. The order of the simplest sequence, (−1, 1), is M = 1. Natural questions then arise. What is the optimal sequence a?
In particular, what is the optimal order? Is it better to use the elementary sequence of order 1, (−1, 1), or the one of order 2, (−1, 2, −1)? For a given order, for example M = 1, is it better to use the elementary sequence of order 1, (−1, 1), or a more general one, for example (−1, −2, 3), or even a sequence based on discrete wavelets? Can we efficiently combine the information of several quadratic a-variations associated with several sequences? As far as we know, these questions have not been addressed yet in the literature. Unfortunately, the asymptotic variances we give in Proposition 3.1 or Theorem 3.7 do not allow us to address this issue theoretically either. However, by Corollary 3.8, one may gather the information of different quadratic a-variations with different orders. In order to validate such a procedure, an extensive numerical study is performed. The main conclusion is that gathering the information of different quadratic a-variations with different orders M yields an aggregated estimator with asymptotic variance closer to the optimal Cramér-Rao bound computed in Section 4.2. This is illustrated in Figure 4. The benefit of combining different a-variations is also demonstrated in a Monte Carlo study in Section 5.3. We also illustrate (still in a Monte Carlo study) the convergence to the asymptotic distribution considering different models (namely, exponential and Matérn models). Finally, we show that our suggested quadratic variation estimator can be easily extended to the two-dimensional case and we consider two real data sets in dimension two. When comparing our suggested estimator with MLE, we observe a very significant computational benefit for our estimator.
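The notion of order used in these questions is easy to check numerically; the following sketch (helper name ours) computes M(a) by scanning the moments of a zero-sum sequence.

```python
import numpy as np

def sequence_order(a, max_order=10):
    """Order M of a zero-sum sequence a: the smallest m >= 1 with
    sum_j a_j j^m != 0 (so sum_j a_j j^l = 0 for l = 1, ..., M - 1)."""
    a = np.asarray(a, dtype=float)
    assert abs(a.sum()) < 1e-12, "the sequence must have zero sum"
    j = np.arange(len(a))
    for m in range(1, max_order + 1):
        if abs(np.sum(a * j ** m)) > 1e-12:
            return m
    raise ValueError("order larger than max_order")

# The sequences discussed above:
# sequence_order([-1, 1])     -> 1
# sequence_order([-1, 2, -1]) -> 2
# sequence_order([-1, -2, 3]) -> 1  (zero sum, but first moment equals 4)
```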

Organization of the paper
The paper is organized as follows. In Section 2, we detail the framework and present the assumptions on the process. In Section 3, we introduce our quadratic variation estimator and provide its asymptotic properties, together with a wide discussion. In this section, we also consider the addition of a drift and we give minimax upper bounds for the quadratic error of our method. Section 4 is devoted to the analysis of the statistical efficiency of our estimator. In Section 5, we provide the results of the numerical simulations and of the real data analyses. A conclusion is provided in Section 6. In Appendix A, we discuss the assumptions made on the process. All the proofs are postponed to Appendix B.

Assumptions on the process
In this paper, we consider a Gaussian process (X(t))_{t∈R} which is not necessarily stationary but only has stationary increments. The process is observed at times jδ_n for j = 1, . . . , n, with δ_n going to 0 as n goes to infinity. As mentioned in the introduction, we will only consider δ_n = n^{−α} with 0 < α ≤ 1 throughout the article. Two cases are then considered: α = 1 (δ_n = 1/n, infill situation) and 0 < α < 1 (δ_n → 0 and nδ_n → ∞, mixed situation). The semi-variogram of X is defined by

V(h) = (1/2) Var(X(t + h) − X(t)).

In the sequel, we denote by (Const) a positive constant whose value may change from one occurrence to another. For the moment, we assume that X is centered; the case of non-zero expectation will be considered in Section 3.4. Now, we introduce the following assumptions. The form of (H 1 ) and (H 2 ) changes depending on whether we are in the infill situation or in the mixed situation (δ_n = n^{−α} with 0 < α < 1).
Infill situation: δ_n = 1/n. (H 1 ) The semi-variogram is 2D-times differentiable with D ≥ 0 and there exist C > 0 and 0 < s < 2 such that, for any h ∈ R, we have

V^{(2D)}(h) = V^{(2D)}(0) + (−1)^D C|h|^s + r(h), with r(h) = o(|h|^s) as h → 0.   (2.1)

In (H 1 ), the integer D is the greatest integer such that V is 2D-times differentiable everywhere. We recall that, when X is assumed to be a stationary process, we have V(h) = k(0) − k(h). If the covariance function k belongs to a parametric set of the form {k_θ; θ ∈ Θ} with Θ ⊂ R^p, then C is a deterministic function of the parameter θ.
• When D > 0, the Dth derivative X^{(D)} in quadratic mean of X is a Gaussian stationary process with autocovariance function k given by k(h) = (−1)^{D+1} V^{(2D)}(h). This implies that the Hölder exponent of the paths of X^{(D)} is s/2. Because s < 2, D is exactly the order of differentiability of the paths of X.
• If we denote H = D + s/2, then H represents the local Hölder index of the process [29].
A discussion of the assumptions (H 2 ) and (H 3 ) is provided in Appendix A.

Examples of processes that satisfy our assumptions
• The model k(h) = exp(−C|h|^s), s ∈ (0, 2) (D = 0). For this model, (H 0 ) to (H 2 ) always hold and (H 3 ) holds when 1/(2α) < s. Hence, in the infill situation, we need s > 1/2 and, in the mixed situation, the observation domain needs to increase slowly enough.
• The generalized Slepian model [53]: k(h) = (1 − C|h|^s)_+, s ∈ (0, 1] (D = 0). For this model, the condition s ∈ (0, 1] is needed for k to be a continuous non-negative definite function. In this case, (H 0 ) to (H 3 ) hold in the infill situation and when C < 1. For (H 0 ), we remark that, in this case, V is smooth on (0, 1] and not on (0, ∞), but this is sufficient for all the results to hold.
• The Matérn model:

k(h) = (2^{1−ν} / Γ(ν)) (|h|/θ)^ν K_ν(|h|/θ),

where ν > 0 is the regularity parameter of the process and θ > 0 is a correlation length parameter. The function K_ν is the modified Bessel function of the second kind of order ν. See, e.g., [54] for more details on the model. In that case, D = ⌊ν⌋ and s = 2ν − 2D (for non-integer ν). Here, it requires tedious computations to express the scale parameter C as a function of ν and θ. However, in Section 5.1, we derive the value of C in two settings (ν = 3/2 and ν = 5/2). For this model, (H 0 ) to (H 2 ) always hold and (H 3 ) holds when s < 2 − 1/(2α). Hence, in the infill situation we need s < 3/2, and in the mixed situation the observation domain needs to increase slowly enough.
All the previous examples are stationary (and thus have stationary increments). The following one is not stationary.
• The fractional Brownian motion (FBM). A reference on this subject is [12]. This process is classically indexed by its Hurst parameter H = s/2.
Here, D = 0, while s ∈ (0, 2) and C are left arbitrary. We call the FBM defined by C = 1 the standard FBM.
We remark that the Gaussian model, or square-exponential, defined by k(h) = σ² e^{−h²θ²}, with (σ², θ) ∈ (0, ∞)², does not satisfy our assumptions, because it is too regular (i.e. it is C^∞ everywhere and in particular at zero). Indeed, we have V(h) = σ²θ²h² + o(h²) as h → 0, and hence this model can be interpreted similarly as in (2.1), with σ²θ² playing the role of the scale parameter C and with s in (2.1) taking the value 2. It is then a relevant statistical problem to estimate σ²θ². However, the variation estimator developed here is not suitable for estimating σ²θ² in the Gaussian model. The main reason for this is that the variation estimator is designed for variograms that have only a finite order of differentiability at zero.
Let us check the assumptions for the model k(h) = exp(−C|h|^s). As h → 0, r(h) = O(|h|^{2s}) = o(|h|^s), and thus (H 1 ) holds in the infill situation. Furthermore, as |h| → ∞, r(h) = −C|h|^s + O(1) = O(|h|^s), and so (H 1 ) also holds in the mixed situation. Let us now show that (H 2 ) is also satisfied. First consider the case where s < 3/2 and let β be chosen such that s − 2 < β ≤ 2s − 2 and β < −1/2 (this is possible since s − 2 < −1/2). Consider the case where |h| ≤ 1. Since one can write r^{(2)}(h) = |h|^{2s−2} g(h), where g is a bounded function on [−1, 1], we have |r^{(2)}(h)| ≤ (Const)|h|^β since 2s − 2 ≥ β. Then the case s < 3/2 is complete in the infill situation. Consider now the case where |h| ≥ 1. One can show that |r^{(2)}(h)| ≤ (Const)|h|^{s−2}, and hence |r^{(2)}(h)| ≤ (Const)|h|^β since s − 2 < β. The case where s ≥ 3/2 can be treated analogously. Finally, it is simple to show that (H 3 ) holds when 1/(2α) < s.
In the rest of the paper, we assume that 0 < s < 2, unless specified otherwise.

Discrete a-differences
Now, we consider a non-zero finite support sequence a of real numbers with zero sum. Let L(a) be its length.
Since the starting point of the sequence plays no particular role, we will assume when possible that the first non-zero element is a_0. Hence, the last non-zero element is a_{L(a)−1}. We define the order M(a) of the sequence as the first non-zero moment of the sequence a:

M(a) = min{ m ≥ 1 : Σ_j a_j j^m ≠ 0 }.

To any sequence a with length L(a) and any function f, we associate the discrete a-difference of f defined by

∆_{a,i}(f) = Σ_{j=0}^{L(a)−1} a_j f((i + j)δ_n),  i = 1, . . . , n̄,

where n̄ stands for n − L(a) + 1. As a matter of fact, in the case of the simple sequence given by a_0 = −1 and a_1 = 1, the operator ∆_a is a discrete differentiation operator of order one. More generally, Σ_j a_j f(jδ_n) is an approximation (up to some multiplicative coefficient) of the M(a)th derivative (when it exists) of the function f at zero.
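The approximation property just stated can be checked numerically: for a sequence of order M, the Taylor expansion gives Σ_j a_j f(jδ) ≈ δ^M f^{(M)}(0) (Σ_j a_j j^M)/M!. The sketch below (helper name ours) verifies this for f = exp and the order-2 sequence (−1, 2, −1).

```python
import numpy as np
from math import factorial

def a_difference_at_zero(f, a, delta):
    """Discrete a-difference sum_j a_j f(j * delta) of a function f."""
    j = np.arange(len(a))
    return np.sum(np.asarray(a, dtype=float) * f(j * delta))

f = np.exp              # f^{(M)}(0) = 1 for every M
a = [-1.0, 2.0, -1.0]   # order M = 2, with sum_j a_j j^2 = -2
delta = 1e-3
approx = a_difference_at_zero(f, a, delta)
predicted = delta ** 2 * (-2) / factorial(2)  # leading Taylor term
# approx / predicted tends to 1 as delta -> 0
```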
We also define ∆_a(X) as the Gaussian vector of size n̄ with entries ∆_{a,i}(X), and Σ_a its variance-covariance matrix.
Examples - elementary sequences. The simplest case is the order-one elementary sequence a^{(1)} defined by a^{(1)}_0 = −1 and a^{(1)}_1 = 1. More generally, we define the kth order elementary sequence a^{(k)} as the sequence with coefficients a^{(k)}_j = (−1)^{k−j} (k choose j), j = 0, . . . , k. Its length is given by L(a^{(k)}) = k + 1.
For two sequences a and a′, we define their convolution b = a ∗ a′ as the sequence given by b_j = Σ_{k−l=j} a_k a′_l. In particular, we denote by a^{2∗} the convolution a ∗ a. Notice that the first non-zero element of b is not necessarily b_0 but b_{−(L(a′)−1)}, as mentioned in the following properties. The main result of this section is Proposition 2.6, which is required to quantify the asymptotic behavior of the first two moments of the quadratic a-variations defined in (3.1) (see Prop. 3.1). In order to prove (2.3), we establish two preliminary tools (Prop. 2.4 and Lemma 2.5). In that view, we need to define the integrated fractional Brownian motion (IFBM). We start from the FBM defined in Section 2.2, which has the following non-anticipative representation:

X(t) = c_s ∫_{−∞}^{t} ((t − u)_+^{(s−1)/2} − (−u)_+^{(s−1)/2}) dB(u),

where dB(t) is a white noise defined on the whole real line and c_s is a normalizing constant. For m ≥ 0 and t ≥ 0, we define inductively the IFBM X_m by X_0 = X and

X_m(t) = ∫_0^t X_{m−1}(u) du.
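The convolution b = a ∗ a′ (and in particular a^{2∗}) is a plain discrete correlation and can be computed as follows; the returned offsets make explicit that the support starts at index −(L(a′) − 1). Function names are ours.

```python
import numpy as np

def seq_convolution(a, ap):
    """b = a * a' with b_j = sum_{k-l=j} a_k a'_l, for j ranging over
    -(L(a') - 1), ..., L(a) - 1; returns (offsets, coefficients)."""
    a = np.asarray(a, dtype=float)
    ap = np.asarray(ap, dtype=float)
    b = np.convolve(a, ap[::-1])  # correlation of a with a'
    offsets = np.arange(-(len(ap) - 1), len(a))
    return offsets, b

offsets, b = seq_convolution([-1, 1], [-1, 1])  # a^{2*} for a = (-1, 1)
# offsets = [-1, 0, 1] and b = [-1, 2, -1]
```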

Definition 2.3 (Non-degeneracy property).
A process Z has the ND property if, for every k > 0 and every t_1 < t_2 < · · · < t_k belonging to the domain of definition of Z, the distribution of (Z(t_1), . . . , Z(t_k)) is non-degenerate.
We have the following results.
where N_m ∈ N and, for i = 1, . . . , N_m, P_{m,i} is a polynomial of degree less than or equal to m and h_{m,i} is some function.
Proposition 2.6. If the sequence a has order M(a) > D, then (2.3) holds. The case D = 0 follows from [51], Lemma 2.10.8. The whole proof of Proposition 2.6 is postponed to Appendix B.1. Note that (2.3) is stated as a hypothesis in [30].

Definition
Here, we consider the discrete a-difference applied to the process X and we define the quadratic a-variations by

V_{a,n} = Σ_{i=1}^{n̄} ∆_{a,i}(X)²,   (3.1)

recalling that n̄ = n − L(a) + 1. When no confusion is possible, we will use the shorthand notations L and M for L(a) and M(a).

Main results on quadratic a-variations
The basis of our computations of variances is the identity

Cov(∆_{a,i}(X), ∆_{a′,j}(X)) = −Σ_{k,l} a_k a′_l V((i + k − j − l)δ_n),

valid for any sequences a and a′ (it follows from the stationarity of the increments and the zero-sum property of the sequences). A second main tool is the Taylor expansion with integral remainder (see, for example, (B.5)). So, we introduce another notation: for a sequence a, a scale δ, an order q and a function f, we define the quantity R(i, δ, q, f, a) built from the a-difference of f and the Taylor remainder of order q. One of our main results is the following.
2) If V satisfies additionally (H 2 ) and if we choose a sequence a such that M > D + s/2 + 1/4, then the asymptotic variance expansion holds as n tends to infinity, and the series appearing in it is positive and finite. (ii) In practice, since the parameters D and s are known, it suffices to choose M such that M ≥ D + 1 when s < 3/2 and M ≥ D + 2 when 3/2 ≤ s < 2.
(iii) The expression of the asymptotic variance appears to be complicated. Anyway, in practice, it can be easily approximated. Some explicit examples are given in Section 5.
Following the same lines as in the proof of Proposition 3.1 and using the identities (a ∗ a′)_j = (a′ ∗ a)_{−j} and R(i, 1, 2D, |·|^s, a ∗ a′) = R(−i, 1, 2D, |·|^s, a′ ∗ a), one may easily derive the corollary below. The proof is omitted. As n tends to infinity,

Cov(V_{a,n}, V_{a′,n}) = 2n̄C²δ_n^{4D+2s} Σ_{i∈Z} R(i, 1, 2D, |·|^s, a ∗ a′)² (1 + o(1)).

Particular case - D = 0. (i) We choose a as the first-order elementary sequence (a_0 = −1, a_1 = 1 and M = 1). As n tends to infinity, the variance expansion above can then be made fully explicit. (ii) General sequences. We choose two sequences a and a′ so that M(a) ∧ M(a′) > s/2 + 1/4. Then the covariance expansion above holds as n tends to infinity, up to a (1 + o(1)) factor.

Now we establish the central limit theorem.
Theorem 3.4 (Central limit theorem for V_{a,n}). Assume (H 0 ), (H 1 ) and (H 2 ), and M > D + s/2 + 1/4. Then V_{a,n} is asymptotically normal in the sense that

(V_{a,n} − E[V_{a,n}]) / Var(V_{a,n})^{1/2} converges in distribution to N(0, 1) as n → ∞.

Remark 3.5.
• If M = D + 1, the condition M > D + s/2 + 1/4 in Proposition 3.1 implies s < 3/2. However, when M = D + 1 and s ≥ 3/2, it is still possible to compute the variance, but the convergence is slower and the central limit theorem does not hold anymore. More precisely, we have the following.
• If s > 3/2 and M = D + 1 then, as n tends to infinity, the variance satisfies a modified expansion with a slower rate; if s = 3/2 and M = D + 1, an additional logarithmic factor appears as n tends to infinity. We omit the proof. Analogous formulas for the covariance of two variations can be derived similarly. • Since the work of Guyon and León [23], it is a well-known fact that in the simplest case (D = 0, L = 2, M = 1) and in the infill situation (δ_n = 1/n, α = 1), the central limit theorem holds true for quadratic variations if and only if s < 3/2. Hence the assumption M > D + s/2 + 1/4 is minimal for D = 0 in the following sense. When s < 3/2, the assumption becomes M ≥ 1, which is not a restriction as we consider sequences a with zero sum throughout. When s ≥ 3/2, the assumption becomes M ≥ 2, and we have seen that the central limit theorem does not always hold for M = 1.

Estimators of C based on the quadratic a-variations
Guided by the moment method, we define

C_{a,n} := V_{a,n} / ( n̄ (−1)^D δ_n^{2D+s} R(0, 1, 2D, |·|^s, a^{2∗}) ).   (3.10)

Then C_{a,n} is an estimator of C which is asymptotically unbiased by Proposition 3.1. Now our aim is to establish its asymptotic behavior.
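For D = 0, the moment estimator takes a particularly simple form: the covariance identity of this section gives E[V_{a,n}] ≈ n̄ C δ_n^s K_a with K_a = −Σ_j (a^{2∗})_j |j|^s, so C can be estimated by a single normalized sum. The sketch below (function names ours, and with this explicit form of the normalizing constant taken as a working assumption) checks the estimator on a simulated Brownian path, for which V(h) = |h|/2, i.e. C = 1/2 and s = 1.

```python
import numpy as np

def estimate_C(x, a, delta, s):
    """Moment estimator of the scale parameter C when D = 0, assuming
    V(h) = C|h|^s + o(|h|^s), from observations x on a grid of step delta."""
    a = np.asarray(a, dtype=float)
    diffs = np.convolve(x, a[::-1], mode="valid")  # a-differences of X
    v_an = np.sum(diffs ** 2)                      # quadratic a-variation
    b = np.convolve(a, a[::-1])                    # the convolution a^{2*}
    j = np.arange(-(len(a) - 1), len(a))
    K_a = -np.sum(b * np.abs(j) ** s)              # normalizing constant
    return v_an / (len(diffs) * delta ** s * K_a)

rng = np.random.default_rng(2)
n = 100_000
delta = 1.0 / n
x = np.cumsum(np.sqrt(delta) * rng.standard_normal(n))  # Brownian path
C_hat = estimate_C(x, [-1, 1], delta, s=1.0)
# C_hat is close to the true value C = 1/2
```

For a = (−1, 1) and s = 1, K_a = 2, so the estimator reduces to the classical normalized sum of squared increments.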
The following corollary is of particular interest: it gives theoretical results when one aggregates the information of different quadratic a-variations with different orders. As one can see numerically in Section 5.2, such a procedure appears to be very promising and circumvents the problem of determining the optimal sequence a. Corollary 3.8. Under the assumptions of Theorem 3.7, consider k sequences a^{(1)}, . . . , a^{(k)} such that, for i = 1, . . . , k, M(a^{(i)}) > D + s/2 + 1/4. Assume furthermore that the covariance matrix of (C_{a^{(i)},n} / Var(C_{a^{(i)},n})^{1/2})_{i=1,...,k} converges to an invertible matrix Γ_∞ as n → ∞. Then the vector of normalized estimators converges jointly in distribution to a centered Gaussian vector with covariance matrix Γ_∞.
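One standard way to exploit such a joint limit (a natural option, not necessarily the exact aggregation rule used later in the paper) is the minimum-variance linear combination with weights summing to one, w = Γ^{−1}1 / (1ᵀΓ^{−1}1), applied with an estimated covariance matrix of the individual estimators:

```python
import numpy as np

def aggregate(estimates, cov):
    """Best linear combination of estimators of a common quantity:
    weights w = cov^{-1} 1 / (1' cov^{-1} 1) minimize the aggregated
    variance w' cov w under the constraint sum(w) = 1."""
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)
    w = w / (w @ ones)
    return w @ np.asarray(estimates, dtype=float), w

# Two correlated estimators of the same C (illustrative numbers):
est = [0.52, 0.47]
cov = np.array([[0.010, 0.004],
                [0.004, 0.008]])
c_agg, w = aggregate(est, cov)
# The aggregated variance w' cov w never exceeds the best single variance.
```

This is why combining several a-variations can only reduce the asymptotic variance, which is consistent with the numerical gains reported in Section 5.2.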

Adding a drift
In this section, we no longer assume that the process X is centered, and we set, for t ≥ 0,

E[X(t)] = f(t).

We write X̃ for the corresponding centered process: X̃(t) = X(t) − f(t). As is always the case in statistical applications, we assume that f is a C^∞ function. We emphasize that our purpose is not to propose an estimation of the mean function.
Corollary 3.9. Assume the same assumptions as in Theorem 3.7, and recall that δ_n = n^{−α} for α ∈ (0, 1]. Define the drift term K_{M,n} accordingly; if we assume in addition that condition (3.12) holds, then the asymptotic results of Theorem 3.7 remain valid for the process with drift. Note that in the infill situation (δ_n = 1/n, α = 1), K_{M,n} does not depend on n. Obviously, (3.12) is met if f is a polynomial, up to an appropriate choice of the sequence a (and of M). In the infill situation, a sufficient condition for (3.12) is M > D + s/2 + 1/4, which always holds in our setting. Moreover, it is worth noticing that we only assume regularity of the Mth derivative of the drift. No parametric assumption on the model is required, unlike in the MLE procedure.
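The robustness to the drift can be seen directly: a sequence of order M satisfies Σ_j a_j j^ℓ = 0 for ℓ = 0, . . . , M − 1, so the discrete a-difference annihilates any polynomial drift of degree smaller than M. A minimal check (illustrative values):

```python
import numpy as np

a = np.array([-1.0, 2.0, -1.0])   # order M = 2
delta = 0.01
t = np.arange(100) * delta
drift = 3.0 + 2.0 * t             # affine drift, degree 1 < M
diffs = np.convolve(drift, a[::-1], mode="valid")  # a-differences of the drift
# diffs vanishes (up to rounding): the variation does not see this drift
```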

Minimax upper bound on the quadratic error
In this section, we provide a minimax upper bound on the quadratic error of the estimator C_{a,n}, over a class of possible variograms. Hence, whereas the variogram V is fixed in the other sections of this paper, we allow here for a varying variogram V, which is restricted to a function set. We consider here only the infill situation, with δ_n = 1/n, for simplicity of exposition.
First we define this function set. We say that a function h : [−1, 1] → R is even if h(−t) = h(t) for all t. We say that it is conditionally negative definite if, for any x_1, . . . , x_n ∈ [0, 1] and for any α_1, . . . , α_n ∈ R such that α_1 + · · · + α_n = 0,

Σ_{i,j=1}^{n} α_i α_j h(x_i − x_j) ≤ 0.

We observe that any variogram V must be even and conditionally negative definite.
The function set C_{D,s,C,U,µ,B,β} to which the variogram V belongs is defined in (3.13): V is 2D-times differentiable at 0 and, for h ∈ [−1, 1], the conditions (3.14) to (3.17) hold. In the definition of C_{D,s,C,U,µ,B,β} in (3.13), the condition that V is smooth outside of 0 corresponds to (H 0 ). In (3.15), it is assumed that V has the same expansion at 0 as in (H 1 ), where the remainder is bounded by U|h|^{s+µ}. Note that we only require µ > 0, while in Theorem 3.7 we assume that (H 3 ) holds, which corresponds to µ > 1/2. The reason for this difference is that, in Theorem 3.7, we prove a central limit theorem for the estimation of C with a convergence rate of order 1/n^{1/2} and no asymptotic bias, whereas below we show that the minimax upper bound on the quadratic estimation error of C has order 1/n only when µ ≥ 1/2. For 0 < µ < 1/2, the minimax upper bound is of order strictly larger than 1/n. Finally, (3.16) and (3.17) correspond to (H 2 ). We are interested in bounding the quadratic error uniformly over all the possible variograms V ∈ C_{D,s,C,U,µ,B,β}. A uniform bound achieving this is a minimax upper bound. In the corresponding expectation, we insist on the fact that C_{a,n} = C_{a,n}(X(1/n), . . . , X(1)) and that X is a centered Gaussian process on [0, 1], with variogram V ∈ C_{D,s,C,U,µ,B,β}. The next theorem provides the minimax upper bound on the quadratic error.
Theorem 3.10. If s < 3/2, consider β > s − 2 to be fixed; if s ≥ 3/2, consider β > s − 3 to be fixed. Let 0 < C_sup < ∞ be fixed. Let the sequence a be fixed, with M(a) > D + s/2 + 1/4. Then, there exists a constant K, depending only on D, s, U, µ, B, β, C_sup, a, such that, for n ≥ L(a),

sup_{V ∈ C_{D,s,C,U,µ,B,β}, 0 < C ≤ C_sup} E[(C_{a,n} − C)²] ≤ K n^{−min(2µ,1)},

where in the above expectation the Gaussian process X has mean zero and variogram V.
In Theorem 3.10, the rate of decay of the quadratic error is n^{−min(2µ,1)}. Let us interpret this. The variance of the estimator has order 1/n, see Lemma B.4. The square bias has order n^{−min(2µ,2)}, see Lemma B.3, and stems from the order of magnitude of the remainder term r(h) = V^{(2D)}(h) − V^{(2D)}(0) − C(−1)^D|h|^s, which is of order |h|^{s+µ}. Hence, for µ ∈ (0, 1/2), the remainder term is too large and thus the square bias is too large, preventing the estimator C_{a,n} from achieving the parametric rate 1/n. For µ ≥ 1/2, the square bias becomes small enough and the parametric rate 1/n is achieved. As discussed above, the central limit theorem for C_{a,n} in Theorem 3.7, which provides an asymptotic variance of order 1/n with no asymptotic bias, relies on (H 3 ), which corresponds to µ > 1/2 in the class C_{D,s,C,U,µ,B,β}. Note that our minimax upper bound is also uniform over the value of C, which is only upper bounded by C_sup. The expression of the constant K can be found in the proof of Theorem 3.10.
Remark 3.11. The minimax upper bound in Theorem 3.10 provides an estimation error of order n^{−min(µ,1/2)}. This order coincides with the one obtained in [35], where s rather than C is estimated and where a remainder term there, similar to r(h), is also assumed to be of order |h|^{s+µ}, as in (3.15). We remark that [35] provides minimax upper and lower bounds, while we only consider upper bounds. In future work, it would be interesting to see whether the lower bound techniques of [35], based in particular on the Van Trees inequality, could be extended to our setting.
Remark 3.12. It is important to observe that the estimator C_{a,n} does not depend on the smoothness indices µ and β, although the rate in Theorem 3.10 partially depends on µ. This rate could also depend on β in other situations. In other semi-parametric statistical settings, the situation could be different, in that the estimators could depend on smoothness parameters playing a role similar to that of µ and β here. For instance, the optimal bandwidth in kernel smoothing typically depends on the underlying smoothness.

Quadratic variations versus MLE
In this section, we compare our methodology to the very popular MLE method. For details on the MLE procedure, the reader is referred to, e.g. [44,52].

Model flexibility
As mentioned in the introduction, the MLE methodology is parametric and requires the covariance function to belong to a parametric family of the form {k_θ, θ ∈ Θ}. In the procedure proposed in this paper, it is only assumed that the semi-variogram satisfies the conditions given in Section 2.1 and that D and s are known. When the parametric assumption does not hold, the suggested variation estimator remains feasible, while the MLE is not even defined.

Adding a drift
In order to use the MLE estimator, it is necessary to assume that the mean function of the process is a linear combination β_1 f_1 + · · · + β_q f_q of known parametric functions, with known f_1, . . . , f_q and where β_1, . . . , β_q need to be estimated. Our method is less restrictive and more robust. Indeed, we only assume regularity of the M-th derivative of the mean function in assumption (3.12); we require parametric assumptions neither on the semi-variogram nor on the mean function. Furthermore, no estimation of the mean function is necessary.
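The insensitivity to smooth drifts can be illustrated numerically: a variation sequence with enough vanishing moments annihilates polynomial trends, so the quadratic a-variations of the process with and without drift nearly coincide. A minimal sketch (the sequence and the affine drift below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Elementary sequence of order 2: sum(a_j) = 0 and sum(j * a_j) = 0,
# so the a-increments annihilate any affine drift exactly.
a = np.array([1.0, -2.0, 1.0])

n = 200
t = np.arange(1, n + 1) / n
drift = 2.0 + 5.0 * t  # illustrative affine drift b0 + b1 * t

# a-increments of the drift: sum_j a_j * f(t_{i+j}) for all valid i
L = len(a) - 1
inc = sum(a[j] * drift[j:n - L + j] for j in range(L + 1))

print(np.max(np.abs(inc)))  # 0 up to floating-point rounding
```

A drift that is only approximately polynomial is reduced to a term of order δ_n^M by the same mechanism, which is why only the regularity of a high-order derivative of the mean matters.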

Computational cost
The cost of our method is only O(n), since it only requires the computation of a sum, while the cost of the MLE procedure is known to be O(n^3).
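The O(n) cost reflects that the quadratic a-variation is a single pass over the data. A sketch of a plausible form of the moment-method estimator for D = 0 (the normalizer c_a = −Σ_{j,k} a_j a_k |j − k|^s below is derived from the main term V(h) = C|h|^s and is an assumption of this sketch, not the exact formula (3.10)):

```python
import numpy as np

def chat_quadratic_variation(x, a, s, delta):
    """Scale estimate from quadratic a-variations (case D = 0).

    x     : observations X(delta), ..., X(n * delta)
    a     : variation sequence with sum(a) = 0
    s     : known smoothness exponent, 0 < s < 2
    delta : observation spacing
    """
    a = np.asarray(a, dtype=float)
    L = len(a) - 1
    n = len(x)
    # a-increments and their sum of squares: one O(n) pass over the data
    inc = sum(a[j] * x[j:n - L + j] for j in range(L + 1))
    v_an = np.sum(inc ** 2)
    # Expected squared a-increment under the main term V(h) = C|h|^s
    jj = np.arange(L + 1)
    c_a = -np.sum(np.outer(a, a) * np.abs(jj[:, None] - jj[None, :]) ** s)
    return v_an / ((n - L) * delta ** s * c_a)
```

For instance, for the elementary sequence a = (−1, 1) and s = 1, one gets c_a = 2, and the estimator reduces to Σ_i (X(t_{i+1}) − X(t_i))² / (2(n − 1)δ).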

Practical issues
In some real data settings, the iterative procedure for likelihood optimization may diverge for the MLE, as can be seen in Section 5.4. Such a dead end is not possible with our procedure.
3.6.6. Quadratic variations versus other methods

3.6.7. Least-square estimators

In [34], the authors propose a Least Square Estimator (LSE). More precisely, given a parametric family of semi-variograms {V_θ, θ ∈ Θ}, the Ordinary Least Square Estimation (OLSE) consists in minimizing in θ the quantity Σ_{h∈I} (V̂_n(h) − V_θ(h))², where I is a set of lags and V̂_n is a non-parametric estimator of the semi-variogram. Several variants, such as Weighted Least Square Estimation and General Least Square Estimation, have then been introduced. The authors of [34] provide necessary and sufficient conditions for these estimators to be asymptotically efficient; in particular, they show that when the number of lags used to define the estimators is equal to the number of variogram parameters to be estimated, the ordinary, weighted and generalized least squares estimators are all asymptotically efficient. As for the MLE, least squares estimators require a parametric family of variograms, while quadratic variation estimators do not.

Cross-validation estimators
Cross-validation estimators [3, 4, 5, 62] are based on minimizing scores built from the leave-one-out prediction errors, with respect to the covariance parameter θ, when a parametric family of semi-variograms {V_θ, θ ∈ Θ} is considered. Hence, like the MLE, they require a parametric family of variograms. Furthermore, as for the MLE, the computational cost is O(n^3), while it is O(n) for quadratic variation estimators.

Composite likelihood
Maximum composite likelihood estimators follow the principle of the MLE, with the aim of reducing its computational cost [39, 41, 56, 59, 60]. To this end, when a parametric family of semi-variograms {V_θ, θ ∈ Θ} is considered, they consist in optimizing over θ the sum, over i = 1, . . . , n, of the conditional likelihoods of the observation X_i = X(t_i), given a small number of observations whose locations are close to t_i. The computational cost of one evaluation of this sum of conditional likelihoods is O(n), in contrast to O(n^3) for the full likelihood. Nevertheless, composite likelihood estimation requires a numerical optimization, while our suggested estimator does not. Furthermore, a parametric family of variograms is required for the composite likelihood but not for our estimator. Finally, [6] recently showed that the composite likelihood estimator has rate of convergence only n^s when D = 0 and 0 < s < 1/2 in (1.2). Hence, the rate of convergence n^{1/2} of quadratic variation estimators is larger in this case.

Already known results on quadratic a-variations
In [30], Istas and Lang consider a Gaussian process with stationary increments in the infill case and assume (1.2), as in our paper. They establish the asymptotic behavior of V_{a,n} under more restrictive regularity hypotheses on V than ours (in particular on r and on δ_n). They then propose estimators of both the local Hölder index H = D + s/2 and the scale parameter C, based on quadratic a-variations, and study their asymptotic behavior. The expression of their estimator of C is much more complex than the one that simply stems from the moment method. More precisely, they consider I sequences (a^{(j)})_{j=1,...,I} with lengths p_j and the vector U of length I whose coordinate j is V_{a^{(j)},n}. Noticing that the vector U/n converges to the product AZ, where A is an I × p matrix derived from the sequences a^{(j)}, with p = max_j p_j, and Z is the vector (C(−1)^D (jδ_n)^{2H})_{j=1,...,p}, they estimate Z by Ẑ = (AᵀA)^{−1}AᵀU and derive their estimators of H and C from Ẑ.
As explained in the introduction in Section 1, Lang and Roueff [35] generalize the results of Istas and Lang [30, 33]. They consider the infill situation and use quadratic a-variations to estimate both the scale parameter C and the smoothness parameter s, under a hypothesis similar to (1.2). Furthermore, they assume three types of regularity on V: Hölder regularity of the derivatives at the origin, Besov regularity and global Hölder regularity. Nevertheless, estimating both C and s leads to a more complex estimator of C and to significantly different and more involved proofs.
To summarize, our contribution, in addition to the existing references [30, 33, 35], is to provide an estimation method for C whose definition, implementation and asymptotic analysis are simpler. As a result, we need fewer technical assumptions: our assumptions are easier to check than in [30], less technical than in [35], and can easily be shown to hold in many classical examples. This also enables us to study the aggregation of quadratic variation estimators built from different sequences, see Section 5.2. In addition, our proofs for the asymptotic analysis of quadratic variations are simpler than in the existing references.

Efficiency of our estimation procedure
In this section, in order to decrease the asymptotic variance, we propose a procedure combining several quadratic a-variations, leading to aggregated estimators. Our goal is then to evaluate the quality of these estimators. To this end, we compare their asymptotic variance with the theoretical Cramér-Rao bound in particular cases in which this bound can be computed explicitly.

Aggregation of estimators
In order to improve the estimation procedure, we now suggest aggregating a finite number of estimators, Σ_{j=1}^k λ_j C_{a^{(j)},n}, based on k different sequences a^{(1)}, . . . , a^{(k)} with weights λ_1, . . . , λ_k. Ideally, one would provide an adaptive statistical procedure choosing the optimal number k* of sequences, the optimal sequences and the optimal weights λ*. Such a task is beyond the scope of this paper. Nevertheless, in this section, we consider a given number k of given sequences a^{(1)}, . . . , a^{(k)}, leading to the estimators C_{a^{(1)},n}, . . . , C_{a^{(k)},n} defined by (3.10), and we provide the optimal weights λ*. Using [36] or [7], one can establish the following lemma.
Lemma 4.1. Assume that for j = 1, . . . , k the conditions of Corollary 3.8 are met. Let R be the k × k asymptotic variance-covariance matrix of the vector of length k whose elements are (n^{1/2}/C) C_{a^{(j)},n}, j = 1, . . . , k. Then, for any λ_1, . . . , λ_k with sum one, the asymptotic variance of the normalized aggregated estimator is λᵀRλ. Let 1_k be the "all one" column vector of size k and define λ* = R^{−1}1_k / (1_kᵀ R^{−1} 1_k). One has Σ_{j=1}^k λ*_j = 1 and λ*ᵀRλ* ≤ λᵀRλ.
Let us now explain how the (optimally) aggregated estimator is computed in practice. The weights in Lemma 4.1 are functions of the matrix R, which is the limit as n → ∞ of the covariance matrix R^{(n)} defined in (4.1) for i, j = 1, . . . , k. From Corollary 3.3 and (3.10), we have R̃^{(n)} − R^{(n)} → 0 as n → ∞, where R̃^{(n)} is defined, for i, j = 1, . . . , k, by an explicit computable expression. Hence, in practice we can compute the weights given by (4.2), which only depend on a^{(1)}, . . . , a^{(k)}, D and s, all of which are known quantities. As will be shown with simulations in Section 5, the aggregated estimator considerably improves upon each of the original estimators C_{a^{(1)},n}, . . . , C_{a^{(k)},n}.
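Computing the optimal weights from a (computed or estimated) covariance matrix R is a two-line linear solve. A sketch; the 2 × 2 matrix below is a hypothetical example, not a value from the paper:

```python
import numpy as np

def optimal_weights(R):
    """Weights lambda* = R^{-1} 1_k / (1_k^T R^{-1} 1_k), which minimize
    lambda^T R lambda under the constraint sum(lambda) = 1."""
    ones = np.ones(R.shape[0])
    w = np.linalg.solve(R, ones)
    return w / w.sum()

# Hypothetical asymptotic covariance matrix of two normalized estimators
R = np.array([[2.5, 1.8],
              [1.8, 3.0]])
lam = optimal_weights(R)
var_agg = lam @ R @ lam
# var_agg <= min(R[0, 0], R[1, 1]): since the single-estimator weight
# vectors e_j satisfy the constraint, aggregation never hurts asymptotically
```

The achieved variance is λ*ᵀRλ* = 1/(1_kᵀR^{−1}1_k), always at most the smallest diagonal entry of R.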

Cramér-Rao bound
To validate the aggregation procedure, we want to compare the obtained asymptotic variance with the theoretical Cramér-Rao bound. To this end, we compute in the following section the Cramér-Rao bound in two particular cases.
We consider a family Y_C (C ∈ R_+) of centered Gaussian processes. Let R_C be the (n − 1) × (n − 1) variance-covariance matrix of the observations. Assume that C ↦ R_C is twice differentiable and that R_C is invertible for all C ∈ R_+. Then let I_C = (1/2) tr((R_C^{−1} ∂R_C/∂C)²) be the Fisher information. The quantity 1/I_C is the Cramér-Rao lower bound for estimating C based on the observations (see for instance [4, 15]). We now give two examples of families of processes for which the Cramér-Rao lower bound can be computed explicitly. The first example is obtained from the IFBM defined in Section 2.2: when X is the IFBM, Y_C = X^{(D)} is a FBM whose semi-variogram V_C is given by (4.4). Hence, in this case, we have 1/I_C = 2C²/(n − 1). We now consider a second example, given by the generalized Slepian process defined in Section 2.2. Let s ≤ 1 and let Y_C have the stationary autocovariance function k_C defined by (4.5). This function is convex on R_+ and it follows from Pólya's theorem [43] that k_C is a valid autocovariance function. We thus easily obtain the following lemma, whose proof is omitted.
Lemma 4.3. Let X be the D-fold integration of Y_C defined via (4.5). Then, in the infill situation (δ_n = 1/n, α = 1) and for C < 2, the semi-variogram of Y_C is given by (4.4) and consequently 1/I_C = 2C²/(n − 1).
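The closed form 1/I_C = 2C²/(n − 1) can be checked numerically with the Gaussian Fisher information formula I_C = ½ tr((R_C^{−1} ∂R_C/∂C)²). A sketch for a Brownian-type case with semi-variogram V(h) = C|h| (the covariance Cov(X(s), X(t)) = 2C min(s, t), implied by V and X(0) = 0, and the grid t_i = i/n are assumptions of this illustration):

```python
import numpy as np

def fisher_info(C, n):
    """I_C = 0.5 * tr((R^{-1} dR/dC)^2) for a centered Gaussian vector.

    Here R_C[i, j] = 2 * C * min(t_i, t_j) at t_i = i/n, i = 1, ..., n - 1:
    the Brownian-type case D = 0, s = 1 (an assumption of this sketch).
    """
    t = np.arange(1, n) / n
    R = 2.0 * C * np.minimum.outer(t, t)
    dR = R / C                    # dR_C/dC, since R_C is linear in C
    A = np.linalg.solve(R, dR)    # equals I/C here, computed generically
    return 0.5 * np.trace(A @ A)

C, n = 3.0, 200
# Cramer-Rao bound 1/I_C matches the closed form 2*C**2/(n - 1)
```

Since R_C is linear in C, one gets R_C^{−1} ∂R_C/∂C = I/C and hence I_C = (n − 1)/(2C²), in agreement with the lemma.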

Numerical results
In this section, we first study to what extent the asymptotic results of Proposition 3.1 and Theorem 3.7 are representative of the finite sample behavior of quadratic a-variations estimators. Then we study the asymptotic variances of these estimators provided by Proposition 3.1, as well as those of the aggregated a-variations estimators of Section 4.1.

Simulation study of the convergence to the asymptotic distribution
We carry out a Monte Carlo study of the quadratic a-variations estimators in three different cases. In each case, we simulate N = 10,000 realizations of a Gaussian process on [0, 1] with zero mean function and stationary autocovariance function k. In the case D = 0, we let k(h) = exp(−C|h|); hence (H_1) holds with D = 0 and s = 1. In the case D = 1, we use the Matérn 3/2 autocovariance [48], k(h) = (1 + √3|h|/θ) exp(−√3|h|/θ). One can show, by expanding k into a power series, that (H_1) holds with D = 1, s = 1 and C = 6√3/θ³. Finally, in the case D = 2, we use the Matérn 5/2 autocovariance function, k(h) = (1 + √5|h|/θ + 5h²/(3θ²)) exp(−√5|h|/θ). Here (H_1) holds with D = 2, s = 1 and C = 200√5/(3θ⁵).
In each of the three cases, we set C = 3. For n = 50, n = 100 and n = 200, we observe each generated process at n equispaced observation points on [0, 1] and compute the quadratic a-variations estimator C a,n of Section 3.3. When D = i, i = 0, 1, 2, we choose a to be the elementary sequence of order i + 1.
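For the case D = 0, this experiment can be reproduced along the following lines. The normalization below uses E(V_{a,n}) ≈ 2C(n − 1)/n for the elementary sequence of order 1 with s = 1; it is a sketch of the moment method, not the exact formula (3.10), and a smaller number of replicates than in the paper is used:

```python
import numpy as np

rng = np.random.default_rng(0)
C, n, N = 3.0, 200, 500          # N = 500 replicates here (10,000 in the paper)
t = np.arange(1, n + 1) / n

# Exponential autocovariance k(h) = exp(-C|h|): case D = 0, s = 1
K = np.exp(-C * np.abs(t[:, None] - t[None, :]))
Lchol = np.linalg.cholesky(K + 1e-10 * np.eye(n))

est = np.empty(N)
for r in range(N):
    x = Lchol @ rng.standard_normal(n)
    v_an = np.sum(np.diff(x) ** 2)     # elementary sequence (-1, 1)
    # moment method: E(v_an) ~ (n - 1) * 2 * V(1/n) ~ 2 * C * (n - 1) / n
    est[r] = v_an * n / (2.0 * (n - 1))

# est.mean() is close to C = 3, with a small downward finite-sample bias
# coming from the remainder of V(h) = 1 - exp(-C|h|) around C|h|
```

Histograms of `est` for n = 50, 100, 200 reproduce the qualitative behavior discussed below.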
In Figure 2, we display the histograms of the 10,000 estimated values of C for the nine configurations of D and n. We also display the corresponding asymptotic Gaussian probability density functions provided by Proposition 3.1 and Theorem 3.7. The differences between the histograms and the limiting probability density functions are small in all three cases D = 0, 1, 2. In these three cases, the limiting Gaussian distribution is already a reasonable approximation for n = 50; the approximation improves for n = 100 and becomes very accurate for n = 200. Naturally, the estimators' variances decrease as n increases. Finally, the figures suggest that the discrepancies between the finite sample and asymptotic distributions are slightly more pronounced for the means than for the variances. As already pointed out, these discrepancies are mild in all configurations.

Analysis of the asymptotic distributions
We now consider the normalized asymptotic variance of C_{a,n} obtained from (3.5) in Proposition 3.1. We consider the infill situation (δ_n = 1/n, α = 1) and we let v_{a,s} = 2 Σ_{i∈Z} R²(i, 1, 2D, |·|^s, a^{2*}), so that (n^{1/2}/C)(C_{a,n} − C) converges to a N(0, v_{a,s}) distribution as n → ∞, where v_{a,s} depends neither on C nor on n.
First, we consider the case D = 0 and plot v_{a,s} as a function of s for various sequences a in Figure 3; the considered sequences are denoted a^{(1)}, . . . , a^{(6)} below. From Figure 3, we can draw several conclusions. First, the results of Section 4.2 suggest that 2 is a plausible lower bound for v_{a,s}; we shall call the value 2 the Cramér-Rao lower bound. Indeed, we observe numerically that v_{a,s} ≥ 2 for all the s and a considered here. Moreover, for any value of s, one of the v_{a,s} is close to 2 (below 2.5). This suggests that quadratic variations can be approximately as efficient as maximum likelihood, for appropriate choices of the sequence a. We observe that, for s = 1, the elementary sequence of order 1 (a_0 = −1, a_1 = 1) satisfies v_{a,s} = 2. This is natural, since for s = 1 this quadratic a-variations estimator coincides with the maximum likelihood estimator when the observations stem from the standard Brownian motion. Apart from this case s = 1, we could not find other quadratic a-variations estimators reaching the Cramér-Rao lower bound 2 exactly.
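The value v_{a,s} = 2 for the elementary sequence of order 1 at s = 1 can be checked numerically. For D = 0, with main term V(h) = C|h|^s, the covariance of the a-increments at lag i is proportional to γ_a(i) = −Σ_{j,k} a_j a_k |i + j − k|^s, and v_{a,s} = 2 Σ_{i∈Z} (γ_a(i)/γ_a(0))². A sketch under these assumptions, with the series truncated:

```python
import numpy as np

def v_as(a, s, trunc=2000):
    """Normalized asymptotic variance v_{a,s} = 2 * sum_i rho_a(i)^2 (D = 0)."""
    a = np.asarray(a, dtype=float)
    L = len(a) - 1
    jj = np.arange(L + 1)

    def gamma(i):
        # Covariance of a-increments at lag i under the main term V(h) = |h|^s
        return -np.sum(np.outer(a, a)
                       * np.abs(i + jj[:, None] - jj[None, :]) ** s)

    g0 = gamma(0)
    return 2.0 * sum((gamma(i) / g0) ** 2 for i in range(-trunc, trunc + 1))

# Elementary sequence of order 1 at s = 1: gamma_a(i) = 0 for |i| >= 1,
# so v = 2, the Cramer-Rao bound; other values of s give v > 2
```

For a = (−1, 1) and s = 1 one gets γ_a(i) = −(2|i| − |i − 1| − |i + 1|) = 0 for |i| ≥ 1, hence v = 2 exactly.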
Second, we observe that the normalized asymptotic variance v_{a,s} blows up for the two sequences a with M = 1 when s approaches 1.5. This follows from Remark 3.5: the variance of the quadratic a-variations estimators with M = 1 is of order larger than 1/n when s ≥ 1.5. Consequently, we plot v_{a,s} for 0.1 ≤ s ≤ 1.4 for these two sequences. For the other sequences, which satisfy M ≥ 2, we plot v_{a,s} for 0.1 ≤ s ≤ 1.9.
Third, it is difficult to extract clear conclusions about the choice of the sequence. For s smaller than, say, 1.2, the two sequences of order M = 1 have the smallest asymptotic variance. Similarly, the elementary sequence of order 2 has a smaller normalized variance than that of order 3 for all values of s, and the Daubechies sequence of order 2 has a smaller normalized variance than that of order 3 for all values of s. Hence, a conclusion of the study in Figure 3 is the following: when a sequence of a certain order yields an estimator reaching the rate 1/n for the variance, there is usually no benefit in using a sequence of larger order. Finally, the Daubechies sequences appear to yield smaller asymptotic variances than the elementary sequences of equal order. The sequence of order 1 given by (a_0, a_1, a_2) = (−1, −2, 3) can yield a smaller or larger asymptotic variance than the elementary sequence of order 1, depending on the value of s. For two sequences of the same order M, it nevertheless seems challenging to explain why one of the two provides a smaller asymptotic variance.

We now consider aggregated estimators, as presented in Section 4.1. A clear motivation for considering aggregation is that, in Figure 3, the smallest asymptotic variance v_{a,s} corresponds to different sequences a, depending on the value of s.
In Figure 4 (left), we consider the case D = 0 and use four sequences: a^{(1)}, a^{(5)}, a^{(2)} and a^{(6)}. We plot their corresponding asymptotic variances v_{a^{(i)},s} as functions of s, for 0.1 ≤ s ≤ 1.4, as well as the variance of their aggregation. This aggregated variance is computed from the covariance matrix given by (4.1) and the weights given by (4.2). Notice that the aggregated estimator, whose asymptotic variance is plotted in Figure 4, depends only on known quantities and thus can readily be computed in practice.
It is then clear that aggregation drastically improves upon each of the four original estimators. The asymptotic variance of the aggregated estimator is very close to the Cramér-Rao lower bound 2 for all values of s. In Figure 4 (right), we perform the same analysis but with sequences of order larger than 1. The four considered sequences are now a^{(6)}, a^{(2)}, a^{(3)} and a^{(4)}, and s varies from 0.1 to 1.9. Again, the aggregation is clearly the best.
Finally, Figures 5 and 6 explore the case D = 1; the conclusions are similar.

Finite sample study of the aggregation of estimators
By definition, the asymptotic variances of aggregated estimators are smaller than the asymptotic variances obtained from their individual sequences. Numerically, in Section 5.2, we have seen that the benefit of aggregation in terms of asymptotic variance can be substantial.
In this section, we address the finite sample situation and quantify the benefit of the aggregation procedure. We study the infill situation (δ_n = 1/n) with C = 3 and sample sizes n = 50, 100, 200, 400. First, we consider the case D = 0 and the generalized exponential covariance function k(h) = exp(−C|h|^s) with s = 1.3, and we make use of the elementary sequences of order 1, (−1, 1), and of order 2, (1, −2, 1). As in Section 5.1, we simulate 10,000 observation vectors and compute 10,000 estimates of C from each of the two estimators C_{a,n} corresponding to the two sequences. We also compute 10,000 estimates of C from the aggregation of these two estimators, with the weights given by (4.2), which, we recall, depend only on known quantities. Then we compute the three empirical mean square errors, for estimating C, of the three estimators: the two individual sequences and the aggregation. Second, we compute the same empirical mean square errors, but this time in the case D = 1.

The empirical mean square errors are displayed in Table 1. The mean square errors obtained from the aggregation procedure are smaller than those obtained for the two individual sequences, for D = 0 and D = 1 and for all sample sizes from n = 50 to n = 400. Hence, although the aggregation procedure is based on an asymptotic approximation of the variances and covariances, it is beneficial in finite samples, already for the sample size n = 50.

Real data examples
In this section, we consider real data of spatially distributed processes in dimension two. In this setting, we extend the estimation procedure based on quadratic a-variations, which is then compared to the MLE procedure. The first method is maximum likelihood estimation in a Kriging model, obtained from the function km of the R toolbox DiceKriging [48]. For this method, the mean and autocovariance functions are assumed to be E(X(i/15, j/15)) = µ and Cov(X(i/15, j/15), X(i'/15, j'/15)) = σ² e^{−θ_1|i−i'|/15} e^{−θ_2|j−j'|/15}. (5.2) The parameters µ, σ², θ_1, θ_2 are estimated by maximum likelihood. The second method assumes the same autocovariance model (5.2) and consists of the following steps.
(1) Estimate σ² by the empirical sum of squares σ̂². (2) Estimate θ_1 by the quadratic variation estimator Ĉ_1, applied column by column, that is, with the second component of X fixed. (3) Estimate θ_2 by the quadratic variation estimator Ĉ_2, applied row by row, with the first component of X fixed.

The first method, based on maximum likelihood, provides infinite values for θ_1 and θ_2, so that it considers the 256 observed values as completely spatially independent. On the other hand, the second method provides the values θ̂_1 = 14.72 and θ̂_2 = 15.73. This corresponds to a correlation of approximately 1/e ≈ 0.36 between direct neighbors on the grid. Hence, the second method, based on our suggested quadratic variation estimator, is able to detect a weak correlation, unlike the maximum likelihood estimator. Actually, graphical inspection of the data suggests a small spatial dependence, in accordance with the biological nature of the data.
We remark that, in general, the second method, based on the quadratic variation estimator, cannot provide infinite values of θ̂_1 and θ̂_2, because the numerator in (3.10) is always finite and the denominator in (3.10) is always strictly positive. On the other hand, the first method, based on maximum likelihood, may provide infinite values of θ̂_1 and θ̂_2, as is the case here. For instance, the likelihood can be maximized in the limit θ_1 = +∞, or, even if this is not the case, the iterative procedure for optimizing the likelihood can produce a sequence of values of θ_1 diverging to infinity.
Remark 5.1. In (5.2), we choose a separable model because it is classical (it is used, for instance, in the R toolbox DiceKriging [48]). This separable model is characterized by having the basis vectors (1, 0) and (0, 1) as the two important directions. More precisely, in (5.2), the correlation function of i ↦ X(i/15, j/15) is characterized by θ_1 only, and the correlation function of j ↦ X(i/15, j/15) is characterized by θ_2 only. It is then natural for the variation estimator to proceed separately: first an analysis with the second component of X fixed (column by column) in order to estimate θ_1, and second an analysis with the first component of X fixed (row by row) in order to estimate θ_2.
In addition, consider that (5.2) holds with 15 replaced by N − 1, with N → ∞. Then the consistency of the quadratic variation estimators Ĉ_1 and Ĉ_2 established in Lemma 5.2 below suggests that our row-by-row and column-by-column analysis is appropriate for the model (5.2).
Note that the row-by-row and column-by-column analysis is also appropriate for other models in which the directions (1, 0) and (0, 1) play a special role; for instance, one could consider the geometric anisotropic counterpart of (5.2).
To conclude, our row-by-row and column-by-column estimation method is perfectly coherent with the separable model (5.2), which is widely used in Kriging applications. Of course, this method does not permit the estimation of anisotropy in other directions, as in [46], or of topothesy in the sense of [45], which correspond to more complex models including variations of the Hölder exponent.
Using the results of Section 3.5, we can straightforwardly prove the consistency of the quadratic variation estimators Ĉ_1 and Ĉ_2 given by steps (2) and (3) of the second method.
Lemma 5.2. Assume that a centered Gaussian process X is observed at {(i/(N − 1), j/(N − 1)); i, j = 0, . . . , N − 1}, with a two-dimensional autocovariance function k having the separable expression k((s_1, s_2), (t_1, t_2)) = k_1(s_1 − t_1) k_2(s_2 − t_2), where, for i = 1, 2, the variogram associated to k_i lies in the class C_{D_i,s_i,C_i,U_i,µ_i,B_i,β_i} introduced in Theorem 3.10, with the same setting and assumptions as in that theorem. Then the quadratic variation estimators Ĉ_1 and Ĉ_2, described in steps (2) and (3) above, are consistent; in other words, Ĉ_i converges in probability to C_i, for i = 1, 2.

A large size data set
The second data set consists of a two-dimensional field of deformation amplitude, corresponding to the registration of two real images. The deformation field is obtained from the software presented in [47]. Figure 7 displays the two images to be registered and the deformation field.
With these data, we consider the same autocovariance model as in Section 5.4.1 and estimate the parameters θ_1 and θ_2 with the same two methods as in Section 5.4.1. The first method provides θ̂_1 = 0.6547 and θ̂_2 = 0.8770 and takes about 22 minutes on a personal computer. The second method provides θ̂_1 = 0.107 and θ̂_2 = 0.607 and takes about 0.05 seconds on a personal computer. Hence, our suggested quadratic variation estimator provides a very significant computational benefit.
Both estimators conclude that the spatial correlation is more important along the x-axis than along the y-axis, which is graphically confirmed in Figure 7. As in Section 5.4.1, the maximum likelihood estimator provides less correlation than the quadratic variation estimator.
Finally, if the deformation field is considered with no preliminary subsampling, its size is 400 × 600. In this case, the MLE cannot be directly implemented, while the quadratic variation estimator can be.

Conclusion
We have provided an in-depth analysis of the estimation of the scale parameter of a one-dimensional Gaussian process by quadratic variations. The knowledge of this scale parameter is essential when studying a Gaussian process, as it enables one to quantify its dependence structure or to test independence.
We have addressed a semi-parametric setting, in which no parametric family of variograms needs to be assumed to contain the unknown variogram. We have suggested an estimator, based on previous references, whose numerical implementation is straightforward. Our theoretical analysis follows the principles of previous references, but is significantly simpler and holds under mild and simple-to-check technical assumptions. Based on this theoretical analysis, we have been able to tackle more advanced statistical topics, such as the aggregation of estimators based on different sequences, with the aim of improving the statistical efficiency.
Appendix A. Discussion of the assumptions (H_2) and (H_3)

• The assumption (H_2) may be difficult to interpret at first sight. It is needed in the proof of Proposition 3.1, where it is shown that the variance of the quadratic a-variation V_{a,n} in (3.1) is asymptotically equivalent to nδ_n^{4D+2s} C² V_{D,s,a}, with an explicit constant V_{D,s,a} not depending on r. In order to show that r has no impact on the asymptotic approximation of the variance, we need a condition making r negligible (compared to C(−1)^D|·|^s) in some sense. This is the role of (H_2), where we use the second or third derivative of r, to which we can apply Taylor's theorem and exploit the order of the sequence a. We refer to (B.8) in the proof of Proposition 3.1 for more details. The condition β > s − 2 or β > s − 3 is exactly what we need to obtain, eventually, the rate nδ_n^{4D+2s} and the explicit constant. The condition β < −1/2, additionally assumed in the mixed situation, guarantees that a series involving r, see again (B.8), is finite. If (H_2) were not assumed, the proof of Proposition 3.1 given here would no longer hold. In addition, without (H_2), one may ask whether there exist settings in which the other assumptions hold and the variance of V_{a,n} has a larger order of magnitude than nδ_n^{4D+2s}. We leave the question of an asymptotic analysis not involving (H_2) open for future work. We remark that the condition (H_2), although technical, does not appear to be too restrictive, since it holds for all the common examples listed in Section 2.2, with no restriction on s. We finally remark that in [30], Theorem 1 (i), a bound on a derivative of r is also assumed, similarly to (H_2).
• We have discussed above that the interpretation of (H_2) is that, in the expression V^{(2D)}(h) − V^{(2D)}(0) = C(−1)^D|h|^s + r(h), the remainder term r(h) is negligible compared to the main term C(−1)^D|h|^s. In the proof of Proposition 3.1, the relevant notion of negligibility is the order of magnitude of the d-th derivative as h → 0 (where d = 2 or d = 3, depending on s). The d-th derivative of the main term is of order |h|^{s−d}, and we assume that the d-th derivative of the remainder term is of order |h|^β with β > s − d. Hence, we indeed assume that the d-th derivative of the remainder term is negligible compared to that of the main term as h → 0. Furthermore, in the infill case, where we only assume β > s − d, this assumption is minimal.
• Similarly to (H_2), the interpretation of (H_3) is that r is negligible. While (H_2) is used to study the variance of the quadratic a-variation V_{a,n}, (H_3) is used to study its expectation. More precisely, an estimator C_{a,n} of C is constructed by applying the moment method to V_{a,n}, see (3.10). The assumption (H_3) then enables us to show that E(C_{a,n}) − C = o(n^{−1/2}), which allows for a central limit theorem with rate n^{−1/2} and no asymptotic bias, see Theorem 3.7. In the infill situation, if we assume |r(h)| = o(|h|^{s+µ}) as h → 0 with 0 < µ < 1/2, which is weaker than (H_3), then we obtain E(C_{a,n}) − C = o(n^{−µ}), see Section 3.5, that is, a slower rate of convergence of the expectation than under (H_3).

Appendix B. Proofs
B.1 Proof of the results of Section 2.3

Proof of Proposition 2.4. By the stochastic Fubini theorem, g_{m,s}(t, ·) can be written as an integral involving f_s(t, ·); the positiveness of f_s(t, u) for u > 0 then implies that of g_{m,s}(t, u). As a consequence, for 0 < t_1 < .
which is independent of (B where ψ is some function. Since we have where N m+1 ∈ N, where for i = 1, . . . , N m+1 , P m+1,i is a polynomial of degree less than or equal to m + 1 and h m+1,i is some function. .
By symmetry, we obtain, for u, v ∈ N, .
Hence, from the relation We conclude using the ND property of the IFBM stated in Proposition 2.4.
The first term is non-zero by (2.3) in Proposition 2.6, and a dominated convergence argument together with (H_1) shows that the last term is o(δ_n^{2D+s}), giving (3.4).
where B n comes from the double product.
Using (B.6) with q = 2M, |·|^s in place of V^{(2D)} and δ_n = 1, and using the vanishing moments of the sequence a^{2*}, we get, for i large enough so that i and i + j always have the same sign in the sum below, a bound which is the general term of a convergent series.
(ii) We now show that the term C_n is negligible compared to A_n. This will in turn imply that B_n is negligible compared to A_n, by the Cauchy-Schwarz inequality. We have to bound the series with general term R²(i, δ_n, 2D, r, a^{2*}). For fixed i, the assumptions (2.1) on r in (H_1) suffice to build a dominated convergence argument proving that R²(i, δ_n, 2D, r, a^{2*}) = o(δ_n^{2s}), which leads to the required result. So we concentrate our attention on indices i such that |i| > 2L. We now use (H_2) and the notation d = 2 if s < 3/2 and d = 3 if s ≥ 3/2. Consider β as in (H_2) and remark that, in the mixed situation, we have s − d < β < −1/2. In the infill situation, it is assumed that for |h| < 1, |r^{(d)}(h)| ≤ (Const)|h|^β with β > s − d. Since h is restricted to [−1, 1], we may also consider, without loss of generality, that β has been chosen such that s − d < β < −1/2. Using (B.6) as in the proof of item (i), if 2D + d ≤ 2M, one gets a bound in which the condition |i| > 2L ensures that the integral is always convergent. Since β < −1/2, the series in i converges, and the contribution to C_n of the indices i with |i| > 2L is bounded by (Const)δ_n^{4D+2d+2β}, which is negligible compared to δ_n^{4D+2s} since d + β > s.
Proof of Theorem 3.4. By a diagonalization argument, V_{a,n} can be written as a weighted sum of squares Σ_i λ_i Z_i², where λ_1, . . . , λ_n are the non-zero eigenvalues of the variance-covariance matrix Σ_a of ∆_a(X) and the Z_i are independent and identically distributed standard Gaussian variables. In this situation, Lemma 2 in [30] implies that the Lindeberg condition suffices to prove the central limit theorem, and this condition amounts to max_i |λ_i| being negligible compared to the standard deviation of V_{a,n}.

Proof of Corollary 3.6. To prove the asymptotic joint normality, it is sufficient to prove the asymptotic normality of any non-zero linear combination with coefficients γ_j ∈ R, j = 1, . . . , k. We again have the representation above, where the λ_i's are now the non-zero eigenvalues of the variance-covariance matrix Σ̄ = Σ_{j=1}^k γ_j Σ_{a^{(j)},n}, and the Z_i's are as before. The Lindeberg condition has the same expression. On the one hand, as n goes to infinity, the normalized variance converges to an explicit limit (here ᵀ stands for the transpose). On the other hand, by the triangle inequality for the operator norm (which is the maximum of the |λ_i|'s), one gets max_{i=1,...,n} |λ_i| = ‖Σ̄‖_op ≤ Σ_{j=1}^k |γ_j| ‖Σ_{a^{(j)},n}‖_op.
In the proof of Theorem 3.4, we have established that ‖Σ_{a^{(j)},n}‖_op = o(n^{1/2} δ_n^{2D+s}), leading to the result.

B.4 Proof of the remaining results
Proof of Theorem 3.7. We use the definition of C_{a,n} and the following decomposition: the bias term is negligible compared to (Const) n^{1/2+αs−α(s+1/(2α))} by (H_3) and thus goes to 0 as n goes to infinity. Then Slutsky's lemma and Theorem 3.4 straightforwardly lead to the required result.
Using the triangle inequality ‖A + B‖₂² − ‖A‖₂² ≤ ‖B‖₂² + 2‖A‖₂‖B‖₂, it suffices to have ‖∆_a(f)‖₂ = o(Var(V_{a,n}(X))^{1/2}) = o(n^{1/2} δ_n^{2D+s}) to deduce the central limit theorem for the process with drift from that for the centered process. By the Taylor-Lagrange formula, one gets a bound with ξ ∈ [0, n^{1−α}]. Then ‖∆_a(f)‖₂² ≤ n (K^α_{M,n})² δ_n^{2M}, and a sufficient condition is (3.12).
With the notation of the proof of Proposition 3.1, and from the inequality (x + y)² ≤ 2x² + 2y², we obtain Var(V_{a,n}) ≤ 2A_n + 2C_n.