WAVELET ANALYSIS FOR THE SOLUTION TO THE WAVE EQUATION WITH FRACTIONAL NOISE IN TIME AND WHITE NOISE IN SPACE∗

Via Malliavin calculus, we analyze the limit behavior in distribution of the spatial wavelet variation for the solution to the stochastic linear wave equation with fractional Gaussian noise in time and white noise in space. We propose a wavelet-type estimator for the Hurst parameter of this solution and we study its asymptotic properties. Mathematics Subject Classification. 60G15, 60H05, 60G18, 60F12. Received March 16, 2020. Accepted May 3, 2021.


Introduction
In mathematical statistics, parameter estimation for stochastic (partial) differential equations constitutes a topic of wide interest (see, among many others, the monographs or surveys [8,14] or [20]). In recent decades, statistical inference for stochastic models driven by fractional Brownian motion and related processes has also become a popular topic, due to the development of stochastic calculus for fractional processes (see, again among many others, [13,21,25]). A common characteristic of the above-mentioned references is that they analyze estimators for the drift parameter or for the diffusion coefficient of standard fractional stochastic (partial) differential equations, and very few works have studied the estimation of the Hurst parameter of the driving noise (see [12,22,23]).
In our work, we consider the linear stochastic wave equation (2.1) driven by a fractional-white Gaussian noise (i.e. a Gaussian noise that behaves as a fractional Brownian motion in time and as a white noise in space), and we construct and analyze statistical estimators for the Hurst index of the solution, based on discrete observations of the solution in space and time. The stochastic partial differential equation (2.1) constitutes a model for an infinite vibrating string (in an ideal setting, with uniform mass, neglecting air resistance, etc.) perturbed by a random force which behaves as a fractional Brownian motion in time and as a Wiener process in space. For related works on the stochastic wave equation, we refer, among many others, to [4,10,24]. The value u(t, x) models the vertical displacement from the x-axis of the string at time t and position x (in a coordinate system with x on the horizontal axis and u on the vertical axis). The displacement of the string is clearly affected by the random force, and in particular by its Hurst parameter H. This influence of the Hurst parameter appears in several aspects, such as the probability distribution of the solution to (2.1) or the regularity of its sample paths. Indeed, for fixed x ∈ R, the process u is self-similar of order H + 1/2 in time, and its paths are Hölder continuous of order δ ∈ (0, H) in space; the same Hölder continuity holds with respect to the time variable (see e.g. [24]). The Hurst parameter also characterizes other properties of the solution, such as the hitting times, the Hausdorff dimension or the regularity of its local times (see e.g. [9]). Therefore, the estimation of this parameter is of interest. We propose a wavelet-type estimator defined via the decomposition of the observed process in a wavelet basis. Wavelet estimators have been intensively used to identify the Hurst parameter of the fractional Brownian motion and related processes (see e.g.
[1,5,7,11,15]). Such estimators have in general several advantages: they are robust and computationally efficient; they are based on a log-log regression of the empirical variance onto several scales, and this regression is useful for assessing the goodness-of-fit of the model; and they offer flexibility in the choice of the wavelet basis.
Let (u(t, x), t ≥ 0, x ∈ R) be the solution to the wave equation with fractional-white additive noise. Here we use a wavelet decomposition of the solution to the wave equation (2.1) with respect to its space variable, the time variable being fixed. That is, we consider a "mother wavelet" Ψ with Q vanishing moments (Q ≥ 1) and we define the wavelet coefficient

d(t, a, i) = (1/√a) ∫_R Ψ(x/a − i) u(t, x) dx,

with t > 0 fixed and scale a > 0. The wavelet variation, denoted V_N(t, a) in the sequel, is defined by (2.10) by taking the sum of the centered and renormalized squared wavelet coefficients. By analyzing the asymptotic behavior of the wavelet variation V_N(t, a) as N → ∞, we are able to construct, via a log-log regression of the empirical variance onto several scales, an estimator for the Hurst parameter of the solution to (2.1) and to analyze its asymptotic behavior. The asymptotic behavior of the estimator is strongly connected to that of the wavelet variation V_N(t, a). The time t also plays a role. For practical purposes, it would be convenient to estimate H by assuming that the solution is observed at a fixed time and at discrete points in space. On the other hand, as we will notice later, in the case of fixed time the empirical variance does not behave as a power function whose exponent is a linear function of H, and the log-log regression argument cannot be applied. The relation between the wavelet variance and the Hurst index is more complex, and we construct our estimator by analyzing this connection.
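To fix ideas, here is a minimal numerical sketch of the two objects just introduced, with the Haar wavelet standing in for Ψ and a Riemann sum in place of the integral; the helper names (`wavelet_coefficient`, `wavelet_variation`) are ours, not the paper's notation, and `sigma2` plays the role of the theoretical variance E d(t, a, i)².

```python
import math

def haar(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere (Q = 1)
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

def wavelet_coefficient(u, a, i):
    # Riemann-sum stand-in for d(t, a, i) = a^{-1/2} int Psi(x/a - i) u(t, x) dx,
    # with the sample path u given on the grid x = 0, 1, ..., len(u) - 1
    return sum(haar(x / a - i) * u[x] for x in range(len(u))) / math.sqrt(a)

def wavelet_variation(coeffs, sigma2):
    # V_N(t, a)-style statistic: mean of the centered, renormalized squared
    # coefficients; sigma2 plays the role of E d(t, a, i)^2
    return sum(d * d / sigma2 - 1.0 for d in coeffs) / len(coeffs)
```

Because the Haar wavelet has one vanishing moment, the coefficient of a constant sample path is exactly zero, while a linear path gives a nonzero value.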
The techniques that we use to study the limit behavior in distribution of the wavelet variation are based on Malliavin calculus and the Stein method. We employ the recent Stein-Malliavin theory (see e.g. [16]) in order to prove that this sequence satisfies a Central Limit Theorem (CLT) and to derive the rate of convergence in this limit theorem. As mentioned above, we distinguish two situations: when the time t varies with N (i.e. t = N^β with β > 0) and when the time t is fixed (in this case we restrict to the Haar wavelet). We will see that in these two situations the behavior of the wavelet variation is quite different, although it always satisfies a CLT (with a different rate of convergence). We then deduce the limit behavior of the associated Hurst parameter estimators, via a log-log regression of the empirical variance. We also point out that we use the spatial wavelet variation to estimate the Hurst parameter of the solution, although this parameter appears in the time covariance of the noise and characterizes the self-similarity of the solution in time.
We organized our paper in the following way: Section 2 contains some preliminaries on the wave equation with fractional-colored noise and on wavelets. In Section 3 we state our main theoretical results. Section 4 contains the proofs of the main results, including the correlation structure of the wavelet coefficients, the magnitude of the L²-norm of the wavelet variation and the Central Limit Theorem for this sequence, as well as the Berry-Esséen bound for this limit theorem. Section 5 is devoted to the discretization of the wavelet variation and to the construction and asymptotic study of the wavelet-type estimator for the Hurst parameter of the solution to the stochastic wave equation.

Preliminaries
Let us start by presenting some basic facts on the solution to the wave equation with additive fractional-colored noise and on wavelet analysis.

The solution to the wave equation
Let (u(t, x), t ≥ 0, x ∈ R^n) be the solution to the wave equation with fractional-white noise (2.1). Here ∆ is the Laplacian on R^n, n ≥ 1, and W^H is a real-valued centered Gaussian field over a given complete filtered probability space (Ω, F, (F_t)_{t≥0}, P), whose covariance function is

E(W^H(t, A) W^H(s, B)) = R_H(t, s) λ(A ∩ B), t, s ≥ 0, A, B ∈ B_b(R^n), (2.2)

where λ is the Lebesgue measure on R^n, B_b(R^n) is the set of the λ-bounded Borel subsets of R^n, and R_H is the covariance function of the fBm with Hurst parameter H ∈ (0, 1), given by

R_H(t, s) = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}). (2.3)

Throughout this work, we will assume H ∈ (1/2, 1). The solution of equation (2.1) is understood in the mild sense, that is, it is defined as the square-integrable process

u(t, x) = ∫_0^t ∫_{R^n} G_1(t − s, x − y) W^H(ds, dy), (2.4)

where G_1 is the fundamental solution to the wave equation and the integral in (2.4) is a Wiener integral with respect to the Gaussian process W^H. Recall that for n = 1 (we will later restrict to this situation in our work) we have, for every t ≥ 0 and x ∈ R,

G_1(t, x) = (1/2) 1_{|x| < t}. (2.5)

We refer to e.g. [10] (when H = 1/2) and to e.g. [4] (for H ∈ (1/2, 1)) for the definition and basic properties of the solution. The solution (2.4) is well-defined in dimension n = 1 for every H ∈ (1/2, 1) (see e.g. [24]), and we have an explicit formula (2.6) for its spatial covariance, which will be a key ingredient in our study (see [12]), with c_H = (4H − 1)/(4(2H + 1)). When t > 1 and |x − y| ≤ 1, this expression reduces to (2.7). We notice that the solution is stationary in space, while it has a scaling property in time (it is actually self-similar in time of order H + 1/2). The sample paths of the solution are Hölder continuous of order δ ∈ (0, H) both in time and in space (see e.g. [24]).
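For concreteness, the two explicit ingredients above, the fBm covariance R_H and the one-dimensional fundamental solution G_1, can be coded directly; this is only a sketch of these standard formulas, not of the solution itself.

```python
def R_H(t, s, H):
    # Covariance of fractional Brownian motion with Hurst index H, cf. (2.3):
    # R_H(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def G1(t, x):
    # Fundamental solution of the 1-d wave equation, cf. (2.5): (1/2) 1_{|x| < t}
    return 0.5 if abs(x) < t else 0.0
```

In particular R_H(t, t, H) = t^{2H}, the variance of the fBm at time t.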

Wavelets
Let Ψ be a continuous function with support in [0, 1] whose first Q moments vanish, i.e. there exists an integer Q ≥ 1 such that

∫_R x^q Ψ(x) dx = 0 for q = 0, 1, . . . , Q − 1. (2.8)

The function Ψ is usually called the mother wavelet. Define, for a > 0 and i = 1, . . . , N_a, the wavelet coefficient

d(t, a, i) = (1/√a) ∫_R Ψ(x/a − i) u(t, x) dx. (2.9)

Also define the wavelet variation in space of the solution (2.4) by

V_N(t, a) = (1/N_a) Σ_{i=1}^{N_a} ( d(t, a, i)² / E d(t, a, i)² − 1 ). (2.10)

We will study the asymptotic behavior, as N_a → ∞, of the wavelet variation V_N(t, a). In applications, the parameter a, called the scale, will depend on N, and it is usually assumed that a = a_N → ∞ as N → ∞.
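The vanishing-moment condition (2.8) can be checked numerically for a candidate mother wavelet. The sketch below uses the Haar wavelet, for which only the zeroth moment vanishes (so Q = 1); `moment` is a hypothetical helper approximating the moment integral by a midpoint rule.

```python
def haar(x):
    # Haar mother wavelet on [0, 1]
    return 1.0 if 0.0 <= x < 0.5 else (-1.0 if 0.5 <= x < 1.0 else 0.0)

def moment(psi, q, n=20000):
    # Midpoint-rule approximation of int_0^1 x^q psi(x) dx (supp psi in [0, 1])
    h = 1.0 / n
    return sum(((k + 0.5) * h) ** q * psi((k + 0.5) * h) for k in range(n)) * h
```

For the Haar wavelet the zeroth moment is 0 while the first moment is −1/4, confirming Q = 1.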
Given the covariance of the solution to the wave equation (see formula (2.6)), it is clear that the time t will play an important role, depending on its position with respect to the spatial increment |x − y|.
We will consider two situations: the fixed time case, i.e. the time t > 0 is fixed, and the moving time case, when the time depends on N and tends to infinity as N → ∞. The first situation would be more convenient for applications to parameter estimation, since it means that the solution is observed only at a fixed time. Nevertheless, in this case the wavelet variation does not provide an explicit estimator, since the usual log-log regression procedure for constructing a wavelet estimator based on V_N(t, a) leads to a more complicated equation in H. A slightly different argument is then used for fixed time.
We will start with the moving time situation. We will assume

a = a_N = N^α with 0 < α < 1 and t = t_N = N^β with β ≥ 1. (2.11)

The choice of such a time t will be explained later; it allows us to simplify the expression of the correlation of the wavelet coefficients. Then we will consider the situation when the time is fixed, i.e. we suppose

a = a_N = N^α with 0 < α < 1 and t > 0 is fixed. (2.12)

In this second case, in order to have a precise estimate of the wavelet coefficient and of the empirical variance EV_N(t, a), we need to restrict to a particular wavelet system (the Haar wavelet).

Main results
In this section we will state our main theoretical results. Their proofs are postponed to Section 4. These results give the asymptotic behavior as N → ∞ of the wavelet variation V N (t, a) given by (2.10) as well as the limit behavior in distribution of the renormalized wavelet variation. We will show that, in both moving time and fixed time cases, the magnitude of the variance of V N (t, a) as N → ∞ is the same and the renormalized wavelet variation satisfies a Central Limit Theorem. We also evaluate the rate of convergence to the normal distribution, which varies in the two cases under consideration.

The moving time case
Let us start by treating the situation when the time t depends on N , i.e. we assume (2.11). In this case, we obtain the following renormalization of the wavelet variation.
Proposition 3.1. Let V_N(t, a) be given by (2.10). Assume Q ≥ 2, or Q = 1 and H < 3/4. Let a_N, t_N be given by (2.11). Then (3.1) holds, with g_H given by (3.2) and K_{Ψ,H} given by (3.3) for H ∈ (1/2, 3/4). Notice that the integral (3.3) is finite because the support of the mother wavelet Ψ is included in the interval [0, 1] and 2H > 0. We assume, as in [5], that K_{Ψ,H} > 0 (which is satisfied by a large class of mother wavelets Ψ). The results in Section 4 also show that the series on the right-hand side of (3.1) is convergent.
Let us denote, for every N ≥ 1, the sequence F_N by (3.4), with V_N(t_N, a_N) defined in (2.10) and K_{0,Ψ,H} from (3.1), and suppose that assumption (2.11) is verified. From Proposition 3.1 we will obtain the following result. We denote below by c, C generic strictly positive constants that may change from line to line. By d we denote a distance between the distributions of random variables; below it can be any of the following distances: Kolmogorov, total variation, Wasserstein or Fortet-Mourier (see [16]).
Theorem 3.2. Let F_N be given by (3.4). Then the sequence (F_N)_{N≥1} converges in distribution to a standard normal random variable Z ∼ N(0, 1), and the stated Berry-Esséen bound holds. We can also prove a multidimensional central limit theorem for the wavelet variation considered at different scales. This will be used in order to estimate the Hurst parameter of the solution to the wave equation in the next section.
with C(L_1, L_2, H) given by (3.6). It follows from our proofs in Section 4 that the limit on the right-hand side of (3.6) exists and is finite.

The fixed time case
If t is fixed, we can prove the following approximation result for the variance of the wavelet variation. As mentioned, the role of the mother wavelet will be played by the Haar wavelet, i.e.

Ψ = 1_{[0,1/2)} − 1_{[1/2,1)}. (3.7)
Proposition 3.4. If V_N(t, a) is given by (2.10), and (2.12) and (3.7) hold, then (3.8) is true for every t > 0. By Proposition 3.4, we have the following renormalization of the wavelet variation, i.e. the sequence (G_N)_{N≥1} defined by (3.9) satisfies EG_N² → 1 as N → ∞. We will show below that the renormalized wavelet variation satisfies a CLT also when the time is fixed.
Theorem 3.5. The sequence (G_N)_{N≥1} given by (3.9) converges in distribution to Z ∼ N(0, 1) and, for N large enough, the stated bound holds. Let us make a short discussion of the above statements.
• We notice that the renormalization of (2.10) is of the same order in both cases (fixed time or moving time) although, as we will see in Section 4, the correlation structure of the wavelet coefficient is different.
• The wavelet variation (2.10) satisfies a CLT in both the moving and fixed time cases. On the other hand, the behavior of this sequence is quite different in these two cases. While for fixed time this sequence basically behaves as a sum of independent random variables (see also Remark 3.6), in the moving time case there is a non-trivial correlation between all the summands that compose V_N(t, a).
• The rate of convergence of the sequence (3.9) to the normal distribution varies with α ∈ (0, 1); this suggests that if the scale a is constant (i.e. α = 0) the sequence V_N(t, a) does not satisfy a CLT.

Proofs
This part contains the proofs of the theoretical results stated in Section 3.

The correlation structure of the wavelet coefficient
The behavior of the wavelet variation (2.10) will depend on the behavior of the variance of the wavelet coefficient, E d(t, a, i)², and of the correlation between the wavelet coefficients, i.e. E d(t, a, i) d(t, a, j) with i ≠ j. We will start by analyzing the behavior of these quantities in both cases (2.11) and (2.12).
Let d(t, a, i) be given by (2.9) with t > 0, a > 0 and i = 1, . . . , N_a. We will use the notation (4.1) throughout our work, for every t > 0, a > 0 and i = 1, . . . , N_a. Notice that, due to the stationarity of the process (u(t, x), x ∈ R), the quantity E d(t, a, i)² does not depend on i. Let t > 0, a > 0. For every i, j = 1, . . . , N_a we have, from the covariance formula (2.6), the expression (4.3). We will see below that this expression simplifies under assumption (2.11).

The moving time case
First, we assume that we work under the assumption (2.11). We start by studying the variance of the wavelet coefficient. Let us recall the notation K Ψ,H from (3.3).
Let us now study the correlation (4.3) with i ≠ j. We can write (4.5), with the notation g_H(k) from (3.2). Notice that g_H(k) = g_H(−k) for every k ∈ Z. The analysis of the quantity g_H(k) for k large will give the asymptotics of the correlation (4.5). Recall that the integer Q ≥ 1 is fixed by (2.8).

Lemma 4.2.
Let g_H be given by (3.2). Then for k large enough we have, for every H ∈ (1/2, 3/4), the stated bound, where C_{Ψ,H,Q} is a strictly positive constant not depending on k.
Proof. Using the following asymptotic expansion at z = 0, where θ_z is a point located between 0 and z, we can write, for k large enough, the stated estimate, where C_{H,Q} is a constant depending only on H and Q, where we used (2.8) and we denoted by θ_{x,y,k} a point located between 0 and (x − y)/k. Since |x − y| ≤ 1, we have the corresponding bound for k ≥ 2. We deduce the conclusion for k large, using the fact that the support of Ψ is included in the interval [0, 1].
Lemma 4.3. Let g_H be given by (3.2). Denote, for a > 0 and N ≥ 1, the quantity g_{N,H} by (4.6). Moreover, for every H ∈ (1/2, 1) and for every Q ≥ 1, the estimates (4.8) and (4.9) hold for N large enough. Proof. We can write the decomposition below. By the dominated convergence theorem and Lemma 4.2 we clearly have the stated limit. Note that the series Σ_{k∈Z} g_H(k)² is convergent due to Lemma 4.2. Now, again by Lemma 4.2, the series Σ_{k∈Z} k^{4H+2−4Q} is convergent when Q ≥ 2, while for Q = 1 and H > 1/2 the partial sum Σ_{|k|≤N_a} k^{4H+2−4Q} behaves as C_{H,Q} N_a^{4H−1}. This implies the estimate (4.8). A similar argument gives (4.9), by the bound in Lemma 4.2.

The fixed time case
Let us assume t > 0 is fixed, i.e. we assume (2.12). As before, we use the notation D(t, a_N) for the variance of the wavelet coefficient. We start by estimating the behavior of D(t, a_N) as N → ∞. It is not possible to obtain the exact behavior of this quantity for an arbitrary function Ψ. Therefore, in the sequel we choose the function Ψ to be the mother wavelet of the Haar system, see (3.7).
Proposition 4.4. Let Ψ be given by (3.7) and assume (2.12). For every t > 0 and for N large enough, the estimates (4.11) and (4.12) hold, where we used the notation (4.13)-(4.15). To obtain the speed of convergence of I_{1,t,N} and I_{2,t,N}, we need to study the sequence A_{H,N} defined by (4.14).
Clearly, A_{H,N} converges to zero as N → ∞, but we need to analyze how fast this sequence goes to zero. Let us choose N large enough such that tN^{−α} < 1/2. We will have, with Ψ from (3.7), the expression below, and by separating the dy-integral in the last term according to whether x − tN^{−α} is smaller or larger than one-half, we will obtain

This gives (4.16). Consequently, we obtain from (4.16) the following behavior for the summand I_{1,t,N} in (4.11). The second summand I_{2,t,N} gives, using (4.16), the analogous estimate. Let us now calculate the term I_{3,t,N} defined in (4.13). We can write it out, and since (this is the same calculation as for A_{H,N}, without the factor (x − y)^{2H}) the corresponding limit holds, we obtain its behavior. Let us now consider the last summand I_{4,t,N} in (4.13), with B_{H,N} given by (4.15). We estimate separately the summands B_{1,H,N} and B_{2,H,N}. First, notice that we can choose N large enough so that tN^{−α} < 1/4 and therefore 2tN^{−α} < 1/2. We then get the bound for B_{1,H,N}, while for B_{2,H,N} we have the corresponding bound. By putting together the above computations, we obtain (4.20). From (4.17), (4.18), (4.19) and (4.20) we obtain the conclusion. In particular, we obtain the constant K_{1,t}(H), which is needed in the sequel, by using the expression of c_H in (2.6).
We also need to analyze Ed(t, a N , i)d(t, a N , j) when |i − j| = 1. Only this correlation coefficient will be needed for the renormalization of the sequence (2.10).

Renormalization of the wavelet variation
In order to analyze the asymptotic behavior of the wavelet variation (2.10), we will use the chaotic expression of V N (t, a). We will work with multiple stochastic integrals with respect to the fractional-white noise W H .
Let E denote the space of all linear combinations of indicator functions 1_{[0,t]×A} with t ≥ 0 and A ∈ B_b(R) (the bounded Borel subsets of R). Let H be the completion of E with respect to the inner product

⟨1_{[0,t]×A}, 1_{[0,s]×B}⟩_H = R_H(t, s) λ(A ∩ B).

In particular (see [2]), for H ∈ (1/2, 1) this inner product admits the standard kernel representation in terms of |u − v|^{2H−2}. Let I_q be the multiple stochastic integral of order q with respect to the isonormal process (W(ϕ), ϕ ∈ H) (see the Appendix or [3]). Then the wavelet coefficient d(t, a, i) given by (2.9) can be written as

d(t, a, i) = I_1(f_{t,a,i}),

with f_{t,a,i} given by (4.29). Then, by the product formula for multiple stochastic integrals (A.3), we have, for every t > 0, a > 0 and N ≥ 1, the chaos decomposition of V_N(t, a) in terms of the second-chaos elements I_2(f_{t,a,i} ⊗ f_{t,a,i}). Let us compute the L²-norm of the random variable V_N(t, a) given by (2.10). By using the isometry formula for multiple integrals (A.2), we obtain the expression (4.31). Again, we study the behavior of (4.31) as N → ∞, when t varies with N and when t is fixed.

The moving time case: Proof of Proposition 3.1

Assume (2.11) and let us prove the limit theorem (3.1). The formula (4.31) becomes the expression below, with g_H given by (3.2). Thus, with g_{N,H} defined by (4.6), the stated asymptotics follow. We will use the notation f_N ∼ g_N, which in our work means that the sequences f_N and g_N have the same limit as N → ∞.

The fixed time case: Proof of Proposition 3.4
If t is fixed, we can prove the approximation result (3.8). We have the expression for E d(t, a, i)² E d(t, a, j)² with f_{H,N} given by (4.21). Notice that f_{H,N}(k) = f_{H,N}(−k) and that f_{H,N}(k) = 0 if |k| ≥ 2, by choosing N large enough; this can be seen via (4.21), since the function Ψ has support included in [0, 1]. Therefore the sum reduces to two terms. We have f_{H,N}(0) = D(t, a_N), and f_{H,N}(1) was computed before. Using (4.28), (4.36) can be written as below, with L_t(H) given by (4.28). Then the conclusion follows. Remark 4.6. As already noticed in Remark 3.6, the renormalization of (2.10) is of the same order in both cases (fixed time or moving time), although the correlation structure of the wavelet coefficients is different. On the other hand, in the fixed time case, the diagonal term of EV_N(t, a_N)² dominates the behavior of this quantity as N → ∞ (there is only one non-diagonal term, which does not contribute to the limit), while when t increases with N, all the diagonal and non-diagonal terms contribute to the limit.

Central limit theorem and rate of convergence
We will show that, in both the moving time and fixed time cases, the renormalized wavelet variation satisfies a central limit theorem if Q ≥ 2, or Q = 1 and H < 3/4. Our main tool is the following result (see Thm. 5.2.6 and Cor. 5.2.10 in [16]). Recall that by d we denote a distance between the distributions of random variables; below it can be any of the following distances: Kolmogorov, total variation, Wasserstein or Fortet-Mourier (see [16]).

The moving time: Proof of Theorems 3.2 and 3.3
Consider the sequence (F_N)_{N≥1} given by (3.4) and recall the convergence of its variance from Proposition 3.1. Also, by (4.30) we have the following chaos expansion of F_N for every N ≥ 1, with f_{t,a,i} given by (4.29), and, if H is the Hilbert space associated with the fractional-white Gaussian noise (see the beginning of Sect. 4.3), the quantities I_2(f_{t,a,i} ⊗ f_{t,a,j}) and ⟨f_{t,a,i}, f_{t,a,j}⟩_H appear in the bounds (4.39) and (4.40). Let us first estimate T_{2,N}. We can write, as in (4.32), the corresponding decomposition. First, we analyze the term T_{2,1,N}. We have its expression, and since (4.4) holds, we obtain the bound below. The first term in the above expression vanishes, so it remains to estimate the rest. We have the stated bound for the first summand. To obtain a bound for the second term in the expression of T_{2,1,N}, we write

Σ_{k∈Z} g_H(k)² = Σ_{k∈Z} g_H(k)² 1_{|k|≤N_{a_N}} + Σ_{k∈Z} g_H(k)² 1_{|k|>N_{a_N}},

and using the fact that |g_H(k)| is bounded by |k|^{4H−4Q} we get the corresponding estimate. Therefore the bound for T_{2,1,N} follows. For T_{2,2,N} we have, by (4.8) and Lemma 4.1, the stated bound. Regarding T_{2,3,N}, we use (4.8) and Lemma 4.1 to get the analogous bound. Combining (4.41), (4.42) and (4.43), we have the following bound for (4.40). Concerning T_{1,N}, by the isometry of multiple integrals we obtain its expression. Recall that for all integers p, q we have (see relation (4.5)) ⟨f_{t,a,p}, f_{t,a,q}⟩_H = (c_H/2) a^{2H+2} g_H(p − q). Hence, T_{1,N} decomposes as below. Above we denoted by T_{1,1,N} a product that depends only on functions g_H; T_{1,2,N} is a product that depends on one function g_H (see (3.2)) and three functions g_{H+1/2}; T_{1,3,N} is the product of two functions g_H and two functions g_{H+1/2}; likewise, T_{1,4,N} contains three g_H and one g_{H+1/2}, while T_{1,5,N} contains four functions g_{H+1/2}. To study each term we will use the same technique as in the proof of Theorem 7.3.1 in [16] or Lemma 3 in [12]. Let g_{H,N}(k) := |g_H(k)| 1_{|k|≤N_a} (this is not the same as g_{N,H} in (4.6)) and g_{H+1/2,N}(k) := |g_{H+1/2}(k)| 1_{|k|≤N_a}. For two sequences u, v : Z → R, we define their convolution by

(u ∗ v)(n) = Σ_{k∈Z} u(k) v(n − k), n ∈ Z.

We will need Young's inequality, which can be written, for s, p, q ≥ 1 such that 1 + 1/s = 1/p + 1/q, as

‖u ∗ v‖_{ℓ^s(Z)} ≤ ‖u‖_{ℓ^p(Z)} ‖v‖_{ℓ^q(Z)}.

Now we can estimate each term T_{1,i,N}, for i = 1, . . . , 5.
First, for T_{1,1,N}, the last inequality in its estimate follows from Young's inequality. The series Σ_{k∈Z} |g_H(k)|^{4/3} converges because |g_H(k)| is bounded, for k large, by C|k|^{4H−4Q}; see Lemma 4.3. Next, we apply the same idea as above for the remaining terms (if Q = 1, H > 5/16).
Using the previous case, and arguing similarly, we obtain the bound (4.46).
It remains to study T_{1,5,N}, which gives the analogous estimate. Combining the estimates above, we see that T_{1,1,N} is the dominant term, and finally we get a simple estimate for T_{1,N} in (4.47). By (4.42) and (4.47), we obtain the conclusion.
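The proof above relies on Young's convolution inequality for sequences, ‖u ∗ v‖_s ≤ ‖u‖_p ‖v‖_q when 1 + 1/s = 1/p + 1/q. A small numerical sanity check of this inequality, with the exponents p = q = 4/3, s = 2 that correspond to the ℓ^{4/3} bounds on g_H (the helper names are ours):

```python
def lp_norm(u, p):
    # l^p norm of a finitely supported real sequence
    return sum(abs(x) ** p for x in u) ** (1.0 / p)

def convolve(u, v):
    # Discrete convolution (u * v)(n) = sum_k u(k) v(n - k)
    w = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[i + j] += ui * vj
    return w
```

With p = q = 4/3 and s = 2 we have 1/p + 1/q = 3/2 = 1 + 1/2, so the hypothesis of Young's inequality is satisfied and the bound must hold for any pair of sequences.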
Proof of Theorem 3.3. By Theorem 3.2 (applied to the scale La_N, L = 1, . . . , d), we know that each component of the vector (N^{1−α} V_N(t_N, La_N))_{L=1,...,d} converges in distribution, as N → ∞, to a centered Gaussian random variable. By the main result in [19], it suffices to show that, for every L_1, L_2 = 1, . . . , d, the covariance of the corresponding components converges as N → ∞ to Γ_{L_1,L_2}. We know (see Sect. 3.2 in [7] or Prop. 2.3 in [5]) that the stated estimates hold for every i, j large and for H ∈ (1/2, 3/2), and also if H ∈ (1/2, 1).

The fixed time case: Proof of Theorem 3.5

Now consider the random sequence (G_N)_{N≥1} defined by (3.9). It satisfies EG_N² → 1 as N → ∞ and it admits the following chaos expansion. From Theorem 4.7, we first analyze T_{1,N}; for T_{2,N} we can write the analogous bound. Therefore we obtain the conclusion.

Estimation of the Hurst parameter
We will apply our theoretical results of Section 3 in order to construct an estimator for the Hurst parameter of the solution to the stochastic wave equation (2.1). The estimator will be constructed by using the wavelet variation (2.10). We will assume that the solution is observed at discrete points in space, x_i = i, i = 1, . . . , N, and at a certain time t (fixed or depending on N). Different estimators (but all of them constructed via the wavelet variation) are obtained in the two situations treated in our work (moving or fixed time). While for moving time the logarithm of the variance of the wavelet coefficient depends linearly on H (Lem. 4.2), so that a linear log-log regression gives the explicit form of the estimator, for fixed time this variance has a more complex dependence on the Hurst parameter (Prop. 4.4) and a different argument will be employed.

The moving time case
First we introduce a discrete version of the wavelet variation (2.10). Then we define an estimator in terms of the discrete wavelet variation and we prove its asymptotic properties.

Discretization of the wavelet variation
We will use an estimator constructed from the wavelet variation (2.10), or more precisely from its discretized version defined below. Notice that the wavelet coefficient d(t, a, i) is defined as a continuous integral (see (2.9)) and cannot be computed directly from discrete observations of the process u. Therefore, by approximating the integral in (2.9) by Riemann sums, we define the discrete wavelet coefficient e_N(t, a, i), for a > 0 and t > 0, by (5.1). Since Ψ has its support contained in the interval [0, 1], the above coefficient can also be expressed as a finite sum. Let us also define the discrete version of the wavelet variation, Ṽ_N(t, a), by setting (5.2), with the notation (4.1). In a first step, we will show that the sequence Ṽ_N(t_N, a_N) has the same limit behavior in distribution as V_N(t_N, a_N) when N goes to infinity. We need to assume some differentiability of the mother wavelet (several examples satisfy this assumption, among them the Daubechies wavelets or the Mexican hat wavelet, see [5] or [7]).
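The quality of the Riemann-sum approximation behind e_N can be illustrated on a toy integrand: for the Haar wavelet and the test function f(x) = x², the continuous coefficient at i = 1 has the closed form −0.75 a^{5/2}, and the relative error of the discrete coefficient decays as the scale a grows. The helper names and the test function are ours, for illustration only.

```python
import math

def haar(x):
    # Haar mother wavelet (3.7)
    return 1.0 if 0.0 <= x < 0.5 else (-1.0 if 0.5 <= x < 1.0 else 0.0)

def discrete_coeff(f, a, i):
    # e_N-style Riemann sum: a^{-1/2} sum_k Psi(k/a - i) f(k); the sum runs over
    # k in [i*a, (i+1)*a) since supp(Psi) = [0, 1]
    return sum(haar(k / a - i) * f(k) for k in range(i * a, (i + 1) * a)) / math.sqrt(a)

def rel_err(a):
    # Relative error against the closed form of the continuous coefficient for
    # f(x) = x^2 and i = 1, which equals -0.75 * a^{5/2} for the Haar wavelet
    exact = -0.75 * a ** 2.5
    return abs(discrete_coeff(lambda k: float(k) ** 2, a, 1) - exact) / abs(exact)
```

Doubling the scale repeatedly shrinks the relative error roughly like 1/a, in the spirit of the approximation estimates used in the proof of Proposition 5.1.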
Proposition 5.1. Suppose that Ψ ∈ C^m(R) with m > Hβ/α. Assume (2.11) with α ∈ (1/2, 1) and let V_N(t_N, a_N), Ṽ_N(t_N, a_N) be given by (2.10) and (5.2) respectively. Then the stated convergence holds. Proof. We start by estimating the difference between the coefficient d(t_N, a_N, i) and its discrete counterpart e_N(t_N, a_N, i), with i = 1, . . . , N_{a_N} and with t_N, a_N as in (2.11). Let us compute the L²(Ω)-norm of this difference.
We write the decomposition below. The first summand E d(t_N, a_N, i)² has already been computed in (4.3). Let us compute the other two terms. For N ≥ 1 and i = 1, . . . , N_{a_N} we have, from the covariance formula (2.6), the corresponding expression. We used the fact that |k − l| ≤ a_N = N^α < t_N = N^β under (2.11), so the last summand in (2.6) vanishes. We also have, from (2.6), (2.9) and (5.1), the analogous expression. Now we use the following bounds (we refer to [5] for their proofs; see also [7]) for N large, and, for Ψ of class C^m(R), the estimate (5.8), with C > 0 not depending on N. By using the inequalities (5.6), (5.7) and (5.8) in (5.5), we obtain (5.9). For the renormalized coefficients, we have the estimate (5.10). If m > Hβ/α, then the bound holds for all i = 1, . . . , N_{a_N}. By using the Cauchy-Schwarz inequality and proceeding as in the proof of Lemma 1 in [7], we can write, with C_1, C_2 > 0, the corresponding bound, and by (5.10) we conclude. Consequently, the conclusion is obtained since α > 1/2.
As a consequence of the above result, the discrete wavelet variation Ṽ_N(t_N, a_N) has the same limit in law as V_N(t_N, a_N). Proof. The proof follows immediately from Theorem 3.3 and Proposition 5.1.

The definition of the estimator
with K_{Ψ,H} from Lemma 4.1. We write the above relation for t_N = t_{La_N} (which means that we replace t_N = N^β by L^β N^{αβ} in (4.3)). To do this, we will assume in the sequel that αβ > 1; with this assumption, all our theoretical results (such as Thm. 3.3) can be applied. So the relation (5.13) holds, and it implies, for N ≥ 1,

log E(S_N(t_{La_N}, La_N)) = log D(t_{La_N}, La_N) = (β + 2H + 1)α log N + (β + 2H + 1) log L + log(1/4) + log(1 + ε_N),

where (ε_N)_{N≥1} is a deterministic sequence defined, for every N ≥ 1, by (5.14). Let us also introduce the discretized counterpart of S_N(t, a), i.e. (5.15), with e_N from (5.1). The above relation (5.14) suggests the following definition of the estimator for the Hurst parameter, (5.16), where T denotes the transpose, with the notation (5.17). Equivalently, we have (5.18), with X, Y from (5.17).
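The log-log regression step can be sketched as follows: if log S_N scales like (β + 2H + 1) log L plus a constant across the scales L, an ordinary least-squares slope recovers β + 2H + 1 and hence H. This is an illustrative reading of the construction, with hypothetical helper names; it is not the paper's exact matrix formulation (5.16)-(5.18).

```python
import math

def ols_slope(xs, ys):
    # Ordinary least-squares slope of the regression of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def hurst_from_scales(s_values, scales, beta):
    # If log S_N behaves like (beta + 2H + 1) log L + const across scales L,
    # the slope of log S_N on log L recovers beta + 2H + 1, so
    # H = (slope - beta - 1) / 2
    slope = ols_slope([math.log(L) for L in scales],
                      [math.log(s) for s in s_values])
    return (slope - beta - 1.0) / 2.0
```

On synthetic values following an exact power law in L, the regression recovers H exactly, up to floating-point error.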
Notice that the estimator (5.18) is expressed in terms of the sequence (5.15), which depends on the discrete wavelet coefficients e_N. Therefore, the estimator can be computed from the data, that is, from the observations u(t, k), k = 1, 2, . . . , N, with t = t_{a_N} = N^{αβ} and αβ > 1. So, if we have at our disposal N observations in space, one needs to be able to observe them at time N^{αβ}. Recall that in practice our wave equation describes the vertical displacement of a vibrating string under a random force. This means that the observation time of the vibrating string should be sufficiently long, and it is related to the number of spatial observations.
Using Theorem 3.3, we can deduce the limit behavior of the estimator H_N, with the matrix Γ defined by (3.5). Proof. From (5.15) and (5.14), we have for N large enough, with ε_N defined in Section 5.3, the relation below. By plugging this relation into (5.18), we obtain, for N large, the corresponding expression. Note that V_N(t_{a_N}, a_N) converges to zero almost surely as N → ∞; this is a consequence of Proposition 3.1 and of a standard Borel-Cantelli argument, see e.g. [24]. Therefore, since ε_N tends to zero, we get that H_N → H almost surely as N → ∞, and by using Theorem 3.3 we obtain the convergence (5.19).

Estimation when the time is fixed
Assume now that the time t is fixed, as in (2.12). We would like to estimate the parameter H of the mild solution (2.4) based on the observation of the solution at a fixed time and at discrete points in space. The result in Proposition 4.4 shows that the variance of the wavelet coefficient is not a power function with exponent depending linearly on H; the relationship is more complex. Actually, it takes the form below, with K_{1,t}(H), K_{2,t}(H) from Proposition 4.4. Therefore the log-log regression argument employed above cannot work when the time is fixed. We propose an alternative method via the analysis of the constant K_{1,t}(H). Consider the sequence S_N given by (5.11) and assume now (2.12). By Proposition 4.4,

ES_N(t, a_N) = D(t, a_N) = K_{1,t}(H) + K_{2,t}(H) N^{−α} → K_{1,t}(H) as N → ∞, (5.21)

with K_{1,t}(H) = t^{2H+2}/(2(H + 1)), see (4.10). By approximating, as usual, ES_N(t, a_N) by S_N(t, a_N), we can say that for N large enough, S_N(t, a_N) is close to K_{1,t}(H). The derivative involves f_1(H) > 0 and f_2(H) = 1/(2(H + 1)²) for H ∈ (1/2, 1). When t → ∞, this derivative behaves as f_1(H) t^{2H+2} log t, so it is positive for a suitably large time t. Consequently, the function f_{N,t} is invertible on (1/2, 1) and the conclusion follows.
We will assume in the sequel that t is large enough to ensure the existence and uniqueness of the solution to (5.22).
Definition 5.6. We define H_N to be the unique solution of the equation (5.22).
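Since K_{1,t} is strictly increasing in H for t large, the equation defining the fixed-time estimator can be solved by any bracketing method. A minimal sketch with plain bisection, assuming the form K_{1,t}(H) = t^{2H+2}/(2(H + 1)) from (5.21); the function names are ours.

```python
def K1(t, H):
    # K_{1,t}(H) = t^{2H+2} / (2 (H + 1)), cf. (5.21); increasing in H for t large
    return t ** (2 * H + 2) / (2.0 * (H + 1.0))

def solve_H(t, s, lo=0.5, hi=1.0, iters=200):
    # Bisection for the unique H in (1/2, 1) with K_{1,t}(H) = s
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if K1(t, mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding the exact value K_{1,t}(H) back into the solver recovers H, which mirrors the consistency argument: as S_N(t, a_N) approaches K_{1,t}(H), the solution of (5.22) approaches H.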
We derive the asymptotic properties of the estimator H N .
Proposition 5.7. The estimator H_N from Definition 5.6 is strongly consistent. Moreover, it satisfies the limit behavior in distribution stated in (5.23).
We let N → ∞ above. Since V_N(t, a_N) tends to zero almost surely and D(t, a_N) converges to K_{1,t}(H) as N → ∞, we get

lim_{N→∞} K_{1,t}(H_N) = K_{1,t}(H) almost surely. (5.24)

By the proof of Lemma 5.5, we deduce that K_{1,t} is invertible on (1/2, 1) and its inverse is continuously differentiable on this interval. By applying K_{1,t}^{−1} to (5.24), we deduce that H_N → H almost surely as N → ∞. Let us show that the estimator is asymptotically normal. Indeed, given the asymptotic behavior of D(t, a_N) (see (5.21)), we can write the corresponding expansion. By using the delta method with the continuously differentiable function K_{1,t}^{−1} on (1/2, 1), we obtain (5.23).
Let us end this statistical inference part with some comments.

Appendix

The Wiener chaos of order q is the closed linear span of the random variables H_q(B(ϕ)), where ϕ ∈ H, ‖ϕ‖_H = 1, and H_q is the Hermite polynomial of degree q ≥ 1 defined by

H_q(x) = (−1)^q e^{x²/2} (d^q/dx^q) e^{−x²/2}.

The isometry of multiple integrals can be written as follows: for p, q ≥ 1, f ∈ H^{⊗p} and g ∈ H^{⊗q},

E(I_p(f) I_q(g)) = q! ⟨f̃, g̃⟩_{H^{⊗q}} if p = q, and 0 otherwise.

(A.2)
We have the following product formula: if f ∈ H^{⊙p} and g ∈ H^{⊙q}, then

I_p(f) I_q(g) = Σ_{r=0}^{p∧q} r! (p choose r) (q choose r) I_{p+q−2r}(f ⊗_r g), (A.3)

where f ⊗_r g denotes the contraction of order r = 0, 1, . . . , p ∧ q.
We denote by D the Malliavin derivative operator that acts on cylindrical random variables of the form F = g(B(ϕ_1), . . . , B(ϕ_n)), where n ≥ 1, g : R^n → R is a smooth function with compact support and ϕ_i ∈ H, in the following way:

DF = Σ_{i=1}^n (∂g/∂x_i)(B(ϕ_1), . . . , B(ϕ_n)) ϕ_i.

The operator D is closable and can be extended to the closure of the set of cylindrical random variables (denoted D^{1,2}) with respect to the norm

‖F‖²_{1,2} = E F² + E ‖DF‖²_H.

If F = I_p(f) with f ∈ H^{⊙p} and p ≥ 1, then DF = p I_{p−1}(f(·, ∗)), (A.4) where "∗" stands for p − 1 variables.
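The Hermite polynomials and the orthogonality behind the isometry (A.2) can be checked numerically: with the probabilists' convention used above, E[H_p(X) H_q(X)] = p! when p = q and 0 otherwise, for X ∼ N(0, 1). A small sketch using the three-term recursion and trapezoid quadrature against the Gaussian density (the function names are ours):

```python
import math

def hermite(q, x):
    # Probabilists' Hermite polynomials via the recursion
    # H_0 = 1, H_1 = x, H_{q+1}(x) = x H_q(x) - q H_{q-1}(x)
    if q == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, q):
        h_prev, h = h, x * h - k * h_prev
    return h

def gaussian_moment(p, q, n=4001, lim=12.0):
    # Trapezoid approximation of E[H_p(X) H_q(X)] for X ~ N(0, 1);
    # expected to be p! when p = q and 0 otherwise
    step = 2 * lim / (n - 1)
    total = 0.0
    for k in range(n):
        x = -lim + k * step
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * hermite(p, x) * hermite(q, x) * math.exp(-x * x / 2)
    return total * step / math.sqrt(2 * math.pi)
```

The recursion reproduces H_2(x) = x² − 1 and H_3(x) = x³ − 3x, and the quadrature reproduces the orthogonality relation to high accuracy because the integrand decays rapidly.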