Quasi-stationarity for one-dimensional renormalized Brownian motion

We are interested in the quasi-stationarity of the time-inhomogeneous Markov process X_t = B_t/(t+1)^κ, where (B_t)_{t≥0} is a one-dimensional Brownian motion and κ ∈ (0, ∞). We first show that the law of X_t conditioned not to exit (−1, 1) before time t converges weakly towards the Dirac measure δ_0 when κ > 1/2, as t goes to infinity. Then we show that this conditioned probability converges weakly towards the quasi-stationary distribution of an Ornstein-Uhlenbeck process when κ = 1/2. Finally, when κ < 1/2, it is shown that the conditioned probability converges towards the quasi-stationary distribution of a Brownian motion. We also prove the existence of a Q-process and of a quasi-ergodic distribution for κ = 1/2 and κ < 1/2.


Introduction
In this paper, we are interested in some notions related to quasi-stationarity for a one-dimensional Brownian motion (B_t)_{t≥0} killed when reaching the moving boundary t ↦ {−(t + 1)^κ, (t + 1)^κ}, with κ ∈ (0, ∞). Quasi-stationarity with moving boundaries was studied in [16] and [15] for periodic or converging boundaries, but expanding boundaries had not yet been considered. Instead of considering the process B absorbed at t ↦ {−(t + 1)^κ, (t + 1)^κ}, we will study the quasi-stationarity of the process X = (X_t)_{t≥0} absorbed at (−1, 1)^c and defined by

X_t := B_t/(t + 1)^κ,

where τ_X := inf{t ≥ 0 : |X_t| = 1}. The process X is a time-inhomogeneous Markov process. For any x ∈ R and s ≥ 0, denote by P_{x,s} the probability measure satisfying P_{x,s}(X_s = x) = 1, and denote by E_{x,s} the corresponding expectation. Also, for any measure µ and any s ≥ 0, one denotes P_{µ,s} := ∫ P_{x,s} µ(dx) and E_{µ,s} := ∫ E_{x,s} µ(dx).
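To fix ideas, here is a minimal Monte Carlo sketch (not from the paper; the Euler step, horizon, path count and the value of κ are illustrative choices) simulating X_t = B_t/(t + 1)^κ and keeping the paths that stay in (−1, 1):

```python
import math
import random

def simulate_conditioned_X(kappa, t_max=5.0, dt=0.01, n_paths=2000, seed=0):
    """Euler scheme for X_t = B_t / (t + 1)**kappa, absorbed when |X| reaches 1.

    Returns the endpoints X_{t_max} of the paths with tau_X > t_max,
    i.e. approximate samples from the law of X_{t_max} conditioned on survival.
    """
    rng = random.Random(seed)
    n_steps = int(t_max / dt)
    survivors = []
    for _ in range(n_paths):
        b = 0.0  # the driving Brownian motion, started at 0
        alive = True
        for k in range(1, n_steps + 1):
            b += math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if abs(b) >= (k * dt + 1.0) ** kappa:  # |X| >= 1: absorption
                alive = False
                break
        if alive:
            survivors.append(b / (t_max + 1.0) ** kappa)
    return survivors

# Supercritical example: for kappa > 1/2 the surviving mass
# concentrates near 0 as the horizon grows.
surv = simulate_conditioned_X(kappa=1.0)
```

For κ = 1 the empirical distribution of `surv` concentrates around 0 as `t_max` increases, in line with the supercritical regime described below, while for κ < 1/2 the surviving mass spreads over (−1, 1).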
We refer the reader to [11,13] for more details on the theory. Note however that these references only deal with the time-homogeneous setting, and that quasi-stationary distributions for time-inhomogeneous Markov processes do not exist except in particular cases (in particular, we will see that a quasi-stationary distribution exists only for κ = 1/2). Usually, in the literature dealing with quasi-stationarity, one is interested in showing the weak convergence of the marginal laws of a Markov process conditioned not to be absorbed by a cemetery set. The corresponding limiting probability measure is called a quasi-limiting distribution. For our purpose, we define a quasi-limiting distribution as follows:

Definition 1. α is a quasi-limiting distribution for X if, for some initial law µ supported on (−1, 1) and for any s ≥ 0, lim_{t→∞} P_{µ,s}(X_t ∈ ·|τ_X > t) = α, where the limit refers to the weak convergence of measures.
In [13] it is noted that, in the time-homogeneous setting, quasi-stationary distribution and quasi-limiting distribution are equivalent notions. In the time-inhomogeneous setting, this equivalence no longer holds. In particular, a time-inhomogeneous Markov process may admit a quasi-limiting distribution without admitting a quasi-stationary distribution. However, a quasi-stationary distribution is necessarily a quasi-limiting distribution.
The quasi-limiting distribution is not the only point of interest in the theory of quasi-stationarity. Another is the Q-process, which can be thought of as the law of the Markov process conditioned "never to be absorbed". Concerning the process X, we define the Q-process as follows:

Definition 2. We say that there is a Q-process for X if there exists a family (Q_{x,s})_{x∈(−1,1),s≥0} of probability measures defined, for any x ∈ (−1, 1) and any s ≤ t, by Q_{x,s}(X_{[s,t]} ∈ ·) := lim_{T→∞} P_{x,s}(X_{[s,t]} ∈ ·|T < τ_X), where, for any u ≤ v, X_{[u,v]} is the trajectory of X between times u and v. The Q-process is then defined as the law of X under (Q_{x,s})_{x∈(−1,1),s≥0}.
In general, the Q-process is also a Markov process, and the theory of quasi-stationarity allows one to obtain ergodicity results for it. In particular, under some assumptions (see for example [6,7,18]), the Q-process admits a stationary distribution which is absolutely continuous with respect to the quasi-stationary distribution.
Finally, a third concept to study is the quasi-ergodic distribution, defined as follows:

Definition 3. β is a quasi-ergodic distribution for X if, for some initial law µ supported on (−1, 1) and for any s ≥ 0,

lim_{t→∞} (1/t) ∫_0^t P_{µ,s}(X_u ∈ ·|τ_X > t) du = β,

where the limit refers to the weak convergence of measures.

In the literature, this notion is also called the mean-ratio quasi-stationary distribution. The references [11,13] do not deal with quasi-ergodic distributions; see for example [5,9], which provide general assumptions implying the existence of quasi-ergodic distributions for time-homogeneous Markov processes. In particular, it is shown in [5] that, if the Q-process is Harris recurrent, the quasi-ergodic distribution is the stationary distribution of the Q-process. Concerning the time-inhomogeneous setting, similar results can be stated (see [16]) when the Q-process converges weakly at infinity; in this case, the quasi-ergodic distribution coincides with the limiting probability measure. Some general results on quasi-stationarity for time-inhomogeneous Markov processes are established in [10], where it is shown that a Doeblin-type criterion implies a mixing property (also called merging, or weak ergodicity) and the existence of the Q-process. However, for our purpose, these conditions would be difficult to establish. See also [19,12,15,16] for a few results about quasi-stationarity in the time-inhomogeneous setting, and [1] for ergodic properties of general non-conservative (time-homogeneous and time-inhomogeneous) semi-groups.

Now let us come back to our process X. As one can expect, the existence of a quasi-limiting distribution, a Q-process and a quasi-ergodic distribution will strongly depend on κ. More precisely, three regimes are identified:
• κ > 1/2: we will say that X is supercritical;
• κ = 1/2: we will say that X is critical;
• κ < 1/2: we will say that X is subcritical.
The aim of this paper is therefore to show the existence of a quasi-limiting distribution, a Q-process and a quasi-ergodic distribution in each regime.
More precisely, it will be shown in Section 2 that, for any probability measure µ on (−1, 1) and any s ≥ 0, lim_{t→∞} P_{µ,s}(X_t ∈ ·|τ_X > t) = δ_0 in the supercritical regime. This regime is of little interest, and the existence of a Q-process and of a quasi-ergodic distribution will not be shown.
Section 3 is devoted to the critical case. This is the only regime for which there is a (unique) quasi-stationary distribution for X. More precisely, it will be shown in Subsection 3.1 that the conditional probability distribution P_{µ,s}[X_t ∈ ·|τ_X > t] converges polynomially fast in total variation to the quasi-stationary distribution, where the total variation distance between two probability measures µ and ν is defined as

‖µ − ν‖_TV := sup_{A measurable} |µ(A) − ν(A)|.

Moreover, this convergence in total variation holds uniformly in the initial distribution µ. This is due to the fact that, in this regime, the process X is obtained from an Ornstein-Uhlenbeck process by a non-linear time change. As a result, the quasi-stationary distribution for X is the one of an Ornstein-Uhlenbeck process. In the same way, the existence of a Q-process is shown in Subsection 3.2. However, the existence and uniqueness of a quasi-ergodic distribution, dealt with in Subsection 3.3, cannot be deduced from this time change, and the proof requires more technical arguments. Moreover, it is noteworthy that, contrary to what might be expected, the quasi-ergodic distribution does not coincide with the stationary measure of the Q-process.
Finally, the main part of this paper is devoted to the subcritical regime, in Section 4. In particular, it is shown in Subsection 4.3 that, for any initial law µ and any s ≥ 0, P_{µ,s}(X_t ∈ ·|τ_X > t) converges weakly, as t goes to infinity, towards the quasi-stationary distribution of a Brownian motion absorbed at {−1, 1}. The key argument is an approximation of the law of (X_t)_{t≥s} by that of a time-changed Brownian motion, as s goes to infinity. This approximation is established in Subsection 4.1 and will also be used to deduce the existence of a quasi-ergodic distribution in Subsection 4.4. Subsection 4.5 finally concludes with the existence of a Q-process.
Let us now introduce some notation. For any E ⊂ R, denote by M_1(E) the set of probability measures supported on E and, for any bounded measurable function f on (−1, 1) and µ ∈ M_1((−1, 1)), denote µ(f) := ∫ f dµ. For a general Markov process (A_t)_{t≥0}, denote by (F^A_{s,t})_{s≤t} the canonical filtration of (A_t)_{t≥0} and by (P^A_{x,s})_{x∈R,s≥0} a family of probability measures such that, for any x ∈ R and s ≥ 0, P^A_{x,s}(A_s = x) = 1. For any probability measure µ on R and any s ≥ 0, we define P^A_{µ,s} := ∫ P^A_{x,s} µ(dx); the family of probability measures (P^A_{µ,s})_{µ∈M_1(R),s≥0} then satisfies the Markov property. If the process A is time-homogeneous, we define, for any µ ∈ M_1(R), P^A_µ := P^A_{µ,0}. For A = X, we will keep the notation established at the beginning of the introduction.

2 The supercritical case

This section is devoted to the situation κ > 1/2. The following theorem states the existence of a unique quasi-limiting distribution, which is δ_0:

Theorem 1. For any µ ∈ M_1((−1, 1)) and s ≥ 0, lim_{t→∞} P_{µ,s}(X_t ∈ ·|τ_X > t) = δ_0. (1)

Proof. By Markov's inequality, for any ε > 0 and any probability measure µ,

P_{µ,s}(|X_t| ≥ ε | τ_X > t) ≤ E_{µ,s}(X_t² | τ_X > t)/ε².
Then the convergence (1) is a consequence of the following lemma.
3 The critical case

This section is devoted to the situation κ = 1/2.

Existence and uniqueness of a quasi-stationary distribution
We state a first theorem on the existence of the quasi-limiting distribution (and quasi-stationary distribution) in the critical regime.
Theorem 2. Let α_OU be the unique quasi-stationary distribution for the Ornstein-Uhlenbeck process absorbed at (−1, 1)^c whose generator is

L f(x) = (1/2) f''(x) − (x/2) f'(x). (2)

Then α_OU is also the unique quasi-stationary distribution for X, and there exist C_OU, γ_OU > 0 such that, for any probability measure µ on (−1, 1) and any 0 ≤ s ≤ t,

‖P_{µ,s}(X_t ∈ ·|τ_X > t) − α_OU‖_TV ≤ C_OU ((s+1)/(t+1))^{γ_OU}. (3)

In particular, for any µ ∈ M_1((−1, 1)) and s ≥ 0, the sequence P_{µ,s}(X_t ∈ ·|τ_X > t) converges weakly towards α_OU as t goes to infinity.
Remark 1. Using the spectral theory for the Ornstein-Uhlenbeck generator, α_OU can easily be computed and one has

α_OU(dx) = K (1 − x²) e^{−x²/2} dx,

where K is the renormalization constant. In particular, x ↦ (1 − x²) is the opposite of a Hermite polynomial, positive on (−1, 1) and vanishing at {−1, 1}, and π(dx) := e^{−x²/2} dx is a reversible measure for L.
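As a sanity check (assuming the generator has the form Lf = (1/2)f″ − (x/2)f′, consistent with the reversible measure e^{−x²/2} mentioned above), the eigenvalue relation behind this remark is a one-line computation:

```latex
L(1-x^2) \;=\; \tfrac12\,(1-x^2)'' \;-\; \tfrac{x}{2}\,(1-x^2)'
\;=\; -1 + x^2 \;=\; -(1-x^2),
```

so 1 − x² is a Dirichlet eigenfunction of L on (−1, 1) with eigenvalue −1, and multiplying this eigenfunction by the reversible density e^{−x²/2} gives the density of α_OU up to normalization.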
Remark 2. It is well known (see [11,13]) that there exists λ_OU > 0 such that, for any t ≥ 0,

P^Z_{α_OU}(τ_Z > t) = e^{−λ_OU t},

where τ_Z := inf{t ≥ 0 : |Z_t| = 1}. Moreover, for any f ∈ D,

α_OU(Lf) = −λ_OU α_OU(f),

where L is defined in (2). Using the explicit formula of α_OU, it is easy to check that λ_OU = 1.

Proof of Theorem 2. Let Z be the Ornstein-Uhlenbeck process whose infinitesimal generator is L. Then, for any probability measure µ on (−1, 1) and any s ≤ t,

P_{µ,s}(X_t ∈ ·|τ_X > t) = P^Z_µ(Z_{log((t+1)/(s+1))} ∈ ·|τ_Z > log((t+1)/(s+1))). (6)

This can be shown using that there exists a Brownian motion (W_t)_{t≥0} (starting from 0) such that, for any u ≥ 0, Z_u = e^{−u/2} W̃_{e^u−1}, where W̃ is another Brownian motion starting from 0, and setting u = log(t + 1). Hence, using (6), α_OU is also the unique quasi-stationary distribution for the time-inhomogeneous Markov process X. Moreover, since Z satisfies the assumptions (A1) and (A2) of [6] (this is actually shown in [8]), then, by Theorem 2.1 in [6], there exist C_OU > 0 and γ_OU > 0 such that, for any t ≥ 0 and for any probability measure µ,

‖P^Z_µ(Z_t ∈ ·|τ_Z > t) − α_OU‖_TV ≤ C_OU e^{−γ_OU t}.

Using (6), one deduces that, for any s ≤ t and any probability measure µ on (−1, 1), the bound (3) holds. This concludes the proof.
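The time-change computation underlying the proof can be sketched as follows (with the convention X_t = B_t/√(t+1) for κ = 1/2):

```latex
Z_u \;:=\; e^{-u/2}\,B_{e^u-1} \;=\; X_{e^u-1},
\qquad
dZ_u \;=\; -\tfrac12 Z_u\,du \;+\; e^{-u/2}\,dB_{e^u-1}.
```

The process W_u := ∫_0^u e^{−v/2} dB_{e^v−1} has quadratic variation ∫_0^u e^{−v}·e^v dv = u and is therefore a Brownian motion, so dZ_u = dW_u − (1/2)Z_u du: Z is an Ornstein-Uhlenbeck process with generator L. The substitution u = log(t + 1) exchanges the events {τ_X > t} and {τ_Z > log(t + 1)}.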

Existence of the Q-process
Before tackling the existence of the Q-process, we need the following proposition:

Proposition 1. There exists a non-negative function η_OU : [−1, 1] → R_+, positive on (−1, 1) and vanishing on {−1, 1}, such that, for any x ∈ (−1, 1) and any s ≥ 0,

lim_{t→∞} ((t+1)/(s+1)) P_{x,s}(τ_X > t) = η_OU(x),

where the convergence holds uniformly on [−1, 1] and α_OU(η_OU) = 1. Moreover, the function η_OU is bounded, belongs to the domain of L defined in (2), and satisfies L η_OU = −λ_OU η_OU, where λ_OU is defined in Remark 2.
An interesting consequence of Proposition 1 and (6) is stated as the following corollary:

Corollary 1. Let B be a Brownian motion on R, and denote by T_B := inf{t ≥ 0 : |B_t| = √(t+1)} its exit time from the two-sided square-root boundary. Then, for any x ∈ (−1, 1),

lim_{t→∞} (t+1) P_x(T_B > t) = η_OU(x).

Proof of Proposition 1 and Corollary 1. Using Proposition 2.3 in [6] applied to the process Z, together with (6), one obtains the stated convergence for any x ∈ (−1, 1) and s ≥ 0, where we finally used (5). This ends the proof of Proposition 1. Now it is easy to see that, for any x ∈ (−1, 1) and any t ≥ 0, {T_B > t} = {τ_X > t} under P_{x,0}. Thus, using Proposition 1 and (7), we conclude the corollary.
Remark 4. In [4], Breiman shows a similar result for a one-dimensional Brownian motion absorbed by a one-sided square-root boundary. More precisely, denoting by T* the corresponding exit time, he studies the polynomial decay of the probabilities P(T* > t). The reader can also see [17] for more general boundaries.
We turn to the existence of the Q-process and its ergodicity.
Proposition 2.
• There exists a Q-process, and the family of probability measures (Q_{x,s})_{x∈(−1,1),s≥0} defined in Definition 2 is given as follows: for any x ∈ (−1, 1) and s ≤ t,

dQ_{x,s}/dP_{x,s} |_{F_{s,t}} = ((t+1)/(s+1)) (η_OU(X_t)/η_OU(x)) 1_{τ_X > t}.

• The probability measure β_OU defined by β_OU(dx) := η_OU(x) α_OU(dx) is the unique stationary distribution of X under (Q_{x,s})_{x∈(−1,1),s≥0}, and the convergence of Q_{x,s}(X_t ∈ ·) towards β_OU holds at the polynomial rate given by the constants C_OU and γ_OU used in (3).

Quasi-ergodic distribution
Now let us state and show the existence and uniqueness of the quasi-ergodic distribution:

Theorem 3. For any probability measure µ on (−1, 1), any s ≥ 0 and any measurable set S,

lim_{t→∞} (1/t) ∫_0^t P_{µ,s}(X_u ∈ S|τ_X > t) du = (∫_S E^Z_x(τ_Z) α_OU(dx)) / (∫_{(−1,1)} E^Z_x(τ_Z) α_OU(dx)),

where we recall that Z is the Ornstein-Uhlenbeck process whose generator is (2).
Remark 5. As mentioned in the introduction, the quasi-ergodic distribution E^Z_x(τ_Z) α_OU(dx) is different from the invariant measure β_OU of the Q-process. To our knowledge, there does not exist any explicit formula for the density x ↦ E^Z_x(τ_Z).

Proof. In order to make the proof easier to read, Theorem 3 will be proved for s = 0 in what follows; the proof for a general s is very similar. First, using the change of variable u = qt, one has, for any µ ∈ M_1((−1, 1)), t > 0 and f continuous and bounded,

(1/t) ∫_0^t E_{µ,0}(f(X_u)|τ_X > t) du = ∫_0^1 E_{µ,0}(f(X_{qt})|τ_X > t) dq.

As a result, it is enough to show the weak convergence of (P_{µ,0}(X_{qt} ∈ ·|τ_X > t))_{t≥0} for any q ∈ (0, 1), and then to conclude with Lebesgue's dominated convergence theorem. Let µ ∈ M_1((−1, 1)), q ∈ (0, 1) and f continuous and bounded. By the Markov property and (6), for any t ≥ 0, E_{µ,0}(f(X_{qt})|τ_X > t) can be rewritten through a function f_t, defined for any y ∈ (−1, 1) via (6). Now define, for any y ∈ (−1, 1), the candidate limit f_∞. It is easy to see that (f_t)_{t≥0} converges pointwise towards f_∞. Moreover, a simple computation shows that the function t ↦ (t+1)/(qt+1) is increasing, which implies that the sequence (f_t)_{t≥0} is decreasing. Now, let us identify the limit of the normalizing factors. On the one hand, by Proposition 1, lim_{t→∞} (qt + 1) P_{µ,0}(τ_X > qt) = µ(η_OU).
Remark 6. As seen in the previous proof, the quasi-ergodic distribution for X is obtained by computing the limit of P_{µ,s}(X_{qt} ∈ ·|τ_X > t) as t goes to infinity, for fixed q ∈ (0, 1). By the time change t ↦ log(1 + t), this limit is the same as that of the corresponding conditioned law for the Ornstein-Uhlenbeck process with delay −log(q) > 0. Such a limiting probability measure is called a −log(q)-Yaglom limit and is different from the invariant measure of the Q-process of Z (obtained in the limit −log(q) → +∞). This provides a heuristic explanation of why the quasi-ergodic distribution for X is different from the one of the Ornstein-Uhlenbeck process Z.
4 The subcritical case

In this section, we will show that a quasi-limiting distribution, a quasi-ergodic distribution and a Q-process exist when κ < 1/2. To do this, the strategy will be to compare (in a sense described later) the process X with the process Y absorbed at (−1, 1)^c and defined by

dY_t = (t + 1)^{−κ} dB_t.

Then the quasi-stationarity for X will be deduced from that for Y.
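To motivate this choice of Y, a short side computation (using the convention X_t = B_t/(t+1)^κ and Itô's formula; it is a sketch of the Girsanov argument used below) compares the two dynamics:

```latex
dX_t \;=\; (t+1)^{-\kappa}\,dB_t \;-\; \frac{\kappa X_t}{t+1}\,dt,
\qquad
dY_t \;=\; (t+1)^{-\kappa}\,dB_t .
```

The two SDEs share the same diffusion coefficient and differ only through the drift of X. The Girsanov exponent removing this drift on [s, T ∧ τ_X] has quadratic variation

```latex
\int_s^{T\wedge\tau_X} \kappa^2 X_u^2\,(u+1)^{2\kappa-2}\,du
\;\le\; \frac{\kappa^2}{1-2\kappa}\,(s+1)^{2\kappa-1},
```

which tends to 0 as s → ∞ because |X_u| < 1 before absorption and 2κ − 1 < 0; this vanishing is the mechanism behind the function F(s) of the approximation result below.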

Approximation by Y through asymptotic pseudotrajectories
Denote τ_Y := inf{t ≥ 0 : |Y_t| = 1}. The aim of this subsection is to show the following proposition:

Proposition 3. There exists a function F : R_+ → R_+ with lim_{s→∞} F(s) = 0 such that, for any 0 ≤ s ≤ t ≤ T and any probability measure µ on (−1, 1),

‖P_{µ,s}(X_t ∈ ·|τ_X > T) − P_{µ,s}(Y_t ∈ ·|τ_Y > T)‖_TV ≤ F(s). (11)

Remark 7. (11) provides a decay towards 0 uniformly in the initial law, in t and in T. It can be seen as an analogue of the asymptotic pseudotrajectories introduced by Benaïm and Hirsch in [3]. See also [2] for more details about asymptotic pseudotrajectories in the general case.
By Itô's formula applied to t ↦ κ(t + 1)^{2κ−1} X_t², for any s ≤ t, one obtains the local martingale N_{s,t}. Note that the process (N_{s,t∧τ_X})_{s≤t} is almost surely uniformly bounded, so the stochastic exponential E(N)_{s,t∧τ_X} is a martingale. For any t ≥ s ≥ 0 and µ ∈ M_1((−1, 1)), define G_{µ,s} as the probability measure whose density with respect to P_{µ,s} is given by this stochastic exponential. Then, by the Girsanov theorem, the law of (X_{t∧τ_X})_{t≥s} under G_{µ,s} is the law of (Y_{t∧τ_Y})_{t≥s} under P_{µ,s}. In particular, for any measurable set S, any probability measure µ on (−1, 1) and any 0 ≤ s ≤ t ≤ T, the two conditioned laws can be compared, with N′_{s,T} = N_{s,T} + κ log((T+1)/(s+1)). By this last inequality and the triangle inequality, one obtains, for any 0 ≤ s ≤ t ≤ T and any measurable set S, a bound in terms of the quantities A_s and C_s.
In order to show (11), we will bound the functions A_s and C_s.
Step 1: Upper bound for C_s.
For any 0 ≤ s ≤ t ≤ T, any probability measure µ and any measurable set S,
On the event {τ_X > T}, X_u² < 1 for any 0 ≤ u ≤ T. Hence, the function f defined above is bounded as follows: in particular, for any 0 ≤ s ≤ t ≤ T, any probability measure µ and any measurable set S, C_s(µ, t, T, S) ≤ φ(s).
Step 2: Upper bound for A_s.
Taking S = (−1, 1) in the previous bound, we deduce an upper bound for A_s, valid for any s ≤ T and any probability measure µ on (−1, 1). We then set, for any s ≥ 0, F(s) accordingly, which concludes the proof.

Quasi-stationarity for Y
Now, we are interested in the quasi-stationarity of the process Y. Note that, by the Dubins-Schwarz theorem, there exists a Brownian motion B̃ such that, for any t ≥ 0,

Y_t = B̃_{φ(t)}, where φ(t) := ((t+1)^{1−2κ} − 1)/(1 − 2κ). (14)

Denote τ_B̃ := inf{t ≥ 0 : |B̃_t| = 1}. Then, by (14), for any initial law µ and any s ≥ 0, the law of Y conditioned not to be absorbed is obtained from that of B̃ through this time change. It is well known that a Brownian motion absorbed at (−1, 1)^c admits a unique quasi-stationary distribution α_Bm, whose explicit formula is

α_Bm(dx) = (π/4) cos(πx/2) dx,

and that there exists λ_Bm > 0 (see [13]) such that P^B̃_{α_Bm}(τ_B̃ > t) = e^{−λ_Bm t} for any t ≥ 0; moreover, λ_Bm = π²/8. The Brownian motion absorbed at (−1, 1)^c satisfies the Champagnat-Villemonais conditions (A1)−(A2) of [6], which implies the existence of C_Bm, γ_Bm > 0 such that, for any probability measure µ and any t ≥ 0,

‖P^B̃_µ(B̃_t ∈ ·|τ_B̃ > t) − α_Bm‖_TV ≤ C_Bm e^{−γ_Bm t}. (15)

Thus, using the Dubins-Schwarz transformation, an analogous bound holds for Y, for any s ≤ t and any probability measure µ. Moreover, let η_Bm be the function defined by

η_Bm(x) := lim_{t→∞} e^{λ_Bm t} P^B̃_x(τ_B̃ > t).

This definition makes sense by Proposition 2.3 in [6]. We recall moreover that η_Bm is positive on (−1, 1) and vanishes on {−1, 1}.

Proposition 4. (i) The convergence e^{λ_Bm t} P^B̃_x(τ_B̃ > t) → η_Bm(x) holds uniformly on [−1, 1].

(ii) There exists a Q-process for Y in the sense of Definition 2, and the family of probability measures (Q^Y_{x,s})_{x∈(−1,1),s≥0} is given by an η_Bm-transform: for any x ∈ (−1, 1) and s ≤ t,

dQ^Y_{x,s}/dP_{x,s}|_{F_{s,t}} = e^{λ_Bm (φ(t) − φ(s))} (η_Bm(Y_t)/η_Bm(x)) 1_{τ_Y > t}.
(iii) The probability measure β_Bm defined by

β_Bm(dx) := η_Bm(x) α_Bm(dx) (16)

is the unique stationary distribution of Y under (Q^Y_{x,s})_{x∈(−1,1),s≥0} and, for any x ∈ (−1, 1) and s ≥ 0, Q^Y_{x,s}(Y_t ∈ ·) converges to β_Bm in total variation as t goes to infinity, with a rate governed by the constants C_Bm and γ_Bm of (15).
Proof. The proof is essentially the same as for the proof of Proposition 2.
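The explicit pair behind α_Bm can be verified numerically. Here is a small stdlib-only check (the grid and step sizes are arbitrary choices) that φ(x) = cos(πx/2) solves −(1/2)φ″ = λφ on (−1, 1) with Dirichlet boundary conditions and λ = π²/8, and that the density (π/4)cos(πx/2) has total mass 1:

```python
import math

lam = math.pi ** 2 / 8  # candidate value of lambda_Bm

def phi(x):
    # Candidate Dirichlet eigenfunction on (-1, 1); vanishes at x = -1 and x = 1.
    return math.cos(math.pi * x / 2)

# Check -1/2 * phi'' = lam * phi via central finite differences.
h = 1e-4
max_err = 0.0
for i in range(-9, 10):
    x = i / 10
    second = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h ** 2
    max_err = max(max_err, abs(-0.5 * second - lam * phi(x)))

# Check that (pi/4) * cos(pi x / 2) integrates to 1 over (-1, 1) (midpoint rule).
n = 10_000
mass = sum((math.pi / 4) * phi(-1 + (2 * k + 1) / n) * (2 / n) for k in range(n))
```

Both checks succeed to within discretization error, consistent with λ_Bm = π²/8 and with α_Bm being a probability measure on (−1, 1).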

Quasi-limiting distribution of X
Now we will use Proposition 3 in order to show the existence of a quasi-limiting distribution for the process X.

Theorem 4. For any µ ∈ M_1((−1, 1)) and any s ≥ 0, lim_{t→∞} P_{µ,s}(X_t ∈ ·|τ_X > t) = α_Bm, where the limit refers to the weak convergence of measures.

Quasi-ergodic distribution
The following theorem states the existence and uniqueness of the quasi-ergodic distribution (in the sense of Definition 3) for the process X. Moreover, this quasi-ergodic distribution is the probability measure β_Bm defined in (16).

Theorem 5. For any probability measure µ on (−1, 1) and any s ≥ 0,

lim_{t→∞} (1/t) ∫_0^t P_{µ,s}(X_u ∈ ·|τ_X > t) du = β_Bm,

where the limit refers to the weak convergence of measures.
Proof. As for the proof of Theorem 3, the following proof will only be done for s = 0, but the statement holds for a general starting time s. Let µ ∈ M_1((−1, 1)). We recall the notation µ_{(s,t)} = P_{µ,s}(X_t ∈ ·|τ_X > t), for all s ≤ t. For any probability measure µ and t ≥ 0, by Lebesgue's dominated convergence theorem, in order to prove the convergence towards the quasi-ergodic distribution it remains to show the convergence of the conditioned marginals at intermediate times. The idea of the following reasoning is the same as in the critical case. Similarly, one has, for any x ∈ (−1, 1), t ≥ 0, q ∈ (0, 1) and f bounded measurable, a decomposition through functions g_t, defined for any y ∈ (−1, 1). Also define, for any y ∈ (−1, 1), g_∞(y) := f(y)η_Bm(y).
Recalling the definition of g_t and using Proposition 2.3 in [6] applied to the process B̃, (g_t)_{t≥0} converges uniformly on (−1, 1) towards g_∞. As a result, if one of the limits in the following equality exists, then the other exists as well, by the definition of conditional expectation. On the one hand, by (15), the conditioned marginals converge towards α_Bm. On the other hand, using again Proposition 2.3 in [6] applied to the process B̃, and again by (15), lim_{t→∞} µ_{q²t}(η_Bm) = α_Bm(η_Bm) = 1. As a result, we deduce from (19), (20) and (21) the stated convergence for any bounded measurable function f. We conclude (18) by Lebesgue's theorem.

Existence of the Q-process
Now, it remains to prove the existence of the Q-process. More precisely, this subsection is devoted to the proof of the following theorem:

Theorem 6. For any s ≤ t and µ ∈ M_1((−1, 1)), the family of probability measures (P_{µ,s}(X_{[s,t]} ∈ ·|T < τ_X))_{T>t} converges weakly, as T goes to infinity, towards a limit expressed through the functions (η_t)_{t≥0} defined in Proposition 5. Moreover, for any s ≤ t and µ ∈ M_1((−1, 1)), the law of this limit is close, up to an error controlled by F(s), to the law of the Q-process of Y, where F is the same function as in Proposition 3 and Q^Y is as defined in Proposition 4.
Before proving this theorem, let us first state the following key proposition.

Proof of Proposition 5
The remainder of the paper is dedicated to proving Proposition 5. Two important lemmata are used in the proof, so we will first prove these lemmata before tackling the proof of Proposition 5.
Proof. a) Let a > 0. To prove this, note that, for any x ∈ (−1, 1) and t ≥ s ≥ 0, the survival event of X can be rewritten in terms of the Brownian motion B and the moving boundary, where, for any s ≥ 0, τ denotes the corresponding exit time. So, the Harnack inequality to show becomes a statement on these exit probabilities, valid for any t ≥ 0. Then, setting C_{s,a} as the corresponding hitting probability from level a(s+1)^κ, one has C_{s,a} > 0 for any s ≥ 0 and

inf_{x∈[−a,a]} P_{x,s}(τ_X > t) ≥ C_{s,a} sup_{x∈(−1,1)} P_{x,s}(τ_X > t).

b) This is straightforward using the Harnack inequality for a Brownian motion and the change of time provided by the Dubins-Schwarz transformation (14). Now let us state and prove Lemma 3.
Using the same argument as in the proof of Lemma 3, by Lemma 2, one has

P_{µ,s}(τ_X > t)/P_{ν,s}(τ_X > t) ≤ 2/C_{s,a_s(ν)}.
For any µ ∈ M_1((−1, 1)), integrating both sides of the equation with respect to µ, letting u → ∞ and using Lebesgue's theorem, we deduce that, for any s ≤ t, there exists a positive constant c_{s,t}, which does not depend on µ, such that

c_{s,t} = E_{µ,s}(1_{τ_X > t} η_t(X_t))/µ(η_s).
In view of this last equality, replacing, for all s ≥ 0, the function η_s(x) by η_s(x)/c_{0,s} yields (24).