A probabilistic view on the long-time behaviour of growth-fragmentation semigroups with bounded fragmentation rates

The growth-fragmentation equation models systems of particles that grow and reproduce as time passes. An important question concerns the asymptotic behaviour of its solutions. Bertoin and Watson (2018) developed a probabilistic approach relying on the Feynman-Kac formula, which enabled them to answer this question for sublinear growth rates. This assumption on the growth ensures that microscopic particles remain microscopic. In this work, we go further in the analysis, assuming bounded fragmentation rates and allowing arbitrarily small particles to reach macroscopic mass in finite time. We establish necessary and sufficient conditions on the coefficients of the equation that ensure Malthusian behaviour with exponential speed of convergence to the asymptotic profile. Furthermore, we provide an explicit expression of the latter.


Introduction
Imagine a population of individuals that grow and reproduce as time proceeds, in such a way that the evolution of each individual is independent of the others. The growth-fragmentation equation is the key equation that has been used in the field of structured population dynamics to model such systems. It was first introduced to describe cells dividing by fission [BA67] and, subsequently, it has also been used to model neuron networks [KPS14], polymerization [CLO+09, PPS13], the TCP/IP window size protocol for the internet [BMR02] and many other systems sharing the dynamics described above. The common point is that the "particles" under concern (cells, polymers, dust, etc.) are well-characterized by their mass (or "size"), i.e., a one-dimensional quantity that grows over time at a certain rate (depending on the mass) and that is distributed among the offspring when a dislocation event occurs. In this work, we do not assume conservation of mass at dislocation events. This means that some of the mass may be lost or gained during a dislocation. The main quantity of interest is the concentration of particles of mass x > 0 at time t ≥ 0, denoted by u_t(x). The growth-fragmentation equation describes the evolution of u_t(x) and can be obtained either by a mass balance, in a similar way as for fluid dynamics [BSCT+11, MD86], or by considering the Kolmogorov equation for the underlying jump process [Clo17, DHKR15]:

∂_t u_t(x) + ∂_x ( τ(x) u_t(x) ) + B(x) u_t(x) = ∫_x^∞ B(y) k(y, x) u_t(y) dy.   (1)

Here, τ is the growth rate, B the division rate and k the fragmentation kernel. In words, particles of size x > 0 grow with speed τ(x) and divide with division rate B(x). When a particle of size x splits, it produces an average of N(x) smaller particles, and B(x)k(x, y) is the rate of birth of a particle having size y from a particle with size x.
In this work, we rather deal with the weak form of the growth-fragmentation equation (1), that is

d/dt ⟨µ_t, f⟩ = ⟨µ_t, A f⟩.   (7)
Here, µ_t(dx) := u_t(x)dx, the function f is smooth with compact support and ⟨µ, g⟩ denotes ∫ g(x)µ(dx) for any measure µ and any function g, whenever it makes sense. The operator A, called the growth-fragmentation operator, has the form

A f(x) = τ(x) f′(x) + B(x) ( ∫_0^x f(y) k(x, y) dy − f(x) ),   (8)

and it is defined on some domain D_A of smooth functions, which will be made explicit in Section 3. Proper assumptions on the coefficients τ, B and k, specified in Section 3, guarantee that A is the infinitesimal generator of a unique strongly continuous positive semigroup (T_t)_{t≥0}. In this case, (7) has a unique solution, given by ⟨µ_t, f⟩ = ⟨µ_0, T_t f⟩. Note that the weak form (7) makes it possible to extend the analysis to cases where the concentration of particles is not absolutely continuous w.r.t. the Lebesgue measure. In particular, we are able to treat initial conditions of Dirac type. In this setting, for all x > 0, the measure µ_t(x, dy) on (0, ∞) such that ⟨µ_t(x, ·), f⟩ = T_t f(x) describes the concentration at time t of individuals of mass y when one starts at time 0 from a unit concentration of individuals of mass x, i.e., µ_0(x, dy) = δ_x(dy).
In general, one cannot expect an explicit expression for the growth-fragmentation semigroup (T_t)_{t≥0} and, motivated by several applications in mathematical modelling, many works are concerned with its behaviour for large times. Typically, one expects that, under proper assumptions on the growth and fragmentation rates, there exist ρ ∈ R, a Radon measure ν(dx), usually called the asymptotic profile, and a positive function h such that

lim_{t→∞} e^{−ρt} ⟨µ_t, f⟩ = ⟨µ_0, h⟩ ⟨ν, f⟩,   (9)

at least for every continuous and compactly supported function f : (0, ∞) → R. In the literature, the above convergence is often referred to as Malthusian behaviour. When it holds, a further important question concerns the speed of convergence. To understand why, consider for example the case in which (9) holds with ρ > 0. This would imply that the concentration of particles grows exponentially in t, albeit, in reality, due to several effects such as the scarcity of space and resources, an indefinite exponential growth is not possible. As a consequence, the growth-fragmentation equation is reliable only for rather early stages of the evolution of the population, and the exponent ρ and the asymptotic profile are meaningful only when e^{−ρt} T_t converges to the asymptotic profile fast enough. Thus, one wishes to establish the so-called exponential convergence, i.e.,

| e^{−ρt} T_t f(x) − h(x) ⟨ν, f⟩ | = O(e^{−βt}),   (10)

for some β > 0.
The tool that has been mostly used in the literature to investigate (9) is the spectral theory of semigroups and operators. The cornerstone of this approach consists in proving the existence of a solution to the so-called eigenvalue problem for A, namely a triplet (ρ, h, ν) that satisfies

A h = ρ h,  A* ν = ρ ν,  ⟨ν, h⟩ = 1,   (11)

with A* being the dual operator of A, ρ the leading eigenvalue of A and A*, ν a Radon measure and h a positive function. Proper assumptions on the growth and fragmentation rates that ensure existence and uniqueness of a solution to the eigenvalue problem have been established by several authors, for example Mischler and Scher [MS16], Doumic Jauffret and Gabriel [DJG10] and Michel [Mic06].
Once (11) is established, spectral techniques typically yield the convergence (9). Probabilistic methods provide an alternative route: an early work in this direction [CMP10] relied on probabilistic techniques to study the conservative version of (1), in which the total mass of the system is conserved. Bertoin and Watson [Ber19, BW18] developed a probabilistic approach to (9), relying on a Feynman-Kac representation of the growth-fragmentation semigroup, that circumvents the spectral theory of semigroups. They could establish necessary [BW18] and sufficient [Ber19] conditions for the Malthusian behaviour with exponential speed of convergence when the growth rate is continuous and sublinear, i.e., sup_{x>0} τ(x)/x < ∞. With a similar approach, Cavalli [Cav19] obtained necessary and sufficient assumptions for exponential convergence in the case of homogeneous fragmentations (the rate at which particles split does not depend on the size) and piecewise-linear growth rate. One of the main benefits of this approach is that it also provides a probabilistic representation of the quantities of interest (asymptotic profile, exponent ρ, etc.). A common point in the cases studied by Bertoin, Watson and Cavalli is that microscopic particles remain microscopic. More precisely, the time after which a particle of infinitesimal mass growing at speed τ reaches a fixed mass (say 1 for the sake of simplicity), namely

T := ∫_0^1 dz/τ(z),   (12)

is infinite. In this work, on the contrary, we focus on

∫_0^1 dz/τ(z) < ∞,   (13)

i.e., particles with arbitrarily small masses may become macroscopic after a bounded time.
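To make condition (13) concrete, consider the hypothetical power-law growth rate τ(x) = x^α (a choice made here only for illustration, not an assumption of the paper): the travel time from ε to 1 has a closed form, and it stays bounded as ε → 0 exactly when α < 1.

```python
import math

def travel_time(alpha, eps):
    """Time s(eps, 1) for the flow dx/dt = x**alpha to go from eps to 1,
    i.e. the integral of z**(-alpha) over [eps, 1], in closed form."""
    if alpha == 1.0:
        return -math.log(eps)                      # diverges as eps -> 0
    return (1.0 - eps ** (1.0 - alpha)) / (1.0 - alpha)

# alpha = 0.5 satisfies (13): the travel time converges to 2 as eps -> 0;
# alpha = 1 (linear growth, the Bertoin--Watson regime): it blows up.
```

So with α = 1/2 an arbitrarily small particle reaches mass 1 in time at most 2, while with linear growth microscopic particles remain microscopic.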
We further assume that particles with finite mass cannot reach infinite mass in finite time, i.e.,

∫_1^∞ dz/τ(z) = ∞.   (14)

We stress that, unlike the case in [Ber19, BW18, Cav19], in our model it is crucial to assume bounded fragmentations. Just as in [Ber19, BW18], our analysis relies on a Feynman-Kac representation of the semigroup (T_t)_{t≥0} in terms of an instrumental Markov process X = (X_t)_{t≥0}. Its infinitesimal generator is

G f(x) = τ(x) f′(x) + B(x)N(x) ∫_0^x ( f(y) − f(x) ) k(x, y)/N(x) dy,   (15)

and it is closely related to the growth-fragmentation operator A. However, the Markov process we rely on is different from the one used in [Ber19, BW18], letting us treat different situations. In their case, in fact, the dynamics of X can be seen as the dynamics of the mass of a distinguished individual in the population, such that, at every dislocation event, the distinguished daughter is chosen among the siblings by size-biased sampling. In particular, their process jumps at the same rate as the one at which the individuals of the population reproduce. In our case, the process X jumps at rate BN, while the particles in the system reproduce at rate B. Thus, X cannot be seen as a "well-chosen" particle in the system. From (15), we see that the trajectory t → X_t is driven by the deterministic flow velocity τ between consecutive jumps and that the jumps are the only source of randomness (in this case we say that X is piecewise deterministic; see [Dav84] for a complete introduction). Assumptions (3) and (6) guarantee that the total jump rate of X is bounded, so the jumps never accumulate. In the rest of the work, we assume that, for every x > 0, there exist α < x < β with

∫_α^x k(β, y) dy > 0,   (16)

which is equivalent to the irreducibility of the process X in (0, ∞), as is shown in Section 3. Comparing (8) and (15), we get the Feynman-Kac representation

T_t f(x) = E_x [ E_t f(X_t) ],   (17)

with

E_t := exp ( ∫_0^t B(X_s)(N(X_s) − 1) ds ),   (18)

where P_x (resp. E_x) is the probability measure (resp. the expectation) when the process X is conditioned to start at X_0 = x. Even though (17) is not quite explicit in general (a rigorous derivation is given in Section 3), it is of great help to study the behaviour of T_t as t → ∞.
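The representation (17)-(18) can be checked by simulation in a toy case with constant B and N (all the numerical choices below — linear growth, uniform self-similar jumps, the specific parameter values — are hypothetical illustrations, not assumptions of the paper). With B and N constant, the weight E_t = e^{B(N−1)t} is deterministic, and for f(x) = x and τ(x) = ax the expectation E_x[E_t f(X_t)] is known in closed form, since jumps occur at rate BN and each jump multiplies the mass by an independent uniform factor.

```python
import math
import random

def simulate_X(x, t, a, B, N, rng):
    """One path of the instrumental PDMP: flow tau(x) = a*x between jumps,
    jumps at constant rate B*N, each jump multiplying the mass by U ~ Unif(0,1)."""
    s = 0.0
    while True:
        w = rng.expovariate(B * N)               # waiting time of the rate-BN clock
        if s + w >= t:
            return x * math.exp(a * (t - s))     # follow the flow up to the horizon
        x *= math.exp(a * w) * rng.uniform(0.0, 1.0)  # grow, then jump downwards
        s += w

def feynman_kac(x, t, f, n=200_000, a=1.0, B=2.0, N=1.5, seed=1):
    """Monte Carlo estimate of T_t f(x) = E_x[E_t f(X_t)] as in (17)-(18)."""
    rng = random.Random(seed)
    weight = math.exp(B * (N - 1.0) * t)  # E_t is deterministic for constant B, N
    return weight * sum(f(simulate_X(x, t, a, B, N, rng)) for _ in range(n)) / n
```

For f(x) = x one gets T_t f(x) = x·exp((B(N−1) + a − BN/2)t), since the number of jumps up to t is Poisson(BNt) and E[U] = 1/2; the Monte Carlo estimate matches this within sampling error.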
A fundamental role is played by the function

L_{x,y}(q) := E_x [ e^{−qH(y)} E_{H(y)} 1_{H(y)<∞} ],  q ∈ R,   (19)

where H(y) denotes the first hitting time of y by X. An important property of L (we refer to Section 2 for an extensive analysis) is that it is non-increasing and convex in q. This allows us to fix x_0 > 0 and define the Malthus exponent

λ := inf { q ∈ R : L_{x_0,x_0}(q) < 1 }.   (20)

Main results
The main contribution of the present work is to provide sufficient conditions in terms of the coefficients τ , B and k that ensure exponentially fast convergence of e −λt T t to an asymptotic profile. Moreover, we also give an explicit expression of the latter.
Theorem 1.1. Assume (2), (3), (4), (6), (14), (16) and the forthcoming (35). If the Malthus exponent λ and the rates B and N satisfy

lim sup_{x→∞} B(x)(N(x) − 1) < λ,   (21)

then the Malthusian behaviour with exponential convergence (10) holds, with ρ = λ and with h and ν given explicitly by (22) and (23).

It is further interesting to discuss the criterion (21). On the one hand, it may seem a bit surprising, as it is often assumed in the literature that fragmentations of big particles should be strong enough to counterbalance the growth. However, a heuristic interpretation can be given by making a comparison with branching processes. Consider a system in which particles die with rate B and, when a particle of size x dies, it is replaced by an average of N(x) particles. The quantity B(x)(N(x) − 1) can be seen as the average "increase" in the number of particles of the system that arises from the death of a particle of size x, whilst the Malthus exponent λ represents the long-time increase in the number of particles of the system. Condition (21) says that, when particles are large enough, they mostly produce a number of particles that is smaller than the average. So, roughly speaking, the main contribution to the evolution of µ_t comes from particles that stay in some compact subset of [0, ∞). Condition (21) then does not come as a surprise, as it is well known that compactness plays a key role in establishing Malthusian behaviour, for example in the Krein-Rutman setting. Condition (21) may still seem unsatisfactory, since it depends not only on the coefficients, but also on the Malthus exponent λ. However, in many cases, it can be made much more explicit. In particular, if X is recurrent and B, N are not constant, then inf_{x>0} B(x)(N(x) − 1) < λ (see Proposition 2.1). Thus, when X is recurrent, (21) holds as soon as

lim sup_{x→∞} B(x)(N(x) − 1) ≤ inf_{x>0} B(x)(N(x) − 1).

This makes it possible to find explicit conditions for the Malthusian behaviour in the important case when the fragmentation kernel is self-similar, i.e.,

k(x, y) = (1/x) k_0(y/x),  0 < y < x,   (24)

where k_0 ∈ L^1([0, 1]).
For all r ∈ R, we define

M_r := ∫_0^1 z^r k_0(z) dz.

It is easy to check that, in this case, N(x) = M_0 for all x > 0.
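For instance, for the hypothetical uniform binary kernel k_0 ≡ 2 (a standard example, not singled out in the paper), the moments M_r can be evaluated numerically: M_0 = 2 recovers the mean number N of fragments, and M_1 = 1 expresses that mass is conserved on average.

```python
def M(r, k0, n=100_000):
    """Midpoint-rule approximation of M_r = integral of z**r * k0(z) over [0, 1]."""
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** r * k0((i + 0.5) * h) for i in range(n)) * h

def k0_uniform_binary(z):
    return 2.0   # each split produces on average two uniformly placed fragments
```

Note that r ↦ M_r is decreasing, since z^r decreases in r for z ∈ (0, 1).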
and that there exists x ∞ > 0 such that, for all x ≥ x ∞ , Then, the Malthusian behaviour with exponential speed of convergence (10) holds with ρ = λ and h and ν as in Theorem 1.1.
We stress that condition (27) is quite natural and it can be interpreted as a balance between the growth and the fragmentation of large particles.

Related results
The growth-fragmentation equation with self-similar fragmentation rate has been extensively studied in the literature and it is interesting to compare our results with the previous ones.
To start, Bouguet [Bou18] investigated positive recurrence for the family of piecewise-deterministic Markov processes that arise in our analysis. Sufficient conditions for positive recurrence are provided in the case in which τ and B behave as power functions of the size in a neighbourhood of 0 and ∞. In addition to (27), in [Bou18] a balance between growth and fragmentation of very small particles is also assumed (assumption (2.5) in [Bou18]).
In our case, this extra condition is instead a direct byproduct of our setting (see the proof of Theorem 1.2 for further details). Doumic Jauffret and Gabriel [DJG10] obtained conditions that ensure the existence of eigenelements and Malthusian behaviour (by the general entropy method). Their assumption clearly implies our condition (27). Furthermore, they also assume (13) when B(0) > 0, and we recall that we do not assume conservation of mass, that is, condition (6) in [DJG10]. Bernard and Gabriel [BG17] provided sufficient conditions for the existence of a solution to the eigenvalue problem (11) in the self-similar case, assuming bounded fragmentations. Our condition on the behaviour of τ at ∞ is less restrictive than theirs, as they assume that there exist ᾱ ≤ α < 1 controlling the growth rate at infinity. Similarly, for the fragmentation rate, instead of their assumption that B(x) is constant for large x, we only require continuity conditions on B. We also mention [BCGM19], in which the authors analyse the self-similar case assuming a constant growth rate. Finally, our results should be considered together with the ones obtained by Bertoin [Ber19], who also analyses the self-similar case (see paragraph 3.5), but in a complementary framework. In fact, in [Ber19], (13) does not hold and the fragmentations may be unbounded.
Condition (32) in [Ber19] is the same as our condition (27). However, [Ber19] again requires a balance between growth and fragmentation of very small particles, which is always verified under our assumptions. We conclude by mentioning that a possible approach to the study of the asymptotic behaviour of the growth-fragmentation equation may be developed with the help of quasi-stationary distributions. We refer to [CV16] and [CV17] for a comprehensive introduction to the topic.

Outline of the paper
The article is organised as follows. In Section 2 we present some general results on Markov processes with only negative jumps, that will be used in the rest of the work. In Section 3 we establish existence and uniqueness of the growth-fragmentation semigroup, as well as its Feynman-Kac representation. Moreover, we provide a characterization of the Malthusian behaviour in Theorem 3.3. Section 4 is devoted to the proofs of Theorem 1.1 and Theorem 1.2. Finally, we provide some examples in Section 5.

Background on the instrumental Markov process
In this section we aim to present in a more general setting some of the ideas and techniques used by Bertoin and Watson [BW18], Bertoin [Ber19] and Cavalli [Cav19] to study properties of Markov processes of the type (15). The results obtained will be of great use in the next sections. For the sake of simplicity, we use here the same notation that was used in the introduction (for instance the notation X, P x , E x or H), even though we are considering slightly more general processes.

Setup
This section concerns processes with infinitesimal generator of the type (15). However, rather than working with the analytic expression of the generator, it will be more useful for our analysis to focus on the path properties of this kind of process. We consider a Markov process X on [0, ∞) that satisfies the following. The trajectory t → X_t follows a strictly increasing deterministic flow between consecutive jumps and the jumps are the only source of randomness. The total jump rate remains bounded, so the jumps never accumulate. Denote by P_x the law of X started at x ≥ 0, by E_x the corresponding expectation, and let H(y) := inf{ t > 0 : X_t = y } be the first hitting time of y > 0. We make the following assumptions:
(A1) X has no positive jumps (it is upward skip-free);
(A2) X is irreducible in (0, ∞);
(A3) P_0(H(y) < ∞) > 0 for every y > 0.
We notice that, since the deterministic flow is strictly increasing and the jumps do not accumulate, we also have that
(A4) return times are almost surely strictly positive in (0, ∞), i.e., P_x(H(x) > 0) = 1 for all x > 0.
Remark 1. We stress that the properties (A1)-(A4) hold when X has generator given by (15) under the assumptions outlined in the Introduction, as we will show in Section 3.
Let g : (0, ∞) → (0, ∞) be a measurable and bounded function and define the random functional

E^g_t := exp ( ∫_0^t g(X_s) ds ),  t ≥ 0.

We aim to construct some martingales and a family of supermartingales connected to the process X and the functional E^g_t.

A Laplace transform
We start by defining the Laplace transform

L_{x,y}(q) := E_x [ e^{−qH(y)} E^g_{H(y)} 1_{H(y)<∞} ],  q ∈ R.

First of all, (A2) and (A3) imply that P_x(H(y) < ∞) > 0 for all x ≥ 0 and y > 0. Furthermore, on the event {H(y) < ∞}, the functional E^g_{H(y)} is strictly positive, and so L_{x,y}(q) ∈ (0, ∞]. Straightforward arguments show that the function L_{x,y} : R → (0, ∞] is non-increasing, convex, and right-continuous at the boundary points of its domain (monotone convergence). Moreover, for every q > ‖g‖_∞, we have e^{−qt} E^g_t ≤ 1 and, a fortiori, L_{x,y}(q) < 1. More precisely, L_{x,y}(q) ≤ E_x [ e^{−(q−‖g‖_∞)H(y)} 1_{H(y)<∞} ] < 1. Thanks to this property, we can fix x_0 > 0 arbitrarily and define

λ := inf { q ∈ R : L_{x_0,x_0}(q) < 1 }.

The definition of λ does not depend on the choice of x_0 (see Proposition 3.1 in [BW18]). We state some elementary bounds for λ in terms of the function g. The proof is similar to that of Proposition 3.4 in [BW18] and details are left to the reader.
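The basic properties of q ↦ L_{x,y}(q) can be visualised on a toy example that is not the process of the paper: take H(y) ~ Exp(µ) and a constant g, so that E^g_{H(y)} = e^{gH(y)} and L(q) = µ/(µ + q − g) for q > g − µ. Then L(g) = 1, L is decreasing and convex, and the Malthus exponent is λ = g. The parameters below are hypothetical.

```python
import math
import random

MU, G = 2.0, 0.7   # hypothetical parameters: H ~ Exp(MU), constant g = G

def L_mc(q, n=100_000, seed=0):
    """Monte Carlo estimate of L(q) = E[exp(-q*H) * exp(G*H)] for H ~ Exp(MU)."""
    rng = random.Random(seed)
    return sum(math.exp(-(q - G) * rng.expovariate(MU)) for _ in range(n)) / n

def L_exact(q):
    return MU / (MU + q - G)   # valid for q > G - MU
```

The Monte Carlo curve reproduces L(λ) = 1 at λ = G and decreases below 1 for larger q, exactly the picture behind the definition of the Malthus exponent.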
(i) It always holds that λ ≤ ‖g‖_∞.
(ii) If X is recurrent, then λ ≥ inf_{x>0} g(x) and, in particular, λ ≥ 0. The strict inequality holds except possibly when g is constant.
Due to right-continuity, it always holds that L_{x_0,x_0}(λ) ≤ 1. We consider now the following assumption

L_{x_0,x_0}(λ) = 1,   (28)

and the stronger one (29). Notice that condition (29) implies not only (28), but also (30).

[Figure: (a) a case in which (29) holds, implying also (28) and (30); (b) a case in which condition (28) does not hold, as L_{x_0,x_0} < 1 everywhere.]

A remarkable martingale
Assume that (28) holds. Fix x_0 > 0 and define the function

h(x) := L_{x,x_0}(λ),  x ≥ 0.

Lemma 2.2. Assume (A1)-(A4) and (28). The function h is strictly positive and continuous on [0, ∞).

Proof. The function h is strictly positive in [0, ∞) thanks to (A2) and (A3). The argument in Corollary 4.3 in [BW18] ensures that h is continuous in (0, ∞). Finally, we show that h is also continuous at 0. Indeed, the Markov property entails that, for all ε < x_0,

h(0) = E_0 [ e^{−λH(ε)} E^g_{H(ε)} 1_{H(ε)<∞} ] h(ε).

Next, we observe that lim_{ε→0+} e^{−λH(ε)} E^g_{H(ε)} 1_{H(ε)<∞} = 1, and so, by Fatou's Lemma, lim inf_{ε→0+} E_0 [ e^{−λH(ε)} E^g_{H(ε)} 1_{H(ε)<∞} ] ≥ 1. Let Λ_ε be the event that the deterministic flow starting from 0 reaches ε without making any jump. Since the flow is strictly increasing, the jumps are only negative and the total jump rate is bounded, we have that p_ε := P_0(Λ_ε) ↑ 1 when ε → 0+. On this event, the hitting time is deterministic, say H(ε) = s(0, ε), with s(0, ε) ↓ 0 when ε → 0+. We introduce now a geometric random variable G(ε) with parameter p_ε. The number of jumps of X before reaching ε is stochastically dominated by G(ε). Hence, the continuity of h at 0 reduces to checking that G(ε) has a finite exponential moment with exponent δ. We know that this holds as soon as e^δ (1 − p_ε) < 1, which is clearly true for ε small enough. In this case, the expectation above converges to 1, proving the claim.
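The last step of the proof uses a standard fact: a geometric random variable has finite exponential moments of small enough order. Concretely, if G counts the failures before the first success of Bernoulli(p) trials, then E[e^{δG}] = p/(1 − (1 − p)e^δ) whenever (1 − p)e^δ < 1. A quick sanity check (with hypothetical values of p and δ, not tied to the paper's p_ε):

```python
import math
import random

def geom_exp_moment_mc(p, delta, n=200_000, seed=3):
    """Monte Carlo E[exp(delta * G)], G = number of failures before a success."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        g = 0
        while rng.random() >= p:   # a failure occurred
            g += 1
        total += math.exp(delta * g)
    return total / n

def geom_exp_moment_exact(p, delta):
    assert (1.0 - p) * math.exp(delta) < 1.0, "exponential moment is infinite"
    return p / (1.0 - (1.0 - p) * math.exp(delta))
```

The condition (1 − p)e^δ < 1 is exactly the "e^δ(1 − p_ε) < 1" requirement in the proof, satisfied once p_ε is close enough to 1.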
Lemma 2.3. Assume (A1)-(A4). If (28) holds, the process

M_t := e^{−λt} E^g_t h(X_t),  t ≥ 0,

is a P_x-martingale for every x ≥ 0, with respect to the natural filtration of X.

Proof. For x > 0, we apply Theorem 4.4 in [BW18]. To show that it is also a martingale with respect to P_0, we define the random variables R_0 = 0 < R_1 := H(x_0) < R_2 < . . . to be the return times to x_0, where x_0 is the point that appears in the definition of h. The stopped process (M_{t∧R_n})_{t≥0} is then a martingale and, with an argument similar to the one in the proof of Theorem 4.4 in [BW18], we can take the limit n → ∞ and obtain the statement.
The next step consists in using the martingale M to "tilt" the probability measure P_x. In other words, we introduce the probability measure P̃_x (and corresponding expectation Ẽ_x) defined by

dP̃_x |_{F_t} := (M_t / h(x)) dP_x |_{F_t},  t ≥ 0.

Since P_x is a probability measure on the space of càdlàg paths, the same holds for P̃_x. Let Y = (Y_t)_{t≥0} be the process with distribution P̃_x. The finite-dimensional distributions of Y are thus given in the following way. Let 0 ≤ t_1 < · · · < t_n ≤ t and F : R^n → R_+. Then,

Ẽ_x [ F(Y_{t_1}, . . . , Y_{t_n}) ] = (1/h(x)) E_x [ M_t F(X_{t_1}, . . . , X_{t_n}) ].

Lemma 2.4. Assume (A1)-(A4) and (28). Then the following hold.
(i) Y is a Markov process, recurrent in (0, ∞). Moreover, denoting by H_Y(x) = inf{t > 0 : Y_t = x} the first hitting time of x ≥ 0, one has that, for all x > 0, Ẽ_x[H_Y(x)] < ∞.
(ii) If the stronger (29) holds, then Y is exponentially recurrent, which means that there exists ε > 0 such that Ẽ_x[exp(εH_Y(x))] < ∞.
Proof. (i) The process Y is Markov because M is multiplicative and the (strong) Markov property is preserved by transformations based on multiplicative functionals. We denote by P̃_x the law of Y started at x ≥ 0 and by Ẽ_x the corresponding expectation. To show that Y is recurrent, we observe that, for x > 0, a chain of equalities and inequalities yields P̃_x(H_Y(x) < ∞) = 1: the second inequality comes from the definition of the probability tilting, the third from the optional sampling theorem and the last from the monotone convergence theorem. To show that Y is actually positive recurrent, we note that, for x > 0, Ẽ_x[H_Y(x)] is finite, which proves the assertion.
(ii) With computations similar to the ones above, one can prove that, since H_Y(x) < ∞ a.s., the quantity Ẽ_x[exp(εH_Y(x))] is finite for ε small enough thanks to condition (29), when x = x_0. The case of a general x > 0 follows easily.

A family of supermartingales
From the previous lemma, it is clear that the condition L_{x_0,x_0}(λ) = 1 is necessary to construct the martingale M. The next result shows that, when q is such that L_{x_0,x_0}(q) < 1, i.e., q ≥ λ, we can associate to X a family of supermartingales. Fix x_0 > 0 and q ≥ λ and define the function

h_q(x) := L_{x,x_0}(q),  x ≥ 0.

Adapting the proof of Theorem 4.4 in [BW18], we have the following lemma.
Lemma 2.5. Assume (A1)-(A4) and let q ≥ λ. The process

S^{(q)}_t := e^{−qt} E^g_t h_q(X_t),  t ≥ 0,

is a P_x-supermartingale for every x ≥ 0 with respect to the natural filtration (F_t)_{t≥0} of X.
As before, we use the supermartingale S^{(q)} to "tilt" the probability measure P_x and introduce a family of possibly defective (i.e., possibly with a finite lifetime ζ) Markov processes Y^{(q)}. More precisely, the distribution of the Markov process Y^{(q)}, that we denote by P^{(q)}_x, is defined in the following way. For t ≥ 0 and every non-negative functional F defined on Skorokhod's space D_{[0,t]} of càdlàg paths ω : [0, t] → (0, ∞),

E^{(q)}_x [ F((Y^{(q)}_s)_{0≤s≤t}) 1_{t<ζ} ] = (1/h_q(x)) E_x [ S^{(q)}_t F((X_s)_{0≤s≤t}) ].

Lemma 2.6. Let r ≥ λ and assume that Y^{(r)} is positive recurrent. Then S^{(r)} is a martingale, and L_{x,x}(r) = 1 for every x > 0.
Proof. If Y^{(r)} is positive recurrent, it cannot be defective, i.e., P^{(r)}_x(ζ = ∞) = 1. This is equivalent to saying that E^{(r)}_x[S^{(r)}_t] = h_r(x), which implies that S^{(r)} is a martingale for every x > 0. Since Y^{(r)} is point-recurrent, we have that L_{x,x}(r) = 1 for every x > 0, where the equalities follow from the optional sampling theorem and the monotone convergence theorem.

The process killed when exiting compact sets
In this paragraph, we focus on the behaviour of the process X killed when exiting compact sets. A necessary preamble for the rest of the analysis is irreducibility. In fact, even though X is irreducible in the positive half-line by (A2), it may happen that, for some 0 < a < b, the process is no longer irreducible when killed upon exiting [a, b].
We define the first exit time

σ(a, b) := inf { t ≥ 0 : X_t ∉ [a, b] },

and we call an interval (a, b) good if the process killed at time σ(a, b) remains irreducible, i.e., P_x(H(y) < σ(a, b)) > 0 for all x, y ∈ (a, b).
The argument of Lemma 3.1 in [Ber19] shows the following.
Lemma 2.7. Assume that (A1)-(A4) hold. Then, for every ε ∈ (0, 1), there exists a good interval (a, b) with a < ε and b > 1/ε.

The next step consists in applying the Krein-Rutman theorem. We consider the Banach space C_0([a, b)) of continuous functions f : [a, b] → R with f(b) = 0, endowed with the supremum norm ‖f‖ = sup_{x∈[a,b)} |f(x)|. We impose f(b) = 0 because the process started at b leaves [a, b] immediately. We do not assume yet that (a, b) is a good interval, but this will be crucial at a later point.
Recalling that ‖g‖_∞ < ∞, we define q_g := 1 + ‖g‖_∞, so that E^g_t e^{−q_g t} ≤ e^{−t} for all t ≥ 0.
Then, we introduce the operator

U_{a,b} f(x) := E_x [ ∫_0^{σ(a,b)} e^{−q_g t} E^g_t f(X_t) dt ],

which is defined for every bounded measurable function f : [a, b] → R. The operator U_{a,b} maps C_0([a, b)) into itself. The family of functions {U_{a,b} f : ‖f‖ ≤ 1} is equicontinuous; the proof is similar to that of Lemma 3.2 in [Ber19] and we leave the details to the reader. Thus U_{a,b} satisfies the hypotheses of the Krein-Rutman theorem (see for example the requirements of Theorem 9.5 in Deimling [Dei85]), which entails the existence of principal eigenelements for U_{a,b} (Lemma 2.9).

Characterisation of the Malthusian behaviour
In this section we prove some first important results. The first goal is to establish existence and uniqueness of the growth-fragmentation semigroup and to derive the Feynman-Kac representation (17). The second one is to prove Theorem 3.3, which gives necessary and sufficient conditions for the Malthusian behaviour (9) in terms of the Laplace transform L and the Malthus exponent λ, defined respectively in (19) and (20). We start by introducing some notation. For x ≥ 0 and t ≥ 0, we denote by φ(t, x) the flow given by the solution to the differential equation

∂_t φ(t, x) = τ(φ(t, x)),  φ(0, x) = x,   (34)

which exists and is unique for all x ≥ 0 thanks to (2). For 0 ≤ x < y, we denote by s(x, y) the time that φ needs to travel from x to y, namely φ(s(x, y), x) = y.
Note that s(·, ·) is decreasing in the first variable and increasing in the second one.
Remark 2. There is the explicit expression

s(x, y) = ∫_x^y dz/τ(z).
Comparing this with (12), we see that T = s(0, 1). When (13) holds, the solution with initial condition x(0) = 0 can enter from 0 in finite time. On the contrary, when (13) fails (for example in the cases analysed in [Ber19, BW18, Cav19]), the solution to (34) with initial condition x(0) = 0 is φ(t, 0) = 0 for all t ≥ 0. On the other hand, (14) ensures that the solution cannot explode in finite time.
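The relation between the flow φ and the travel times s(·,·) is easy to check numerically. The sketch below (with the hypothetical choice τ(x) = √x, which satisfies both (13) and (14)) computes s(x, y) by quadrature, integrates (34) by RK4, and verifies φ(s(x, y), x) = y.

```python
import math

def tau(x):
    return math.sqrt(x)    # hypothetical growth rate; satisfies (13): s(0, 1) = 2

def s(x, y, n=50_000):
    """Travel time s(x, y), the integral of 1/tau(z) over [x, y] (midpoint rule)."""
    h = (y - x) / n
    return sum(h / tau(x + (i + 0.5) * h) for i in range(n))

def flow(t, x, steps=50_000):
    """phi(t, x): RK4 solution of d(phi)/dt = tau(phi), phi(0, x) = x."""
    h = t / steps
    for _ in range(steps):
        k1 = tau(x); k2 = tau(x + 0.5 * h * k1)
        k3 = tau(x + 0.5 * h * k2); k4 = tau(x + h * k3)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x
```

For τ(x) = √x the closed forms s(x, y) = 2(√y − √x) and φ(t, x) = (√x + t/2)² make the check exact; in particular s(0, 1) = 2 < ∞ is an instance of condition (13).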

Existence and uniqueness of the semigroup
The goal of this subsection is to prove the existence and uniqueness of a semigroup generated by A. Let C_0([0, ∞)) denote the Banach space of continuous functions f : [0, ∞) → R vanishing at infinity, endowed with the supremum norm ‖·‖_∞. We view the growth-fragmentation operator A, defined in (8), as an operator on C_0([0, ∞)). Its domain D(A) contains the space of functions f ∈ C_0([0, ∞)) such that τ f′ ∈ C_0([0, ∞)).
We also make the following technical assumption on the fragmentation kernel:

lim_{x→∞} B(x) ∫_E k(x, y) dy = 0 for all compact sets E ⊂ [0, ∞),   (35)

i.e., the rate at which a particle with size x > 0 produces particles whose sizes are in E tends to 0 as x → ∞.

Lemma 3.1. The operator A generates a unique positive strongly continuous semigroup (T_t)_{t≥0} on C_0([0, ∞)).

Proof. From (5), for x ≥ 0, A can be written as a bounded perturbation of a contraction generator. We introduce the operator Ã f := A f − ‖B(N − 1)‖_∞ f. Note that, if one shows that Ã generates a unique strongly continuous contraction semigroup, the statement follows. Conversely, let (T_t)_{t≥0} be a positive strongly continuous semigroup on C_0([0, ∞)) with infinitesimal generator A. Then T̃_t := exp(−t‖B(N − 1)‖_∞) T_t defines a strongly continuous contraction semigroup with infinitesimal generator Ã. From existence and uniqueness of the semigroup generated by Ã, we will get the uniqueness of (T_t)_{t≥0}.
To show that the semigroup (T_t)_{t≥0} exists, we construct a Markov process, say (Z_t)_{t≥0}, having generator Ã on D(Ã). The evolution of Z starting from x ≥ 0 is the following. Consider the survival functions F(t, x) (for the fragmentation clock) and G(t, x) (for the killing clock). Now select two independent random variables K_1 and S_1 such that P(K_1 > t) = F(t, x) and P(S_1 > t) = G(t, x). Consider also a random variable P_1, independent from the others, with distribution

k(φ(K_1, x), y) / N(φ(K_1, x)) dy.
Let T_1 = K_1 ∧ S_1. On the event {T_1 = S_1}, the process is killed. On the event {T_1 = K_1}, the trajectory of Z starting from x ≥ 0 is given by Z_t = φ(t, x) for 0 ≤ t < T_1, and then the dynamics starts afresh from P_1. More precisely, we select two independent random variables K_2 and S_2 such that P(K_2 > t) = F(t, P_1) and P(S_2 > t) = G(t, P_1). Consider also a random variable P_2, independent from the others, with distribution

k(φ(K_2, P_1), y) / N(φ(K_2, P_1)) dy.
Then we define T_2 = K_2 ∧ S_2 and, again, if T_2 = S_2, the process is killed; otherwise, the dynamics continues in a similar way. Let m = inf{ i : T_i = S_i }. Then we have a piecewise-deterministic trajectory Z_t with jump times T_1, T_1 + T_2, . . ., killed at time K = Σ_{i=1}^m T_i. By construction, Z is a Markov process and it has generator Ã on D(Ã) (see for example [MD86]). Uniqueness of the semigroup follows by applying Theorem 4.1 in Chapter 4 of [EK86], with A′ being Ã. Clearly, the set D(A′) = C_0([0, ∞)) is separating and we just need to verify that R(λ − A′) = C_0([0, ∞)). This follows from Lemma 4.2 and Theorem 4.3 in Chapter 4 of [EK86]. We have thus proved that there exists a unique positive strongly continuous contraction semigroup with infinitesimal generator Ã, which implies the statement.
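The construction of Z can be sketched in a few lines. The coefficients below (constant jump and killing rates) are hypothetical stand-ins chosen so that the law of the killing time is known: by memorylessness, redrawing the killing clock S_i after each jump still yields an overall exponential killing time, so P(Z alive at t) = e^{−kill_rate·t}.

```python
import math
import random

def alive_at(t, rng, jump_rate=3.0, kill_rate=0.8):
    """Run the competition of clocks defining Z (constant rates) up to time t;
    return True iff the path has not been killed before the horizon."""
    s = 0.0
    while True:
        K = rng.expovariate(jump_rate)   # next fragmentation clock K_i
        S = rng.expovariate(kill_rate)   # killing clock S_i
        if s + min(K, S) >= t:
            return True                  # horizon t reached first
        if S < K:
            return False                 # T_i = S_i: the process is killed
        s += K                           # T_i = K_i: jump, then start afresh

def survival_mc(t, n=100_000, seed=7):
    rng = random.Random(seed)
    return sum(alive_at(t, rng) for _ in range(n)) / n
```

The Monte Carlo survival probability matches e^{−0.8 t}, confirming that the clock-competition construction produces the intended killed dynamics.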

A Feynman-Kac representation and Malthusian behaviour
In this subsection, we establish the Feynman-Kac representation (17). First, the same argument used in the proof of Lemma 3.1 shows that the operator G defined in (15), with domain D(G) = D(A), generates a strongly continuous contraction semigroup on C_0([0, ∞)) and, hence, it is the generator of a conservative Feller Markov process X on [0, ∞). From the expression of G, we see that X belongs to the class of piecewise-deterministic Markov processes introduced by Davis [Dav84]. Under P_x, any path t → X_t follows the deterministic flow φ(t, x) defined in (34) up to a random time at which it makes its first (random) jump. When the jump occurs, the position after it, say y, is chosen according to (36) and the dynamics starts afresh from y. Note that X has only negative jumps, as is clear from the definition of the jump kernel (36). Moreover, (3) and (6) ensure that the jumps of X never accumulate. Note further that, by (14), the process cannot reach ∞ in finite time. On the contrary, by (13), 0 is an entrance boundary for X. As stated in the Introduction, we assume that X is irreducible in (0, ∞). Since τ is positive and X has only negative jumps, this is equivalent to (16). For the proof, we refer to Lemma 3.1 in [Ber19].
Remark 3. We stress that X is not irreducible in [0, ∞). In fact, the process started at 0 can hit any target point y > 0 with positive probability, but the process started at x > 0 does not hit 0 in finite time: the probability that X hits 0 by a jump is zero, the jumps never accumulate, and condition (13) holds.
To sum up, X satisfies the properties (A1)-(A4). Moreover, B(x)(N(x) − 1) is continuous and bounded, so we can rely on the results presented in Section 2, with the choice g(x) = B(x)(N(x) − 1). Notice that, in this case, the functional (E^g_t)_{t≥0} is exactly the functional (E_t)_{t≥0} defined in (18). The next result is the Feynman-Kac representation of the semigroup.
Lemma 3.2. The growth-fragmentation semigroup can be expressed in the form (17).

Proof. Since G is the generator of X, from Dynkin's formula, for every f ∈ D(G), the process

M^f_t := f(X_t) − ∫_0^t G f(X_s) ds,  t ≥ 0,

is a P_x-martingale for every x ≥ 0. In addition, (E_t)_{t≥0} is a stochastic process with bounded variation and dE_t = B(X_t)(N(X_t) − 1) E_t dt. Thus, it follows from the integration by parts formula of stochastic calculus that

E_t f(X_t) − ∫_0^t E_s A f(X_s) ds,  t ≥ 0,

is a local martingale. Since this local martingale remains bounded on any finite time interval, it is a true martingale (see Theorem I.51 in [Pro05]). Taking expectations and using Fubini's theorem, we conclude that

E_x[E_t f(X_t)] = f(x) + ∫_0^t E_x[E_s A f(X_s)] ds,

which means that A is the generator of the semigroup f ↦ E_x[E_t f(X_t)]. By uniqueness, we get the Feynman-Kac representation.
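The integration-by-parts step can be spelled out as follows (a sketch using the relation A = G + B(N − 1) between the two generators; M^f denotes the Dynkin martingale of f):

```latex
d\bigl(\mathcal{E}_t f(X_t)\bigr)
  = \mathcal{E}_t\,df(X_t) + f(X_t)\,d\mathcal{E}_t
  = \mathcal{E}_t\,dM^f_t
    + \mathcal{E}_t\bigl(\mathcal{G}f(X_t) + B(X_t)(N(X_t)-1)f(X_t)\bigr)\,dt
  = \mathcal{E}_t\,dM^f_t + \mathcal{E}_t\,\mathcal{A}f(X_t)\,dt ,
```

where no bracket term appears because (E_t)_{t≥0} is continuous and of bounded variation; integrating, E_t f(X_t) − ∫_0^t E_s A f(X_s) ds differs from a stochastic integral against the martingale M^f only, whence the local-martingale property.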
We are now ready to state an important result concerning the Malthusian behaviour. To this end, we recall the definition of the function L x,y and the Malthus exponent λ introduced respectively in (19) and (20). The following theorem provides necessary and sufficient conditions in terms of the function L x,y for the convergence of e −λt T t to an asymptotic profile. Moreover, it gives an explicit expression of the latter.
Fix x_0 > 0 and let h(x) be as in (22). Under assumption (37), Lemma 2.3 ensures that the process M is a martingale under P_x, x ≥ 0. Thus, we can use it to tilt the probability measures associated with X in order to obtain a recurrent Markov process Y = (Y_t)_{t≥0}. As in Section 2, we call P̃_x (resp. Ẽ_x) the law (resp. the expectation) of the process conditioned to start at X_0 = x, x ≥ 0. Since P̃ is absolutely continuous with respect to P, Y inherits the properties (A1)-(A4).

Proof of Theorem 3.3(i).
Combining (17) and (38) expresses e^{−λt} T_t in terms of the tilted process Y. Since (37) holds, Lemma 2.4 shows that Y is positive recurrent. By standard results, the stationary measure of a recurrent Markov process is given by its occupation measure, normalized to be a probability measure. Moreover, since Y is piecewise-deterministic and follows the deterministic flow dy(t) = τ(y(t)) dt between consecutive jumps, it can be proved that its occupation measure is absolutely continuous with respect to the Lebesgue measure, with a locally integrable and everywhere positive density given by q(x_0, y) / (τ(y) q(y, x_0)), where q(x, y) := P̃_x(H_Y(y) < H_Y(x)). For the proof, we refer to Lemma 5.2 in [BW18]. Combining (31), (39) and (40), we conclude that the convergence in (9) holds, where ν(dx) is precisely the probability measure defined in (23).
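The fact that the stationary law is the normalized occupation measure over excursions can be probed numerically on a toy example. The sketch below (illustrative coefficients only: τ ≡ 1, B ≡ 1, uniform multiplicative jumps, none taken from the text) estimates the stationary probability of an interval [c, d] from excursions away from a reference point x_0, and checks that the answer does not depend on the choice of x_0 — consistent with the fact that the normalization of the density q(x_0, y)/(τ(y) q(y, x_0)) removes the dependence on x_0.

```python
import random

def occupation_fraction(x0, c=1.0, d=2.0, B=1.0, n_exc=30000, seed=2):
    """Estimate the stationary probability of [c, d] for a toy PDMP
    (growth speed 1, jump rate B, multiplicative jumps y = U * x with
    U uniform on (0, 1)) as the normalized occupation measure over
    excursions away from the reference point x0.  The process is
    upward skip-free, so it returns to x0 exactly when an upward
    segment crosses the level x0."""
    rng = random.Random(seed)
    tot_t = tot_in = 0.0
    for _ in range(n_exc):
        x, t, t_in = x0, 0.0, 0.0
        while True:
            w = rng.expovariate(B)
            if x < x0 and x + w >= x0:
                # the flow reaches x0 before the next jump: excursion ends
                t_in += max(0.0, min(x0, d) - max(x, c))
                t += x0 - x
                break
            # time spent in [c, d] along the upward segment (speed 1)
            t_in += max(0.0, min(x + w, d) - max(x, c))
            t += w
            x = rng.random() * (x + w)   # negative multiplicative jump
        tot_t += t
        tot_in += t_in
    return tot_in / tot_t

# Two different reference points must give the same stationary probability.
frac1 = occupation_fraction(1.0, seed=2)
frac2 = occupation_fraction(0.5, seed=3)
```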
Remark 4. When (29) holds, Lemma 2.4 shows that Y is exponentially recurrent. In this case, Kendall's renewal theorem ensures that the convergence takes place exponentially fast.
The second part of Theorem 3.3 states that (37) is not only sufficient for the Malthusian behaviour (9) but also necessary, and that, whenever (9) holds, the leading eigenvalue ρ coincides with the Malthus exponent λ defined in (20). We actually prove a stronger result: under its assumptions, α = λ, (37) holds, and thus the Malthusian behaviour (9) also holds, with h and ν defined as in (22) and (23).
The argument for proving this result is in the same vein as the proof of (i), with the difference that the role of the martingale M is now played by a family of supermartingales. Arguing by contradiction with Proposition 3.3 in [BW18], we obtain the following result. We refer to the proof of Lemma 2.4 in [Ber19] for a more extensive argument.
Hence, we can refer to (32) and consider the function h_α defined there. As in (33), we can define the process S^{(α)}, which is a supermartingale with respect to P_x, x ≥ 0, thanks to Lemma 2.5. In the same way as in Section 2, we can use S^{(α)}_t to introduce a possibly defective càdlàg Markov process Y^{(α)}, with law P^{(α)}. Since the distribution of (Y^{(α)}_s)_{0≤s≤t}, conditionally on {t < ζ}, is absolutely continuous with respect to that of (X_s)_{0≤s≤t} under P_x, the process Y^{(α)} is irreducible on (0, ∞) and 0 is an entrance boundary. In the following we denote Y := Y^{(α)}. In the next lemma, we show that the process Y is indeed positive recurrent. The proof follows by adapting those of Lemma 2.4 and Corollary 2.5 in [Ber19] and is left to the reader.
Lemma 3.6. Suppose that assumptions (i) and (ii) hold. Then, the process Y is point-recurrent and positive recurrent in (0, ∞), where H_Y(y) := inf{t ∈ (0, ζ) : Y_t = y} is the hitting time of y > 0 by the process Y, with the convention inf ∅ = ∞.
Finally, we are ready to prove the second part of Theorem 3.3.
Proof of Theorem 3.3(ii). By Lemma 3.6, Y cannot be defective, i.e., P^{(α)}_x(ζ = ∞) = 1. This is equivalent to saying that the supermartingale S^{(α)} is a true martingale. Thanks to Lemma 2.6, we get that L_{x,x}(α) = 1 for every x ≥ 0, i.e., condition (28) holds. From this, we see that the function h_α coincides with the function h defined in the proof of Theorem 3.3(i), the martingale S^{(α)} coincides with M, and the process Y is the same as the one defined in the previous section. This implies, since Y is recurrent, that condition (30) must be satisfied, proving the assertion.

Proof of the main results
This section is devoted to the proofs of Theorem 1.1 and Theorem 1.2.

Proof of Theorem 1.1
Remark 4 shows that (29) is a sufficient condition for the Malthusian behaviour with exponential convergence (10). Thus, the goal of this section is to show that (21) implies (29), i.e., that (29) holds for some q ∈ R and x ∈ (0, ∞). This will be proven by decomposing the excursions of X away from its (properly chosen) starting point at certain exit times from (properly chosen) compact sets. First of all, we fix a compact interval [a, b], with 0 < a < b, and, following Section 2, study the process killed when exiting [a, b]. The second step consists in fixing the upper boundary point b large enough and letting the lower boundary point a go to 0+. In these first two steps, the expectations of suitable functionals at exit times are computed with the help of specific martingales and supermartingales. Finally, the statement of the theorem follows by putting the previous results together.
We start by fixing a good interval (a, b) and, following Section 2, we define q_{BN} := 1 + ‖B(N − 1)‖_∞ and introduce the associated operator on C_0([a, b)). By Proposition 2.8, we have the spectral radius r(a, b) together with the eigenelements h_{a,b} and ν_{a,b}.

Lemma 4.1. (i) For all x, y ∈ (a, b), there is the identity

Proof. (i) It follows from Lemma 2.3, using the argument in the proof of Proposition 3.5 in [Ber19].
We now fix the upper boundary point b and let the lower boundary point a tend to 0. Note that, since X is upward skip-free, condition (41) can be arranged, where p_{a′} is the probability that the process started from 0 reaches a′ without making any jump and s(0, a′) is defined as in Remark 2. As shown in Lemma 2.2, this condition is satisfied for a′ small enough since, as a′ tends to 0, s(0, a′) tends to 0 while − log(1 − p_{a′}) tends to +∞.
All we need to check is that Ψ(r) < ∞. In fact, if Ψ(r) ≤ 1, we choose γ = r; otherwise, the equation Ψ(q) = 1 has a unique solution γ ∈ (r, λ) by convexity. On the event {H(a′) < H(b′′)}, the process remains in [a′, b′′] until it makes a jump below a′ at time σ(a′, b′′) and then stays in (0, a′) until H(a′), when it hits a′ for the first time.
From the Markov property, we can decompose the excursion away from a′ at σ(a′, b′′). The argument in the proof of Lemma 2.2 shows that, thanks to the proper choice of a′ made in (41), there exists C > 0 such that the corresponding bound holds. Observe that, on the event {H(a′) < H(b′′)}, there exists an instant t ≤ σ(a′, b′′) with X_t < a′ if and only if the process X stays in [a′, b′′] during the whole time interval [0, t) and exits from [a′, b′′] at time t by jumping below a′; in other words, t = σ(a′, b′′) and H(a′) < H(b′′). Moreover, the predictable compensator of the jump process of X is B(X_{t−}) k(X_{t−}, y) dy dt. From this, we deduce the desired estimate, where ‖BN‖_∞ is the maximal jump rate. To conclude, we notice that the last equality follows from the fact that M_{a,b} defined above is a martingale. Since h_{a,b} is strictly positive on (a, b), the second factor is bounded and, as r > ρ_{a,b}, we easily get the claimed bound, which proves the assertion.
As a corollary, we get the following result.
Corollary 4.3. Under the assumptions of Proposition 4.2, for 0 < x < b′′, we consider the function g defined above. The associated process is then a P_x-supermartingale for every 0 ≤ x < b′′.
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1. We pick two good intervals (a, b) and (a′, b′), with 0 < a < a′ < b′ < b sufficiently large, such that condition (41) is satisfied; this is indeed possible thanks to condition (21). Next, we choose q such that max{ρ_{a,b}, γ} < q < λ. We prove that (29) holds with x = b′. We let X start from b′ and split the excursions at the times σ(b′, ∞) and σ(a′, ∞). Clearly, P_{b′}-almost surely, σ(b′, ∞) ≤ σ(a′, ∞). By (42), the required bound holds until time σ(b′, ∞), and so, from the strong Markov property, the assertion follows from the corresponding estimate. To conclude the case x ≤ a′, we thus need to show that the right-hand side is finite. To this end, we choose b′′ ∈ (b′, b) close enough to b to have P_{b′}(H(a′) < H(b′′)) > 0; as usual, this can be done by irreducibility arguments. Recalling the notation of Corollary 4.3, Proposition 4.2 ensures that g(a′) ∈ (0, 1] and g(b′) > 0, and the finiteness follows. Finally, we consider the case x ∈ (a′, b′) and distinguish whether the process exits from [a′, b′] through the upper or the lower boundary. In the first case, the required bound is ensured by Lemma 4.1; in the second, we argue in a similar way as in Proposition 4.2 and, using the Markov property at the stopping time σ(a′, b′) together with (44), we conclude. Combining (44), (45) and (46), we get (43), and thus Theorem 1.1 is established.

Proof of Theorem 1.2
We now turn to the proof of Theorem 1.2. As stated in the Introduction, (21) reduces to the more explicit criterion (24) when X is recurrent. So, we need to find criteria in terms of the growth and fragmentation rates that ensure recurrence of X when the fragmentation rate is self-similar. Since X is irreducible, its trajectories between jumps are increasing, and its jumps are only negative, point-recurrence can only fail when the paths converge almost surely to 0 or to ∞. To exclude these cases, we resort to Foster-Lyapunov criteria; we refer to [Hai16] or [MT09] for a more comprehensive account. In brief, one wishes to find a smooth convex function V : (0, ∞) → (0, ∞) satisfying (47) for some a, b > 0 and 0 < x_0 < x_∞, and such that GV(x) ≤ 0 for all x < x_0 and x > x_∞.
This implies that V(X_{t∧H(x_0)}) and V(X_{t∧σ(x_∞,∞)}) are P_x-supermartingales, respectively for all 0 < x < x_0 and all x ≥ x_∞, which suffices to prevent X from converging either to 0 or to ∞. When k is self-similar, i.e. it has the form (25), the generator takes an explicit form; evaluating GV for 0 < x ≤ x_0 and for x ≥ x_∞ leads to the following conditions.

This means that if (27) and (48) hold, then X is recurrent. Condition (48) is directly verified under our assumptions, since (3) and (13) imply that lim_{x→0} τ(x)/(xB(x)) = ∞, and thus (26) and (27) are enough to ensure point recurrence of X. We already argued that condition (24) ensures (21) when X is recurrent, and so the claim follows by applying Theorem 1.1.
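The drift condition GV(x) ≤ 0 outside a compact set can be probed numerically for a candidate Lyapunov function. The sketch below is an illustration only: it assumes the uniform self-similar kernel k(x, y) = N/x on (0, x), a growth rate τ(x) = 1 + √x (so that τ(x)/x → ∞ at 0 and τ(x)/x → 0 at ∞), and the convex candidate V(x) = x² + x^{−1/2}; none of these choices are taken from the text.

```python
import math

def GV(x, V, tau, B=1.0, N=2.0, n=20000):
    """Numerically evaluate the generator applied to V for a growth rate
    tau and the (assumed, illustrative) self-similar uniform kernel
    k(x, y) = N / x on (0, x):
        GV(x) = tau(x) V'(x) + B * int_0^x (V(y) - V(x)) k(x, y) dy.
    V'(x) is approximated by a central difference and the integral by
    the midpoint rule after the substitution y = x * u."""
    h = 1e-6 * x
    dV = (V(x + h) - V(x - h)) / (2.0 * h)
    jump = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        jump += V(x * u) - V(x)
    jump *= B * N / n
    return tau(x) * dV + jump

# Candidate Lyapunov function and coefficients (illustrative choices):
V = lambda x: x ** 2 + x ** -0.5        # convex, blows up at 0 and at infinity
tau = lambda x: 1.0 + math.sqrt(x)      # tau(x)/x -> infinity at 0, -> 0 at infinity

small = [GV(x, V, tau) for x in (0.01, 0.05)]   # drift near 0
large = [GV(x, V, tau) for x in (10.0, 50.0)]   # drift near infinity
```

Here the negative drift near 0 comes from the transport term τ(x)V′(x) (V blows up at 0 and the flow pushes upward), while near infinity it comes from the fragmentation term, exactly as in the two regimes of the proof.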

More examples
Here we deal with the case in which B(x) = B > 0 and N(x) = N > 0 are constant. The generator of the process X is then Gf(x) = τ(x) f′(x) + B ∫_0^x (f(y) − f(x)) k(x, y) dy, with ∫_0^x k(x, y) dy = N for all x ≥ 0. The Feynman-Kac formula gives T_t f(x) = e^{B(N−1)t} E_x(f(X_t)). If X is recurrent, then P_x(H(x) < ∞) = 1 and (28) holds with λ = B(N − 1). In this case, we cannot rely on criterion (24) to prove exponential convergence, as lim_{x→∞} B(x)(N(x) − 1) = B(N − 1) = λ. However, if we can find sufficient conditions for X to be positive recurrent, then it has a (unique) stationary distribution ν and the convergence lim_{t→∞} e^{−B(N−1)t} T_t f(x) = ⟨ν, f⟩ holds for all continuous functions with compact support. If X is further exponentially ergodic, then (10) holds. We resort again to Foster-Lyapunov techniques. We define, for s ∈ R, the moment function M_x(s), and note that M is decreasing and that N = M(0). If V is as in (47), we can compute GV accordingly. We already argued that if GV(x) ≤ 0, then X is point-recurrent. If one can further find α > 0, 0 < α′ < ∞ and a compact set K in (0, ∞) such that the corresponding drift condition holds, then X is exponentially ergodic. This happens if the two conditions (50) and (51) hold. Notice that (50) is directly verified as soon as the moment M(−b) is defined, since (13) implies that lim_{x→0} τ(x)/x = ∞. On the other hand, (51) seems natural, as a bound on the growth is expected when the fragmentations are bounded; moreover, the bound is not too restrictive, as we are already assuming (14). To sum up, we have the following result.

Remark 5. We know that if Ah = λh for some λ, then Hf(x) = h(x)^{−1} A(hf)(x) − λf(x) is the generator of a Markov process, say Z. When B and N are constant, A1 = B(N − 1)1, and the above formula holds with λ = B(N − 1) and h = 1. We conclude by noticing that the same value for the Malthus exponent was obtained by Bertoin and Watson in the case in which the kernel is homogeneous and there is conservation of mass (see Chapter 7 of [BW18]).
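For a self-similar kernel, the moments M(s) are explicit in simple cases. The sketch below assumes the uniform kernel k(x, y) = N/x on (0, x) (an illustrative choice, not from the text), for which M(s) = N/(s + 1); it cross-checks by quadrature that M is decreasing and that M(0) = N, as stated above.

```python
def moment(s, N=2.0, n=20000):
    """s-th moment of the relative child size for the uniform
    self-similar kernel k(x, y) = N/x on (0, x):
        M(s) = int_0^x (y/x)^s k(x, y) dy = N * int_0^1 u^s du = N / (s + 1),
    evaluated by midpoint quadrature as a cross-check of the closed form."""
    return N * sum(((i + 0.5) / n) ** s for i in range(n)) / n

# M is decreasing in s and M(0) recovers the mean number of children N;
# for N = 2, M(1) = 1 reflects that this kernel conserves mass.
vals = [moment(s) for s in (0.0, 0.5, 1.0, 2.0)]
```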