LAW OF LARGE NUMBERS FOR A TWO-DIMENSIONAL CLASS COVER PROBLEM

We prove a Law of Large Numbers (LLN) for the domination number of class cover catch digraphs (CCCDs) generated by random points in two (or higher) dimensions. DeVinney and Wierman (2002) proved the Strong Law of Large Numbers (SLLN) for the uniform distribution in one dimension, and Wierman and Xiang (2008) extended the SLLN to general distributions in one dimension. In this article, using subadditive processes, we prove an SLLN result for the domination number generated by Poisson points in R². From this we obtain a Weak Law of Large Numbers (WLLN) for the domination number generated by random points in [0, 1]² from the uniform distribution first, and then extend this result to the case of bounded continuous distributions. We also extend the results to higher dimensions. The domination number of CCCDs and related digraphs has applications in statistical pattern classification and spatial data analysis.

Mathematics Subject Classification. 60G99, 62H30, 05C69.

Received January 9, 2019. Accepted June 29, 2021.


Previous results
The domination number of a CCCD is the cardinality of a minimum dominating set of the CCCD. In 1962, Ore [21] first used the term "domination number". Due to the many applications of the domination number in fields such as computer networks, social sciences and computational complexity, there has been increasing interest in this topic (see, e.g., [2,26,34]). In the CCCD setting, we denote the domination number by Γ(X_n, Y_m) to indicate its dependence on X_n and Y_m. DeVinney and Wierman [10] proved the following SLLN for the special case of Ω = R and F_X = F_Y = U[0, 1], the uniform distribution on [0, 1]; in this special case the domination number is denoted by Γ_{n,m} for brevity of notation.

Theorem 1.1. If Ω = R, F_X = F_Y = U[0, 1], and m = rn, where r ∈ (0, ∞), then

lim_{n→∞} Γ_{n,m}/n = g_1(r) = r(12r + 13) / (3(r + 1)(4r + 3))  a.s.
Wierman and Xiang [31] extended this result to the case of general distributions in one dimension, as stated in the following theorem. Here f_X and f_Y are the probability density functions (pdfs) corresponding to X_n and Y_m, respectively.

Theorem 1.2. If Ω = R, the continuous and bounded pdfs f_X and f_Y have support [a, b] with a < b, and m/n → r, r ∈ (0, ∞), as n → ∞, then

lim_{n→∞} Γ(X_n, Y_m)/n = ∫_a^b g_1( r f_Y(x)/f_X(x) ) f_X(x) dx  a.s.,

where g_1(r) is defined as in Theorem 1.1.

Our results
Extending the previous results in one dimension to higher dimensions requires a different approach, since the exact distribution of the domination number in the multi-dimensional case is unknown. In this paper, we first focus on R² and develop the LLN for the domination number in R² by using the ergodic theorem for two-dimensional subadditive processes. Then we extend the results to higher dimensions in Section 6. See [25,30] for subadditive Euclidean functionals. Our approach is to prove the SLLN for the domination number of the CCCD generated by Poisson points in R², and then transfer the result to uniform data sets on [0, 1]² with fixed sample sizes. Our arguments are similar to those given in [22,25,30,33]. In particular, the subadditive approximation is also known as the boundary functional. Furthermore, the de-Poissonization argument we employ was introduced in [33]. However, the domination number in the Poisson case is not subadditive. To make use of the SLLN for subadditive processes, we construct the constrained domination number of the CCCD induced by the X-points and Y-points, with the covering balls bounded also by the boundary of the study region, which is assumed to be a rectangular region R in R². Below, we define the domination number and the constrained domination number based on CCCDs generated by realizations of X_n and Y_m in the region R. To emphasize the distinction between the two versions, the former will also be called the unconstrained domination number when there is potential ambiguity.

Definition 1.3. (Unconstrained and constrained domination numbers.) For a rectangle R in R², the (unconstrained) covering ball of any x_i ∈ R is defined by B(x_i) := B(x_i, r_i), the ball centered at x_i with radius r_i = min_j d(x_i, y_j), the distance from x_i to the nearest Y-point. The (unconstrained) domination number Γ_R is the minimum number of (unconstrained) covering balls needed to cover all X-points in R ⊆ R². The constrained covering ball of x_i is B_R(x_i) := B(x_i, min{r_i, d(x_i, ∂R)}), whose radius is bounded also by the distance from x_i to the boundary ∂R, so that B_R(x_i) remains in R. The constrained domination number Γ̄_R is the minimum number of constrained covering balls needed to cover all X-points in R.
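The two versions of the covering balls and domination numbers in Definition 1.3 can be made concrete with a small computational sketch. The following pure-Python code is ours and only illustrative (the brute-force minimum set cover is feasible only for small samples); it caps the constrained radius by the distance to the boundary of R = [0, w) × [0, h), as in the definition above.

```python
import math
from itertools import combinations

def covering_radii(X, Y, rect=None):
    """Radius of each covering ball: the distance from x_i to the nearest
    Y-point; if rect = (w, h) is given, the radius is also capped by the
    distance from x_i to the boundary of R = [0, w) x [0, h)
    (the constrained version of Definition 1.3)."""
    radii = []
    for (px, py) in X:
        r = min(math.hypot(px - qx, py - qy) for (qx, qy) in Y)
        if rect is not None:
            w, h = rect
            r = min(r, px, w - px, py, h - py)
        radii.append(r)
    return radii

def domination_number(X, Y, rect=None):
    """Minimum number of covering balls (centered at X-points) needed to
    cover all X-points, found by brute-force minimum set cover."""
    radii = covering_radii(X, Y, rect)
    n = len(X)
    cover = [{j for j in range(n)
              if math.hypot(X[i][0] - X[j][0], X[i][1] - X[j][1]) <= radii[i]}
             for i in range(n)]
    for k in range(1, n + 1):
        for sub in combinations(range(n), k):
            if set().union(*(cover[i] for i in sub)) == set(range(n)):
                return k
    return n

X = [(0.1, 0.5), (0.55, 0.5)]
Y = [(0.9, 0.5)]
print(domination_number(X, Y), domination_number(X, Y, rect=(1, 1)))  # → 1 2
```

In the printed example, constraining the balls by the boundary shrinks the ball of the left-most X-point, and the domination number increases from 1 to 2; in general the constrained value is never smaller than the unconstrained one.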
Let S² = R²₊ be the two-dimensional quadrant with nonnegative real coordinates. If a = (a_i) and b = (b_i), i = 1, 2, are two vectors in S², then [a, b) denotes the set {u = (u_i) ∈ S² : a_i ≤ u_i < b_i, i = 1, 2} (i.e., [a, b) denotes a rectangle R which includes the left edge but excludes the right edge in each coordinate), and we let R denote the class of sets of this form. Denote by 0 (resp. e) the vector with all coordinates equal to 0 (resp. 1). Let J_q = [0, qe) for q > 0, and let |J_q| denote its area. Notice that (the closure of) J_q is a square with lower left vertex at the origin 0 and side length q, so its area is |J_q| = q². Suppose that there are two independent Poisson processes {X_i} and {Y_j} in R², with rates λ_X and λ_Y, respectively. Notice that the number of X-points in J_q, denoted N_X(J_q), has a Poisson distribution, Poisson(λ_X|J_q|) ≡ Poisson(q²λ_X); similarly, the number of Y-points in J_q, denoted N_Y(J_q), has a Poisson(q²λ_Y) distribution. Throughout the article, we use lim_{q→∞, q∈A} for the limit as q tends to infinity through the elements of A (e.g., A = Q means the limit is taken as q tends to infinity through the rational numbers), and lim_{q→∞}, with no specification for q, for the limit as q tends to infinity through the integers. In Section 2, by directly applying the SLLN for subadditive processes and a separability argument, we establish the following SLLN result for the constrained domination number in the Poisson case.
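For simulation, the restriction of the two independent Poisson processes to J_q can be generated in the standard two-step way: draw N ∼ Poisson(λ|J_q|), then place N independent uniform points in J_q. A minimal pure-Python sketch (function names are ours; Knuth's product method is used only because the means here are moderate):

```python
import math, random

def sample_poisson(mean, rng):
    """Knuth's product method; adequate for the moderate means used here."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_points(rate, q, rng):
    """Homogeneous Poisson process of the given rate restricted to
    J_q = [0, q)^2: a Poisson(rate * q**2) number of points, placed
    independently and uniformly in the square."""
    n = sample_poisson(rate * q * q, rng)
    return [(rng.uniform(0, q), rng.uniform(0, q)) for _ in range(n)]

rng = random.Random(7)
X = poisson_points(1.0, 5.0, rng)  # lambda_X = 1
Y = poisson_points(0.5, 5.0, rng)  # lambda_Y = r * lambda_X with r = 1/2
```

The counts N_X(J_q) and N_Y(J_q) then have the Poisson(q²λ_X) and Poisson(q²λ_Y) distributions noted above.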
Theorem 1.5. Let {X_i} and {Y_j} be two independent Poisson processes in R², with rates λ_X and λ_Y, respectively. If λ_Y/λ_X = r for some r ∈ (0, ∞), then there exists a function g_2(r) such that

lim_{t→∞, t∈R} Γ̄_{J_t}/|J_t| = g_2(r)  a.s.
Although the explicit form of g_2(r) is not available, in Corollary 4.2 we show that the function g_2(r) is bounded, increasing and continuous on (0, ∞). Furthermore, the conclusion of Theorem 1.5 also holds for the (unconstrained) domination number on square study regions with real side lengths:

Theorem 1.6. Let {X_i} and {Y_j} be two independent Poisson processes in R², with rates λ_X and λ_Y, respectively. If λ_Y/λ_X = r for some r ∈ (0, ∞), then

lim_{t→∞, t∈R} Γ_{J_t}/|J_t| = g_2(r)  a.s.,

where g_2(r) is as in Theorem 1.5.
Denote by Γ_{n,m} the domination number generated by n X-points and m Y-points, where both X-points and Y-points are uniformly distributed in the unit square, i.e., from U[0, 1]². In Section 4, by viewing the Poisson points as uniformly distributed points in [0, 1]² (that is, coupling the Poisson point process with a binomial point process and, in particular, approximating the Poisson distribution by a binomial distribution), we prove the following Weak Law of Large Numbers (WLLN) for the domination number in [0, 1]².

Theorem 1.7. If m/n → r, r ∈ (0, ∞), as n → ∞, then Γ_{n,m}/n → g_2(r) in probability, where g_2(r) is as in Theorem 1.5.
Finally, in Section 5, based on the approach used for the one-dimensional version of this problem in [31], we generalize the WLLN to the case of strictly positive and bounded densities f_X and f_Y with support [0, 1]². Define

L_2(r, f_X, f_Y) := ∫_{[0,1]²} g_2( r f_Y(x)/f_X(x) ) f_X(x) dx,

where g_2(r) is as in Theorem 1.5.
Theorem 1.8. Let X_1, X_2, ..., X_n be i.i.d. ∼ f_X and Y_1, Y_2, ..., Y_m be i.i.d. ∼ f_Y, with X_i and Y_j independent for all i = 1, 2, ..., n and j = 1, 2, ..., m. If the pdfs f_X and f_Y are strictly positive, bounded and continuous on their support [0, 1]², and m/n → r, r ∈ (0, ∞), as n → ∞, then Γ(X_n, Y_m)/n → L_2(r, f_X, f_Y) in probability.

The random variables studied in this paper (i.e., the constrained and unconstrained domination numbers) are negatively associated; see [32] for more details on negatively associated random variables and vectors. Therefore, it should be possible to prove CLT results for these two domination numbers. However, since currently we only know some properties of g_2(r) (see Cor. 4.2), the asymptotic distribution (more specifically, a CLT) is not pursued in this article.

SLLN for the domination number in the Poisson case
Our proof relies on the ergodic theory of multidimensional subadditive stochastic processes. Subadditive processes were introduced by Hammersley and Welsh in [14] and developed by Kingman in [16,17]. In 1981, Akcoglu and Krengel obtained an SLLN result for multidimensional subadditive processes under several natural assumptions [1], which we employ in our proof. We state their results in terms of subadditive, instead of superadditive, processes below.

Definition 2.1. Let S^d denote the additive semigroup of d-dimensional vectors with nonnegative real coordinates, where d ∈ N, and let T denote the class of rectangles [a, b) with a, b ∈ S^d. A continuous subadditive process {X_I : I ∈ T} satisfies:

S1: If I_1, I_2, ..., I_k are disjoint sets in T and I = ∪_{j=1}^k I_j ∈ T, then X_I ≤ Σ_{j=1}^k X_{I_j}.
S2: For I_1, I_2, ..., I_k ∈ T and u ∈ S^d, the joint distributions of (X_{I_1}, ..., X_{I_k}) and (X_{u+I_1}, ..., X_{u+I_k}) are identical, where u + I_j = {u + a : a ∈ I_j} for j = 1, 2, ..., k.
S3: E[X_I] < ∞ for all I ∈ T and inf{E[X_I]/|I| : I ∈ T} = τ(X) > −∞,

where |·| denotes the Lebesgue measure and τ(X) is referred to as the time constant of the stochastic process {X_I}.
Let S^d_Z denote the set of vectors in S^d with integer coordinates. For a real number t > 0, let T_t denote the class of rectangles [a, b) with a, b ∈ tS^d_Z, i.e., with all coordinates integer multiples of t. If {X_I} is defined only on T_t for some fixed t > 0, and satisfies conditions S1–S3, then it is called a discrete subadditive process. Akcoglu and Krengel [1] proved the following theorem for the t = 1 case:

Theorem 2.2. If {X_I} is a discrete subadditive process on T_1, then lim_{n→∞} X_{J_n}/|J_n| exists a.e., where J_n = [0, ne) ∈ T_1.
Kingman observed that the continuous analogue of Theorem 2.2 is false unless further conditions are added, and proposed a natural supplementary condition [16,17]. The following theorem gives a multi-parameter generalization of the result in [1].

Theorem 2.3. Let {X_I} be a continuous subadditive process and let Φ = sup{X_R : R a rectangle with rational endpoints in J_1}. If E[Φ] < ∞, then lim_{q→∞, q∈Q} X_{J_q}/|J_q| exists a.s.

To eliminate the restriction of q to rational numbers, we rely on the concept of separability. A stochastic process {Y_t, t ∈ T} is separable if the parameter set T has a countable dense subset D and there is an event E with probability zero such that, for every open set F ⊂ T and every closed set G ⊂ R, the events {Y_t ∈ G, ∀t ∈ F ∩ D} and {Y_t ∈ G, ∀t ∈ F} differ by a subset of E. Doob [11] introduced separability to describe the condition that the properties of a stochastic process are determined by its values at a countable set of parameter values. Since {X_{I_q}} is constant except for jumps at the Poisson points, it is clearly a separable process. Hence, with D taken to be the rational numbers, a.s. convergence of X_{J_q}/|J_q| as q tends to infinity through the rational numbers implies a.s. convergence through the real numbers. Theorem 2.2 alone does not identify the limiting random variable L_d := lim_{|I|→∞} X_I/|I| for a subadditive process {X_I}. However, the limit is simply the time constant τ(X) when the subadditive process is independent, where a subadditive process is defined to be independent if the random variables X_{I_i} are independent for disjoint regions I_i, i = 1, 2, ..., k (see [16,17]).
Consider the stochastic process {Γ̄_R : R = [a, b), a, b with nonnegative rational components}, where Γ̄_R is the cardinality of a minimum class cover of the X-points by constrained covering balls in R. We prove:

Lemma 2.4. {Γ̄_R} is a subadditive process.
Proof. In this setting, T in Definition 2.1 is the set of regions [a, b) with a and b having nonnegative rational components; the set of such regions is denoted by R_q. We check the three conditions, S1–S3, in Definition 2.1 for {Γ̄_R : R ∈ R_q} as follows:
- For S1, suppose that R_1, R_2, ..., R_k are disjoint regions in R_q and that R = ∪_{i=1}^k R_i is in R_q as well. If a point X ∈ R, then there exists a j ∈ {1, 2, ..., k} such that X ∈ R_j. The constrained covering ball for X with respect to R, denoted B_R(X), is the same as or larger than that with respect to R_j, denoted B_{R_j}(X); hence, no constrained covering ball with respect to R is smaller than its corresponding constrained covering ball with respect to R_j. Consequently, if we take minimum constrained covers of the X-points in each R_j and then ignore the boundaries of the R_j's, the union of the resulting (same or larger) constrained covering balls with respect to R still contains all X-points in the R_j's, and thus all X-points in R. It follows that Γ̄_R ≤ Σ_{i=1}^k Γ̄_{R_i}.
- S2 follows from the homogeneity of the Poisson processes.
- S3 holds since 0 ≤ Γ̄_R ≤ N_X(R), so E[Γ̄_R] ≤ λ_X|R| < ∞ and inf{E[Γ̄_R]/|R|} ≥ 0 > −∞.
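The subadditivity inequality in S1 can be checked numerically on small configurations. The sketch below is ours (brute-force set cover, so small samples only); it computes the constrained domination number on a rectangle, with each covering-ball radius capped by the distance to the nearest Y-point and to the boundary, and compares a region against its two halves:

```python
import math
from itertools import combinations

def cdom(X, Y, rect):
    """Constrained domination number on rect = (x0, y0, x1, y1): each
    covering-ball radius is the distance to the nearest Y-point, capped by
    the distance to the boundary of the rectangle."""
    x0, y0, x1, y1 = rect
    pts = [p for p in X if x0 <= p[0] < x1 and y0 <= p[1] < y1]
    if not pts:
        return 0
    radii = []
    for (px, py) in pts:
        r = min(math.hypot(px - qx, py - qy) for (qx, qy) in Y)
        radii.append(min(r, px - x0, x1 - px, py - y0, y1 - py))
    n = len(pts)
    cover = [{j for j in range(n)
              if math.hypot(pts[i][0] - pts[j][0],
                            pts[i][1] - pts[j][1]) <= radii[i]}
             for i in range(n)]
    for k in range(1, n + 1):
        for sub in combinations(range(n), k):
            if set().union(*(cover[i] for i in sub)) == set(range(n)):
                return k
    return n

# R = [0,2) x [0,1) split into two unit squares R1 and R2:
X = [(0.3, 0.5), (0.55, 0.5), (1.4, 0.5)]
Y = [(1.0, 0.5)]
print(cdom(X, Y, (0, 0, 2, 1)),
      cdom(X, Y, (0, 0, 1, 1)) + cdom(X, Y, (1, 0, 2, 1)))  # → 2 2
```

Here the inequality Γ̄_R ≤ Γ̄_{R_1} + Γ̄_{R_2} holds with equality; in other configurations the left side can be strictly smaller.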
Next, we prove Theorem 1.5 by applying Theorem 2.3 to the process Γ̄_{J_q}, where J_q = [0, qe) with q rational, and then extending this result to J_t with t real.

Proof of Theorem 1.5. We first consider Γ̄_{J_q} with q rational. To apply Theorem 2.3, we just need to check that E[Φ] < ∞, where Φ = sup Γ̄_R with the supremum taken over all rectangles R with rational endpoints in J_1. For any rational number q ≤ 1, we have Γ̄_{J_q} ≤ N_X(J_q) ≤ N_X(J_1). Thus Φ ≤ N_X(J_1), and taking expectations of both sides gives E[Φ] ≤ E[N_X(J_1)] = λ_X < ∞. Notice that the time constant τ_Γ of the stochastic process Γ̄_{J_q} depends on r and on J_q ⊂ R², so we denote the time constant by g_2(r). Since the process Γ̄_R is separable, a.s. convergence of Γ̄_{J_q}/|J_q| as q tends to infinity through the rational numbers implies a.s. convergence of Γ̄_{J_t}/|J_t| as t tends to infinity through the positive real numbers, since the rational numbers form a countable dense subset of the real numbers. Hence, the desired result follows.

Proof of Theorem 1.6
Having established the convergence result for the constrained domination number generated by the Poisson points, we are now ready to prove a similar result for the (unconstrained) domination number, i.e., to prove Theorem 1.6, which asserts that, with λ_Y/λ_X = r, lim_{t→∞, t∈R} Γ_{J_t}/|J_t| = g_2(r) a.s. First, we prove a lemma showing that the constrained and unconstrained domination numbers agree in the limit over J_n = [0, ne) as n → ∞.

Lemma 3.1. lim_{n→∞} (Γ̄_{J_n} − Γ_{J_n})/|J_n| = 0 a.s.

Proof. Let n be a positive integer and s_n a positive real number depending on n. Consider J_n = [0, ne), J'_n = [s_n e, (n − s_n)e), J''_n = [2s_n e, (n − 2s_n)e), and J'''_n = [(2 + √2)s_n e, (n − (2 + √2)s_n)e), as shown in Figure 1. The quantity s_n < n/(2(2 + √2)) will be chosen later in the proof; we will let it go to infinity together with n as n → ∞, but at a much slower rate. Let F_n denote the event that all constrained covering balls B_{J_n}(X_i) of X-points in J''_n are contained in J'_n, and let E_n denote the event that there is at least one Y-point in each of the s_n × s_n squares tiling J'_n \ J''_n.
The probability of having at least one Y-point in any one of those small squares is 1 − exp(−s_n²λ_Y), and the number of small squares is 4n/s_n − 12, which is less than 4n/s_n. Therefore, by the independent increments property of Poisson processes, we know that

P(E_n^c) ≤ (4n/s_n) exp(−s_n²λ_Y).

Next, we show that E_n ⊆ F_n, which will imply P(F_n^c) ≤ P(E_n^c). If there is at least one Y-point in each s_n × s_n square between J'_n and J''_n, then the constrained covering ball of any X-point in J''_n cannot cross the boundary of J'_n (i.e., the constrained covering ball stays in J'_n). The reason is that, for any X_i ∈ J''_n, the constrained covering ball B_{J_n}(X_i) cannot extend far from J''_n, since there is at least one Y-point Y_j in the s_n × s_n square closest to X_i; hence B_{J_n}(X_i) is contained in J'_n. Specifically (but without loss of generality), suppose Y_j is the Y-point closest to X_i, located at the position shown in Figure 2 (left). Then the radius of the constrained covering ball B_{J_n}(X_i) is √(a² + b²), where the two segments with respective lengths a and b are also shown in Figure 2 (left). Since a ≤ s_n, we have √(a² + b²) ≤ √(s_n² + b²) ≤ b + s_n. Note that the distance from X_i to the boundary of J'_n is greater than or equal to b + s_n; thus B_{J_n}(X_i) is contained in J'_n.
Next we carefully analyze the relation between the constrained domination number Γ̄_{J_n} and the (unconstrained) domination number Γ_{J_n}. Let ∆_{J_n} = Γ̄_{J_n} − Γ_{J_n}. If the boundary constraint is ignored, the covering balls will not shrink (and might grow) for those X-points whose constrained covering balls touch the boundary; thus the domination number cannot increase, i.e., ∆_{J_n} ≥ 0. Moreover, given the event F_n, this resizing of covering balls can only occur for X-points in J_n \ J''_n. Although the resized covering balls may cover other X-points in J_n \ J''_n, the resized balls do not intersect J'''_n. The reason is that these balls cannot contain the Y-points in the s_n × s_n squares. Specifically (but without loss of generality), suppose Y_j is the Y-point closest to X_i, located at the position shown in Figure 2 (right). Then the radius of the resized ball B(X_i) is √(c² + d²), where the two segments with respective lengths c and d are also shown in Figure 2 (right). Since c ≤ s_n and d ≤ s_n, we have √(c² + d²) ≤ √2 s_n. Note that the distance from X_i to the boundary of J'''_n is greater than or equal to √2 s_n; thus B(X_i) does not intersect J'''_n. Consequently, resizing the constrained covering balls of X-points in J_n \ J''_n can decrease the domination number by at most the number of X-points in J_n \ J'''_n, i.e., ∆_{J_n} ≤ N_X(J_n \ J'''_n).
By the arguments in the preceding paragraph and by the union bound (for the middle inequality below), for n sufficiently large and s_n = √((2 + δ) log(n)/λ_Y) for some δ ∈ (0, 1),

P(F_n^c) ≤ P(E_n^c) ≤ (4n/s_n) exp(−s_n²λ_Y) = (4/s_n) n^{−(1+δ)},

which is summable in n. Moreover, E[N_X(J_n \ J'''_n)] = λ_X(n² − (n − 2(2 + √2)s_n)²) ∼ c n s_n for a constant c > 0, so by a Chernoff bound for the number of points in J_n \ J'''_n, for n sufficiently large, P(N_X(J_n \ J'''_n) > 2c n s_n) is also summable in n.
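The summability driving the Borel–Cantelli step can be sanity-checked numerically. The sketch below is ours (λ_Y = 1 and δ = 1/2 are merely illustrative values); it evaluates the union bound (4n/s_n)·exp(−s_n²λ_Y) with s_n = √((2 + δ) log n/λ_Y), which simplifies to 4n^{−(1+δ)}/s_n:

```python
import math

def boundary_event_bound(n, lam_y=1.0, delta=0.5):
    """Union bound from the proof: at most 4n/s_n boundary squares, each
    containing no Y-point with probability exp(-s_n**2 * lam_y), which
    equals n**(-(2 + delta)) for the chosen s_n."""
    s_n = math.sqrt((2 + delta) * math.log(n) / lam_y)
    return (4 * n / s_n) * math.exp(-s_n ** 2 * lam_y)

# Equals 4 * n**(-(1 + delta)) / s_n, hence summable in n.
b10, b100 = boundary_event_bound(10), boundary_event_bound(100)
```

The bound decays polynomially with exponent 1 + δ > 1, which is exactly what the Borel–Cantelli argument requires.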
Notice that this choice of s_n implies that P(F_n^c) → 0 as n → ∞. By the Borel–Cantelli Lemma, the calculation above immediately implies that ∆_{J_n}/|J_n| → 0 a.s. as n → ∞, which proves the lemma. However, to prove Theorem 1.6, we need to show that the result of the above lemma also holds for Γ_{J_t} for real t.
Proof of Theorem 1.6. We first define ∆_{J_t} := Γ̄_{J_n} − Γ_{J_t} for any t ∈ [n, n + 1). Note that ∆_{J_n} defined before is the difference between the two processes over the same region J_n, whereas ∆_{J_t} defined here is the difference over two different regions, J_n and J_t. It is possible that Γ_{J_t} > Γ̄_{J_n} (i.e., ∆_{J_t} < 0), but Γ_{J_t} can exceed Γ̄_{J_n} by at most N_X(J_t \ J_n), the number of X-points in J_t \ J_n; this gives the lower bound ∆_{J_t} ≥ −N_X(J_t \ J_n). On the other hand, recall that F_n is the event that all constrained covering balls of X-points in J''_n are contained in J'_n. Given F_n, the covering balls of X-points in J''_n are completely contained in J'_n, so by the same argument as for ∆_{J_n}, Γ̄_{J_n} can exceed Γ_{J_t} by no more than the number of X-points in J_t \ J'''_n, which gives an upper bound on ∆_{J_t}. Convergence to zero of the lower and upper bounds of ∆_{J_t}/|J_t| will yield the result of Theorem 1.6.
For any t > 0, there is an integer n(t) such that n(t) ≤ t < n(t) + 1. By the definitions above, Γ_{J_t} = Γ̄_{J_{n(t)}} − ∆_{J_t}. In addition, we have shown that ∆_{J_t} ≥ −N_X(J_{n(t)+1} \ J_{n(t)}), so

Γ_{J_t} ≤ Γ̄_{J_{n(t)}} + N_X(J_{n(t)+1} \ J_{n(t)}).   (3.1)

It should also be noted that N_X(J_{n(t)+1} \ J_{n(t)})/|J_{n(t)}| → 0 a.s. as t → ∞, since |J_{n(t)+1} \ J_{n(t)}|/|J_{n(t)}| = (2n(t) + 1)/n(t)² → 0. For the other direction, we first write

∆_{J_t} = ∆_{J_t} I_{F_{n(t)}} + ∆_{J_t} I_{F^c_{n(t)}},   (3.2)

where I_A represents the indicator function for the event or set A. Applying the same technique as when we showed ∆_{J_n} ≤ N_X(J_n \ J'''_n): given F_{n(t)}, when the boundary constraint is ignored, the constrained covering balls centered at X-points in J''_{n(t)} do not change, whereas the covering balls centered at X-points in J_t \ J''_{n(t)} do not intersect J'''_{n(t)}. Therefore we conclude that, on F_{n(t)}, ∆_{J_t} ≤ N_X(J_t \ J'''_{n(t)}). Hence, for the first term on the right-hand side of equation (3.2), we have

∆_{J_t} I_{F_{n(t)}} ≤ N_X(J_{n(t)+1} \ J'''_{n(t)}).   (3.3)

Recall that we have chosen s_{n(t)} = √((2 + δ) log n(t)/λ_Y). Because P(F^c_{n(t)}) is summable and |J_{n(t)+1}|/|J_{n(t)}| → 1 as t → ∞ by the choice of s_n, substituting these expressions into Inequality (3.3) immediately gives ∆_{J_t} I_{F_{n(t)}}/|J_t| → 0 a.s. In addition, ∆_{J_t} I_{F^c_{n(t)}}/|J_t| → 0 a.s., since by the Borel–Cantelli Lemma the events F^c_{n(t)} occur only finitely often. Incorporating the two results above into equation (3.2), we obtain

lim sup_{t→∞} ∆_{J_t}/|J_t| ≤ 0 a.s.   (3.4)

Furthermore, combining Inequalities (3.1) and (3.4) with Theorem 1.5, we conclude that lim_{t→∞, t∈R} Γ_{J_t}/|J_t| = g_2(r) a.s., which completes the proof of Theorem 1.6.

In the previous section, we established the SLLN for the domination number generated by the Poisson points in R². In this section, we show the WLLN for the domination number for uniform data sets in [0, 1]² (i.e., prove Thm. 1.7) by transferring the result in the Poisson case to the uniform distribution case.

Proof of Theorem 1.7. In the Poisson case, without loss of generality, let the rates be λ_X = 1 and λ_Y = r. For any integer n > 0, let T(n) be the smallest real number such that there are n + 1 X-points in the closure of J_{T(n)}. Note that the (n + 1)-st X-point is on the boundary of J_{T(n)}, and the other n X-points are in the interior of J_{T(n)}. Moreover, by the SLLN, N_X(J_t)/|J_t| → 1 a.s. as t → ∞ through the real numbers.
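The de-Poissonization step hinges on the quantity T(n). Since a point (x, y) first belongs to the closed square [0, t]² when t = max(x, y), T(n) is an order statistic of the maximal coordinates; a small sketch of ours makes this explicit:

```python
def square_hitting_time(points, n):
    """T(n): the smallest t such that the closure of J_t = [0, t]^2 contains
    n + 1 of the given points. A point (x, y) first belongs to [0, t]^2 when
    t = max(x, y), so T(n) is the (n+1)-st smallest of these values."""
    entry = sorted(max(x, y) for (x, y) in points)
    return entry[n]  # 0-based indexing: entry[n] is the (n+1)-st smallest

pts = [(0.2, 0.1), (0.5, 0.4), (0.3, 0.9), (0.7, 0.2)]
print(square_hitting_time(pts, 1))  # → 0.5
```

The point achieving the returned value lies on the boundary of J_{T(n)}, and the n points with smaller maximal coordinate lie in its interior, matching the description above.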
Combining the equation above with the fact that lim_{n→∞} n/T(n)² = 1 a.s., all that remains from the discussion above is to prove the following lemma (Lemma 4.1), whose proof is provided in the Appendix. In this paper, the exact form of g_2(r) is not identified; however, we can establish several properties of g_2(r) (Cor. 4.2), with proofs also deferred to the Appendix.

In this section, we provide the proof of Theorem 1.8, which establishes the WLLN for the domination number of CCCDs based on continuous and bounded densities on [0, 1]². We proceed as in [31], where the SLLN for the domination number of CCCDs with continuous densities in one dimension was proved. In the following two subsections, we first generalize Theorem 1.7 to piecewise constant densities, and then extend it to the continuous case. The proofs are analogous to those in [31]. However, adding or deleting a point in two dimensions can change the domination number considerably (by as much as n − 1), whereas adding or deleting a point in one dimension can change the domination number by at most 2. Such large changes are very unlikely, however, and their probabilities are shown to be negligible in the limit as n → ∞.

The case of piecewise constant densities
We consider the simple situation in which f_X and f_Y are piecewise constant densities defined by

f_X(x) = a_pq and f_Y(x) = b_pq for x ∈ A_pq,   (5.1)

where the A_pq, p, q = 1, 2, ..., k, equally divide [0, 1]² into k² squares (see Fig. 3) and a_pq > 0 and b_pq > 0 for all p, q. Note that Σ_{p,q=1}^k a_pq = Σ_{p,q=1}^k b_pq = k². Let Γ_{n,m} be the domination number generated by the n X-points and m Y-points from f_X and f_Y, respectively, in [0, 1]², and let Γ_{n_pq,m_pq} be the domination number generated by the n_pq X-points and m_pq Y-points in A_pq. One can think of Σ_{p,q} Γ_{n_pq,m_pq} as a "filtered" domination number, generated by adding a "filter" A_pq for each Γ_{n_pq,m_pq}: the effect of the filter is that no points outside A_pq contribute to Γ_{n_pq,m_pq}. Removing the filters restores the sum of the "filtered" domination numbers Σ_{p,q} Γ_{n_pq,m_pq} to the domination number Γ_{n,m}.
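For simulations of this setting, sampling from a piecewise constant density amounts to first choosing a cell A_pq with probability a_pq/k² and then placing a uniform point in that cell. A sketch (ours; the function name and the nested-list encoding of the values a_pq are our own conventions):

```python
import random

def sample_piecewise_constant(a, k, rng):
    """Draw one point from the density equal to a[p][q] on the square
    A_pq = [p/k, (p+1)/k) x [q/k, (q+1)/k); requires the values a[p][q]
    to sum to k**2, so that P(A_pq) = a[p][q] / k**2."""
    cells = [(p, q) for p in range(k) for q in range(k)]
    weights = [a[p][q] for (p, q) in cells]
    p, q = rng.choices(cells, weights=weights)[0]
    return ((p + rng.random()) / k, (q + rng.random()) / k)

rng = random.Random(1)
a = [[2.0, 0.0], [1.0, 1.0]]  # k = 2; the values sum to k**2 = 4
pts = [sample_piecewise_constant(a, 2, rng) for _ in range(200)]
```

With this cell-then-uniform scheme, the counts n_pq and m_pq of X- and Y-points falling in each A_pq are exactly the quantities entering the "filtered" sum above. (The zero value in the example simply makes one cell unreachable; strictly positive a_pq, as assumed in the text, work the same way.)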
Lemma 5.1. If f_X and f_Y are as in equation (5.1) and m/n → r, r ∈ (0, ∞), as n → ∞, then Σ_{p,q} Γ_{n_pq,m_pq}/n → L_2(r, f_X, f_Y) in probability.

Proof of Lemma 5.1. By the SLLN (see Lemma 2 of [31] for more details in the d = 1 case), it follows that, as n → ∞, n_pq/n → a_pq|A_pq| and m_pq/m → b_pq|A_pq| a.s. Hence m_pq/n_pq → r_pq a.s., where r_pq = r·b_pq/a_pq. Therefore, applying Theorem 1.7 on each A_pq yields

Γ_{n_pq,m_pq}/n = (Γ_{n_pq,m_pq}/n_pq)(n_pq/n) → g_2(r_pq)·a_pq|A_pq| in probability,

and thus Σ_{p,q} Γ_{n_pq,m_pq}/n → Σ_{p,q} g_2(r_pq)·a_pq|A_pq| in probability. Writing the above expression in the form of an integral gives

Σ_{p,q} g_2(r_pq) a_pq |A_pq| = ∫_{[0,1]²} g_2( r f_Y(x)/f_X(x) ) f_X(x) dx = L_2(r, f_X, f_Y),

where f_X and f_Y are as in equation (5.1).
Remark 5.2. The proof above generalizes easily to the case in which the regions of constancy of the densities are rectangles instead of squares. However, the limiting function g_2 then depends on the ratio between the length and the width of the rectangles, so the final limiting value cannot be written in a simple integral form; hence the square partition of the unit square.

Proof. We prove this lemma by applying the same technique used in Section 2. Specifically, with ν = ν(n) to be chosen later, we shrink each A_pq additively by ξ = 1/(kν) to get A'_pq, then shrink A'_pq additively by ξ to get A''_pq, and then shrink A''_pq additively by √2 ξ to get A'''_pq (see Fig. 4). Finally, we divide A'_pq \ A''_pq equally into 4ν − 12 small squares with side length ξ. Then there are in total (4ν − 12)k² small squares in ∪_{p,q} A_pq = [0, 1]².
Define the event E_m := {there is at least one Y-point in each small square}. To analyze the probability of the event E_m, we consider the complementary event. First consider one particular square A_pq and the event that some small square in A'_pq \ A''_pq contains no Y-point. Let b* = min_{p,q} b_pq, and require ξ ≤ 1/√b* to make the second expression in equation (5.3) below positive. Since all m Y-points must fall outside a given small square, the probability that a given small square contains no Y-point is at most

(1 − b* ξ²)^m.   (5.3)

By Boole's inequality, we have

P(E^c_m) ≤ (4ν − 12) k² (1 − b* ξ²)^m.   (5.4)

Next, we apply the results obtained in the proof of the SLLN for the domination number in the Poisson case (refer to Fig. 2). Conditional on E_m, the covering ball of any X-point in A''_pq is contained in A'_pq; therefore, ignoring the filter A_pq has no effect on these X-points. However, there may be Y-points just outside the boundary of A_pq, while some X-point in A_pq \ A''_pq could have a covering ball that is not contained in A_pq. Thus, ignoring the filter A_pq could resize the covering balls of some X-points in A_pq \ A''_pq, thereby changing the domination number; such a change is bounded by the number of X-points in A_pq \ A'''_pq, since no covering ball of an X-point in A_pq \ A''_pq can intersect A'''_pq. Summarizing the argument above, we obtain

|∆_{n,m}| I_{E^c_m} ≤ n I_{E^c_m}   (5.5)

and

|∆_{n,m}| I_{E_m} ≤ Σ_{p,q} N_X(A_pq \ A'''_pq).   (5.6)

We may now choose the relationships among the parameters: take ν = ν(m) large enough that the bound (5.4) tends to zero, for instance ν² = m b*/(k⁴ log √m), with k growing no faster than this choice allows. For sufficiently large k and m, P(E^c_m) → 0, so I_{E^c_m} → 0 in probability; combined with Inequality (5.5), this yields

(∆_{n,m}/n) I_{E^c_m} → 0 in probability.   (5.7)

Finally, we bound the right-hand side of Inequality (5.6). It should be noted that (see Figs. 3 and 4) E[I_{X_i ∈ ∪_{p,q}(A_pq \ A'''_pq)}] = P(X_i ∈ ∪_{p,q}(A_pq \ A'''_pq)), and the total area of ∪_{p,q}(A_pq \ A'''_pq) tends to zero as ν → ∞. Therefore, for any δ > 0, the Markov inequality provides P( Σ_{p,q} N_X(A_pq \ A'''_pq)/n > δ ) → 0, which completes the proof.

The case of continuous densities

If f_X and f_Y are bounded and continuous, then they are both uniformly continuous on [0, 1]².
Thus, given any δ > 0, there exists an integer k_0 such that, for any k ≥ k_0 and the equal partition {A_pq : p, q = 1, 2, ..., k} of [0, 1]² (see Fig. 3), we have

|f_X(u_1, v_1) − f_X(u_2, v_2)| < δ and |f_Y(u_1, v_1) − f_Y(u_2, v_2)| < δ

for any (u_1, v_1), (u_2, v_2) ∈ A_pq. We define piecewise constant functions f̃_X and f̃_Y approximating f_X and f_Y on each cell A_pq, and then rescale f̃_X and f̃_Y by dividing them by their respective integrals to obtain piecewise constant densities f̂_X and f̂_Y, which approximate f_X and f_Y, respectively. Consider random vectors U_i := (X_i1, X_i2, X_i3), i = 1, 2, ..., n, distributed i.i.d. uniformly in the region between the plane {(u, v, 0) : (u, v) ∈ [0, 1]²} and the surface {(u, v, f_X(u, v))}, so that (X_i1, X_i2) ∼ f_X; define V_j, j = 1, 2, ..., m, analogously for f_Y. Next, let U'_i := (X'_i1, X'_i2, X'_i3), i = 1, 2, ..., n, and V'_j, j = 1, 2, ..., m, be corresponding independent uniform vectors below the surfaces of f̂_X and f̂_Y, respectively. Finally, let R_X be the region between the surfaces {(u, v, f_X(u, v))} and {(u, v, f̂_X(u, v))}, and let R_Y be the region between the surfaces {(u, v, f_Y(u, v))} and {(u, v, f̂_Y(u, v))}. We then define new sequences Û_i and V̂_j as follows: for each i ∈ [n] := {1, 2, ..., n}, if the point U_i = (X_i1, X_i2, X_i3) does not fall into R_X, then U_i is assigned to Û_i; otherwise, U'_i = (X'_i1, X'_i2, X'_i3) is assigned to Û_i. A similar procedure applies to the Y-points. Again, a technique similar to one in [31] shows that (X̂_i1, X̂_i2) and (Ŷ_j1, Ŷ_j2) — the first two coordinates of Û_i and V̂_j — have the piecewise constant densities f̂_X and f̂_Y, respectively.
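One way to realize such a piecewise constant approximation numerically is sketched below (ours; we use the cell-centre value of f as the constant, which is one admissible choice under the uniform-continuity bound, and then rescale by the integral exactly as in the construction above):

```python
def step_approximation(f, k):
    """Approximate a density f on [0,1]^2 by a function constant on each
    cell A_pq (here: the value of f at the cell centre), rescaled by its
    integral so that the result is again a density."""
    vals = [[f((p + 0.5) / k, (q + 0.5) / k) for q in range(k)]
            for p in range(k)]
    integral = sum(v for row in vals for v in row) / k ** 2
    return [[v / integral for v in row] for row in vals]
```

For the uniform density the approximation is exact at every resolution; for a non-constant f the rescaling guarantees that the step function still integrates to 1, which is what makes f̂_X and f̂_Y genuine densities.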
Note that only the points U_i ∈ R_X and V_j ∈ R_Y can cause a difference between Γ(X_n, Y_m) and Γ(X̂_n, Ŷ_m), but such a difference could be as large as n − 1. However, by applying the results obtained in the proof of Theorem 1.7 in Section 4, we next show that if the largest covering ball is small, then the difference is negligible in the limit.
Handling U_i points in R_X: Replacing any U_i ∈ R_X by Û_i is equivalent to deleting X_i = (X_i1, X_i2) from X_n and then inserting X̂_i = (X̂_i1, X̂_i2) into X_n. Deleting X_i could decrease (but never increase) the domination number Γ(X_n, Y_m) by at most 1, provided that X_i is not the center of a covering ball. On the other hand, deleting the covering ball of X_i could also increase (but never decrease) the domination number by at most the number of X-points in B(X_i) \ {X_i}. Hence, deleting X_i could change the domination number by at most the number of X-points in B(X_i). Similarly, adding X̂_i could further increase the domination number by at most 1, while adding the covering ball of X̂_i can also decrease the domination number by at most the number of X-points in B(X̂_i) \ {X̂_i}; hence, adding X̂_i can change the domination number by at most the number of X-points in B(X̂_i). Therefore, replacing any U_i ∈ R_X by Û_i could change the domination number by at most N_X(B(X_i)) + N_X(B(X̂_i)), and the change caused by the set of points in R_X is bounded by

Σ_{i : U_i ∈ R_X} [ N_X(B(X_i)) + N_X(B(X̂_i)) ].   (5.10)

Given any U_i ∈ R_X, denote the radius of the covering ball B(X_i) by B_i. For any fixed U_i ∈ R_X, P(B_i > b) is at most the probability that no Y-point falls within distance b of X_i.
Recall that f_X, f̂_X, f_Y and f̂_Y are all strictly positive and bounded, so we may assume

k_1 ≤ f_X, f̂_X, f_Y, f̂_Y ≤ k_2   (5.9)

for some strictly positive constants k_1, k_2. Hence the probability above can be further bounded using k_1, and since the resulting bound is uniform over U_i ∈ R_X, it holds unconditionally as well. Note that for any l ∈ [n]_{−i} := {1, 2, ..., i − 1, i + 1, ..., n}, the random point X_l is independent of X_i and of Y_j, j = 1, 2, ..., m. Therefore, applying the same technique as in the proof of Lemma 4.1, Case 2 in the Appendix, we can further bound the conditional expectation as

E[ I_{X_l ∈ B(X_i)} | U_i ∈ R_X ] ≤ ∫ (1 − k_1πb²)^m d(k_1πb²) ≤ 1/(m + 1).

Since m/n → r, when n is sufficiently large it follows that E[ Σ_{l ∈ [n]_{−i}} I_{X_l ∈ B(X_i)} | U_i ∈ R_X ] ≤ K_1 for some constant K_1 > 0; similarly, when n is sufficiently large, E[ Σ_{l ∈ [n]} I_{X_l ∈ B(X̂_i)} | U_i ∈ R_X ] ≤ K_1. From the two inequalities above, we bound the expectation of the sum in equation (5.10):

E[ Σ_{i : U_i ∈ R_X} ( N_X(B(X_i)) + N_X(B(X̂_i)) ) ] ≤ 2K_1 nδ,   (5.11)

where δ is the uniform bound on the densities f_X and f_Y introduced alongside equation (5.9); in fact, δ can be taken to be δ = P(U_i ∈ R_X).
Handling V_j points in R_Y: After replacing all U_i ∈ R_X by Û_i, the original domination number Γ(X_n, Y_m) becomes Γ(X̂_n, Y_m); we next consider the effect of replacing all V_j ∈ R_Y by V̂_j, under which Γ(X̂_n, Y_m) becomes Γ(X̂_n, Ŷ_m). We discussed the effect of deleting and adding Y-points in the proof of Theorem 1.7 in Section 4. For all Y_j ∉ R_Y, refer to Y_j as a Ŷ_m-point. For any Y_j ∈ R_Y, define B^y_j as the maximum radius of all balls that contain Y_j but no Ŷ_m-points. Applying the arguments in the proof of Lemma 4.1 in the Appendix shows that deleting Y_j could decrease (but never increase) Γ(X̂_n, Y_m) by at most the number of X̂-points in the ball B(Y_j) := B(Y_j, 2B^y_j), centered at Y_j with radius 2B^y_j. Furthermore, for any replacement point Ŷ_j with V_j ∈ R_Y, define B̂_j as the maximum radius of all balls that contain Ŷ_j but no Ŷ_m-points; similarly, adding Ŷ_j could further increase (but never decrease) the domination number by at most the number of X̂-points in B(Ŷ_j) := B(Ŷ_j, 2B̂_j). Thus, replacing all V_j ∈ R_Y by V̂_j changes the domination number by no more than

Σ_{j : V_j ∈ R_Y} [ N_X̂(B(Y_j)) + N_X̂(B(Ŷ_j)) ],   (5.12)

where N_X̂(B(Y_j)) denotes the number of X̂-points in B(Y_j). For any fixed V_j ∈ R_Y, using the same argument (and recalling the small grid balls inscribed in the squares) as in the proof of Lemma 4.1, Case 1 in the Appendix, we can bound P(B^y_j > b | Y_j, Y_j ∈ R_Y, M_R), where M_R denotes the number of points V_j falling in R_Y; since the bound is uniform over V_j ∈ R_Y, it holds unconditionally as well. Note that for any l ∈ [n], the random point X_l is independent of Y_j, j = 1, 2, ..., m. Therefore, applying the same technique as in the proof of Lemma 4.1, Case 2 in the Appendix, and using m/n → r, when n is sufficiently large and conditional on M_R ≤ 2δm, we obtain

E[ Σ_{j : V_j ∈ R_Y} N_X̂(B(Y_j)) | M_R ≤ 2δm ] ≤ K_2 δn   (5.13)

for some constant K_2 > 0.
Furthermore, applying the argument above to the case of adding the points Ŷ_j, we conclude that, when n is sufficiently large,

E[ Σ_{j : V_j ∈ R_Y} N_X̂(B(Ŷ_j)) | M_R ≤ 2δm ] ≤ K_2 δn.   (5.14)

Note that the total change |Γ(X_n, Y_m) − Γ(X̂_n, Ŷ_m)| is bounded by the sum of the changes in the two replacement steps, controlled by the sums in equations (5.10) and (5.12). Applying Inequalities (5.13) and (5.14) to the latter sum yields

E[ Σ_{j : V_j ∈ R_Y} ( N_X̂(B(Y_j)) + N_X̂(B(Ŷ_j)) ) | M_R ≤ 2δm ] ≤ 2K_2 δn.   (5.15)

Since, for positive real numbers a and b, a + b > ε for some ε > 0 implies a > ε/2 or b > ε/2, applying the Markov inequality to the bounds (5.11) and (5.15) yields that, for any ε > 0, when n is sufficiently large,

P( |Γ(X_n, Y_m) − Γ(X̂_n, Ŷ_m)|/n > ε | M_R ≤ 2δm ) ≤ Kδ   (5.16)

for some constant K determined by ε.
Recall that M_R is a Binomial(m, δ) random variable; by applying the Markov inequality, we obtain a bound on P(M_R > 2δm). Thus, for any fixed δ ∈ (0, 1), when m is sufficiently large (in particular, m > (1 − δ)/δ²), it follows that P(M_R > 2δm) ≤ δ. Hence, Inequality (5.16) reduces accordingly. In the previous section, we proved that Γ(X̃_n, Ỹ_m)/n → L_2(r, f̃_X, f̃_Y) in probability as n → ∞, so the corresponding probability is small when n is sufficiently large. Combining this with Inequality (5.17), and recalling that Corollary 4.2 says g_2(r) is bounded and continuous, since f̃_X → f_X and f̃_Y → f_Y as δ → 0 (i.e., as k → ∞), the dominated convergence theorem implies that L_2(r, f̃_X, f̃_Y) → L_2(r, f_X, f_Y) as δ → 0. Since δ > 0 can be arbitrarily small, we immediately obtain P(|Γ(X_n, Y_m)/n − L_2(r, f_X, f_Y)| > ε) → 0, which finishes the proof of Theorem 1.8.
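The quantitative step above can be checked numerically (an illustration only; the helper name is ours). Applying Markov's inequality to (M_R − δm)², i.e., a Chebyshev-type bound, gives P(M_R > 2δm) ≤ (1 − δ)/(δm), which is at most δ once m ≥ (1 − δ)/δ²:

```python
import math

def binom_tail(m, p, k):
    """Exact P(Binomial(m, p) > k)."""
    return sum(math.comb(m, j) * p ** j * (1 - p) ** (m - j)
               for j in range(k + 1, m + 1))

delta, m = 0.1, 100                               # m = 100 >= (1 - delta)/delta^2 = 90
tail = binom_tail(m, delta, int(2 * delta * m))   # exact P(M_R > 2*delta*m)
bound = (1 - delta) / (delta * m)                 # Chebyshev-type bound = 0.09
print(tail <= bound <= delta)                     # True
```

The exact tail (about 10^-3 here) is far below the bound, so the bound is crude but entirely sufficient for the proof.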
Remark 5.4. The limiting function L_2(r, f_X, f_Y) gives the same value for uniform densities as for any densities with f_X = f_Y. However, since we have not proved whether g_2 is concave, we do not yet know whether this limiting function achieves its maximum value when f_X = f_Y.

Extension to higher dimensions
The extension of our results in R² to higher dimensions R^d with d > 2 can be achieved in a straightforward fashion with some modifications to the geometric arguments. The class cover problem (CCP) is already defined in any dissimilarity space (including R^d) [3]. The LLNs for the domination number in R^d also follow from the ergodic theorem for multidimensional subadditive processes. One can prove the SLLN for the domination number of the CCCD generated by Poisson points in R^d, and then transfer the result to uniform data sets on [0, 1]^d with fixed sample sizes. The extension of the constrained domination number induced by the X-points and Y-points, with the covering balls bounded by the boundary of the study region, is also straightforward.
In particular, in higher dimensions, S_d = R_+^d, the region J_q becomes J_{d,q} = [0, qe), for q > 0, with |J_{d,q}| = q^d. Furthermore, we assume that there are two independent Poisson processes {X_i} and {Y_j} in R^d, with respective rates λ_X and λ_Y. The extensions of our main findings to higher dimensions are then as follows:
- Extension of Theorem 1.5: Let {X_i} and {Y_j} be two independent Poisson processes in R^d, with rates λ_X and λ_Y, respectively. If λ_Y/λ_X = r, r ∈ (0, ∞), then there exists a function g_d such that the scaled domination number converges almost surely as t → ∞, t ∈ R.
- Extension of Theorem 1.6: Let {X_i} and {Y_j} be two independent Poisson processes in R^d, with rates λ_X and λ_Y, respectively. If λ_Y/λ_X = r, r ∈ (0, ∞), then the corresponding limit holds as t → ∞, t ∈ R, where x = (x_1, ..., x_d), y = (y_1, ..., y_d), dx = dx_1 ⋯ dx_d, and g_d(r) is as in the Extension of Theorem 1.5.
The proofs of these extensions are similar to those in the two-dimensional case. We only provide the proof of the higher-dimensional version of Lemma 3.1 as an illustration. Proof. In R^d, we have J_{d,n}′ = [s_n e, (n − s_n)e), J_{d,n}″ = [2s_n e, (n − 2s_n)e), and J_{d,n}‴ = [(2 + √d)s_n e, (n − (2 + √d)s_n)e). The quantity s_n < n/(2(2 + √d)) will be chosen later. The s_n × s_n squares in R² become s_n × s_n × ⋯ × s_n hypercubes of volume s_n^d between J_{d,n}″ and J_{d,n}′. Let F_{d,n} denote the event that all constrained covering balls of X-points in J_{d,n}″ are contained in J_{d,n}′, and let E_{d,n} denote the event that there exists at least one Y-point in each of the hypercubes between J_{d,n}″ and J_{d,n}′. The probability of having at least one Y-point in any one of these small hypercubes is 1 − exp(−s_n^d λ_Y), and the number of such small hypercubes is ((n − 2s_n)^d − (n − 4s_n)^d)/s_n^d, which is less than or equal to 2d(n/s_n)^{d−1}. Therefore, we have P(E_{d,n}) ≥ (1 − exp(−s_n^d λ_Y))^{2d(n/s_n)^{d−1}}. Next, we show that E_{d,n} ⊆ F_{d,n}, from which the corresponding bound on P(F_{d,n}) follows. Suppose Y_j is the Y-point closest to X_i, let Z_i be the point of ∂(J_{d,n}), the boundary of J_{d,n}, closest to X_i, and let P_i be the projection of Y_j onto the line segment joining X_i and Z_i. Moreover, let d(X_i, P_i) = b and d(Y_j, P_i) = a; the radius of the constrained covering hypersphere is then determined by a and b as in the two-dimensional case. As in the two-dimensional case, ∆_{J_{d,n}} ≥ 0. Given the event F_{d,n}, constrained covering hypersphere resizing can only occur for those X-points in J_{d,n} \ J_{d,n}‴, and the resized hyperspheres do not intersect J_{d,n}‴. The reason is that there is a Y_j closest to X_i in a nearby hypercube, so that the radius of the resized covering hypersphere B(X_i) is of order √d s_n, while the distance from such an X_i to the boundary of J_{d,n}‴ is greater than or equal to √d s_n. Therefore, ∆_{J_{d,n}} ≤ N_X(J_{d,n} \ J_{d,n}‴).
By the arguments in the preceding paragraph and by the union bound (for the middle inequality below), for n sufficiently large and s_n = ((d + δ) log(n)/λ_Y)^{1/d} for some δ ∈ (0, 1), the resulting bound on P(E_{d,n}^c) is summable in n. Moreover, the expected number of X-points in the region between the nested hypercubes satisfies E[N_X] ∼ c n^{d−1} s_n, so by a Chernoff bound for the Binomial distribution, for n sufficiently large, the corresponding tail probability is also summable in n.
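The role of this particular choice of s_n is to make the empty-hypercube probability polynomially small: with s_n = ((d + δ) log(n)/λ_Y)^{1/d}, we have exp(−s_n^d λ_Y) = n^{−(d+δ)} exactly, which is summable in n. A minimal numerical check (an illustration only; names are ours):

```python
import math

def s_n(n, d, delta, lam_Y):
    """s_n = ((d + delta) * log(n) / lambda_Y)^(1/d)."""
    return ((d + delta) * math.log(n) / lam_Y) ** (1.0 / d)

n, d, delta, lam_Y = 100, 3, 0.5, 2.0
s = s_n(n, d, delta, lam_Y)
lhs = math.exp(-(s ** d) * lam_Y)   # empty-hypercube probability
rhs = n ** (-(d + delta))           # the summable power n^{-(d + delta)}
print(abs(lhs - rhs) / rhs < 1e-9)  # True: the two expressions coincide
```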
Notice that this choice of s_n implies P(F_{d,n}^c) → 0 as n → ∞. By the Borel-Cantelli Lemma, the above calculations imply the desired almost sure convergence, which completes the proof.
The proofs of the higher-dimensional extensions of Theorems 1.6 and 1.7 also follow with similar geometric adjustments for higher dimensions. As the main changes in proving the extension of Lemma 4.1, we modify the cases of adding and deleting Y-points as follows. In the case of adding one new Y-point, Y_a, we equally divide the hypercube centered at Y_a with side length 4b into 8^d smaller hypercubes, and refer to the 8^d small hyperspheres inscribed in these hypercubes, each with radius b/4, as grid hyperspheres. As the Poisson process Y has rate λ_Y, the probability that a particular grid hypersphere covers no Y-point is exp(−λ_Y π^{d/2}(b/4)^d / Γ(d/2 + 1)), where Γ(·) is the usual gamma function. The proof then proceeds as in the d = 2 case, inserting this probability appropriately. The same conclusion of Corollary 4.2 also holds for g_d(r), with the proof requiring almost no adjustment for higher dimensions.
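The gamma-function expression above is just the volume of a d-dimensional ball, vol_d(ρ) = π^{d/2} ρ^d / Γ(d/2 + 1), so a grid hypersphere of radius b/4 is empty of Y-points with probability exp(−λ_Y vol_d(b/4)). A small sketch (function names are ours):

```python
import math

def ball_volume(d, radius):
    """Volume of the d-dimensional ball: pi^(d/2) * r^d / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) * radius ** d / math.gamma(d / 2 + 1)

def empty_grid_hypersphere_prob(d, b, lam_Y):
    """P(a grid hypersphere of radius b/4 contains no point of a
    rate-lam_Y Poisson process) = exp(-lam_Y * vol_d(b/4))."""
    return math.exp(-lam_Y * ball_volume(d, b / 4))

# d = 2 recovers the planar grid-ball probability exp(-pi * (b/4)^2 * lam_Y)
print(empty_grid_hypersphere_prob(2, 1.0, 1.0))
```

For d = 2 and λ_Y = r this reduces to exp(−πr(b/4)²), matching the two-dimensional proof.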
The extension of Theorem 1.8 can be proved similarly to the two-dimensional case: we first generalize the extension of Theorem 1.7 to piecewise-constant densities, and then extend it to the continuous case. The proofs are analogous to those in the two-dimensional setting.
Remark 6.2. Notice that g_2(r) in Theorems 1.5-1.8 (and likewise g_d(r) in Sect. 6) is not explicitly available, in contrast to the one-dimensional case. In applications, this might constitute a drawback, which can be overcome by estimating g_2(r) empirically by Monte Carlo simulations (which is not pursued in this article).
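As a concrete illustration of such a Monte Carlo estimate (a sketch under our own naming, not the authors' code): exact minimum dominating sets are costly to compute, so the sketch below uses the greedy set-cover heuristic, which yields an upper bound on Γ(X_n, Y_m); averaging the scaled value over replications gives a rough empirical proxy for g_2(r):

```python
import math
import random

def greedy_domination_ratio(n, r, seed=0):
    """Greedy upper bound on Gamma(X_n, Y_m)/n for uniform points in the
    unit square, with m = round(r * n) (greedy set cover, not exact)."""
    rng = random.Random(seed)
    m = max(1, round(r * n))
    X = [(rng.random(), rng.random()) for _ in range(n)]
    Y = [(rng.random(), rng.random()) for _ in range(m)]
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # covering-ball radius of X_i: distance to its nearest Y-point
    rad = [min(dist(x, y) for y in Y) for x in X]
    # arcs of the CCCD: i -> j whenever X_j lies in B(X_i, rad[i])
    covers = [{j for j in range(n) if dist(X[i], X[j]) <= rad[i]}
              for i in range(n)]
    uncovered, gamma = set(range(n)), 0
    while uncovered:
        best = max(range(n), key=lambda i: len(covers[i] & uncovered))
        uncovered -= covers[best]
        gamma += 1
    return gamma / n

print(greedy_domination_ratio(200, 1.0))
```

Greedy set cover is within a logarithmic factor of the optimum, so this somewhat over-estimates the domination number; averaging over many seeds and letting n grow gives a rough upper envelope for g_2(r).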

Discussion and conclusions
We study the class cover problem (CCP) for random point sets in two (or higher) dimensions. In particular, given two classes X and Y in a sample space Ω with corresponding random variables X_i and Y_j, respectively, the covering ball of X_i, denoted B(X_i, r_i), is the set of points ω in Ω such that X_i is closer to ω than any Y_j is. That is, letting X_n = {X_1, X_2, ..., X_n} and Y_m = {Y_1, Y_2, ..., Y_m}, B(X_i, r_i) is the ball with radius r_i ≤ min_j d(X_i, Y_j). The goal in the CCP is to minimize the number of covering balls needed to cover all X-points, X_n. This goal is equivalent to finding a minimum dominating set for the digraph called the class cover catch digraph (CCCD). A CCCD has vertex set X_n and an arc (i.e., directed edge) from X_i to X_j whenever X_j ∈ B(X_i, r_i). The CCP (and hence the CCCD) is motivated by its application in pattern classification. DeVinney and Wierman proved the Strong Law of Large Numbers (SLLN) for the uniform distribution in one dimension [10], and Wierman and Xiang extended the SLLN to the case of general distributions in one dimension [31]. We study the behavior of the domination number of CCCDs when the X- and Y-points come from independent Poisson point processes in R², when X_n and Y_m are chosen uniformly from the unit square, and when the X- and Y-points have positive, bounded and continuous densities. In the Poisson process case, we prove a SLLN result for the domination number; i.e., we show that the domination number (properly scaled) converges almost surely to a limiting function, denoted g_d(r), where r is the ratio of the rate of the Poisson process for the Y-points to that for the X-points in R^d. The proof proceeds by applying a result on almost sure convergence of subadditive processes to a constrained domination number, and then showing that the difference between the unconstrained and constrained domination numbers vanishes in the limit, which is established by extensive geometric and probabilistic computations.
For the case of uniformly distributed points in the unit square, we obtain a WLLN result (i.e., convergence in probability) for the scaled domination number. Finally, we generalize this result to the case where the densities are positive, bounded and continuous on [0, 1] 2 and then extend the results to higher dimensions.
The solutions to the CCP (i.e., the minimum dominating sets of the CCCDs) are employed to build classifiers. For example, the balls around the members of the minimum dominating sets of the CCCDs can be used to construct discriminant regions for assigning class labels (see [8] for more detail). CCCD-based methods have been shown to have relatively good performance in classification (see [9,23]) and also to be robust to the class imbalance problem [20].
One major drawback of CCCDs is that the (exact or asymptotic) distribution of the domination number is only available in the one-dimensional case. Despite this difficulty, we were able to show SLLN and WLLN results for the domination number in the Poisson process and uniform distribution cases, respectively, and also to determine some properties of the limit of the domination number. The difficulties in extending the nice properties and results of the one-dimensional case to higher dimensions are discussed in [5]. These difficulties mainly arise from the lack of a natural ordering of points in two or higher dimensions, and from the fact that, when Y_m partitions the space into cells, the balls are not necessarily restricted to the particular cell their centers reside in. CCCDs were generalized to proximity catch digraphs (PCDs) in [4], where the distribution of the domination number is more tractable than for CCCDs. The current work also suggests that the domination number of CCCDs might exhibit asymptotic normality, so a CLT result is a topic of prospective research. This prospect is also highly contingent on finding the explicit form of the limiting function g_d(r), at least for d = 2, which is likewise an open problem.

Appendix A.
A.1 Proof of Lemma 4.1. For any δ > 0 and ε > 0, by the law of total probability, we decompose P(|∆_{n,M_n}|/n > ε) into four terms. The first term on the right-hand side corresponds to the case in which m and M_n differ by more than δn; the second term corresponds to deleting ρ_n Y-points; the third term corresponds to the case in which no change in the number of Y-points is needed; and the last term corresponds to adding ρ_n Y-points. Observe that if m = M_n, then ∆_{n,M_n} = 0, so the third term vanishes. Applying a similar argument as on page 432 of [10], which uses Chernoff's Theorem, the first term is bounded by Ke^{−kδn} for some constants K, k > 0 and sufficiently large n. Next, we bound the remaining terms in (A.1) by considering the cases of adding and deleting up to δn Y-points. Case 1: Adding up to δn Y-points. We first consider the case of adding one new Y-point, Y_a, i.e., m − M_n = 1.
As illustrated in Figure A.1, if Y_a falls into the covering ball B(X_i) of some point X_i, the covering ball B(X_i) will shrink to B′(X_i), so that the domination number may increase (but never decreases). Such an increase can be at most the number of X-points in B(X_i).
Note that it is possible for Y_a to fall into more than one covering ball. To take this into account, define the random variable B_a as the maximum radius of all balls that contain Y_a but contain no other Y-points. We know that, given B_a = b > 0, the covering balls into which Y_a could fall must be contained in the ball B(Y_a, 2b), the ball centered at Y_a with radius 2b. Otherwise, if there existed a covering ball that contains Y_a but is not contained in B(Y_a, 2b), then that covering ball would have radius greater than b but contain no Y-points, which contradicts B_a = b. Therefore, ∆_{n,M_n} is bounded above by the number of X-points in B(Y_a, 2b); thus 0 ≤ ∆_{n,M_n} ≤ N_X(B(Y_a, 2b)). Next, we calculate an upper bound for P(B_a > b). Define E(Y_a, b) to be the event that "there exists a ball in B(Y_a, 2b) with radius b which contains no Y-point." Note that in this definition the ball is a subset of B(Y_a, 2b), but is not necessarily centered at an X-point or a Y-point. From the definition of B_a, we have P(B_a > b) ≤ P(E(Y_a, b)). Thus, we will find an upper bound for P(E(Y_a, b)). As shown in Figure A.2, suppose we equally divide the square centered at Y_a (with sides parallel to the coordinate axes) with side length 4b into 8² = 64 smaller squares, and refer to the 64 small balls inscribed in the squares, each with radius b/4, as grid balls. If E(Y_a, b) occurs, i.e., there exists a ball in B(Y_a, 2b) with radius b which contains no Y-points, then that ball must contain a grid ball that covers no Y-point (as illustrated in Fig. A.2). Therefore, if E(Y_a, b) occurs, there must be a grid ball containing no Y-point. Since the Poisson process Y has rate λ_Y, the probability that a particular grid ball covers no Y-point is exp(−π(b/4)²λ_Y). Since λ_Y = r, by Boole's inequality, P(B_a > b) ≤ P(E(Y_a, b)) ≤ 64 exp(−πr(b/4)²).
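The grid-ball probability exp(−π(b/4)²λ_Y) can be verified by simulation (an illustrative sketch; function names are ours). We sample a rate-λ_Y homogeneous Poisson process on the unit square (a Poisson-distributed count with uniform locations) and estimate the chance that a disk of radius b/4 = 0.25 about the center is empty:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for a Poisson(lam) variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def empty_disk_freq(lam_Y, radius, trials=20000, seed=0):
    """Fraction of realizations of a rate-lam_Y Poisson process on [0,1]^2
    leaving the disk of the given radius around (0.5, 0.5) empty."""
    rng = random.Random(seed)
    empty = 0
    for _ in range(trials):
        n_pts = poisson_sample(lam_Y, rng)  # |[0,1]^2| = 1, so count ~ Poisson(lam_Y)
        pts = [(rng.random(), rng.random()) for _ in range(n_pts)]
        if all(math.hypot(x - 0.5, y - 0.5) > radius for (x, y) in pts):
            empty += 1
    return empty / trials

# theory: exp(-pi * 0.25^2 * lam_Y) for lam_Y = 1 is about 0.822
print(empty_disk_freq(1.0, 0.25))
```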
Applying (A.4), and since the X-points are independent of the Y-points, all X_i are identically distributed, and B_a is defined independently of the X-points, the right side of the resulting inequality equals the unconditional probability. Then, by Markov's Inequality, we obtain a bound in terms of P(X_1 ∈ B(Y_a, 2b)).
Note that if B(Y_a, 2b) is contained in J_{T(n)}, then P(X_1 ∈ B(Y_a, 2b)) = π(2b)²/|J_{T(n)}|. However, if Y_a is near the boundary of J_{T(n)}, it is possible that only part of B(Y_a, 2b) is contained in J_{T(n)}, so P(X_1 ∈ B(Y_a, 2b)) ≤ π(2b)²/|J_{T(n)}|. Summarizing the discussion above, we obtain the stated bound given T(n), and P(B_a > √2 T(n)) = 0. Recalling that P(B_a > b) ≤ 64 exp(−πr(b/4)²), we further bound the expression, where C > 0 is a constant, and therefore obtain the corresponding bound without conditioning on T(n). Next, we consider the case of adding one or more new Y-points. Using P(B_a^l > b_l) ≤ 64 exp(−πr(b_l/4)²) for each l as before, and recalling that we have chosen ρ_n ≤ δn, we finally obtain the desired bound for some constant C_1 > 0. Case 2: Deleting up to δn Y-points. As illustrated in Figure A.3, if Y_d is on the boundary of B(X_i) for some X_i, then deleting Y_d will cause B(X_i) to increase to B′(X_i), which we refer to as the enlarged covering ball. The enlarged covering ball B′(X_i) has radius equal to the distance between X_i and the second nearest Y-point, Y_j. It is worth noting that the domination number can decrease by at most the number of X-points in B′(X_i) \ B(X_i).
It is also possible for Y_d to fall into more than one enlarged covering ball. Refer to the original Y-points other than Y_d as Y′-points. Define the random variable B_d as the maximum radius of all balls that contain Y_d but contain no Y′-points. Given B_d = b > 0, the enlarged covering balls into which Y_d could fall must be contained in the ball B(Y_d, 2b). Otherwise, if there existed an enlarged covering ball that contains Y_d but is not contained in B(Y_d, 2b), then that enlarged covering ball would have radius greater than b but contain no Y′-point, which contradicts B_d = b. Therefore, |∆_{n,M_n}| is bounded above by the number of X-points in B(Y_d, 2b); thus |∆_{n,M_n}| ≤ N_X(B(Y_d, 2b)). Therefore, using the same argument as in the case of adding points, for any ρ_n ∈ {1, 2, ..., δn}, we obtain the analogous bound for some constant C_2 > 0. Therefore, for any fixed δ > 0, the first term Ke^{−kδn} goes to 0 as n → ∞. Also, since δ > 0 can be arbitrarily small, we conclude that P(|∆_{n,M_n}|/n > ε) → 0, so ∆_{n,M_n}/n → 0 in probability.
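For reference, the four-term decomposition (A.1) invoked at the start of this proof plausibly has the following form (our reconstruction from the surrounding description; the exact indexing in the paper may differ):

```latex
P\!\left(\frac{|\Delta_{n,M_n}|}{n} > \varepsilon\right)
\le P(|M_n - m| > \delta n)
+ \sum_{\rho_n = 1}^{\lfloor \delta n \rfloor}
    P\!\left(\frac{|\Delta_{n,M_n}|}{n} > \varepsilon,\ M_n - m = \rho_n\right)
+ P\!\left(\frac{|\Delta_{n,M_n}|}{n} > \varepsilon,\ M_n = m\right)
+ \sum_{\rho_n = 1}^{\lfloor \delta n \rfloor}
    P\!\left(\frac{|\Delta_{n,M_n}|}{n} > \varepsilon,\ m - M_n = \rho_n\right)
```

The middle term vanishes since ∆_{n,M_n} = 0 when m = M_n; the adding and deleting sums are the ones bounded in Cases 1 and 2.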
Remark A.1. Notice that when adding points, P(B_a > b) has an exponentially decaying bound, while when deleting points, P(B_d > b) has a polynomially decaying bound. The main difference between these two cases is that when adding points we add from a Poisson process with rate λ_Y, whereas when deleting points we delete from a uniform distribution over the region of interest.
- g_2(r) is an increasing function of r: Next, we show that g_2(r) increases as r increases. We first suppose that, for any 0 < r_1 < r_2, there is a Poisson process X with rate 1, a Poisson process Y_1 with rate r_1, and another Poisson process Y_{2−1} with rate r_2 − r_1. For any integer n > 0, let T(n) be the smallest real number such that there are n + 1 X-points in J_{T(n)}. Suppose next that M_1(n) is the (random) number of Y_1-points in J_{T(n)}, and M_{2−1}(n) is the (random) number of Y_{2−1}-points in J_{T(n)}. We refer to the merged Y_1-points and Y_{2−1}-points as Y_2-points. We define Γ_{n,M_1(n)} as the domination number generated by the X-points and Y_1-points in J_{T(n)}, and Γ_{n,M_2(n)} as the domination number generated by the X-points and Y_2-points in J_{T(n)}. Basically, we have just added the M_{2−1}(n) Y_{2−1}-points to the M_1(n) Y_1-points, which allows us to study the change from Γ_{n,M_1(n)} to Γ_{n,M_2(n)}. Since adding Y-points can never decrease the domination number, Γ_{n,M_2(n)} is larger than or equal to Γ_{n,M_1(n)}. Recall that the Y_1-points are generated from a Poisson process with rate r_1 and the Y_{2−1}-points from a Poisson process with rate r_2 − r_1; hence the Y_2-points are generated from a Poisson process with rate r_2. Therefore, by previous results, we have lim_{n→∞} Γ_{n,M_1(n)}/|J_{T(n)}| = g_2(r_1) a.s. and lim_{n→∞} Γ_{n,M_2(n)}/|J_{T(n)}| = g_2(r_2) a.s.
Recalling that Γ_{n,M_2(n)} is larger than or equal to Γ_{n,M_1(n)}, we conclude that g_2(r_2) ≥ g_2(r_1). - g_2(r) is continuous: For any r_1, r_2 > 0 and ε > 0, we must show that there exists a δ = δ(ε) > 0 such that if |r_2 − r_1| < δ, then |g_2(r_2) − g_2(r_1)| ≤ ε. Suppose there is a Poisson process X with rate 1, a Poisson process Y_1 with rate r_1, and another Poisson process Y_2 with rate r_2. Then for any integer n > 0, let T(n) be the smallest real number such that there are n + 1 X-points in J_{T(n)}. Suppose next that M_1(n) is the (random) number of Y_1-points in J_{T(n)}, and M_2(n) is the (random) number of Y_2-points in J_{T(n)}. Taking into consideration that almost sure convergence implies convergence in probability, we have lim_{n→∞} Γ_{n,M_1(n)}/|J_{T(n)}| = g_2(r_1) in probability and lim_{n→∞} Γ_{n,M_2(n)}/|J_{T(n)}| = g_2(r_2) in probability.
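The coupling used in both arguments above rests on the superposition property of Poisson processes: merging independent processes of rates r_1 and r_2 − r_1 gives a process of rate r_2. A minimal numerical illustration of this on point counts (a sketch; the sampler and names are ours):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for a Poisson(lam) variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def merged_mean_count(r1, r2, area=1.0, trials=20000, seed=0):
    """Average count of the merged rate-r1 and rate-(r2 - r1) processes
    over a window of the given area; superposition predicts r2 * area."""
    rng = random.Random(seed)
    total = sum(poisson_sample(r1 * area, rng)
                + poisson_sample((r2 - r1) * area, rng)
                for _ in range(trials))
    return total / trials

print(merged_mean_count(0.5, 2.0))   # close to 2.0
```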