
Minimal sample size in balanced ANOVA models of crossed, nested, and mixed classifications

Pages 1728-1743 | Received 05 Mar 2020, Accepted 19 May 2021, Published online: 22 Jun 2021

Abstract

We consider balanced one-way, two-way, and three-way ANOVA models to test the hypothesis that the fixed factor A has no effect. The other factors are fixed or random. We determine the noncentrality parameter for the exact F-test, describe its minimal value by a sharp lower bound, and thus we can guarantee the worst-case power for the F-test. These results allow us to compute the minimal sample size, i.e. the minimal number of experiments needed. We also provide a structural result for the minimum sample size, proving a conjecture on the optimal experimental design.


1. Introduction

Consider a balanced one-way, two-way or three-way ANOVA model with fixed factor A to test the null hypothesis $H_0$ that A has no effect, that is, all levels of A have the same effect. The other factors are denoted B, C (crossed with or nested in A) or U, V (factors that A is nested in). They can be fixed factors (printed in normal font) or random factors (printed in bold). As usual in ANOVA, we assume identifiability, normality, independence, homogeneity, and compound symmetry (Scheffé 1959; Maxwell, Delaney, and Kelley 2017). In particular, the fixed effects are identifiable and the random effects and errors have a normal distribution with mean zero and they are mutually independent. By A × B, we denote crossed factors with interaction; by A≻B, we denote that B is nested in A. Practical examples that are modeled by crossed, nested, and mixed classifications are included, for example, in Canavos and Koutrouvelis (2009), Doncaster and Davey (2007), Jiang (2007), Montgomery (2017), Rasch (1971), Rasch et al. (2011), Rasch, Spangl, and Wang (2012), Rasch and Schott (2018), and Rasch, Verdooren, and Pilz (2020). The number of levels of A (B, C, U, V) is denoted by a (b, c, u, v, respectively). The effects are denoted by Greek letters. For example, the effects of the fixed factor A in the one-way model A, the two-way nested model V≻A, and the three-way nested model U≻V≻A read
$$\alpha_i,\quad \alpha_{i(j)},\quad \alpha_{i(jk)},\qquad i=1,\dots,a,\ j=1,\dots,v,\ k=1,\dots,u. \tag{1}$$
The numbers of levels (excluding a) and the number of replicates n will be called parameters in this article.

In this article, we derive the details of the noncentrality parameter and show how to obtain the minimum sample size for a large family of ANOVA models.

  • We derive the details for the noncentrality parameter (Theorem 2.1).

  • We derive the worst-case noncentrality parameter (Theorem 2.4), required to obtain the guaranteed power of an ANOVA experiment.

  • We show how to determine the minimal experimental size for ANOVA experiments by a new structural result that we call the “pivot” effect (Theorem 2.7). The “pivot” effect means that one of the parameters (the “pivot” parameter) is more power-effective than the others. Considering this “pivot” effect is not only helpful for planning experiments but is indeed necessary in certain models; see Remark 2.3(ii).

Our main results thus concern the exact F-test noncentrality parameter, the power, and the minimum sample size determination; see Section 2. In Section 3, we include two exceptional models that do not have an exact F-test. In Section 4, we discuss the distinction between real and integer parameters for some of our results. The proofs are in Appendix A.

2. Main results

Consider a balanced one-way, two-way or three-way ANOVA model, with the notation above, to test the null hypothesis $H_0$ that the fixed factor A has no effect. For most of these models an exact F-test exists, under the usual assumptions mentioned above. The test statistic $F_A$ is a ratio whose numerator is the mean square (MS) of the fixed factor A, denoted $\mathrm{MS}_A$; the denominator depends on the model. The respective test statistic has an F-distribution (central under $H_0$, noncentral in general). We denote its parameters by the numerator degrees of freedom (d.f.) $\mathrm{df}_1$, the denominator d.f. $\mathrm{df}_2$, and the noncentrality parameter $\lambda$.

By $\sigma_y^2$, we denote the total variance; it is the sum of the variance components, such as $\sigma_\beta^2$ (the variance component of the factor B) and the error term variance $\sigma^2$.

2.1. The noncentrality parameter

Our first main result lists $\mathrm{df}_1$, $\mathrm{df}_2$, and the exact form of the noncentrality parameter $\lambda$. Our expressions for $\lambda$ show the detailed form in which the variance components occur. This exact form of $\lambda$ is the key to a reliable power analysis, which is essential for the design of experiments.

Theorem 2.1.

Consider a balanced one-way, two-way or three-way ANOVA model, with the assumptions of identifiability, normality, independence, homogeneity, and compound symmetry. We test the null hypothesis $H_0$ that the fixed factor A has no effect. Then, under the assumption that an exact F-test exists, the test statistic has an F-distribution (central under $H_0$, noncentral in general) with numerator d.f. $\mathrm{df}_1$, denominator d.f. $\mathrm{df}_2$, and noncentrality parameter $\lambda=R\cdot S/T$ obtained from Table 1.

Table 1. List of one-way, two-way, and three-way ANOVA models with fixed factor A, for use in Theorem 2.1, etc.

The proof of Theorem 2.1 is in Appendix A.

Example 2.2.

For the model A≻B≻C, Theorem 2.1 states that the test statistic $F_A=\mathrm{MS}_A/\mathrm{MS}_{B\,\mathrm{in}\,A}$ has an F-distribution (central under $H_0$, noncentral in general) with numerator d.f. $\mathrm{df}_1=a-1$, denominator d.f. $\mathrm{df}_2=a(b-1)$, and noncentrality parameter
$$\lambda=R\cdot S/T=\frac{b\cdot\sum_i\alpha_i^2}{\sigma_{\beta(\alpha)}^2+\frac{1}{c}\sigma_{\gamma(\alpha\beta)}^2+\frac{1}{cn}\sigma^2}.$$
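As a small numerical illustration of Example 2.2 (not part of the original article; the function name and the example values below are ours), the noncentrality parameter can be evaluated directly from its components:

```python
import numpy as np

def lambda_nested_ABC(alpha_effects, b, c, n, var_b_in_a, var_c_in_ab, var_error):
    """lambda = R*S/T for the test of A in the nested model A>B>C (Example 2.2),
    with R = b, S = sum_i alpha_i^2, T = var_b_in_a + var_c_in_ab/c + var_error/(c*n)."""
    S = float(np.sum(np.square(alpha_effects)))
    T = var_b_in_a + var_c_in_ab / c + var_error / (c * n)
    return b * S / T

# zero-mean A-effects (cf. Eq. (4)) with range delta = 2; all other values illustrative
print(lambda_nested_ABC([-1.0, 0.0, 1.0], b=4, c=2, n=3,
                        var_b_in_a=1.0, var_c_in_ab=0.5, var_error=2.0))
```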

Remark 2.3.

  (i) The models A×B×C and (A≻B)×C are excluded from Table 1, since an exact F-test does not exist; see Section 3. We also exclude the nesting of crossed factors into others, such as A≻(B×C).

  (ii) From inspecting the expression for $\lambda$ in Example 2.2, we obtain the following somewhat surprising observation. If n increases, then clearly $\lambda$ increases, but in the limit $n\to\infty$ we do not obtain $\lambda\to\infty$. This implies that increasing the number of replicates n increases the power, but there is a limit for the power if only n is increased. This observation affects each model in Table 1 with T consisting of more than one term. These are exactly the models which in Table 1 do not have the parameter n in the “pivot” column. In fact, the “pivot” effect (Theorem 2.7) shows that for these models not n but a different parameter should be increased to achieve any given prespecified power.

2.2. Least favorable case noncentrality parameter

For an exact F-test, the computation of the power is immediate: given the type I risk $\alpha$, obtain the type II risk $\beta$ by solving
$$F_{\mathrm{df}_1,\mathrm{df}_2;\,1-\alpha}=F^{\lambda}_{\mathrm{df}_1,\mathrm{df}_2;\,\beta}, \tag{2}$$
where $F^{\lambda}_{\nu_1,\nu_2;\,\gamma}$ denotes the $\gamma$-quantile of the F-distribution with degrees of freedom $\nu_1$ and $\nu_2$ and noncentrality parameter $\lambda$. Then $P=1-\beta$ is the power of the test. The next theorem is our second main result: we determine the noncentrality parameter $\lambda_{\min}$ in the least favorable case, that is, the sharp lower bound in $\lambda\ge\lambda_{\min}$. Using $\lambda_{\min}$ in (2) yields the guaranteed power $P_{\min}=(1-\beta)_{\min}$ of the test.
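In terms of (2), the power can be computed numerically as the tail probability of the noncentral F-distribution at the central $(1-\alpha)$-quantile. The following Python sketch (ours, not from the article; it uses scipy) expresses this:

```python
from scipy.stats import f, ncf

def power_exact_F(alpha, df1, df2, lam):
    """Power of the exact F-test, cf. Eq. (2):
    beta = P(F' <= F_{df1,df2;1-alpha}) for F' ~ noncentral F(df1, df2, lam)."""
    crit = f.ppf(1 - alpha, df1, df2)      # central (1 - alpha)-quantile
    beta = ncf.cdf(crit, df1, df2, lam)    # type II risk
    return 1 - beta

# illustrative values only
print(power_exact_F(alpha=0.05, df1=5, df2=170, lam=17.5))
```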

Let $\delta$ denote the minimum difference to be detected between the smallest and the largest treatment effects, i.e., between the minimum $\alpha_{\min}$ and the maximum $\alpha_{\max}$ of the set of the main effects of the fixed factor A,
$$\delta=\alpha_{\max}-\alpha_{\min}. \tag{3}$$
We assume the standard condition to ensure identifiability of parameters, which is that $\alpha$ has zero mean in all directions (Fox 2015, pp. 157, 169, 178; Rasch et al. 2011, Section 3.3.1.1; Rasch and Schott 2018, Section 5; Rasch, Verdooren, and Pilz 2020, Section 5; Scheffé 1959, Section 4.1, p. 92; Searle and Gruber 2017, p. 415, Section 7.2.i). That is, exemplified for three models,
$$A\colon\ \sum_i\alpha_i=0,\qquad V\!\succ\! A\colon\ \sum_i\alpha_{i(j_0)}=\sum_j\alpha_{i_0(j)}=0\ \text{ for any } i_0,j_0,\qquad U\!\succ\! V\!\succ\! A\colon\ \sum_i\alpha_{i(j_0k_0)}=\sum_j\alpha_{i_0(jk_0)}=\sum_k\alpha_{i_0(j_0k)}=0\ \text{ for any } i_0,j_0,k_0. \tag{4}$$

Theorem 2.4.

We have the following lower bounds for the noncentrality parameter λ.

  (i) With the parameter or product of parameters denoted R in Table 1, we have
$$\lambda\ \ge\ \frac{R}{2}\cdot\frac{\delta^2}{\sigma_y^2}.$$
More precisely, denoting by $\sigma_{y,\mathrm{active}}^2\le\sigma_y^2$ the sum of those variance components that occur in T, we have
$$\lambda\ \ge\ \frac{R}{2}\cdot\frac{\delta^2}{\sigma_{y,\mathrm{active}}^2}.$$

  (ii) For the models in Table 1 that involve a factor V that A is nested in, let $m=\max(v,a)$. Then the lower bound in (i) can be raised to
$$\lambda\ \ge\ \frac{R}{2}\cdot\frac{\delta^2}{\sigma_{y,\mathrm{active}}^2}\cdot\frac{m}{m-1}.$$

  (iii) For the models in Table 1 that involve the factors U, V that A is nested in, let $m_1\le m_2\le m_3$ denote a, u, v sorted from least to greatest. Then the lower bound in (i) can be raised to
$$\lambda\ \ge\ \frac{R}{2}\cdot\frac{\delta^2}{\sigma_{y,\mathrm{active}}^2}\cdot\frac{m_2 m_3}{(m_2-1)(m_3-1)}.$$

The proof of Theorem 2.4 is in Appendix A.

Remark 2.5.

  (i) The importance of a lower bound for the noncentrality parameter $\lambda$ is its use for the power analysis, required for the design of experiments. By Theorem 2.4, we establish such a bound. The difference from the previous literature (Rasch et al. 2011; Rasch, Spangl, and Wang 2012) is that we use the correct, detailed form of the noncentrality parameter $\lambda$ from Theorem 2.1, and we use the new, sharp bound for the sum of squared effects from Kaiblinger and Spangl (2020).

  (ii) The bounds in Theorem 2.4 are sharp. The extremal case (minimal $\lambda$) occurs if the main effects (1) of the factor A are least favorable, while satisfying (3) and (4), and also the variance components are least favorable, while their sum does not exceed $\sigma_y^2$.

    For the extremal $\alpha_i,\alpha_{i(j)},\alpha_{i(jk)}$ configurations, we refer to Kaiblinger and Spangl (2020). The least favorable splitting of $\sigma_y^2$ is that the total variance is consumed entirely by the first term of T in Table 1; see the worst cases in Examples 2.6(i) and 2.6(ii).

  (iii) If in a model there are “inactive” variance components (i.e., some components of the model do not occur in T), then the most favorable splitting of $\sigma_y^2$ is that the total variance tends to be consumed entirely by inactive components. In these cases, $\lambda$ goes to infinity, $\lambda\to\infty$. See the best case in Example 2.6(i).

  (iv) If in a model all variance components are “active” (i.e., all components of the model also occur in T), then the most favorable splitting of $\sigma_y^2$ is that the total variance is consumed entirely by the last term of T. See the best case in Example 2.6(ii).

Example 2.6.

  (i) For the model A×B×C, from Table 1 we have
$$T=\sigma_{\alpha\beta}^2+\frac{1}{cn}\sigma^2.$$

The “active” variance components are defined to be the variance components that occur in T,
$$\sigma_y^2=\underbrace{\sigma_{\alpha\beta}^2+\sigma^2}_{\sigma_{y,\mathrm{active}}^2}+\sigma_\beta^2+\sigma_{\beta\gamma}^2+\sigma_{\alpha\beta\gamma}^2.$$

Since R = b, by Theorem 2.4 we obtain for the noncentrality parameter $\lambda$,
$$\lambda\ \ge\ \frac{b}{2}\cdot\frac{\delta^2}{\sigma_{y,\mathrm{active}}^2}\ \ge\ \frac{b}{2}\cdot\frac{\delta^2}{\sigma_y^2}.$$

Since the first term of T is $\sigma_{\alpha\beta}^2$ and the inactive components are $\sigma_\beta^2,\sigma_{\beta\gamma}^2,\sigma_{\alpha\beta\gamma}^2$, we obtain by Remark 2.5 that the extremal total variance $\sigma_y^2$ splittings are
$$(\sigma_{\alpha\beta}^2,\sigma^2,\sigma_\beta^2,\sigma_{\beta\gamma}^2,\sigma_{\alpha\beta\gamma}^2)=\begin{cases}(*,0,0,0,0), & \text{worst},\quad \lambda=\dfrac{b}{2}\cdot\dfrac{\delta^2}{\sigma_y^2},\\[6pt](0,0,*,*,*), & \text{best},\quad \lambda\to\infty.\end{cases}$$

  (ii) For the model A≻B≻C, from Table 1 we have
$$T=\sigma_{\beta(\alpha)}^2+\frac{1}{c}\sigma_{\gamma(\alpha\beta)}^2+\frac{1}{cn}\sigma^2.$$

All variance components occur in T, thus all variance components are “active,”
$$\sigma_y^2=\sigma_{y,\mathrm{active}}^2=\sigma_{\beta(\alpha)}^2+\sigma_{\gamma(\alpha\beta)}^2+\sigma^2.$$

Since R = b, by Theorem 2.4 we obtain for the noncentrality parameter $\lambda$,
$$\lambda\ \ge\ \frac{b}{2}\cdot\frac{\delta^2}{\sigma_{y,\mathrm{active}}^2}=\frac{b}{2}\cdot\frac{\delta^2}{\sigma_y^2}.$$
In this model there are no “inactive” variance components, and by Remark 2.5 we obtain
$$(\sigma_{\beta(\alpha)}^2,\sigma_{\gamma(\alpha\beta)}^2,\sigma^2)=\begin{cases}(*,0,0), & \text{worst},\quad \lambda=\dfrac{b}{2}\cdot\dfrac{\delta^2}{\sigma_y^2},\\[6pt](0,0,*), & \text{best},\quad \lambda=\dfrac{bcn}{2}\cdot\dfrac{\delta^2}{\sigma_y^2}.\end{cases}$$
A numerical sketch of the worst case follows this example.
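To connect the worst case of Example 2.6(ii) with the power formula (2), the following Python sketch (ours; the function name is hypothetical) evaluates the guaranteed power using $\lambda_{\min}=(b/2)\,\delta^2/\sigma_y^2$ together with $\mathrm{df}_1=a-1$ and $\mathrm{df}_2=a(b-1)$ from Example 2.2:

```python
from scipy.stats import f, ncf

def guaranteed_power_nested_ABC(alpha, a, b, delta, sigma_y2):
    """Worst-case (guaranteed) power for the A>B>C test of A:
    lambda_min = (b/2) * delta^2 / sigma_y^2, df1 = a-1, df2 = a(b-1)."""
    df1, df2 = a - 1, a * (b - 1)
    lam_min = 0.5 * b * delta**2 / sigma_y2
    return 1 - ncf.cdf(f.ppf(1 - alpha, df1, df2), df1, df2, lam_min)

# e.g. a = 6 levels of A, b = 10, and delta equal to the total standard deviation
print(guaranteed_power_nested_ABC(alpha=0.05, a=6, b=10, delta=1.0, sigma_y2=1.0))
```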

2.3. Minimal sample size

The size of the experiment is the product of the numbers of levels of the factors that occur in the model, including the number n of replications. For prespecified power requirements $P\ge P_0$, the minimal sample size can be determined by Theorem 2.4: compute $\lambda_{\min}$ and thus obtain the guaranteed power $P_{\min}=(1-\beta)_{\min}$ for each set of parameters that belongs to a given size, increasing the size until the power $P_0$ is reached.

The next theorem is the main structural result of our article. We show that for given power requirements $P\ge P_0$, the minimal sample size can be obtained by varying only one parameter, which we call the “pivot” parameter, keeping the other parameters minimal. We thus prove and generalize suggestions in Rasch et al. (2011), see Remark 2.9(ii). Part (i) of the next theorem describes the key property of the “pivot” parameter, part (ii) is an intermediate result, and part (iii) is the minimum sample size result.

Theorem 2.7.

Denote by “pivot” parameter the parameter in the second column of Table 1. Then the following hold.

  (i) If a parameter increases, then the power increases most if it is the “pivot” parameter.

  (ii) For fixed size, if we allow the parameters to be real numbers, then the maximal power occurs if the “pivot” parameter varies and the other parameters are minimal.

  (iii) For fixed power, if we allow the parameters to be real numbers, then the minimum size occurs if the “pivot” parameter varies and the other parameters are minimal.

The proof of Theorem 2.7 is in Appendix A.

Example 2.8.

For the model A≻B≻C, we have the following. For given power requirements $P\ge P_0$, the minimal sample size is obtained by varying the parameter b, keeping c and n minimal. For this and two other examples, see Table 2.

Table 2. Exemplifying the “pivot” effect (Theorem 2.7) for three models.

Remark 2.9.

  (i) The “pivot” parameter in Theorem 2.7, defined in the second column of Table 1, can also be identified directly from the model formula in the first column of the table. That is, the “pivot” parameter is the number of levels of the random factor nearest to A, if we include the number n of replicates as a virtual random factor and exclude factors that A is nested in (labeled U, V). For example, in A≻B≻C the random factor B is nearer to A than the random factor C or the virtual random factor of replicates; and indeed the “pivot” parameter is b. Inspired by related comments in Doncaster and Davey (2007, p. 23), we interpret this heuristic observation as a correlation between higher power effect and higher organizational level.

  (ii) In Rasch et al. (2011, p. 73), it is observed that for the two-way model A×B only the parameter b should vary, but n should be chosen as small as possible, to achieve the minimum sample size. For the model V≻A, it is conjectured (Rasch et al. 2011, p. 78) that only n should vary, but v should be as small as possible, to achieve the minimal sample size. These suggestions are motivated by inspecting the effect of the parameters on the denominator d.f. $\mathrm{df}_2$. By Theorem 2.7(iii), we prove the conjecture and generalize these observations. In fact, from Table 1 the “pivot” parameter for A×B is b, and the “pivot” parameter for V≻A is n. Our proof works by inspecting the effect of the parameters not only on $\mathrm{df}_1$ and $\mathrm{df}_2$ but also on the noncentrality parameter $\lambda$. Note that we assume the parameters to be real numbers; for the subtleties of the transition to integer parameters see Section 4.

The next example illustrates the minimal sample size computation for ANOVA models, based on our main results.

Example 2.10.

  (i) Consider the model A×B. Let $\alpha=0.05$, let a = 6, let $\delta=\sigma_y$, and consider the power requirement $P\ge 0.9$. From Theorem 2.7, we observe that the minimal design has n = 2 and only the “pivot” parameter b is relevant. By Theorems 2.1 and 2.4, we obtain that to achieve $P\ge 0.9$, the minimal design is $(b,n)=(35,2)$, with size $abn=420$ and power P = 0.909083. A numerical sketch follows this example.

  (ii) Consider the model A×B×C and assume $\sigma_{\alpha\gamma}^2=0$. This model is equivalent to the exact F-test models (A×B)≻C and A×(B≻C), cf. Lemma 3.1 below. Let $\alpha=0.05$, let a = 6, let $\delta=\sigma_y$, and consider the power requirement $P\ge 0.9$. By Theorem 2.7, we obtain that the minimal design has $c=n=2$ and only the “pivot” parameter b is relevant. By Theorems 2.1 and 2.4, we obtain that to achieve $P\ge 0.9$, the minimal design is $(b,c,n)=(35,2,2)$, with size $abcn=840$ and power P = 0.909083.
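The search in Example 2.10(i) can be illustrated with the following Python sketch (ours). It assumes, as our reading of Table 1 (which is not reproduced here), that for the mixed model A×B the test of A has $\mathrm{df}_1=a-1$, $\mathrm{df}_2=(a-1)(b-1)$, and worst-case noncentrality $\lambda_{\min}=(b/2)\,\delta^2/\sigma_y^2$; the search over b with n = 2 fixed follows Theorem 2.7:

```python
from scipy.stats import f, ncf

def worst_case_power_AxB(alpha, a, b, delta_over_sigma_y):
    # assumed reading of Table 1: df1 = a-1, df2 = (a-1)(b-1),
    # lambda_min = (b/2) * (delta/sigma_y)^2 (independent of n)
    df1, df2 = a - 1, (a - 1) * (b - 1)
    lam_min = 0.5 * b * delta_over_sigma_y**2
    return 1 - ncf.cdf(f.ppf(1 - alpha, df1, df2), df1, df2, lam_min)

# Example 2.10(i): a = 6, alpha = 0.05, delta = sigma_y, required power 0.9, n = 2 fixed
b = 2
while worst_case_power_AxB(0.05, 6, b, 1.0) < 0.9:
    b += 1
print(b, worst_case_power_AxB(0.05, 6, b, 1.0))  # the paper reports b = 35, P = 0.909083
```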

Remark 2.11.

In Example 2.10, the power P = 0.909083 for $(b,c,n)=(35,2,2)$ in (ii) is the same as the power for $(b,n)=(35,2)$ in (i). This coincidence is implied by the fact that (i) and (ii) have the same d.f., and in the worst case of (i) and (ii) the total variance is consumed entirely by $\sigma_{\alpha\beta}^2$, cf. Remark 2.5(ii).

3. Models with approximate F-test

For the two models
$$A\times B\times C \quad\text{and}\quad (A\succ B)\times C, \tag{5}$$
an exact F-test does not exist. Approximate F-tests can be obtained by Satterthwaite’s approximation, which goes back to Behrens (1929) and Welch (1938, 1947) and was generalized by Satterthwaite (1946); see Sahai and Ageel (2000, Appendix K). The details of the approximate F-tests for the models in (5) are in Rasch et al. (2011, Sections 3.4.1.3 and 3.4.4.5). Satterthwaite’s approximation in a similar or different form also occurs, for example, in Davenport and Webster (1972, 1973), Doncaster and Davey (2007, pp. 40–41), Hudson and Krutchkoff (1968), Lorenzen and Anderson (2019), Rasch, Spangl, and Wang (2012), and Wang, Rasch, and Verdooren (2005); it is also referred to as a quasi-F-test (Myers and Well 2010).

The d.f. of the approximate F-test involve the mean squares, which have to be simulated. To approximate the power of the test, simulate data such that $H_0$ is false and compute the rate of rejections; this rate approximates the power of the test. In the middle plot of Figure 1, we give an example of the power behavior for the approximate F-test model (A≻B)×C. The plot shows that the “pivot” effect for exact F-tests (Theorem 2.7) does not generalize to approximate F-tests.
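To illustrate the rejection-rate principle, the following Python sketch (ours) estimates power by simulation. For brevity it uses the mixed two-way model A×B with the exact F-test $F_A=\mathrm{MS}_A/\mathrm{MS}_{AB}$ rather than the quasi-F construction for the models in (5); the same simulate-and-count idea applies there.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)

def simulated_power_AxB(a_effects, b, n, var_ab, var_b, var_e, alpha=0.05, reps=2000):
    """Monte Carlo power estimate for the F-test of A in the mixed model A x B:
    simulate data under H1 and return the rejection rate."""
    a = len(a_effects)
    crit = f.ppf(1 - alpha, a - 1, (a - 1) * (b - 1))
    rejections = 0
    for _ in range(reps):
        beta = rng.normal(0.0, np.sqrt(var_b), size=b)
        ab = rng.normal(0.0, np.sqrt(var_ab), size=(a, b))
        e = rng.normal(0.0, np.sqrt(var_e), size=(a, b, n))
        y = np.asarray(a_effects)[:, None, None] + beta[None, :, None] + ab[:, :, None] + e
        ybar_ij = y.mean(axis=2)
        ybar_i, ybar_j, ybar = ybar_ij.mean(axis=1), ybar_ij.mean(axis=0), ybar_ij.mean()
        ms_a = n * b * np.sum((ybar_i - ybar) ** 2) / (a - 1)
        ms_ab = n * np.sum((ybar_ij - ybar_i[:, None] - ybar_j[None, :] + ybar) ** 2) / ((a - 1) * (b - 1))
        rejections += ms_a / ms_ab > crit
    return rejections / reps

# zero-mean A-effects with range 2; variance components illustrative
print(simulated_power_AxB([-1.0, 0.0, 1.0], b=10, n=2, var_ab=1.0, var_b=1.0, var_e=1.0))
```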

Figure 1. Power and size for the mixed model (A≻B)×C, for a = 6, $\alpha=0.05$, $\delta=5$, and three variance component assignments $(\sigma_{\beta(\alpha)}^2,\sigma_\gamma^2,\sigma_{\alpha\gamma}^2,\sigma_{\beta\gamma(\alpha)}^2,\sigma^2)=(10,5,0,5,5),\ (5,5,5,5,5),\ (0,5,10,5,5)$, from left to right. Each contour plot shows the guaranteed power $P_{\min}=(1-\beta)_{\min}$ (solid curves) overlaid with the size factor $b\cdot c$ (red, dashed hyperbolas) as functions of $b,c\le 25$, for fixed n = 2. By Lemma 3.1(ii) the left model is equivalent to A≻B≻C, such that by Theorem 2.7 the “pivot” parameter is b. The middle plot is an approximate F-test model (the power is approximated by 10,000 simulations); there is no “pivot” effect. The right model is equivalent to (A×C)≻B; the “pivot” parameter is c.

The next lemma rephrases observations in Rasch et al. (2011) and Rasch, Spangl, and Wang (2012). It allowed us to avoid approximations and use exact F-test computations for the left and right plots of Figure 1.

Lemma 3.1.

The following special cases of (5) are equivalent to exact F-test models, in the sense of identical d.f. and noncentrality parameters.

  (i) If in the model A×B×C we have $\sigma_{\alpha\gamma}^2=0$, then it is equivalent to (A×B)≻C and A×(B≻C).

  (ii) If in the model (A≻B)×C we have $\sigma_{\beta(\alpha)}^2=0$, then it is equivalent to (A×C)≻B and A×(C≻B); while if $\sigma_{\alpha\gamma}^2=0$, then it is equivalent to A≻B≻C.

Proof.

The equivalences follow from inspecting the d.f. and the noncentrality parameter. □

Remark 3.2.

To look up Table 1 in the first case of Lemma 3.1(ii), swap the factor names B and C first.

4. Real versus integer parameters

The “pivot” effect for the minimum sample size described in Theorem 2.7(iii) is formulated under the assumption that the parameters are real numbers. The effect also occurs in most practical examples, where the parameters are integers. But we constructed the following example to point out that for integer parameters the “pivot” effect is not guaranteed.

Example 4.1.

Consider the two-way model A×B with a = 15, $\alpha=0.1$, $\delta=7$, $(\sigma_{\beta(\alpha)}^2,\sigma^2)=(0.01,8)$, and required power $P\ge 0.9$. Then for real $b,n\ge 2$, the minimum sample size obtained by Theorem 2.7(iii) occurs for $(b,n)=(4.019937,2)$, where P = 0.9. For integers $b,n=2,3,\dots$, the minimum sample size occurs for $(b,n)=(3,3)$, where P = 0.902873. Thus, in this example the “pivot” effect is obstructed if we switch from real numbers to integers. In more realistic examples this obstruction does not occur.
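An integer grid search of the kind used in Example 4.1 can be sketched as follows in Python (ours). The degrees of freedom and the noncentrality parameter are our assumed reading of Table 1 for this model, namely $\mathrm{df}_1=a-1$, $\mathrm{df}_2=(a-1)(b-1)$, and $\lambda=(b/2)\,\delta^2/(\sigma_1^2+\sigma^2/n)$ with worst-case effects; the paper's exact table entries are not reproduced here.

```python
import itertools
from scipy.stats import f, ncf

def power_AxB(alpha, a, b, n, delta, var1, var_err):
    # assumed entries for the model A x B: df1 = a-1, df2 = (a-1)(b-1),
    # lambda = (b/2) * delta^2 / (var1 + var_err / n) with worst-case A-effects
    df1, df2 = a - 1, (a - 1) * (b - 1)
    lam = 0.5 * b * delta**2 / (var1 + var_err / n)
    return 1 - ncf.cdf(f.ppf(1 - alpha, df1, df2), df1, df2, lam)

# integer search for Example 4.1: a = 15, alpha = 0.1, delta = 7, components (0.01, 8)
feasible = ((b * n, b, n) for b, n in itertools.product(range(2, 30), repeat=2)
            if power_AxB(0.1, 15, b, n, 7, 0.01, 8) >= 0.9)
print(min(feasible))  # the paper reports the integer minimum at (b, n) = (3, 3)
```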

Remark 4.2.

While Example 4.1 shows that the transition to integers can obstruct the “pivot” effect (if only by an unrealistic example), we remark that the obstruction is limited; that is, the real number computation has the following valid implication for the integer result. The real number minimum at $(b,n)=(4.019937,2)$, readily computed by using Theorem 2.7(iii), immediately implies that the integer minimum size occurs at (b, n) with $b\cdot n$ between $4.019937\cdot 2$ and $5\cdot 2$, that is, $b\cdot n\in\{9,10\}$; in fact, in the example $b\cdot n=9$. A similar implication holds for all models in Table 1.

5. Conclusions

We determine the noncentrality parameter of the exact F-test for balanced factorial ANOVA models. From a sharp lower bound for the noncentrality parameter, we obtain the power that can be guaranteed in the least favorable case. These results allow us to compute the minimal sample size, and we also provide a structural result for the minimal sample size. The structural result is formulated as a “pivot” effect, which means that one of the factors is more relevant than the others for the power and thus for the minimum sample size.

Acknowledgments

The authors are grateful to Karl Moder for helpful discussions and comments. We also thank the reviewer for useful comments.

References

Appendix A: Proofs

We include a short proof of the formula for the noncentrality parameter in Lindman (1992, p. 151), formulated here in a more general form.

Lemma A.1.

Let a test statistic F have a noncentral F-distribution with numerator and denominator d.f. $\mathrm{df}_1$ and $\mathrm{df}_2$, respectively, written as $F=Z_1/Z_2$ with $q>0$,
$$Z_1=q\cdot X_1/\mathrm{df}_1,\qquad Z_2=q\cdot X_2/\mathrm{df}_2,\qquad X_1\sim\chi^2(\mathrm{df}_1,\lambda),\qquad X_2\sim\chi^2(\mathrm{df}_2,0),$$
and $X_1,X_2$ stochastically independent. Then the noncentrality parameter $\lambda$ satisfies
$$\lambda=\mathrm{df}_1\cdot\left(\frac{E(Z_1)}{E(Z_2)}-1\right).$$

Proof.

Since $E(X_1)=\mathrm{df}_1+\lambda$ and $E(X_2)=\mathrm{df}_2$, we obtain $E(Z_1)=q\cdot(1+\lambda/\mathrm{df}_1)$ and $E(Z_2)=q$. Hence,
$$\frac{E(Z_1)}{E(Z_2)}=1+\frac{\lambda}{\mathrm{df}_1}, \tag{A.1}$$
which implies the expression for $\lambda$ in the lemma. □
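A quick Monte Carlo sanity check of Lemma A.1 (ours, not part of the article; all values are arbitrary):

```python
import numpy as np
from scipy.stats import ncx2, chi2

rng = np.random.default_rng(0)
df1, df2, lam, q = 5, 20, 12.0, 3.0    # arbitrary illustrative values
Z1 = q * ncx2.rvs(df1, lam, size=10**6, random_state=rng) / df1
Z2 = q * chi2.rvs(df2, size=10**6, random_state=rng) / df2
print(df1 * (Z1.mean() / Z2.mean() - 1))  # should be close to lam = 12
```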

Remark A.2.

Jensen’s inequality implies $E(Z_1)/E(Z_2)<E(Z_1/Z_2)=E(F)$. For $E(F)$, see Johnson, Kotz, and Balakrishnan (1995, formula (30.3a)).

The next lemma summarizes monotonicity properties of the noncentral F-distribution from Ghosh (1973), listed in Hocking (2003, Section 16.4.2); see also Finner and Roters (1997, Theorem 4.3) for a sharper statement. Recall that for $0\le\gamma\le 1$, we let $F_{\mathrm{df}_1,\mathrm{df}_2;\gamma}$ denote the $\gamma$-quantile of the central F-distribution with $\mathrm{df}_1$ and $\mathrm{df}_2$ degrees of freedom.

Lemma A.3.

Let F be distributed according to the noncentral F-distribution $F^{\lambda}_{\mathrm{df}_1,\mathrm{df}_2}$ with noncentrality parameter $\lambda$. Then, referring to the probability $P(F>F_{\mathrm{df}_1,\mathrm{df}_2;\gamma})$ as power, we have: if $\mathrm{df}_1$ decreases and $\mathrm{df}_2$, $\lambda$ increase, then the power increases. That is, we have the implication
$$\left.\begin{aligned}\mathrm{df}_1'&\le\mathrm{df}_1\\ \mathrm{df}_2'&\ge\mathrm{df}_2\\ \lambda'&\ge\lambda\end{aligned}\right\}\ \Longrightarrow\ P(F'>F_{\mathrm{df}_1',\mathrm{df}_2';\gamma})\ \ge\ P(F>F_{\mathrm{df}_1,\mathrm{df}_2;\gamma}),$$
with $F\sim F^{\lambda}_{\mathrm{df}_1,\mathrm{df}_2}$ and $F'\sim F^{\lambda'}_{\mathrm{df}_1',\mathrm{df}_2'}$.

Proof.

For varying $\mathrm{df}_1$, see Ghosh (1973, Theorem 6). For varying $\mathrm{df}_2$, apply Ghosh (1973, Theorem 5) with $\lambda_0=0$. For varying $\lambda$, see Witting (1985, p. 219, Satz 2.36(b)) or Bhattacharya and Burman (2016, p. 53, Exercise 2.9). □
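As a numerical illustration of the monotonicity in Lemma A.3 (ours; the values are arbitrary), one can check:

```python
from scipy.stats import f, ncf

def power(df1, df2, lam, gamma=0.95):
    # P(F > F_{df1,df2;gamma}) for F ~ noncentral F(df1, df2, lam)
    return ncf.sf(f.ppf(gamma, df1, df2), df1, df2, lam)

base = power(5, 20, 10.0)
print(power(4, 20, 10.0) >= base)  # smaller df1: power does not decrease
print(power(5, 40, 10.0) >= base)  # larger df2: power does not decrease
print(power(5, 20, 15.0) >= base)  # larger lambda: power does not decrease
```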

Proof of Theorem 2.1.

We prove the result only for the model A≻B≻C; the proofs for the other models are analogous. In the expected mean squares table (Rasch et al. 2011, p. 100, Table 3.15), the two expressions
$$E(\mathrm{MS}_A)=\sigma^2+n\sigma_{\gamma(\alpha\beta)}^2+cn\sigma_{\beta(\alpha)}^2+\frac{bcn}{a-1}\sum_i\alpha_i^2, \qquad E(\mathrm{MS}_{B\,\mathrm{in}\,A})=\sigma^2+n\sigma_{\gamma(\alpha\beta)}^2+cn\sigma_{\beta(\alpha)}^2 \tag{A.2}$$
are equal under the null hypothesis $H_0$ of no A-effects. Hence, $H_0$ can be tested by the exact F-test
$$F_A=\frac{\mathrm{MS}_A}{\mathrm{MS}_{B\,\mathrm{in}\,A}}, \tag{A.3}$$
which under $H_0$ is central F-distributed, and in general noncentral F-distributed. From the ANOVA table (Rasch et al. 2011, p. 91, Table 3.10), the numerator and denominator d.f. are $\mathrm{df}_1=a-1$ and $\mathrm{df}_2=a(b-1)$, respectively. By Lemma A.1 the noncentrality parameter $\lambda$ is thus
$$\lambda=\frac{bcn\sum_i\alpha_i^2}{\sigma^2+n\sigma_{\gamma(\alpha\beta)}^2+cn\sigma_{\beta(\alpha)}^2}=\frac{b\cdot\sum_i\alpha_i^2}{\sigma_{\beta(\alpha)}^2+\frac{1}{c}\sigma_{\gamma(\alpha\beta)}^2+\frac{1}{cn}\sigma^2}. \tag{A.4}$$
□

Remark A.4.

  (i) The formula Equation (A.4) allows us to point out the difference of our results compared to the previous literature (Rasch et al. 2011, pp. 58–59). In fact, the expression bcn in the numerator on the left-hand side of Equation (A.4) coincides with the expression C in Rasch et al. (2011, Table 3.2), but note that the denominator is distinct. The exact expression for $\lambda$ in Equation (A.4) has the sum of variance components $\sigma_y^2=\sigma^2+\sigma_{\gamma(\alpha\beta)}^2+\sigma_{\beta(\alpha)}^2$ replaced by the linear combination $\sigma^2+n\sigma_{\gamma(\alpha\beta)}^2+cn\sigma_{\beta(\alpha)}^2$, see also Rasch and Verdooren (2020). The fourth author and Rob Verdooren have acknowledged our results and are updating their available R-programs accordingly; note that in Rasch and Verdooren (2020) some citation numbers have been mixed up. To reproduce the examples of the present paper, R-code is available from the first author.

  (ii) The transformation from the left-hand side to the right-hand side in Equation (A.4) shifts the attention from the product of parameters bcn to the single parameter b. This observation is the key to our general “pivot” effect result (Theorem 2.7).

  (iii) To verify the details of Table 1, note that the expected mean squares table entries used in the proof of Theorem 2.1 depend on the factors being fixed or random.

Proof of Theorem 2.4.

  (i) As above, we prove the result for the model A≻B≻C. Since
$$\sigma_{\beta(\alpha)}^2+\frac{1}{c}\sigma_{\gamma(\alpha\beta)}^2+\frac{1}{cn}\sigma^2\ \le\ \sigma_{\beta(\alpha)}^2+\sigma_{\gamma(\alpha\beta)}^2+\sigma^2\ =\ \sigma_{y,\mathrm{active}}^2, \tag{A.5}$$

we obtain
$$\lambda=\frac{b\cdot\sum_i\alpha_i^2}{\sigma_{\beta(\alpha)}^2+\frac{1}{c}\sigma_{\gamma(\alpha\beta)}^2+\frac{1}{cn}\sigma^2}\ \ge\ \frac{b\cdot\sum_i\alpha_i^2}{\sigma_{y,\mathrm{active}}^2}, \tag{A.6}$$

and the Szőkefalvi-Nagy inequality (Szőkefalvi-Nagy 1918; Brauer and Mewborn 1959; Alpargu and Styan 2000, p. 11; Sharma, Gupta, and Kapoor 2010; Gutman et al. 2017; Kaiblinger and Spangl 2020) states that
$$\sum_i\alpha_i^2\ \ge\ \frac{(\alpha_{\max}-\alpha_{\min})^2}{2}=\frac{\delta^2}{2}. \tag{A.7}$$

  (ii), (iii) By Kaiblinger and Spangl (2020), we have for the $(v\times a)$ matrix $(\alpha_{i(j)})_{i,j}$ and for the $(u\times v\times a)$ array $(\alpha_{i(jk)})_{i,j,k}$,
$$\sum_{i,j}\alpha_{i(j)}^2\ \ge\ \frac{\delta^2}{2}\cdot\frac{m}{m-1} \quad\text{and}\quad \sum_{i,j,k}\alpha_{i(jk)}^2\ \ge\ \frac{\delta^2}{2}\cdot\frac{m_2 m_3}{(m_2-1)(m_3-1)}, \tag{A.8}$$

respectively. □
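As an aside (ours, not part of the article), the inequality (A.7) is easy to check numerically for random zero-mean effect vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
ok = True
for _ in range(10000):
    a = rng.normal(size=rng.integers(2, 10))
    a -= a.mean()                                   # zero-mean effects, cf. Eq. (4)
    ok &= a @ a >= (a.max() - a.min()) ** 2 / 2 - 1e-12
print(ok)  # the bound (A.7) held in all sampled cases
```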

Proof of Theorem 2.7.

  (i) We consider the parameters as competitors in
$$\text{not increasing } \mathrm{df}_1 \quad\text{and}\quad \text{increasing } \mathrm{df}_2 \text{ and } \lambda. \tag{A.9}$$

    For each model in Table 1, we analyze the effect of the parameters on $\mathrm{df}_1$, $\mathrm{df}_2$, and $\lambda$, using the arguments illustrated in Example A.5 below. The inspection yields that for each model there is a sole winner, which we call the “pivot” parameter. We exemplify the scoring for four models:

Since by Lemma A.3 the lead in Equation (A.9) also means the lead in power increase, we thus obtain that the “pivot” parameter yields the maximal power increase.

  (ii) Start with minimal parameters and apply (i).

  (iii) This is equivalent to (ii). □

Example A.5.

We illustrate the proof of Theorem 2.7(i) by showing the typical argument for the largest increase in $\mathrm{df}_2$ and the typical argument for the largest increase in $\lambda$.

  (i) In the model A≻B≻C, the parameter n is more effective than b or c in increasing $\mathrm{df}_2$,
$$\mathrm{df}_2=abc(n-1)=abcn-abc, \tag{A.10}$$

since b, c, and n equally increase the positive term of Equation (A.10), but only n does not increase the negative term.
  (ii) For the model A≻B≻C, the parameter b is more effective than c or n in increasing $\lambda$,
$$\lambda=\frac{bcn\sum_i\alpha_i^2}{\sigma^2+n\sigma_{\gamma(\alpha\beta)}^2+cn\sigma_{\beta(\alpha)}^2}, \tag{A.11}$$

since b, c, and n equally increase the numerator of Equation (A.11), but only b does not increase the denominator. A numerical sketch of this comparison follows below.
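To make the comparison in Example A.5(ii) concrete, the following Python sketch (ours; all numeric values are illustrative) evaluates the power of the A≻B≻C test from Example 2.2 when b, c, or n is increased by one from a minimal design; per Theorem 2.7(i), increasing the pivot parameter b gives the largest gain:

```python
from scipy.stats import f, ncf

def power_nested_ABC(alpha, a, b, c, n, sum_alpha_sq, v_ba, v_gab, v_err):
    # df1, df2 and lambda as in Example 2.2 / Eq. (A.4)
    df1, df2 = a - 1, a * (b - 1)
    lam = b * sum_alpha_sq / (v_ba + v_gab / c + v_err / (c * n))
    return 1 - ncf.cdf(f.ppf(1 - alpha, df1, df2), df1, df2, lam)

base = dict(alpha=0.05, a=6, b=2, c=2, n=2, sum_alpha_sq=4.0, v_ba=1.0, v_gab=1.0, v_err=1.0)
print(power_nested_ABC(**base))                   # minimal design
print(power_nested_ABC(**{**base, "b": 3}))       # increase the pivot b: largest gain
print(power_nested_ABC(**{**base, "c": 3}))       # increase c
print(power_nested_ABC(**{**base, "n": 3}))       # increase n
```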