# Variance

The variance is defined as the sum of the squares of the deviations of the observations from a specific value, divided by the degrees of freedom f, i.e. V = S/f. The variance, sometimes called the mean square, is denoted by V (Steiner S. H. & MacKay R. J., 2005).
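As a minimal sketch of this definition, the following computes V = S/f for a small data set; the data and the reference (target) value are illustrative, not taken from the text:

```python
# Variance (mean square): V = S / f, where S is the sum of squared
# deviations of the observations from a specific value and f is the
# degrees of freedom. Data and target value below are illustrative.
data = [2.1, 2.4, 1.9, 2.2, 2.0]
target = 2.0                   # the "specific value" (assumed target)
f = len(data) - 1              # degrees of freedom

S = sum((x - target) ** 2 for x in data)  # sum of squares of deviations
V = S / f                                 # variance (mean square)
print(S, V)
```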

## 3.5.1. Analysis of variance

The relative magnitude of the effect of different factors can be obtained by decomposing the variance, a procedure known as analysis of variance (ANOVA); its layout is given in Table 1. The experimental design permits the effects of numerous factors to be investigated at the same time. When many different factors dynamically affect a given quality characteristic, ANOVA is a systematic and meaningful way of statistically evaluating experimental results (Montgomery D. C., 2009).

| Source of variation | Degrees of freedom, f | Sum of squares, S | Mean square, V | Pure sum of squares, S′ | F-ratio | Percent contribution (%) |
|---------------------|-----------------------|-------------------|----------------|-------------------------|---------|--------------------------|
| Factor (a)          | 1                     | S_a               | V_a            | S′_a                    | F_a     | ρ_a                      |
| Error (e)           | n − 1                 | S_e               | V_e            | S′_e                    | 1       | ρ_e                      |
| Total (T)           | N                     | S_T               |                |                         |         | 100.0                    |

Table 1. ANOVA table

Where:

1. Variance ratio:

$$F_a = \frac{V_a}{V_e} \tag{7}$$

2. Pure sum of squares:

$$S'_a = S_a - \left(f_a \times V_e\right) \tag{8}$$

$$S'_e = S_e + \left(f_a \times V_e\right) \tag{9}$$

3. Percent contribution:

$$\rho_a = \left(\frac{S'_a}{S_T}\right) \times 100 \tag{10}$$
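Equations (7)–(10) can be sketched in Python. The sums of squares and degrees of freedom below are illustrative numbers, not results from the text:

```python
# One-factor ANOVA decomposition, following equations (7)-(10).
# All input numbers are illustrative assumptions.
f_a, f_e = 1, 8            # degrees of freedom of factor a and error e
S_a, S_e = 40.0, 10.0      # sums of squares of factor a and error e
S_T = S_a + S_e            # total sum of squares

V_a = S_a / f_a            # mean square of factor a
V_e = S_e / f_e            # error variance (mean square of error)

F_a = V_a / V_e                 # (7) variance ratio
S_a_pure = S_a - f_a * V_e      # (8) pure sum of squares of factor a
S_e_pure = S_e + f_a * V_e      # (9) pure sum of squares of error
rho_a = S_a_pure / S_T * 100    # (10) percent contribution of factor a
rho_e = S_e_pure / S_T * 100    # percent contribution of error

print(F_a, rho_a, rho_e)
```

Note that the pure sums of squares transfer f_a × V_e from the factor to the error term, so the percent contributions still sum to 100, matching the "Total … 100.0" row of Table 1.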

After n pieces of experimental data are collected and the variance ratio F_a and error variance V_e are calculated, significance testing provides the criterion for making such decisions. F-tests are used to determine statistically whether the components into which the total sum of squares is decomposed are significant with respect to the variation that remains in the error variance. The numerical confidence level, which depends on the F-table used, is called the level of significance. When the variance ratio F_a is larger than the tabulated F-value at the 5% level, the effect is said to be significant at the 5% level (Montgomery D. C., 2009).
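The decision rule can be sketched as a simple comparison. The critical value 5.32 for (1, 8) degrees of freedom is read from a standard F-table at the 5% level, and the computed F_a is an illustrative number:

```python
# Significance decision: compare the computed variance ratio against
# the tabulated critical F-value at the 5% level.
F_a = 32.0            # illustrative variance ratio from the experiment
F_crit_5pct = 5.32    # F(0.05; f_a=1, f_e=8) from a standard F-table

significant = F_a > F_crit_5pct
print("significant at the 5% level" if significant
      else "not significant at the 5% level")
```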