A treatment is what is applied to experimental units in order to analyse its effect on the dependent variable. In ANOVA, all the factors are categorical, and typically three or more treatments are compared. The goal of this paper is to examine the role of treatment with reference to some numerical examples obtained from other documented materials. This has been achieved by evaluating one-way ANOVA.
An experiment is a planned study which leads to data collection. Experimental design is the planning of that data collection for the purpose of meeting specific goal(s). Experimental design is useful for obtaining appropriate data, an adequate sample size and sufficient power to answer the study questions efficiently. When planning an experiment, the following steps are carried out: statement of the problem and study questions; statement of the target population; determination of the sampling design; and definition of the experimental design (SAS white paper).
Explaining experimental design, particularly "treatment", is the major aspect of this paper. The SAS white paper explains that the important stages of defining an experimental design are: identifying the experimental units, identifying the types of variables, defining the treatment structure and defining the design structure.
What a researcher applies to experimental units is called a treatment. For example, a medical doctor can prescribe three different drugs to three different groups of patients to see the effectiveness of each drug; each drug applied to a particular group of patients is a treatment. A teacher can implement various teaching methods with different groups of students to find the most effective method. In farming, a farmer can apply various kinds of fertilizer to various fields to see which field yields the best results [1]. From these examples, one can see that treatments are applied to experimental units in order to compare the outcome of each treatment.
The ANOVA method is used to test the null hypothesis that the means of three or more populations are equal. ANOVA breaks the information down into two parts: one which addresses group means and one which addresses deviations from group means. ANOVA employs sums of squared deviations from a model [2]. The major theme in ANOVA is to partition the overall variance in the response into the part resulting from each factor and the part due to error. For example, if a medical doctor wants to test the efficiency of some newly invented machines for detecting a particular type of disease, say four machines (A, B, C and D), he can test each machine on one group of patients. In ANOVA terminology, each machine tested on one particular group of patients for detecting the disease in question is called a "treatment". Another example might be a teacher who has devised three methods of teaching arithmetic, methods X, Y and Z; at the end of the term, the students are assessed on the same examination to find out whether there is a significant difference between the three methods. These methods are called treatments in ANOVA terminology [3]. Another example could be brands of cola: Coke, Pepsi and RC Cola are all treatments, while brand is a factor. Another factor might be calories, which could be regular or diet (containing two treatments). Here there are two factors: the first is Brand, with three treatments (Coke, Pepsi and RC Cola), while the second is Calories, with two treatments (regular and diet).
In ANOVA, predictors are called “Factors” which are all categorical/qualitative, and they have levels (also known as treatments). The parameters in this model are referred to as effects [4].
The model is given as:
${Y}_{ij}=\mu +{\beta}_{i}+{\epsilon}_{ij}$
where $\mu$ is the overall mean, ${\beta}_{i}$ is the effect of the factor occurring at $i=1,\dots ,I$ levels, with $j=1,\dots ,{J}_{i}$ observations per level, and ${\epsilon}_{ij}$ is the error term.
The equation above is normally referred to as the effects model. When the treatments are specifically chosen by the researcher, so that the conclusions cannot be generalized to other treatments, the model is referred to as a fixed effects model. When the treatments are randomly selected from a larger population of treatments, so that the conclusions can be generalized to other treatments, the model is referred to as a random effects model [5].
In the fixed effects model above, the major target is to detect differences by testing the hypothesis that there is no significant difference between the means of all the treatments. The hypothesis is formally stated as:
${H}_{0}:{\mu}_{1}={\mu}_{2}=\dots ={\mu}_{I}$
Versus
H1: At least one of the means differs from the rest.
where I is the number of treatments.
ANOVA table
For the test of the hypothesis stated above in both examples 1 and 2, the common practice is to fill the ANOVA table given in table 1 [5].
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Squares | F
Between Treatments | SSB | K – 1 | MSB | F = MSB / MSE
Error (within Treatments) | SSE | N – K | MSE |
Total | SST | N – 1 | |

Table 1: One-way ANOVA table.
Where: SSB means Sum of Square between treatments, SSE means Sum of Square within treatments, MSB means Mean Square Between treatments, MSE is the Mean Square of Error (Within treatments) and SST is the Total Sum of Squares. N is the number of observations and K is the number of treatments.
$SSB={\sum}_{i=1}^{I}{\sum}_{j=1}^{{J}_{i}}{\left({\bar{x}}_{i\bullet}-{\bar{x}}_{\bullet \bullet}\right)}^{2}$
$SSE={\sum}_{i=1}^{I}{\sum}_{j=1}^{{J}_{i}}{\left({x}_{ij}-{\bar{x}}_{i\bullet}\right)}^{2}$
$SST=SSB+SSE$
$MSB=SSB/\left(K-1\right)$
$MSE=SSE/\left(N-K\right)$
where ${\bar{x}}_{i\bullet}$ is the mean of treatment i and ${\bar{x}}_{\bullet \bullet}$ is the grand mean.
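These formulas can be computed directly from their definitions. The following is a minimal sketch using only Python's standard library; the sample data are invented purely for the demonstration:

```python
from statistics import fmean

def one_way_anova(groups):
    """Return SSB, SSE, MSB, MSE and F for a list of treatment groups."""
    k = len(groups)                      # K, the number of treatments
    n = sum(len(g) for g in groups)      # N, the total number of observations
    grand_mean = fmean([x for g in groups for x in g])
    # SSB: squared deviations of the treatment means from the grand mean
    ssb = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
    # SSE: squared deviations of the observations from their treatment mean
    sse = sum((x - fmean(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)                  # MSB = SSB / (K - 1)
    mse = sse / (n - k)                  # MSE = SSE / (N - K)
    return ssb, sse, msb, mse, msb / mse

# Invented demo data: three treatments, five observations each
ssb, sse, msb, mse, f_stat = one_way_anova(
    [[5, 6, 7, 6, 5], [8, 9, 7, 8, 9], [4, 5, 4, 6, 5]]
)
```

The identity SST = SSB + SSE holds by construction, so only SSB and SSE need to be computed.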
Model comparison
One-factor ANOVA tests the null hypothesis that all the treatment means are equal. In some cases, however, the analysis encompasses more than one factor, which is normally referred to as factorial analysis. In factorial analysis, more than one factor is analysed and, for each factor, the null hypothesis that its parameter is zero is tested. Another way of carrying out ANOVA is to compare two models, a reduced model and a full model. The full model permits the treatments to have different expected values, while the reduced model constrains all the treatments to have the same expected value [2].
In a factorial design, two kinds of effects are analysed: main effects and interaction effects. A main effect is the change in the dependent variable (response) resulting from a change in the level of a factor. In other cases, the difference in response between the levels of one factor is not the same at all levels of another factor; this is referred to as an interaction effect. Models with interaction effects occur in factorial designs, which have at least two factors. The regression model with an interaction effect is represented as:
$Y={\alpha}_{0}+{\alpha}_{1}{x}_{1}+{\alpha}_{2}{x}_{2}+{\alpha}_{12}{x}_{1}{x}_{2}+\epsilon $
where Y is the response variable, the $\alpha $'s are the unknown parameters to be estimated, ${x}_{1}$ and ${x}_{2}$ are factors 1 and 2 respectively, ${x}_{1}{x}_{2}$ is the interaction between factor 1 and factor 2, ${\alpha}_{12}$ is the interaction coefficient, and $\epsilon $ is the error term. In this regression, the null hypothesis that ${\alpha}_{12}=0$ is tested against the alternative that it is not. If the null hypothesis is not rejected, there is no evidence of an interaction effect in the model; otherwise there is. Also, in factorial analysis the null hypothesis that the mean values of all the treatments in factor 1 are equal is tested against the alternative that at least one of them differs from the rest; the same hypothesis can be tested for factor 2. The parameters of this model are estimated by least squares [5].
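For a balanced two-level design with the factors coded as ±1, the design columns are orthogonal, so the least-squares estimates reduce to simple ratios of sums. The following is a minimal sketch of this least-squares fit; the observations are invented for illustration:

```python
# Least-squares fit of Y = a0 + a1*x1 + a2*x2 + a12*x1*x2 + error for a
# balanced 2x2 factorial with +/-1 coding. The design columns are then
# orthogonal, so each estimate is sum(column * y) / sum(column ** 2).

# Invented observations: two replicates per cell of the 2x2 design
runs = [  # (x1, x2, y)
    (-1, -1, 3.9), (-1, -1, 4.1),
    (+1, -1, 6.0), (+1, -1, 6.2),
    (-1, +1, 7.0), (-1, +1, 6.8),
    (+1, +1, 12.1), (+1, +1, 11.9),
]
ys = [y for _, _, y in runs]
cols = {
    "a0": [1] * len(runs),                   # intercept column
    "a1": [x1 for x1, _, _ in runs],         # main effect of factor 1
    "a2": [x2 for _, x2, _ in runs],         # main effect of factor 2
    "a12": [x1 * x2 for x1, x2, _ in runs],  # interaction column
}
coef = {
    name: sum(c * y for c, y in zip(col, ys)) / sum(c * c for c in col)
    for name, col in cols.items()
}
# A clearly nonzero a12 estimate is evidence of an interaction effect.
```

With unbalanced data or more than two levels per factor, the columns are no longer orthogonal and the full normal equations must be solved instead.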
Block design
A nuisance factor is a factor which probably influences the response variable but whose influence is not itself of interest. A nuisance factor can be unknown and uncontrollable, known but uncontrollable, or known and controllable. Blocking is a design method employed to systematically eliminate the effect of a known and controllable nuisance factor on the statistical comparisons among treatments. Blocking is a crucial technique used extensively in industrial experimentation [5].
When treatments are assigned to experimental units completely at random, the design is called a completely randomized design. This is most suitable when the experimental units are homogeneous. When the units are heterogeneous and the sources of the differences can be identified, a randomized complete block design is more appropriate. Preferably, the block size should equal the number of treatments; if this is not possible, an incomplete block design is necessary. In some cases blocks are decided by the experimenter, while in other cases they are determined by the experiment itself, depending on its nature and type [4].
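The assignment step can be sketched in a few lines: within each block, the full set of treatments is laid out in an independently randomized order. The treatment names and block count below are hypothetical:

```python
import random

def rcbd_layout(treatments, n_blocks, seed=0):
    """Randomized complete block design: every block receives each
    treatment exactly once, in an independently randomized order."""
    rng = random.Random(seed)   # seeded only to make the layout reproducible
    layout = []
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)      # randomize run order within the block
        layout.append(order)
    return layout

# Hypothetical example: four treatments laid out in three blocks
layout = rcbd_layout(["A", "B", "C", "D"], n_blocks=3)
```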
When there is one treatment factor, one blocking factor and one observation on each treatment in each block, the model can be stated as:
${Y}_{ij}=\mu +{\gamma}_{i}+{\beta}_{j}+{\epsilon}_{ij}$
This is called the randomized complete block design, and its analysis is similar to that of two-way ANOVA with one observation per cell, where ${\gamma}_{i}$ is the treatment effect and ${\beta}_{j}$ is the blocking effect [4].
“Latin square design is used to eliminate two nuisance sources of variability; that is, it systematically allows blocking in two directions” [5]. When two blocking variables exist, a Latin square is used. In a Latin square, each treatment is assigned to each block once and only once. A Latin square is a design laid out in rows and columns; each treatment appears once in each column and once in each row, so that the number of replications equals the number of treatments. In a nutshell, a Latin square of k treatments is a square containing k columns and k rows. Every one of the k² cells takes one of the k letters representing a treatment, and each treatment appears only once in each row and in each column [5].
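A simple cyclic construction produces one such square: row i, column j receives treatment (i + j) mod k. The following sketch (treatment labels are hypothetical) builds the square and illustrates the once-per-row, once-per-column property:

```python
def latin_square(treatments):
    """Cyclic k x k Latin square: row i, column j receives treatment
    (i + j) mod k, so each treatment appears exactly once in every
    row and exactly once in every column."""
    k = len(treatments)
    return [[treatments[(i + j) % k] for j in range(k)] for i in range(k)]

# Hypothetical example with k = 4 treatments
square = latin_square(["A", "B", "C", "D"])
```

In practice, the rows, columns and treatment labels of such a standard square are then randomized before the experiment is run.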
Numerical examples:
Example 1:
Considering the students’ performance as the response variable and the method of teaching arithmetic as the predictor variable (factor), one can test the null hypothesis that there is no significant difference between the three methods of teaching arithmetic. The methods and the observations are given in table 2 below [1]:
In the table, there is one factor, method of teaching, which has three treatments: Strategy I, Strategy II and Strategy III. For the hypothesis test, the F-statistic is computed from the observations and compared with the critical value from the F-table to decide whether to reject the null hypothesis.
F = MSB / MSE
MSB = 216.0667 and MSE = 197.7333
F = 216.0667 / 197.7333 = 1.09
This F-statistic value of 1.09 is below the critical value F(2, 12) = 3.89 at the 5% level, so the null hypothesis is not rejected and the conclusion is that the group means do not differ significantly. This means that all three arithmetic teaching methods yield the same results.
Strategy I | Strategy II | Strategy III
48 | 55 | 84
73 | 85 | 68
51 | 70 | 95
65 | 69 | 74
87 | 90 | 67
$\sum {x}_{1}=324$, n1=5 | $\sum {x}_{2}=369$, n2=5 | $\sum {x}_{3}=388$, n3=5
Table 2: Methods and Observations of Students Performance.
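The figures reported for this example can be checked with a short script; the following sketch uses only Python's standard library and the observations from table 2:

```python
from statistics import fmean

# Observations from table 2, one list per teaching strategy
strategies = [
    [48, 73, 51, 65, 87],   # Strategy I   (sum 324)
    [55, 85, 70, 69, 90],   # Strategy II  (sum 369)
    [84, 68, 95, 74, 67],   # Strategy III (sum 388)
]
k = len(strategies)                                  # 3 treatments
n = sum(len(s) for s in strategies)                  # 15 observations
grand = fmean([x for s in strategies for x in s])
msb = sum(len(s) * (fmean(s) - grand) ** 2 for s in strategies) / (k - 1)
mse = sum((x - fmean(s)) ** 2 for s in strategies for x in s) / (n - k)
f_stat = msb / mse                                   # approximately 1.09
```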
Example 2:
Another example is given in table 3 below where the yield is the dependent variable with one factor (growing condition) which has three treatments (levels) [6].
The observations in table 3 can also be used to test the same hypothesis given above. The F-statistic is again computed from the figures and compared with the critical value from the F-table to decide whether to reject the null hypothesis.
F = MSB / MSE
MSB = 1.883, MSE = 0.389
F = 1.883 / 0.389 ≈ 4.85
This F-statistic value of 4.85 exceeds the critical value F(2, 27) = 3.35 at the 5% level, so the null hypothesis is rejected and the conclusion is that the group means differ significantly. This means that the outcome of one treatment differs from that of another.
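These figures can be re-derived from the column summaries reported in table 3 (the treatment totals, sums of squares and group sizes), using the standard computational shortcut formulas based on treatment totals. A sketch, taking the summary values as printed in the table:

```python
# Column summaries from table 3: (treatment total, sum of squares, n)
summaries = [
    (50.32, 256.27, 10),   # Treatment I
    (46.61, 222.92, 10),   # Treatment II
    (55.26, 307.13, 10),   # Treatment III
]
k = len(summaries)                                   # 3 treatments
n = sum(ni for _, _, ni in summaries)                # 30 observations
grand_total = sum(t for t, _, _ in summaries)
between = sum(t * t / ni for t, _, ni in summaries)  # sum of T_i^2 / n_i
ssb = between - grand_total ** 2 / n                 # SSB from treatment totals
sse = sum(sq for _, sq, _ in summaries) - between    # SSE = total SS - between
msb = ssb / (k - 1)   # about 1.883
mse = sse / (n - k)   # about 0.389
f_stat = msb / mse    # about 4.85
```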
Treatment I | Treatment II | Treatment III
4.17 | 4.81 | 6.31
5.58 | 4.17 | 5.12
5.18 | 4.41 | 5.54
6.11 | 3.59 | 5.50
4.50 | 5.87 | 5.37
4.61 | 3.83 | 5.29
5.17 | 6.03 | 4.92
4.53 | 4.89 | 6.15
5.33 | 4.32 | 5.80
5.14 | 4.69 | 5.28
$\sum {x}_{1}=50.32$, n1=10, $\sum {x}_{1}{}^{2}=256.27$ | $\sum {x}_{2}=46.61$, n2=10, $\sum {x}_{2}{}^{2}=222.92$ | $\sum {x}_{3}=55.26$, n3=10, $\sum {x}_{3}{}^{2}=307.13$
Table 3: Yield Observations under the Three Growing Conditions.
This paper has examined the role of treatment in explaining effects on a response variable. Two examples were given: in the first example the null hypothesis is not rejected, and the conclusion is that there is no significant difference among the three methods of teaching arithmetic; in the second example the null hypothesis is rejected, and the conclusion is that there is a significant difference among the three growing conditions. One-way ANOVA was used for both examples. Factorial design is also explained in the paper, but without numerical examples.