ISSN: 2378-315X BBIJ

Biometrics & Biostatistics International Journal
Research Article
Volume 3 Issue 4 - 2016
Experimental Design: The Role of Treatment
Ilker Etikan*, Sulaiman Abubakar and Rukayya Alkassim
Department of Biostatistics, Near East University, Nicosia-TRNC, Cyprus
Received: March 22, 2016 | Published: April 28, 2016
*Corresponding author: Ilker Etikan, Department of Biostatistics, Near East University, Nicosia-TRNC, Cyprus, Email:
Citation: Etikan I, Abubakar S, Alkassim R (2016) Experimental Design: The Role of Treatment. Biom Biostat Int J 3(4): 00074. DOI:10.15406/bbij.2016.03.00074

Abstract

A treatment is what is applied to experimental units in order to analyse its effect on the dependent variable. In ANOVA, all the factors are categorical, typically with at least three treatments (levels). The goal of this paper is to examine the role of treatment with reference to some numerical examples obtained from other documented materials. This is achieved by evaluating one-way ANOVA.

Introduction

An experiment is a planned investigation which leads to data collection. Experimental design is the proper planning of data collection for the purpose of meeting specific goal(s). Experimental design is useful for obtaining appropriate data, an adequate sample size and sufficient power in order to answer the study questions efficiently. When planning an experiment, the following steps are carried out: statement of the problem and study questions; statement of the target population; determination of the sampling design; and definition of the experimental design (SAS white paper).

Explaining experimental design, particularly “treatment”, is the major aim of this paper. The SAS white paper explains that the important stages of defining an experimental design are: identifying the experimental units, identifying the types of variables, defining the treatment structure and defining the design structure.

Treatment

What a researcher applies to experimental units is called a treatment. For example, a medical doctor can prescribe three different types of drugs to three different groups of patients to see the effectiveness of each drug; each drug applied to a particular group of patients is a treatment. A teacher can implement various teaching methods with different groups of students to find out the most effective method. In farming, a farmer can apply various kinds of fertilizer to various fields to see which yields the best results [1]. From these examples, one can see that treatments are applied to experimental units in order to compare the outcome of each treatment.

Analysis of Variance (ANOVA)

The ANOVA method is used to test the null hypothesis that there is no difference between the means of three or more populations. ANOVA breaks the information down into different components: one which addresses group means and one which addresses deviations from group means. ANOVA employs sums of squared deviations from a model [2]. The major theme in ANOVA is to partition the overall variance in the response into the part resulting from each factor and the part due to error. For example, if a medical doctor wants to test the efficiency of some newly invented machines for detecting a particular type of disease, say four machines (A, B, C and D), he can test each machine on one group of patients. In ANOVA terminology, each machine tested on one particular group of patients for detecting the disease in question is called a “treatment”. Another example might be a teacher who has devised three methods of teaching arithmetic, methods X, Y and Z; at the end of the term, the students are assessed on the same examination to find out whether there is a significant difference between the three methods. These methods are called treatments in ANOVA terminology [3]. A further example could be brands of cola: Coke, Pepsi and RC Cola are all treatments, while brand is a factor. Another factor might be calories, which could be regular or diet (two treatments). Here there are two factors, the first being brand with three treatments (Coke, Pepsi and RC Cola) and the second being calories with two treatments (regular and diet).

Methodology

In ANOVA, predictors are called “Factors” which are all categorical/qualitative, and they have levels (also known as treatments). The parameters in this model are referred to as effects [4].

The model is given as:

Yij = µ + βi + εij

where βi is the effect of treatment i, occurring at i = 1, . . ., I levels, with j = 1, . . ., Ji observations per level.

The equation above is normally referred to as the effects model. When the treatments are specifically determined by the researcher, so that the conclusions cannot be generalized to other treatments, the model is referred to as a fixed effects model. When the treatments are randomly selected from a larger population of treatments, so that the conclusions can be generalized to other treatments, the model is referred to as a random effects model [5].

In the fixed effects model above, the major aim is to detect differences by testing the hypothesis that there is no significant difference between the means of all the treatments. The hypothesis is formally stated as:

H0: µ1 = µ2 = . . . = µI

Versus

H1: At least one of the means differs from the rest.

where I is the number of treatments.

ANOVA table

For the test of the hypothesis stated above in both Examples 1 and 2, the common practice is to fill in the ANOVA table given in Table 1 [5].

Source of Variation          Sum of Squares   Degrees of Freedom   Mean Squares   F
Between Treatments           SSB              K − 1                MSB            F = MSB / MSE
Error (within Treatments)    SSE              N − K                MSE
Total                        SST              N − 1

Table 1: ANOVA Table.

Where: SSB is the sum of squares between treatments, SSE the sum of squares within treatments (error), MSB the mean square between treatments, MSE the mean square of error (within treatments), and SST the total sum of squares. N is the number of observations and K is the number of treatments.

SSB = Σi Σj ( x̄i − x̄ )²

SSE = Σi Σj ( xij − x̄i )²

SST = SSB + SSE

MSB = SSB / (K − 1)

MSE = SSE / (N − K)

where the sums run over the treatments i = 1, . . ., K and the observations j = 1, . . ., J in each treatment, x̄i is the mean of treatment i and x̄ is the grand mean.
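The quantities in Table 1 can be computed directly from these definitions. The following is a minimal Python sketch; the function name and layout are illustrative, not from the paper.

```python
def one_way_anova(groups):
    """One-way ANOVA quantities for a list of treatment samples.

    Returns (SSB, SSE, MSB, MSE, F) as defined in Table 1.
    """
    k = len(groups)                          # number of treatments K
    n = sum(len(g) for g in groups)          # total observations N
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-treatments sum of squares: squared deviation of each
    # treatment mean from the grand mean, counted once per observation.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-treatments (error) sum of squares.
    sse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)
    mse = sse / (n - k)
    return ssb, sse, msb, mse, msb / mse
```

Passing the three treatment samples of Example 1 below to this function reproduces its MSB, MSE and F values.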

Model comparison

One-factor ANOVA tests the null hypothesis that the means of all the treatments are equal. In some cases, however, ANOVA encompasses more than one factor, which is normally referred to as factorial analysis. In factorial analysis, more than one factor is analysed, and the null hypothesis tested is that the parameter of the particular factor in question is zero. Another way of carrying out ANOVA is to compare two models: a reduced model and a full model. The full model permits the treatments to possess dissimilar expected values, while the reduced model assumes that all the treatments have the same expected value [2].

In a factorial design, two kinds of effects are analysed: main effects and interaction effects. A main effect is the change in the dependent variable (response) resulting from a change in the level of a factor. In some cases, the difference in response between the levels of one factor is not the same at all levels of the other factors; this is referred to as an interaction effect. Models with interaction effects occur in factorial designs, which have at least two factors. The regression model with an interaction effect is represented as:

Y = α0 + α1x1 + α2x2 + α12x1x2 + ε

where Y is the response variable, the αs are the unknown parameters to be estimated, x1 and x2 represent factors 1 and 2 respectively, x1x2 is the interaction between factor 1 and factor 2, α12 is the interaction coefficient and ε is the error term. In this kind of regression, the null hypothesis that α12 = 0 is tested against the alternative that it is not. If the null hypothesis is accepted, there is no interaction effect in the model; otherwise there is. In factorial analysis, the null hypothesis that the mean values of all the treatments in factor 1 are equal is also tested against the alternative that at least one of them differs from the rest; the same hypothesis can be tested for factor 2. The parameters of this model are estimated by least squares [5].
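As a small illustration of estimating such an interaction model by least squares (the data and coefficient values below are invented for the sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.integers(0, 2, n).astype(float)   # factor 1, two levels coded 0/1
x2 = rng.integers(0, 2, n).astype(float)   # factor 2, two levels coded 0/1
# Simulated response with a true interaction coefficient of 2.0.
y = 1.0 + 0.5 * x1 - 0.3 * x2 + 2.0 * x1 * x2 + rng.normal(0.0, 0.1, n)

# Design matrix: intercept, main effects, and the interaction column x1*x2.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a0, a1, a2, a12 = coef   # a12 estimates the interaction coefficient
```

Testing the null hypothesis α12 = 0 then amounts to comparing the estimate a12 to its standard error with a t- or F-test.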

Block design

A nuisance factor is a factor which likely influences the response variable but whose influence is not of interest. A nuisance factor can be unknown and uncontrollable, known but uncontrollable, or known and controllable. Blocking is a design method employed to systematically eliminate the effect of a known and controllable nuisance factor on the statistical comparisons among treatments. Blocking is a crucial technique used proficiently in industrial experimentation [5].

When treatments are assigned to experimental units completely at random, the design is called a completely randomized design; this is most suitable when the units are homogeneous. When the units are heterogeneous and the differences between them can be categorically identified, a randomized complete block design is the most appropriate. Preferably, the block size should be equal to the number of treatments; if this is not possible, the use of an incomplete block design is necessary. In some cases blocks are decided by the experimenter, while in other cases they are determined by the experiment, depending on its nature and type [4].

When there is one treatment factor, one blocking factor and one observation on each treatment in each block, the model can be stated as:

Yij = µ + γi + βj + εij

and it is called a randomized complete block design; its analysis is similar to that of two-way ANOVA with one observation per cell, where γi is the treatment effect and βj is the blocking effect [4].

“Latin square design is used to eliminate two nuisance sources of variability; that is, it systematically allows blocking in two directions” [5]. When two blocking variables exist, a Latin square is used. In a Latin square, each treatment is assigned to each block once and only once. A Latin square is a design laid out in rows and columns; each treatment appears once in each column and once in each row, so that the number of replications equals the number of treatments. In a nutshell, a Latin square for k treatments is a square containing k columns and k rows. Each of the k² cells takes one of the k letters representing a treatment, and each treatment appears only once in each row and in each column [5].
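A k × k Latin square of this kind can be generated by cyclically shifting the treatment letters one position per row, a standard construction (this code is an illustration, not from [5]):

```python
def latin_square(k):
    """Return a k x k Latin square of treatment letters A, B, C, ..."""
    letters = [chr(ord('A') + i) for i in range(k)]
    # Row i is the letter sequence rotated left by i positions, so each
    # letter appears exactly once in every row and every column.
    return [[letters[(i + j) % k] for j in range(k)] for i in range(k)]

for row in latin_square(4):
    print(' '.join(row))
# → A B C D
#   B C D A
#   C D A B
#   D A B C
```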

Discussion

Numerical examples:

Example 1:

Considering the students’ performance as the response variable and the method of teaching arithmetic as the predictor variable (factor), one can test the null hypothesis that there is no significant difference between the three methods of teaching arithmetic. The methods and the observations are given in Table 2 below [1]:

In Table 2 there is one factor, the method of teaching, which has three treatments: Strategy I, Strategy II and Strategy III. For the hypothesis test, the F-statistic is computed from the observed values and compared with the F-critical value from the F-table for acceptance or rejection of the null hypothesis.

F = MSB / MSE

MSB = 216.0667 and MSE = 197.7333

F = 216.0667 / 197.7333 = 1.09

This F-statistic value of 1.09 is smaller than the F-critical value of 1.56 for (2, 12) degrees of freedom, which leads to acceptance of the null hypothesis; the conclusion is that the group means do not differ significantly. This means that all three arithmetic teaching methods yield the same results.

Strategy I    Strategy II    Strategy III
48            55             84
73            85             68
51            70             95
65            69             74
87            90             67

Σx1 = 324,  n1 = 5
Σx2 = 369,  n2 = 5
Σx3 = 388,  n3 = 5

Table 2: Methods and Observations of Students’ Performance.
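The MSB, MSE and F values of Example 1 can be reproduced from the Table 2 data; a short Python check (variable names are illustrative):

```python
groups = [
    [48, 73, 51, 65, 87],   # Strategy I
    [55, 85, 70, 69, 90],   # Strategy II
    [84, 68, 95, 74, 67],   # Strategy III
]
k = len(groups)                      # number of treatments
n = sum(len(g) for g in groups)      # total observations
grand_mean = sum(sum(g) for g in groups) / n
msb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups) / (k - 1)
mse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups) / (n - k)
print(round(msb, 4), round(mse, 4), round(msb / mse, 2))
# → 216.0667 197.7333 1.09
```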

Example 2:

Another example is given in Table 3 below, where yield is the dependent variable with one factor (growing condition) which has three treatments (levels) [6].

The observations in Table 3 can be used to test the same form of hypothesis as above: the F-statistic is computed from the figures and compared with the F-critical value from the F-table for rejection or acceptance of the null hypothesis.

F = MSB / MSE

MSB = 1.883, MSE = 0.389

F = 1.883 / 0.389 = 4.85

This F-statistic value of 4.85 exceeds the F-critical value of 1.46 for (2, 27) degrees of freedom, so the null hypothesis is rejected and the conclusion is that the group means differ significantly. This means that the outcome of at least one treatment differs from that of another.

Treatment I    Treatment II    Treatment III
4.17           4.81            6.31
5.58           4.17            5.12
5.18           4.41            5.54
6.11           3.59            5.50
4.50           5.87            5.37
4.61           3.83            5.29
5.17           6.03            4.92
4.53           4.89            6.15
5.33           4.32            5.80
5.14           4.69            5.28

Σx1 = 50.32,  n1 = 10,  Σx1² = 256.27
Σx2 = 46.61,  n2 = 10,  Σx2² = 222.92
Σx3 = 55.26,  n3 = 10,  Σx3² = 307.13

Table 3: Treatments.
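Similarly, the Example 2 F-statistic can be checked against the Table 3 data. Note that the listed Treatment III values sum to 55.28 while the table prints Σx3 = 55.26, so this sketch gives values slightly different from the text’s MSB = 1.883, MSE = 0.389 and F = 4.85:

```python
treatments = [
    [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14],  # I
    [4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69],  # II
    [6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.28],  # III
]
k = len(treatments)                      # number of treatments
n = sum(len(g) for g in treatments)      # total observations
grand_mean = sum(sum(g) for g in treatments) / n
msb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
          for g in treatments) / (k - 1)
mse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
          for g in treatments) / (n - k)
f_stat = msb / mse   # close to the paper's 4.85
```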

Conclusion

This paper examines the role of treatment in explaining its effect on the response variable. Two examples were given: in the first, the null hypothesis is accepted and the conclusion is that there is no significant difference among the three methods of teaching arithmetic, while in the second the null hypothesis is rejected and the conclusion is that there is a significant difference among the three growing conditions. One-way ANOVA was used for both examples. Factorial design is also explained in the paper, but without numerical examples.

References

1. Valerie J Easton, John H McColl (1997) Statistics Glossary.
2. Gary W Oehlert (2010) A First Course in Design and Analysis of Experiments. pp. 1-4.
3. Prem S Mann (2010) Introductory Statistics. pp. 1-641.
4. Julian J Faraway (2005) Linear Models with R. Chapman and Hall, USA.
5. Douglas C Montgomery (2013) Design and Analysis of Experiments. John Wiley and Sons, USA.
6. Annette J Dobson (2002) An Introduction to Generalized Linear Models. Chapman and Hall, USA.