The dummy variable analysis of variance technique is an alternative to the nonparametric Friedman two-way analysis of variance test by ranks. It is appropriate for sample data suitable for parametric two-factor random- and mixed-effects analysis of variance models with one replication or observation per treatment combination [1,2].
To develop a nonparametric alternative method for the analysis of matched samples appropriate for two-factor random- and mixed-effects analysis of variance models with only one observation per cell or treatment combination, suppose a researcher has collected a random sample of 'a' subjects or blocks of subjects drawn from a population 'A', each exposed to or observed at 'c' time periods, points in space, experimental conditions, tests, or treatments, which are either fixed or randomly drawn from a population 'B' of experimental conditions, points in time, tests or experiments, the responses being numerical measurements.
The proposed method
Let
${y}_{ij}$
be the
${i}^{th}$
observation drawn from population A, that is the observation on the
${i}^{th}$
subject or block of subjects exposed to or observed at the
${j}^{th}$
level of factor B that is
${j}^{th}$
treatment or time period for i=1,2,…,a; j=1,2,…,c.
Now, to set up a dummy variable multiple regression model for use with a two-factor analysis of variance problem, we as usual represent each factor, or so-called parent independent variable, with one fewer dummy variables of 1s and 0s than the number of its categories or levels [2]. Thus factor A, namely subject or block of subjects with 'a' levels, is represented by $a-1$ dummy variables of 1s and 0s, while factor B with c levels is represented by $c-1$ dummy variables of 1s and 0s.
Hence we may let
$x_{i;A}=\begin{cases}1, & \text{if } y_{ij} \text{ is an observation on the } i\text{th subject or block of subjects and the } j\text{th level of factor B (treatment)}\\ 0, & \text{otherwise}\end{cases}$
for $i=1,2,\ldots,a-1$ and all $j=1,2,\ldots,c$.
(1)
Also let
$x_{j;B}=\begin{cases}1, & \text{if } y_{ij} \text{ is an observation or response at the } j\text{th level of factor B (treatment) and the } i\text{th level of factor A (subject or block of subjects)}\\ 0, & \text{otherwise}\end{cases}$
for $j=1,2,\ldots,c-1$ and all $i=1,2,\ldots,a$.
(2)
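The coding of equations 1 and 2 can be sketched in a few lines. The following is an illustrative sketch (not part of the original presentation), assuming observations are ordered block by block and the last level of each factor is taken as the baseline:

```python
import numpy as np

def design_matrix(a, c):
    """Design matrix for 'a' blocks (factor A) and 'c' treatments
    (factor B), one observation per cell: an intercept column,
    a-1 block dummies and c-1 treatment dummies of 1s and 0s."""
    n = a * c
    X = np.zeros((n, 1 + (a - 1) + (c - 1)))
    X[:, 0] = 1.0                   # intercept column of 1s
    for l in range(n):
        i, j = divmod(l, c)         # block i and treatment j, 0-based
        if i < a - 1:               # last block is the baseline level
            X[l, 1 + i] = 1.0
        if j < c - 1:               # last treatment is the baseline level
            X[l, a + j] = 1.0
    return X

X = design_matrix(10, 5)            # the layout of the worked example
print(X.shape)                      # (50, 14)
```

With a = 10 and c = 5 this reproduces the shape of the design matrix used later in the example: an intercept plus nine A-dummies and four B-dummies.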
Then the resulting dummy variable multiple regression model fitting or regressing the dependent or criterion variable
${y}_{ij}$
on the dummy variables representing factors A (subject or block of subjects) and B (treatment) is
$y_{l}=\beta_{0}+\beta_{1;A}x_{l1;A}+\beta_{2;A}x_{l2;A}+\cdots+\beta_{a-1;A}x_{l,a-1;A}+\beta_{1;B}x_{l1;B}+\beta_{2;B}x_{l2;B}+\cdots+\beta_{c-1;B}x_{l,c-1;B}+e_{l}$
(3)
For
$l=\text{1},\text{2},\dots ,\text{n}=\text{a}.\text{c}$
sample observations where
${y}_{l}$
is the
${l}^{th}$
response or observation on the criterion or dependent variable;
${x}_{ls}$
are dummy variables of 1s and 0s representing levels of factors A and B;
${\beta}_{ls}$
are partial regression coefficients and
$e_{l}$
are error terms, with
$E\left(e_{l}\right)=0$
for
$l=1,2,\ldots,n=a.c$
. Note that since there is only one observation per row-by-column, that is, per factor A (subject or block of subjects) by factor B (treatment) combination, to obtain an estimate of the error sum of squares for the regression model, and hence be able to test the desired hypotheses, it is necessary to assume that there are no factor A by factor B interactions, or that such interactions have been removed by an appropriate data transformation. Also note that an advantage of the present method over the extended median test for dependent or matched samples, and also over the Friedman two-way analysis of variance test by ranks, is that the problem of tied observations within subjects or blocks of subjects does not arise; hence, unlike in the other two nonparametric methods under reference, there is no need to find ways to adjust for or break ties between scores within blocks of subjects [3]. The expected or mean value of the criterion variable is, from equation 3,
$E\left(y_{l}\right)=\beta_{0}+\beta_{1;A}x_{l1;A}+\beta_{2;A}x_{l2;A}+\cdots+\beta_{a-1;A}x_{l,a-1;A}+\beta_{1;B}x_{l1;B}+\beta_{2;B}x_{l2;B}+\cdots+\beta_{c-1;B}x_{l,c-1;B}$
(4)
To find the expected or mean effect of any of the factors or parent independent variables, we set all the dummy variables representing that factor equal to 1 and all the other dummy variables in equation 4 equal to 0. Thus, for example, the expected or mean effect or value of factor A (subject or block of subjects) on the dependent variable is obtained by setting
${x}_{l;A}=1\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{x}_{j;B}=0$
in equation 4 for
$l=1,2,\ldots,a-1;\ j=1,2,\ldots,c-1$
.
Similarly the expected or mean value of factor B (treatment) is obtained by setting
${x}_{l;B}=1\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{x}_{j;A}=0$
in equation 4 for
$l=1,2,\ldots,c-1;\ j=1,2,\ldots,a-1$
thereby obtaining
$E\left(y_{l;A}\right)=\beta_{0}+\sum_{l=1}^{a-1}\beta_{l;A}\ \text{and}\ E\left(y_{l;B}\right)=\beta_{0}+\sum_{l=1}^{c-1}\beta_{l;B}$
(5)
Now the dummy variable multiple regression model of equation 3 can equivalently be expressed in matrix form as
$\underset{\_}{y}=X\underset{\_}{\beta}+\underset{\_}{e}$
(6)
Where
$\underset{\_}{y}$
is an n×1 column vector of observations or scores on the dependent or criterion variable; X is an n×r design matrix of 'r' dummy variables of 1s and 0s;
$\underset{\_}{\beta}$
is an r×1 column vector of partial regression coefficients; and
$\underset{\_}{e}$
is an n×1 column vector of error terms, with
$E(\underset{\_}{e})=\underset{\_}{0}$
where $n=a.c$ is the number of observations and $r=(a-1)+(c-1)=a+c-2$ is the number of dummy variables of 1s and 0s included in the regression model.
Similarly the expected value of
$\underset{\_}{y}$
is from equation 4.
$E(\underset{\_}{y})=X.\underset{\_}{\beta}$
(7)
Application of the usual methods of least squares to either equation 3 or 6 yields an unbiased estimate of the regression parameter
$\underset{\_}{\beta}$
as
$\underset{\_}{\widehat{\beta}}=\underset{\_}{b}={\left({X}^{\prime}X\right)}^{-1}{X}^{\prime}\underset{\_}{y}$
(8)
Where
${\left({X}^{\prime}X\right)}^{-1}$
is the inverse matrix of the nonsingular variancecovariance matrix
${X}^{\prime}X$
. A hypothesis that is usually of research interest is whether the regression model of equation 3 or 6 fits, or equivalently whether the independent variables or factors have no effects on the dependent or criterion variable, meaning that the partial regression coefficients are all equal to zero. Stated symbolically, we have the null hypothesis
${H}_{0}:\underset{\_}{\beta}=\underset{\_}{0}\text{\hspace{0.17em}}versus\text{\hspace{0.17em}}{H}_{1}:\underset{\_}{\beta}\ne \underset{\_}{0}$
(9)
As in equation 3, this null hypothesis is tested using the usual F-test, presented in an analysis of variance table, where the total sum of squares is calculated in the usual way as
$SSTotal=\underset{\_}{{y}^{\prime}}\underset{\_}{y}-n.{\overline{y}}^{2}$
(10)
With $n-1=a.c-1$ degrees of freedom, where
$\overline{y}$
is the mean value of the dependent variable.
Similarly the treatment sum of squares in analysis of variance parlance which is the same as the regression sum of squares in regression models is calculated as
$SSTreatment=SSR=\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$
(11)
With $(a-1)+(c-1)=a+c-2$ degrees of freedom. The error sum of squares SSE is the difference between the total sum of squares SST and the regression sum of squares SSR; thus,
$SSE=SST-SSR=\underset{\_}{{y}^{\prime}}\underset{\_}{y}-\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}$
(12)
With
$(a.c-1)-\left((a-1)+(c-1)\right)=(a-1)(c-1)$
degrees of freedom.
These results are summarized in an analysis of variance table (Table 1).
The null hypothesis H_{0} of Equation 9 is tested using the F-ratio of Table 1. The null hypothesis is rejected if the calculated F-ratio is greater than the tabulated or critical F-ratio at a specified
$\alpha $
level of significance; otherwise the null hypothesis H_{0} is accepted.
If the model fits, that is, if not all the elements of
$\beta $
are equal to zero, that is, if the null hypothesis H0 of equation 9 is rejected, then one may proceed to test further hypotheses concerning factor level effects; that is, one may test the null hypotheses that factors A (subject or block of subjects) and B (treatment) separately have no effects on the dependent or criterion variable. In other words, the null hypotheses
$\begin{array}{l}{H}_{0}:{\underset{\_}{\beta}}_{A}=\underset{\_}{0}\text{\hspace{0.17em}}versus\text{\hspace{0.17em}}{H}_{1}:{\underset{\_}{\beta}}_{A}\ne \underset{\_}{0}\\ and\\ {H}_{0}:{\underset{\_}{\beta}}_{B}=\underset{\_}{0}\text{\hspace{0.17em}}versus\text{\hspace{0.17em}}{H}_{1}:{\underset{\_}{\beta}}_{B}\ne \underset{\_}{0}\end{array}$
(13, 14)
Where
${\underset{\_}{\beta}}_{A}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{\underset{\_}{\beta}}_{B}$
are respectively the $(a-1)\times 1$ and $(c-1)\times 1$ vectors of partial regression coefficients or effects of factors A (subject or block of subjects) and B (treatment) on the criterion or dependent variable. However, the null hypothesis usually of greater interest here is that of equation 14, namely that treatments, points in time or space, tests or experiments do not have differential effects on subjects.
| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Sum of Squares | F-ratio |
|---|---|---|---|---|
| Regression (treatment) | $SSR=\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a+c-2$ | $MSR=\frac{SSR}{a+c-2}$ | $\frac{MSR}{MSE}$ |
| Error | $SSE=\underset{\_}{{y}^{\prime}}\underset{\_}{y}-\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}$ | $(a-1)(c-1)$ | $MSE=\frac{SSE}{(a-1)(c-1)}$ | |
| Total | $SST=\underset{\_}{{y}^{\prime}}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a.c-1$ | | |
Table 1: Two factor analysis of variance Table for the full model of Equation 6.
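The quantities of Table 1 follow directly from equations 8 and 10 to 12. The sketch below uses synthetic data (the values, seed and layout are illustrative assumptions, not the paper's broiler data):

```python
import numpy as np

rng = np.random.default_rng(0)
a, c = 4, 3                                  # blocks and treatments (assumed)
n = a * c
y = rng.normal(2.0, 0.2, size=n)             # one observation per cell

# intercept, a-1 block dummies, c-1 treatment dummies (last levels baseline)
X = np.zeros((n, 1 + (a - 1) + (c - 1)))
X[:, 0] = 1.0
for l in range(n):
    i, j = divmod(l, c)
    if i < a - 1:
        X[l, 1 + i] = 1.0
    if j < c - 1:
        X[l, a + j] = 1.0

b, *_ = np.linalg.lstsq(X, y, rcond=None)    # equation 8: b = (X'X)^{-1} X'y
ybar = y.mean()
SST = y @ y - n * ybar ** 2                  # equation 10
SSR = b @ X.T @ y - n * ybar ** 2            # equation 11
SSE = SST - SSR                              # equation 12
MSR = SSR / (a + c - 2)
MSE = SSE / ((a - 1) * (c - 1))
F = MSR / MSE                                # F-ratio of Table 1
```

Any balanced one-observation-per-cell data can replace the synthetic `y` here.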
Now, to obtain appropriate test statistics for testing these null hypotheses, we apply the extra sum of squares principle to partition the treatment or regression sum of squares SSR into its two component parts, namely the sum of squares due to factor A (subject or block of subjects), SSA, and the sum of squares due to factor B (treatment), SSB, to enable calculation of the appropriate F-ratios.
Now the n×r matrix X for the full model of equation 6 can be partitioned into its two component submatrices, namely
${X}_{A}$
, an n×(a−1) design matrix of the a−1 dummy variables of 1s and 0s representing the included a−1 levels of factor A (subject or block of subjects), and
${X}_{B}$
, an n×(c−1) matrix of the c−1 dummy variables of 1s and 0s representing the included c−1 levels of factor B (treatment). The estimated partial regression coefficient vector
$\underset{\_}{b}$
, the r×1 column vector of regression effects of equation 8, can likewise be partitioned into the corresponding estimated vectors
${\underset{\_}{b}}_{A}$
, an (a−1)×1 column vector of partial regression coefficients or effects of factor A, and
${\underset{\_}{b}}_{B}$
, a (c−1)×1 column vector of the effects of factor B on the dependent variable. Hence the treatment sum of squares, that is, the regression sum of squares SSR of equation 11, can equivalently be expressed as
$SSTreatment=SSR=\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}={\left(X\underset{\_}{b}\right)}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$; equivalently,
$SSR={\left(\left({X}_{A}\ \ {X}_{B}\right)\left(\begin{array}{c}{\underset{\_}{b}}_{A}\\ {\underset{\_}{b}}_{B}\end{array}\right)\right)}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}={\underset{\_}{{b}^{\prime}}}_{A}{{X}^{\prime}}_{A}\underset{\_}{y}+{\underset{\_}{{b}^{\prime}}}_{B}{{X}^{\prime}}_{B}\underset{\_}{y}-n.{\overline{y}}^{2}$
(15)
or equivalently
$SSR=\underset{\_}{{b}^{\prime}}{X}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}=\left({\underset{\_}{{b}^{\prime}}}_{A}{{X}^{\prime}}_{A}\underset{\_}{y}-n.{\overline{y}}^{2}\right)+\left({\underset{\_}{{b}^{\prime}}}_{B}{{X}^{\prime}}_{B}\underset{\_}{y}-n.{\overline{y}}^{2}\right)+n.{\overline{y}}^{2}$
(16)
which, when interpreted, is the same as the statement
$SSTreatment=SSR=SSA+SSB+SS(\overline{y}=\widehat{\mu})$
(17)
Where SSR is the regression sum of squares for the full model, with $r=a+c-2$ degrees of freedom; SSA is the sum of squares due to factor A (subject or block of subjects), with $a-1$ degrees of freedom; SSB is the sum of squares due to factor B (treatment), with $c-1$ degrees of freedom; and
$SS(\overline{y}=\widehat{\mu})$
is an additive correction factor due to the mean effect. These sums of squares, namely SSR, SSA and SSB, are obtained by separately fitting the full model of equation 6, with design matrix X, and the reduced regression models, with design matrices
${X}_{A}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{X}_{B}$
again separately on the criterion or dependent variable
$\underset{\_}{y}$
.
Now, if the full model of equation 6 fits, that is, if the null hypothesis of equation 9 is rejected, then the additional null hypotheses of equations 13 and 14 may be tested using the extra sum of squares principle [4,5]. If we denote the sums of squares due to the full model of equation 6 and due to the reduced models, obtained by fitting the criterion variable
$\underset{\_}{y}$
to any of the reduced design matrices
${X}_{A}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{X}_{B}$
by SS(F) and SS(R) respectively then following the extra sum of squares principle [4,5] the extra sum of squares due to a given factor is calculated as
$ESS=SS\left(F\right)SS\left(R\right)$
(18)
With degrees of freedom obtained as the difference between the degrees of freedom of SS(F) and SS(R), that is, Edf = df(F) − df(R). Thus the extra sums of squares for factors A (subject or block of subjects) and B (treatment) are obtained respectively as
$ESSA=SSR-SSA;\ \ ESSB=SSR-SSB$
(19)
With
$(a-1)+(c-1)-(a-1)=c-1$
degrees of freedom and
$(a-1)+(c-1)-(c-1)=a-1$
degrees of freedom.
Note that since each of the reduced models and the full model have the same total sum of squares SST, the extra sum of squares may alternatively be obtained as the difference between the error sum of squares of each reduced model and the error sum of squares of the full model. In other words, the extra sum of squares is equivalently calculated as
$ESS=SS(F)-SS(R)=\left(SST-SSE(F)\right)-\left(SST-SSE(R)\right)=SSE(R)-SSE(F)$
(20)
With degrees of freedom similarly obtained. Thus the extra sums of squares due to factors A (subject or block of subjects) and B (treatment) are alternatively obtained respectively as
$ESSA=SSEA-SSE;\ \ ESSB=SSEB-SSE$
(21)
With $c-1$ and $a-1$ degrees of freedom respectively, where SSR and SSE are respectively the regression and error sums of squares for the full model, and SSEA and SSEB are respectively the error sums of squares for the reduced models for factors A and B. The null hypotheses of equations 13 and 14 are tested using the F-ratios
${F}_{A}=\frac{MESA}{MSE}$
(22)
With $c-1$ and $(a-1)(c-1)$ degrees of freedom, where
$MESA=\frac{ESSA}{c-1}$
(23)
is the mean extra sum of squares due to factor A (subject or block of subjects), and
${F}_{B}=\frac{MESB}{MSE}$
(24)
With $a-1$ and $(a-1)(c-1)$ degrees of freedom, where
$MESB=\frac{ESSB}{a-1}$
(25)
is the mean extra sum of squares due to factor B (treatment). These results are summarized in Table 2a which, for ease of presentation, also includes the sums of squares and other values of Table 1 for the full model.
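The extra sums of squares of equation 19 and the F-ratios of equations 22 and 24 can be sketched the same way, fitting the full model and the two reduced models. The data here are again synthetic stand-ins, not the paper's data:

```python
import numpy as np

def ss_regression(M, y):
    """Regression sum of squares b'M'y - n*ybar^2 for a model with intercept."""
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    return b @ M.T @ y - len(y) * y.mean() ** 2

rng = np.random.default_rng(1)
a, c = 4, 3
n = a * c
y = rng.normal(2.0, 0.2, size=n)

i = np.repeat(np.arange(a), c)               # block index of each observation
j = np.tile(np.arange(c), a)                 # treatment index
ones = np.ones((n, 1))
XA = np.hstack([ones, (i[:, None] == np.arange(a - 1)).astype(float)])
XB = np.hstack([ones, (j[:, None] == np.arange(c - 1)).astype(float)])
X = np.hstack([XA, XB[:, 1:]])               # full model of equation 6

SSR = ss_regression(X, y)                    # full model
SSA = ss_regression(XA, y)                   # reduced model: factor A only
SSB = ss_regression(XB, y)                   # reduced model: factor B only
SST = y @ y - n * y.mean() ** 2
MSE = (SST - SSR) / ((a - 1) * (c - 1))

ESSA = SSR - SSA                             # equation 19, c-1 df
ESSB = SSR - SSB                             # equation 19, a-1 df
FA = (ESSA / (c - 1)) / MSE                  # equation 22
FB = (ESSB / (a - 1)) / MSE                  # equation 24
```

In this balanced, one-observation-per-cell layout the regression sums of squares are additive, so SSR = SSA + SSB up to rounding, which serves as a useful internal check.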
If the various F-ratios, and in particular the F-ratios based on the extra sums of squares of Table 2b, indicate that the independent variables or factor levels have differential effects on the response, dependent, or criterion variable, that is, if the null hypotheses of equation 13 or 14 or both are rejected, then one may proceed further to estimate desired factor level effects and test hypotheses concerning them.
| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (DF) | Mean Sum of Squares (MS) | F-ratio |
|---|---|---|---|---|
| Full model: Regression | $SSR={\underset{\_}{b}}^{\prime}{X}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a+c-2$ | $MSR=\frac{SSR}{a+c-2}$ | $F=\frac{MSR}{MSE}$ |
| Full model: Error | $SSE={\underset{\_}{y}}^{\prime}\underset{\_}{y}-{\underset{\_}{b}}^{\prime}{X}^{\prime}\underset{\_}{y}$ | $(a-1)(c-1)$ | $MSE=\frac{SSE}{(a-1)(c-1)}$ | |
| Factor A (subject or block of subjects): Regression | $SSA={\underset{\_}{b}}^{\prime}{}_{A}{{X}^{\prime}}_{A}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a-1$ | $MSA=\frac{SSA}{a-1}$ | |
| Factor A: Error | $SSEA={\underset{\_}{y}}^{\prime}\underset{\_}{y}-{\underset{\_}{b}}^{\prime}{}_{A}{{X}^{\prime}}_{A}\underset{\_}{y}$ | $a(c-1)$ | $MSEA=\frac{SSEA}{a(c-1)}$ | |
| Factor B (treatment): Regression | $SSB={\underset{\_}{b}}^{\prime}{}_{B}{{X}^{\prime}}_{B}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $c-1$ | $MSB=\frac{SSB}{c-1}$ | $F=\frac{MSB}{MSEB}$ |
| Factor B: Error | $SSEB={\underset{\_}{y}}^{\prime}\underset{\_}{y}-{\underset{\_}{b}}^{\prime}{}_{B}{{X}^{\prime}}_{B}\underset{\_}{y}$ | $c(a-1)$ | $MSEB=\frac{SSEB}{c(a-1)}$ | |
| Total | $SST={\underset{\_}{y}}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a.c-1$ | | |
Table 2a: Two-factor analysis of variance table showing sums of squares for the full model and the reduced models, and other statistics.
| Source | Extra Sum of Squares (ESS = SS(F) − SS(R)) | Degrees of Freedom (DF) | Extra Mean Sum of Squares | F-ratio |
|---|---|---|---|---|
| Full model: Regression | $ESR=SSR$ | $a+c-2$ | $EMSR=\frac{SSR}{a+c-2}$ | $F=\frac{MSR}{MSE}$ |
| Full model: Error | $ESER=SSE$ | $(a-1)(c-1)$ | $EMSE=\frac{SSE}{(a-1)(c-1)}$ | |
| Factor A: Regression | $ESSA=SSR-SSA$ | $c-1$ | $EMSA=\frac{ESSA}{c-1}$ | ${F}_{A}=\frac{EMSA}{MSE}$ |
| Factor A: Error | $ESSEA=SSEA-SSE=ESSA$ | $c-1$ | $EMSEA=\frac{ESSEA}{c-1}$ | |
| Factor B: Regression | $ESSB=SSR-SSB$ | $a-1$ | $EMSB=\frac{ESSB}{a-1}$ | ${F}_{B}=\frac{EMSB}{MSE}$ |
| Factor B: Error | $ESSEB=SSEB-SSE=ESSB$ | $a-1$ | $EMSEB=\frac{ESSEB}{a-1}$ | |
| Total | ${\underset{\_}{y}}^{\prime}\underset{\_}{y}-n.{\overline{y}}^{2}$ | $a.c-1$ | | |
Table 2b: Two-factor analysis of variance table for the extra sums of squares due to the reduced models and other statistics (continuation).
In fact, an additional advantage of using dummy variable regression models in two-factor or multi-factor analysis of variance type problems is that the method more easily enables the separate estimation of factor level effects of several factors on a specified dependent or criterion variable. For example, it enables estimation of the total or absolute effect; the partial regression coefficient, or so-called direct effect, of a given independent variable (here referred to as the parent independent variable) on the dependent variable through its representative dummy variables; and the indirect effect of that parent independent variable through the mediation of the other independent variables in the model [6]. The total or absolute effect of a parent independent variable on a dependent variable is estimated as the simple regression coefficient obtained when the dependent variable is regressed on that parent independent variable, represented by codes assigned to its various categories. The direct effect of a parent independent variable on a dependent variable is the weighted sum of the partial regression coefficients or effects of the dummy variables representing that parent independent variable on the dependent variable, where the weights are the simple regression coefficients of each representative dummy variable regressed on the specified parent independent variable represented by codes. The indirect effect of a given parent independent variable on a dependent variable is then simply the difference between its total and direct effects [6].
Now, the direct effect or partial regression coefficient of a given parent independent variable on a dependent variable is obtained by taking the partial derivative, with respect to that parent independent variable, of the expected value of the corresponding regression model. For example, the direct effect of the parent independent variable 'A' on the dependent variable Y is obtained from equation 5 as
$\begin{array}{l}{\beta}_{A}dir=\frac{dE({y}_{l})}{dA}={\displaystyle \sum _{l=1}^{a-1}{\beta}_{l;A}\frac{dE({x}_{l;A})}{dA}}+{\displaystyle \sum _{l}{\beta}_{l;Z}\frac{dE({x}_{l;Z})}{dA}}\\ or\\ {\beta}_{A}dir={\displaystyle \sum _{l=1}^{a-1}{\beta}_{l;A}\frac{dE({x}_{l;A})}{dA}}\\ \text{since}\text{\hspace{0.17em}}{\displaystyle \sum _{l}{\beta}_{l;Z}\frac{dE({x}_{l;Z})}{dA}}=0\end{array}$
(26)
for all other independent variables 'Z' in the model different from 'A'.
The weight
${\alpha}_{l;A}=\frac{dE({x}_{l;A})}{{d}_{A}}$
is estimated by fitting a simple regression line of the dummy variable
${x}_{l;A}$
regressed on its parent independent variable A, represented by codes, and taking the derivative of its expected value with respect to 'A'. Thus, if the expected value of the dummy variable
${x}_{l;A}$
regressed on its parent independent variable 'A' is expressed as
$E\left({x}_{l;A}\right)={\alpha}_{0}+{\alpha}_{l;A}.A$
Then the derivative of this expected value with respect to A is
$\frac{dE({x}_{l;A})}{{d}_{A}}={\alpha}_{l;A}\text{\hspace{0.17em}}$
(27)
Hence using Equation 27 in Equation 26 gives the direct effect of the parent independent variable A on the dependent variable Y as
${\beta}_{A}dir={\displaystyle \sum _{l=1}^{a1}{\alpha}_{l;A}.{\beta}_{l;A}}$
(28)
whose sample estimate, from Equation 8, is
${\widehat{\beta}}_{A}dir={b}_{A}dir={\displaystyle \sum _{l=1}^{a1}{\alpha}_{l;A}.{b}_{l;A}}$
(29)
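A numerical sketch of equations 27 to 29 (an illustration assuming factor B is represented by equally spaced integer codes 1, …, c over a blocks, which matches the layout of the example that follows): each weight is the slope of a simple regression of a dummy on the codes.

```python
import numpy as np

a, c = 10, 5                                 # blocks and treatment levels (assumed)
codes = np.tile(np.arange(1, c + 1), a).astype(float)   # 'B' represented by codes
slopes = []
for j in range(1, c):                        # dummies for levels 1..c-1
    x_j = (codes == j).astype(float)         # dummy of 1s and 0s for level j
    slopes.append(np.polyfit(codes, x_j, 1)[0])   # slope = alpha_{j;B}
# for c = 5 these slopes work out to (j - 3)/10: -0.2, -0.1, 0.0 and 0.1
```

The weighted sum of these slopes with the fitted partial coefficients then gives the direct effect of equation 29.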
The total or absolute effect of ‘A’ on ‘Y’ is estimated as the simple regression coefficient or effect of the parent independent variable ‘A’ represented by codes on the dependent variable ‘Y’ as
${\widehat{\beta}}_{A}={b}_{A}$
(30)
Where
${b}_{A}$
is the estimated simple regression coefficient or effect of ‘A’ on ‘Y’. The indirect effect of ‘A’ on ‘Y’ is then estimated as the difference between
${b}_{A}$
and
${b}_{A}dir$
, that is as
${\widehat{\beta}}_{A}indir={b}_{A}indir=\text{\hspace{0.17em}}{b}_{A}{b}_{A}dir$
(31)
The total, direct and indirect effects of factor B are similarly estimated.
The body weights of a random sample of 10 broilers, here termed "subjects or blocks of subjects" and regarded as factor 'A' with ten levels, each weighed on five types of weighing machine, here termed "treatments" and regarded as factor 'B' with five levels, are shown below.
To set up a dummy variable regression model of body weight (y) regressed on "subject or block of subjects", here termed factor 'A' with ten levels, and type of weighing machine, here termed "treatment" and treated as factor 'B' with five levels, we as usual represent factor 'A' with nine dummy variables of 1s and 0s and factor 'B' with four dummy variables of 1s and 0s, using Equations 1 and 2.
The resulting design matrix ‘X’ for the full model is presented in Table 3 where
${x}_{1;A}$
represents level 1 or broiler No. 1,
${x}_{2;A}$
represents level 2 or broiler No. 2, and so on, up to
${x}_{9;A}$
, which represents level 9 or broiler No. 9. Similarly,
${x}_{1;B}$
represents weighing machine No. 1 or treatment 1,
${x}_{2;B}$
represents weighing machine No. 2 or treatment 2, and so on, until
${x}_{4;B}$
represents weighing machine No. 4 or treatment 4.
Using the design matrix X of Table 3 for the full model of Equation 6, we obtain the fitted regression equation expressing the dependence of broiler body weight on broiler (subject), treated as factor A, and type of weighing machine (treatment), treated as factor B, both represented by dummy variables of 1s and 0s, as
$\begin{array}{l}{\widehat{y}}_{l}=2.302-0.593{x}_{{l}_{1};A}+3.175{x}_{{l}_{2};A}+0.212{x}_{{l}_{3};A}-2.023{x}_{{l}_{4};A}-1.491{x}_{{l}_{5};A}+0.352{x}_{{l}_{6};A}\\ \text{\hspace{0.17em}}-1.219{x}_{{l}_{7};A}+0.123{x}_{{l}_{8};A}-2.185{x}_{{l}_{9};A}-0.094{x}_{{l}_{1};B}-0.235{x}_{{l}_{2};B}+2.329{x}_{{l}_{3};B}-0.029{x}_{{l}_{4};B}\end{array}$
Now, to estimate the total or absolute effect of type of weighing machine (treatment) 'B' on body weight y of broilers, we regress
${y}_{i}$
on ‘B’ represented by codes to obtain
${\widehat{\beta}}_{B}={b}_{B}=0.054$
The weights
${\alpha}_{j;B}$
to be applied in Equation 29 to determine the direct effect are obtained, as explained above, by taking the derivative with respect to 'B' of the expected value of the simple regression equation expressing the dependence of the dummy variable
${x}_{j;B}$
of 1s and 0s on its parent variable 'B' represented by codes, yielding
${\alpha}_{1;B}=-0.20;{\alpha}_{2;B}=-0.10;{\alpha}_{3;B}=0.00\text{\hspace{0.17em}}and\text{\hspace{0.17em}}{\alpha}_{4;B}=0.10$
.
Using these values in Equation 29, we obtain the partial, or so-called direct, effect of type of weighing machine (treatment) 'B' on body weight 'y' of broilers as
${\widehat{\beta}}_{B}dir={b}_{B}dir=\left(-0.094\times -0.20\right)+\left(-0.235\times -0.10\right)+\left(2.329\times 0.00\right)+\left(-0.029\times 0.10\right)=0.0394$
Hence the corresponding indirect effect is estimated using Equation 31 as
${\widehat{\beta}}_{B}indir={b}_{B}indir=0.054-0.0394=0.0146$
.
The total or absolute, direct and indirect effects of the subjects or blocks of subjects, called factor A, are similarly calculated.
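The effect arithmetic above can be checked in a few lines (the coefficient and weight signs are those consistent with the reported direct effect of 0.0394):

```python
# Check of equations 29 and 31 for factor B in the example.
beta_B = [-0.094, -0.235, 2.329, -0.029]   # partial effects of the B dummies
alpha_B = [-0.20, -0.10, 0.00, 0.10]       # weights alpha_{j;B} from equation 27
b_B_total = 0.054                          # simple regression of y on the B codes

b_B_dir = sum(w * b for w, b in zip(alpha_B, beta_B))   # equation 29
b_B_indir = b_B_total - b_B_dir                         # equation 31
print(round(b_B_dir, 4), round(b_B_indir, 4))           # 0.0394 0.0146
```

The same three-step recipe (total effect, weighted sum, difference) applies to factor A.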
It would, for comparative purposes, be instructive to also analyze the data of example 1 using the Friedman two-way analysis of variance test by ranks.
To do this, we first rank, for each broiler (subject), the body weights obtained using the five weighing machines (treatments), from the smallest, ranked '1', to the largest, ranked '5'. All tied body weights for each broiler are as usual assigned their mean ranks. The results are presented in Table 4.
Using the ranks shown in Table 4, we calculate the Friedman test statistic as
${\chi}^{2}=\frac{12}{rc(c+1)}{\displaystyle \sum _{j=1}^{c}{R}_{.j}^{2}}-3r(c+1)=\frac{12\left({13}^{2}+{33}^{2}+{27}^{2}+{40.5}^{2}+{36.5}^{2}\right)}{(10)(5)(5+1)}-3(10)(5+1)=198.38-180=18.38$
which, with $c-1=5-1=4$ degrees of freedom, is statistically significant
$\left({\chi}_{0.99;4}^{2}=13.277\right)$
, indicating that the weighing machines probably differ in the body weights of broilers obtained using them. This is the same conclusion reached using the present method.
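The Friedman statistic can be recomputed from the column rank totals of Table 4 as a quick arithmetic check:

```python
import numpy as np

# Friedman chi-square from rank totals: r = 10 broilers, c = 5 machines.
R = np.array([13.0, 33.0, 27.0, 40.5, 36.5])     # rank totals R_.j of Table 4
r, c = 10, 5
chi2 = 12.0 / (r * c * (c + 1)) * (R ** 2).sum() - 3 * r * (c + 1)
print(round(chi2, 2))    # 18.38, which exceeds the 13.277 critical value
```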
| S/no (l) | Body weight ($y_l$) | ${x}_{{l}_{0}}$ | ${x}_{{l}_{1};A}$ | ${x}_{{l}_{2};A}$ | ${x}_{{l}_{3};A}$ | ${x}_{{l}_{4};A}$ | ${x}_{{l}_{5};A}$ | ${x}_{{l}_{6};A}$ | ${x}_{{l}_{7};A}$ | ${x}_{{l}_{8};A}$ | ${x}_{{l}_{9};A}$ | ${x}_{{l}_{1};B}$ | ${x}_{{l}_{2};B}$ | ${x}_{{l}_{3};B}$ | ${x}_{{l}_{4};B}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.9 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 2 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 3 | 2.1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 4 | 2.1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 5 | 1.9 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 1.7 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 7 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 8 | 1.8 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 9 | 2.1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 10 | 2 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 11 | 1.9 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 12 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 13 | 1.9 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 14 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 15 | 2.2 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 16 | 1.8 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 17 | 2.2 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 18 | 2.1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 19 | 2 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 20 | 2.1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 21 | 1.9 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 22 | 1.8 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 23 | 1.9 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 24 | 2.2 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 25 | 2.1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 26 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 27 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 28 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 29 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 30 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 31 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| 32 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 |
| 33 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |
| 34 | 2.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| 35 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 36 | 1.7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| 37 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 38 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
| 39 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
| 40 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 41 | 1.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
| 42 | 1.9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 43 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 44 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| 45 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 46 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 47 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 48 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 49 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 50 | 2.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 3: Design matrix for the sample data of example 1.

| Broiler (Subject) | Treatment 1 | Treatment 2 | Treatment 3 | Treatment 4 | Treatment 5 |
|---|---|---|---|---|---|
| 1 | 1.5 | 3 | 4.5 | 4.5 | 1.5 |
| 2 | 1 | 3.5 | 2 | 5 | 3.5 |
| 3 | 1.5 | 4 | 1.5 | 4 | 4 |
| 4 | 1 | 5 | 3.5 | 2 | 3.5 |
| 5 | 2.5 | 1 | 2.5 | 5 | 4 |
| 6 | 1 | 2 | 4 | 4 | 4 |
| 7 | 1 | 4 | 2 | 5 | 3 |
| 8 | 1 | 4.5 | 2.5 | 2.5 | 4.5 |
| 9 | 1 | 2 | 3 | 4.5 | 4.5 |
| 10 | 1.5 | 4 | 1.5 | 4 | 4 |
| Total | 13 | 33 | 27 | 40.5 | 36.5 |
Table 4: Ranks of body weights of broilers in Table 3, by weighing machine (treatment).