ISSN: 2378-315X BBIJ
Biometrics & Biostatistics International Journal
Research Article
Volume 4, Issue 5, 2016
On Estimating Flexible Weibull Parameters with Type I Progressive Interval Censoring with Random Removal Using Data of Cancerous Tumors in Blood
Afify WM*
Head of the Department of Statistics, Mathematics & Insurance, Kafr Elsheikh University, Egypt
Received: September 09, 2016 Published: October 13, 2016
*Corresponding author:
Afify WM, Head of the Department of Statistics, Mathematics & Insurance, Faculty of Commerce, Kafr Elsheikh University, Egypt, Email:
Citation:
Afify WM (2016) On Estimating Flexible Weibull Parameters with Type I Progressive Interval Censoring with Random Removal Using Data of Cancerous Tumors in Blood. Biom Biostat Int J 4(5): 00108. DOI: 10.15406/bbij.2016.04.00108
Abstract
In this paper, the maximum likelihood and Bayes estimators of the two unknown parameters of the flexible Weibull distribution are obtained under a progressive interval type-I censoring scheme with binomial random removal. Point estimation and confidence intervals based on maximum likelihood and the bootstrap method are proposed. A Bayesian approach is also developed, using the Markov chain Monte Carlo (MCMC) method to generate samples from the posterior distributions and, in turn, to compute the Bayes estimators. To illustrate the proposed methods, an example with real data is discussed. Finally, the two techniques are compared, namely the maximum likelihood estimators obtained via the bootstrap method and the different Bayes estimators obtained via the MCMC study.
Keywords: Flexible Weibull distribution; Progressive interval type-I censoring; Random removal; Percentile bootstrap; Bayesian and non-Bayesian approaches; Markov chain Monte Carlo (MCMC)
Introduction
Censoring has been very common in life tests over the past several decades: the experimenter may be unable to obtain complete information on the failure times of all experimental items. For this reason, Aggarwala [1] suggested a useful type of censoring, namely progressively type-I interval censored data, which is a union of type-I interval censoring and progressive censoring. This method of lifetime data collection can be useful to a biological experimenter, particularly when the experimental units are humans, as continuous monitoring is often not possible to implement and withdrawal rates from such studies may be high.
In progressive censoring, the number of units removed from the test at each failure time may be random. For example, the number of patients who drop out of a clinical trial at each stage is random and cannot be predetermined. This motivates a more general censoring scheme, progressively type-I interval censoring with random removal, which can be described as follows. Suppose $n$ units are put on a life test at time ${T}_{0}=0$ and are inspected at $m$ pre-specified times ${T}_{1}<{T}_{2}<\dots <{T}_{m}$, where ${T}_{m}$ is the scheduled time to terminate the experiment. The number ${k}_{i}$ of failures within $({T}_{i-1},{T}_{i}]$ is recorded, and ${r}_{i}$ surviving items are randomly removed from the life test at the $i$th inspection time ${T}_{i}$, for $i=1,2,\dots ,m$. Since the number ${Y}_{i}$ of surviving items is a random variable, and the number of items withdrawn at the scheduled time ${T}_{i}$ cannot exceed ${Y}_{i}$, the ${r}_{i}$'s are random. Such a censoring mechanism is termed a progressive interval type-I censoring scheme with random removal. If we assume that the probability of removal at every stage is $\pi$ for each unit, then ${r}_{i}$ can be taken to follow a binomial distribution, i.e.,

$${r}_{i}\sim B\left(n-m-{\displaystyle {\sum}_{j=0}^{i-1}{r}_{j}},\ \pi \right)$$

for $i=1,2,\dots ,m$. The main difference between progressive interval type-I censoring with fixed removal and with random removal is that the removals are predetermined in the former case, while they are random in the latter; note that $m$ is predetermined in both cases. Many practical applications, however, suggest that it is more flexible to let the removals be random, in order to accommodate the unexpected drop-out of experimental subjects.
Although progressive censoring occurs frequently in many applications, there are relatively few works on it. Some early work can be found in Cohen [2]; readers may refer to the book by Balakrishnan & Aggarwala [3] for more details on the methods and applications of this topic. However, all these works assumed that the number of units removed from the test is fixed in advance, whereas in practice it is often impossible to predetermine the removal pattern. Thus, Yuen & Tse [4] and Yang et al. [5] considered the estimation problem when lifetimes are collected under type-II progressive censoring with random removals, and Kendell & Anderson [6] studied the expected duration under grouped data. Progressive type-I interval censored sampling is an important practical problem that has received considerable attention in the past several years. Based on progressive type-I interval censored sampling, Ashour & Afify [7] derived the maximum likelihood estimators of the parameters of the exponentiated Weibull family and their asymptotic variances under random removal. Lin et al. [8] determined optimally spaced inspection times for the log-normal distribution, while Ng & Wang [9] and Chen & Lio [10] compared three classical estimation methods, namely the maximum likelihood method, the moment method and the probability plot method, for the Weibull distribution and the generalized exponential distribution, respectively.
In the Bayesian approach, integration over the posterior distribution is difficult: the integrals are usually impossible to evaluate analytically. The MCMC methodology, however, provides a convenient and efficient way to sample from complex, high-dimensional statistical distributions, and the application of the MCMC method to the estimation of parameters or other vital properties of statistical models has recently become very common. Green et al. [11] used the MCMC method for estimating the three-parameter Weibull distribution and showed that the MCMC method is better than the ML method when a proper prior distribution of the parameters is given. As a generalization of the two-parameter Weibull model, Gupta et al. [12] gave a complete Bayesian analysis of the Weibull extension model using MCMC simulation and a complete sample. Lin & Lio [13] discussed Bayesian inference under progressive type-I interval censoring using MCMC.
A random variable $x$ is said to have a flexible Weibull distribution with parameters $\lambda ,\beta >0$ if its probability density function, distribution function and survival function are given by

$$f\left(x;\lambda ,\beta \right)=\left(\lambda +\frac{\beta}{{x}^{2}}\right){e}^{\lambda x-\frac{\beta}{x}}{e}^{-{e}^{\lambda x-\frac{\beta}{x}}}$$ (1)

$$F\left(x;\lambda ,\beta \right)=1-{e}^{-{e}^{\lambda x-\frac{\beta}{x}}}$$ (2)

$$\overline{F}\left(x;\lambda ,\beta \right)={e}^{-{e}^{\lambda x-\frac{\beta}{x}}}$$ (3)

respectively.
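For concreteness, the three functions in (1)-(3) can be coded directly. The Python sketch below is illustrative (the function names are not from the paper); a finite-difference check confirms that (1) is the derivative of (2).

```python
import math

def fw_pdf(x, lam, beta):
    """Flexible Weibull density (1): (lam + beta/x^2) e^{lam*x - beta/x} e^{-e^{lam*x - beta/x}}."""
    g = lam * x - beta / x
    return (lam + beta / x**2) * math.exp(g) * math.exp(-math.exp(g))

def fw_cdf(x, lam, beta):
    """Flexible Weibull distribution function (2): 1 - e^{-e^{lam*x - beta/x}}."""
    return 1.0 - math.exp(-math.exp(lam * x - beta / x))

def fw_sf(x, lam, beta):
    """Flexible Weibull survival function (3): e^{-e^{lam*x - beta/x}} = 1 - F(x)."""
    return math.exp(-math.exp(lam * x - beta / x))
```

A quick sanity check is that `fw_cdf` and `fw_sf` sum to one, and that a central difference of `fw_cdf` reproduces `fw_pdf`.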
In this paper we consider the Bayesian inference of the parameters for progressive interval type-I censored data when both parameters are unknown. We assume that the parameters $\lambda$ and $\beta$ have gamma priors and are independently distributed. As expected, in this case the Bayes estimates cannot be obtained in closed form, so we propose to use the Gibbs sampling procedure to generate MCMC samples and then, using the Metropolis-Hastings algorithm, obtain the Bayes estimates of the unknown parameters. We perform some simulation experiments to see the behavior of the proposed Bayes estimators and compare their performances with the maximum likelihood estimators.
The rest of the paper is organized as follows. In the next section, the ML estimators of the unknown parameters and approximate confidence intervals are presented. The corresponding parametric bootstrap confidence intervals for the parameters are given in Section 3. In Section 4, we cover Bayes estimates and construction of credible intervals using the MCMC techniques. In Section 5, for illustrative purposes, we performed a real data analysis. Comparisons among estimators are investigated through Monte Carlo simulations in Section 6. Finally, conclusions appear in Section 7.
Classical Estimation and the Percentile Bootstrap Algorithm (Boot-p)
In this section, classical (maximum likelihood) estimation of the unknown parameters and approximate confidence intervals are presented, together with the corresponding parametric bootstrap confidence intervals obtained via the percentile bootstrap algorithm (Boot-p).
Classical estimation
Suppose a progressively type-I interval censored sample is collected as described above, beginning with a random sample of $n$ units with a continuous lifetime distribution $F(x)$, and let ${k}_{1},{k}_{2},\dots ,{k}_{m}$ denote the numbers of units known to have failed in the intervals $(0,{T}_{1}],({T}_{1},{T}_{2}],\dots ,({T}_{m-1},{T}_{m}]$, respectively. Then, based on these observed data, the joint likelihood function is (Aggarwala [1])
$${L}_{1}\left(X;\lambda ,\beta \backslash R\right)=C{\displaystyle \prod _{i=1}^{m}{\left[F({T}_{i};\lambda ,\beta )-F({T}_{i-1};\lambda ,\beta )\right]}^{{k}_{i}}{\left[1-F({T}_{i};\lambda ,\beta )\right]}^{{r}_{i}}}$$ (4)
where $C$ is a constant. Clearly, if ${r}_{i}=0$ for $i=1,2,\dots ,m-1$ and ${r}_{m}=n-k$, equation (4) reduces to the likelihood function for interval type-I censored data,

$$L(X;\lambda ,\beta )=C{(1-F({T}_{m};\lambda ,\beta ))}^{n-k}{\displaystyle \prod _{i=1}^{m}{(F({T}_{i};\lambda ,\beta )-F({T}_{i-1};\lambda ,\beta ))}^{{k}_{i}}}$$

where $k={\displaystyle \sum _{i=1}^{m}{k}_{i}}$ and ${k}_{1},{k}_{2},\dots ,{k}_{m}$ are the numbers of units known to have failed in the intervals $(0,{T}_{1}],({T}_{1},{T}_{2}],\dots ,({T}_{m-1},{T}_{m}]$, respectively.
For type-I progressive interval censoring, suppose that ${r}_{i}$ is independent of ${X}_{i}$ for all $i$; Wu & Chang [14] suggested the following likelihood function for a progressive interval censoring scheme with binomial removals:

$$L(X,R;\lambda ,\beta )={L}_{1}(X;\lambda ,\beta \backslash R=r)\times P(R)$$ (5)
where ${L}_{1}(X;\lambda ,\beta \backslash R=r)$ is the likelihood function (4) for a progressive type-I interval censored sample with fixed removal, and $P(R)$ is

$$P(R)=P({R}_{m-1}={r}_{m-1}\backslash {R}_{m-2}={r}_{m-2},\dots ,{R}_{1}={r}_{1})\cdots P({R}_{2}={r}_{2}\backslash {R}_{1}={r}_{1})P({R}_{1}={r}_{1}),$$

that is,

$$P(R)=\frac{(n-m)!}{{\displaystyle \prod _{i=1}^{m-1}{r}_{i}!\,(n-m-{\displaystyle \sum _{j=1}^{m-1}{r}_{j})!}}}\ {\pi}^{{\displaystyle \sum _{j=1}^{m-1}{r}_{j}}}{(1-\pi )}^{(m-1)(n-m)-{\displaystyle \sum _{j=1}^{m-1}(m-j){r}_{j}}}$$ (6)
and $f(\cdot),F(\cdot)$ are the same as defined in (1) and (2), respectively. The log-likelihood function with random removal can be written as

$$\log L(X,R;\lambda ,\beta )=\log C+{\displaystyle \sum _{i=1}^{m}{k}_{i}\log \left[F({T}_{i})-F({T}_{i-1})\right]}+{\displaystyle \sum _{i=1}^{m}{r}_{i}\log \left[1-F({T}_{i})\right]}+{\displaystyle \sum _{j=1}^{m-1}{r}_{j}}\log \pi +\left[(m-1)(n-m)-{\displaystyle \sum _{j=1}^{m-1}(m-j){r}_{j}}\right]\log (1-\pi )$$ (7)
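The log-likelihood (7) can be evaluated directly once $F$ from (2) is available. The Python sketch below is illustrative (the names are not from the paper) and omits the additive constant $\log C$:

```python
import math

def fw_cdf(t, lam, beta):
    """F(t) = 1 - exp(-exp(lam*t - beta/t)), with F(0) = 0 by convention."""
    if t == 0:
        return 0.0
    return 1.0 - math.exp(-math.exp(lam * t - beta / t))

def log_likelihood(lam, beta, pi, n, T, k, r):
    """Log-likelihood (7) up to log C.

    T = [T_1, ..., T_m]; k, r = failures and removals per interval (0-based lists).
    """
    m = len(T)
    ll = 0.0
    prev = 0.0  # T_0 = 0
    for i in range(m):
        Fi = fw_cdf(T[i], lam, beta)
        Fprev = fw_cdf(prev, lam, beta)
        ll += k[i] * math.log(Fi - Fprev) + r[i] * math.log(1.0 - Fi)
        prev = T[i]
    # removal part: sum_{j=1}^{m-1} r_j log(pi) + [(m-1)(n-m) - sum (m-j) r_j] log(1-pi)
    ll += sum(r[:m - 1]) * math.log(pi)
    ll += ((m - 1) * (n - m)
           - sum((m - (j + 1)) * r[j] for j in range(m - 1))) * math.log(1.0 - pi)
    return ll
```

As a consistency check, plugging the closed-form estimate of $\pi$ from (10) into this function cannot decrease the log-likelihood relative to any other value of $\pi$.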
The maximum likelihood estimates of $\lambda$ and $\beta$ are the simultaneous solutions of the following normal equations:

$${\displaystyle {\sum}_{i=1}^{m}{k}_{i}}\frac{\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \lambda}}{F\left({T}_{i}\right)-F\left({T}_{i-1}\right)}+{\displaystyle {\sum}_{i=1}^{m}{r}_{i}}\frac{\frac{\partial}{\partial \lambda}\left[1-F\left({T}_{i}\right)\right]}{1-F\left({T}_{i}\right)}=0$$ (8)

$${\displaystyle {\sum}_{i=1}^{m}{k}_{i}}\frac{\frac{\partial F\left({T}_{i}\right)}{\partial \beta}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \beta}}{F\left({T}_{i}\right)-F\left({T}_{i-1}\right)}+{\displaystyle {\sum}_{i=1}^{m}{r}_{i}}\frac{\frac{\partial}{\partial \beta}\left[1-F\left({T}_{i}\right)\right]}{1-F\left({T}_{i}\right)}=0$$ (9)
Note that $P(R)$ does not involve the parameters $\lambda$ and $\beta$. Therefore, the MLE $\widehat{\pi}$ of $\pi$ can be found by maximizing $P(R)$ directly, that is, by solving

$$\frac{1}{\widehat{\pi}}{\displaystyle \sum _{j=1}^{m-1}{r}_{j}}-\frac{1}{1-\widehat{\pi}}\left((m-1)(n-m)-{\displaystyle \sum _{j=1}^{m-1}(m-j){r}_{j}}\right)=0$$

Therefore, the maximum likelihood estimate of $\pi$ is given by

$$\widehat{\pi}=\frac{{\displaystyle \sum _{j=1}^{m-1}{r}_{j}}}{(m-1)(n-m)-{\displaystyle \sum _{j=1}^{m-1}(m-j-1){r}_{j}}}$$ (10)
It may be noted that (8) and (9) cannot be solved simultaneously to provide closed-form estimators. Therefore, we propose to use the fixed-point iteration method for solving these equations. Using the Fisher information matrix $I(\widehat{\lambda},\widehat{\beta},\widehat{\pi})$ in the Appendix and the asymptotic normality of the maximum likelihood estimators, approximate confidence intervals (ACI) for the parameters $\lambda$, $\beta$ and $\pi$ can be computed. The $\left(1-\gamma \right)100\%$ confidence intervals for $\lambda$, $\beta$ and $\pi$ become

$$\widehat{\lambda}\pm {Z}_{\gamma /2}\sqrt{Var\left(\widehat{\lambda}\right)},\quad \widehat{\beta}\pm {Z}_{\gamma /2}\sqrt{Var\left(\widehat{\beta}\right)}\quad \text{and}\quad \widehat{\pi}\pm {Z}_{\gamma /2}\sqrt{Var\left(\widehat{\pi}\right)},$$

where ${Z}_{\gamma /2}$ is the percentile of the standard normal distribution with right-tail probability $\gamma /2$.
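Because (10) is in closed form, $\widehat{\pi}$ and its Wald-type interval can be computed directly. The Python sketch below is illustrative (hypothetical function names and data); it takes the variance of $\widehat{\pi}$ from the observed information of $P(R)$, i.e. minus the second derivative of $\log P(R)$ evaluated at $\widehat{\pi}$, which matches the Appendix approach of dropping the expectation operator.

```python
import math
from statistics import NormalDist

def pi_mle(n, r):
    """Closed-form MLE of pi, equation (10); r = [r_1, ..., r_m], 0-based list."""
    m = len(r)
    num = sum(r[:m - 1])                               # sum_{j=1}^{m-1} r_j
    den = (m - 1) * (n - m) - sum((m - (j + 1) - 1) * r[j] for j in range(m - 1))
    return num / den

def pi_wald_ci(n, r, gamma=0.05):
    """Approximate (1-gamma) Wald interval for pi, clipped to [0, 1]."""
    m = len(r)
    p = pi_mle(n, r)
    s = sum(r[:m - 1])
    c = (m - 1) * (n - m) - sum((m - (j + 1)) * r[j] for j in range(m - 1))
    info = s / p**2 + c / (1.0 - p)**2                 # observed information for pi
    half = NormalDist().inv_cdf(1 - gamma / 2) * math.sqrt(1.0 / info)
    return max(0.0, p - half), min(1.0, p + half)
```

The clipping step simply acknowledges that a Wald interval for a probability can spill outside $[0,1]$.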
Data algorithm
The data generation is based on the algorithm proposed by Aggarwala [1] to simulate the numbers ${k}_{i}$ of failed items in each subinterval $\left({T}_{i-1},{T}_{i}\right]$, $i=1,\dots ,m$, from an initial sample of size $n$ put on life test at time 0. This algorithm, an extension of the procedure developed by Kemp & Kemp [15] for the multinomial distribution, involves generating $m$ binomial random variables. A procedure to generate a progressively type-I interval censored data set with random removal, $({k}_{i},{r}_{i},{T}_{i})$, $i=1,\dots ,m$, from the flexible Weibull distribution can be described briefly as follows. Let ${k}_{0}=0$ and ${r}_{0}=0$.

Step 1: Set $i=0$ and let $ksum=rsum=0$.

Step 2: Set $i=i+1$. Using the initial $\pi$, generate the removal ${r}_{i}$ from a binomial distribution, where ${r}_{1}$ follows the binomial $(n-m,\pi )$ distribution and, given ${r}_{1},{r}_{2},\dots ,{r}_{i-1}$, the variable ${r}_{i}$ follows the binomial $(n-m-{\displaystyle {\sum}_{j=1}^{i-1}{r}_{j}},\pi )$ distribution for $i=2,3,\dots ,m-1$, with

$${r}_{m}=\left\{\begin{array}{cl}n-m-{\displaystyle {\sum}_{j=1}^{m-1}{r}_{j}}&\text{if }n-m-{\displaystyle {\sum}_{j=1}^{m-1}{r}_{j}}\ge 0\\ 0&\text{otherwise}\end{array}\right.$$

Then generate ${k}_{i}$ as a binomial random variable with parameters $n-ksum-rsum$ and

$$p=\left({e}^{-{e}^{\lambda {T}_{i-1}-\frac{\beta}{{T}_{i-1}}}}-{e}^{-{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}}\right)\Big/{e}^{-{e}^{\lambda {T}_{i-1}-\frac{\beta}{{T}_{i-1}}}},$$

the conditional probability of failure in $({T}_{i-1},{T}_{i}]$ given survival beyond ${T}_{i-1}$.

Step 3: Set $ksum=ksum+{k}_{i}$ and $rsum=rsum+{r}_{i}$.

Step 4: If $i<m$, go to Step 2; otherwise, stop.
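Steps 1-4 can be sketched in Python as below. This is an illustrative reading of the algorithm, not the paper's code: removals are capped by the current number of survivors, and the terminal removal is taken as all remaining survivors, two practical simplifications.

```python
import math
import random

def fw_sf(t, lam, beta):
    """Flexible Weibull survival function, with S(0) = 1."""
    return 1.0 if t == 0 else math.exp(-math.exp(lam * t - beta / t))

def generate_sample(n, T, lam, beta, pi, rng=None):
    """One progressively type-I interval censored sample with binomial removals.

    Returns (k, r): failures and removals in (0, T1], (T1, T2], ..., (T_{m-1}, T_m].
    """
    rng = rng or random.Random()
    m = len(T)
    k, r = [], []
    ksum = rsum = 0
    prev = 0.0
    for i in range(m):
        # conditional probability of failing in (T_{i-1}, T_i] given survival
        p = (fw_sf(prev, lam, beta) - fw_sf(T[i], lam, beta)) / fw_sf(prev, lam, beta)
        ki = sum(rng.random() < p for _ in range(n - ksum - rsum))  # Binomial draw
        survivors = n - ksum - ki - rsum
        if i < m - 1:
            # r_i ~ Binomial(n - m - earlier removals, pi), capped by survivors
            ri = sum(rng.random() < pi for _ in range(max(n - m - rsum, 0)))
            ri = min(ri, survivors)
        else:
            ri = max(survivors, 0)  # withdraw everything still on test at T_m
        k.append(ki)
        r.append(ri)
        ksum += ki
        rsum += ri
        prev = T[i]
    return k, r
```

With the terminal-removal convention used here, every generated sample satisfies $\sum {k}_{i}+\sum {r}_{i}=n$.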
Percentile bootstrap algorithm (Boot-p)
A parametric bootstrap interval conveys more information about the population value than a point estimate alone. We propose confidence intervals based on the parametric bootstrap, using the percentile bootstrap algorithm (Boot-p) based on the idea of Efron [16]. The progressively interval type-I censored data with binomial random removals are generated as described in the data algorithm above, and the confidence intervals are then estimated as follows:

1. Specify the values of $n$, $m$ and ${T}_{1},\dots ,{T}_{m}$.

2. Specify the values of $\lambda$, $\beta$ and $\pi$.

3. From the data algorithm, compute the maximum likelihood estimates $\widehat{\lambda}$, $\widehat{\beta}$ and $\widehat{\pi}$ of the parameters by solving the likelihood equations (8), (9) and (10).

4. Use $\widehat{\lambda}$, $\widehat{\beta}$ and $\widehat{\pi}$ to generate a bootstrap sample ${k}^{*}$ with the same values of ${r}_{i}$ and $m$ $(i=1,2,\dots ,m)$, using the algorithm presented in Balakrishnan & Sandhu [17].

5. As in Step 3, based on ${k}^{*}$, compute the bootstrap estimates of $\widehat{\lambda}$, $\widehat{\beta}$ and $\widehat{\pi}$, say ${\widehat{\lambda}}^{*}$, ${\widehat{\beta}}^{*}$ and ${\widehat{\pi}}^{*}$.

6. Repeat Steps 4-5 $B$ times, obtaining $B$ bootstrap maximum likelihood estimates of $\lambda$, $\beta$ and $\pi$ based on $B$ different bootstrap samples.

7. Arrange all ${\widehat{\lambda}}^{*}$'s, ${\widehat{\beta}}^{*}$'s and ${\widehat{\pi}}^{*}$'s in ascending order to obtain the bootstrap samples $\left({\phi}_{l}^{\left[1\right]},{\phi}_{l}^{\left[2\right]},\dots ,{\phi}_{l}^{\left[B\right]}\right)$, $l=1,2,3$ (where ${\phi}_{1}\equiv {\widehat{\lambda}}^{*}$, ${\phi}_{2}\equiv {\widehat{\beta}}^{*}$ and ${\phi}_{3}\equiv {\widehat{\pi}}^{*}$).
Let $G\left(z\right)=P\left({\phi}_{l}\le z\right)$ be the cumulative distribution function of ${\phi}_{l}$, and define ${\phi}_{l\,boot}={G}^{-1}\left(z\right)$ for a given $z$. The approximate bootstrap $100\left(1-2\gamma \right)\%$ confidence interval (ABCI) of ${\phi}_{l}$ is given by $\left[{\phi}_{l\,boot}\left(\gamma \right),{\phi}_{l\,boot}\left(1-\gamma \right)\right]$.
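As a minimal illustration of Steps 4-7, the sketch below bootstraps only $\widehat{\pi}$, which has the closed form (10); bootstrapping $\widehat{\lambda}$ and $\widehat{\beta}$ follows the same pattern but requires a numerical solver for (8)-(9). All names are illustrative, not from the paper.

```python
import random

def pi_mle(n, r):
    """Closed-form MLE of pi, equation (10); r = [r_1, ..., r_m], 0-based list."""
    m = len(r)
    num = sum(r[:m - 1])
    den = (m - 1) * (n - m) - sum((m - (j + 1) - 1) * r[j] for j in range(m - 1))
    return num / den

def bootp_pi(n, r, B=1000, gamma=0.05, rng=None):
    """Percentile-bootstrap (Boot-p) 100(1-2*gamma)% interval for pi."""
    rng = rng or random.Random()
    m = len(r)
    p_hat = pi_mle(n, r)
    boot = []
    for _ in range(B):
        rstar, rsum = [], 0
        for _i in range(m - 1):
            # regenerate removals from Binomial(n - m - rsum, p_hat)
            ri = sum(rng.random() < p_hat for _ in range(max(n - m - rsum, 0)))
            rstar.append(ri)
            rsum += ri
        rstar.append(0)  # r_m does not enter the estimator
        boot.append(pi_mle(n, rstar))
    boot.sort()
    # [phi_boot(gamma), phi_boot(1 - gamma)] as in the ABCI definition above
    return p_hat, (boot[int(gamma * B)], boot[int((1 - gamma) * B) - 1])
```

The empirical quantiles of the sorted bootstrap replicates play the role of ${G}^{-1}(\gamma)$ and ${G}^{-1}(1-\gamma)$.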
Bayesian Estimation and MCMC Technique
In this section, we develop a Bayesian approach that uses the Markov chain Monte Carlo (MCMC) method to generate samples from the posterior distributions and, in turn, to compute the Bayes estimators.
Bayesian estimation
In the Bayesian scenario, we need to assume prior distributions for the unknown model parameters to take into account the uncertainty about them. The informative prior densities for $\lambda$ and $\beta$ are given as

$${g}_{1}\left(\lambda \right)\propto {\lambda}^{b-1}{e}^{-a\lambda},\quad a,b,\lambda >0,$$

$${g}_{2}\left(\beta \right)\propto {\beta}^{d-1}{e}^{-c\beta},\quad c,d,\beta >0,$$

and $\pi$ has the prior

$${g}_{3}\left(\pi \right)\propto {\pi}^{A-1}{\left(1-\pi \right)}^{B-1},\quad 0<\pi <1;\ A,B>0.$$

The parameters $\lambda$, $\beta$ and $\pi$ are treated as independent random variables, so the joint informative prior density of $\lambda$, $\beta$ and $\pi$ is

$$g\left(\lambda ,\beta ,\pi \right)\propto {g}_{1}\left(\lambda \right)\times {g}_{2}\left(\beta \right)\times {g}_{3}\left(\pi \right)\propto {\lambda}^{b-1}{e}^{-a\lambda}{\beta}^{d-1}{e}^{-c\beta}{\pi}^{A-1}{\left(1-\pi \right)}^{B-1}$$ (11)
where $a,b,c,d,A$ and $B$ are assumed to be known and are chosen to reflect the prior knowledge about $\lambda$, $\beta$ and $\pi$. Note that when $a=b=c=d=A=B=0$ (we call it prior 0), the priors become non-informative for $\lambda$, $\beta$ and $\pi$, respectively.
It follows from (4), (6) and (11) that the joint posterior density function of $\lambda$, $\beta$ and $\pi$ given $x$ is

$$\pi^{*}\left(\lambda ,\beta ,\pi \right)=\frac{{L}_{1}\left(X;\lambda ,\beta /R=r\right)\times P\left(R\right)\times g\left(\lambda ,\beta ,\pi \right)}{{\displaystyle {\int}_{0}^{\infty}{\displaystyle {\int}_{0}^{\infty}{\displaystyle {\int}_{0}^{1}{L}_{1}\left(X;\lambda ,\beta /R=r\right)\times P\left(R\right)\times g\left(\lambda ,\beta ,\pi \right)\,d\pi \,d\beta \,d\lambda}}}}$$

$$\pi^{*}\left(\lambda ,\beta ,\pi \right)\propto \prod _{i=1}^{m}{\left[1-\frac{{e}^{-u\left({T}_{i}\right)}}{{e}^{-u\left({T}_{i-1}\right)}}\right]}^{{k}_{i}}{e}^{-\left\{{k}_{i}u\left({T}_{i-1}\right)+{r}_{i}u\left({T}_{i}\right)\right\}}\ {\pi}^{{\displaystyle {\sum}_{j=1}^{m-1}{r}_{j}}}{\left(1-\pi \right)}^{\left(m-1\right)\left(n-m\right)-{\displaystyle {\sum}_{j=1}^{m-1}\left(m-j\right){r}_{j}}}\times {\lambda}^{b-1}{e}^{-a\lambda}{\beta}^{d-1}{e}^{-c\beta}{\pi}^{A-1}{\left(1-\pi \right)}^{B-1}$$ (12)

where $u\left({T}_{i}\right)={e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}$ and $u\left({T}_{i-1}\right)={e}^{\lambda {T}_{i-1}-\frac{\beta}{{T}_{i-1}}}$.
It is not possible to compute (12) analytically: the normalizing integral usually cannot be evaluated in closed form, and numerical methods may fail. The MCMC method provides an alternative route to parameter estimation. In the following subsections, we propose using the MCMC technique to obtain Bayes estimates of the unknown parameters and to construct the corresponding credible intervals.
MCMC technique
Computer simulation of Markov chains in the parameter space relies on Markov chain Monte Carlo (MCMC); see Gilks et al. [18]. The Markov chains are constructed so that the posterior distribution of the given statistical inference problem is their asymptotic distribution. However, the posterior likelihood usually does not have a closed form for a given progressively type-I interval censored data set, and numerical integration cannot be easily applied in this situation. Several standard approaches for constructing such Markov chains exist, including Gibbs sampling, Metropolis-Hastings (MH) and reversible jump. The MH algorithm is a very general MCMC method, first developed by Metropolis et al. [19] and later extended by Hastings [20]. These algorithms make it possible to implement posterior simulation in essentially any problem that allows pointwise evaluation of the prior distribution and likelihood function; they can be used to obtain random samples from any arbitrarily complicated target distribution of any dimension that is known up to a normalizing constant. In fact, the Gibbs sampler is just a special case of the MH algorithm.
In order to use the MCMC method for estimating the parameters of the flexible Weibull distribution and the random removal, namely $\lambda$, $\beta$ and $\pi$, let us consider the independent priors in (11). The full conditional distribution for any parameter can be obtained, up to a constant, by factoring out of the likelihood function $L(X,R;\lambda ,\beta )$ the terms containing the relevant parameter and multiplying by its prior. From (12), the full posterior conditional distribution for $\lambda$ is proportional to

$${\pi}^{*}\left(\lambda /x,\beta ,\pi \right)\propto {\displaystyle \prod}_{i=1}^{m}{\left[1-\frac{{e}^{-u\left({T}_{i}\right)}}{{e}^{-u\left({T}_{i-1}\right)}}\right]}^{{k}_{i}}{e}^{-\left\{{k}_{i}u\left({T}_{i-1}\right)+{r}_{i}u\left({T}_{i}\right)\right\}}\times {\lambda}^{b-1}{e}^{-a\lambda}$$ (13)

Also, the full posterior conditional distribution for $\beta$ is proportional to

$${\pi}^{*}\left(\beta /x,\lambda ,\pi \right)\propto {\displaystyle \prod}_{i=1}^{m}{\left[1-\frac{{e}^{-u\left({T}_{i}\right)}}{{e}^{-u\left({T}_{i-1}\right)}}\right]}^{{k}_{i}}{e}^{-\left\{{k}_{i}u\left({T}_{i-1}\right)+{r}_{i}u\left({T}_{i}\right)\right\}}\times {\beta}^{d-1}{e}^{-c\beta}$$ (14)

Similarly, the conditional posterior density of $\pi$ is proportional to

$${\pi}^{*}\left(\pi /x,\lambda ,\beta \right)\propto {\pi}^{A+{\displaystyle {\sum}_{j=1}^{m-1}{r}_{j}}-1}{\left(1-\pi \right)}^{B+\left(m-1\right)\left(n-m\right)-{\displaystyle {\sum}_{j=1}^{m-1}\left(m-j\right){r}_{j}}-1}$$ (15)

It is noted that the posterior distribution of $\pi$ is beta with parameters ${A}^{*}$ and ${B}^{*}$, where ${A}^{*}=A+{\displaystyle {\sum}_{j=1}^{m-1}{r}_{j}}$ and ${B}^{*}=B+\left(m-1\right)\left(n-m\right)-{\displaystyle {\sum}_{j=1}^{m-1}\left(m-j\right){r}_{j}}$; therefore, samples of $\pi$ can easily be generated using any beta generating routine. The conditional posterior distributions of $\lambda$ and $\beta$ in equations (13) and (14), however, cannot be reduced analytically to well-known distributions, so it is not possible to sample them directly by standard methods; their plots show that they resemble a normal distribution. Hence, to generate random numbers from these distributions, we use the MH method with a normal proposal distribution.
MCMC process
Now, we propose the following scheme to generate $\lambda$, $\beta$ and $\pi$ from the conditional density functions and, in turn, obtain the Bayes estimates and the corresponding credible intervals.

Step 1: Start with ${\lambda}^{\left(0\right)}=\widehat{\lambda}$, ${\beta}^{\left(0\right)}=\widehat{\beta}$ and $M=$ burn-in.

Step 2: Set $t=1$.

Step 3: Generate ${\pi}^{\left(t\right)}$ from the beta distribution $\pi^{*}\left(\pi /x,\lambda ,\beta \right)$.

Step 4: Using the MH algorithm (Metropolis et al. [19]), generate ${\lambda}^{\left(t\right)}$ from $\pi^{*}\left(\lambda /x,\beta ,\pi \right)$ with the $N\left({\lambda}^{\left(t-1\right)},{\sigma}_{\lambda}^{2}\right)$ proposal distribution, where ${\sigma}_{\lambda}^{2}$ is the variance of $\lambda$ obtained from the variance-covariance matrix; similarly, generate ${\beta}^{\left(t\right)}$ from $\pi^{*}\left(\beta /x,\lambda ,\pi \right)$ with the $N\left({\beta}^{\left(t-1\right)},{\sigma}_{\beta}^{2}\right)$ proposal distribution, where ${\sigma}_{\beta}^{2}$ is the variance of $\beta$ obtained from the variance-covariance matrix.

Step 5: Record ${\lambda}^{\left(t\right)}$, ${\beta}^{\left(t\right)}$ and ${\pi}^{\left(t\right)}$.

Step 6: Set $t=t+1$.

Step 7: Repeat Steps 3-6 $N$ times.

Step 8: Obtain the Bayes estimates of $\lambda$, $\beta$ and $\pi$ with respect to the squared error loss function as

$$\widehat{E}\left(\lambda /x\right)=\frac{1}{N-M}{\displaystyle \sum _{i=M+1}^{N}{\lambda}_{i}},\quad \widehat{E}\left(\beta /x\right)=\frac{1}{N-M}{\displaystyle \sum _{i=M+1}^{N}{\beta}_{i}}\quad \text{and}\quad \widehat{E}\left(\pi /x\right)=\frac{1}{N-M}{\displaystyle \sum _{i=M+1}^{N}{\pi}_{i}}.$$

Step 9: To compute the credible intervals of $\lambda$, $\beta$ and $\pi$, order the retained draws ${\lambda}_{1},\dots ,{\lambda}_{N-M}$, ${\beta}_{1},\dots ,{\beta}_{N-M}$ and ${\pi}_{1},\dots ,{\pi}_{N-M}$ as ${\lambda}_{\left(1\right)}<\dots <{\lambda}_{\left(N-M\right)}$, ${\beta}_{\left(1\right)}<\dots <{\beta}_{\left(N-M\right)}$ and ${\pi}_{\left(1\right)}<\dots <{\pi}_{\left(N-M\right)}$. Then the $100\left(1-\gamma \right)\%$ symmetric credible intervals (SCI) of $\lambda$, $\beta$ and $\pi$ become

$$\left[{\lambda}_{\left(\left(N-M\right)\gamma /2\right)},{\lambda}_{\left(\left(N-M\right)\left(1-\gamma /2\right)\right)}\right],\quad \left[{\beta}_{\left(\left(N-M\right)\gamma /2\right)},{\beta}_{\left(\left(N-M\right)\left(1-\gamma /2\right)\right)}\right]\quad \text{and}\quad \left[{\pi}_{\left(\left(N-M\right)\gamma /2\right)},{\pi}_{\left(\left(N-M\right)\left(1-\gamma /2\right)\right)}\right].$$
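The scheme above can be sketched compactly in Python. This is an illustrative implementation under stated assumptions: the helper names are made up, fixed random-walk standard deviations stand in for the variances taken from the information matrix, and the target kernels are the full conditionals derived in the previous subsection (exact beta draw for $\pi$, random-walk MH for $\lambda$ and $\beta$).

```python
import math
import random

def log_cond(lam, beta, n, T, k, r, a, b, c, d):
    """Log of the joint (lambda, beta) conditional kernel: likelihood times gamma priors."""
    if lam <= 0 or beta <= 0:
        return float("-inf")
    out = (b - 1) * math.log(lam) - a * lam + (d - 1) * math.log(beta) - c * beta
    uprev = 0.0  # u(T_0) = 0 at T_0 = 0
    for i, t in enumerate(T):
        u = math.exp(lam * t - beta / t)  # u(T_i) = exp(lam*T_i - beta/T_i)
        out += k[i] * math.log1p(-math.exp(-(u - uprev)))
        out += -k[i] * uprev - r[i] * u
        uprev = u
    return out

def mh_within_gibbs(n, T, k, r, lam0, beta0, prior=(2, 3, 4, 2, 2, 3),
                    N=2000, M=500, s_lam=0.05, s_beta=0.05, rng=None):
    """Steps 1-9: exact beta draw for pi, random-walk MH updates for lambda and beta."""
    a, b, c, d, A, B = prior
    rng = rng or random.Random()
    m = len(T)
    A_star = A + sum(r[:m - 1])
    B_star = B + (m - 1) * (n - m) - sum((m - (j + 1)) * r[j] for j in range(m - 1))
    lam, beta = lam0, beta0
    draws = []
    for t in range(1, N + 1):
        pi = rng.betavariate(A_star, B_star)  # Step 3: conjugate beta draw
        for which in ("lam", "beta"):         # Step 4: one MH update per parameter
            if which == "lam":
                prop_lam, prop_beta = rng.gauss(lam, s_lam), beta
            else:
                prop_lam, prop_beta = lam, rng.gauss(beta, s_beta)
            diff = (log_cond(prop_lam, prop_beta, n, T, k, r, a, b, c, d)
                    - log_cond(lam, beta, n, T, k, r, a, b, c, d))
            if diff >= 0 or rng.random() < math.exp(diff):
                lam, beta = prop_lam, prop_beta
        if t > M:
            draws.append((lam, beta, pi))     # Step 5: record after burn-in
    means = tuple(sum(col) / len(draws) for col in zip(*draws))
    return means, draws
```

Posterior means of the retained draws give the squared-error-loss Bayes estimates, and sorting each coordinate of `draws` yields the symmetric credible intervals.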
Real Data Analysis
A study was conducted at the Institute of Oncology in Tanta, Egypt. The study is concerned with the treatment of cancerous tumors in blood and their impact on the overall health of the patient. A total of 228 patients with varying degrees of the disease underwent the study. Patients were examined every 15 days for 6 consecutive months; of course, there were cases of withdrawal (death, or interruption of treatment for different reasons).

As is well known, on the basis of a single sample one cannot make a general statement about the behavior of the proposed estimators; therefore, we present a simulation study of the behavior of the estimators in the next section.
Simulation
The simulation is conducted using R version 3.2.2 (for more information about R, the reader may refer to the R manuals, R Core Team, 2000-2015). The simulation setup parallels the real data given in (Table 1). To be specific, each replication of the simulation generates a progressively type-I interval censored data set within twelve subintervals with pre-specified inspection times (in days)

$${T}_{0}=0,\ {T}_{1}=16,\ {T}_{2}=31,\ {T}_{3}=46,\ {T}_{4}=61,\ {T}_{5}=76,\ {T}_{6}=91,\ {T}_{7}=106,\ {T}_{8}=121,\ {T}_{9}=136,\ {T}_{10}=151,\ {T}_{11}=166\ \text{and}\ {T}_{12}=181.$$

The last inspection time, ${T}_{12}$, is the scheduled time to terminate the experiment. The lifetime distribution is flexible Weibull with parameters $\lambda$ and $\beta$, where the simulation input parameters are selected close to the maximum likelihood estimates of the flexible Weibull parameters for the real data in (Table 1). The performance of parameter estimation under progressive type-I interval censoring with random removal is compared via the maximum likelihood, bootstrap and MCMC procedures developed in this paper. The summary of 1000 simulation runs is shown in (Tables 2-5).
For the Bayes estimates of $\lambda$, $\beta$ and $\pi$ using the MCMC method, we assume the informative priors ($a=2$, $b=3$, $c=4$, $d=2$, $A=2$ and $B=3$) on $\lambda$, $\beta$ and $\pi$ in (Table 4). Bayes estimates of the parameters obtained by the MCMC procedure under the non-informative prior are reported in (Table 5).
| $i$ | Interval (days) | Number at risk | Number of failures ${k}_{i}$ | Cases of withdrawal (number of random removals ${r}_{i}$) |
| --- | --- | --- | --- | --- |
| 1 | [0, 16) | 228 | 25 | 2 |
| 2 | [16, 31) | 201 | 39 | 2 |
| 3 | [31, 46) | 160 | 25 | 1 |
| 4 | [46, 61) | 134 | 20 | 3 |
| 5 | [61, 76) | 111 | 11 | 1 |
| 6 | [76, 91) | 99 | 14 | 2 |
| 7 | [91, 106) | 83 | 11 | 3 |
| 8 | [106, 121) | 69 | 17 | 0 |
| 9 | [121, 136) | 52 | 6 | 2 |
| 10 | [136, 151) | 44 | 31 | 1 |
| 11 | [151, 166) | 12 | 6 | 1 |
| 12 | [166, 181) | 5 | 5 | 0 |

Table 1: Patients examined every 15 days.
| Different parameters | $\lambda$ | $\beta$ | $\pi$ |
| --- | --- | --- | --- |
| Average | 6.441 | 0.0841 | 0.6312 |
| MSE | 0.0134 | 0.0743 | 0.0484 |
| Bias | 0.0231 | 0.1073 | 0.0094 |
| Variance | 0.0129 | 0.0627 | 0.0483 |
| ACI | [5.0132, 7.9801] | [-0.1736, 0.0901] | [0.4421, 0.7782] |
| Length of ACI | 2.9669 | 0.2637 | 0.3361 |

Table 2: Progressive type-I interval censoring with random removal, via maximum likelihood.
| Different parameters | $\lambda$ | $\beta$ | $\pi$ |
| --- | --- | --- | --- |
| Average | 5.966 | 0.099 | 0.7058 |
| MSE | 0.1174 | 0.0984 | 0.0487 |
| Bias | 0.0346 | 0.1764 | 0.0109 |
| Variance | 0.1162 | 0.0673 | 0.0486 |
| ABCI | [5.0117, 8.0412] | [-0.1811, 0.0884] | [0.4434, 0.7992] |
| Length of ABCI | 3.0295 | 0.2695 | 0.3558 |

Table 3: Progressive type-I interval censoring with random removal, via the bootstrap method.
| Different parameters | $\lambda$ | $\beta$ | $\pi$ |
| --- | --- | --- | --- |
| Average | 4.902 | 0.0083 | 0.5118 |
| MSE | 0.0035 | 0.0277 | 0.04804 |
| Bias | 0.0049 | 0.0833 | 0.0017 |
| Variance | 0.0035 | 0.0207 | 0.04803 |
| SCI | [5.1023, 7.6421] | [-0.1075, 0.0826] | [0.4927, 0.6524] |
| Length of SCI | 2.5398 | 0.1901 | 0.1597 |

Table 4: Progressive type-I interval censoring with random removal, via the MCMC procedure developed (informative priors).
| Different parameters | $\lambda$ | $\beta$ | $\pi$ |
| --- | --- | --- | --- |
| Average | 4.671 | 0.0398 | 0.6501 |
| MSE | 0.0673 | 0.0559 | 0.0656 |
| Bias | 0.0174 | 0.0304 | 0.0093 |
| Variance | 0.067 | 0.0549 | 0.0655 |
| SCI | [4.8821, 8.0307] | [-0.1010, 0.0721] | [0.3881, 0.7061] |
| Length of SCI | 3.1486 | 0.1731 | 0.318 |

Table 5: Progressive type-I interval censoring with random removal, via the MCMC procedure developed (non-informative priors).
The density functions of ${\pi}^{*}\left(\lambda /x,\beta ,\pi \right)$ and ${\pi}^{*}\left(\beta /x,\lambda ,\pi \right)$ can be approximated by normal distribution functions, while the density function of ${\pi}^{*}\left(\pi /x,\lambda ,\beta \right)$ is beta, as mentioned in the MCMC technique subsection; they are plotted in (Figures 1 & 2) together with the chains of MCMC outputs of $\lambda$, $\beta$ and $\pi$, using 100,000 MCMC samples. This was done with 1000 bootstrap samples and 100,000 MCMC samples, discarding the first 50,000 values as burn-in. The Bayes estimators can be seen to have smaller risks than the classical estimators in all the considered cases. It may also be noted that the Bayes estimators obtained under the informative prior are more efficient than those obtained under the non-informative prior, which indicates that the Bayesian procedure with accurate prior information provides more precise estimates. Also, the length of the SCI (using the informative prior) is smaller than the lengths of the ACI and ABCI.
Figure 1: Posterior density functions of $\lambda $, $\beta $ and $\pi $.
Figure 2: Chains of MCMC outputs of $\lambda $, $\beta $ and $\pi $.
Conclusion
The methodology developed in this paper will be useful to researchers, engineers and statisticians in the medical field, where this type of life test is needed and, especially, where the Weibull distribution is used. We have considered the problem of estimation for the flexible Weibull distribution in the presence of a progressive type-I interval censored sample with binomial removals. The scope of this censoring scheme in clinical trials has been discussed. We have found that the Bayesian procedure provides estimates of the unknown parameters of the flexible Weibull model with smaller MSE, and that the length of the SCI is smaller than that of the ACI and ABCI. Applying the MCMC process, through the MH algorithm, to Bayesian estimation for other lifetime distributions under type-I progressive interval censoring with random removal could be fruitful future research.
Appendix
The asymptotic variance-covariance matrix of the maximum likelihood estimators of the parameters $\lambda $, $\beta $ and $\pi $ is given by the elements of the inverse of the Fisher information matrix, which under random removal is
$$I(\widehat{\lambda},\widehat{\beta},\widehat{\pi})=\left[\begin{array}{cc}{I}_{1}(\widehat{\lambda},\widehat{\beta})& 0\\ 0& {I}_{2}(\widehat{\pi})\end{array}\right].$$
Unfortunately, the exact mathematical expressions for the above expectations are very difficult to obtain. Therefore, we give the approximate (observed) asymptotic variance-covariance matrix of the maximum likelihood estimators, which is obtained by dropping the expectation operator E, where
$${I}_{1}^{-1}(\widehat{\lambda},\widehat{\beta})={\left[\begin{array}{cc}-\left(\frac{{\partial}^{2}\mathrm{ln}L}{\partial {\lambda}^{2}}\right)& -\left(\frac{{\partial}^{2}\mathrm{ln}L}{\partial \lambda \partial \beta}\right)\\ -\left(\frac{{\partial}^{2}\mathrm{ln}L}{\partial \lambda \partial \beta}\right)& -\left(\frac{{\partial}^{2}\mathrm{ln}L}{\partial {\beta}^{2}}\right)\end{array}\right]}^{-1}{}_{\lambda =\widehat{\lambda},\beta =\widehat{\beta}}=\left[\begin{array}{cc}V(\widehat{\lambda})& Cov(\widehat{\lambda},\widehat{\beta})\\ Cov(\widehat{\lambda},\widehat{\beta})& V(\widehat{\beta})\end{array}\right]$$
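To illustrate this step, the following sketch inverts a 2×2 observed information matrix in closed form to recover the approximate variances and covariance; the second-partial values used are illustrative only, not values from the paper's data.

```python
def varcov_2x2(h_ll, h_lb, h_bb):
    """Approximate variance-covariance matrix from the observed information:
    invert [[-h_ll, -h_lb], [-h_lb, -h_bb]], where h_xy are the second
    partials of log L evaluated at the MLEs.
    Returns (V(lam_hat), Cov(lam_hat, beta_hat), V(beta_hat))."""
    a, b, c = -h_ll, -h_lb, -h_bb        # observed information entries
    det = a * c - b * b                  # positive at a proper maximum
    # closed-form inverse of a symmetric 2x2 matrix
    return c / det, -b / det, a / det

# illustrative second partials (a well-behaved negative-definite Hessian)
v_lam, cov_lb, v_beta = varcov_2x2(-4.0, 1.0, -2.0)
```

The diagonal entries of the inverse give the approximate variances used for the asymptotic confidence intervals; the off-diagonal entry is the covariance.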
We explain how to find the Fisher information matrix
${I}_{1}(\widehat{\lambda},\widehat{\beta})$
later in this appendix.
$${I}_{2}(\widehat{\pi})=-E\left(\frac{{\partial}^{2}\mathrm{ln}L(\pi )}{\partial {\pi}^{2}}\right)$$
,
and
$$\frac{{\partial}^{2}\mathrm{ln}P(R)}{\partial {\pi}^{2}}=-\frac{1}{{\pi}^{2}}{\displaystyle \sum _{j=1}^{m-1}{r}_{j}}-\frac{1}{{(1-\pi )}^{2}}\left[\left(m-1\right)\left(n-m\right)-{\displaystyle \sum _{j=1}^{m-1}\left(m-j\right){r}_{j}}\right].$$
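This second derivative can be checked numerically. The sketch below assumes the standard binomial-removal form of the $\pi$-dependent part of $\mathrm{ln}P(R)$, namely $(\sum r_j)\ln\pi + [(m-1)(n-m)-\sum(m-j)r_j]\ln(1-\pi)$, and the removal scheme used is illustrative only.

```python
import math

def log_removal(pi, n, m, r):
    """pi-dependent part of ln P(R) under binomial removals:
    (sum r_j) ln(pi) + [(m-1)(n-m) - sum (m-j) r_j] ln(1-pi)."""
    s = sum(r)
    a = (m - 1) * (n - m) - sum((m - j) * rj for j, rj in enumerate(r, start=1))
    return s * math.log(pi) + a * math.log(1.0 - pi)

def d2_log_removal(pi, n, m, r):
    """Analytic second derivative of ln P(R) with respect to pi."""
    s = sum(r)
    a = (m - 1) * (n - m) - sum((m - j) * rj for j, rj in enumerate(r, start=1))
    return -s / pi ** 2 - a / (1.0 - pi) ** 2

# central second difference at an illustrative removal scheme
n, m, r, pi, h = 30, 5, [2, 1, 0, 3], 0.4, 1e-4
fd = (log_removal(pi + h, n, m, r) - 2.0 * log_removal(pi, n, m, r)
      + log_removal(pi - h, n, m, r)) / h ** 2
```

The finite-difference value `fd` agrees with the analytic expression to several decimal places, confirming the sign and form of the derivative.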
A numerical technique is needed to obtain the Fisher information matrix and the variance-covariance matrix. Note that under both fixed and random removal the estimates are based on intervals of equal length, so that monitoring and censoring occur periodically, say
${T}_{i}=i\cdot t$
.
We determine the second partials by differentiating the first partials, equations (8) and (9), obtaining
$$\frac{{\partial}^{2}\mathrm{log}L}{\partial \lambda \partial \beta}={\displaystyle \sum _{i=1}^{m}{k}_{i}}\left\{\frac{\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial \lambda \partial \beta}-\frac{{\partial}^{2}F\left({T}_{i-1}\right)}{\partial \lambda \partial \beta}}{F\left({T}_{i}\right)-F\left({T}_{i-1}\right)}-\frac{\left[\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \lambda}\right]\left[\frac{\partial F\left({T}_{i}\right)}{\partial \beta}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \beta}\right]}{{\left[F\left({T}_{i}\right)-F\left({T}_{i-1}\right)\right]}^{2}}\right\}+{\displaystyle \sum _{i=1}^{m}{r}_{i}}\left\{\frac{\frac{{\partial}^{2}}{\partial \lambda \partial \beta}\left[1-F\left({T}_{i}\right)\right]}{\left[1-F\left({T}_{i}\right)\right]}-\frac{\left[\frac{\partial}{\partial \lambda}\left[1-F\left({T}_{i}\right)\right]\right]\left[\frac{\partial}{\partial \beta}\left[1-F\left({T}_{i}\right)\right]\right]}{{\left[1-F\left({T}_{i}\right)\right]}^{2}}\right\}$$
$$\frac{{\partial}^{2}\mathrm{log}L}{\partial {\lambda}^{2}}={\displaystyle \sum _{i=1}^{m}{k}_{i}}\left\{\frac{\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial {\lambda}^{2}}-\frac{{\partial}^{2}F\left({T}_{i-1}\right)}{\partial {\lambda}^{2}}}{F\left({T}_{i}\right)-F\left({T}_{i-1}\right)}-\frac{{\left[\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \lambda}\right]}^{2}}{{\left[F\left({T}_{i}\right)-F\left({T}_{i-1}\right)\right]}^{2}}\right\}+{\displaystyle \sum _{i=1}^{m}{r}_{i}}\left\{\frac{\frac{{\partial}^{2}}{\partial {\lambda}^{2}}\left[1-F\left({T}_{i}\right)\right]}{\left[1-F\left({T}_{i}\right)\right]}-\frac{{\left[\frac{\partial}{\partial \lambda}\left[1-F\left({T}_{i}\right)\right]\right]}^{2}}{{\left[1-F\left({T}_{i}\right)\right]}^{2}}\right\}$$
$$\frac{{\partial}^{2}\mathrm{log}L}{\partial {\beta}^{2}}={\displaystyle \sum _{i=1}^{m}{k}_{i}}\left\{\frac{\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial {\beta}^{2}}-\frac{{\partial}^{2}F\left({T}_{i-1}\right)}{\partial {\beta}^{2}}}{F\left({T}_{i}\right)-F\left({T}_{i-1}\right)}-\frac{{\left[\frac{\partial F\left({T}_{i}\right)}{\partial \beta}-\frac{\partial F\left({T}_{i-1}\right)}{\partial \beta}\right]}^{2}}{{\left[F\left({T}_{i}\right)-F\left({T}_{i-1}\right)\right]}^{2}}\right\}+{\displaystyle \sum _{i=1}^{m}{r}_{i}}\left\{\frac{\frac{{\partial}^{2}}{\partial {\beta}^{2}}\left[1-F\left({T}_{i}\right)\right]}{\left[1-F\left({T}_{i}\right)\right]}-\frac{{\left[\frac{\partial}{\partial \beta}\left[1-F\left({T}_{i}\right)\right]\right]}^{2}}{{\left[1-F\left({T}_{i}\right)\right]}^{2}}\right\}$$
$$F\left({T}_{i}\right)=1-{e}^{-{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}}$$
$$\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}={T}_{i}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}{e}^{-{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}}={T}_{i}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}\left\{1-F\left({T}_{i}\right)\right\}$$
$$\frac{\partial F\left({T}_{i}\right)}{\partial \beta}=-\frac{1}{{T}_{i}}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}{e}^{-{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}}=-\frac{1}{{T}_{i}}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}\left\{1-F\left({T}_{i}\right)\right\}$$
$$\frac{\partial}{\partial \lambda}\left[1-F\left({T}_{i}\right)\right]=-\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}$$
$$\frac{\partial}{\partial \beta}\left[1-F\left({T}_{i}\right)\right]=-\frac{\partial F\left({T}_{i}\right)}{\partial \beta}$$
$$\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial {\lambda}^{2}}={T}_{i}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}\left\{{T}_{i}\left[1-F\left({T}_{i}\right)\right]-\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}\right\}$$
$$\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial {\beta}^{2}}=\frac{1}{{T}_{i}}{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}\left\{\frac{1}{{T}_{i}}\left[1-F\left({T}_{i}\right)\right]+\frac{\partial F\left({T}_{i}\right)}{\partial \beta}\right\}$$
$$\frac{{\partial}^{2}F\left({T}_{i}\right)}{\partial \lambda \partial \beta}=-{e}^{\lambda {T}_{i}-\frac{\beta}{{T}_{i}}}\left\{\left[1-F\left({T}_{i}\right)\right]+{T}_{i}\frac{\partial F\left({T}_{i}\right)}{\partial \beta}\right\}$$
Note that
${T}_{i}\frac{\partial F\left({T}_{i}\right)}{\partial \beta}=-\frac{1}{{T}_{i}}\frac{\partial F\left({T}_{i}\right)}{\partial \lambda}$
.
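The first partial derivatives of the flexible Weibull CDF $F(t)=1-\exp(-e^{\lambda t-\beta/t})$, and the identity ${T}_{i}\,\partial F/\partial \beta =-(1/{T}_{i})\,\partial F/\partial \lambda$, can be verified numerically with central finite differences; the parameter values below are illustrative only.

```python
import math

def cdf(t, lam, beta):
    """Flexible Weibull CDF: F(t) = 1 - exp(-exp(lam*t - beta/t)), t > 0."""
    return 1.0 - math.exp(-math.exp(lam * t - beta / t))

def dF_dlam(t, lam, beta):
    # dF/dlam = t * exp(lam*t - beta/t) * [1 - F(t)]
    return t * math.exp(lam * t - beta / t) * (1.0 - cdf(t, lam, beta))

def dF_dbeta(t, lam, beta):
    # dF/dbeta = -(1/t) * exp(lam*t - beta/t) * [1 - F(t)]
    return -(1.0 / t) * math.exp(lam * t - beta / t) * (1.0 - cdf(t, lam, beta))

# central finite differences at illustrative parameter values
t, lam, beta, h = 1.5, 0.5, 0.8, 1e-6
fd_lam = (cdf(t, lam + h, beta) - cdf(t, lam - h, beta)) / (2.0 * h)
fd_beta = (cdf(t, lam, beta + h) - cdf(t, lam, beta - h)) / (2.0 * h)
```

Both finite differences match the closed-form partials, and $t\,\partial F/\partial \beta +(1/t)\,\partial F/\partial \lambda$ vanishes to machine precision.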
References
1. Aggarwala R (2001) Progressive interval censoring: Some mathematical results with applications to inference. Communications in Statistics - Theory and Methods 30(8-9): 1921-1935.
2. Cohen AC (1963) Progressively censored samples in life testing. Technometrics 5(3): 327-339.
3. Balakrishnan N, Aggarwala R (2000) Progressive Censoring: Theory, Methods, and Applications. Birkhäuser, Boston, MA.
4. Yuen HK, Tse SK (1996) Parameters estimation for Weibull distributed lifetimes under progressive censoring with random removals. Journal of Statistical Computation and Simulation 55(1-2): 57-71.
5. Yang C, Tse SK, Yuen HK (2000) Statistical analysis of Weibull distributed life time data under type II progressive censoring with binomial removals. Journal of Applied Statistics 27(8): 1033-1043.
6. Kendell PJ, Anderson RL (1971) An estimation problem in life testing. Technometrics 13(2): 289-301.
7. Ashour SK, Afify WM (2007) Statistical analysis of exponentiated Weibull family under type-I progressive interval censoring with random removals. Journal of Applied Sciences Research 3(12): 1851-1863.
8. Lin CT, Wu SJS, Balakrishnan N (2009) Planning life tests with progressively type-I interval censored data from the lognormal distribution. Journal of Statistical Planning and Inference 139(1): 54-61.
9. Ng HKT, Wang Z (2009) Statistical estimation for the parameters of Weibull distribution based on progressively type-I interval censored sample. Journal of Statistical Computation and Simulation 79(2): 145-159.
10. Chen DG, Lio YL (2010) Parameter estimations for generalized exponential distribution under progressive type-I interval censoring. Computational Statistics and Data Analysis 54(6): 1581-1591.
11. Green EJ, Roesch FA, Smith AFM, Strawderman WE (1994) Bayesian estimation for the three-parameter Weibull distribution with tree diameter data. Biometrics 50(1): 254-269.
12. Gupta A, Mukherjee B, Upadhyay SK (2008) A Bayes study using Markov Chain Monte Carlo simulation. Reliability Engineering & System Safety 93(10): 1434-1443.
13. Lin YJ, Lio YL (2012) Bayesian inference under progressive type-I interval censoring. Journal of Applied Statistics 39(8): 1811-1824.
14. Wu SJ, Chang CT (2002) Parameter estimations based on exponential progressive type II censored with binomial removals. International Journal of Information and Management Sciences 13(3): 37-46.
15. Kemp CD, Kemp W (1987) Rapid generation of frequency tables. Journal of the Royal Statistical Society 36(3): 277-282.
16. Efron B (1982) The Bootstrap and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics 38, SIAM, Philadelphia, USA.
17. Balakrishnan N, Sandhu RA (1995) A simple simulational algorithm for generating progressive type-II censored samples. The American Statistician 49(2): 229-230.
18. Gilks WR, Richardson S, Spiegelhalter DJ (1996) Markov Chain Monte Carlo in Practice. Chapman and Hall, London.
19. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equation of state calculations by fast computing machines. Journal of Chemical Physics 21: 1087-1091.
20. Hastings WK (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1): 97-109.

