Maximum Likelihood Estimation

The maximum likelihood estimation method is a rigorous statistical method of estimation. The word likelihood is closely related to the word probability: the likelihood function gives the probability (or probability density) of the observed sample, regarded as a function of the unknown parameter. The method of maximum likelihood is not restricted to a specific type of analysis like the least squares method; rather, its application is universal provided the probability distribution of the population is known. Using this method we obtain the estimate of the parameter that is most likely to have produced the observed sample (i.e., the value that maximizes the probability of the sample). The method of determining maximum likelihood estimates is briefly outlined in the following steps.

  1. Formulate the likelihood function (L). The likelihood function is the joint probability distribution of a sample of n values of the random variable.
  2. If the likelihood function (L) is in exponential form, it is much more convenient to work with its logarithm, i.e., to find \ln L.
  3. Maximize L or \ln L with respect to the parameter(s) whose estimate(s) are desired, using the techniques of differential calculus. A numerical sketch of these steps follows this list.
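
These three steps can also be carried out numerically. The following Python sketch (the simulated data, the known variance sigma2 = 4.0, and the function name neg_log_likelihood are illustrative assumptions, not part of the original text) maximizes the normal log-likelihood over \mu by minimizing its negative:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Illustrative sample assumed drawn from a normal population
    rng = np.random.default_rng(0)
    sigma2 = 4.0                                   # sigma^2 treated as known
    x = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=100)

    # Steps 1-2: the likelihood in logarithmic form (negated for minimization)
    def neg_log_likelihood(mu):
        n = len(x)
        return (n / 2) * np.log(2 * np.pi * sigma2) \
            + np.sum((x - mu) ** 2) / (2 * sigma2)

    # Step 3: maximize ln L, i.e., minimize -ln L, with respect to mu
    result = minimize_scalar(neg_log_likelihood)
    print(result.x)      # numerical maximum likelihood estimate of mu
    print(np.mean(x))    # matches the sample mean, as derived in the example below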

 

Example:

Find the maximum likelihood estimate of the parameter \mu in a normal population assuming {\sigma ^2} is known.

Solution:

Let the random sample X_1, X_2, X_3, \ldots, X_n be drawn from a normal population; each X_i is then normally distributed, i.e.,

\begin{gathered} P\left( X_1 \right) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_1 - \mu \right)^2 \right] \\ P\left( X_2 \right) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_2 - \mu \right)^2 \right] \\ \vdots \\ P\left( X_n \right) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_n - \mu \right)^2 \right] \end{gathered}

 

The likelihood function is the joint density of the sample, i.e., the product of all these density functions. Therefore,

\begin{gathered} L = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_1 - \mu \right)^2 \right] \times \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_2 - \mu \right)^2 \right] \times \cdots \times \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( X_n - \mu \right)^2 \right] \end{gathered}

or

L = \frac{1}{\left( 2\pi \sigma^2 \right)^{n/2}} \exp\left[ -\frac{1}{2\sigma^2} \sum \left( X_i - \mu \right)^2 \right]

To simplify the process of maximization, take the logarithm of both sides. Therefore,

\ln L = -\frac{n}{2} \ln\left( 2\pi \right) - \frac{n}{2} \ln\left( \sigma^2 \right) - \frac{1}{2\sigma^2} \sum \left( X_i - \mu \right)^2
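
As a quick check on this expression, the closed form can be compared with a direct sum of normal log-densities (a minimal Python sketch; the sample values and the trial value of \mu are illustrative assumptions):

    import numpy as np
    from scipy.stats import norm

    x = np.array([9.2, 10.5, 11.1, 9.8, 10.4])    # illustrative sample
    mu, sigma2 = 10.0, 4.0                         # trial mu; sigma^2 known

    # Closed-form ln L from the expression above
    n = len(x)
    lnL = -(n / 2) * np.log(2 * np.pi) - (n / 2) * np.log(sigma2) \
        - np.sum((x - mu) ** 2) / (2 * sigma2)

    # Direct sum of log-densities; agrees to floating-point precision
    lnL_direct = np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))
    print(lnL, lnL_direct)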

 

Now the necessary condition for maximization is that the first derivative with respect to \mu should be zero. Therefore,

\begin{gathered} \frac{\partial \ln L}{\partial \mu} = 0 - 0 - \frac{1}{2\sigma^2} \left( -2 \right) \sum \left( X_i - \mu \right) = 0 \\ \text{or} \quad \sum \left( X_i - \mu \right) = 0 \\ \text{or} \quad \sum X_i - \sum \mu = 0 \\ \text{or} \quad \sum X_i - n\mu = 0 \end{gathered}

 

Therefore,

\begin{gathered} n\widetilde{\mu} = \sum X_i \\ \widetilde{\mu} = \frac{\sum X_i}{n} = \overline{X} \end{gathered}

 

Hence \overline{X} (the sample mean) is the maximum likelihood estimator of the population mean \mu. Since \frac{\partial^2 \ln L}{\partial \mu^2} = -\frac{n}{\sigma^2} < 0, this stationary point is indeed a maximum.
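
One can also confirm numerically that the log-likelihood peaks at the sample mean. In the sketch below (the sample values and the offset 0.5 are illustrative assumptions), moving \mu away from \overline{X} in either direction lowers \ln L:

    import numpy as np

    x = np.array([9.2, 10.5, 11.1, 9.8, 10.4])    # illustrative sample
    sigma2 = 4.0                                   # sigma^2 treated as known

    def lnL(mu):
        # ln L, omitting the constant terms that do not involve mu
        return -np.sum((x - mu) ** 2) / (2 * sigma2)

    mu_hat = np.mean(x)
    print(lnL(mu_hat) > lnL(mu_hat + 0.5))   # True
    print(lnL(mu_hat) > lnL(mu_hat - 0.5))   # True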