Point Estimation
In the point estimation procedure we attempt to compute a numerical value from sample observations, which can be taken as an approximation to the parameter. The estimators are also referred to as statistics (plural of statistic); they are themselves random variables, since they are based on observations which are random variables. A number of estimation methods, such as the method of least squares, the method of maximum likelihood and the method of moments, are available, each with specific properties.
Method of Least Squares
The method of least squares is used in regression analysis to estimate the regression coefficients. To understand the technique of estimation, let us consider the following simple example. Please note that a formal treatment of the least squares method, which involves the inclusion of the disturbance term, has been avoided for simplicity.
Suppose that consumption expenditure $Y$ is linearly related to only one variable, family income $X$. This can be written mathematically as

$$Y = a + bX$$
In economics, this relation is known as a consumption function, where $a$ is a measure of the consumption expenditure at zero level of income and $b$ is a measure of the marginal propensity to consume, i.e., it gives a measure of how much will be consumed from each additional unit of income. The consumption function is in parametric form, specifying a different relationship for different values of the parameters ($a$ and $b$). The parameters ($a$, $b$) are not known and need to be estimated on the basis of a sample. A random sample of $n$ households is drawn from the population under study. The information about consumption and income is recorded as follows for each of these households.
Consumption Expenditure ($Y$)    Family Income ($X$)
$Y_1$                            $X_1$
$Y_2$                            $X_2$
...                              ...
$Y_n$                            $X_n$
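To make the setting concrete, here is a minimal Python sketch that generates such a sample of households. The parameter values ($a = 20$, $b = 0.8$), the sample size and the noise level are all illustrative assumptions, not taken from the text, and the random disturbance stands in for the term omitted from this simplified treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameter values and sample size.
a_true, b_true = 20.0, 0.8
n = 30

# Hypothetical family incomes, and the consumption expenditure each
# household generates, with a random disturbance added.
X = rng.uniform(100, 500, size=n)                     # family income
Y = a_true + b_true * X + rng.normal(0, 10, size=n)   # consumption expenditure
```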
On the basis of these sample observations we wish to estimate the consumption function. Let the estimating equation be

$$\hat{Y} = \hat{a} + \hat{b}X,$$

where $\hat{Y}$ ($Y$ hat), $\hat{a}$ ($a$ hat) and $\hat{b}$ ($b$ hat) are the estimates of $Y$, $a$ and $b$ respectively.
Since $\hat{Y}$ is an estimate of $Y$, it would be very fortunate if $\hat{Y}$ turned out to be exactly equal to $Y$; in general the two will differ. The difference between the observed value and the estimated value, $e_i = Y_i - \hat{Y}_i$, is usually termed the "residual", "deviation" or "error term". A residual may be positive or negative.
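Continuing the sketch above with an arbitrary trial line (the values 18 and 0.85 are purely illustrative), each household's residual is simply the gap between its observed and estimated consumption:

```python
a_trial, b_trial = 18.0, 0.85     # arbitrary trial estimates (illustrative)
Y_est = a_trial + b_trial * X     # estimated consumption for each household
e = Y - Y_est                     # residuals; each may be positive or negative
```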
The smaller the residuals are, the closer the estimating equation is to the original model $Y = a + bX$. Hence, to obtain a closer estimating equation for $Y$ we should make the residuals small. The residuals are minimized according to the following principle:
"Those values of $\hat{a}$ and $\hat{b}$ should be chosen which minimize the sum of squared residuals." This principle is known as the "principle of least squares".
Thus, the sum of the squared residuals may be written as

$$S = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(Y_i - \hat{a} - \hat{b}X_i\right)^2.$$
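In the running sketch, this quantity is a single line:

```python
S = (e ** 2).sum()   # sum of squared residuals for the trial line
```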
In order to minimize the quantity $S$, we use the technique of differential calculus. Differentiating $S$ partially with respect to $\hat{a}$ and $\hat{b}$ and equating the resulting derivatives to zero gives

$$\frac{\partial S}{\partial \hat{a}} = -2\sum_{i=1}^{n}\left(Y_i - \hat{a} - \hat{b}X_i\right) = 0$$

$$\frac{\partial S}{\partial \hat{b}} = -2\sum_{i=1}^{n}X_i\left(Y_i - \hat{a} - \hat{b}X_i\right) = 0$$
Simplifying the above equations, we have

$$\sum_{i=1}^{n} Y_i = n\hat{a} + \hat{b}\sum_{i=1}^{n} X_i$$

$$\sum_{i=1}^{n} X_iY_i = \hat{a}\sum_{i=1}^{n} X_i + \hat{b}\sum_{i=1}^{n} X_i^2$$
These two equations are called the "normal equations". If we substitute the values $\sum X_i$, $\sum Y_i$, $\sum X_iY_i$, $\sum X_i^2$ and $n$ computed from our sample observations, the two estimates $\hat{a}$ and $\hat{b}$ of the unknown parameters $a$ and $b$ can be determined by solving the simultaneous equations.
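Solving the two normal equations simultaneously yields the standard closed-form expressions:

$$\hat{b} = \frac{n\sum X_iY_i - \left(\sum X_i\right)\left(\sum Y_i\right)}{n\sum X_i^2 - \left(\sum X_i\right)^2}, \qquad \hat{a} = \bar{Y} - \hat{b}\bar{X},$$

where $\bar{X}$ and $\bar{Y}$ are the sample means. Continuing the running Python sketch, the normal equations can be solved directly from the sample sums (all names come from the earlier sketches, which were illustrative assumptions):

```python
# Sample sums that enter the normal equations.
Sx, Sy = X.sum(), Y.sum()
Sxy, Sxx = (X * Y).sum(), (X ** 2).sum()

# Solve the 2x2 system of normal equations:
#   Sy  = n*a_hat + b_hat*Sx
#   Sxy = a_hat*Sx + b_hat*Sxx
b_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
a_hat = (Sy - b_hat * Sx) / n

print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```

With the simulated data above, the printed estimates should fall close to the assumed values $a = 20$ and $b = 0.8$, differing only because of the random disturbances.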