UNIT 1
Introduction to Econometrics
It may be pointed out that the econometric methods can be used in other areas like engineering sciences,
biological sciences, medical sciences, geosciences, agricultural sciences etc. In simple words, whenever there
is a need of finding the stochastic relationship in mathematical format, the econometric methods and tools
help. The econometric tools are helpful in explaining the relationships among variables.
Econometric Models:
A model is a simplified representation of a real-world process. It should be representative in the sense that it
should contain the salient features of the phenomena under study. In general, one of the objectives in
modeling is to have a simple model to explain a complex phenomenon. Such an objective may sometimes
lead to oversimplified model and sometimes the assumptions made are unrealistic. In practice, generally, all
the variables which the experimenter thinks are relevant to explain the phenomenon are included in the
model. Rest of the variables are dumped in a basket called “disturbances” where the disturbances are random
variables. This is the main difference between economic modeling and econometric modeling. This is also
the main difference between mathematical modeling and statistical modeling. The mathematical modeling is
exact in nature, whereas the statistical modeling contains a stochastic term also.
An economic model is a set of assumptions that describes the behaviour of an economy, or more generally, a
phenomenon.
An econometric model consists of
- a set of equations describing the behaviour. These equations are derived from the economic model
and have two parts – observed variables and disturbances.
- a statement about the errors in the observed values of variables.
- a specification of the probability distribution of disturbances.
Aims of econometrics:
The three main aims of econometrics are as follows:
3. Use of models:
The obtained models are used for forecasting and policy formulation, which is an essential part in any policy
decision. Such forecasts help the policymakers to judge the goodness of the fitted model and take necessary
measures in order to re-adjust the relevant economic variables.
Econometrics uses statistical methods after adapting them to the problems of economic life. These adapted
statistical methods are usually termed econometric methods. Such methods are adjusted so that they
become appropriate for the measurement of stochastic relationships. These adjustments basically attempt to
specify the stochastic element which operates in real-world data and enters into the determination of
observed data. This enables the data to be called a random sample, which is needed for the application of
statistical tools.
The theoretical econometrics includes the development of appropriate methods for the measurement of
economic relationships which are not meant for controlled experiments conducted inside the laboratories.
The econometric methods are generally developed for the analysis of non-experimental data.
The applied econometrics includes the application of econometric methods to specific branches of
economic theory and problems like demand, supply, production, investment, consumption etc. The applied
econometrics involves the application of the tools of econometric theory for the analysis of the economic
phenomenon and forecasting economic behavior.
Types of data
Various types of data are used in the estimation of the model.
1. Time series data
Time series data give information about the numerical values of variables from period to period and are
collected over time. For example, data on monthly income during the years 1990-2010 constitute a time
series.
2. Cross-section data
The cross-section data give information on the variables concerning individual agents (e.g., consumers or
producers) at a given point of time. For example, a cross-section of a sample of consumers is a sample of
family budgets showing expenditures on various commodities by each family, as well as information on
family income, family composition and other demographic, social or financial characteristics.
3. Panel data:
The panel data are the data from a repeated survey of a single (cross-section) sample in different periods of
time.
Aggregation problem:
The aggregation problems arise when aggregative variables are used in functions. Such aggregative variables
may involve:
1. Aggregation over individuals:
For example, the total income may comprise the sum of individual incomes.
4. Spatial aggregation:
Sometimes the aggregation is related to spatial issues. For example, the population of towns, countries, or the
production in a city or region etc.
Such sources of aggregation introduce “aggregation bias” in the estimates of the coefficients. It is important
to examine the possibility of such errors before estimating the model.
Econometrics and regression analysis:
One of the very important roles of econometrics is to provide the tools for modeling on the basis of given
data. The regression modeling technique helps a lot in this task. The regression models can be either linear or
non-linear based on which we have linear regression analysis and non-linear regression analysis. We will
consider only the tools of linear regression analysis and our main interest will be the fitting of the linear
regression model to a given set of data.
Suppose the output of a process is denoted by a random variable y, called the dependent (or study)
variable, which depends on k independent (or explanatory) variables denoted by X1, X2, ..., Xk. The
behaviour of y can be explained by a relationship of the form
y = f(X1, X2, ..., Xk, β1, β2, ..., βk) + ε
where f describes the relationship between y and X1, X2, ..., Xk, and the random error ε indicates that such
a relationship is not exact in nature. When ε = 0, the relationship is called a mathematical model, otherwise a
statistical model. The term "model" is broadly used to represent any phenomenon in a mathematical
framework.
A model or relationship is termed linear if it is linear in parameters, and non-linear if it is not linear in
parameters. In other words, if all the partial derivatives of y with respect to each of the parameters
β1, β2, ..., βk are independent of the parameters, then the model is called a linear model. If any of the
partial derivatives of y with respect to any of β1, β2, ..., βk is not independent of the parameters, the
model is called non-linear. Note that the linearity or non-linearity of the model is not described by the
linearity or non-linearity of the explanatory variables in the model.
For example,
y = β1X1 + β2X2² + β3 log X3
is a linear model because ∂y/∂βi, (i = 1, 2, 3) are independent of the parameters βi, (i = 1, 2, 3). On the other
hand,
y = β1²X1 + β2X2 + β3 log X3
is a non-linear model because ∂y/∂β1 = 2β1X1 depends on β1, although ∂y/∂β2 and ∂y/∂β3 are independent
of any of the β1, β2 or β3.
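This linearity check can be automated with symbolic differentiation. The following is a minimal sketch (an illustration added for clarity, not part of the original notes) using Python's SymPy library to verify that the two example models above are linear and non-linear in the parameters, respectively:

```python
# Sketch: classify a model as linear/non-linear in its parameters by checking
# whether every partial derivative with respect to a parameter is free of the
# parameters (symbol names are our own choice for this illustration).
import sympy as sp

b1, b2, b3, X1, X2, X3 = sp.symbols('beta1 beta2 beta3 X1 X2 X3', positive=True)

linear_model = b1*X1 + b2*X2**2 + b3*sp.log(X3)     # linear in beta1..beta3
nonlinear_model = b1**2*X1 + b2*X2 + b3*sp.log(X3)  # beta1 enters as beta1**2

for model in (linear_model, nonlinear_model):
    derivs = [sp.diff(model, b) for b in (b1, b2, b3)]
    # Linear iff no derivative still contains any parameter.
    is_linear = not any(d.has(b1, b2, b3) for d in derivs)
    print(model, '->', 'linear' if is_linear else 'non-linear')
```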
When the function f is linear in parameters, then y = f(X1, X2, ..., Xk, β1, β2, ..., βk) + ε is called a linear
model and when the function f is non-linear in parameters, then it is called a non-linear model. In general,
the function f is chosen as
f(X1, X2, ..., Xk, β1, β2, ..., βk) = β1X1 + β2X2 + ... + βkXk
to describe a linear model, in which the forms of the explanatory variables are known. Thus the knowledge
of the model depends on the knowledge of the parameters β1, β2, ..., βk.
The statistical linear modeling essentially consists of developing approaches and tools to determine
β1, β2, ..., βk in the linear model
y = β1X1 + β2X2 + ... + βkXk + ε.
Different statistical estimation procedures, e.g., method of maximum likelihood, the principle of least
squares, method of moments etc. can be employed to estimate the parameters of the model. The method of
maximum likelihood needs further knowledge of the distribution of y whereas the method of moments and
the principle of least squares do not need any knowledge about the distribution of y .
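As a small illustration of the principle of least squares (a sketch with simulated data and assumed values, not from the original text): for the linear model y = Xβ + ε, the least squares estimate solves the normal equations X'Xβ = X'y, and no distributional assumption on y is needed.

```python
# Sketch: least squares estimation of beta in y = X beta + eps via the
# normal equations (X'X) beta_hat = X'y.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.normal(size=(n, k))                    # explanatory variables
beta_true = np.array([1.5, -2.0, 0.5])         # unknown in practice
y = X @ beta_true + rng.normal(scale=0.1, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # least squares estimate
print(beta_hat)                                # close to beta_true
```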
The regression analysis is a tool to determine the values of the parameters given the data on y and
X1, X2, ..., Xk. The literal meaning of regression is "to move in the backward direction". Before discussing
and understanding the meaning of "backward direction", let us find which of the following statements is
correct:
S1: the model generates the data, or
S2: the data generate the model.
Obviously, S1 is correct. It can be broadly thought that the model exists in nature but is unknown to the
experimenter. When some values to the explanatory variables are provided, then the values for the output or
study variable are generated accordingly, depending on the form of the function f and the nature of the
phenomenon. So ideally, the pre-existing model gives rise to the data. Our objective is to determine the
functional form of this model. Now we move in the backward direction. We propose to first collect the data
on study and explanatory variables. Then we employ some statistical techniques and use this data to know
the form of function f . Equivalently, the data from the model is recorded first and then used to determine
the parameters of the model. The regression analysis is a technique which helps in determining the statistical
model by using the data on study and explanatory variables. The classification of linear and non-linear
regression analysis is based on the determination of linear and non-linear models, respectively.
Consider a simple example to understand the meaning of "regression". Suppose the yield of the crop (y)
depends linearly on two explanatory variables, viz., the quantity of fertilizer (X1) and the level of irrigation
(X2), as
y = β1X1 + β2X2.
The true values of β1 and β2 exist in nature but are unknown to the experimenter. Some values on y
are recorded by providing different values of X1 and X2. There exists some relationship between y and
X1, X2 which gives rise to systematically behaved data on y, X1 and X2. Such a relationship is unknown
to the experimenter. To determine the model, we move in the backward direction in the sense that the
collected data are used to determine the unknown parameters β1 and β2 of the model. In this sense, such an
approach is termed regression analysis.
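The backward direction can be illustrated with a small simulation (an illustrative sketch with assumed values, not from the original text): nature's model with known β1, β2 generates the data, and regression then recovers these parameters from the data alone.

```python
# Sketch: the model y = beta1*X1 + beta2*X2 generates data (forward), and
# least squares recovers beta1, beta2 from the data (backward).
import numpy as np

rng = np.random.default_rng(42)
beta1, beta2 = 2.0, 3.0                      # "true" values, assumed here
X1 = rng.uniform(0, 10, 50)                  # fertilizer quantity
X2 = rng.uniform(0, 5, 50)                   # irrigation level
y = beta1*X1 + beta2*X2 + rng.normal(scale=0.2, size=50)

X = np.column_stack([X1, X2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)    # move backward: data -> parameters
print(b)                                     # approximately (2.0, 3.0)
```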
The theory and fundamentals of linear models lay the foundation for developing the tools for regression
analysis that are based on valid statistical theory and concepts.
Generally, the data are collected on n subjects; then y denotes the response or study variable and
y1, y2, ..., yn are its n values. If there are k explanatory variables X1, X2, ..., Xk, then xij denotes the ith
value of the jth variable, i = 1, 2, ..., n; j = 1, 2, ..., k. The observations can be presented in the following table:

Notation for the data used in regression analysis

Observation number    Response y    Explanatory variables X1, X2, ..., Xk
1                     y1            x11, x12, ..., x1k
2                     y2            x21, x22, ..., x2k
...                   ...           ...
n                     yn            xn1, xn2, ..., xnk
4. Specification of model:
The experimenter or the person working in the subject usually helps in determining the form of the model.
Only the form of the tentative model can be ascertained, and it will depend on some unknown parameters.
For example, a general form will be like
y = f(X1, X2, ..., Xk; β1, β2, ..., βk) + ε
where ε is the random error reflecting mainly the difference in the observed value of y and the value of y
obtained through the model. The form of f(X1, X2, ..., Xk; β1, β2, ..., βk) can be linear as well as non-linear
depending on the form of the parameters β1, β2, ..., βk. A model is said to be linear if it is linear in parameters.
For example,
y = β1X1 + β2X1² + β3X2
y = β1 + β2 ln X2
are linear models, whereas
y = β1X1 + β2²X2 + β3X2
y = ln(β1X1 + β2X2)
are non-linear models. Many times, the non-linear models can be converted into linear models through some
transformations. So the class of linear models is wider than what it appears initially.
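For instance (an illustrative sketch with assumed parameter values, not from the text): the non-linear model y = β1 X^β2 becomes linear in the parameters after taking logarithms, ln y = ln β1 + β2 ln X, and can then be fitted by ordinary linear tools.

```python
# Sketch: linearizing y = beta1 * X**beta2 by the log transform
# ln y = ln beta1 + beta2 * ln X, then fitting a straight line.
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2 = 2.0, 0.7                          # assumed true values
X = rng.uniform(1.0, 10.0, 200)
y = beta1 * X**beta2 * np.exp(rng.normal(scale=0.05, size=200))

slope, intercept = np.polyfit(np.log(X), np.log(y), deg=1)
print(np.exp(intercept), slope)                  # approximately (beta1, beta2)
```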
If a model contains only one explanatory variable, then it is called a simple regression model. When there
is more than one explanatory variable, then it is called a multiple regression model. When there is only
one study variable, the regression is termed univariate regression. When there are more than one study
variables, the regression is termed multivariate regression. Note that simple and multiple regressions
are not the same as univariate and multivariate regressions. Simple and multiple regression are determined
by the number of explanatory variables, whereas univariate and multivariate regressions are determined by
the number of study variables.
6. Fitting of model:
The estimation of unknown parameters using an appropriate method provides the values of the parameters.
Substituting these values in the equation gives us a usable model. This is termed model fitting. The
estimates of the parameters β1, β2, ..., βk in the model
y = f(X1, X2, ..., Xk, β1, β2, ..., βk) + ε
are denoted by β̂1, β̂2, ..., β̂k, which gives the fitted model as
ŷ = f(X1, X2, ..., Xk, β̂1, β̂2, ..., β̂k).
When the value of y is obtained for the given values of X1, X2, ..., Xk, it is denoted as ŷ and called the fitted
value.
The fitted equation is used for prediction. In this case, ŷ is termed the predicted value. Note that the
fitted value is the one obtained when the values used for the explanatory variables correspond to one of the n
observations in the data, whereas the predicted value is the one obtained for any set of values of the
explanatory variables. It is not generally recommended to predict the y-values for values of the explanatory
variables which lie outside the range of the data. When the values of the explanatory variables are future
values of the explanatory variables, the predicted values are called forecasted values.
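The distinction can be made concrete with a tiny sketch (assumed coefficient and data values, not from the text):

```python
# Sketch: fitted values are y_hat at the observed x's; a predicted value is
# y_hat at any chosen x (here one inside the data range).
import numpy as np

x_observed = np.array([1.0, 2.0, 3.0, 4.0])  # the n observations
b0_hat, b1_hat = 0.5, 2.0                    # assumed estimated coefficients

fitted = b0_hat + b1_hat * x_observed        # fitted values
predicted = b0_hat + b1_hat * 2.5            # predicted value at a new x
print(fitted, predicted)
```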
7. Model criticism and selection
The validity of the statistical method to be used for regression analysis depends on various assumptions.
These assumptions become the assumptions for the model and the data essentially. The quality of statistical
inferences heavily depends on whether these assumptions are satisfied or not. For these assumptions to be
valid and satisfied, care is needed from the beginning of the experiment. One has to be careful in choosing
the required assumptions and in deciding whether the assumptions are valid for the given experimental
conditions. It is also important to identify the situations in which the assumptions may not be met.
The validation of the assumptions must be made before drawing any statistical conclusion. Any departure
from the validity of assumptions will be reflected in the statistical inferences. In fact, the regression analysis
is an iterative process where the outputs are used to diagnose, validate, criticize and modify the inputs. The
iterative process is illustrated in the following figure.
(Figure: the iterative process, with inputs and outputs connected in a feedback loop.)
Simple Hypothesis: When a hypothesis specifies all the parameters of a probability distribution, it is known
as a simple hypothesis.
Composite Hypothesis: When a hypothesis specifies only some of the parameters of a probability
distribution, it is known as a composite hypothesis.
1. Statement of Null Hypothesis:
Null Hypothesis - Ho:
For applying a test of significance, we first set up a hypothesis: a definite statement about the population
parameters. Such a hypothesis, which is usually a hypothesis of no difference, is called the null hypothesis
and is denoted by Ho.
Example: Ho - There is no significant difference between the two sample means.
Alternative Hypothesis - H1:
Any hypothesis which is complementary to the null hypothesis is called an alternative hypothesis,
usually denoted by H1.
4. Data collection:
Data: The information collected through censuses and surveys, in a routine manner, or from other sources is
called raw data.
There are two types of data:
1. Primary data
2. Secondary data
5. Estimation of the Econometric Model:
Here, we quantify β1 and β2, i.e., we obtain numerical estimates. This is done by a statistical technique called
regression analysis.
Example:
Y=12.50+0.6X+u
6. Testing of Hypothesis:
Once the hypothesis is formulated we have to make a decision on it. A statistical
procedure by which we decide to accept or reject a statistical hypothesis is called testing of
hypothesis.
7. Forecasting and Prediction:
If the hypothesis testing was positive, i.e., the theory was concluded to be correct, we forecast the values of
the wage by predicting the values of education.
Example:
Y = 12.50 + 0.6X
If X = 10, then Y = 12.50 + 0.6(10) = 18.50; this is forecasting.
If Y = 20, then X = (20 - 12.50)/0.6 = 12.5; this is prediction.
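A quick check of the arithmetic above, using the example model from the notes:

```python
# Worked check of the example model Y = 12.50 + 0.6*X from the notes.
b0, b1 = 12.50, 0.6

X = 10
print(b0 + b1 * X)      # 18.50: forecasted Y when X = 10

Y = 20
print((Y - b0) / b1)    # 12.5: X implied by Y = 20
```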
STOCHASTIC TERM:-
A stochastic term refers to situations or models containing a random element, hence unpredictable and
without a stable pattern of order. All natural events are stochastic phenomena.
Reason to incorporate the stochastic term
(1) Omission or ignorance of variables from the model -
In reality, it is not only the N application that determines the crop yield. The yield of the crop on the farm is
determined by many other factors, such as the level of other nutrients, soil, moisture, weather, insects and
pests, and agronomic practices. All these factors are ignored in the model. The disturbance term Ui is used to
account for these variables.
(2) Non-availability of data in statistical form-
In reality, sometimes we may not have quantitative or statistical information about these variables, so we add
a disturbance term to the model.
Ex. - It is difficult to have a composite measure of the temperature and humidity which existed at each stage
of crop growth, from sowing till harvesting.
(3) Joint influence-
Some factors, when taken individually, have a very small influence on the dependent variable. Thus, their
influence cannot be measured in a reliable way. But it is quite possible that the joint influence of all such
variables taken together may affect the dependent variable. So a disturbance term is used.
3. Efficient estimator - An estimator is efficient when it has both of the above properties: unbiasedness and
low variance as compared to other unbiased estimators.
7. Sufficient Estimator -
A sufficient estimator is an estimator which utilizes all the information a sample contains about the true
parameter. It must use all the observations of the sample in such a way that no other estimator can add any
further information about the population parameter which is being estimated.
Consider the simple linear regression model
y = β0 + β1X + ε
where y is termed as the dependent or study variable and X is termed as the independent or explanatory
variable. The terms β0 and β1 are the parameters of the model. The parameter β0 is termed as an intercept
term, and the parameter β1 is termed as the slope parameter. These parameters are usually called as
regression coefficients. The unobservable error component ε accounts for the failure of data to lie on the
straight line and represents the difference between the true and observed realization of y . There can be
several reasons for such difference, e.g., the effect of all deleted variables in the model, variables may be
qualitative, inherent randomness in the observations etc. We assume that ε is observed as an independent and
identically distributed random variable with mean zero and constant variance σ². Later, we will additionally
assume that ε is normally distributed.
The independent variable is viewed as controlled by the experimenter, so it is considered non-stochastic,
whereas y is viewed as a random variable with
E(y) = β0 + β1X
and
Var(y) = σ².
Assuming that a set of n paired observations (xi, yi), i = 1, 2, ..., n is available which satisfies the linear
regression model y = β0 + β1X + ε, we can write the model for each observation as
yi = β0 + β1xi + εi, (i = 1, 2, ..., n).
The direct regression approach minimizes the sum of squares
S(β0, β1) = Σi εi² = Σi (yi - β0 - β1xi)²
with respect to β0 and β1. Differentiating S(β0, β1) with respect to β0 and β1 and equating the derivatives to
zero gives two normal equations. The solutions of these two equations are called the direct regression
estimators, or usually called the ordinary least squares (OLS) estimators, of β0 and β1.
This gives the ordinary least squares estimates b0 of β0 and b1 of β1 as
b1 = sxy / sxx, b0 = ȳ - b1x̄,
where sxy = Σi (xi - x̄)(yi - ȳ), sxx = Σi (xi - x̄)², x̄ = (1/n) Σi xi and ȳ = (1/n) Σi yi.
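A minimal sketch of these estimators in code (simulated data with assumed true values, for illustration only):

```python
# Sketch: direct regression (OLS) estimates for y = beta0 + beta1*x + eps,
# using b1 = s_xy / s_xx and b0 = ybar - b1*xbar.
import numpy as np

def direct_regression(x, y):
    xbar, ybar = x.mean(), y.mean()
    s_xy = np.sum((x - xbar) * (y - ybar))
    s_xx = np.sum((x - xbar) ** 2)
    b1 = s_xy / s_xx
    b0 = ybar - b1 * xbar
    return b0, b1

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 50)
y = 3.0 + 1.2 * x + rng.normal(scale=0.5, size=50)   # assumed true (3.0, 1.2)
print(direct_regression(x, y))                       # approximately (3.0, 1.2)
```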
Multiple Linear Regression Model
Linear regression is a linear approach to modelling the relationship between a scalar response (or dependent
variable) and one or more explanatory variables (or independent variables). The case of one explanatory
variable is called simple linear regression. For more than one explanatory variable, the process is called
multiple linear regression. This term is distinct from multivariate linear regression, where multiple
correlated dependent variables are predicted, rather than a single scalar variable.
Multiple linear regression (MLR), also known simply as multiple regression, is a statistical technique that
uses several explanatory variables to predict the outcome of a response variable. The goal of multiple linear
regression (MLR) is to model the linear relationship between the explanatory (independent) variables and
response (dependent) variable. In essence, multiple regression is the extension of ordinary least-squares
(OLS) regression that involves more than one explanatory variable.
Multiple linear regression is based on the following assumptions:
- There is a linear relationship between the dependent variable and the independent variables.
- The independent variables are not too highly correlated with each other.
- The yi observations are selected independently and randomly from the population.
- Residuals should be normally distributed with mean 0 and constant variance σ².
The coefficient of determination (R-squared) is a statistical metric that is used to measure how much of the
variation in the outcome can be explained by the variation in the independent variables. R2 never decreases,
and typically increases, as more predictors are added to the MLR model, even though the predictors may not
be related to the outcome variable.
R2 by itself thus cannot be used to identify which predictors should be included in a model and which should
be excluded. R2 can only be between 0 and 1, where 0 indicates that the outcome cannot be predicted by any
of the independent variables and 1 indicates that the outcome can be predicted without error from the
independent variables.
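The following sketch (illustrative, with simulated data) demonstrates both points: computing R2 as 1 - SSres/SStot from a fitted model, and the fact that R2 does not decrease when an unrelated predictor is added:

```python
# Sketch: R^2 = 1 - SS_res / SS_tot, and adding an unrelated predictor
# cannot lower it.
import numpy as np

def r_squared(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)     # OLS fit
    resid = y - Xd @ beta
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
y = 2.0 + 0.8 * x1 + rng.normal(scale=0.3, size=100)
noise = rng.normal(size=100)                          # unrelated predictor

print(r_squared(x1[:, None], y))                      # R^2 with one predictor
print(r_squared(np.column_stack([x1, noise]), y))     # at least as large
```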
When interpreting the results of a multiple regression, beta coefficients are valid while holding all other
variables constant ("all else equal"). The output from a multiple regression can be displayed horizontally as
an equation, or vertically in table form.