Linear Algebra For Economists (3e)


Chapter 3
Linear Algebra for Economists
Special Determinants & Matrices in Economics

Main reference: Dowling, E.T. (1980), Mathematics for Economists (Schaum's Outline Series), McGraw-Hill.

Outline:
3.1 Introduction
3.2 The Jacobian Determinant (|J|)
3.3 The Hessian Determinant (|H|)
3.4 Eigenvectors (X) and Eigenvalues (λ)
    • Characteristic roots: λ
    • Characteristic equation: (A − λI)X = 0
3.5 Quadratic Forms: Q = X^T A X

|J|: a test for dependence among multivariable functions.
|H|: a test for the sufficient condition for a multivariable function z = f(x, y, …) to be at an optimum.
Linear Algebra,Unit 3, Fikadu.A@AAU, 2024 1
Introduction
• In Chapter 1, we showed how to test for linear dependence through the use of a simple determinant.
• In contrast, the Jacobian determinant permits testing for functional dependence, both linear and nonlinear.
• |H| is used to check whether the optimization conditions are met. It is composed of all the second-order partial derivatives of a system of equations.
• Eigenvalues and eigenvectors are an alternative way to solve a homogeneous system. In addition, they are used in testing second-order conditions, and they can also test the definiteness of quadratic forms.
• The chapter concludes by introducing the concept of quadratic forms.
The Jacobian Determinants: |J|
• The term is often used in the literature interchangeably with the Jacobian matrix.
• The Jacobian matrix can be applied to test whether functional (linear or nonlinear) dependence exists among a set of n functions in n variables.
• A Jacobian determinant |J| is composed of all the first-order partial derivatives of a system of equations, arranged in ordered sequence.
• More precisely, if we have n functions of n variables, gᵢ = gᵢ(x1, …, xn), i = 1, 2, …, n, then the determinant of the Jacobian matrix of g1, …, gn with respect to x1, …, xn will be identically zero for all values of x1, …, xn if and only if the n functions g1, …, gn are functionally (linearly or nonlinearly) dependent.
The Jacobian Determinants: |J|
Given y1 = f1(x1, x2, x3), y2 = f2(x1, x2, x3), and y3 = f3(x1, x2, x3), then

|J| = | ∂y1/∂x1  ∂y1/∂x2  ∂y1/∂x3 |
      | ∂y2/∂x1  ∂y2/∂x2  ∂y2/∂x3 |
      | ∂y3/∂x1  ∂y3/∂x2  ∂y3/∂x3 |

• Notice that the elements of each row are the partial derivatives of one function yᵢ with respect to each of the independent variables x1, x2, x3, and the elements of each column are the partial derivatives of each of the functions y1, y2, y3 with respect to one of the independent variables xⱼ.
• If |J| = 0, the equations are functionally dependent; if |J| ≠ 0, the equations are functionally independent.
The Jacobian Determinants: Example
• Use of the Jacobian to test for functional dependence is demonstrated below, given
y1 = 5x1 + 3x2
y2 = 25x1² + 30x1x2 + 9x2²
1) First, take the first-order partials:
∂y1/∂x1 = 5,  ∂y1/∂x2 = 3,  ∂y2/∂x1 = 50x1 + 30x2,  ∂y2/∂x2 = 30x1 + 18x2
2) Then set up the Jacobian:
|J| = | 5             3           |
      | 50x1 + 30x2   30x1 + 18x2 |
3) And evaluate:
|J| = 5(30x1 + 18x2) − 3(50x1 + 30x2) = 150x1 + 90x2 − 150x1 − 90x2 = 0
Since |J| = 0, there is functional dependence between the equations. In this, the simplest of cases, y2 = (y1)².
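The test can also be illustrated numerically (a sketch, not part of the original text; it assumes the dependent pair y1 = 5x1 + 3x2 and y2 = (y1)², and approximates the partials by central differences):

```python
# Numerically test functional dependence via the Jacobian determinant.
# Since y2 = (y1)^2, the pair is functionally dependent and |J| should vanish everywhere.

def y1(x1, x2):
    return 5 * x1 + 3 * x2

def y2(x1, x2):
    return (5 * x1 + 3 * x2) ** 2

def partial(f, i, x1, x2, h=1e-6):
    """Central-difference partial derivative of f with respect to argument i."""
    if i == 0:
        return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    return (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)

def jacobian_det(x1, x2):
    # |J| = (dy1/dx1)(dy2/dx2) - (dy1/dx2)(dy2/dx1)
    return (partial(y1, 0, x1, x2) * partial(y2, 1, x1, x2)
            - partial(y1, 1, x1, x2) * partial(y2, 0, x1, x2))

for point in [(1.0, 2.0), (-3.0, 0.5), (0.7, -1.9)]:
    print(round(jacobian_det(*point), 4))  # ~0 at every point
```

A nonzero |J| at even one point would rule out functional dependence; vanishing at a few sample points is only consistent with (not proof of) dependence.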


Exercise 3.1
• Use the Jacobian to test for functional dependence in the
following system of equations:



Hessian Determinants: Introduction
• Very often in economics, we are concerned with determining whether a function has a maximum or minimum value.
• In particular, we may be interested in knowing the optimal points of:
  • utility functions
  • production functions
  • cost functions
  • profit functions
• Such a function may involve a single variable or several variables, depending on the specific objective to be achieved.
• The key idea is to identify what we call extreme points: the points at which the objective may be achieved.
The Hessian Determinants: Introduction (cont’d..)
• Suppose we are given a function of a single variable, y = f(x), which is continuous and differentiable.
  • It is said to have a maximum value at a point where it changes from an increasing to a decreasing function,
  • whereas it is said to have a minimum value at a point where it changes from a decreasing to an increasing function.
  • Such minima/maxima are called the extreme points of the function.
• The values of x at which the function is at its minimum or maximum are known as critical values.
• The function should satisfy two conditions in order to decide about the maximum and minimum value at a particular point. These conditions are called order conditions:
  • First-order condition (first derivative): necessary but not sufficient.
  • Second-order condition (second derivative): sufficient for the existence of extreme values.
The Hessian Determinants: Introduction (cont’d..)
[Figure: a hill-shaped curve with a maximum (e.g. a production or utility function) and a U-shaped curve with a minimum (e.g. a cost function).]
Problem: the slope of a function is zero both at a maximum and at a minimum, so the zero-derivative condition does not help us here. How do we find which is which?
• Use the second-order condition.
The Hessian Determinants: Introduction (cont’d..)
[Figure: the same two curves, with slope dy/dx = 0 at each extreme point.]
Second-order condition for a maximum: d²y/dx² < 0. The function is first increasing, then decreasing, so its slope is decreasing: there is a maximum when the second derivative is negative.
Second-order condition for a minimum: d²y/dx² > 0. The function is first decreasing, then increasing, so its slope is increasing: there is a minimum when the second derivative is positive.
Note: For a function of 2 variables, the extreme points occur where the slope is zero both in the x-direction and the y-direction. In other words, both partial derivatives must be equal to zero.

z = f(x, y) = 2x² + 3x + y² + 4xy + 4
∂z/∂x = fx(x, y) = 4x + 3 + 4y
∂z/∂y = fy(x, y) = 2y + 4x
Setting both to zero,
4x + 3 + 4y = 0
2y + 4x = 0
gives x = 3/4, y = −3/2.
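The two first-order conditions form a linear system in x and y, which Cramer's rule solves directly (a minimal sketch of the computation above):

```python
# Solve the first-order conditions of z = 2x^2 + 3x + y^2 + 4xy + 4:
#   4x + 4y = -3   (from dz/dx = 4x + 3 + 4y = 0)
#   4x + 2y =  0   (from dz/dy = 2y + 4x = 0)

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

D = det2(4, 4, 4, 2)           # coefficient determinant
x = det2(-3, 4, 0, 2) / D      # replace the first column with the constants
y = det2(4, -3, 4, 0) / D      # replace the second column with the constants
print(x, y)  # 0.75 -1.5
```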
The Hessian Determinants: Introduction (cont’d..)
• In order to find the extreme points of a function of several variables, we first take the partial derivatives and then set them to zero.
• The problem is that things become a bit more complicated for functions of several variables.
• The general idea is still to look at the sign of the second derivative.
• Only this time, because there are several partial derivatives, there is not a single 'second derivative'.
• This leads to the concept of what is called the Hessian matrix.


The Hessian Determinants: Introduction (cont’d..)

• In general, for instance with a function of two variables z = f(x, y), this matrix is given by:

H = | ∂²z/∂x²    ∂²z/∂x∂y |
    | ∂²z/∂y∂x   ∂²z/∂y²  |

• ∂²z/∂x² means "take the first partial derivative with respect to x, and differentiate with respect to x again".
• ∂²z/∂x∂y means "take the first partial derivative with respect to x, and differentiate with respect to y".
The Hessian Determinants: Introduction (cont’d..)
• To understand, let's use an example:
z = 2x² + 3x + y² + 2xy + 4
∂z/∂x = 4x + 3 + 2y,  ∂z/∂y = 2y + 2x
• So:
∂²z/∂x² = 4,  ∂²z/∂y² = 2,  ∂²z/∂x∂y = ∂²z/∂y∂x = 2

H = | ∂²z/∂x²    ∂²z/∂x∂y |  =  | 4  2 |
    | ∂²z/∂y∂x   ∂²z/∂y²  |     | 2  2 |

• The question is: what condition do we need for identifying a maximum and a minimum?
The Hessian Determinants: Introduction (cont’d..)
• In our example, the matrix is given by:

H = | 4  2 |
    | 2  2 |

• Check the sign of the top-left entry of the matrix, as in the single-variable case:
  • If it is positive, you have a minimum.
  • If it is negative, you have a maximum.
• In this case it is positive, so we have a minimum.


The Hessian Determinants: |H|
• A convenient test for the above second-order condition is the Hessian.
• Suppose that z = f(x, y). Given that the first-order conditions zx = zy = 0 are met, a sufficient condition for the multivariable function z = f(x, y) to be at an optimum can be examined using the Hessian determinant |H|.
• A Hessian |H| is a determinant composed of all the second-order partial derivatives, with the second-order direct partials on the principal diagonal and the second-order cross partials off the principal diagonal. Thus,

|H| = | zxx  zxy |
      | zyx  zyy |

where zxy = zyx.


Key Terminology: Principal Minor
• Principal minor of order k: A sub-matrix obtained by deleting any n-k
rows and their corresponding columns from an n x n matrix A.
• Consider 1 2 3
A  4 5 6
7 8 9
• Principal minors of order 1 are diagonal elements 1, 5, and 9.

• Principal minors of order 2 are 1 2, 1 3 and 5 6


 4 5  7 9  8 9
     
• Principal minor of order 3 is A.
• Determinant of a principal minor is called principal determinant.
• There are 2n-1 principal determinants for an n x n matrix.
Key Terminology: Leading Principal Minor
• The leading principal minor of order k of an n x n matrix is obtained by deleting the last n−k rows and their corresponding columns. For example:
A = | 1 2 3 |
    | 4 5 6 |
    | 7 8 9 |
• The leading principal minor of order 1 is 1, denoted by A1.
• The leading principal minor of order 2 is
A2 = | 1 2 |
     | 4 5 |
• The leading principal minor of order 3 is A itself.
• The number of leading principal determinants of an n x n matrix is n.
• |A1|, |A2|, and |A3| are the determinants of the leading principal minors of order 1, 2, and 3, respectively.
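These definitions are easy to check mechanically. Below is a small sketch in Python (not part of the original text) that computes the leading principal determinants |A1|, |A2|, |A3| of the example matrix:

```python
def det(M):
    """Recursive determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def leading_principal_determinants(A):
    # |A_k| keeps the first k rows and columns (i.e., deletes the last n-k).
    n = len(A)
    return [det([row[:k] for row in A[:k]]) for k in range(1, n + 1)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(leading_principal_determinants(A))  # [1, -3, 0]
```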
Second order for Minimum: |𝑯|
• If the first element on the principal diagonal, the first principal minor |H1| = zxx, is positive, and the second principal minor
|H2| = | zxx  zxy | = zxx·zyy − (zxy)² > 0,
       | zyx  zyy |
then the second-order conditions for a minimum are met.
• That is, when |H1| > 0 and |H2| > 0, the Hessian H is called positive definite.
• A positive definite Hessian fulfills the second-order conditions for a minimum.



Second order for maximum: |𝑯|

• If the first principal minor |H1| = zxx < 0 and the second principal minor
|H2| = zxx·zyy − (zxy)² > 0,
then the second-order conditions for a maximum are met.
• That is, when |H1| < 0 and |H2| > 0, the Hessian H is negative definite.
• A negative definite Hessian fulfills the second-order conditions for a maximum.



Higher order Hessian condition

• Given y = f(x1, x2, x3), the third-order Hessian is

|H| = | y11  y12  y13 |
      | y21  y22  y23 |
      | y31  y32  y33 |

where the elements are the various second-order partial derivatives of y: yij = ∂²y/∂xi∂xj.


Higher order Hessian condition (continued)

• Conditions for a relative minimum or maximum depend on the signs of the first, second, and third leading principal minors, respectively:
(1) If |H1| = y11 > 0, |H2| > 0, and |H3| = |H| > 0, where |H3| is the third principal minor, then H is positive definite and fulfills the second-order conditions for a minimum.
(2) If |H1| = y11 < 0, |H2| > 0, and |H3| = |H| < 0, then H is negative definite and fulfills the second-order conditions for a maximum.
Example
• Consider a function z = f(x, y) and:
1. find the critical values at which the function can be optimized,
2. find the Hessian matrix,
3. evaluate whether the optimal point is a minimum or a maximum.
Suppose that, using Cramer's rule on the first-order conditions, the function is optimized at x = 1 and y = 2, with second partials zxx = 6, zyy = 4, and zxy = 1.
Using the Hessian to test the second-order conditions,

|H| = | 6  1 |
      | 1  4 |

Taking the principal minors, |H1| = 6 > 0 and |H2| = 6(4) − 1(1) = 23 > 0.
With |H1| > 0 and |H2| > 0, the Hessian H is positive definite, and z is minimized at the critical values.
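The second-order test in this example can be reproduced in a few lines (a sketch; the second partials are those given above):

```python
# Second-order test for the example: z_xx = 6, z_yy = 4, z_xy = z_yx = 1.
z_xx, z_yy, z_xy = 6, 4, 1

H1 = z_xx                        # first leading principal minor
H2 = z_xx * z_yy - z_xy * z_xy   # |H| = 6*4 - 1*1 = 23

if H1 > 0 and H2 > 0:
    verdict = "minimum (H positive definite)"
elif H1 < 0 and H2 > 0:
    verdict = "maximum (H negative definite)"
else:
    verdict = "test inconclusive"

print(H1, H2, verdict)  # 6 23 minimum (H positive definite)
```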
Higher order Hessian condition: summary

• Higher-order Hessians follow in analogous fashion, as in the case of second derivatives.
• If all the (leading) principal minors of |H| are positive, |H| is positive definite and the second-order conditions for a relative minimum are met.
• If the (leading) principal minors of |H| alternate in sign, starting negative (|H1| < 0, |H2| > 0, |H3| < 0, …), |H| is negative definite and the second-order conditions for a relative maximum are met.



Higher order Hessian condition: example
• The function
y = −5x1² + 10x1 + x1x3 − 2x2² + 4x2 + 2x2x3 − 4x3²
is optimized as follows, using the Hessian to test the second-order conditions.
• The first-order conditions are
y1 = −10x1 + 10 + x3 = 0
y2 = −4x2 + 4 + 2x3 = 0
y3 = x1 + 2x2 − 8x3 = 0
which can be expressed in matrix form as
| −10   0    1 | | x1 |   | −10 |
|   0  −4    2 | | x2 | = |  −4 |
|   1   2   −8 | | x3 |   |   0 |


Higher order Hessian condition: example(cont’d)

• Using Cramer's rule and taking the different determinants,
|A| = −10[(−4)(−8) − (2)(2)] − 0 + 1[(0)(2) − (−4)(1)] = −10(28) + 1(4) = −276 ≠ 0.
• Since A in this case is the Jacobian and does not equal zero, the three equations are functionally independent.
Thus,
x1 = |A1|/|A| ≈ 1.04,  x2 = |A2|/|A| ≈ 1.22,  x3 = |A3|/|A| ≈ 0.43.


Higher order Hessian condition: example(cont’d)
• Taking the second partial derivatives from the first-order conditions to prepare the Hessian,

|H| = | −10   0    1 |
      |   0  −4    2 |
      |   1   2   −8 |

which has the same elements as the coefficient matrix above, since the first-order partials are all linear. Finally, applying the Hessian test by checking the signs of the first, second, and third principal minors, respectively,
|H1| = −10 < 0,  |H2| = (−10)(−4) − (0)(0) = 40 > 0,  |H3| = |H| = −276 < 0.
Since the principal minors alternate correctly in sign, the Hessian is negative definite and the function is maximized at x1 ≈ 1.04, x2 ≈ 1.22, and x3 ≈ 0.43.
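The whole example can be verified numerically. The sketch below assumes the coefficient matrix A = [[−10, 0, 1], [0, −4, 2], [1, 2, −8]] and constants b = [−10, −4, 0] implied by |A| = −10(28) + 1(4) = −276 and the reported critical values:

```python
# First-order conditions of the example in matrix form: A x = b.
A = [[-10, 0, 1], [0, -4, 2], [1, 2, -8]]
b = [-10, -4, 0]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def replace_col(M, j, col):
    return [[col[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]

dA = det3(A)  # -276, nonzero: the three equations are functionally independent
x = [det3(replace_col(A, j, b)) / dA for j in range(3)]  # Cramer's rule

# Leading principal minors of the Hessian (same elements as A here).
H1 = A[0][0]
H2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
H3 = dA

print(dA)                              # -276
print([round(v, 2) for v in x])        # [1.04, 1.22, 0.43]
print(H1 < 0 and H2 > 0 and H3 < 0)    # True: negative definite -> maximum
```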
Exercise 3.2:|𝑯|
• Optimize the following function, using
a) Cramer’s rule for the first-order condition and
b) the Hessian for the second-order condition:



Eigenvalues and Eigenvectors
• Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero vector x in Rⁿ such that Ax = λx. The vector x is called an eigenvector corresponding to λ.
• Let A be an n × n matrix with eigenvalue λ and corresponding eigenvector x. Thus Ax = λx. This equation may be written
Ax − λx = 0,  giving  (A − λIn)x = 0   (a homogeneous system of equations)
• Solving the equation |A − λIn| = 0 for λ leads to all the eigenvalues of A.
• On expanding the determinant |A − λIn|, we get a polynomial in λ.
• This polynomial is called the characteristic polynomial of A.
• The equation |A − λIn| = 0 is called the characteristic equation of A.
Eigenvalues and Eigenvectors
Notice that
A: an n × n matrix
λ: a scalar, the characteristic root, latent root, or eigenvalue
x: an n × 1 nonzero column matrix, the characteristic vector, latent vector, or eigenvector
Ax = λx = λIx  ⇒  (λI − A)x = 0   (a homogeneous system)
Such a system of equations has nonzero solutions iff det(λI − A) = 0; otherwise, it has only the trivial solution.
Characteristic equation of A ∈ Mn×n:
det(λI − A) = λ^n + c_{n−1}λ^{n−1} + ⋯ + c_1λ + c_0 = 0
Eigenvalues and Eigenvectors
Find the eigenvalues and eigenvectors of the matrix
A = | −4  −6 |
    |  3   5 |
Solution: Let us first derive the characteristic polynomial of A. We get
A − λI2 = | −4  −6 | − λ | 1  0 | = | −4−λ   −6  |
          |  3   5 |     | 0  1 |   |  3    5−λ  |
|A − λI2| = (−4−λ)(5−λ) + 18 = λ² − λ − 2
We now solve the characteristic equation of A:
λ² − λ − 2 = (λ − 2)(λ + 1) = 0
The eigenvalues of A are 2 and −1.
The corresponding eigenvectors are found by using these values of λ in the equation (A − λI2)x = 0. There are many eigenvectors corresponding to each eigenvalue.
Eigenvalues and Eigenvectors
• For λ = 2:
We solve the equation (A − 2I2)x = 0 for x. The matrix A − 2I2 is obtained by subtracting 2 from the diagonal elements of A. We get
A − 2I2 = | −6  −6 |
          |  3   3 |
This leads to the system of equations
−6x1 − 6x2 = 0
 3x1 + 3x2 = 0
giving x1 = −x2. The solutions to this system of equations are x1 = −r, x2 = r, where r is a scalar. Thus the eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form
v1 = | x1 | = | −x2 | = r | −1 |
     | x2 |   |  x2 |     |  1 |
Eigenvalues and Eigenvectors
• For λ = −1:
We solve the equation (A + 1I2)x = 0 for x. The matrix A + 1I2 is obtained by adding 1 to the diagonal elements of A. We get
A + I2 = | −3  −6 |
         |  3   6 |
This leads to the system of equations
−3x1 − 6x2 = 0
 3x1 + 6x2 = 0
Thus x1 = −2x2. The solutions to this system of equations are x1 = −2s and x2 = s, where s is a scalar. Thus the eigenvectors of A corresponding to λ = −1 are the nonzero vectors of the form
v2 = | x1 | = | −2x2 | = s | −2 |
     | x2 |   |  x2  |     |  1 |
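The two eigenpairs can be checked directly against the definition Ax = λx (a minimal sketch):

```python
# Verify that A v = lambda v for the eigenpairs found above.
A = [[-4, -6], [3, 5]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

pairs = [(2, [-1, 1]), (-1, [-2, 1])]
for lam, v in pairs:
    print(matvec(A, v) == [lam * v[0], lam * v[1]])  # True for both
```

Any nonzero scalar multiple of v works equally well, since (A − λI)x = 0 has a whole line of solutions.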
Exercise 3.3
• Recall that in Chapter 2, when we dealt with homogeneous systems of linear equations, we stated that the system has a nontrivial solution if and only if the coefficient matrix is singular:

(A − λI)x = 0:   | 5  2  1 |       | 1  0  0 |
                 | 2  1  0 |  − λ  | 0  1  0 |   x = 0
                 | 1  0  1 |       | 0  0  1 |

• In that case, the value of the determinant is found to be λ(1 − λ)(λ − 6). Hence, the given system has nontrivial solutions if and only if λ = 0, 1, or 6.
• Now, these solutions are called eigenvalues; find the corresponding eigenvectors, which are the solutions to that system of equations.
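A quick numerical check of the claimed roots (a sketch; it assumes the coefficient matrix A = [[5, 2, 1], [2, 1, 0], [1, 0, 1]], which is consistent with the stated determinant λ(1 − λ)(λ − 6)):

```python
# Check that det(A - lambda*I) vanishes at the claimed roots 0, 1, 6.
A = [[5, 2, 1], [2, 1, 0], [1, 0, 1]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_det(lam):
    # det(A - lam*I): subtract lam from the diagonal entries only.
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(3)]
               for i in range(3)]
    return det3(shifted)

print([char_det(lam) for lam in (0, 1, 6)])  # [0, 0, 0]
```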
Quadratic Forms
• A quadratic function f: R → R has the form f(x) = ax².
• The generalization of this notion to two variables is the quadratic form, given (for a symmetric coefficient matrix) by
Q(x1, x2) = a11x1² + 2a12x1x2 + a22x2²
• Here each term has degree 2 (the sum of the exponents is 2 for all summands).
• A quadratic form of three variables looks like
Q(x1, x2, x3) = a11x1² + a22x2² + a33x3² + 2a12x1x2 + 2a13x1x3 + 2a23x2x3


Quadratic Forms
• A general quadratic form of n variables is a real-valued function Q: Rⁿ → R of the form
Q(x1, x2, …, xn) = Σᵢ Σⱼ aij xi xj   (i, j = 1, …, n)
• In short, Q(x1, x2, …, xn) = x^T A x.
• As we see, a quadratic form is determined by the matrix A = (aij).
• A is assumed to be symmetric.
Matrix Representation of Quadratic Forms
• Let Q(x1, x2, …, xn) = Σᵢ Σⱼ aij xi xj.
• It is easy to see that Q(x) = x^T A x, where x = (x1, …, xn)^T and A = (aij).
Note: A is assumed to be symmetric. Otherwise, A can be replaced with the symmetric matrix (A + A^T)/2 without changing the value of the quadratic form.
Example
• Given the quadratic form Q(x1, x2) = 5x1² − 10x1x2 + x2²:
1. identify the symmetric matrix associated with this form, and
2. write the form in matrix notation.
• A can be formed by placing the coefficients of the squared terms on the principal diagonal and dividing the coefficient of the nonsquared term equally between the off-diagonal positions:

A = |   5     −10/2 |  =  |  5  −5 |
    | −10/2     1   |     | −5   1 |

Thus,
Q = x^T A x = [x1  x2] |  5  −5 | | x1 |
                       | −5   1 | | x2 |
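Evaluating x^T A x directly and comparing it with the polynomial confirms the matrix representation (a sketch using the example's matrix A):

```python
# Evaluate Q(x) = x^T A x for the example (Q = 5x1^2 - 10x1x2 + x2^2).
A = [[5, -5], [-5, 1]]

def quadratic_form(A, x):
    """Compute x^T A x as the double sum over a_ij * x_i * x_j."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Agrees with evaluating the polynomial directly:
for x1, x2 in [(1, 1), (2, -1), (0.5, 3)]:
    direct = 5 * x1 ** 2 - 10 * x1 * x2 + x2 ** 2
    print(quadratic_form(A, [x1, x2]) == direct)  # True each time
```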


Exercise 3.4: Find the coefficient matrix A of the quadratic form.

3x1² − 4x1x2 + 7x2² = [x1  x2] |  3x1 − 2x2 |
                               | −2x1 + 7x2 |

                    = [x1  x2] |  3  −2 | | x1 |
                               | −2   7 | | x2 |

                    = x^T A x,  where A = |  3  −2 | ,  x = | x1 |
                                          | −2   7 |        | x2 |


Exercise 3.5
Let x = | x1 |. Compute x^T A x for the following matrices.
        | x2 |

a. A = | 4  0 |
       | 0  3 |

b. A = |  3  −2 |
       | −2   7 |
Classifying Quadratic Forms
• Definition: A quadratic form Q is:
a. positive definite if Q(x) >0 for all x≠ 0 ,
b. negative definite if Q(x) <0 for all x≠ 0 ,
c. indefinite if Q(x) assumes both positive and negative
values; that is, if x^T A x > 0 for some x and x^T A x < 0 for others.
• Also, Q is said to be positive semidefinite if Q(x) ≥0 for all x,
and negative semidefinite if Q(x) ≤0 for all x.



Quadratic Forms and Eigenvalues
• Let A be an n × n symmetric matrix. Then the quadratic
form x^T A x is:
a. positive definite if and only if the eigenvalues of
A are all positive,
b. negative definite if and only if the eigenvalues of
A are all negative, or
c. indefinite if and only if A has both positive and
negative eigenvalues.



Quadratic Forms and Eigenvalues(cont’d..)
• An alternative way of relating the λ's to the matrix A:
1. If all characteristic roots (λ) are positive, A is positive definite.
2. If all λ's are negative, A is negative definite.
3. If all λ's are nonnegative and at least one λ = 0, A is positive semidefinite.
4. If all λ's are nonpositive and at least one λ = 0, A is negative semidefinite.
5. If some λ's are positive and others negative, A is indefinite.
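For a symmetric 2 × 2 matrix the eigenvalues have a closed form in terms of the trace and determinant, so the classification above can be sketched as:

```python
import math

def classify(A):
    """Classify a symmetric 2x2 matrix by the signs of its eigenvalues."""
    t = A[0][0] + A[1][1]                       # trace = sum of eigenvalues
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det = product of eigenvalues
    root = math.sqrt(t * t - 4 * d)             # discriminant >= 0 for symmetric A
    eigs = [(t - root) / 2, (t + root) / 2]
    if all(e > 0 for e in eigs):
        return "positive definite"
    if all(e < 0 for e in eigs):
        return "negative definite"
    if all(e >= 0 for e in eigs):
        return "positive semidefinite"
    if all(e <= 0 for e in eigs):
        return "negative semidefinite"
    return "indefinite"

print(classify([[4, 0], [0, 3]]))    # positive definite
print(classify([[-4, 1], [1, -3]]))  # negative definite
print(classify([[1, 2], [2, -1]]))   # indefinite
```

Note that with floating-point eigenvalues, the semidefinite cases (an eigenvalue exactly zero) are fragile; a tolerance would be needed in practice.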
