3195 Study Vs
Morgan Rodgers
Definition. A vector space is a collection V of objects called vectors which, along with a collection of scalars (usually the real numbers), have operations of addition and scalar multiplication defined on them which satisfy the following for vectors u, v, w (and the zero vector 0) and scalars c and d:
a) u + v = v + u
b) u + (v + w) = (u + v) + w
c) u + 0 = u
d) u + (−u) = 0
e) c(u + v) = cu + cv
f) (c + d)u = cu + du
g) c(du) = (cd)u
h) 1u = u
Definition. A subspace W of a vector space V is a subset of the vectors that is also a vector space under the same operations of vector addition and scalar multiplication (with the same set of scalars). The crucial properties that need to hold for W to be a subspace of V are:
a) For every possible choice of two vectors u and v in W, u + v is also in W.
b) For every vector v in W and every possible choice of scalar c, cv is also in W.
Here are some examples of vector spaces that are relatively common:

Example. R^n is the set of all vectors with n real number entries; vector addition is done componentwise (entry by entry), the set of scalars is R, and we multiply a vector by a scalar by multiplying every entry of the vector by the scalar.

Example. C^n is the set of all vectors with n complex number entries; vector addition and scalar multiplication are the same as for R^n, except the set of scalars is C.

Example. M_mn is the set of all m × n matrices with real number entries; we add these componentwise, and multiply them by scalars (in R) by multiplying every entry by the scalar.

Example. F is the set of all real-valued functions defined on the real line. Our vectors are functions, and we add two functions f(x) and g(x) to get the new function (f + g)(x) = f(x) + g(x). We multiply the function f(x) by a real number scalar c to get the new function (cf)(x) = c·f(x). Subspaces of F are called function spaces and we will discuss them more in the next chapter.
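As a quick illustration of these operations (a minimal sketch using Python's numpy library, which is not part of these notes, and made-up vectors), here is componentwise addition and scalar multiplication in R^3, together with the two closure checks for the made-up subspace of vectors whose third entry is zero:

    import numpy as np

    # Componentwise operations in R^3 (example vectors chosen for illustration).
    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, -1.0, 0.5])
    c = 2.5

    print(u + v)      # vector addition, entry by entry
    print(c * u)      # scalar multiplication, every entry scaled by c

    # Subspace check for W = {(x, y, 0) : x, y real}, a plane inside R^3:
    # closure under addition and scalar multiplication keeps the third entry 0.
    w1 = np.array([1.0, 5.0, 0.0])
    w2 = np.array([-2.0, 3.0, 0.0])
    print((w1 + w2)[2] == 0.0, (c * w1)[2] == 0.0)   # both True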
Definition. Given a list of vectors v1, v2, ..., vk, a vector y = c1 v1 + c2 v2 + ... + ck vk is called a linear combination of the vectors v1, v2, ..., vk with weights c1, c2, ..., ck. Notice that if our vectors are in R^n, this is the same thing as saying y = [v1 v2 ... vk][c1, c2, ..., ck]^T, that is, y equals the matrix whose columns are v1, v2, ..., vk times the column vector of weights.

Definition. The span of a set of vectors v1, v2, ..., vk is the set of all possible linear combinations of v1, v2, ..., vk. The span of a set of vectors is always a subspace, which we call the subspace spanned by v1, v2, ..., vk. We say that v1, v2, ..., vk span the vector space V (or a subspace W) if every vector in V (or W) is a linear combination of v1, v2, ..., vk; in this case we may also say that v1, v2, ..., vk is a spanning set for V (or W). Notice that if our vectors are in R^n, the span of v1, v2, ..., vk consists of the vectors [v1 v2 ... vk][c1, c2, ..., ck]^T for all possible choices of c1, c2, ..., ck.

Definition. The vectors v1, v2, ..., vk are said to be linearly independent if c1 v1 + c2 v2 + ... + ck vk = 0 has only the trivial solution c1 = c2 = ... = ck = 0. For two vectors v1 and v2, this is the same as saying neither vector is a scalar multiple of the other. If v1, v2, ..., vk are not linearly independent, they are said to be linearly dependent. If the vectors are in R^n, this means that v1, v2, ..., vk will be linearly independent if and only if [v1 v2 ... vk][c1, c2, ..., ck]^T = 0 has only the trivial solution, that is, when we row reduce the matrix [v1 v2 ... vk] we have no free variables. This is the most common way to test the linear independence of vectors in R^n.
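As a sketch of this row-reduction test (using Python's sympy library and made-up vectors; neither is assumed anywhere in these notes), we put the vectors as columns of a matrix, row reduce, and look for a pivot in every column:

    from sympy import Matrix

    # Columns are the vectors v1, v2, v3 (made-up example in R^3).
    A = Matrix([[1, 0, 1],
                [2, 1, 3],
                [0, 1, 1]])

    # Row reduce; a free variable (a column without a pivot) means dependence.
    rref, pivots = A.rref()
    print(pivots)                      # (0, 1): only two pivot columns, so dependent
    print(len(pivots) == A.shape[1])   # False: here v3 = v1 + v2

    # A linear combination y = 2*v1 - 1*v2 is just A times the weight vector.
    weights = Matrix([2, -1, 0])
    print(A * weights)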
Theorem. If we have more than n vectors in R^n, they must be linearly dependent, since the matrix formed as above will have to have at least one free variable (since there will be more columns than rows).

Theorem. If we have n vectors in R^n, they are linearly independent if and only if the determinant of the matrix [v1 v2 ... vn] is nonzero.
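A minimal sketch of the determinant test, again with made-up vectors and numpy:

    import numpy as np

    # Three made-up vectors in R^3, placed as the columns of a square matrix.
    A = np.column_stack(([1, 0, 2], [0, 1, 1], [1, 1, 0]))

    # Nonzero determinant <=> the columns are linearly independent.
    print(np.linalg.det(A))            # -3.0 (up to floating point), so independent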
Definition. A finite set B of vectors in a vector space V is called a basis for V provided that a) the vectors in B are linearly independent, and b) the vectors in B span V.
Theorem. If B = {v1, v2, ..., vn} is a basis for V, then any vector w in V can be written uniquely as a linear combination of the vectors in B.

The number of vectors in a basis also tells us some important information about the vector space, due to the following result:

Theorem. Any two bases for a vector space contain the same number of vectors.

Definition. A vector space V is called finite dimensional if there is a basis for V containing a finite number of vectors; in this case the number of vectors n in every basis for V is called the dimension. A vector space having no finite basis is called infinite dimensional.

Theorem. Let V be an n-dimensional vector space, and let S be a subset of V. Then
a) If S is linearly independent and contains n vectors, then S is a basis for V;
b) If the vectors in S span V and S contains n vectors, then S is a basis for V;
c) If S is linearly independent, then S is contained in a basis for V;
d) If the vectors in S span V, then S contains a basis for V.

The first thing we will look at is how to reduce a spanning set to a basis. The key here is that row reducing a matrix preserves the independence relations between the columns. In essence:
a) We test to see if a set of vectors is linearly independent by putting the vectors as columns in a matrix and row reducing; if we end up with a leading entry in each column, the vectors are independent.
b) If we have a set of vectors S that spans the vector space V (or a subspace W of V), we can put the vectors of S all as columns in a matrix and row reduce; if we take all of the original vectors that correspond to columns containing a leading entry in the echelon form of the matrix, they will i) be linearly independent and ii) span the same space as the original set of vectors.
Really this amounts to a nice matrix method of taking the vectors one at a time, starting with the first vector in S, and throwing away each vector that is a linear combination of the vectors that came before, so we are eliminating the redundancy in S to reduce it to a basis.

We can use this to see how to extend a list S of linearly independent vectors in V to a basis for V:
a) If we have a set of vectors S = {v1, v2, ..., vk} in R^n that are linearly independent, and e1, e2, ..., en are the standard basis vectors in R^n, then {v1, v2, ..., vk, e1, e2, ..., en} is a spanning set for V;
b) If we apply our technique to reduce this spanning set to a basis, the vectors we choose will be {v1, v2, ..., vk} along with whichever of the standard basis vectors we need to form a basis for V.
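Here is a small sketch of both procedures, using sympy and made-up vectors: the pivot columns reported by rref() pick out a basis from a spanning set, and appending the standard basis vectors lets us extend an independent set.

    from sympy import Matrix, eye

    # Reduce a spanning set to a basis: keep the original columns that get pivots.
    S = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])          # four made-up vectors spanning a subspace of R^3
    _, pivots = S.rref()
    basis = [S.col(j) for j in pivots]
    print(pivots)                       # pivot columns give an independent spanning subset

    # Extend an independent set {v1} in R^3 to a basis: append e1, e2, e3 and repeat.
    v1 = Matrix([1, 1, 0])
    extended = v1.row_join(eye(3))      # columns: v1, e1, e2, e3
    _, pivots = extended.rref()
    print([extended.col(j) for j in pivots])   # v1 plus whichever standard basis vectors are needed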
We have a few vector subspaces that arise from an m × n matrix A that are useful enough to have special names:

Definition. The null space of an m × n matrix A is the set of solutions to the homogeneous equation Ax = 0; it contains the vectors x that are solutions to this equation and so is a subspace of R^n.

We find a basis for the null space of A using the following method:
i) Reduce A to its reduced echelon form;
ii) Identify the free variables; if there are no free variables, we can consider the empty set to be the basis for the null space;
iii) Set the r free variables equal to parameters t1, ..., tr and solve for the remaining variables in terms of these parameters;
iv) Let vj be the vector obtained by setting tj = 1 and all of the other parameters equal to zero. Then {v1, ..., vr} is a basis for the null space of A, which has dimension r.

Definition. The column space of an m × n matrix A is the subspace spanned by the columns of A; it is a subspace of R^m. It can also be thought of as the set of vectors y that can be written Ax = y for some choice of x.

To find a basis for the column space of A, we use the fact that the columns of A are a spanning set for the column space; we can use the method introduced in the previous section to reduce this spanning set to a basis.

Definition. The dimension of the column space of A is called the rank of A.

Notice that the leading entries correspond to columns that are used as a basis for the column space, while the columns without leading entries correspond to free variables; thus we have the following theorem:

Theorem. If A is an m × n matrix, then dim(null A) + rank A = n.

Definition. The row space of an m × n matrix A is the subspace spanned by the rows of A; it is a subspace of R^n, and can be thought of as the column space of A^T. We find a basis for the row space by transposing A and finding a basis for the column space of A^T.
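The following sketch (sympy, with a made-up 3 × 4 matrix) computes a null space basis, a column space basis, and the rank, and checks the dimension relation above:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])          # made-up 3 x 4 matrix

    null_basis = A.nullspace()          # basis for {x : A x = 0}, a subspace of R^4
    col_basis  = A.columnspace()        # basis for the span of the columns, in R^3
    row_basis  = A.T.columnspace()      # row space of A = column space of A^T

    print(len(null_basis), A.rank())               # 2 and 2
    print(len(null_basis) + A.rank() == A.cols)    # rank-nullity: 2 + 2 = 4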
Orthogonality
We define orthogonality of vectors in a vector space using what is called an inner product.

Definition. An inner product on a vector space V is a function ⟨u, v⟩ that takes the pair of vectors u and v and returns a scalar; to be called an inner product, the function must satisfy the following: if u, v, and w are vectors in V and c is a scalar, then
a) ⟨u, v⟩ = ⟨v, u⟩;
b) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩;
c) ⟨cu, v⟩ = c⟨u, v⟩;
d) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0.

The most common example of an inner product is the dot product defined on R^n; this is also sometimes called the Euclidean inner product. It is usually written u · v = u^T v. Notice that u^T v amounts to multiplying the corresponding entries of u and v and adding all of the products together.

An inner product on a vector space allows us to define things like the length of a vector, the distance between two vectors, and the angle between two vectors, which will behave in many ways like we would hope.

Definition. We define the length of a vector v in terms of an inner product by |v| = √⟨v, v⟩. When the inner product is the standard dot product, this corresponds to our usual definition of length in R^2 and R^3.

Definition. We define the distance between two vectors u, v to be d(u, v) = |u − v|. If we are using the dot product on R^2 or R^3, this corresponds to our standard notion of the distance between two points in the plane or in three space.

Theorem (Cauchy-Schwarz). If u and v are vectors in a vector space V with an inner product, then |u · v| ≤ |u||v| (notice that we are taking the absolute value of a scalar on the left, and multiplying two lengths on the right).

Theorem (Triangle Inequality). If u and v are vectors in a vector space V with an inner product, then |u + v| ≤ |u| + |v|.

Definition. We define the angle θ between two vectors u and v by the formula u · v = |u||v| cos θ, so θ = cos⁻¹( u · v / (|u||v|) ).
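A short numerical sketch of these formulas, using numpy and two made-up vectors in R^3:

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([2.0, 0.0, 1.0])

    dot      = np.dot(u, v)                         # u . v
    length_u = np.sqrt(np.dot(u, u))                # |u| = sqrt(<u, u>) = 3 here
    dist     = np.linalg.norm(u - v)                # d(u, v) = |u - v|
    angle    = np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(dot, length_u, dist, angle)               # angle is in radians
    print(abs(dot) <= np.linalg.norm(u) * np.linalg.norm(v))   # Cauchy-Schwarz holds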
Definition. We say two vectors u and v are orthogonal if u · v = 0, that is, if the angle between them is π/2.
Theorem. If we have a collection of nonzero vectors v1, v2, ..., vk that are mutually orthogonal (that means every pair of them is orthogonal), then they are linearly independent.

Definition. We say the vector u is orthogonal to a subspace W of a vector space V if u is orthogonal to every vector in W. The orthogonal complement W⊥ of W is the set of all vectors in V that are orthogonal to the subspace W.
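A quick numerical check of this theorem, with three made-up mutually orthogonal vectors in R^3 and numpy:

    import numpy as np

    # Three mutually orthogonal (made-up) vectors in R^3.
    v1 = np.array([1.0, 1.0, 0.0])
    v2 = np.array([1.0, -1.0, 0.0])
    v3 = np.array([0.0, 0.0, 2.0])

    # Every pair has dot product zero ...
    print(np.dot(v1, v2), np.dot(v1, v3), np.dot(v2, v3))        # 0.0 0.0 0.0

    # ... so the vectors are linearly independent: the matrix has rank 3.
    print(np.linalg.matrix_rank(np.column_stack((v1, v2, v3))))  # 3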
Theorem. If W is a subspace of a vector space V then:
a) its orthogonal complement W⊥ is also a subspace of V;
b) the only vector in both W and W⊥ is the zero vector;
c) the orthogonal complement of W⊥ is W (i.e. (W⊥)⊥ = W);
d) if S is a spanning set for W, then u is in W⊥ if and only if u is orthogonal to every vector in S;
e) if V is finite dimensional, then dim W + dim W⊥ = dim V.

If we have a collection of vectors S = {w1, w2, ..., wk} that form a spanning set for W, then to find W⊥, we need to find the collection of vectors that are orthogonal to every vector in S; to do this, we use the fact that two vectors wi and u are orthogonal exactly when wi^T u = 0. We can set this up as a matrix: multiplying the matrix whose rows are w1^T, w2^T, ..., wk^T by u gives the column vector with entries c1, c2, ..., ck, where ci = wi^T u; so to find the vectors u that are orthogonal to the vectors in S, we need to find the null space of this matrix. A basis for the null space will give us a basis for W⊥.
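As a sketch of this method (sympy, with two made-up spanning vectors for W in R^4): put the wi^T as the rows of a matrix and take its null space to get a basis for W⊥.

    from sympy import Matrix

    # W is spanned by two made-up vectors in R^4.
    w1 = Matrix([1, 0, 1, 0])
    w2 = Matrix([0, 1, 0, 1])

    # Rows of M are w1^T and w2^T, so M u = 0 exactly when u is orthogonal to both.
    M = w1.T.col_join(w2.T)
    perp_basis = M.nullspace()          # basis for the orthogonal complement of W

    print(perp_basis)                   # e.g. (-1, 0, 1, 0) and (0, -1, 0, 1)
    print(M.rank() + len(perp_basis))   # dim W + dim W-perp = 4 = dim V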