Advanced Calculus
By H. K. Nickerson, D. C. Spencer and N. E. Steenrod
About this ebook
Starting with an abstract treatment of vector spaces and linear transformations, the authors introduce a single basic derivative in an invariant form. All other derivatives — gradient, divergence, curl, and exterior — are obtained from it by specialization. The corresponding theory of integration is likewise unified, and the various multiple integral theorems of advanced calculus appear as special cases of a general Stokes formula. The text concludes by applying these concepts to analytic functions of complex variables.
I.THE ALGEBRA OF VECTOR SPACES
§1.Axioms
1.1.Definition.A vector space V is a set, whose elements are called vectors, together with two operations. The first operation, called addition, assigns to each pair of vectors A, B a vector, denoted by A + B, called their sum. The second operation, called multiplication by a scalar, assigns to each vector A and each real number x a vector denoted by xA. The two operations are required to have the following eight properties:
Axiom 1.A + B = B + A for each pair of vectors A, B. (I.e. addition is commutative.)
Axiom 2.(A + B) + C = A + (B + C) for each triple of vectors A, B, C. (I.e. addition is associative.)
Axiom 3.There is a unique vector, denoted by 0 and called the zero vector, such that 0 + A = A for each vector A.
Axiom 4.For each vector A there is a unique vector, denoted by −A, such that A + (−A) = 0.
Axiom 5.x(A + B) = xA + xB for each real number x and each pair of vectors A, B. (I.e. multiplication is distributive with respect to vector addition.)
Axiom 6.(x + y)A = xA + yA for each pair x, y of real numbers and each vector A. (I. e. multiplication is distributive with respect to scalar addition.)
Axiom 7.(xy)A = x(yA) for each pair x, y of real numbers and each vector A.
Axiom 8.For each vector A,
(i)0A = 0,
(ii)1A = A,
(iii)(−1)A = −A.
1.2.Definition.The difference A − B of two vectors is defined to be the sum A + (−B).
The subsequent development of the theory of vector spaces will be based on the above axioms. There are other approaches to the subject in which the vector spaces are constructed. For example, starting with a euclidean space, we could define a vector to be an oriented line segment. Or, again, we could define a vector to be a sequence (x1, ..., xn) of n real numbers. These approaches give particular vector spaces having properties not possessed by all vector spaces. The advantages of the axiomatic approach are that the results which will be obtained apply to all vector spaces, and the axioms supply a firm starting point for a logical development.
§2.Redundancy
The axioms stated above are redundant. For example, the word "unique" in Axiom 3 can be omitted. For suppose 0 and 0′ are two vectors, each having the property stated in Axiom 3. Using Axiom 1, we obtain
0′ = 0 + 0′ = 0′ + 0 = 0.
This proves the uniqueness.
The word "unique" can likewise be omitted from Axiom 4. For suppose A, B, C are three vectors such that
A + B = 0 and A + C = 0.
Using these relations and Axioms 1, 2 and 3, we obtain
B = 0 + B = (A + C) + B = (C + A) + B = C + (A + B) = C + 0 = C.
Therefore B = C, and so there can be at most one candidate for −A.
The Axiom 8(i) is a consequence of the preceding axioms: by Axiom 6,
0A = (0 + 0)A = 0A + 0A;
adding −(0A) to both sides and using Axioms 2, 3 and 4 gives 0 = 0A.
§3.Cartesian spaces
3.1.Definition.The cartesian k-dimensional space, denoted by Rk, is the set of all sequences (a1, a2, ..., ak) of k real numbers together with the operations
(a1, a2, ..., ak) + (b1, b2, ..., bk) = (a1+b1, a2+b2, ..., ak+bk)
and
x(a1, a2, ..., ak)=(xa1, xa2, ..., xak).
In particular, R¹ = R is the set of real numbers with the usual addition and multiplication. The number ai is called the ith component of (a1, a2, ..., ak), i = 1, ..., k.
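The two operations of Definition 3.1 can be sketched in Python, modelling elements of Rk as tuples. This is an illustrative sketch; the function names are not from the text.

```python
# Componentwise operations of Definition 3.1, with R^k modelled as
# Python tuples of numbers. Names are illustrative.

def vec_add(a, b):
    """(a1, ..., ak) + (b1, ..., bk) = (a1+b1, ..., ak+bk)."""
    assert len(a) == len(b)
    return tuple(ai + bi for ai, bi in zip(a, b))

def vec_scale(x, a):
    """x(a1, ..., ak) = (x*a1, ..., x*ak)."""
    return tuple(x * ai for ai in a)

A = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
print(vec_add(A, B))      # (5.0, 7.0, 9.0)
print(vec_scale(2.0, A))  # (2.0, 4.0, 6.0)
```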
3.2.Theorem.For each integer k > 0, Rk is a vector space.
Proof.The proofs of Axioms 1 through 8 are based on the axioms for the real numbers R.
Let A = (a1, a2, ..., ak), B = (b1, b2, ..., bk), etc. For each i = 1, ..., k, the ith component of A + B is ai + bi, and that of B + A is bi + ai. Since the addition of real numbers is commutative, ai + bi = bi + ai. This implies A + B = B + A; hence Axiom 1 is true.
The ith component of (A + B) + C is (ai + bi) + ci, and that of A + (B + C) is ai + (bi + ci). Thus the associative law for real numbers implies Axiom 2.
The vector 0 = (0, 0, ..., 0) has the property that the ith component of 0 + A is 0 + ai = ai; hence 0 + A = A. This proves Axiom 3 since the uniqueness part of the axiom is redundant (see §2).
For A = (a1, a2, ..., ak), set −A = (−a1, −a2, ..., −ak); then the ith component of A + (−A) is ai + (−ai) = 0, so A + (−A) = 0. This proves Axiom 4 (uniqueness is again redundant).
If x is a real number, the ith component of x(A + B) is, by definition, x(ai + bi); and that of xA + xB is, by definition, xai + xbi. Thus the distributive law for real numbers implies Axiom 5.
The verifications of Axioms 6, 7 and 8 are similar and are left to the reader.
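Though a finite check is not a proof, the componentwise arguments of Theorem 3.2 can be spot-checked numerically. A sketch using exact rational arithmetic; the helper names are illustrative.

```python
from fractions import Fraction

# Spot-check of Axioms 1-8 for sample vectors of R^3, mirroring the
# componentwise reasoning in Theorem 3.2. A check, not a proof.

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(x, a): return tuple(x * c for c in a)

A = (Fraction(1), Fraction(-2), Fraction(3))
B = (Fraction(4), Fraction(0), Fraction(-1))
C = (Fraction(2), Fraction(5), Fraction(7))
x, y = Fraction(3), Fraction(-7)
zero = (Fraction(0),) * 3
neg = lambda a: scale(Fraction(-1), a)

assert add(A, B) == add(B, A)                                # Axiom 1
assert add(add(A, B), C) == add(A, add(B, C))                # Axiom 2
assert add(zero, A) == A                                     # Axiom 3
assert add(A, neg(A)) == zero                                # Axiom 4
assert scale(x, add(A, B)) == add(scale(x, A), scale(x, B))  # Axiom 5
assert scale(x + y, A) == add(scale(x, A), scale(y, A))      # Axiom 6
assert scale(x * y, A) == scale(x, scale(y, A))              # Axiom 7
assert scale(Fraction(0), A) == zero                         # Axiom 8(i)
assert scale(Fraction(1), A) == A                            # Axiom 8(ii)
assert scale(Fraction(-1), A) == neg(A)                      # Axiom 8(iii)
print("all axioms hold on this sample")
```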
§4.Exercises
1.Verify that Rk satisfies Axioms 6, 7 and 8.
2.Prove that Axiom 8(iii) is redundant. Show also that (−x)A = − (xA) for each x and each A.
3.Show that Axiom 8(ii) is not a consequence of the preceding axioms by constructing a set with two operations which satisfy the preceding axioms but not 8(ii). (Hint: Consider the real numbers with multiplication by a scalar redefined by xA = 0 for all x and A.) Can such an example satisfy Axiom 8(iii)?
4.Show that A + A = 2A for each A.
5.Show that x0 = 0 for each x.
.
.
is a vector space.
.
10.If a vector space contains a vector A ≠ 0, show that it contains infinitely many distinct vectors. (Hint: Consider A, 2A, 3A, etc.)
11.Let D be any non-empty set, and define RD to be the set of all functions having domain D and values in R. If f and g are two such functions, their sum f + g is the element of RD defined by
(f + g)(d) = f(d) + g(d) for each d in D.
If f is in RD and x is a real number, let xf be the element of RD defined by
(xf)(d) = xf(d) for each d in D.
Show that RD is a vector space with respect to these operations.
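The operations of Exercise 11 can be sketched with Python callables over a small illustrative domain D. The particular functions and names are illustrative, not from the text.

```python
# The vector space R^D of Exercise 11: functions from a domain D to R,
# added and scaled pointwise. Sample domain and functions illustrative.

D = ("a", "b", "c")

def f(d): return {"a": 1.0, "b": 2.0, "c": 3.0}[d]
def g(d): return {"a": 10.0, "b": 20.0, "c": 30.0}[d]

def fn_add(f, g):
    """(f + g)(d) = f(d) + g(d) for each d in D."""
    return lambda d: f(d) + g(d)

def fn_scale(x, f):
    """(xf)(d) = x * f(d) for each d in D."""
    return lambda d: x * f(d)

h = fn_add(f, g)
print([h(d) for d in D])                 # [11.0, 22.0, 33.0]
print([fn_scale(2.0, f)(d) for d in D])  # [2.0, 4.0, 6.0]
```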
12.Let V be a vector space and let D be a nonempty set. Let VD be the set of all functions with domain D and values in V. Define sum and product as in Exercise 11, and show that VD is a vector space.
13.A sum of four vectors A + B + C + D may be associated (parenthesized) in five ways, e.g. (A + (B + C)) + D. Show that all five sums are equal, and therefore A + B + C + D makes sense without parentheses.
14.Show that A + B + C + D = B + D + C + A.
§5.Associativity and commutativity
5.1.Proposition.If k is an integer ≥ 3, then any two ways of associating a sum A1 + ... + Ak of k vectors give the same sum. Consequently parentheses may be dropped in such sums.
Proof.The proof proceeds by induction on the number of vectors. Axiom 2 gives the case of 3 vectors. Suppose now that k > 3, and that the theorem is true for sums involving fewer than k vectors. We shall show that the sum of k vectors obtained from any method M of association equals the sum obtained from the standard association M0 obtained by adding each term in order, thus:
(...(((A1 + A2) + A3) + A4) ...) + Ak.
A method M must have a last addition in which, for some integer i with 1 ≤ i < k, a sum of A1 + ... + Ai is added to a sum of Ai+1 + ... + Ak. If i is k − 1, the last addition has the form
(A1 + ... + Ak−1) + Ak.
The part in parentheses has fewer than k terms and, by the inductive hypothesis, is equal to the sum obtained by the standard association on k − 1 terms. This converts the full sum to the standard association on k terms. If i = k − 2, it has the form
(A1 + ... + Ak−2) + (Ak−1 + Ak)
which equals
((A1 + ... + Ak−2) + Ak−1) + Ak
by Axiom 2 (treating A1 + ... + Ak−2 as a single vector). By the inductive hypothesis, the sum of the first k − 1 terms is equal to the sum obtained from the standard association. This converts the full sum to the standard association on k terms. Finally, suppose i < k − 2. Since Ai+1 + ... + Ak has fewer than k terms, the inductive hypothesis asserts that its sum is equal to a sum of the form (Ai+1 + ... + Ak−1) + Ak. The full sum has the form
(A1 + ... + Ai) + ((Ai+1 + ... + Ak−1) + Ak)
= ((A1 + ... + Ai) + (Ai+1 + ... + Ak−1)) + Ak
by Axiom 2 applied to the three vectors A1 + ... + Ai, Ai+1 + ... + Ak−1 and Ak. The inductive hypothesis permits us to reassociate the sum of the first k − 1 terms into the standard association. This gives the standard association on k terms.
The theorem just proved is called the general associative law; it says in effect that parentheses may be omitted in the writing of sums. There is a general commutative law as follows.
5.2.Proposition.The sum of any number of terms is independent of the ordering of the terms.
The proof is left to the student. The idea of the proof is to show that one can pass from any order to any other by a succession of steps each of which is an interchange of two adjacent terms.
§6.Notations
The symbols U, V, W will usually denote vector spaces. Vectors will usually be denoted by A, B, C, X, Y, Z. The symbol R stands for the real number system, and a, b, c, x, y, z will usually represent real numbers (= scalars). Rk is the vector space defined in 3.1. The symbols i, j, k, l, m, n will usually denote integers.
We shall use the symbol ∈ as an abbreviation for "is an element of". Thus p ∈ Q should be read: p is an element of the set Q. For example, x ∈ R means that x is a real number, and A ∈ V means that A is a vector in the vector space V.
The symbol ⊂ is an abbreviation for "is a subset of", or, equally well, "is contained in". Thus P ⊂ Q means that each element of the set P is also an element of Q (p ∈ P implies p ∈ Q). It is always true that Q ⊂ Q.
If P and Q are sets, the set obtained by uniting the two sets is denoted by P ∪ Q and is called the union of P and Q. Thus r ∈ P ∪ Q is equivalent to: r ∈ P or r ∈ Q or both. For example, if P is the interval [1, 3] of real numbers and Q is the interval [2, 5], then P ∪ Q is the interval [1, 5]. In case P ⊂ Q, then P ∪ Q = Q.
It is convenient to speak of an "empty set". It is denoted by ϕ and is distinguished by the property of having no elements. If we write P ∩ Q = ϕ, we mean that P and Q have no element in common. Obvious tautologies are
ϕ ⊂ P,ϕ ∪ Q = Q,ϕ ∩ P = ϕ.
§7.Linear subspaces
7.1.Definition.A non-empty subset U of a vector space V is called a linear subspace of V if it satisfies the conditions:
(i)if A ∈ U and B ∈ U, then A + B ∈ U,
(ii)if A ∈ U and x ∈ R, then xA ∈ U.
These conditions assert that the two operations of the vector space V give operations in U.
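For a concrete U, conditions (i) and (ii) can be spot-checked on sample vectors. A sketch for the illustrative subspace x1 + x2 + x3 = 0 in R³ (the subspace of Exercise 8.1(a) below); a finite check illustrates closure but does not prove it.

```python
# Checking conditions (i) and (ii) of Definition 7.1 on sample vectors
# for U = {(x1, x2, x3) in R^3 : x1 + x2 + x3 = 0}. Names illustrative.

def in_U(v):
    return sum(v) == 0

A = (1, 2, -3)
B = (4, -5, 1)
assert in_U(A) and in_U(B)

S = tuple(a + b for a, b in zip(A, B))   # condition (i): A + B in U
assert in_U(S)
assert in_U(tuple(7 * c for c in A))     # condition (ii): xA in U, x = 7
print("closure holds on this sample")
```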
7.2.Proposition.U is itself a vector space with respect to these operations.
Proof.The properties expressed by Axioms 1, 2 and 5 through 8 hold for all vectors of V, hence in particular for the vectors of U. If A ∈ U, then 0A ∈ U by (ii); since 0A = 0 (Axiom 8), it follows that 0 ∈ U; hence Axiom 3 holds in U. Similarly, if A ∈ U, then (−1)A ∈ U by (ii). Since (−1)A = − A (Axiom 8), it follows that − A ∈ U; hence Axiom 4 holds in U.
The addition and multiplication in a linear subspace will always be assumed to be the ones it inherits from the whole space.
It is trivially true that the subset of V consisting of the zero vector alone is a linear subspace. It is also trivially true that V is a linear subspace of V. Again, if U is a linear subspace of V, and if U′ is a linear subspace of U, then U′ is a linear subspace of V.
7.3.Proposition.If V is a vector space and {U} is any family of linear subspaces of V, then the vectors common to all the subspaces in {U} form a linear subspace of V denoted by ∩ {U}.
Proof.Let A ∈ ∩ {U}, and B ∈ ∩ {U}, and x ∈ R. Then, for each U ∈ {U}, we have A ∈ U and B ∈ U. Since U is a linear subspace, it follows that A + B ∈ U and xA ∈ U. Since these relations hold for each U ∈ {U}, it follows that A + B ∈ ∩ {U} and xA ∈ ∩ {U}. Therefore ∩ {U} is a linear subspace.
7.4.Definition.If V is a vector space and D is a non-empty subset of V, then any vector obtained as a sum
x1A1 + x2A2 + ... + xkAk
where A1, ..., Ak are elements of D and x1, ..., xk are real numbers, is called a finite linear combination of elements of D. The set of all finite linear combinations of elements of D is denoted by L(D).
7.5.Proposition.D ⊂ L(D).
For, if A ∈ D, then A = 1A is a finite linear combination of elements of D (with k = 1).
7.6.Proposition.If U is a linear subspace of V, and if D is a subset of U, then L(D) ⊂ U. In particular L(U) = U.
The proof is obvious.
Remark.A second method of constructing L(D) is the following: Define L′(D) to be the common part of all linear subspaces of V which contain D. By Proposition 7.3, L′(D) is a linear subspace. Since L′(D) contains D, Proposition 7.6 gives L(D) ⊂ L′(D). But L(D) is one of the family of linear subspaces whose common part is L′(D). Therefore L′(D) ⊂ L(D). The two inclusions L(D) ⊂ L′(D) and L′(D) ⊂ L(D) imply L(D) = L′(D). To summarize, L(D) is the smallest linear subspace of V containing D.
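The remark characterizes L(D) as the smallest linear subspace containing D. For a finite D in Rk, membership in L(D) can be tested by Gaussian elimination: B ∈ L(D) exactly when adjoining B to D does not raise the rank. A sketch with exact rational arithmetic; the helper names are illustrative.

```python
from fractions import Fraction

# Test whether B lies in L(D) for finite D in R^k, via ranks computed
# by exact Gaussian elimination. Names illustrative.

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(D, B):
    """B is in L(D) iff adjoining B does not increase the rank."""
    return rank(list(D) + [B]) == rank(list(D))

D = [(3, 0, 0), (1, 1, 0)]        # the set of Exercise 8.2(d) below
print(in_span(D, (5, 2, 0)))      # True: (5,2,0) = 1*(3,0,0) + 2*(1,1,0)
print(in_span(D, (0, 0, 1)))      # False: third component unreachable
```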
§8.Exercises
1.Show that U is a linear subspace of V in each of the following cases:
(a)V = R³ and U = set of triples (x1, x2, x3) such that
x1 + x2 + x3 = 0.
(b)V = R³ and U = set of triples (x1, x2, x3) such that x3 = 0.
(c)(See Exercise 4.11), V = RD and U = RD′ where D′ ⊂ D.
(d)V = RR, i.e. V = set of all real-valued functions of a real variable, and U = the subset of continuous functions.
2.Find L(D) in the following cases:
(a)D = ϕ, V arbitrary.
, V arbitrary.
.
(d)V = R³ and D consists of the two vectors (3, 0, 0) and (1, 1, 0).
(e)V = RR and D = the set of all polynomial functions.
3.If V is a vector space, and D′ ⊂ D ⊂ V, show that L(D′) ⊂ L(D).
4.If D, D′ are subsets of the vector space V, show that L(D ∩ D′) ⊂ L(D) ∩ L(D′). Show by an example that L(D ∩ D′) may actually be smaller than L(D) ∩ L(D′).
5.Show that D ⊂ D′ ⊂ L(D) implies L(D′) = L(D).
§9.Independent sets of vectors
9.1.Definition.A finite set of distinct vectors A1, ..., Ak is called dependent if there exist real numbers x1, ..., xk, not all zero, such that x1A1 + ... + xkAk = 0; an infinite set is called dependent if it contains a finite dependent subset. A set of vectors is called independent if it is not dependent. A vector A is said to be dependent on a set D of vectors if A ∈ L(D).
.
A set consisting of a single non-zero vector is an example of an independent set. A set consisting of two non-zero vectors is independent if neither vector is a scalar multiple of the other. The empty set ϕ is also independent.
9.2.Proposition.A set D is dependent if and only if there is a vector A ∈ D which is dependent on the other vectors of D.
Proof.Suppose A ∈ D is dependent on the set of the other vectors of D, say A = x1A1 + ... + xkAk with A1, ..., Ak elements of D distinct from A. Then (−1)A + x1A1 + ... + xkAk = 0 is a relation on distinct elements of D, and at least one coefficient (that of A) is not zero. Therefore D is dependent. Conversely, if D is dependent, there is a relation x1A1 + ... + xkAk = 0 on distinct elements of D in which some xi is not zero; solving for Ai expresses it in terms of the other vectors.
9.3.Theorem.If D is any finite set of vectors, then D contains a subset D′ such that D′ is independent, and L(D′) = L(D).
Proof.The proof is by induction on the number k of elements of D. If k = 1, say D consists of the single vector A, then either D is independent and we take D′ = D, or A = 0 and we take for D′ the empty set, which is independent with L(D′) = L(D). Assume now that the theorem is true for any set of fewer than k elements where k > 1. Let D be a set of k distinct elements, say A1, ..., Ak. If D is independent, take D′ = D, and the assertion is true of such a D. Suppose D is dependent. By the above proposition, some Ai is dependent on the remaining. By relabelling, if necessary, we may suppose that Ak is dependent on the set D″ consisting of A1, ..., Ak−1. The hypothesis of the induction asserts that D″ contains an independent set D′ such that L(D′) = L(D″). Since Ak ∈ L(D″), it follows that D″ ⊂ D ⊂ L(D″), and therefore L(D) = L(D″) (see Exercise 8.5). Combining this with L(D′) = L(D″) gives L(D′) = L(D). This completes the induction.
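Theorem 9.3 is effectively constructive: scanning the vectors in order and keeping each one that is not dependent on those already kept yields such a D′. A sketch for finite subsets of Rk, using exact rational arithmetic; the helper names are illustrative.

```python
from fractions import Fraction

# Greedy extraction of an independent subset D' of D with L(D') = L(D),
# in the spirit of Theorem 9.3. A vector is kept exactly when adding it
# raises the rank, i.e. when it is not in the span of the kept vectors.

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent_subset(D):
    kept = []
    for A in D:
        if rank(kept + [A]) > rank(kept):   # A not in L(kept): keep it
            kept.append(A)
    return kept

D = [(1, 0, 0), (2, 0, 0), (0, 1, 0), (1, 1, 0)]
print(independent_subset(D))   # [(1, 0, 0), (0, 1, 0)]
```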
9.4.Proposition.If A1, A2, ..., Ak is a sequence of vectors which forms a dependent set, then there is an integer i in the range from 1 to k such that Ai is dependent on the set A1, A2, ..., Ai−1.
Proof.Since the set is dependent, there is a relation x1A1 + x2A2 + ... + xkAk = 0 where at least one coefficient is not zero. Let i be the largest index of the non-zero coefficients. Then solving for Ai expresses it in terms of A1, ..., Ai−1.
9.5.Theorem.If D is an independent set of vectors contained in L(A1, A2, ..., Ak), then the number of vectors in D is at most k.
Proof.Suppose the conclusion were false; then D would contain at least k + 1 distinct vectors, say B1, B2, ..., Bk+1. Abbreviate L(A1, ..., Ak) by L. Since B1 ∈ L, we have
(1)L = L(B1, A1, A2, ..., Ak),
and the sequence B1, A1, A2, ..., Ak is dependent. By Proposition 9.4, some term of the sequence depends on the preceding terms. This term is not B1 since B1 by itself is independent (it constitutes a subset of the independent set D). So some Ai depends on B1, A1, A2, ..., Ai−1. We may relabel the A′s if necessary, and obtain that Ak belongs to L(B1, A1, A2, ..., Ak−1). Then (1) implies
(1′)L = L(B1, A1, A2, ..., Ak−1).
Since B2 ∈ L, we have
(2)L = L(B2, B1, A1, A2, ..., Ak−1),
and the sequence B2, B1, A1, A2, ..., Ak−1 is dependent. By Proposition 9.4, some term depends on the preceding terms. This term is not B2 or B1 because D is independent. Hence it must be an A-term. Relabelling the A′s if necessary, we can suppose it is the term Ak−1. Then (2) gives
(2′)L = L(B2, B1, A1, A2, ..., Ak−2).
Since B3 ∈ L, we have
(3)L = L(B3, B2, B1, A1, A2, ..., Ak−2),
and the sequence B3, B2, B1, A1, A2, ..., Ak−2 is dependent. By Proposition 9.4, some term depends on the preceding terms. It is not a B-term because D is independent. So it must be an A-term which we may discard. Then
(3′)L = L(B3, B2, B1, A1, ..., Ak−3).
We continue in this fashion, first adjoining a B-term, and then discarding an A-term. After k steps we obtain
(k′)L = L(Bk, Bk−1, ..., B2, B1),
and there are no more A-terms. Since Bk+1 ∈ L, it follows that Bk+1, Bk, ..., B1 is a dependent set. This contradicts the independence of D. Thus the assumption that D has more than k elements leads to a contradiction. This proves the theorem.
§10.Bases and dimension
10.1.Definition.A subset D of a vector space V is called a basis (or base) for V if D is independent and L(D) = V. A vector space V is said to be finite dimensional if there is a finite subset D′ of V such that L(D′) = V.
10.2.Theorem.If V is a finite dimensional vector space, then V has a basis. Any basis for V is a finite set, and any two bases have the same number of vectors. (This number is called the dimension of V, and denoted by dim V.)
Proof.By hypothesis there is a finite set D′ of vectors such that L(D′) = V. By Theorem 9.3, D′ contains an independent subset D″ such that L(D″) = L(D′) = V; thus D″ is a basis for V, and it is finite, say with k elements. If B is any basis for V, then B is an independent set contained in L(D″); by Theorem 9.5, B has at most k elements. In particular every basis is finite. If B has h elements then, reversing the roles of B and D″, Theorem 9.5 gives k ≤ h. Therefore k = h, and the proof is complete.
10.3.Proposition.If A1, ..., Ak is a basis for V, then each vector A of V has a unique representation
A = x1A1 + x2A2 + ... + xkAk.
Proof.Since L(A1, ..., Ak) = V, at least one such representation exists. If also A = y1A1 + ... + ykAk, then subtraction gives (x1 − y1)A1 + ... + (xk − yk)Ak = 0. Since A1, ..., Ak are independent, each coefficient must be zero. Therefore xi = yi for each i, and the representation is unique.
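The coordinates of a vector with respect to a basis can be computed by solving a system of linear equations. A sketch for R² with an illustrative basis, using exact elimination; the names and the particular basis are illustrative.

```python
from fractions import Fraction

# Compute the unique coordinates x1, ..., xk of v with respect to a
# basis of R^k by Gauss-Jordan elimination over exact fractions.

def coordinates(basis, v):
    """Solve x1*B1 + ... + xk*Bk = v; columns of the matrix are the Bj."""
    k = len(basis)
    m = [[Fraction(basis[j][i]) for j in range(k)] + [Fraction(v[i])]
         for i in range(len(v))]
    for col in range(k):
        piv = next(i for i in range(col, len(m)) if m[i][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        m[col] = [a / m[col][col] for a in m[col]]
        for i in range(len(m)):
            if i != col and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[col])]
    return tuple(m[i][k] for i in range(k))

B1, B2 = (1, 1), (1, -1)              # an illustrative basis of R^2
x1, x2 = coordinates([B1, B2], (3, 1))
print(x1, x2)                          # 2 1: (3,1) = 2(1,1) + 1(1,-1)
```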
§11.Exercises
1.If D′ ⊂ D ⊂ V, and if D′ is dependent, show that D is also dependent.
2.If a ∈ R, b ∈ R, show that the three vectors (1, 0), (0, 1), (a, b) in R² form a dependent set.
3.Let V be a finite dimensional vector space, and let U be a linear subspace:
(a)show that U is finite dimensional,
(b)show that dim U ≤ dim V,
(c)show that dim U = dim V implies U = V,
(d)show that any basis D′ for U can be enlarged to a basis for V,
(e)show by an example that a basis for V need not contain a basis for U.
4.Find a basis for Rk, thus proving k = dim Rk.
5.Find the dimensions of the following vector spaces:
(a)the set of vectors in R³ satisfying x1 + x2 + x3 = 0,
(b)the set of vectors in R³ of the form (x, 2x, 3x) for all x ∈ R,
(c)RN where N is a set of n elements (see Exercise 4.11).
6.If dim V = k, and if D is a set of k vectors such that L(D) = V, show that D is independent.
7.Show that the linear subspace of RR consisting of all polynomial functions of degree ≤ n has dimension n + 1. Show that the linear subspace of continuous functions in RR is not finite dimensional.
§12.Parallels and affine subspaces
12.1.Definition.Let U be a linear subspace of V, and A a vector of V. Denote by A + U the set of all vectors of V of the form A + X for some X ∈ U. The set A + U is called a parallel of U in V. It is said to result from parallel translation of U by the vector A. A parallel of some linear subspace of V is called an affine subspace.
12.2.Proposition.If U is a linear subspace of V, then
(i)A ∈ A + U for each A ∈ V.
(ii)If B ∈ A + U, then B + U = A + U.
(iii)Two parallels of U either coincide or have no vector in common.
(iv)Two vectors A, B in V are in the same parallel if and only if A − B ∈ U.
Proof.To prove (i), note that A = A + 0 and, since 0 ∈ U, it follows that A ∈ A + U. To prove (ii), note that B ∈ A + U implies
(1)B = A + Cfor some C ∈ U.
An element of B + U has the form B + X for some X ∈ U; so by (1) it has the form A + (C + X) and, since C + X is in U, is an element of A + U. Thus (1) implies B + U ⊂ A + U. But (1) can be rewritten A = B + (−C) and − C ∈ U. Then the same argument shows that A + U ⊂ B + U. Since each contains the other, they must coincide. Statement (iii) is an immediate consequence of (ii). To prove (iv), suppose first that A and B are in the same parallel. This must be B + U by (i) and (iii); hence A = B + C for some C ∈ U, and C = A − B. Conversely, if A − B ∈ U, then A = B + (A − B) implies A ∈ B + U.
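Statement (iv) of Proposition 12.2 gives a practical test: two vectors lie in the same parallel of U exactly when their difference lies in U. A sketch in R³, taking for U the illustrative subspace x1 + x2 + x3 = 0.

```python
# Test of 12.2(iv) for the illustrative subspace
# U = {(x1, x2, x3) in R^3 : x1 + x2 + x3 = 0}.

def in_U(v):
    return sum(v) == 0

def same_parallel(a, b):
    """A and B lie in the same parallel of U iff A - B is in U."""
    return in_U(tuple(x - y for x, y in zip(a, b)))

A = (1, 2, 3)      # A + U is the plane x1 + x2 + x3 = 6
B = (6, 0, 0)      # lies on that same plane
C = (0, 0, 0)      # lies on U itself, a different parallel
print(same_parallel(A, B))   # True
print(same_parallel(A, C))   # False
```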
12.3.Proposition.Let A1, ..., Ah be vectors of a vector space V, and let E(A1, ..., Ah) denote the set of all vectors of the form
x1A1 + x2A2 + ... + xhAh
where x1, ..., xh are real numbers satisfying the condition
x1 + x2 + ... + xh = 1.
Then E(A1, ..., Ah) is an affine subspace of V which contains A1, ..., Ah and is the smallest affine subspace containing A1, ..., Ah.
Proof.Let U be the linear subspace spanned by the vectors Ai − A1 for i = 2, 3, ..., h. We assert that
(3)E(A1, ..., Ah) = A1 + U.
A vector on the left has the form x1A1 + ... + xhAh where the x′s satisfy x1 + ... + xh = 1, so
x1A1 + ... + xhAh = A1 + x2(A2 − A1) + ... + xh(Ah − A1),
and thus is a vector of A1 + U. Conversely a vector of A1 + U has the form
A1 + y2(A2 − A1) + ... + yh(Ah − A1) = (1 − y2 − ... − yh)A1 + y2A2 + ... + yhAh.
The sum of the coefficients on the right is 1; hence the vector is in E(A1, ..., Ah). This proves (3), and the first conclusion of the proposition.
Since Ai = A1 + (Ai − A1), and Ai − A1 is in U, we have Ai ∈ A1 + U, so Ai ∈ E(A1, ..., Ah) for each i.
Any affine subspace containing A1, ..., Ah must have the form A1 + U′ for some linear subspace U′. Now Ai ∈ A1 + U′ implies Ai − A1 ∈ U′. Since this holds for i = 2, ..., h, it follows that U′ ⊃ U, and therefore A1 + U′ ⊃ A1 + U. Thus A1 + U is the smallest.
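The rewriting used in the proof of (3) can be checked numerically: a combination whose coefficients sum to 1 equals A1 plus a vector of the subspace spanned by the differences Ai − A1. The sample points and coefficients below are illustrative.

```python
# Check of the identity behind equation (3): an affine combination
# x1*A1 + ... + xh*Ah with coefficients summing to 1 equals
# A1 + x2(A2 - A1) + ... + xh(Ah - A1). Sample data illustrative.

def affine_comb(points, coeffs):
    assert sum(coeffs) == 1, "coefficients must sum to 1"
    dim = len(points[0])
    return tuple(sum(c * p[i] for c, p in zip(coeffs, points))
                 for i in range(dim))

A1, A2, A3 = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)
P = affine_comb([A1, A2, A3], [0.25, 0.25, 0.5])

# rewrite as A1 + x2(A2 - A1) + x3(A3 - A1)
Q = tuple(A1[i] + 0.25 * (A2[i] - A1[i]) + 0.5 * (A3[i] - A1[i])
          for i in range(2))
assert P == Q
print(P)   # (0.5, 1.0)
```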
12.4.Definition.A binary relation ≡ defined on a set S is called an equivalence relation if it satisfies the following axioms:
Axiom 1.Reflexivity: A ≡ A for each A ∈ S.
Axiom 2.Symmetry: if A ≡ B, then B ≡ A.
Axiom 3.Transitivity: if A ≡ B and B ≡ C, then A ≡ C.
12.5.Proposition.An equivalence relation on a set S defines a partition of S into non-empty, mutually disjoint subsets called equivalence classes, such that two elements of S lie in the same equivalence class if and only if they are equivalent.
The proof of this proposition is left as an exercise.
12.6.Definition.If U is a linear subspace of a vector space V, and A, B ∈ V, let A ≡ B (mod U), read "A is equivalent to B modulo U", if and only if A − B ∈ U.
A review of Definition 12.1, and Proposition 12.2 shows that the equivalence classes mod U of V can be identified with the parallels of U in V and that Definition 12.6 does indeed define an equivalence relation.
§13.Exercises
1.Let U, U′ be linear subspaces of V, and let A, A′ be vectors in V. Show that the intersection
(A + U) ∩ (A′ + U′)
is either empty or is a parallel of U ∩ U′.
2.If U = L(B1, ..., Bh), show that A + U = E(A, A + B1, ..., A + Bh).
3.Prove Proposition 12.5.
4.Verify that the following definitions give equivalence relations:
(a)S = set of all triangles in the euclidean plane, with A ≡ B if and only if A is congruent to B;
(b)S as in (a), with congruent
replaced by similar
;
(c)S = set of integers, with a ≡ b (mod m) if and only if a − b is divisible by m, where m is a fixed integer.
II.LINEAR TRANSFORMATIONS OF VECTOR SPACES
§1.Introduction
Our ultimate objective is the study of functions whose domains and ranges are vector spaces. This chapter treats the simplest class of such functions, the linear transformations. The only linear transformations having the real numbers R as domain and range are those of the form f(x) = mx where m is a real number. This is too simple a case to indicate the importance of the general concept. A linear transformation from R² to R³ is given by a system of linear equations
y1 = a11x1 + a12x2
y2 = a21x1 + a22x2
y3 = a31x1 + a32x2
where (x1, x2) are the components of a vector X of R², the a′s are fixed real numbers, and the resulting (y1, y2, y3) are the components of the vector in R³ which is the transform of X. Briefly, linear transformations are those representable (in terms of components) by systems of linear equations. There are two points to emphasize. When the dimensions of the domain and range exceed 1, there are a great variety of linear transformations (rotations, dilations, projections, etc.), and they have and deserve an extensive theory. The second point is that it is cumbersome to define a linear transformation as something given by linear equations. Instead we shall define it as a function having two simple properties as follows.
1.1.Definition.If V, W are vector spaces, a linear transformation T of V into W, written T : V → W, is a function having V as its domain and W as its range, and such that
(1)T(A + B) = T(A) + T(B)for all A, B ∈ V,
(2)T(xA) = xT(A)for all A ∈ V, x ∈ R.
These properties can be paraphrased by saying that T preserves addition and multiplication by a scalar. It is easily checked that the linear functions mentioned above from R to R, and from R² to R³ do have these properties.
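The system of linear equations displayed above, with an illustrative choice of the a′s, gives a map T : R² → R³ whose properties (1) and (2) can be spot-checked numerically. A check on sample vectors, not a proof; all names are illustrative.

```python
# T : R^2 -> R^3 given by y_i = a_i1 x_1 + a_i2 x_2, with illustrative
# coefficients a_ij. Spot-check of properties (1) and (2).

a = [[1, 2],
     [3, 4],
     [5, 6]]                     # a[i][j] corresponds to a_(i+1)(j+1)

def T(x):
    return tuple(row[0] * x[0] + row[1] * x[1] for row in a)

def vadd(u, v):                  # componentwise sum, any dimension
    return tuple(p + q for p, q in zip(u, v))

def smul(c, u):                  # scalar multiple
    return tuple(c * p for p in u)

A, B = (1, -1), (2, 5)
assert T(vadd(A, B)) == vadd(T(A), T(B))   # property (1)
assert T(smul(7, A)) == smul(7, T(A))      # property (2)
print(T(A))                                 # (-1, -1, -1)
```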
1.2.Definition.Let V and W be sets and let T be a function with domain V and range W. For any A ∈ V, the value of T at A, denoted by T(A), is called the image of A under T; T(A) is also referred to as an image point. If D is any subset of the domain of T, the image of D under T, denoted by T(D), consists of the set of images of elements of D. The set T(V) is also denoted by im T. If E is any subset of W, then the inverse image of E under T, denoted by T−1 (E), is the set of those elements of V whose images are in E. For example, T−1 (W) = V. If no element of V has an image in E, then T−1 (E) = ϕ.
It should be noted that, if E consists of a single element of W, this need not be true of T−1(E), i.e. T−1 need not define a function from im T ⊂ W to V; T−1 is a function from the subsets of W to the subsets of V.
1.3.Definition.Let