ELEMENTS OF VECTOR ALGEBRA
BY
L. SILBERSTEIN, Ph.D.
LECTURER IN NATURAL PHILOSOPHY AT THE UNIVERSITY OF ROME

WITH DIAGRAMS

1919
PREFACE
This little book was written at the instance of Messrs. Adam Hilger, and, in accordance with their desire, it contains just what is required for the purpose of reading and handling my Simplified Method of Tracing Rays (Longmans, Green & Co., London, etc., 1918). With this practical aim in view, all critical subtleties have been purposely avoided. In fact, it is scarcely more than a synoptical presentation of the elements of Vector Algebra covering the ground required for that purpose.
CONTENTS
PAGE
1. Vectors Defined 1
3. Addition of Vectors 3
4. Subtraction of Vectors 10
Index 41
ELEMENTS OF VECTOR ALGEBRA
Vectors will be denoted by A, B, etc., or n, r, s, etc.; each of them has a definite size and a definite direction in space.

In some branches of physico-mathematics it is important to consider the position of the vectors in question (besides their sizes and directions), i.e. to localize their origins, either by fixing the origin of each vector altogether or by allowing it only to move freely in its own line. Such vectors are usually called "localized" vectors. In a vast class of investigations, however, the position of these directed magnitudes is of no avail, and it is then obviously convenient not to include position among the determining characteristics of a vector. Such vectors, in distinction from localized ones, are called free vectors. These and these only will here occupy our attention. The adjective will be dropped, however, and the beings in question will be called shortly vectors. With this understanding, the definition of their equality may be put thus:
the same way as five apples added to three apples give again a certain number of apples.

Fig. 1.
But we might as well have linked the two given vectors so that the end-point of B = B' falls into the origin of A, as shown in the lower part of Fig. 1. Then their sum, say S', would, according to the definition, be

S' = B + A,

and since, by elementary geometry, the two chains lead to the same resulting vector,

A + B = B + A, (1)
This is the parallelogram construction of the sum. We might have started from it as a definition of the sum. It has the advantage of being immediately symmetrical with respect to the two addends. At any rate we see that the chain and the parallelogram constructions are (in virtue of Euclid) wholly equivalent to one another.
Thus far the case of two vector addends. Now, the sum of these being again a vector, S = A + B, we can add to S any third vector C, thus obtaining

S + C = (A + B) + C = C + (A + B),

(A + B) + C = A + (B + C), (3)

the result being in both cases the same vector, viz. that drawn from the beginning to the end of the chain. The same property holds for the sum of any number of vectors. The brackets thus become superfluous, and the sum may simply be written

A + B + C,

or B + A + C, and so on.
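In modern computational terms these laws are easily checked; the following minimal sketch (Python with the numpy library, an editorial illustration and no part of the original treatise) verifies (1) and (3) on arbitrary components, together with the closed-chain property met immediately below.

    import numpy as np

    A = np.array([1.0, 2.0, 0.5])    # a vector, given by its three components
    B = np.array([-0.3, 0.7, 2.0])
    C = np.array([0.0, -1.2, 0.4])

    # Commutative law (1): A + B = B + A
    assert np.allclose(A + B, B + A)

    # Associative law (3): (A + B) + C = A + (B + C)
    assert np.allclose((A + B) + C, A + (B + C))

    # A closed chain: the vectors A, B and -(A + B) sum to the nil vector
    assert np.allclose(A + B + (-(A + B)), np.zeros(3))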
If any number of vectors, as A, B, C, D, drawn one after another, form a closed chain (or a polygon), plane or not, then the sum of these vectors is nil,

S = A + B + C + D = 0,

and therefore also A + C + B + D = 0, etc. It is scarcely necessary to add that a vector is said to vanish, or to be nil, if its tensor vanishes, and conversely; or, in other words, if its end-point and origin coincide, such precisely being the case of our closed chain.

The vector sum thus shares with the ordinary algebraic sum the two capital properties of commutativity and associativity.
For collinear vectors of tensors A, B the sum is again a collinear vector, and its tensor is

A ± B,

according as A, B are of equal or of opposite senses. The tensor of a sum of vectors, as S = A + B, can conveniently be denoted by

S = |A + B|,

as is usual for the absolute value of ordinary algebraic magnitudes. Thus we shall have, for collinear vectors,

|A + B| = |A ± B|.

But it will be well to keep in mind that in general, for non-collinear addends, the tensor |A + B| of the sum is not equal to, but less than, the sum A + B of the tensors.
If, for instance, B is such a vector that

2B = A,

we shall write

B = ½A,

and similar meanings will be attached to ⅓A, ¼A, etc. In this manner, and applying in the case of irrational factors the well-known limit-reasoning, we easily obtain the meaning of the expression

nA,
Fig. 5.

a number (x) of steps a and then a number (y) of steps b (or first yb and then xa). If x > 0, y > 0, the end-point falls into the quadrant I; if x < 0, y > 0, into II; if x < 0, y < 0, into III; and finally, if x > 0, y < 0, into IV, taking a, b as axes. Such a pair of non-collinear vectors, which we may again take as unit vectors, can serve as a basis; then any vector whatever can be expressed through them.

Fig. 6.
The form (7) will often be found useful in passing from vector to scalar formulae, especially in optical computations. The unit of R, i.e. the vector R/R, will be denoted by r.

The difference of two vectors A and B, to be denoted by

A - B,

may be defined as such a vector C which, added to B, gives A. In symbols, A - B is the same thing as

A + (-B).
The above remarks complete the meaning of

nA,

where A is a vector and n any real scalar, positive, nil or negative. The concept of such a product of a vector by any scalar n does not contain, in fact, anything besides the previous concepts of vector sum and difference. It is derived from their special case, viz. that relating to collinear vectors.

To say it once more, nA is simply the vector A stretched in the ratio |n| : 1 and, if n < 0, turned through 180° (in any plane passing through A).
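A short numerical sketch of these rules (again Python with numpy, illustrative only): halving, subtraction, and the stretching and inversion effected by a negative factor.

    import numpy as np

    A = np.array([2.0, -1.0, 4.0])

    # if 2B = A, we write B = ½A
    B = 0.5 * A
    assert np.allclose(2 * B, A)

    # A - B is the vector which, added to B, gives back A
    assert np.allclose((A - B) + B, A)

    # nA with n < 0: stretched in the ratio |n| : 1 and turned through 180°
    n = -3.0
    assert np.isclose(np.linalg.norm(n * A), abs(n) * np.linalg.norm(A))
    assert np.allclose((n * A) / np.linalg.norm(n * A), -A / np.linalg.norm(A))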
Finally, as the reader himself will easily prove, for any A, B, and any scalar factor n,

n(A + B) = nA + nB.

The scalar product of two vectors is commutative,

AB = BA, (12)

and if A, B are mutually perpendicular, AB = 0. Conversely, if of two vectors A and B we know only that AB = 0, we can conclude only that they are perpendicular to one another or that one of them is nil. For collinear vectors, of equal or opposite senses,

AB = ±AB, (13)

i.e. the scalar product is plus or minus the product of the tensors. Thus, if two unit vectors a, b make with one another the angle of 45°, we have ab = 1/√2, and if (a, b) = 90°, ab = 0. For the three normal unit vectors i, j, k used above we have ij = jk = ki = 0.
As a sub-case of (13) we have the scalar square of a vector, or better, its autoproduct,

AA or A² = A²,

the scalar square of a vector being equal to the square of its tensor; and if a be a unit vector,

a² = a² = 1.

Thus, i² = j² = k² = 1.
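The following sketch (Python with numpy; the standard formula AB = AB cos (A, B) is assumed here) illustrates (12), the 45° and 90° cases, and the unit scalar squares.

    import numpy as np

    # Two unit vectors at 45°: their scalar product is cos 45° = 1/√2
    a = np.array([1.0, 0.0, 0.0])
    b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
    assert np.isclose(np.dot(a, b), 1 / np.sqrt(2))

    # Perpendicular unit vectors: ab = 0; and the scalar square a² = 1
    i, j, k = np.eye(3)
    assert np.isclose(np.dot(i, j), 0) and np.isclose(np.dot(i, i), 1)

    # Commutativity (12): AB = BA
    A, B = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
    assert np.isclose(np.dot(A, B), np.dot(B, A))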
Again, if R is any vector whatever and n a unit vector, Rn is the projection of R upon the direction of n.

Since AB and CD are scalars, the product

(AB)(CD)

will again be a scalar, and so on. The brackets are here used as separators; without risk of confusion we may also write, with dots,

(AB)(CD) = AB . CD,

and so forth. The reader will soon find that this need of precaution arises but seldom.
algebraic sum of the projections (Fig. 8), whence the proof of the distributive property. Similarly,

(A + B)(C + D) = (A + B)C + (A + B)D,

and also

A(B - C) = AB - AC.

In fine, the scalar multiplication of vectors is commutative as well as distributive, and any two vector polynomials are multiplied out precisely as in ordinary algebra. This makes the scalar multiplication of vectors a powerful operation.
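Numerically (same Python conventions as before; the sample vectors are arbitrary), the multiplying-out of two binomials may be checked thus:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = rng.standard_normal((4, 3))   # four arbitrary vectors

    # (A+B)(C+D) = AC + AD + BC + BD, exactly as in ordinary algebra
    lhs = np.dot(A + B, C + D)
    rhs = np.dot(A, C) + np.dot(A, D) + np.dot(B, C) + np.dot(B, D)
    assert np.isclose(lhs, rhs)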
As examples we may quote

(A + B)(A - B) = A² - B²,

meaning that the product of the lengths of the diagonals of a parallelogram multiplied by the cosine of their included angle is equal to the difference of the squares of two adjacent sides. Again,

(A + B)² = A² + B² + 2AB,

or (Fig. 9), remembering that AB = AB cos (π - θ) = -AB cos θ,

C² = A² + B² - 2AB cos θ,

and, for θ = 90°, AB = 0, so that

(A + B)² = A² + B²,
the theorem of Pythagoras. As a third example, let us quote the scalar product of two coinitial unit vectors, written as in (7a). Multiplying out the two trinomials we have, for the required distance δ,

cos δ = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ [cos φ₁ cos φ₂ + sin φ₁ sin φ₂].

Introducing the unit vectors a, b having the longitudes of the two places (cf. Fig. 6), viz.

a = i cos φ₁ + j sin φ₁,    b = i cos φ₂ + j sin φ₂,

we have

ab = cos (φ₁ - φ₂) = cos φ₁ cos φ₂ + sin φ₁ sin φ₂,

so that cos δ = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos (φ₁ - φ₂), the angle included between the latter two meridians being φ₂ - φ₁. Notice that this is valid for any spherical triangle; for one of its corners can always be considered as our pole, θ = 0.
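As a modern illustration of this example (Python with numpy; the helper name and the sample coordinates are ours, not Silberstein's), the spherical distance of two places follows from a single scalar product:

    import numpy as np

    def unit_from_colatitude_longitude(theta, phi):
        # unit vector with polar distance theta and longitude phi
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    r1 = unit_from_colatitude_longitude(np.radians(50), np.radians(10))
    r2 = unit_from_colatitude_longitude(np.radians(70), np.radians(80))

    # cos δ is simply the scalar product of the two coinitial unit vectors
    delta = np.arccos(np.dot(r1, r2))
    print(np.degrees(delta))   # the spherical distance between the two places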
Fig. 11.
C = |VAB| = AB |sin (A, B)|. (16)

From this definition we see, first of all, that the vector product is not commutative, inasmuch as we have

VBA = -VAB. (17)
If, conversely, of two vectors A, B we know only that their vector product vanishes,

VAB = 0,

then we can conclude only that they are parallel (collinear), i.e. that one of them is a numerical multiple of the other, or that one of them is nil.
For the unit vectors i, j, k, we have

Vii = Vjj = Vkk = 0.

Contrast these relations with the previous ones, i² = j² = k² = 1 and ij = jk = ki = 0. The latter follow also from (a); for, by the second of (a), for instance, i = Vjk is normal to j, and therefore ji = jVjk = 0.
XA = 0. Again,

XB = BVA(B + C) - BVAC = (B + C)VBA - CVBA, by (19),
   = BVBA + CVBA - CVBA = 0,

and similarly XC = 0. Thus, the vector X either vanishes or is normal to each of the three vectors A, B, C at once, which, these being non-coplanar, is impossible; hence X = 0, that is,

VCA + VCB = VC(A + B).

Thus the distributive property of vector multiplication is proved for any A, B, C, coplanar or not.
The product of two binomials (or polynomials) does not call for
lengthy explanations. Thus,
V(A + B)(C + D) = V(A + B)C + V(A + B)D
  = -VC(A + B) - VD(A + B) = VAC + VBC + VAD + VBD.
The vector multiplication of any two vector polynomials is thus
seen to obey the same rules as ordinary algebraic multiplication,
the only difference being that vector products are not commutative.
A reversal of the order of the two factors changes only the sign
of their product, which is easily remembered.
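These rules, the anticommutativity (17) and the distributive law with its consequence for binomials, may be checked numerically (Python with numpy, illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    A, B, C, D = rng.standard_normal((4, 3))

    # Anticommutativity (17): VBA = -VAB
    assert np.allclose(np.cross(B, A), -np.cross(A, B))

    # Distributive law: VC(A+B) = VCA + VCB
    assert np.allclose(np.cross(C, A + B), np.cross(C, A) + np.cross(C, B))

    # Product of binomials, multiplied out term by term
    lhs = np.cross(A + B, C + D)
    rhs = np.cross(A, C) + np.cross(B, C) + np.cross(A, D) + np.cross(B, D)
    assert np.allclose(lhs, rhs)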
VAB = | i   j   k  |
      | A₁  A₂  A₃ |    (21a)
      | B₁  B₂  B₃ |

In exactly the same way the reader will show himself that the cartesian expansion of AVBC, the triple product representing the volume of the parallelepipedon A, B, C, is

AVBC = | A₁  A₂  A₃ |
       | B₁  B₂  B₃ |    (22)
       | C₁  C₂  C₃ |
as in (19). For

| A₁  A₂  A₃ |   | B₁  B₂  B₃ |
| B₁  B₂  B₃ | = | C₁  C₂  C₃ |,
| C₁  C₂  C₃ |   | A₁  A₂  A₃ |

and so on.
For the scalar product we have immediately, remembering that i² = 1, ij = 0, etc.,

AB = A₁B₁ + A₂B₂ + A₃B₃. (23)

As particular cases of (21) and (23) note the results for two unit vectors a, b which include the angle θ: ab = cos θ and |Vab| = |sin θ|.
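A sketch verifying the cartesian expansions (21a), (22) and (23) against numpy's built-in routines (the component arithmetic below simply transcribes the determinants):

    import numpy as np

    A = np.array([1.0, -2.0, 3.0])
    B = np.array([0.5, 4.0, -1.0])
    C = np.array([2.0, 0.0, 1.5])

    # (21a): VAB expanded from the determinant with rows i j k / A / B
    VAB = np.array([A[1]*B[2] - A[2]*B[1],
                    A[2]*B[0] - A[0]*B[2],
                    A[0]*B[1] - A[1]*B[0]])
    assert np.allclose(VAB, np.cross(A, B))

    # (22): AVBC equals the 3x3 determinant of the rows A, B, C
    assert np.isclose(np.dot(A, np.cross(B, C)),
                      np.linalg.det(np.array([A, B, C])))

    # (23): AB = A1B1 + A2B2 + A3B3
    assert np.isclose(np.dot(A, B), np.sum(A * B))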
Let us bring the pole (θ = 0) into the vertex 1 and take the first meridian along the side 12; thus, a₁ being the angle at 1, and s₂, s₃ the sides of the spherical triangle opposite 2 and 3, one obtains by the same method the analogous expressions in which a₂, a₃, the remaining two angles of the spherical triangle, appear. Thus,

sin a₁ / sin s₁ = sin a₂ / sin s₂ = sin a₃ / sin s₃, (19a)

the fundamental "sine formula" of spherical trigonometry, following on the vector method as easily as the "cosine formula" given before. It is interesting to note that the "sine formula" is, in this circle of ideas, but the statement of the triple expressibility of the volume of the parallelepipedon r₁, r₂, r₃, viz. as r₁Vr₂r₃ or r₂Vr₃r₁ or r₃Vr₁r₂. Other examples are left to the care of the reader.
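The triple expressibility of the volume, which carries the sine formula, is itself easily verified (Python with numpy; arbitrary sample vectors):

    import numpy as np

    rng = np.random.default_rng(2)
    r1, r2, r3 = rng.standard_normal((3, 3))

    # The volume of the parallelepipedon is expressible in three ways:
    v1 = np.dot(r1, np.cross(r2, r3))   # r1 Vr2r3
    v2 = np.dot(r2, np.cross(r3, r1))   # r2 Vr3r1
    v3 = np.dot(r3, np.cross(r1, r2))   # r3 Vr1r2
    assert np.isclose(v1, v2) and np.isclose(v2, v3)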
where β, γ are some scalars. Since the ternary product aVaVbc vanishes, β and γ must be of the form β = λ(ca), γ = -λ(ab), where λ is some scalar; that is,

VaVbc = λ{b(ca) - c(ab)},

and, taking tensors,

sin (b, c) . sin (b, a) = λ [cos (c, a) - cos (a, b) . cos (b, c)];

but, for the configuration considered,

cos (c, a) = cos (b, a) . cos (b, c) + sin (b, a) . sin (b, c),

so that λ = 1 identically. For the six right-hand terms of (24) and of the two similar equations destroy themselves in pairs.*

* The trivial case of B, C collinear can be discarded; for then VAVBC = 0.
whence we see also that VuVBu is the part of the vector B normal
to u, in both size and direction. For (Bu) u is the part of B along u.
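In modern notation VAVBC = B(CA) - C(AB) reads A × (B × C) = B(C·A) - C(A·B); the sketch below (Python with numpy) checks it, together with the decomposition of B along and normal to a unit vector u:

    import numpy as np

    rng = np.random.default_rng(3)
    A, B, C = rng.standard_normal((3, 3))

    # VAVBC = B(CA) - C(AB), the expansion of the iterated vector product
    lhs = np.cross(A, np.cross(B, C))
    rhs = B * np.dot(C, A) - C * np.dot(A, B)
    assert np.allclose(lhs, rhs)

    # VuVBu is the part of B normal to the unit vector u;
    # (Bu)u is the part of B along u, and the two rebuild B
    u = np.array([0.0, 0.0, 1.0])
    normal_part = np.cross(u, np.cross(B, u))
    assert np.allclose(normal_part + np.dot(B, u) * u, B)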
To close this section, and at the same time the essential part of the whole Vector Algebra, a few more remarks will be added which will be useful in connection with problems often occurring in practice.
Let X be an unknown, and A, u two given vectors, the latter a unit vector. If we know of X only that

VXu = A, (a)

then, expanding VuVXu, we see that

X - (Xu)u = VuA,

so that, if the scalar product Xu = m is also given, (b),

X = mu + VuA, (c)

which is the required solution. This simple rule, (c), for solving the equations (a) and (b), will often be found helpful.
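The rule (c) may be exercised numerically (Python with numpy; X_true is a scaffold of ours, used only to manufacture consistent data (a) and (b)):

    import numpy as np

    u = np.array([0.0, 0.0, 1.0])          # the given unit vector
    X_true = np.array([1.0, -2.0, 0.7])    # pretend unknown, to generate data
    A = np.cross(X_true, u)                # datum (a):  VXu = A
    m = np.dot(X_true, u)                  # datum (b):  Xu = m

    # Rule (c): X = mu + VuA
    X = m * u + np.cross(u, A)
    assert np.allclose(X, X_true)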
ϖ(A + B) = ϖA + ϖB,

whatever the vectors A and B. If such be the case we call ϖ a distributive operator. An example of this kind is afforded by the "reflector," i.e. that operator which converts the incident ray into the reflected one. The simplest example of a distributive vector operator is, however, a scalar number σ used as a factor; for we have, of course,

σ(A + B) = σA + σB.

This operator is a pure stretcher or (if |σ| < 1) a contractor, and, if σ < 0, an invertor at the same time.
Leaving these examples, let us turn to the general distributive operator, of which we will only assume that it is a continuous operator, i.e. that ϖR is a continuous vector function of R. Such operators are known under the name of linear vector operators, and were first introduced by the great Hamilton. In fact, it can easily be shown that the continuous and distributive operator ϖ, when applied to a vector R, yields another vector R' whose components are each a linear function of the components of R, whatever the triad of axes employed for the decomposition. For, if n be any integer positive scalar number, we have, in virtue of the assumed distributive property,

ϖ(nA) = nϖ(A),
ϖ_aa, ϖ_ab, etc., are simply the cartesian components of these three vectors, nine scalars in all. If, for every pair of vectors A, B,

AϖB = BϖA, (27)

the operator is called symmetrical; its table has but six mutually independent constituents (that is, six scalars instead of nine): ϖ_aa, ϖ_bb, ϖ_cc, and ϖ_ab = ϖ_ba, ϖ_bc = ϖ_cb, ϖ_ca = ϖ_ac. This table, after the insertion of ϖ_ba = ϖ_ab, etc., at the vacant places, coincides with that of the conjugate operator. Symmetrical operators being thus a sub-class of the general operator ϖ, the remainder is characterized by ϖ_ab ≠ ϖ_ba, etc.
If, more generally,

AϖB = Bϖ'A, (28)

for any pair A, B of vectors, then ϖ' is called the conjugate of ϖ; its constituents are ϖ'_ab = ϖ_ba, while, of course, ϖ'_aa = ϖ_aa, etc. Thus we see also that to every operator ϖ there is one (and only one) conjugate ϖ'. In particular, if ϖ is a symmetrical operator, its conjugate is identical with it.
Let us use ϖ for any linear vector operator, and ω for symmetrical operators only. (In fact, without the circumflex this last letter of the Greek alphabet has some symmetry.)

Manifestly the symmetrical operator ω will be a great deal simpler than the asymmetrical ϖ. It is, therefore, very agreeable to see that any ϖ can be split into an ω and some other asymmetrical, but very simple, operator, which is called an antisymmetrical (or skew-) operator and which we will denote by α. The latter is defined by AαB = -BαA, and therefore also α_ab = -α_ba, etc., and α_aa = 0, etc., so that the table for such an operator becomes

   0      α_ab    α_ac
 -α_ab     0      α_bc    (29a)
 -α_ac   -α_bc     0
In fact, let ϖ' be the conjugate of the given operator ϖ. Then we have, identically,

A(ϖ - ϖ')B = Bϖ'A - BϖA = -B(ϖ - ϖ')A,

as in (29), the definition of antisymmetric operators. This proves the statement, without the slightest need of splitting ϖ into its constituents. Thus

ϖ = ω + α, (31)

where its symmetrical part is ω = ½(ϖ + ϖ') and its antisymmetrical part α = ½(ϖ - ϖ').
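Representing the table of ϖ by a 3 × 3 matrix (an editorial convention; the conjugate ϖ' then answers to the transpose), the splitting (31) reads:

    import numpy as np

    rng = np.random.default_rng(4)
    W = rng.standard_normal((3, 3))   # table of constituents of a general ϖ

    omega = 0.5 * (W + W.T)           # symmetrical part ω = ½(ϖ + ϖ')
    alpha = 0.5 * (W - W.T)           # antisymmetrical part α = ½(ϖ - ϖ')
    assert np.allclose(W, omega + alpha)    # the splitting (31)
    assert np.allclose(alpha.T, -alpha)     # α is skew: α_ba = -α_ab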
If the reader so desires he can introduce the nine coefficients of these operators. Then

αR = VwR, or shortly α = Vw;

that is, every antisymmetrical operator is equivalent to the vector multiplication, as prefactor, of some fixed vector w.
If we wish to express w through the constituents of ϖ, we can easily do so. For, from the table or the "matrix" (29a) we see that

αR = a(α_ab R_b + α_ac R_c) + etc.,

and since α_ab = ½(ϖ_ab - ϖ_ba), and so on, while αR = VwR, we find without difficulty that, if a, b, c be a right-handed system,

2w = a(ϖ_cb - ϖ_bc) + b(ϖ_ac - ϖ_ca) + c(ϖ_ba - ϖ_ab), (33)
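Continuing the matrix convention, the vector w of (33) may be extracted and the relation αR = VwR checked (Python with numpy):

    import numpy as np

    rng = np.random.default_rng(5)
    W = rng.standard_normal((3, 3))
    alpha = 0.5 * (W - W.T)

    # (33): 2w = a(ϖ_cb - ϖ_bc) + b(ϖ_ac - ϖ_ca) + c(ϖ_ba - ϖ_ab)
    w = 0.5 * np.array([W[2, 1] - W[1, 2],
                        W[0, 2] - W[2, 0],
                        W[1, 0] - W[0, 1]])

    # the antisymmetrical operator acts as the prefactor Vw:  αR = VwR
    R = rng.standard_normal(3)
    assert np.allclose(alpha @ R, np.cross(w, R))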
Principal axes of ω. Let R be the operand. Then the vector R' = ωR will in general differ from R not only in size but in direction as well. But if R assumes certain particular directions, then it may happen that ωR is a mere multiple of R; such directions are called principal axes, and the corresponding multipliers principal values, of ω. Let x, y be two principal axes, ωx = ω₁x and ωy = ω₂y. Then, ω being symmetrical, xωy = yωx, whence

(ω₁ - ω₂)xy = 0,

and if ω₁ ≠ ω₂, we have xy = 0, that is to say, x ⊥ y. Thus, if two principal axes have different principal values ω₁, ω₂, these axes must be normal to one another. And should it happen that ω₁ = ω₂, then every vector coplanar with x, y is easily seen to be a principal axis.

Suppose now there is still a third principal axis z not coplanar with x, y,

ωz = ω₃z.

Then, reasoning as before, we shall see that if ω₁, ω₂, ω₃ are all different, the three axes are mutually normal, and that there can then be no further principal axis; for otherwise the whole plane passing through the fourth and the first axis would consist of principal axes, and since this plane would cut the y, z plane, ω₂ and ω₃ could not be different from one another, against the assumption.
x = x₁a + x₂b + x₃c,

we have

ωx = x₁ωa + x₂ωb + x₃ωc = x₁A + x₂B + x₃C,

where A = ωa, B = ωb, C = ωc. If x is to be a principal axis, ωx = nx, this gives x₁(A - na) + x₂(B - nb) + x₃(C - nc) = 0. From this equation we see that the three vectors A - na, etc., are coplanar, so that the volume of the parallelepipedon constructed upon them is nil, i.e. (A - na)V(B - nb)(C - nc) = 0, which, when expanded, gives a cubic equation (36a) in n.
Its coefficients are built up of the real constituents ω_ab, etc., i.e. the coefficients of the cubic (36a) are real. That equation has, therefore, at least one real root. Let this be ω₁, and let us take the corresponding principal axis as the first reference axis a. The cubic is then, as it should be, divisible by n - ω₁, leaving for the remaining two principal values ω₂, ω₃ a quadratic, whose roots are

n = ½(ω_bb + ω_cc) ± √{¼(ω_bb - ω_cc)² + ω_bc²}, (38)

so that, if only all the coefficients ω_ik are real, these two principal values and, therefore, also the corresponding principal axes are real. That they form with the first axis a normal system we already know. Indeed, if the principal axes themselves are taken for reference, (38) becomes

n = ½(ω₂ + ω₃) ± √{¼(ω₂ - ω₃)²},

i.e. n = ω₂ or ω₃, which is, as it should be, an identity. Thus, the only necessary thing was to state that the cubic (36a) has at least one real root, and this was immediately clear.
Having thus ascertained the general properties of the principal axes of ω, let us take them as our (natural) reference system a, b, c, which we will now call i, j, k. Then, ω₁, ω₂, ω₃ being the corresponding principal values, the most general symmetrical linear vector function will be

ωR = ω₁i(iR) + ω₂j(jR) + ω₃k(kR),

that is, ω₁ times the first component of R along i plus, etc., or, using the dot, instead of brackets, as separator,

ω = ω₁ i.i + ω₂ j.j + ω₃ k.k.
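In matrix form the principal axes and principal values of a symmetrical ω are simply its eigenvectors and eigenvalues; a sketch (Python with numpy; the sample table is arbitrary but symmetrical):

    import numpy as np

    # a symmetrical operator ω, given by its table of constituents
    omega = np.array([[2.0, 0.3, 0.0],
                      [0.3, 1.0, 0.4],
                      [0.0, 0.4, 3.0]])

    # principal values and principal axes: the real eigensystem of ω
    values, axes = np.linalg.eigh(omega)

    # each principal axis is merely stretched:  ωx = ω₁x, etc.
    for val, x in zip(values, axes.T):
        assert np.allclose(omega @ x, val * x)

    # the axes form a normal (mutually perpendicular) triad
    assert np.allclose(axes.T @ axes, np.eye(3))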
An expression of the form

a.b,

a, b being any two vectors, in general distinct, that is, not coinciding in direction, is called a dyad; the first vector is called the antecedent, and the second, the consequent of the dyad. The dyad as an operator can be used either as a prefactor of the operand, say

a.b R = a(bR),

or also as a postfactor,

R a.b = (Ra)b.

A dyad whose antecedent and consequent coincide in direction gives the same result whether used as prefactor or as postfactor. Such dyads are called symmetrical dyads. They are all of the form

σ a.a,
If φ = a.x + b.y + c.z, its conjugate is

φ' = x.a + y.b + z.c,

so that, for any pair of vectors,

RφS = Sφ'R.

It may be mentioned that the general linear vector operator ϖ can always be reduced to what is called a normal (trinomial) dyadic, i.e. a sum of three dyads whose antecedents and whose consequents form two normal triads of unit vectors, multiplied by scalars σ₁, σ₂, σ₃ (either all positive or all negative). The dyadic

I = i.i + j.j + k.k

leaves, of course, any vector operand R intact, and is, therefore, called an idemfactor. It is also, for all purposes, equivalent to the scalar factor 1.
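In matrix form the dyad a.b answers to the outer product (an editorial rendering, not Silberstein's notation); prefactor, postfactor and idemfactor may be checked thus:

    import numpy as np

    a = np.array([1.0, 2.0, 0.0])
    b = np.array([0.0, 1.0, -1.0])
    R = np.array([0.5, 0.5, 2.0])

    # the dyad a.b, used as a prefactor:  (a.b)R = a(bR)
    dyad = np.outer(a, b)
    assert np.allclose(dyad @ R, a * np.dot(b, R))

    # used as a postfactor:  R(a.b) = (Ra)b
    assert np.allclose(R @ dyad, np.dot(R, a) * b)

    # the idemfactor i.i + j.j + k.k leaves any operand intact
    i, j, k = np.eye(3)
    idem = np.outer(i, i) + np.outer(j, j) + np.outer(k, k)
    assert np.allclose(idem @ R, R)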
If φ = a.b and ψ = c.d are two dyads, we have obviously

φ(ψR) = a(bc)(dR),

where φψ is the dyad a(bc).d = (bc) a.d. Similarly, if χ be a third dyad, we have

φ(ψ(χR)) = φ(ψχ)R = (φψ)χ R,

the associative property, so that each of these expressions can be simply written φψχR. And the same is easily seen to hold if φ, ψ, etc., stand for binomial or polynomial dyadics. Again, since the scalar product of vectors is distributive, we have for any dyadic φ and any vectors R, S,

φ(R + S) = φR + φS,

and also, if ψ, χ be two more dyadics, the operational equation

φ(ψ + χ) = φψ + φχ.
scalar coefficients, such as (bc), (be), etc., and here of course the order is irrelevant; but the order of the antecedents and consequents themselves must be kept. Examples of the latter kind will be found in the "Simplified Method, etc.," mentioned before. The final result of such multiplications of two or more polynomial dyadics is again a polynomial dyadic,

A.B + C.D + E.F + G.H + etc.

Since every vector can be expressed in the form xa + yb + zc, where a, b, c are any non-coplanar unit vectors, any such result can, in the first place, be reduced to a trinomial form, i.e. to a general linear operator ϖ. Ultimately, the latter can with advantage be split into a symmetric operator ω and the simple operator Vw, as in (32). But to dwell upon these developments would claim too much space.
Let R be a variable vector, say a function of the time t. To have a possibly desirable picture, think of the path of a moving particle; ΔR is then the chord, i.e. the vector drawn from the position of the particle at the instant t to its position at the instant t + Δt. The time-derivative of R will be written

dR/dt = Ṙ = Lim (ΔR/Δt).

If R = Rr, r being the unit vector of R, then

Ṙ = Ṙr + Rṙ.
Again, since the scalar product of two vectors is distributive, so that Δ(RS) = RΔS + SΔR plus terms of higher order, we have

d(RS)/dt = ṘS + RṠ.

Similarly, for the vector product, the order of the factors being kept,

d(VRS)/dt = VṘS + VRṠ.
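These differentiation rules admit a direct numerical check by central differences (Python with numpy; the sample motions R(t), S(t) are ours):

    import numpy as np

    # two vector functions of t and their exact derivatives
    R  = lambda t: np.array([np.cos(t), np.sin(t), t])
    Rd = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])
    S  = lambda t: np.array([t**2, 1.0, -t])
    Sd = lambda t: np.array([2*t, 0.0, -1.0])

    t, h = 0.7, 1e-6
    # d(RS)/dt = ṘS + RṠ, checked by a central difference
    num = (np.dot(R(t+h), S(t+h)) - np.dot(R(t-h), S(t-h))) / (2*h)
    assert np.isclose(num, np.dot(Rd(t), S(t)) + np.dot(R(t), Sd(t)))

    # d(VRS)/dt = VṘS + VRṠ, the order of the factors being kept
    num = (np.cross(R(t+h), S(t+h)) - np.cross(R(t-h), S(t-h))) / (2*h)
    assert np.allclose(num, np.cross(Rd(t), S(t)) + np.cross(R(t), Sd(t)))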
If the operator ϖ is itself variable and given as a dyadic,

A.B + C.D + E.F,

we have

dϖ/dt = Ȧ.B + A.Ḃ + ... + E.Ḟ.

And if the form ω + Vw is used, we have

dϖ/dt = dω/dt + Vẇ,

where dω/dt can again be expanded, as the derivative of a symmetrical dyadic. It is scarcely necessary to add any further explanations.
INDEX
Addition of vectors, 3-7
algebraic sum, 7
antecedent, of dyad, 35
antisymmetrical operators, 29
area, of parallelogram, 17
associativity, of addition, 6; of dyadics, 37
asymmetrical operators, 29
autoproduct, scalar, 13
axes, of operator, 31-35

Chain of vectors, 3
closed chains, 6
coinitial vectors, 3
collinear vectors, 7
commutativity, of addition, 6; of scalar product, 12
components, 8
conjugate operators, 29
consequent, of dyad, 35
constituents, of vector operator, 28
continuous operators, 26
coplanar vectors, 20
cosine formula, 16

Gibbs, 35

Heaviside, 35

Idemfactor, 37
invariants, of operator, 33
iterated multiplication, 23-25

Linear vector operator, 27
localized vectors, 2

Multiple of a vector, 7
multiplication of dyadics, 37-38

Negative factor, 7
nil vector, 6
normal and longitudinal parts of a vector, 36
normal form of dyadic, 36

Operators, 26
origin, of vector, 1