Digitized by the Internet Archive in 2007 with funding from Microsoft Corporation
http://www.archive.org/details/elementsofvectorOOsilbrich
ELEMENTS OF VECTOR ALGEBRA
BY THE SAME AUTHOR

Elements of the Electromagnetic


Theory of Light.
Crown 8vo, 3s. 6d. net.

Simplified Method of Tracing Rays


through any Optical System of
Lenses, Prisms, and Mirrors.
With Diagrams and blank pages for the reader's notes.
8vo, 5s. net.

LONGMANS, GREEN AND CO.


London, New York, Bombay, Calcutta and Madras
ELEMENTS OF
VECTOR ALGEBRA

BY

L. SILBERSTEIN, Ph.D.
LECTURER IN NATURAL PHILOSOPHY AT THE UNIVERSITY OF ROME

WITH DIAGRAMS

LONGMANS, GREEN AND CO


39 PATERNOSTER ROW, LONDON
FOURTH AVENUE & 30TH STREET, NEW YORK
BOMBAY, CALCUTTA AND MADRAS

1919
PREFACE
This little book was written at the instance of Messrs. Adam
Hilger, and, in accordance with their desire, it contains just what
is required for the purpose of reading and handling my Simplified
Method of Tracing Rays (Longmans, Green & Co., London,
etc., 1918). With this practical aim in view, all critical subtleties have
been purposely avoided. In fact, it is scarcely more than a synop-
tical presentation of the elements of Vector Algebra covering the

needs of those engaged in geometrical optics. At the same time,


however, it is hoped that this booklet will serve a more general
purpose, viz. to provide everybody unacquainted with the subject
with an easy introduction to the use of Vector Algebra.
It is scarcely necessary to explain that the deductions given
in this book are based on Euclid's axioms, notably with the
inclusion of his postulate of parallels — upon which the equality of
vectors is most essentially based. Those readers who are desirous
of seeing how the formal rules here given can be generalized so as
to be valid independently of the axioms of congruence and of
parallels, may consult the author's Projective Vector Algebra (Bell
& Sons, 1919), and a sequel to it published in Phil. Mag. for July,
1919, pp. 115-143. It is, however, advisable for the student to
become first thoroughly familiar with the euclidean vector algebra
as here presented.
I take the opportunity of expressing my sincere thanks to Messrs.
Hilger for enabling me to make this further contribution towards
the promotion of the more general use of this powerful and
convenient language of vectors, and to the Publishers for the care
they have bestowed upon this little book. L. S.

London, August, 1919.

CONTENTS
                                                        PAGE
1. Vectors Defined                                         1
2. Equality of Vectors Defined                             2
3. Addition of Vectors                                     3
4. Subtraction of Vectors                                 10
5. Scalar Product of Two Vectors                          11
6. The Vector Product of Vectors                          17
7. Expansion of Vector Formulae                           21
8. Iteration of Vectorial Multiplication                  23
9. The Linear Vector Operator                             25
10. Hints on Differentiation of Vectors                   38
Index                                                     41
ELEMENTS OF VECTOR ALGEBRA

1. Vectors defined. Whereas common algebraic magnitudes,


such as the number of inhabitants of a village, or the mass of a
body, or the energy stored in an accumulator, having nothing to
do with direction, are called scalars, any magnitude such as a
displacement, a velocity or an acceleration, which has size as well
as direction in space, is called a vector. The visual, or tangible,
representative of any vector whatever is a segment of a straight
line of some length, representing the vector's size, and of some
definite direction in space, together with its sense (say, from a
point M towards a point N), giving the direction of the vector.
Vectors will be printed in Clarendon, thus

A, B, etc., or n, r, s, etc.,

and their sizes, regardless of direction, or their tensors (as they


are called) will be denoted by the same letters in Italics. Thus,
A will be the tensor of A ; B, n will be the tensors of B, n, and
so on.
Returning once more to the above definition, we may as well
say that any vector A = OE is given by the ordered couple or pair
of points, O the origin and E the end-point of the vector; the
tensor, called also the absolute value, of the vector being
the mutual distance of O and E. In short symbols, and using
the familiar bar for the distance,

A = O→E,   A = OE.

The tensor of a vector is thus an ordinary, absolute or essentially
positive number.
A vector whose tensor is (in a conventionally fixed scale) equal
to unity, is termed an unit vector. Thus, if r = 1, the corresponding
r will be a unit vector. It will be understood that the denomi-

nation of A is that of A. That is to say, if A is, for instance, the


displacement of a particle, A will mean so many centimetres ;

and if A represents a velocity, A will be a number of cm. per second,


and so on.
As far as will be possible we shall reserve small (in distinction
from capital) Clarendon letters for unit vectors. Thus, if the
contrary is not expressly stated, a, b, etc., will stand for unit
vectors, so that a = 1, b = 1, etc.
In MS. work the reader will, at least in the beginning of his
vector career, find it useful to underline all his vectors once or
twice. Or he may write them thicker, imitating somehow the
printer's type. Everyone will soon find out his most agreeable
manner of writing.
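For readers who like to experiment, the notions of this section can be sketched in modern code. The following Python class is not part of the original text; the names Vec, tensor and unit are illustrative only, chosen to match the book's terminology.

```python
import math

class Vec:
    """A free vector in 3-space (an illustrative sketch, not the book's notation)."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def tensor(self):
        # The tensor, or absolute value: an essentially positive number.
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

    def unit(self):
        # The "unit of A": same direction, tensor equal to 1.
        t = self.tensor()
        return Vec(self.x / t, self.y / t, self.z / t)
```

Thus Vec(3, 4, 0).tensor() gives 5, and the unit of any non-nil vector has tensor 1.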

2. Equality of vectors defined. We have just seen that the


two essential features of a vector are: its size or tensor, and its

direction in space.
In some branches of physico-mathematics it is important to
consider the position of the vectors in question (besides their sizes
and directions), i.e. to localize their origins, either by fixing the
origin of each vector altogether or by allowing it only to move
freely in its own line. Such vectors are usually called "localized"
vectors. In a vast class of investigations, however, the position
of these directed magnitudes is of no avail, and it is then obviously
convenient not to include position among the determining char-
acteristics of a vector. Such vectors, in distinction from localized
ones, are called free vectors. These and these only will here
occupy our attention. The adjective will be dropped, however,
and the beings in question will be called shortly vectors. With
this understanding, the definition of their equality may be put
thus :

By saying that two vectors, A and B, are equal to one another,


and by writing
A=B or B=A,
we mean that their tensors are equal, A=B, and that they have
the same direction or, in other words, that the straight segments
representing these two vectors have the same length and are con-
currently parallel to one another. In short symbols, A=B means
as much as
A = B and A ∥ B.

Thus, if a pair of points, O, E, represents a vector A = O→E,
the ∞³ point pairs O′, E′ or straight segments O′E′ of equal length
with and concurrently parallel to OE are all equal to A, no matter
where their origins are situated. Notice that through every point
O′ of euclidean space there is one and only one parallel to OE,
so that from every space point O′ as origin one and only one vector
can be drawn which is equal to the given A. Of course, the
laying off, from O′, of the length O′E′ = OE implies the use of
some " rigid transferer," such as a pair of compasses.
Equivalently, we may say that the rigid translation (parallel
shifting) of a given vector is irrelevant, or does not change the
vector. Provided it is not being rotated, stretched or contracted,
we can, by the accepted definition, " transfer " it to any place we
like best.
Two vectors A, B drawn from the same origin are termed
coinitial. By what has just been said, any two vectors can be
made coinitial, by shifting one of them or both parallel to them-
selves. If A=B, then making them coinitial, fuses them into
one straight segment. If only A==B (equal tensors only), then
making the vectors A, B coinitial will still leave a certain non-
vanishing angle, or direction difference, between them, sufficient by
itself to declare the two vectors as being different from one another.
We will say that two or more vectors form a chain if the end-
point of one serves as the origin for the other, and so on. As
before, any two vectors A, B can be linked up into a chain, to
wit in two manners: end-point of A coinciding with origin of B
(or A preceding B), or vice versa.


This licence will be seen to be of capital importance for the
vector sum to be defined presently, inasmuch as it will confer
upon that sum the extremely convenient property of commuta-
tivity. It will, therefore, be important to keep these latter,

apparently trivial remarks well in mind.

3. Addition of vectors. Let A and B be any two vectors,


drawn anywhere. Shift B so as to bring its origin to coincidence
with the end-point of A, as shown in Fig. 1. The vectors being
thus linked up into a chain we call sum of A and B and denote by
S=A+B
a third vector S which runs from the beginning to the end of the
chain, i.e. from the origin of A to the end-point of B.
This is the definition of the vector sum. The operation, vector
addition, thus defined has the so-called group-property, that is to
say, being performed on vectors it gives again a vector, in much
the same way as five apples added to three apples give again a
certain number of apples.

Fig. 1.

The above vectorial expression will be read: B added to A.
But we might as well have linked the two given vectors so that
the end-point of B = B′ falls into the origin of A, as shown in the
lower part of Fig. 1. Then their sum, say S′, would — according
to the definition — be

S′ = B + A,

which reads: A added to B. The natural question arises: What
is this new vector S′? Is it equal to S?
The answer is in the affirmative. For, by construction, B′ is
parallel to B and B′ = B, so that Oαβ and αOγ are congruent
triangles, and S′ = S. At the same time the angles β and γ are
equal to one another, so that, αβ being parallel to γO, so are also
Oβ and γα, or S ∥ S′. Therefore, by Section 2, S′ = S, what was
to be proved.

Thus we have

A + B = B + A,                            (1)

the commutative property of vector addition. The order of the
addends, in the vector chain, is irrelevant for their sum.
Again, we might have shifted B to the position B″ (Fig. 1),
retaining also the previous B = α→β and constructing A′ = δ→β = A.

Then, Oαβδ being a parallelogram and S = O→β one of its diagonals,


we should have the following construction of the sum of two
coinitial vectors A, B (Fig. 2) :

Through α, the end-point of A, draw a parallel to B, and through
β, the end-point of B, draw a parallel to A. Then γ, the cross of
these parallels, will be the end-point of the required vector sum
A + B or B + A, and the common origin of the two addends will
be the origin of their sum

S = O→γ = A + B = B + A.                  (2)

This is known as the parallelogram construction of a vector sum.
We might have started from it as a sum definition. It has the
advantage of being immediately symmetrical with respect to the
two addends. At any rate we see that the chain and the parallelo-
gram constructions are (in virtue of Euclid) wholly equivalent to
one another.
Thus far the case of two vector addends. Now, the sum of
these being again a vector, S=A+B, we can add to S any third
vector C, thus obtaining

S + C = (A + B) + C = C + (A + B),

the latter by the commutative property. Similarly for the sum


of four and more vectors. Again, linking up the vector addends
A, B, C into a chain, we see without difficulty (Fig. 3) that

(A + B) + C = A + (B + C),                (3)

the result being in both cases the same vector, viz. that drawn
from the beginning to the end of the chain. The same property
holds for the sum of any number of vectors. The brackets become
superfluous, and either of the above expressions can simply be
written

A + B + C

or B + A + C, and so on.

The addition of vectors is thus seen to be associative as well as
commutative, exactly as the ordinary algebraic addition of scalars.
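The chain construction and the two laws just proved can be verified numerically. A minimal Python sketch, with vectors taken as coordinate triples (a representation the book itself introduces only later, with formula (6)):

```python
def add(A, B):
    # Chain construction: the sum runs from the origin of A to the end-point of B.
    return tuple(a + b for a, b in zip(A, B))

A, B, C = (1.0, 2.0, 0.0), (3.0, -1.0, 2.0), (0.5, 0.5, 0.5)

assert add(A, B) == add(B, A)                  # A + B = B + A, formula (1)
assert add(add(A, B), C) == add(A, add(B, C))  # (A + B) + C = A + (B + C), formula (3)
```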

If by any appropriate parallel shifting of any number of given
vectors, say A, B, C, D, they can be linked up, as in Fig. 4, into
a closed chain (or a polygon), plane or not, then the sum of these
vectors is a nil vector or simply nil,

S = A + B + C + D = 0,

and therefore also A + C + B + D = 0, etc. It is scarcely necessary

to say that a vector is nil or zero, S = 0, if its tensor S vanishes,
and conversely; or, in other words, if its end-point and origin
coincide, such precisely being the case of our closed chain.
The vector sum, which shares with the ordinary algebraic sum
the two capital properties of commutativity and associativity,

contains the algebraic sum as a particular sub-case, to wit, when


the vector addends are all parallel to one another. For, such
being the case, they can always be brought into one line or made
collinear. Parallel vectors, no matter what their tensors, are
therefore called also collinear vectors. Now, if A, B are collinear
vectors, the tensor of their sum is

A ± B,

according as A, B are of equal or of opposite senses.
The tensor of a sum of vectors, as S = A + B, can conveniently
be denoted by

S = |A + B|,

as is usual for the absolute value of ordinary algebraic magnitudes.
Thus we shall have, for collinear vectors,

|A + B| = |A ± B|.
But it will be well kept in mind that, in general, for non-collinear
addends,

|A + B| < A + B,

since |A + B| is the length of the third side of a triangle whose
two other sides are A and B.
By what has just been said, the sum of two equal vectors, which
is written

A + A or 2A,

is a vector coinciding with A in direction and having 2A for its
tensor. Similarly for 3A, 4A, and so on.
Again, if B be such a vector that

2B = A,

we shall write

B = ½A,

and similar meanings will be attached to ⅓A, ¼A, etc. In this
manner, and applying in the case of irrational factors the well-
known limit-reasoning, we easily obtain the meaning of the
expression

nA,

where n is any positive scalar number, integral, fractional or
irrational. We can say shortly that nA is the vector A stretched
in the ratio n : 1. If n is negative, then (as justified in Section 4,
infra) nA will be the vector A stretched in the ratio |n| : 1 and
then inverted in its sense, or first inverted and then stretched.


In particular, if a is an unit vector, the " unit of A," as we
have said before, we shall obviously have

A = Aa.                                   (4)

Here A, the tensor of A, is an ordinary positive number.


Let a, b be any two non-collinear unit vectors. (Imagine them
shifted so as to be coinitial.) Then any vector R contained in or
parallel to the plane a, b can obviously be expressed by

R = xa + yb,                              (5)

where x, y are some scalar numbers. For the plain meaning of
this assertion is that, starting from O, the origin of a, b, any other
point of the plane a, b can be reached by making a number (x)
of steps a and then a number (y) of steps b (or first yb and then
xa). If both x and y are positive, then, with O as origin, R will
lie in the region I of the plane a, b (Fig. 5); if x < 0, y > 0, it
will fall into II; if x < 0, y < 0, into III; and finally, if x > 0,
y < 0, into the region IV.

Fig. 5.


The scalars x, y in (5) are called the components of R along
a, b as axes.
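Finding the components of formula (5) amounts to solving two linear equations. A Python sketch for the plane case, with a, b given as coordinate pairs; the function name components is ours, not the book's.

```python
def components(R, a, b):
    # Solve R = x a + y b for x, y. The axes a, b must be non-collinear,
    # which is exactly the condition that det be non-zero.
    det = a[0] * b[1] - a[1] * b[0]
    x = (R[0] * b[1] - R[1] * b[0]) / det
    y = (a[0] * R[1] - a[1] * R[0]) / det
    return x, y
```

For the oblique axes a = (1, 0), b = (1, 1) and R = (5, 3) this gives x = 2, y = 3, i.e. R = 2a + 3b.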

Similarly, if a, b, c be any three non-coplanar vectors,* which
we may again take as unit vectors, then any vector whatever can
be expressed in the form

R = xa + yb + zc.                         (6)

* I.e. such as cannot be made coplanar by parallel translations.
The scalars x, y, z are called the components of R taken along
a, b, c as axes. These axes may be chosen at our will (if we wish
at all to split our vectors R into components), either perpendicularly
or obliquely to one another, the only condition for covering all
possible vectors (R) being that a, b, c should not be coplanar.
The three vectors a, b, c or, as we will say, the reference system
a, b, c, being fixed conventionally, we see from (6) that any vector
is fully determined by three scalar data x, y, z, and not less than
three. The same thing is obvious from formula (4), according to
which any vector R can be represented by

R = Rr.

In fact R is one scalar number, and r (a direction) implies two
more scalar data, for instance two angles, which makes in all three
independent scalar data as above.
In so-called polar coordinates, for instance, we have (Fig. 6)

R = R{[i cos φ + j sin φ] sin θ + k cos θ},          (7)

where i, j, k are mutually perpendicular unit vectors, the pole-
distance or co-latitude θ being counted from k and the longitude φ
from i towards j, if i, j, k be a right-handed system. The particular

Fig. 6.

form (7) will often be found useful in passing from vector to scalar
formulae, especially in optical computations. The unit of R, i.e.
any unit vector r, will be, in polar coordinates,

r = [i cos φ + j sin φ] sin θ + k cos θ.             (7a)
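Formula (7a) translates directly into code. A Python sketch (the function name is ours); the result is the triple of components along i, j, k:

```python
import math

def unit_from_polar(theta, phi):
    # r = [i cos(phi) + j sin(phi)] sin(theta) + k cos(theta)   -- formula (7a)
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(theta))
```

Whatever θ and φ, the resulting vector has tensor 1, as a unit vector must.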


After what has been said, it is scarcely necessary to explain
that every vector equation is equivalent to three scalar ones. For
to give R is equivalent to giving, say, its three rectangular or
" cartesian " components, or, as in (7), the polar coordinates
R, θ, φ of the end-point of R, with O as origin. Thus A = B means
as much as A₁ = B₁ and A₂ = B₂ and A₃ = B₃, if the suffixes 1, 2, 3
are used for the components of the vectors along i, j, k, or as
much as

R_A = R_B,   θ_A = θ_B,   φ_A = φ_B,

if the suffixes A, B are used to distinguish the polar coordinates of
the end-point of A from those of the end-point of B.
But it must henceforth be urged that any such splitting of a
vector should be avoided as much as is possible in the course of a
vector investigation of any kind. For the utility of the vector method
lies precisely therein that it enables us to treat vectors as wholes
instead of the triad of " components " of each of them.

4. Subtraction of vectors. This will require but a few remarks.


In fact, as in common algebra, the difference of two vectors A
and B, to be denoted by

A − B,

may be defined as such a vector C, which added to B gives A.
In symbols, we say that

C = A − B,   if B + C = A.                (8)

From this definition we see at once that if A, B are made coinitial
(Fig. 7), the vector A − B runs from the end-point of B to the
end-point of A. From the same figure, and by what was explained
previously, we see that A + B and A − B are represented by the two
diagonals of the parallelogram constructed upon A, B.

Apply the above definition (8) to the particular case A = 0;
then

C = 0 − B = −B,

and B + C = 0; therefore,

B + (−B) = 0.                             (9)

This settles the meaning of the vector denoted by −B; it is the
vector which runs from the end-point towards the origin of B, or
the reverse of B. This also justifies the interpretation given
before to a negative scalar factor of a vector. Henceforth, for
any A, B,

A − B

will stand for the same vector as

A + (−B).
The above remarks complete the meaning of

nA,

where A is a vector and n any real scalar, positive, nil or negative.
The concept of such a product of a vector by any scalar n does not
contain, in fact, anything besides the previous concept of vector
sum or difference. It is derived from their special case, viz. relating
to collinear vectors.
To say it once more, nA is simply the vector A stretched in the
ratio |n| : 1 and, if n < 0, turned through 180° (in any plane
passing through A).
Finally, as the reader himself will easily prove, for any A, B,
and any scalar factor n,

n(A + B) = nA + nB.                       (10)

Similarly for three or more vector addends. This settles all
questions concerning the multiplication (or division) of a vector
expression by any scalar number.
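In coordinates, the stretching nA, the difference A − B, and the distributive law (10) look as follows (a Python sketch with illustrative names, vectors as coordinate triples):

```python
def scale(n, A):
    # nA: A stretched in the ratio |n| : 1, and reversed in sense if n < 0.
    return tuple(n * a for a in A)

def add(A, B):
    return tuple(a + b for a, b in zip(A, B))

def sub(A, B):
    # A - B: the vector which, added to B, gives A (definition (8)).
    return tuple(a - b for a, b in zip(A, B))

A, B = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert add(B, sub(A, B)) == A                                # B + (A - B) = A
assert scale(2, add(A, B)) == add(scale(2, A), scale(2, B))  # n(A + B) = nA + nB
```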

5. Scalar product of two vectors. We now come to a new


concept, transcending that of vector addition which hitherto has
occupied us. The " scalar product " of two vectors A, B, which
will be denoted by AB, is, first of all, not a vector but a scalar.
(Thus the scalar multiplication of vectors does not respect the
group requirement; it yields a result not contained in the class
of operands: it takes two vectors and constructs out of them
something which is utterly deprived of direction. None the less
it is a very useful operation.) The value of this scalar is, by
definition, proportional to the tensors of both the factors and to
the cosine of the angle (A, B) included between them. In short,
the definition of the scalar product is

AB = AB cos(A, B).                        (11)

This can also be read: AB is the projection of A upon B multi-
plied by B, or the projection of B upon A multiplied by A.
Since AB = BA, for A, B are common numbers, and
cos(A, B) = cos(B, A), we see at once from the very definition (11)
that

AB = BA,                                  (12)

the commutative property.
According as the angle (A, B) is < π/2 or > π/2 (but ≤ π), the
product AB is positive or negative; for A, B are themselves
essentially positive. And if (A, B) = π/2, or A ⊥ B, then

AB = 0,

no matter what the (finite) tensors of A, B. In this case the
operation (scalar multiplication) deprives the material operated
upon not only of direction but of size. It annihilates it.

Conversely, if of two vectors A and B we know only that AB = 0,
then the only conclusion we can draw from it is that

A ⊥ B,

but by no means that one of the factors vanishes, unless we happen
to know beforehand that the two vectors cannot be perpendicular.
It is of prime importance to keep this well in mind:

AB = 0 means in general only as much as A ⊥ B.

The scalar product AB contains the ordinary algebraic product
as a special case, to wit, when A, B are collinear vectors. For if
such be the case, we have cos(A, B) = ±1, and therefore,

AB = ±AB,                                 (13)

according as A and B have the same or opposite directions.
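In rectangular components the scalar product is the familiar sum of products of corresponding components (a consequence of the distributive property (15), established below in this section). A Python sketch illustrating the cases just discussed:

```python
def dot(A, B):
    # Scalar product in rectangular components: AB = A1*B1 + A2*B2 + A3*B3.
    return sum(a * b for a, b in zip(A, B))

# Perpendicular factors: AB = 0, though neither factor vanishes.
assert dot((1, 0, 0), (0, 5, 0)) == 0
# Collinear factors: AB = +AB or -AB according to sense, formula (13).
assert dot((2, 0, 0), (3, 0, 0)) == 6
assert dot((2, 0, 0), (-3, 0, 0)) == -6
```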


Since the tensor of the vector mA is mA, we see at once from
(11) that

mA nB = mn AB.

Thus, for example, if a, b be the units of A, B, we have

AB = AB ab,

where, again by the definition (11),

ab = cos(a, b),                           (14)

valid for any pair of unit vectors a, b. Thus, for instance, if a, b
make with one another the angle of 45°, we have ab = 1/√2, and if
(a, b) = 90°, ab = 0. For the three normal unit vectors i, j, k used
above we have ij = jk = ki = 0.
As a sub-case of (13) we have the scalar square of a vector, or
better, its autoproduct,

AA or A² = A²,

and if a be a unit vector,

a² = a² = 1.

Thus, i² = j² = k² = 1.
Again, if R is any vector whatever and n a unit vector, Rn is
the (scalar) component of R along n, or the orthogonal projection
of R upon n as axis,

Rn = R cos(R, n).

By what has been said we see that if A, B be rigidly linked
together and thus moved about in space in any arbitrary manner
whatever (spun round, etc.), the value of the product AB is not
changed. It is thus an invariant of the pair of vectors with
respect to their common rigid motion. In fact, AB depends only
on the tensors of A, B and on their relative direction, i.e. the angle
(A, B).
By the fusion of A, B into AB all directional properties of the
factors are gone. The result has nothing more to do with direction
in space ; it is an ordinary scalar, like the tensor of each of the
two vectorial factors. Thus, if C be a third vector,
(AB)C or C(AB)

will simply mean the vector C magnified (stretched) AB times,


assuming, that is, that AB is a dimensionless or pure number; if
AB is an area and C, say, a displacement, then (AB)C, the tensor
of (AB)C, is a volume, of course, and so on. If D is a fourth
vector,

(AB)(CD)

will again be a scalar, and so on. The brackets are here used as
j

separators. They are, of course, indispensable in such and similar ;

cases. For, to take only three factors, ABC would, in general, be


ambiguous, since *

(AB)C is a vector along C,


while
A(BC) is a vector along A,
and thus entirely different from the former. Instead of brackets \

dots may conveniently be used as separators, thus


(AB)C=AB.C, ?

(AB)(CD)=AB.C3D,
and so forth. The reader will soon find that this need of precaution
gives rise to no serious inconvenience.
The scalar product AB is commutative owing to the symmetry
of its very definition with respect to A, B. In this it resembles
the ordinary product. But, what is most important, it has also
the distributive property, viz. for any A, B, C,

A(B + C) = AB + AC.                       (15)


For, by the definition, A(B + C) or (B + C)A is the projection
of the vector B + C upon A multiplied by A. But the projection
of the sum of two (or more) vectors upon any axis is equal to the
algebraic sum of the projections (Fig. 8), whence the proof of the
distributive law (15).
Similarly,

A(B + C + D + E + ...) = AB + AC + AD + AE + ...,

and also

(A + B)(C + D) = (A + B)C + (A + B)D
               = C(A + B) + D(A + B) = AC + BC + AD + BD.


And since B − C is the same thing as B + (−C), we have also

A(B − C) = AB − AC.

In fine, the scalar multiplication of vectors is commutative as well
as distributive, and any two vector polynomials are multiplied out
precisely as in ordinary algebra. This makes the scalar multiplica-
tion of vectors a powerful operation.
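That vector polynomials multiply out as in ordinary algebra is easy to check numerically. A Python sketch of the identity (A + B)(C + D) = AC + BC + AD + BD:

```python
def dot(A, B):
    return sum(a * b for a, b in zip(A, B))

def add(A, B):
    return tuple(a + b for a, b in zip(A, B))

A, B, C, D = (1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 0, 1)
# Multiplying out term by term gives the same scalar.
lhs = dot(add(A, B), add(C, D))
rhs = dot(A, C) + dot(B, C) + dot(A, D) + dot(B, D)
assert lhs == rhs
```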
As examples we may quote

(A + B)(A − B) = A² − B²,

meaning that the product of the lengths of the diagonals of a
parallelogram multiplied by the cosine of their included angle is
equal to the difference of the squares constructed upon the sides
of the parallelogram; again,

(A + B)² = A² + B² + 2AB,

or (Fig. 9), remembering that AB = AB cos(π − θ) = −AB cos θ,

C² = A² + B² − 2AB cos θ,

the well-known trigonometrical relation. In particular, if A ⊥ B,

(A + B)² = A² + B²,

the theorem of Pythagoras. As a third example, let us quote the
scalar product of two coinitial unit vectors, written as in (7a),

r₁ = [i cos φ₁ + j sin φ₁] sin θ₁ + k cos θ₁,

r₂ = [i cos φ₂ + j sin φ₂] sin θ₂ + k cos θ₂,

and representing (by their end-points) two places on the Earth*
whose geographic colatitudes and longitudes are θ₁, φ₁ and θ₂, φ₂.
If s be their geodesic or shortest distance, i.e. the angle (r₁, r₂),
we have cos s = r₁r₂. Now i² = 1, etc., and ij = jk = ki = 0. Thus,
multiplying out the two trinomials we have, for the required dis-
tance s,

cos s = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ [cos φ₁ cos φ₂ + sin φ₁ sin φ₂].

* Assumed to be ideally spherical, of radius taken for unit length.

Again, calling for the moment a, b two equatorial unit vectors
having the longitudes of the two places (cf. Fig. 6), viz.

a = i cos φ₁ + j sin φ₁,   b = i cos φ₂ + j sin φ₂,

we have

ab = cos(φ₁ − φ₂) = cos φ₁ cos φ₂ + sin φ₁ sin φ₂,

the well-known formula of plane trigonometry, so that the geodesic
distance of the two places, (θ₁, φ₁) and (θ₂, φ₂), becomes

cos s = cos θ₁ cos θ₂ + sin θ₁ sin θ₂ cos(φ₁ − φ₂),      (7b)

an important formula for navigators, which is at the same time the
fundamental " cosine formula " of spherical trigonometry. In
fact, N being the pole (θ = 0), formula (7b) concerns the spherical
triangle 1N2 (Fig. 10), whose sides are s, θ₁, θ₂, and whose angle
included between the latter two is φ₂ − φ₁. Notice that this is
valid for any spherical triangle; for one of its corners can always
be considered as our pole, θ = 0.
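The geodesic distance of formula (7b) is exactly what one computes today for great-circle distances. A Python sketch on the unit sphere, using the scalar product of the two unit radii (function names ours; θ is colatitude, φ longitude):

```python
import math

def unit_from_polar(theta, phi):
    # Formula (7a): r = [i cos(phi) + j sin(phi)] sin(theta) + k cos(theta).
    return (math.cos(phi) * math.sin(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(theta))

def geodesic(theta1, phi1, theta2, phi2):
    # cos s = r1 r2, the scalar product of the two unit radii (formula (7b)).
    r1 = unit_from_polar(theta1, phi1)
    r2 = unit_from_polar(theta2, phi2)
    return math.acos(sum(a * b for a, b in zip(r1, r2)))
```

From the pole (θ = 0) to a point of colatitude θ the distance comes out as θ itself, the side of the spherical triangle 1N2.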

The reader will not be astonished to see the comparatively
complicated theorems of euclidean geometry thus follow with-
out the least trouble from squaring the sum of vectors or from
multiplying scalarly two unit vectors. For essentially all euclidean
relations have been condensed into the above vectorial definitions
and rules of operations (addition and scalar multiplication). Still,
as such a condensed system, the vector algebra is exceedingly
useful. The reader will find for himself that the vector equality

and the vector addition alone, as explained in Sections 1 to 4, even


without the help of the scalar product, are sufficient to demonstrate
formally a large number of euclidean theorems, such, for instance,
as the mutual bisection of the diagonals of a parallelogram, the
common cross of the three medians of a triangle, and so on.
The scope and purpose of this booklet do not permit us to enter
into all these attractive details. The willing reader will, however,
find no difficulty in treating them as exercises which he will soon
find to be easy as well as interesting and useful, when skill in
handling the vector method is aimed at.

6. The vector product of vectors. Two non-collinear vectors, A
and B, can always be said to define a plane A, B, by making them
coinitial, for instance, as in Fig. 11. We already know that one

Fig. 11.

of the previous operations, AB, deprives them of all their properly
vectorial characteristics, and the other, A + B, or more generally
xA + yB, gives us only vectors which are again in the plane A, B.
The operation to be now introduced is in this respect particularly
interesting, since it yields a vector outside the plane of the operands
A, B.
Definition. We call vector product of A into B and denote by
VAB a third vector C normal to A, B and drawn so that for an
observer glancing along C the rotation turning A into B, through
an angle smaller than 180°, is clockwise. This fixes the direction
and the sense of the vector product C = VAB, and its tensor is
defined as equal to the area of the parallelogram constructed upon
A, B as sides, i.e.

C = |VAB| = AB |sin(A, B)|.               (16)

From this definition we see, first of all, that the vector product
is not commutative, inasmuch as we have

VBA = −VAB.                               (17)
Again, if A, B are parallel to one another, i.e. collinear, we have

VAB = 0.

And if A ⊥ B, then sin(A, B) = ±1, and

C = |VAB| = AB,

while A, B and C form a right-handed normal system of three
vectors. If A points upward and B towards the right, then
C = VAB points forward.

If we know of two vectors A, B that their vector product vanishes,
then we can conclude only that they are parallel (collinear), i.e.
that

B = mA,

where m is some undetermined scalar number, but by no means
that one of the vectors vanishes (unless we know beforehand that
they cannot be parallel). This is, mutatis mutandis, analogous to
what has been said in Section 5 with regard to the scalar product.
From (16) we see at once that the vector product of m times A
into n times B is equal to

mn VAB.

Thus, for instance,

VAB = AB Vab,                             (18)

where a, b are the units of A, B. Similarly AB = AB ab.


For a right-handed system of normal unit vectors, as the previous j

i, j, k, we have

Vij=k, Vjk=i, Vki=j, {a)

three relations derivable from one another by cyclic permutations;


of i, j, k. At the same time we have, of course, as for every vector, \

Vu=Vjj=Vkk = o.
Contrast these relations with the previous ones, i2=j2 = ij2_,i g^^^
ij =jk=ki = 0. The latter follow also from {a) ; for, by the second!
of (a), for instance, i = Vjk is normal to j, and therefore ji = jVjk =0. ]

It is scarcely necessary to explain that jVjk means the scalar]


product of the vectors j and Vjk. j

More generally we have, for any two vectors A, B, by the very definition of VAB,

AVAB = BVAB = 0.

Let now A, B, C be any three vectors whatever, generally non-coplanar with one another. Then the scalarly-vectorial product,

AVBC,

which is itself a scalar, has a very simple geometrical meaning. In fact, let A, B, C (in the order as they are written) form a right-handed system, i.e. such that a person glancing along C sees the rotation from A to B (through less than π) clockwise. Construct upon A, B, C as edges a parallelepipedon (Fig. 12). Then VBC will be perpendicular to the base B, C, and its tensor will be equal to the area of this base; in symbols,

VBC = (area of base) n,

where n is a unit vector perpendicular to the base. Therefore,

AVBC = (area of base) An,

and An being the height of the parallelepipedon, we see that

AVBC = volume of parallelepipedon A, B, C,

provided that A, B, C is a right-handed arrangement of the edges.


(If it were a left-handed arrangement, then AVBC would be equal to minus the volume.) Now, the same volume can be expressed by taking C, A or A, B as base. Thus we obtain the important property

AVBC = BVCA = CVAB, (19)

or in words: the cyclic permutation of the three factors of AVBC does not influence the value of the product.* Inverting the cyclic order is equivalent to changing its sign. For VCB is minus VBC. The particular property (xA + yB)VAB = 0 can now be interpreted geometrically by saying that the volume of a parallelepipedon

* The validity of formula (19) is by no means based upon this volume-proof (or rather illustration), which is given here only because it best appeals to simple intuitions. In fact, (19) can be proved algebraically, without any appeal to the concept of 'volume.'
vanishes when its three edges become coplanar, that is to say,
when all its faces collapse into one plane.
If of any three vectors A, B, C we know that
AVBC = 0,
then the only thing we can conclude is that A, B, C are coplanar,
but by no means that one of these vectors vanishes. Conversely,
if A, B, C are coplanar, we have AVBC = 0. The theorem expressed
by (19) is of great utility in many applications, and it deserves,
therefore, to be well kept in mind.
As in the case of the scalar product, one of the most important
properties of the vector product is its distributivity, i.e. for any three
vectors A, B, C,
VA(B +C) =VAB + VAC. (20)

This capital property can be proved in a variety of ways. First of all, by an immediate geometrical construction of both the right- and the left-hand member of (20), which will be left as an exercise for the reader. (It will be enough if the reader constructs it for the simplest case of coplanar A, B, C.*) Another, comparatively simple proof, based upon (19), is this:

Let us write

VA(B + C) - VAB - VAC = X.
Then our problem is reduced to proving that X vanishes. Now, all the three addends being perpendicular to A, so is their sum X, i.e.

XA = 0.

Again, since BVAB = 0,

XB = BVA(B + C) - BVAC
= (B + C)VBA - CVBA, by (19),
= BVBA + CVBA - CVBA = 0,
and similarly XC = 0. Thus, the vector X either vanishes or is normal to each of the three vectors A, B, C. Now, if these are not coplanar, the latter case is excluded, so that X = 0. Thus, for non-coplanar A, B, C the distributive property (20) is already proved. And if A, B, C happen to be coplanar, add to C, for instance, a fourth vector D inclined to the plane of A, B, C. Then the new vectors A, B, C + D will not be coplanar, and

VA(C + D) + VB(C + D) = V(A + B)(C + D),

* For the case of non-coplanar A, B, C is more easily dealt with by the following analytical method.

and since D can always be so chosen as to make the three relevant vectors in each of these products non-coplanar, they may be expanded, giving

VAC + VAD + VBC + VBD = V(A + B)C + V(A + B)D;

but, by the above,

VAD + VBD = V(A + B)D,

whence

VAC + VBC = V(A + B)C,

or, changing the sign of both sides,

VCA + VCB = VC(A + B).
Thus the distributive property of vector multiplication is proved
for any A, B, C, coplanar or not.
The product of two binomials (or polynomials) does not call for
lengthy explanations. Thus,

V(A + B)(C + D) = V(A + B)C + V(A + B)D
= -VC(A + B) - VD(A + B) = VAC + VBC + VAD + VBD.
The vector multiplication of any two vector polynomials is thus
seen to obey the same rules as ordinary algebraic multiplication,
the only difference being that vector products are not commutative.
A reversal of the order of the two factors changes only the sign
of their product, which is easily remembered.

7. Expansion of vector formulae. Basing ourselves upon the distributive property just proved, we can at once expand the vector product of any two vectors into its cartesian or any other form. Thus, if

A = A₁i + A₂j + A₃k, and B = B₁i + B₂j + B₃k,

we have, remembering that Vii = 0, Vjk = -Vkj = i, etc.,

VAB = i(A₂B₃ - A₃B₂) + j(A₃B₁ - A₁B₃) + k(A₁B₂ - A₂B₁), (21)

exhibiting A₂B₃ - A₃B₂, etc. (by cyclic permutation), as the three rectangular components of the vector product. Since |VAB| is the area of the parallelogram constructed upon A, B as sides, we see at the same time that A₂B₃ - A₃B₂, etc., are the areas of the projections of this parallelogram upon the planes j, k; k, i; i, j, a well-known result which, however, is more easily seen on the

vector method. The last formula, (21), is easily memorized in
its determinantal form, which is

        | i    j    k  |
VAB =   | A₁   A₂   A₃ |        (21a)
        | B₁   B₂   B₃ |
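As a check on the reader's own expansions, the component formula (21) lends itself at once to numerical experiment. The following short Python sketch (ours, not part of the original text) encodes (21) and verifies the anticommutative property (17), the distributive property (20), and the first of the relations (a), on arbitrary vectors:

```python
# Sketch of formula (21): the vector product expanded in components.
# Vectors are plain 3-tuples; "cross" plays the part of VAB.

def cross(A, B):
    """VAB = i(A2B3 - A3B2) + j(A3B1 - A1B3) + k(A1B2 - A2B1)."""
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def add(A, B):
    return tuple(x + y for x, y in zip(A, B))

def neg(A):
    return tuple(-x for x in A)

A = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
C = (-2.0, 0.5, 1.0)

# (17): the vector product is not commutative, VBA = -VAB.
assert cross(B, A) == neg(cross(A, B))

# (20): distributivity, VA(B + C) = VAB + VAC.
assert cross(A, add(B, C)) == add(cross(A, B), cross(A, C))

# (a): Vij = k for the right-handed units i, j, k.
assert cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)) == (0.0, 0.0, 1.0)
```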

In exactly the same way the reader will show himself that the cartesian expansion of AVBC, the triple product representing the volume of the parallelepipedon A, B, C, is

         | A₁   A₂   A₃ |
AVBC =   | B₁   B₂   B₃ |        (22)
         | C₁   C₂   C₃ |

This, in fact, is the most familiar expression for the volume of the parallelepipedon constructed upon A, B, C as edges. Formula (22) gives also an immediate verification of the property

AVBC = BVCA, etc.,

as in (19). For

| A₁   A₂   A₃ |   | B₁   B₂   B₃ |
| B₁   B₂   B₃ | = | C₁   C₂   C₃ |
| C₁   C₂   C₃ |   | A₁   A₂   A₃ |

and so on.
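The determinantal expansion (22) can be checked in the same way; here is a minimal Python sketch (ours, not the author's) of AVBC and of the cyclic property (19):

```python
# Sketch of the triple product AVBC of formula (22) and the cyclic
# property (19). Vectors are plain 3-tuples.

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def dot(A, B):
    return sum(x*y for x, y in zip(A, B))

def triple(A, B, C):
    """AVBC: the (signed) volume of the parallelepipedon on A, B, C."""
    return dot(A, cross(B, C))

A = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
C = (-2.0, 0.5, 1.0)

# (19): cyclic permutation does not influence the value...
assert abs(triple(A, B, C) - triple(B, C, A)) < 1e-12
assert abs(triple(A, B, C) - triple(C, A, B)) < 1e-12
# ...while inverting the cyclic order changes the sign.
assert abs(triple(A, C, B) + triple(A, B, C)) < 1e-12
# Coplanar (here: repeated) factors give a vanishing volume.
assert triple(A, A, B) == 0.0
```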
For the scalar product we have immediately, remembering that i² = 1, ij = 0, etc.,

AB = A₁B₁ + A₂B₂ + A₃B₃. (23)

As particular cases of (21) and (23) note the results for two unit vectors a, b which include the angle ϖ,

sin²ϖ = (a₂b₃ - a₃b₂)² + (a₃b₁ - a₁b₃)² + (a₁b₂ - a₂b₁)²,

cos ϖ = a₁b₁ + a₂b₂ + a₃b₃,

a₁, b₁, etc., being now the direction-cosines of a, b relatively to i, j, k as axes. For such is the meaning of the components of unit vectors.
In order to give at least one illustration of the utility of AVBC, let us consider three coinitial unit vectors whose end-points may be conceived as the vertices 1, 2, 3 of a spherical triangle drawn on a unit sphere. Let us use the colatitude and the longitude as in (7a). Without any loss of generality we may put the pole

(θ = 0) into the vertex 1 and take the first meridian along the side 12; thus, a₁ being the angle at 1, and s₂, s₃ the sides of the spherical triangle opposite 2 and 3,

r₂ = i cos s₃ + j sin s₃,

r₃ = i cos s₂ + sin s₂ [j cos a₁ + k sin a₁].

This gives for the scalarly-vectorial product, by (22), since the first vector has no second and no third component, and the second vector no third component,

r₁Vr₂r₃ = sin s₂ sin s₃ sin a₁,

which, by the cyclical property (19), is also equal to r₂Vr₃r₁ and to r₃Vr₁r₂, and these products are obviously equal to

sin s₃ sin s₁ sin a₂

and to

sin s₁ sin s₂ sin a₃,

where a₂, a₃ are the remaining two angles of the spherical triangle. Thus,

sin a₁ / sin s₁ = sin a₂ / sin s₂ = sin a₃ / sin s₃, (19a)

the fundamental "sine formula" of spherical trigonometry, following on the vector method as easily as the "cosine formula" given before. It is interesting to note that the "sine formula" is, in this circle of ideas, but the statement of the triple expressibility of the volume of the parallelepipedon r₁, r₂, r₃, viz. as r₁Vr₂r₃ or r₂Vr₃r₁ or r₃Vr₁r₂. Other examples are left to the care of the reader.
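The derivation just given is easy to replay numerically. In the sketch below (our illustration; the sides s₂, s₃ and the angle a₁ are chosen arbitrarily) the vertices are built exactly as in the text, r₁Vr₂r₃ is compared with sin s₂ sin s₃ sin a₁, and the ratios of (19a) are checked after recovering the remaining parts of the triangle by the cosine formula:

```python
# Numerical replay of the sine formula (19a). The vertices are built
# as in the text: pole at vertex 1, first meridian along the side 12.
from math import sin, cos, acos, isclose

def dot(A, B):
    return sum(x*y for x, y in zip(A, B))

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

s2, s3, a1 = 0.8, 1.1, 0.6            # two sides and the angle at vertex 1
r1 = (1.0, 0.0, 0.0)
r2 = (cos(s3), sin(s3), 0.0)
r3 = (cos(s2), sin(s2)*cos(a1), sin(s2)*sin(a1))

vol = dot(r1, cross(r2, r3))          # r1 V r2 r3
assert isclose(vol, sin(s2)*sin(s3)*sin(a1))

# Recover the remaining side and angles by the cosine formula, then
# check sin a1 / sin s1 = sin a2 / sin s2 = sin a3 / sin s3.
s1 = acos(dot(r2, r3))
a2 = acos((cos(s2) - cos(s3)*cos(s1)) / (sin(s3)*sin(s1)))
a3 = acos((cos(s3) - cos(s1)*cos(s2)) / (sin(s1)*sin(s2)))
k = sin(a1) / sin(s1)
assert isclose(sin(a2) / sin(s2), k)
assert isclose(sin(a3) / sin(s3), k)
```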

8. Iteration of vectorial multiplication. There is but one more important formula to be noted in connection with the vector product of vectors, viz. a formula giving a convenient vector expansion of the result of repeated or iterated vector multiplication,

VA(VBC) or simply VAVBC,

which reads: having obtained the vector product of B, C, multiply it, again vectorially, by A. This ternary product, which occurs very often, is, of course, again a vector, to wit, perpendicular to A and to VBC; but the latter being itself perpendicular to B, C,

our new vector VAVBC is coplanar with B, C, so that we know beforehand that the result* will be of the form

VAVBC = βB + γC,

where β, γ are some scalars. Since the ternary product is perpendicular to A, we have β(AB) + γ(AC) = 0, so that

VAVBC = λ{B(CA) - C(AB)},

where λ is a scalar. It remains to determine its numerical value.

This can be done, for instance, in the following manner. First of all, A can always be assumed to be coplanar with B, C, since its part normal to B, C contributes nothing. Next, dividing both sides by ABC, the equation becomes

VaVbc = λ{b(ca) - c(ab)},

where a, b, c are the units of A, B, C. Now, multiply both sides scalarly by b, and notice that, by (19),

bVaVbc = (Vbc)(Vba) = sin (b, c) . sin (b, a).

Thus,

sin (b, c) . sin (b, a) = λ[cos (c, a) - cos (a, b) . cos (b, c)];

but, the three vectors being coplanar, we have

cos (c, a) = cos (b, a) . cos (b, c) + sin (b, a) . sin (b, c),

so that λ = 1.
]

The required formula is, therefore,

VAVBC = B(CA) - C(AB). (24)

As an exercise, the reader may verify it by an iterated application of the cartesian expansion (21) or (21a). Having once obtained this important formula, there will be no difficulty in dealing with quaternary vector products, as VDVAVBC, which becomes (CA)VDB - (AB)VDC, etc. But such products will hardly occur in practice.
A notable property of the above ternary product and of its two cyclical permutations is that

VAVBC + VBVCA + VCVAB = 0, (24a)

identically. For the six right-hand terms of (24) and of the two similar equations destroy themselves in pairs.

* The trivial case of B, C collinear can be discarded; for then VAVBC = 0.
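Formula (24) and the identity (24a) are easy to verify numerically; the following Python sketch (ours) does so for one arbitrary triple of vectors:

```python
# Sketch of the expansion (24), VAVBC = B(CA) - C(AB), and of the
# identity (24a). Vectors are plain 3-tuples.

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def dot(A, B):
    return sum(x*y for x, y in zip(A, B))

def scale(s, A):
    return tuple(s*x for x in A)

def add(A, B):
    return tuple(x + y for x, y in zip(A, B))

def sub(A, B):
    return tuple(x - y for x, y in zip(A, B))

A = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
C = (-2.0, 0.5, 1.0)

# (24): VAVBC = B(CA) - C(AB).
lhs = cross(A, cross(B, C))
rhs = sub(scale(dot(C, A), B), scale(dot(A, B), C))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))

# (24a): VAVBC + VBVCA + VCVAB = 0, identically.
total = add(add(cross(A, cross(B, C)),
                cross(B, cross(C, A))),
            cross(C, cross(A, B)))
assert all(abs(t) < 1e-12 for t in total)
```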

A particular case of (24) which often occurs is that in which C is equal to A and is a unit vector u, say. Then we have

VuVBu = B - (Bu)u, (24b)

whence we see also that VuVBu is the part of the vector B normal to u, in both size and direction. For (Bu)u is the part of B along u.
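The decomposition implied by (24b), namely that (Bu)u and VuVBu together restore B, can likewise be checked in a few lines (our sketch; the vectors are arbitrary):

```python
# Sketch of (24b): VuVBu is the part of B normal to the unit vector u,
# and (Bu)u the part along u; together they restore B.

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def dot(A, B):
    return sum(x*y for x, y in zip(A, B))

B = (3.0, -1.0, 2.0)
u = (0.0, 0.6, 0.8)                          # a unit vector: 0.36 + 0.64 = 1

normal_part = cross(u, cross(B, u))          # VuVBu
along_part = tuple(dot(B, u)*c for c in u)   # (Bu)u

restored = tuple(n + a for n, a in zip(normal_part, along_part))
assert all(abs(r - b) < 1e-12 for r, b in zip(restored, B))
# The normal part is indeed normal to u:
assert abs(dot(normal_part, u)) < 1e-12
```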
To close this section, and at the same time the essential part of the whole Vector Algebra, it remains to add but a few more remarks which will be useful in connection with problems often occurring in practice.
Let X be an unknown, and A, u two given vectors, the latter a unit vector. If we know of X only that

VXu = A, (a)

we cannot fully determine X. For to a solution of this equation we can add any vector mu (since Vuu = 0), and X + mu will again be a solution of this equation. In order to determine X uniquely we must have one more (scalar) datum. Let this be

Xu = σ, (b)

where σ is a given scalar. Then X is completely determined. In order to find its value explicitly in terms of the given A, u, σ, multiply the equation (a) vectorially by u; then, in virtue of (24),

X - (Xu)u = VuA,

and by (b),

X = σu + VuA, (c)

which is the required solution. This simple rule, (c), for solving the equations (a) and (b), will often be found helpful.
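The rule (c) is easily exercised numerically. In the sketch below (ours) we manufacture consistent data (a) and (b) from a known vector and then recover that vector by (c):

```python
# Sketch of the rule (c): given VXu = A and Xu = sigma, with u a unit
# vector, the unknown is X = sigma u + VuA.

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def dot(A, B):
    return sum(x*y for x, y in zip(A, B))

u = (0.0, 0.6, 0.8)                  # unit vector
X_true = (2.0, -1.0, 3.0)            # plays the part of the unknown X
A = cross(X_true, u)                 # datum (a): VXu = A
sigma = dot(X_true, u)               # datum (b): Xu = sigma

VuA = cross(u, A)
X = tuple(sigma*c + v for c, v in zip(u, VuA))   # the rule (c)
assert all(abs(x - t) < 1e-12 for x, t in zip(X, X_true))
```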

9. The Linear Vector Operator. Let R be a variable vector, that is to say, one that can assume in turn all possible sizes (tensors) and directions. Of each of these determined vectors we can speak as of the special value of the variable R. To have a good picture of such an abstract concept, imagine R as a straight, extensible and contractile string fixed at one of its ends at a permanent point O; then its free end-point P occupying in succession all possible points of space, OP will represent the various values of R. The vector R can, in such a connection, be advantageously called the position vector of the point or, if one prefers, of the particle P.
Now imagine that there is another particle P', and let its position vector, with the same origin O, be called R'. Let there be some mechanism, or else our own imagination, which to every chosen
position of P makes correspond a certain position of P'. This we may express by saying that to every value of R corresponds a certain value of R', by writing

R' = ϖR, (25)

and by calling R' a vector function of the variable vector R. If, as we assume, to every R corresponds but one R', determined in size and direction, we will say that R' is a monovalent function of R, and we will call ϖ a monovalent vector operator, the symbol of some operations to be performed on R in order to obtain R'. We can think of such operations in the algebraical, as well as in the physical sense of the word, as turning round the representative string, stretching or contracting it according to some more or less complicated prescription. It is needless to explain that an equation such as (25) is equivalent to three scalar equations: each of the components of R' equal to some function of, in general, all the three components of R.
Suppose now that R is represented as the sum A + B of some two vectors. In general the operations embodied in ϖ may be such that ϖ(A + B) is not the same thing as ϖA + ϖB. A good example of such an operator is that which converts an incident luminous ray into the refracted ray (cf. Simplified Method, quoted in Preface). But the operations represented by ϖ may also, in particular, be such that

ϖ(A + B) = ϖA + ϖB,

whatever the vectors A and B. If such be the case we call ϖ a distributive operator. An example of this kind is afforded by the "reflector," i.e. that operator which converts the incident ray into the reflected one. The simplest example of a distributive vector operator is, however, a scalar number σ used as a factor; for we have, of course,

σ(A + B) = σA + σB.

This operator is a pure stretcher or (if |σ| < 1) a contractor, and, if σ < 0, an invertor at the same time.
Leaving these examples, let us turn to the general distributive operator, of which we will only assume that it is a continuous operator, i.e. that ϖR is a continuous vector function of R. Such distributive operators have very far-reaching applications in many branches of geometry and physics. They are known better

under the name of linear vector operators, and were first introduced by the great Hamilton. In fact, it can easily be shown that the continuous and distributive operator ϖ, when applied to a vector R, yields another vector R' whose components are each a linear function of the components of R, whatever the triad of axes employed for the decomposition. For, if n be any integer positive scalar number, we have, in virtue of the assumed distributive property,

ϖ(nA) = nϖ(A),

and this property can easily be extended to negative and fractional values of n, and ultimately, by the often repeated limit-reasoning, to any real value of n. This being granted, let a, b, c be any non-coplanar unit vectors; then, whatever R, we can represent it by

R = xa + yb + zc,

where x, y, z are some scalars, the components of R. Thus, the corresponding R' will be

R' = ϖ(xa + yb + zc) = xϖa + yϖb + zϖc, (26)

and, therefore, the components of R', i.e. x' = aR', etc.,

x' = (aϖa)x + (aϖb)y + (aϖc)z,
y' = (bϖa)x + etc.;  z' = (cϖa)x + etc.,        (26a)

and these are linear, homogeneous functions of x, y, z, the components of R, the coefficients (aϖa), etc., being certain scalars (scalar products of a and ϖa, a and ϖb, and so on), which depend partly on the nature of the operator ϖ, and partly on the choice of the framework a, b, c.

From (26a) we see that the operator ϖ is fully determined if we give, for any chosen a, b, c, the nine coefficients

aϖa, aϖb, aϖc, bϖa, etc.,

which we will denote by

ϖ_aa, ϖ_ab, ϖ_ac, ϖ_ba, etc.,

respectively. And it is not difficult to see that these nine scalar data, which are sufficient, are also necessary to determine completely a linear vector operator.
We can express the same thing more simply by taking the vector
equation (26) instead of its components (26a). Let us rewrite (26),

taking for a, b, c any triad of normal vectors; then x, y, z stand for aR, etc., so that the equation is

R' = ϖa(aR) + ϖb(bR) + ϖc(cR). (26')

Thus we can say that the operator ϖ is fully determined if we know what it yields if applied to the three conventional vectors a, b, c, that is to say, if we are given the three vectors

ϖa, ϖb, ϖc.

Each of these implies 3 scalar data, so that in all we have again 9 data, as before. What we have, a moment ago, denoted by ϖ_aa, ϖ_ab, etc., are simply the cartesian components of these three vector data, taken along the axes a, b, c.
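Relative to a fixed normal framework, then, a linear vector operator is nothing but a 3 x 3 array of its nine coefficients, and (26') is ordinary matrix-by-vector multiplication. A small Python sketch (ours; the particular matrix is an arbitrary illustration):

```python
# Sketch: relative to fixed normal axes, a linear vector operator is a
# 3 x 3 array of its nine coefficients, and (26') is matrix-by-vector
# multiplication. The matrix W below is an arbitrary illustration.

W = [[2.0, 1.0, 0.0],    # row a: the coefficients (a.Wa, a.Wb, a.Wc)
     [0.5, 3.0, -1.0],   # row b
     [1.0, 0.0, 1.5]]    # row c

def apply(W, R):
    """R' with components (26a): linear homogeneous functions of R."""
    return tuple(sum(W[r][k]*R[k] for k in range(3)) for r in range(3))

# The defining distributive property: W(R + S) = WR + WS.
R = (1.0, -2.0, 0.5)
S = (3.0, 0.0, 2.0)
RS = tuple(x + y for x, y in zip(R, S))
lhs = apply(W, RS)
rhs = tuple(x + y for x, y in zip(apply(W, R), apply(W, S)))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

Applying the operator to the unit vector a = (1, 0, 0) picks out the first column, i.e. the vector ϖa, in agreement with the remark above.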

Hitherto we have spoken of the most general linear vector operator. Let us now explain an important subdivision of this vast class of operators. Let A and B be any two vectors. We can take ϖB and multiply it scalarly by A, or first form ϖA and then multiply it so by B. In this way we should obtain the two scalars,

AϖB and BϖA.

Now, in general, these two numbers will be different from one another. But the operator ϖ may happen to be such that they are equal, i.e. that, for any A, B,

AϖB = BϖA. (27)

If such be the case we call ϖ a self-conjugate operator. By this definition (27) we shall also have ϖ_ab = ϖ_ba, etc., so that a self-conjugate operator has but six mutually independent (that is, six independently prescribable) coefficients, or constituents,

ϖ_aa   ϖ_ab   ϖ_ac
       ϖ_bb   ϖ_bc
              ϖ_cc

This table, after the insertion of ϖ_ba = ϖ_ab, etc., at the vacant places, is symmetrical with respect to its diagonal; whence also the name of symmetrical operator, used as a synonym for the self-conjugate operator.

This being a sub-class of ϖ, the general operator, the remainder of the class of ϖ's, for which

ϖ_ab ≠ ϖ_ba, etc.,

are called non-symmetrical or asymmetrical operators. If ϖ be such an operator, and if ϖ' be another operator such that

AϖB = Bϖ'A, (28)

for any pair A, B of vectors, then ϖ' is called the conjugate of ϖ. Obviously also ϖ is the conjugate of ϖ'. Thus, if ϖ'_ab, etc., be the coefficients of ϖ', we have, remembering that ϖ_ab stands for aϖb, etc.,

ϖ'_ab = ϖ_ba, ϖ'_bc = ϖ_cb, ϖ'_ca = ϖ_ac, (28a)

while, of course, ϖ'_aa = ϖ_aa, etc. Thus we see also that to every operator ϖ there is one (and only one) conjugate ϖ'. In particular, if ϖ is a symmetrical operator, its conjugate is identical with it, whence "self-conjugate" as a synonym of symmetrical operators. In harmony with this, (27) is but a special case of (28), viz. for ϖ' = ϖ.

Let us use ϖ for any linear vector operator, and ω for symmetrical operators only. (In fact, without the circumflex this last letter of the Greek alphabet has some symmetry.)

Manifestly the symmetrical operator ω will be a great deal simpler than the asymmetrical ϖ. It is, therefore, very agreeable to see that any ϖ can be split into an ω and some other asymmetrical, but very simple, operator which is called an antisymmetrical (or skew-) operator and which we will denote by α. The latter is defined most conveniently by saying that, for any A, B,

AαB = -BαA, ∴ AαA = 0, (29)

and therefore also α_ab = -α_ba, etc., and α_aa = 0, etc., so that the table for such an operator becomes

  0       α_ab     α_ac
-α_ab      0       α_bc        (29a)
-α_ac    -α_bc      0

which justifies the name. The announced property can shortly be written

ϖ = ω + α,

which is a symbolic short for

ϖR = ωR + αR,

where R is any vector operand. The said property is easily proved.

In fact, let ϖ' be the conjugate of the given operator ϖ. Then we have, identically,

ϖ = ½(ϖ + ϖ') + ½(ϖ - ϖ'). (30)

But the first term represents a symmetrical operator, because, by (28),

A(ϖ + ϖ')B = AϖB + Aϖ'B = Bϖ'A + BϖA = B(ϖ + ϖ')A,

which is precisely the definition (27) of a symmetric operator. And the second term is antisymmetric, for

A(ϖ - ϖ')B = Bϖ'A - BϖA = -B(ϖ - ϖ')A,

as in (29), the definition of antisymmetric operators. This proves the statement, without the slightest need of splitting ϖ into its nine constituents ϖ_aa, etc.

We thus see that every linear vector operator can be written

ϖ = ω + α, (31)

where its symmetrical part is ω = ½(ϖ + ϖ') and its antisymmetrical part α = ½(ϖ - ϖ').
If the reader so desires he can introduce the nine coefficients of these operators. Then

ω_ab = ½(ϖ_ab + ϖ_ba) = ω_ba,

proving again that ω is self-conjugate, and

α_aa = 0, etc., α_ab = ½(ϖ_ab - ϖ_ba) = -α_ba,

proving that α is antisymmetric.

Turning now to the antisymmetric operator α we can see from its definition (29) that it has a very simple meaning. In fact, let R be the vector operated upon. Then, by the second of (29), whatever the value of R, αR is a vector normal to R. Now, this condition can be satisfied by putting

αR = VwR,

where w is some fixed vector. But such being the case, we have also, for any A, B,

AαB = AVwB = -BVwA = -BαA,

so that the general definition (29) is completely satisfied. Thus, the antisymmetric operator is, dropping the arbitrary operand,

α = Vw;

in words, to operate with α is to multiply vectorially by a certain vector w.

Ultimately, therefore, we can write, instead of (31), for any linear vector operator,

ϖ = ω + Vw; (32)

the symmetric operator ω is half the sum of ϖ and of its conjugate ϖ', while w is a certain vector characterizing the given operator ϖ. Notice that ω, being symmetric, implies 6 independent scalar data, and w, being an ordinary vector, 3 more, making in all 9, as before. Obviously, ω and w can be prescribed independently of one another, and these two data (equivalent to 6 + 3 = 9 scalar ones) fully determine the asymmetric operator ϖ.

If we desire to express w in terms of the coefficients ϖ_ab, etc., we can easily do so. For, from the table or the "matrix" (29a) we see that

αR = a(α_ab R_b + α_ac R_c) + etc.,

and since α_ab = ½(ϖ_ab - ϖ_ba), and so on, while αR = VwR, we find without difficulty that, if a, b, c be a right-handed system,

2w = a(ϖ_cb - ϖ_bc) + b(ϖ_ac - ϖ_ca) + c(ϖ_ba - ϖ_ab), (33)

which is the required expansion of w.
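The splitting (32), together with the expansion (33) of w, can be verified numerically. In the Python sketch below (ours; the matrix W is an arbitrary illustration) the rows and columns of a 3 x 3 array stand for the coefficients ϖ_ab, etc.:

```python
# Sketch of the splitting (32): W = omega + Vw, with omega the half-sum
# of W and its conjugate (the transposed array) and 2w built from the
# skew coefficients as in (33).

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

W = [[2.0, 1.0, 0.0],
     [0.5, 3.0, -1.0],
     [1.0, 0.0, 1.5]]

# omega = (W + W')/2, the symmetric part.
omega = [[(W[r][k] + W[k][r]) / 2 for k in range(3)] for r in range(3)]
# (33): 2w = a(w_cb - w_bc) + b(w_ac - w_ca) + c(w_ba - w_ab).
w = ((W[2][1] - W[1][2]) / 2,
     (W[0][2] - W[2][0]) / 2,
     (W[1][0] - W[0][1]) / 2)

def apply(M, R):
    return tuple(sum(M[r][k]*R[k] for k in range(3)) for r in range(3))

R = (1.0, -2.0, 0.5)
lhs = apply(W, R)                                # the full operator
rhs = tuple(o + v for o, v in zip(apply(omega, R), cross(w, R)))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```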


Having thus shown that the antisymmetric part of any operator ϖ is simply a vectorial multiplier Vw, it will henceforth be enough to study the remaining part of ϖ, that is to say, the symmetrical operator ω.


Principal axes of ω. Let R be the operand. Then the vector R' = ωR will in general differ from R not only in size but in direction as well. But if R assumes certain particular directions, then it may happen that R' coincides with R in direction, if not in size. Let x, a unit vector, represent such a privileged direction. Then, ωR being a linear function, the inverse direction -x will, obviously, partake of the same privilege. Such particular directions ±x are called principal axes of the symmetrical operator ω, both +x and -x counting for one axis.

This is merely a definition. Let us now see whether such axes exist at all, and how many of them do, and what are their mutual relations. Let us start with the last question. Suppose then that there are two different principal axes x and y. Then, by the very definition of such axes,

ωx = ω₁x, ωy = ω₂y, (34)
where ω₁, ω₂ are some ordinary scalar numbers, which are called the principal values of ω, corresponding to these axes x, y. Multiply the first equation scalarly by y, and the second by x, and subtract them from one another. The result will be

(ω₁ - ω₂)xy = yωx - xωy.

But, the operator ω being symmetrical, xωy = yωx. Thus

(ω₁ - ω₂)xy = 0,

and if ω₁ ≠ ω₂, we have xy = 0, that is to say, x ⊥ y. And should it happen that ω₁ = ω₂, i.e.

ωx = ω₁x, ωy = ω₁y,


then A, /a being any two scalar numbers,

w (Ax + fxy) = a)i( Ax + jxy)


But Ax + /ty is any vector in the plane x, y. We thus see that if :

there are two principal axes x, which correspond different


y to \

principal values cdj, Wg, these axes must be normal to one another, i

And if Wi = w2, then every direction in the plane x, y is also a :

principal axis. i

Suppose now there is still a third principal axis z not coplanar with x, y, and let ω₃ be its corresponding principal value, so that

ωz = ω₃z.

Then, reasoning as before, we shall see that if ω₁, ω₂, ω₃ are all different, z will be normal to x and also to y. And if ω₁ = ω₂ = ω₃, then every direction whatever will be a principal axis with the same principal value, in which case the operator ω degenerates into an ordinary scalar factor.

Thus, in the most general case the symmetrical operator ω can have three different,* mutually perpendicular principal axes x, y, z; and only three. Because the fourth, if it existed and carried a new ω₄, would have to be normal to those three, which, in our space, is nonsense; and if ω₄ were equal to ω₁, say, then the whole plane passing through the fourth and the first axis would consist of principal axes, and since this plane would cut the y, z plane, ω₂ and ω₃ could not be different from one another, against the assumption.

* I.e. such to which correspond different principal values.



Having thus settled the question about the number of the possible different principal axes of ω and their mutual orientation, it remains to see whether they exist, or better, to find them. The technical side of the latter problem will depend upon the manner how ω is given. Suppose it is given through its six different coefficients

ω_aa, ω_bb, ω_cc; ω_ab, ω_bc, ω_ca,

with respect to some arbitrarily fixed framework of normal unit vectors a, b, c, or, which is the same thing, that the three vectors

ωa, ωb, ωc

are given, say, equal to A, B, C, respectively, so that (ω being symmetrical) Ab = Ba, etc. Let x be a principal axis and n the corresponding principal value (both to be found). Then if x₁, x₂, x₃ are the direction cosines of x with respect to a, b, c, so that

x = x₁a + x₂b + x₃c,

we have

ωx = x₁ωa + x₂ωb + x₃ωc = x₁A + x₂B + x₃C,

and since ωx = nx,

x₁A + x₂B + x₃C = n(x₁a + x₂b + x₃c),

or

x₁(A - na) + x₂(B - nb) + x₃(C - nc) = 0. (35)

From this equation we see that the three vectors A - na, etc., are coplanar, so that the volume of the parallelepipedon constructed upon them is nil, i.e.

(A - na)V(B - nb)(C - nc) = 0. (36)

Since A, B, C are given, this is a cubic equation for the unknown n. Multiply it out and remember that a = Vbc, and therefore aVbc = 1. Then the result will be

n³ - n²(Aa + Bb + Cc) + n(aVBC + bVCA + cVAB) - AVBC = 0. (36a)

Each of the coefficients of this cubic equation for the principal values of the operator ω has a simple geometric meaning: the first is the sum of the projections of the vectors A = ωa, etc., upon the conventional a, b, c; the second the sum of the volumes of the parallelepipeda a, B, C, etc.; and the last is the volume of the parallelepipedon A, B, C. At the same time we see that these three expressions are invariants of ω, i.e. independent of the choice of the reference system a, b, c. In fact, if n₁, n₂, n₃ be the principal
values of ω, which manifestly are intrinsic properties of the operator, independent of the reference framework, we have, by (36a),

Aa + Bb + Cc = n₁ + n₂ + n₃,
aVBC + bVCA + cVAB = n₂n₃ + n₃n₁ + n₁n₂,        (37)
AVBC = n₁n₂n₃,

where A = ωa, etc. These are very important formulae, exhibiting the three invariants of the symmetrical operator ω.
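The invariance expressed by (37) can be tested by building ω from prescribed principal values and tilted principal axes and then evaluating the three invariants in the untilted reference frame. A Python sketch (ours; the numbers are an arbitrary illustration):

```python
# Sketch of (37): build omega from prescribed principal values n1, n2,
# n3 and tilted principal axes x, y, z, then evaluate the three
# invariants in the frame a = i, b = j, c = k.
from math import sin, cos, isclose

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

def dot(A, B):
    return sum(p*q for p, q in zip(A, B))

def triple(A, B, C):
    return dot(A, cross(B, C))

n1, n2, n3 = 2.0, -1.0, 0.5
t = 0.7                                   # tilt of the axes in the i, j plane
x = (cos(t), sin(t), 0.0)
y = (-sin(t), cos(t), 0.0)
z = (0.0, 0.0, 1.0)

def omega(R):
    """omega R = n1 x(xR) + n2 y(yR) + n3 z(zR)."""
    return tuple(n1*dot(x, R)*x[k] + n2*dot(y, R)*y[k] + n3*dot(z, R)*z[k]
                 for k in range(3))

a, b, c = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
A, B, C = omega(a), omega(b), omega(c)

assert isclose(dot(A, a) + dot(B, b) + dot(C, c), n1 + n2 + n3)
assert isclose(triple(a, B, C) + triple(b, C, A) + triple(c, A, B),
               n2*n3 + n3*n1 + n1*n2)
assert isclose(triple(A, B, C), n1*n2*n3)
```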

Now, if only A, B, C are real, as we assume, all these invariants, i.e. the coefficients of the cubic (36a), are real. That equation has, therefore, at least one real root. Let this be n₁, and let us take the corresponding principal axis* as our reference axis a. Then

A = ωa = n₁a; ∴ bVCA = n₁Cc, cVAB = n₁Bb,

and the left-hand member of (36a) becomes at once

n³ - n²n₁ + (nn₁ - n²)(Bb + Cc) + (n - n₁)aVBC,

which is, as it should be, divisible by n - n₁, leaving for the remaining two principal values n₂, n₃ the quadratic

n² - n(Bb + Cc) + aVBC = 0,

which gives

n₂,₃ = ½(Bb + Cc) ± √{¼(Bb + Cc)² - aVBC}, (38a)

or, in terms of the coefficients ω_bb = bB, etc., since

aVBC = ω_bb ω_cc - ω_bc²,

n₂,₃ = ½(ω_bb + ω_cc) ± √{¼(ω_bb - ω_cc)² + ω_bc²}, (38)

so that, if only all the coefficients ω are real, these two principal values and, therefore, also the corresponding principal axes are real. That they form with the first axis a normal system we already know.

We have written down the two roots (38) in the assumption that ω was given by prescribing its coefficients ω_ab or the vectors A, B, C, with respect to an arbitrary framework a, b, c. But, as a matter of fact, this expansion of the roots is superfluous. For, having taken a as one of the principal axes of ω, we know beforehand that b, c will be its remaining two axes, i.e. that

B = ωb = n₂b, and C = ωc = n₃c.

* Whose direction cosines with respect to any a, b, c might at once be determined from (35) by taking in it n = n₁.

Now, with these values, we have aVBC = n₂n₃aVbc = n₂n₃, so that (38a) becomes

n₂,₃ = ½(n₂ + n₃) ± √{¼(n₂ - n₃)²},

which is, as it should be, an identity. Thus, the only necessary thing was to state that the cubic (36a) has at least one real root, and this was immediately clear.
Having thus ascertained the general properties of the principal axes of ω, let us take them as our (natural) reference system a, b, c, which we will now call i, j, k. Then, n₁, n₂, n₃ being the corresponding principal values, the most general symmetrical linear vector function will be

ωR = n₁i(iR) + n₂j(jR) + n₃k(kR),

that is, n₁ times the first component of R along i plus, etc., or, using the dot, instead of brackets, as separator,

ωR = n₁i . iR + n₂j . jR + n₃k . kR,

or, dropping the operand R, which it is useless to repeat so many times,

ω = n₁i . i + n₂j . j + n₃k . k. (39)

Thus the symmetrical operator assumes the form of what is called (after Gibbs) a dyadic, which is a polynomial, in our case a trinomial, of dyads such as n₁i . i, etc. It will be well to say a few words on these useful mathematical beings.


Dyads and Dyadics. The dyads appearing in (39), which, apart from the scalar factors (calling for no explanations), are of the form i . i, are but special cases of the general dyad, which is

a . b,

a, b being any two vectors, in general, that is, not coinciding in direction; the first vector is called the antecedent, and the second, the consequent of the dyad. The dyad as an operator can be used either as a prefactor of the operand, say

a . bR, meaning a(bR),

or also as a postfactor,

Ra . b, which means (Ra)b.

As Heaviside says somewhere: "A cart may either be pulled or pushed." In fact, this two-fold possibility of attelage of the operator turns out to be very advantageous.

If a, b are not collinear, then a . bR is, of course, altogether different from Ra . b. Such is the case with the most general (asymmetric) dyad. But if the antecedent and consequent happen to be collinear, as in the case of (39), then the dyad, applied to any vector, yields the same result whether it acts as a pre- or a postfactor. Such dyads are called symmetrical dyads. They are all of the form

σa . a,

where a is a unit vector, and σ a scalar. Sums of such dyads are called symmetrical dyadics. Thus, the most general symmetrical or self-conjugate linear vector operator ω may be represented as a (trinomial) symmetrical dyadic. Such, in fact, is (39).

Consider any, generally asymmetric, dyadic, say a trinomial one,

φ = a . x + b . y + c . z,

which is a certain linear operator. Interchange the antecedents
and the consequents; then the resulting operator or dyadic

φ' = x . a + y . b + z . c

is called the conjugate of φ. For such it is according to the previous
definition. In fact, if R, S be any two vectors, we have

Ra . xS = Sx . aR, etc.,

since the scalar product is commutative. Thus also the product
of R into φS is seen to be identical with that of S into φ'R, as
in the definition (28). Again, the product of Rφ into S is identical
with that of R into φS. Thus no brackets or other separators
are needed, and the last property can be written simply

RφS = Sφ'R,

valid for any dyadic φ, and its conjugate φ'.
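The property RφS = Sφ'R can be verified numerically. This is a sketch of mine, not the author's: a trinomial dyadic is held as a list of (antecedent, consequent) pairs, and the conjugate is formed simply by swapping each pair; `dot` and `apply_dyadic` are assumed helper names.

```python
# Check RφS = Sφ'R for a trinomial dyadic φ = a.x + b.y + c.z
# and its conjugate φ' = x.a + y.b + z.c.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def apply_dyadic(pairs, R):
    # pairs = [(antecedent, consequent), ...]; result = Σ (cons R) ant
    out = [0.0, 0.0, 0.0]
    for ant, cons in pairs:
        s = dot(cons, R)
        for i in range(3):
            out[i] += s * ant[i]
    return tuple(out)

phi = [((1.0, 2.0, 0.0), (0.0, 1.0, 1.0)),
       ((0.0, 1.0, 3.0), (2.0, 0.0, 1.0)),
       ((1.0, 0.0, 1.0), (1.0, 1.0, 0.0))]
phi_conj = [(cons, ant) for ant, cons in phi]   # interchange the pairs

R, S = (1.0, -2.0, 4.0), (3.0, 0.0, -1.0)
lhs = dot(R, apply_dyadic(phi, S))       # R(φS)
rhs = dot(S, apply_dyadic(phi_conj, R))  # S(φ'R)
print(lhs, rhs)   # 68.0 68.0
```

The equality holds for any choice of R and S, since it reduces term by term to the commutativity of the scalar product.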

We have already seen that the self-conjugate linear operator
ω can be represented as a symmetrical dyadic. We may still
mention that the general linear vector operator ϖ can always be
reduced to what is called a normal (trinomial) dyadic, i.e.

ϖ = σ₁l . i + σ₂m . j + σ₃n . k,   (40)

where σ₁, σ₂, σ₃ are scalars (either all positive or all negative), and
both the antecedents l, m, n and the consequents i, j, k form
normal, say right-handed, systems of unit vectors. If these are
distinct from one another, we have an asymmetric operator ϖ,
and if they coincide, we have a symmetric operator ω, as before.

The conjugate of the general ϖ will be

ϖ' = σ₁i . l + σ₂j . m + σ₃k . n.   (41)

The special symmetric dyadic

ι = i . i + j . j + k . k

leaves, of course, any vector operand R intact, and is, therefore,
called an idemfactor. It is also, for all purposes, equivalent to 1.
And if σ be any scalar, then σι as an operator is equivalent to σ
itself, an ordinary numerical factor. Thus, expressions such
as σ + a . b will again be dyadics, and require no further explanations.
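A quick numerical sketch (mine, with assumed helper names) confirms the behaviour of the idemfactor: ι applied to any R returns R unchanged, and σι acts as the bare scalar σ.

```python
# The idemfactor ι = i.i + j.j + k.k leaves any operand intact,
# and σι multiplies the operand by the scalar σ.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def apply_dyadic(pairs, R):
    # pairs = [(antecedent, consequent), ...]
    out = [0.0, 0.0, 0.0]
    for ant, cons in pairs:
        s = dot(cons, R)
        for k in range(3):
            out[k] += s * ant[k]
    return tuple(out)

i, j, k = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
idem = [(i, i), (j, j), (k, k)]

R = (4.0, -1.0, 2.5)
print(apply_dyadic(idem, R))   # (4.0, -1.0, 2.5): R unchanged

sigma = 3.0
sigma_idem = [(tuple(sigma * x for x in ant), cons) for ant, cons in idem]
print(apply_dyadic(sigma_idem, R))   # (12.0, -3.0, 7.5): same as 3R
```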

To close this section it will be enough to make a few remarks
on the "multiplication," i.e. the successive application, of dyadics.
If φ = a . b and ψ = c . d be two dyads, and R any vector operand,
we have obviously

φ(ψR) = (φψ)R,

where φψ is the dyad a(bc) . d = (bc)a . d. Similarly, if χ be a
third dyad, we have

φψ(χR) = φ(ψχ)R = (φψ)χR,

the associative property, so that each of these expressions can be
simply written φψχR. And the same is easily seen to hold if
φ, ψ, etc., stand for binomial or polynomial dyadics. Again,
since the scalar product of vectors is distributive, we have for
any dyadic φ and any vectors R, S,

φ(R + S) = φR + φS,

and also, if ψ, χ be two more dyadics, the operational equation

φ(ψ + χ) = φψ + φχ,

and also

(ψ + χ)φ = ψφ + χφ.

In short, the distributive property holds for the multiplication of
any polynomials of dyads, and therefore of dyadics. Such products
can, therefore, be expanded as in ordinary algebra, the only neces-
sary precaution being to keep the order of the operators and of
the constituents of the dyads intact, since (in general) the com-
mutative property does not hold. Thus, for instance,

(a . b + c . d)(e . f + g . h) = (be)a . f + (bg)a . h + (de)c . f + (dg)c . h.
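The expansion just written can be checked against the direct successive application of the two dyadics. This is my own sketch, not the book's: each dyadic is a list of (scalar, antecedent, consequent) triples, so the scalar coefficients (be), (bg), etc., can be carried explicitly.

```python
# Verify (a.b + c.d)(e.f + g.h) = (be)a.f + (bg)a.h + (de)c.f + (dg)c.h
# by comparing the expanded dyadic with successive application.

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def apply_dyadic(terms, R):
    # terms = [(scalar, antecedent, consequent), ...]
    out = [0.0, 0.0, 0.0]
    for s, ant, cons in terms:
        c = s * dot(cons, R)
        for k in range(3):
            out[k] += c * ant[k]
    return tuple(out)

a, b = (1.0, 0.0, 2.0), (0.0, 1.0, 1.0)
c, d = (2.0, 1.0, 0.0), (1.0, 0.0, 3.0)
e, f = (0.0, 2.0, 1.0), (1.0, 1.0, 0.0)
g, h = (3.0, 0.0, 1.0), (0.0, 1.0, 2.0)
R = (1.0, -1.0, 2.0)

first = [(1.0, a, b), (1.0, c, d)]      # a.b + c.d
second = [(1.0, e, f), (1.0, g, h)]     # e.f + g.h
successive = apply_dyadic(first, apply_dyadic(second, R))

expanded = [(dot(b, e), a, f), (dot(b, g), a, h),
            (dot(d, e), c, f), (dot(d, g), c, h)]
print(successive, apply_dyadic(expanded, R))   # (39.0, 18.0, 6.0) twice
```

Note that the order of antecedent and consequent in each resulting dyad, such as a . f rather than f . a, is what makes the two computations agree.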



Vectors not separated by dots are fused into scalar products, as
(be), (bg), etc., and here of course the order is irrelevant; but it
must be carefully preserved in the resulting dyads, such as a . f,
not f . a (unless a, f are collinear). Apart from this precaution,
the multiplication of dyadics is as easy and convenient as the
common multiplication of polynomials, and it will be found to
render inestimable services in the treatment of many geometrical
and physical, especially optical, problems. Some illustrations of
the latter kind will be found in the "Simplified Method, etc.,"
mentioned before. The final result of such multiplications of two
or more polynomials will be a polynomial of dyads, say

A . B + C . D + E . F + G . H + etc.;

but since each of these antecedents and consequents can be
expressed in the form xa + yb + zc, where a, b, c are any non-coplanar
unit vectors, any such result can, in the first place, be reduced to
a sum of nine dyads, viz.

σ₁₁a . a + σ₂₂b . b + σ₃₃c . c + σ₂₃b . c + σ₃₂c . b + ... + σ₂₁b . a,

and it can be proved that this can always be reduced, by a proper
choice of two orthogonal systems, i, j, k and l, m, n, to the normal
form (40), which is that of any, generally asymmetric, linear
operator ϖ. Ultimately, the latter can with advantage be split
into a symmetric operator ω and the simple operator Vw, as
in (32).

10. Hints on Differentiation of Vectors. — The concepts of
differentiation and integration as applied to vectors do not
belong to the subject proper of this booklet, which is Vector
Algebra. Yet a few elementary remarks on the differentiation
of vectorial expressions may be here added, as they are likely to
be useful to some readers, and as they do by no means require
much space.

Let R be a variable vector. To have a possibly desirable picture,
think of R as the position-vector of a particle moving about in
space, round a fixed origin. Let t be any independent scalar
variable, say the time. Then, ΔR being the vector increment,
i.e. the vector drawn from the position of the particle at the instant
t to that at a later date t + Δt, the quotient ΔR/Δt will be a certain
vector, having a definite tensor (size) and a definite direction.


We may call it provisionally the average vector-velocity of
the particle. If this quotient (a vector) tends, with indefi-
nitely decreasing Δt, to some definite limit, definite both in size
and direction, we call this limit-vector the derivative or the
fluxion of R with respect to t (or the vector velocity of the
particle), and denote it by dR/dt or Ṙ. In short symbols

dR/dt = Ṙ = Lim ΔR/Δt.

This vector will, in our illustration, be tangential to the orbit of
the particle, and its tensor will represent the particle's speed.
From this definition it follows at once that

d(R + S)/dt = Ṙ + Ṡ,

where R, S are any vector functions of the variable t. And, if
r be the unit of R, so that R = Rr, we have of course

Ṙ = Ṙr + Rṙ.

Again, since the scalar product of two vectors is distributive, so
that Δ(RS) = RΔS + SΔR plus terms of higher order, we have

d(RS)/dt = ṘS + RṠ.

In particular, if r be a unit vector, so that r² = 1, we have, by
differentiating the latter condition, rṙ = 0, so that ṙ ⊥ r, which is
also an obvious property. Similarly, for the vector product, which
again is distributive,

d/dt VRS = VṘS + VRṠ,

the only precaution being to preserve the order of the factors, or
— if this be inverted — to change the sign of the product in question.
In quite the same way we have

d/dt AVBC = ȦVBC + AVḂC + AVBĊ,

and so forth.
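These product rules can be illustrated by a finite-difference check. The sketch below is my own, not the author's: two concrete vector functions R(t), S(t) are chosen arbitrarily, central differences approximate the left-hand sides, and the symbolic right-hand sides are evaluated directly.

```python
# Finite-difference check of d(RS)/dt = ṘS + RṠ and
# d/dt VRS = VṘS + VRṠ for vector functions of a scalar t.
import math

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def cross(u, v):
    # the vector product VRS in the book's notation
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def R(t):    return (math.cos(t), math.sin(t), t)
def Rdot(t): return (-math.sin(t), math.cos(t), 1.0)
def S(t):    return (t * t, 1.0, math.sin(t))
def Sdot(t): return (2 * t, 0.0, math.cos(t))

t, h = 0.7, 1e-6

# scalar product rule
num = (dot(R(t + h), S(t + h)) - dot(R(t - h), S(t - h))) / (2 * h)
sym = dot(Rdot(t), S(t)) + dot(R(t), Sdot(t))
print(abs(num - sym) < 1e-6)   # True

# vector product rule, component by component
num_v = tuple((cross(R(t + h), S(t + h))[k] - cross(R(t - h), S(t - h))[k]) / (2 * h)
              for k in range(3))
sym_v = tuple(cross(Rdot(t), S(t))[k] + cross(R(t), Sdot(t))[k]
              for k in range(3))
print(all(abs(num_v[k] - sym_v[k]) < 1e-6 for k in range(3)))  # True
```

In the vector-product line the order of the factors is preserved throughout, as the text requires; reversing it would change the sign of every term.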
Even the case of linear vector functions, such as ϖR, does not
call for lengthy explanations. If not only the operand R, but
also the nature of the operator ϖ varies with t, we have

d(ϖR)/dt = ϖ̇R + ϖṘ,

since ϖ is distributive. Here ϖ̇ is the derivative of the operator.
If, for instance, ϖ is represented as a dyadic, say

A . B + C . D + E . F,

we have

ϖ̇ = Ȧ . B + A . Ḃ + ... + E . Ḟ.

And if the form ω + Vw is used, we have

ϖ̇ = ω̇ + Vẇ,

where ω̇ can again be expanded, as the derivative of a symmetrical
dyadic. It is scarcely necessary to add any further explanations.
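The rule d(ϖR)/dt = ϖ̇R + ϖṘ admits the same kind of finite-difference check. In this sketch of mine the operator is the single time-varying dyad ϖ(t) = A(t) . B(t), so that its derivative is ϖ̇ = Ȧ . B + A . Ḃ; the function names are assumptions for the example.

```python
# Check d(ϖR)/dt = ϖ̇R + ϖṘ for a time-varying dyad ϖ(t) = A(t).B(t).
import math

def dot(u, v):   return sum(p * q for p, q in zip(u, v))
def scale(s, u): return tuple(s * x for x in u)
def add(u, v):   return tuple(x + y for x, y in zip(u, v))

def A(t):  return (math.cos(t), t, 1.0)
def Ad(t): return (-math.sin(t), 1.0, 0.0)
def B(t):  return (t, math.sin(t), t * t)
def Bd(t): return (1.0, math.cos(t), 2 * t)
def R(t):  return (1.0, t, math.exp(t))
def Rd(t): return (0.0, 1.0, math.exp(t))

def varpi(t, v):
    # ϖ(t) v = A (Bv)
    return scale(dot(B(t), v), A(t))

def varpi_dot(t, v):
    # ϖ̇ v = Ȧ(Bv) + A(Ḃv), the derivative of the dyad
    return add(scale(dot(B(t), v), Ad(t)), scale(dot(Bd(t), v), A(t)))

t, h = 0.4, 1e-6
num = tuple((varpi(t + h, R(t + h))[k] - varpi(t - h, R(t - h))[k]) / (2 * h)
            for k in range(3))
sym = add(varpi_dot(t, R(t)), varpi(t, Rd(t)))
print(all(abs(num[k] - sym[k]) < 1e-5 for k in range(3)))   # True
```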
INDEX
Addition of vectors, 3-7
algebraic sum, 7
antecedent, of dyad, 35
antisymmetrical operators, 29
area, of parallelogram, 17
associativity, of addition, 6
—— of dyadics, 37
asymmetrical operators, 29
autoproduct, scalar, 13
axes, of operator, 31-35

Chain of vectors, 3
closed chains, 6
coinitial vectors, 3
collinear vectors, 7
commutativity of addition, 6
—— of scalar product, 12
components, 8
conjugate operators, 29
consequent, of dyad, 35
constituents, of vector operator, 28
continuous operators, 26
coplanar vectors, 20
cosine formula, 16

Derivative of vector, 39
—— of operator, 40
determinantal form of vector product, 22
difference of vectors, 10
differentiation of vectors, 39
distributivity, of dyadics, 37
—— of linear vector operators, 26
—— of scalar product, 14
—— of vector product, 20
dyads, and dyadics, 35-38

Equality of vectors, defined, 2-3

Free vectors, 2
function, vector-, 26

Gibbs, 35

Heaviside, 35

Idemfactor, 37
invariants, of operator, 33
iterated multiplication, 23-25

Linear vector operator, 27
localized vectors, 2

Multiple of a vector, 7
multiplication of dyadics, 37-38

Negative factor, 7
nil vector, 6
normal and longitudinal parts of a vector, 36
normal form of dyadic, 36

Operators, 26
origin, of vector, 1

Parallelogram, and vector sum, 5
polar coordinates, 9
position vector, 25
postfactor, 35
prefactor, 35
principal axes of operator, 31
—— values, 32
product of vectors, scalar, 12
—— vectorial, 17
projection, 14
Pythagoras' theorem, 15

Reference system, 9
reflector, 26
refraction, 26
right-handed system, 18

Scalars, 1
scalar product of vectors, 11-13
self-conjugate operators, 28
separators, 14
sine formula, 23
skew operators, 29
spherical trigonometry, 16, 23
square of a vector, 13
stretcher, 26
subtraction of vectors, 10-11
sum of vectors, 3
symmetrical dyads, 36
—— operators, 28

Tensor, 1
translation, 3

Unit vectors, 1, 8

Values, principal, of operator, 32
vector, defined, 1
vector product, defined, 17
volume, of parallelepipedon, 19, 22

PRINTED IN GREAT BRITAIN BY ROBERT MACLEHOSE AND CO. LTD.
AT THE UNIVERSITY PRESS, GLASGOW.