Mathematical Geosciences
Hybrid Symbolic-Numeric Methods

Joseph L. Awange · Béla Paláncz · Robert H. Lewis · Lajos Völgyesi
Joseph L. Awange
Spatial Sciences
Curtin University
Perth, WA
Australia
Béla Paláncz
Budapest University of Technology and
Economics
Budapest
Hungary
Robert H. Lewis
Fordham University
New York, NY
USA
Lajos Völgyesi
Budapest University of Technology and
Economics
Budapest
Hungary
ISBN 978-3-319-67370-7
ISBN 978-3-319-67371-4 (eBook)
https://doi.org/10.1007/978-3-319-67371-4
Library of Congress Control Number: 2017953801
© Springer International Publishing AG 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
Hybrid symbolic-numeric computation (HSNC, for short) is a large and growing
area at the boundary of mathematics and computer science, devoted to the study and
implementation of methods that mix symbolic with numeric computation.
As the title suggests, this is a book about some of the methods and algorithms
that benefit from a mix of symbolic and numeric computation. Three major areas of
computation are covered herein. The first part discusses methods for computing all
solutions to a system of polynomials. Purely symbolic methods, e.g., via Gröbner bases, tend to suffer from algorithmic inefficiencies, and purely numeric methods
such as Newton iterations have trouble finding all solutions to such systems. One
class of hybrid methods blends numerics into the purely algebraic approach, e.g.,
computing numeric Gröbner bases or Dixon resultants (the latter being extremely
efficient, e.g., for elimination of variables). Another mixes symbolic methods into
more numerical approaches, e.g., finding initializations for numeric homotopy
tracking to obtain all solutions.
The second part goes into the realm of “soft” optimization methods, including
genetic methods, simulated annealing, and particle swarm optimization, among
others. These are all popular and heavily used, especially in the context of global
optimization. While often considered as “numeric” methods, they benefit from
symbolic computation in several ways. One is that implementation is typically
straightforward when one has access to a language that supports symbolic computation. Updates of state, e.g., to handle mutations and gene crossover, are easily coded. (Indeed, this sort of thing can be so deceptively simple, baked into the language so to speak, that one hardly realizes symbolic computation is happening.)
Among many applications in this part there is, again, that of solving systems of
equations. Also covered is mixed-integer programming (wherein some variables are
discrete-valued and others continuous). This is a natural area for HSNC since it
combines aspects of exact and numeric methods in the handling of both discrete and
continuous variables.
The third part delves into data modeling. This begins with use of radial basis
functions and proceeds to machine learning, e.g., via support vector machine
(SVM) methods. Symbolic regression, a methodology that combines numerics with
evolutionary programming, is also introduced for the purpose of modeling data.
Another area seeing recent interest is that of robust optimization and regression,
wherein one seeks results that remain relatively stable with respect to perturbations
in input or random parameters used in the optimization. Several hybrid methods are
presented to address problems in this realm. Stochastic modeling is also discussed.
This is yet another area in which hybrid methods are quite useful.
Symbolic computing languages have seen a recent trend toward ever more high
level support for various mathematical abstractions. This appears for example in
exact symbolic programming involving probability, geometry, tensors, engineering
simulation, and many other areas. Under the hood is a considerable amount of HSNC (I write this as one who has been immersed at the R&D end of hybrid computation for two decades). Naturally, such support makes it all the easier for
one to extend hybrid methods; just consider how much less must be built from
scratch to support, say, stochastic equation solving, when the language already
supports symbolic probability and statistics computations. This book presents to the
reader some of the major areas and methods that are being changed, by the authors
and others, in furthering this interplay of symbolic and numeric computation. The
term hybrid symbolic-numeric computation has been with us for over two decades
now. I anticipate the day when it falls into disuse, not because the technology goes out of style, but rather because it has become an integral part of the plumbing of mathematical computation.
Urbana–Champaign, IL, USA
July 2017

Daniel Lichtblau
Ph.D., Mathematics, UIUC 1991
Algebra, Applied Mathematics
Wolfram Research, Champaign
Preface
It will surprise no one to hear that digital computers have been used for numerical
computations ever since their invention during World War II. Indeed, until around
1990, it was not widely understood that computers could do anything else. For
many years, when students of mathematics, engineering, and the sciences used a
computer, they wrote a program (typically in Fortran) to implement mathematical
algorithms for solving equations in one variable, or systems of linear equations, or
differential equations. The input was in so-called “float” numbers with 8–12
significant figures of accuracy. The output was the same type of data, and the
program worked entirely with the same type of data. This is numerical computing.
By roughly 1990, computer algebra software had become available. Now it was
possible to enter data like $x^2 + 3x + 2$ and receive output like $(x + 2)(x + 1)$.
The computer is doing algebra! More precisely, it is doing symbolic computing.
The program that accomplishes such computing almost certainly uses no float
numbers at all.
What is still not widely understood is that often it is productive to have algorithms that do both kinds of computation. We call these hybrid symbolic-numeric
methods. Actually, such methods have been considered by some mathematicians
and computer scientists since at least 1995 (ISSAC 1995 conference). In this book,
the authors provide a much-needed introduction and reference for applied mathematicians, geoscientists, and other users of sophisticated mathematical software.
No mathematics beyond the undergraduate level is needed to read this book, nor
does the reader need any pure mathematics background beyond a first course in
linear algebra. All methods discussed here are illustrated with copious examples.
A brief list of topics covered:
• Systems of polynomial equations with resultants and Gröbner bases
• Simulated annealing
• Genetic algorithms
• Particle swarm optimization
• Integer programming
• Approximation with radial basis functions
• Support vector machines
• Symbolic regression
• Quantile regression
• Robust regression
• Stochastic modeling
• Parallel computations
Most of the methods discussed in the book will probably be implemented by the
reader on a computer algebra system (CAS). The two most fully developed and
widely used CAS are Mathematica and Maple. Some of the polynomial computations here are better done on the specialized system Fermat. Other systems worthy
of mention are Singular and SageMath.
The second author is a regular user of Mathematica and carried out the computations; therefore, frequent mention is made of Mathematica commands.
However, this book is not a reference manual for any system, and we have made an
effort to keep the specialized commands to a minimum, and to use commands
whose syntax makes them as self-explanatory as possible. More complete
Mathematica programs to implement some of the examples are available online.
Similarly, a program written in Fermat for the resultant method called Dixon-EDF
is available online.
The authors:

Joseph L. Awange, Perth, Australia
Robert H. Lewis, New York, USA
Béla Paláncz, Budapest, Hungary
Lajos Völgyesi, Budapest, Hungary

July 2017
Acknowledgements
The authors wish to express their sincere thanks to Dr. Daniel Lichtblau for his
helpful comments and for agreeing to write a foreword for our book. R. Lewis and
B. Paláncz highly appreciate and thank Prof. Jon Kirby, Head of the Department of Spatial Sciences, Curtin University, Australia, for his hospitality and support during their visits to Curtin. B. Paláncz also wishes to thank the TIGeR Foundation for partially supporting his stay in Perth. This work was funded partially by OTKA project No. 124286.
Contents

Part I Solution of Nonlinear Systems

1 Solution of Algebraic Polynomial Systems .... 3
  1.1 Zeros of Polynomial Systems .... 3
  1.2 Resultant Methods .... 4
    1.2.1 Sylvester Resultant .... 4
    1.2.2 Dixon Resultant .... 5
  1.3 Gröbner Basis .... 7
    1.3.1 Greatest Common Divisor of Polynomials .... 8
    1.3.2 Reduced Gröbner Basis .... 11
    1.3.3 Polynomials with Inexact Coefficients .... 12
  1.4 Using Dixon-EDF for Symbolic Solution of Polynomial Systems .... 14
    1.4.1 Explanation of Dixon-EDF .... 14
    1.4.2 Distance from a Point to a Standard Ellipsoid .... 16
    1.4.3 Distance from a Point to Any 3D Conic .... 16
    1.4.4 Pose Estimation .... 17
    1.4.5 How to Run Dixon-EDF .... 18
  1.5 Applications .... 18
    1.5.1 Common Points of Geometrical Objects .... 18
    1.5.2 Nonlinear Heat Transfer .... 22
    1.5.3 Helmert Transformation .... 25
  1.6 Exercises .... 28
    1.6.1 Solving a System with Different Techniques .... 28
    1.6.2 Planar Ranging .... 31
    1.6.3 3D Resection .... 32
    1.6.4 Pose Estimation .... 34
  References .... 39

2 Homotopy Solution of Nonlinear Systems .... 41
  2.1 The Concept of Homotopy .... 41
  2.2 Solving Nonlinear Equation via Homotopy .... 43
  2.3 Tracing Homotopy Path as Initial Value Problem .... 45
  2.4 Types of Linear Homotopy .... 47
    2.4.1 General Linear Homotopy .... 47
    2.4.2 Fixed-Point Homotopy .... 47
    2.4.3 Newton Homotopy .... 48
    2.4.4 Affine Homotopy .... 48
    2.4.5 Mixed Homotopy .... 48
  2.5 Regularization of the Homotopy Function .... 49
  2.6 Start System in Case of Algebraic Polynomial Systems .... 49
  2.7 Homotopy Methods in Mathematica .... 51
  2.8 Parallel Computation .... 55
  2.9 General Nonlinear System .... 56
  2.10 Nonlinear Homotopy .... 58
    2.10.1 Quadratic Bezier Homotopy Function .... 58
    2.10.2 Implementation in Mathematica .... 61
    2.10.3 Comparing Linear and Quadratic Homotopy .... 62
  2.11 Applications .... 65
    2.11.1 Nonlinear Heat Conduction .... 65
    2.11.2 Local Coordinates via GNSS .... 68
  2.12 Exercises .... 71
    2.12.1 GNSS Positioning N-Point Problem .... 71
  References .... 76

3 Overdetermined and Underdetermined Systems .... 77
  3.1 Concept of the Over and Underdetermined Systems .... 77
    3.1.1 Overdetermined Systems .... 77
    3.1.2 Underdetermined Systems .... 79
  3.2 Gauss–Jacobi Combinatorial Solution .... 80
  3.3 Gauss–Jacobi Solution in Case of Nonlinear Systems .... 84
  3.4 Transforming Overdetermined System into a Determined System .... 89
  3.5 Extended Newton–Raphson Method .... 90
  3.6 Solution of Underdetermined Systems .... 92
    3.6.1 Direct Minimization .... 93
    3.6.2 Method of Lagrange Multipliers .... 93
    3.6.3 Method of Penalty Function .... 95
    3.6.4 Extended Newton–Raphson .... 95
  3.7 Applications .... 96
    3.7.1 Geodetic Application—The Minimum Distance Problem .... 96
    3.7.2 Global Navigation Satellite System (GNSS) Application .... 98
    3.7.3 Geometric Application .... 101
  3.8 Exercises .... 105
    3.8.1 Solution of Overdetermined System .... 105
    3.8.2 Solution of Underdetermined System .... 107

Part II Optimization of Systems

4 Simulated Annealing .... 113
  4.1 Metropolis Algorithm .... 113
  4.2 Realization of the Metropolis Algorithm .... 114
    4.2.1 Representation of a State .... 114
    4.2.2 The Free Energy of a State .... 115
    4.2.3 Perturbation of a State .... 115
    4.2.4 Accepting a New State .... 116
    4.2.5 Implementation of the Algorithm .... 116
  4.3 Algorithm of the Simulated Annealing .... 118
  4.4 Implementation of the Algorithm .... 118
  4.5 Application to Computing Minimum of a Real Function .... 120
  4.6 Generalization of the Algorithm .... 124
  4.7 Applications .... 124
    4.7.1 A Packing Problem .... 124
    4.7.2 The Traveling Salesman Problem .... 127
  4.8 Exercise .... 132

5 Genetic Algorithms .... 137
  5.1 The Genetic Evolution Concept .... 137
  5.2 Mutation of the Best Individual .... 137
  5.3 Solving a Puzzle .... 141
  5.4 Application to a Real Function .... 145
  5.5 Employing Sexual Reproduction .... 150
    5.5.1 Selection of Parents .... 150
    5.5.2 Sexual Reproduction: Crossover and Mutation .... 152
  5.6 The Basic Genetic Algorithm (BGA) .... 154
  5.7 Applications .... 157
    5.7.1 Nonlinear Parameter Estimation .... 157
    5.7.2 Packing Spheres with Different Sizes .... 160
    5.7.3 Finding All the Real Solutions of a Non-algebraic System .... 162
  5.8 Exercises .... 164
    5.8.1 Foxhole Problem .... 164
  References .... 166

6 Particle Swarm Optimization .... 167
  6.1 The Concept of Social Behavior of Groups of Animals .... 167
  6.2 Basic Algorithm .... 168
  6.3 The Pseudo Code of the Algorithm .... 170
  6.4 Applications .... 171
    6.4.1 1D Example .... 172
    6.4.2 2D Example .... 175
    6.4.3 Solution of Nonlinear Non-algebraic System .... 178
  6.5 Exercise .... 181
  Reference .... 184

7 Integer Programming .... 185
  7.1 Integer Problem .... 185
  7.2 Discrete Value Problems .... 187
  7.3 Simple Logical Conditions .... 189
  7.4 Some Typical Problems of Binary Programming .... 191
    7.4.1 Knapsack Problem .... 191
    7.4.2 Nonlinear Knapsack Problem .... 192
    7.4.3 Set-Covering Problem .... 192
  7.5 Solution Methods .... 194
    7.5.1 Binary Countdown Method .... 194
    7.5.2 Branch and Bound Method .... 196
  7.6 Mixed-Integer Programming .... 200
  7.7 Applications .... 200
    7.7.1 Integer Least Squares .... 200
    7.7.2 Optimal Number of Oil Wells .... 202
  7.8 Exercises .... 203
    7.8.1 Study of Mixed Integer Programming .... 203
    7.8.2 Mixed Integer Least Square .... 205
  References .... 206

8 Multiobjective Optimization .... 207
  8.1 Concept of Multiobjective Problem .... 207
    8.1.1 Problem Definition .... 207
    8.1.2 Interpretation of the Solution .... 208
  8.2 Pareto Optimum .... 209
    8.2.1 Nonlinear Problems .... 210
    8.2.2 Pareto-Front and Pareto-Set .... 211
  8.3 Computation of Pareto Optimum .... 212
    8.3.1 Pareto Filter .... 212
    8.3.2 Reducing the Problem to the Case of a Single Objective .... 214
    8.3.3 Weighted Objective Functions .... 219
    8.3.4 Ideal Point in the Function Space .... 220
    8.3.5 Pareto Balanced Optimum .... 220
    8.3.6 Non-convex Pareto-Front .... 222
  8.4 Employing Genetic Algorithms .... 223
  8.5 Application .... 229
    8.5.1 Nonlinear Gauss-Helmert Model .... 229
  8.6 Exercise .... 234
  References .... 241

Part III Approximation of Functions and Data

9 Approximation with Radial Basis Functions .... 245
  9.1 Basic Idea of RBF Interpolation .... 245
  9.2 Positive Definite RBF Function .... 249
  9.3 Compactly Supported Functions .... 251
  9.4 Some Positive Definite RBF Function .... 253
    9.4.1 Laguerre-Gauss Function .... 253
    9.4.2 Generalized Multi-quadratic RBF .... 254
    9.4.3 Wendland Function .... 256
    9.4.4 Buchmann-Type RBF .... 257
  9.5 Generic Derivatives of RBF Functions .... 257
  9.6 Least Squares Approximation with RBF .... 260
  9.7 Applications .... 264
    9.7.1 Image Compression .... 264
    9.7.2 RBF Collocation Solution of Partial Differential Equation .... 269
  9.8 Exercise .... 276
    9.8.1 Nonlinear Heat Transfer .... 276
  References .... 278

10 Support Vector Machines (SVM) .... 279
  10.1 Concept of Machine Learning .... 279
  10.2 Optimal Hyperplane Classifier .... 280
    10.2.1 Linear Separability .... 280
    10.2.2 Computation of the Optimal Parameters .... 283
    10.2.3 Dual Optimization Problem .... 284
  10.3 Nonlinear Separability .... 286
  10.4 Feature Spaces and Kernels .... 288
  10.5 Application of the Algorithm .... 289
    10.5.1 Computation Step by Step .... 289
    10.5.2 Implementation of the Algorithm .... 292
  10.6 Two Nonlinear Test Problems .... 294
    10.6.1 Learning a Chess Board .... 294
    10.6.2 Two Intertwined Spirals .... 297
  10.7 Concept of SVM Regression .... 299
    10.7.1 ε-Insensitive Loss Function .... 299
    10.7.2 Concept of the Support Vector Machine Regression (SVMR) .... 300
    10.7.3 The Algorithm of the SVMR .... 302
  10.8 Employing Different Kernels .... 305
    10.8.1 Gaussian Kernel .... 306
    10.8.2 Polynomial Kernel .... 307
    10.8.3 Wavelet Kernel .... 308
    10.8.4 Universal Fourier Kernel .... 311
  10.9 Applications .... 313
    10.9.1 Image Classification .... 313
    10.9.2 Maximum Flooding Level .... 315
  10.10 Exercise .... 318
    10.10.1 Noise Filtration .... 318
  References .... 320

11 Symbolic Regression .... 321
  11.1 Concept of Symbolic Regression .... 321
  11.2 Problem of Kepler .... 325
    11.2.1 Polynomial Regression .... 326
    11.2.2 Neural Network .... 326
    11.2.3 Support Vector Machine Regression .... 328
    11.2.4 RBF Interpolation .... 329
    11.2.5 Random Models .... 330
    11.2.6 Symbolic Regression .... 330
  11.3 Applications .... 334
    11.3.1 Correcting Gravimetric Geoid Using GPS Ellipsoidal Heights .... 334
    11.3.2 Geometric Transformation .... 342
  11.4 Exercise .... 349
    11.4.1 Bremerton Data .... 349
  References .... 357

12 Quantile Regression .... 359
  12.1 Problems with the Ordinary Least Squares .... 359
    12.1.1 Correlation Height and Age .... 359
    12.1.2 Engel's Problem .... 359
  12.2 Concept of Quantile .... 362
    12.2.1 Quantile as a Generalization of Median .... 362
    12.2.2 Quantile for Probability Distributions .... 366
  12.3 Linear Quantile Regression .... 368
    12.3.1 Ordinary Least Square (OLS) .... 369
    12.3.2 Median Regression (MR) .... 369
    12.3.3 Quantile Regression (QR) .... 370
  12.4 Computing Quantile Regression .... 376
    12.4.1 Quantile Regression via Linear Programming .... 376
    12.4.2 Boscovich's Problem .... 377
    12.4.3 Extension to Linear Combination of Nonlinear Functions .... 380
    12.4.4 B-Spline Application .... 382
  12.5 Applications .... 387
    12.5.1 Separate Outliers in Cloud Points .... 387
    12.5.2 Modelling Time-Series .... 393
  12.6 Exercise .... 400
    12.6.1 Regression of Implicit-Functions .... 400
  References .... 403

13 Robust Regression .... 405
  13.1 Basic Methods in Robust Regression .... 405
    13.1.1 Concept of Robust Regression .... 405
    13.1.2 Maximum Likelihood Method .... 406
    13.1.3 Danish Algorithm .... 422
    13.1.4 Danish Algorithm with PCA .... 426
    13.1.5 RANSAC Algorithm .... 432
  13.2 Application Examples .... 442
    13.2.1 Fitting a Sphere to Point Cloud Data .... 442
    13.2.2 Fitting a Cylinder .... 464
  13.3 Problem .... 502
    13.3.1 Fitting a Plane to a Slope .... 502
  References .... 513

14 Stochastic Modeling .... 517
  14.1 Basic Stochastic Processes .... 517
    14.1.1 Concept of Stochastic Processes .... 517
    14.1.2 Examples for Stochastic Processes .... 517
    14.1.3 Features of Stochastic Processes .... 519
  14.2 Time Series .... 521
    14.2.1 Concept of Time Series .... 521
    14.2.2 Models of Time Series .... 521
  14.3 Stochastic Differential Equations (SDE) .... 528
    14.3.1 Ito Process .... 528
    14.3.2 Ito Numerical Integral .... 528
    14.3.3 Euler-Maruyama Method .... 529
  14.4 Numerical Solution of (SDE) .... 529
    14.4.1 Single Realization .... 530
    14.4.2 Many Realizations .... 531
    14.4.3 Slice Distribution .... 531
    14.4.4 Standard Error Band .... 532
  14.5 Parameter Estimation .... 532
    14.5.1 Measurement Values .... 533
    14.5.2 Likelihood Function .... 533
    14.5.3 Maximization of the Likelihood Function .... 534
    14.5.4 Simulation with the Estimated Parameters .... 535
    14.5.5 Deterministic Versus Stochastic Modeling .... 536
  14.6 Applications .... 537
    14.6.1 Rotating Ellipsoid with a Stochastic Flattening .... 537
    14.6.2 Analysis of Changes in Groundwater Radon .... 545
  14.7 Problem .... 549
    14.7.1 Deterministic Lorenz Attractor .... 549
    14.7.2 Stochastic Lorenz Attractor .... 553

15 Parallel Computations .... 559
  15.1 Introduction .... 559
  15.2 Amdahl's Law .... 560
  15.3 Implicit and Explicit Parallelism .... 560
  15.4 Dispatching Tasks .... 562
  15.5 Balancing Loads .... 565
  15.6 Parallel Computing with GPU .... 568
    15.6.1 Neural Network Computing with GPU .... 568
    15.6.2 Image Processing with GPU .... 574
  15.7 Applications .... 577
    15.7.1 3D Ranging Using the Dixon Resultant .... 577
    15.7.2 Reducing Colors via Color Approximation .... 582
  15.8 Problem .... 586
    15.8.1 Photogrammetric Positioning by Gauss-Jacobi Method .... 586

Bibliography .... 595
Introduction
Numeric and Symbolic Methods—What are they?
Basically, a numeric (or numerical) method is one that could be done with a simple
handheld calculator, using basic arithmetic, square roots, some trigonometry
functions, and a few other functions most people learn about in high school.
Depending on the task, one may have to press the calculator buttons thousands (or
even millions) of times, but theoretically, a person with a calculator and some paper
could implement a numerical method. When finished, the paper would be full of
arithmetic.
A symbolic method involves algebra. It is a method that, if a person implemented it, would involve algebraic or higher rational thought. A person implementing a
symbolic method will rarely need to reach for a calculator. When finished, there
may be some numbers, but the paper would be full of variables like x, y, z.
Students usually meet the topic of quadratic equations in junior high school.
Suppose you wanted to solve the equation $x^2 + 3x - 2 = 0$. With a handheld calculator, one could simply do “intelligent guessing.” Let us guess, say, x = 1. Plug it in, get a positive result. OK, that is too big. Try x = 0; that is too small. Go back and forth; stop when satisfied with the accuracy. It does not take long to get x = 0.56155, which might well be considered accurate enough. Furthermore, it is easy to write a computer program to implement this idea. That is a numeric method.
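For instance, a minimal Mathematica sketch of this guessing idea, written as a bisection loop (the function name and the starting bracket [0, 1] are our illustrative choices, not taken from the text), might read:

f[x_] := x^2 + 3 x - 2;
(* halve the bracket, keeping the half on which f changes sign *)
bisect[lo0_, hi0_, tol_] := Module[{lo = lo0, hi = hi0, mid},
  While[hi - lo > tol,
   mid = (lo + hi)/2.;
   If[f[lo] f[mid] <= 0, hi = mid, lo = mid]];
  (lo + hi)/2.]
bisect[0, 1, 10^-6]
(* -> 0.561553, the positive root *)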
But wait. There is another answer, which the numeric method missed, namely −3.56155. Even worse, if one were to continue this method on many problems, one would soon notice that some equations do not seem to have solutions, such as $x^2 - 2x + 4 = 0$. A great deal of effort could be expended in arithmetic until finally giving up and finding no solution.
The problem is cured by learning algebra and the symbolic method called the quadratic formula. Given $ax^2 + bx + c = 0$, the solution is
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
It is now immediately obvious why some problems have no solution: it happens precisely when $b^2 - 4ac < 0$.
In the previous example, $x^2 + 3x - 2 = 0$, we see that the two roots are exactly $(-3 \pm \sqrt{17})/2$. There is no approximation whatever. Should a decimal answer correct to, say, 16 digits be desired, that would be trivially obtained on any modern computer.
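In Mathematica, for example, the exact roots and a 16-digit decimal form can be obtained with the built-in Solve and N (a minimal sketch):

roots = Solve[x^2 + 3 x - 2 == 0, x]
(* -> {{x -> 1/2 (-3 - Sqrt[17])}, {x -> 1/2 (-3 + Sqrt[17])}} *)
N[x /. roots, 16]
(* -> {-3.561552812808830, 0.5615528128088303} *)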
There is more. Not only does the symbolic method concisely represent all
solutions, it invites the question, can we define a new kind of number in which the
negative under the square root may be allowed? The symbolic solution has led to a
new concept, that of complex numbers!
Symbolic methods may be hard to develop, and they may be difficult for a
computer to implement, but they lead to insight.
Fortunately, we are not forced into a strict either/or dichotomy. There are
symbolic-numeric methods, hybrids using the strengths of both ideas.
Numeric Solution
In order to further illustrate numeric, symbolic, and symbolic-numeric solutions, let
us consider an algebraic system of polynomial equations. For such systems, there
may be no solution, one solution, or many solutions. With numerical solutions, one
commonly utilizes iterative techniques starting from an initially guessed value. Let
us start with a two-variable system of two equations, $f(x, y) = 0$ and $g(x, y) = 0$, where
$$f = (x - 2)^2 + (y - 3)^2 - 5,$$
$$g = \left(x - \tfrac{1}{2}\right)^2 + \left(y - \tfrac{3}{4}\right)^2 - 5.$$
This actual problem has two real solutions, see Fig. 1.
Fig. 1 Geometrical representation of a multivariate polynomial system
Fig. 2 Local solution with initial guess and iteration steps
A numeric solution starts with the initial value and proceeds step-by-step locally.
Depending on the method, we expect to converge to one of the solutions in an
efficient manner. Employing the initial value (4, −1) and a multivariate Newton’s
method, the solution after seven steps is (2.73186, 0.887092). Let us visualize the
iteration steps, see Fig. 2.
However, if the initial guess is not proper, for example (0, 0), then we may have a problem with convergence, since the Jacobian may become singular.
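For reference, here is a sketch of this local numeric solution using Mathematica's built-in Newton-type solver FindRoot (the book's own iteration may differ in detail):

f = (x - 2)^2 + (y - 3)^2 - 5;
g = (x - 1/2)^2 + (y - 3/4)^2 - 5;
(* Newton-type iteration from the initial guess (4, -1) *)
FindRoot[{f == 0, g == 0}, {{x, 4}, {y, -1}}]
(* -> {x -> 2.73186, y -> 0.887092} *)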
Symbolic Solution
Let us transform the original system into another one, which has the same solutions,
but for which variables can be isolated and solved one-by-one. Employing Gröbner
basis, we can reduce one of the equations to a univariate polynomial,
$$\mathit{gry} = 2113 - 3120y + 832y^2,$$
$$\mathit{grxy} = -65 + 16x + 24y.$$
First, solving the quadratic equation gry, we have
$$y = \frac{1}{104}\left(195 - 2\sqrt{2639}\right), \qquad y = \frac{1}{104}\left(195 + 2\sqrt{2639}\right).$$
Fig. 3 Global solution—all solutions without initial guess and iteration
Then employing these roots of y, the corresponding values of x can be computed from the second polynomial of the Gröbner basis as
$$x = \frac{1}{104}\left(130 + 3\sqrt{2639}\right), \qquad x = \frac{1}{104}\left(130 - 3\sqrt{2639}\right).$$
So, we have computed both solutions with neither guessing nor iteration.
In addition, there is no round-off error. Let us visualize the two solutions, see
Fig. 3:
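A sketch of this symbolic computation in Mathematica (the default lexicographic monomial order for the variable list {x, y} already isolates y; the exact output form may vary by version):

f = (x - 2)^2 + (y - 3)^2 - 5;
g = (x - 1/2)^2 + (y - 3/4)^2 - 5;
GroebnerBasis[{f, g}, {x, y}]
(* -> {2113 - 3120 y + 832 y^2, -65 + 16 x + 24 y} *)
Solve[{f == 0, g == 0}, {x, y}]  (* exact roots, no guess, no iteration *)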
Let us summarize the main features of the symbolic and numeric computations:
Numeric computations:
– usually require initial values and iterations. They are sensitive to round-off
errors, provide only one local solution,
– can be employed for complex problems, and the computation times are short in
general because the steps usually translate directly into computer machine
language.
Symbolic computations:
– do not require initial values and iterations. They are not sensitive to round-off errors, and provide all solutions,
– often cannot be employed for complex problems, and the computation time is long in general because the steps usually require computer algebra system software.
Ideally, the best strategy is to divide the algorithm into symbolic and numeric
parts in order to utilize the advantages of both techniques. Inevitably, numeric
computations will always be used to a certain extent. For example, if the polynomial gry above had been of degree, say, five, then a numeric univariate root solver would have been necessary.
Hybrid (symbolic-numeric) Solution
Sometimes, we can precompute a part of a numerical algorithm in symbolic form.
Here is a simple illustrative example.
Consider a third polynomial and add it to our system above:
$$h = \left(x + \tfrac{1}{2}\right)^2 + \left(y - \tfrac{7}{4}\right)^3 - 5.$$
In that case, there is no solution, since there is no common point of the three
curves representing the three equations, see Fig. 4.
However, we can look for a solution of this overdetermined system in the
minimal least squares sense by using the objective function
$$G = f^2 + g^2 + h^2,$$
Fig. 4 Now, there is no solution of the overdetermined system
or, written out,
$$G = \left((x-2)^2 + (y-3)^2 - 5\right)^2 + \left(\left(x - \tfrac{1}{2}\right)^2 + \left(y - \tfrac{3}{4}\right)^2 - 5\right)^2 + \left(\left(x + \tfrac{1}{2}\right)^2 + \left(y - \tfrac{7}{4}\right)^3 - 5\right)^2,$$
and minimizing it.
Employing Newton's method, we get
$$x = 2.28181, \qquad y = 0.556578.$$
The computation time for this was 0.00181778 s. The solution of the overdetermined system can be seen in Fig. 5.
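A sketch of this purely numeric minimization (the starting values are our choice; the book does not state its initial guess):

f = (x - 2)^2 + (y - 3)^2 - 5;
g = (x - 1/2)^2 + (y - 3/4)^2 - 5;
h = (x + 1/2)^2 + (y - 7/4)^3 - 5;
G = f^2 + g^2 + h^2;
FindMinimum[G, {{x, 4}, {y, -1}}, Method -> "Newton"]
(* -> a local minimum near {x -> 2.28181, y -> 0.556578} *)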
Here, the gradient vector as well as the Hessian matrix is computed in numerical
form in every iteration step. But we can compute the gradient in symbolic form:
$$\mathrm{grad} = \begin{pmatrix} \dfrac{1}{32}\left(2x\left(173 + 192(-2+x)x\right) + 216xy + 64(1+2x)y^3 - 16(41+26x)y^2 + 3(-809+740y)\right) \\[3mm] -\dfrac{137829}{512} + \dfrac{555x}{8} + \dfrac{27x^2}{8} + \dfrac{60527y}{128} - 41xy - 13x^2y - \dfrac{6321y^2}{16} + 6xy^2 + 6x^2y^2 + \dfrac{767y^3}{4} - \dfrac{105y^4}{2} + 6y^5 \end{pmatrix}.$$
Employing this symbolic form, the computation time can be reduced. The
running time can be further reduced if the Hessian matrix is also computed
symbolically,
Fig. 5 The solution of the overdetermined system
$$H = \begin{pmatrix} \dfrac{173}{16} + 12x(-4+3x) + \dfrac{27y}{4} - 13y^2 + 4y^3 & \dfrac{555}{8} + y(-41+6y) + x\left(\dfrac{27}{4} + 2y(-13+6y)\right) \\[3mm] \dfrac{555}{8} + y(-41+6y) + x\left(\dfrac{27}{4} + 2y(-13+6y)\right) & \dfrac{60527}{128} - 41x - 13x^2 - \dfrac{6321y}{8} + 12xy + 12x^2y + \dfrac{2301y^2}{4} - 210y^3 + 30y^4 \end{pmatrix}.$$
Now, the computation time is less than half of the original one.
So using symbolic forms, the computation time can be reduced considerably.
This so-called hybrid computation has an additional advantage too, namely the
symbolic part of the algorithm does not generate round-off errors.
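In Mathematica, precomputing these derivatives and handing them to the numeric iteration can be sketched as follows (the option settings are our assumption about how such a hybrid step could be wired up, not the authors' code):

f = (x - 2)^2 + (y - 3)^2 - 5;
g = (x - 1/2)^2 + (y - 3/4)^2 - 5;
h = (x + 1/2)^2 + (y - 7/4)^3 - 5;
G = f^2 + g^2 + h^2;
vars = {x, y};
gradG = Simplify[D[G, {vars}]];     (* symbolic gradient vector *)
hessG = Simplify[D[G, {vars, 2}]];  (* symbolic Hessian matrix *)
FindMinimum[G, {{x, 4}, {y, -1}},
  Gradient -> gradG, Method -> {"Newton", "Hessian" -> hessG}]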
Another approach to hybrid computation is to merge symbolic evaluation with a numeric algorithm. This technique is illustrated using the following example. Let us consider a linear, nonautonomous differential equation system of n variables in matrix form:
$$\frac{d}{dx}y(x) = A(x)y(x) + b(x),$$
where A is a matrix of dimension $n \times n$, $y(x)$ and $b(x)$ are vectors of dimension n, and x is a scalar independent variable. In the case of a boundary value
problem, the values of some dependent variables are not known at the beginning
of the integration interval, at $x = x_a$, but they are given at the end of this interval, at $x = x_b$. The usually employed methods need repeated integration of the system, because of their trial-and-error (shooting) technique, or they require the solution of a large linear equation system in the case of discretization methods. The technique here is based on the symbolic evaluation of the well-known Runge–Kutta algorithm. It needs only one integration of the differential equation system and the solution of the linear equation system representing the boundary conditions at $x = x_b$.
The well-known fourth-order Runge–Kutta method, in our case, can be represented by the following formulas:
$$R_{1i} = A(x_i)\,y(x_i) + b(x_i),$$
$$R_{2i} = A\left(x_i + \tfrac{h}{2}\right)\left(y(x_i) + R_{1i}\tfrac{h}{2}\right) + b\left(x_i + \tfrac{h}{2}\right),$$
$$R_{3i} = A\left(x_i + \tfrac{h}{2}\right)\left(y(x_i) + R_{2i}\tfrac{h}{2}\right) + b\left(x_i + \tfrac{h}{2}\right),$$
$$R_{4i} = A(x_i + h)\left(y(x_i) + R_{3i}h\right) + b(x_i + h),$$
and then the new value of y(x) can be computed as:
$$y_{i+1} = y(x_i) + \frac{\left(R_{1i} + 2(R_{2i} + R_{3i}) + R_{4i}\right)h}{6}.$$
A symbolic system like Mathematica is able to carry out this algorithm not only with numbers but also with symbols. It means that the unknown elements of $y_a = y(x_a)$ can be considered as unknown symbols. These symbols will appear in every evaluated $y_i$ value, as well as in $y_b = y(x_b)$.
Let us consider a simple illustrative example. The differential equation is:
$$\frac{d^2}{dx^2}y(x) - \left(1 - \frac{x}{5}\right)y(x) = x.$$
The given boundary values are:
$$y(1) = 2 \quad \text{and} \quad y(3) = -1.$$
After introducing the new variables
$$y1(x) = y(x) \quad \text{and} \quad y2(x) = \frac{d}{dx}y(x),$$
we get a first-order system;
the matrix form of the differential equation is:
$$\frac{d}{dx}\begin{pmatrix} y1(x) \\ y2(x) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 - \dfrac{x}{5} & 0 \end{pmatrix} \begin{pmatrix} y1(x) \\ y2(x) \end{pmatrix} + \begin{pmatrix} 0 \\ x \end{pmatrix}.$$
Employing Mathematica’s notation:
A[x_]:={{0,1},{1-1/5 x,0}};
b[x_]:={0,x};
x0=1;
y0={2.,s}
The unknown initial value is s. The order of the system is M = 2. Let us take the number of integration steps as N = 10, so the step size is h = 0.2.
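RKSymbolic is not a built-in Mathematica function; the following is a minimal sketch of what such a routine might look like, reconstructed from the Runge–Kutta formulas above (our sketch, not the authors' code; the returned structure {final y, {path}} merely mirrors the indexing used below, and the argument M, the system order, is kept only for signature compatibility):

RKSymbolic[x0_, y0_, A_, b_, M_, n_, h_] :=
 Module[{x = x0, y = y0, R1, R2, R3, R4, path},
  path = {{x, y[[1]]}};
  Do[
   R1 = A[x].y + b[x];
   R2 = A[x + h/2].(y + R1 h/2) + b[x + h/2];
   R3 = A[x + h/2].(y + R2 h/2) + b[x + h/2];
   R4 = A[x + h].(y + R3 h) + b[x + h];
   y = Expand[y + (R1 + 2 (R2 + R3) + R4) h/6]; (* s stays symbolic *)
   x = x + h;
   AppendTo[path, {x, y[[1]]}], {n}];
  {y, {path}}]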
ysol=RKSymbolic[x0,y0,A,b,2,10,0.2];
The result is a list of list data structure containing the corresponding (x, y) pairs,
where the y values depend on s.
ysol[[2]][[1]]
{{1,2.},{1.2,2.05533+0.200987 s},{1.4,2.22611+0.407722 s},
{1.6,2.52165+0.625515 s},{1.8,2.95394+0.859296 s},
{2.,3.53729+1.11368 s},{2.2,4.28801+1.39298 s},
{2.4,5.22402+1.70123 s},{2.6,6.36438+2.0421 s},
{2.8,7.72874+2.41888 s},{3.,9.33669+2.8343 s}}
Consequently, we have got a symbolic result using the traditional numerical Runge–Kutta algorithm. In order to compute the proper value of the unknown initial value, s, the boundary condition can be applied at x = 3. In our case, y1(3) = −1.
eq=ysol[[1]][[1]]==-1
9.33669+2.8343 s==-1
Let us solve this equation numerically, and assign the solution to the symbol s:
sol=Solve[eq,s]
{{s -> -3.647}}
s=s/.sol
{-3.647}
s=s[[1]]
-3.647
Then, we get the numerical solution for the problem:
ysol[[2]][[1]]
{{1,2.},{1.2,1.32234},{1.4,0.739147},{1.6,0.240397},
{1.8,-0.179911}, {2.,-0.524285},{2.2,-0.792178},
{2.4,-0.980351},{2.6,-1.08317},{2.8,-1.09291},
{3.,-1.}}
The truncation error can be decreased by using a smaller step size h, and the round-off error can be controlled by the number of digits employed.