
pwlf: A Python Library for Fitting 1D Continuous Piecewise Linear Functions


Charles F. Jekel∗
Gerhard Venter †
March 19, 2019

Name: pwlf
Version: 0.4.1
Description: fit piecewise linear functions to data
License: MIT
Creator: Charles F. Jekel
Maintainer: Charles F. Jekel - [email protected]
Homepage: https://github.com/cjekel/piecewise_linear_fit_py
Contributors: https://github.com/cjekel/piecewise_linear_fit_py/graphs/contributors

Abstract
A Python library to fit continuous piecewise linear functions to one
dimensional data is presented. A continuous piecewise linear function has
breakpoints which represent the termination points of the line segments.
If breakpoint locations are known, then a least squares fit is used to solve
for the best continuous piecewise linear function. If the breakpoints are
unknown but the desired number of line segments is known, then global
optimization is used to find the best breakpoint locations. This optimiza-
tion process solves a least squares fit several times for different breakpoint
locations. The paper describes the mathematical methods used in the li-
brary, provides a brief overview of the library, and presents a few simple
examples to illustrate typical use cases.

1 Introduction
Piecewise linear functions are simple functions which consist of several discrete
linear segments that are used to describe a one-dimensional (1D) dependent
variable. The locations where one line segment ends and a new line begins are
referred to as breakpoints. Enforcing continuity between these line segments
has resulted in a number of interesting models used in a variety of disciplines
[1][2][3][4][5][6][7]. Muggeo [8] provides a review of the various techniques that
have been used to fit such piecewise continuous models.
∗ Dept. of Mechanical & Aerospace Engineering, University of Florida, Gainesville, FL 32611
† Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Stellenbosch, South Africa

This paper introduces a Python library for fitting continuous piecewise linear
functions to 1D data. The library is called pwlf and was first available online
on April 1, 2017. The library includes a number of functions related to fitting
continuous piecewise linear models. For instance, pwlf can be used to fit a
continuous piecewise linear function for a specified number of line segments.
This type of fit is performed using global optimization to minimize the sum-of-
squares error. Two different global optimization strategies are included in the
library’s fit and fitfast functions. A user may additionally specify their own global
optimization routine. The library also provides a number of other statistical
properties associated with these continuous piecewise linear functions.
The paper describes the methodology that pwlf uses to fit continuous piece-
wise linear functions. Essentially a least squares fit can be performed if the
breakpoint locations are known [9]. If the breakpoint locations are unknown, a
global optimization routine is used to find the optimal breakpoint locations by
minimizing the sum-of-squares error. First, the mathematical methods and var-
ious statistical properties available in pwlf are presented. Next, a basic overview
of the pwlf library is provided, followed by a collection of simple examples for
typical use cases. The main highlights of the library are the following:
• simple Python interface for fitting continuous piecewise linear functions

• fit with known breakpoint locations


• fit for specified number of line segments with unknown breakpoint loca-
tions
• constrained fitting that forces model through data points

• quick predict from a fitted model


• global optimization strategies to find optimal breakpoint locations
• relatively fast and efficient implementation

2 Mathematical formulation
This section describes the mathematical methods used in pwlf. The least squares
problem is defined if breakpoints are known. It is additionally possible to force
the linear function through a set of data points by using a constrained least
squares formulation. Various statistics associated with the linear regression
problem are presented. These statistics include: the coefficient of determination,
standard errors, p-values for model parameters, and the prediction variance
of the model. Lastly, an optimization problem is presented to find the best
breakpoint locations. The optimization problem solves the least squares problem
several times for various breakpoint combinations by minimizing the sum-of-
square of the residuals. There is no guarantee that the absolute best (or global
minimum) is found, but this limitation is shared by many non-linear regression routines.

2.1 Least squares with known breakpoints
This work assumes some 1D data set, where x is the independent variable.
Additionally, y is dependent on x such that y(x). The data can be paired as
 
$$
\begin{bmatrix}
x_1 & y_1 \\
x_2 & y_2 \\
x_3 & y_3 \\
\vdots & \vdots \\
x_n & y_n
\end{bmatrix}
\tag{1}
$$
where (x1 , y1 ) represents the first data point. The data points should be ordered
according to x1 ≤ x2 ≤ x3 ≤ · · · ≤ xn for n number of data points. A piecewise
linear function can be described as the following set of functions


$$
y(x) = \begin{cases}
\eta_1 + m_1(x - b_1) & b_1 < x \le b_2 \\
\eta_2 + m_2(x - b_2) & b_2 < x \le b_3 \\
\vdots & \vdots \\
\eta_{n_b-1} + m_{n_b-1}(x - b_{n_b-1}) & b_{n_b-1} < x \le b_{n_b}
\end{cases}
\tag{2}
$$

where $b_1$ is the $x$ location of the first breakpoint, $b_2$ is the $x$ location of the
second breakpoint, and so forth until the last breakpoint $b_{n_b}$. There are $n_b$
number of breakpoints, and there are $n_b - 1$ number of line segments. Like
the ordering of the data, this formulation also assumes that the breakpoints are
ordered as $b_1 < b_2 < \cdots < b_{n_b}$.^1
The above equation represents a set of piecewise linear functions. If it is
enforced that the piecewise linear functions are $C^0$ continuous over the domain,
then the slopes and intercepts of each linear region become dependent upon
previous values. The piecewise functions then reduce to


$$
y(x) = \begin{cases}
\beta_1 + \beta_2(x - b_1) & b_1 \le x \le b_2 \\
\beta_1 + \beta_2(x - b_1) + \beta_3(x - b_2) & b_2 < x \le b_3 \\
\vdots & \vdots \\
\beta_1 + \beta_2(x - b_1) + \beta_3(x - b_2) + \cdots + \beta_{n_b}(x - b_{n_b-1}) & b_{n_b-1} < x \le b_{n_b}
\end{cases}
\tag{3}
$$
which results in the same number of unknown β model parameters as the number
of breakpoints. These piecewise functions can be expressed in matrix form as
$$
\begin{bmatrix}
1 & x_1 - b_1 & (x_1 - b_2)\mathbb{1}_{x_1 > b_2} & \cdots & (x_1 - b_{n_b-1})\mathbb{1}_{x_1 > b_{n_b-1}} \\
1 & x_2 - b_1 & (x_2 - b_2)\mathbb{1}_{x_2 > b_2} & \cdots & (x_2 - b_{n_b-1})\mathbb{1}_{x_2 > b_{n_b-1}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n - b_1 & (x_n - b_2)\mathbb{1}_{x_n > b_2} & \cdots & (x_n - b_{n_b-1})\mathbb{1}_{x_n > b_{n_b-1}}
\end{bmatrix}
\begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_{n_b} \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
\tag{4}
$$
where $\mathbb{1}_{x_n > b_2}$ represents the indicator function. The indicator functions can be
described as piecewise functions that are either 0 or 1, for example
$$
\mathbb{1}_{x_n > b_2} = \begin{cases} 0 & x_n \le b_2 \\ 1 & x_n > b_2 \end{cases}
\tag{5}
$$
^1 If the breakpoint locations or data points are not ordered, pwlf will order the data using numpy.argsort.

and
$$
\mathbb{1}_{x_n > b_3} = \begin{cases} 0 & x_n \le b_3 \\ 1 & x_n > b_3 \end{cases}
\tag{6}
$$
and so forth. This is a simple linear system of equations where

$$ A\beta = y \tag{7} $$

such that $A$ is the $n \times n_b$ regression matrix, $\beta$ is the $(n_b \times 1)$ vector of unknown
parameters, and $y$ is the $(n \times 1)$ vector of $y$ data points. The least squares problem
solves for the unknown $\beta$ that reduces the sum-of-square of the residuals,
and the solution is expressed as

$$ \beta = \left(A^{\mathsf{T}}A\right)^{-1} A^{\mathsf{T}} y. \tag{8} $$

Once β has been solved for, the residual vector is

$$ e = A\beta - y \tag{9} $$

where $e$ is an $n \times 1$ vector. The residual vector is the difference between the fitted
continuous piecewise linear model and the original data set. The sum-of-squares
of the residuals then becomes

$$ \textrm{SSR} = e^{\mathsf{T}} e \tag{10} $$

which is the square of the $L_2$ norm of the residual vector.


The regression matrix A has a number of interesting properties. Since the
data was ordered initially, A will somewhat resemble a lower triangular matrix^2
with the upper right area of the matrix being filled with zeros. Also, A can
be assembled quickly from the ordered x data. This is particularly important
when using optimization in cases where the breakpoint locations are unknown,
because the optimization process will assemble the matrix A many times.
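To make the assembly concrete, the following is a minimal NumPy sketch of Eqns. 4 and 8, assuming already-sorted x data. The function names here are illustrative only, not pwlf's internal implementation.

import numpy as np

def assemble_A(x, breaks):
    # Assemble the regression matrix of Eqn. 4 from sorted x data
    n, nb = x.size, len(breaks)
    A = np.zeros((n, nb))
    A[:, 0] = 1.0                                  # column of ones
    A[:, 1] = x - breaks[0]                        # x - b_1
    for k in range(2, nb):
        # (x - b_k) multiplied by the indicator function 1_{x > b_k}
        A[:, k] = np.where(x > breaks[k - 1], x - breaks[k - 1], 0.0)
    return A

x = np.linspace(0.0, 1.0, 11)
y = np.abs(x - 0.5)                                # simple two-segment data
A = assemble_A(x, [0.0, 0.5, 1.0])
beta = np.linalg.lstsq(A, y, rcond=None)[0]        # solves Eqn. 8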

2.2 Constrained least squares fit with known breakpoints


In some applications, it may be desirable to force the continuous piecewise
function through a particular data point or collection of points. For instance,
an unstressed material model must have a stress of zero (y = 0) at a strain of
zero (x = 0). Constrained least squares problems are explained in Chapter 16 of
Boyd and Vandenberghe [10]. This subsection extends the least squares problem
of fitting continuous piecewise linear functions to a constrained problem when
it is desired to force the model through a set of points.
Recall the least squares problem stated in Eqn. 7. It is desired to force the
continuous piecewise linear function through
 
$$
\begin{bmatrix}
x_{c_1} & y_{c_1} \\
x_{c_2} & y_{c_2} \\
\vdots & \vdots \\
x_{c_{n_c}} & y_{c_{n_c}}
\end{bmatrix}
\tag{11}
$$
^2 Technically a lower triangular matrix will only have zeros above the diagonal, and thus A will only be lower triangular for particular data and breakpoint combinations.

with $n_c$ number of constrained data points. Let's call $x_c$ the vector of constrained
$x$ locations, and $y_c$ the vector of constrained $y$ locations. A new $n_c \times n_b$
matrix $C$ is assembled where
$$
C = \begin{bmatrix}
1 & x_{c_1} - b_1 & (x_{c_1} - b_2)\mathbb{1}_{x_{c_1} > b_2} & \cdots & (x_{c_1} - b_{n_b-1})\mathbb{1}_{x_{c_1} > b_{n_b-1}} \\
1 & x_{c_2} - b_1 & (x_{c_2} - b_2)\mathbb{1}_{x_{c_2} > b_2} & \cdots & (x_{c_2} - b_{n_b-1})\mathbb{1}_{x_{c_2} > b_{n_b-1}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{c_{n_c}} - b_1 & (x_{c_{n_c}} - b_2)\mathbb{1}_{x_{c_{n_c}} > b_2} & \cdots & (x_{c_{n_c}} - b_{n_b-1})\mathbb{1}_{x_{c_{n_c}} > b_{n_b-1}}
\end{bmatrix}
\tag{12}
$$
Note this is the same procedure used to assemble A, but xc is used instead of
x.
The Karush–Kuhn–Tucker (KKT) equations for the constrained least
squares problem become

$$
\begin{bmatrix}
2A^{\mathsf{T}}A & C^{\mathsf{T}} \\
C & 0
\end{bmatrix}
\begin{bmatrix} \beta \\ \zeta \end{bmatrix}
=
\begin{bmatrix} 2A^{\mathsf{T}}y \\ y_c \end{bmatrix}
\tag{13}
$$

where $\zeta$ is the vector of Lagrange multipliers which will be solved along with
the $\beta$ model parameters [10]. This is a square matrix of shape $(n_b + n_c) \times (n_b + n_c)$,
and $2A^{\mathsf{T}}y$ is a vector with shape $n_b \times 1$. Note that once $\beta$ has been solved,
the calculation of the residual vector still follows Eqn. 9.
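The following is a small NumPy sketch of solving the KKT system in Eqn. 13; the helper name is illustrative, and pwlf's internal routine may differ.

import numpy as np

def constrained_lsq(A, y, C, yc):
    # Solve the KKT system of Eqn. 13 for beta and the Lagrange multipliers
    nb, nc = A.shape[1], C.shape[0]
    K = np.zeros((nb + nc, nb + nc))
    K[:nb, :nb] = 2.0 * A.T @ A
    K[:nb, nb:] = C.T
    K[nb:, :nb] = C
    rhs = np.concatenate([2.0 * A.T @ y, yc])
    sol = np.linalg.solve(K, rhs)
    return sol[:nb], sol[nb:]                      # beta, zeta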

2.3 Various statistics


Various statistics can be calculated if we assume that the breakpoint locations
and model form are correct. This subsection will define the following commonly
used regression statistics:
• Coefficient of determination R2
• Standard error for each β model parameter
• Testing for parameter significance with p-values
• Prediction variance due to the uncertainty from the lack of data

2.3.1 Coefficient of determination


The coefficient of determination, commonly referred to as $R^2$, compares the
correlation between the model output and observed data. First the total sum-
of-squares is calculated using
$$ \textrm{SST} = \sum_{i}^{n} (y_i - \bar{y})^2 \tag{14} $$

where ȳ is the mean of y. The total sum-of-squares depends only on the observed
data points, and is independent of the fitted model. Then the coefficient of
determination is obtained from
$$ R^2 = 1 - \frac{\textrm{SSR}}{\textrm{SST}} \tag{15} $$
where SSR is the sum-of-square of the residuals as defined in Eqn. 10.
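As a quick sketch, $R^2$ can be computed directly from an assembled regression matrix, fitted parameters, and the observed data (illustrative helper, not pwlf's API):

import numpy as np

def r_squared(A, beta, y):
    e = A @ beta - y                               # residual vector, Eqn. 9
    ssr = e @ e                                    # SSR, Eqn. 10
    sst = np.sum((y - np.mean(y)) ** 2)            # SST, Eqn. 14
    return 1.0 - ssr / sst                         # Eqn. 15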

2.3.2 Standard error for each model parameter
The standard error can be calculated for each model parameter. The standard
error represents the estimate of the standard deviation of each β parameter
due to noise in the data. This derivation follows the standard error calculation
presented in Coppe et al. [11] for linear regression problems.
First the unbiased estimate of the variance is calculated as
$$ \hat{\sigma}^2 = \frac{\textrm{SSR}}{n - n_b} \tag{16} $$
where n is the number of data points, and nb is the number of model parameters
(or breakpoints used). Then the standard error (SE) for the βj model parameter
is
$$ \textrm{SE}(\beta_j) = \sqrt{\hat{\sigma}^2 \left[\left(A^{\mathsf{T}}A\right)^{-1}\right]_{jj}} \tag{17} $$
for each $j$ parameter ranging from $j = 1, \cdots, n_b$. It is often assumed that
each parameter follows a normal distribution, with mean $\beta_j$ and standard
deviation $\textrm{SE}(\beta_j)$.
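A minimal NumPy sketch of Eqns. 16 and 17 (illustrative helper, not pwlf's API):

import numpy as np

def standard_errors(A, beta, y):
    n, nb = A.shape
    e = A @ beta - y
    sigma2 = (e @ e) / (n - nb)                    # Eqn. 16
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return np.sqrt(np.diag(cov))                   # SE(beta_j), Eqn. 17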

2.3.3 Test for parameter significance


A statistical test can be done to test for the significance of each β model
parameter. This is a marginal test as defined in section 2.4.2 of [12], because the
β parameters are codependent.
The hypotheses for testing the significance of any individual regression pa-
rameter βj are
$$ H_0 : \beta_j = 0 \tag{18} $$
and
$$ H_1 : \beta_j \neq 0. \tag{19} $$
If H0 is not rejected, then βj may be deleted from the model. This could imply
that too many line segments are being used for the provided data. The test
statistic is
$$ t_j = \frac{\beta_j}{\textrm{SE}(\beta_j)} \tag{20} $$
or the ratio of the parameter to its standard error. The p-value for each parameter
is the probability of obtaining a test statistic greater than $|t_j|$. Note
that $t_j$ follows Student's t-distribution, with $(n - n_b - 2)$ degrees of freedom.
A typical hypothesis test would reject $H_0$ if the p-value is less than some
level of significance $\alpha$.
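A sketch of this test using SciPy's Student's t-distribution (the helper name is illustrative):

import numpy as np
from scipy import stats

def p_values(beta, se, n, nb):
    t = beta / se                                  # test statistic, Eqn. 20
    dof = n - nb - 2                               # degrees of freedom
    # two-sided p-value: probability of exceeding |t_j|
    return 2.0 * (1.0 - stats.t.cdf(np.abs(t), df=dof))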

2.3.4 Prediction variance due to the uncertainty from the lack of data
The prediction variance is a useful tool for assessing the model uncertainty.
For continuous piecewise linear functions, the prediction variance represents
the uncertainty in each linear segment due to the lack of data. For a useful
discussion on the prediction variance, refer to section 8.4.4 of [12].
The regression matrix shall be denoted $\hat{A}$ when it is assembled on a set of
new prediction points $\hat{x}$, and the predictive model output is given as $\hat{y} = \hat{A}\beta$.
Note that x̂ can contain any number of data points from the original domain of

x. The reason for this is that the prediction variance can be calculated at any
x within the model, and is not restricted to the original x data. The prediction
variance as a function of x̂ is given by
 
$$ \textrm{PV}(\hat{x}) = \hat{\sigma}^2\, \textrm{diag}\left(\hat{A}\left(A^{\mathsf{T}}A\right)^{-1}\hat{A}^{\mathsf{T}}\right) \tag{21} $$

where diag represents the diagonal along the matrix. It is generally assumed that
$\hat{y}$ follows a normal distribution, with standard deviation equal to $\sqrt{\textrm{PV}(\hat{x})}$.
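A NumPy sketch of Eqn. 21, where Ahat is the regression matrix assembled on the new $\hat{x}$ points (names are illustrative):

import numpy as np

def prediction_variance(A, Ahat, beta, y):
    n, nb = A.shape
    e = A @ beta - y
    sigma2 = (e @ e) / (n - nb)                    # Eqn. 16
    M = Ahat @ np.linalg.inv(A.T @ A) @ Ahat.T
    return sigma2 * np.diag(M)                     # PV(x-hat), Eqn. 21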

2.4 Finding the optimal breakpoints


The fitting of continuous piecewise linear functions has thus far assumed that
the breakpoint locations are known. In cases when the breakpoint locations are
unknown, optimization can be used to find the best set of breakpoint locations.
This formulation requires the user to specify the desired number of line segments.
Remember there are nb − 1 number of line segments.
For any given set of breakpoint locations b, a least squares fit can be per-
formed which solves for the β parameters that minimize the sum-of-square error
of the residuals. The sum-of-square of the residuals can be represented as a func-
tion dependent on the breakpoint locations SSR(b). The library assumes that
the first breakpoint is b1 = x1 (or the smallest x), and the last breakpoint is
bnb = xn (or the largest x). An optimization problem is formulated to find
the breakpoint locations that minimize the overall sum-of-square of the residu-
als. Note that there are nb − 2 number of unknown breakpoint locations. The
summary of the optimization problem is as follows:

$$
\begin{aligned}
\underset{b}{\textrm{minimize}} \quad & \textrm{SSR}(b), \quad b = [b_2, \cdots, b_{n_b-1}]^{\mathsf{T}} \\
\textrm{subject to} \quad & x_1 \le b_k \le x_n, \quad k = 2, \cdots, n_b - 1.
\end{aligned}
$$

Two different optimization strategies are currently utilized. The first strat-
egy utilizes Differential Evolution (DE) for the global optimization [13]. The
DE optimization algorithm is used in the fit function as pwlf ’s default optimiza-
tion strategy. The specific DE strategy being used is the one implemented in
SciPy [14]. The DE optimization strategy is a very popular heuristic optimizer
that has been used in a variety of applications. However, cases may arise where
the progress of DE is deemed too slow, too expensive, or the DE results are
consistently undesirable.
As an alternative to DE, a multi-start gradient-based optimization strategy
is used in the fitfast function. The multi-start optimization algorithm first
generates an initial population from a Latin hypercube sampling^3. This is a
space filling experiment design, where each point in the population represents a
unique combination of breakpoint locations. A local optimization is then per-
formed from each point in the population. Running multiple local optimizations
is a strategy that attempts to find the global optimum, and such a strategy was
mentioned by Muggeo [15] for solving problems with multiple local minima. The
local optimization algorithm being used is the L-BFGS-B [16] gradient-based opti-
mizer implemented in SciPy [14]. Schutte et al. [17] observed a case where the
^3 The Latin hypercube sampling is done using the pyDOE package. https://pythonhosted.org/pyDOE/

multi-start optimization performance may exceed running a single optimization
algorithm for an extended period of time. The caveat with multi-start
gradient-based optimization algorithms is that each individual optimization may
get stuck at a local minimum, and thus increasing the number of starting points
will increase the chances of finding the global optimum.
The overall methodology described is an optimization within an optimiza-
tion, or a double-loop optimization. There is an inner optimization occurring at
every function evaluation. This inner optimization is the least squares fit which
finds the best continuous piecewise linear model for a given set of breakpoint
locations. It is required to solve the least squares problem several times within
the outer optimization process. As shown later in the examples, the outer opti-
mization process can be used to find breakpoint locations in a short amount of
time on a modern computer. This is largely possible because of the efficiency
of the least squares method. Finding the breakpoint locations for other error
measures (e.g. minimizing the average absolute deviation, or any other $L_N$ norm)
would be a significantly more expensive problem, because an iterative solver
would be required within the inner loop of the optimization process.
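The double-loop structure can be sketched in a few lines with SciPy's differential_evolution. This is an illustration of the methodology under assumed helper names (ssr_of_breaks and assemble_A), not pwlf's internal code.

import numpy as np
from scipy.optimize import differential_evolution

def assemble_A(x, breaks):
    # Regression matrix of Eqn. 4 (same construction as the Section 2.1 sketch)
    cols = [np.ones_like(x), x - breaks[0]]
    cols += [np.where(x > b, x - b, 0.0) for b in breaks[1:-1]]
    return np.column_stack(cols)

def ssr_of_breaks(interior, x, y):
    # Inner loop: least squares fit for one candidate set of interior
    # breakpoints; the first and last breakpoints are fixed at min(x), max(x)
    breaks = np.concatenate(([x[0]], np.sort(interior), [x[-1]]))
    A = assemble_A(x, breaks)
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    e = A @ beta - y
    return e @ e                                   # SSR(b)

# Outer loop: DE searches over the n_b - 2 unknown interior breakpoints
rng = np.random.default_rng(0)
x = np.sort(rng.random(50))
y = np.where(x < 0.4, 2.0 * x, 0.8)                # synthetic two-segment data
bounds = [(x[0], x[-1])]                           # one interior breakpoint
res = differential_evolution(ssr_of_breaks, bounds, args=(x, y), seed=0)
# res.x holds the optimal interior breakpoint; res.fun the minimized SSR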

3 The pwlf Python library


A brief overview of the pwlf library is provided. This information includes
installation details, versioning semantics, and details about the fitting class.

3.1 Installation
It is recommended to install pwlf using pip by running

pip install pwlf

in a shell (or command prompt)^4. This will download and install the latest
pwlf release along with the necessary dependencies. The dependencies are the
following:
• Python >= 2.7
• NumPy >= 1.14.0
• SciPy >= 0.19.0
• pyDOE >= 0.3.8
Alternatively, pwlf can be installed from the source code by running the
following.

git clone https://github.com/cjekel/piecewise_linear_fit_py.git
pip install ./piecewise_linear_fit_py

^4 The PyPA recommended tool for installing packages is pip. https://pypi.org/project/pip/

3.2 Versioning
To import and check the version of pwlf run

import pwlf
pwlf.__version__

where pwlf.__version__ is a string following "MAJOR.MINOR.PATCH" Semantic
Versioning^5. The changelog is hosted online at
https://github.com/cjekel/piecewise_linear_fit_py/blob/master/CHANGELOG.md, and
released versions of pwlf are available online at https://pypi.org/project/pwlf/.
A new release will be uploaded for changes in the source code; however, the
most minor changes (typos in docstrings or example files) may not be released
to PyPI.org.
Run the following code to upgrade pwlf (but none of the dependencies) to
the latest version.

pip install pwlf --upgrade --no-deps

3.3 PiecewiseLinFit class


The usage of pwlf was largely inspired by the simplicity of the scikit-learn project
[18]. The entire fitting routine is stored inside the PiecewiseLinFit class. The
object is initialized by calling

model = PiecewiseLinFit(x, y, disp_res=False, sorted_data=False)

where model becomes the working object in which all fitting routines are run
for the particular x and y data. If the breakpoint locations are known, use
model.fit_with_breaks to perform a least squares fit. If breakpoint locations are
unknown, use model.fit or model.fitfast to perform a fit by specifying the desired
number of line segments. Once a fit has been performed, the object will contain
the following attributes:

model.ssr # sum-of-squares error
model.fit_breaks # breakpoint locations
model.n_parameters # number of model parameters
model.n_segments # number of line segments
model.beta # model parameters
model.slopes # slope of each line segment
model.intercepts # y intercepts of each line segment
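For example, after performing a fit (here with the fit function described in Section 4.2), these attributes can be inspected; the printed values will depend on the data:

import numpy as np
import pwlf

x = np.linspace(0.0, 10.0, 21)
y = np.where(x < 5.0, x, 5.0)          # simple two-segment data

model = pwlf.PiecewiseLinFit(x, y)
model.fit(2)                           # fit two line segments
print(model.fit_breaks)                # breakpoint locations
print(model.slopes)                    # slope of each segment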

5 https://semver.org/spec/v2.0.0.html

4 Examples
Simple examples are provided for the following use-cases:
1. fit with explicit breakpoint locations
2. fit for specified number of line segments
3. force the fit through data points
4. use a custom optimization routine to find the optimal breakpoint locations
For additional examples, please look in the examples folder within the source,
which is available at https://github.com/cjekel/piecewise_linear_fit_py.
To get started with the examples: first import the necessary libraries, copy
the x and y data, and finally initialize the fitting object as model.

import numpy as np
import pwlf

x = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.,
12., 13., 14., 15.])
y = np.array([5., 7., 9., 11., 13., 15., 28.92, 42.81, 56.7,
70.59, 84.47, 98.36, 112.25, 126.14, 140.03])

model = pwlf.PiecewiseLinFit(x, y)

4.1 Fit model with explicit breakpoint locations


The most basic fit is for explicit breakpoint locations by solving the least
squares problem presented in Eqn. 7. The example finds the best continuous
piecewise linear function that has breakpoint locations at [0.0, 7.0, 16.0]. The
fit_with_breaks function is used to perform the least squares fit. The following
code performs the fit.

breakpoints = [0.0, 7.0, 16.0]
model.fit_with_breaks(breakpoints)

Prediction from a fitted model just requires calling the predict method on
new x locations. The following code evaluates the model on 100 new x̂ locations
within the domain of x.

x_hat = np.linspace(x.min(), x.max(), 100)
y_hat = model.predict(x_hat)

The resulting fit and data can be (optionally) plotted using the matplotlib
library. The resulting fit is shown in Fig. 1.

import matplotlib.pyplot as plt
plt.figure()
plt.plot(x, y, 'o')
plt.plot(x_hat, y_hat, '-')
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
plt.show()


Figure 1: Example of fitting a continuous piecewise linear function with breakpoints occurring at [0.0, 7.0, 16.0].

4.2 Fit for specified number of line segments


The breakpoint locations are unknown in many cases. In the given example it
appears obvious that there are two distinct linear regions, and thus it is logical
to find the breakpoint locations for two line segments. To do so, just run

breakpoints = model.fit(2)

where breakpoints is a numpy array containing the optimal breakpoint locations.
The result of this fit is shown in Fig. 2, and the resulting breakpoint locations
occur at [1.0, 6.0, 15.0].


Figure 2: Example of fitting two continuous piecewise line segments to a simple data set.

4.3 Forcing fit through data points


It may sometimes be desirable to force the continuous piecewise linear function
through a particular point, or set of points. Such a fit is done by specifying x_c
and y_c while performing a fit. The following code finds the best two continuous
piecewise lines, such that the model goes through the point (0.0, 0.0). The result
is shown in Fig. 3.

model.fit(2, x_c=[0.0], y_c=[0.0])


Figure 3: Example of finding the best two continuous piecewise lines that go
through the point (0.0, 0.0).

4.4 Using a custom optimization routine


It is possible to find the optimal breakpoint locations using your favorite opti-
mization algorithm. First, run use_custom_opt to specify the desired number
of line segments. Then, pass fit_with_breaks_opt as the objective function to
your favorite optimization routine. The following example uses SciPy’s minimize
to find the best breakpoint locations for two line segments. There is only one
variable to optimize, because pwlf assumes that the first and last breakpoints
occur at the minimum and maximum x data points.

from scipy.optimize import minimize

model.use_custom_opt(2)
guess = [5.0] # guess the breakpoint location
res = minimize(model.fit_with_breaks_opt, guess)

5 Conclusion
A methodology was presented for fitting continuous piecewise linear functions to
1D data. If breakpoints (or the locations where each line segment terminates) are
known, then a simple least squares fit is performed to find the best continuous
piecewise linear function. If breakpoints are unknown but the desired number
of line segments is known, then optimization is used to find the breakpoint
locations of the best continuous piecewise linear function. This is a double
optimization process. The outer loop is attempting to find the best breakpoint
locations, while the inner loop is performing a least squares fit to find the best
β model parameters. This methodology of fitting continuous piecewise linear
functions, as well as the various statistics associated with this particular regression
model, has been discussed in detail. A few examples of the basic usage of pwlf
were described in this paper. Additionally, there are a number of other examples
available online at https://github.com/cjekel/piecewise_linear_fit_py.

Acknowledgments
Charles F. Jekel would like to acknowledge University of Florida’s Graduate
Preeminence Award and U.S. Department of Veterans Affairs’ Educational As-
sistance program for providing funding for his PhD.
Thanks to Raphael Haftka for his numerous comments related to optimiza-
tion and linear regression.

References
[1] N. M. Fyllas, S. Patiño, T. R. Baker, G. Bielefeld Nardoto, L. A. Martinelli, C. A. Quesada, R. Paiva, M. Schwarz, V. Horna, L. M. Mercado, A. Santos, L. Arroyo, E. M. Jiménez, F. J. Luizão, D. A. Neill, N. Silva, A. Prieto, A. Rudas, M. Silviera, I. C. G. Vieira, G. Lopez-Gonzalez, Y. Malhi, O. L. Phillips, and J. Lloyd, “Basin-wide variations in foliar properties of Amazonian forest: phylogeny, soils and climate,” Biogeosciences, vol. 6, no. 11, pp. 2677–2708, 2009. [Online]. Available: https://www.biogeosciences.net/6/2677/2009/

[2] C. Ocampo-Martinez and V. Puig, “Piece-wise linear functions-based model predictive control of large-scale sewage systems,” pp. 1581–1593, 2010. [Online]. Available: http://digital-library.theiet.org/content/journals/10.1049/iet-cta.2009.0206

[3] R. Heinkelmann, J. Böhm, S. Bolotin, G. Engelhardt, R. Haas, R. Lanotte, D. S. MacMillan, M. Negusini, E. Skurikhina, O. Titov, and H. Schuh, “VLBI-derived troposphere parameters during CONT08,” Journal of Geodesy, vol. 85, no. 7, pp. 377–393, Jul. 2011. [Online]. Available: https://doi.org/10.1007/s00190-011-0459-x

[4] S. Klikovits, A. Coet, and D. Buchs, “ML4CREST: Machine Learning for CPS Models.”

[5] G. Villarini, J. A. Smith, and G. A. Vecchi, “Changing Frequency of Heavy Rainfall over the Central United States,” Journal of Climate, vol. 26, no. 1, pp. 351–357, 2013. [Online]. Available: https://doi.org/10.1175/JCLI-D-12-00043.1

[6] J. Ollerton, H. Erenler, M. Edwards, and R. Crockett, “Extinctions of aculeate pollinators in Britain and the role of large-scale agricultural changes,” Science, vol. 346, no. 6215, pp. 1360–1362, 2014. [Online]. Available: http://science.sciencemag.org/content/346/6215/1360

[7] E. Jauk, M. Benedek, B. Dunst, and A. C. Neubauer, “The relationship between intelligence and creativity: New support for the threshold hypothesis by means of empirical breakpoint detection,” Intelligence, vol. 41, no. 4, pp. 212–221, 2013. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S016028961300024X

[8] V. M. R. Muggeo, “Estimating regression models with unknown break-points,” Statistics in Medicine, vol. 22, no. 19, pp. 3055–3071, 2003. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.1545

[9] N. Golovchenko, “Least-squares fit of a continuous piecewise linear function,” 2004.

[10] S. Boyd and L. Vandenberghe, Introduction to Applied Linear Algebra. Cambridge University Press, 2018. [Online]. Available: https://web.stanford.edu/~boyd/vmls/vmls.pdf

[11] A. Coppe, R. T. Haftka, and N. H. Kim, “Uncertainty Identification of Damage Growth Parameters Using Nonlinear Regression,” AIAA Journal, vol. 49, no. 12, pp. 2818–2821, Dec. 2011. [Online]. Available: http://dx.doi.org/10.2514/1.J051268

[12] R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 4th ed., ser. Wiley Series in Probability and Statistics. Wiley, 2016.

[13] R. Storn and K. Price, “Differential Evolution – A Simple and Efficient Heuristic for global Optimization over Continuous Spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, Dec. 1997. [Online]. Available: https://doi.org/10.1023/A:1008202821328

[14] E. Jones, T. Oliphant, P. Peterson, and others, “SciPy: Open source scientific tools for Python.” [Online]. Available: http://www.scipy.org/

[15] V. M. R. Muggeo, “Segmented: an R package to fit regression models with broken-line relationships,” R News, vol. 8, no. 1, pp. 20–25, 2008.

[16] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, “A limited memory algorithm for bound constrained optimization,” SIAM Journal on Scientific Computing, vol. 16, no. 5, pp. 1190–1208, 1995.

[17] J. F. Schutte, R. T. Haftka, and B. J. Fregly, “Improved global convergence probability using multiple independent optimizations,” International Journal for Numerical Methods in Engineering, vol. 71, no. 6, pp. 678–702, Dec. 2006. [Online]. Available: https://doi.org/10.1002/nme.1960

[18] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.