
Learning from derivative data

In many machine learning algorithms, it is assumed that both the inputs and the corresponding outputs of an unknown function are given in order to estimate it. However, I wonder whether there exist algorithms that can estimate an unknown function using derivative data of that function and the corresponding inputs. For example:

$$\mbox{Given } D = \{(x_1, y_1), \ldots, (x_n, y_n)\},$$ $$\mbox{where } y_i = \frac{df(x)}{dx}\Bigg|_{x = x_i} \mbox{ for } 1 \leq i \leq n,$$ $$\mbox{find } f(x).$$
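To make the setting concrete, here is a minimal sketch of the kind of thing I mean. Everything in it is an illustrative assumption of mine (the polynomial model, the toy target $f(x) = \sin x$, and the noise level), not a method I found in the literature:

```python
import numpy as np

# Toy target: f(x) = sin(x), so the observed derivatives are f'(x) = cos(x).
rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
y_prime = np.cos(x) + 0.05 * rng.standard_normal(n)  # noisy derivative samples

# Model f as a polynomial sum_{k=1}^{d} c_k x^k (no constant term, since c_0
# is unidentifiable from derivative data alone). Then f'(x) = sum_k k c_k x^{k-1}
# is linear in the coefficients, so ordinary least squares on the derivatives applies.
d = 8
A = np.column_stack([k * x ** (k - 1) for k in range(1, d + 1)])
coef, *_ = np.linalg.lstsq(A, y_prime, rcond=None)

def f_hat(t, c0=0.0):
    """Estimated f, up to the additive constant c0 (here f(0) = 0 pins it)."""
    return c0 + sum(c * t ** k for k, c in enumerate(coef, start=1))

t = np.linspace(0.0, 2.0 * np.pi, 400)
print("max |f_hat - f| on the grid:", np.max(np.abs(f_hat(t) - np.sin(t))))
```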

I found one curve-fitting method, called Hermite interpolation, on Wikipedia, as well as some papers about Hermite learning. However, these methods assume that the outputs of the unknown function, its derivative values, and the corresponding inputs are all given. I specifically want to know whether there exist methods that use only derivative data. Also, is it possible to recover the unknown function from derivative data alone (up to an additive constant, of course), in particular so that the error between the unknown function and our estimator decreases as the number of training samples increases? How can we prove this?
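I do not know how to prove such a bound, but here is a quick empirical check under the same toy assumptions as in the sketch above. Naively, I would expect the error to shrink as $n$ grows, at least until the fixed-degree polynomial's own approximation bias dominates the noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_error(n, d=8):
    """Fit f' = cos from n noisy samples, integrate, compare to f = sin."""
    x = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    y_prime = np.cos(x) + 0.05 * rng.standard_normal(n)
    A = np.column_stack([k * x ** (k - 1) for k in range(1, d + 1)])
    coef, *_ = np.linalg.lstsq(A, y_prime, rcond=None)
    t = np.linspace(0.0, 2.0 * np.pi, 400)
    f_hat = sum(c * t ** k for k, c in enumerate(coef, start=1))  # f(0) = 0
    return np.max(np.abs(f_hat - np.sin(t)))

# Error vs. sample size: does it decay, and at what rate?
for n in (20, 80, 320, 1280):
    print(n, max_error(n))
```

Is there a known result that formalizes this, e.g. a convergence rate for estimators trained only on derivative observations?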