
One may define the derivative of $f$ at $x$ as $\lim\limits_{h\to0}\dfrac{f(x+h)-f(x)}{h}$ and show that it has certain properties, but it also has a "physical" interpretation: it is an instantaneous rate of change.

How much money do I need to put in the bank today to have $\$1$, $t$ years from now, assuming continuous compounding at a constant rate? The answer is $(e^{-st} \times \$1)$ where $s$ is the annual interest rate. So how much do I need to deposit today to get paid at a rate of $f(t)$ in dollars per year, $t$ years from now? The answer is $$ \int_0^\infty e^{-st} f(t)\,dt. $$ This is a "physical" interpretation of the Laplace transform as the "present value" of a revenue stream, as a function of the interest rate.
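For a quick numerical sanity check (a minimal sketch of mine, not part of the original question; the function name `present_value` and the choice $f(t)=1$ are illustrative): a constant revenue stream of $\$1$ per year should have present value $\int_0^\infty e^{-st}\,dt = 1/s$.

```python
import numpy as np

def present_value(f, s, T=400.0, n=400_000):
    """Riemann-sum approximation of the integral of e^{-s t} f(t) over [0, T]
    (T chosen large enough that the discarded tail is negligible)."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    return float(np.sum(np.exp(-s * t) * f(t)) * dt)

s = 0.05  # 5% annual interest rate, continuously compounded
pv = present_value(lambda t: np.ones_like(t), s)
print(pv, 1.0 / s)  # both ≈ 20.0: a $1/yr perpetuity is worth $1/s dollars today
```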

Is that pretty much the whole story of how to "physically" interpret the Laplace transform, or can more be said?

  • The trick is that in the current shape, your interpretation sends the payment rate $f(t)$ to the necessary deposit amount given the interest value $(\mathcal Lf)(s)$. However, I haven't seen that the latter dependence (of the deposit on the interest) is very popular to consider even in finance. Perhaps it's more natural to look into the connections between the Laplace and Fourier transforms? – SBF, Jun 24, 2013 at 16:15
  • What is "physical" here? – Commented Jun 24, 2013 at 16:50
  • @O.L.: "Physical" means applied to questions asked by those whose interest is in something other than mathematics, but who might want to use mathematics to get the answers. This is as opposed to those pursuing mathematics as an end in itself. – Commented Jun 24, 2013 at 17:21
  • Take a look at some of the answers here: mathoverflow.net/questions/16274/fourier-vs-laplace-transforms. There is also a way to motivate the Laplace transform as a continuous Taylor expansion of a function; see some of the answers here as well: mathoverflow.net/questions/383/… – Alex R., Jun 25, 2013 at 7:33
  • Very clever. But (and I realize that this means thinking about imaginary $s$), how would you interpret the poles of your deposit function $\hat{f}(s)$? Are there interest rates for which you may never attain any sort of payout? – Ron Gordon, Jun 25, 2013 at 14:14

1 Answer


There's really a lot that can be said, but I will only delve into one geometric idea: the Laplace transform, like many integral transforms, is a change of basis ("change of coordinate system"). I consider this a "physical" interpretation because it is geometric: you will be able to imagine the Laplace transform's action on a function much as you imagine how a matrix can geometrically transform a vector. This description is also mathematically precise.

If you have studied linear algebra, you have likely run into the concept of a change of basis. Imagine you have a vector in $\mathbb{R}^2$. You construct a basis $\{\hat{a}_x, \hat{a}_y\}$ to represent your vector as $$ v = a_1\hat{a}_x+a_2\hat{a}_y $$

To express $v$ on an alternative basis, $\{\hat{b}_x, \hat{b}_y\}$, we make use of the inner product (or "dot" product) that geometrically does the job of computing projections. Basically, we are asking:

Given $[a_1, a_2]^T$, $\{\hat{a}_x, \hat{a}_y\}$, and $\{\hat{b}_x, \hat{b}_y\}$, compute $[b_1, b_2]^T$ so that, $$a_1\hat{a}_x+a_2\hat{a}_y = b_1\hat{b}_x+b_2\hat{b}_y$$

If we take the inner product of both sides of this equation with $\hat{b}_x$, we get, $$\hat{b}_x\cdot(a_1\hat{a}_x+a_2\hat{a}_y) = \hat{b}_x\cdot(b_1\hat{b}_x+b_2\hat{b}_y)$$ $$a_1(\hat{b}_x\cdot\hat{a}_x)+a_2(\hat{b}_x\cdot\hat{a}_y) = b_1$$

where for the final simplification we made the common choice that our basis $\{\hat{b}_x, \hat{b}_y\}$ is orthonormal, $$\hat{b}_x\cdot\hat{b}_x=1,\ \ \ \ \hat{b}_x\cdot\hat{b}_y=0$$

Now we have $b_1$ in terms of $a_1$ and $a_2$ with some weights $\hat{b}_x\cdot\hat{a}_x$ and $\hat{b}_x\cdot\hat{a}_y$ that are really equal to the cosines of the angles between these vectors, which we know since we were the ones who chose these coordinate systems. We do the same thing again but dotting with $\hat{b}_y$ to compute $b_2$.

We can construct a matrix representation of this process, $$R \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$ $$R = \begin{bmatrix} \hat{b}_x\cdot\hat{a}_x & \hat{b}_x\cdot\hat{a}_y \\ \hat{b}_y\cdot\hat{a}_x & \hat{b}_y\cdot\hat{a}_y \end{bmatrix}$$
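Here is the derivation above made concrete (a numerical sketch of mine; the 30-degree rotation and the vector $[2, 1]^T$ are arbitrary choices for illustration):

```python
import numpy as np

# Old basis: the standard one. New basis: the standard basis rotated by 30 degrees.
theta = np.deg2rad(30.0)
a_x, a_y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b_x = np.array([ np.cos(theta), np.sin(theta)])
b_y = np.array([-np.sin(theta), np.cos(theta)])

# R's entries are exactly the pairwise inner products from the derivation above
R = np.array([[b_x @ a_x, b_x @ a_y],
              [b_y @ a_x, b_y @ a_y]])

a = np.array([2.0, 1.0])   # coordinates [a1, a2] on the old basis
b = R @ a                  # coordinates [b1, b2] on the new basis

# Both coordinate lists describe the same geometric vector:
print(np.allclose(a[0]*a_x + a[1]*a_y, b[0]*b_x + b[1]*b_y))  # True
```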

What does this have to do with the Laplace transform? We need to generalize our understanding of vectors now. 3Blue1Brown does a great job of explaining this. Put concisely: a function can be thought of as a member of an infinite-dimensional vector space, with one element for every point in its domain, arranged as an ordered array.

$$f(x) = \begin{bmatrix} \vdots \\ f(0.05) \\ f(0.051) \\ f(0.052) \\ \vdots \end{bmatrix}$$

where of course it has elements for all the values in between $0.05$ and $0.051$, etc. The indexing is uncountable, but the point is that these objects are vectors because they satisfy the axioms of a vector space. What we can do with functions is identical to what we do with typical vectors in finite-dimensional spaces: when you add two functions you add them "element-wise", scalar multiplication does what it should, and furthermore, we even have an inner product! (So suitably well-behaved functions are not just members of a vector space, but of a Hilbert space.)

$$f(x)\cdot g(x) = \int_{-\infty}^\infty f(x)g(x) \, dx$$

What that integral means: for each $x$ (index), multiply $f(x)$ and $g(x)$ and sum that result down all the $x$. Look familiar?

$$\begin{bmatrix} a_1 \\ a_2 \\ \vdots \end{bmatrix} \cdot \begin{bmatrix} b_1 \\ b_2 \\ \vdots\end{bmatrix} = a_1b_1 + a_2b_2 + \cdots$$
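Yes: sample two functions on a fine grid and the integral is just the dot product of the sample vectors, weighted by the grid spacing. (A sketch of mine; the two Gaussians are an arbitrary choice with a known closed-form inner product.)

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 100_001)
dx = x[1] - x[0]
f = np.exp(-x**2)          # samples of f(x) = e^{-x^2}
g = np.exp(-(x - 1)**2)    # samples of g(x) = e^{-(x-1)^2}

dot = np.sum(f * g) * dx   # element-wise multiply, then sum (weighted by dx)
exact = np.sqrt(np.pi / 2) * np.exp(-0.5)   # closed form of the integral of f*g
print(dot, exact)          # agree to many digits
```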

If functions are vectors, then don't they need to be expressed on bases of other functions? Yes. How many basis functions do you need to span an infinite-dimensional space? Infinitely many. Is it hard to describe an infinite number of distinct functions? No. One example: $$f_n(x)=x^n\ \ \ \ \forall n \in \mathbb{N}$$ Though notice that we don't include both $x^n$ and $cx^n$, because they are linearly dependent; they span the same direction. We sometimes call linearly dependent functions "like terms" in the sense that we can combine $x^n + cx^n = (1+c)x^n$, whereas we cannot combine linearly independent functions such as $x^n + x^{n+1}$.

If we took the inner product of one of these $f_n(x)$ with another $f_m(x)$ we would certainly not get $0$, so they don't have that nice orthogonality property where $\hat{b}_x\cdot\hat{b}_y=0$. The monomials don't form an orthogonal basis. However, the purely imaginary exponentials $e^{i\omega x}$ do (using the complex inner product, with orthogonality on the whole real line understood in the distributional sense).
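You can check both claims numerically on a finite interval (a sketch of mine; on $[-\pi,\pi]$ the exponentials $e^{inx}$ with integer $n$ are exactly orthogonal, which is the Fourier-series version of the statement):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]

def inner(f, g):
    # complex inner product <f, g>: integrate f(x) * conj(g(x)) over [-pi, pi]
    return np.sum(f * np.conj(g)) * dx

print(inner(x**2, x**4).real)                    # 2*pi^7/7: monomials are far from orthogonal
print(abs(inner(np.exp(2j*x), np.exp(3j*x))))    # ≈ 0: distinct frequencies are orthogonal
print(inner(np.exp(2j*x), np.exp(2j*x)).real)    # ≈ 2*pi: the squared norm, before normalizing
```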

Now let's look at that mysterious Laplace transform. $$\mathscr{L}(f(x)) = \int_{-\infty}^\infty e^{-sx}f(x) \, dx$$ (This is the bilateral version; the more common one-sided transform integrates from $0$ to $\infty$, which is the same thing for functions that vanish for $x<0$.)

Imagine all possible values of $e^{-sx}$ in a big matrix, where each row corresponds to plugging in a specific $s$ and each column corresponds to plugging in a specific $x$. (The rows of this matrix are orthonormal, in the distributional sense, when $s=i\omega$ is purely imaginary, i.e. for the Fourier transform.)

If you select some $s$, you are plucking out a specific value of the function that resulted from the multiplication of this matrix with the vector $f(x)$, a function we call $F(s):=\mathscr{L}(f(x))$. Specifically, $$F(s=3) = f(x) \cdot e^{-3x}$$

(where that dot is an inner product, not ordinary multiplication). We say that $F(s)$ is just $f(x)$ expressed on a basis of exponential functions. Choosing a specific value $s=s_1$ picks out the component of $f(x)$ in the $e^{-s_1x}$ direction. The entire kernel $e^{-sx}$ can be viewed as the change-of-basis matrix.
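Here is that "big matrix" made literal on a grid (a discretized sketch, my construction, not part of the original answer): each row of the matrix is $e^{-sx}$ sampled at a fixed $s$, and multiplying by the sampled $f$ (with a $dx$ weight) gives samples of $F(s)$. For $f(x)=e^{-x}$ on $x\ge 0$, the transform should be $1/(s+1)$.

```python
import numpy as np

x = np.linspace(0.0, 50.0, 50_001)   # truncated grid; f decays fast, so the tail is negligible
dx = x[1] - x[0]
s = np.array([0.5, 1.0, 2.0, 3.0])

K = np.exp(-np.outer(s, x))   # the "matrix": row = a choice of s, column = a choice of x
f = np.exp(-x)                # the "vector": f(x) = e^{-x} sampled on the grid

F = (K @ f) * dx              # matrix-vector product ≈ integral of e^{-sx} f(x), i.e. F(s)
print(F)                      # ≈ [0.6667, 0.5, 0.3333, 0.25]
print(1 / (s + 1))            # the exact transform of e^{-x} is 1/(s+1)
```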

Physical applications of thinking about things this way:

Differential equations. They model, like, everything. The exponential has a special relationship with the derivative. If we view the derivative as some operator $D$, we see that $D$ applied to an exponential just returns a scaled version of that same exponential.

$$D(e^{\lambda x}) = \lambda e^{\lambda x}$$ $$D(f) = \lambda f$$ $$Av = \lambda v$$

Looks like one of them eigen-somethin' equations. The exponentials are eigenfunctions of the derivative operator. Imagine we have an equation built from just derivative operators and constants (throwing in anything else could break the eigen-relationship with the exponential). Things would be easier if we changed coordinates to the eigenbasis.

On the eigenbasis, the action of the original operators becomes simple scaling. We do this all the time to solve real-world problems on finite-dimensional vector spaces (change your coordinates to make the problem simpler); now we're just doing it for infinite-dimensional vector spaces. This is why the Laplace transform is useful for solving linear differential equations, among many other things.
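As a concrete instance (a sketch using SymPy; the specific ODE $y''+y=0$ with $y(0)=0$, $y'(0)=1$ is my choice, not from the original answer): on the exponential basis the derivative becomes multiplication by $s$, so the differential equation collapses to algebra.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Solve y'' + y = 0 with y(0) = 0, y'(0) = 1 via the Laplace transform.
# On the exponential basis, d/dt becomes multiplication by s (plus boundary terms):
#   L{y''} = s**2 * Y - s*y(0) - y'(0),  so the ODE becomes (s**2 + 1)*Y - 1 = 0.
Y = 1 / (s**2 + 1)   # algebra instead of calculus

y = sp.inverse_laplace_transform(Y, s, t)   # change coordinates back
print(y)  # sin(t)  (possibly times Heaviside(t), which is 1 for t > 0)
```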

You should now also be able to interpret Fourier transforms and Hankel transforms, and to explain why the Legendre polynomials are good stuff, in exactly the same way.

Read up more on functional analysis if this interests you. You may also find linear systems theory and the concept of convolution cool and approachable now too. Good luck!

P.S. This wasn't meant to be a book on this topic, and I wasn't being nearly as rigorous as I should be on Math Stack Exchange, but I figured that since you wanted intuition, a less rigorous more intuitive introduction to the functional analysis perspective was the right thing to provide.

A follow-up Q/A can be found here!

  • @MichaelHardy Is there anything you'd like me to add or change to make this an acceptable answer? – jnez71, Jun 14, 2018 at 21:30
  • I would count "geometric" as "physical" if it were about some question of geometry that arises without thinking about the Laplace transform. – Commented Jun 15, 2018 at 2:39
  • If I had a thumbs-up smiley on my keyboard, I would put it here. ;-) – amsmath, Apr 18, 2019 at 21:51
  • Hello, I found this Q&A after many hours of searching for the motivation behind developing the Laplace transform as it is. I must commend you for writing what is, in my opinion, the single best resource for it online. However, your last comment about rigor intrigued me... could you go a little more in depth about which steps need to be more rigorous and how to make them so? Thank you! – D.R., Jun 11, 2019 at 5:50
  • @D.R. You're welcome! Feel free to share it :) As far as the rigor goes, I just mean that nothing I said was worded in terms of exact definitions, and many properties were stated without formal proof. I also left out some details like the issue of convergence of integral transforms (whenever infinite integrals are involved one should be vigilant). You would get all this with an actual text on functional analysis. – jnez71, Jun 11, 2019 at 23:17
