There's really a lot that can be said, but I will only delve into one geometric idea: the Laplace transform, like many integral transforms, is a change of basis ("coordinate system"). I consider this a "physical" interpretation because it is geometric: you will be able to imagine the Laplace transform's action on a function much like you imagine how a matrix can geometrically transform a vector. This description is also mathematically precise.
If you have studied linear algebra, you have likely run into the concept of a change of basis. Imagine you have a vector in $\mathbb{R}^2$. You construct a basis $\{\hat{a}_x, \hat{a}_y\}$ to represent your vector as
$$
v = a_1\hat{a}_x+a_2\hat{a}_y
$$
To express $v$ on an alternative basis, $\{\hat{b}_x, \hat{b}_y\}$, we make use of the inner product (or "dot" product) that geometrically does the job of computing projections. Basically, we are asking:
Given $[a_1, a_2]^T$, $\{\hat{a}_x, \hat{a}_y\}$, and $\{\hat{b}_x, \hat{b}_y\}$, compute $[b_1, b_2]^T$ so that,
$$a_1\hat{a}_x+a_2\hat{a}_y = b_1\hat{b}_x+b_2\hat{b}_y$$
If we take the inner product of both sides of this equation with $\hat{b}_x$, we get,
$$\hat{b}_x\cdot(a_1\hat{a}_x+a_2\hat{a}_y) = \hat{b}_x\cdot(b_1\hat{b}_x+b_2\hat{b}_y)$$
$$a_1(\hat{b}_x\cdot\hat{a}_x)+a_2(\hat{b}_x\cdot\hat{a}_y) = b_1$$
where for the final simplification we made the common choice that our basis $\{\hat{b}_x, \hat{b}_y\}$ is orthonormal,
$$\hat{b}_x\cdot\hat{b}_x=1,\ \ \ \ \hat{b}_x\cdot\hat{b}_y=0$$
Now we have $b_1$ in terms of $a_1$ and $a_2$ with some weights $\hat{b}_x\cdot\hat{a}_x$ and $\hat{b}_x\cdot\hat{a}_y$ that are really equal to the cosines of the angles between these vectors, which we know since we were the ones who chose these coordinate systems. We do the same thing again but dotting with $\hat{b}_y$ to compute $b_2$.
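To make those weights concrete (the particular rotation here is just an illustrative choice, not part of the setup above): if $\{\hat{b}_x, \hat{b}_y\}$ is $\{\hat{a}_x, \hat{a}_y\}$ rotated counterclockwise by an angle $\theta$, then $\hat{b}_x\cdot\hat{a}_x=\cos\theta$ and $\hat{b}_x\cdot\hat{a}_y=\cos(90^\circ-\theta)=\sin\theta$, so
$$b_1 = a_1\cos\theta + a_2\sin\theta$$
For instance, with $\theta = 30^\circ$ and $[a_1, a_2]^T = [1, 0]^T$ (a vector pointing along $\hat{a}_x$), we get $b_1 = \cos 30^\circ \approx 0.87$.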
We can construct a matrix representation of this process,
$$R \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$
$$R = \begin{bmatrix} \hat{b}_x\cdot\hat{a}_x & \hat{b}_x\cdot\hat{a}_y \\ \hat{b}_y\cdot\hat{a}_x & \hat{b}_y\cdot\hat{a}_y \end{bmatrix}$$
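If you want to see this numerically, here is a minimal sketch in NumPy; the $30^\circ$ rotation and the test vector $[2, 1]^T$ are arbitrary illustrative choices, not part of the setup above.

```python
import numpy as np

# The a-basis is the standard basis; the b-basis is a copy rotated by 30 degrees
# (both choices are purely illustrative).
theta = np.deg2rad(30)
a_x, a_y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b_x = np.array([np.cos(theta), np.sin(theta)])
b_y = np.array([-np.sin(theta), np.cos(theta)])

# Build R entry-by-entry from inner products, exactly as in the formula above.
R = np.array([[b_x @ a_x, b_x @ a_y],
              [b_y @ a_x, b_y @ a_y]])

a_coords = np.array([2.0, 1.0])   # [a1, a2]^T
b_coords = R @ a_coords           # [b1, b2]^T

# Both coordinate lists should rebuild the same geometric vector.
v_from_a = a_coords[0] * a_x + a_coords[1] * a_y
v_from_b = b_coords[0] * b_x + b_coords[1] * b_y
print(np.allclose(v_from_a, v_from_b))   # True
```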
What does this have to do with the Laplace transform? We need to generalize our understanding of vectors now. 3Blue1Brown does a great job of explaining this. Put concisely: a function can be thought of as a member of an infinite-dimensional vector space. It has one element for every value of its input, arranged in an ordered array.
$$f(x) = \begin{bmatrix} \vdots \\ f(0.05) \\ f(0.051) \\ f(0.052) \\ \vdots \end{bmatrix}$$
where of course there are also elements for all the inputs in between 0.050 and 0.051, etc. The indexing is uncountable, but the point is that functions are vectors because they satisfy the properties of vectors. Just consider that what we can do with functions is identical to what we do with typical vectors in finite-dimensional spaces: when you add two functions you add them "element-wise", scalar multiplication does what it should, and furthermore, we even have an inner product! (So these functions are not just members of a vector space, but of a Hilbert space.)
$$f(x)\cdot g(x) = \int_{-\infty}^\infty f(x)g(x) \, dx$$
What that integral means: for each $x$ (index), multiply $f(x)$ and $g(x)$, then sum those products over all the $x$. Look familiar?
$$\begin{bmatrix} a_1 \\ a_2 \\ \vdots \end{bmatrix} \cdot \begin{bmatrix} b_1 \\ b_2 \\ \vdots\end{bmatrix} = a_1b_1 + a_2b_2 + \cdots$$
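Here is a minimal numerical sketch of that correspondence; the two Gaussians, the grid resolution, and the $[-10, 10]$ cutoff (standing in for the infinite limits) are all illustrative choices.

```python
import numpy as np

# Sample two functions on a fine grid; the integral inner product is then
# approximately an ordinary dot product weighted by the grid spacing dx.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

f = np.exp(-x**2)          # a Gaussian
g = np.exp(-(x - 1)**2)    # a shifted Gaussian

# "For each index x, multiply f(x) and g(x), then sum down the index":
inner_as_dot = np.dot(f, g) * dx

# Closed form of the same integral, for comparison:
inner_exact = np.sqrt(np.pi / 2) * np.exp(-0.5)

print(inner_as_dot, inner_exact)   # the two agree to several digits
```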
If functions are vectors, then don't they need to be expressed on bases of other functions? Yes. How many basis functions do you need to span an infinite-dimensional space? Infinitely many. Is it hard to describe an infinite number of unique functions? No. One example:
$$f_n(x)=x^n\ \ \ \ \forall n \in \{0, 1, 2, \ldots\}$$
Though notice that we don't have both $x^n$ and $cx^n$, because they are linearly dependent; they span the same direction. We sometimes call linearly dependent functions "like terms" in the sense that we can combine them, $x^n + cx^n = (1+c)x^n$, whereas linearly independent functions cannot be combined: $x^n + x^{n+1}$ stays as it is.
If we took the inner product of one of these $f_i(x)$ with $f_j(x)$ (over some interval, say) we would certainly not get 0, so they don't have that nice orthogonal property where $\hat{b}_x\cdot\hat{b}_y=0$. The polynomials don't form an orthogonal basis. However, the complex exponentials $e^{i\omega x}$ (purely imaginary exponent) do.
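A quick numerical check of both claims (the intervals $[0,1]$ and $[0,2\pi]$ and the particular exponents are illustrative choices):

```python
import numpy as np

# Monomials are not orthogonal: <x, x^2> = integral of x * x^2 over [0, 1] = 1/4 != 0.
x = np.linspace(0.0, 1.0, 100_000, endpoint=False)
dx = x[1] - x[0]
print(np.sum(x * x**2) * dx)    # approx 0.25

# Complex exponentials with distinct integer frequencies ARE orthogonal over one period:
# integral of e^{i m t} * conj(e^{i n t}) over [0, 2*pi] is 0 whenever m != n.
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
dt = t[1] - t[0]
m, n = 2, 5
print(abs(np.sum(np.exp(1j * m * t) * np.conj(np.exp(1j * n * t))) * dt))   # approx 0
```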
Now let's look at that mysterious Laplace transform.
$$\mathscr{L}(f(x)) = \int_{-\infty}^\infty e^{-sx}f(x) \, dx$$
(written here in its two-sided form; the usual one-sided transform simply restricts the integral to $x \ge 0$, i.e. treats $f$ as zero for negative $x$).
Imagine all possible values of $e^{-sx}$ in a big matrix, where each row corresponds to plugging in a specific $s$ and each column corresponds to plugging in a specific $x$. (This matrix is orthogonal if $s=i\omega$ is purely imaginary, i.e. the Fourier transform.)
If you select some $s$, you are plucking out a specific value of the function that resulted from the multiplication of this matrix with the vector $f(x)$, a function we call $F(s):=\mathscr{L}(f(x))$. Specifically,
$$F(s=3) = f(x) \cdot e^{-3x}$$
(where that dot is an inner product, not ordinary multiplication). We say that $F(s)$ is just $f(x)$ expressed on a basis of exponential functions. Choosing a specific value $s=s_1$ picks out the component of $f(x)$ in the $e^{-s_1x}$ direction. The entire kernel $e^{-sx}$ can be viewed as the change-of-basis matrix.
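Here is a rough numerical sketch of that matrix picture, assuming the signal $f(x) = e^{-2x}$ for $x \ge 0$ (an illustrative choice whose transform has the known closed form $1/(s+2)$), with the infinite domain truncated to $[0, 50]$.

```python
import numpy as np

# Sample the signal on a grid (truncating the infinite domain).
x = np.linspace(0.0, 50.0, 500_001)
dx = x[1] - x[0]
f = np.exp(-2.0 * x)

s_values = np.array([1.0, 3.0, 10.0])   # a few rows of the "matrix" to evaluate

# Each row of this matrix is one basis function e^{-s x} sampled on the grid;
# multiplying it with the sampled f (weighted by dx) takes one inner product
# per row, i.e. evaluates F(s) at each chosen s.
kernel = np.exp(-np.outer(s_values, x))   # shape (3, 500001)
F_numeric = kernel @ f * dx

print(F_numeric)                 # approx [1/3, 1/5, 1/12]
print(1.0 / (s_values + 2.0))    # known closed form, for comparison
```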
Physical applications of thinking about things this way:
Differential equations. They model, like, everything. The exponential has a special relationship with the derivative. If we view the derivative as some operator $D$, we see that $D$ applied to an exponential just returns a scaled version of that same exponential.
$$D(e^{\lambda x}) = \lambda e^{\lambda x}$$
$$D(f) = \lambda f$$
$$Av = \lambda v$$
Looks like one of them eigen-somethin equations. The exponential is an eigenfunction of the derivative. Imagine we have an equation with just derivative operators and constants (throwing in anything else could potentially break the eigen relationship with the exponential). Things would be easier if we changed coordinates to the eigenbasis.
On the eigenbasis, the action of the original operators becomes simple scaling. We do this all the time to solve real-world problems on finite-dimensional vector spaces (change your coordinates to make the problem simpler); now we're just doing it for infinite-dimensional vector spaces. This is why the Laplace transform is useful for solving linear differential equations, among many other things.
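As a quick worked instance (a generic first-order equation chosen only for illustration, with initial-condition terms ignored for simplicity): transform $y'(x) + a\,y(x) = g(x)$ and use the fact that differentiation acts as multiplication by $s$ on this exponential basis,
$$sY(s) + aY(s) = G(s) \quad\Longrightarrow\quad Y(s) = \frac{G(s)}{s+a}$$
Calculus has turned into algebra in the new coordinates; transforming back gives $y(x)$.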
You should now also be able to explain Fourier transforms and Hankel transforms, and see why the Legendre polynomials are good stuff, all at face value.
Read up more on functional analysis if this interests you. You may also find linear systems theory and the concept of convolution cool and approachable now too. Good luck!
P.S. This wasn't meant to be a book on this topic, and I wasn't being nearly as rigorous as I should be on Math Stack Exchange, but I figured that since you wanted intuition, a less rigorous, more intuitive introduction to the functional analysis perspective was the right thing to provide.
A follow-up Q/A can be found here!