Category Archives: Partial Differential Equations

Heat Equation and Schrödinger Equation

There is an intriguing relationship between the Schrödinger equation for a free particle and the homogeneous heat equation.

The 1-dimensional Schrödinger equation for a free particle is
\begin{equation}
\label{eq:se}
i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}.
\end{equation}
Take the Wick rotation $t\mapsto \tau=it$. Then the Schrödinger equation \eqref{eq:se} turns into
\begin{equation}
\label{eq:hheq}
\frac{\partial\phi(x,\tau)}{\partial\tau}=\frac{\hbar}{2m}\frac{\partial^2\phi(x,\tau)}{\partial x^2},
\end{equation}
where $\phi(x,\tau)=\psi\left(x,\frac{\tau}{i}\right)$. \eqref{eq:hheq} is a homogeneous heat equation with diffusion coefficient $\alpha^2=\frac{\hbar}{2m}$. Conversely, apply the Wick rotation $t\mapsto\tau=-it$ to the 1-dimensional homogeneous heat equation
\begin{equation}
\label{eq:hhe2}
\frac{\partial u(x,t)}{\partial t}=\alpha^2\frac{\partial^2 u(x,t)}{\partial x^2}.
\end{equation}
Then the resulting equation is
\begin{equation}
\label{eq:se2}
i\hbar\frac{\partial w(x,\tau)}{\partial\tau}=-\alpha^2\hbar\frac{\partial^2 w(x,\tau)}{\partial x^2},
\end{equation}
where $w(x,\tau)=u\left(x,-\frac{\tau}{i}\right)$. \eqref{eq:se2} is a Schrödinger equation for a free particle with $m=\frac{1}{2\alpha^2}$. The solution of \eqref{eq:hhe2} with homogeneous boundary conditions takes the form
$$u(x,t)=\sum_{n=0}^\infty A_ne^{-\lambda_n^2\alpha^2 t}X_n(x).$$
Its Wick rotated solution is
\begin{align*}
w(x,\tau)&=\sum_{n=0}^\infty w_n(x,\tau)\\
&=\sum_{n=0}^\infty A_ne^{-i\lambda_n^2\alpha^2\tau}X_n(x).
\end{align*}
For each $n=0,1,2,\cdots$, $i\hbar\frac{\partial w_n(x,\tau)}{\partial\tau}=\lambda_n^2\alpha^2\hbar w_n(x,\tau)$. Thus $E_n=\lambda_n^2\alpha^2\hbar$ is the energy and $\omega_n=\lambda_n^2\alpha^2$ is the frequency of the wave $w_n(x,\tau)$.
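For a concrete illustration (a minimal example using the Dirichlet eigenfunctions $X_n(x)=\sin n\pi x$, $\lambda_n=n\pi$ on $0<x<1$, which are derived in the posts below), the Wick-rotated modes and their energies are
$$w_n(x,\tau)=A_ne^{-in^2\pi^2\alpha^2\tau}\sin n\pi x,\qquad E_n=n^2\pi^2\alpha^2\hbar=\frac{n^2\pi^2\hbar^2}{2m},\quad n=1,2,3,\cdots,$$
which recovers the familiar particle-in-a-box energy levels, since $\alpha^2=\frac{\hbar}{2m}$.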

The Semi-Homogeneous Heat Problem

In this lecture, we study how to solve the semi-homogeneous heat problem:
$$
\begin{aligned}
&u_t=\alpha^2 u_{xx}+F(x,t),\ 0<x<L,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
-k_1u_x(0,t)+h_1u(0,t)&=0,\ t>0\\
k_2u_x(L,t)+h_2u(L,t)&=0,
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=\phi(x),\ 0<x<L\ \ \ \ \ (1)
\end{aligned}
$$

We consider a solution of the form
$$u(x,t)=\sum_{n=0}^\infty T_n(t)X_n(x)$$
In addition, we assume that
$$F(x,t)=\sum_{n=0}^\infty F_n(t)X_n(x)$$
Let $A_n:=T_n(0)$. Then
$$\phi(x)=\sum_{n=0}^\infty A_nX_n(x)$$
For each $n=0,1,2,\cdots$, $X_n(x)$ is the solution of the Sturm-Liouville problem:
$$
\begin{aligned}
&X_n^{\prime\prime}=-\lambda_n^2X_n\\
&{\rm BCs:}\ \left\{\begin{aligned}
-k_1X_n'(0)+h_1X_n(0)&=0\\
k_2X_n'(L)+h_2X_n(L)&=0
\end{aligned}
\right.\ \ \ \ \ (2)
\end{aligned}
$$
and $T_n(t)$ satisfies the first order ODE with initial condition:
$$\left\{\begin{aligned}
\dot{T}_n+\lambda_n^2\alpha^2T_n=F_n\\
T_n(0)=A_n.
\end{aligned}
\right.\ \ \ \ \ (3)$$
Then $T_n(t)$ is given by
$$
T_n(t)=\left[A_n+\int_0^t F_n(s)e^{\lambda_n^2\alpha^2 s}ds\right]e^{-\lambda_n^2\alpha^2 t}.
$$
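The formula for $T_n(t)$ follows from (3) with the integrating factor $e^{\lambda_n^2\alpha^2 t}$:
\begin{align*}
\frac{d}{dt}\left(e^{\lambda_n^2\alpha^2 t}T_n\right)&=e^{\lambda_n^2\alpha^2 t}\left(\dot{T}_n+\lambda_n^2\alpha^2T_n\right)=e^{\lambda_n^2\alpha^2 t}F_n(t),
\end{align*}
so integrating from $0$ to $t$ gives $e^{\lambda_n^2\alpha^2 t}T_n(t)-A_n=\int_0^t F_n(s)e^{\lambda_n^2\alpha^2 s}ds$, and solving for $T_n(t)$ yields the formula above.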
Finally, $F_n(t)$ and $A_n$ are to be determined by
\begin{align*}
F_n(t)&=\frac{1}{L_n}\int_0^L F(x,t)X_n(x)dx\ \ \ \ \ (4)\\
A_n&=\frac{1}{L_n}\int_0^L\phi(x)X_n(x)dx\ \ \ \ \ (5)
\end{align*}
where $L_n=\int_0^L X_n(x)^2dx$.
As a summary we have the recipe for solving the semi-homogeneous heat IBVP (1):

  1. Find the eigenvalues $\lambda_n$ and the corresponding eigenfunctions $X_n$ by solving the Sturm-Liouville problem (2).
  2. Find the coefficients $F_n(t)$ and $A_n$ using the formulae (4) and (5) respectively.
  3. For each $n=0,1,2,\cdots$, find $T_n(t)$ by solving the initial value problem (3), i.e. using the formula for $T_n(t)$ above.
  4. Put all things together as $$u(x,t)=\sum_{n=0}^\infty T_n(t)X_n(x)$$

Example. Solve the semi-homogeneous heat IBVP:
\begin{align*}
&u_t=\alpha^2 u_{xx}+b\sin\pi x,\ 0<x<1,\ 0<t<\infty\\
&{\rm BCs:}\ \left\{\begin{aligned}
u(0,t)&=0,\ 0<t<\infty\\
u(1,t)&=0,
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=1,\ 0<x<1
\end{align*}

Solution:

Step 1. The eigenvalues are $\lambda_n=n\pi$ and the corresponding eigenfunctions are
$$X_n(x)=\sin n\pi x,\ n=1,2,3,\cdots.$$

Step 2. \begin{align*}
F_n(t)&=2b\int_0^1\sin\pi x\sin n\pi xdx\\
&=\left\{\begin{array}{ccc}
b & {\rm if} & n=1\\
0 & {\rm otherwise}.\end{array}\right.\\
A_n&=2\int_0^1\sin n\pi xdx\\
&=\frac{2[1-(-1)^n]}{n\pi}
\end{align*}
For $n=1,2,3,\cdots$, $A_{2n}=0$ and $A_{2n-1}=\frac{4}{(2n-1)\pi}$.
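The coefficient integrals in Step 2 can be checked in Maxima. Here is a minimal sketch; it declares $n$ an integer so that $\cos n\pi$ simplifies to $(-1)^n$, and treats $n=1$ and $n>1$ separately:

declare(n, integer)$   /* so that sin(n*%pi) simplifies to 0 and cos(n*%pi) to (-1)^n */
assume(n > 1)$         /* the case n = 1 is computed separately below */
2*b*integrate(sin(%pi*x)*sin(n*%pi*x), x, 0, 1);   /* F_n(t) for n > 1: should simplify to 0 */
2*b*integrate(sin(%pi*x)^2, x, 0, 1);              /* F_1(t): should give b */
2*integrate(sin(n*%pi*x), x, 0, 1);                /* A_n: should be equivalent to 2[1-(-1)^n]/(n*%pi) */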

Step 3. \begin{align*}
T_1(t)&=\left[\frac{4}{\pi}+\int_0^t be^{\alpha^2\pi^2s}ds\right]e^{-\alpha^2\pi^2t}\\
&=\frac{b}{\alpha^2\pi^2}+\left(\frac{4}{\pi}-\frac{b}{\alpha^2\pi^2}\right)e^{-\alpha^2\pi^2t}
\end{align*}
For $n=2,3,\cdots,$
$$T_{2n-1}(t)=\frac{4}{(2n-1)\pi}e^{-\alpha^2(2n-1)^2\pi^2t}$$
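The computation of $T_1(t)$ can also be checked in Maxima by solving the initial value problem (3) for $n=1$ directly; a minimal sketch (writing a for $\alpha$):

assume(a > 0)$
gsol: ode2('diff(T, t) + a^2*%pi^2*T = b, T, t)$   /* general solution of (3) for n = 1, where F_1 = b */
ic1(gsol, t = 0, T = 4/%pi);                       /* should reproduce T_1(t) above */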

Step 4. The solution is therefore
\begin{align*}
u(x,t)=\left[\frac{b}{\alpha^2\pi^2}+\left(\frac{4}{\pi}-\frac{b}{\alpha^2\pi^2}\right)e^{-\alpha^2\pi^2t}\right]\sin\pi x\\
+\sum_{n=2}^\infty \frac{4}{(2n-1)\pi}e^{-\alpha^2(2n-1)^2\pi^2t}\sin(2n-1)\pi x
\end{align*}

The eventual temperature is
$$\lim_{t\to\infty}u(x,t)=\frac{b}{\alpha^2\pi^2}\sin\pi x$$

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

Solving Heat Equation with Non-Homogeneous BCs 2: Time-Dependent BCs

Consider the heat IBVP:
\begin{align*}
&u_t=\alpha^2 u_{xx},\ 0<x<L,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
&u(0,t)=g_1(t),\ t>0\\
&u_x(L,t)+hu(L,t)=g_2(t),
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=\phi(x),\ 0<x<L.
\end{align*}
Again we seek a solution of the form:
$$u(x,t)=S(x,t)+U(x,t)$$
where $S(x,t)$ satisfies the non-homogeneous BCs:
\begin{align*}
S(0,t)&=g_1(t)\ \ \ \ \ (1)\\
S_x(L,t)+hS(L,t)&=g_2(t)
\end{align*}
Suppose that
$$S(x,t)=A(t)x+B(t)$$
From the BCs (1) we obtain
$$S(x,t)=\frac{g_2(t)-hg_1(t)}{1+hL}x+g_1(t)$$
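As a quick check, $A(t)$ and $B(t)$ can be obtained by solving the two boundary equations in (1) with Maxima; a minimal sketch, treating $g_1(t)$ and $g_2(t)$ at a fixed time as the symbols g1 and g2:

S: A*x + B$    /* the ansatz S(x,t) = A(t)x + B(t), at a fixed time t */
solve([subst(x = 0, S) = g1, subst(x = L, diff(S, x)) + h*subst(x = L, S) = g2], [A, B]);
/* should return A = (g2 - h*g1)/(h*L + 1) and B = g1 */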
$U(x,t)$ satisfies the semi-homogeneous heat IBVP:
\begin{align*}
&U_t=\alpha^2 U_{xx}-S_t,\ 0<x<L,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
&U(0,t)=0,\ t>0\\
&U_x(L,t)+hU(L,t)=0,
\end{aligned}
\right.\\
&{\rm IC:}\ U(x,0)=\phi(x)-S(x,0),\ 0<x<L.
\end{align*}
Since $u(x,t)\approx S(x,t)$ for large time, the eventual temperature is $\lim_{t\to\infty}S(x,t)$. The homogeneous BCs are similar to the ones we studied here, but the heat equation is now non-homogeneous. Such a problem (a non-homogeneous heat equation with homogeneous BCs) is called a semi-homogeneous IBVP. Unfortunately, we cannot solve the semi-homogeneous heat IBVP yet. We will discuss how to solve semi-homogeneous IBVPs later.

Example. Solve the heat IBVP:
\begin{align*}
&u_t=\alpha^2 u_{xx}-ae^{-t}x,\ 0<x<1,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
&u(0,t)=1,\ t>0\\
&u(1,t)=ae^{-t},
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=0,\ 0<x<1.
\end{align*}

Solution: Let $u(x,t)=S(x,t)+U(x,t)$,
where $S(x,t)=A(t)x+B(t)$ satisfies the BCs. Then
$S(x,t)=(ae^{-t}-1)x+1$. Since $S_{xx}=0$ and $S_t=-ae^{-t}x$ cancels the source term, $U(x,t)$ satisfies the homogeneous heat IBVP:
\begin{align*}
&U_t=\alpha^2 U_{xx},\ 0<x<1,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
&U(0,t)=0,\ t>0\\
&U(1,t)=0,
\end{aligned}
\right.\\
&{\rm IC:}\ U(x,0)=-(a-1)x-1,\ 0<x<1.
\end{align*}
We know how to solve this problem and the solution $U(x,t)$ is given by
$$U(x,t)=\sum_{n=1}^\infty A_n e^{-\alpha^2n^2\pi^2t}\sin n\pi x$$
where for each $n=1,2,\cdots$
\begin{align*}
A_n&=2\int_0^1[-(a-1)x-1]\sin n\pi xdx\\
&=2\frac{a(-1)^n-1}{n\pi}
\end{align*}
That is,
$$U(x,t)=2\sum_{n=1}^\infty\frac{a(-1)^n-1}{n\pi}e^{-\alpha^2n^2\pi^2t}\sin n\pi x$$
Therefore, the solution to the original heat problem is
$$u(x,t)=(ae^{-t}-1)x+1+2\sum_{n=1}^\infty\frac{[a(-1)^n-1]}{n\pi}e^{-\alpha^2 n^2\pi^2 t}\sin n\pi x.$$
The eventual temperature is
$$\lim_{t\to\infty}S(x,t)=-x+1$$

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

Solving Heat Equation with Non-Homogeneous BCs 1: Time-Independent BCs

Consider the following 1-dimensional heat IBVP:
\begin{align*}
&u_t=\alpha^2 u_{xx},\ 0<x<L,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
u(0,t)&=c_1,\ t>0\\
u(L,t)&=c_2,
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=\phi(x),\ 0<x<L.
\end{align*}
Since the BCs are not homogeneous, the method of separation of variables cannot be used. Let us assume that $u(x,t)$ can be decomposed to
$$u(x,t)=S(x,t)+U(x,t)$$
where $S(x,t)$ satisfies the non-homogeneous BCs i.e.
\begin{align*}
S(0,t)&=c_1\ \ \ \ \ (1)\\
S(L,t)&=c_2
\end{align*}
This implies that $U(x,t)$ satisfies the homogeneous BCs
$$U(0,t)=U(L,t)=0$$
We don't know the function $S(x,t)$, but we can make a guess. We first try the simplest possibility, a linear function in $x$:
$$S(x,t)=A(t)x+B(t)$$
If this works, it is our lucky day. If not, we try the next simplest thing, say a quadratic function in $x$. It turns out that our guess works! This means that we can determine the coefficients $A(t)$ and $B(t)$ from the BCs (1), and we find that
$$A(t)=\frac{c_2-c_1}{L},\ B(t)=c_1$$
Thus
$$S(x,t)=\frac{c_2-c_1}{L}x+c_1$$
$U(x,t)$ satisfies the homogeneous IBVP:
\begin{align*}
&U_t=\alpha^2 U_{xx},\ 0<x<L,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
U(0,t)&=0,\ t>0\\
U(L,t)&=0,
\end{aligned}
\right.\\
&{\rm IC:}\ U(x,0)=\phi(x)-S(x,0)=\phi(x)-\frac{c_2-c_1}{L}x-c_1,\ 0<x<L.
\end{align*}
We already know how to solve the homogeneous IBVP (see here) and
$$U(x,t)=\sum_{n=1}^\infty A_ne^{-\alpha^2\frac{n^2\pi^2}{L^2}t}\sin\frac{n\pi}{L}x$$
where
$$A_n=\frac{2}{L}\int_0^L[\phi(x)-\frac{c_2-c_1}{L}x-c_1]\sin\frac{n\pi}{L}xdx,\ n=1,2,\cdots$$

Note that $\lim_{t\to\infty}U(x,t)=0$ and so $\lim_{t\to\infty}u(x,t)=S(x)$ (since the BCs are time-independent, $S$ does not depend on $t$ here, so we write $S(x)$). $U(x,t)$ is called the transient and $S(x)$ is called the steady state. The steady state is the eventual temperature distribution for large time.

Example. Solve the IBVP:
\begin{align*}
&u_t=\alpha^2 u_{xx},\ 0<x<1,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
u(0,t)&=c_1,\ t>0\\
u(1,t)&=c_2,
\end{aligned}
\right.\\
&{\rm IC:}\ u(x,0)=0,\ 0<x<1.
\end{align*}

Solution: First we solve
\begin{align*}
&U_t=\alpha^2 U_{xx},\ 0<x<1,\ t>0\\
&{\rm BCs:}\ \left\{\begin{aligned}
U(0,t)&=0,\ t>0\\
U(1,t)&=0,
\end{aligned}
\right.\\
&{\rm IC:}\ U(x,0)=-c_1-(c_2-c_1)x,\ 0<x<1.
\end{align*}
The transient $U(x,t)$ is given by
$$U(x,t)=\sum_{n=1}^\infty A_ne^{-\alpha^2n^2\pi^2 t}\sin n\pi x$$
where
\begin{align*}
A_n&=-2\int_0^1[c_1+(c_2-c_1)x]\sin n\pi xdx\\
&=\frac{2}{n\pi}[c_2(-1)^n-c_1].
\end{align*}
Hence the solution is
$$u(x,t)=(c_2-c_1)x+c_1+2\sum_{n=1}^\infty\frac{[c_2(-1)^n-c_1]}{n\pi}e^{-\alpha^2n^2\pi^2 t}\sin n\pi x$$

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

1-Dimensional Heat Initial Boundary Value Problems 3: An Example of Heat IBVP with Mixed Boundary Conditions (Insulated and Specified Flux)

Let us consider the heat IBVP:
\begin{align*}
u_t=\alpha^2u_{xx},\ 0<x<1,\ t>0\\
u_x(0,t)=0,\ t>0\\
u_x(1,t)+u(1,t)=0,\ t>0\\
u(x,0)=1,\ 0<x<1.
\end{align*}
First we solve the Sturm-Liouville problem:
\begin{align*}
X^{\prime\prime}=kX\ \mbox{with BCs}\\
\left\{
\begin{aligned}
&X'(0)=0\\
&X'(1)+X(1)=0.
\end{aligned}
\right.
\end{align*}
The two cases $k=0$ and $k=\lambda^2>0$ are left for students to check; both lead only to the trivial solution $X(x)=0$. Let $k=-\lambda^2$. Then
$$X(x)=A\cos\lambda x+B\sin \lambda x$$
and
$$X'(x)=-A\lambda\sin\lambda x+B\lambda\cos\lambda x.$$
We obtain $B=0$ from the condition $X'(0)=0$. So $X'(x)=-A\lambda\sin\lambda x$. The BC $X'(1)+X(1)=0$ then implies that the eigenvalues $\lambda$ satisfy
$$\lambda=\cot\lambda\ \ \ \ \ (1)$$


The roots of (1) can be found by Newton's method. In Maxima, Newton's method can be performed as follows.

First we define $\cot(x)-x$ as a function $f(x)$:

(%i1) f(x):=cot(x)-x;

Newton’s method applied to f(x) is performed by

(%i2) newton(t):=subst(t,x,x-f(x)/diff(f(x),x));

To use Newton's method we need an initial approximation. We can make a guess from the graph of $f(x)$: the first zero is near 0.5, so we choose $x_0=0.5$.

If you just run

(%i3) newton(0.5);

Maxima will return the output

(%o3)                              0.74865744241706

In order to calculate the next approximation $x_1$, simply run

(%i4) newton(%);

(%o4)                              0.85239848545338

For the next approximation $x_2$, again run

(%i5) newton(%);

(%o5)                              0.86029880244504

By running the same command, the next approximation $x_3$ is obtained as $x_3=0.86033358835819$. This value is accurate to eight decimal places. If you think this is good enough, you can stop.
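Alternatively, Maxima's built-in find_root command can be used instead of iterating Newton's method by hand; a minimal sketch (redefining f as before, with bracketing intervals chosen by inspecting the graph of $\cot x-x$):

f(x) := cot(x) - x$
find_root(f(x), x, 0.5, 1.5);              /* first root lambda_1, approximately 0.8603336 */
find_root(f(x), x, %pi + 0.01, 3*%pi/2);   /* second root lambda_2, approximately 3.425618 */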

Similarly we obtain $\lambda_n$ for $n=2,3,4,\cdots$:
$$\begin{array}{|c||c|c|}
\hline
n & \lambda_n & \frac{(2n-1)\pi}{2}\\
\hline
1 & 0.8603336 & 1.570796327\\
\hline
2 & 3.425618 & 4.712388981\\
\hline
3 & 6.437298 & 7.853981635\\
\hline
4 & 9.529334 & 10.99557429\\
\hline
5 & 12.645287 & 14.13716694\\
\hline
\cdots & \cdots & \cdots\\
\hline
\end{array}$$
The table shows that $\lambda_n$ and $\frac{(2n-1)\pi}{2}$ are of comparable size (their ratio tends to $1$ as $n$ increases). So for rough practical purposes one may use the approximation
$$\lambda_n\approx \frac{(2n-1)\pi}{2},\ n=1,2,3,\cdots$$
The temperature distribution $u(x,t)$ is then given by
$$u(x,t)=\sum_{n=1}^\infty A_ne^{-\alpha^2\lambda_n^2t}\cos\lambda_n x$$
The $L_n$’s and $A_n$’s are computed to be:
\begin{align*}
L_n&=\int_0^1\cos^2\lambda_nxdx\\
&=\frac{1}{2}\int_0^1[\cos(2\lambda_nx)+1]dx\\
&=\frac{1}{2}\left[\frac{\sin 2\lambda_n}{2\lambda_n}+1\right]\\
&=\frac{1}{2}\left[\frac{\sin\lambda_n\cos\lambda_n}{\lambda_n}+1\right]\ (\sin2\lambda_n=2\sin\lambda_n\cos\lambda_n)\\
&=\frac{1}{2}\left[\frac{\sin\lambda_n\cos\lambda_n+\lambda_n}{\lambda_n}\right]
\end{align*}
and
\begin{align*}
A_n&=\frac{1}{L_n}\int_0^1\phi(x)\cos\lambda_nxdx\\
&=\frac{2\lambda_n}{\sin\lambda_n\cos\lambda_n+\lambda_n}\int_0^1\cos\lambda_nxdx\\
&=\frac{2\sin\lambda_n}{\sin\lambda_n\cos\lambda_n+\lambda_n}.
\end{align*}

This calculation uses the fact that $\cos\lambda_n x$, $n=1,2,\cdots$, are orthogonal functions on $[0,1]$. Their orthogonality is guaranteed by Sturm-Liouville theory, since the $\lambda_n$ are eigenvalues of the Sturm-Liouville problem above.

Finally we have
$$u(x,t)=\sum_{n=1}^\infty\frac{2\sin\lambda_n}{\sin\lambda_n\cos\lambda_n+\lambda_n}e^{-\alpha^2\lambda_n^2 t}\cos\lambda_n x.$$
For an approximation one may take
$$u(x,t)\approx\sum_{n=1}^\infty\frac{4(-1)^{n-1}}{(2n-1)\pi}e^{-\alpha^2\frac{(2n-1)^2\pi^2}{4}t}\cos\frac{(2n-1)\pi x}{2}\ \ \ \ \ (2)$$

A 3D plot of $u(x,t)$ in (2) can be generated as follows.
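For example, the following Maxima sketch plots the partial sum of (2) with the first 10 terms, taking $\alpha=1$ for illustration (the name u10, the 10-term cutoff, and the value of $\alpha$ are choices made here, not part of the original problem):

u10(x, t) := sum(4*(-1)^(n-1)/((2*n-1)*%pi)*exp(-(2*n-1)^2*%pi^2*t/4)*cos((2*n-1)*%pi*x/2), n, 1, 10)$
plot3d(u10(x, t), [x, 0, 1], [t, 0, 1]);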



References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

1-Dimensional Heat Initial Boundary Value Problems 2: Sturm-Liouville Problems and Orthogonal Functions

Sturm-Liouville Problems

The homogeneous boundary conditions of 1D heat conduction problem are given by
\begin{align*}
-\kappa_1u_x(0,t)+h_1u(0,t)&=0,\ t>0\\
\kappa_2u_x(L,t)+h_2u(L,t)&=0,\ t>0
\end{align*}
(See here)

The homogeneous BCs for the second order linear differential equation \begin{equation}\label{eq:ho}X^{\prime\prime}=kX\end{equation} are then
\begin{equation}\label{eq:bc}\begin{aligned}
-\kappa_1X'(0)+h_1X(0)&=0\\
\kappa_2X'(L)+h_2X(L)&=0
\end{aligned}\end{equation}
Finding solutions of the second order linear differential equation \eqref{eq:ho} for $k=0$, $k=\lambda^2$, and $k=-\lambda^2$ that satisfy the BCs \eqref{eq:bc} is called a Sturm-Liouville Problem. Here, we study the Sturm-Liouville Theory with the following example.

Remark. In case of homogeneous heat BVPs, the eventual temperature would be 0 as there is no heat source. So, we see that $k=-\lambda^2<0$ is the only physically relevant case.

Example. [Fixed temperature at both ends]

Consider the heat BVP:
\begin{align*}
u_t&=\alpha^2 u_{xx},\ 0<x<1\ \mbox{(PDE)}\\
u(0,t)&=u(1,t)=0\ \mbox{(BCs)}
\end{align*}
From the above BCs, we obtain the BCs for $X(x)$:
$$X(0)=X(1)=0$$
For $k=0$ and $k=\lambda^2>0$ we obtain only the trivial solution $X(x)=0$. For $k=-\lambda^2<0$, $X(x)=A\cos\lambda x+B\sin\lambda x$. The BC $X(0)=0$ gives $A=0$, and then $X(1)=B\sin\lambda=0$ requires $\sin\lambda=0$, so we find the eigenvalues
$$\lambda_n=n\pi,\ n=1,2,3,\cdots$$
and the corresponding eigenfunctions
$$X_n(x)=\sin n\pi x,\ n=1,2,3,\cdots$$
The set $\{X_n: n=1,2,3,\cdots\}$ is linearly independent, so it forms a basis for the solution space, which is infinite dimensional. The general solution to the heat BVP is given by
$$u(x,t)=\sum_{n=1}^\infty A_n e^{-n^2\pi^2\alpha^2t}\sin n\pi x$$
The undetermined coefficients $A_n$ are called Fourier coefficients. They are determined by the initial condition (the initial temperature).

Orthogonal Functions and Solution of a Homogeneous Heat IBVP

Consider a heat distribution function $u(x,t)$ of the following form
$$u(x,t)=\sum_{n=0}^\infty A_ne^{-\lambda_n^2\alpha^2t}X_n(x)$$
where the $X_n$'s are eigenfunctions corresponding to the eigenvalues $\lambda_n$'s respectively. The eigenfunctions $X_n$ form a basis for the solution space (which is often infinite dimensional) of a given heat IBVP; furthermore, they form an orthogonal basis with respect to the inner product
\begin{equation}\label{eq:innerprod}\langle X_m,X_n\rangle=\int_0^LX_mX_ndx\end{equation}
We say that eigenfunctions $X_m$ and $X_n$ are orthogonal if $\langle X_m,X_n\rangle=0$.

Example. $X_n(x)=\sin n\pi x$, $n=1,2,3,\cdots$ form an orthogonal basis with respect to \eqref{eq:innerprod}, where $0<x<1$:
\begin{align*}
\langle X_m,X_n\rangle&=\int_0^1\sin m\pi x\sin n\pi xdx\\
&=\left\{\begin{aligned}
\frac{1}{2}\ &{\rm if}\ m=n\\
0\ &{\rm if}\ m\ne n.
\end{aligned}\right.
\end{align*}

Remark. [The Gram-Schmidt Orthogonalization Process]
If $\{X_n\}$ is not an orthogonal basis, one can construct an orthogonal basis from $\{X_n\}$ using the inner product \eqref{eq:innerprod}. The standard process is called the Gram-Schmidt orthogonalization process. Details can be found in many standard linear algebra textbooks.

Now we assume that $\{X_n\}$ is an orthogonal basis for the solution space. Let $L_n:=\langle X_n,X_n\rangle=\int_0^LX_n^2dx$. Let the initial condition be given by
$u(x,0)=\phi(x)$. Then
$$\phi(x)=\sum_{n=0}^\infty A_nX_n$$
Multiply this by $X_m$ and then integrate:
$$\int_0^LX_m\phi(x)dx=\sum_{n=0}^\infty A_n\int_0^LX_nX_mdx$$
By orthogonality we obtain
$$L_mA_m=\int_0^LX_m\phi(x)dx$$
or
$$A_m=\frac{1}{L_m}\int_0^L\phi(x)X_mdx,\ m=0,1,2,\cdots$$

Example. Consider the heat BVP in the previous example with initial condition $\phi(x)=T$, a constant temperature. For $n=1,2,3,\cdots$, $X_n(x)=\sin n\pi x;\ 0<x<1$ so
$$L_n=\int_0^1\sin^2 n\pi x dx=\frac{1}{2}$$
The Fourier coefficients are then computed to be
\begin{align*}
A_n&=2\int_0^1\phi(x)\sin n\pi xdx\\
&=2T\int_0^1\sin n\pi xdx\\
&=\frac{2T}{n\pi}[1-\cos n\pi]\\
&=\frac{2T}{n\pi}[1-(-1)^n].
\end{align*}
$A_n=0$ for even $n$ and $A_{2n-1}=\frac{4T}{(2n-1)\pi}$, $n=1,2,3,\cdots$. Hence
$$u(x,t)=\sum_{n=1}^\infty\frac{4T}{(2n-1)\pi}e^{-(2n-1)^2\pi^2\alpha^2t}\sin(2n-1)\pi x.$$

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

1-Dimensional Heat Initial Boundary Value Problems 1: Separation of Variables

Let us consider the following assumptions for a heat conduction problem.

  1. The region $\Omega$ is a cylinder of length $L$ centered on the $x$-axis.
  2. The lateral surface is insulated.
  3. The left end ($x=0$) and the right end ($x=L$) have boundary conditions that do not depend on the $y$ and $z$ coordinates.
  4. The initial temperature distribution $\phi$ does not depend on $y$ and $z$ i.e. $\phi=\phi(x)$.

Under these assumptions we want to find temperature distribution $u(\mathbf{r},t)=u(x,y,z,t)$ which satisfies the heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\nabla^2 u+F(\mathbf{r},t)$$
Here $\alpha^2$ is a constant called the diffusivity of the material and $F$ is a scalar function called the heat source density. Under the above assumptions the heat equation becomes the 1-dimensional heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\frac{\partial^2 u}{\partial x^2}+F(x,t)$$
The most general form of the boundary conditions is given by Newton's law of cooling
$$-\kappa\nabla u\cdot\mathbf{n}=h(u-g)\ \mbox{on}\ \partial\Omega,\ t>0$$
where $\kappa\geq 0, h\geq 0$, $\mathbf{n}$ is unit normal to $\partial\Omega$ and $g$ is the temperature on $\partial\Omega$, $t>0$.

In our 1-dimensional case, the temperature gradient $\nabla u$ is given by $\nabla u=\left(\frac{\partial u}{\partial x},0,0\right)$. So on the lateral surface $\nabla u\cdot\mathbf{n} =0$, i.e. the assumption that the lateral surface is insulated is automatically satisfied. On the left end, $\mathbf{n}=(-1,0,0)$ so
$$-\kappa\nabla u\cdot\mathbf{n}=\kappa\frac{\partial u}{\partial x}.$$
On the right end, $\mathbf{n}=(1,0,0)$ so
$$-\kappa\nabla u\cdot\mathbf{n}=-\kappa\frac{\partial u}{\partial x}.$$
Hence for our 1-dimensional heat problem the general boundary conditions (BCs) are given by
\begin{align*}
-\kappa_1u_x(0,t)+h_1u(0,t)&=g_1(t),\ t>0\\
\kappa_2u_x(L,t)+h_2u(L,t)&=g_2(t),\ t>0
\end{align*}
The BCs are the main ingredients that uniquely determine a specific physical heat conduction phenomenon along with initial condition (IC)
$$u(x,0)=\phi(x),\ 0<x<L$$

The heat equation is said to be homogeneous if $F(x,t)=0$. The BCs are said to be homogeneous if $g_1(t)=g_2(t)=0$.

Separation of Variables Method: The separation of variables method is one of the oldest methods of solving partial differential equations. It reduces a partial differential equation to a number of ordinary differential equations.

Consider the homogeneous 1-dimensional heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\frac{\partial^2 u}{\partial x^2}$$
Assume that $u(x,t)=X(x)T(t)$.
Then
$$X(x)\dot{T}(t)=\alpha^2X^{\prime\prime}(x)T(t)$$
where ${}'=\frac{\partial}{\partial x}$ and $\dot{\ }=\frac{\partial}{\partial t}$.
Divide this by $\alpha^2X(x)T(t)$. Then
$$\frac{\dot{T}}{\alpha^2T}=\frac{X^{\prime\prime}}{X}$$
The LHS depends only on the time variable $t$ while the RHS depends only on the space variable $x$. This is possible only if both sides are equal to the same constant, say $k$. Thereby the 1D heat equation reduces to the ordinary differential equations
\begin{align*}
X^{\prime\prime}&=kX,\\
\dot{T}&=\alpha^2kT
\end{align*}
The second order linear equation has the following solutions depending on the sign of $k$:

  • If $k=0$, $X(x)=Ax+B$.
  • If $k=\lambda^2>0$, $X(x)=Ae^{\lambda x}+Be^{-\lambda x}$ or  $X(x)=A\cosh\lambda x+B\sinh\lambda x$.
  • If $k=-\lambda^2<0$, $X(x)=A\cos\lambda x+B\sin\lambda x$.

The first order linear equation has solution
$$T(t)=Ce^{k\alpha^2 t}$$
Here one may assume that $C=1$ without loss of generality. Therefore we have three possible cases of $u(x,t)$:

  • $u(x,t)=Ax+B$
  • $u(x,t)=e^{\lambda^2\alpha^2 t}(Ae^{\lambda x}+Be^{-\lambda x})$ or $u(x,t)=e^{\lambda^2\alpha^2 t}(A\cosh\lambda x+B\sinh\lambda x)$
  • $u(x,t)=e^{-\lambda^2\alpha^2 t}(A\cos\lambda x+B\sin\lambda x)$
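The separated ODEs above can be checked quickly in Maxima; a minimal sketch for the oscillatory case $k=-\lambda^2<0$ (writing lam for $\lambda$ and a for $\alpha$):

assume(lam > 0, a > 0)$
ode2('diff(X, x, 2) + lam^2*X = 0, X, x);   /* should give X = %k1*sin(lam*x) + %k2*cos(lam*x) */
ode2('diff(T, t) = -lam^2*a^2*T, T, t);     /* should give T = %c*%e^(-lam^2*a^2*t) */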

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

Helmholtz Equation

Helmholtz equation
$$\nabla^2\psi+k^2\psi=0\ \ \ \ \ \mbox{(1)}$$
is extremely important in physics. Solving many physically important partial differential equations, such as the heat equation, the wave equation (Klein-Gordon equation), Maxwell's equations, and the Schrödinger equation, often requires solving the Helmholtz equation (1).

In these notes, we discuss how to solve the Helmholtz equation using separation of variables in rectangular, cylindrical, and spherical coordinate systems. The solutions we discuss here will be used when you solve boundary value problems associated with the Helmholtz equation.

Helmholtz Equation in Rectangular Coordinates

Assume that $\psi(x,y,z)=X(x)Y(y)Z(z)$. Then the equation (1) becomes
$$YZ\frac{d^2X}{dx^2}+XZ\frac{d^2Y}{dy^2}+XY\frac{d^2Z}{dz^2}+k^2XYZ=0.\ \ \ \ \ \mbox{(2)}$$
Dividing (2) by $XYZ$, we obtain
$$\frac{1}{X}\frac{d^2X}{dx^2}+\frac{1}{Y}\frac{d^2Y}{dy^2}+\frac{1}{Z}\frac{d^2Z}{dz^2}+k^2=0.\ \ \ \ \mbox{(3)}$$
Let us write (3) as
$$\frac{1}{X}\frac{d^2X}{dx^2}=-\frac{1}{Y}\frac{d^2Y}{dy^2}-\frac{1}{Z}\frac{d^2Z}{dz^2}-k^2.\ \ \ \ \ \mbox{(4)}$$
Now we have a paradox. The LHS of (4) depends only on the $x$-variable while the RHS of (4) depends on the $y$- and $z$-variables. The only way to avoid this paradox is for both the LHS and the RHS of (4) to be equal to a constant, say $-l^2$. If you are wondering why we choose a negative constant, the reason comes from physics: we need an oscillating solution, which is obtained by choosing a negative separation constant. Often boundary conditions for the Helmholtz equation lead to a trivial solution for a positive separation constant. Continuing in the same way, we separate the Helmholtz equation into three ordinary differential equations:
\begin{align*}
\frac{1}{X}\frac{d^2 X}{dx^2}&=-l^2,\\
\frac{1}{Y}\frac{d^2Y}{dy^2}&=-m^2,\\
\frac{1}{Z}\frac{d^2Z}{dz^2}&=-n^2,
\end{align*}
where $k^2=l^2+m^2+n^2$.

Each mode is given by
$$\psi_{lmn}(x,y,z)=X_l(x)Y_m(y)Z_n(z)$$ and the most general solution is given by the linear combination of the modes
$$\psi(x,y,z)=\sum_{l,m,n}a_{lmn}\psi_{lmn}(x,y,z).$$
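As a concrete illustration (this example is not in the original notes: Dirichlet boundary conditions $\psi=0$ on the faces of a box $[0,a]\times[0,b]\times[0,c]$ are assumed), the separation constants become $\frac{l\pi}{a}$, $\frac{m\pi}{b}$, $\frac{n\pi}{c}$ with $l,m,n$ positive integers, so the modes and the allowed values of $k^2$ are
$$\psi_{lmn}(x,y,z)=\sin\frac{l\pi x}{a}\sin\frac{m\pi y}{b}\sin\frac{n\pi z}{c},\qquad k^2=\pi^2\left(\frac{l^2}{a^2}+\frac{m^2}{b^2}+\frac{n^2}{c^2}\right).$$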

Helmholtz Equation in Cylindrical Coordinates

In cylindrical coordinate system $(\rho,\varphi,z)$, Helmholtz equation (1) is written as
$$\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial\psi}{\partial\rho}\right)+\frac{1}{\rho^2}\frac{\partial^2\psi}{\partial\varphi^2}+\frac{\partial^2\psi}{\partial z^2}+k^2\psi=0.\ \ \ \ \ \mbox{(5)}$$

We assume that $\psi(\rho,\varphi,z)=P(\rho)\Phi(\varphi)Z(z)$. Then (5) can be written as
$$\frac{\Phi Z}{\rho}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right)+\frac{PZ}{\rho^2}\frac{d^2\Phi}{d\varphi^2}+P\Phi\frac{d^2Z}{dz^2}+k^2P\Phi Z=0.\ \ \ \ \ \mbox{(6)}$$
As we have done in rectangular coordinate system, by introducing the separation constants we can separate (6) into three ordinary differential equations
\begin{align*}
\frac{d^2Z}{dz^2}=l^2Z,\\
\frac{d^2\Phi}{d\varphi^2}=-m^2\Phi,\\
\rho\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right)+(n^2\rho^2-m^2)P=0,\ \ \ \ \ \mbox{(7)}
\end{align*}
where $n^2=k^2+l^2$. The last equation (7) is Bessel’s differential equation.

The general solution of Helmholtz equation in cylindrical coordinates is given by
$$\psi(\rho,\varphi,z)=\sum_{m,n}a_{mn}P_{mn}(\rho)\Phi_m(\varphi)Z_n(z).$$

Helmholtz Equation in Spherical Coordinates

In spherical coordinates $(r,\theta,\varphi)$, Helmholtz equation (1) is written as
$$\frac{1}{r^2\sin\theta}\left[\sin\theta\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{\sin\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right]=-k^2\psi.\ \ \ \ \ \mbox{(8)}$$
Assume that $\psi(r,\theta,\varphi)=R(r)\Theta(\theta)\Phi(\varphi)$. Then (8) can be written as
$$\frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\frac{1}{\Phi r^2\sin^2\theta}\frac{d^2\Phi}{d\varphi^2}=-k^2.\ \ \ \ \ \mbox{(9)}$$
By introducing separation constants, (9) is separated into three ordinary differential equations
\begin{align*}
\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}=-m^2,\\
\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\left(Q-\frac{m^2}{\sin^2\theta}\right)\Theta=0,\ \ \ \ \ \mbox{(10)}\\
\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\left(k^2-\frac{Q}{r^2}\right)R=0.\ \ \ \ \ \mbox{(11)}
\end{align*}
The second equation (10) is the associated Legendre equation with $Q=l(l+1)$. The third equation (11) is the spherical Bessel equation with $k^2>0$.

The general solution of Helmholtz equation (8) is then given by
$$\psi(r,\theta,\varphi)=\sum_{Q,m}R_Q(r)\Theta_{Qm}(\theta)\Phi_m(\varphi).$$

The restriction that $k^2$ be a constant is unnecessary. For instance, the separation process is still possible for $k^2=f(r)$. For an appropriate $f(r)$, (11) becomes the associated Laguerre equation, which appears in the hydrogen atom problem in quantum mechanics.

Modeling a Vibrating Drumhead III

In the previous discussion, we finally obtained the solution of the vibrating drumhead problem:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$
In this lecture, we determine the Fourier coefficients $A_{nm}$ and $B_{nm}$ using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. Before we go on, we need to mention two types of orthogonalities: the orthogonality of cosine functions and the orthogonality of Bessel functions. First note that
$$\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\left\{\begin{array}{ccc}0 & \mbox{if} & n\ne k,\\\pi & \mbox{if} & n=k\geq 1,\\2\pi & \mbox{if} & n=k=0.\end{array}\right.$$
The reason this property is called an orthogonality is that if $V$ is the set of all (Riemann) integrable real-valued functions on the interval $[a,b]$, then $V$ forms a vector space over $\mathbb R$. This vector space is indeed an inner product space with the inner product $$\langle f,g\rangle=\int_a^bf(x)g(x)dx\ \mbox{for}\ f,g\in V.$$
Bessel functions are orthogonal as well in the following sense:
$$\int_0^1J_n(\lambda_{nm}r)J_n(\lambda_{nl}r)rdr=\left\{\begin{array}{ccc}0 & \mbox{if} & m\ne l,\\\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2 & \mbox{if} & m=l.\end{array}\right.$$

From the solution $u(r,\theta,t)$, we obtain the initial position of the drumhead:
$$u(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}.$$
On the other hand, $u(r,\theta,0)=f(r,\theta)$. Multiply
$$\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}=f(r,\theta)$$
by $\cos(k\theta)$ and integrate with respect to $\theta$ from $0$ to $2\pi$:
$$\sum_n\sum_mJ_n(\lambda_{nm}r)A_{nm}\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ The only nonvanishing term of the above series is when $n=k$, so we obtain
$$\pi\sum_mJ_k(\lambda_{km}r)A_{km}=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ Multiply this equation by $J_k(\lambda_{kl}r)$ and integrate with respect to $r$ from $0$ to $1$:
$$\pi\sum_mA_{km}\int_0^1J_k(\lambda_{km}r)J_k(\lambda_{kl}r)rdr=\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta.$$ The only nonvanishing term of this series is when $m=l$. As a result we obtain:
$$A_{kl}=\frac{1}{\pi L_{kl}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta$$
or
$$A_{nm}=\frac{1}{\pi L_{nm}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots$$
where
$$L_{nm}=\int_0^1J_n(\lambda_{nm}r)^2rdr=\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2, n=0,1,2,\cdots.$$
For $n=0$ we obtain
$$A_{0m}=\frac{1}{2\pi L_{0m}}\int_0^{2\pi}\int_0^1f(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.$$
Using
$$u_t(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)B_{nm}\lambda_{nm}c=g(r,\theta),$$
we obtain
\begin{align*}
B_{nm}&=\frac{1}{c\pi L_{nm}\lambda_{nm}}\int_0^{2\pi}\int_0^1g(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots,\\
B_{0m}&=\frac{1}{2c\pi L_{0m}\lambda_{0m}}\int_0^{2\pi}\int_0^1g(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.
\end{align*}
Unfortunately, at this moment I do not know if I can make an animation of the solution using an open source math software package such as Maxima or Sage. I will let you know if I find a way. In the meantime, if any of you have access to Maple, you can download a Maple worksheet I made here and run it for yourself. In the particular example in the Maple worksheet, I used $f(r,\theta)=J_0(2.4r)+0.10J_0(5.52r)$ and $g(r,\theta)=0$. For an animation of the solution, click here.
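In that particular example, since $2.4\approx\lambda_{01}$, $5.52\approx\lambda_{02}$, and $g=0$, the orthogonality relations above give (approximately) $A_{01}\approx 1$, $A_{02}\approx 0.1$, all other $A_{nm}\approx 0$, and all $B_{nm}=0$, so the solution is essentially a superposition of the first two radially symmetric modes:
$$u(r,\theta,t)\approx J_0(\lambda_{01}r)\cos(\lambda_{01}ct)+0.1J_0(\lambda_{02}r)\cos(\lambda_{02}ct).$$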

Modeling a Vibrating Drumhead II

When you solve an initial boundary value problem, the boundary condition is used to find the eigenvalues, while the initial conditions are used to determine the Fourier coefficients of the series solution. In the previous discussion, we found the solution $R(r)=AJ_n(\lambda r)$ of Bessel's equation. The boundary condition $R(1)=0$ results in the equation $$J_n(\lambda)=0.$$ This is an eigenvalue equation and its solutions are called Bessel zeros. We can easily find Bessel zeros using Maple. First open Maple and type

Bess:=(n,m)->evalf(BesselJZeros(n,m));

and press enter. This defines a function Bess that returns the $m$-th zero $\lambda_{nm}$ of $J_n(\lambda)$ for a given positive integer $m$, as shown in the screenshot.


To calculate, for example, 1st zero of $J_0(\lambda)$ i.e. $\lambda_{01}$ simply type

Bess(0,1);

and press enter. It will return the value $2.40482558$ as shown in the screenshot.

In Maxima, unfortunately such a command (as BesselJZeros in Maple) does not appear to be available. But we can still find Bessel zeros using an elementary technique from calculus, Newton's Method. If the initial approximation $x_0$ to a zero of a differentiable function $f(x)$ is given, then the $(n+1)$-th approximation is given by $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}.$$ In Maxima, Newton's Method can be performed as follows. As an example we calculate the 1st Bessel zero of $J_0(\lambda)$.

First we define $J_0(x)$ as a function $f(x)$:

(%i1) f(x):=bessel_j(0,x);

Newton’s method applied to $f(x)$ is performed by

(%i2) newton(t):=subst(t,x,x-f(x)/diff(f(x),x));

To use Newton's method we need an initial approximation. We can make a guess from the graph of $J_0(x)$: the first zero is near 2, so we choose $x_0=2$. If you just run

(%i3) newton(2);

the output will not be a numerical value. In order to have a numerical value as output, run

(%i4) ev(newton(2),numer);

The output will be

(%o4)                          2.388210765567796

In order to calculate the next approximation $x_1$, simply run

(%i5) ev(newton(%),numer);

and its output is

(%o5)                          2.404769548213657

For the next approximation $x_2$, again run

(%i6) ev(newton(%),numer);

and its output is

(%o6)                          2.404825557043583

This value is accurate to nine decimal places. If you think this is good enough, you can stop. The whole process is seen in the following screenshot.


Let us denote the eigenvalues by $\lambda_{nm}$. Then the radial eigenfunctions are given by $$R_{nm}(r)=J_n(\lambda_{nm}r).$$
The solution $U(r,\theta)$ to the Helmholtz boundary value problem is
$$U_{nm}(r,\theta)=J_n(\lambda_{nm}r)[A\cos(n\theta)+B\sin(n\theta)].$$
Using a simple trigonometric identity, one can write
$$A\cos(n\theta)+B\sin(n\theta)=\sqrt{A^2+B^2}\cos(n(\theta-\psi))$$ for some $\psi$. Thus $U_{nm}(r,\theta)$ can be written as
$$U_{nm}(r,\theta)=AJ_n(\lambda_{nm}r)\cos(n\theta).$$
The oscillatory factors satisfy the equation $$\ddot{T}_{nm}+\lambda_{nm}^2c^2T_{nm}=0$$ and they are
$$T_{nm}(t)=C\cos(\lambda_{nm}ct)+D\sin(\lambda_{nm}ct).$$
Now we are ready to put pieces together to write down the solution $u(r,\theta,t)$ as the following Fourier series:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$

The only thing that remains is to determine the unknown Fourier coefficients $A_{nm}$ and $B_{nm}$. We can determine them using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. This will be discussed in the next lecture.