Category Archives: Engineering Mathematics

Legendre Functions III: Special Values, Parity, Orthogonality

Special Values

From the generating function
$$g(x,t)=\frac{1}{(1-2xt+t^2)^{1/2}},$$
when $x=1$ we obtain
\begin{align*}
g(1,t)&=\frac{1}{(1-2t+t^2)^{1/2}}\\
&=\frac{1}{1-t}\\
&=\sum_{n=0}^\infty t^n,
\end{align*}
since $|t|<1$. On the other hand,
$$g(1,t)=\sum_{n=0}^\infty P_n(1)t^n.$$
So by comparison we get
$$P_n(1)=1.$$ Similarly, if we let $x=-1$,
$$P_n(-1)=(-1)^n.$$
For $x=0$, the generating function yields
$$(1+t^2)^{-1/2}=1-\frac{1}{2}t^2+\frac{3}{8}t^4+\cdots+(-1)^n\frac{1\cdot 3\cdots (2n-1)}{2^nn!}t^{2n}+\cdots.$$
Thus we obtain
\begin{align*}
P_{2n}(0)&=(-1)^n\frac{1\cdot 3\cdots (2n-1)}{2^nn!}=(-1)^n\frac{(2n-1)!!}{(2n)!!},\\
P_{2n+1}(0)&=0,\ n=0,1,2,\cdots.
\end{align*}
Recall that the double factorial !! is defined by
\begin{align*}
(2n)!!&=2\cdot 4\cdot 6\cdots (2n),\\
(2n-1)!!&=1\cdot 3\cdot 5\cdots (2n-1).
\end{align*}

Parity

Note that $g(x,t)=g(-x,-t)$, that is,
$$\sum_{n=0}^\infty P_n(x)t^n=\sum_{n=0}^\infty P_n(-x)(-t)^n,$$
which yields the parity relation
$$P_n(-x)=(-1)^nP_n(x).\ \ \ \ \ (1)$$
(1) tells us that $P_n(x)$ is an even function if $n$ is even and an odd function if $n$ is odd.

Orthogonality

Multiply Legendre’s differential equation
$$\frac{d}{dx}[(1-x^2)P_n'(x)]+n(n+1)P_n(x)=0\ \ \ \ \ (2)$$ by $P_m(x)$:
$$P_m(x)\frac{d}{dx}[(1-x^2)P_n'(x)]+n(n+1)P_m(x)P_n(x)=0.\ \ \ \ \ (3)$$
Replace $n$ by $m$ in (2) and then multiply the resulting equation by $P_n(x)$.
$$P_n(x)\frac{d}{dx}[(1-x^2)P_m'(x)]+m(m+1)P_m(x)P_n(x)=0.\ \ \ \ \ (4)$$
Subtract (4) from (3) and integrate the resulting equation with respect to $x$ from $-1$ to 1.
\begin{align*}
\int_{-1}^1&\left\{P_m(x)\frac{d}{dx}[(1-x^2)P_n'(x)]-P_n(x)\frac{d}{dx}[(1-x^2)P_m'(x)]\right\}dx\\
&=[m(m+1)-n(n+1)]\int_{-1}^1P_m(x)P_n(x)dx.\end{align*}
Using integration by parts,
\begin{align*}
\int_{-1}^1P_m(x)\frac{d}{dx}[(1-x^2)P_n'(x)]dx&=(1-x^2)P_m(x)P_n'(x)\Big|_{-1}^1-\int_{-1}^1(1-x^2)P_m'(x)P_n'(x)dx\\
&=-\int_{-1}^1(1-x^2)P_m'(x)P_n'(x)dx.
\end{align*}
Since the corresponding integration by parts of the second term inside $\{\ \ \}$ gives the same value, the LHS vanishes.
Hence for $m\ne n$,
$$\int_{-1}^1P_m(x)P_n(x)dx=0.\ \ \ \ \ (5)$$
That is, $P_m(x)$ and $P_n(x)$ are orthogonal for the interval $[-1,1]$.
For $x=\cos\theta$, the orthogonality (5) is given by
$$\int_0^\pi P_n(\cos\theta)P_m(\cos\theta)\sin\theta d\theta=0.$$

Integrate
$$(1-2xt+t^2)^{-1}=\left[\sum_{n=0}^\infty P_n(x)t^n\right]^2$$
with respect to $x$ from $-1$ to $1$. Due to the orthogonality (5), the integrals of all the cross terms on the RHS vanish, and so we obtain
$$\int_{-1}^1\frac{dx}{1-2xt+t^2}=\sum_{n=0}^\infty \left\{\int_{-1}^1[P_n(x)]^2dx\right\}t^{2n}.$$
\begin{align*}
\int_{-1}^1\frac{dx}{1-2xt+t^2}&=\frac{1}{2t}\int_{(1-t)^2}^{(1+t)^2}\frac{dy}{y}\\
&=\frac{1}{t}\ln\left(\frac{1+t}{1-t}\right)\\
&=\sum_{n=0}^\infty\frac{2}{2n+1}t^{2n}\ (\mbox{since $|t|<1$}).
\end{align*}
Therefore we obtain the normalization of the Legendre polynomial $P_n(x)$:
$$\int_{-1}^1[P_n(x)]^2dx=\frac{2}{2n+1}.$$
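Both the orthogonality (5) and this normalization are easy to confirm symbolically in Maxima (a quick check; the particular orders 2, 5 and 3 are arbitrary test choices):

integrate(legendre_p(2,x)*legendre_p(5,x), x, -1, 1);   /* should return 0, by (5) */
integrate(legendre_p(3,x)^2, x, -1, 1);                 /* should return 2/7 = 2/(2*3+1) */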

Expansion of Functions

Suppose that
$$\sum_{n=0}^\infty a_nP_n(x)=f(x).\ \ \ \ \ (6)$$
Multiply (6) by $P_m(x)$ and integrate with respect to $x$ from $-1$ to 1:
$$\sum_{n=0}^\infty a_n\int_{-1}^1 P_n(x)P_m(x)dx=\int_{-1}^1f(x)P_m(x)dx.$$
By the orthogonality (5), we obtain
$$\frac{2}{2m+1}a_m=\int_{-1}^1f(x)P_m(x)dx\ \ \ \ \ (7)$$
and hence $f(x)$ can be written as
$$f(x)=\sum_{n=0}^\infty\frac{2n+1}{2}\left(\int_{-1}^1 f(t)P_n(t)dt\right)P_n(x).\ \ \ \ \ (8)$$
This expansion in a series of Legendre polynomials is called a Legendre series. Clearly if $f(x)$ is continuous (or integrable) on the interval $[-1,1]$, it can be expanded as a Legendre series.

(7) can be considered as an integral transform, a finite Legendre transform and (8) can be considered as the inverse transform.

Let us consider the integral operator
$$\mathcal{P}_m:=P_m(x)\frac{2m+1}{2}\int_{-1}^1P_m(t)[\ \cdot\ ]dt.\ \ \ \ \ (9)$$
Then
$$\mathcal{P}_mf=a_mP_m(x).$$
The operator (9) projects out the $m$th component of the function $f(x)$.
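As an illustration, the coefficients (7) can be computed directly in Maxima. Taking the sample function $f(x)=x^3$ (an arbitrary choice for this check), one should find $x^3=\frac{3}{5}P_1(x)+\frac{2}{5}P_3(x)$:

f(x) := x^3$
a(m) := (2*m+1)/2 * integrate(f(x)*legendre_p(m,x), x, -1, 1)$
makelist(a(m), m, 0, 4);   /* should return [0, 3/5, 0, 2/5, 0] */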

Legendre Functions II: Recurrence Relations and Special Properties

In this lecture, we derive some important recurrence relations of Legendre functions and use them to show that Legendre functions are indeed solutions of a differential equation, called Legendre’s differential equation.

Differentiating the generating function
$$g(x,t)=(1-2xt+t^2)^{-1/2}=\sum_{n=0}^\infty P_n(x)t^n,\ |t|<1\ \ \ \ \ \mbox{(1)}$$
with respect to $t$, we get
\begin{align*}
\frac{\partial g(x,t)}{\partial t}&=\frac{x-t}{(1-2xt+t^2)^{3/2}}\ \ \ \ \ \mbox{(2)}\\&=\sum_{n=0}^\infty nP_n(x)t^{n-1}.\ \ \ \ \ \mbox{(3)}\end{align*}
(2) can be written as
$$\frac{x-t}{(1-2xt+t^2)(1-2xt+t^2)^{1/2}}=\frac{(x-t)(1-2xt+t^2)^{-1/2}}{1-2xt+t^2}.$$
By (1) and (3), we obtain
$$(x-t)\sum_{n=0}^\infty P_n(x)t^n=(1-2xt+t^2)\sum_{n=0}^\infty nP_n(x) t^{n-1}$$ or
$$(1-2xt+t^2)\sum_{n=0}^\infty nP_n(x) t^{n-1}+(t-x)\sum_{n=0}^\infty P_n(x)t^n=0$$
which can be written out as
\begin{align*}
\sum_{n=0}^\infty nP_n(x)t^{n-1}-\sum_{n=0}^\infty &2xnP_n(x)t^n+\sum_{n=0}^\infty nP_n(x)t^{n+1}\\&+\sum_{n=0}^\infty P_n(x)t^{n+1}-\sum_{n=0}^\infty xP_n(x)t^n=0.\ \ \ \ \ \mbox{(4)}\end{align*}
In (4) replace $n$ by $n+1$ in the first term, and replace $n$ by $n-1$ in the third and fourth terms. Then (4) becomes
\begin{align*}
\sum_{n=0}^\infty (n+1)P_{n+1}(x)t^n-\sum_{n=0}^\infty &2xnP_n(x)t^n+\sum_{n=0}^\infty (n-1)P_{n-1}(x)t^n\\&+\sum_{n=0}^\infty P_{n-1}(x)t^n-\sum_{n=0}^\infty xP_n(x)t^n=0.
\end{align*}
This can be simplified to
$$\sum_{n=0}^\infty[(n+1)P_{n+1}(x)-(2n+1)xP_n(x)+nP_{n-1}(x)]t^n=0$$
which implies that
$$(2n+1)xP_n(x)=(n+1)P_{n+1}(x)+nP_{n-1}(x).\ \ \ \ \ \mbox{(5)}$$
The recurrence relation (5) can be used to calculate Legendre polynomials. For example, we found $P_0(x)=1$ and $P_1(x)=x$ in Legendre Functions I. For $n=1$, (5) is
$$3xP_1(x)=2P_2(x)+P_0(x)$$
i.e.
$$P_2(x)=\frac{1}{2}(3x^2-1).$$
Continuing this using the recurrence relation (5), we obtain
\begin{align*}
P_3(x)&=\frac{1}{2}(5x^3-3x),\\
P_4(x)&=\frac{1}{8}(35x^4-30x^2+3),\\
P_5(x)&=\frac{1}{8}(63x^5-70x^3+15x),\\
\cdots.
\end{align*}
A great advantage of having the recurrence relation (5) is that one can easily calculate Legendre polynomials on a computer with a simple program. This can be done easily, for instance, in Maxima.

Let us load the following simple program to run the recurrence relation (5).

(%i1) Legendre(n,x):=block ([],
if n = 0 then 1
else
if n = 1 then x
else  ((2*n - 1)*x*Legendre(n - 1, x)-(n - 1)*Legendre(n - 2,x))/n);

(%o1) Legendre(n, x) := block([], if n = 0 then 1
else (if n = 1 then x else ((2 n - 1) x Legendre(n - 1, x)
- (n - 1) Legendre(n - 2, x))/n))

Now we are ready to calculate Legendre polynomials. For example, let us calculate $P_3(x)$.

(%i2) Legendre(3,x);

The output is not exactly what we may like because it is not simplified.

In Maxima, simplification can be done by the command ratsimp.

(%i3) ratsimp(Legendre(3,x));

The output is the simplified form $\frac{5x^3-3x}{2}$, which is $P_3(x)$.

That looks better. Let us calculate one more, say $P_4(x)$.
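Continuing the session:

(%i4) ratsimp(Legendre(4,x));

The simplified output is $\frac{35x^4-30x^2+3}{8}$, which is indeed $P_4(x)$ as listed above.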

Now we differentiate $g(x,t)$ with respect to $x$.
$$\frac{\partial g(x,t)}{\partial x}=\frac{t}{(1-2xt+t^2)^{3/2}}=\sum_{n=0}^\infty P_n'(x)t^n.$$
From this we obtain
$$(1-2xt+t^2)\sum_{n=0}^\infty P_n'(x)t^n-t\sum_{n=0}^\infty P_n(x)t^n=0$$
which leads to
$$P_{n+1}'(x)+P_{n-1}'(x)=2xP_n'(x)+P_n(x).\ \ \ \ \ \mbox{(6)}$$
Add 2 times $\frac{d}{dx}(5)$ to $(2n+1)$ times (6). Then we get
$$(2n+1)P_n(x)=P_{n+1}'(x)-P_{n-1}'(x).\ \ \ \ \ \mbox{(7)}$$
$\frac{1}{2}[(6)+(7)]$ yields
$$P_{n+1}'(x)=(n+1)P_n(x)+xP_n'(x).\ \ \ \ \ \mbox{(8)}$$
$\frac{1}{2}[(6)-(7)]$ yields
$$P_{n-1}'(x)=-nP_n(x)+xP_n'(x).\ \ \ \ \ \mbox{(9)}$$
Replace $n$ by $n-1$ in (8) and add the result to $x$ times (9):
$$(1-x^2)P_n'(x)=nP_{n-1}(x)-nxP_n(x).\ \ \ \ \ \mbox{(10)}$$
Differentiate (10) with respect to $x$ and add the result to $n$ times (9):
$$(1-x^2)P_n^{\prime\prime}(x)-2xP_n'(x)+n(n+1)P_n(x)=0.\ \ \ \ \ \mbox{(11)}$$
The linear second-order differential equation (11) is called Legendre’s differential equation, and as we have just seen, $P_n(x)$ satisfies (11). This is why $P_n(x)$ is called a Legendre polynomial.

In physics (11) is often expressed in terms of differentiation with respect to $\theta$. Let $x=\cos\theta$. Then by the chain rule,
\begin{align*}
\frac{dP_n(\cos\theta)}{d\theta}&=-\sin\theta\frac{dP_n(x)}{dx},\ \ \ \ \ \mbox{(12)}\\ \frac{d^2P_n(\cos\theta)}{d\theta^2}&=-x\frac{dP_n(x)}{dx}+(1-x^2)\frac{d^2P_n(x)}{dx^2}.\ \ \ \ \ \mbox{(13)}
\end{align*}
Using (12) and (13), Legendre’s differential equation (11) can be written as
$$\frac{1}{\sin\theta}\frac{d}{d\theta}\left[\sin\theta\frac{dP_n(\cos\theta)}{d\theta}\right]+n(n+1)P_n(\cos\theta)=0.$$
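As a quick check, Maxima can confirm (11) directly for any particular order; for instance, with $n=4$ (an arbitrary test value) and the built-in legendre_p:

n: 4$
p: legendre_p(n, x)$
ratsimp((1-x^2)*diff(p, x, 2) - 2*x*diff(p, x) + n*(n+1)*p);   /* should return 0 */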

Legendre Functions I: A Physical Origin of Legendre Functions

Consider an electric charge $q$ placed on the $z$-axis at $z=a$.

[Figure: Electric Potential. The charge $q$ is at $z=a$; the field point lies at distance $r$ from the origin and polar angle $\theta$, a distance $r_1$ from the charge.]

The electrostatic potential of charge $q$ is $$\varphi=\frac{1}{4\pi\epsilon_0}\frac{q}{r_1}.\ \ \ \ \ \mbox{(1)}$$ Using the Laws of Cosine, one can write $r_1$ in terms of $r$ and $\theta$:
$$r_1=\sqrt{r^2+a^2-2ar\cos\theta}$$
and thereby the electrostatic potential (1) can be written as
$$\varphi=\frac{q}{4\pi\epsilon_0}(r^2+a^2-2ar\cos\theta)^{-1/2}.\ \ \ \ \ \mbox{(2)}$$

Recall the Binomial Expansion Formula: Suppose that $x,y\in\mathbb{R}$ and $|x|>|y|$. Then
$$(x+y)^r=\sum_{k=0}^\infty\begin{pmatrix}r\\k\end{pmatrix}x^{r-k}y^k,\ \ \ \ \ \mbox{(3)}$$ where $\begin{pmatrix}r\\k\end{pmatrix}=\frac{r(r-1)\cdots(r-k+1)}{k!}=\frac{r!}{k!(r-k)!}$ (the factorials being interpreted via the Gamma function when $r$ is not a nonnegative integer).

Legendre Polynomials: If $r>a$ (or more specifically $r^2>|a^2-2ar\cos\theta|$), we can expand the radical to obtain:
$$\varphi=\frac{q}{4\pi\epsilon_0 r}\sum_{n=0}^\infty P_n(\cos\theta)\left(\frac{a}{r}\right)^n.$$ The coefficients $P_n$ are called the Legendre polynomials. The Legendre polynomials can be defined by the generating function
$$g(t,x)=(1-2xt+t^2)^{-1/2}=\sum_{n=0}^\infty P_n(x)t^n,\ \ \ \ \ \mbox{(4)}$$ where $|t|<1$. Using the binomial expansion formula (3), we obtain
\begin{align*}
(1-2xt+t^2)^{-1/2}&=\sum_{n=0}^\infty\frac{(2n)!}{2^{2n}(n!)^2}(2xt-t^2)^n\ \ \ \ \ \mbox{(5)}\\
&=\sum_{n=0}^\infty\frac{(2n-1)!!}{(2n)!!}(2xt-t^2)^n.
\end{align*}
Let us write out the first three terms:
\begin{align*}
\frac{0!}{2^0(0!)^2}&(2xt-t^2)^0+\frac{2!}{2^2(1!)^2}(2xt-t^2)^1+\frac{4!}{2^4(2!)^2}(2xt-t^2)^2\\
&=1t^0+xt^1+\left(\frac{3}{2}x^2-\frac{1}{2}\right)t^2+\mathcal{O}(t^3).
\end{align*}
Thus we see that $P_0(x)=1$, $P_1(x)=x$, and $P_2(x)=\frac{3}{2}x^2-\frac{1}{2}$. In practice, we don’t calculate Legendre polynomials using the power series (5). Instead, we use the recurrence relation of Legendre polynomials that will be discussed later.

The Maxima name for the Legendre polynomial $P_n(x)$ is legendre_p(n,x). The following graph of $P_2(x)$, $P_3(x)$, $P_4(x)$, $P_5(x)$, $-1\leq x\leq 1$, was made in Maxima using the command:

plot2d([legendre_p(2,x),legendre_p(3,x),legendre_p(4,x),legendre_p(5,x)],[x,-1,1]);

[Figure: Legendre polynomials $P_2(x)$, $P_3(x)$, $P_4(x)$, $P_5(x)$ on $-1\leq x\leq 1$]

Now expand the polynomial $(2xt-t^2)^n$ in the power series (5):
\begin{align*}
(1-2xt+t^2)^{-1/2}&=\sum_{n=0}^\infty\frac{(2n)!}{2^{2n}(n!)^2}t^n\sum_{k=0}^n(-1)^k\frac{n!}{k!(n-k)!}(2x)^{n-k}t^k\\
&=\sum_{n=0}^\infty\sum_{k=0}^n(-1)^k\frac{(2n)!}{2^{2n}n!k!(n-k)!}(2x)^{n-k}t^{k+n}.\ \ \ \ \ \mbox{(6)}
\end{align*}
By rearranging the order of summation, (6) can be written as
$$(1-2xt+t^2)^{-1/2}=\sum_{n=0}^\infty\sum_{k=0}^{[n/2]}(-1)^k\frac{(2n-2k)!}{2^{2n-2k}k!(n-k)!(n-2k)!}(2x)^{n-2k}t^n,$$ where
$$\left[\frac{n}{2}\right]=\left\{\begin{array}{ccc}
\frac{n}{2} & \mbox{for} & n=\mbox{even}\\
\frac{n-1}{2} & \mbox{for} & n=\mbox{odd}.
\end{array}\right.$$

Hence,
$$P_n(x)=\sum_{k=0}^{[n/2]}(-1)^k\frac{(2n-2k)!}{2^{2n-2k}k!(n-k)!(n-2k)!}(2x)^{n-2k}.\ \ \ \ \ \mbox{(7)}$$
In practice, we hardly use the formula (7). Again, we use the recurrence relation of Legendre polynomials instead.
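Still, it is a useful exercise to verify formula (7) against Maxima's built-in legendre_p for small $n$ (Pformula below is just a throwaway helper name):

Pformula(n) := sum((-1)^k*(2*n-2*k)!/(2^(2*n-2*k)*k!*(n-k)!*(n-2*k)!)*(2*x)^(n-2*k), k, 0, floor(n/2))$
makelist(ratsimp(Pformula(n) - legendre_p(n,x)), n, 0, 5);   /* should return [0, 0, 0, 0, 0, 0] */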

Electric Dipole: The generating function (4) can be used for the electric multipole potential. Here we consider an electric dipole. Let us place electric charges $q$ and $-q$ at $z=a$ and $z=-a$, respectively.

[Figure: Electric Dipole Potential. The charges $q$ and $-q$ sit at $z=a$ and $z=-a$; the field point lies at distance $r$ from the origin and polar angle $\theta$, at distances $r_1$ and $r_2$ from the two charges.]

The electric dipole potential is given by
$$\varphi=\frac{q}{4\pi\epsilon_0}\left(\frac{1}{r_1}-\frac{1}{r_2}\right).\ \ \ \ \ \mbox{(8)}$$
$r_2$ is written in terms of $r$ and $\theta$ using the Law of Cosines as
\begin{align*}
r_2^2&=r^2+a^2-2ar\cos(\pi-\theta)\\
&=r^2+a^2+2ar\cos\theta.
\end{align*}
So by the generating function (4), the electric dipole potential (8) can be written as
\begin{align*}
\varphi&=\frac{q}{4\pi\epsilon_0 r}\left\{\left[1-2\left(\frac{a}{r}\right)\cos\theta+\left(\frac{a}{r}\right)^2\right]^{-\frac{1}{2}}-\left[1+2\left(\frac{a}{r}\right)\cos\theta+\left(\frac{a}{r}\right)^2\right]^{-\frac{1}{2}}\right\}\\
&=\frac{q}{4\pi\epsilon_0 r}\left[\sum_{n=0}^\infty P_n(\cos\theta)\left(\frac{a}{r}\right)^n-\sum_{n=0}^\infty P_n(\cos\theta)(-1)^n\left(\frac{a}{r}\right)^n\right]\\
&=\frac{2q}{4\pi\epsilon_0 r}\left[P_1(\cos\theta)\left(\frac{a}{r}\right)+P_3(\cos\theta)\left(\frac{a}{r}\right)^3+\cdots\right]
\end{align*}
for $r>a$.

For $r\gg a$,
$$\varphi\approx\frac{2aq}{4\pi\epsilon_0}\frac{P_1(\cos\theta)}{r^2}=\frac{2aq}{4\pi\epsilon_0}\frac{\cos\theta}{r^2}.$$
This is the usual electric dipole potential. The quantity $2aq$ is called the dipole moment in electromagnetism.
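A quick numerical check of this far-field approximation can be done in Maxima, with the overall factor $\frac{q}{4\pi\epsilon_0}$ dropped and arbitrary illustrative values $a=1$, $r=20$, $\theta=\pi/3$:

a: 1$  r: 20$  th: %pi/3$
exact: 1/sqrt(r^2 + a^2 - 2*a*r*cos(th)) - 1/sqrt(r^2 + a^2 + 2*a*r*cos(th))$
approx: 2*a*cos(th)/r^2$
float([exact, approx]);   /* both should be approximately 0.0025 */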

Spherical Bessel Functions

When the Helmholtz equation is separated in spherical coordinates, the radial equation has the form
$$r^2\frac{d^2R}{dr^2}+2r\frac{dR}{dr}+[k^2r^2-n(n+1)]R=0.\ \ \ \ \ \mbox{(1)}$$
The equation (1) looks similar to Bessel’s equation.  If we use the transformation $R(kr)=\frac{Z(kr)}{(kr)^{1/2}}$, (1) turns into Bessel’s equation
$$r^2\frac{d^2Z}{dr^2}+r\frac{dZ}{dr}+\left[k^2r^2-\left(n+\frac{1}{2}\right)^2\right]Z=0.\ \ \ \ \ \mbox{(2)}$$
Hence $Z(kr)=J_{n+\frac{1}{2}}(kr)$, the Bessel function of order $n+\frac{1}{2}$, where $n$ is an integer.

Spherical Bessel Functions: Spherical Bessel functions of the first kind and the second kind are defined by
\begin{align*}
j_n(x)&:=\sqrt{\frac{\pi}{2x}}J_{n+\frac{1}{2}}(x),\\
n_n(x)&:=\sqrt{\frac{\pi}{2x}}N_{n+\frac{1}{2}}(x)=(-1)^{n+1}\sqrt{\frac{\pi}{2x}}J_{-n-\frac{1}{2}}(x).
\end{align*}
Spherical Bessel functions $j_n(kr)$ and $n_n(kr)$ are two linearly independent solutions of the equation (1).

One can obtain power series representations of $j_n(x)$ and $n_n(x)$ using the Legendre duplication formula

$$z!\left(z+\frac{1}{2}\right)!=2^{-2z-1}\pi^{1/2}(2z+1)!$$ from $$J_{n+\frac{1}{2}}(x)=\sum_{s=0}^\infty\frac{(-1)^s}{s!\left(s+n+\frac{1}{2}\right)!}\left(\frac{x}{2}\right)^{2s+n+\frac{1}{2}}:$$
\begin{align*}
j_n(x)&=2^nx^n\sum_{s=0}^\infty\frac{(-1)^s(s+n)!}{s!(2s+2n+1)!}x^{2s},\\
n_n(x)&=(-1)^{n+1}\frac{2^n\pi^{1/2}}{x^{n+1}}\sum_{s=0}^\infty\frac{(-1)^s}{s!\left(s-n-\frac{1}{2}\right)!}\left(\frac{x}{2}\right)^{2s}\\
&=\frac{(-1)^{n+1}}{2^nx^{n+1}}\sum_{s=0}^\infty\frac{(-1)^s(s-n)!}{s!(2s-2n)!}x^{2s}.
\end{align*}
From these power series representations, we obtain
\begin{align*}
j_0(x)&=\frac{\sin x}{x}\left(=\sum_{s=0}^\infty\frac{(-1)^s}{(2s+1)!}x^{2s}\right)\\
n_0(x)&=-\frac{\cos x}{x}\\
j_1(x)&=\frac{\sin x}{x^2}-\frac{\cos x}{x}\\
n_1(x)&=-\frac{\cos x}{x^2}-\frac{\sin x}{x}.
\end{align*}
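These closed forms can be reproduced in Maxima from the definition $j_n(x)=\sqrt{\frac{\pi}{2x}}J_{n+\frac{1}{2}}(x)$: with the flag besselexpand set, Bessel functions of half-odd-integer order are expanded into elementary functions.

besselexpand: true$
radcan(sqrt(%pi/(2*x))*bessel_j(1/2, x));   /* should return sin(x)/x */
radcan(sqrt(%pi/(2*x))*bessel_j(3/2, x));   /* equivalent to sin(x)/x^2 - cos(x)/x */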
Orthogonality: Recall the orthogonality of Bessel functions
$$\int_0^aJ_\nu\left(\frac{\alpha_{\nu p}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu q}}{a}\rho\right)\rho d\rho=\frac{a^2}{2}[J_{\nu+1}(\alpha_{\nu p})]^2\delta_{pq}$$ as discussed in Bessel Functions of the First Kind $J_n(x)$ II below. By a substitution, we obtain the orthogonality of spherical Bessel functions
$$\int_0^aj_n\left(\frac{\alpha_{np}}{a}\rho\right)j_n\left(\frac{\alpha_{nq}}{a}\rho\right)\rho^2 d\rho=\frac{a^3}{2}[j_{n+1}(\alpha_{np})]^2\delta_{pq},$$ where $\alpha_{np}$ and $\alpha_{nq}$ are roots of $j_n$.

Example: [Particle in a Sphere]

Let us consider a particle inside a sphere with radius $a$. The wave function that describes the state of the particle satisfies the Schrödinger equation
$$-\frac{\hbar^2}{2m}\nabla^2\psi=E\psi\ \ \ \ \ \mbox{(3)}$$
with boundary conditions:
\begin{align*}
&\psi(r\leq a)\ \mbox{is finite},\\
&\psi(a)=0.
\end{align*}
This corresponds to the potential $V=0$ for $r\leq a$ and $V=\infty$ for $r>a$. Here $m$ is the mass of the particle and $\hbar=\frac{h}{2\pi}$ is the reduced Planck constant (also called the Dirac constant).
Note that (3) is the Helmholtz equation $\nabla^2\psi+k^2\psi=0$ with $k^2=\frac{2mE}{\hbar^2}$, whose radial part satisfies
$$\frac{d^2R}{dr^2}+\frac{2}{r}\frac{dR}{dr}+\left[k^2-\frac{n(n+1)}{r^2}\right]R=0.$$ Now we determine the minimum energy (zero-point energy) $E_{\mbox{min}}$. Since any angular dependence would increase the energy, we take $n=0$. The solution $R$ is given by
$$R(kr)=Aj_0(kr)+Bn_0(kr).$$ Since $n_0(kr)\rightarrow\infty$ at the origin, $B=0$. From the boundary condition $\psi(a)=0$, $R(a)=0$, i.e. $j_0(ka)=0$. Thus $ka=\frac{\sqrt{2mE}}{\hbar}a=\alpha$ is a root of $j_0(x)$. The smallest $\alpha$ is the first zero of $j_0(x)$, $\alpha=\pi$. Therefore,
\begin{align*}
E_{\mbox{min}}&=\frac{\hbar^2\alpha^2}{2ma^2}\\
&=\frac{\hbar^2\pi^2}{2ma^2}\\
&=\frac{h^2}{8ma^2},
\end{align*}
where $h$ is the Planck constant. This means that for any finite sphere, the particle will have a positive minimum energy (or zero-point energy).
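As a hypothetical numerical illustration (an electron confined to a sphere of radius $1\,\mbox{nm}$; the values below are the standard constants in SI units), the zero-point energy can be evaluated in Maxima:

h: 6.626e-34$    /* Planck constant, J s */
m: 9.109e-31$    /* electron mass, kg */
a: 1e-9$         /* sphere radius, m */
Emin: h^2/(8*m*a^2);      /* about 6.0e-20 J */
Emin/1.602e-19;           /* about 0.38 eV */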

Neumann Functions, Bessel Function of the Second Kind $N_\nu(x)$

So far, we have considered Bessel functions $J_\nu(x)$ only for the case $\nu=\mbox{integer}$. Note that $J_\nu$ and $J_{-\nu}$ are linearly independent if $\nu$ is a non-integer. If $\nu$ is an integer, $J_\nu$ and $J_{-\nu}$ satisfy the relation $J_{-\nu}=(-1)^\nu J_\nu$, i.e. they are no longer linearly independent. Thus we need a second solution of Bessel’s equation.

Let us define
$$N_\nu(x):=\frac{\cos\nu\pi J_\nu(x)-J_{-\nu}(x)}{\sin\nu\pi}.$$
$N_\nu(x)$ is called the Neumann function or the Bessel function of the second kind. For $\nu=\mbox{integer}$, $N_\nu(x)$ is an indeterminate form of type $\frac{0}{0}$. So by l’Hôpital’s rule
\begin{align*}
N_n(x)&=\lim_{\nu\to n}\frac{\frac{\partial}{\partial\nu}[\cos\nu\pi J_\nu(x)-J_{-\nu}(x)]}{\frac{\partial}{\partial\nu}\sin\nu\pi}\\
&=\frac{1}{\pi}\lim_{\nu\to n}\left[\frac{\partial J_\nu(x)}{\partial\nu}-(-1)^n\frac{\partial J_{-\nu}(x)}{\partial\nu}\right].
\end{align*}
The Neumann function can also be written as a power series:
\begin{align*}
N_n(x)=&\frac{2}{\pi}\left[\ln\left(\frac{x}{2}\right)+\gamma-\frac{1}{2}\sum_{p=1}^n\frac{1}{p}\right]J_n(x)\\
&-\frac{1}{\pi}\sum_{r=0}^\infty(-1)^r\frac{\left(\frac{x}{2}\right)^{n+2r}}{r!(n+r)!}\sum_{p=1}^r\left[\frac{1}{p}+\frac{1}{p+n}\right]\\
&-\frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{-n+2r},
\end{align*}
where $\gamma$ is the Euler-Mascheroni constant.

In Maxima, the Neumann function is denoted by bessel_y(n,x). Let us plot $N_0(x)$, $N_1(x)$, $N_2(x)$ together using the command

plot2d([bessel_y(0,x),bessel_y(1,x),bessel_y(2,x)],[x,0.5,10]);

[Figure: Neumann functions $N_0(x)$, $N_1(x)$, $N_2(x)$ on $0.5\leq x\leq 10$]

We now show that Neumann functions do satisfy Bessel’s differential equation. Differentiate Bessel’s equation
$$x^2\frac{d^2}{dx^2}J_{\pm\nu}(x)+x\frac{d}{dx}J_{\pm\nu}(x)+(x^2-\nu^2)J_{\pm\nu}(x)=0$$
with respect to $\nu$. (Here of course we are assuming that $\nu$ is a continuous variable.) Then we obtain the following equations:
\begin{align*}
x^2\frac{d^2}{dx^2}\frac{\partial J_\nu(x)}{\partial\nu}+x\frac{d}{dx}\frac{\partial J_\nu(x)}{\partial\nu}+(x^2-\nu^2)\frac{\partial J_\nu(x)}{\partial\nu}=2\nu J_\nu(x)\ \ \ \ \ \mbox{(1)}\\
x^2\frac{d^2}{dx^2}\frac{\partial J_{-\nu}(x)}{\partial\nu}+x\frac{d}{dx}\frac{\partial J_{-\nu}(x)}{\partial\nu}+(x^2-\nu^2)\frac{\partial J_{-\nu}(x)}{\partial\nu}=2\nu J_{-\nu}(x)\ \ \ \ \ \mbox{(2)}
\end{align*}
Subtract $\frac{1}{\pi}(-1)^n$ times (2) from $\frac{1}{\pi}$ times (1) and then take the limit of the resulting equation as $\nu\to n$. The right-hand side becomes $\frac{2n}{\pi}[J_n(x)-(-1)^nJ_{-n}(x)]=0$ since $J_{-n}(x)=(-1)^nJ_n(x)$, and we see that $N_n$ satisfies Bessel’s differential equation
$$x^2\frac{d^2}{dx^2}N_n(x)+x\frac{d}{dx}N_n(x)+(x^2-n^2)N_n(x)=0.$$
The general solution of Bessel’s differential equation is given by
$$y_n(x)=AJ_n(x)+BN_n(x).$$
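A numerical spot-check in Maxima confirms that bessel_y(n,x) indeed satisfies Bessel’s equation (the order $n=2$ and the point $x=3.7$ are arbitrary test values):

n: 2$
expr: x^2*diff(bessel_y(n,x), x, 2) + x*diff(bessel_y(n,x), x) + (x^2 - n^2)*bessel_y(n,x)$
float(at(expr, x = 3.7));   /* should be 0.0 up to rounding error */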

Bessel Functions of the First Kind $J_n(x)$ II: Orthogonality

To accommodate boundary conditions for a finite interval $[0,a]$, we need to consider Bessel functions of the form $J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)$. For $x=\frac{\alpha_{\nu m}}{a}\rho$, Bessel’s equation (equation (9) of Bessel Functions of the First Kind $J_n(x)$ I) can be written as
$$\rho\frac{d^2}{d\rho^2}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)+\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)+\left(\frac{\alpha_{\nu m}^2\rho}{a^2}-\frac{\nu^2}{\rho}\right)J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)=0.\ \ \ \ \ \mbox{(10)}$$ Changing $\alpha_{\nu m}$ to $\alpha_{\nu n}$, $J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)$ satisfies
$$\rho\frac{d^2}{d\rho^2}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)+\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)+\left(\frac{\alpha_{\nu n}^2\rho}{a^2}-\frac{\nu^2}{\rho}\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)=0.\ \ \ \ \ \mbox{(11)}$$
Multiply (10) by $J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)$ and (11) by $J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)$ and subtract:
\begin{align*}
J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}&\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]-J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]\\&=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\rho J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right).\end{align*}
Integrate this equation with respect to $\rho$ from $\rho=0$ to $\rho=a$:
\begin{align*}\int_0^a J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}&\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]d\rho\\&-\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]d\rho\\&=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho.\ \ \ \ \ \mbox{(12)}\end{align*}
Using Integration by Parts, we have
\begin{align*}
\int_0^a &J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]d\rho\\&=\left[J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]_0^a-\int_0^a \rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)dJ_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right).\end{align*}
Thus (12) can be written as
\begin{align*}\left[J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]_0^a-\left[J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]_0^a\\=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho.\ \ \ \ \ \mbox{(13)}\end{align*}
Clearly the LHS of (13) vanishes at $\rho=0$. (Here we consider only the case $\nu=\mbox{integer}$.) It also vanishes at $\rho=a$ if we choose $\alpha_{\nu n}$ and $\alpha_{\nu m}$ to be the $n$-th and $m$-th zeros of $J_\nu$. Therefore, for $m\ne n$,
$$\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho=0.\ \ \ \ \ \mbox{(14)}$$
(14) gives us orthogonality over the interval $[0,a]$.

For $m=n$, we have the normalization integral
$$\int_0^a\left[J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]^2\rho d\rho=\frac{a^2}{2}[J_{\nu+1}(\alpha_{\nu m})]^2.\ \ \ \ \ \mbox{(15)}$$
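Both (14) and (15) can be checked numerically in Maxima, for instance for $\nu=0$ and $a=1$ (find_root is used here to locate the first two zeros of $J_0$ in brackets chosen by hand):

a: 1$
alpha1: find_root(bessel_j(0,x), x, 2, 3)$   /* first zero of J_0, about 2.4048 */
alpha2: find_root(bessel_j(0,x), x, 5, 6)$   /* second zero of J_0, about 5.5201 */
quad_qags(bessel_j(0,alpha1*r/a)*bessel_j(0,alpha2*r/a)*r, r, 0, a);   /* first entry: approximately 0, as in (14) */
quad_qags(bessel_j(0,alpha1*r/a)^2*r, r, 0, a);   /* first entry: approximately (a^2/2)*bessel_j(1,alpha1)^2, as in (15) */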

Bessel Functions of the First Kind $J_n(x)$ I: Generating Function, Recurrence Relation, Bessel’s Equation

Let us begin with the generating function

$$g(x,t) = e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}.$$
Expanding this function in a Laurent series, we obtain
$$e^{\frac{x}{2}\left(t-\frac{1}{t}\right)} = \sum_{n=-\infty}^\infty J_n(x)t^n.$$
The coefficient of $t^n$, $J_n(x)$, is defined to be a Bessel function of the first kind of order $n$.
Now, we determine $J_n(x)$.
\begin{align*}
e^{\frac{x}{2}t}e^{-\frac{x}{2t}}&=\sum_{r=0}^\infty\left(\frac{x}{2}\right)^r\frac{t^r}{r!} \sum_{s=0}^\infty(-1)^s\left( \frac{x}{2}\right)^s \frac{t^{-s}}{s!}\\
&=\sum_{r=0}^\infty\sum_{s=0}^\infty\frac{(-1)^s}{r!s!}\left(\frac{x}{2}\right)^{r+s}t^{r-s}.
\end{align*}
Set $r=n+s$. Then for $n\ge 0$ we obtain
$$J_n(x)=\sum_{s=0}^\infty \frac{(-1)^s}{s!(n+s)!}\left(\frac{x}{2}\right)^{n+2s}.$$

[Figure: Bessel functions of the first kind $J_n(x)$]

Now we find $J_{-n}(x)$. Replacing $n$ by $-n$ in the above series for $J_n(x)$, we obtain
$$J_{-n}(x)=\sum_{s=0}^\infty\frac{(-1)^s}{s!(s-n)!} \left(\frac{x}{2}\right)^{2s-n}.$$
However, $(s-n)!\rightarrow\infty$ for $s=0,1,\cdots,n-1$ (the factorials being understood via the Gamma function), so those terms vanish and the series may be considered to begin at $s=n$. Replacing $s$ by $s+n$, we obtain
$$J_{-n}(x)=\sum_{s=0}^ \infty\frac{(-1)^{s+n}}{s!(s+n)!}\left( \frac{x}{2} \right)^{n+2s}.$$ Note that $J_n(x)$ and $J_{-n}(x)$ satisfy the relation
$$J_{-n}(x)=(-1)^nJ_n(x).$$

Let us differentiate the generating function $g(x,t)$ with respect to $t$:
\begin{align*}
\frac{\partial g(x,t)}{\partial t} &=\frac{x}{2}\left(1+ \frac{1}{t^2}\right) e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
&=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}.
\end{align*}
Replace $e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}$ by $\sum_{n=-\infty}^\infty J_n(x) t^n$.
Then
\begin{align*}
\sum_{n=-\infty}^\infty \frac{x}{2} (1+ \frac{1}{t^2}) J_n(x) t^{n}&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_n(x) t^n + J_n(x) t^{n-2}]\\
&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}.
\end{align*}
Thus,
$$\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}$$
or we obtain the recurrence relation,
\begin{equation}J_{n-1}(x) + J_{n+1}(x) = \frac{2n}{x} J_n(x).\end{equation}
Now we differentiate $g(x,t)$ with respect to $x$:
\begin{align*}
\frac{\partial g(x,t)}{\partial x}&=\frac{1}{2}\left(t-\frac{1}{t}\right)e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
& =\sum_{n=-\infty}^\infty J_n'(x)t^n.
\end{align*}
This leads to the recurrence relation
\begin{equation}J_{n-1}(x) - J_{n+1}(x) = 2 J_n'(x).\end{equation}
As a special case of this recurrence relation, we obtain
$$J_{0}'(x)=-J_1(x).$$
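These relations are easy to spot-check numerically with Maxima's built-in bessel_j (the order $n=2$ and the point $x_0=2.5$ are arbitrary test values):

x0: 2.5$  n: 2$
float(bessel_j(n-1,x0) + bessel_j(n+1,x0) - (2*n/x0)*bessel_j(n,x0));             /* relation (1): should be about 0.0 */
float(bessel_j(n-1,x0) - bessel_j(n+1,x0) - 2*at(diff(bessel_j(n,x),x), x=x0));   /* relation (2): should be about 0.0 */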
Adding (1) and (2), we have
\begin{equation}J_{n-1}(x)=\frac{n}{x}J_n(x) + J_n'(x).\end{equation}
Multiplying (3) by $x^n$:
\begin{align*}
x^n J_{n-1}(x) & = n x^{n-1} J_n(x) + x^n J_n'(x)\\
& = \frac{d}{dx}[ x^n J_n(x)].
\end{align*}
Subtracting (2) from (1), we have
\begin{equation}J_{n+1}(x) = \frac{n}{x} J_n(x) - J_n'(x).\end{equation}
Multiplying (4) by $-x^{-n}$:
\begin{align*}
-x^{-n} J_{n+1}(x) & = -n x^{-n-1} J_n(x) + x^{-n} J_n'(x)\\
& = \frac{d}{dx}[x^{-n} J_n(x)].
\end{align*}

Using the recurrence relations, we can show that the Bessel functions $J_n(x)$ are solutions of Bessel’s differential equation. The recurrence relation (3) can be written as
\begin{equation}x J_n'(x) = x J_{n-1}(x) - n J_n(x).\end{equation}
Differentiating this equation with respect to $x$, we obtain
$$J_n'(x) + x J_n^{\prime\prime}(x) = J_{n-1}(x) + x J_{n-1}'(x) - n J_n'(x)$$
or
\begin{equation}x J_n^{\prime\prime}(x) + (n+1) J_n'(x) - x J_{n-1}'(x) - J_{n-1}(x) = 0.\end{equation}
Subtracting (5) times $n$ from (6) times $x$ yields the equation
\begin{equation}x^2 J_n^{\prime\prime}(x) + x J_n'(x) - n^2 J_n(x) + x(n-1) J_{n-1}(x) - x^2 J_{n-1}'(x) = 0.\end{equation}
Replace $n$ by $n-1$ in (4) and multiply the resulting equation by $x^2$ to get the equation
\begin{equation}x^2 J_n(x) = x (n-1) J_{n-1}(x) - x^2 J_{n-1}'(x).\end{equation}
With the equation (8), the equation (7) can be written as
\begin{equation}x^2 J_n^{\prime\prime}(x) + x J_n'(x) + (x^2 - n^2) J_n(x) = 0.\end{equation}
This is Bessel’s equation. Hence the Bessel functions $J_n(x)$ are the solutions of Bessel’s equation.