Category Archives: Functional Analysis

Functional Analysis 10: Linear Functionals

Definition. Let $X$ be a vector space. A linear functional is a linear map $f:\mathcal{D}(f)\subset X\longrightarrow\mathbb{R}$ (or $f:\mathcal{D}(f)\subset X\longrightarrow\mathbb{C}$).

Definition. A linear functional $f:\mathcal{D}(f)\longrightarrow\mathbb{R}$ is said to be bounded if there exists a number $c$ such that $|f(x)|\leq c||x||$ for all $x\in\mathcal{D}(f)$. Just as in the case of linear operators, the norm $||f||$ is defined by
\begin{align*}
||f||&=\sup_{\begin{array}{c}x\in\mathcal{D}(f)\\x\ne O\end{array}}\frac{|f(x)|}{||x||}\\
&=\sup_{\begin{array}{c}x\in\mathcal{D}(f)\\||x||=1\end{array}}|f(x)|.
\end{align*}
We also have the inequality
$$|f(x)|\leq ||f||||x||$$
for all $x\in\mathcal{D}(f)$.

Just as in the case of linear operators, we have the following theorem.

Theorem. A linear functional $f$ with domain $\mathcal{D}(f)$ in a normed space is continuous if and only if $f$ is bounded.

Example. Let $a=(\alpha_j)\in\mathbb{R}^3$. Define $f:\mathbb{R}^3\longrightarrow\mathbb{R}$ by
$$f(x)=x\cdot a=\xi_1\alpha_1+\xi_2\alpha_2+\xi_3\alpha_3$$ for each $x=(\xi_j)\in\mathbb{R}^3$. Then $f$ is a linear functional. By Cauchy-Schwarz inequality, we obtain
$$|f(x)|=|x\cdot a|\leq ||x||||a||$$
which implies $||f||\leq ||a||$. On the other hand, if $a\ne O$, then taking $x=a$ gives
$$||a||=\frac{||a||^2}{||a||}=\frac{|f(a)|}{||a||}\leq ||f||.$$
Hence, we have $||f||=||a||$ (when $a=O$, both sides are $0$).
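
As a quick numerical sanity check (my own sketch, not part of the lecture; the particular vector $a$ and the random sampling are arbitrary choices), one can estimate $\sup_{||x||=1}|f(x)|$ by sampling unit vectors and compare it with $||a||$:

```python
import numpy as np

# Illustrative check: for f(x) = x . a on R^3, sup_{||x||=1} |f(x)| should equal ||a||.
rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 3.0])                    # an arbitrary choice of a

xs = rng.normal(size=(100000, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # random unit vectors in R^3
estimate = np.abs(xs @ a).max()                   # approximates sup_{||x||=1} |f(x)|

print(estimate, np.linalg.norm(a))  # the estimate is close to, and never exceeds, ||a||
```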

Example. Define $f:\mathcal{C}[a,b]\longrightarrow\mathbb{R}$ by
$$f(x)=\int_a^b x(t)dt$$
for each $x(t)\in\mathcal{C}[a,b]$. Then $f$ is a linear functional.
\begin{align*}
|f(x)|&=\left|\int_a^b x(t)dt\right|\\
&\leq(b-a)\max_{t\in[a,b]}|x(t)|\\
&=(b-a)||x||.
\end{align*}
So, $||f||\leq b-a$. On the other hand, let $x_0$ be the constant function $x_0(t)=1$, so that $||x_0||=1$. Then
\begin{align*}
b-a&=\int_a^b dt\\
&=\frac{|f(x_0)|}{||x_0||}\\
&\leq ||f||.
\end{align*}
Hence, we have $||f||=b-a$.
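
Here is a similar numerical sketch of my own (the interval, grid, and test functions are arbitrary choices): approximating $f(x)$ by a trapezoidal sum and $||x||$ by a maximum over a grid, the ratio $|f(x)|/||x||$ stays below $b-a$, and the constant function attains it.

```python
import numpy as np

# Illustrative check: f(x) = int_a^b x(t) dt on C[a,b] with the sup norm.
# The ratio |f(x)|/||x|| never exceeds b - a; x0(t) = 1 attains it.
a, b = 0.0, 2.0
t = np.linspace(a, b, 10001)
dt = t[1] - t[0]

def ratio(x):
    v = x(t)
    integral = np.sum((v[:-1] + v[1:]) / 2) * dt   # trapezoidal approximation of f(x)
    return abs(integral) / np.max(np.abs(v))       # |f(x)| / ||x||

print(ratio(np.sin), ratio(np.exp), ratio(lambda s: np.ones_like(s)), b - a)
```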

Let $X^\ast$ be the set of all linear functionals defined on $X$. Then $X^\ast$ can be made into a vector space. For any $f,g\in X^\ast$ and scalar $\alpha$, define addition $f+g$ and scalar multiplication $\alpha f$ as follows: For each $x\in X$,
\begin{align*}
(f+g)(x)&=f(x)+g(x),\\
(\alpha f)(x)&=\alpha f(x).
\end{align*}
$X^\ast$ is called the dual space of $X$. One may also consider $X^{\ast\ast}=(X^\ast)^\ast$, the dual space of $X^\ast$. Fix $x\in X$. Define a map $g_x: X^\ast\longrightarrow\mathbb{R}$ by
$$g_x(f)=f(x)$$
for each $f\in X^\ast$. For any $f_1,f_2\in X^\ast$,
\begin{align*}
f_1=f_2&\Longrightarrow f_1(x)=f_2(x)\\
&\Longrightarrow g_x(f_1)=g_x(f_2).
\end{align*}
So, $g_x$ is well-defined. Furthermore, $g_x$ is linear. To see this, let $f_1,f_2\in X^\ast$ and let $\alpha,\beta$ be scalars. Then
\begin{align*}
g_x(\alpha f_1+\beta f_2)&=(\alpha f_1+\beta f_2)(x)\\
&=\alpha f_1(x)+\beta f_2(x)\\
&=\alpha g_x(f_1)+\beta g_x(f_2).
\end{align*}
Define a map $C: X\longrightarrow X^{\ast\ast}$ by
$$Cx=g_x$$
for each $x\in X$. Then $C$ is a well-defined linear map. First, let $x=y\in X$. Then for any $f\in X^\ast$, $g_x(f)=f(x)=f(y)=g_y(f)$, so $C(x)=g_x=g_y=C(y)$. Hence, $C$ is well-defined. To show that $C$ is linear, let $x,y\in X$ and let $\alpha,\beta$ be scalars. For any $f\in X^\ast$,
\begin{align*}
g_{\alpha x+\beta y}(f)&=f(\alpha x+\beta y)\\
&=\alpha f(x)+\beta f(y)\ (f\ \mbox{is linear})\\
&=\alpha g_x(f)+\beta g_y(f)\\
&=(\alpha g_x+\beta g_y)(f).
\end{align*}
Thus,
$$C(\alpha x+\beta y)=g_{\alpha x+\beta y}=\alpha g_x+\beta g_y=\alpha Cx+\beta Cy.$$
If $X$ is an inner product space or $X$ is a finite dimensional vector space, $C$ becomes one-to-one. Let us assume that $X$ is equipped with an inner product $\langle\ ,\ \rangle$. Then for any fixed $a\in X$, the map $f_a: X\longrightarrow\mathbb{R}$ defined by
$$f_a(x)=\langle a,x\rangle\ \mbox{for each}\ x\in X$$
is a linear functional. Let $Cx=Cy$. Then $g_{x-y}=0$ and so $g_{x-y}(f_{x-y})=||x-y||^2=0$, hence $x=y$. Therefore, $C$ is one-to-one. We will discuss the case when $X$ is finite dimensional in the next lecture. If $C$ is one-to-one, $X$ is embedded into $X^{\ast\ast}$. We call $C:X\hookrightarrow X^{\ast\ast}$ the canonical embedding. (Here, the notation $\hookrightarrow$ means an embedding or a monomorphism.) If in addition $C$ is onto, i.e. $X\stackrel{C}{\cong}X^{\ast\ast}$, then $X$ is said to be algebraically reflexive. If $X$ is finite dimensional, then $X$ is algebraically reflexive. This will be discussed in the next lecture as well.
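
The canonical map $C$ is nothing but evaluation: $Cx$ is the functional-of-functionals $g_x(f)=f(x)$. The following toy Python sketch (my own illustration, using the fact that every linear functional on $\mathbb{R}^3$ has the form $f_a(x)=\langle a,x\rangle$) makes this concrete:

```python
import numpy as np

# Toy illustration: functionals on R^3 as Python functions, and the canonical
# map C sending a vector x to the evaluation functional g_x with g_x(f) = f(x).
def f_a(a):
    return lambda x: float(np.dot(a, x))   # the linear functional f_a(x) = <a, x>

def C(x):
    return lambda f: f(x)                   # g_x : X* -> R,  g_x(f) = f(x)

x = np.array([1.0, 2.0, 3.0])
f = f_a(np.array([0.0, 1.0, -1.0]))
print(C(x)(f), f(x))   # both print f(x) = -1.0
```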

Functional Analysis 9: Bounded and Continuous Linear Operators

Definition. Let $X,Y$ be normed spaces and $T:\mathcal{D}(T)\longrightarrow Y$ be a linear operator where $\mathcal{D}(T)\subset X$. $T$ is said to be bounded if there exists $c\in\mathbb{R}$ such that for any $x\in\mathcal{D}(T)$,
$$||Tx||\leq c||x||.$$

Suppose that $x\ne O$. Then
$$\frac{||Tx||}{||x||}\leq c.$$
Let
$$||T||:=\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\
x\ne O\end{array}}\frac{||Tx||}{||x||}.$$
Then $||T||$ is called the norm of the operator $T$. If $\mathcal{D}(T)=\{O\}$ then we define $||T||=0$.

Lemma. Let $T$ be a bounded linear operator. Then

  1. $||T||=\displaystyle\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\||x||=1\end{array}}||Tx||.$
  2. $||\cdot||$ defined on bounded linear operators satisfies (N1)-(N3).

Proof.

  1. \begin{align*}||T||&=\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\x\ne O\end{array}}\frac{||Tx||}{||x||}\\&=\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\x\ne O\end{array}}\left|\left|T\left(\frac{x}{||x||}\right)\right|\right|\\&=\sup_{\begin{array}{c}y\in\mathcal{D}(T)\\||y||=1\end{array}}||Ty||.\end{align*}
  2. \begin{align*}||T||=0&\Longleftrightarrow Tx=0,\ \forall x\in\mathcal{D}(T)\\&\Longleftrightarrow T=0,\end{align*} so (N1) holds. (N2) follows from $||(\alpha T)x||=|\alpha|\,||Tx||$. For (N3), since $$\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\||x||=1\end{array}}||(T_1+T_2)x||\leq \sup_{\begin{array}{c}x\in\mathcal{D}(T)\\||x||=1\end{array}}||T_1x||+\sup_{\begin{array}{c}x\in\mathcal{D}(T)\\||x||=1\end{array}}||T_2x||,$$ $$||T_1+T_2||\leq ||T_1||+||T_2||.$$

Examples.

  1. The identity operator $I:X\longrightarrow X$ with $X\ne\{O\}$ is a bounded linear operator with $||I||=1$.
  2. Zero operator $O: X\longrightarrow Y$ is a bounded linear operator with $||O||=0$.
  3. Let $X$ be the normed space of all polynomials on $[0,1]$ with $||x||=\max_{t\in[0,1]}|x(t)|$. The differentiation operator $$T: X\longrightarrow X;\ Tx(t)=x'(t)$$ is not a bounded operator. To see this, let $x_n(t)=t^n$, $n\in\mathbb{N}$. Then $||x_n||=1$ for all $n\in\mathbb{N}$, while $Tx_n(t)=nt^{n-1}$ and $||Tx_n||=n$ for all $n\in\mathbb{N}$. So, $\frac{||Tx_n||}{||x_n||}=n$ is unbounded and hence $T$ is not bounded. (A numerical illustration is sketched after this list.)
  4. The integral operator $$T:\mathcal{C}[0,1]\longrightarrow\mathcal{C}[0,1];\ Tx(t)=\int_0^1\kappa(t,\tau)x(\tau)d\tau$$ is a bounded linear operator. The function $\kappa(t,\tau)$ is a continuous function on $[0,1]\times[0,1]$ called the kernel of $T$. \begin{align*}||Tx||&=\max_{t\in[0,1]}\left|\int_0^1\kappa(t,\tau)x(\tau)d\tau\right|\\&\leq\max_{t\in[0,1]}\int_0^1|\kappa(t,\tau)||x(\tau)|d\tau\\&\leq k_0||x||,\end{align*}where $k_0=\displaystyle\max_{(t,\tau)\in[0,1]\times[0,1]}|\kappa(t,\tau)|$.
  5. Let $A=(\alpha_{jk})$ be an $r\times n$ matrix of real entries. The linear map $T:\mathbb{R}^n\longrightarrow\mathbb{R}^r$ given by $Tx=Ax$ for each $x\in\mathbb{R}^n$ is bounded. To see this, let $x\in\mathbb{R}^n$ and write $x=(\xi_j)$. Then $||x||=\sqrt{\displaystyle\sum_{m=1}^n\xi_m^2}$. By the Cauchy-Schwarz inequality,\begin{align*}||Tx||^2&=\sum_{j=1}^r\left[\sum_{k=1}^n\alpha_{jk}\xi_k\right]^2\\&\leq\sum_{j=1}^r\left[\left(\sum_{k=1}^n\alpha_{jk}^2\right)^\frac{1}{2}\left(\sum_{m=1}^n\xi_m^2\right)^\frac{1}{2}\right]^2\\&=||x||^2\sum_{j=1}^r\sum_{k=1}^n\alpha_{jk}^2.\end{align*}By setting $c^2=\displaystyle\sum_{j=1}^r\sum_{k=1}^n\alpha_{jk}^2$, we obtain$$||Tx||^2\leq c^2||x||^2.$$
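
The following short numerical sketch (my own illustration; the grid size is an arbitrary choice) shows the unboundedness of the differentiation operator in item 3 above: the ratio $||Tx_n||/||x_n||$ grows like $n$.

```python
import numpy as np

# Illustrative sketch: differentiation on polynomials over [0,1] with the sup norm
# is unbounded, since ||T x_n|| / ||x_n|| = n for x_n(t) = t^n.
t = np.linspace(0.0, 1.0, 100001)

for n in (1, 5, 25, 125):
    x_n = t**n                      # ||x_n|| = max |t^n| = 1
    Tx_n = n * t**(n - 1)           # derivative n t^(n-1), so ||T x_n|| = n
    print(n, np.max(np.abs(Tx_n)) / np.max(np.abs(x_n)))   # grows without bound
```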

In general, if a normed space $X$ is finite dimensional, then every linear operator on $X$ is bounded. Before we discuss this, we first introduce the following lemma without proof.

Lemma. Let $\{x_1,\cdots,x_n\}$ be a linearly independent set of vectors in a normed space $X$. Then there exists a number $c>0$ such that for any scalars $\alpha_1,\cdots,\alpha_n$, we have the inequality
$$||\alpha_1x_1+\cdots+\alpha_nx_n||\geq c(|\alpha_1|+\cdots+|\alpha_n|).$$

Theorem. If a normed space $X$ is finite dimensional, then every linear operator on $X$ is bounded.

Proof. Let $\dim X=n$ and $\{e_1,\cdots,e_n\}$ be a basis for $X$. Let $x=\displaystyle\sum_{j=1}^n\xi_je_j\in X$. Then
\begin{align*}
||Tx||&=||\sum_{j=1}^n\xi_jTe_j||\\
&\leq\sum_{j=1}^n|\xi_j|\,||Te_j||\\
&\leq\max_{k=1,\cdots,n}||Te_k||\sum_{j=1}^n|\xi_j|.
\end{align*}
By Lemma, there exists a number $c>0$ such that
$$||x||=||\xi_1e_1+\cdots+\xi_ne_n||\geq c(|\xi_1|+\cdots+|\xi_n|)=c\sum_{j=1}^n|\xi_j|.$$
So, $\displaystyle\sum_{j=1}^n|\xi_j|\leq\frac{1}{c}||x||$ and hence
$$||Tx||\leq M||x||,$$ where
$M=\frac{1}{c}\max_{k=1,\cdots,n}||Te_k||$.

What is really nice about linear operators from a normed space into a normed space is that a linear operator being bounded is equivalent to it being continuous.

Theorem. Let $X,Y$ be normed spaces and $T:\mathcal{D}(T)\subset X\longrightarrow Y$ a linear operator. Then

  1. $T$ is continuous if and only if $T$ is bounded.
  2. If $T$ is continuous at a single point, it is continuous.

Proof.

  1. If $T=O$, then we are done. Suppose that $T\ne O$. Then $||T||\ne 0$. Assume that $T$ is bounded and $x_0\in\mathcal{D}(T)$. Let $\epsilon>0$ be given. Choose $\delta=\frac{\epsilon}{||T||}$. Then for any $x\in\mathcal{D}(T)$ such that $||x-x_0||<\delta$, $$||Tx-Tx_0||=||T(x-x_0)||\leq ||T||||x-x_0||<\epsilon.$$ Conversely, assume that $T$ is continuous at $x_0\in\mathcal{D}(T)$. Then given $\epsilon>0$ there exists $\delta>0$ such that $||Tx-Tx_0||<\epsilon$ whenever $||x-x_0||\leq\delta$. Take $y\ne 0\in\mathcal{D}(T)$ and set $$x=x_0+\frac{\delta}{||y||}y.$$ Then $x-x_0=\frac{\delta}{||y||}y$ and $||x-x_0||=\delta$. So,\begin{align*}||Tx-Tx_0||&=||T(x-x_0)||\\&=\left|\left|T\left(\frac{\delta}{||y||}y\right)\right|\right|\\&=\frac{\delta}{||y||}||Ty||\\&<\epsilon.\end{align*}Hence, for any $y\in\mathcal{D}(T)$, $||Ty||\leq\frac{\epsilon}{\delta}||y||$ i.e. $T$ is bounded.
  2. In the proof of part (a), we have shown that if $T$ is continuous at a point, it is bounded. If $T$ is bounded, then it is continuous by part (a).

Corollary. Let $T$ be a bounded linear operator. Then

  1. If $x_n\to x$ then $Tx_n\to Tx$.
  2. $\mathcal{N}(T)$ is closed.

Proof.

  1. If $T$ is bounded, it is continuous and so the statement is true.
  2. Let $x\in\overline{\mathcal{N}(T)}$. Then there exists a sequence $(x_n)\subset\mathcal{N}(T)$ such that $x_n\to x$. Since $Tx_n=0$ for each $n=1,2,\cdots$, $Tx=0$. Hence, $x\in\mathcal{N}(T)$.

Theorem. Let $X$ be a normed space and $Y$ a Banach space. Let $T:\mathcal{D}(T)\subset X\longrightarrow Y$ be a bounded linear operator. Then $T$ has an extension $\tilde T:\overline{\mathcal{D}(T)}\longrightarrow Y$ where $\tilde T$ is a bounded linear operator of norm $||\tilde T||=||T||$.

Proof. Let $x\in\overline{\mathcal{D}(T)}$. Then there exists a sequence $(x_n)\subset\mathcal{D}(T)$ such that $x_n\to x$. Since $T$ is bounded and linear,
\begin{align*}
||Tx_m-Tx_n||&=||T(x_m-x_n)||\\
&\leq||T||||x_m-x_n||,
\end{align*}
for all $m,n\in\mathbb{N}$. Since $(x_n)$ is convergent, it is Cauchy so given $\epsilon>0$ there exists a positive integer $N$ such that for all $m,n\geq N$, $||x_m-x_n||<\frac{\epsilon}{||T||}$. Hence, for all $m,n>N$,
\begin{align*}
||Tx_m-Tx_n||&\leq ||T||||x_m-x_n||\\
&<\epsilon.
\end{align*}
That is, $(Tx_n)$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, there exists $y\in Y$ such that $Tx_n\to y$. Define $\tilde T:\overline{\mathcal{D}(T)}\longrightarrow Y$ by $\tilde Tx=y$. In order for $\tilde T$ to be well-defined, its definition should not depend on the choice of $(x_n)$. Suppose that there is a sequence $(z_n)\subset\mathcal{D}(T)$ such that $z_n\to x$. Then $x_n-z_n\to 0$. Since $T$ is bounded, it is continuous, so $T(x_n-z_n)\to 0$. This means that $\displaystyle\lim_{n\to\infty}Tz_n=\lim_{n\to\infty}Tx_n=y$. $\tilde T$ is linear and $\tilde T|_{\mathcal{D}(T)}=T$. To show that $\tilde T$ is bounded, let $x\in\overline{\mathcal{D}(T)}$. Then there exists a sequence $(x_n)\subset\mathcal{D}(T)$ such that $x_n\to x$ as before. Since $T$ is bounded, for each $n=1,2,\cdots$,
$$||Tx_n||\leq ||T||||x_n||.$$ Since the norm $x\longmapsto||x||$ is continuous, as $n\to\infty$ we obtain
$$||\tilde Tx||\leq ||T||||x||.$$ Hence, $\tilde T$ is bounded and $||\tilde T||\leq ||T||$. On the other hand, since $\tilde T$ is an extension of $T$, $||T||\leq||\tilde T||$. Therefore, $||\tilde T||=||T||$.

Functional Analysis 8: Linear Operators

From here on, a map from a vector space into another vector space will be called an operator.

Definition. A linear operator $T$ is an operator such that

  1. $T(x+y)=Tx+Ty$ for any two vectors $x$ and $y$.
  2. $T(\alpha x)=\alpha Tx$ for any vector $x$ and a scalar $\alpha$.

Proposition. An operator $T$ is a linear operator if and only if
$$T(\alpha x+\beta y)=\alpha Tx+\beta Ty$$
for any vectors $x,y$ and scalars $\alpha,\beta$.

Denote by $\mathcal{D}(T)$, $\mathcal{R}(T)$ and $\mathcal{N}(T)$, the domain, the range and the null space, respectively, of a linear operator $T$. The null space $\mathcal{N}(T)$ is the kernel of $T$ i.e.
$$\mathcal{N}(T)=T^{-1}(0)=\{x\in \mathcal{D}(T): Tx=0\}.$$
Since the term kernel is reserved for something else in functional analysis, we call it the null space of $T$.

Example. [Differentiation] Let $X$ be the space of all polynomials on $[a,b]$. Define an operator $T: X\longrightarrow X$ by
$$Tx(t)=x'(t)$$
for each $x(t)\in X$. Then $T$ is linear and onto.

Example. [Integration] Recall that $\mathcal{C}[a,b]$ denotes the space of all continuous functions on the closed interval $[a,b]$. Define an operator $T:\mathcal{C}[a,b]\longrightarrow\mathcal{C}[a,b]$ by
$$Tx(t)=\int_a^tx(\tau)d\tau$$
for each $x(t)\in\mathcal{C}[a,b]$. Then $T$ is linear.

Example. Let $A=(a_{jk})$ be an $r\times n$ matrix of real entries. Define an operator $T: \mathbb{R}^n\longrightarrow\mathbb{R}^r$ by
$$Tx=Ax=(a_{jk})(\xi_l)=\left(\sum_{k=1}^na_{jk}\xi_k\right)$$
for each $n\times 1$ column vector $x=(\xi_l)\in\mathbb{R}^n$. Then $T$ is linear as seen in linear algebra.

Theorem. Let $T$ be a linear operator. Then

  1. The range $\mathcal{R}(T)$ is a vector space.
  2. If $\dim\mathcal{D}(T)=n<\infty$, then $\dim\mathcal{R}(T)\leq n$.
  3. The null space $\mathcal{N}(T)$ is a vector space.

Proof. Parts 1 and 3 are straightforward. We prove part 2. Choose $y_1,\cdots,y_{n+1}\in\mathcal{R}(T)$. Then $y_1=Tx_1,\cdots,y_{n+1}=Tx_{n+1}$ for some $x_1,\cdots,x_{n+1}\in\mathcal{D}(T)$. Since $\dim\mathcal{D}(T)=n$, $x_1,\cdots,x_{n+1}$ are linearly dependent. So, there exist scalars $\alpha_1,\cdots,\alpha_{n+1}$ not all equal to 0 such that $\alpha_1x_1+\cdots+\alpha_{n+1}x_{n+1}=0$. Since $T(\alpha_1x_1+\cdots+\alpha_{n+1}x_{n+1})=\alpha_1y_1+\cdots+\alpha_{n+1}y_{n+1}=0$, the vectors $y_1,\cdots,y_{n+1}$ are linearly dependent. Hence, $\mathcal{R}(T)$ has no linearly independent subset of $n+1$ or more elements.

Theorem. $T$ is one-to-one if and only if $\mathcal{N}(T)=\{O\}$.

Proof. Suppose that $T$ is one-to-one. Let $a\in\mathcal{N}(T)$. Then $Ta=O=TO$. Since $T$ is one-to-one, $a=O$ and hence $\mathcal{N}(T)=\{O\}$. Conversely, suppose that $\mathcal{N}(T)=\{O\}$. Let $Ta=Tb$. Then by linearity $T(a-b)=O$ and so $a-b\in\mathcal{N}(T)=\{O\}\Longrightarrow a=b$. Thus, $T$ is one-to-one.
Theorem.

  1. $T^{-1}: \mathcal{R}(T)\longrightarrow\mathcal{D}(T)$ exists if and only if $\mathcal{N}(T)=\{O\}$ if and only if $T$ is one-to-one.
  2. If $T^{-1}$ exists, it is linear.
  3. If $\dim\mathcal{D}(T)=n<\infty$ and $T^{-1}$ exists, then $\dim\mathcal{R}(T)=\dim\mathcal{D}(T)$.

Proof. Part 1 is trivial. Part 3 follows from part 2 of the previous theorem. Let us prove part 2. Let $y_1,y_2\in\mathcal{R}(T)$. Then there exist $x_1,x_2\in\mathcal{D}(T)$ such that $y_1=Tx_1$, $y_2=Tx_2$. Now,
\begin{align*}
\alpha y_1+\beta y_2&=\alpha Tx_1+\beta Tx_2\\
&=T(\alpha x_1+\beta x_2).
\end{align*}
So,
\begin{align*}
T^{-1}(\alpha y_1+\beta y_2)&=T^{-1}(T(\alpha x_1+\beta x_2))\\
&=\alpha x_1+\beta x_2\\
&=\alpha T^{-1}y_1+\beta T^{-1}y_2.
\end{align*}

Functional Analysis 6: Normed Spaces and Banach Spaces

From here on, I assume a background of undergraduate level Linear Algebra. Readers should be familiar with notions such as vector spaces, subspaces, linear combination, linear dependence and linear independence, basis, and dimension.

In this lecture, we begin with the notion of a normed space. A normed space is a vector space with a norm defined. So what is a norm? A norm $||\cdot||$ is a real-valued function $||\cdot||: V\longrightarrow\mathbb{R}^+\cup\{0\}$ such that for $x,y\in V$ and for $\alpha$ a scalar,

(N1) $||x||=0\Longleftrightarrow x=O$.

(N2) $||\alpha x||=|\alpha|||x||$.

(N3) $||x+y||\leq ||x||+||y||$. (Triangle Inequality.)

A norm on $X$ defines a metric $d$ on $X$
\begin{equation}
\label{eq:metric}
d(x,y)=||x-y||.
\end{equation}
\eqref{eq:metric} is called a metric induced by the norm $||\cdot||$. So, a normed space is a metric space but the converse need not be true.

A complete normed space is called a Banach space.

Example. $\mathbb{R}^n$ and $\mathbb{C}^n$ with the Euclidean norm
$$||x||=\left(\sum_{j=1}^n|\xi_j|^2\right)^{\frac{1}{2}}$$ are Banach spaces.

Example. $\ell^p$ with the norm
$$||x||=\left(\sum_{j=1}^\infty|\xi_j|^p\right)^{\frac{1}{p}}$$ is a Banach space.

Example. $\ell^\infty$ with the norm
$$||x||=\sup_{j\in\mathbb{N}}|\xi_j|$$ is a Banach space.

Example. $\mathcal{C}[a,b]$ with the norm
$$||x||=\max_{t\in[a,b]}|x(t)|$$ is a Banach space.

What follows next is an example of a normed space which is not complete. $\mathcal{C}[a,b]$, the vector space of all continuous real-valued functions on $[a,b]$ forms a normed space with the norm defined by
\begin{equation}
\label{eq:intnorm}
||x||=\left(\int_a^b x(t)^2dt\right)^{\frac{1}{2}}.
\end{equation}
Let $[a,b]=[0,1]$. For each $n=1,2,\cdots$, let $a_n=\frac{1}{2}+\frac{1}{n}$. Define a sequence $(x_n)$ in $\mathcal{C}[0,1]$ by
$$x_n(t)=\left\{\begin{array}{ccc}
0 & \mbox{if} & t\in\left[0,\frac{1}{2}\right],\\
nt-\frac{n}{2} & \mbox{if} & t\in\left[\frac{1}{2},a_n\right],\\
1 & \mbox{if} & t\in[a_n,1].
\end{array}\right.$$

Let us assume that $n>m$. Then
\begin{align*}
||x_m-x_n||^2&=\int_0^1(x_m(t)-x_n(t))^2dt\\
&=\int_0^{\frac{1}{2}}(x_m(t)-x_n(t))^2dt+\int_{\frac{1}{2}}^{a_n}(x_m(t)-x_n(t))^2dt\\&+\int_{a_n}^{a_m}(x_m(t)-x_n(t))^2dt+\int_{a_m}^1(x_m(t)-x_n(t))^2dt\\
&=\int_{\frac{1}{2}}^{\frac{1}{2}+\frac{1}{n}}\left(nt-\frac{n}{2}-mt+\frac{m}{2}\right)^2dt+\int_{\frac{1}{2}+\frac{1}{n}}^{\frac{1}{2}+\frac{1}{m}}\left(1-mt+\frac{m}{2}\right)^2dt\\
&=\frac{(n-m)^2}{3mn^2}\\
&<\frac{1}{3m}-\frac{1}{3n}.
\end{align*}
Given $\epsilon>0$, choose a positive integer $N$ so that $N>\frac{1}{3\epsilon^2}$. Then for all $m,n\geq N$,
\begin{align*}
||x_n-x_m||^2&<\frac{1}{3m}-\frac{1}{3n}\\
&<\frac{1}{3m}\\
&\leq\frac{1}{3N}\\
&<\epsilon^2.
\end{align*}
Therefore, $(x_n)$ is a Cauchy sequence in $\mathcal{C}[0,1]$. For any $x(t)\in\mathcal{C}[0,1]$,
\begin{align*}
||x_n-x||^2&=\int_0^1(x_n(t)-x(t))^2dt\\
&=\int_0^{\frac{1}{2}}x(t)^2dt+\int_{\frac{1}{2}}^{a_n}(x_n(t)-x(t))^2dt+\int_{a_n}^1(1-x(t))^2dt.
\end{align*}
Suppose that $x_n\rightarrow x$ as $n\to\infty$. Then each of the three integrals must tend to $0$; since the integrands are continuous and nonnegative, $x(t)$ must satisfy
$$x(t)=\left\{\begin{array}{ccc}
0 & \mbox{if} & t\in\left[0,\frac{1}{2}\right),\\
1 & \mbox{if} & t\in\left(\frac{1}{2},1\right].
\end{array}\right.$$
But this is impossible since $x(t)$ is continuous. Hence, $(x_n)$ is not convergent in $\mathcal{C}[0,1]$, and so $\mathcal{C}[0,1]$ with the norm \eqref{eq:intnorm} is not complete.
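
A short numerical sketch (my own illustration; the grid is an arbitrary discretization) shows this behaviour: the distances $||x_m-x_n||$ shrink as predicted, even though the pointwise limit is the discontinuous step function.

```python
import numpy as np

# Illustrative sketch: the sequence x_n above is Cauchy in the integral norm,
# yet its pointwise limit is a discontinuous step function.
t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]

def x(n):
    return np.clip(n * (t - 0.5), 0.0, 1.0)    # equals 0, n(t - 1/2), 1 on the three pieces

def dist(u, v):
    return np.sqrt(np.sum((u - v) ** 2) * dt)  # approximates the integral norm above

print(dist(x(10), x(20)), dist(x(100), x(200)), dist(x(1000), x(2000)))  # -> 0
```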

In Functional Analysis 5, we studied the completion of metric spaces. Since a normed space is a metric space, we also have completion of normed spaces. The completion of $\mathcal{C}[a,b]$ with the norm \eqref{eq:intnorm} is denoted by $L^2[a,b]$. More generally, for any $p\geq 1$, the Banach space $L^p[a,b]$ is the completion of the normed space $\mathcal{C}[a,b]$ with the norm
\begin{equation}
||x||_p=\left(\int_a^b|x(t)|^pdt\right)^{\frac{1}{p}}.
\end{equation}

Lemma. A metric $d$ induced by a norm on a normed space $X$ satisfies

  1. $d(x+a,y+a)=d(x,y)$, $a,x,y\in X$. This means that $d$ is translation invariant.
  2. $d(\alpha x,\alpha y)=|\alpha|d(x,y)$, $\alpha$, a scalar.

Proof.
\begin{align*}
d(x+a,y+a)&=||x+a-(y+a)||=||x-y||=d(x,y),\\
d(\alpha x,\alpha y)&=||\alpha x-\alpha y||=|\alpha|||x-y||=|\alpha|d(x,y).
\end{align*}

We know that a norm on a vector space $V$ defines a metric on $V$. Can every metric on a vector space $V$ be obtained from a norm? The answer is negative. Let $V$ be the set of all bounded or unbounded sequences of complex numbers. Then $V$ is a vector space. The metric $d$ on $V$ defined by
$$d(x,y)=\sum_{j=1}^\infty\frac{1}{2^j}\frac{|\xi_j-\eta_j|}{1+|\xi_j-\eta_j|}$$ is translation invariant, but it does not satisfy $d(\alpha x,\alpha y)=|\alpha|d(x,y)$, so it cannot be obtained from a norm.
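
A quick numerical sketch (my own illustration, truncating the series at 50 terms) shows the failure of homogeneity:

```python
import numpy as np

# Illustrative sketch: this metric fails d(a*x, a*y) = |a| d(x, y), which any
# norm-induced metric must satisfy (property 2 of the Lemma above).
def d(x, y):
    diff = np.abs(x - y)
    j = np.arange(1, len(x) + 1)
    return np.sum(diff / (1.0 + diff) / 2.0**j)

x = np.ones(50)
y = np.zeros(50)
print(d(2 * x, 2 * y), 2 * d(x, y))   # about 0.667 vs 1.0: not equal
```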

Functional Analysis 5: Completion of Metric Spaces

This lecture will conclude our discussion on metric spaces with completion of metric spaces.

Definition. Let $X=(X,d)$ and $\tilde X=(\tilde X,\tilde d)$ be two metric spaces. A mapping $T: X\longrightarrow\tilde X$ is said to be an isometry if $T$ preserves distances, i.e.
$$\forall x,y\in X,\ \tilde d(Tx,Ty)=d(x,y).$$ The space $X$ is said to be isometric with $\tilde X$ if there exists a bijective isometry of $X$ onto $\tilde X$.

Theorem [Completion] For a metric space $(X,d)$ there exists a complete metric space $\hat X=(\hat X,\hat d)$ which has a subspace $W$ that is isometric with $X$ and is dense in $\hat X$. $\hat X$ is called the completion of $X$ and it is unique up to isometries.

Proof. This will be a lengthy proof and I have divided it into steps.

Step 1. Construction of $\hat X=(\hat X,\hat d)$.

Let $(x_n)$ and $(x_n')$ be Cauchy sequences in $X$. We say $(x_n)$ is equivalent to $(x_n')$, and write $(x_n)\sim (x_n')$, if
$$\lim_{n\to\infty}d(x_n,x_n')=0.$$ $\sim$ is actually an equivalence relation on the set of all Cauchy sequences of $X$. Clearly $\sim$ is reflexive and symmetric. Let us show that $\sim$ is also transitive. Let $(x_n)\sim (x_n')$ and $(x_n')\sim (x_n^{\prime\prime})$. Then
$$\lim_{n\to\infty}d(x_n,x_n')=0\ \mbox{and}\ \lim_{n\to\infty}d(x_n',x_n^{\prime\prime})=0.$$
\begin{align*}
\lim_{n\to\infty}d(x_n,x_n^{\prime\prime})&\leq\lim_{n\to\infty}d(x_n,x_n')+\lim_{n\to\infty}d(x_n',x_n^{\prime\prime})\\
&=0,
\end{align*}
so $(x_n)\sim(x_n^{\prime\prime})$.
Let $\hat X$ be the set of all equivalence classes $\hat x,\hat y,\cdots$ of Cauchy sequences. Define
$$\hat d(\hat x,\hat y)=\lim_{n\to\infty}d(x_n,y_n),$$
where $x_n\in\hat x$ and $y_n\in\hat y$. We claim that $\hat d$ is a metric on $\hat X$. It suffices to show that $\hat d$ is well-defined. The conditions (M1)-(M3) hold due to the fact that $d$ is a metric. First we show that $\hat d(\hat x,\hat y)$ exists. It follows from (M3) that
$$d(x_n,y_n)\leq d(x_n,x_m)+d(x_m,y_m)+d(y_m,y_n)$$ and so we have
$$d(x_n,y_n)-d(x_m,y_m)\leq d(x_n,x_m)+d(y_m,y_n).$$ Similarly, we obtain
$$d(x_m,y_m)-d(x_n,y_n)\leq d(x_n,x_m)+d(y_m,y_n).$$ Hence,
$$|d(x_n,y_n)-d(x_m,y_m)|\leq d(x_n,x_m)+d(y_m,y_n)\rightarrow 0$$ as $n,m\rightarrow\infty$, i.e.
$$\lim_{n,m\to\infty}|d(x_n,y_n)-d(x_m,y_m)|=0.$$
Since $\mathbb{R}$ is complete, $\displaystyle\lim_{n\to\infty}d(x_n,y_n)$ exists. Now we show that the limit is independent of the choice of representatives $(x_n)$ and $(y_n)$. If $(x_n)\sim (x_n')$ and $(y_n)\sim(y_n')$, then
$$|d(x_n,y_n)-d(x_n',y_n')|\leq d(x_n,x_n')+d(y_n,y_n')\rightarrow 0$$ as $n\to\infty$.

Step 2. Construction of an isometry $T: X\longrightarrow W\subset \hat X$.

For each $b\in X$, let $\hat b$ be the equivalence class of the Cauchy sequence $(b,b,b,\cdots)$. Then $T(b):=\hat b\in\hat X$. Now,
$$\hat d(Tb,Tc)=\hat d(\hat b,\hat c)=d(b,c).$$ So, $T$ is an isometry. An isometry is automatically injective. $T$ is onto $W$ since $W:=T(X)$. Let us show that $W$ is dense in $\hat X$. Let $\hat x\in \hat X$ and let $(x_n)\in\hat x$. Since $(x_n)$ is Cauchy, given $\epsilon>0$ $\exists N$ such that $d(x_n,x_N)<\frac{\epsilon}{2}$ $\forall n\geq N$. Let $(x_N,x_N,\cdots)\in\hat x_N$. Then $\hat x_N\in W$.
\begin{align*}
\hat d(\hat x,\hat x_N)=\lim_{n\to\infty}d(x_n,x_N)\leq\frac{\epsilon}{2}<\epsilon&\Longrightarrow \hat x_N\in B(\hat x,\epsilon)\\
&\Longrightarrow B(\hat x,\epsilon)\cap W\ne\emptyset.
\end{align*}
Hence, $\bar W=\hat X$ i.e. $W$ is dense in $\hat X$.

Step 3. Completeness of $\hat X$.

Let $(\hat x_n)$ be any Cauchy sequence in $\hat X$. Since $W$ is dense in $\hat X$, $\forall \hat x_n$, $\exists\hat z_n\in W$ such that $\hat d(\hat x_n,\hat z_n)<\frac{1}{n}$.
\begin{align*}
\hat d(\hat z_m,\hat z_n)&\leq \hat d(\hat z_n,\hat x_m)+\hat d(\hat x_m,\hat x_n)+\hat d(\hat x_n,\hat z_n)\\
&<\frac{1}{m}+\hat d(\hat x_m,\hat x_n)+\frac{1}{n}.
\end{align*}
Given $\epsilon>0$, by the Archimedean property $\exists$ a positive integer $N_1$ such that $N_1>\frac{3}{\epsilon}$, i.e. $\frac{1}{N_1}<\frac{\epsilon}{3}$. Since $(\hat x_n)$ is a Cauchy sequence, $\exists$ a positive integer $N_2$ such that $\hat d(\hat x_m,\hat x_n)<\frac{\epsilon}{3}$ $\forall m,n\geq N_2$. Let $N=\max\{N_1,N_2\}$. Then $\forall m,n\geq N$, $\hat d(\hat z_m,\hat z_n)<\epsilon$ i.e. $(\hat z_m)$ is Cauchy. Since $T: X\longrightarrow W$ is an isometry and $\hat z_m\in W$, the sequence $(z_m)$, where $z_m=T^{-1}\hat z_m$, is Cauchy in $X$. Let $\hat x\in\hat X$ be the class to which $(z_m)$ belongs. We show that $\hat x$ is the limit of $(\hat x_n)$. For each $n=1,2,\cdots$,
\begin{align*}
\hat d(\hat x_n,\hat x)&\leq \hat d(\hat x_n,\hat z_n)+\hat d(\hat z_n,\hat x)\\
&<\frac{1}{n}+\hat d(\hat z_n,\hat x)\\
&=\frac{1}{n}+\lim_{m\to\infty}d(z_n,z_m)
\end{align*} since $(z_m)\in\hat x$ and $(z_n,z_n,\cdots)\in\hat z_n\in W$. Since $(z_m)$ is Cauchy in $X$, $\displaystyle\lim_{m\to\infty}d(z_n,z_m)$ can be made arbitrarily small by taking $n$ large. This implies that $\displaystyle\lim_{n\to\infty}\hat d(\hat x_n,\hat x)=0$ i.e. the Cauchy sequence $(\hat x_n)$ in $\hat X$ has the limit $\hat x\in\hat X$. Therefore, $\hat X$ is complete.

Step 4. Uniqueness of $\hat X$ up to isometries.

Suppose that $(\tilde X,\tilde d)$ is another completion of $X$ i.e. it is a complete metric space with a subspace $\tilde W$ dense in $\tilde X$ and isometric with $X$. We show that $\hat X$ is isometric with $\tilde X$. Say $X$ is isometric with $W$ and $\tilde W$ via isometries $T$ and $\tilde T$, respectively. Then $W$ is isometric with $\tilde W$ via the isometry $\rho=\tilde T\circ T^{-1}$.

$$\begin{array}{ccc}
& & W\\
& \stackrel{T}{\nearrow} &\downarrow\rho\\
X & \stackrel{\tilde T}{\longrightarrow} & \tilde{W}
\end{array}$$

Let $\hat x\in\hat X$. Then $\exists$ a sequence $(\hat x_n)$ in $W$ such that $\displaystyle\lim_{n\to\infty}\hat x_n=\hat x$. $(\hat x_n)$ is a Cauchy sequence and $\rho$ is an isometry, so $(\tilde x_n)$, where $\tilde x_n:=\rho\hat x_n$, is a Cauchy sequence in $\tilde W\subset \tilde X$. Since $\tilde X$ is complete, $\exists\tilde x\in\tilde X$ such that $\displaystyle\lim_{n\to\infty}\tilde x_n=\tilde x$. Define a mapping $\psi:\hat X\longrightarrow\tilde X$ by $\psi\hat x=\tilde x$. Then we claim that $\hat X$ is isometric with $\tilde X$ via $\psi$.

Step A. $\psi$ is well-defined.

It suffices to show that $\psi\hat x$ does not depend on the choice of $(\hat x_n)\subset W$ such that $\displaystyle\lim_{n\to\infty}\hat x_n=\hat x$. Let $(\hat x_n')$ be another sequence in $W$ such that $\displaystyle\lim_{n\to\infty}\hat x_n'=\hat x$. Then $(\tilde x_n')$, where $\tilde x_n'=\rho\hat x_n'$, is a Cauchy sequence in $\tilde W$ and so $\exists\tilde x'\in\tilde X$ such that $\displaystyle\lim_{n\to\infty}\tilde x_n'=\tilde x'$. Now,
\begin{align*}
\tilde d(\tilde x,\tilde x')&=\lim_{n\to\infty}\tilde d(\tilde x_n,\tilde x_n')\\
&=\lim_{n\to\infty}\hat d(\hat x_n,\hat x_n')\ (\rho\ \mbox{is an isometry})\\
&=\hat d(\hat x,\hat x)\\
&=0.
\end{align*}
Hence, $\tilde x=\tilde x'$.

Step B. $\psi$ is onto.

Let $\tilde x\in\tilde X$. Then $\exists$ a sequence $(\tilde x_n)$ in $\tilde W$ such that $\displaystyle\lim_{n\to\infty}\tilde x_n=\tilde x$. $(\tilde x_n)$ is Cauchy (since it is a convergent sequence) and $\rho^{-1}$ is an isometry, so the sequence $(\hat x_n)\subset \hat X$, where $\hat x_n=\rho^{-1}\tilde x_n$, is Cauchy. Since $\hat X$ is complete, $\exists\hat x\in\hat X$ such that $\displaystyle\lim_{n\to\infty}\hat x_n=\hat x$. Clearly $\psi\hat x=\tilde x$ and hence $\psi$ is onto.

Step C. $\psi$ is an isometry.

Let $\hat x,\hat y\in\hat X$. Then $\exists$ sequences $(\hat x_n)$, $(\hat y_n)$ in $W$ such that $\displaystyle\lim_{n\to\infty}\hat x_n=\hat x$ and $\displaystyle\lim_{n\to\infty}\hat y_n=\hat y$, respectively.
\begin{align*}
\hat d(\hat x,\hat y)&=\lim_{n\to\infty}\hat d(\hat x_n,\hat y_n)\\
&=\lim_{n\to\infty}\tilde d(\tilde x_n,\tilde y_n)\ (\tilde x_n:=\rho\hat x_n,\ \tilde y_n:=\rho\hat y_n)\\
&=\tilde d(\tilde x,\tilde y)\ (\lim_{n\to\infty}\tilde x_n=\tilde x,\ \lim_{n\to\infty}\tilde y_n=\tilde y)\\
&=\tilde d(\psi\hat x,\psi\hat y).
\end{align*}
Thus, $\psi$ is an isometry.

Remember that an isometry from a metric space into another metric space is automatically one-to-one. Therefore, $\hat X$ is isometric with $\tilde X$ via $\psi$.

Intuitively speaking, the completion of a metric space $X$ can be achieved by adding to $X$ all its limit points. Recall that if $x$ is a limit point of $X$, then there exists a sequence $(x_n)$ in $X$ such that $\displaystyle\lim_{n\to\infty}x_n=x$. This is reminiscent of extending the rational numbers $\mathbb{Q}$ to the real numbers $\mathbb{R}$ (which is complete) by adding to $\mathbb{Q}$ all its limit points (the irrational numbers).

Functional Analysis 4: Convergence, Cauchy Sequence, Completeness

The set $\mathbb{Q}$ of rational numbers is not complete (or not a continuum) since it has gaps or holes. For instance, $\sqrt{2}$ is not in $\mathbb{Q}$. On the other hand, the set $\mathbb{R}$ of real numbers has no gaps or holes, so it is complete (or is a continuum). Let $(x_n)$ be a sequence of real numbers. Suppose that $(x_n)$ converges to a real number $x$. Then by the triangle inequality, for any $m,n\in\mathbb{N}$, we have
$$|x_m-x_n|\leq |x_m-x|+|x-x_n|.$$
Hence, $\displaystyle\lim_{m,n\to\infty}|x_m-x_n|=0$, i.e. $(x_n)$ is a Cauchy sequence. Conversely, Georg Cantor introduced the completeness axiom that every Cauchy sequence of real numbers converges and defined a real number as the limit of a Cauchy sequence of rational numbers. For instance, consider the Cauchy sequence $(x_n)$ defined by
$$x_1=1,\ x_{n+1}=\frac{x_n}{2}+\frac{1}{x_n},\ \forall n\geq 1.$$
If $(x_n)$ converges to a number $x$, then $x$ satisfies $x=\frac{x}{2}+\frac{1}{x}$, i.e. $x^2=2$, so $(x_n)$ converges to $\sqrt{2}$. There is another way to obtain the completeness of $\mathbb{R}$ by Dedekind cuts, though we are not going to delve into that here.
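
A small computational sketch (my own illustration) of this Cauchy sequence of rationals, using exact rational arithmetic:

```python
from fractions import Fraction

# Illustrative sketch: x_1 = 1, x_{n+1} = x_n/2 + 1/x_n stays in Q,
# but converges (in R) to sqrt(2), which is not in Q.
x = Fraction(1)
for n in range(6):
    x = x / 2 + 1 / x
    print(float(x), float(x * x - 2))   # x_n^2 - 2 -> 0
```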

More generally, one can also consider a complete metric space and that is what we are going to study in this lecture.

Definition. A sequence $(x_n)$ in a metric space $(X,d)$ is said to converge or to be convergent to $x\in X$ if
$$\lim_{n\to\infty}d(x_n,x)=0.$$
$x$ is called the limit of $(x_n)$ and we write
$$\lim_{n\to\infty}x_n=x\ \mbox{or}\ x_n\rightarrow x.$$
If $(x_n)$ is not convergent, it is said to be divergent. We can generalize the definition of the convergence of a sequence we learned in calculus in terms of a metric as:

Definition. $\displaystyle\lim_{n\to\infty}d(x_n,x)=0$ if and only if given $\epsilon>0$ $\exists$ a positive integer $N$ s.t. $x_n\in B(x,\epsilon)$ $\forall n\geq N$.

A nonempty subset $M\subset X$ is said to be bounded if
$$\delta(M)=\sup_{x,y\in M}d(x,y)<\infty.$$

Lemma. Let $(X,d)$ be a metric space.

(a) A convergent sequence in $X$ is bounded and its limit is unique.

(b) If $x_n\rightarrow x$ and $y_n\rightarrow y$, then $d(x_n,y_n)\rightarrow d(x,y)$.

Proof. (a) Suppose that $x_n\rightarrow x$. Then one can find a positive integer $N$ such that $d(x_n,x)<1$ $\forall n\geq N$. Let $M=2\max\{d(x_1,x),\cdots,d(x_{N-1},x),1\}$. Then for all $m,n\in\mathbb{N}$,
\begin{align*}
d(x_m,x_n)&\leq d(x_m,x)+d(x,x_n)\ (\mbox{(M3) triangle inequality})\\
&\leq M.
\end{align*}
This means that $\delta((x_n))\leq M<\infty$ i.e. $(x_n)$ is bounded.

Suppose that $x_n\rightarrow x$ and $x_n\rightarrow y$. Then
\begin{align*}
0\leq d(x,y)&\leq d(x,x_n)+d(x_n,y)\\
&\rightarrow 0
\end{align*}
as $n\to\infty$. So, $d(x,y)=0\Rightarrow x=y$ by (M1).

(b) By (M3),
$$d(x_n,y_n)\leq d(x_n,x)+d(x,y)+d(y,y_n)$$
and so we obtain
$$d(x_n,y_n)-d(x,y)\leq d(x_n,x)+d(y,y_n).$$
Similarly, we also obtain the inequality
$$d(x,y)-d(x_n,y_n)\leq d(x,x_n)+d(y_n,y).$$
Hence,
$$0\leq |d(x_n,y_n)-d(x,y)|\leq d(x_n,x)+d(y_n,y)\rightarrow 0$$
as $n\to\infty$.

Definition. A sequence $(x_n)\subset (X,d)$ is said to be Cauchy if given $\epsilon>0$ $\exists$ a positive integer $N$ such that
$$d(x_m,x_n)<\epsilon\ \forall m,n\geq N.$$
The space $X$ is said to be complete if every Cauchy sequence in $X$ converges.

Examples. The real line $\mathbb{R}$ and the complex plane $\mathbb{C}$ are complete.

Theorem. Every convergent sequence is Cauchy.

Proof. Suppose that $x_n\rightarrow x$. Then given $\epsilon>0$ $\exists$ a positive integer $N$ s.t. $d(x_n,x)<\frac{\epsilon}{2}$ for all $n\geq N$. Now, $\forall m,n\geq N$
$$d(x_m,x_n)\leq d(x_m,x)+d(x,x_n)<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.$$
Therefore, $(x_n)$ is Cauchy.

Theorem. Let $M$ be a nonempty subset of a metric space $(X,d)$. Then

(a) $x\in\bar M\Longleftrightarrow \exists$ a sequence $(x_n)\subset M$ such that $x_n\rightarrow x$.

(b) $M$ is closed $\Longleftrightarrow$ given a sequence $(x_n)\subset M$, $x_n\rightarrow x$ implies $x\in M$.

Proof. (a) ($\Longrightarrow$) Since $x\in\bar M$, $\forall n\in\mathbb{N}$, $B\left(x,\frac{1}{n}\right)\cap M\ne\emptyset$, so we may choose $x_n\in B\left(x,\frac{1}{n}\right)\cap M$. Let $\epsilon>0$ be given. Then by the Archimedean property, $\exists$ a positive integer $N$ s.t. $N>\frac{1}{\epsilon}$. Now,
$$n\geq N\Longrightarrow d(x_n,x)<\frac{1}{n}\leq\frac{1}{N}<\epsilon.$$

($\Longleftarrow$) Suppose that $\exists$ a sequence $(x_n)\subset M$ s.t. $x_n\rightarrow x$. Then given $\epsilon>0$ $\exists$ a positive integer $N$ s.t. $x_n\in B(x,\epsilon)$ $\forall n\geq N$. This means that $\forall\epsilon>0$, $B(x,\epsilon)\cap M\ne\emptyset$. So, $x\in\bar M$.

(b) ($\Longrightarrow$) Clear

($\Longleftarrow$) It suffices to show that $\bar M\subset M$. Let $x\in\bar M$. Then $\exists$ a sequence $(x_n)\subset M$ such that $x_n\rightarrow x$. By assumption, $x\in M$.
Theorem. A subspace $M$ of a complete metric space $X$ itself is complete if and only if $M$ is closed in $X$.

Proof. ($\Longrightarrow$) Let $M\subset X$ be complete. Let $(x_n)$ be a sequence in $M$ such that $x_n\rightarrow x\in X$. Then $(x_n)$ is Cauchy. Since $M$ is complete, $(x_n)$ converges to some $y\in M$; by the uniqueness of limits, $y=x$ and hence $x\in M$. This means that $M$ is closed.

($\Longleftarrow$) Suppose that $M\subset X$ is closed. Let $(x_n)$ be a Cauchy sequence in $M\subset X$. Since $X$ is complete, $\exists x\in X$ such that $x_n\rightarrow x$. Since $M$ is closed, $x\in M$. Therefore, $M$ is complete.

Example. In $\mathbb{R}$ with Euclidean metric, the closed intervals $[a,b]$ are complete. $\mathbb{Z}$, the set of integers is also complete by the above theorem since it is closed in $\mathbb{R}$. One can directly see why $\mathbb{Z}$ is complete without quoting the theorem though. Let $(x_n)$ be a Cauchy sequence in $\mathbb{Z}$. Then we see that there exists a positive integer $N$ such that $x_N=x_{N+1}=x_{N+2}=\cdots$. Hence any Cauchy sequence in $\mathbb{Z}$ is a convergent sequence in $\mathbb{Z}$. Therefore, $\mathbb{Z}$ is complete.

Theorem. A mapping $T: X\longrightarrow Y$ is continuous at $x_0\in X$ if and only if $x_n\rightarrow x_0$ implies $Tx_n\rightarrow Tx_0$.

Proof. ($\Longrightarrow$) Suppose that $T$ is continuous at $x_0$ and $x_n\rightarrow x_0$ in $X$. Let $\epsilon>0$ be given. Then $\exists\delta>0$ s.t. whenever $d(x,x_0)<\delta$, $d(Tx,Tx_0)<\epsilon$. Since $x_n\rightarrow x_0$, $\exists$ a positive integer $N$ s.t. $d(x_n,x_0)<\delta$ $\forall n\geq N$. So, $\forall n\geq N$, $d(Tx_n,Tx_0)<\epsilon$. Hence, $Tx_n\rightarrow Tx_0$.

($\Longleftarrow$) Suppose that $T$ is not continuous at $x_0$. Then $\exists\epsilon>0$ s.t. $\forall\delta>0$, $\exists x\ne x_0$ satisfying $d(x,x_0)<\delta$ but $d(Tx,Tx_0)\geq\epsilon$. So, $\forall n=1,2,\cdots$, $\exists x_n\ne x_0$ satisfying $d(x_n,x_0)<\frac{1}{n}$ but $d(Tx_n,Tx_0)\geq\epsilon$. This means that $x_n\rightarrow x_0$ but $Tx_n\not\rightarrow Tx_0$.

Functional Analysis 3: Basic (Metric) Topology

Let $(X,d)$ be a metric space.

Definition. A subset $U\subset X$ is said to be open if $\forall x\in U$ $\exists\epsilon>0$ s.t. $B(x,\epsilon)\subset U$.

If $U\subset X$ is open, then $U$ can be expressed as a union of open balls $B(x,\epsilon)$. Hence, the set of all open balls in $X$, $\mathcal{B}=\{B(x,\epsilon): x\in X,\ \epsilon>0\}$, forms a basis for a topology (the metric topology, i.e. the topology induced by the metric $d$) on $X$. Those who have not studied topology before may simply understand a topology as the set of all open sets in $X$.

Definition. A subset $F\subset X$ is said to be closed if its complement, $F^c=X\setminus F$ is open in $X$.

The following is the definition of a continuous function that you are familiar with from calculus. The definition is written in terms of metrics.

Definition. Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A mapping $T:X\longrightarrow Y$ is said to be continuous at $x_0\in X$ if $\forall\epsilon>0$ $\exists\delta>0$ s.t. $d_Y(Tx,Tx_0)<\epsilon$ whenever $d_X(x,x_0)<\delta$.

$T$ is said to be continuous if it is continuous at every point of $X$.

The above definition can be generalized in terms of open sets as follows.

Theorem. A mapping $T: (X,d_X)\longrightarrow(Y,d_Y)$ is continuous if and only if $\forall$ open set $U$ in $Y$, $T^{-1}U$ is open in $X$.

Proof. (Only if, $\Rightarrow$) Suppose that $T:X\longrightarrow Y$ is continuous. Let $U$ be open in $Y$. Then we show that $T^{-1}U$ is open in $X$. Let $x_0\in T^{-1}U$. Then $Tx_0\in U$. Since $U$ is open in $Y$, $\exists\epsilon>0$ s.t. $B(Tx_0,\epsilon)\subset U$. By the continuity of $T$, for this $\epsilon>0$ $\exists\delta>0$ s.t. whenever $d(x,x_0)<\delta$, $d(Tx,Tx_0)<\epsilon$. This means that
$$TB(x_0,\delta)\subset B(Tx_0,\epsilon)\subset U\Longrightarrow B(x_0,\delta)\subset T^{-1}(TB(x_0,\delta))\subset T^{-1}U.$$ Hence, $T^{-1}U$ is open in $X$.

(If, $\Leftarrow$) Suppose that $\forall$ open set $U$ in $Y$, $T^{-1}U$ is open in $X$. We show that $T$ is continuous. Let $x_0\in X$ and let $\epsilon>0$ be given. Then $B(Tx_0,\epsilon)$ is open in $Y$. So by the assumption, $x_0\in T^{-1}B(Tx_0,\epsilon)$ is open in $X$. This means that $\exists\delta>0$ s.t.
$$B(x_0,\delta)\subset T^{-1}B(Tx_0,\epsilon)\Longrightarrow TB(x_0,\delta)\subset T(T^{-1}B(Tx_0,\epsilon))\subset B(Tx_0,\epsilon).$$ This is equivalent to saying that $\exists\delta>0$ s.t. whenever $d(x,x_0)<\delta$, $d(Tx,Tx_0)<\epsilon$. That is, $T$ is continuous at $x_0$. Since the choice $x_0\in X$ was arbitrary, the proof is complete.

Let $A\subset X$. $x\in X$ is called an accumulation point or a limit point of $A$ if $\forall$ open set $U(x)$ in $X$, $(U(x)-\{x\})\cap A\ne\emptyset$. Here the notation $U(x)$ means an open set containing $x$. The set of all accumulation points of $A$ is denoted by $A'$ and is called the derived set of $A$. $\bar A:=A\cup A'$ is called the closure of $A$. $\bar A$ is the smallest closed set containing $A$.

Theorem. Let $A\subset X$. Then $x\in\bar A$ if and only if $\forall$ open set $U(x)$, $U(x)\cap A\ne\emptyset$.

Definition. $D\subset X$ is said to be dense if $\bar D=X$. This means that $\forall$ open set $U$ in $X$, $U\cap D\ne\emptyset$.

Definition. $X$ is said to be separable if it has a countable dense subset.

Examples. The real line $\mathbb{R}$ is separable. The complex plane $\mathbb{C}$ is also separable.

Theorem. The space $\ell^\infty$ is not separable.

Proof. Let $y=(\eta_1,\eta_2,\eta_3,\cdots)$ be a sequence of zeros and ones. Then $y\in\ell^\infty$. We can then associate $y$ with the binary representation
$$\hat y=\frac{\eta_1}{2}+\frac{\eta_2}{2^2}+\frac{\eta_3}{2^3}+\cdots\in [0,1].$$ Each $\hat y\in [0,1]$ has a binary representation, so the set of sequences of zeros and ones maps onto the uncountable set $[0,1]$; hence, there are uncountably many sequences of zeros and ones. If $y$ and $z$ are sequences of zeros and ones and $y\ne z$, then $d(y,z)=1$. This means that for any two distinct sequences $y$ and $z$ of zeros and ones, $B\left(y,\frac{1}{3}\right)\cap B\left(z,\frac{1}{3}\right)=\emptyset$. Let $A$ be a dense subset of $\ell^\infty$. Then for each sequence $y$ of zeros and ones, $B\left(y,\frac{1}{3}\right)$ contains at least one element of $A$, and these elements are distinct for distinct $y$. This means that $A$ cannot be countable.

Theorem. The space $\ell^p$ with $1\leq p<\infty$ is separable.

Proof. Let $A$ be the set of all sequences $y$ of the form
$$y=(\eta_1,\eta_2,\cdots,\eta_n,0,0,\cdots,0),$$ where $n$ is a positive integer and the $\eta_j$’s are rational. For each $n=1,2,\cdots$, the number of sequences of the form $y=(\eta_1,\eta_2,\cdots,\eta_n,0,0,\cdots,0)$ is the same as the number of functions from $\{1,2,3,\cdots,n\}$ to $\mathbb{Q}$, the set of all rational numbers. $\mathbb{Q}$ has the cardinality $\aleph_0$ and so the number is $\aleph_0^n=\aleph_0$. The cardinality of $A$ is then $\aleph_0\cdot\aleph_0=\aleph_0$ i.e. $A$ is countable. Now we show that $A$ is dense in $\ell^p$. Let $x=(\xi_j)\in\ell^p$. Let $\epsilon>0$ be given. Since $\displaystyle\sum_{j=1}^\infty|\xi_j|^p<\infty$, $\exists$ a positive integer $N$ s.t. $\displaystyle\sum_{j=N+1}^\infty|\xi_j|^p<\frac{\epsilon^p}{2}$. Since rationals are dense in $\mathbb{R}$, one can find $y=(\eta_1,\eta_2,\cdots,\eta_N,0,0,\cdots)\in A$ s.t. $\displaystyle\sum_{j=1}^N|\xi_j-\eta_j|^p<\frac{\epsilon^p}{2}$. Hence,
$$[d(x,y)]^p=\sum_{j=1}^N|\xi_j-\eta_j|^p+\sum_{j=N+1}^\infty|\xi_j|^p<\epsilon^p,$$
i.e. $d(x,y)<\epsilon$. This means that $y\in B(x,\epsilon)\cap A$, so $B(x,\epsilon)\cap A\ne\emptyset$. This completes the proof.

Functional Analysis 2: $\ell^p$ and $L^p$ as Metric Spaces

Let $p\geq 1$ be a fixed number and let
$$\ell^p=\left\{x=(\xi_j): \sum_{j=1}^\infty|\xi_j|^p<\infty\right\}.$$
Define $d:\ell^p\times\ell^p\longrightarrow\mathbb{R}^+\cup\{0\}$ by
$$d(x,y)=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}}$$
for $x=(\xi_j),y=(\eta_j)\in\ell^p$. Then $(\ell^p,d)$ is a metric space. The properties (M1) and (M2) are clearly satisfied. We prove the remaining property (M3), the triangle inequality. The case $p=1$ follows easily from the triangle inequality for numbers, so let $p>1$. We need a few steps to do this. First we prove the following inequality: $\forall\alpha>0,\beta>0$,
$$\alpha\beta\leq\frac{\alpha^p}{p}+\frac{\beta^q}{q},$$
where $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. The numbers $p$ and $q$ are called conjugate exponents. It follows from $\frac{1}{p}+\frac{1}{q}=1$ that $(p-1)(q-1)=1$ i.e. $\frac{1}{p-1}=q-1$. If we let $u=t^{p-1}$ then $t=u^{\frac{1}{p-1}}=u^{q-1}$. The rectangle $[0,\alpha]\times[0,\beta]$ is contained in the union of the region under the curve $u=t^{p-1}$, $0\leq t\leq\alpha$, and the region to the left of the curve $t=u^{q-1}$, $0\leq u\leq\beta$. By comparing areas, we obtain
$$\alpha\beta\leq\int_0^{\alpha}t^{p-1}dt+\int_0^{\beta}u^{q-1}du=\frac{\alpha^p}{p}+\frac{\beta^q}{q}.$$
Next, using this inequality we prove the Hölder inequality
$$\sum_{j=1}^\infty|\xi_j\eta_j|\leq\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\eta_m|^q\right)^{\frac{1}{q}}$$
where $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. When $p=2$ and $q=2$, we obtain the well-known Cauchy-Schwarz inequality.

Proof. Let $(\tilde\xi_j)$ and $(\tilde\eta_j)$ be two sequences such that
$$\sum_{j=1}^\infty|\tilde\xi_j|^p=1,\ \sum_{j=1}^\infty|\tilde\eta_j|^q=1.$$
Let $\alpha=|\tilde\xi_j|$ and $\beta=|\tilde\eta_j|$. Then by the inequality we proved previously,
$$|\tilde\xi_j\tilde\eta_j|\leq\frac{|\tilde\xi_j|^p}{p}+\frac{|\tilde\eta_j|^q}{q}$$
and so we obtain
$$\sum_{j=1}^\infty|\tilde\xi_j\tilde\eta_j|\leq\sum_{j=1}^\infty\frac{|\tilde\xi_j|^p}{p}+\sum_{j=1}^\infty\frac{|\tilde\eta_j|^q}{q}=1.$$
Now take any nonzero $x=(\xi_j)\in\ell^p$, $y=(\eta_j)\in\ell^q$. Setting
$$\tilde\xi_j=\frac{\xi_j}{\left(\displaystyle\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}},\ \tilde\eta_j=\frac{\eta_j}{\left(\displaystyle\sum_{m=1}^\infty|\eta_m|^q\right)^{\frac{1}{q}}}.$$
in the inequality above yields the Hölder inequality.
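
As a quick sanity check (my own sketch; the exponent $p=3$ and the random data are arbitrary choices), the Hölder inequality can be verified numerically for finitely many terms:

```python
import numpy as np

# Illustrative check of the Holder inequality with conjugate exponents p and q = p/(p-1).
rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1)

xi = rng.normal(size=1000)
eta = rng.normal(size=1000)

lhs = np.sum(np.abs(xi * eta))
rhs = np.sum(np.abs(xi)**p)**(1/p) * np.sum(np.abs(eta)**q)**(1/q)
print(lhs <= rhs, lhs, rhs)   # always True
```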

Next, we prove the Minkowski inequality
$$\left(\sum_{j=1}^\infty|\xi_j+\eta_j|^p\right)^{\frac{1}{p}}\leq\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{m=1}^\infty|\eta_m|^p\right)^{\frac{1}{p}}$$
where $x=(\xi_j),\ y=(\eta_j)\in\ell^p$ and $p\geq 1$. The case $p=1$ follows from the triangle inequality for numbers. Let $p>1$. Then
\begin{align*}
|\xi_j+\eta_j|^p&=|\xi_j+\eta_j|\,|\xi_j+\eta_j|^{p-1}\\
&\leq(|\xi_j|+|\eta_j|)|\xi_j+\eta_j|^{p-1}\ (\mbox{triangle inequality for numbers}).
\end{align*}
For a fixed $n$, we have
$$\sum_{j=1}^n|\xi_j+\eta_j|^p\leq\sum_{j=1}^n|\xi_j||\xi_j+\eta_j|^{p-1}+\sum_{j=1}^n|\eta_j||\xi_j+\eta_j|^{p-1}.$$
Using the Hölder inequality, we get the following inequality
\begin{align*}
\sum_{j=1}^n|\xi_j||\xi_j+\eta_j|^{p-1}&\leq \sum_{j=1}^\infty |\xi_j||\xi_j+\eta_j|^{p-1}\\
&\leq\left(\sum_{k=1}^\infty |\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty(|\xi_m+\eta_m|^{p-1})^q\right)^{\frac{1}{q}}\ (\mbox{Hölder})\\
&=\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.
\end{align*}
Similarly we also get the inequality
$$\sum_{j=1}^n|\eta_j||\xi_j+\eta_j|^{p-1}\leq \left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.$$
Combining these two inequalities, we get
$$\sum_{j=1}^n|\xi_j+\eta_j|^p\leq\left\{\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\right\}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}$$
and by taking the limit $n\to \infty$ on the left hand side, we get
$$\sum_{j=1}^\infty|\xi_j+\eta_j|^p\leq\left\{\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\right\}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.$$
Finally, dividing this inequality by $\displaystyle\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}$ (the inequality is trivial if this factor vanishes) yields the Minkowski inequality. The Minkowski inequality tells us that
$$d(x,y)=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}}<\infty$$
for $x,y\in\ell^p$. Let $x=(\xi_j), y=(\eta_j),\ z=(\zeta_j)\in\ell^p$. Then
\begin{align*}
d(x,y)&=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}}\\
&\leq\left(\sum_{j=1}^\infty[|\xi_j-\zeta_j|+|\zeta_j-\eta_j|]^p\right)^{\frac{1}{p}}\\
&\leq\left(\sum_{j=1}^\infty|\xi_j-\zeta_j|^p\right)^{\frac{1}{p}}+\left(\sum_{j=1}^\infty|\zeta_j-\eta_j|^p\right)^{\frac{1}{p}}\\
&=d(x,z)+d(z,y).
\end{align*}
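
A numerical spot check (my own sketch, on truncated sequences with an arbitrary $p$) of the triangle inequality just verified:

```python
import numpy as np

# Illustrative check of d(x,y) <= d(x,z) + d(z,y) for the l^p metric on truncated sequences.
rng = np.random.default_rng(2)
p = 2.5

def d(u, v):
    return np.sum(np.abs(u - v)**p)**(1/p)

x, y, z = rng.normal(size=(3, 1000))
print(d(x, y) <= d(x, z) + d(z, y), d(x, y), d(x, z) + d(z, y))
```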

A measurable function $f$ on a closed interval $[a,b]$ is said to belong to $L^p$ if $\int_a^b|f(t)|^p dt<\infty$. $L^p$ is a vector space. For functions $f,g\in L^p$, we define
$$d(f,g)=\left\{\int_a^b|f(t)-g(t)|^pdt\right\}^{\frac{1}{p}}.$$
Then clearly (M2) symmetry is satisfied and one can also prove that (M3) the triangle inequality holds. However, (M1) is not satisfied: all we can say is that if $d(f,g)=0$ then $f=g$ a.e. (almost everywhere), i.e. the set $\{t\in[a,b]: f(t)\ne g(t)\}$ has measure $0$. It turns out that equality a.e. is an equivalence relation on $L^p$, so by considering $f\in L^p$ as its equivalence class $[f]$, $d$ can be defined as a metric on $L^p$ (actually on the quotient space of $L^p$). Later, we will be particularly interested in the case $p=2$, in which case $L^p$ as well as $\ell^p$ become Hilbert spaces. Those of you who want to know details about the $L^p$ spaces are referred to

H. L. Royden, Real Analysis, 3rd Edition, Macmillan Publishing Company, 1988.

Functional Analysis 1: Metric Spaces

This is the first of a series of lecture notes I intend to write for a graduate Functional Analysis course I am teaching in the Fall.

What is functional analysis? Functional analysis is an abstract branch of mathematics, especially of analysis, concerned with the study of vector spaces of functions. These vector spaces of functions arise naturally when we study linear differential equations as solutions of a linear differential equation form a vector space. Functional analytic methods and results are important in various fields of mathematics (for example, differential geometry, ergodic theory, integral geometry, noncommutative geometry, partial differential equations, probability, representation theory etc.) and its applications, in particular, in economics, finance, quantum mechanics, quantum field theory, and statistical physics. Topics in this introductory functional analysis course include metric spaces, Banach spaces, Hilbert spaces, bounded linear operators, the spectral theorem, and unbounded linear operators.

While functional analysis is a branch of analysis, due to its nature linear algebra is heavily used. So, it would be a good idea to brush up on linear algebra among other things you need to study functional analysis.

In functional analysis, we study analysis on an abstract space $X$ rather than the familiar $\mathbb{R}$ or $\mathbb{C}$. In order to consider fundamental notions in analysis such as limits and convergence, we need to have distance defined on $X$ so that we can speak of nearness or closeness. A distance on $X$ can be defined as a function, called a distance function or a metric, $d: X\times X\longrightarrow\mathbb{R}^+\cup\{0\}$ satisfying the following properties:

(M1) $d(x,y)=0$ if and only if $x=y$.

(M2) $d(x,y)=d(y,x)$ (Symmetry)

(M3) $d(x,y)\leq d(x,z)+d(z,y)$ (Triangle Inequality)

Here $\mathbb{R}^+$ denotes the set of all positive real numbers. You can easily see how mathematicians came up with this definition of a metric. (M1)-(M3) are the properties that the familiar distance on $\mathbb{R}$, $d(x,y)=|x-y|$ satisfies. The space $X$ with a metric $d$ is called a metric space and we usually write it as $(X,d)$.

Example. Let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{R}^n$. Define
$$d(x,y)=\sqrt{(\xi_1-\eta_1)^2+\cdots+(\xi_n-\eta_n)^2}.$$
Then $d$ is a metric on $\mathbb{R}^n$ called the Euclidean metric.

This time, let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{C}^n$ and define
$$d(x,y)=\sqrt{|\xi_1-\eta_1|^2+\cdots+|\xi_n-\eta_n|^2}.$$
Then $d$ is a metric on $\mathbb{C}^n$ called the Hermitian metric. Here $|\xi_i-\eta_i|^2=(\xi_i-\eta_i)\overline{(\xi_i-\eta_i)}$.

Of course these are pretty familiar examples. If there were only such familiar examples, there would be no point in considering an abstract space. In fact, the abstraction allows us to discover other examples of metrics that are not so intuitive.

Example. Let $X$ be the set of all bounded sequences of complex numbers
$$X=\{(\xi_j): \xi_j\in\mathbb{C},\ \sup_{j\in\mathbb{N}}|\xi_j|<\infty\}.$$
For $x=(\xi_j), y=(\eta_j)\in X$, define
$$d(x,y)=\sup_{j\in\mathbb{N}}|\xi_j-\eta_j|.$$
Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\ell^\infty$.

Example. Let $X$ be the set of continuous real-valued functions defined on the closed interval $[a,b]$. Let $x, y:[a,b]\longrightarrow\mathbb{R}$ be continuous and define
$$d(x,y)=\max_{t\in [a,b]}|x(t)-y(t)|.$$
Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\mathcal{C}[a,b]$.
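
For instance (a small numerical sketch of my own), the distance between $x(t)=\sin t$ and $y(t)=\cos t$ in $\mathcal{C}[0,2\pi]$ is $\max_t|\sin t-\cos t|=\sqrt{2}$:

```python
import numpy as np

# Illustrative computation of the max metric d(x,y) = max_t |x(t) - y(t)| on a grid.
t = np.linspace(0.0, 2 * np.pi, 100001)
d = np.max(np.abs(np.sin(t) - np.cos(t)))
print(d, np.sqrt(2))   # approximately equal
```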

In a metric space $(X,d)$, nearness or closeness can be described by a neighbourhood called an $\epsilon$-ball ($\epsilon>0$) centered at $x\in X$
$$B(x,\epsilon)=\{y\in X: d(x,y)<\epsilon\}.$$
These $\epsilon$-balls form a base for the topology on $X$, called the topology on $X$ induced by the metric $d$.

Next time, we will discuss two more examples of metric spaces $\ell^p$ and $L^p$. These examples are particularly important in functional analysis as they become Banach spaces. In particular, they become Hilbert spaces when $p=2$.