Scalable Time-Stepping for Stiff Nonlinear PDEs

PI: James V. Lambers, The University of Southern Mississippi

Project Description

1 Introduction


Methods for numerical simulation of time-dependent phenomena have advanced considerably over the last several decades, as has the computing technology on which they are applied to increasingly high-resolution models. However, this very increase in available resolution reduces the effectiveness of well-established time-stepping methods, and by extension the overall feasibility of numerical simulation of such phenomena. As future advances in computing power will only exacerbate this problem, the proposed project is an effort to proactively address it by pursuing a new direction in the design of time-stepping methods and openly disseminating the resulting software to the community of researchers in relevant application areas from neuroscience to nanomanufacturing to wireless communication.

The proposed project will develop and disseminate software that employs a componentwise approach to time-stepping to circumvent the difficulties caused by the phenomenon of stiffness. The new methods will provide a more practical alternative for researchers in many branches of science and engineering who rely on the numerical solution of time-dependent partial differential equations (PDEs). The rapid advancement of computing power in recent years has allowed higher-resolution models, which has introduced greater stiffness into the systems of ordinary differential equations (ODEs) that arise from discretization of time-dependent PDEs. This presents difficulties for explicit and implicit time-stepping methods alike, due to the coupling of components of the solution corresponding to low and high frequencies, which change at widely varying speeds.

This goal is to be accomplished by continuing the evolution of Krylov subspace spectral (KSS) methods, which were developed by the PI [54] and have been advanced by the PI over the last several years [9,49,50,51,56,59]. These methods feature explicit time-stepping with high-order accuracy, and stability that is characteristic of implicit methods. This ``best-of-both-worlds'' combination is achieved through a componentwise approach, in which each Fourier coefficient of the solution is computed using an approximation of the solution operator that is, in some sense, optimal for that coefficient. Initial work on KSS methods [9,37,49,50,51,52,53,54,55,56,57,58,59,60,69], which includes comparisons to many other time-stepping methods, has yielded promising results in terms of accuracy, stability, and scalability.

These results encourage their continued development, including enhancement of their accuracy and efficiency, as well as expansion of their applicability. To date, KSS methods have been applied almost exclusively to linear variable-coefficient PDEs on $n$-dimensional boxes, for $n=1,2,3$, with either periodic or homogeneous boundary conditions. The proposed project will feature generalization of KSS methods to nonlinear problems, and to problems on non-rectangular domains.

KSS methods will be extended to nonlinear PDEs by combination with exponential propagation iterative (EPI) methods [79,80], which have shown much promise as effective stiff solvers compared to existing explicit and implicit time-stepping methods [61]. However, EPI methods currently employ Krylov projection methods [39,40,81] to approximate products of matrix functions and vectors, and these can require high-dimensional Krylov subspaces, especially at higher spatial resolution. KSS methods, on the other hand, work with Krylov subspaces whose dimension is independent of the spatial resolution. The proposed project will include optimization of KSS methods so that EPI methods, when using KSS methods to approximate these products, will be much more efficient and scalable than in their present form.

KSS methods will also be extended to PDEs on more general domains by combination with Fourier continuation (FC) methods [1,7,63,70], which are applicable to such domains. Specifically, the goal is to combine the spatial discretization of these methods with the approach to high-order time-stepping of KSS methods. Because KSS methods are particularly effective for evaluating matrix function-vector products for matrices representing differential operators under periodic boundary conditions at high resolution, because FC methods use periodic extensions to achieve high accuracy on general domains, and because EPI methods use matrix function-vector products to achieve high-order accuracy for nonlinear PDEs, the combination of the three methods is a logical foundation for a more effective approach to overcoming stiffness.

KSS methods represent the first application to time-dependent PDEs of techniques from the emerging research area of ``matrices, moments and quadrature'': the approximation of bilinear forms involving matrix functions by treating them as Riemann-Stieltjes integrals, and then applying Gaussian quadrature rules that are generated by the Lanczos algorithm. However, as beneficial as KSS methods can be in their own right, the broader impacts of the proposed project on the scientific computing community as a whole will stem from an understanding of the body of work from which they arose.

Because of the widespread importance of simulation techniques as evidenced by the variety of applications, the proposed research effort will be accompanied by educational activities designed to

In addition to these goals, these educational activities, both within the PI's institution and the broader research community, will raise awareness and appreciation of the techniques from ``matrices, moments and quadrature'', which have many applications besides their role in KSS methods. Applications within numerical linear algebra include error estimation in iterative methods for solving linear systems [11,29,64], updating or downdating in least squares problems [20], selection of regularization parameters for ill-conditioned least squares problems [77], and estimation of determinants and traces of matrix functions [3,4].

There are also several applications of these techniques outside of numerical linear algebra. Approximation of bilinear forms is useful for approximation of the scattering amplitude [30], and estimating the trace of the inverse has applications in the study of fractals [74,82], lattice quantum chromodynamics [13,75], and crystals [67,68]. This outreach will encourage researchers to extend the application of these techniques to a wider variety of problems, in which computation of bilinear forms arises naturally.

These activities will be complemented by outreach to high school students and teachers in Mississippi. The PI will travel to area mathematics classrooms and work through exploratory projects with students. They will have hands-on experiences with a variety of methods that produce approximate solutions to problems with which they are familiar, such as nonlinear equations and integrals, and be introduced to application areas. These activities and classroom time with a university researcher will give them an idea of how the mathematics that they are learning is applied in a variety of STEM careers.


The development of componentwise time-stepping into a more effective approach for solving systems of ODEs than established time-stepping techniques, and the integration of this approach with EPI and FC methods, has the potential to impact several applications in which nonlinear, stiff time-dependent PDEs, of either parabolic or hyperbolic type, naturally arise. Just a few examples of such application areas include plasma physics, molecular dynamics, electrodynamics, optics, and porous media transport. The motivating force behind the research directions described in this proposal is the need to bridge a fundamental, and worsening, disconnect between the time-stepping methods that are most often used in applications, and the rapidly evolving computing environments in which they are used. Many of these methods, some of which have been in existence for over one hundred years, are not designed with stiff systems in mind. They fall into two broad categories. Explicit methods are inexpensive per time step, but when applied to stiff systems, stability requirements force the time step to be impractically small. There are also implicit time-stepping methods, called stiff solvers, that are designed with stiff systems in mind (see, for example, [76]), but many methods of this type require the computation and inversion of the Jacobian, which can be quite expensive, especially for large-scale systems. Newton-Krylov methods [46] avoid this expense but require effective preconditioners, which, depending on the particular problem, may not be readily available [61].

Furthermore, for both explicit and implicit methods, these issues are exacerbated as the spatial resolution increases. For explicit methods, the time step must decrease linearly (for hyperbolic problems) or quadratically (for parabolic problems) as the mesh spacing decreases, due to the CFL limit. For implicit methods, increased spatial resolution results in an increase in the condition number of the system that must be solved during each time step. Going forward, spatial resolution of models will only continue to increase with improvements in the efficiency of memory and processors, thus creating a situation that is worsening over time. These advances in hardware will foster a demand for, and expectation of, greater use of simulation in real-time (e.g., medicine) and/or high-data (e.g., reservoir simulation) applications that existing time-stepping methods will have difficulty meeting. Therefore, a fresh perspective is needed for the development of time-stepping methods that can more efficiently deal with stiffness.

In many cases, the solution of such a system of ODEs can be expressed in terms of the approximate evaluation of a matrix function such as the exponential. Techniques for computing $f(A){\bf v}$, such as those described in [39,40], have been used for several years to solve systems of ODEs, but can encounter the same difficulties with stiffness. Techniques for computing the bilinear form ${\bf u}^T f(A) {\bf v}$ open the door to rethinking how time-stepping is carried out. Because they allow individual attention to be paid to each component of $f(A){\bf v}$ in some basis, even if those components are coupled, they can be used to construct new time-stepping methods, based on KSS methods, that are more effective for large-scale problems.

A survey of the literature on time-stepping methods shows that recent advances mainly come from two strategies for mitigating the difficulties caused by stiffness: adaptive time-stepping (see, for example, [26,33,42,43,44,45,48,71,84]), in which the global time step used to compute the entire solution is varied according to an error estimate and/or a measure of the smoothness of the solution, and local time-stepping (see, for example, [6,10,16,17,19,22,23,34,35,47,62,78]), in which the time step is varied over the spatial domain so as to satisfy local stability criteria. The guiding principle behind both strategies is to limit the degradation of performance due to stiffness by reducing the time step either locally in time (for adaptive time-stepping) or in space (for local time-stepping) where higher spatial resolution is needed. Also worth noting are multiple time-stepping (MTS) methods [15,21,83], that mitigate the difficulties caused by stiffness by using appropriate time steps for stiff and non-stiff parts of the equation.

By contrast, the approach in the proposed project is to confront stiffness directly by modifying the approximate solution operator in a componentwise manner, so that an increase in spatial resolution does not cause undue additional computational expense, even when using a fixed time step. However, this approach should not be seen as an alternative to the abovementioned strategies. KSS methods can benefit from adaptive time-stepping just like any other time-stepping method through the addition of error estimation; this enhancement will be part of the proposed project. Also, the approach taken by KSS methods of using different approximations for each Fourier component is amenable to local time-stepping, except that the locality is in Fourier space rather than physical space. Finally, MTS methods can benefit from applying KSS methods to the stiff parts of the problem.

2 A Componentwise Approach to Time-Stepping

Consider a system of ODEs of the form \begin{equation} \label{utAu} {\bf u}'(t) + A{\bf u} = {\bf 0}, \quad {\bf u}(t_0) = {\bf u}_0, \end{equation} where $A$ is an $N\times N$ matrix. One-step methods yield an approximate solution of the form ${\bf u}(t_{n+1}) = f(A;\Delta t){\bf u}(t_n),$ where $\Delta t$ is the time step and $f(A;\Delta t)$ is a polynomial in $A$ that approximates $e^{-A\Delta t}$ if the method is explicit, or a rational function of $A$ if the method is implicit.
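To make this correspondence concrete, the following sketch (with a small diagonal matrix standing in for $A$; all values purely illustrative) verifies that one step of forward Euler applies the degree-one polynomial $I - \Delta t\,A$, while one step of backward Euler applies the rational function $(I + \Delta t\,A)^{-1}$, both of which approximate $e^{-A\Delta t}$:

```python
import numpy as np

# Hypothetical 3x3 stiff system u' + A u = 0, for illustration only.
A = np.diag([1.0, 10.0, 100.0])
u0 = np.array([1.0, 1.0, 1.0])
dt = 0.001

# Forward (explicit) Euler: u_{n+1} = (I - dt*A) u_n, a polynomial in A.
u_fe = u0 - dt * (A @ u0)
assert np.allclose(u_fe, (np.eye(3) - dt * A) @ u0)

# Backward (implicit) Euler: u_{n+1} = (I + dt*A)^{-1} u_n, a rational function of A.
u_be = np.linalg.solve(np.eye(3) + dt * A, u0)

# Both approximate the exact solution operator exp(-dt*A) to first order;
# since A is diagonal here, the exact step is available componentwise.
u_exact = np.exp(-dt * np.diag(A)) * u0
```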

For example, consider the problem of computing ${\bf w} = e^{-At}{\bf v}$ for a given symmetric matrix $A$ and vector ${\bf v}$. An approach described in [40] is to apply the Lanczos algorithm to $A$ with initial vector ${\bf v}$ to obtain, at the end of the $j$th iteration, an orthogonal matrix $X_j$ and a tridiagonal matrix $T_j$ such that $X_j^T AX_j = T_j$. Then, we can compute the approximation \begin{equation} {\bf w}_j = \|{\bf v}\|_2 X_j e^{-T_j t} {\bf e}_1, \quad {\bf e}_1 = \left[ \begin{array}{cccc} 1 & 0 & \cdots & 0 \end{array} \right]^T. \label{eqlancapprox} \end{equation} As each column ${\bf x}_k$, $k=1,\ldots,j$, of $X_j$ is of the form ${\bf x}_k = p_{k-1}(A){\bf v}$, where $p_n(A)$ is a polynomial of degree $n$ in $A$, it follows that ${\bf w}_j$ is the product of a polynomial in $A$ of degree $j-1$ and ${\bf v}$.
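The approximation (\ref{eqlancapprox}) can be sketched as follows (a minimal Python illustration, assuming a symmetric $A$; full reorthogonalization is added for numerical robustness, and the function name is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm(A, v, t, j):
    """Approximate w = exp(-A t) v by j Lanczos iterations:
    w_j = ||v||_2 * X_j * exp(-T_j t) * e_1."""
    n = v.shape[0]
    X = np.zeros((n, j))
    alpha = np.zeros(j)
    beta = np.zeros(max(j - 1, 0))
    X[:, 0] = v / np.linalg.norm(v)
    for k in range(j):
        w = A @ X[:, k]
        alpha[k] = X[:, k] @ w
        # Orthogonalize against all previous Lanczos vectors (full reorthogonalization).
        w -= X[:, :k + 1] @ (X[:, :k + 1].T @ w)
        if k < j - 1:
            beta[k] = np.linalg.norm(w)
            X[:, k + 1] = w / beta[k]
    # Tridiagonal matrix T_j satisfying X_j^T A X_j = T_j.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.norm(v) * (X @ expm(-T * t)[:, 0])
```

Running the full $j=N$ iterations reproduces $e^{-At}{\bf v}$ exactly (up to roundoff), while truncating at small $j$ gives the low-degree polynomial approximation discussed below.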

If the eigenvalues of $A$ are not clustered, which is the case if $A$ arises from a stiff PDE, a good approximation cannot be obtained using a few Lanczos iterations, just as a low-degree polynomial cannot accurately approximate an exponential function on a large interval. This can be alleviated using an outer iteration \begin{equation} {\bf w}_j^{m+1} \approx e^{-A\Delta t}{\bf w}_j^m, \quad m = 0, 1, \ldots, \quad {\bf w}_j^0 = {\bf v}, \label{outerit} \end{equation} for some $\Delta t \ll t$. However, this approach is not practical if $\Delta t$ must be chosen very small. Modifications described in [18,65,81] produce rational approximations that reduce the number of Lanczos iterations, but these methods require solving systems of linear equations in which the matrix is of the form $I-hA$, where $h$ is a parameter. Unless $h$ is chosen very small, these systems may be ill-conditioned.

In summary, time-stepping methods that approximate the exponential using a polynomial or rational function tend to have difficulty with stiff systems, such as those that arise from PDEs. The solution of a stiff system includes coupled components that evolve at widely varying rates. As a result, explicit methods for such systems require small time steps, while implicit methods typically require the solution of ill-conditioned systems of linear equations that pose difficulties for direct and iterative solvers alike. The difficulties that many time-stepping methods have with stiffness are associated with the interaction between components of the solution of varying frequency, which vary at different speeds. However, it is not this interaction alone that necessarily precipitates the difficulties typically observed in time-stepping methods applied to stiff problems. A contributing factor is the attempt to treat such disparate components of the solution with the same function, whether polynomial or rational, that approximates $e^{-\lambda\Delta t}$.
Unfortunately, such an approximation is impractical when $\lambda$ varies within a large interval, which is precisely how the eigenvalues of $A$ behave when the problem (\ref{utAu}) is stiff. Componentwise time-stepping, on the other hand, can avoid these drawbacks, because each Fourier coefficient of the solution is computed using a different approximation of the solution operator. Rather than match terms of the Taylor expansion of the exponential, as in methods of Runge-Kutta type, the exponential function is approximated using an interpolating polynomial with frequency-dependent interpolation points. This increased flexibility can be exploited to obtain the principal advantage of implicit methods, namely their stability, without having to solve a large system of equations during each time step. However, considering such an approach raises important questions:

The next two sections will provide answers to both of these questions.

3 Matrices, Moments, Quadrature and PDEs

This section describes how componentwise approximations can be obtained. We consider the linear parabolic PDE \begin{equation} \label{eqthepde} u_t + Lu = 0, \quad t > 0, \end{equation} where $L$ is a Sturm-Liouville operator, with appropriate initial conditions and periodic boundary conditions. Let $\langle \cdot, \cdot \rangle$ denote the standard inner product on $[0,2\pi]$. KSS methods [49,50,54,56] compute the solution $\tilde{u}(x,t_{n+1})$ from $\tilde{u}(x,t_n)$ at time $t_n$ by approximating the Fourier coefficients that would be obtained by applying the exact solution operator to $\tilde{u}(x,t_n)$, $$\hat{u}(\omega,t_{n+1}) = \left\langle \frac{1}{\sqrt{2\pi}} e^{i\omega x}, S(\Delta t)\tilde{u}(x,t_n) \right\rangle,$$ where $S(\Delta t) = e^{-L\Delta t}$ is the solution operator of the PDE. Clearly, such an approach requires an effective method for computing bilinear forms.
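For the constant-coefficient special case $Lu = -u_{xx}$, the solution operator acts diagonally on Fourier coefficients, so each coefficient is simply scaled by $e^{-\omega^2\Delta t}$. The following sketch (illustrative only; KSS methods target the variable-coefficient case, where the Fourier coefficients couple) demonstrates this with the FFT, using the smooth initial condition from the numerical example in Section 4:

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
dt = 0.01
u0 = 1.0 + 0.3 * np.cos(x) - 0.05 * np.sin(2 * x)   # smooth, periodic

# Wave numbers in numpy's FFT ordering: 0, 1, ..., N/2-1, -N/2, ..., -1.
omega = np.fft.fftfreq(N, d=1.0 / N)

# One step of the exact solution operator S(dt) = exp(-L dt) for L = -d^2/dx^2:
# each Fourier coefficient is scaled independently by exp(-omega^2 dt).
u_hat = np.fft.fft(u0)
u1 = np.real(np.fft.ifft(np.exp(-omega**2 * dt) * u_hat))
```

For variable coefficients no such diagonal scaling exists, which is why KSS methods instead approximate each coefficient $\hat{u}(\omega,t_{n+1})$ as a bilinear form.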

Elements of Functions of Matrices

In [29] Golub and Meurant described a method for computing quantities of the form \begin{equation} {\bf u}^T f(A){\bf v}, \label{basicquadform} \end{equation} where ${\bf u}$ and ${\bf v}$ are $N$-vectors, $A$ is an $N\times N$ symmetric positive definite matrix, and $f$ is a smooth function. Our goal is to apply this method with $A = L_N$, where $L_N$ is a spectral discretization of $L$, $f(\lambda) = e^{-\lambda t}$ for some $t$, and the vectors ${\bf u}$ and ${\bf v}$ are derived from $\hat{\bf e}_\omega$ and ${\bf u}^n$, where $\hat{\bf e}_\omega$ is a discretization of $\frac{1}{\sqrt{2\pi}}e^{i\omega x}$, and ${\bf u}^n$ is a discretization of the solution at time $t_n$ on an $N$-point uniform grid. Since the matrix $A$ is symmetric positive definite, it has real eigenvalues $b = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N = a > 0$, and corresponding orthonormal eigenvectors ${\bf q}_j$, $j=1, \ldots, N$. Therefore, the quantity (\ref{basicquadform}) can be viewed as a Riemann-Stieltjes integral \begin{equation} {\bf u}^T f(A){\bf v} = \sum_{j=1}^N f(\lambda_{j}) {\bf u}^T {\bf q}_j {\bf q}_j^T{\bf v} \label{stieltjes1} = \int_a^b f(\lambda)\,d\alpha(\lambda),\end{equation} where the measure $d\alpha(\lambda)$ is derived from the coefficients of ${\bf u}$ and ${\bf v}$ in the basis of eigenvectors. As discussed in [11,27,28,29], this integral can be approximated using Gaussian, Gauss-Radau or Gauss-Lobatto quadrature rules, where the nodes and weights can be obtained using the Lanczos algorithm [12,24,25,32]. Figure 1 demonstrates why integrals of the form (\ref{stieltjes1}) can be approximated accurately with a small number of nodes in the case where $A$ is a discretization of a differential operator and the vector ${\bf u}$ is used to extract a particular Fourier coefficient of $f(A){\bf v}$.
We examine the distribution $d\alpha(\lambda)$ in the case where ${\bf u} = {\bf v} = e^{i\omega x}$ for small and large values of $\omega$, and for $A$ discretizing a differential operator of the form $-\partial_x a(x) \partial_x$, with $a(x) > 0$ being a smooth function or a piecewise constant function. In either case, $d\alpha(\lambda)$ is mostly concentrated within a portion of the interval of integration $[a,b]$. Gaussian quadrature rules for such integrals naturally target these relevant portions [37,50,51].
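A minimal sketch of this quadrature procedure in the case ${\bf u} = {\bf v}$ (Python; the function name is illustrative and full reorthogonalization is added for robustness): $K$ Lanczos iterations generate the $K$-node Gaussian rule, with nodes given by the eigenvalues of the tridiagonal matrix $T_K$ and weights given by the squared first components of its normalized eigenvectors, scaled by $\|{\bf u}\|_2^2$.

```python
import numpy as np

def gauss_rule_bilinear(A, u, f, K):
    """Approximate u^T f(A) u by the K-node Gaussian quadrature rule
    generated by K Lanczos iterations on A with initial vector u."""
    n = u.shape[0]
    X = np.zeros((n, K))
    alpha = np.zeros(K)
    beta = np.zeros(max(K - 1, 0))
    X[:, 0] = u / np.linalg.norm(u)
    for k in range(K):
        w = A @ X[:, k]
        alpha[k] = X[:, k] @ w
        w -= X[:, :k + 1] @ (X[:, :k + 1].T @ w)  # full reorthogonalization
        if k < K - 1:
            beta[k] = np.linalg.norm(w)
            X[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, S = np.linalg.eigh(T)
    # Weights: squared first components of the eigenvectors, times ||u||^2.
    weights = S[0, :] ** 2 * (u @ u)
    return np.sum(weights * f(nodes))
```

As the distribution plots suggest, a handful of nodes already captures the portion of $[a,b]$ where $d\alpha(\lambda)$ is concentrated.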

Figure 1: The distribution $d\alpha(\lambda)$ from (\ref{stieltjes1}) where the matrix $A$ represents a spectral discretization of a 1-D, second-order differential operator with smooth leading coefficient (top plot) and discontinuous leading coefficient (bottom plot), where ${\bf u} = {\bf v}$ is a discretization of $e^{2ix}$ (solid curve) or $e^{64ix}$ (dashed curve).

Block Gaussian Quadrature

In the case ${\bf u} \neq {\bf v}$, there is the possibility that the weights may not be positive, which destabilizes the quadrature rule [2]. Instead, one can consider the approximation of the $2\times 2$ matrix integral \begin{equation} \label{blkform} \left[ \begin{array}{cc} {\bf u} & {\bf v} \end{array}\right]^T f(A) \left[ \begin{array}{cc} {\bf u} & {\bf v} \end{array}\right]. \end{equation} As discussed in [29], the most general $K$-node quadrature formula is of the form \begin{equation} \int_a^b f(\lambda )\,d\mu (\lambda ) = \sum_{j=1}^{2K} f(\lambda_j){\bf v}_j {\bf v}_j^T + error, \label{eq312} \end{equation} where, for each $j$, $\lambda_j$ is a scalar and ${\bf v}_j$ is a 2-vector. Each node $\lambda_j$ is an eigenvalue of the matrix \begin{equation} \label{blocklanc} {\cal T}_K = \left[ \begin{array}{cccc} M_1 & B_1^T & & \\ B_1 & M_2 & B_2^T & \\ & \ddots & \ddots & \ddots \\ & & B_{K-1} & M_K \end{array} \right], \end{equation} which is a block-tridiagonal matrix of order $2K$. The vector ${\bf v}_j$ consists of the first two elements of the corresponding normalized eigenvector. The matrices $M_j$ and $B_j$ are computed using the block Lanczos algorithm, which was proposed by Golub and Underwood in [31].

KSS Methods

KSS methods [50,52] for (\ref{eqthepde}) begin by defining \begin{equation} R_0 = \left[ \begin{array}{cc} \hat{\bf e}_\omega & {\bf u}^n \end{array} \right], \label{KSSR0} \end{equation} where $\hat{\bf e}_\omega$ is a discretization of $\frac{1}{\sqrt{2\pi}}e^{i\omega x}$ and ${\bf u}^n$ is a discretization of the approximate solution $u(x,t)$ at time $t_n=n\Delta t$. Next we compute the $QR$ factorization of $R_0$, $$ R_0 = X_1 B_0 $$ which outputs \begin{equation} X_1 = \left[ \begin{array}{cc} \hat{\bf e}_\omega & \frac{{\bf u}_\omega^n}{\| {\bf u}_\omega^n \|_2} \end{array} \right] \label{x1} \end{equation} and $$ B_0 = \left[ \begin{array}{cc} 1 & \hat{\bf e}_\omega^H {\bf u}^n \\ 0 & \| {\bf u}_\omega^n \|_2 \end{array} \right], $$ where \begin{equation} {\bf u}_\omega^n = {\bf u}^n - \hat{\bf e}_\omega \hat{\bf e}_\omega^H{\bf u}^n = {\bf u}^n - \hat{\bf e}_\omega \hat{u}(\omega,t_n). \label{equw} \end{equation} Then, block Lanczos iteration is applied to the discretized operator $L_N$ with initial block $X_1(\omega)$, producing a block tridiagonal matrix ${\cal T}_K(\omega)$ of the form (\ref{blocklanc}), where each entry is a function of $\omega$. Then, each Fourier coefficient of the solution at $t_{n+1}$ can be expressed as \begin{equation} \label{blkcomp} [\hat{\bf u}^{n+1}]_\omega = \left[ B_0^H E_{12}^H e^{-{\cal T}_K(\omega)\Delta t} E_{12} B_0 \right]_{12}, \quad E_{12} = \left[ \begin{array}{cc} {\bf e}_1 & {\bf e}_2 \end{array} \right]. \end{equation} This algorithm has temporal accuracy $O(\Delta t^{2K-1})$ for parabolic problems [50]. Even higher-order accuracy, $O(\Delta t^{4K-2})$, is obtained for the second-order wave equation [52]. Furthermore, in [50,52], it is shown that under appropriate assumptions on the coefficients of the PDE, the 1-node KSS method is unconditionally stable. Generalization of this algorithm to higher dimensions and other PDEs is straightforward [38]. 
For the second-order wave equation, two matrices of the form $T_\omega$ are computed for each $\vec{\omega}$, corresponding to the solution and its time derivative. KSS methods have also been generalized to systems of coupled equations; the details are presented in [53]. In particular, it is shown in [60] that KSS methods are effective for systems of three equations in three spatial variables, through application to Maxwell's equations. KSS methods have been compared to several time-stepping methods, including finite-difference methods, Runge-Kutta methods and backward differentiation formulae [52,53,56,58]. Comparisons with a number of Krylov subspace methods based on exponential integrators [39,40,41], including preconditioned Lanczos iteration [65,81], are made in [59]. As discussed in the next section, KSS methods have also been compared to a variety of methods for computing matrix-function vector products for the purpose of solving nonlinear PDEs [9].
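The core computation (\ref{blkcomp}) of one Fourier coefficient can be sketched as follows (a simplified real-arithmetic analogue, with a real unit vector standing in for $\hat{\bf e}_\omega$ and a small symmetric matrix standing in for $L_N$; names are illustrative and full reorthogonalization is added for robustness):

```python
import numpy as np
from scipy.linalg import expm, qr

def kss_coefficient(L, e_w, un, dt, K):
    """Approximate e_w^T exp(-L dt) un via K block Lanczos iterations
    with initial block R0 = [e_w, un], as in (blkcomp)."""
    n = L.shape[0]
    X1, B0 = qr(np.column_stack([e_w, un]), mode='economic')  # R0 = X1 B0
    Q = np.zeros((n, 2 * K))
    Q[:, 0:2] = X1
    T = np.zeros((2 * K, 2 * K))   # block tridiagonal T_K
    for j in range(K):
        cols = slice(2 * j, 2 * j + 2)
        W = L @ Q[:, cols]
        M = Q[:, cols].T @ W       # diagonal block M_{j+1}
        T[cols, cols] = M
        # Full reorthogonalization against all blocks computed so far.
        W -= Q[:, :2 * j + 2] @ (Q[:, :2 * j + 2].T @ W)
        if j < K - 1:
            Xn, B = qr(W, mode='economic')  # W = X_{j+2} B_{j+1}
            nxt = slice(2 * j + 2, 2 * j + 4)
            Q[:, nxt] = Xn
            T[nxt, cols] = B
            T[cols, nxt] = B.T
    # [B0^H E12^H exp(-T_K dt) E12 B0]_{12}
    F = expm(-T * dt)[:2, :2]
    return (B0.T @ F @ B0)[0, 1]
```

In the real KSS method the initial block involves the complex exponential $\hat{\bf e}_\omega$ and the iteration is carried out (or, as in the next section, approximated asymptotically) for every wave number $\omega$.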

4 Proposed Research

Three significant drawbacks of KSS methods, as described in the previous sections, are that they have been limited to linear PDEs, that they have been limited to rectangular domains with periodic or homogeneous boundary conditions, and that performing block Lanczos iteration separately for each Fourier component is computationally expensive. The goal of the proposed project is to take steps to overcome all three of these limitations, thus upgrading KSS methods into a broadly applicable approach to time-stepping that combines the best attributes of implicit and explicit methods.

Acceleration Through Asymptotic Node Selection

The central idea behind KSS methods is to compute each component of the solution, in some orthonormal basis, using an approximation that is, in some sense, optimal for that component. That is, each component uses its own polynomial approximation of $S(L_N;\Delta t)$, where the function $S$ is based on the solution operator of the PDE (e.g. $S(L_N;\Delta t) = e^{-L_N\Delta t}$ in the case of (\ref{eqthepde})), and $L_N$ is the discretization of the spatial differential operator. These polynomial approximations are obtained by interpolation of the function $S(\lambda;\Delta t)$ at selected nodes for each component. Then, the computed solution has the form [51] \begin{equation} \label{diagop} \mathbf{u}^{n+1} = S(L_N;\Delta t) \mathbf{u}^n = \sum_{j=0}^{2K} D_j(\Delta t) L_N^j \mathbf{u}^n, \end{equation} where $D_j(\Delta t)$ is a matrix that is diagonal in the chosen basis. The diagonal entries are the coefficients of these interpolating polynomials in the monomial basis, with each row corresponding to a particular Fourier component. In the original block KSS method [50,52], the interpolation points are obtained by performing block Lanczos iteration and then diagonalizing a $2K\times 2K$ matrix, for each component. We now describe a much faster way of obtaining interpolation points, introduced in [69], by studying the behavior of block Lanczos in the limit as $|\omega|\rightarrow\infty$, where $\omega$ is the wave number. To illustrate the approach, we use an example. As in the previous section, let $\mathbf{u}^n$ be a discretization of the approximate solution $u(x,t)$ at time $t_n=n\Delta t$ on a uniform $N$-point grid. Then, KSS methods use the initial block $R_0=\left[ \begin{array}{cc} \hat{\bf e}_\omega & \mathbf{u}^n \end{array} \right]$, for each $\omega = -N/2+1,\ldots,N/2$.
As before, we start the block Lanczos algorithm by finding the $QR$-factorization of $R_0$: $$R_0 = X_1 B_0,$$ where \begin{equation} \label{eq:x1} X_1=\left[ \begin{array}{cc} \displaystyle{\hat{\bf e}_\omega} & \displaystyle{\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}} \end{array} \right] \quad \mbox{ and } \quad B_0= \left[ \begin{array}{cc} 1 & \hat{u}(\omega,t_n) \\ 0 & \|\mathbf{u}^n_\omega\|_2 \end{array} \right]. \end{equation} with $\mathbf{u}^n_\omega$ defined as in (\ref{equw}). We note that if the solution $u$ is continuous, then as $|\omega|\rightarrow\infty$, $|\hat{u}(\omega,t_n)|\rightarrow 0$, so that in the limit $B_0$ is diagonal. The next step is to compute \begin{equation} \label{eq:m1} M_1=X_1^HL_N X_1, \end{equation} where the matrix $L_N$ is a spectral discretization of the operator $L$ defined by $Lu=pu_{xx}+q(x)u$, with $p$ being a constant. Substituting the value of $X_1$ from (\ref{eq:x1}) into (\ref{eq:m1}) yields $$ M_1=\left[ \begin{array}{cc} \omega^2p+\bar{q} & \displaystyle{\frac{\widehat{L_N \mathbf{u}^n_\omega} (\omega)}{\|\mathbf{u}^n_\omega\|_2}} \\ \displaystyle{\frac{\overline{\widehat{L_N\mathbf{u}^n_\omega}(\omega)}}{\|\mathbf{u}^n_\omega\|_2}} & R(L_N,\mathbf{u}^n_\omega) \end{array} \right], $$ where $\bar{q}$ is the mean of $q(x)$ on $(0,2\pi)$, $\widehat{L_N \mathbf{u}^n_\omega }(\omega) = \hat{\bf e}_\omega^H L_N\mathbf{u}^n_\omega$ is the Fourier coefficient of the gridfunction $L_N \mathbf{u}^n$ corresponding to the wave number $\omega$, and $R(L_N,\mathbf{u}^n_\omega)=\displaystyle{\frac{\langle \mathbf{u}^n_\omega,L_N \mathbf{u}^n_\omega\rangle}{\langle \mathbf{u}^n_\omega,\mathbf{u}^n_\omega\rangle}}$ is the Rayleigh quotient of $L_N$ and $\mathbf{u}^n_\omega$. As $|\omega |$ increases, assuming the solution is sufficiently regular, the non-diagonal entries of $M_1$ become negligible; that is, $$ M_1 \approx \left[ \begin{array}{cc} \omega^2p+\bar{q} & 0 \\ 0 & R(L_N,\mathbf{u}^n) \end{array} \right]. 
$$ Proceeding with the iteration, and neglecting any terms that are Fourier coefficients or are of lower order in $\omega$, we obtain $$ R_1=L_NX_1-X_1M_1\approx\left[ \begin{array}{cc} \displaystyle{\tilde{\bf q}}\hat{\bf e}_\omega & \displaystyle{\frac{L_N\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}-R(L_N,\mathbf{u}^n_\omega)\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}} \end{array} \right], $$ where $\mathbf{q}$ is a vector consisting of the value of $q(x)$ at the grid points, $\tilde{\bf q} = \mathbf{q} - \bar{q}$, and multiplication of vectors is component-wise. To obtain $X_2$, we perform the $QR$-factorization $R_1 = X_2 B_1$. We note that the $(1,2)$ entry of $B_1$, modulo lower-order terms, is the Fourier coefficient $\hat{v}_1(\omega)$, where $$\mathbf{v}_1 = \tilde{\bf q} \left( \displaystyle{\frac{L_N\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2}-R(L_N,\mathbf{u}^n_\omega)\frac{\mathbf{u}^n_\omega}{\|\mathbf{u}^n_\omega\|_2} } \right).$$ It follows that given sufficient regularity of the solution $u$, in the limit as $|\omega|\rightarrow\infty$, $B_1$, like $B_0$, approaches a diagonal matrix. Continuing this process, it can be seen that every (nonzero) off-diagonal entry of $M_j$ or $B_j$, for $j=1,2,\ldots,$ is a Fourier coefficient of some function that is a differential operator applied to $u$. Therefore, as long as the Fourier coefficients of $u$ decay to zero at a sufficiently high rate as $|\omega|\rightarrow\infty$, these off-diagonal entries will also decay to zero. It follows that in this high-frequency limit, the block tridiagonal matrix $\mathcal{T}_K$ produced by block Lanczos applied to $R_0$ as defined above converges to the matrix that would be obtained by applying ``non-block'' Lanczos iteration to the two columns of $R_0$ separately, and then alternating rows and columns of the tridiagonal matrices produced by these iterations. 
Therefore, by reordering the rows and columns of $\mathcal{T}_K$ in such a way that odd-numbered and even-numbered rows and columns are grouped together, as shown here in the $6\times 6$ case, $$ \left[ \begin{array}{cccccc} \times & & \times & & & \\ & \times & & \times & & \\ \times & & \times & & \times & \\ & \times & & \times & & \times \\ & & \times & & \times & \\ & & & \times & & \times \end{array} \right] \quad \longrightarrow \quad \left[ \begin{array}{cccccc} \times & \times & & & & \\ \times & \times & \times & & & \\ & \times & \times & & & \\ & & & \times & \times & \\ & & & \times & \times & \times \\ & & & & \times & \times \end{array} \right] $$ we find that the eigenvalue problem for this matrix decouples, and the block Gaussian quadrature nodes can be obtained by computing the eigenvalues of these smaller, tridiagonal matrices. For finite $\omega$, we can then use ``non-block'' Lanczos to at least estimate the true block Gaussian quadrature nodes. As a consequence of this decoupling, we can obtain approximations of half of the block Gaussian quadrature nodes for all Fourier components by applying ``non-block'' Lanczos iteration to the matrix $L_N$ with initial vector $\mathbf{u}^n$, the computed solution, as is done in Krylov subspace methods such as those described in [39,40,41]. These approximate nodes need only be computed once per time step, as they are independent of the frequency. To estimate the other half of the nodes, we can perform an asymptotic analysis of Lanczos iteration applied to $L_N$ with initial vector $\hat{\bf e}_\omega$, and express the approximate nodes in terms of the coefficients of the differential operator $L$. To illustrate this process, we continue with our example of solving $u_t + Lu = 0$ with $Lu = pu_{xx} + q(x)u$, where $p$ is constant. 
Carrying out three iterations, which corresponds to a fifth-order accurate KSS method for a parabolic PDE, we obtain the following recursion coefficients, after neglecting lower-order terms: [69] $$ \left[ \begin{array}{ccc} \alpha_1 & \overline{\beta_1} & 0 \\ \beta_1 & \alpha_2 & \overline{\beta_2} \\ 0 & \beta_2 & \alpha_3 \end{array} \right] \approx \left[ \begin{array}{ccc} p\omega^2 & \|\tilde{\bf q}\|_2 & 0 \\ \|\tilde{\bf q}\|_2 & p\omega^2 & 2p|\omega|\|\mathbf{q}_x\|_2/\|\tilde{\bf q}\|_2 \\ 0 & 2p|\omega|\|\mathbf{q}_x\|_2/\|\tilde{\bf q}\|_2 & p\omega^2 \end{array} \right].$$ It follows that the nodes can easily be estimated as \begin{equation} \lambda_{1,\omega} = p\omega^2, \quad \lambda_{2,\omega}, \lambda_{3,\omega} = p\omega^2 \pm \sqrt{\beta_1^2 + \beta_2^2}. \label{eigest} \end{equation} In [69] it is shown that this approach to estimating quadrature nodes also can be used if the leading coefficient $p$ is not constant. In [9] the approach is generalized further to problems with higher space dimension, other boundary conditions, non-self-adjoint operators, and systems of coupled PDEs. To demonstrate the effectiveness of this approach, we solve the 1-D parabolic problem \begin{equation} u_t - (p(x)u_{x})_x + q(x)u = 0, \quad 0 < x < 2\pi, \quad t > 0, \label{eqparabolic_1_pde} \end{equation} where the coefficients $p(x)$ and $q(x)$, given by \begin{equation} p(x) = 1, \quad q(x) = \frac{4}{3} + \frac{1}{4} \cos x, \label{eqsmooth_coefs_1d} \end{equation} are chosen to be smooth functions. The initial condition is \begin{equation} u(x,0) = 1 + \frac{3}{10}\cos x - \frac{1}{20} \sin 2x, \quad 0 < x < 2\pi, \label{eqsmooth_parabolic_1_ic} \end{equation} and periodic boundary conditions are imposed. We use $N$-point uniform grids, with $N=128,256,512$. The results are shown in Figure 2. 
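The estimates (\ref{eigest}) are inexpensive to evaluate. The following pure-Python sketch (the grid size, helper names, and use of the analytic derivative of $q$ are our own choices, not prescribed by the cited work) computes the three estimated nodes for a given frequency from the coefficients (\ref{eqsmooth_coefs_1d}):

```python
import math

# Grid and coefficients from the example: p constant, q(x) = 4/3 + cos(x)/4
N = 128
p = 1.0
x = [2 * math.pi * j / N for j in range(N)]
q = [4.0 / 3.0 + 0.25 * math.cos(xj) for xj in x]
qx = [-0.25 * math.sin(xj) for xj in x]   # analytic derivative of q

qbar = sum(q) / N                         # mean of q on the grid
qtilde = [qj - qbar for qj in q]          # q with its mean removed

def norm(v):
    return math.sqrt(sum(vj * vj for vj in v))

def estimated_nodes(omega):
    """Estimated nodes lambda_{1,omega}, lambda_{2,omega}, lambda_{3,omega}
    from the asymptotic recursion coefficients
    beta_1 = ||qtilde||_2 and beta_2 = 2 p |omega| ||q_x||_2 / ||qtilde||_2."""
    beta1 = norm(qtilde)
    beta2 = 2 * p * abs(omega) * norm(qx) / norm(qtilde)
    shift = math.sqrt(beta1 ** 2 + beta2 ** 2)
    lam = p * omega ** 2
    return lam, lam + shift, lam - shift

print(estimated_nodes(16))
```

Note that only the two norms need be precomputed; each frequency then costs a constant number of arithmetic operations per time step.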
It can be seen that as $N$ increases, the time needed by (\ref{eqlancapprox}), allowed to run until convergence to a relative error tolerance of $10^{-7}$, to achieve a given level of accuracy is far greater than that required by a 4-node block KSS method with rapid node estimation. For the KSS method, the Krylov subspace dimension is always 4, whereas for (\ref{eqlancapprox}), the maximum number of iterations needed in a time step for $N=128,256,512$ was 21, 35, and 60, respectively. Moreover, the time required by the KSS method scales approximately linearly with $N$.

Figure 2: Estimates of relative error at $t=1$ in the solution of (\ref{eqparabolic_1_pde}), (\ref{eqsmooth_coefs_1d}), (\ref{eqsmooth_parabolic_1_ic}), with periodic boundary conditions, computed by a 4-node block KSS method with rapid node estimation (solid blue curves), and Lanczos iteration as described in (\ref{eqlancapprox}) (dashed red curves) with various time steps. For both methods, the curves, as displayed from left to right, correspond to solutions computed on $N$-point grids for $N=128,256,512$.

To date, this approach to rapidly estimating quadrature nodes has only been applied to specific classes of differential operators with certain boundary conditions [9,69]. In the proposed project, the goal is to develop an algorithm that can automatically carry out analyses of general linear differential operators, specified in terms of their coefficients and differentiation operators. The algorithm will apply to 1-D, 2-D, or 3-D differential operators, with periodic boundary conditions as well as homogeneous Dirichlet, Neumann, or Robin boundary conditions. Systems of coupled PDEs can also be accommodated, using an approach described in [53,60]. All of these avenues of generalization have been explored to an extent in [9]; the task for this project is to extrapolate to other differential operators, based on patterns observed in previous analyses, through at least three iterations of Lanczos or Arnoldi iteration, to produce quadrature nodes for time-stepping methods of up to fifth order.

Generalization to Nonlinear PDEs

KSS methods with one block quadrature node have been successfully applied to nonlinear diffusion equations from image processing [36,38] with first-order accuracy in time. In order to achieve higher-order accuracy, though, in addition to using more nodes, it is also necessary to account for the nonlinearity more carefully than with a simple linearization at each time step. To that end, we turn to exponential propagation iterative (EPI) methods, introduced by Tokman [79]. The idea behind these methods, applied to a nonlinear autonomous system of ODEs of the form \begin{equation} \label{gensys} \frac{d{\bf y}}{dt} = F({\bf y}(t)), \quad {\bf y}(t_0) = {\bf y}_0, \end{equation} is to use a Taylor expansion around ${\bf y}(t_n)$ to obtain an integral form \begin{equation} \label{intform} {\bf y}(t_{n+1}) = {\bf y}(t_n) + [e^{J\Delta t} - I] J^{-1} F({\bf y}(t_n)) + \int_{t_n}^{t_{n+1}} e^{J(t_{n+1}-\tau)} R({\bf y}(\tau))\,d\tau, \end{equation} where $J$ is the Jacobian of $F$ evaluated at ${\bf y}(t_n)$ and $R({\bf y}(t))$ is the nonlinear remainder of the linearization of $F$. The integral term is then approximated numerically, which calls for the approximation of products of matrix functions $\varphi_k(J\Delta t)$ with vectors. Exponential integrator methods approximate these products by generating Krylov subspaces from the vectors, using Arnoldi or Lanczos iteration, and then evaluating the functions on the Hessenberg (or tridiagonal) matrices produced by the iteration. However, when the underlying system of ODEs is stiff, the effectiveness of this approach is reduced, because no low-degree polynomial can approximate the exponential accurately on a large interval.
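The functions $\varphi_k$ are central to these methods. As a small illustration (a standard numerical safeguard, not code from the cited work), the scalar $\varphi_1$ and $\varphi_2$ can be evaluated stably by switching to a truncated Taylor series near the origin, where the closed forms suffer catastrophic cancellation:

```python
import math

# Scalar phi functions used by exponential integrators.  Direct evaluation
# of (e^z - 1)/z loses accuracy for small |z| due to cancellation, so a
# short Taylor expansion is used there instead.
def phi1(z, tol=1e-5):
    if abs(z) < tol:
        return 1.0 + z / 2.0 + z * z / 6.0 + z ** 3 / 24.0
    return (math.exp(z) - 1.0) / z

def phi2(z, tol=1e-5):
    if abs(z) < tol:
        return 0.5 + z / 6.0 + z * z / 24.0 + z ** 3 / 120.0
    return (math.exp(z) - z - 1.0) / (z * z)

print(phi1(0.0), phi2(0.0))   # 1.0 0.5
```

For matrix arguments, these same functions are applied to the Hessenberg or tridiagonal matrices produced by the Krylov iteration, or, in the componentwise setting, to scalar quadrature nodes.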
It is worthwhile to explore whether methods of this type can be made more effective by replacing their Arnoldi- or Lanczos-based approaches to approximating matrix functions with a componentwise approach such as the one used by KSS methods, particularly with its enhancement through appropriate prescription of quadrature nodes. To illustrate the scalability of KSS methods, we compare several versions of EPI methods, as applied to a test problem [9]. The versions differ in the way in which they compute matrix function-vector products of the form $\varphi(A\tau)\mathbf{b}$; the approaches compared are Krylov projection (Krylov-EPI), AKP, Leja interpolation, and a componentwise KSS-based approach (KSS-EPI). All of these approaches are used in the context of a 3rd-order, 2-stage EPI method [79] \begin{eqnarray} Y_1 & = & \mathbf{y}_n + \frac{1}{3}ha_{11} \varphi_1\left(\frac{1}{3}h A\right)F(\mathbf{y}_n), \label{EPI3} \\ \mathbf{y}_{n+1} & = & \mathbf{y}_n + h \varphi_1(hA)F(\mathbf{y}_n) + 3h b_1 \varphi_2(hA)[F(Y_1) - F(\mathbf{y}_n) - A(Y_1 - \mathbf{y}_n)], \label{epi3s2} \nonumber \end{eqnarray} where $a_{11} = 9/4$ and $b_1 = 32/81$, and the nonlinear remainder is $$R(Y_1) = F(Y_1) - F(\mathbf{y}_n) - A(Y_1 - \mathbf{y}_n).$$ For this method, $$\varphi_1(\lambda) = \frac{e^\lambda - 1}{\lambda}, \quad \varphi_2(\lambda) = \frac{e^\lambda- \lambda - 1}{\lambda^2}, \quad \varphi_3(\lambda) = \frac{e^\lambda(6-\lambda)-(6+5\lambda+2\lambda^2)}{\lambda^3}.$$ The test problem is the two-dimensional Allen-Cahn equation \begin{equation} \label{AllenCahn} u_t=\alpha\nabla^2u+u-u^3, \quad x,y\in [0,1], \quad t\in [0,0.2], \end{equation} with $\alpha=0.1$, homogeneous Neumann boundary conditions, and initial condition $$u_0(x,y)=0.4+0.1\cos(2\pi x)\cos(2\pi y).$$ The $\nabla^2$ term is discretized using a centered finite difference. For KSS-EPI, the low-frequency portion $\mathbf{b}_L$ consists of all components with wave numbers $\omega_i \leq 7$, $i=1,2$.
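As a minimal sketch of the scheme (\ref{EPI3}) (scalar only, with function names of our own choosing; the matrix functions $\varphi_k(hA)$ reduce to ordinary scalar evaluations), one step can be written as follows. For a linear $F$ the remainder $R(Y_1)$ vanishes, so a single step reproduces the exact propagator $e^{h\lambda}$, which the final line checks:

```python
import math

def phi1(z): return (math.exp(z) - 1.0) / z
def phi2(z): return (math.exp(z) - z - 1.0) / (z * z)

def epi3_step(F, A, y, h, a11=9.0/4.0, b1=32.0/81.0):
    """One step of the 3rd-order, 2-stage EPI method for a scalar ODE
    y' = F(y), with A = F'(y_n) (the scalar 'Jacobian')."""
    Y1 = y + (h / 3.0) * a11 * phi1(h * A / 3.0) * F(y)
    R1 = F(Y1) - F(y) - A * (Y1 - y)      # nonlinear remainder R(Y1)
    return y + h * phi1(h * A) * F(y) + 3.0 * h * b1 * phi2(h * A) * R1

# Sanity check: for linear F(y) = lam*y, R(Y1) = 0 and one step is exact.
lam, y0, h = -2.0, 1.5, 0.1
y1 = epi3_step(lambda y: lam * y, lam, y0, h)
print(abs(y1 - y0 * math.exp(h * lam)))   # negligibly small (rounding only)
```

In the actual methods, the scalar evaluations $\varphi_k(h\lambda)$ are replaced by approximations of $\varphi_k(hA)\mathbf{b}$, which is precisely where the compared approaches differ.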
From Figure 3, it can be seen that for KSS-EPI, the number of overall iterations (matrix-vector multiplications $+$ FFTs) shows almost no sensitivity to the grid size, compared to Krylov-EPI, AKP and Leja interpolation, all of which exhibit substantial growth as the number of grid points increases.

Figure 3: Average number of matrix-vector products, shown on a logarithmic scale, per matrix function-vector product evaluation for each method when solving the Allen-Cahn equation (\ref{AllenCahn}) using the 3rd-order EPI method (\ref{EPI3}). For KSS-EPI, FFTs are also included. For each method, bars correspond to grid sizes of $N=25,50,150,300$ points per dimension, from left to right. Left plot: $\Delta t = 0.2$. Right plot: $\Delta t = 0.0125$.

The success of this approach relies on identifying the low-frequency part of the solution, since different approaches are used to compute low- and high-frequency components [9]. In the proposed project, an algorithm for adaptively determining an appropriate cut-off, based on the smoothness of the solution, will be developed. In addition, low-frequency analysis of block Lanczos iteration will be performed, similar to the high-frequency analysis described earlier in this section, so that low-frequency components can be computed using KSS methods rather than Krylov projection, thus reducing the Krylov subspace dimension even further.
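Such an algorithm has not yet been specified; one natural criterion, sketched below purely as a hypothetical illustration (the function name and tolerance are our own), is to choose the smallest cut-off frequency beyond which all Fourier coefficient magnitudes fall below a tolerance relative to the largest one, so that smoother solutions (whose coefficients decay faster) yield smaller low-frequency blocks:

```python
def frequency_cutoff(coeff_mags, rtol=1e-3):
    """Pick the smallest wave number K such that every Fourier coefficient
    with wave number > K has magnitude below rtol * (max magnitude).
    coeff_mags[k] is the magnitude for wave number k."""
    threshold = rtol * max(coeff_mags)
    K = len(coeff_mags) - 1
    while K > 0 and coeff_mags[K] < threshold:
        K -= 1
    return K

# Rapidly decaying coefficients (a smooth solution) give a small cutoff.
smooth = [2.0 ** (-k) for k in range(20)]
print(frequency_cutoff(smooth))   # 9
```

A slowly decaying spectrum would instead push the cut-off toward the highest resolved wave number, reverting to Krylov projection for most of the solution.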

Solving PDEs on General Domains

KSS methods are most effective when applied to problems for which Fourier transforms (including Fourier sine and cosine transforms) can be used to efficiently achieve an approximate separation of high- and low-frequency components of the solution, so that appropriate approximate solution operators can be applied to each, even if these components are still coupled. For problems on non-rectangular domains, the lack of periodicity makes the use of the Fourier transform problematic. Fourier continuation (FC) methods [1,7,63,70] enable Fourier series approximation of non-periodic functions without the slow convergence and Gibbs phenomenon that Fourier series expansions of such functions otherwise exhibit. In [1] the FC approach has been combined with explicit time-stepping methods (specifically, fifth-order Runge-Kutta or Adams-Bashforth methods) to solve the compressible Navier-Stokes equations. The resulting methods achieve spectral accuracy in the interior of the domain, and fifth-order accuracy at domain boundaries. The approach was generalized to variable-coefficient PDEs in [70]. As part of the proposed project, FC methods will be combined with KSS methods (as well as EPI methods, for nonlinear problems). The key ingredient in the effectiveness of FC methods is their 1-D differentiation operator, which applies Fourier continuation along the appropriate dimension, employs the FFT to perform the differentiation with spectral accuracy, and then restricts the derivative to the domain. KSS methods function by transforming a Krylov subspace generated by the solution at time $t_n$ into Fourier space, applying frequency-dependent approximations of the solution operator to the Fourier coefficients, and then transforming back to physical space to obtain the solution at time $t_{n+1}$.
In other words, evolution in time is accomplished through the application of pseudodifferential operators ($\psi$DOs) with constant coefficients (namely, the operators represented by the matrices $D_j(\Delta t)$ in (\ref{diagop})), of which differentiation operators are a special case. In the proposed project, the FC methodology can be applied to these $\psi$DOs from (\ref{diagop}) as well. Techniques for rapid application of Fourier integral operators [8] and manipulation of symbols of $\psi$DOs [14] will be employed for the purpose of efficiently expressing these operators as combinations of 1-D operators, to which FC can readily be applied in the same way as it is for 1-D differentiation.
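In the simplest, constant-coefficient case, applying such a $\psi$DO amounts to scaling each Fourier coefficient by a frequency-dependent symbol. The sketch below (our own illustration, not code from the cited work; a naive $O(N^2)$ DFT keeps it self-contained) advances the 1-D heat equation $u_t = u_{xx}$ with periodic boundary conditions exactly, by multiplying each coefficient by $e^{-\omega^2\Delta t}$:

```python
import cmath, math

def dft(u, sign):
    """Unnormalized discrete Fourier transform (naive O(N^2) version)."""
    N = len(u)
    return [sum(u[j] * cmath.exp(sign * 2j * math.pi * k * j / N)
                for j in range(N)) for k in range(N)]

def heat_step(u, dt):
    """Transform, scale each coefficient by exp(-omega^2 dt), transform back."""
    N = len(u)
    uhat = dft(u, -1)
    for k in range(N):
        omega = k if k <= N // 2 else k - N   # signed wave number
        uhat[k] *= math.exp(-omega ** 2 * dt)
    return [z.real / N for z in dft(uhat, +1)]

# A single Fourier mode cos(x) decays by exactly exp(-dt) in one step.
N, dt = 16, 0.1
u0 = [math.cos(2 * math.pi * j / N) for j in range(N)]
u1 = heat_step(u0, dt)
print(abs(u1[0] - math.exp(-dt)))   # negligibly small (rounding only)
```

In KSS methods, the constant scaling factors $e^{-\omega^2\Delta t}$ are replaced by the frequency-dependent quadrature approximations encoded in $D_j(\Delta t)$; FC supplies the periodic extension needed to use this machinery on non-periodic problems.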

Public Domain Software

KSS methods are difficult to implement efficiently due to their componentwise nature. Therefore, to encourage adoption by researchers throughout science and engineering who need to solve PDEs, and further development by researchers in computational mathematics, the PI will disseminate public domain software that efficiently implements KSS methods for a variety of PDEs. This software will be designed to take advantage of parallel and vector architectures, especially for the computation of frequency-dependent quadrature nodes. It will also incorporate features commonly included in PDE solvers such as error estimation and adaptive time step selection. For error estimation, higher-order quadrature rules can easily be used. The PI's substantial background in computer science, which includes teaching graduate-level courses in programming and large-scale computing, and years of experience as a software engineer in industry, will guide the design and implementation of the software to ensure that an efficient and robust software package is produced.

5 Educational Activities

The goals of the educational activities are threefold, addressing three different audiences: (1) academic and industrial mathematicians and computer scientists, (2) undergraduate and graduate students at the University of Southern Mississippi, including students from under-represented groups, and (3) high school students, teachers, and counselors.

Academic and Industrial Mathematicians and Computer Scientists

The PI will broadly disseminate the results of the research described in Section 4 to the scientific computing community, in order to raise awareness of the potential of techniques from ``matrices, moments and quadrature''. This dissemination will take place through several mechanisms, including conference presentations and the public release of software.

Undergraduate and Graduate Students at USM

A major component of the project is the inclusion of USM students as active participants in the proposed research. The PI requests funding for two graduate students per year, as part of an ongoing effort to establish a comprehensive research program built around KSS methods that can make progress along paths pertaining to either specific applications or general analysis. The Department of Mathematics at USM is devoted to enhancing undergraduate research. As such, the active involvement of undergraduate students will also be sought, particularly for working with new code, as the topic is naturally appealing and can be used to stimulate students' interest in a variety of applied problems and mathematical techniques. Many USM students come from groups that are underrepresented in the mathematical sciences. Table 1, using data obtained from [66], shows that not only are women and African-Americans poorly represented in the mathematical sciences, but their representation is, for the most part, declining.

Category                     Women (2000)  Women (2012)  African-American (2000)  African-American (2012)
Mathematics and Statistics       47.8          43.1              7.7                      5.4
Computer Science                 28.0          18.2              9.5                     10.6
All fields                       57.3          57.4              8.3                      9.9
Table 1: Percentages of all bachelor's degrees awarded to women and African-Americans, by field, in 2000 and 2012.

On the other hand, at USM, approximately 61% of the students are women, while 27% are African-American. As a significant percentage of mathematics majors come from these groups (for example, 80% of seniors graduating in 2009 with bachelor's degrees in mathematics were women), the PI will strongly encourage such students to participate in the project. With the support of requested travel funds, graduate and undergraduate students will be encouraged to participate in conferences such as the AMS Annual Meeting and Southeastern Sectional Meetings, the SIAM Annual Meeting, the International Conference on Spectral and High-Order Methods (ICOSAHOM), and SIAM section meetings. The PI will design special topics courses based on the proposed research, in order to acquaint students with the fundamentals of spectral methods, orthogonal polynomials, and Krylov subspace methods. New courses and modifications of existing courses, as well as their broader impacts on the curriculum, will be assessed through course evaluations completed by students, and through the exit and alumni surveys of mathematics majors that are currently administered by the department. A learning seminar series based on the project will also be organized, with participating students, the PI, and guest lecturers among the speakers, with travel for visitors supported by a combination of project and department funds.

Mississippi High School Students, Teachers, and Counselors

It is important that students about to select their majors be aware of career opportunities that are available through the study of computational mathematics. As many students choose their majors either shortly before or during their freshman year, outreach should be performed for high school students as well. The PI will provide six lessons for the high school calculus and AP calculus classes at three regional high schools: Presbyterian Christian High School in Hattiesburg, Gulfport High School, and Jim Hill High School in Jackson. Some of the proposed topics lend themselves to hands-on classroom explorations; others, while not as readily suited to classroom exercises for high school students, are feasible for in-class demonstrations by the PI, aided by MATLAB. Given the large proportion of community and junior college transfer students who attend USM, the PI will also work with faculty at area institutions in southern Mississippi and provide similar lessons at a higher level, but addressing the same topics and applications. Beyond southern Mississippi, an ideal venue for such outreach is the Mississippi School for Mathematics and Science (MSMS), which offers advanced courses in mathematics to academically gifted juniors and seniors. The PI will coordinate with the MSMS administration to communicate with the students in a background-appropriate manner about the importance of computational mathematics in science and engineering, including information about possible career paths. This outreach is not intended solely for the students at these institutions; it is also essential that teachers are informed of the significance of computational mathematics, including its central role within science and engineering and the career opportunities that it can provide students, so that they can more effectively train and guide their students who express interest in mathematics.
In addition to exposing students to interesting content, it is also critical that students learn about potential careers in mathematics and computational science. Beyond the classroom activities using MATLAB, the PI will provide examples of real-world and industrial applications, including a brief overview that will be accessible to high school students and materials for the students to review at home, with information about fields of study and associated career paths. Workforce development and recruitment of U.S. students into these fields have proven challenging over the past decades. In addition to working with students and their teachers, the PI will also meet with guidance counselors at each school, provide a brief presentation about career paths, and distribute a flyer that can be provided to their college-bound students. The PI will conduct pre- and post-activity surveys of participating students to evaluate the effectiveness of the classroom explorations, students' gains in awareness of career paths, and changes in their general level of interest.

6 Project Timeline

The first phase of the project will be devoted to the development of algorithms for rapid estimation of quadrature nodes for general operators, and to KSS-EPI methods for nonlinear PDEs. The second phase of the project will be devoted to convergence analysis of KSS-EPI methods, as well as combinations of KSS methods with Fourier continuation methods in MATLAB for hyperbolic PDEs and nonlinear PDEs. The combined method already developed for linear parabolic PDEs will be adapted to handle the markedly different behavior of these problems. The final phase of the project will begin with a thorough convergence analysis, both spatial and temporal, of the newly developed and implemented KSS-FC methods. This phase will also focus on the creation of an implementation that will be conducive to vectorization and parallelization. Code from each of the major phases of the project will be posted to MATLAB Central. The research will be primarily carried out by a graduate student under the supervision of the PI, while an undergraduate student will assist with programming tasks and numerical experimentation. As for educational activities, visits to each high school will take place annually. A learning seminar on ``matrices, moments and quadrature'' will be held in the department each fall semester. Research will be presented at the SIAM Annual Meeting and SIAM section meetings each year, as well as the biennial ICOSAHOM meeting.

7 Summary

The proposed project was inspired by the PI's career-defining ambition to develop methods for variable-coefficient PDEs that possess, as much as possible, the desirable properties of spectral methods for constant-coefficient PDEs on rectangular domains with periodic boundary conditions: unconditional stability and high accuracy due to the ability to compute components of the solution independently through diagonalization of the spatial differential operator. While these properties are out of reach for problems that are not so unrealistically ``nice'', nonetheless, the evolution of KSS methods has been fruitful due to the incorporation of ideas beyond numerical methods for PDEs. It is the PI's intent that the proposed project, either directly through its research results or indirectly through its educational activities, will inspire present and future researchers in computational mathematics to adopt a similarly broad perspective in their own quests to advance the discipline.


  1. Albin, N., Bruno, O. P.: A spectral FC solver for the compressible Navier-Stokes equations in general domains I: Explicit time-stepping. Journal of Computational Physics 230 (2011) 6248-6270.
  2. Atkinson, K.: An Introduction to Numerical Analysis, 2nd Ed. Wiley (1989)
  3. Bai, Z., Golub, G. H.: Bounds for the trace of the inverse and the determinant of symmetric positive definite matrices, Annals Numer. Math. 4 (1997) 29-38.
  4. Bai, Z., Golub, G. H.: Some unusual matrix eigenvalue problems. Proceedings of Vecpar '98--Third International Conference for Vector and Parallel Processing J. Palma, J. Dongarra and V. Hernandez Eds., Springer (1999) 4-19.
  5. Bergamaschi, L., Caliari, M., Martínez, A., Vianello, M.: Comparing Leja and Krylov approximations of large scale matrix exponentials. Lecture Notes in Comput. Sci., 6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, Part IV, Alexandrov, V. N., van Albada, G. D., Sloot, P. M. A., Dongarra, J. (eds.), Springer (2006) 685-692.
  6. Boscheri, W., Dumbser, M., Zanotti, O.: High order cell-centered Lagrangian-type finite volume schemes with time-accurate local time stepping on unstructured triangular meshes. J. Comput. Phys. 291 (2015) 120-150.
  7. Bruno, O. P., Lyon, M.: High-order unconditionally stable FC-AD solvers for general smooth domains I. Basic elements. Journal of Computational Physics 229(6) (2010) 2009-2033.
  8. Candes, E., Demanet, L., Ying, L.: Fast Computation of Fourier Integral Operators. SIAM J. Sci. Comput. 29(6) (2007) 2464-2493.
  9. Cibotarica, A., Lambers, J. V., Palchak, E. M.: Solution of Nonlinear Time-Dependent PDEs Through Componentwise Approximation of Matrix Functions. Submitted.
  10. Coquel, F., Nguyen, Q. L., Postel, M., Tran, Q. H.: Local time stepping applied to implicit-explicit methods for hyperbolic systems. Multiscale Model. Simul. 8(2) (2009/10) 540-570.
  11. Dahlquist, G., Eisenstat, S. C., Golub, G. H.: Bounds for the error of linear systems of equations using the theory of moments. J. Math. Anal. Appl. 37 (1972) 151-166.
  12. Davis, P., Rabinowitz, P.: Methods of Numerical Integration, 2nd Ed. Academic Press (1984)
  13. Dong, S., Liu, K.: Stochastic estimation with $Z_2$ noise. Phys. Lett. B 328 (1994) 130-136.
  14. Demanet, L., Ying, L.: Discrete Symbol Calculus. SIAM Review 53 (2011) 71-104.
  15. Demirel, A., Niegemann, J., Busch, K., Hochbruck, M.: Efficient multiple time-stepping algorithms of higher order. J. Comput. Phys. 285 (2015) 133-148.
  16. Diaz, J., Grote, M. J.: Energy conserving explicit local time stepping for second-order wave equations. SIAM J. Sci. Comput. 31(3) (2009) 1985-2014.
  17. Domingues, M. O., Gomes, S. M., Roussel, O., Schneider, K.: An adaptive multiresolution scheme with local time stepping for evolutionary PDEs. J. Comput. Phys. 227(8) (2008) 3758-3780.
  18. Druskin, V., Knizhnerman, L., Zaslavsky, M.: Solution of large scale evolutionary problems using rational Krylov subspaces with optimized shifts. SIAM J. Sci. Comput. 31 (2009), 3760-3780.
  19. El Soueidy, Ch. P., Younes, A., Ackerer, P.: Solving the advection-diffusion equation on unstructured meshes with discontinuous/mixed finite elements and a local time stepping procedure. Internat. J. Numer. Methods Engrg. 79(9) (2009) 1068-1093.
  20. Elhay, S., Golub, G. H., Kautsky, J.: Updating and downdating of orthogonal polynomials with data fitting applications. SIAM J. Matrix. Anal. 12(2) (1991) 327-353.
  21. Escribano, B., Akhmatskaya, E., Reich, S., Azpiroz, J. M.: Multiple-time-stepping generalized hybrid Monte Carlo methods. J. Comput. Phys. 280 (2015), 1-20.
  22. Ezziani, A., Joly, P.: Local time stepping and discontinuous Galerkin methods for symmetric first order hyperbolic systems. J. Comput. Appl. Math. 234(6) (2010) 1886-1895.
  23. Gassner, G. J., Hindenlang, F., Munz, C.-D. A Runge-Kutta based discontinuous Galerkin method with time accurate local time stepping. Adaptive high-order methods in computational fluid dynamics, Adv. Comput. Fluid Dyn. 2, World Sci. Publ., Hackensack, NJ (2011) 95-118.
  24. Gautschi, W.: Construction of Gauss-Christoffel quadrature formulas. Math. Comp. 22 (1968) 251-270.
  25. Gautschi, W.: Orthogonal polynomials--constructive theory and applications. J. of Comp. and Appl. Math. 12/13 (1985) 61-76.
  26. Georgiev, K., Kosturski, N., Margenov, S., Starý, J.: On adaptive time stepping for large-scale parabolic problems: computer simulation of heat and mass transfer in vacuum freeze-drying. J. Comput. Appl. Math. 226(2) (2009) 268-274.
  27. Golub, G. H.: Some modified matrix eigenvalue problems. SIAM Review 15 (1973) 318-334.
  28. Golub, G. H.: Bounds for matrix moments. Rocky Mnt. J. of Math. 4 (1974) 207-211.
  29. Golub, G. H., Meurant, G.: Matrices, Moments and Quadrature. Proceedings of the 15th Dundee Conference, June-July 1993, Griffiths, D. F., Watson, G. A. (eds.), Longman Scientific & Technical (1994)
  30. Golub, G. H., Stoll, M., Wathen, A.: Approximation of the scattering amplitude. Electron. T. Numer. Ana. 31 (2008) 178-203.
  31. Golub, G. H., Underwood, R.: The block Lanczos method for computing eigenvalues. Mathematical Software III, J. Rice Ed., (1977) 361-377.
  32. Golub, G. H., Welsch, J.: Calculation of Gauss Quadrature Rules. Math. Comp. 23 (1969) 221-230.
  33. Gresho, P. M., Griffiths, D. F., Silvester, D. J.: Adaptive time-stepping for incompressible flow. I. Scalar advection-diffusion. SIAM J. Sci. Comput. 30(4) (2008) 2018-2054.
  34. Grote, M. J., Mehlin, M., Mitkova, T.: Runge-Kutta-based explicit local time-stepping methods for wave propagation. SIAM J. Sci. Comput. 37(2) (2015) A747-A775.
  35. Grote, M. J., Mitkova, T.: Explicit local time-stepping methods for Maxwell's equations. J. Comput. Appl. Math. 234(12) (2010) 3283-3302.
  36. Guidotti, P., Kim, Y, Lambers, J. V.: Image Restoration with a New Class of Forward-Backward Diffusion Equations of Perona-Malik Type with Applications to Satellite Image Enhancement. SIAM J. Imag. Sci. 6(3) (2013) 1416-1444.
  37. Guidotti, P., Lambers, J. V., Sølna, K.: Analysis of 1-D Wave Propagation in Inhomogeneous Media. Numer. Funct. Anal. Opt. 27 (2006) 25-55.
  38. Guidotti, P., Longo, K.: Two Enhanced Fourth Order Diffusion Models for Image Denoising. J. Math. Imag. Vis. 40 (2011) 188-198.
  39. Hochbruck, M., Lubich, C.: A Gautschi-type method for oscillatory second-order differential equations, Numer. Math. 83 (1999) 403-426.
  40. Hochbruck, M., Lubich, C.: On Krylov Subspace Approximations to the Matrix Exponential Operator. SIAM J. Numer. Anal. 34 (1996) 1911-1925.
  41. Hochbruck, M., Lubich, C., Selhofer, H.: Exponential Integrators for Large Systems of Differential Equations. SIAM J. Sci. Comput. 19 (1998) 1552-1574.
  42. Ilie, S., Jackson, K. R., Enright, W. H.: Adaptive time-stepping for the strong numerical solution of stochastic differential equations. Numer. Algorithms 68(4) (2015) 791-812.
  43. Jannoun, G., Hachem, E., Veysset, J., Coupez, T.: Anisotropic meshing with time-stepping control for unsteady convection-dominated problems. Appl. Math. Model. 39(7) (2015) 1899-1916.
  44. Jansson, J., Logg, A.: Algorithms and data structures for multi-adaptive time-stepping. ACM Trans. Math. Software 35(3) (2008) Art. 17, 24 pp.
  45. Kay, D. A., Gresho, P. M., Griffiths, D. F., Silvester, D. J.: Adaptive time-stepping for incompressible flow. II. Navier-Stokes equations. SIAM J. Sci. Comput. 32(1) (2010) 111-128.
  46. Knoll, D. A., Keyes, D. E.: Jacobian-free Newton-Krylov methods: a survey of approaches and applications. J. Comput. Phys. 193 (2004) 357-397.
  47. Krivodonova, L.: An efficient local time-stepping scheme for solution of nonlinear conservation laws. J. Comput. Phys. 229(22) (2010) 8537-8551.
  48. Lai, J., Huang, J.: An adaptive linear time stepping algorithm for second-order linear evolution problems. Int. J. Numer. Anal. Model. 12(2) (2015) 230-253.
  49. Lambers, J. V.: Derivation of High-Order Spectral Methods for Time-dependent PDE using Modified Moments. Electron. T. Numer. Ana. 28 (2008) 114-135.
  50. Lambers, J. V.: Enhancement of Krylov Subspace Spectral Methods by Block Lanczos Iteration. Electron. T. Numer. Ana. 31 (2008) 86-109.
  51. Lambers, J. V.: Explicit High-Order Time-Stepping Based on Componentwise Application of Asymptotic Block Lanczos Iteration. Num. Lin. Alg. Appl. 19 (2012) 970-991.
  52. Lambers, J. V.: An Explicit, Stable, High-Order Spectral Method for the Wave Equation Based on Block Gaussian Quadrature. IAENG Journal of Applied Mathematics 38 (2008) 333-348.
  53. Lambers, J. V.: Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments. International Journal of Computational Science 2 (2008) 376-401.
  54. Lambers, J. V.: Krylov Subspace Methods for Variable-Coefficient Initial-Boundary Value Problems. Ph.D. Thesis, Stanford University (2003).
  55. Lambers, J. V.: Krylov Subspace Spectral Methods for the Time-Dependent Schrödinger Equation with Non-Smooth Potentials. Numer. Algorithms 51 (2009) 239-280.
  56. Lambers, J. V.: Krylov Subspace Spectral Methods for Variable-Coefficient Initial-Boundary Value Problems. Electron. T. Numer. Ana. 20 (2005) 212-234.
  57. Lambers, J. V.: A Multigrid Block Krylov Subspace Spectral Method for Variable-Coefficient Elliptic PDE. IAENG Journal of Applied Mathematics 39 (2009) 236-246.
  58. Lambers, J. V.: Practical Implementation of Krylov Subspace Spectral Methods. J. Sci. Comput. 32 (2007) 449-476.
  59. Lambers, J. V.: Spectral Methods for Time-dependent Variable-coefficient PDE Based on Block Gaussian Quadrature. Proceedings of the 2009 International Conference on Spectral and High-Order Methods (2010) in press.
  60. Lambers, J. V.: A Spectral Time-Domain Method for Computational Electrodynamics. Adv. Appl. Math. Mech. 1(6) (2009) 781-798.
  61. Loffeld, J., Tokman, M.: Comparative Performance of Exponential, Implicit and Explicit Integrators for Stiff Systems of ODEs. J. Comp. Appl. Math. 241 (2013) 45-67.
  62. Lörcher, F., Gassner, G., Munz, C.-D.: An explicit discontinuous Galerkin scheme with local time-stepping for general unsteady diffusion equations. J. Comput. Phys. 227(11) (2008) 5649-5670.
  63. Lyon, M., Bruno, O. P.: High-order unconditionally stable FC-AD solvers for general smooth domains II. Elliptic, parabolic and hyperbolic PDEs; theoretical considerations. Journal of Computational Physics 229 (2010) 3358-3381.
  64. Meurant, G.: The computation of bounds for the norm of the error in the conjugate gradient algorithm. Numer. Algorithms 16 (1997) 77-87.
  65. Moret, I., Novati, P.: RD-rational approximation of the matrix exponential operator. BIT 44 (2004) 595-615.
  66. National Science Foundation, Division of Science Resources Statistics: Women, Minorities, and Persons with Disabilities in Science and Engineering: 2009. NSF 09-305, Arlington, VA (2009).
  67. Ortner, B.: On the selection of measurement directions in second-rank tensor (e.g. elastic strain) determination of single crystals. J. Appl. Cryst. 22 (1989) 216-221.
  68. Ortner, B., Kräuter, A. R.: Lower bounds for the determinant and the trace of a class of Hermitian matrices. Linear Alg. Appl. 326 (1996) 147-180.
  69. Palchak, E. M., Cibotarica, A., Lambers, J. V.: Solution of Time-Dependent PDE Through Rapid Estimation of Block Gaussian Quadrature Nodes. Linear Alg. Appl. 468 (2015) 233-259.
  70. Bruno, O. P., Prieto, A.: Spatially dispersionless, unconditionally stable FC-AD solvers for variable-coefficient PDEs. J. Sci. Comput. 58(2) (2014) 331-366.
  71. Qiao, Z., Zhang, Z., Tang, T.: An adaptive time-stepping strategy for the molecular beam epitaxy models. SIAM J. Sci. Comput. 33(3) (2011) 1395-1414.
  72. Rainwater, G., Tokman, M.: A new class of split exponential propagation iterative methods of Runge-Kutta type (sEPIRK) for semilinear systems of ODEs. J. Comput. Phys. 269 (2014) 40-60.
  73. Sack, R. A., Donovan, A.: An algorithm for Gaussian quadrature given modified moments. Numer. Math. 18(5) (1972) 465-478.
  74. Sapoval, B., Gobron, T., Margolina, A.: Vibrations of fractal drums. Phys. Rev. Lett. 67 (1991) 2974-2977.
  75. Sexton, J. C., Weingarten, D. H.: Systematic expansion for full QCD based on the valence approximation. Report IBM T. J. Watson Research Center (1994).
  76. Shampine, L. F., Reichelt, M. W.: The MATLAB ODE Suite. SIAM J. Sci. Comput. 18 (1997) 1-22.
  77. Su, Z.: Computational Methods for Least Squares Problems and Clinical Trials. Ph.D. Thesis, Stanford University (2005).
  78. Tirupathi, S., Hesthaven, J. S., Liang, Y., Parmentier, M.: Multilevel and local time-stepping discontinuous Galerkin methods for magma dynamics. Comput. Geosci. 19(4) (2015) 965-978.
  79. Tokman, M.: Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods. J. Comput. Phys. 213 (2006) 748-776.
  80. Tokman, M.: A new class of exponential propagation iterative methods of Runge-Kutta type (EPIRK). J. Comput. Phys. 230 (2011) 8762-8778.
  81. van den Eshof, J., Hochbruck, M.: Preconditioning Lanczos approximations to the matrix exponential. SIAM J. Sci. Comput. 27 (2006) 1438-1457.
  82. Wu, S. Y., Cocks, J. A., Jayanthi, C. S.: An accelerated inversion algorithm using the resolvent matrix method. Comput. Phys. Commun. 71 (1992) 15-133.
  83. Zhang, P., Zhang, N., Deng, Y., Bluestein, D.: A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma. J. Comput. Phys. 284 (2015) 668-686.
  84. Zhang, Z., Qiao, Z.: An adaptive time-stepping strategy for the Cahn-Hilliard equation. Commun. Comput. Phys. 11(4) (2012) 1261-1278.