Reduced-Order Modeling

I thought I would give a short description of a neat “applied” technique for reducing the computational complexity of solving time-dependent linear PDEs. I was first introduced to this concept in a presentation given by Doug James of Cornell.

Essential Concepts

For this article, we will consider a general time-dependent linear PDE. The technique may extend to other classes of problems, but as far as I know… your mileage may vary. A general PDE of this type has the form

$\frac{\partial}{\partial t}\varphi = \mathcal{L}\varphi$

where $\mathcal{L}$ is a linear operator acting only on the spatial variables.

Examples of this kind of PDE include…

Diffusion Equation: $\frac{\partial}{\partial t}\varphi = \nabla^2 \varphi$

Wave Equation: $\frac{\partial^2}{\partial t^2}\varphi = \nabla^2 \varphi$ (second order in time, but it can be put into the first-order form above by augmenting the state with the velocity $\partial\varphi/\partial t$)

Inevitably, when we go to simulate a process like this, we will end up with a discretized version of this system, which will take on the matrix form

$\frac{\partial}{\partial t}\B u = \B{Lu}$

where $\B u$ is the vector of unknowns representing the discrete approximation to $\varphi$ and $\B L$ is the discrete approximation to the linear operator. As an example, in the case of the diffusion system, $\B L$ would be a discrete approximation to $\nabla^2$.
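
To make this concrete, here is a minimal Octave/Matlab sketch that assembles such an $\B L$ for the 1D diffusion case: a standard second-order finite-difference approximation to $\nabla^2$ on a periodic grid. The grid size and spacing here are illustrative assumptions, not choices from the original code.

```matlab
% Minimal sketch: a 1D finite-difference Laplacian on a periodic
% grid of n points with spacing h (both values are illustrative).
n = 256;                        % number of grid points (assumed)
h = 1 / n;                      % grid spacing (assumed)
e = ones(n, 1);
% Second-order stencil [1 -2 1] / h^2 with periodic wrap-around.
L = spdiags([e, -2*e, e], [-1, 0, 1], n, n);
L(1, n) = 1;                    % periodic boundary terms
L(n, 1) = 1;
L = L / h^2;
```

Advancing the full system then costs one sparse matrix–vector product per time step, which is precisely the cost the reduction below sidesteps.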

Now, the only “problem” with advancing a detailed system like this forward in time is that as the number of unknowns increases (or, equivalently, as the resolution increases), the cost of evaluating $\B {Lu}$ increases, even if $\B L$ is sparse. We can cheat a little bit here…

Assume that our solution $\B u$ takes on the form

$\B u(\B x, t) = \sum_{i=1}^k \B \phi_i(\B x)\, w_i(t)$

so that the spatial part of the solution is separated from the time component. That is, the solution is given by a set of fixed “shapes” ($\B\phi_i$) combined in a weighted ($w_i$) sum, with the weights varying over time. An equivalent way of writing this is

$\B u = \sum_{i=1}^k \B \phi_i w_i = \B \Phi \B w$

If we substitute this form into the original PDE, we arrive at

$\frac{\partial}{\partial t}\B \Phi \B w = \B L \B \Phi \B w$

$\B \Phi \frac{\partial}{\partial t}\B w = \B L \B \Phi \B w$

where we have moved the time derivative onto the weights because the $\B \phi_i$ basis is constant in time.

So now we will make one more assumption about our shape functions $\B\phi_i$, which form a basis for our solution: we will assume that the $\B\phi_i$ form an orthonormal set. That is, $\B \phi_i^\top \B \phi_j = \delta_{ij}$, which also means $\B \Phi^\top \B \Phi = \B I$. In that case, we can take an inner product with the basis on both sides:

$\B \Phi^\top \B \Phi \frac{\partial}{\partial t}\B w = \B \Phi^\top \B L \B \Phi \B w$

$\frac{\partial}{\partial t}\B w = \tilde{\B L} \B w$

where $\tilde{\B L} = \B \Phi^\top \B L \B \Phi$ is the dimensionality-reduced operator. This new system has only $k$ degrees of freedom, compared to the potentially much higher dimensionality of the original system, and it can be advanced in time using the usual methods.
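
In code, the projection and time stepping take only a few lines. Here is a sketch, assuming an orthonormal basis matrix Phi is already in hand (its construction is the subject of the next section), along with the operator L from earlier and an initial condition u0; forward Euler is used purely for illustration.

```matlab
% Sketch: project onto the basis once, then step the k-dimensional system.
% Assumes Phi (n-by-k, orthonormal columns), L (n-by-n), and u0 exist.
Lt = Phi' * L * Phi;            % reduced operator, k-by-k (computed once)
w  = Phi' * u0;                 % project the initial condition
dt = 1e-4;                      % illustrative time step
for step = 1:1000
    w = w + dt * (Lt * w);      % forward Euler on dw/dt = Lt * w
end
u = Phi * w;                    % reconstruct the full-resolution state
```

Note that $\tilde{\B L}$ is dense but only $k \times k$, so each step costs $O(k^2)$ instead of a sparse product over all $n$ unknowns.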

How to choose a good basis…

The major question after this is… How the heck do we choose a basis that has these nice orthonormality properties, while still providing a decent approximation to the original system?

One fairly common approach to computing a good basis requires us to run a high-resolution version of the simulation once and to record snapshots of the system's state vector into the columns of a matrix $\B S$. The singular value decomposition of $\B S$, $\B S = \B U \B \Sigma \B V^\top$, can be computed using off-the-shelf software. The point of computing this SVD is that the first $k$ columns of $\B U$, corresponding to the $k$ largest singular values, provide an orthonormal basis that minimizes the re-projection error of the input data over all choices of $k$-dimensional bases. In the code given below, this is the method used to generate a basis.
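
Here is a minimal sketch of that snapshot/SVD procedure, assuming the snapshots have already been collected into the columns of S (variable names and the choice of k are illustrative):

```matlab
% Sketch: build an orthonormal basis from snapshots of a full-res run.
% S is n-by-m with one recorded state vector per column (assumed given).
k = 8;                           % reduced dimension (illustrative)
[U, Sigma, V] = svd(S, 'econ');  % thin SVD of the snapshot matrix
Phi = U(:, 1:k);                 % columns for the k largest singular values
% Sanity check: Phi' * Phi is (numerically) the k-by-k identity.
```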

Another method gives different results, but may be more applicable in certain situations. If we know the eigenvalues/eigenvectors of the operator, and they form an orthonormal set, a slightly different approach can be taken. A matrix of eigenvectors $\B V$ and a diagonal matrix of eigenvalues $\B \Lambda$ can be substituted into the original system…

$\B{L V} = \B{V \Lambda}$

$\B{u} = \B{V w}$

$\frac{\partial}{\partial t}\B{V w} = \B{L V w} = \B{V \Lambda w}$

$\frac{\partial}{\partial t} \B w = \B{\Lambda w}$

where the last line follows from multiplying both sides by $\B V^\top$ and using $\B V^\top \B V = \B I$. Since $\B \Lambda$ is diagonal, the weights decouple: each evolves independently as $w_i(t) = w_i(0)\, e^{\lambda_i t}$, so the reduced system can even be solved in closed form.
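
A sketch of this eigenbasis approach, assuming $\B L$ is symmetric so that eig returns an orthonormal set of eigenvectors; the mode-selection rule and all variable names are illustrative:

```matlab
% Sketch: reduced model from the eigenbasis of a symmetric operator L.
% Assumes L (n-by-n, symmetric) and an initial condition u0 exist.
[V, Lambda] = eig(full(L));      % columns of V are orthonormal eigenvectors
lambda = diag(Lambda);
k = 8;                           % reduced dimension (illustrative)
[~, idx] = sort(abs(lambda));    % e.g. keep the k slowest-decaying modes
Vk = V(:, idx(1:k));
w0 = Vk' * u0;                   % project the initial condition
t  = 0.1;                        % evaluate directly at any time t
u  = Vk * (exp(lambda(idx(1:k)) * t) .* w0);  % closed-form solution
```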

Does it Really Work?

Yes. Below is a video of a one-dimensional wave equation simulation using the SVD-based basis selection technique, with $k=8$ dimensions in the reduced setting.

Some of the basis functions computed by the SVD:

As a bit of qualitative validation, here is the spectrum of singular values computed from the snapshots of the higher-resolution simulation:

As you can see, most of the important information that explains the solutions seen in the high-resolution simulation is concentrated in the first ~15 basis vectors. The Octave/Matlab code used to compute this simulation and the corresponding reduced basis is available here.

Conclusion

This was a really simple example of how a choice of basis with good properties can be exploited to compute approximate solutions at a much faster rate. If you have ideas for how to expand on this, or other questions, please let me know!
