# Peeter Joot's (OLD) Blog.

## Linearizing a set of regular differential equations.

Posted by peeterjoot on November 18, 2009

[Click here for a PDF of this post with nicer formatting]

# Motivation

A number of discrete multiple particle systems appear to generate coupled differential equations of the following form

\begin{aligned}A z' = b(z)\end{aligned} \quad\quad\quad(1)

where $A = A(z)$ is a matrix, and $b(z)$ is a column vector valued non-linear function. How do we solve such an equation?

# When the matrix is invertible.

Consider the nicely behaved case where $A(z)$ is invertible for all $z$. Then we can write

\begin{aligned}z' = A^{-1} b(z)\end{aligned} \quad\quad\quad(2)

Now with a non-linear function $b$ (like the sines that we have in the pendulum problem from $-\nabla \cos\phi$), we can’t solve this thing easily, but in a small enough neighborhood of a point in phase space we can make a linear approximation.

Suppose our initial phase space point is $z_0$, and we wish to solve for differential displacement from that point $x$, namely $z = z_0 + x$. Then we have for our system

\begin{aligned}x' = {\left.{A^{-1} b(z)}\right\vert}_{z=z_0} + {\left.\begin{bmatrix}\frac{\partial {[A^{-1}b(z)]_1}}{\partial {z_1}} & \frac{\partial {[A^{-1}b(z)]_1}}{\partial {z_2}} & \cdots & \frac{\partial {[A^{-1}b(z)]_1}}{\partial {z_N}} \\ \frac{\partial {[A^{-1}b(z)]_2}}{\partial {z_1}} & \frac{\partial {[A^{-1}b(z)]_2}}{\partial {z_2}} & \cdots & \frac{\partial {[A^{-1}b(z)]_2}}{\partial {z_N}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial {[A^{-1}b(z)]_N}}{\partial {z_1}} & \frac{\partial {[A^{-1}b(z)]_N}}{\partial {z_2}} & \cdots & \frac{\partial {[A^{-1}b(z)]_N}}{\partial {z_N}} \end{bmatrix}\right\vert}_{z = z_0} x\end{aligned} \quad\quad\quad(3)
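As a sketch of this linearization step in code (not from the original post; the function name `linearize` and the forward-difference step size are my own choices), the constant term and the Jacobian matrix can be estimated numerically:

```python
import numpy as np

def linearize(A, b, z0, h=1e-6):
    """Finite-difference linearization of f(z) = A(z)^{-1} b(z) at z0.

    Returns (a, B) so that, writing z = z0 + x, the system
    A(z) z' = b(z) is approximated near z0 by x' = B x + a.
    """
    z0 = np.asarray(z0, dtype=float)
    f = lambda z: np.linalg.solve(A(z), b(z))  # A^{-1} b without forming A^{-1}
    a = f(z0)                                  # constant term: f evaluated at z0
    N = z0.size
    B = np.empty((N, N))
    for j in range(N):                         # Jacobian, one column per coordinate
        dz = np.zeros(N)
        dz[j] = h
        B[:, j] = (f(z0 + dz) - a) / h         # forward difference in direction j
    return a, B
```

For a well-conditioned $A$, `np.linalg.solve` is preferable to explicitly computing `A^{-1}`; an analytic Jacobian would of course be more accurate than these finite differences.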

Now the coefficient matrix is constant, corresponding roughly to a first order Taylor expansion of the original system of equations. Mathematically the problem is reduced to a column vector linear system of the form

\begin{aligned}x' = B x + a\end{aligned} \quad\quad\quad(4)

When $a = 0$ the solution is just

\begin{aligned}x = e^{B t} x(0)\end{aligned} \quad\quad\quad(5)

(where $e^{Bt}$ can be evaluated the usual way, using an eigenvalue similarity transformation, in which the exponential of the inner diagonal factor is simple).
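That evaluation can be sketched as follows (assuming $B$ is diagonalizable; the helper name `expm_eig` is mine, and for real $B$ with complex eigenvalue pairs the tiny imaginary round-off is discarded):

```python
import numpy as np

def expm_eig(B, t):
    """e^{Bt} via an eigenvalue similarity transformation.

    If B = V D V^{-1} with D diagonal, then e^{Bt} = V e^{Dt} V^{-1},
    and e^{Dt} is just the elementwise exponential of the diagonal.
    Assumes B is diagonalizable.
    """
    w, V = np.linalg.eig(B)
    M = (V * np.exp(w * t)) @ np.linalg.inv(V)  # V diag(e^{w t}) V^{-1}
    return np.real_if_close(M)                  # drop negligible imaginary parts
```

For defective or badly conditioned $B$, a Padé-based routine such as SciPy's `scipy.linalg.expm` would be the more robust choice.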

Assuming a particular solution of the form $x = e^{B t} f(t)$ (variation of parameters), substitution into 4 gives $f'(t) = e^{-B t} a$, and integrating yields the specific solution

\begin{aligned}x = e^{B t} \int_0^t e^{-B\tau} a \,d\tau\end{aligned} \quad\quad\quad(6)

So the complete solution (a specific solution plus the homogeneous solution) to the system (4) is found to be

\begin{aligned}x = e^{B t} \left( x(0) + \int_0^t e^{-B\tau} a d\tau \right)\end{aligned} \quad\quad\quad(7)
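As a quick numerical sanity check of this closed form (a sketch, not from the original post; `expm_eig` and `x_closed_form` are my own helper names, and the integral is done by a simple trapezoid rule), one can verify that it satisfies $x' = Bx + a$:

```python
import numpy as np

def expm_eig(B, t):
    # e^{Bt} via eigendecomposition (assumes B diagonalizable)
    w, V = np.linalg.eig(B)
    return np.real_if_close((V * np.exp(w * t)) @ np.linalg.inv(V))

def x_closed_form(B, a, x0, t, n=2000):
    # x(t) = e^{Bt} ( x(0) + \int_0^t e^{-B tau} a dtau ), trapezoid rule
    taus = np.linspace(0.0, t, n)
    integrand = np.array([expm_eig(B, -tau) @ a for tau in taus])
    dtau = t / (n - 1)
    integral = dtau * (integrand[0] / 2
                       + integrand[1:-1].sum(axis=0)
                       + integrand[-1] / 2)
    return expm_eig(B, t) @ (x0 + integral)

# Check x' = B x + a at one point by a central difference.
B = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
a = np.array([1.0, 0.5])
x0 = np.array([0.3, -0.1])
t, h = 0.7, 1e-5
dx = (x_closed_form(B, a, x0, t + h)
      - x_closed_form(B, a, x0, t - h)) / (2 * h)
```

Here `dx` should agree with `B @ x_closed_form(B, a, x0, t) + a` up to discretization error.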

# Thoughts to consider for the non-invertible or ill formed matrix

Now, this works out all nice and pleasantly when $A$ is invertible. What do we do when it is not? Each zero eigenvalue of $A$ looks like it corresponds to a conserved quantity. Manually removing those zeros from the system can reduce things to the form dealt with here. Trickier still is what happens when the matrix is almost singular. For example,

\begin{aligned}\begin{bmatrix}1 & 0 \\ 0 & 0.0000000001\end{bmatrix} z' = b(z)\end{aligned} \quad\quad\quad(8)

This is perfectly invertible mathematically, but numerically you really don’t want to go there, since you’ll end up with garbage. What I think would work for dealing with this sort of system is using SVD (singular value decomposition) techniques to determine an orthonormal basis for all the “non-zero” singular values, according to a reasonable numerical threshold for these zeros.
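A minimal sketch of that idea (my own function name `solve_truncated` and tolerance choice; NumPy's `numpy.linalg.pinv` with its `rcond` parameter does essentially the same thing): singular values below a relative threshold are treated as exact zeros, so near-singular directions are dropped rather than amplified.

```python
import numpy as np

def solve_truncated(A, b, rtol=1e-10):
    """Solve A z = b by truncated SVD: A = U diag(s) V^T.

    Singular values below rtol * s_max are treated as zero, so their
    directions are projected out instead of blowing up the solution.
    """
    U, s, Vt = np.linalg.svd(A)
    tol = rtol * s[0]                          # threshold relative to largest sigma
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vt.T @ (s_inv * (U.T @ b))          # pseudoinverse applied to b

# The nearly singular example above: the tiny direction is dropped,
# not inflated by a factor of 10^10.
A = np.array([[1.0, 0.0], [0.0, 1e-10]])
b = np.array([2.0, 3.0])
z = solve_truncated(A, b)
```

This returns the minimum-norm least-squares solution on the numerically well-determined subspace, which is the standard way to regularize such systems.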

SVD is relatively new to applied mathematics and wasn’t covered in my linear algebra course when I was in school (now over ten years ago). I’ve read of it since and was very impressed with its power and utility (but need more study of it to fully grasp it and its applications). Prof Gilbert Strang covers this in his online lectures (on iTunesU), and I’d recommend them highly.

Exploring methods of solving these sorts of differential equations (in the neighborhood of a phase space point) using the SVD is something I have yet to do in any detail, but intuition says it is relevant.