Motivation
We’re discussing specific solution forms for systems of coupled linear differential equations, such as a loop of spring-connected masses (i.e. atoms interacting through harmonic oscillator potentials) as sketched in fig. 1.1.
Fig 1.1: Three springs loop
Instead of assuming a solution, let’s see how far we can get attacking this problem systematically.
Matrix methods
Suppose that we have a set of masses constrained to a circle interacting with harmonic potentials. The Lagrangian for such a system (using modulo indexing) is
The force equations follow directly from the Euler-Lagrange equations
For the simple three particle system depicted above, this is
with equations of motion
Let’s partially non-dimensionalize this. First introduce average mass and spring constants, and rearrange slightly
With
Our system takes the form
We can at least theoretically solve this in a simple fashion if we first convert it to a first order system. We can do that by augmenting our vector of displacements with their first derivatives
So that
Now the solution is conceptually trivial
We are, however, faced with the task of exponentiating the matrix. All the powers of this matrix will be required, but they turn out to be easy to calculate
allowing us to write out the matrix exponential
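As a concrete numerical sketch of this first order reduction (assuming, for illustration, the three mass loop in units where k/m = 1, so that the interaction matrix is the circulant matrix below, and picking hypothetical initial conditions), we can build the augmented matrix, exponentiate it with SciPy, and verify by finite differences that the propagated displacements satisfy u'' = B u:

```python
import numpy as np
from scipy.linalg import expm

# Assumed interaction matrix for three masses on a loop, k/m = 1
B = np.array([[-2., 1., 1.],
              [1., -2., 1.],
              [1., 1., -2.]])

n = 3
# Augmented first order system: d/dt (u, u') = A (u, u')
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [B, np.zeros((n, n))]])

u0 = np.array([0.1, -0.3, 0.2])    # hypothetical initial displacements
v0 = np.array([0.0, 0.05, -0.05])  # hypothetical initial velocities
z0 = np.concatenate([u0, v0])

t, h = 1.3, 1e-3
u = lambda s: (expm(A * s) @ z0)[:n]  # displacements at time s

# central-difference check that u'' = B u along the propagated solution
acc = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
residual = np.max(np.abs(acc - B @ u(t)))
```

The residual is limited only by the second order accuracy of the central difference, not by the matrix exponential itself.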
Case I: No zero eigenvalues
Provided that has no zero eigenvalues, we could factor this as
This initially leads us to believe the following, but we’ll find that the three-spring interaction matrix does have a zero eigenvalue, so we’ll have to be more careful. For an interaction matrix that did not have such a zero we could simply write
This is
The solution, written out is
so that
As a check, differentiating twice shows that this is in fact the general solution, since we have
and
Observe that this solution is a general solution to second order constant coefficient linear systems of the form we have in eq. 1.5. However, to make it meaningful we do have the additional computational task of performing an eigensystem decomposition of the matrix. We expect negative eigenvalues that will give us oscillatory solutions (i.e. the matrix square roots will have imaginary eigenvalues).
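To see this machinery work in a case where no zero eigenvalue appears, here is a sketch using a hypothetical interaction matrix (the loop matrix with each mass additionally anchored to its rest position, still in k/m = 1 units), comparing the matrix cosine/sine form of the solution against first order propagation:

```python
import numpy as np
from scipy.linalg import cosm, sinm, sqrtm, expm

# Hypothetical interaction matrix with NO zero eigenvalue: the loop
# matrix with each mass also anchored to its rest position (B_loop - I).
# Eigenvalues are -1, -4, -4, all negative.
B = np.array([[-3., 1., 1.],
              [1., -3., 1.],
              [1., 1., -3.]])

Omega = sqrtm(-B)                # real, since -B is positive definite
u0 = np.array([0.2, 0.0, -0.1])  # hypothetical initial displacements
v0 = np.array([0.1, -0.1, 0.0])  # hypothetical initial velocities

t = 0.9
# u(t) = cos(Omega t) u0 + Omega^{-1} sin(Omega t) v0
u = cosm(Omega * t) @ u0 + np.linalg.inv(Omega) @ sinm(Omega * t) @ v0

# cross-check against the augmented first order system
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [B, np.zeros((3, 3))]])
z = expm(A * t) @ np.concatenate([u0, v0])
err = np.max(np.abs(u - z[:3]))
```

Both routes compute the same exact solution, so the discrepancy is at the level of floating point roundoff.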
Example: An example diagonalization to try things out
{example:threeSpringLoop:1}{
Let’s do that diagonalization as an example, for the simplest version of the three springs system, with and , so that we have
An orthonormal eigensystem for is
With
We have
We also find that and its root are intimately related in a surprising way
We also see, unfortunately, that has a zero eigenvalue, so we can’t compute . We’ll have to back up and start again differently.
}
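That zero eigenvalue is easy to confirm numerically. A sketch, again assuming the loop interaction matrix in k/m = 1 units: the spectrum comes out as a doubly degenerate -3 plus a zero, and the zero mode is proportional to (1, 1, 1), a uniform translation of the whole loop:

```python
import numpy as np

# Assumed three-spring loop interaction matrix, k/m = 1
B = np.array([[-2., 1., 1.],
              [1., -2., 1.],
              [1., 1., -2.]])

lam, V = np.linalg.eigh(B)   # ascending eigenvalues: [-3, -3, 0]

# the eigenvector for the (near-)zero eigenvalue
zero_mode = V[:, np.argmin(np.abs(lam))]
```

Each component of the zero mode has magnitude 1/sqrt(3), confirming that the zero eigenvalue corresponds to rigid translation of the lattice.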
Case II: allowing for zero eigenvalues
Now that we realize we have to deal with zero eigenvalues, a different approach suggests itself. Instead of reducing our system using a Hamiltonian transformation to a first order system, let’s utilize that diagonalization directly. Our system is
where and
Let
so that our system is just
or
This is a set of equations, each decoupled and solvable by inspection. Suppose we group the eigenvalues into sets . Our solution is then
Transforming back to lattice coordinates using , we have
We see that the zero-eigenvalue integration terms have no contribution to the lattice coordinates, since , for all .
If are a set of not necessarily orthonormal eigenvectors for , then the vectors , where , are the reciprocal frame vectors. These can be extracted from (i.e., the rows of ). Taking dot products with and provides us with the unknown coefficients
Supposing that we constrain ourselves to looking at just the oscillatory solutions (i.e. the lattice does not shake itself to pieces), then we have
Eigenvectors for eigenvalues that were degenerate have been explicitly enumerated here, something that was previously only implied. Observe that the dot products of the form have been put into projection operator form to group terms more nicely. The solution can be thought of as a weighted sum of projection operators acting as a time evolution operator on the initial state.
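Here is a numerical sketch of this projector form of the solution, once more assuming the k/m = 1 loop interaction matrix and hypothetical initial conditions. The zero mode contributes a uniform drift term, the non-zero modes oscillate, and the assembled result is cross-checked against matrix exponential propagation of the first order system:

```python
import numpy as np
from scipy.linalg import expm

# Assumed three-spring loop interaction matrix, k/m = 1
B = np.array([[-2., 1., 1.],
              [1., -2., 1.],
              [1., 1., -2.]])
lam, V = np.linalg.eigh(B)        # orthonormal eigensystem

u0 = np.array([0.1, -0.3, 0.2])   # hypothetical initial displacements
v0 = np.array([0.0, 0.05, -0.05]) # hypothetical initial velocities

t = 1.7
u = np.zeros(3)
for j in range(3):
    P = np.outer(V[:, j], V[:, j])     # projector onto eigenvector j
    if abs(lam[j]) < 1e-12:
        u += P @ (u0 + t * v0)         # zero mode: uniform drift
    else:
        w = np.sqrt(-lam[j])           # mode oscillation frequency
        u += P @ (np.cos(w * t) * u0 + np.sin(w * t) / w * v0)

# cross-check against direct propagation of the augmented system
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [B, np.zeros((3, 3))]])
z = expm(A * t) @ np.concatenate([u0, v0])
err = np.max(np.abs(u - z[:3]))
```

The agreement is exact up to roundoff, and note that the projector sum handles the degenerate eigenvalue pair with no special treatment.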
Example: Our example interaction revisited
{example:threeSpringLoop:2}{
Recall the orthonormal basis for the eigensubspace found for the interaction example of eq. 1.20, so . We can sum to find
The leading matrix is an orthonormal projector of the initial conditions onto the eigen subspace for . Observe that this is proportional to itself, scaled by the square of the non-zero eigenvalue of . From this we can confirm by inspection that this is a solution to , as desired.
}
Fourier transform methods
Let’s now try another item from our usual toolbox on these sorts of second order systems, the Fourier transform. For a one variable function of time let’s write the transform pair as
One mass harmonic oscillator
The simplest second order system is that of the harmonic oscillator
Application of the transform gives
We clearly have a constraint that is a function of frequency, but one that has to hold for all time. Let’s transform this constraint to the frequency domain so that we can consider it independent of time.
How do we make sense of this?
Since is an integration variable, we can’t just mandate that it equals the constant driving frequency . It’s clear that we require a constraint on the transform as well. As a trial solution, imagine that
$$\tilde{x}(\omega) = \left\{ \begin{array}{l l} \tilde{x}_{\pm} & \quad \mbox{if $\left\lvert {\omega - \pm \omega_\circ} \right\rvert < \omega_{\text{cutoff}}$} \\ 0 & \quad \mbox{otherwise}\end{array}\right. \hspace{\stretch{1}}(1.0.35.35)$$
This gives us
Now it is clear that we can satisfy our constraint only if the interval is made infinitesimal. Specifically, we require both a constraint and that the transform have a delta function nature. That is
Substitution back into our transform gives
We can verify quickly that this satisfies our harmonic equation .
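That verification can also be done symbolically. A minimal SymPy check that the recovered time domain form, a combination of cosine and sine at the natural frequency, satisfies the harmonic equation x'' + omega_0^2 x = 0:

```python
import sympy as sp

t, w0, a, b = sp.symbols('t omega_0 a b', real=True)

# the recovered general solution: a cos(omega_0 t) + b sin(omega_0 t)
x = a * sp.cos(w0 * t) + b * sp.sin(w0 * t)

# substitute into the harmonic equation x'' + omega_0**2 x
residual = sp.simplify(sp.diff(x, t, 2) + w0**2 * x)
```

The residual simplifies to zero identically, for arbitrary amplitudes a and b.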
Two mass harmonic oscillator
Having applied the transform technique to the very simplest second order system, we can now consider the next more complex system, that of two harmonically interacting masses (i.e. two masses connected by a spring, moving on a frictionless surface).
Fig: Two harmonically interacting masses
Our system is described by
and the pair of Euler-Lagrange equations
The equations of motion are
Let
Insertion of these transform pairs into our equations of motion produces a pair of simultaneous integral equations to solve
As with the single spring case, we can decouple these equations with an inverse transformation operation , which gives us (after dropping primes)
Taking determinants gives us the constraint on the frequency
Introducing a reduced mass
the pair of solutions are
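These roots can be checked symbolically. A sketch, writing s for omega squared and assuming the frequency domain coefficient matrix implied by the equations of motion above; the determinant vanishes for s = 0 and for s = k/mu, with mu the reduced mass:

```python
import sympy as sp

k, m1, m2 = sp.symbols('k m_1 m_2', positive=True)
s = sp.symbols('s', nonnegative=True)   # s stands for omega**2

# Assumed frequency-domain coefficient matrix for the two coupled
# equations of motion, m1 x1'' = k(x2 - x1), m2 x2'' = -k(x2 - x1):
M = sp.Matrix([[k - m1 * s, -k],
               [-k, k - m2 * s]])

roots = sp.solve(M.det(), s)            # zeros of the determinant

mu = m1 * m2 / (m1 + m2)                # reduced mass
```

The non-zero root is k(m1 + m2)/(m1 m2), which is exactly k/mu, matching the reduced-mass form of the frequency constraint.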
As with the single mass oscillator, we require the functions to also be expressed as delta functions. The frequency constraint and that delta function requirement together can be expressed, for as
With a transformation back to time domain, we have functions of the form
Inserting these back into the equations of motion, we have
Equality requires identity for all powers of , or
or and
Observe that
(with a similar alternate result). We can rewrite eq. 1.0.35.35 as
It’s clear that there are two pairs of linear dependencies here, so the determinant is zero as expected. We can read off the remaining relations. Our undetermined coefficients are given by
Observe that the constant term is not really of interest, since it represents a constant displacement of both atoms (just a change of coordinates).
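As a sketch of a consistency check on those coefficients (assuming the non-zero root omega^2 = k/mu and the first frequency domain equation in the form (k - m1 omega^2) a1 = k a2), the oscillatory amplitudes come out mass-weighted and opposed, m1 a1 = -m2 a2, as conservation of total momentum would suggest:

```python
import sympy as sp

k, m1, m2 = sp.symbols('k m_1 m_2', positive=True)
a1, a2 = sp.symbols('a_1 a_2', real=True)

w2 = k * (m1 + m2) / (m1 * m2)   # the non-zero root, omega**2 = k/mu

# first frequency-domain equation at this frequency, solved for a2
a2_sol = sp.solve(sp.Eq((k - m1 * w2) * a1, k * a2), a2)[0]

# momentum-weighted amplitude sum: should vanish
balance = sp.simplify(m1 * a1 + m2 * a2_sol)
```

The balance simplifies to zero, i.e. a2 = -(m1/m2) a1, so the oscillatory mode carries no net momentum.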
Check:
Reflection
We’ve seen that we can solve any of these constant coefficient systems exactly using matrix methods; however, these will not be practical for large systems unless we have methods to solve for all the non-zero eigenvalues and their corresponding eigenvectors. With the Fourier transform methods we find that our solutions in the frequency domain are of the form
or in the time domain
We assumed exactly this form of solution in class. The trial solution that we used in class factored out a phase shift of the form , but that doesn’t change the underlying form of that assumed solution. We have, however, found a good justification for the trial solution we utilized.