Peeter Joot's Blog.

Math, physics, perl, and programming obscurity.

Found code that fails grade 11 math: log base conversion.

Posted by Peeter Joot on March 26, 2014

Unless my headache is impacting my ability to manipulate log identities, this LOG10 code is plain wrong:

      static double findLog(double value, FunctionType logFunction)
      {
         switch (logFunction)
         {
            case LN:
               return log(double(value));
            case LOG10:
               return log(double(value)) / log(2.0);
         }
      }

Perhaps it is dead code, since the divisor should be log(10.0) (or just the constant M_LN10), but nobody appears to have noticed.

Two other possibilities are:

  • somebody was being way too clever, and when they wrote LOG10, they meant it as Log base 0b10.
  • somebody thought that for computer software a “natural logarithm” would use base 2.

Posted in C/C++ development and debugging. | 2 Comments »

Herring salad recipe from Dad’s papers

Posted by Peeter Joot on March 12, 2014

This was a favorite of my dad, Jaan Joot.  I’m pretty sure that it is actually Vanaema’s (Grandma’s) recipe.


2 herring

2 small cans beets

6 potatoes (large)

4-5 pickles

1 onion

1 apple

3 (¼”) slices ham or 5” kolbassa

2 hard boiled eggs

1 tablespoon mustard

salt and pepper to taste

sour cream and mayo (small bowl)

- add above just before serving

Makes about 3-4 liters.

Posted in recipes | 1 Comment »

Curvilinear coordinates and reciprocal basis

Posted by Peeter Joot on March 9, 2014

[Click here for a PDF of this post with nicer formatting]


Here I’d like to explore some ideas from [1] where curvilinear coordinates, manifolds, and the vector derivative are introduced.


For simplicity, let’s consider the concrete example of a 2D manifold, a surface in an n dimensional vector space, parameterized by two variables

\begin{aligned}\mathbf{x} = \mathbf{x}(a,b) = \mathbf{x}(u^1, u^2).\end{aligned} \hspace{\stretch{1}}(1.2.1)

Note that the indices here do not represent exponentiation. We can construct a basis for the manifold as

\begin{aligned}\mathbf{x}_i = \frac{\partial {\mathbf{x}}}{\partial {u^i}}.\end{aligned} \hspace{\stretch{1}}(1.2.2)

On the manifold we can calculate a reciprocal basis \{\mathbf{x}^i\}, defined by requiring, at each point on the surface

\begin{aligned}\mathbf{x}^i \cdot \mathbf{x}_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(1.2.3)

Associated implicitly with this basis is a curvilinear coordinate representation defined by the projection operation

\begin{aligned}\mathbf{x} = x^i \mathbf{x}_i,\end{aligned} \hspace{\stretch{1}}(1.2.4)

(sums over mixed indices are implied). These coordinates can be calculated by taking dot products with the reciprocal frame vectors

\begin{aligned}\mathbf{x} \cdot \mathbf{x}^i &= x^j \mathbf{x}_j \cdot \mathbf{x}^i \\ &= x^j {\delta_j}^i \\ &= x^i.\end{aligned} \hspace{\stretch{1}}(1.2.5)


Let’s pause for a couple examples that have interesting aspects.

Example: Circular coordinates on a disk

Consider an infinite disk at height z_0, with the origin omitted, parameterized by circular coordinates as in fig. 1.1.

Fig 1.1: Plane with circular coordinates


Points on this surface are

\begin{aligned}\mathbf{x}(r, \theta) = (r \cos\theta, r \sin\theta, z_0).\end{aligned} \hspace{\stretch{1}}(1.3.6)

The manifold basis vectors, defined by eq. 1.2.2 are

\begin{aligned}\begin{aligned}\mathbf{x}_r &= (\cos\theta, \sin\theta, 0) \\ \mathbf{x}_\theta &= r (-\sin\theta, \cos\theta, 0).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.7)

By inspection, the reciprocal basis is

\begin{aligned}\begin{aligned}\mathbf{x}^r &= (\cos\theta, \sin\theta, 0) \\ \mathbf{x}^\theta &= \frac{1}{{r}} (-\sin\theta, \cos\theta, 0).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.8)

The first thing to note here is that we cannot reach the points \mathbf{x} of eq. 1.3.6 by linear combination of these basis vectors. These basis vectors only allow us to move between points on the surface, once we are already there. For example, we cannot actually write

\begin{aligned}\mathbf{x} = x^r \mathbf{x}_r + x^\theta \mathbf{x}_\theta,\end{aligned} \hspace{\stretch{1}}(1.3.9)

unless z_0 = 0. This is why eq. 1.2.4 was described as a projective operation (and probably deserves an alternate notation). To recover the original parameterized form of the position vector on the surface, we require

\begin{aligned}\mathbf{x} = x^r \mathbf{x}_r + x^\theta \mathbf{x}_\theta + z_0 \hat{\mathbf{z}}.\end{aligned} \hspace{\stretch{1}}(1.3.10)

The coordinates x^r, x^\theta follow by taking dot products

\begin{aligned}x^r &= \mathbf{x} \cdot \mathbf{x}^r \\ &= (r \cos\theta, r \sin\theta, z_0) \cdot(\cos\theta, \sin\theta, 0) \\ &= r \left( \cos^2 \theta + \sin^2 \theta \right) \\ &= r\end{aligned} \hspace{\stretch{1}}(1.3.11)

\begin{aligned}x^\theta &= \mathbf{x} \cdot \mathbf{x}^\theta \\ &= (r \cos\theta, r \sin\theta, z_0) \cdot\frac{1}{{r}} (-\sin\theta, \cos\theta, 0) \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.3.12)

Therefore a point on the plane, relative to the plane’s origin, requires in this case just one of the tangent plane basis vectors

\begin{aligned}\mathbf{x} = r \mathbf{x}_r.\end{aligned} \hspace{\stretch{1}}(1.3.13)

Example: Circumference of a circle

Now consider a circular perimeter, as illustrated in fig. 1.2, with the single variable parameterization

Fig 1.2: Circular perimeter


\begin{aligned}\mathbf{x} = r_0 \left( \cos\theta, \sin\theta \right).\end{aligned} \hspace{\stretch{1}}(1.13)

Our tangent space basis is

\begin{aligned}\mathbf{x}_\theta = r_0 \left( -\sin\theta, \cos\theta \right),\end{aligned} \hspace{\stretch{1}}(1.14)

with, by inspection, a reciprocal basis

\begin{aligned}\mathbf{x}^\theta = \frac{1}{{r_0}} \left( -\sin\theta, \cos\theta \right).\end{aligned} \hspace{\stretch{1}}(1.15)

Here we have a curious condition, since the tangent space basis vector is perpendicular to the position vector for the points on the circular surface. So, should we attempt to calculate coordinates using eq. 1.2.4, we just get zero

\begin{aligned}x^\theta &= \mathbf{x} \cdot \mathbf{x}^\theta \\ &= r_0 \left( \cos\theta, \sin\theta \right) \cdot\frac{1}{{r_0}} \left( -\sin\theta, \cos\theta \right) \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.16)

It’s perhaps notable that a coordinate representation using the tangent space basis is possible, but we need to utilize a complex geometry. Assuming

\begin{aligned}\mathbf{x} = x^\theta \mathbf{x}_\theta,\end{aligned} \hspace{\stretch{1}}(1.17)

and writing i = \mathbf{e}_1 \mathbf{e}_2 for the pseudoscalar, we can write

\begin{aligned}\begin{aligned}\mathbf{x} &= r_0 \mathbf{e}_1 e^{i\theta} \\ \mathbf{x}_\theta &= r_0 \mathbf{e}_2 e^{i\theta},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.18)

so that, by inversion, the \theta coordinate is

\begin{aligned}x^\theta &= \mathbf{x}\frac{1}{{\mathbf{x}_\theta}} \\ &= \left( r_0 \mathbf{e}_1 e^{i\theta} \right)\left( \frac{ e^{-i\theta } \mathbf{e}_2 }{r_0} \right) \\ &= i,\end{aligned} \hspace{\stretch{1}}(1.19)

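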

That is,

\begin{aligned}\mathbf{x} = i \mathbf{x}_\theta.\end{aligned} \hspace{\stretch{1}}(1.0.20)

Example: Surface of a sphere

It is also clear that any parameterization that has radial symmetry will suffer the same issue. For example, for a radial surface in 3D with radius r_0 we have

\begin{aligned}\begin{aligned}\mathbf{x} &= r_0 \left( \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta \right) \\ \mathbf{x}_\theta &= r_0 \left( \cos\theta \cos\phi, \cos\theta \sin\phi, -\sin\theta \right) \\ \mathbf{x}_\phi &= r_0 \left( -\sin\theta \sin\phi, \sin\theta \cos\phi, 0 \right) \\ \mathbf{x}^\theta &= \frac{1}{{r_0}} \left( \cos\theta \cos\phi, \cos\theta \sin\phi, -\sin\theta \right) \\ \mathbf{x}^\phi &= \frac{1}{{r_0 \sin\theta}} \left( -\sin\phi, \cos\phi, 0 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.21)

The reciprocals here were computed using the Mathematica notebook reciprocalFrameSphericalSurface.nb.

Do we have a bivector parameterization of the surface using the tangent space basis? Let’s try

\begin{aligned}\mathbf{x} = x^\theta \mathbf{x}_\theta + x^\phi \mathbf{x}_\phi.\end{aligned} \hspace{\stretch{1}}(1.0.22)

Wedging with \mathbf{x}_\phi and \mathbf{x}_\theta respectively, and writing i = \mathbf{e}_1 \mathbf{e}_2, yields

\begin{aligned}x^\theta &= \mathbf{x} \wedge \mathbf{x}_\phi \frac{1}{{\mathbf{x}_\theta \wedge \mathbf{x}_\phi}} \\ &= -\mathbf{e}_1 \mathbf{e}_3 \cos \phi - \mathbf{e}_2 \mathbf{e}_3 \sin \phi \\ &= \mathbf{e}_{31} e^{ i \phi}.\end{aligned} \hspace{\stretch{1}}(1.0.23)

\begin{aligned}x^\phi &= -\mathbf{x} \wedge \mathbf{x}_\theta \frac{1}{{\mathbf{x}_\theta \wedge \mathbf{x}_\phi}} \\ &= -\mathbf{e}_1 \mathbf{e}_3 \cot \theta \sin \phi + \mathbf{e}_2 \mathbf{e}_3 \cot \theta \cos \phi + i \\ &= \mathbf{e}_2 \mathbf{e}_3 \cot \theta e^{i \phi} + i.\end{aligned} \hspace{\stretch{1}}(1.0.24)

However, substitution back into eq. 1.0.22 shows that either term alone recovers the radial position vector

\begin{aligned}\mathbf{x} = x^\theta \mathbf{x}_\theta = x^\phi \mathbf{x}_\phi.\end{aligned} \hspace{\stretch{1}}(1.0.25)

It is interesting that duality relationships seem to naturally arise when attempting to describe points on a surface using the tangent space basis for that surface.


[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Posted in Math and Physics Learning. | Leave a Comment »

Nuland’s recent state dept Ukraine speech

Posted by Peeter Joot on February 21, 2014

Here is a Press TV article, referring to a State Department speech on Ukraine by Nuland.

In the spirit of Andrew Gavin Marshall’s podcasts, I would love to see a full translation of this speech from Statelish into English.  I imagine that the key to such a translation would be along the following lines:

  • “democratic state; modern democratic government” → state subservient to the USA
  • “free democracies” → democracies not subservient to the USA
  • “return to economic health” → economics subservient to the USA
  • “coordinated parallel high level diplomacy” → an active attempt to undermine the existing government; financing and manufacturing subversive and violent elements
  • “a tough conversation with Yanukovych” → I showed him lots of the blackmail material we have collected on him.  Made him realize how short his life would be if he doesn’t cooperate.  Threatened military and financial warfare.
  • “de-escalate the security situation” → step down so that we can install a puppet government, preferably one even more violent
  • “get Ukraine back into a conversation with Europe and with the International Monetary Fund” → we will fuck you over if you take on debt that will allow Russia to control you instead of taking on our debt and controls
  • “reforms that the IMF insists on are necessary for the long term economic health of the country” → we plan to fuck your kids and all their future progeny too
  • “foreign investment needed” → US exploitation is strongly desired

Posted in Incoherent ramblings | Leave a Comment »

Gaussian quadratic form integrals and multivariable approximation of exponential integrals

Posted by Peeter Joot on January 26, 2014

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]


In [1] eq. I.2.20 is the approximation

\begin{aligned}\int d\mathbf{q} e^{-f(\mathbf{q})/\hbar} \approx e^{-f(\mathbf{a})/\hbar} \sqrt{\frac{(2 \pi \hbar)^N}{\text{Det} f''(\mathbf{a})} },\end{aligned} \hspace{\stretch{1}}(1.1.1)

where [ f''(\mathbf{a}) ]_{ij} \equiv {\left.{{\partial^2 f/\partial q_i \partial q_j}}\right\vert}_{{\mathbf{q} = \mathbf{a}}}. Here \mathbf{a} is assumed to be an extremum of f. This follows from a generalization of the Gaussian integral result. Let’s derive both in detail.


First, to second order, let’s expand f(\mathbf{q}) around a min or max at \mathbf{q} = \mathbf{a}. The usual trick, presuming that one doesn’t remember the form of this generalized Taylor expansion, is to expand g(t) = f(\mathbf{a} + t \mathbf{q}) around t = 0, then evaluate at t = 1. We have

\begin{aligned}g'(t) &= \sum_i \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}} \frac{d{{ (a_i + t q_i) }}}{dt} \\ &= \sum_i q_i \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}} \\ &= \mathbf{q} \cdot \left(  {\left.{{\boldsymbol{\nabla}_\mathbf{q} f(\mathbf{q})}}\right\vert}_{{\mathbf{q} = \mathbf{a} + t \mathbf{q}}}  \right).\end{aligned} \hspace{\stretch{1}}(1.2.2)

The second derivative is

\begin{aligned}g''(t) = \sum_{i j} q_i q_j \frac{\partial {}}{\partial {(a_j + t q_j)}} \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}},\end{aligned} \hspace{\stretch{1}}(1.2.3)

This gives

\begin{aligned}\begin{aligned}g'(0) &= \mathbf{q} \cdot \boldsymbol{\nabla}_\mathbf{q} f(\mathbf{q}) = \sum_i q_i \partial q_i f(\mathbf{q}) \\ g''(0) &= \left( \mathbf{q} \cdot \boldsymbol{\nabla}_\mathbf{q} \right)^2 f(\mathbf{q}) = \sum_{i j} q_i q_j \partial q_i \partial q_j f(\mathbf{q}).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.4)

Putting these together, the expansion to second order in t is

\begin{aligned}f(\mathbf{a} + t \mathbf{q}) \approx f(\mathbf{a}) + \frac{t^1}{1!} \sum_i q_i {\left.{{ \left(  \frac{\partial {f}}{\partial {q_i}} \right)}}\right\vert}_{\mathbf{a}}+ \frac{t^2}{2!} \sum_{i j} q_i q_j {\left.{{\left(  \frac{\partial^2 f}{\partial q_i \partial q_j}  \right)}}\right\vert}_{\mathbf{a}},\end{aligned} \hspace{\stretch{1}}(1.2.5)

so that, at t = 1,

\begin{aligned}f(\mathbf{a} + \mathbf{q}) \approx f(\mathbf{a}) + \sum_i q_i {\left.{{ \left(  \frac{\partial {f}}{\partial {q_i}} \right)}}\right\vert}_{\mathbf{a}}+ \frac{1}{{2}} \sum_{i j} q_i q_j {\left.{{\left(  \frac{\partial^2 f}{\partial q_i \partial q_j}  \right)}}\right\vert}_{\mathbf{a}}.\end{aligned} \hspace{\stretch{1}}(1.2.6)

We can put the terms up to second order in nice tidy matrix form by writing

\begin{aligned}\mathbf{b} = {\left.{{\left(  \boldsymbol{\nabla}_\mathbf{q} f \right)}}\right\vert}_{\mathbf{a}}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}A = {\begin{bmatrix}{\left.{{ \left(  \frac{\partial^2 f}{\partial q_i \partial q_j} \right)}}\right\vert}_{\mathbf{a}}\end{bmatrix}}_{i j}.\end{aligned} \hspace{\stretch{1}}(1.0.7b)

Note that eq. 1.0.7b is a real symmetric matrix, and can thus be reduced to diagonal form by an orthonormal transformation. Putting the pieces together, we have

\begin{aligned}f(\mathbf{a} + \mathbf{q}) \approx f(\mathbf{a}) + \mathbf{q}^\text{T} \mathbf{b} + \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

Integrating this, we have

\begin{aligned}\int dq_1 dq_2 \cdots dq_N \exp\left( -\left(  f(\mathbf{a}) + \mathbf{q}^\text{T} \mathbf{b} + \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right)\right)=e^{-f(\mathbf{a})}\int dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right).\end{aligned} \hspace{\stretch{1}}(1.0.9)

Employing an orthonormal change of variables to diagonalize the matrix

\begin{aligned}A = O^\text{T} D O,\end{aligned} \hspace{\stretch{1}}(1.0.10)

and \mathbf{r} = O \mathbf{q}, or r_i = O_{ik} q_k, the volume element after transformation is

\begin{aligned}dr_1 dr_2 \cdots dr_N &= \frac{\partial(r_1, r_2, \cdots, r_N)}{\partial(q_1, q_2, \cdots, q_N)}dq_1 dq_2 \cdots dq_N \\ &= \begin{vmatrix}O_{11} & O_{12} & \cdots & O_{1N} \\ O_{21} & O_{22} & \cdots & O_{2N} \\ \vdots & \vdots & \ddots & \vdots \\  O_{N1} & O_{N2} & \cdots & O_{NN} \\ \end{vmatrix}dq_1 dq_2 \cdots dq_N \\ &= (\text{Det} O)dq_1 dq_2 \cdots dq_N \\ &= dq_1 dq_2 \cdots dq_N,\end{aligned} \hspace{\stretch{1}}(1.0.11)

where the last step assumes O is chosen with \text{Det} O = 1 (for an orthogonal matrix \text{Det} O = \pm 1).

Our integral is

\begin{aligned}e^{-f(\mathbf{a})}\int dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right) &= e^{-f(\mathbf{a})}\int dr_1 dr_2 \cdots dr_N \exp\left(  -\mathbf{q}^\text{T} O^\text{T} O \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} O^\text{T} D O \mathbf{q}  \right) \\ &= e^{-f(\mathbf{a})}\int dr_1 dr_2 \cdots dr_N \exp\left(  -\mathbf{r}^\text{T} (O \mathbf{b}) - \frac{1}{{2}} \mathbf{r}^\text{T} D \mathbf{r}  \right) \\ &= e^{-f(\mathbf{a})}\int dr_1 e^{ -\frac{1}{{2}} r_1^2 \lambda_1 - r_1 (O \mathbf{b})_1 }\int dr_2 e^{ -\frac{1}{{2}} r_2^2 \lambda_2 - r_2 (O \mathbf{b})_2 }\cdots \int dr_N e^{ -\frac{1}{{2}} r_N^2 \lambda_N - r_N (O \mathbf{b})_N }.\end{aligned} \hspace{\stretch{1}}(1.0.12)

We now have products of terms that are of the regular Gaussian form. One such integral is

\begin{aligned}\int e^{-a x^2/2 + J x} dx &= \int dx \exp\left(-\frac{1}{{2}} \left(\left(  \sqrt{a} x - J/\sqrt{a}  \right)^2- \left(  J/\sqrt{a}  \right)^2\right)\right) \\ &= e^{J^2/2a} \sqrt{2 \pi \int_0^\infty r dr e^{-a r^2/2}}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

Since \int_0^\infty r dr e^{-a r^2/2} = 1/a, this is just

\begin{aligned}\int e^{-a x^2/2 + J x} dx = e^{J^2/2a} \sqrt{ \frac{2 \pi}{a} }.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Applying this to the integral of interest, writing m_i = (O \mathbf{b})_i, and \lambda_i for the diagonal elements of D, we have

\begin{aligned}\begin{aligned}e^{-f(\mathbf{a})}\int &dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right) \\ &=e^{-f(\mathbf{a})}e^{m_1^2/2\lambda_1} \sqrt{ \frac{2 \pi}{\lambda_1}}e^{m_2^2/2\lambda_2} \sqrt{ \frac{2 \pi}{\lambda_2}}\cdots e^{m_N^2/2\lambda_N} \sqrt{ \frac{2 \pi}{\lambda_N}} \\ &=e^{-f(\mathbf{a})}\sqrt{\frac{(2 \pi)^N}{\text{Det} A}}\exp\left(\frac{1}{{2}}\left(  m_1^2/\lambda_1 + m_2^2/\lambda_2 + \cdots + m_N^2/\lambda_N  \right) \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.15)

This last exponential argument can be put into matrix form

\begin{aligned}m_1^2/\lambda_1+m_2^2/\lambda_2+ \cdots +m_N^2/\lambda_N &= (O \mathbf{b})^\text{T} D^{-1} O \mathbf{b} \\ &= \mathbf{b}^\text{T} O^\text{T} D^{-1} O \mathbf{b} \\ &= \mathbf{b}^\text{T} A^{-1} \mathbf{b}.\end{aligned} \hspace{\stretch{1}}(1.0.16)

Finally, referring back to eq. 1.0.7, we have

\begin{aligned}\int d\mathbf{q} e^{-f(\mathbf{q})} \approx e^{-f(\mathbf{a})}\sqrt{\frac{(2 \pi)^N}{\text{Det} A}}e^{\mathbf{b}^\text{T} A^{-1} \mathbf{b}/2}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

Observe that we can recover eq. 1.1.1 by noting that \mathbf{b} = 0 was assumed for that system (i.e. \mathbf{a} was an extremum point), and that for the integrand e^{-f/\hbar} the quadratic form matrix is A = f''(\mathbf{a})/\hbar, so that \text{Det} A = \text{Det} f''(\mathbf{a})/\hbar^N.

An afterword on notational sugar:

We didn’t need it, but it seems worth noting that we can write the Taylor expansion behind eq. 1.0.8 in operator form as

\begin{aligned}f(\mathbf{a} + \mathbf{q}) = \sum_{k = 0}^\infty \frac{1}{{k!}} {\left.{{ \left( \mathbf{q} \cdot \boldsymbol{\nabla}_{\mathbf{q}'} \right)^k f(\mathbf{q}') }}\right\vert}_{{\mathbf{q}' = \mathbf{a}}}={\left.{{ e^{\mathbf{q} \cdot \boldsymbol{\nabla}_{\mathbf{q}'}} f(\mathbf{q}') }}\right\vert}_{{\mathbf{q}' = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(1.0.18)


[1] A. Zee. Quantum field theory in a nutshell. Universities Press, 2005.

Posted in Math and Physics Learning. | Leave a Comment »

Polarization angles for normal transmission and reflection

Posted by Peeter Joot on January 22, 2014

[Click here for a PDF of this post with nicer formatting]

Question: Polarization angles for normal transmission and reflection ([1] pr 9.14)

For normal incidence, without assuming that the reflected and transmitted waves have the same polarization as the incident wave, prove that this must be so.


Working with coordinates as illustrated in fig. 1.1, the incident wave can be assumed to have the form

fig 1.1: Normal incidence coordinates


\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{I}} = E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{x}}\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{I}} = \frac{1}{{v}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{I}} = \frac{1}{{v}} E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{y}}.\end{aligned} \hspace{\stretch{1}}(1.0.1b)

Assuming a polarization \hat{\mathbf{n}} = \cos\theta \hat{\mathbf{x}} + \sin\theta \hat{\mathbf{y}} for the reflected wave, we have

\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{R}} = E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \cos\theta + \hat{\mathbf{y}} \sin\theta)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{R}} = \frac{1}{{v}} (-\hat{\mathbf{z}}) \times \tilde{\mathbf{E}}_{\mathrm{R}} = \frac{1}{{v}} E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \sin\theta - \hat{\mathbf{y}} \cos\theta).\end{aligned} \hspace{\stretch{1}}(1.0.2b)

And finally assuming a polarization \hat{\mathbf{n}} = \cos\phi \hat{\mathbf{x}} + \sin\phi \hat{\mathbf{y}} for the transmitted wave, we have

\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{T}} = E_{\mathrm{T}} e^{i (k' z - \omega t)} (\hat{\mathbf{x}} \cos\phi + \hat{\mathbf{y}} \sin\phi)\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{T}} = \frac{1}{{v'}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{T}} = \frac{1}{{v'}} E_{\mathrm{T}} e^{i (k' z - \omega t)} (-\hat{\mathbf{x}} \sin\phi + \hat{\mathbf{y}} \cos\phi).\end{aligned} \hspace{\stretch{1}}(1.0.3b)

With none of the \tilde{\mathbf{E}} or \tilde{\mathbf{B}} fields having components in the \hat{\mathbf{z}} direction, the boundary value conditions at z = 0 require the equality of the \hat{\mathbf{x}} and \hat{\mathbf{y}} components of

\begin{aligned}\left( \tilde{\mathbf{E}}_{\mathrm{I}} + \tilde{\mathbf{E}}_{\mathrm{R}} \right)_{x,y} = \left( \tilde{\mathbf{E}}_{\mathrm{T}} \right)_{x,y}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned} \left( \frac{1}{\mu} \left( \tilde{\mathbf{B}}_{\mathrm{I}} + \tilde{\mathbf{B}}_{\mathrm{R}} \right) \right)_{x,y} = \left( \frac{1}{\mu'} \tilde{\mathbf{B}}_{\mathrm{T}} \right)_{x,y}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

With \beta = \mu v/\mu' v', those components are

\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} \cos\theta = E_{\mathrm{T}} \cos\phi \end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}E_{\mathrm{R}} \sin\theta = E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}E_{\mathrm{R}} \sin\theta = - \beta E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5c)

\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} \cos\theta = \beta E_{\mathrm{T}} \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.5d)

Equality of eq. 1.0.5b and eq. 1.0.5c requires

\begin{aligned}- \beta E_{\mathrm{T}} \sin\phi = E_{\mathrm{T}} \sin\phi,\end{aligned} \hspace{\stretch{1}}(1.0.6)

Since \beta > 0, this requires \sin\phi = 0, and eq. 1.0.5b then forces \sin\theta = 0 as well (for E_{\mathrm{R}} \ne 0), so that (\theta, \phi) \in \{(0, 0), (0, \pi), (\pi, 0), (\pi, \pi)\}. It turns out that all of these solutions correspond to the same physical waves. Let’s look at each in turn

  • (\theta, \phi) = (0, 0). The system eq. 1.0.5 reduces to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.8)

  • (\theta, \phi) = (\pi, \pi). The system eq. 1.0.5 reduces to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.9)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.10)

    Effectively the sign for the magnitude of the transmitted and reflected phasors is toggled, but the polarization vectors are also negated, with \hat{\mathbf{n}} = -\hat{\mathbf{x}}, and \hat{\mathbf{n}}' = -\hat{\mathbf{x}}. The resulting \tilde{\mathbf{E}}_{\mathrm{R}} and \tilde{\mathbf{E}}_{\mathrm{T}} are unchanged relative to those of the (0,0) solution above.

  • (\theta, \phi) = (0, \pi). The system eq. 1.0.5 reduces to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.11)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.12)

    Effectively the sign for the magnitude of the transmitted phasor is toggled. The polarization vectors in this case are \hat{\mathbf{n}} = \hat{\mathbf{x}}, and \hat{\mathbf{n}}' = -\hat{\mathbf{x}}, so the transmitted phasor magnitude change of sign does not change \tilde{\mathbf{E}}_{\mathrm{T}} relative to that of the (0,0) solution above.

  • (\theta, \phi) = (\pi, 0). The system eq. 1.0.5 reduces to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.13)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.14)

    This time, the sign for the magnitude of the reflected phasor is toggled. The polarization vectors in this case are \hat{\mathbf{n}} = -\hat{\mathbf{x}}, and \hat{\mathbf{n}}' = \hat{\mathbf{x}}. In this final variation the reflected phasor magnitude change of sign does not change \tilde{\mathbf{E}}_{\mathrm{R}} relative to that of the (0,0) solution.

We see that there is only one solution for the polarization angle of the transmitted and reflected waves relative to the incident wave: both retain the incident polarization. Although we fixed the incident polarization with \mathbf{E} along \hat{\mathbf{x}}, any TE or TM labeling is irrelevant at normal incidence, since our system is symmetric with respect to rotation about \hat{\mathbf{z}}.


[1] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

Posted in Math and Physics Learning. | 1 Comment »

Final update of notes for PHY487 (condensed matter physics)

Posted by Peeter Joot on January 20, 2014

Here is what will likely be the final update of my class notes from Winter 2013, University of Toronto Condensed Matter Physics course (PHY487H1F), taught by Prof. Stephen Julian.

Official course description: “Introduction to the concepts used in the modern treatment of solids. The student is assumed to be familiar with elementary quantum mechanics. Topics include: bonding in solids, crystal structures, lattice vibrations, free electron model of metals, band structure, thermal properties, magnetism and superconductivity (time permitting)”

This document contains:

• Plain old lecture notes. These mirror what was covered in class, possibly augmented with additional details.
• Personal notes exploring details that were not clear to me from the lectures, or from the texts associated with the lecture material.
• Assigned problems. Like anything else take these as is.
• Some worked problems attempted as course prep, for fun, or for test preparation, or post test reflection.
• Links to Mathematica workbooks associated with this course.
My thanks go to Professor Julian for teaching this course.

NOTE: This v.5 update of these notes is still really big (~18 MB).  Some of my Mathematica generated 3D images result in very large PDFs.

Changelog for this update (relative to the first, and second, and third, and the last pre-exam Changelogs).

January 19, 2014 Quadratic Debye

January 19, 2014 One atom basis phonons in 2D

January 07, 2014 Two body harmonic oscillator in 3D
Figure out a general solution for two interacting harmonic oscillators, then use the result to calculate the matrix required for a 2D two atom diamond lattice with horizontal, vertical and diagonal nearest neighbour coupling.

December 04, 2013 Lecture 24: Superconductivity (cont.)

December 04, 2013 Problem Set 10: Drude conductivity and doped semiconductors.

Posted in Math and Physics Learning. | Leave a Comment »

Quadratic Debye

Posted by Peeter Joot on January 19, 2014

[Click here for a PDF of this post with nicer formatting]

Question: Quadratic Debye phonons (2013 midterm pr B2)

Assume a quadratic dispersion relation for the longitudinal and transverse modes

\begin{aligned}\omega = \left\{\begin{array}{l}b_{\mathrm{L}} q^2 \\ b_{\mathrm{T}} q^2\end{array}\right..\end{aligned} \hspace{\stretch{1}}(1.1)

Part a

Find the density of states.

Part b

Find the Debye frequency.

Part c

In terms of k_{\mathrm{B}} \Theta = \hbar \omega_{\mathrm{D}}, and

\begin{aligned}\mathcal{I} = \int_0^\infty \frac{y^{5/2} e^{y} dy}{\left( { e^y - 1} \right)^2 },\end{aligned} \hspace{\stretch{1}}(1.2)

find the specific heat for k_{\mathrm{B}} T \ll \hbar \omega_{\mathrm{D}}.

Part d

Find the specific heat for k_{\mathrm{B}} T \gg \hbar \omega_{\mathrm{D}}.


Part a

Working straight from the definition

\begin{aligned}Z(\omega) &= \frac{V}{(2 \pi)^3 } \sum_{L, T} \int \frac{df_\omega}{ \left\lvert { \boldsymbol{\nabla}_\mathbf{q} \omega } \right\rvert } \\ &= \frac{V}{(2 \pi)^3 } \left( { {\left.{{\frac{4 \pi q^2}{2 b_{\mathrm{L}} q} }}\right\vert}_{{\mathrm{L}}} + {\left.{{\frac{2 \times 4 \pi q^2}{2 b_{\mathrm{T}} q} }}\right\vert}_{{\mathrm{T}}} } \right) \\ &= \frac{V}{4 \pi^2 } \left( { \frac{q_{\mathrm{L}}}{b_{\mathrm{L}}} + \frac{2 q_{\mathrm{T}}}{b_{\mathrm{T}}} } \right).\end{aligned} \hspace{\stretch{1}}(1.3)

With q_{\mathrm{L}} = \sqrt{\omega/b_{\mathrm{L}}} and q_{\mathrm{T}} = \sqrt{\omega/b_{\mathrm{T}}}, this is

\begin{aligned}Z(\omega) = \frac{V}{4 \pi^2 } \left( { \frac{1}{b_{\mathrm{L}}^{3/2}} + \frac{2}{b_{\mathrm{T}}^{3/2}} } \right)\sqrt{\omega}\end{aligned} \hspace{\stretch{1}}(1.4)

Part b

The Debye frequency was given implicitly by

\begin{aligned}\int_0^{\omega_{\mathrm{D}}} Z(\omega) d\omega = 3 r N,\end{aligned} \hspace{\stretch{1}}(1.5)

which gives

\begin{aligned}3 r N=\frac{2}{3} \frac{V}{4 \pi^2 } \left( { \frac{1}{b_{\mathrm{L}}^{3/2}} + \frac{2}{b_{\mathrm{T}}^{3/2}} } \right)\omega_{\mathrm{D}}^{3/2}=\frac{V}{6 \pi^2 } \left( { \frac{1}{b_{\mathrm{L}}^{3/2}} + \frac{2}{b_{\mathrm{T}}^{3/2}} } \right)\omega_{\mathrm{D}}^{3/2}\end{aligned} \hspace{\stretch{1}}(1.6)

Part c

First note that the density of states can be written

\begin{aligned}Z(\omega) = \frac{9 r N}{ 2 \omega_{\mathrm{D}}^{3/2} } \omega^{1/2},\end{aligned} \hspace{\stretch{1}}(1.7)

for a specific heat of

\begin{aligned}C_{\mathrm{V}} &= \frac{d}{d T} \frac{1}{{V}} \int_0^\infty \frac{9 r N}{ 2 \omega_{\mathrm{D}}^{3/2} } \omega^{1/2} \frac{\hbar \omega}{ e^{\hbar \omega/ k_{\mathrm{B}} T } - 1} d\omega \\ &= \frac{9 r N}{ 2 V \omega_{\mathrm{D}}^{3/2} } \int_0^\infty d\omega \omega^{1/2} \frac{d}{d T} \frac{\hbar \omega}{ e^{\hbar \omega/ k_{\mathrm{B}} T } - 1} \\ &= \frac{9 r N}{ 2 V \omega_{\mathrm{D}}^{3/2} } \int_0^\infty d\omega \omega^{1/2} \frac{-\hbar \omega}{ \left( {e^{\hbar \omega/ k_{\mathrm{B}} T } - 1} \right)^2 }  e^{\hbar \omega/k_{\mathrm{B}} T} \hbar \omega/k_{\mathrm{B}} \left( {-\frac{1}{{T^2}}} \right) \\ &= \frac{9 r N k_{\mathrm{B}} }{ 2 V \omega_{\mathrm{D}}^{3/2} } \left( { \frac{ k_{\mathrm{B}} T}{\hbar} } \right)^{3/2}\int_0^\infty d \frac{\hbar \omega}{k_{\mathrm{B}} T} \left( {\frac{\hbar \omega}{k_{\mathrm{B}} T}} \right)^{1/2} \frac{1}{ \left( {e^{\hbar \omega/ k_{\mathrm{B}} T } - 1} \right)^2 }  e^{\hbar \omega/k_{\mathrm{B}} T} \left( { \frac{\hbar \omega}{k_{\mathrm{B}} T} } \right)^2 \\ &= \frac{9 r N k_{\mathrm{B}} }{ 2 V \omega_{\mathrm{D}}^{3/2} } \left( { \frac{ k_{\mathrm{B}} T}{\hbar} } \right)^{3/2}\int_0^\infty dy \frac{y^{5/2} e^y }{ \left( {e^y - 1} \right)^2 } \\& = \frac{9 r N k_{\mathrm{B}} }{ 2 V } \left( { \frac{ T}{\Theta} } \right)^{3/2} \mathcal{I}.\end{aligned} \hspace{\stretch{1}}(1.8)

Part d

Assuming a Bose distribution and ignoring the zero point energy, which has no temperature dependence, the specific heat, the temperature derivative of the energy density, is

\begin{aligned}C_{\mathrm{V}} &= \frac{d}{d T} \frac{1}{{V}} \int Z(\omega) \frac{\hbar \omega}{ e^{\hbar \omega/ k_{\mathrm{B}} T } - 1} d\omega \\ &= \frac{1}{{V}} \frac{d}{d T} \int Z(\omega) \frac{\hbar \omega}{ \hbar \omega/ k_{\mathrm{B}} T + \frac{1}{{2}}( \hbar \omega/k_{\mathrm{B}} T)^2 + \cdots } d\omega \\ &\approx \frac{1}{{V}} \frac{d}{d T} \int Z(\omega) k_{\mathrm{B}} T d\omega \\ &= \frac{1}{{V}} k_{\mathrm{B}} 3 r N.\end{aligned} \hspace{\stretch{1}}(1.9)

Posted in Math and Physics Learning. | Tagged: , , , , , | Leave a Comment »

One atom basis phonons in 2D

Posted by Peeter Joot on January 19, 2014

[Click here for a PDF of this post with nicer formatting]

Let’s tackle a problem like the 2D problem of the final exam, but first more generally. Instead of a square lattice consider the lattice with the geometry illustrated in fig. 1.1.

Fig 1.1: Oblique one atom basis

Here, \mathbf{a} and \mathbf{b} are the vector differences between the equilibrium positions separating the masses along the K_1 and K_2 interaction directions respectively. The equilibrium spacing for the cross coupling harmonic forces are

\begin{aligned}\begin{aligned}\mathbf{r} &= (\mathbf{b} + \mathbf{a})/2 \\ \mathbf{s} &= (\mathbf{b} - \mathbf{a})/2.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.1)

Based on previous calculations, we can write the equations of motion by inspection

\begin{aligned}\begin{aligned}m \ddot{\mathbf{u}}_\mathbf{n} = &-K_1 \text{Proj}_{\hat{\mathbf{a}}} \sum_\pm \left( { \mathbf{u}_\mathbf{n} - \mathbf{u}_{\mathbf{n} \pm(1, 0)}} \right) \\ &-K_2 \text{Proj}_{\hat{\mathbf{b}}} \sum_\pm \left( { \mathbf{u}_\mathbf{n} - \mathbf{u}_{\mathbf{n} \pm(0, 1)}} \right) \\ &-K_3 \text{Proj}_{\hat{\mathbf{r}}} \sum_\pm \left( { \mathbf{u}_\mathbf{n} - \mathbf{u}_{\mathbf{n} \pm(1, 1)}} \right) \\ &-K_4 \text{Proj}_{\hat{\mathbf{s}}} \sum_\pm \left( { \mathbf{u}_\mathbf{n} - \mathbf{u}_{\mathbf{n} \pm(1, -1)}} \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2)

Inserting the trial solution

\begin{aligned}\mathbf{u}_\mathbf{n} = \frac{1}{{\sqrt{m}}} \boldsymbol{\epsilon}(\mathbf{q}) e^{i( \mathbf{r}_\mathbf{n} \cdot \mathbf{q} - \omega t) },\end{aligned} \hspace{\stretch{1}}(1.3)

and using the matrix form for the projection operators, we have

\begin{aligned}\begin{aligned}\omega^2 \boldsymbol{\epsilon} &=\frac{K_1}{m} \hat{\mathbf{a}} \hat{\mathbf{a}}^\text{T} \boldsymbol{\epsilon}\sum_\pm\left( { 1 - e^{\pm i \mathbf{a} \cdot \mathbf{q}} } \right) \\ & +\frac{K_2}{m} \hat{\mathbf{b}} \hat{\mathbf{b}}^\text{T} \boldsymbol{\epsilon}\sum_\pm\left( { 1 - e^{\pm i \mathbf{b} \cdot \mathbf{q}} } \right) \\ & +\frac{K_3}{m} \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} \boldsymbol{\epsilon}\sum_\pm\left( { 1 - e^{\pm i (\mathbf{b} + \mathbf{a}) \cdot \mathbf{q}} } \right) \\ & +\frac{K_4}{m} \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T} \boldsymbol{\epsilon}\sum_\pm\left( { 1 - e^{\pm i (\mathbf{b} - \mathbf{a}) \cdot \mathbf{q}} } \right) \\ &=\frac{4 K_1}{m} \hat{\mathbf{a}} \hat{\mathbf{a}}^\text{T} \boldsymbol{\epsilon} \sin^2\left( { \mathbf{a} \cdot \mathbf{q}/2 } \right)+\frac{4 K_2}{m} \hat{\mathbf{b}} \hat{\mathbf{b}}^\text{T} \boldsymbol{\epsilon} \sin^2\left( { \mathbf{b} \cdot \mathbf{q}/2 } \right) \\ &+\frac{4 K_3}{m} \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} \boldsymbol{\epsilon} \sin^2\left( { (\mathbf{b} + \mathbf{a}) \cdot \mathbf{q}/2 } \right)+\frac{4 K_4}{m} \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T} \boldsymbol{\epsilon} \sin^2\left( { (\mathbf{b} - \mathbf{a}) \cdot \mathbf{q}/2 } \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.4)

This fully specifies our eigenvalue problem. Writing

\begin{aligned}\begin{aligned}S_1 &= \sin^2\left( { \mathbf{a} \cdot \mathbf{q}/2 } \right) \\ S_2 &= \sin^2\left( { \mathbf{b} \cdot \mathbf{q}/2 } \right) \\ S_3 &= \sin^2\left( { (\mathbf{b} + \mathbf{a}) \cdot \mathbf{q}/2 } \right) \\ S_4 &= \sin^2\left( { (\mathbf{b} - \mathbf{a}) \cdot \mathbf{q}/2 } \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\boxed{A = \frac{4}{m}\left( { K_1 S_1 \hat{\mathbf{a}} \hat{\mathbf{a}}^\text{T} + K_2 S_2 \hat{\mathbf{b}} \hat{\mathbf{b}}^\text{T} + K_3 S_3 \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} + K_4 S_4 \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T}} \right),}\end{aligned} \hspace{\stretch{1}}(

we wish to solve

\begin{aligned}A \boldsymbol{\epsilon} = \omega^2 \boldsymbol{\epsilon} = \lambda \boldsymbol{\epsilon}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

Neglecting the specifics of the matrix at hand, consider a generic two by two matrix

\begin{aligned}A = \begin{bmatrix}a & b \\ c & d\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.6)

for which the characteristic equation is

\begin{aligned}0 &= \begin{vmatrix}\lambda - a & - b \\ -c & \lambda -d \end{vmatrix} \\ &= (\lambda - a)(\lambda - d) - b c \\ &= \lambda^2 - (a + d) \lambda + a d - b c \\ &= \lambda^2 - (Tr A) \lambda + \left\lvert {A} \right\rvert \\ &= \left( {\lambda - \frac{Tr A}{2}} \right)^2- \left( {\frac{Tr A}{2}} \right)^2 + \left\lvert {A} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.6)

So our angular frequencies are given by

\begin{aligned}\omega^2 = \frac{1}{{2}} \left( { Tr A \pm \sqrt{ \left(Tr A\right)^2 - 4 \left\lvert {A} \right\rvert }} \right).\end{aligned} \hspace{\stretch{1}}(1.0.6)

The square root can be simplified slightly

\begin{aligned}\left( {Tr A} \right)^2 - 4 \left\lvert {A} \right\rvert &= (a + d)^2 -4 (a d - b c) \\ &= a^2 + d^2 + 2 a d - 4 a d + 4 b c \\ &= (a - d)^2 + 4 b c,\end{aligned} \hspace{\stretch{1}}(1.0.6)

so that, finally, the dispersion relation is

\begin{aligned}\boxed{\omega^2 = \frac{1}{{2}} \left( { d + a \pm \sqrt{ (d - a)^2 + 4 b c } } \right).}\end{aligned} \hspace{\stretch{1}}(1.0.6)

Our eigenvectors will be given by

\begin{aligned}0 = (\lambda - a) \boldsymbol{\epsilon}_1 - b\boldsymbol{\epsilon}_2,\end{aligned} \hspace{\stretch{1}}(1.0.6)

or
\begin{aligned}\boldsymbol{\epsilon}_1 \propto \frac{b}{\lambda - a}\boldsymbol{\epsilon}_2.\end{aligned} \hspace{\stretch{1}}(1.0.6)

So, our eigenvectors, the vectoral components of our atomic displacements, are

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}b \\ \omega^2 - a\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.6)

or
\begin{aligned}\boxed{\boldsymbol{\epsilon} \propto\begin{bmatrix}2 b \\ d - a \pm \sqrt{ (d - a)^2 + 4 b c }\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.6)
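These generic two by two results are easy to spot check numerically: for an arbitrary sample matrix, A \boldsymbol{\epsilon} should equal \lambda \boldsymbol{\epsilon} for both roots. A minimal sketch (the matrix entries are arbitrary test values):

```python
# Check the generic 2x2 results: eigenvalues λ = (a + d ± sqrt((d-a)^2 + 4bc))/2
# and eigenvectors ε ∝ (2b, d - a ± sqrt((d-a)^2 + 4bc)).
import math

a, b, c, d = 2.0, 1.0, 3.0, 5.0          # arbitrary test entries
disc = math.sqrt((d - a) ** 2 + 4 * b * c)
for sign in (+1, -1):
    lam = 0.5 * (a + d + sign * disc)
    e1, e2 = 2 * b, d - a + sign * disc
    # residual of A ε - λ ε, componentwise; should vanish for both roots
    r1 = a * e1 + b * e2 - lam * e1
    r2 = c * e1 + d * e2 - lam * e2
    print(abs(r1) < 1e-9, abs(r2) < 1e-9)  # True True (both roots)
```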

Square lattice

There is not too much to gain by expanding out the projection operators explicitly in general. However, let’s do this for the specific case of a square lattice (as on the exam problem). In that case, our projection operators are

\begin{aligned}\hat{\mathbf{a}} \hat{\mathbf{a}}^\text{T} = \begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}\hat{\mathbf{b}} \hat{\mathbf{b}}^\text{T} = \begin{bmatrix}0\\ 1 \end{bmatrix}\begin{bmatrix}0 &1 \end{bmatrix}=\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.16b)

\begin{aligned}\hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} = \frac{1}{{2}}\begin{bmatrix}1 \\ 1 \end{bmatrix}\begin{bmatrix}1 &1 \end{bmatrix}=\frac{1}{{2}}\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.16c)

\begin{aligned}\hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T} = \frac{1}{{2}}\begin{bmatrix}-1 \\ 1 \end{bmatrix}\begin{bmatrix}-1 &1 \end{bmatrix}=\frac{1}{{2}}\begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.16d)

\begin{aligned}\begin{aligned}S_1 &= \sin^2\left( { \mathbf{a} \cdot \mathbf{q}/2 } \right) \\ S_2 &= \sin^2\left( { \mathbf{b} \cdot \mathbf{q}/2 } \right) \\ S_3 &= \sin^2\left( { (\mathbf{b} + \mathbf{a}) \cdot \mathbf{q}/2 } \right) \\ S_4 &= \sin^2\left( { (\mathbf{b} - \mathbf{a}) \cdot \mathbf{q}/2 } \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.16d)

Our matrix is

\begin{aligned}A = \frac{2}{m}\begin{bmatrix}2 K_1 S_1 + K_3 S_3 + K_4 S_4 & K_3 S_3 - K_4 S_4 \\ K_3 S_3 - K_4 S_4 & 2 K_2 S_2 + K_3 S_3 + K_4 S_4\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.16d)

where, specifically, the squared sines for this geometry are

\begin{aligned}S_1 = \sin^2 \left( { \mathbf{a} \cdot \mathbf{q}/2 } \right) = \sin^2 \left( { a q_x/2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}S_2 = \sin^2 \left( { \mathbf{b} \cdot \mathbf{q}/2 } \right) = \sin^2 \left( { a q_y/2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.19b)

\begin{aligned}S_3 = \sin^2 \left( { (\mathbf{b} + \mathbf{a}) \cdot \mathbf{q}/2 } \right) = \sin^2 \left( { a (q_x + q_y)/2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.19c)

\begin{aligned}S_4 = \sin^2 \left( { (\mathbf{b} - \mathbf{a}) \cdot \mathbf{q}/2 } \right) = \sin^2 \left( { a (q_y - q_x)/2} \right).\end{aligned} \hspace{\stretch{1}}(1.0.19d)

Using eq. 1.0.6, the dispersion relation and eigenvectors are

\begin{aligned}\omega^2 = \frac{2}{m} \left( { \sum_i K_i S_i \pm \sqrt{ (K_2 S_2 - K_1 S_1)^2 + (K_3 S_3 - K_4 S_4)^2 } } \right)\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_3 S_3 - K_4 S_4 \\ K_2 S_2 - K_1 S_1 \pm \sqrt{ (K_2 S_2 - K_1 S_1)^2 + (K_3 S_3 - K_4 S_4)^2 } \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(

This calculation is confirmed in oneAtomBasisPhononSquareLatticeEigensystem.nb. Mathematica calculates an alternate form (equivalent to using a zero dot product for the second row), of

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_1 S_1 - K_2 S_2 \pm \sqrt{ (K_2 S_2 - K_1 S_1)^2 + (K_3 S_3 - K_4 S_4)^2 } \\ K_3 S_3 - K_4 S_4 \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(

Either way, we see that K_3 S_3 - K_4 S_4 = 0 leads to only horizontal or vertical motion.
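As a sanity check, the closed form dispersion relation can be compared against a direct numerical diagonalization of A built from the projection operators. A sketch, where the spring constants and wavevector are arbitrary sample values:

```python
# Square lattice spot check: build A = (4/m) Σ K_i S_i P_i from the projection
# operators and compare its eigenvalues with the closed form
# ω² = (2/m)(Σ K_i S_i ± sqrt((K2 S2 - K1 S1)² + (K3 S3 - K4 S4)²)).
import numpy as np

m, a = 1.0, 1.0
K = np.array([1.0, 2.0, 0.5, 0.7])       # K1..K4, arbitrary sample values
qx, qy = 0.9, 0.4                        # arbitrary wavevector

S = np.array([np.sin(a * qx / 2) ** 2,
              np.sin(a * qy / 2) ** 2,
              np.sin(a * (qx + qy) / 2) ** 2,
              np.sin(a * (qy - qx) / 2) ** 2])

P = [np.array([[1, 0], [0, 0]]),                 # â âᵀ
     np.array([[0, 0], [0, 1]]),                 # b̂ b̂ᵀ
     0.5 * np.array([[1, 1], [1, 1]]),           # r̂ r̂ᵀ
     0.5 * np.array([[1, -1], [-1, 1]])]         # ŝ ŝᵀ

A = (4.0 / m) * sum(K[i] * S[i] * P[i] for i in range(4))

root = np.sqrt((K[1] * S[1] - K[0] * S[0]) ** 2 + (K[2] * S[2] - K[3] * S[3]) ** 2)
closed = (2.0 / m) * (K @ S + np.array([1, -1]) * root)

print(np.sort(np.linalg.eigvalsh(A)), np.sort(closed))  # should agree
```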

With the exam criteria

In the specific case that we had on the exam where K_1 = K_2 and K_3 = K_4, these are

\begin{aligned}\omega^2 = \frac{2}{m} \left( { K_1 (S_1 + S_2) + K_3(S_3 + S_4) \pm \sqrt{ K_1^2 (S_2 - S_1)^2 + K_3^2 (S_3 - S_4)^2 } } \right)\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_3 \left( { S_3 - S_4 } \right) \\ K_1 \left( { (S_1 - S_2) \pm \sqrt{ (S_2 - S_1)^2 + \left( \frac{K_3}{K_1} \right)^2 (S_3 - S_4)^2 } } \right)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(

For horizontal and vertical motion we need S_3 = S_4, which holds when the absolute values of the sine arguments differ by a 2 \pi \times \text{integer}

\begin{aligned}\pm ( a (q_x + q_y) /2 ) = a (q_y - q_x) /2 + 2 \pi n.\end{aligned} \hspace{\stretch{1}}(

That is, one of

\begin{aligned}\begin{aligned}q_x &= \frac{2 \pi}{a} n \\ q_y &= \frac{2 \pi}{a} n\end{aligned}\end{aligned} \hspace{\stretch{1}}(

In the first BZ, that is one of q_x = 0 or q_y = 0.

System in rotated coordinates

On the exam, where we were asked to solve for motion along the cross directions explicitly, there was a strong hint to consider a rotated (by \pi/4) coordinate system.

The rotated lattice basis vectors are \mathbf{a} = a \mathbf{e}_1, \mathbf{b} = a \mathbf{e}_2. Writing \hat{\mathbf{r}} = \mathbf{f}_1 and \hat{\mathbf{s}} = \mathbf{f}_2, where \mathbf{f}_1 = (\mathbf{e}_1 + \mathbf{e}_2)/\sqrt{2}, \mathbf{f}_2 = (\mathbf{e}_2 - \mathbf{e}_1)/\sqrt{2}, or \mathbf{e}_1 = (\mathbf{f}_1 - \mathbf{f}_2)/\sqrt{2}, \mathbf{e}_2 = (\mathbf{f}_1 + \mathbf{f}_2)/\sqrt{2}, the projection matrices in the \{\mathbf{f}_1, \mathbf{f}_2\} basis are

\begin{aligned}\hat{\mathbf{a}} \hat{\mathbf{a}}^\text{T} = \frac{1}{{2}}\begin{bmatrix}1 \\ -1\end{bmatrix}\begin{bmatrix}1 & -1\end{bmatrix}= \frac{1}{{2}} \begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.25a)

\begin{aligned}\hat{\mathbf{b}} \hat{\mathbf{b}}^\text{T} = \frac{1}{{2}}\begin{bmatrix}1 \\ 1\end{bmatrix}\begin{bmatrix}1 & 1\end{bmatrix}= \frac{1}{{2}} \begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.25b)

\begin{aligned}\hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} = \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.25c)

\begin{aligned}\hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T} = \begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.25d)

The dot products that show up in the squared sines are

\begin{aligned}\mathbf{a} \cdot \mathbf{q}=a \frac{1}{{\sqrt{2}}} (\mathbf{f}_1 - \mathbf{f}_2) \cdot (\mathbf{f}_1 k_u + \mathbf{f}_2 k_v)=\frac{a}{\sqrt{2}} (k_u - k_v)\end{aligned} \hspace{\stretch{1}}(1.0.26a)

\begin{aligned}\mathbf{b} \cdot \mathbf{q}=a \frac{1}{{\sqrt{2}}} (\mathbf{f}_1 + \mathbf{f}_2) \cdot (\mathbf{f}_1 k_u + \mathbf{f}_2 k_v)=\frac{a}{\sqrt{2}} (k_u + k_v)\end{aligned} \hspace{\stretch{1}}(1.0.26b)

\begin{aligned}(\mathbf{a} + \mathbf{b}) \cdot \mathbf{q} = \sqrt{2} a k_u \end{aligned} \hspace{\stretch{1}}(1.0.26c)

\begin{aligned}(\mathbf{b} - \mathbf{a}) \cdot \mathbf{q} = \sqrt{2} a k_v \end{aligned} \hspace{\stretch{1}}(1.0.26d)

So that in this basis

\begin{aligned}\begin{aligned}S_1 &= \sin^2 \left( { \frac{a}{2\sqrt{2}} (k_u - k_v) } \right) \\ S_2 &= \sin^2 \left( { \frac{a}{2\sqrt{2}} (k_u + k_v) } \right) \\ S_3 &= \sin^2 \left( { \frac{a k_u}{\sqrt{2}} } \right) \\ S_4 &= \sin^2 \left( { \frac{a k_v}{\sqrt{2}} } \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.26d)

With the rotated projection operators eq. takes the form

\begin{aligned}A = \frac{2}{m}\begin{bmatrix}K_1 S_1 + K_2 S_2 + 2 K_3 S_3 & K_2 S_2 - K_1 S_1 \\ K_2 S_2 - K_1 S_1 & K_1 S_1 + K_2 S_2 + 2 K_4 S_4\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.26d)

This clearly differs from eq. 1.0.16d, and results in a different expression for the eigenvectors, but the same as eq. for the angular frequencies.

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_2 S_2 - K_1 S_1 \\ K_4 S_4 - K_3 S_3 \mp \sqrt{ (K_2 S_2 - K_1 S_1)^2 + (K_3 S_3 - K_4 S_4)^2 }\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.26d)

or, equivalently

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_4 S_4 - K_3 S_3 \mp \sqrt{ (K_2 S_2 - K_1 S_1)^2 + (K_3 S_3 - K_4 S_4)^2 } \\ K_1 S_1 - K_2 S_2 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.26d)

For the K_1 = K_2 and K_3 = K_4 case of the exam, this is

\begin{aligned}\boldsymbol{\epsilon} \propto\begin{bmatrix}K_1 (S_2 - S_1 ) \\ K_3 \left( { S_4 - S_3 \mp \sqrt{ \left( \frac{K_1}{K_3} \right)^2 (S_2 - S_1)^2 + (S_3 - S_4)^2 } } \right)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.26d)

Similar to the horizontal coordinate system, we see that we have motion along the diagonals when

\begin{aligned}\pm \frac{a}{\sqrt{2}} (k_u - k_v) = \frac{a}{\sqrt{2}} (k_u + k_v) + 2 \pi n,\end{aligned} \hspace{\stretch{1}}(1.0.26d)

or one of

\begin{aligned}\begin{aligned}k_u &= \sqrt{2} \frac{\pi}{a} n \\ k_v &= \sqrt{2} \frac{\pi}{a} n\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.26d)
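Since q_x = (k_u - k_v)/\sqrt{2} and q_y = (k_u + k_v)/\sqrt{2}, the sine factors take the same values in both frames, and the two matrices are related by a \pi/4 rotation, so their eigenvalues must coincide. A quick numpy check, with arbitrary sample constants:

```python
# Compare the spectra of the square-lattice matrix in the original (x, y)
# frame and in the π/4-rotated (f1, f2) frame at corresponding wavevectors.
import numpy as np

m, a = 1.0, 1.0
K1, K2, K3, K4 = 1.0, 2.0, 0.5, 0.7      # arbitrary sample spring constants
ku, kv = 1.1, -0.3                        # arbitrary rotated-frame wavevector
qx, qy = (ku - kv) / np.sqrt(2), (ku + kv) / np.sqrt(2)

# the same four sine factors appear in both frames
S1 = np.sin(a * qx / 2) ** 2
S2 = np.sin(a * qy / 2) ** 2
S3 = np.sin(a * (qx + qy) / 2) ** 2
S4 = np.sin(a * (qy - qx) / 2) ** 2

A_orig = (2.0 / m) * np.array([[2*K1*S1 + K3*S3 + K4*S4, K3*S3 - K4*S4],
                               [K3*S3 - K4*S4, 2*K2*S2 + K3*S3 + K4*S4]])
A_rot = (2.0 / m) * np.array([[K1*S1 + K2*S2 + 2*K3*S3, K2*S2 - K1*S1],
                              [K2*S2 - K1*S1, K1*S1 + K2*S2 + 2*K4*S4]])

print(np.sort(np.linalg.eigvalsh(A_orig)))
print(np.sort(np.linalg.eigvalsh(A_rot)))   # spectra should match
```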


The exam asked why the cross coupling is required for stability. With the cross coupling the interactions are clearly more complex, and the constant \omega surfaces will also be more complex. However, I still don’t have a good intuition for what exactly was sought in that part of the question.

Numerical computations

A Manipulate allowing for choice of the spring constants and lattice orientation, as shown in fig. 1.2, is available in phy487/oneAtomBasisPhonon.nb. This interface also provides a numerical calculation of the dispersion relation as shown in fig. 1.3, and provides an animation of the normal modes for any given selection of \mathbf{q} and \omega(\mathbf{q}) (not shown).

Fig 1.2: 2D Single atom basis Manipulate interface

Fig 1.3: Sample dispersion relation for 2D single atom basis.

Posted in Math and Physics Learning. | Tagged: , , , , , , | Leave a Comment »

Two body harmonic oscillator in 3D, and 2D diamond lattice vibrations

Posted by Peeter Joot on January 7, 2014

Abridged harmonic oscillator notes

[This is an abbreviation of more extensive PDF notes associated with the latter part of this post.]

Motivation and summary of harmonic oscillator background

After having had some trouble on a non-1D harmonic oscillator lattice problem on the exam, I attempted such a problem with enough time available to consider it properly. I found it helpful to consider first just two masses interacting harmonically in 3D, each displaced from an equilibrium position.

The Lagrangian that described this most naturally was found to be

\begin{aligned}\mathcal{L} = \frac{1}{2} m_1 \left( \dot{\mathbf{r}}_1 \right)^2+\frac{1}{2} m_2 \left( \dot{\mathbf{r}}_2 \right)^2- \frac{K}{2} \left( \left\lvert {\mathbf{r}_2 - \mathbf{r}_1} \right\rvert - a \right)^2.\end{aligned} \hspace{\stretch{1}}(2.1)

This was solved in absolute and displacement coordinates, and then I moved on to consider a linear expansion of the harmonic potential about the equilibrium point, a problem closer to the exam problem (albeit still considering only two masses). The equilibrium points were described with vectors \mathbf{a}_1, \mathbf{a}_2 as in fig. 2.1, where \Delta \mathbf{a} = \left\lvert {\Delta \mathbf{a}} \right\rvert (\cos \theta_1, \cos\theta_2, \cos\theta_3).

fig 2.1: Direction cosines relative to equilibrium position difference vector


Using such coordinates, and generalizing, it was found that the complete Lagrangian, to second order about the equilibrium positions, is

\begin{aligned}\mathcal{L} = \sum_{i j} \frac{m_i}{2} \dot{u}_{ij}^2 -\frac{K}{2} \sum_{i j} \cos\theta_i \cos\theta_j \left( u_{2 i} - u_{1 i} \right)\left( u_{2 j} - u_{1 j} \right).\end{aligned} \hspace{\stretch{1}}(2.2)

Evaluating the Euler-Lagrange equations, the equations of motion for the displacements were found to be

\begin{aligned}\begin{aligned}m_1 \ddot{\mathbf{u}}_1 &= K \widehat{\Delta \mathbf{a}} \left( \widehat{\Delta \mathbf{a}} \cdot \Delta \mathbf{u} \right) \\ m_2 \ddot{\mathbf{u}}_2 &= -K \widehat{\Delta \mathbf{a}} \left( \widehat{\Delta \mathbf{a}} \cdot \Delta \mathbf{u} \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)


\begin{aligned}\boxed{\begin{aligned}\mu \Delta \ddot{\mathbf{u}} &= -K \widehat{\Delta \mathbf{a}} \left( \widehat{\Delta \mathbf{a}} \cdot \Delta \mathbf{u} \right) \\ m_1 \ddot{\mathbf{u}}_1 + m_2 \ddot{\mathbf{u}}_2 &= 0.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(2.4)

Observe that on the RHS above we have a projection operator, so we could also write

\begin{aligned}\mu \Delta \ddot{\mathbf{u}} = -K \text{Proj}_{\widehat{\Delta \mathbf{a}}} \Delta \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(2.5)

We see that the equations of motion for the displacements of a system of harmonic oscillators have a rather pleasant expression in terms of projection operators, where we have projections onto the unit vectors between each pair of equilibrium positions.
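Both properties used above are easy to verify numerically: the projection operator \text{Proj}_{\hat{\mathbf{u}}} = \hat{\mathbf{u}} \hat{\mathbf{u}}^\text{T} is a symmetric idempotent matrix, and the paired internal forces cancel, so total momentum is conserved. A quick sketch with arbitrary sample vectors:

```python
# Projection operator Proj_u = û ûᵀ: idempotent and symmetric; the paired
# forces ±K Proj(Δu) are equal and opposite, so m1 ü1 + m2 ü2 = 0.
import numpy as np

da = np.array([1.0, 2.0, 2.0])           # arbitrary equilibrium separation
u_hat = da / np.linalg.norm(da)
P = np.outer(u_hat, u_hat)

assert np.allclose(P @ P, P)             # idempotent
assert np.allclose(P, P.T)               # symmetric

K, du = 3.0, np.array([0.2, -0.1, 0.4])  # arbitrary spring constant, Δu
f1 = K * P @ du                          # force on mass 1
f2 = -K * P @ du                         # force on mass 2
print(f1 + f2)                           # zero: total momentum conserved
```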

A number of harmonically coupled masses

Now let’s consider masses at lattice points indexed by a lattice vector \mathbf{n}, as illustrated in fig. 2.2.

fig 2.2: Masses harmonically coupled in a lattice


With a coupling constant of K_{\mathbf{n} \mathbf{m}} between lattice points indexed \mathbf{n} and \mathbf{m} (located at \mathbf{a}_\mathbf{n} and \mathbf{a}_\mathbf{m} respectively), and direction cosines for the equilibrium direction vector between those points given by

\begin{aligned}\mathbf{a}_\mathbf{n} - \mathbf{a}_\mathbf{m} = \Delta \mathbf{a}_{\mathbf{n} \mathbf{m}}= \left\lvert {\Delta \mathbf{a}_{\mathbf{n} \mathbf{m}}} \right\rvert (\cos \theta_{\mathbf{n} \mathbf{m} 1},\cos \theta_{\mathbf{n} \mathbf{m} 2},\cos \theta_{\mathbf{n} \mathbf{m} 3}),\end{aligned} \hspace{\stretch{1}}(2.6)

the Lagrangian is

\begin{aligned}\mathcal{L} = \sum_{\mathbf{n}, i} \frac{m_\mathbf{n}}{2} \dot{u}_{\mathbf{n} i}^2-\frac{1}{2} \sum_{\mathbf{n} \ne \mathbf{m}, i, j} \frac{K_{\mathbf{n} \mathbf{m}}}{2} \cos\theta_{\mathbf{n} \mathbf{m} i}\cos\theta_{\mathbf{n} \mathbf{m} j}\left( u_{\mathbf{n} i} - u_{\mathbf{m} i} \right)\left( u_{\mathbf{n} j} - u_{\mathbf{m} j} \right)\end{aligned} \hspace{\stretch{1}}(2.7)

Evaluating the Euler-Lagrange equations for the mass at index \mathbf{n} we have

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{u}_{\mathbf{n} k}}} =m_\mathbf{n} \ddot{u}_{\mathbf{n} k},\end{aligned} \hspace{\stretch{1}}(2.8)


\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {u_{\mathbf{n} k}}} &= -\sum_{\mathbf{m}, i, j}\frac{K_{\mathbf{n} \mathbf{m}}}{2} \cos\theta_{\mathbf{n} \mathbf{m} i}\cos\theta_{\mathbf{n} \mathbf{m} j}\left(\delta_{i k}\left( u_{\mathbf{n} j} - u_{\mathbf{m} j} \right)+\left( u_{\mathbf{n} i} - u_{\mathbf{m} i} \right)\delta_{j k}\right) \\ &= -\sum_{\mathbf{m}, i}K_{\mathbf{n} \mathbf{m}}\cos\theta_{\mathbf{n} \mathbf{m} k}\cos\theta_{\mathbf{n} \mathbf{m} i}\left( u_{\mathbf{n} i} - u_{\mathbf{m} i} \right) \\ &= -\sum_{\mathbf{m}}K_{\mathbf{n} \mathbf{m}}\cos\theta_{\mathbf{n} \mathbf{m} k}\widehat{\Delta \mathbf{a}} \cdot \Delta \mathbf{u}_{\mathbf{n} \mathbf{m}},\end{aligned} \hspace{\stretch{1}}(2.9)

where \Delta \mathbf{u}_{\mathbf{n} \mathbf{m}} = \mathbf{u}_\mathbf{n} - \mathbf{u}_\mathbf{m}. Equating both, we have in vector form

\begin{aligned}m_\mathbf{n} \ddot{\mathbf{u}}_\mathbf{n} = -\sum_{\mathbf{m}}K_{\mathbf{n} \mathbf{m}}\widehat{\Delta \mathbf{a}}\left( \widehat{\Delta \mathbf{a}} \cdot \Delta \mathbf{u}_{\mathbf{n} \mathbf{m}} \right),\end{aligned} \hspace{\stretch{1}}(2.10)


\begin{aligned}\boxed{m_\mathbf{n} \ddot{\mathbf{u}}_\mathbf{n} = -\sum_{\mathbf{m}}K_{\mathbf{n} \mathbf{m}}\text{Proj}_{ \widehat{\Delta \mathbf{a}} } \Delta \mathbf{u}_{\mathbf{n} \mathbf{m}},}\end{aligned} \hspace{\stretch{1}}(2.11)

This is an intuitively pleasing result. We have displacement and the direction of the lattice separations in the mix, but not the magnitude of the lattice separation itself.

Two atom basis, 2D diamond lattice

As a concrete application of the previously calculated equilibrium harmonic oscillator result, let’s consider a two atom basis diamond lattice where the horizontal length is a and vertical height is b.

Indexing for the primitive unit cells is illustrated in fig. 2.3.

fig 2.3: Primitive unit cells for rectangular lattice


Let’s write

\begin{aligned}\begin{aligned}\mathbf{r} &= a (\cos\theta, \sin\theta) = a \hat{\mathbf{r}} \\ \mathbf{s} &= a (-\cos\theta, \sin\theta) = a \hat{\mathbf{s}} \\ \mathbf{n} &= (n_1, n_2) \\ \mathbf{r}_\mathbf{n} &= n_1 \mathbf{r} + n_2 \mathbf{s},\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.12)

For mass m_\alpha, \alpha \in \{1, 2\}, assume a trial solution of the form

\begin{aligned}\mathbf{u}_{\mathbf{n},\alpha} = \frac{\boldsymbol{\epsilon}_\alpha(\mathbf{q})}{\sqrt{m_\alpha}} e^{i (\mathbf{r}_\mathbf{n} \cdot \mathbf{q} - \omega t)}.\end{aligned} \hspace{\stretch{1}}(2.13)

The equations of motion for the two particles are

\begin{aligned}\begin{aligned}m_1 \ddot{\mathbf{u}}_{\mathbf{n}, 1} &= - K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n} - (0,1), 2}  \right)- K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n} - (1,0), 2}  \right) \\  & \quad- K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n}, 2}  \right)- K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n} - (1,1), 2}  \right) \\  & \quad- K_3 \sum_\pm\text{Proj}_{\hat{\mathbf{r}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n} \pm (1,0), 1}  \right)- K_4 \sum_\pm\text{Proj}_{\hat{\mathbf{s}}} \left(  \mathbf{u}_{\mathbf{n}, 1} - \mathbf{u}_{\mathbf{n} \pm (0,1), 1}  \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\begin{aligned}m_2 \ddot{\mathbf{u}}_{\mathbf{n}, 2} &= - K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n} + (1,0), 1}  \right)- K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n} + (0,1), 1}  \right)\\  &\quad- K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n}, 1}  \right)- K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n} + (1,1), 1}  \right)\\  &\quad- K_3 \sum_\pm\text{Proj}_{\hat{\mathbf{r}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n} \pm (1,0), 2}  \right)- K_4 \sum_\pm\text{Proj}_{\hat{\mathbf{s}}} \left(  \mathbf{u}_{\mathbf{n}, 2} - \mathbf{u}_{\mathbf{n} \pm (0,1), 2}  \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(

Insertion of the trial solution gives

\begin{aligned}\begin{aligned} \omega^2 \sqrt{m_1} \boldsymbol{\epsilon}_1&= K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} - \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} e^{ - i \mathbf{s} \cdot \mathbf{q} }  \right)+ K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} - \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} e^{ - i \mathbf{r} \cdot \mathbf{q} }  \right) \\  &\quad+ K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} - \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}}  \right)+ K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} - \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} e^{ - i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q} }  \right) \\  &\quad+ K_3 \left(  \text{Proj}_{\hat{\mathbf{r}}} \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}}  \right)\sum_\pm\left(  1 - e^{ \pm i \mathbf{r} \cdot \mathbf{q} }  \right)+ K_4 \left(  \text{Proj}_{\hat{\mathbf{s}}} \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}}  \right)\sum_\pm\left(  1 - e^{ \pm i \mathbf{s} \cdot \mathbf{q} }  \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\begin{aligned}\omega^2 \sqrt{m_2} \boldsymbol{\epsilon}_2&=K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} - \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} e^{ + i \mathbf{r} \cdot \mathbf{q} }  \right)+ K_1 \text{Proj}_{\hat{\mathbf{x}}} \left(  \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} - \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} e^{ + i \mathbf{s} \cdot \mathbf{q} }  \right)\\  &\quad+ K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} - \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}}  \right)+ K_2 \text{Proj}_{\hat{\mathbf{y}}} \left(  \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}} - \frac{\boldsymbol{\epsilon}_1}{\sqrt{m_1}} e^{ + i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q} }  \right) \\  &\quad+ K_3 \left(  \text{Proj}_{\hat{\mathbf{r}}} \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}}  \right)\sum_\pm\left(  1 - e^{ \pm i \mathbf{r} \cdot \mathbf{q} }  \right)+ K_4 \left(  \text{Proj}_{\hat{\mathbf{s}}} \frac{\boldsymbol{\epsilon}_2}{\sqrt{m_2}}  \right)\sum_\pm\left(  1 - e^{ \pm i \mathbf{s} \cdot \mathbf{q} }  \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(

Regrouping, and using the matrix form \text{Proj}_{\hat{\mathbf{u}}} = \hat{\mathbf{u}} \hat{\mathbf{u}}^\text{T} for the projection operators, this is

\begin{aligned}\left(\omega^2 - \frac{2}{m_1} \left(  K_1 \hat{\mathbf{x}} \hat{\mathbf{x}}^T + K_2 \hat{\mathbf{y}} \hat{\mathbf{y}}^T + 2 K_3 \hat{\mathbf{r}} \hat{\mathbf{r}}^T \sin^2 (\mathbf{r} \cdot \mathbf{q}/2) + 2 K_4 \hat{\mathbf{s}} \hat{\mathbf{s}}^T \sin^2 (\mathbf{s} \cdot \mathbf{q}/2)  \right)\right)\boldsymbol{\epsilon}_1 = -\left(  K_1 \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T} \left(  e^{ - i \mathbf{s} \cdot \mathbf{q} } + e^{ - i \mathbf{r} \cdot \mathbf{q} }  \right) + K_2 \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T} \left(  1 + e^{ - i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q} }  \right)  \right)\frac{\boldsymbol{\epsilon}_2}{\sqrt{ m_1 m_2 }}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\left( \omega^2 - \frac{2}{m_2} \left(  K_1 \hat{\mathbf{x}} \hat{\mathbf{x}}^T + K_2 \hat{\mathbf{y}} \hat{\mathbf{y}}^T + 2 K_3 \hat{\mathbf{r}} \hat{\mathbf{r}}^T \sin^2 (\mathbf{r} \cdot \mathbf{q}/2)+ 2 K_4 \hat{\mathbf{s}} \hat{\mathbf{s}}^T \sin^2 (\mathbf{s} \cdot \mathbf{q}/2) \right) \right)\boldsymbol{\epsilon}_2 = -\left( K_1    \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T}   \left(    e^{ i \mathbf{s} \cdot \mathbf{q} }   +   e^{ i \mathbf{r} \cdot \mathbf{q} }    \right) +   K_2    \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T}   \left(    1   +   e^{ i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q} }    \right) \right)\frac{\boldsymbol{\epsilon}_1}{\sqrt{ m_1 m_2 }}\end{aligned} \hspace{\stretch{1}}(

As a single matrix equation, this is

\begin{aligned}A = K_1 \hat{\mathbf{x}} \hat{\mathbf{x}}^T + K_2 \hat{\mathbf{y}} \hat{\mathbf{y}}^T + 2 K_3 \hat{\mathbf{r}} \hat{\mathbf{r}}^T \sin^2 (\mathbf{r} \cdot \mathbf{q}/2)+ 2 K_4 \hat{\mathbf{s}} \hat{\mathbf{s}}^T \sin^2 (\mathbf{s} \cdot \mathbf{q}/2)\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}B = e^{ i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q}/2 }\left( {   K_1    \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T}   \cos\left(  (\mathbf{r} - \mathbf{s}) \cdot \mathbf{q}/2  \right)+   K_2    \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T}   \cos\left(  (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q}/2  \right)} \right)\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}0 =\begin{bmatrix}\omega^2 - \frac{2 A}{m_1} & \frac{B^{*}}{\sqrt{m_1 m_2}} \\ \frac{B}{\sqrt{m_1 m_2}} & \omega^2 - \frac{2 A}{m_2} \end{bmatrix}\begin{bmatrix}\boldsymbol{\epsilon}_1 \\ \boldsymbol{\epsilon}_2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(

Observe that this is an eigenvalue problem E \mathbf{e} = \omega^2 \mathbf{e} for matrix

\begin{aligned}E = \begin{bmatrix}\frac{2 A}{m_1} & -\frac{B^{*}}{\sqrt{m_1 m_2}} \\ -\frac{B}{\sqrt{m_1 m_2}} & \frac{2 A}{m_2} \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(

and eigenvalues \omega^2.
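Note that E is Hermitian (B is a complex symmetric matrix, so the off-diagonal blocks are conjugate transposes of each other), which guarantees real eigenvalues \omega^2. A numpy sketch assembling E directly from the A and B definitions above, with arbitrary sample values assumed for \theta, the spring constants, and the masses:

```python
# Assemble the 4x4 block matrix E for the two-atom basis and verify it is
# Hermitian, so the four ω²(q) branches come out real from eigvalsh.
import numpy as np

theta, a = np.pi / 5, 1.0                # assumed sample geometry
K1, K2, K3, K4 = 1.0, 1.3, 0.4, 0.6      # assumed sample spring constants
m1, m2 = 1.0, 2.0                        # assumed sample masses
q = np.array([0.7, 0.3])                 # arbitrary wavevector

r = a * np.array([np.cos(theta), np.sin(theta)])
s = a * np.array([-np.cos(theta), np.sin(theta)])
rh, sh = r / np.linalg.norm(r), s / np.linalg.norm(s)
xh, yh = np.array([1.0, 0.0]), np.array([0.0, 1.0])
proj = lambda u: np.outer(u, u)          # Proj_û = û ûᵀ

A = (K1 * proj(xh) + K2 * proj(yh)
     + 2 * K3 * proj(rh) * np.sin(r @ q / 2) ** 2
     + 2 * K4 * proj(sh) * np.sin(s @ q / 2) ** 2)
B = np.exp(1j * (r + s) @ q / 2) * (K1 * proj(rh) * np.cos((r - s) @ q / 2)
                                    + K2 * proj(sh) * np.cos((r + s) @ q / 2))

E = np.block([[2 * A / m1, -B.conj() / np.sqrt(m1 * m2)],
              [-B / np.sqrt(m1 * m2), 2 * A / m2]])

assert np.allclose(E, E.conj().T)        # Hermitian, so ω² is real
w2 = np.linalg.eigvalsh(E)
print(w2)                                # the four real branches ω²(q)
```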

To be concrete, let's put the A and B functions in explicit matrix form. The orthogonal projectors have a simple form

\begin{aligned}\text{Proj}_{\hat{\mathbf{x}}} = \hat{\mathbf{x}} \hat{\mathbf{x}}^\text{T}= \begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.0.19a)

\begin{aligned}\text{Proj}_{\hat{\mathbf{y}}} = \hat{\mathbf{y}} \hat{\mathbf{y}}^\text{T}= \begin{bmatrix}0 \\ 1\end{bmatrix}\begin{bmatrix}0 & 1\end{bmatrix}=\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.0.19b)

For the \hat{\mathbf{r}} and \hat{\mathbf{s}} projection operators, we can use half angle formulations

\begin{aligned}\text{Proj}_{\hat{\mathbf{r}}} = \hat{\mathbf{r}} \hat{\mathbf{r}}^\text{T}= \begin{bmatrix}\cos\theta \\ \sin\theta\end{bmatrix}\begin{bmatrix}\cos\theta & \sin\theta\end{bmatrix}=\begin{bmatrix}\cos^2\theta & \cos\theta \sin\theta \\ \cos\theta \sin\theta & \sin^2 \theta\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1 + \cos \left(  2 \theta  \right) & \sin \left(  2 \theta  \right) \\ \sin \left(  2 \theta  \right) & 1 - \cos \left(  2 \theta  \right)\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}\text{Proj}_{\hat{\mathbf{s}}} = \hat{\mathbf{s}} \hat{\mathbf{s}}^\text{T}= \begin{bmatrix}-\cos\theta \\ \sin\theta\end{bmatrix}\begin{bmatrix}-\cos\theta & \sin\theta\end{bmatrix}=\begin{bmatrix}\cos^2\theta & -\cos\theta \sin\theta \\ -\cos\theta \sin\theta & \sin^2 \theta\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1 + \cos \left(  2 \theta  \right) & -\sin \left(  2 \theta  \right) \\ -\sin \left(  2 \theta  \right) & 1 - \cos \left(  2 \theta  \right)\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(

After some manipulation, and with the following helper functions

\begin{aligned}\begin{aligned}\alpha_\pm &= K_3 \sin^2 (\mathbf{r} \cdot \mathbf{q}/2) \pm K_4 \sin^2 (\mathbf{s} \cdot \mathbf{q}/2) \\ \beta_\pm &= K_1 \cos\left(  (\mathbf{r} - \mathbf{s}) \cdot \mathbf{q}/2  \right) \pm K_2 \cos\left(  (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q}/2  \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(

the block matrices of eq. take the form

\begin{aligned}A = \begin{bmatrix}K_1 + \alpha_+ (1 + \cos\left(  2 \theta  \right)) & \alpha_- \sin\left(  2 \theta  \right) \\ \alpha_- \sin\left(  2 \theta  \right) & K_2 + \alpha_+ (1 - \cos\left(  2 \theta  \right))\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(

\begin{aligned}B =    e^{ i (\mathbf{r} + \mathbf{s}) \cdot \mathbf{q}/2 }\begin{bmatrix}   \beta_+ (1 + \cos \left(  2 \theta  \right)) & \beta_- \sin \left(  2 \theta  \right) \\    \beta_- \sin \left(  2 \theta  \right) & \beta_+( 1 -\cos \left(  2 \theta  \right))\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(

A final bit of simplification for B is possible, noting that \mathbf{r} + \mathbf{s} = 2 a (0, \sin\theta ), and \mathbf{r} - \mathbf{s} = 2 a(\cos\theta, 0), so

\begin{aligned}\beta_\pm = K_1 \cos\left(  a \cos\theta q_x  \right) \pm K_2 \cos\left(  a \sin\theta q_y  \right),\end{aligned} \hspace{\stretch{1}}(


\begin{aligned}B =    e^{ i a \sin\theta q_y }\begin{bmatrix}   \beta_+ (1 + \cos \left(  2 \theta  \right)) & \beta_- \sin \left(  2 \theta  \right) \\    \beta_- \sin \left(  2 \theta  \right) & \beta_+( 1 -\cos \left(  2 \theta  \right))\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(

It isn’t particularly illuminating to expand out the determinant for such a system, even though it can be done symbolically without too much programming. However, what is easy after formulating the matrix for this system is actually solving it. This is done, and animated, in twoAtomBasisRectangularLatticeDispersionRelation.cdf.

Posted in Math and Physics Learning. | Tagged: , , , | Leave a Comment »

