# Peeter Joot's Blog.

# Posts Tagged ‘bivector’

## Tangent planes and normals in three and four dimensions

Posted by peeterjoot on January 4, 2013

# Motivation

I was reviewing the method of Lagrange in my old first year calculus book [1] and found that I needed a review of some of the geometry ideas associated with the gradient (that it is normal to the surface). The approach in the text used 3D level surfaces $f(x, y, z) = c$, which is general but not the most intuitive.

If we define a surface in the simpler explicit form $z = f(x, y)$, then how would you show this normal property? Here we explore this in 3D and 4D, using geometric and wedge products to express the tangent planes and tangent volumes respectively.

In the 4D approach, with a vector $x$ defined by coordinates $x^\mu$ and basis $\{\gamma_\mu\}$ so that

\begin{aligned}x = \gamma_\mu x^\mu,\end{aligned} \hspace{\stretch{1}}(1.1.1)

the reciprocal basis $\{\gamma^\mu\}$ is defined implicitly by the dot product relations

\begin{aligned}\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu.\end{aligned} \hspace{\stretch{1}}(1.1.2)

Assuming such a basis makes the result general enough that the 4D result (or its trivial generalization to N dimensions) holds for Euclidean as well as mixed metric (i.e. Minkowski) spaces, and avoids having to detail the specific metric in question.
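As a quick aside (not part of the original post), the reciprocal basis components can be computed numerically as rows of the inverse metric matrix, which makes relation 1.1.2 easy to spot check. A minimal numpy sketch for the $(+,-,-,-)$ Minkowski metric:

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-); any invertible symmetric
# metric works the same way.
G = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(a, b):
    """Dot product of vectors given by their components on {gamma_mu}."""
    return a @ G @ b

# gamma_mu has components e_mu; the reciprocal gamma^mu = g^{mu nu} gamma_nu
# has components given by the inverse metric.
Ginv = np.linalg.inv(G)
gamma = np.eye(4)              # rows: components of gamma_0 .. gamma_3
recip = Ginv @ gamma           # rows: components of gamma^0 .. gamma^3

# Verify gamma^mu . gamma_nu = delta^mu_nu
delta = np.array([[dot(recip[mu], gamma[nu]) for nu in range(4)]
                  for mu in range(4)])
print(np.allclose(delta, np.eye(4)))   # True
```

The same check passes for any invertible metric, which is the point of working with an implicitly defined reciprocal basis.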

# 3D surface

We start by considering figure 1:

Figure 1: A portion of a surface in 3D

We wish to determine the bivector for the tangent plane in the neighbourhood of the point $\mathbf{p}$

\begin{aligned}\mathbf{p} = ( x, y, f(x, y) ),\end{aligned} \hspace{\stretch{1}}(1.2.3)

and then use duality to determine the normal vector to that plane at this point. Holding either of the two free parameters constant, we find the tangent vectors on that surface to be

\begin{aligned}\mathbf{p}_1 = \left( dx, 0, \frac{\partial {f}}{\partial {x}} dx \right) \propto \left( 1, 0, \frac{\partial {f}}{\partial {x}} \right) \end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\mathbf{p}_2 = \left( 0, dy, \frac{\partial {f}}{\partial {y}} dy \right) \propto \left( 0, 1, \frac{\partial {f}}{\partial {y}} \right) \end{aligned} \hspace{\stretch{1}}(1.0.4b)

The tangent plane is then

\begin{aligned}\mathbf{p}_1 \wedge \mathbf{p}_2 &= \left( 1, 0, \frac{\partial {f}}{\partial {x}} \right) \wedge\left( 0, 1, \frac{\partial {f}}{\partial {y}} \right) \\ &= \left( \mathbf{e}_1 + \mathbf{e}_3 \frac{\partial {f}}{\partial {x}} \right) \wedge\left( \mathbf{e}_2 + \mathbf{e}_3 \frac{\partial {f}}{\partial {y}} \right) \\ &= \mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_1 \mathbf{e}_3 \frac{\partial {f}}{\partial {y}} + \mathbf{e}_3 \mathbf{e}_2 \frac{\partial {f}}{\partial {x}}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

We can factor out the pseudoscalar 3D volume element $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, assuming a Euclidean space for which $\mathbf{e}_k^2 = 1$. That is

\begin{aligned}\mathbf{p}_1 \wedge \mathbf{p}_2 = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \left(\mathbf{e}_3- \mathbf{e}_2 \frac{\partial {f}}{\partial {y}} - \mathbf{e}_1 \frac{\partial {f}}{\partial {x}}\right)\end{aligned} \hspace{\stretch{1}}(1.0.6)

Multiplying through by $-I$ we find that the normal to the surface at this point is

\begin{aligned}\mathbf{n} \propto -I(\mathbf{p}_1 \wedge \mathbf{p}_2) = \mathbf{e}_3- \mathbf{e}_1 \frac{\partial {f}}{\partial {x}}- \mathbf{e}_2 \frac{\partial {f}}{\partial {y}}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Observe that we can write this as

\begin{aligned}\boxed{\mathbf{n} = \boldsymbol{\nabla} ( z - f(x, y) ).}\end{aligned} \hspace{\stretch{1}}(1.0.8)
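As a numerical sanity check (an aside, not in the post), we can verify that $\boldsymbol{\nabla}(z - f)$ is orthogonal to both tangent directions for a hypothetical example surface, here $f(x,y) = x^2 + 3y$:

```python
import numpy as np

# Hypothetical example surface z = f(x, y); any differentiable f works.
fx = lambda x, y: 2 * x    # df/dx for f = x^2 + 3y
fy = lambda x, y: 3.0      # df/dy

x0, y0 = 1.2, -0.7
t1 = np.array([1.0, 0.0, fx(x0, y0)])            # tangent along x, eq. 1.0.4a
t2 = np.array([0.0, 1.0, fy(x0, y0)])            # tangent along y, eq. 1.0.4b
n  = np.array([-fx(x0, y0), -fy(x0, y0), 1.0])   # grad(z - f), eq. 1.0.8

print(np.isclose(n @ t1, 0), np.isclose(n @ t2, 0))   # True True
```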

Let’s see how this works in 4D, so that we know how to handle the Minkowski spaces we find in special relativity.

# 4D surface

Now, let’s move up to one additional direction, with

\begin{aligned}x^3 = f(x^0, x^1, x^2).\end{aligned} \hspace{\stretch{1}}(1.0.9)

The differential of this is

\begin{aligned}dx^3 = \sum_{k=0}^2 \frac{\partial {f}}{\partial {x^k}} dx^k = \sum_{k=0}^2 \partial_k f dx^k .\end{aligned} \hspace{\stretch{1}}(1.0.10)

We are going to look at the 3-surface in the neighbourhood of the point

\begin{aligned}p = \left( x^0, x^1, x^2, x^3\right),\end{aligned} \hspace{\stretch{1}}(1.0.11)

so that the tangent vectors in the neighbourhood of this point are in the span of

\begin{aligned}dp = \left( dx^0, dx^1, dx^2, \sum_{k=0}^2 \partial_k f \, dx^k\right).\end{aligned} \hspace{\stretch{1}}(1.0.12)

In particular, in each of the directions we have

\begin{aligned}p_0 \propto ( 1, 0, 0, \partial_0 f)\end{aligned} \hspace{\stretch{1}}(1.0.13a)

\begin{aligned}p_1 \propto ( 0, 1, 0, \partial_1 f)\end{aligned} \hspace{\stretch{1}}(1.0.13b)

\begin{aligned}p_2 \propto ( 0, 0, 1, \partial_2 f)\end{aligned} \hspace{\stretch{1}}(1.0.13c)

Our tangent volume in this neighbourhood is

\begin{aligned}p_0 \wedge p_1 \wedge p_2&=\left( \gamma_0 + \gamma_3 \partial_0 f\right)\wedge\left( \gamma_1 + \gamma_3 \partial_1 f\right)\wedge\left( \gamma_2 + \gamma_3 \partial_2 f\right) \\ &=\left( \gamma_0 \gamma_1 + \gamma_0 \gamma_3 \partial_1 f+ \gamma_3 \gamma_1 \partial_0 f\right)\wedge\left( \gamma_2 + \gamma_3 \partial_2 f\right) \\ &=\gamma_{012} - \gamma_{023} \partial_1 f + \gamma_{123} \partial_0 f + \gamma_{013} \partial_2 f.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Here the shorthand $\gamma_{ijk} = \gamma_i \gamma_j \gamma_k$ has been used. Can we factor out a 4D pseudoscalar from this and end up with a coherent result? We have

\begin{aligned}\gamma_{0123} \gamma^3 = \gamma_{012}\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\gamma_{0123} \gamma^1 = \gamma_{023}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\gamma_{0123} \gamma^0 = -\gamma_{123}\end{aligned} \hspace{\stretch{1}}(1.0.15c)

\begin{aligned}\gamma_{0123} \gamma^2 = -\gamma_{013}.\end{aligned} \hspace{\stretch{1}}(1.0.15d)

This gives us

\begin{aligned}d^3 p=p_0 \wedge p_1 \wedge p_2=\gamma_{0123} \left(\gamma^3 - \gamma^1 \partial_1 f- \gamma^0 \partial_0 f- \gamma^2 \partial_2 f\right).\end{aligned} \hspace{\stretch{1}}(1.0.16)

With the usual 4D gradient definition (sum implied)

\begin{aligned}\nabla = \gamma^\mu \partial_\mu,\end{aligned} \hspace{\stretch{1}}(1.0.17)

we have

\begin{aligned}\nabla x^3 = \gamma^\mu \partial_\mu x^3 = \gamma^\mu {\delta_{\mu}}^3= \gamma^3,\end{aligned} \hspace{\stretch{1}}(1.0.18)

so we can write

\begin{aligned}d^3 p = \gamma_{0123} \nabla \left( x^3 - f(x^0, x^1, x^2) \right),\end{aligned} \hspace{\stretch{1}}(1.0.19)

so, finally, the “normal” to this surface volume element at this point is

\begin{aligned}\boxed{n = \nabla \left( x^3 - f(x^0, x^1, x^2) \right).}\end{aligned} \hspace{\stretch{1}}(1.0.20)

This is just like the 3D Euclidean result, with the exception that we need to look at the dual of a 3-volume “surface” instead of our normal 2D surface.

Also note that this is not a metric free result. The metric choice is built into the definition of the gradient 1.0.17 and its associated reciprocal basis. For example with a $1,3$ metric where $\gamma_0^2 = 1, \gamma_k^2 = -1$, we have $\gamma^0 = \gamma_0$ and $\gamma^k = -\gamma_k$.
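Again as an aside (not from the post), the orthogonality of this normal to all three tangent directions can be spot checked numerically in the $(+,-,-,-)$ metric, writing everything in components on $\{\gamma_\mu\}$ and using arbitrary stand-in values for the partials of $f$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # (+,-,-,-) metric

d0, d1, d2 = rng.standard_normal(3)      # stand-ins for the partials of f

# Tangent vector components on {gamma_mu}, as in 1.0.13
p0 = np.array([1.0, 0.0, 0.0, d0])
p1 = np.array([0.0, 1.0, 0.0, d1])
p2 = np.array([0.0, 0.0, 1.0, d2])

# n = gamma^3 - gamma^0 d0 - gamma^1 d1 - gamma^2 d2, using gamma^0 = gamma_0
# and gamma^k = -gamma_k for this metric.
n = np.array([-d0, d1, d2, -1.0])

dots = [p @ eta @ n for p in (p0, p1, p2)]
print(np.allclose(dots, 0))   # True
```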

# References

[1] S.L. Salas, E. Hille, and G.J. Etgen. Calculus: One and Several Variables. Wiley, New York, 1990.

## Plane wave solutions of Maxwell’s equation using Geometric Algebra

Posted by peeterjoot on September 3, 2012

# Motivation

Study of reflection and transmission of radiation in isotropic, charge and current free, linear matter utilizes the plane wave solutions to Maxwell’s equations. These have the structure of phasor equations, with some specific constraints on the components and the exponents.

These constraints are usually derived starting with the plain old vector form of Maxwell’s equations, and it is natural to wonder how this is done directly using Geometric Algebra. [1] provides one such derivation, using the covariant form of Maxwell’s equations. Here’s a slightly more pedestrian way of doing the same.

# Maxwell’s equations in media

We start with Maxwell’s equations for linear matter as found in [2]

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1a)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.1b)

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1c)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.1d)

We merge these using the geometric identity

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a} = \boldsymbol{\nabla} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.2.2)

where $I$ is the 3D pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, to find

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} = -I \frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.3a)

\begin{aligned}\boldsymbol{\nabla} \mathbf{B} = I \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.3b)
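As an aside (not in the original post), the algebraic content of identity 1.2.2, which for a pair of vectors reads $\mathbf{b} \mathbf{a} = \mathbf{b} \cdot \mathbf{a} + I (\mathbf{b} \times \mathbf{a})$, can be spot checked in the Pauli matrix representation of the Euclidean 3D geometric algebra, where $\mathbf{e}_k \rightarrow \sigma_k$ and $I \rightarrow i \, \mathbb{1}$:

```python
import numpy as np

# Pauli matrix representation: e_k -> sigma_k, I -> 1j * identity.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def vec(a):
    """Map a 3-component vector to its Pauli representation."""
    return sum(ak * sk for ak, sk in zip(a, s))

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)

lhs = vec(b) @ vec(a)                                  # geometric product b a
rhs = np.dot(b, a) * I2 + 1j * vec(np.cross(b, a))     # b.a + I (b x a)

print(np.allclose(lhs, rhs))   # True
```

Since the gradient acts algebraically like a vector, this is exactly the identity used to merge the divergence and curl equations.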

We want dimensions of $1/L$ for the derivative operator on the RHS of 1.2.3b, so we divide through by $\sqrt{\mu\epsilon} I$ for

\begin{aligned}-I \frac{1}{{\sqrt{\mu\epsilon}}} \boldsymbol{\nabla} \mathbf{B} = \sqrt{\mu\epsilon} \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.4)

This can now be added to 1.2.3a for

\begin{aligned}\left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

This is Maxwell’s equation in linear isotropic charge and current free matter in Geometric Algebra form.

# Phasor solutions

We write the electromagnetic field as

\begin{aligned}F = \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that for vacuum where $1/\sqrt{\mu \epsilon} = c$ we have the usual $F = \mathbf{E} + I c \mathbf{B}$. Assuming a phasor solution of

\begin{aligned}\tilde{F} = F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.3.7)

where $F_0$ is allowed to be complex, and the actual field is obtained by taking the real part

\begin{aligned}F = \text{Real} \tilde{F} = \text{Real}(F_0) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}(F_0) \sin(\mathbf{k} \cdot \mathbf{x} - \omega t).\end{aligned} \hspace{\stretch{1}}(1.3.8)

Note carefully that we are using a scalar imaginary $i$, as well as the multivector (pseudoscalar) $I$, despite the fact that both square to the scalar minus one.

We now seek the constraints on $\mathbf{k}$, $\omega$, and $F_0$ that allow this to be a solution to 1.2.5

\begin{aligned}0 = \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

As usual in the non-geometric algebra treatment, we observe that any such solution $F$ to Maxwell’s equation is also a wave equation solution. In GA we can do so by multiplying from the left with the conjugate operator,

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &= \left(\boldsymbol{\nabla} - \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \mu\epsilon \frac{\partial^2}{\partial t^2} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \frac{1}{{v^2}} \frac{\partial^2}{\partial t^2} \right) \tilde{F},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.10)

where $v = 1/\sqrt{\mu\epsilon}$ is the speed of the wave described by this solution.

Inserting the exponential form of our assumed solution 1.3.7 we find

\begin{aligned}0 = -(\mathbf{k}^2 - \omega^2/v^2) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)},\end{aligned} \hspace{\stretch{1}}(1.3.11)

which implies that the wave number vector $\mathbf{k}$ and the angular frequency $\omega$ are related by

\begin{aligned}v^2 \mathbf{k}^2 = \omega^2.\end{aligned} \hspace{\stretch{1}}(1.3.12)

Our assumed solution must also satisfy the first order system 1.3.9

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i\left(\mathbf{e}_m k_m - \frac{\omega}{v}\right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i k ( \hat{\mathbf{k}} - 1 ) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.13)

The constraints on $F_0$ must then be given by

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) F_0.\end{aligned} \hspace{\stretch{1}}(1.3.14)

With

\begin{aligned}F_0 = \mathbf{E}_0 + I v \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.15)

we must then have all grades of the multivector equation equal to zero

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) \left(\mathbf{E}_0 + I v \mathbf{B}_0\right).\end{aligned} \hspace{\stretch{1}}(1.3.16)

Writing out all the geometric products, noting that $I$ commutes with all of $\hat{\mathbf{k}}$, $\mathbf{E}_0$, and $\mathbf{B}_0$ and employing the identity $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$ we have

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0- \mathbf{E}_0+ \hat{\mathbf{k}} \wedge \mathbf{E}_0+ I v \hat{\mathbf{k}} \cdot \mathbf{B}_0+ I v \hat{\mathbf{k}} \wedge \mathbf{B}_0- I v \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.17)

Grade by grade, this is

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18a)

\begin{aligned}\mathbf{E}_0 =- \hat{\mathbf{k}} \times v \mathbf{B}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18b)

\begin{aligned}v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18c)

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.18d)

This and 1.3.12 describe all the constraints on our phasor that are required for it to be a solution. Note that only one of the two cross product equations is required, because the two are not independent. This can be shown by crossing $\hat{\mathbf{k}}$ with 1.3.18b and using the identity

\begin{aligned}\mathbf{a} \times (\mathbf{a} \times \mathbf{b}) = - \mathbf{a}^2 \mathbf{b} + \mathbf{a} (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \hspace{\stretch{1}}(1.3.19)
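This BAC-CAB style identity is easy to verify numerically (a small aside, not in the post):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# a x (a x b) = -a^2 b + a (a . b)
lhs = np.cross(a, np.cross(a, b))
rhs = -np.dot(a, a) * b + a * np.dot(a, b)
print(np.allclose(lhs, rhs))   # True
```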

One easily finds that 1.3.18b and 1.3.18c provide the same relationship between the $\mathbf{E}_0$ and $\mathbf{B}_0$ components of $F_0$. Writing out the complete expression for $F_0$ we have

\begin{aligned}\begin{aligned}F_0 &= \mathbf{E}_0 + I v \mathbf{B}_0 \\ &=\mathbf{E}_0 + I \hat{\mathbf{k}} \times \mathbf{E}_0 \\ &=\mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.20)

Since $\hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0$, this is

\begin{aligned}F_0 = (1 + \hat{\mathbf{k}}) \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(1.3.21)

Had we been clever enough this could have been deduced directly from 1.3.14, since we require a product that is killed by left multiplication with $\hat{\mathbf{k}} - 1$. Our complete plane wave solution to Maxwell’s equation is therefore given by

\begin{aligned}\begin{aligned}F &= \text{Real}(\tilde{F}) = \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \\ \tilde{F} &= (1 \pm \hat{\mathbf{k}}) \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} \mp \omega t)} \\ 0 &= \hat{\mathbf{k}} \cdot \mathbf{E}_0 \\ \mathbf{k}^2 &= \omega^2 \mu \epsilon.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.22)
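As a final sanity check (again an aside, not part of the post), the Pauli matrix representation of the 3D GA lets us verify numerically that $F_0 = (1 + \hat{\mathbf{k}})\mathbf{E}_0$ with $\hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0$ is killed by $\hat{\mathbf{k}} - 1$, and that it carries exactly the $\mathbf{E}_0 + I v \mathbf{B}_0$ content with $v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0$:

```python
import numpy as np

# Pauli representation of the 3D GA: e_k -> sigma_k, I -> 1j * identity.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
vec = lambda a: sum(ak * sk for ak, sk in zip(a, s))

rng = np.random.default_rng(2)
khat = rng.standard_normal(3)
khat /= np.linalg.norm(khat)

# Any E0 with khat . E0 = 0 (project out the khat component).
E0 = rng.standard_normal(3)
E0 -= np.dot(khat, E0) * khat

F0 = (I2 + vec(khat)) @ vec(E0)          # F0 = (1 + khat) E0

# (khat - 1) F0 = 0, so the phasor satisfies the first order equation 1.3.14.
print(np.allclose((vec(khat) - I2) @ F0, 0))                    # True

# F0 = E0 + I v B0, with v B0 = khat x E0 as in 1.3.18c.
print(np.allclose(F0, vec(E0) + 1j * vec(np.cross(khat, E0))))  # True
```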

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

## Strain tensor in spherical coordinates

Posted by peeterjoot on January 23, 2012


## Spherical tensor.

To perform the derivation in spherical coordinates we have some setup to do first, since we need explicit representations of all three unit vectors. The radial vector we can get easily by geometry and find the usual

\begin{aligned}\hat{\mathbf{r}} =\begin{bmatrix}\sin\theta \cos\phi \\ \sin\theta \sin\phi \\ \cos\theta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.61)

We can get $\hat{\boldsymbol{\phi}}$ by geometrical intuition, since it is the $x,y$ plane unit vector at angle $\phi$, rotated by $\pi/2$. That is

\begin{aligned}\hat{\boldsymbol{\phi}} =\begin{bmatrix}-\sin\phi \\ \cos\phi \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.62)

We can get $\hat{\boldsymbol{\theta}}$ by utilizing the right handedness of the coordinates since

\begin{aligned}\hat{\boldsymbol{\phi}} \times \hat{\mathbf{r}} = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(3.63)

and find

\begin{aligned}\hat{\boldsymbol{\theta}} =\begin{bmatrix}\cos\theta \cos\phi \\ \cos\theta \sin\phi \\ -\sin\theta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.64)
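Before pressing on, these three unit vectors can be verified symbolically (a small sympy aside, not in the post): they should form an orthonormal frame satisfying the right handedness relation 3.63:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

rhat     = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
thetahat = sp.Matrix([sp.cos(theta)*sp.cos(phi), sp.cos(theta)*sp.sin(phi), -sp.sin(theta)])
phihat   = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# Right handed triplet: phihat x rhat = thetahat
print(sp.simplify(phihat.cross(rhat) - thetahat))   # zero column vector

# Orthonormality: Gram matrix of the frame is the identity
frame = [rhat, thetahat, phihat]
gram = sp.Matrix(3, 3, lambda i, j: sp.simplify(frame[i].dot(frame[j])))
print(gram == sp.eye(3))   # True
```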

The complete second order expansion of the difference of squared length elements, with an implied sum over $m \in \{r, \theta, \phi\}$, is

\begin{aligned}\begin{aligned}&d\mathbf{l}'^2 - d\mathbf{x}^2 \\ &=2 (dr)^2 \biggl(\frac{\partial u_r}{\partial r}+ \frac{1}{{2}}\frac{\partial u_m}{\partial r} \frac{\partial u_m}{\partial r}\biggr) \\ & + 2 r^2 (d\theta )^2 \biggl(\frac{1}{{r}} u_r + \frac{1}{{2r^2}}(u_r^2 + u_{\theta }^2) - \frac{1}{{r^2}} u_{\theta } \frac{\partial u_r}{\partial \theta }+ \left(\frac{1}{{r}} + \frac{1}{{r^2}}u_r\right) \frac{\partial u_{\theta }}{\partial \theta }+ \frac{1}{{2 r^2}} \frac{\partial u_m}{\partial \theta } \frac{\partial u_m}{\partial \theta }\biggr) \\ &+ 2 r^2 \sin^2\theta (d\phi )^2 \biggl( \frac{1}{{2 r^2 \sin^2\theta}} u_\phi^2+ \frac{1}{{2 r^2 }} u_{\theta }^2 \cot^2\theta+ \frac{1}{{r}} u_r+ \frac{1}{{2 r^2}} u_r^2+ \left(\frac{1}{{r}} + \frac{1}{{r^2}}u_r\right) u_{\theta } \cot\theta \\ &\qquad- \frac{1}{{r^2 \sin\theta}} u_{\phi } \frac{\partial u_r}{\partial \phi }- \frac{1}{{r^2 }} u_{\phi } \frac{\cos\theta}{\sin^2\theta} \frac{\partial u_{\theta }}{\partial \phi }+ \frac{1}{{r^2 }} \frac{\partial u_{\phi }}{\partial \phi } \left(u_{\theta } \frac{\cos\theta}{\sin^2\theta} + \left(r + u_r\right) \frac{1}{{\sin\theta}} \right)+ \frac{1}{{2 r^2 \sin^2\theta}} \frac{\partial u_m}{\partial \phi } \frac{\partial u_m}{\partial \phi }\biggr) \\ & + 2 dr r d\theta \biggl(- \frac{1}{{r}} u_{\theta }+ \frac{1}{{r}} \frac{\partial u_r}{\partial \theta }- \frac{1}{{r}} u_{\theta } \frac{\partial u_r}{\partial r}+ \frac{\partial u_{\theta }}{\partial r} \left(1 + \frac{u_r}{r} \right)+ \frac{1}{{r}} \frac{\partial u_m}{\partial r} \frac{\partial u_m}{\partial \theta }\biggr) \\ & + 2 r^2 \sin\theta d\theta d\phi \biggl(\frac{1}{{r^2 }} u_{\theta } u_{\phi }- \frac{1}{{r^2 \sin\theta}} u_{\theta } \frac{\partial u_r}{\partial \phi }- \frac{1}{{r^2 }} u_{\phi } \frac{\partial u_r}{\partial \theta }- \frac{1}{{r^2 }} u_{\phi } \cot\theta \left(r + u_r + \frac{\partial u_{\theta }}{\partial \theta }\right) \\ &\qquad+ \frac{1}{{r^2 \sin\theta}} \left(r + u_r \right) \frac{\partial u_{\theta }}{\partial \phi }+ \frac{\partial u_{\phi }}{\partial \theta } \left(\frac{u_{\theta }}{r^2} \cot\theta + \frac{1}{{r}} + \frac{u_r}{r^2} \right)+ \frac{1}{{r^2 \sin\theta}} \frac{\partial u_m}{\partial \theta } \frac{\partial u_m}{\partial \phi }\biggr) \\ & + 2 r \sin\theta d\phi dr \biggl(- \frac{1}{{r }} u_{\phi }+ \frac{1}{{r \sin\theta}} \frac{\partial u_r}{\partial \phi }- u_{\phi } \frac{1}{{r }} \frac{\partial u_r}{\partial r}- u_{\phi } \cot\theta \frac{1}{{r }} \frac{\partial u_{\theta }}{\partial r}+ \frac{1}{{r }} \frac{\partial u_{\phi }}{\partial r} \left( u_{\theta } \cot\theta + r + u_r \right)+ \frac{1}{{r \sin\theta}} \frac{\partial u_m}{\partial \phi } \frac{\partial u_m}{\partial r}\biggr)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.65)

### A manual derivation.

Doing the calculation pretty much completely with Mathematica is rather unsatisfying. To set up for it let’s first compute the unit vectors from scratch. I’ll use geometric algebra to do this calculation. Consider the figure below.

Figure 1: Composite rotations for spherical polar unit vectors.

We have two sets of rotations, the first is a rotation about the $z$ axis by $\phi$. Writing $i = \mathbf{e}_1 \mathbf{e}_2$ for the unit bivector in the $x,y$ plane, we rotate

\begin{aligned}\mathbf{e}_1' &= \mathbf{e}_1 e^{i\phi} = \mathbf{e}_1 \cos\phi + \mathbf{e}_2 \sin\phi \\ \mathbf{e}_2' &= \mathbf{e}_2 e^{i\phi} = \mathbf{e}_2 \cos\phi - \mathbf{e}_1 \sin\phi \\ \mathbf{e}_3' &= \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(3.66)

Now we rotate in the plane spanned by $\mathbf{e}_3$ and $\mathbf{e}_1'$ by $\theta$. With $j = \mathbf{e}_3 \mathbf{e}_1'$, our vectors in the plane rotate as

\begin{aligned}\mathbf{e}_1'' &= \mathbf{e}_1' e^{j\theta} = \mathbf{e}_1 e^{i\phi} e^{j\theta} \\ \mathbf{e}_3'' &= \mathbf{e}_3' e^{j\theta} = \mathbf{e}_3 e^{j\theta},\end{aligned} \hspace{\stretch{1}}(3.69)

(with $\mathbf{e}_2'' = \mathbf{e}_2'$ since $\mathbf{e}_2' \cdot j = 0$).

\begin{aligned}\hat{\boldsymbol{\theta}} = \mathbf{e}_1''&= \mathbf{e}_1 e^{i\phi} e^{j\theta} \\ &= \mathbf{e}_1 e^{i\phi} (\cos\theta + \mathbf{e}_3 \mathbf{e}_1 e^{i\phi} \sin\theta) \\ &= \mathbf{e}_1 e^{i\phi} \cos\theta -\mathbf{e}_3 \sin\theta \\ &= (\mathbf{e}_1 \cos\phi + \mathbf{e}_2 \sin\phi) \cos\theta -\mathbf{e}_3 \sin\theta \\ \end{aligned}

\begin{aligned}\hat{\mathbf{r}} = \mathbf{e}_3''&= \mathbf{e}_3 e^{j\theta} \\ &= \mathbf{e}_3 (\cos\theta + \mathbf{e}_3 \mathbf{e}_1 e^{i\phi} \sin\theta) \\ &= \mathbf{e}_3 \cos\theta + \mathbf{e}_1 e^{i\phi} \sin\theta \\ &= \mathbf{e}_3 \cos\theta + (\mathbf{e}_1 \cos\phi + \mathbf{e}_2 \sin\phi) \sin\theta \\ \end{aligned}

Now, these are all the same relations that we could find with coordinate algebra

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 \cos\phi \sin\theta +\mathbf{e}_2 \sin\phi \sin\theta +\mathbf{e}_3 \cos\theta \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_1 \cos\phi \cos\theta +\mathbf{e}_2 \sin\phi \cos\theta -\mathbf{e}_3 \sin\theta \\ \hat{\boldsymbol{\phi}} &= -\mathbf{e}_1 \sin\phi + \mathbf{e}_2 \cos\phi\end{aligned} \hspace{\stretch{1}}(3.71)

There’s nothing special in this approach if that is as far as we go, but we can put things in a nice tidy form for computation of the differentials of the unit vectors. Introducing the unit pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ we can write these in a compact exponential form.

\begin{aligned}\hat{\mathbf{r}}&= (\mathbf{e}_1 \cos\phi +\mathbf{e}_2 \sin\phi ) \sin\theta +\mathbf{e}_3 \cos\theta \\ &= \mathbf{e}_1 e^{i\phi} \sin\theta +\mathbf{e}_3 \cos\theta \\ &= \mathbf{e}_3 ( \cos\theta + \mathbf{e}_3 \mathbf{e}_1 e^{i\phi} \sin\theta ) \\ &= \mathbf{e}_3 ( \cos\theta + \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 e^{i\phi} \sin\theta ) \\ &= \mathbf{e}_3 ( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta ) \\ &= \mathbf{e}_3 e^{ I \hat{\boldsymbol{\phi}} \theta }\end{aligned}

\begin{aligned}\hat{\boldsymbol{\theta}}&=\mathbf{e}_1 \cos\phi \cos\theta +\mathbf{e}_2 \sin\phi \cos\theta -\mathbf{e}_3 \sin\theta \\ &=(\mathbf{e}_1 \cos\phi +\mathbf{e}_2 \sin\phi ) \cos\theta -\mathbf{e}_3 \sin\theta \\ &=\mathbf{e}_1 e^{i\phi} \cos\theta -\mathbf{e}_3 \sin\theta \\ &=\mathbf{e}_1 e^{i\phi} ( \cos\theta - e^{-i\phi} \mathbf{e}_1 \mathbf{e}_3 \sin\theta ) \\ &=\mathbf{e}_1 e^{i\phi} ( \cos\theta - \mathbf{e}_1 \mathbf{e}_3 e^{i\phi} \sin\theta ) \\ &=\mathbf{e}_1 e^{i\phi} ( \cos\theta - \mathbf{e}_1 \mathbf{e}_3 \mathbf{e}_2 \mathbf{e}_2 e^{i\phi} \sin\theta ) \\ &=\mathbf{e}_1 e^{i\phi} ( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta ) \\ &=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 e^{i\phi} ( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta ) \\ &=i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned}

To summarize we have

\begin{aligned}\hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{i\phi} \\ \hat{\mathbf{r}} &= \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\theta}} &= i \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta}.\end{aligned} \hspace{\stretch{1}}(3.74)

Taking differentials we find first

\begin{aligned}d\hat{\boldsymbol{\phi}} = \mathbf{e}_2 e^{i\phi} i d\phi = \hat{\boldsymbol{\phi}} i d\phi\end{aligned}

\begin{aligned}d\hat{\boldsymbol{\theta}}&= d \left( i \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} \right) \\ &= i d \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} + i \hat{\boldsymbol{\phi}} d \left( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta \right) \\ &= i d \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta}+ i \hat{\boldsymbol{\phi}} I (d \hat{\boldsymbol{\phi}}) \sin\theta+ i \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \\ &= i \hat{\boldsymbol{\phi}} i e^{I\hat{\boldsymbol{\phi}} \theta} d\phi+ i \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} i \sin\theta d\phi+ i \hat{\boldsymbol{\phi}} I \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \\ &= \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\phi- I \sin\theta d\phi- \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \\ &= \hat{\boldsymbol{\phi}} (\cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta) d\phi- I \sin\theta d\phi- \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \\ &= \hat{\boldsymbol{\phi}} \cos\theta d\phi - \hat{\mathbf{r}} d\theta\end{aligned}

\begin{aligned}d \hat{\mathbf{r}}&=\mathbf{e}_3 d \left( e^{I\hat{\boldsymbol{\phi}} \theta} \right) \\ &=\mathbf{e}_3 d \left( \cos\theta + I \hat{\boldsymbol{\phi}} \sin\theta \right) \\ &=\mathbf{e}_3 \left( I (d \hat{\boldsymbol{\phi}}) \sin\theta + I \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \right) \\ &=\mathbf{e}_3 \left( I \hat{\boldsymbol{\phi}} i \sin\theta d\phi + I \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \right) \\ &=i \hat{\boldsymbol{\phi}} i \sin\theta d\phi + i \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} d\theta \\ &=\hat{\boldsymbol{\phi}} \sin\theta d\phi + \hat{\boldsymbol{\theta}} d\theta\end{aligned}

Summarizing these differentials we have

\begin{aligned}d\hat{\mathbf{r}} &= \hat{\boldsymbol{\phi}} \sin\theta d\phi + \hat{\boldsymbol{\theta}} d\theta \\ d\hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} \cos\theta d\phi - \hat{\mathbf{r}} d\theta \\ d\hat{\boldsymbol{\phi}} &= \hat{\boldsymbol{\phi}} i d\phi\end{aligned} \hspace{\stretch{1}}(3.77)

A final cleanup is required. While $\hat{\boldsymbol{\phi}} i$ is a vector and has a nicely compact form, we need to decompose this into components in the $\hat{\mathbf{r}}$, $\hat{\boldsymbol{\theta}}$ and $\hat{\boldsymbol{\phi}}$ directions. Taking scalar products we have

\begin{aligned}\hat{\boldsymbol{\phi}} \cdot (\hat{\boldsymbol{\phi}} i) = 0\end{aligned}

\begin{aligned}\hat{\mathbf{r}} \cdot (\hat{\boldsymbol{\phi}} i)&=\left\langle{{ \hat{\mathbf{r}} \hat{\boldsymbol{\phi}} i}}\right\rangle \\ &=\left\langle{{ \mathbf{e}_3 e^{I\hat{\boldsymbol{\phi}} \theta} \mathbf{e}_2 e^{i\phi} i}}\right\rangle \\ &=\left\langle{{ \mathbf{e}_3 (\cos\theta + I \mathbf{e}_2 e^{i\phi} \sin\theta) \mathbf{e}_2 e^{i\phi} i}}\right\rangle \\ &=\left\langle{{ I (\cos\theta e^{-i\phi} + I \mathbf{e}_2 \sin\theta) \mathbf{e}_2 }}\right\rangle \\ &=-\sin\theta\end{aligned}

\begin{aligned}\hat{\boldsymbol{\theta}} \cdot (\hat{\boldsymbol{\phi}} i)&=\left\langle{{ \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} i }}\right\rangle \\ &=\left\langle{{ i \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} \hat{\boldsymbol{\phi}} i }}\right\rangle \\ &=-\left\langle{{ \hat{\boldsymbol{\phi}} e^{I\hat{\boldsymbol{\phi}} \theta} \hat{\boldsymbol{\phi}} }}\right\rangle \\ &=-\left\langle{{ e^{I\hat{\boldsymbol{\phi}} \theta} }}\right\rangle \\ &=- \cos\theta.\end{aligned}

Summarizing once again, but this time in terms of $\hat{\mathbf{r}}$, $\hat{\boldsymbol{\theta}}$ and $\hat{\boldsymbol{\phi}}$ we have

\begin{aligned}d\hat{\mathbf{r}} &= \hat{\boldsymbol{\phi}} \sin\theta d\phi + \hat{\boldsymbol{\theta}} d\theta \\ d\hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} \cos\theta d\phi - \hat{\mathbf{r}} d\theta \\ d\hat{\boldsymbol{\phi}} &= -(\hat{\mathbf{r}} \sin\theta + \hat{\boldsymbol{\theta}} \cos\theta) d\phi\end{aligned} \hspace{\stretch{1}}(3.80)
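These differentials can be cross checked symbolically (a sympy aside, not in the post) by differentiating the explicit coordinate representations 3.61, 3.62 and 3.64 directly:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
rhat     = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
thetahat = sp.Matrix([sp.cos(theta)*sp.cos(phi), sp.cos(theta)*sp.sin(phi), -sp.sin(theta)])
phihat   = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# Partial derivatives implied by the summarized differentials 3.80
checks = [
    rhat.diff(theta)     - thetahat,                      # d rhat / d theta
    rhat.diff(phi)       - sp.sin(theta)*phihat,          # d rhat / d phi
    thetahat.diff(theta) + rhat,                          # d thetahat / d theta
    thetahat.diff(phi)   - sp.cos(theta)*phihat,          # d thetahat / d phi
    phihat.diff(phi)     + sp.sin(theta)*rhat + sp.cos(theta)*thetahat,
]
print(all(sp.simplify(c) == sp.zeros(3, 1) for c in checks))   # True
```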

Now we are set to take differentials. With

\begin{aligned}\mathbf{x} = r \hat{\mathbf{r}},\end{aligned} \hspace{\stretch{1}}(3.83)

we have

\begin{aligned}d\mathbf{x} =dr \hat{\mathbf{r}}+ r d\hat{\mathbf{r}}=dr \hat{\mathbf{r}} + \hat{\boldsymbol{\phi}} r \sin\theta d\phi + r \hat{\boldsymbol{\theta}} d\theta.\end{aligned} \hspace{\stretch{1}}(3.84)

Squaring this we get the usual spherical polar scalar line element

\begin{aligned}d\mathbf{x}^2 = dr^2 + r^2 \sin^2\theta d\phi^2 + r^2 d\theta^2.\end{aligned} \hspace{\stretch{1}}(3.85)
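This line element also falls out of a purely coordinate computation (a sympy aside, not in the post), taking the total differential of the Cartesian position and squaring it:

```python
import sympy as sp

r, theta, phi, dr, dth, dph = sp.symbols('r theta phi dr dth dph', real=True)

# Cartesian position x = r rhat
x = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(theta)])

# Total differential dx = x_r dr + x_theta dtheta + x_phi dphi
dx = x.diff(r)*dr + x.diff(theta)*dth + x.diff(phi)*dph

ds2 = sp.simplify(dx.dot(dx))
residual = sp.simplify(ds2 - (dr**2 + r**2*dth**2 + r**2*sp.sin(theta)**2*dph**2))
print(residual)   # 0
```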

With

\begin{aligned}\mathbf{u} = u_r \hat{\mathbf{r}} + u_\theta \hat{\boldsymbol{\theta}} + u_\phi \hat{\boldsymbol{\phi}},\end{aligned} \hspace{\stretch{1}}(3.86)

our differential is

\begin{aligned}d\mathbf{u}&=du_r \hat{\mathbf{r}} + du_\theta \hat{\boldsymbol{\theta}} + du_\phi \hat{\boldsymbol{\phi}}+ u_r d\hat{\mathbf{r}} + u_\theta d\hat{\boldsymbol{\theta}} + u_\phi d \hat{\boldsymbol{\phi}} \\ &=du_r \hat{\mathbf{r}} + du_\theta \hat{\boldsymbol{\theta}} + du_\phi \hat{\boldsymbol{\phi}}+ u_r \left(\hat{\boldsymbol{\phi}} \sin\theta d\phi + \hat{\boldsymbol{\theta}} d\theta \right)+ u_\theta \left( \hat{\boldsymbol{\phi}} \cos\theta d\phi - \hat{\mathbf{r}} d\theta \right)- u_\phi (\hat{\mathbf{r}} \sin\theta + \hat{\boldsymbol{\theta}} \cos\theta) d\phi\\ &=\hat{\mathbf{r}} \left( du_r - u_\theta d\theta - u_\phi \sin\theta d\phi \right) \\ &+\hat{\boldsymbol{\theta}} \left( du_\theta + u_r d\theta - u_\phi \cos\theta d\phi \right) \\ &+\hat{\boldsymbol{\phi}} \left( du_\phi + u_r \sin\theta d\phi + u_\theta \cos\theta d\phi \right).\end{aligned}

We can add $d\mathbf{x}$ to this and take differences

\begin{aligned}\begin{aligned}(d\mathbf{u} + d\mathbf{x})^2 - d\mathbf{x}^2&=\left( du_r - u_\theta d\theta - u_\phi \sin\theta d\phi + dr \right)^2 \\ &+\left( du_\theta + u_r d\theta - u_\phi \cos\theta d\phi + r d\theta \right)^2 \\ &+\left( du_\phi + u_r \sin\theta d\phi + u_\theta \cos\theta d\phi + r \sin\theta d\phi \right)^2\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.87)

For each $m = r,\theta,\phi$ we have

\begin{aligned}du_m=\frac{\partial {u_m}}{\partial {r}} dr +\frac{\partial {u_m}}{\partial {\theta}} d\theta +\frac{\partial {u_m}}{\partial {\phi}} d\phi,\end{aligned} \hspace{\stretch{1}}(3.88)

and plugging through that calculation is really all it takes to derive the textbook result. To do this to first order in $u_m$, we find

\begin{aligned}\frac{1}{{2}} \left((d\mathbf{u} + d\mathbf{x})^2 - d\mathbf{x}^2\right)&=du_r dr- u_\theta d\theta dr- u_\phi \sin\theta d\phi dr \\ &+ du_\theta r d\theta+ u_r r d\theta^2- u_\phi r \cos\theta d\phi d\theta \\ &+ r \sin\theta du_\phi d\phi+ r \sin^2\theta u_r d\phi^2+ r \sin\theta \cos\theta u_\theta d\phi^2 \\ &=\left( \frac{\partial {u_r}}{\partial {r}} dr + \frac{\partial {u_r}}{\partial {\theta}} d\theta + \frac{\partial {u_r}}{\partial {\phi}} d\phi \right)dr- u_\theta d\theta dr- u_\phi \sin\theta d\phi dr \\ &+\left( \frac{\partial {u_\theta}}{\partial {r}} dr + \frac{\partial {u_\theta}}{\partial {\theta}} d\theta + \frac{\partial {u_\theta}}{\partial {\phi}} d\phi \right) r d\theta+ u_r r d\theta^2- u_\phi r \cos\theta d\phi d\theta \\ &+\left( \frac{\partial {u_\phi}}{\partial {r}} dr + \frac{\partial {u_\phi}}{\partial {\theta}} d\theta + \frac{\partial {u_\phi}}{\partial {\phi}} d\phi \right)r \sin\theta d\phi+ r \sin^2\theta u_r d\phi^2+ r \sin\theta \cos\theta u_\theta d\phi^2\end{aligned}

Collecting terms, we obtain the result of the text in the bracketed factors

\begin{aligned}\begin{aligned}\left((d\mathbf{u} + d\mathbf{x})^2 - d\mathbf{x}^2\right)&=2 dr^2 \left(\frac{\partial {u_r}}{\partial {r}}\right) \\ &+2 r^2 d\theta^2 \left(\frac{1}{{r}} \frac{\partial {u_\theta}}{\partial {\theta}} + u_r \frac{1}{{r}}\right) \\ &+2 r^2 \sin^2\theta d\phi^2 \left(\frac{\partial {u_\phi}}{\partial {\phi}} \frac{1}{{r \sin\theta}} + \frac{1}{{r}} u_r + \frac{1}{{r}} \cot\theta u_\theta\right) \\ &+2 dr r d\theta \left(\frac{1}{{r}} \frac{\partial {u_r}}{\partial {\theta}} - \frac{1}{{r}} u_\theta +\frac{\partial {u_\theta}}{\partial {r}}\right) \\ &+2 r^2 \sin\theta d\theta d\phi \left(\frac{\partial {u_\theta}}{\partial {\phi}} \frac{1}{{r \sin\theta}} - \frac{1}{{r}} u_\phi \cot\theta +\frac{1}{{r}} \frac{\partial {u_\phi}}{\partial {\theta}}\right) \\ &+2 r \sin\theta d\phi dr \left(\frac{1}{{r \sin\theta}} \frac{\partial {u_r}}{\partial {\phi}} - \frac{1}{{r}} u_\phi + \frac{\partial {u_\phi}}{\partial {r}}\right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.89)
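As a cross-check (not part of the original post), this first-order collection can be verified symbolically. The sketch below assumes Python with sympy; the displacement components are promoted to symbolic functions scaled by a bookkeeping parameter $\epsilon$, so that extracting the first-order part is a single differentiation at $\epsilon = 0$.

```python
import sympy as sp

r, th, ph, dr, dth, dph, eps = sp.symbols('r theta phi dr dtheta dphi epsilon')
Ur, Uth, Uph = [sp.Function(n)(r, th, ph) for n in ('u_r', 'u_theta', 'u_phi')]

def du(U):
    # total differential of a displacement component, as in (3.88)
    return U.diff(r)*dr + U.diff(th)*dth + U.diff(ph)*dph

u_r, u_th, u_ph = eps*Ur, eps*Uth, eps*Uph
terms = [du(u_r) - u_th*dth - u_ph*sp.sin(th)*dph + dr,
         du(u_th) + u_r*dth - u_ph*sp.cos(th)*dph + r*dth,
         du(u_ph) + u_r*sp.sin(th)*dph + u_th*sp.cos(th)*dph + r*sp.sin(th)*dph]
lhs = sum(t**2 for t in terms) - (dr**2 + (r*dth)**2 + (r*sp.sin(th)*dph)**2)
first_order = lhs.diff(eps).subs(eps, 0)   # the O(u) part of (du + dx)^2 - dx^2

# the collected result (3.89), with cot(theta) written as cos/sin
cot = sp.cos(th)/sp.sin(th)
claimed = (2*dr**2*Ur.diff(r)
  + 2*r**2*dth**2*(Uth.diff(th)/r + Ur/r)
  + 2*r**2*sp.sin(th)**2*dph**2*(Uph.diff(ph)/(r*sp.sin(th)) + Ur/r + cot*Uth/r)
  + 2*dr*r*dth*(Ur.diff(th)/r - Uth/r + Uth.diff(r))
  + 2*r**2*sp.sin(th)*dth*dph*(Uth.diff(ph)/(r*sp.sin(th)) - cot*Uph/r + Uph.diff(th)/r)
  + 2*r*sp.sin(th)*dph*dr*(Ur.diff(ph)/(r*sp.sin(th)) - Uph/r + Uph.diff(r)))

ok = sp.expand(first_order - claimed) == 0
```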

It should be possible to do the calculation to second order too, but including all the terms quadratic in $u_m$ is again really messy. Trying that with Mathematica gives the same results as above using the strictly coordinate algebra approach.

# References

[1] L.D. Landau, EM Lifshitz, JB Sykes, WH Reid, and E.H. Dill. Theory of elasticity: Vol. 7 of course of theoretical physics. 1960.

## Multivector commutators and Lorentz boosts.

Posted by peeterjoot on October 31, 2010

# Motivation.

In some reading I found that the electrodynamic field components transform in a reversed sense to that of vectors: instead of the components perpendicular to the boost direction remaining unaffected, those are the parts that are altered.

To explore this, look at the Lorentz boost action on a multivector, utilizing symmetric and antisymmetric products to split that multivector into portions affected and unaffected by the boost. For the bivector (electrodynamic) case and the four vector case, examine how these map to dot and wedge (or cross) products.

The underlying motivation for this boost consideration is an attempt to see where equation (6.70) of [1] comes from. We get to this at the very end.

# Guts.

## Structure of the bivector boost.

Recall that we can write our Lorentz boost in exponential form with

\begin{aligned}L &= e^{\alpha \boldsymbol{\sigma}/2} \\ X' &= L^\dagger X L,\end{aligned} \hspace{\stretch{1}}(2.1)

where $\boldsymbol{\sigma}$ is a spatial vector. This works for our bivector field too, assuming the composite transformation is an outermorphism of the transformed four vectors. Applying the boost to both the gradient and the potential our transformed field is then

\begin{aligned}F' &= \nabla' \wedge A' \\ &= (L^\dagger \nabla L) \wedge (L^\dagger A L) \\ &= \frac{1}{{2}} \left((L^\dagger \stackrel{ \rightarrow }{\nabla} L) (L^\dagger A L) -(L^\dagger A L) (L^\dagger \stackrel{ \leftarrow }{\nabla} L)\right) \\ &= \frac{1}{{2}} L^\dagger \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) L \\ &= L^\dagger (\nabla \wedge A) L.\end{aligned}

Note that arrows were used briefly to indicate that the partials of the gradient are still acting on $A$ despite their vector components being to one side. We are left with the very simple transformation rule

\begin{aligned}F' = L^\dagger F L,\end{aligned} \hspace{\stretch{1}}(2.3)

which has exactly the same structure as the four vector boost.

## Employing the commutator and anticommutator to find the parallel and perpendicular components.

If we apply the boost to a four vector, those components of the four vector that commute with the spatial direction $\boldsymbol{\sigma}$ are unaffected. As an example, which also serves to ensure we have the sign of the rapidity angle $\alpha$ correct, consider $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. We have

\begin{aligned}X' = e^{-\alpha \boldsymbol{\sigma}/2} ( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 ) (\cosh \alpha/2 + \gamma_1 \gamma_0 \sinh \alpha/2 )\end{aligned} \hspace{\stretch{1}}(2.4)

We observe that the scalar and $\boldsymbol{\sigma}_1 = \gamma_1 \gamma_0$ components of the exponential commute with $\gamma_2$ and $\gamma_3$ since there is no vector in common, but that $\boldsymbol{\sigma}_1$ anticommutes with $\gamma_0$ and $\gamma_1$. We can therefore write

\begin{aligned}X' &= x^2 \gamma_2 + x^3 \gamma_3 +( x^0 \gamma_0 + x^1 \gamma_1 ) (\cosh \alpha + \gamma_1 \gamma_0 \sinh \alpha ) \\ &= x^2 \gamma_2 + x^3 \gamma_3 +\gamma_0 ( x^0 \cosh\alpha - x^1 \sinh \alpha )+ \gamma_1 ( x^1 \cosh\alpha - x^0 \sinh \alpha )\end{aligned}

reproducing the familiar matrix result should we choose to write it out. How can we express the commutation property without resorting to components? We could write the four vector as a spatial and timelike component, as in

\begin{aligned}X = x^0 \gamma_0 + \mathbf{x} \gamma_0,\end{aligned} \hspace{\stretch{1}}(2.5)

and further separate that into components parallel and perpendicular to the spatial unit vector $\boldsymbol{\sigma}$ as

\begin{aligned}X = x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 + (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0.\end{aligned} \hspace{\stretch{1}}(2.6)

However, it would be nicer to group the first two terms together, since they are ones that are affected by the transformation. It would also be nice to not have to resort to spatial dot and wedge products, since we get into trouble too easily if we try to mix dot and wedge products of four vector and spatial vector components.

What we can do is employ symmetric and antisymmetric products (the anticommutator and commutator respectively). Recall that we can write any multivector product this way, and in particular

\begin{aligned}M \boldsymbol{\sigma} = \frac{1}{{2}} (M \boldsymbol{\sigma} + \boldsymbol{\sigma} M) + \frac{1}{{2}} (M \boldsymbol{\sigma} - \boldsymbol{\sigma} M).\end{aligned} \hspace{\stretch{1}}(2.7)

Left multiplying by the unit spatial vector $\boldsymbol{\sigma}$ we have

\begin{aligned}M = \frac{1}{{2}} (M + \boldsymbol{\sigma} M \boldsymbol{\sigma}) + \frac{1}{{2}} (M - \boldsymbol{\sigma} M \boldsymbol{\sigma}) = \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.8)

When $M = \mathbf{a}$ is a spatial vector this is our familiar split into parallel and perpendicular components with the respective projection and rejection operators

\begin{aligned}\mathbf{a} = \frac{1}{{2}} \left\{\mathbf{a},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{\mathbf{a}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} = (\mathbf{a} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + (\mathbf{a} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.9)

However, the more general split of 2.8, employing symmetric and antisymmetric products, is something we can use for our four vector and bivector objects too.

Observe that we have the commutation and anti-commutation relationships

\begin{aligned}\left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= \boldsymbol{\sigma} \left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \\ \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= -\boldsymbol{\sigma} \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right).\end{aligned} \hspace{\stretch{1}}(2.10)

This split therefore serves to separate the multivector object in question nicely into the portions that are acted on by the Lorentz boost and those that are left unaffected.

## Application of the symmetric and antisymmetric split to the bivector field.

Let’s apply 2.8 to the spacetime event $X$ again with an x-axis boost $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. The anticommutator portion of $X$ in this boost direction is

\begin{aligned}\frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}_1}\right\} \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)+\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^2 \gamma_2 + x^3 \gamma_3,\end{aligned}

whereas the commutator portion gives us

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}_1}\right] \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)-\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^0 \gamma_0 + x^1 \gamma_1.\end{aligned}

We’ve seen that only these commutator portions are acted on by the boost. We have therefore found the desired logical grouping of the four vector $X$ into portions that are left unchanged by the boost and those that are affected. That is

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \\ \frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \end{aligned} \hspace{\stretch{1}}(2.12)
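This commutator/anticommutator split and the boost action can be spot-checked numerically in any faithful matrix representation of the Dirac algebra. The sketch below (Python with numpy; the Dirac representation and the sample numbers are my arbitrary choices, not from the post) verifies that an x-axis boost leaves the $\gamma_2, \gamma_3$ components alone and mixes the $\gamma_0, \gamma_1$ components with $\cosh\alpha$ and $-\sinh\alpha$ factors, as in the expansion above.

```python
import numpy as np

# Pauli matrices, then Dirac-representation gammas for the (+,-,-,-) metric.
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
g = [np.block([[s[0], Z], [Z, -s[0]]])]                 # gamma_0
g += [np.block([[Z, s[k]], [-s[k], Z]]) for k in (1, 2, 3)]  # gamma_k

alpha = 0.3
sigma1 = g[1] @ g[0]                                    # boost plane; sigma1^2 = +1
I4 = np.eye(4, dtype=complex)
L  = np.cosh(alpha/2)*I4 + np.sinh(alpha/2)*sigma1      # L = exp(alpha sigma1/2)
Lr = np.cosh(alpha/2)*I4 - np.sinh(alpha/2)*sigma1      # reverse: exp(-alpha sigma1/2)

x = [1.0, 2.0, 3.0, 4.0]                                # sample event coordinates
X  = sum(xm*gm for xm, gm in zip(x, g))
Xp = Lr @ X @ L                                         # X' = L^dagger X L

# extract x'^mu using reciprocal vectors: gamma^0 = gamma_0, gamma^k = -gamma_k
recip = [g[0], -g[1], -g[2], -g[3]]
xp = [np.trace(Xp @ rm).real/4 for rm in recip]
```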

Let’s now return to the bivector field $F = \nabla \wedge A = \mathbf{E} + I c \mathbf{B}$, and split that multivector into boostable and unboostable portions with the commutator and anticommutator respectively.

Observing that our pseudoscalar $I$ commutes with all spatial vectors we have for the anticommutator parts that will not be affected by the boost

\begin{aligned}\frac{1}{{2}} \left\{{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma},\end{aligned} \hspace{\stretch{1}}(2.14)

and for the components that will be boosted we have

\begin{aligned}\frac{1}{{2}} \left[{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.15)

For the four vector case we saw that the components that lie "perpendicular" to the boost direction were unaffected by the boost. For the field we see the opposite: the components of the individual electric and magnetic fields that are parallel to the boost direction are unaffected.

Our boosted field is therefore

\begin{aligned}F' = (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma}+ \left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)\end{aligned} \hspace{\stretch{1}}(2.16)

Focusing on just the non-parallel terms we have

\begin{aligned}\left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)&=(\mathbf{E}_\perp + I c \mathbf{B}_\perp ) \cosh\alpha+(I \mathbf{E} \times \boldsymbol{\sigma} - c \mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha \\ &=\mathbf{E}_\perp \cosh\alpha - c (\mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha + I ( c \mathbf{B}_\perp \cosh\alpha + (\mathbf{E} \times \boldsymbol{\sigma}) \sinh\alpha ) \\ &=\gamma \left(\mathbf{E}_\perp - c (\mathbf{B} \times \boldsymbol{\sigma} ) {\left\lvert{\mathbf{v}}\right\rvert}/c+ I ( c \mathbf{B}_\perp + (\mathbf{E} \times \boldsymbol{\sigma}) {\left\lvert{\mathbf{v}}\right\rvert}/c) \right)\end{aligned}

A final regrouping gives us

\begin{aligned}F'&=\mathbf{E}_\parallel + \gamma \left( \mathbf{E}_\perp - \mathbf{B} \times \mathbf{v} \right)+I c \left( \mathbf{B}_\parallel + \gamma \left( \mathbf{B}_\perp + \mathbf{E} \times \mathbf{v}/c^2 \right) \right)\end{aligned} \hspace{\stretch{1}}(2.17)
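A quick numerical check of this transformation law (a sketch only, using a Pauli-matrix representation of the spatial algebra with the pseudoscalar $I \rightarrow i$, setting $c = 1$; the sample field values are my own): the field components parallel to the boost pass through unchanged, while the perpendicular parts pick up the $\cosh\alpha$ and $\sinh\alpha$ factors collected above.

```python
import numpy as np

# Pauli-matrix representation: F = E + I B (c = 1), with sigma_k -> Pauli
# matrices and the pseudoscalar I -> 1j.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

E = np.array([0.0, 2.0, 0.0])   # E along y, perpendicular to the boost
B = np.array([0.0, 0.0, 0.0])
F = sum((E[k] + 1j*B[k])*sig[k] for k in range(3))

alpha = 0.4                     # rapidity of an x-axis boost
I2 = np.eye(2, dtype=complex)
L  = np.cosh(alpha/2)*I2 + np.sinh(alpha/2)*sig[0]
Lr = np.cosh(alpha/2)*I2 - np.sinh(alpha/2)*sig[0]
Fp = Lr @ F @ L                 # F' = L^dagger F L

# coefficient of sigma_k is E'_k + i B'_k
comps = [np.trace(Fp @ sig[k])/2 for k in range(3)]
Ep = np.array([c.real for c in comps])
Bp = np.array([c.imag for c in comps])
```

With $\sinh\alpha = \gamma \lvert\mathbf{v}\rvert$ the resulting $\mathbf{B}' = -\gamma \mathbf{v} \times \mathbf{E}$ ($c = 1$) is exactly the perpendicular-boost behaviour of 2.17.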

In particular, when we consider the proton-electron system as in equation (6.70) of [1], where it is stated that the electron will feel a magnetic field given by

\begin{aligned}\mathbf{B} = - \frac{\mathbf{v}}{c} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.18)

we can see where this comes from. If $F = \mathbf{E} + I c (0)$ is the field acting on the electron, then applying a $\mathbf{v}$ boost to the electron perpendicular to that field (i.e. radial motion), we get

\begin{aligned}F' = I c \gamma \mathbf{E} \times \mathbf{v}/c^2 =-I c \gamma \frac{\mathbf{v}}{c^2} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.19)

We also have an additional $1/c$ factor in our result, but that’s a consequence of the choice of units where the dimensions of $\mathbf{E}$ match $c \mathbf{B}$, whereas in the text $\mathbf{E}$ and $\mathbf{B}$ have the same units. We also have an additional $\gamma$ factor, so we must presume that ${\left\lvert{\mathbf{v}}\right\rvert} \ll c$ in this portion of the text. That is actually a requirement here, for if the electron were already in motion, we'd have to boost a field that also included a magnetic component. A consequence of this is that the final interaction Hamiltonian of (6.75) is necessarily non-relativistic.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Derivation of the spherical polar Laplacian

Posted by peeterjoot on October 9, 2010

# Motivation.

In [1] the 2D polar Laplacian was derived with Geometric Algebra by squaring the gradient. In [2] the spherical polar unit vectors were factored into a tidy compact exponential form. Here both of these ideas are utilized to derive the spherical polar form of the Laplacian, an operation that is strictly algebraic (squaring the gradient) provided we operate on the unit vectors correctly.

# Our rotation multivector.

Our starting point is a pair of rotations. We rotate first in the $x,y$ plane by $\phi$

\begin{aligned}\mathbf{x} &\rightarrow \mathbf{x}' = \tilde{R_\phi} \mathbf{x} R_\phi \\ i &\equiv \mathbf{e}_1 \mathbf{e}_2 \\ R_\phi &= e^{i \phi/2}\end{aligned} \hspace{\stretch{1}}(2.1)

Then apply a rotation in the $\mathbf{e}_3 \wedge (\tilde{R_\phi} \mathbf{e}_1 R_\phi) = \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi$ plane

\begin{aligned}\mathbf{x}' &\rightarrow \mathbf{x}'' = \tilde{R_\theta} \mathbf{x}' R_\theta \\ R_\theta &= e^{ \tilde{R_\phi} \mathbf{e}_3 \mathbf{e}_1 R_\phi \theta/2 } = \tilde{R_\phi} e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } R_\phi\end{aligned} \hspace{\stretch{1}}(2.4)

The composition of rotations now gives us

\begin{aligned}\mathbf{x}&\rightarrow \mathbf{x}'' = \tilde{R_\theta} \tilde{R_\phi} \mathbf{x} R_\phi R_\theta = \tilde{R} \mathbf{x} R \\ R &= R_\phi R_\theta = e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 }.\end{aligned}

# Expressions for the unit vectors.

The unit vectors in the rotated frame can now be calculated. With $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ we can calculate

\begin{aligned}\hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R \\ \hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R\end{aligned} \hspace{\stretch{1}}(3.6)

Performing these we get

\begin{aligned}\hat{\boldsymbol{\phi}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_2 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_2 e^{ i \phi },\end{aligned}

and

\begin{aligned}\hat{\mathbf{r}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_3 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } (\mathbf{e}_3 \cos\theta + \mathbf{e}_1 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_3 \cos\theta +\mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= \mathbf{e}_3 (\cos\theta + \mathbf{e}_3 \mathbf{e}_1 \sin\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } ) \\ &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta},\end{aligned}

and

\begin{aligned}\hat{\boldsymbol{\theta}}&= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } e^{ -\mathbf{e}_3 \mathbf{e}_1 \theta/2 } \mathbf{e}_1 e^{ \mathbf{e}_3 \mathbf{e}_1 \theta/2 } e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= e^{ -\mathbf{e}_1 \mathbf{e}_2 \phi/2 } ( \mathbf{e}_1 \cos\theta - \mathbf{e}_3 \sin\theta ) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= \mathbf{e}_1 \cos\theta e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} \cos\theta - \mathbf{e}_3 \sin\theta \\ &= i \hat{\boldsymbol{\phi}} (\cos\theta + \hat{\boldsymbol{\phi}} i \mathbf{e}_3 \sin\theta ) \\ &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned}

Summarizing these are

\begin{aligned}\hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{ i \phi } \\ \hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\theta}} &= i \hat{\boldsymbol{\phi}} e^{I \hat{\boldsymbol{\phi}} \theta}.\end{aligned} \hspace{\stretch{1}}(3.9)

# Derivatives of the unit vectors.

We’ll need the partials. Most of these can be computed from 3.9 by inspection, and are

\begin{aligned}\partial_r \hat{\boldsymbol{\phi}} &= 0 \\ \partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\boldsymbol{\phi}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \\ \partial_\phi \hat{\boldsymbol{\phi}} &= \hat{\boldsymbol{\phi}} i \\ \partial_\phi \hat{\mathbf{r}} &= \hat{\boldsymbol{\phi}} \sin\theta \\ \partial_\phi \hat{\boldsymbol{\theta}} &= \hat{\boldsymbol{\phi}} \cos\theta\end{aligned} \hspace{\stretch{1}}(4.12)
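These partials are equivalent to the familiar Cartesian-component identities $\partial_\theta \hat{\mathbf{r}} = \hat{\boldsymbol{\theta}}$, $\partial_\theta \hat{\boldsymbol{\theta}} = -\hat{\mathbf{r}}$, $\partial_\phi \hat{\boldsymbol{\phi}} = -\sin\theta\, \hat{\mathbf{r}} - \cos\theta\, \hat{\boldsymbol{\theta}}$, and so on, which can be confirmed with a short symbolic computation (a sketch assuming Python with sympy; not part of the original post):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
# spherical polar unit vectors in Cartesian components
rhat  = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
thhat = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
phhat = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

# each entry should simplify to the zero vector
checks = [
    sp.simplify(rhat.diff(th)  - thhat),                               # d_theta rhat = thhat
    sp.simplify(thhat.diff(th) + rhat),                                # d_theta thhat = -rhat
    sp.simplify(rhat.diff(ph)  - sp.sin(th)*phhat),                    # d_phi rhat = sin(theta) phhat
    sp.simplify(thhat.diff(ph) - sp.cos(th)*phhat),                    # d_phi thhat = cos(theta) phhat
    sp.simplify(phhat.diff(ph) + sp.sin(th)*rhat + sp.cos(th)*thhat),  # d_phi phhat
]
all_zero = all(c == sp.zeros(3, 1) for c in checks)
```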

# Expanding the Laplacian.

We note that the line element is $d\mathbf{s} = \hat{\mathbf{r}} dr + \hat{\boldsymbol{\theta}} r d\theta + \hat{\boldsymbol{\phi}} r\sin\theta d\phi$, so our gradient in spherical coordinates is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi.\end{aligned} \hspace{\stretch{1}}(5.21)

We can now evaluate the Laplacian

\begin{aligned}\boldsymbol{\nabla}^2 &=\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right).\end{aligned} \hspace{\stretch{1}}(5.22)

Evaluating these one set at a time we have

\begin{aligned}\hat{\mathbf{r}} \partial_r \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) &= \partial_{rr},\end{aligned}

and

\begin{aligned}\frac{1}{{r}} \hat{\boldsymbol{\theta}} \partial_\theta \cdot \left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right)&=\frac{1}{{r}} \left\langle{{\hat{\boldsymbol{\theta}} \left(\hat{\mathbf{r}} I \hat{\boldsymbol{\phi}} \partial_r + \hat{\mathbf{r}} \partial_{\theta r}+ \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{1}{{r}} \hat{\boldsymbol{\theta}} I \hat{\boldsymbol{\phi}} \partial_\theta+ \hat{\boldsymbol{\phi}} \partial_\theta \frac{1}{{r\sin\theta}} \partial_\phi\right)}}\right\rangle \\ &= \frac{1}{{r}} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta},\end{aligned}

and

\begin{aligned}\frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi &\cdot\left( \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta + \frac{\hat{\boldsymbol{\phi}}}{r\sin\theta} \partial_\phi \right) \\ &=\frac{1}{r\sin\theta} \left\langle{{\hat{\boldsymbol{\phi}}\left(\hat{\boldsymbol{\phi}} \sin\theta \partial_r + \hat{\mathbf{r}} \partial_{\phi r} + \hat{\boldsymbol{\phi}} \cos\theta \frac{1}{r} \partial_\theta + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\phi \theta }+ \hat{\boldsymbol{\phi}} i \frac{1}{r\sin\theta} \partial_\phi + \hat{\boldsymbol{\phi}} \frac{1}{r\sin\theta} \partial_{\phi \phi }\right)}}\right\rangle \\ &=\frac{1}{{r}} \partial_r+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned}

Summing these we have

\begin{aligned}\boldsymbol{\nabla}^2 &=\partial_{rr}+ \frac{2}{r} \partial_r+\frac{1}{{r^2}} \partial_{\theta\theta}+ \frac{\cot\theta}{r^2}\partial_\theta+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi}\end{aligned} \hspace{\stretch{1}}(5.23)

This is often written with a chain rule trick to consolidate the $r$ and $\theta$ partials

\begin{aligned}\boldsymbol{\nabla}^2 \Psi &=\frac{1}{{r}} \partial_{rr} (r \Psi)+ \frac{1}{{r^2 \sin\theta}} \partial_\theta \left( \sin\theta \partial_\theta \Psi \right)+ \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} \Psi\end{aligned} \hspace{\stretch{1}}(5.24)

It’s simple to verify that this is identical to 5.23.
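That equivalence is easy to confirm symbolically (a sketch assuming Python with sympy; variable names are my own):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
Psi = sp.Function('Psi')(r, th, ph)

# expanded form (5.23)
lap1 = (Psi.diff(r, 2) + 2*Psi.diff(r)/r + Psi.diff(th, 2)/r**2
        + sp.cos(th)/sp.sin(th)*Psi.diff(th)/r**2
        + Psi.diff(ph, 2)/(r**2*sp.sin(th)**2))

# chain-rule form (5.24)
lap2 = ((r*Psi).diff(r, 2)/r
        + (sp.sin(th)*Psi.diff(th)).diff(th)/(r**2*sp.sin(th))
        + Psi.diff(ph, 2)/(r**2*sp.sin(th)**2))

ok = sp.simplify(lap1 - lap2) == 0
```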

# References

[2] Peeter Joot. Spherical Polar unit vectors in exponential form. [online]. http://sites.google.com/site/peeterjoot/math2009/sphericalPolarUnit.pdf .

## Fourier transform solutions and associated energy and momentum for the homogeneous Maxwell equation. (rework once more)

Posted by peeterjoot on December 29, 2009

[Click here for a PDF of this post with nicer formatting]. Note that this PDF file is formatted in a wide-for-screen layout that is probably not good for printing.

These notes build on and replace those formerly posted in Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.

# Motivation and notation.

In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], a derivation for the energy and momentum density was derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case examining Fourier transform solutions and the associated energy and momentum density.

A complex (phasor) representation is implied, so taking real parts when all is said and done is required of the fields. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)

The assumed four vector potential will be written

\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)

subject to the requirement that $A$ be a solution of Maxwell’s equation

\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)

To avoid LaTeX hell, no special notation will be used for the Fourier coefficients,

\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)

When convenient and unambiguous, this $(\mathbf{k},t)$ dependence will be implied.

Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is

\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)

and for the four potential (or the Fourier transform functions), this is

\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)

# Setup

The field bivector $F = \nabla \wedge A$ is required for the energy momentum tensor. This is

\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})-(\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\end{aligned}

This last term is a spatial curl and the field is then

\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.7)

Applied to the Fourier representation this is

\begin{aligned}F =\frac{1}{{(\sqrt{2 \pi})^3}} \int\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)

It is only the real parts of this that we are actually interested in, unless physical meaning can be assigned to the complete complex vector field.

# Constraints supplied by Maxwell’s equation.

A Fourier transform solution of Maxwell’s vacuum equation $\nabla F = 0$ has been assumed. Having expressed the Faraday bivector in terms of spatial vector quantities, it is more convenient to do this back substitution after pre-multiplying Maxwell’s equation by $\gamma_0$, namely

\begin{aligned}0&= \gamma_0 \nabla F \\ &= (\partial_0 + \boldsymbol{\nabla}) F.\end{aligned} \hspace{\stretch{1}}(3.9)

Applied to the spatially decomposed field as specified in (2.7), this is

\begin{aligned}0&=-\partial_0 \boldsymbol{\nabla} \phi-\partial_{00} \mathbf{A}+ \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A}-\boldsymbol{\nabla}^2 \phi- \boldsymbol{\nabla} \partial_0 \mathbf{A}+ \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &=- \partial_0 \boldsymbol{\nabla} \phi - \boldsymbol{\nabla}^2 \phi- \partial_{00} \mathbf{A}- \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}+ \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ) \\ \end{aligned}

All grades of this equation must simultaneously equal zero, and the bivector grades have canceled (assuming commuting space and time partials), leaving two equations of constraint for the system

\begin{aligned}0 &=\boldsymbol{\nabla}^2 \phi + \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}\end{aligned} \hspace{\stretch{1}}(3.11)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}+ \boldsymbol{\nabla} \partial_0 \phi + \boldsymbol{\nabla} ( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.12)

It is immediately evident that a gauge transformation could be helpful to simplify things. In [3] the gauge choice $\boldsymbol{\nabla} \cdot \mathbf{A} = 0$ is used. From (3.11) this implies that $\boldsymbol{\nabla}^2 \phi = 0$. Bohm argues that for this current and charge free case this implies $\phi = 0$, but he also has a periodicity constraint. Without a periodicity constraint it is easy to manufacture non-zero counterexamples. One is a linear function in the space and time coordinates

\begin{aligned}\phi = p x + q y + r z + s t\end{aligned} \hspace{\stretch{1}}(3.13)

This is a valid scalar potential provided that the wave equation for the vector potential is also a solution. We can however, force $\phi = 0$ by making the transformation $A^\mu \rightarrow A^\mu + \partial^\mu \psi$, which in non-covariant notation is

\begin{aligned}\phi &\rightarrow \phi + \frac{1}{c} \partial_t \psi \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi\end{aligned} \hspace{\stretch{1}}(3.14)

If the transformed field $\phi' = \phi + \partial_t \psi/c$ can be forced to zero, then the complexity of the associated Maxwell equations is reduced. In particular, antidifferentiation of $\phi = -(1/c) \partial_t \psi$ yields

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x}, 0) - c \int_{\tau=0}^t \phi(\mathbf{x}, \tau) d\tau.\end{aligned} \hspace{\stretch{1}}(3.16)
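A minimal symbolic check of this antidifferentiation (assuming Python with sympy; `psi0`, standing in for the integration "constant" in time, is a name invented here) confirms that the transformed scalar potential vanishes:

```python
import sympy as sp

x, t, tau, c = sp.symbols('x t tau c', positive=True)
phi = sp.Function('phi')
psi0 = sp.Function('psi0')

# psi chosen per (3.16)
psi = psi0(x) - c*sp.Integral(phi(x, tau), (tau, 0, t))
# transformed potential phi' = phi + (1/c) d_t psi should vanish
phi_prime = phi(x, t) + psi.diff(t)/c
ok = sp.simplify(phi_prime) == 0
```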

Dropping primes, the transformed Maxwell equations now take the form

\begin{aligned}0 &= \partial_t( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.17)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.18)

There are two classes of solutions that stand out for these equations. If the vector potential is constant in time $\mathbf{A}(\mathbf{x},t) = \mathbf{A}(\mathbf{x})$, Maxwell’s equations are reduced to the single equation

\begin{aligned}0&= - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.19)

Observe that a gradient can be factored out of this equation

\begin{aligned}- \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} )&=\boldsymbol{\nabla} (-\boldsymbol{\nabla} \mathbf{A} + \boldsymbol{\nabla} \cdot \mathbf{A} ) \\ &=-\boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned}

The solutions are then those $\mathbf{A}$s that satisfy both

\begin{aligned}0 &= \partial_t \mathbf{A} \\ 0 &= \boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(3.20)

In particular any non-time dependent potential $\mathbf{A}$ with constant curl provides a solution to Maxwell’s equations. There may be other, more general solutions to (3.19) too. Returning to (3.17), a second way to satisfy these equations stands out. Instead of requiring that $\mathbf{A}$ have constant curl, a divergence constant with respect to time eliminates (3.17). The simplest resulting equations are those for which the divergence is constant in both time and space (such as zero). The solution set is then spanned by the vectors $\mathbf{A}$ for which

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A} \end{aligned} \hspace{\stretch{1}}(3.22)

\begin{aligned}0 &= \frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(3.23)

Any $\mathbf{A}$ that both has constant divergence and satisfies the wave equation will via (2.7) then produce a solution to Maxwell’s equation.
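As an illustration (not from the post), a transverse plane wave with $\omega = c \lvert \mathbf{k} \rvert$ satisfies both constraints; the sympy sketch below uses an arbitrary sample wave vector and polarization of my own choosing:

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', positive=True)
k = sp.Matrix([1, 2, 2])                 # sample wave vector, |k| = 3
pol = sp.Matrix([2, -1, 0])              # transverse polarization: pol . k = 0
w = c*sp.sqrt(k.dot(k))                  # omega = c |k|
phase = k[0]*x + k[1]*y + k[2]*z - w*t
A = pol*sp.cos(phase)                    # A(x, t) = pol cos(k.x - w t)

# constant (zero) divergence, per (3.22)
div = sum(A[i].diff(v) for i, v in enumerate((x, y, z)))
# wave equation residual, per (3.23)
wave = [sp.simplify(A[i].diff(t, 2)/c**2
        - sum(A[i].diff(v, 2) for v in (x, y, z))) for i in range(3)]

div_ok = sp.simplify(div) == 0
wave_ok = all(e == 0 for e in wave)
```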

# Maxwell equation constraints applied to the assumed Fourier solutions.

Let’s consider Maxwell’s equations in all three forms, (3.11), (3.20), and (3.22) and apply these constraints to the assumed Fourier solution.

In all cases the starting point is a pair of Fourier transform relationships, where the Fourier transforms are the functions to be determined

\begin{aligned}\phi(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \phi(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.24)

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.25)

## Case I. Constant time vector potential. Scalar potential eliminated by gauge transformation.

From (4.25) we require

\begin{aligned}0 = (2 \pi)^{-3/2} \int \partial_t \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.26)

So the Fourier transform also cannot have any time dependence, and we have

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.27)

What is the curl of this? Temporarily falling back to coordinates is easiest for this calculation

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{x}}&=\sigma_m \partial_m \wedge \sigma_n A^n(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\sigma_m \wedge \sigma_n A^n(\mathbf{k}) i k^m e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i\mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

This gives

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.28)

We want to equate the divergence of this to zero. Neglecting the integral and constant factor this requires

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \left( i \mathbf{k} \wedge \mathbf{A} e^{i\mathbf{k} \cdot \mathbf{x}} \right) \\ &= {\left\langle{{ \sigma_m \partial_m i (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -{\left\langle{{ \sigma_m (\mathbf{k} \wedge \mathbf{A}) k^m e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

For non-zero $\mathbf{k}$ the contraction $\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k}$ can only vanish if $\mathbf{k} \wedge \mathbf{A} = 0$, which implies that $\mathbf{A} \propto \mathbf{k}$. The solution set is then completely described by functions of the form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k},\end{aligned} \hspace{\stretch{1}}(4.29)

where $\psi(\mathbf{k})$ is an arbitrary scalar valued function. This is, however, an extremely uninteresting solution, since the curl is uniformly zero

\begin{aligned}F &= \boldsymbol{\nabla} \wedge \mathbf{A} \\ &= (2 \pi)^{-3/2} \int (i \mathbf{k}) \wedge \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

Since $\mathbf{k} \wedge \mathbf{k} = 0$, when all is said and done, the $\phi = 0$, $\partial_t \mathbf{A} = 0$ case appears to admit only the trivial solution $F = 0$. Moving on, …
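As an aside, the contraction identity $\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k}$ used above is easy to spot-check numerically, since in 3D it is just the negated double cross product. This is my own check with assumed sample vectors, not part of the post.

```python
# In 3D: k . (k ^ A) = k^2 A - (k . A) k = -k x (k x A); it vanishes iff A || k.
import numpy as np

k = np.array([1.0, 2.0, 2.0])
A = np.array([0.5, -1.0, 3.0])                # a generic A, not parallel to k

lhs = -np.cross(k, np.cross(k, A))            # -k x (k x A)
rhs = np.dot(k, k) * A - np.dot(k, A) * k     # k^2 A - (k.A) k
print(np.allclose(lhs, rhs))                  # True: the contraction identity

A_par = 2.5 * k                               # A proportional to k
print(np.allclose(np.dot(k, k) * A_par - np.dot(k, A_par) * k, 0.0))  # True
```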

## Case II. Constant vector potential divergence. Scalar potential eliminated by gauge transformation.

Next in the order of complexity is consideration of the case (3.22). Here we also have $\phi = 0$, eliminated by gauge transformation, and are looking for solutions with the constraint

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= (2 \pi)^{-3/2} \int i \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

How can this constraint be enforced? The only obvious way is a requirement for $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t)$ to be zero for all $(\mathbf{k},t)$, meaning that the to-be-determined Fourier transform coefficients are required to be perpendicular to the wave number vector parameter at all times.

The remainder of Maxwell’s equations, (3.23), imposes the additional constraint on the Fourier transform $\mathbf{A}(\mathbf{k},t)$

\begin{aligned}0 &= (2 \pi)^{-3/2} \int \left( \frac{1}{{c^2}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) - i^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t)\right) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.30)

For this to vanish for all $\mathbf{x}$, it appears that we require the Fourier transforms $\mathbf{A}(\mathbf{k}, t)$ to be harmonic in time

\begin{aligned}\partial_{tt} \mathbf{A}(\mathbf{k}, t) = - c^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.31)

This has the familiar exponential solutions

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{ \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t },\end{aligned} \hspace{\stretch{1}}(4.32)

also subject to a requirement that $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}) = 0$. Our field, where the $\mathbf{A}_{\pm}(\mathbf{k})$ are to be determined by initial time conditions, is by (2.7) of the form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( -{\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{+}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{+}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( {\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{-}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{-}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.33)

Since $0 = \mathbf{k} \cdot \mathbf{A}_{\pm}(\mathbf{k})$, we have $\mathbf{k} \wedge \mathbf{A}_{\pm}(\mathbf{k}) = \mathbf{k} \mathbf{A}_{\pm}$. This allows for factoring out of ${\left\lvert{\mathbf{k}}\right\rvert}$. The structure of the solution is not changed by incorporating the $i (2\pi)^{-3/2} {\left\lvert{\mathbf{k}}\right\rvert}$ factors into $\mathbf{A}_{\pm}$, leaving the field having the general form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \int ( \hat{\mathbf{k}} - 1 ) \mathbf{A}_{+}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \int ( \hat{\mathbf{k}} + 1 ) \mathbf{A}_{-}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.34)

The original meaning of $\mathbf{A}_{\pm}$ as Fourier transforms of the vector potential is obscured by the tidy up change to absorb ${\left\lvert{\mathbf{k}}\right\rvert}$, but the geometry of the solution is clearer this way.
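For what it is worth, the harmonic time dependence asserted in (4.32) can be spot-checked by direct substitution into (4.31). The symbols below are stand-ins (my own, assumed) for $c$, $\left\lvert{\mathbf{k}}\right\rvert$, and the amplitude.

```python
# Check that A0 * exp(+- i c |k| t) solves  A_tt = -c^2 k^2 A  for both signs.
import sympy as sp

t = sp.symbols('t', real=True)
c, kmag = sp.symbols('c k', positive=True)
A0 = sp.symbols('A_0')

residuals = []
for sign in (+1, -1):
    A = A0 * sp.exp(sign * sp.I * c * kmag * t)
    residuals.append(sp.simplify(sp.diff(A, t, 2) + c**2 * kmag**2 * A))

print(residuals)   # [0, 0]: both signs solve the oscillator equation
```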

It is also particularly straightforward to confirm that $\gamma_0 \nabla F = 0$ separately for either half of (4.34).
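One way to see why each half works is to model the unit vector $\hat{\mathbf{k}}$ with Pauli matrices (a faithful matrix representation of the 3D Euclidean GA basis vectors; this numerical aside is my own, not from the post). Since $\hat{\mathbf{k}}^2 = 1$, the factors $\hat{\mathbf{k}} \mp 1$ appearing in the two halves of (4.34) multiply to zero.

```python
# Model khat -> khat . sigma with Pauli matrices, and verify khat^2 = 1 and
# (1 - khat)(1 + khat) = 0, the null-factor product behind the two halves.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

khat = np.array([1.0, 2.0, 2.0]) / 3.0        # sample unit propagation direction
K = khat[0]*sx + khat[1]*sy + khat[2]*sz      # khat as a matrix

print(np.allclose(K @ K, I2))                 # True: khat^2 = 1
print(np.allclose((I2 - K) @ (I2 + K), 0 * I2))  # True: the factors annihilate
```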

## Case III. Non-zero scalar potential. No gauge transformation.

Now let’s work from (3.11). In particular, a divergence operation can be factored from (3.11), for

\begin{aligned}0 = \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(4.35)

Right off the top, there is a requirement for

\begin{aligned}\text{constant} = \boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(4.36)

In terms of the Fourier transforms this is

\begin{aligned}\text{constant} = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(i \mathbf{k} \phi(\mathbf{k}, t) + \frac{1}{c} \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.37)

Are there any ways for this to equal a constant for all $\mathbf{x}$ without requiring that constant to be zero? Assuming no for now, and that this constant must be zero, this implies a coupling between the $\phi$ and $\mathbf{A}$ Fourier transforms of the form

\begin{aligned}\phi(\mathbf{k}, t) = -\frac{1}{{i c \mathbf{k}}} \partial_t \mathbf{A}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(4.38)

A secondary implication is that $\partial_t \mathbf{A}(\mathbf{k}, t) \propto \mathbf{k}$ or else $\phi(\mathbf{k}, t)$ is not a scalar. We had a transverse solution by requiring via gauge transformation that $\phi = 0$, and here we have instead the vector potential in the propagation direction.

A secondary confirmation that this is a required coupling between the scalar and vector potential can be had by evaluating the divergence equation of (4.35)

\begin{aligned}0 = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- \mathbf{k}^2 \phi(\mathbf{k}, t) + \frac{i\mathbf{k}}{c} \cdot \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.39)

Rearranging this also produces (4.38). We want to now substitute this relationship into (3.12).
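The rearrangement is simple enough to hand to sympy. This little check is my own, with $A_t$ standing in for the $\hat{\mathbf{k}}$ component of $\partial_t \mathbf{A}$, and confirms that the integrand of (4.39) reproduces the coupling (4.38).

```python
# Solve -k^2 phi + (i k / c) A_t = 0 for phi and compare to -A_t/(i c k).
import sympy as sp

c, k = sp.symbols('c k', positive=True)
phi, At = sp.symbols('phi A_t')               # At: khat component of dA/dt

sol = sp.solve(sp.Eq(-k**2 * phi + sp.I * k / c * At, 0), phi)[0]
print(sp.simplify(sol - (-At / (sp.I * c * k))))   # 0: matches (4.38)
```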

Starting with just the $\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}$ part we have

\begin{aligned}\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}&=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{i}{c^2 \mathbf{k}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + i \mathbf{k} \cdot \mathbf{A}\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.40)

Taking the gradient of this brings down a factor of $i\mathbf{k}$ for

\begin{aligned}\boldsymbol{\nabla} (\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A})&=-\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{c^2} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.41)

(3.12) in its entirety is now

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- (i\mathbf{k})^2 \mathbf{A}- \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.42)

This isn’t terribly pleasant looking. Perhaps it is better to go the other direction. We could write

\begin{aligned}\phi = \frac{i}{c \mathbf{k}} \frac{\partial {\mathbf{A}}}{\partial {t}} = \frac{i}{c} \frac{\partial {\psi}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(4.43)

so that

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{k} \psi(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.44)

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{{c^2}} \mathbf{k} \psi_{tt}- \boldsymbol{\nabla}^2 \mathbf{k} \psi + \boldsymbol{\nabla} \frac{i}{c^2} \psi_{tt}+\boldsymbol{\nabla}( \boldsymbol{\nabla} \cdot (\mathbf{k} \psi) )\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \\ \end{aligned}

Note that the gradients here operate on everything to the right, including and especially the exponential. Each application of the gradient brings down an additional $i\mathbf{k}$ factor, and we have

\begin{aligned}\frac{1}{{(\sqrt{2 \pi})^3}} \int \mathbf{k} \Bigl(\frac{1}{{c^2}} \psi_{tt}- i^2 \mathbf{k}^2 \psi + \frac{i^2}{c^2} \psi_{tt}+i^2 \mathbf{k}^2 \psi \Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

This is identically zero, so we see that this second equation provides no additional information. That is somewhat surprising, since the first equation supplied few constraints: the function $\psi(\mathbf{k}, t)$ can be anything. Understanding of this curiosity comes from computation of the Faraday bivector itself. From (2.7), that is

\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(-i \mathbf{k} \frac{i}{c}\psi_t - \frac{1}{c} \mathbf{k} \psi_t + i \mathbf{k} \wedge \mathbf{k} \psi\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.45)

All terms cancel, so we see that a non-zero $\phi$ leads to $F = 0$, as was the case when considering (4.24) (a case that also resulted in $\mathbf{A}(\mathbf{k}) \propto \mathbf{k}$).
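That cancellation can also be confirmed mechanically. Here is a sympy check (my own, using an assumed sample wave vector and an arbitrary $\psi(t)$) that the longitudinal potentials $\mathbf{A} = \mathbf{k} \psi e^{i \mathbf{k} \cdot \mathbf{x}}$, $\phi = (i/c) \psi_t e^{i \mathbf{k} \cdot \mathbf{x}}$ produce vanishing electric and magnetic fields.

```python
# E = -grad(phi) - (1/c) dA/dt and B = curl(A) both vanish for A = k psi e^{ik.x}.
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
c = sp.symbols('c', positive=True)
psi = sp.Function('psi')(t)                   # arbitrary time dependence

k = sp.Matrix([1, 2, 2])                      # sample wave vector
phase = sp.exp(sp.I * (k[0]*x + k[1]*y + k[2]*z))

A = k * psi * phase                           # A = k psi e^{i k.x}
phi = (sp.I / c) * sp.diff(psi, t) * phase    # phi = (i/c) psi_t e^{i k.x}

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
E = -grad(phi) - sp.diff(A, t) / c
B = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
               sp.diff(A[0], z) - sp.diff(A[2], x),
               sp.diff(A[1], x) - sp.diff(A[0], y)])

print([sp.simplify(e) for e in E])   # [0, 0, 0]
print([sp.simplify(b) for b in B])   # [0, 0, 0]
```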

Can this Fourier representation lead to a non-transverse solution to Maxwell’s equation? If so, it is not obvious how.

# The energy momentum tensor

The energy momentum tensor is then

\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.46)

Observing that $\gamma_0$ commutes with spatial bivectors and anticommutes with spatial vectors, and writing $\sigma_\mu = \gamma_\mu \gamma_0$, the tensor splits neatly into scalar and spatial vector components

\begin{aligned}T(\gamma_\mu) \cdot \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ T(\gamma_\mu) \wedge \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle}_{1}e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.47)

In particular for $\mu = 0$, we have

\begin{aligned}H &\equiv T(\gamma_0) \cdot \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right)\cdot\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)- (\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)) \cdot (\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t))\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ \mathbf{P} &\equiv T(\gamma_0) \wedge \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(i\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right) \cdot\left(\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)-i\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)\cdot\left(\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.49)

Integrating this over all space and identifying the delta function

\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(5.51)

reduces the tensor to a single integral in the continuous angular wave number space of $\mathbf{k}$.

\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.52)

Or,

\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} =\frac{\epsilon_0}{2} \text{Real} \int{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.53)

Multiplying out (5.53) yields for $\int H$

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2+ 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(5.54)

Recall that the only non-trivial solution we found for the assumed Fourier transform representation of $F$ was for $\phi = 0$, $\mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) = 0$. Thus we have for the energy density integrated over all space, just

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 {\left\lvert{\mathbf{A}}\right\rvert}^2 \right).\end{aligned} \hspace{\stretch{1}}(5.55)

Observe that we have the structure of a harmonic oscillator for the energy of the radiation system. What is the canonical momentum for this system? Will it correspond to the Poynting vector, integrated over all space?
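Before tackling that, note that reading the energy off mode by mode this way leans on Plancherel’s theorem for the symmetric transform convention. A one dimensional numerical analogue (my own check, with assumed discretization parameters) is:

```python
# Plancherel check: the k-space integral of |A(k)|^2 equals the x-space
# integral of |A(x)|^2 under the symmetric (2 pi)^{-1/2} convention.
import numpy as np

n, L = 512, 40.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
A = np.exp(-x**2) * np.cos(3*x)               # sample 1D field profile

Ak = np.fft.fft(A) * dx / np.sqrt(2*np.pi)    # symmetric-convention transform
dk = 2*np.pi / (n*dx)

energy_x = np.sum(np.abs(A)**2) * dx
energy_k = np.sum(np.abs(Ak)**2) * dk
print(np.allclose(energy_x, energy_k))        # True
```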

Let’s reduce the vector component of (5.53), after first imposing the $\phi=0$ and $\mathbf{k} \cdot \mathbf{A} = 0$ conditions used above for our harmonic oscillator form of the energy relationship. This is

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( i {\mathbf{A}}^{*}_t \cdot (\mathbf{k} \wedge \mathbf{A})+ i (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{A}_t\right) \\ &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( -i ({\mathbf{A}}^{*}_t \cdot \mathbf{A}) \mathbf{k}+ i \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t)\right)\end{aligned}

This is just

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{c} \text{Real} i \int \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.56)
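The wedge contractions used in that reduction can be spot-checked numerically. This is my own check, with random complex transverse vectors standing in for $\mathbf{A}$ and $\mathbf{A}_t$ (names assumed, not the post's notation):

```python
# Verify a.(k^A) = (a.k) A - (a.A) k and (k^A*).At = k (A*.At) - A* (k.At)
# collapse the momentum integrand to i k (A*.At) - i k (At*.A) when k.A = 0.
import numpy as np

rng = np.random.default_rng(0)
k = np.array([1.0, 2.0, 2.0])

def transverse(v):                            # remove the k component: k . v = 0
    return v - k * (k @ v) / (k @ k)

A  = transverse(rng.normal(size=3) + 1j*rng.normal(size=3))
At = transverse(rng.normal(size=3) + 1j*rng.normal(size=3))

def vec_dot_bivec(a, u, v):                   # a . (u ^ v) = (a.u) v - (a.v) u
    return (a @ u) * v - (a @ v) * u

def bivec_dot_vec(u, v, a):                   # (u ^ v) . a = u (v.a) - v (u.a)
    return u * (v @ a) - v * (u @ a)

integrand = (1j * vec_dot_bivec(np.conj(At), k, A)
             + 1j * bivec_dot_vec(k, np.conj(A), At))
collapsed = 1j * k * (np.conj(A) @ At) - 1j * k * (np.conj(At) @ A)
print(np.allclose(integrand, collapsed))                               # True
print(np.allclose(integrand.real, (2j * k * (np.conj(A) @ At)).real))  # True
```

The last line is the real-part identity that turns the two-term integrand into the single $\text{Real} \, i \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t)$ form.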

Recall that the Fourier transforms for the transverse propagation case had the form $\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t}$, where the minus generated the advanced wave, and the plus the receding wave. With substitution of the vector potential for the advanced wave into the energy and momentum results of (5.55) and (5.56) respectively, we have

\begin{aligned}\int H d^3 \mathbf{x} &= \epsilon_0 \int \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k} \\ \int \mathbf{P} d^3 \mathbf{x} &= \epsilon_0 \int \hat{\mathbf{k}} \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.57)

After a somewhat circuitous route, this has the relativistic symmetry that is expected. In particular, for the complete $\mu=0$ tensor we have, after integration over all space,

\begin{aligned}\int T(\gamma_0) \gamma_0 d^3 \mathbf{x} = \epsilon_0 \int (1 + \hat{\mathbf{k}}) \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.59)

The receding wave solution would give the same result, but directed as $1 - \hat{\mathbf{k}}$ instead.

Observe that we also have the four divergence conservation statement that is expected

\begin{aligned}\frac{\partial {}}{\partial {t}} \int H d^3 \mathbf{x} + \boldsymbol{\nabla} \cdot \int c \mathbf{P} d^3 \mathbf{x} &= 0.\end{aligned} \hspace{\stretch{1}}(5.60)

This follows trivially since both derivatives are zero. For a finite integration region, instead of a $0 + 0 = 0$ relationship, we’d have the power flux ${\partial {H}}/{\partial {t}}$ equal in magnitude to the momentum change through a bounding surface. For such a bounded region the time and spatial dependencies need not vanish, but we should still have this radiation energy momentum conservation.

# References

[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.

[2] Peeter Joot. Energy and momentum for Complex electric and magnetic field phasors. [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

## Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.

Posted by peeterjoot on December 22, 2009

# Motivation and notation.

In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], the energy and momentum density were derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case, examining Fourier transform solutions and the associated energy and momentum density.

A complex (phasor) representation is implied, so real parts of the fields must be taken when all is said and done. For the energy momentum tensor, the Geometric Algebra form, modified for complex fields, is used

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)

The assumed four vector potential will be written

\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)

This is subject to the requirement that $A$ is a solution of Maxwell’s equation

\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)

To avoid latex hell, no special notation will be used for the Fourier coefficients,

\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)

When convenient and unambiguous, this $(\mathbf{k},t)$ dependence will be implied.
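As a sanity check on the symmetric $(2\pi)^{-3/2}$ convention (a one dimensional numerical analogue of my own, not part of the post), the forward and inverse transforms round-trip when both carry the $(2\pi)^{-1/2}$ factor:

```python
# Forward then inverse transform with matching (2 pi)^{-1/2} factors recovers A.
import numpy as np

n, L = 256, 30.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
dk = 2*np.pi / (n*dx)
A = np.exp(-0.5 * x**2)                       # sample profile

Ak = np.fft.fft(A) * dx / np.sqrt(2*np.pi)          # forward transform A(k)
back = np.fft.ifft(Ak) * n * dk / np.sqrt(2*np.pi)  # inverse transform

print(np.allclose(back.real, A))              # True: the pair round-trips
```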

Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is

\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)

and for the four potential (or the Fourier transform functions), this is

\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)

# Setup

The field bivector $F = \nabla \wedge A$ is required for the energy momentum tensor. This is

\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})- (\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \end{aligned}

This last term is a spatial curl and the field is then

\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} \end{aligned} \hspace{\stretch{1}}(2.7)

Applied to the Fourier representation this is

\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)

The energy momentum tensor is then

\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(2.9)

# The tensor integrated over all space. Energy and momentum?

Integrating this over all space and identifying the delta function

\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(3.10)

reduces the tensor to a single integral in the continuous angular wave number space of $\mathbf{k}$.

\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)

Observing that $\gamma_0$ commutes with spatial bivectors and anticommutes with spatial vectors, and writing $\sigma_\mu = \gamma_\mu \gamma_0$, one has

\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} = \frac{\epsilon_0}{2} \text{Real} \int {\left\langle{{\left( \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left( \frac{1}{{c}} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)

The scalar and spatial vector grade selection operator has been added for convenience and does not change the result since those are necessarily the only grades anyhow. The post multiplication by the observer frame time basis vector $\gamma_0$ serves to separate the energy and momentum like components of the tensor nicely into scalar and vector aspects. In particular for $T(\gamma^0)$, one could write

\begin{aligned}\int T(\gamma^0) d^3 \mathbf{x} = (H + \mathbf{P}) \gamma_0.\end{aligned} \hspace{\stretch{1}}(3.13)

If these are correctly identified with energy and momentum then it also ought to be true that we have the conservation relationship

\begin{aligned}\frac{\partial {H}}{\partial {t}} + \boldsymbol{\nabla} \cdot (c \mathbf{P}) = 0.\end{aligned} \hspace{\stretch{1}}(3.14)

However, multiplying out (3.12) yields for $H$

\begin{aligned}H &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2 + 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(3.15)

The vector component takes a bit more work to reduce

\begin{aligned}\mathbf{P} &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} ({{\dot{\mathbf{A}}}}^{*} \cdot (\mathbf{k} \wedge \mathbf{A})+ {{\phi}}^{*} \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A})+ \frac{i}{c} (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \dot{\mathbf{A}}- \phi (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{k}\right) \\ &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} \left( ({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{k}) \mathbf{A} -({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{A}) \mathbf{k} \right)+ {{\phi}}^{*} \left( \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k} \right)+ \frac{i}{c} \left( ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}}) \mathbf{k} - (\mathbf{k} \cdot \dot{\mathbf{A}}) {\mathbf{A}}^{*} \right)+ \phi \left( \mathbf{k}^2 {\mathbf{A}}^{*} -({\mathbf{A}}^{*} \cdot \mathbf{k}) \mathbf{k} \right) \right).\end{aligned}

Canceling and regrouping leaves

\begin{aligned}\mathbf{P}&=\epsilon_0 \int d^3 \mathbf{k} \text{Real} \left(\mathbf{A} \left( \mathbf{k}^2 {{\phi}}^{*} + \mathbf{k} \cdot {{\dot{\mathbf{A}}}}^{*} \right)+ \mathbf{k} \left( -{{\phi}}^{*} (\mathbf{k} \cdot \mathbf{A}) + \frac{i}{c} ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}})\right)\right).\end{aligned} \hspace{\stretch{1}}(3.16)

This has no explicit $\mathbf{x}$ dependence, so the conservation relation (3.14) is violated unless ${\partial {H}}/{\partial {t}} = 0$. There is no reason to assume that will be the case. In the discrete Fourier series treatment, a gauge transformation allowed for elimination of $\phi$, and this implied $\mathbf{k} \cdot \mathbf{A}_\mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k}$ constant. We will probably have a similar result here, eliminating most of the terms in (3.15) and (3.16). Except for the constant $\mathbf{A}_\mathbf{k}$ solution of the field equations there is no obvious way that such a simplified energy expression will have zero derivative.

A more reasonable conclusion is that this approach is flawed. We ought to take the divergence relation as a starting point and, instead of integrating over all space, employ Gauss’s theorem to convert the divergence integral into a surface integral. Loosely speaking, the conservation relationship ought to state that the energy change in a volume is matched by the momentum flux through its surface. However, without an integral over all space, we do not get the nice delta function cancellation observed above. How to proceed is not immediately clear. Stepping back to review applications of Gauss’s theorem is probably a good first step.

# References

[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.

[2] Peeter Joot. Energy and momentum for Complex electric and magnetic field phasors. [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.

## Electrodynamic field energy for vacuum (reworked)

Posted by peeterjoot on December 21, 2009

# Previous version.

Reducing the products in the Dirac basis makes life more complicated than it needs to be (this became obvious when attempting to derive an expression for the Poynting integral).

# Motivation.

In Energy and momentum for Complex electric and magnetic field phasors [PDF], it was established how to formulate the energy momentum tensor for complex vector fields (i.e. phasors) in the Geometric Algebra formalism. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor $T(a)$ was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for $a = \gamma^0$, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [2]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

# Setup

Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all angular wave number triplets $\mathbf{k} = 2 \pi (k_1/\lambda_1, k_2/\lambda_2, k_3/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.

Fourier inversion, with $V = \lambda_1 \lambda_2 \lambda_3$, follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{ i \mathbf{k}' \cdot \mathbf{x}} e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{- i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)
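As a quick aside, the orthogonality relation (6) is easy to sanity check numerically. The 3D integral factorizes into per-axis averages, so a sketch (using numpy; `box_average` is an ad hoc helper name, not anything from the text) need only verify one axis:

```python
import numpy as np

# Box average of exp(i k'.x) exp(-i k.x) along one axis, for integer mode
# numbers n1, n2 and edge length lam. Sampling whole periods without the
# endpoint makes the discrete mean exact for these exponentials.
def box_average(n1, n2, lam, npts=4096):
    x = np.linspace(0.0, lam, npts, endpoint=False)
    return np.mean(np.exp(2j * np.pi * (n1 - n2) * x / lam))

lam = 2.5
assert abs(box_average(3, 3, lam) - 1.0) < 1e-12  # k' = k: average is 1
assert abs(box_average(3, 1, lam)) < 1e-10        # k' != k: average is 0
```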

Since the four vector potential has been expressed using an explicit split into time and space components, it will be natural to re-express the bivector field in terms of scalar and (spatial) vector potentials, with the Fourier coefficients. Writing $\sigma_m = \gamma_m \gamma_0$ for the spatial basis vectors, ${A_\mathbf{k}}^0 = \phi_\mathbf{k}$, and $\mathbf{A}_\mathbf{k} = {A_\mathbf{k}}^m \sigma_m$, this is

\begin{aligned}A_\mathbf{k} = (\phi_\mathbf{k} + \mathbf{A}_\mathbf{k}) \gamma_0.\end{aligned} \quad\quad\quad(9)

The Faraday bivector field $F$ is then

\begin{aligned}F = \sum_\mathbf{k} \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(10)

This is now enough to express the energy momentum tensor $T(\gamma^\mu)$

\begin{aligned}T(\gamma^\mu) &= -\frac{\epsilon_0}{2} \sum_{\mathbf{k},\mathbf{k}'}\text{Real} \left(\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}'})}}^{*} + i \mathbf{k}' {{\phi_{\mathbf{k}'}}}^{*} - i \mathbf{k}' \wedge {{\mathbf{A}_{\mathbf{k}'}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x}}\right).\end{aligned} \quad\quad\quad(11)

It will be more convenient to work with a scalar plus bivector (spatial vector) form of this tensor, and right multiplication by $\gamma_0$ produces such a split

\begin{aligned}T(\gamma^\mu) \gamma_0 = \left\langle{{T(\gamma^\mu) \gamma_0}}\right\rangle + \sigma_a \left\langle{{ \sigma_a T(\gamma^\mu) \gamma_0 }}\right\rangle\end{aligned} \quad\quad\quad(12)

The primary object of this treatment will be consideration of the $\mu = 0$ components of the tensor, which provide a split into energy density $T(\gamma^0) \cdot \gamma_0$, and Poynting vector (momentum density) $T(\gamma^0) \wedge \gamma_0$.

Our first step is to integrate (12) over the volume $V$. This integration, together with the orthogonality relationship (6), removes the exponentials, leaving

\begin{aligned}\int T(\gamma^\mu) \cdot \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0 }}\right\rangle \\ \int T(\gamma^\mu) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0}}\right\rangle \end{aligned} \quad\quad\quad(13)

Because $\gamma_0$ commutes with the spatial bivectors, and anticommutes with the spatial vectors, the remainder of the Dirac basis vectors in these expressions can be eliminated

\begin{aligned}\int T(\gamma^0) \cdot \gamma_0&= -\frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(15)

\begin{aligned}\int T(\gamma^0) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(16)

\begin{aligned}\int T(\gamma^m) \cdot \gamma_0&= \frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(17)

\begin{aligned}\int T(\gamma^m) \wedge \gamma_0&= \frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle.\end{aligned} \quad\quad\quad(18)

# Expanding the energy momentum tensor components.

## Energy

In (15) only the bivector-bivector and vector-vector products produce any scalar grades. Except for the bivector product this can be done by inspection. For that part we utilize the identity

\begin{aligned}\left\langle{{ (\mathbf{k} \wedge \mathbf{a}) (\mathbf{k} \wedge \mathbf{b}) }}\right\rangle= (\mathbf{a} \cdot \mathbf{k}) (\mathbf{b} \cdot \mathbf{k}) - \mathbf{k}^2 (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \quad\quad\quad(19)
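This identity can be spot checked numerically. In Euclidean 3D, $\mathbf{k} \wedge \mathbf{a} = I (\mathbf{k} \times \mathbf{a})$ with $I^2 = -1$, so the scalar grade on the left is $-(\mathbf{k} \times \mathbf{a}) \cdot (\mathbf{k} \times \mathbf{b})$. A numpy sketch with random real vectors (the complex case follows by linearity):

```python
import numpy as np

# <(k^a)(k^b)> = -(k x a).(k x b), since k^a = I(k x a) and I^2 = -1 in 3D.
rng = np.random.default_rng(1)
k, a, b = rng.standard_normal((3, 3))

lhs = -np.dot(np.cross(k, a), np.cross(k, b))
rhs = np.dot(a, k) * np.dot(b, k) - np.dot(k, k) * np.dot(a, b)
assert abs(lhs - rhs) < 1e-12
```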

This leaves for the energy $H = \int T(\gamma^0) \cdot \gamma_0$ in the volume

\begin{aligned}H = \frac{\epsilon_0 V}{2} \sum_\mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2 +\mathbf{k}^2 \left( {\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right) - {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2+ \frac{2}{c} \text{Real} \left( i {{\phi_\mathbf{k}}}^{*} \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \right)\right)\end{aligned} \quad\quad\quad(20)

We are left with a completely real expression, and one without any explicit Geometric Algebra. This does not look like the Harmonic oscillator Hamiltonian that was expected. A gauge transformation to eliminate $\phi_\mathbf{k}$ and an observation about when $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ equals zero will give us that, but first let’s get the mechanical jobs done, and reduce the products for the field momentum.

## Momentum

Now move on to (16). For the factors other than $\sigma_a$ only the vector-bivector products can contribute to the scalar product. We have two such products, one of the form

\begin{aligned}\sigma_a \left\langle{{ \sigma_a \mathbf{a} (\mathbf{k} \wedge \mathbf{c}) }}\right\rangle&=\sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) - \sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) \\ &=\mathbf{c} (\mathbf{a} \cdot \mathbf{k}) - \mathbf{k} (\mathbf{a} \cdot \mathbf{c}),\end{aligned}

and the other

\begin{aligned}\sigma_a \left\langle{{ \sigma_a (\mathbf{k} \wedge \mathbf{c}) \mathbf{a} }}\right\rangle&=\sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) - \sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) \\ &=\mathbf{k} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{k}).\end{aligned}
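Both of these say that the vector grade of $\mathbf{a} (\mathbf{k} \wedge \mathbf{c})$ is $\mathbf{a} \cdot (\mathbf{k} \wedge \mathbf{c}) = -\mathbf{a} \times (\mathbf{k} \times \mathbf{c})$, so they reduce to the BAC-CAB rule. A numpy spot check (real vectors for simplicity):

```python
import numpy as np

# sigma_a <sigma_a a (k^c)> is the vector grade of a (k^c), which is
# a . (k^c) = -a x (k x c); reversing the factor order flips the sign.
rng = np.random.default_rng(2)
a, k, c = rng.standard_normal((3, 3))

left = -np.cross(a, np.cross(k, c))
assert np.allclose(left, c * np.dot(a, k) - k * np.dot(a, c))

right = np.cross(a, np.cross(k, c))
assert np.allclose(right, k * np.dot(a, c) - c * np.dot(a, k))
```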

The momentum $\mathbf{P} = \int T(\gamma^0) \wedge \gamma_0$ in this volume follows by computation of

\begin{aligned}&\sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \\ &= i \mathbf{A}_\mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{k} \right) - i \mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{A}_\mathbf{k} \right) \\ &- i \mathbf{k} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) + i {{\mathbf{A}_{\mathbf{k}}}}^{*} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot \mathbf{k} \right)\end{aligned}

All the products pair into nice conjugates. Taking real parts and premultiplying by $-\epsilon_0 V/2$ gives the desired result. Observe that two of these terms cancel, and another two have no real part. Those last are

\begin{aligned}-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \left( {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k}+\dot{\mathbf{A}}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)&=-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \frac{d}{dt} \left( \mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)\end{aligned}

This quantity is pure imaginary, being $i$ times the real time derivative of ${\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$, so its real part is zero, leaving just

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)+ \mathbf{k}^2 \phi_\mathbf{k} {{ \mathbf{A}_\mathbf{k} }}^{*}- \mathbf{k} {{\phi_\mathbf{k}}}^{*} (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\right)\end{aligned} \quad\quad\quad(21)

I am not sure why exactly, but I actually expected a term with ${\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$, quadratic in the vector potential. Is there a mistake above?
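Whatever the status of that question, the discarded term really is purely imaginary, which is easy to confirm numerically with random complex vectors (a numpy sketch; the names are ad hoc):

```python
import numpy as np

# For any complex A and Adot, conj(Adot).A + Adot.conj(A) = z + conj(z) is
# real, so i times it has no real part, as claimed above.
rng = np.random.default_rng(3)
A = rng.standard_normal(3) + 1j * rng.standard_normal(3)
Adot = rng.standard_normal(3) + 1j * rng.standard_normal(3)

s = np.dot(np.conj(Adot), A) + np.dot(Adot, np.conj(A))
assert abs(s.imag) < 1e-12         # the sum is real
assert abs((1j * s).real) < 1e-12  # so i times it is pure imaginary
```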

## Gauge transformation to simplify the Hamiltonian.

In (20) something that looked like the Harmonic oscillator was expected. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into the Harmonic oscillator form.

If we are to change our four vector potential $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(22)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(23)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(24)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(25)

With such a transformation $\phi_\mathbf{k} = 0$, so the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (20) vanishes, as does the $\mathbf{k}^2 {\left\lvert{\phi_\mathbf{k}}\right\rvert}^2$ term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 - {\left\lvert{ c \mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(26)

Additionally, wedging (5) with $\gamma_0$ now does not lose any information, so our potential Fourier series reduces to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(27)

The desired harmonic oscillator form would be had in (26) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(29)

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(30)

So, with this $A^0 = 0$ gauge choice the bivector field $F$ is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(31)

From the left the gradient action on $A$ is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(32)

For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(33)

If the divergence of the vector potential is constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_\mathbf{k} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}
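This Fourier form of the divergence can be checked against a finite difference of the series itself, for a small ad hoc set of modes (a numpy sketch; the mode amplitudes are arbitrary test values, not anything from the text):

```python
import numpy as np

# Check div A = i sum_k (A_k . k) exp(i k.x) at a point, comparing a central
# finite difference of the series against the closed form. Two modes suffice.
lam = np.array([1.0, 2.0, 3.0])
modes = {(1, 0, 2): np.array([0.3 + 0.1j, -0.2j, 0.5]),
         (0, 1, 1): np.array([0.1, 0.4 + 0.2j, -0.3])}

def A(x):
    total = np.zeros(3, dtype=complex)
    for n, Ak in modes.items():
        k = 2 * np.pi * np.array(n) / lam
        total += Ak * np.exp(1j * np.dot(k, x))
    return total

x0, h = np.array([0.2, 0.7, 1.1]), 1e-5
div = sum((A(x0 + h * np.eye(3)[m])[m] - A(x0 - h * np.eye(3)[m])[m]) / (2 * h)
          for m in range(3))

closed = sum(1j * np.dot(Ak, 2 * np.pi * np.array(n) / lam)
             * np.exp(1j * np.dot(2 * np.pi * np.array(n) / lam, x0))
             for n, Ak in modes.items())
assert abs(div - closed) < 1e-6
```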

Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways to have $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$: for each $\mathbf{k}$, either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k}$ is constant. The constant $\mathbf{A}_\mathbf{k}$ solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?

The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(35)

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(36)
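As a sanity check on this harmonic oscillator form, each transverse mode obeys $\ddot{\mathbf{A}}_\mathbf{k} = -(c \mathbf{k})^2 \mathbf{A}_\mathbf{k}$ by (35), and (36) should then be a constant of the motion. A numpy sketch, integrating one mode with a leapfrog stepper (all constants are arbitrary test values):

```python
import numpy as np

# One transverse mode: A'' = -(c k)^2 A, a harmonic oscillator whose energy
# (epsilon_0 V / c^2)(|A'|^2/2 + (c k)^2 |A|^2/2) should be conserved.
c, V, eps0 = 1.0, 1.0, 1.0
k = np.array([2 * np.pi, 0.0, 0.0])
w2 = c**2 * np.dot(k, k)                        # (c k)^2
A = np.array([0.0, 1.0, 0.5], dtype=complex)    # transverse: k . A = 0
Adot = np.array([0.0, 0.2j, -0.1], dtype=complex)

def energy(A, Adot):
    return (eps0 * V / c**2) * 0.5 * (np.vdot(Adot, Adot).real
                                      + w2 * np.vdot(A, A).real)

H0, dt = energy(A, Adot), 1e-4
Adot -= 0.5 * dt * w2 * A                       # leapfrog half kick
for _ in range(10000):
    A += dt * Adot
    Adot -= dt * w2 * A
Adot += 0.5 * dt * w2 * A                       # resynchronize velocity
assert abs(energy(A, Adot) - H0) / H0 < 1e-4    # energy is conserved
```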

How does the gauge choice alter the Poynting vector? From (21), all the $\phi_\mathbf{k}$ dependence in that integrated momentum density is lost

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)\right).\end{aligned} \quad\quad\quad(37)

The $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ solutions to Maxwell’s equation are seen to result in zero momentum for this infinite periodic field. My expectation was something of the form $c \mathbf{P} = H \hat{\mathbf{k}}$, so intuition is either failing me, or my math is failing me, or this contrived periodic field solution leads to trouble.

# Conclusions and followup.

The objective was met: a reproduction of Bohm’s Harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear. That automatically cuts complexity from the results. Figuring out how to work this problem with complex valued potentials and also using the Geometric Algebra formulation probably also made the work a bit more difficult since blundering through both simultaneously was required instead of just one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads makes sense at every step.

As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case in more detail, and any implications of the freedom to pick $\mathbf{A}_0$.

The general calculation of $T^{\mu\nu}$ for the assumed Fourier solution should be possible too, but was not attempted. Doing that general calculation with a four dimensional Fourier series is likely tidier than working with scalar and spatial variables as done here.

Now that the math is out of the way (except possibly for the momentum which doesn’t seem right), some discussion of implications and applications is also in order. My preference is to let the math sink in a bit first and mull over the momentum issues at leisure.

# References

[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

## Electrodynamic field energy for vacuum.

Posted by peeterjoot on December 19, 2009

# Motivation.

We now know how to formulate the energy momentum tensor for complex vector fields (ie. phasors) in the Geometric Algebra formalism. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor $T(a)$ was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor, the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for $a = \gamma^0$, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [1]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

# Setup

Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all wave number triplets $\mathbf{k} = (p/\lambda_1,q/\lambda_2,r/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.

Fourier inversion follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{2 \pi i \mathbf{k}' \cdot \mathbf{x}} e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \sum_{m=1}^3 \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)

We can now form the energy density

\begin{aligned}U = T(\gamma^0) \cdot \gamma^0=-\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} \gamma^0 F \gamma^0 \Bigr).\end{aligned} \quad\quad\quad(9)

With implied summation over all repeated integer indexes (even without matching uppers and lowers), this is

\begin{aligned}U =-\frac{\epsilon_0}{2} \sum_{\mathbf{k}', \mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}'}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}'}}}^{*} \frac{2 \pi i k_m'}{\lambda_m} \right) e^{-2 \pi i \mathbf{k}' \cdot \mathbf{x}}\gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}\gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(10)

The grade selection used here doesn’t change the result since we already have a scalar, but will just make it convenient to filter out any higher order products that will cancel anyways. Integrating over the volume element and taking advantage of the orthogonality relationship (6), the exponentials are removed, leaving the energy contained in the volume

\begin{aligned}H = -\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2}\sum_{\mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}}}}^{*} \frac{2 \pi i k_m}{\lambda_m} \right) \gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) \gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(11)

# First reduction of the Hamiltonian.

Let’s take the products involved in sequence one at a time, evaluating each, and later adding and taking real parts as required, for all of

\begin{aligned}\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 (\gamma^0 \wedge \dot{A}_\mathbf{k}) \gamma^0 }}\right\rangle &=-\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle \end{aligned} \quad\quad\quad(12)

\begin{aligned}- \frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) \gamma^0}}\right\rangle &=\frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(13)

\begin{aligned}\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle &=-\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) ( \gamma^n \wedge A_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(14)

\begin{aligned}-\frac{4 \pi^2 k_m k_n}{\lambda_m \lambda_n}\left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0(\gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle. &\end{aligned} \quad\quad\quad(15)

The expectation is to obtain a Hamiltonian for the field that has the structure of harmonic oscillators, where the middle two products would have to be zero or sum to zero or have real parts that sum to zero. The first is expected to contain only products of ${\left\lvert{{\dot{A}_\mathbf{k}}^m}\right\rvert}^2$, and the last only products of ${\left\lvert{{A_\mathbf{k}}^m}\right\rvert}^2$.

While initially guessing that (13) and (14) may cancel, this isn’t so obviously the case. The use of cyclic permutation of multivectors within the scalar grade selection operator $\left\langle{{A B}}\right\rangle = \left\langle{{B A}}\right\rangle$ plus a change of dummy summation indexes in one of the two shows that this sum is of the form $Z + {{Z}}^{*}$. This sum is intrinsically real, so we can neglect one of the two, doubling the other, but we will still be required to show that the real part of either is zero.

Let’s reduce these one at a time starting with (12), and write $\dot{A}_\mathbf{k} = \kappa$ temporarily

\begin{aligned}\left\langle{{ (\gamma^0 \wedge {{\kappa}}^{*} ) (\gamma^0 \wedge \kappa) }}\right\rangle &={\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma^0 \gamma_m \gamma^0 \gamma_{m'} }}\right\rangle \\ &=-{\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma_m \gamma_{m'} }}\right\rangle \\ &={\kappa^m}^{{*}} \kappa^{m'}\delta_{m m'}.\end{aligned}
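The $\gamma$ contractions used here can be spot checked in the standard Dirac matrix representation, where the scalar grade of a multivector is a quarter of its matrix trace (a numpy sketch, assuming the usual Dirac basis and the $(+,-,-,-)$ metric):

```python
import numpy as np

# Standard Dirac representation: gamma^0 = diag(I, -I), gamma^m built from
# the Pauli matrices; the scalar grade <M> is trace(M)/4 in this rep.
I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def block(A, B, C, D):
    return np.block([[A, B], [C, D]])

g0 = block(I2, 0 * I2, 0 * I2, -I2)
gup = [g0] + [block(0 * I2, s, -s, 0 * I2) for s in (s1, s2, s3)]  # gamma^mu
gdn = [g0] + [-g for g in gup[1:]]                  # gamma_mu, metric (+,-,-,-)

def scalar(M):
    return np.trace(M).real / 4                     # <M>

# <gamma^0 gamma_m gamma^0 gamma_m'> = delta_{m m'}, as used above.
for m in range(1, 4):
    for n in range(1, 4):
        val = scalar(gup[0] @ gdn[m] @ gup[0] @ gdn[n])
        assert abs(val - (1.0 if m == n else 0.0)) < 1e-12
```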

So the first of our Hamiltonian terms is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_\mathbf{k}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle &=\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}{\left\lvert{{{\dot{A}}_{\mathbf{k}}}^m}\right\rvert}^2.\end{aligned} \quad\quad\quad(16)

Note that summation over $m$ is still implied here, so we’d be better off with a spatial vector representation of the Fourier coefficients $\mathbf{A}_\mathbf{k} = A_\mathbf{k} \wedge \gamma_0$. With such a notation, this contribution to the Hamiltonian is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2} \dot{\mathbf{A}}_\mathbf{k} \cdot {{\dot{\mathbf{A}}_\mathbf{k}}}^{*}.\end{aligned} \quad\quad\quad(17)

To reduce (13) and (14), this time writing $\kappa = A_\mathbf{k}$, we can start with just the scalar selection

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) ( \gamma^0 \wedge \dot{\kappa} ) }}\right\rangle &=\Bigl( \gamma^m {{(\kappa^0)}}^{*} - {{\kappa}}^{*} \underbrace{(\gamma^m \cdot \gamma^0)}_{=0} \Bigr) \cdot \dot{\kappa} \\ &={{(\kappa^0)}}^{*} \dot{\kappa}^m\end{aligned}

Thus the contribution to the Hamiltonian from (13) and (14) is

\begin{aligned}\frac{2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \pi k_m}{c \lambda_m} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \dot{A_\mathbf{k}}^m \Bigr)=\frac{2 \pi \epsilon_0 \lambda_1 \lambda_2 \lambda_3}{c} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \Bigr).\end{aligned} \quad\quad\quad(18)

Most definitely not zero in general. Our final expansion (15) is the messiest. Again with $A_\mathbf{k} = \kappa$ for short, the grade selection of this term in coordinates is

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- {{\kappa_\mu}}^{*} \kappa^\nu \left\langle{{ (\gamma^m \wedge \gamma^\mu) \gamma^0 (\gamma_n \wedge \gamma_\nu) \gamma^0 }}\right\rangle\end{aligned} \quad\quad\quad(19)

Expanding this out yields

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- ( {\left\lvert{\kappa_0}\right\rvert}^2 - {\left\lvert{\kappa^a}\right\rvert}^2 ) \delta_{m n} + {{\kappa^n}}^{*} \kappa^m.\end{aligned} \quad\quad\quad(20)

The contribution to the Hamiltonian from this, with $\phi_\mathbf{k} = A^0_\mathbf{k}$, is then

\begin{aligned}2 \pi^2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \Bigl(-\mathbf{k}^2 {{\phi_\mathbf{k}}}^{*} \phi_\mathbf{k} + \mathbf{k}^2 ({{\mathbf{A}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k})+ (\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*}) (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\Bigr).\end{aligned} \quad\quad\quad(21)

A final reassembly of the Hamiltonian from the parts (17) and (18) and (21) is then

\begin{aligned}H = \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2 c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{2 \pi}{c} \text{Real} \Bigl( i {{ \phi_\mathbf{k} }}^{*} (\mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k}) \Bigr)+2 \pi^2 \Bigl(\mathbf{k}^2 ( -{\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 ) + {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(22)

This is finally reduced to a completely real expression, and one without any explicit Geometric Algebra. All the four vector Fourier potentials are written out explicitly in terms of the spacetime split $A_\mathbf{k} = (\phi_\mathbf{k}, \mathbf{A}_\mathbf{k})$, which is natural since an explicit time and space split was the starting point.

# Gauge transformation to simplify the Hamiltonian.

While (22) has considerably simpler form than (11), what was expected was something that looked like the harmonic oscillator. On the surface this does not appear to be such a beast. Exploitation of the gauge freedom is required to make the simplification that puts things into harmonic oscillator form.

If we are to change our four vector potential $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(23)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(24)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(25)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(26)
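As a sanity check, this gauge condition can be verified numerically mode by mode. The sketch below is a toy calculation (units where $c = 1$ and a made-up, analytically integrable $\phi_\mathbf{k}(t)$): it takes the per-mode antiderivative $\psi_k(t) = -c \int_0^t \phi_k \, d\tau$ implied by (25) and confirms that the transformed $A^0$ Fourier coefficient vanishes.

```python
import math

c = 1.0  # units where c = 1; only the structure of the check matters

def phi_k(t):
    # made-up scalar potential Fourier coefficient for a single mode k
    return math.cos(3.0 * t)

def psi_k(t):
    # psi_k(t) = -c * integral_0^t phi_k(tau) dtau, integrated analytically here
    return -c * math.sin(3.0 * t) / 3.0

def transformed_A0(t, h=1e-6):
    # gauge transformed coefficient A^0 -> A^0 + (1/c) d(psi_k)/dt,
    # with the time derivative taken by central difference
    dpsi = (psi_k(t + h) - psi_k(t - h)) / (2.0 * h)
    return phi_k(t) + dpsi / c

print(abs(transformed_A0(0.8)))  # ~0, limited only by finite difference error
```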

With such a transformation, the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (22) vanishes, as does the $\phi_\mathbf{k}$ term in the four vector square of the last term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 + {\left\lvert{ ( 2 \pi c \mathbf{k}) \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(27)

Additionally, wedging (5) with $\gamma_0$ now does not lose any information, so our potential Fourier series is reduced to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(28)
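As an aside on the numerics of this coefficient extraction, a one dimensional analogue (the box length, mode number, and coefficient below are all made up for illustration) shows a rectangle rule round trip recovering a mode coefficient, thanks to the orthogonality of the exponentials over one period:

```python
import math, cmath

lam = 2.5          # box length (1D analogue of the lambda_1 lambda_2 lambda_3 box)
k = 3              # integer mode number
A_k = 1.5 - 0.5j   # made-up mode coefficient

def A(x):
    # single-mode field A(x) = A_k exp(2 pi i k x / lam)
    return A_k * cmath.exp(2j * math.pi * k * x / lam)

def coefficient(n, N=1000):
    # (1/lam) integral_0^lam A(x) exp(-2 pi i n x / lam) dx via the rectangle
    # rule, which is exact (to roundoff) for these periodic exponentials
    h = lam / N
    total = sum(A(j * h) * cmath.exp(-2j * math.pi * n * j * h / lam)
                for j in range(N))
    return total * h / lam

print(abs(coefficient(3) - A_k))  # ~0: the mode coefficient is recovered
print(abs(coefficient(2)))       # ~0: orthogonality kills the other modes
```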

The desired harmonic oscillator form would be had in (27) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(30)

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(31)

So, with this $A^0 = 0$ gauge choice the bivector field $F$ is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(32)

From the left the gradient action on $A$ is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(33)

For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(34)

If the divergence of the vector potential is spatially constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ &=2 \pi i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways to have $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$. For each $\mathbf{k} \ne 0$ we require either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k} = \text{constant}$. The constant $\mathbf{A}_\mathbf{k}$ solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?
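The transversality condition can be spot checked numerically. This sketch (hypothetical mode and coefficient values, a unit box so that $\mathbf{k}$ has integer components) confirms that a single mode with $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ has identically zero divergence, while a non-transverse choice does not:

```python
import math, cmath

# Made-up single mode in a unit box: A(x) = A_m exp(2 pi i k . x)
k = (1.0, 2.0, -2.0)
A_perp = (2.0, 1.0, 2.0)  # A_perp . k = 2 + 2 - 4 = 0 (transverse)
A_par = (1.0, 0.0, 0.0)   # A_par . k = 1 (not transverse)

def div_A(Ak, x, h=1e-6):
    # numerical divergence sum_m dA_m/dx_m by central differences
    def comp(m, xm):
        y = list(x)
        y[m] = xm
        phase = sum(ki * yi for ki, yi in zip(k, y))
        return Ak[m] * cmath.exp(2j * math.pi * phase)
    return sum((comp(m, x[m] + h) - comp(m, x[m] - h)) / (2.0 * h)
               for m in range(3))

x = (0.3, -0.1, 0.7)
print(abs(div_A(A_perp, x)))  # ~0: the transverse mode is divergence free
print(abs(div_A(A_par, x)))   # ~2*pi: the longitudinal mode is not
```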

The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ for all $\mathbf{k} \ne 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(36)

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(37)
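Each mode in this sum is now an independent oscillator of angular frequency $\omega = 2 \pi c {\left\lvert{\mathbf{k}}\right\rvert}$. A small sketch (toy values, $c = 1$, one real component standing in for a transverse component of $\mathbf{A}_\mathbf{k}$) confirms that the per-mode bracketed quantity is a constant of the motion for the oscillator solutions:

```python
import math

c, k_mag = 1.0, 1.5                # toy values for c and |k|
omega = 2.0 * math.pi * c * k_mag  # oscillator frequency 2 pi c |k|

def mode(t, A0=0.7, V0=-0.3):
    # solution of A'' + omega^2 A = 0 with A(0) = A0, A'(0) = V0
    A = A0 * math.cos(omega * t) + (V0 / omega) * math.sin(omega * t)
    V = -A0 * omega * math.sin(omega * t) + V0 * math.cos(omega * t)
    return A, V

def energy(t):
    # per-component oscillator energy (1/2)(|A'|^2 + omega^2 |A|^2)
    A, V = mode(t)
    return 0.5 * (V * V + omega * omega * A * A)

print(abs(energy(2.3) - energy(0.0)))  # ~0: conserved
```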

# Conclusions and followup.

The objective was met: a reproduction of Bohm’s harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear: that automatically cuts complexity from the results. Figuring out how to work this problem with complex valued potentials while also using the Geometric Algebra formulation probably made the work more difficult, since blundering through both simultaneously was required instead of just one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads all makes sense.

As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case, and any implications of the freedom to pick $\mathbf{A}_0$. I’d also like to construct the Poynting vector $T(\gamma^0) \wedge \gamma_0$, and see what the structure of that is with this Fourier representation.

A general calculation of $T^{\mu\nu}$ for an assumed Fourier solution should be possible too, but working in spatial quantities for the general case is probably torture. A four dimensional Fourier series is likely a superior option for the general case.

# References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

## Spherical polar pendulum for one and multiple masses (Take II)

Posted by peeterjoot on November 10, 2009

# Motivation

The multiple spherical pendulum problem has just been attempted with a bivector parameterized Lagrangian ([1]), but that did not turn out to be an effective approach. Here a variation is used, employing regular plain old scalar spherical angle parameterized kinetic energy, but still employing Geometric Algebra to express the Hermitian quadratic form associated with this energy term.

The same set of simplifying assumptions will be made. These are point masses, zero friction at the pivots and rigid nonspringy massless connecting rods between the masses.

# The Lagrangian.

A two particle spherical pendulum is depicted in the figure below.

Figure: Double spherical pendulum.

The position vector for each particle can be expressed relative to the mass it is connected to (or the origin for the first particle), as in

\begin{aligned}z_k &= z_{k-1} + \mathbf{e}_3 l_k e^{j_k \theta_k} \\ j_k &= \mathbf{e}_3 \wedge \left( \mathbf{e}_1 e^{i \phi_k} \right) \\ i &= \mathbf{e}_1 \wedge \mathbf{e}_2\end{aligned} \quad\quad\quad(1)

To express the Kinetic energy for any of the masses $m_k$, we need the derivative of the incremental difference in position

\begin{aligned}\frac{d}{dt} \left( \mathbf{e}_3 e^{j_k \theta_k} \right)&=\mathbf{e}_3 \left( j_k \dot{\theta}_k e^{j_k \theta_k} + \frac{d j_k }{dt} \sin\theta_k \right) \\ &=\mathbf{e}_3 \left( j_k \dot{\theta}_k e^{j_k \theta_k} + \mathbf{e}_3 \mathbf{e}_2 \dot{\phi}_k e^{i \phi_k} \sin\theta_k \right) \\ &=\left( \frac{d}{dt}\begin{bmatrix}\theta_k & \phi_k\end{bmatrix} \right)\begin{bmatrix}\mathbf{e}_1 e^{i \phi_k} e^{j_k \theta_k} \\ \mathbf{e}_2 e^{i \phi_k} \sin\theta_k\end{bmatrix}\end{aligned}

Introducing a Hermitian conjugation $A^\dagger = \tilde{A}^\text{T}$, reversing and transposing the matrix, and writing

\begin{aligned}A_k &=\begin{bmatrix}\mathbf{e}_1 e^{i \phi_k} e^{j_k \theta_k} \\ \mathbf{e}_2 e^{i \phi_k} \sin\theta_k\end{bmatrix} \\ \boldsymbol{\Theta}_k &=\begin{bmatrix}\theta_k \\ \phi_k\end{bmatrix}\end{aligned} \quad\quad\quad(4)

We can now write the relative velocity differential as

\begin{aligned}(\dot{z}_k - \dot{z}_{k-1})^2 = l_k^2 {\dot{\boldsymbol{\Theta}}_k}^\dagger A_k A_k^\dagger \dot{\boldsymbol{\Theta}}_k\end{aligned} \quad\quad\quad(6)

Observe that the inner product is Hermitian under this definition since $(A_k A_k^\dagger)^\dagger = A_k A_k^\dagger$. \footnote{Realized later, and being too lazy to adjust everything in these notes, the use of reversion here is not necessary. Since the generalized coordinates are scalars we could use transposition instead of Hermitian conjugation. All the matrix elements are vectors so reversal doesn’t change anything.}

The total (squared) velocity of the $k$th particle is then

\begin{aligned}\boldsymbol{\Theta} &=\begin{bmatrix}\boldsymbol{\Theta}_1 \\ \boldsymbol{\Theta}_2 \\ \vdots \\ \boldsymbol{\Theta}_N \\ \end{bmatrix} \\ B_k &=\begin{bmatrix}l_1 A_1 \\ l_2 A_2 \\ \vdots \\ l_k A_k \\ 0 \\ \end{bmatrix} \\ (\dot{z}_k)^2 &=\dot{\boldsymbol{\Theta}}^\dagger B_k B_k^\dagger \dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(7)

(where the zero matrix in $B_k$ is an $N-k$ by one zero). Summing over all masses and adding in the potential energy we have for the Lagrangian of the system

\begin{aligned}K &=\frac{1}{{2}} \sum_{k=1}^N m_k \dot{\boldsymbol{\Theta}}^\dagger B_k B_k^\dagger \dot{\boldsymbol{\Theta}} \\ \mu_k &= \sum_{j=k}^N m_j \\ \Phi &=g \sum_{k=1}^N \mu_k l_k \cos\theta_k \\ \mathcal{L} &= K - \Phi\end{aligned} \quad\quad\quad(10)

There are a few layers of equations involved, and we still have an unholy mess of matrix and geometric algebra in the kernel of the kinetic energy quadratic form, but at least this time all the generalized coordinates of the system are scalars.

# Some tidy up.

Before continuing with evaluation of the Euler-Lagrange equations it is helpful to make a couple of observations about the structure of the matrix products that make up our velocity quadratic forms

\begin{aligned}\dot{\boldsymbol{\Theta}}^\dagger B_k B_k^\dagger \dot{\boldsymbol{\Theta}}&=\dot{\boldsymbol{\Theta}}^\dagger \begin{bmatrix}\begin{bmatrix}l_1^2 A_1 A_1^\dagger & l_1 l_2 A_1 A_2^\dagger & \hdots & l_1 l_k A_1 A_k^\dagger \\ l_2 l_1 A_2 A_1^\dagger & l_2^2 A_2 A_2^\dagger & \hdots & l_2 l_k A_2 A_k^\dagger \\ \vdots \\ l_k l_1 A_k A_1^\dagger & l_k l_2 A_k A_2^\dagger & \hdots & l_k^2 A_k A_k^\dagger \end{bmatrix} & 0 \\ 0 & 0\end{bmatrix}\dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(14)

Specifically, consider the $A_a A_b^\dagger$ products that make up the elements of the matrices $Q_k = B_k B_k^\dagger$. Without knowing anything about the grades that make up the elements of $Q_k$, since it is Hermitian (by this definition of Hermitian) there can be no elements of grade two or three in the final matrix. This is because reversion of such grades inverts the sign, and the matrix elements in $Q_k$ all equal their reverse. Additionally, the elements of the multivector column matrices $A_k$ are vectors, so in the product $A_a A_b^\dagger$ we can only have scalar and bivector (grade two) elements. The resulting one by one scalar matrix is a sum over all the mixed angular velocities $\dot{\theta}_a \dot{\theta}_b$, $\dot{\theta}_a \dot{\phi}_b$, and $\dot{\phi}_a \dot{\phi}_b$, so once this summation is complete any bivector grades of $A_a A_b^\dagger$ must cancel out. This is consistent with the expectation that we have a one by one scalar matrix result in the end (i.e. a number). The end result is the freedom to exploit the convenience of explicitly using a scalar selection operator that filters out any vector, bivector, and trivector grades in the products $A_a A_b^\dagger$. We will get the same result if we write

\begin{aligned}\dot{\boldsymbol{\Theta}}^\dagger B_k B_k^\dagger \dot{\boldsymbol{\Theta}}&=\dot{\boldsymbol{\Theta}}^\dagger \begin{bmatrix}\begin{bmatrix}l_1^2 \left\langle{{A_1 A_1^\dagger}}\right\rangle & l_1 l_2 \left\langle{{A_1 A_2^\dagger}}\right\rangle & \hdots & l_1 l_k \left\langle{{A_1 A_k^\dagger}}\right\rangle \\ l_2 l_1 \left\langle{{A_2 A_1^\dagger}}\right\rangle & l_2^2 \left\langle{{A_2 A_2^\dagger}}\right\rangle & \hdots & l_2 l_k \left\langle{{A_2 A_k^\dagger}}\right\rangle \\ \vdots \\ l_k l_1 \left\langle{{A_k A_1^\dagger}}\right\rangle & l_k l_2 \left\langle{{A_k A_2^\dagger}}\right\rangle & \hdots & l_k^2 \left\langle{{A_k A_k^\dagger}}\right\rangle\end{bmatrix} & 0 \\ 0 & 0\end{bmatrix}\dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(15)

Pulling in the summation over $m_k$ we have

\begin{aligned}\sum_k m_k\dot{\boldsymbol{\Theta}}^\dagger B_k B_k^\dagger \dot{\boldsymbol{\Theta}}&=\dot{\boldsymbol{\Theta}}^\dagger {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c \left\langle{{A_r A_c^\dagger}}\right\rangle\end{bmatrix}}_{rc}\dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(16)

It appears justifiable to label the $\mu_{\max(r,c)} l_r l_c$ factors of the angular velocity matrices as moments of inertia in a generalized sense. Using this block matrix form, and scalar selection, we can now write the Lagrangian in a slightly tidier form

\begin{aligned}\mu_k &= \sum_{j=k}^N m_j \\ Q &= {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c A_r A_c^\dagger \end{bmatrix}}_{rc} \\ K &=\frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\dagger Q\dot{\boldsymbol{\Theta}} =\frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\text{T} \left\langle{{Q}}\right\rangle\dot{\boldsymbol{\Theta}} \\ \Phi &=g \sum_{k=1}^N \mu_k l_k \cos\theta_k \\ \mathcal{L} &= K - \Phi\end{aligned} \quad\quad\quad(17)

After some expansion, writing $S_\theta = \sin\theta$, $C_\phi = \cos\phi$ and so forth, one can find that the scalar parts of the block matrices $A_r A_c^\dagger$ contained in $Q$ are

\begin{aligned}\left\langle{{A_r A_c^\dagger}}\right\rangle=\begin{bmatrix}C_{\phi_c - \phi_r} C_{\theta_r}C_{\theta_c}+S_{\theta_r}S_{\theta_c} &-S_{\phi_c - \phi_r} C_{\theta_r} S_{\theta_c} \\ S_{\phi_c - \phi_r} C_{\theta_c} S_{\theta_r} &C_{\phi_c - \phi_r} S_{\theta_r} S_{\theta_c}\end{bmatrix}\end{aligned} \quad\quad\quad(22)

The diagonal blocks are particularly simple and have no $\phi$ dependence

\begin{aligned}\left\langle{{A_r A_r^\dagger}}\right\rangle=\begin{bmatrix}1 & 0 \\ 0 & \sin^2 \theta_r\end{bmatrix}\end{aligned} \quad\quad\quad(23)
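This diagonal block can be cross checked against an explicit Cartesian velocity computation. The sketch below (with made-up smooth angle trajectories) compares the squared speed of a single rod tip, $z = l(\mathbf{e}_3 \cos\theta + \mathbf{e}_1 e^{i\phi} \sin\theta)$ in Cartesian form, against the quadratic form $l^2 (\dot{\theta}^2 + \sin^2\theta \, \dot{\phi}^2)$ that this block encodes:

```python
import math

l = 0.7  # rod length (made up)

# made-up smooth angle trajectories theta(t) and phi(t)
th = lambda t: 0.9 + 0.3 * math.sin(t)
ph = lambda t: 1.2 * t

def pos(t):
    # Cartesian tip position (l sin(th) cos(ph), l sin(th) sin(ph), l cos(th))
    s, c = math.sin(th(t)), math.cos(th(t))
    return (l * s * math.cos(ph(t)), l * s * math.sin(ph(t)), l * c)

def speed2(t, h=1e-6):
    # |dz/dt|^2 by central differences
    a, b = pos(t - h), pos(t + h)
    return sum(((q - p) / (2.0 * h)) ** 2 for p, q in zip(a, b))

def quadratic_form(t, h=1e-6):
    # l^2 (theta'^2 + sin^2(theta) phi'^2): the diag(1, sin^2 theta) kernel
    thd = (th(t + h) - th(t - h)) / (2.0 * h)
    phd = 1.2  # d(ph)/dt for the trajectory above
    return l * l * (thd * thd + math.sin(th(t)) ** 2 * phd * phd)

print(abs(speed2(0.5) - quadratic_form(0.5)))  # ~0
```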

Observe also that $\left\langle{{A_r A_c^\dagger}}\right\rangle^T = \left\langle{{A_c A_r^\dagger}}\right\rangle$, so the scalar matrix

\begin{aligned}\left\langle{{Q}}\right\rangle = {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c \left\langle{{ A_r A_c^\dagger }}\right\rangle\end{bmatrix}}_{rc}\end{aligned} \quad\quad\quad(24)

is a real symmetric matrix. We have the option of using this explicit scalar expansion if desired for further computations associated with this problem. That completely eliminates the Geometric Algebra from the problem, and is probably a logical way to formulate things for numerical work, since one can then exploit any pre-existing matrix algebra system without having to create one that understands non-commuting variables and vector products.
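A minimal sketch of that numerical formulation, assuming the scalar block expansion above (the function and variable names here are mine, not from any library), builds $\left\langle{{Q}}\right\rangle$ directly as a real symmetric $2N \times 2N$ matrix in pure Python:

```python
import math

def block(theta_r, phi_r, theta_c, phi_c):
    # scalar part <A_r A_c^dagger> from the explicit expansion above
    dp = phi_c - phi_r
    C, S = math.cos, math.sin
    return [[C(dp) * C(theta_r) * C(theta_c) + S(theta_r) * S(theta_c),
             -S(dp) * C(theta_r) * S(theta_c)],
            [S(dp) * C(theta_c) * S(theta_r),
             C(dp) * S(theta_r) * S(theta_c)]]

def Q_matrix(masses, lengths, thetas, phis):
    # <Q>: 2N x 2N real symmetric kinetic energy kernel, where
    # mu_max(r,c) is the total mass hanging at or below index max(r, c)
    N = len(masses)
    mu = [sum(masses[j:]) for j in range(N)]
    Q = [[0.0] * (2 * N) for _ in range(2 * N)]
    for r in range(N):
        for c in range(N):
            b = block(thetas[r], phis[r], thetas[c], phis[c])
            w = mu[max(r, c)] * lengths[r] * lengths[c]
            for i in range(2):
                for j in range(2):
                    Q[2 * r + i][2 * c + j] = w * b[i][j]
    return Q

Q = Q_matrix([1.0, 2.0], [0.5, 1.0], [0.4, 1.1], [0.2, -0.3])
print(all(abs(Q[i][j] - Q[j][i]) < 1e-12 for i in range(4) for j in range(4)))  # True
```

The symmetry check at the end reflects the $\left\langle{{A_r A_c^\dagger}}\right\rangle^T = \left\langle{{A_c A_r^\dagger}}\right\rangle$ observation above.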

# Evaluating the Euler-Lagrange equations.

For the acceleration terms of the Euler-Lagrange equations our computation reduces nicely to a function of only $\left\langle{{Q}}\right\rangle$

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}_a}}&=\frac{1}{{2}} \frac{d}{dt} \left(\frac{\partial {\dot{\boldsymbol{\Theta}}}}{\partial {\dot{\theta}_a}}^\text{T}\left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}+\dot{\boldsymbol{\Theta}}^\text{T}\left\langle{{Q}}\right\rangle \frac{\partial {\dot{\boldsymbol{\Theta}}}}{\partial {\dot{\theta}_a}}\right) \\ &=\frac{d}{dt} \left({\begin{bmatrix}\delta_{ac}\begin{bmatrix}1 & 0\end{bmatrix}\end{bmatrix}}_c\left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}\right) \end{aligned}

and

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{\phi}_a}}&=\frac{1}{{2}} \frac{d}{dt} \left(\frac{\partial {\dot{\boldsymbol{\Theta}}}}{\partial {\dot{\phi}_a}}^\text{T}\left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}+\dot{\boldsymbol{\Theta}}^\text{T}\left\langle{{Q}}\right\rangle \frac{\partial {\dot{\boldsymbol{\Theta}}}}{\partial {\dot{\phi}_a}}\right) \\ &=\frac{d}{dt} \left({\begin{bmatrix}\delta_{ac}\begin{bmatrix}0 & 1\end{bmatrix}\end{bmatrix}}_c\left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}\right) \end{aligned}

The last groupings above made use of $\left\langle{{Q}}\right\rangle = \left\langle{{Q}}\right\rangle^\text{T}$, and in particular $(\left\langle{{Q}}\right\rangle + \left\langle{{Q}}\right\rangle^\text{T})/2 = \left\langle{{Q}}\right\rangle$. We can now form a column matrix putting all the angular velocity gradient in a tidy block matrix representation

\begin{aligned}\nabla_{\dot{\boldsymbol{\Theta}}} \mathcal{L} = {\begin{bmatrix}\begin{bmatrix}\frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}_r}} \\ \frac{\partial {\mathcal{L}}}{\partial {\dot{\phi}_r}} \\ \end{bmatrix}\end{bmatrix}}_r = \left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(25)

A small aside on Hamiltonian form. This velocity gradient is also the conjugate momentum of the Hamiltonian, so if we wish to express the Hamiltonian in terms of conjugate momenta, we require invertibility of $\left\langle{{Q}}\right\rangle$ at the point in time that we evaluate things. Writing

\begin{aligned}P_{\boldsymbol{\Theta}} = \nabla_{\dot{\boldsymbol{\Theta}}} \mathcal{L} \end{aligned} \quad\quad\quad(26)

and noting that $(\left\langle{{Q}}\right\rangle^{-1})^\text{T} = \left\langle{{Q}}\right\rangle^{-1}$, we get for the kinetic energy portion of the Hamiltonian

\begin{aligned}K = \frac{1}{{2}} {P_{\boldsymbol{\Theta}}}^\text{T} \left\langle{{Q}}\right\rangle^{-1} P_{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(27)

Now, the invertibility of $\left\langle{{Q}}\right\rangle$ cannot be taken for granted. Even in the single particle case we do not have invertibility everywhere. For the single particle case we have

\begin{aligned}\left\langle{{Q}}\right\rangle =m l^2 \begin{bmatrix}1 & 0 \\ 0 & \sin^2 \theta\end{bmatrix}\end{aligned} \quad\quad\quad(28)

so at $\theta = 0$ or $\theta = \pi$ this quadratic form is singular, and the planar angular momentum becomes a constant of motion.
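Concretely, the determinant of this single mass form is $(m l^2)^2 \sin^2 \theta$, which vanishes exactly where $\sin\theta = 0$; a two line sketch (made-up unit parameters):

```python
import math

m, l = 1.0, 1.0  # made-up single mass parameters

def detQ(theta):
    # det <Q> = (m l^2)^2 sin^2(theta) for the single mass quadratic form
    return (m * l * l) ** 2 * math.sin(theta) ** 2

print(detQ(0.0), detQ(math.pi / 2))  # 0.0 at the degenerate angle, 1.0 away from it
```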

Returning to the evaluation of the Euler-Lagrange equations, the problem is now reduced to calculating the right hand side of the following system

\begin{aligned}\frac{d}{dt} \left( \left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}} \right) ={\begin{bmatrix}\begin{bmatrix}\frac{\partial {\mathcal{L}}}{\partial {\theta_r}} \\ \frac{\partial {\mathcal{L}}}{\partial {\phi_r}} \\ \end{bmatrix}\end{bmatrix}}_r\end{aligned} \quad\quad\quad(29)

With back substitution of 22 and 24 we have a complete non-multivector expansion of the left hand side. For the right hand side, taking the $\theta_a$ and $\phi_a$ derivatives respectively, we get

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\theta_a}}=\frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\dagger {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c \left\langle{{\frac{\partial {A_r}}{\partial {\theta_a}} A_c^\dagger + A_r \frac{\partial {A_c}}{\partial {\theta_a}}^\dagger}}\right\rangle\end{bmatrix}}_{rc} \dot{\boldsymbol{\Theta}}-g \mu_a l_a \sin\theta_a \end{aligned} \quad\quad\quad(30)

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\phi_a}}=\frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\dagger {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c \left\langle{{\frac{\partial {A_r}}{\partial {\phi_a}} A_c^\dagger + A_r \frac{\partial {A_c}}{\partial {\phi_a}}^\dagger}}\right\rangle\end{bmatrix}}_{rc} \dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(31)

So to proceed we must consider the $\left\langle{{A_r A_c^\dagger}}\right\rangle$ partials. A bit of thought shows that the matrices of partials above are mostly zeros. Illustrating by example, consider ${\partial {\left\langle{{Q}}\right\rangle}}/{\partial {\theta_2}}$, which in block matrix form is

\begin{aligned}\frac{\partial {\left\langle{{Q}}\right\rangle}}{\partial {\theta_2}}=\begin{bmatrix}0 & \frac{1}{{2}} \mu_2 l_1 l_2 \left\langle{{A_1 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle & 0 & \hdots & 0 \\ \frac{1}{{2}} \mu_2 l_2 l_1 \left\langle{{\frac{\partial {A_2}}{\partial {\theta_2}} A_1^\dagger}}\right\rangle &\frac{1}{{2}} \mu_2 l_2 l_2 \left\langle{{A_2 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger + \frac{\partial {A_2}}{\partial {\theta_2}} A_2^\dagger}}\right\rangle &\frac{1}{{2}} \mu_3 l_2 l_3 \left\langle{{\frac{\partial {A_2}}{\partial {\theta_2}} A_3^\dagger}}\right\rangle & \hdots &\frac{1}{{2}} \mu_N l_2 l_N \left\langle{{\frac{\partial {A_2}}{\partial {\theta_2}} A_N^\dagger}}\right\rangle \\ 0 & \frac{1}{{2}} \mu_3 l_3 l_2 \left\langle{{A_3 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle & 0 & \hdots & 0 \\ 0 & \vdots & 0 & \hdots & 0 \\ 0 & \frac{1}{{2}} \mu_N l_N l_2 \left\langle{{A_N \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle & 0 & \hdots & 0 \\ \end{bmatrix}\end{aligned} \quad\quad\quad(32)

Observe that the diagonal term has a scalar plus its reverse, so we can drop the one half factor and one of the summands for a total contribution to ${\partial {\mathcal{L}}}/{\partial {\theta_2}}$ of just

\begin{aligned}\mu_2 {l_2}^2 {\dot{\boldsymbol{\Theta}}_2}^\text{T} \left\langle{{\frac{\partial {A_2}}{\partial {\theta_2}} A_2^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_2\end{aligned}

Now consider one of the pairs of off diagonal terms. Adding these we find contributions to ${\partial {\mathcal{L}}}/{\partial {\theta_2}}$ of

\begin{aligned}\frac{1}{{2}} \mu_2 l_1 l_2 {\dot{\boldsymbol{\Theta}}_1}^\text{T}\left\langle{{A_1 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_2+\frac{1}{{2}} \mu_2 l_2 l_1 {\dot{\boldsymbol{\Theta}}_2}^\text{T}\left\langle{{\frac{\partial {A_2}}{\partial {\theta_2}} A_1^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_1&=\frac{1}{{2}} \mu_2 l_1 l_2 {\dot{\boldsymbol{\Theta}}_1}^\text{T}\left\langle{{A_1 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger + A_1 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_2 \\ &=\mu_2 l_1 l_2 {\dot{\boldsymbol{\Theta}}_1}^\text{T}\left\langle{{A_1 \frac{\partial {A_2}}{\partial {\theta_2}}^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_2 \\ \end{aligned}

This has exactly the same form as the diagonal term, so summing over all terms we get for the position gradient components of the Euler-Lagrange equation just

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\theta_a}}&=\sum_{k}\mu_{\max(k,a)} l_k l_a {\dot{\boldsymbol{\Theta}}_k}^\text{T}\left\langle{{A_k \frac{\partial {A_a}}{\partial {\theta_a}}^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_a -g \mu_a l_a \sin\theta_a \end{aligned} \quad\quad\quad(33)

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\phi_a}}&=\sum_{k}\mu_{\max(k,a)} l_k l_a {\dot{\boldsymbol{\Theta}}_k}^\text{T}\left\langle{{A_k \frac{\partial {A_a}}{\partial {\phi_a}}^\dagger}}\right\rangle \dot{\boldsymbol{\Theta}}_a \end{aligned} \quad\quad\quad(34)

All that remains to do is evaluate the $\left\langle{{A_k {\partial {A_a}}/{\partial {\phi_a}}^\dagger}}\right\rangle$ matrices.

It should be possible, but tedious, to calculate the block matrix derivative terms from the $A_a$ partials using

\begin{aligned}\frac{\partial {A_a}}{\partial {\theta_a}} &=\begin{bmatrix}-\mathbf{e}_3 e^{j_a \theta_a} \\ \mathbf{e}_2 e^{i \phi_a} C_{\theta_a}\end{bmatrix}\end{aligned} \quad\quad\quad(35)

\begin{aligned}\frac{\partial {A_a}}{\partial {\phi_a}}&=\begin{bmatrix}\mathbf{e}_2 e^{i \phi_a} C_{\theta_a} \\ -\mathbf{e}_1 e^{i \phi_a} S_{\theta_a}\end{bmatrix}\end{aligned} \quad\quad\quad(36)

However, multiplying this out and reducing is a bit tedious, and would be a better job for a symbolic algebra package. With 22 available to use, one easily gets

\begin{aligned}\left\langle{{ A_k \frac{\partial {A_a}}{\partial {\theta_a}}^\dagger }}\right\rangle&=\begin{bmatrix}-C_{\phi_a - \phi_k} C_{\theta_k} S_{\theta_a} + S_{\theta_k} C_{\theta_a} &-S_{\phi_a - \phi_k} C_{\theta_k} C_{\theta_a} \\ -S_{\phi_a - \phi_k} S_{\theta_a} S_{\theta_k} &C_{\phi_a - \phi_k} (1 + \delta_{k a}) S_{\theta_k} C_{\theta_a} \end{bmatrix}\end{aligned} \quad\quad\quad(37)

\begin{aligned}\left\langle{{ A_k \frac{\partial {A_a}}{\partial {\phi_a}}^\dagger }}\right\rangle&=\begin{bmatrix}-S_{\phi_a - \phi_k} C_{\theta_k} C_{\theta_a} + S_{\theta_k} S_{\theta_a} &-C_{\phi_a - \phi_k} C_{\theta_k} S_{\theta_a} \\ C_{\phi_a - \phi_k} C_{\theta_a} S_{\theta_k} &-S_{\phi_a - \phi_k} S_{\theta_k} S_{\theta_a} \end{bmatrix}\end{aligned} \quad\quad\quad(38)

The right hand side of the Euler-Lagrange equations now becomes

\begin{aligned}\nabla_{\boldsymbol{\Theta}} \mathcal{L} =\sum_k{\begin{bmatrix}\begin{bmatrix}\mu_{\max(k,r)} l_k l_r {\dot{\boldsymbol{\Theta}}_k}^\text{T} \left\langle{{ A_k \frac{\partial {A_r}}{\partial {\theta_r}}^\dagger }}\right\rangle \dot{\boldsymbol{\Theta}}_r \\ \mu_{\max(k,r)} l_k l_r {\dot{\boldsymbol{\Theta}}_k}^\text{T} \left\langle{{ A_k \frac{\partial {A_r}}{\partial {\phi_r}}^\dagger }}\right\rangle \dot{\boldsymbol{\Theta}}_r \end{bmatrix}\end{bmatrix}}_r- g{\begin{bmatrix}\mu_r l_r \sin\theta_r \begin{bmatrix}1 \\ 0\end{bmatrix}\end{bmatrix}}_r\end{aligned} \quad\quad\quad(39)

Can the $\dot{\boldsymbol{\Theta}}_a$ matrices be factored out, perhaps allowing for expression as a function of $\dot{\boldsymbol{\Theta}}$? How to do that, if it is possible, is not obvious. The driving reason to do so would be to put things into a tidy form where everything is a function of the system angular velocity vector $\dot{\boldsymbol{\Theta}}$, but this is not possible anyway since the gradient is non-linear.

# Hamiltonian form and linearization.

Having calculated the Hamiltonian equations for the multiple mass planar pendulum in [2], doing so for the spherical pendulum can now be done by inspection. With the introduction of a phase space vector for the system using the conjugate momenta (for angles where these conjugate momenta are non-singular)

\begin{aligned}\mathbf{z} = \begin{bmatrix}P_{\boldsymbol{\Theta}} \\ \boldsymbol{\Theta}\end{bmatrix}\end{aligned} \quad\quad\quad(40)

we can write the Hamiltonian equations

\begin{aligned}\frac{d\mathbf{z}}{dt} = \begin{bmatrix}\nabla_{\boldsymbol{\Theta}} \mathcal{L} \\ \left\langle{{Q}}\right\rangle^{-1} P_{\boldsymbol{\Theta}}\end{bmatrix}\end{aligned} \quad\quad\quad(41)

The position gradient is given explicitly in 39, and that can be substituted here. That gradient is expressed in terms of $\dot{\boldsymbol{\Theta}}_k$ and not the conjugate momenta, but the mapping required to express the whole system in terms of the conjugate momenta is simple enough

\begin{aligned}\dot{\boldsymbol{\Theta}}_k = {\begin{bmatrix}\delta_{kc} I_{22}\end{bmatrix}}_c \left\langle{{Q}}\right\rangle^{-1} P_{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(42)

It is apparent that for any sort of numerical treatment, use of an angular momentum and angular position phase space vector is not prudent. If the aim is nothing more than working with a first order system instead of second order, then we are probably better off with an angular velocity plus angular position phase space system.

\begin{aligned}\frac{d}{dt}\begin{bmatrix}\left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}} \\ \boldsymbol{\Theta}\end{bmatrix}=\begin{bmatrix}\nabla_{\boldsymbol{\Theta}} \mathcal{L} \\ \dot{\boldsymbol{\Theta}}\end{bmatrix}\end{aligned} \quad\quad\quad(43)

This eliminates the requirement to invert the sometimes singular matrix $\left\langle{{Q}}\right\rangle$, but we are still left with something that is perhaps tricky to work with, since we have the possibility of zeros on the left hand side. The resulting equation is of the form

\begin{aligned}M \mathbf{x}' = f(\mathbf{x})\end{aligned} \quad\quad\quad(44)

where $M = \left[\begin{smallmatrix}\left\langle{{Q}}\right\rangle & 0 \\ 0 & I\end{smallmatrix}\right]$ is a possibly singular matrix, and $f$ is a non-linear function of the components of $\boldsymbol{\Theta}$ and $\dot{\boldsymbol{\Theta}}$. This is conceivably linearizable in the neighbourhood of a particular phase space point $\mathbf{x}_0$, resulting in an equation of the form

\begin{aligned}M \mathbf{y}' = f(\mathbf{x}_0) + B \mathbf{y} \end{aligned} \quad\quad\quad(45)

where $\mathbf{x} = \mathbf{y} + \mathbf{x}_0$ and $B$ is an appropriate matrix of partials (the specifics of which don’t really have to be spelled out here). Because of the possible singularities of $M$, the exponentiation techniques applied to the linearized planar pendulum may not be possible with such a linearization. Study of this less well formed system of LDEs probably has interesting aspects, but is also likely best tackled independently of the specifics of the spherical pendulum problem.
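
As an illustration only (nothing specific to the pendulum), a system of the form 44 with singular $M$ can still be stepped numerically if the solve for $\mathbf{x}'$ is done in a least-squares sense, which picks the minimum-norm rate along the singular directions. A toy sketch with a made-up $M$ and $f$:

```python
import numpy as np

# Minimal sketch (not from the post): advance M x' = f(x) by explicit
# Euler, using a minimum-norm least-squares solve so that a singular
# mass matrix M does not stop the step. Toy M and f for illustration.
def euler_step(M, f, x, dt):
    xdot, *_ = np.linalg.lstsq(M, f(x), rcond=None)
    return x + dt * xdot

M = np.diag([1.0, 1.0, 0.0])                 # deliberately singular
f = lambda x: np.array([x[1], -x[0], 0.0])   # harmonic oscillator + null row
x = np.array([1.0, 0.0, 0.0])
for _ in range(1000):                        # integrate to t = 1
    x = euler_step(M, f, x, 0.001)
```

Whether the minimum-norm choice is physically sensible depends on the problem; here it simply freezes the null direction while the oscillator components track $\cos t$ and $-\sin t$.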

## Thoughts about the Hamiltonian singularity.

The fact that the Hamiltonian goes singular on the horizontal in this spherical polar representation is actually what I think is the most interesting bit in the problem (the rest being a lot of mechanical details). On the horizontal, $\phi=0$ or $\dot{\phi} = 37000$ radians/sec makes no difference to the dynamics; all you can say is that the horizontal plane angular momentum is a constant of the system. It seems very much like the increasing uncertainty that you get in the corresponding radial QM equation. Once you start pinning down the $\theta$ angle, you lose the ability to say much about $\phi$.

It is also kind of curious how the energy of the system is never ill-defined, but a choice of a particular orientation to use as a reference for observations of the momenta introduces the singularity as the system approaches the horizontal in that reference frame.

Perhaps there are some deeper connections underlying this classical and QM similarity. Would learning about symplectic flows and phase space volume invariance shed some light on this?

# A summary.

A fair amount of notation was introduced in the process of formulating the spherical pendulum equations. Before moving on, it is worthwhile to give a final concise summary of that notation and the results, for future reference.

The positions of the masses are given by

\begin{aligned}z_k &= z_{k-1} + \mathbf{e}_3 l_k e^{j_k \theta_k} \\ j_k &= \mathbf{e}_3 \wedge \left( \mathbf{e}_1 e^{i \phi_k} \right) \\ i &= \mathbf{e}_1 \wedge \mathbf{e}_2\end{aligned} \quad\quad\quad(46)
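
Unrolled into coordinates, 46 is the usual spherical coordinate recurrence, since $\mathbf{e}_3 e^{j_k \theta_k} = \mathbf{e}_3 \cos\theta_k + \mathbf{e}_1 e^{i \phi_k} \sin\theta_k$. A small numerical sketch of the position chain (the helper name is mine, not the post's):

```python
import numpy as np

# Unrolling eq. (46) into coordinates: each link contributes
# l_k (sin th_k cos ph_k, sin th_k sin ph_k, cos th_k), with th_k
# measured from the e_3 axis. Illustration only, not the GA code itself.
def mass_positions(lengths, thetas, phis):
    """Cumulative positions z_1 .. z_N of the pendulum masses."""
    z = np.zeros(3)
    out = []
    for l, th, ph in zip(lengths, thetas, phis):
        z = z + l * np.array([np.sin(th) * np.cos(ph),
                              np.sin(th) * np.sin(ph),
                              np.cos(th)])
        out.append(z.copy())
    return out

# A single link of length 2 with theta = 0 lies along e_3.
p = mass_positions([2.0], [0.0], [0.0])
```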

With the introduction of a column vector of vectors (where we multiply matrices using the Geometric vector product),

\begin{aligned}\boldsymbol{\Theta}_k &=\begin{bmatrix}\theta_k \\ \phi_k\end{bmatrix}\end{aligned} \quad\quad\quad(49)

\begin{aligned}\boldsymbol{\Theta} &={\begin{bmatrix}\boldsymbol{\Theta}_1 &\boldsymbol{\Theta}_2 &\hdots &\boldsymbol{\Theta}_N \end{bmatrix}}^\text{T}\end{aligned} \quad\quad\quad(50)

and a matrix of velocity components (with matrix multiplication of the vector elements using the Geometric vector product), we can form the Lagrangian

\begin{aligned}A_k &=\begin{bmatrix}\mathbf{e}_1 e^{i \phi_k} e^{j_k \theta_k} \\ \mathbf{e}_2 e^{i \phi_k} S_{\theta_k}\end{bmatrix} \end{aligned} \quad\quad\quad(51)

\begin{aligned}\mu_k &= \sum_{j=k}^N m_j \\ \left\langle{{Q}}\right\rangle &= {\begin{bmatrix}\mu_{\max(r,c)} l_r l_c \left\langle{{A_r A_c^\text{T}}}\right\rangle\end{bmatrix}}_{rc} \\ K &= \frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\text{T} \left\langle{{Q}}\right\rangle\dot{\boldsymbol{\Theta}} \\ \Phi &=g \sum_{k=1}^N \mu_k l_k C_{\theta_k} \\ \mathcal{L} &= K - \Phi\end{aligned} \quad\quad\quad(52)
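
Numerically, the stacked mass sums $\mu_k = \sum_{j=k}^N m_j$ are just a reversed cumulative sum, and the potential $\Phi$ then follows in one line. A sketch with made-up masses and lengths:

```python
import numpy as np

# Stacked mass sums mu_k = m_k + m_{k+1} + ... + m_N from eq. (52),
# computed as a reversed cumulative sum (masses are made up).
m = np.array([3.0, 2.0, 1.0])       # m_1 .. m_3
mu = np.cumsum(m[::-1])[::-1]       # [6.0, 3.0, 1.0]

# Potential Phi = g sum_k mu_k l_k cos(theta_k); all links vertical here.
g = 9.8
l = np.array([1.0, 1.0, 1.0])
theta = np.zeros(3)
Phi = g * np.sum(mu * l * np.cos(theta))
```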

Explicit scalar evaluation of the (symmetric) block matrix components of $\left\langle{{Q}}\right\rangle$ gives

\begin{aligned}\left\langle{{A_r A_c^\text{T}}}\right\rangle=\begin{bmatrix}C_{\phi_c - \phi_r} C_{\theta_r}C_{\theta_c}+S_{\theta_r}S_{\theta_c} &-S_{\phi_c - \phi_r} C_{\theta_r} S_{\theta_c} \\ S_{\phi_c - \phi_r} C_{\theta_c} S_{\theta_r} &C_{\phi_c - \phi_r} S_{\theta_r} S_{\theta_c}\end{bmatrix}\end{aligned} \quad\quad\quad(57)
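
It is worth a quick numerical spot check that these blocks satisfy $\left\langle{{A_r A_c^\text{T}}}\right\rangle^\text{T} = \left\langle{{A_c A_r^\text{T}}}\right\rangle$, the symmetry that makes $\left\langle{{Q}}\right\rangle$ symmetric. The `block` function below just transcribes 57:

```python
import numpy as np

# Spot check of eq. (57): swapping the (r, c) indices and transposing
# should give back the same 2x2 block, i.e. <A_r A_c^T>^T = <A_c A_r^T>.
def block(theta_r, phi_r, theta_c, phi_c):
    C, S = np.cos, np.sin
    dp = phi_c - phi_r
    return np.array([
        [C(dp) * C(theta_r) * C(theta_c) + S(theta_r) * S(theta_c),
         -S(dp) * C(theta_r) * S(theta_c)],
        [S(dp) * C(theta_c) * S(theta_r),
         C(dp) * S(theta_r) * S(theta_c)],
    ])

tr, pr, tc, pc = 0.3, 1.1, 0.7, -0.4    # arbitrary test angles
ok = np.allclose(block(tr, pr, tc, pc).T, block(tc, pc, tr, pr))
```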

These can be used if explicit evaluation of the kinetic energy is desired, avoiding redundant summation over the pairs of skew entries in the quadratic form matrix $\left\langle{{Q}}\right\rangle$

\begin{aligned}K = \frac{1}{{2}} \sum_k \mu_k l_k^2 {\dot{\boldsymbol{\Theta}}_k}^\text{T} \left\langle{{A_k A_k^\text{T}}}\right\rangle \dot{\boldsymbol{\Theta}}_k+ \sum_{r < c} \mu_{\max(r,c)} l_r l_c {\dot{\boldsymbol{\Theta}}_r}^\text{T} \left\langle{{A_r A_c^\text{T}}}\right\rangle \dot{\boldsymbol{\Theta}}_c\end{aligned} \quad\quad\quad(58)
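
This collapse of the full quadratic form $K = \frac{1}{{2}} \dot{\boldsymbol{\Theta}}^\text{T} \left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}$ to diagonal blocks plus single copies of the $r < c$ pairs holds for any block matrix with the $\left\langle{{A_c A_r^\text{T}}}\right\rangle = \left\langle{{A_r A_c^\text{T}}}\right\rangle^\text{T}$ symmetry, and can be checked with random blocks standing in for the $\mu_{\max(r,c)} l_r l_c \left\langle{{A_r A_c^\text{T}}}\right\rangle$ factors:

```python
import numpy as np

# Check the skew-pair reduction of (1/2) v^T Q v for a symmetric block
# matrix: diagonal blocks once, each off-diagonal pair counted once
# (random 2x2 blocks stand in for the mu l l <A_r A_c^T> factors).
rng = np.random.default_rng(0)
N = 3
blocks = [[None] * N for _ in range(N)]
for r in range(N):
    for c in range(r, N):
        B = rng.standard_normal((2, 2))
        if r == c:
            B = B + B.T                 # diagonal blocks must be symmetric
        blocks[r][c] = B
        blocks[c][r] = B.T              # <A_c A_r^T> = <A_r A_c^T>^T
Q = np.block(blocks)
v = rng.standard_normal(2 * N)

full = 0.5 * v @ Q @ v
pairs = sum(0.5 * v[2*k:2*k+2] @ blocks[k][k] @ v[2*k:2*k+2]
            for k in range(N)) \
      + sum(v[2*r:2*r+2] @ blocks[r][c] @ v[2*c:2*c+2]
            for r in range(N) for c in range(r + 1, N))
ok = np.isclose(full, pairs)
```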

We utilize angular position and velocity gradients

\begin{aligned}\nabla_{\boldsymbol{\Theta}_k} &= \begin{bmatrix}\frac{\partial}{\partial \theta_k} \\ \frac{\partial}{\partial \phi_k} \end{bmatrix} \\ \nabla_{\dot{\boldsymbol{\Theta}}_k} &= \begin{bmatrix}\frac{\partial }{\partial \dot{\theta}_k} \\ \frac{\partial }{\partial \dot{\phi}_k} \end{bmatrix} \\ \nabla_{\boldsymbol{\Theta}} &= {\begin{bmatrix}{\nabla_{\boldsymbol{\Theta}_1}}^\text{T} & {\nabla_{\boldsymbol{\Theta}_2}}^\text{T} & \hdots & {\nabla_{\boldsymbol{\Theta}_N}}^\text{T} \end{bmatrix}}^\text{T} \\ \nabla_{\boldsymbol{\dot{\Theta}}} &= {\begin{bmatrix}{\nabla_{\boldsymbol{\dot{\Theta}}_1}}^\text{T} & {\nabla_{\boldsymbol{\dot{\Theta}}_2}}^\text{T} & \hdots & {\nabla_{\boldsymbol{\dot{\Theta}}_N}}^\text{T} \end{bmatrix}}^\text{T} \end{aligned} \quad\quad\quad(59)

and use these to form the Euler-Lagrange equations for the system in column vector form

\begin{aligned}\frac{d}{dt} \nabla_{\dot{\boldsymbol{\Theta}}} \mathcal{L} = \nabla_{\boldsymbol{\Theta}} \mathcal{L}\end{aligned} \quad\quad\quad(63)

For the canonical momenta we found the simple result

\begin{aligned}\nabla_{\dot{\boldsymbol{\Theta}}} \mathcal{L} = \left\langle{{Q}}\right\rangle \dot{\boldsymbol{\Theta}}\end{aligned} \quad\quad\quad(64)

For the position gradient portion of the Euler-Lagrange equations 63 we found in block matrix form

\begin{aligned}\nabla_{\boldsymbol{\Theta}} \mathcal{L} =\sum_k{\begin{bmatrix}\begin{bmatrix}\mu_{\max(k,r)} l_k l_r {\dot{\boldsymbol{\Theta}}_k}^\text{T} \left\langle{{ A_k \frac{\partial {A_r}}{\partial {\theta_r}}^\dagger }}\right\rangle \dot{\boldsymbol{\Theta}}_r \\ \mu_{\max(k,r)} l_k l_r {\dot{\boldsymbol{\Theta}}_k}^\text{T} \left\langle{{ A_k \frac{\partial {A_r}}{\partial {\phi_r}}^\dagger }}\right\rangle \dot{\boldsymbol{\Theta}}_r \end{bmatrix}\end{bmatrix}}_r- g{\begin{bmatrix}\mu_r l_r S_{\theta_r}\begin{bmatrix}1 \\ 0\end{bmatrix}\end{bmatrix}}_r\end{aligned} \quad\quad\quad(65)

\begin{aligned}\left\langle{{ A_k \frac{\partial {A_a}}{\partial {\theta_a}}^\dagger }}\right\rangle&=\begin{bmatrix}-C_{\phi_a - \phi_k} C_{\theta_k} S_{\theta_a} + S_{\theta_k} C_{\theta_a} &-S_{\phi_a - \phi_k} C_{\theta_k} C_{\theta_a} \\ -S_{\phi_a - \phi_k} S_{\theta_a} S_{\theta_k} &C_{\phi_a - \phi_k} (1 + \delta_{k a}) S_{\theta_k} C_{\theta_a} \end{bmatrix}\end{aligned} \quad\quad\quad(66)

\begin{aligned}\left\langle{{ A_k \frac{\partial {A_a}}{\partial {\phi_a}}^\dagger }}\right\rangle&=\begin{bmatrix}-S_{\phi_a - \phi_k} C_{\theta_k} C_{\theta_a} + S_{\theta_k} S_{\theta_a} &-C_{\phi_a - \phi_k} C_{\theta_k} S_{\theta_a} \\ C_{\phi_a - \phi_k} C_{\theta_a} S_{\theta_k} &-S_{\phi_a - \phi_k} S_{\theta_k} S_{\theta_a} \end{bmatrix}\end{aligned} \quad\quad\quad(67)

A set of Hamiltonian equations for the system could also be formed. However, this requires that one somehow restrict attention to the subset of phase space where the canonical momenta matrix $\left\langle{{Q}}\right\rangle$ is non-singular, something not generally possible.

# References

[1] Peeter Joot. Spherical polar pendulum for one and multiple masses, and multivector Euler-Lagrange formulation. [online]. http://sites.google.com/site/peeterjoot/math2009/sPolarMultiPendulum.pdf.

[2] Peeter Joot. Hamiltonian notes. [online]. http://sites.google.com/site/peeterjoot/math2009/hamiltonian.pdf.