Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Polarization angles for normal transmission and reflection

Posted by peeterjoot on January 22, 2014

[Click here for a PDF of this post with nicer formatting]

Question: Polarization angles for normal transmission and reflection ([1] pr 9.14)

For normal incidence, without assuming that the reflected and transmitted waves have the same polarization as the incident wave, prove that this must be so.

Answer

Working with coordinates as illustrated in fig. 1.1, the incident wave can be assumed to have the form

fig 1.1: Normal incidence coordinates

 

\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{I}} = E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{x}}\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{I}} = \frac{1}{{v}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{I}} = \frac{1}{{v}} E_{\mathrm{I}} e^{i (k z - \omega t)} \hat{\mathbf{y}}.\end{aligned} \hspace{\stretch{1}}(1.0.1b)

Assuming a polarization \hat{\mathbf{n}} = \cos\theta \hat{\mathbf{x}} + \sin\theta \hat{\mathbf{y}} for the reflected wave, we have

\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{R}} = E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \cos\theta + \hat{\mathbf{y}} \sin\theta)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{R}} = \frac{1}{{v}} (-\hat{\mathbf{z}}) \times \tilde{\mathbf{E}}_{\mathrm{R}} = \frac{1}{{v}} E_{\mathrm{R}} e^{i (-k z - \omega t)} (\hat{\mathbf{x}} \sin\theta - \hat{\mathbf{y}} \cos\theta).\end{aligned} \hspace{\stretch{1}}(1.0.2b)

And finally assuming a polarization \hat{\mathbf{n}} = \cos\phi \hat{\mathbf{x}} + \sin\phi \hat{\mathbf{y}} for the transmitted wave, we have

\begin{aligned}\tilde{\mathbf{E}}_{\mathrm{T}} = E_{\mathrm{T}} e^{i (k' z - \omega t)} (\hat{\mathbf{x}} \cos\phi + \hat{\mathbf{y}} \sin\phi)\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\tilde{\mathbf{B}}_{\mathrm{T}} = \frac{1}{{v'}} \hat{\mathbf{z}} \times \tilde{\mathbf{E}}_{\mathrm{T}} = \frac{1}{{v'}} E_{\mathrm{T}} e^{i (k' z - \omega t)} (-\hat{\mathbf{x}} \sin\phi + \hat{\mathbf{y}} \cos\phi).\end{aligned} \hspace{\stretch{1}}(1.0.3b)

With no components of any of the \tilde{\mathbf{E}} or \tilde{\mathbf{B}} waves in the \hat{\mathbf{z}} direction, the boundary value conditions at z = 0 require the equality of the \hat{\mathbf{x}} and \hat{\mathbf{y}} components of

\begin{aligned}\left( \tilde{\mathbf{E}}_{\mathrm{I}} + \tilde{\mathbf{E}}_{\mathrm{R}} \right)_{x,y} = \left( \tilde{\mathbf{E}}_{\mathrm{T}} \right)_{x,y}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned} \left( \frac{1}{\mu} \left( \tilde{\mathbf{B}}_{\mathrm{I}} + \tilde{\mathbf{B}}_{\mathrm{R}} \right) \right)_{x,y} = \left( \frac{1}{\mu'} \tilde{\mathbf{B}}_{\mathrm{T}} \right)_{x,y}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

With \beta = \mu v/\mu' v', those components are

\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} \cos\theta = E_{\mathrm{T}} \cos\phi \end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}E_{\mathrm{R}} \sin\theta = E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}E_{\mathrm{R}} \sin\theta = - \beta E_{\mathrm{T}} \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.5c)

\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} \cos\theta = \beta E_{\mathrm{T}} \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.5d)

Equality of eq. 1.0.5b and eq. 1.0.5c requires

\begin{aligned}- \beta E_{\mathrm{T}} \sin\phi = E_{\mathrm{T}} \sin\phi,\end{aligned} \hspace{\stretch{1}}(1.0.6)

Since \beta > 0, this requires \sin\phi = 0, and eq. 1.0.5b then forces \sin\theta = 0 as well, so (\theta, \phi) \in \{(0, 0), (0, \pi), (\pi, 0), (\pi, \pi)\}. It turns out that all of these solutions correspond to the same physical waves. Let’s look at each in turn.

  • (\theta, \phi) = (0, 0). The system eq. 1.0.5 is reduced to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.8)

  • (\theta, \phi) = (\pi, \pi). The system eq. 1.0.5 is reduced to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.9)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.10)

    Effectively the sign for the magnitude of the transmitted and reflected phasors is toggled, but the polarization vectors are also negated, with \hat{\mathbf{n}} = -\hat{\mathbf{x}}, and \hat{\mathbf{n}}' = -\hat{\mathbf{x}}. The resulting \tilde{\mathbf{E}}_{\mathrm{R}} and \tilde{\mathbf{E}}_{\mathrm{T}} are unchanged relative to those of the (0,0) solution above.

  • (\theta, \phi) = (0, \pi). The system eq. 1.0.5 is reduced to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} + E_{\mathrm{R}} &= -E_{\mathrm{T}} \\ E_{\mathrm{I}} - E_{\mathrm{R}} &= -\beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.11)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= -\frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= \frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.12)

    Effectively the sign for the magnitude of the transmitted phasor is toggled. The polarization vectors in this case are \hat{\mathbf{n}} = \hat{\mathbf{x}}, and \hat{\mathbf{n}}' = -\hat{\mathbf{x}}, so the transmitted phasor magnitude change of sign does not change \tilde{\mathbf{E}}_{\mathrm{T}} relative to that of the (0,0) solution above.

  • (\theta, \phi) = (\pi, 0). The system eq. 1.0.5 is reduced to

    \begin{aligned}\begin{aligned}E_{\mathrm{I}} - E_{\mathrm{R}} &= E_{\mathrm{T}} \\ E_{\mathrm{I}} + E_{\mathrm{R}} &= \beta E_{\mathrm{T}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.13)

    with solution

    \begin{aligned}\begin{aligned}\frac{E_{\mathrm{T}}}{E_{\mathrm{I}}} &= \frac{2}{1 + \beta} \\ \frac{E_{\mathrm{R}}}{E_{\mathrm{I}}} &= -\frac{1 - \beta}{1 + \beta}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.14)

    This time, the sign for the magnitude of the reflected phasor is toggled. The polarization vectors in this case are \hat{\mathbf{n}} = -\hat{\mathbf{x}}, and \hat{\mathbf{n}}' = \hat{\mathbf{x}}. In this final variation the reflected phasor magnitude change of sign does not change \tilde{\mathbf{E}}_{\mathrm{R}} relative to that of the (0,0) solution.

We see that there is only one solution for the polarization angle of the transmitted and reflected waves relative to the incident wave. Although we fixed the incident polarization with \mathbf{E} along \hat{\mathbf{x}}, the polarization of the incident wave is maintained regardless of TE or TM labeling in this example, since our system is symmetric with respect to rotation.
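
As a numerical sanity check of this solution, we can verify that the amplitude ratios of eq. 1.0.8 satisfy the boundary conditions eq. 1.0.7 (a sketch; the value of \beta = \mu v/\mu' v' used here is an arbitrary choice):

```python
# Sanity check of the (theta, phi) = (0, 0) solution against the z = 0
# boundary conditions.  beta = mu v / (mu' v') is chosen arbitrarily.
beta = 0.75

E_I = 1.0
E_T = 2.0 / (1.0 + beta) * E_I            # transmitted amplitude ratio
E_R = (1.0 - beta) / (1.0 + beta) * E_I   # reflected amplitude ratio

assert abs((E_I + E_R) - E_T) < 1e-12         # tangential E continuity
assert abs((E_I - E_R) - beta * E_T) < 1e-12  # tangential B/mu continuity
```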

References

[1] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Tutorial 4 (TA: Simon Freedman). Waveguides: confined EM waves.

Posted by peeterjoot on March 14, 2011

Motivation

While this isn’t part of the course, the topic of waveguides has so many applications that it is worth a mention, and that is what will be done in this tutorial.

We will set up our system with a waveguide (a conducting surface that confines the radiation) oriented in the \hat{\mathbf{z}} direction. The cross sectional shape can be arbitrary

PICTURE: cross section of wacky shape.

At the surface of a conductor.

At the surface of the conductor (I presume this means the interior surface where there is no charge or current enclosed) we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} &= - \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

If we are talking about the exterior surface, do we need to make any other assumptions (perfect conductors, or constant potentials)?

Wave equations.

For electric and magnetic fields in vacuum, we can easily show that these, like the potentials, separately satisfy the wave equation.

Taking curls of the Maxwell curl equations above we have

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{E}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{E}}}{\partial {{t}}^2} \\ \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{B}) &= - \frac{1}{{c^2}} \frac{\partial^2 {\mathbf{B}}}{\partial {{t}}^2},\end{aligned} \hspace{\stretch{1}}(1.5)

but we have for vector \mathbf{M}

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{M})=\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{M}) - \Delta \mathbf{M},\end{aligned} \hspace{\stretch{1}}(1.7)

which gives us a pair of wave equations

\begin{aligned}\square \mathbf{E} &= 0 \\ \square \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(1.8)

We still have the original constraints of Maxwell’s equations to deal with, but we are free now to pick the complex exponentials as fundamental solutions, as our starting point

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{i k^a x_a} = \mathbf{E}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{i k^a x_a} = \mathbf{B}_0 e^{ i (k^0 x_0 - \mathbf{k} \cdot \mathbf{x}) },\end{aligned} \hspace{\stretch{1}}(1.10)

With k_0 = \omega/c and x_0 = c t this is

\begin{aligned}\mathbf{E} &= \mathbf{E}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) } \\ \mathbf{B} &= \mathbf{B}_0 e^{ i (\omega t - \mathbf{k} \cdot \mathbf{x}) }.\end{aligned} \hspace{\stretch{1}}(1.12)

For the vacuum case, with monochromatic light, we treated the amplitudes as constants. Let’s see what happens if we relax this assumption, and allow for spatial dependence (but no time dependence) of \mathbf{E}_0 and \mathbf{B}_0. For the LHS of the electric field curl equation we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} &= \boldsymbol{\nabla} \times \left( \mathbf{E}_0 e^{i k_a x^a} \right) \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 - \mathbf{E}_0 \times \mathbf{e}^\alpha i k_\alpha) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + \mathbf{E}_0 \times \mathbf{e}^\alpha i k^\alpha ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \times \mathbf{E}_0 + i \mathbf{E}_0 \times \mathbf{k} ) e^{i k_a x^a}.\end{aligned}

Similarly for the divergence we have

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \left( \mathbf{E}_0 e^{i k_a x^a} \right) \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \boldsymbol{\nabla}) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 + \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k_\alpha) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - \mathbf{E}_0 \cdot \mathbf{e}^\alpha i k^\alpha ) e^{i k_a x^a} \\ &= (\boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 ) e^{i k_a x^a}.\end{aligned}

This provides constraints on the amplitudes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 - i \mathbf{k} \times \mathbf{E}_0 &= -i \frac{\omega}{c} \mathbf{B}_0 \\ \boldsymbol{\nabla} \times \mathbf{B}_0 - i \mathbf{k} \times \mathbf{B}_0 &= i \frac{\omega}{c} \mathbf{E}_0 \\ \boldsymbol{\nabla} \cdot \mathbf{E}_0 - i \mathbf{k} \cdot \mathbf{E}_0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B}_0 - i \mathbf{k} \cdot \mathbf{B}_0 &= 0\end{aligned} \hspace{\stretch{1}}(1.14)
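
For the special case of constant amplitudes the gradient terms vanish, and these constraints reduce to the familiar vacuum plane wave conditions \mathbf{k} \times \mathbf{E}_0 = (\omega/c) \mathbf{B}_0, \mathbf{k} \times \mathbf{B}_0 = -(\omega/c) \mathbf{E}_0, \mathbf{k} \cdot \mathbf{E}_0 = \mathbf{k} \cdot \mathbf{B}_0 = 0. These are easy to verify numerically (a sketch; the propagation direction, polarization, and units with c = 1 are arbitrary choices):

```python
import numpy as np

c = 1.0
k = np.array([0.0, 0.0, 2.0])        # propagation along z (arbitrary choice)
w = c * np.linalg.norm(k)            # vacuum dispersion omega = c |k|
E0 = np.array([1.0, 0.0, 0.0])       # constant transverse amplitude
B0 = (c / w) * np.cross(k, E0)       # B_0 = (c/omega) k x E_0

# With constant amplitudes the gradient terms of the constraints vanish:
assert np.allclose(np.cross(k, E0), (w / c) * B0)   # curl equation for E
assert np.allclose(np.cross(k, B0), -(w / c) * E0)  # curl equation for B
assert abs(k @ E0) < 1e-12 and abs(k @ B0) < 1e-12  # divergence equations
```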

Applying the wave equation operator to our phasor we get

\begin{aligned}0 &=\left(\frac{1}{{c^2}} \partial_{tt} - \boldsymbol{\nabla}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})} \\ &=\left(-\frac{\omega^2}{c^2} - \boldsymbol{\nabla}^2 + \mathbf{k}^2 \right) \mathbf{E}_0 e^{i (\omega t - \mathbf{k} \cdot \mathbf{x})}\end{aligned}

So the momentum space equivalents of the wave equations are

\begin{aligned}\left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \boldsymbol{\nabla}^2 +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(1.18)

Observe that if c^2 \mathbf{k}^2 = \omega^2, then these amplitudes are harmonic functions (solutions to the Laplace equation). However, it doesn’t appear that we require such a lightlike relation for the four vector k^a = (\omega/c, \mathbf{k}).

Back to the tutorial notes.

In class we went straight to an assumed solution of the form

\begin{aligned}\mathbf{E} &= \mathbf{E}_0(x, y) e^{ i(\omega t - k z) } \\ \mathbf{B} &= \mathbf{B}_0(x, y) e^{ i(\omega t - k z) },\end{aligned} \hspace{\stretch{1}}(2.20)

where \mathbf{k} = k \hat{\mathbf{z}}. Our Laplacian was also written as the sum of components in the propagation and perpendicular directions

\begin{aligned}\boldsymbol{\nabla}^2 = \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} + \frac{\partial^2 {{}}}{\partial {{z}}^2}.\end{aligned} \hspace{\stretch{1}}(2.22)

With no z dependence in the amplitudes we have

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{E}_0 &= 0 \\ \left( \frac{\partial^2 {{}}}{\partial {{x_\perp}}^2} +\frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \mathbf{B}_0 &= 0.\end{aligned} \hspace{\stretch{1}}(2.23)

Separation into components.

It was left as an exercise to separate out our Maxwell equations, so that our field components \mathbf{E}_0 = \mathbf{E}_\perp + \mathbf{E}_z and \mathbf{B}_0 = \mathbf{B}_\perp + \mathbf{B}_z in the propagation direction, and components in the perpendicular direction are separated

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E}_0 &=(\boldsymbol{\nabla}_\perp + \hat{\mathbf{z}}\partial_z) \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_0 \\ &=\boldsymbol{\nabla}_\perp \times (\mathbf{E}_\perp + \mathbf{E}_z) \\ &=\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=( \hat{\mathbf{x}} \partial_x +\hat{\mathbf{y}} \partial_y ) \times ( \hat{\mathbf{x}} E_x +\hat{\mathbf{y}} E_y ) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z \\ &=\hat{\mathbf{z}} (\partial_x E_y - \partial_y E_x) +\boldsymbol{\nabla}_\perp \times \mathbf{E}_z.\end{aligned}

We can do something similar for \mathbf{B}_0. This allows for a split of 1.14 into \hat{\mathbf{z}} and perpendicular components

\begin{aligned}\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_z \\ \boldsymbol{\nabla}_\perp \times \mathbf{E}_z - i \mathbf{k} \times \mathbf{E}_\perp &= -i \frac{\omega}{c} \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \times \mathbf{B}_z - i \mathbf{k} \times \mathbf{B}_\perp &= i \frac{\omega}{c} \mathbf{E}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= i k E_z - \partial_z E_z \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i k B_z - \partial_z B_z.\end{aligned} \hspace{\stretch{1}}(3.25)

So we see that once we have a solution for \mathbf{E}_z and \mathbf{B}_z (by solving the wave equation above for those components), the remaining perpendicular field components can be found from them. Alternately, if one solves for the perpendicular components of the fields, the propagation direction components are available immediately with only differentiation.

In the case where the perpendicular components are taken as given

\begin{aligned}\mathbf{B}_z &= i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp \\ \mathbf{E}_z &= -i \frac{ c  }{\omega} \boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.31)

we can express the remaining ones strictly in terms of the perpendicular fields

\begin{aligned}\frac{\omega}{c} \mathbf{B}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) + \mathbf{k} \times \mathbf{E}_\perp \\ \frac{\omega}{c} \mathbf{E}_\perp &= \frac{c}{\omega} \boldsymbol{\nabla}_\perp \times (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) - \mathbf{k} \times \mathbf{B}_\perp \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{E}_\perp &= -i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) \\ \boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp &= i \frac{c}{\omega} (i k - \partial_z) \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp).\end{aligned} \hspace{\stretch{1}}(3.33)

Is it at all helpful to expand the double cross products?

\begin{aligned}\frac{\omega^2}{c^2} \mathbf{B}_\perp &= \boldsymbol{\nabla}_\perp (\boldsymbol{\nabla}_\perp \cdot \mathbf{B}_\perp) -{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ &= i \frac{c}{\omega}(i k - \partial_z)\boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp)-{\boldsymbol{\nabla}_\perp}^2 \mathbf{B}_\perp + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \end{aligned}

This gives us

\begin{aligned}\left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{B}_\perp &= - \frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{E}_\perp) + \frac{\omega}{c} \mathbf{k} \times \mathbf{E}_\perp \\ \left( {\boldsymbol{\nabla}_\perp}^2 + \frac{\omega^2}{c^2} \right) \mathbf{E}_\perp &= -\frac{c}{\omega} (k + i\partial_z) \boldsymbol{\nabla}_\perp \hat{\mathbf{z}} \cdot (\boldsymbol{\nabla}_\perp \times \mathbf{B}_\perp) - \frac{\omega}{c} \mathbf{k} \times \mathbf{B}_\perp,\end{aligned} \hspace{\stretch{1}}(3.37)

but that doesn’t seem particularly useful for completely solving the system? It appears fairly messy to try to solve for \mathbf{E}_\perp and \mathbf{B}_\perp given the propagation direction fields. I wonder if there is a simplification available that I am missing?

Solving the momentum space wave equations.

Back to the class notes. We proceeded to solve for \mathbf{E}_z and \mathbf{B}_z from the wave equations by separation of variables. We wish to solve equations of the form

\begin{aligned}\left( \frac{\partial^2 {{}}}{\partial {{x}}^2} + \frac{\partial^2 {{}}}{\partial {{y}}^2} + \frac{\omega^2}{c^2} - \mathbf{k}^2 \right) \phi(x,y) = 0\end{aligned} \hspace{\stretch{1}}(4.39)

Write \phi(x,y) = X(x) Y(y), so that we have

\begin{aligned}\frac{X''}{X} + \frac{Y''}{Y} = \mathbf{k}^2 - \frac{\omega^2}{c^2}\end{aligned} \hspace{\stretch{1}}(4.40)

One solution is sinusoidal

\begin{aligned}\frac{X''}{X} &= -k_1^2 \\ \frac{Y''}{Y} &= -k_2^2 \\ -k_1^2 - k_2^2&= \mathbf{k}^2 - \frac{\omega^2}{c^2}.\end{aligned} \hspace{\stretch{1}}(4.41)

The example in the tutorial now switched to a rectangular waveguide, still oriented with the propagation direction down the z-axis, but with lengths a and b along the x and y axes respectively.

Writing k_1 = 2\pi m/a, and k_2 = 2\pi n/b, we have

\begin{aligned}\phi(x, y) = \sum_{m n} a_{m n} \exp\left( \frac{2 \pi i m}{a} x \right)\exp\left( \frac{2 \pi i n}{b} y \right)\end{aligned} \hspace{\stretch{1}}(4.44)
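
As a spot check (a sketch: a single (m, n) term of this sum, with hypothetical values for a, b, \omega, and the evaluation point), a finite difference evaluation confirms that each term satisfies eq. 4.39 when -k_1^2 - k_2^2 = \mathbf{k}^2 - \omega^2/c^2:

```python
import cmath
import math

# One (m, n) term of the separated solution; a, b chosen arbitrarily.
a, b = 2.0, 1.0
m, n = 3, 2
k1 = 2 * math.pi * m / a
k2 = 2 * math.pi * n / b

def phi(x, y):
    return cmath.exp(1j * k1 * x) * cmath.exp(1j * k2 * y)

# Central difference second derivatives at an interior point.
h = 1e-4
x0, y0 = 0.3, 0.4
d2x = (phi(x0 + h, y0) - 2 * phi(x0, y0) + phi(x0 - h, y0)) / h**2
d2y = (phi(x0, y0 + h) - 2 * phi(x0, y0) + phi(x0, y0 - h)) / h**2

# Helmholtz residual, with k^2 fixed by k^2 = omega^2/c^2 - k1^2 - k2^2.
c, w = 1.0, 50.0  # hypothetical numbers; only the combination matters
ksq = w**2 / c**2 - k1**2 - k2**2
residual = d2x + d2y + (w**2 / c**2 - ksq) * phi(x0, y0)
assert abs(residual) < 1e-3  # vanishes up to finite difference error
```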

We were also provided with some definitions

\begin{definition}TE (Transverse Electric)

\mathbf{E}_3 = 0.
\end{definition}
\begin{definition}
TM (Transverse Magnetic)

\mathbf{B}_3 = 0.
\end{definition}
\begin{definition}
TEM (Transverse Electromagnetic)

\mathbf{E}_3 = \mathbf{B}_3 = 0.
\end{definition}

\begin{claim}TEM modes do not exist in a hollow waveguide.
\end{claim}

Why: I had in my notes

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = 0 & \implies \frac{\partial {E_2}}{\partial {x^1}} -\frac{\partial {E_1}}{\partial {x^2}} = 0 \\ \boldsymbol{\nabla} \cdot \mathbf{E} = 0 & \implies \frac{\partial {E_1}}{\partial {x^1}} +\frac{\partial {E_2}}{\partial {x^2}} = 0\end{aligned}

and then

\begin{aligned}\boldsymbol{\nabla}^2 \phi &= 0 \\ \phi &= \text{const}\end{aligned}

In retrospect I fail to see how these are connected? What happened to the \partial_t \mathbf{B} term in the curl equation above?

It was argued that we have \mathbf{E}_\parallel = \mathbf{B}_\perp = 0 on the boundary.

So for the TE case, where \mathbf{E}_3 = 0, we have from the separation of variables argument

\begin{aligned}\hat{\mathbf{z}} \cdot \mathbf{B}_0(x, y) =\sum_{m n} a_{m n} \cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right).\end{aligned} \hspace{\stretch{1}}(4.45)

No sines because

\begin{aligned}B_1 \propto \frac{\partial {B_3}}{\partial {x_a}} \rightarrow \cos(k_1 x^1).\end{aligned} \hspace{\stretch{1}}(4.46)

The quantity

\begin{aligned}a_{m n}\cos\left( \frac{2 \pi m}{a} x \right)\cos\left( \frac{2 \pi n}{b} y \right)\end{aligned} \hspace{\stretch{1}}(4.47)

is called the TE_{m n} mode. Note that since B = \text{const} an Ampère loop requires \mathbf{B} = 0 since there is no current.

Writing

\begin{aligned}k &= \frac{\omega}{c} \sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 } \\ \omega_{m n} &= 2 \pi c \sqrt{ \left(\frac{m}{a} \right)^2 + \left(\frac{n}{b} \right)^2 }\end{aligned} \hspace{\stretch{1}}(4.48)

For \omega < \omega_{m n} we have k purely imaginary, and the term

\begin{aligned}e^{-i k z} = e^{- {\left\lvert{k}\right\rvert} z}\end{aligned} \hspace{\stretch{1}}(4.50)

represents the die off.

\omega_{10} is the smallest of these cutoff frequencies.

Note that the convention is that the m in TE_{m n} is the bigger of the two indexes, so \omega > \omega_{10}.

The phase velocity

\begin{aligned}V_\phi = \frac{\omega}{k} = \frac{c}{\sqrt{ 1 - \left(\frac{\omega_{m n}}{\omega}\right)^2 }} \ge c\end{aligned} \hspace{\stretch{1}}(4.51)

However, energy is transmitted with the group velocity, the ratio of the Poynting vector and energy density

\begin{aligned}\frac{\left\langle{\mathbf{S}}\right\rangle}{\left\langle{{U}}\right\rangle} = V_g = \frac{\partial {\omega}}{\partial {k}} = 1/\frac{\partial {k}}{\partial {\omega}}\end{aligned} \hspace{\stretch{1}}(4.52)

(This can be shown).

Since

\begin{aligned}\left(\frac{\partial {k}}{\partial {\omega}}\right)^{-1} = \left(\frac{\partial {}}{\partial {\omega}}\sqrt{ (\omega/c)^2 - (\omega_{m n}/c)^2 }\right)^{-1} = c \sqrt{ 1 - (\omega_{m n}/\omega)^2 } \le c\end{aligned} \hspace{\stretch{1}}(4.53)

We see that the energy is transmitted at less than the speed of light as expected.
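
A numerical spot check of these velocity relations (a sketch; the waveguide dimensions, mode indices, and operating frequency are arbitrary choices, in units with c = 1) shows V_\phi \ge c, V_g \le c, and also that the product V_\phi V_g = c^2:

```python
import math

c = 1.0
a, b = 2.0, 1.0
m, n = 1, 0  # the lowest mode

# Cutoff frequency, using the 2 pi convention of these notes.
w_mn = 2 * math.pi * c * math.sqrt((m / a) ** 2 + (n / b) ** 2)
w = 1.5 * w_mn  # any propagating frequency above cutoff

v_phase = c / math.sqrt(1 - (w_mn / w) ** 2)  # omega / k
v_group = c * math.sqrt(1 - (w_mn / w) ** 2)  # d omega / d k

assert v_phase >= c and v_group <= c
assert abs(v_phase * v_group - c**2) < 1e-12  # V_phi V_g = c^2
```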

Final remarks.

I’d started converting my handwritten scrawl for this tutorial into an attempt at working through these ideas with enough detail that they are self contained, but gave up part way. This appears to me to be too big a sub-discipline to do it justice in one hour of class. As is, it is enough to at least get a concept of some of the ideas involved. I think were I to learn this for real, I’d need a good text as a reference (or the time to attempt to blunder through the ideas in much more detail).

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Lecture 11 (Taught by Prof. Erich Poppitz). Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.

Posted by peeterjoot on February 24, 2011

Reading.

Covering chapter 3 material from the text [1].

Covering lecture notes pp. 74-83: Lorentz transformation of the strength tensor (82) [Tuesday, Feb. 8] [extra reading for the mathematically minded: gauge field, strength tensor, and gauge transformations in differential form language, not to be covered in class (83)]

Covering lecture notes pp. 84-102: Lorentz invariants of the electromagnetic field (84-86); Bianchi identity and the first half of Maxwell’s equations (87-90)

Chewing on the four vector form of the Lorentz force equation.

After much effort, we arrived at

\begin{aligned}\frac{d{{(m c u_l) }}}{ds} = \frac{e}{c} \left( \partial_l A_i - \partial_i A_l \right) u^i\end{aligned} \hspace{\stretch{1}}(2.1)

or

\begin{aligned}\frac{d{{ p_l }}}{ds} = \frac{e}{c} F_{l i} u^i\end{aligned} \hspace{\stretch{1}}(2.2)

Elements of the strength tensor

\paragraph{Claim}: there are only 6 independent elements of this matrix (tensor)

\begin{aligned}\begin{bmatrix}0 & . & . & . \\    & 0 & . & . \\    &   & 0 & . \\    &   &   & 0 \\  \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)

This is a no-brainer, for we just have to mechanically plug in the elements of the field strength tensor

Recall

\begin{aligned}A^i &= (\phi, \mathbf{A}) \\ A_i &= (\phi, -\mathbf{A})\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}F_{0\alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0  \\ &= -\partial_0 (\mathbf{A})_\alpha - \partial_\alpha \phi  \\ \end{aligned}

\begin{aligned}F_{0\alpha} = E_\alpha\end{aligned} \hspace{\stretch{1}}(2.6)

For the purely spatial index combinations we have

\begin{aligned}F_{\alpha\beta} &= \partial_\alpha A_\beta - \partial_\beta A_\alpha  \\ &= -\partial_\alpha (\mathbf{A})_\beta + \partial_\beta (\mathbf{A})_\alpha  \\ \end{aligned}

Written out explicitly, these are

\begin{aligned}F_{1 2} &= \partial_2 (\mathbf{A})_1 -\partial_1 (\mathbf{A})_2  \\ F_{2 3} &= \partial_3 (\mathbf{A})_2 -\partial_2 (\mathbf{A})_3  \\ F_{3 1} &= \partial_1 (\mathbf{A})_3 -\partial_3 (\mathbf{A})_1 .\end{aligned} \hspace{\stretch{1}}(2.7)

We can compare this to the elements of \mathbf{B}

\begin{aligned}\mathbf{B} = \begin{vmatrix}\hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \\ \partial_1 & \partial_2 & \partial_3 \\ A_x & A_y & A_z\end{vmatrix}\end{aligned} \hspace{\stretch{1}}(2.10)

We see that

\begin{aligned}(\mathbf{B})_z &= \partial_1 A_y - \partial_2 A_x \\ (\mathbf{B})_x &= \partial_2 A_z - \partial_3 A_y \\ (\mathbf{B})_y &= \partial_3 A_x - \partial_1 A_z\end{aligned} \hspace{\stretch{1}}(2.11)

So we have

\begin{aligned}F_{1 2} &= - (\mathbf{B})_3 \\ F_{2 3} &= - (\mathbf{B})_1 \\ F_{3 1} &= - (\mathbf{B})_2.\end{aligned} \hspace{\stretch{1}}(2.14)

These can be summarized as simply

\begin{aligned}F_{\alpha\beta} = - \epsilon_{\alpha\beta\gamma} B_\gamma.\end{aligned} \hspace{\stretch{1}}(2.17)

This provides all the info needed to fill in the matrix above

\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.18)

Index raising of rank 2 tensor

To raise indexes we compute

\begin{aligned}F^{i j} = g^{i l} g^{j k} F_{l k}.\end{aligned} \hspace{\stretch{1}}(2.19)

Justifying the raising operation.

To justify this consider raising one index at a time by applying the metric tensor to our definition of F_{l k}. That is

\begin{aligned}g^{a l} F_{l k} &=g^{a l} (\partial_l A_k - \partial_k A_l) \\ &=\partial^a A_k - \partial_k A^a.\end{aligned}

Now apply the metric tensor once more

\begin{aligned}g^{b k} g^{a l} F_{l k} &=g^{b k} (\partial^a A_k - \partial_k A^a) \\ &=\partial^a A^b - \partial^b A^a.\end{aligned}

This is, by definition F^{a b}. Since a rank 2 tensor has been defined as an object that transforms like the product of two pairs of coordinates, it makes sense that this particular tensor raises in the same fashion as would a product of two vector coordinates (in this case, it happens to be an antisymmetric product of two vectors, and one of which is an operator, but we have the same idea).

Consider the components of the raised F_{i j} tensor.

\begin{aligned}F^{0\alpha} &= -F_{0\alpha} \\ F^{\alpha\beta} &= F_{\alpha\beta}.\end{aligned} \hspace{\stretch{1}}(2.20)

\begin{aligned}{\left\lVert{ F^{i j} }\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.22)
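
Since \hat{G} is diagonal, the raising operation F^{i j} = g^{i l} g^{j k} F_{l k} is just the matrix sandwich \hat{G} \hat{F} \hat{G}^\text{T}, which makes eq. 2.20 easy to verify numerically (a sketch with arbitrarily chosen values for the field components):

```python
import numpy as np

E = np.array([0.3, -1.2, 0.7])  # arbitrary illustrative field values
B = np.array([0.5, 0.2, -0.9])

g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature (+,-,-,-)

F_lower = np.array([
    [0.0,    E[0],  E[1],  E[2]],
    [-E[0],  0.0,  -B[2],  B[1]],
    [-E[1],  B[2],  0.0,  -B[0]],
    [-E[2], -B[1],  B[0],  0.0],
])

# F^{ij} = g^{il} g^{jk} F_{lk}, as a matrix product
F_upper = g @ F_lower @ g.T

# time-space components flip sign; space-space components are unchanged
assert np.allclose(F_upper[0, 1:], -F_lower[0, 1:])
assert np.allclose(F_upper[1:, 1:], F_lower[1:, 1:])
```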

Back to chewing on the Lorentz force equation.

\begin{aligned}m c \frac{d{{ u_i }}}{ds} = \frac{e}{c} F_{i j} u^j\end{aligned} \hspace{\stretch{1}}(2.23)

\begin{aligned}u^i &= \gamma \left( 1, \frac{\mathbf{v}}{c} \right) \\ u_i &= \gamma \left( 1, -\frac{\mathbf{v}}{c} \right)\end{aligned} \hspace{\stretch{1}}(2.24)

For the spatial components of the Lorentz force equation we have

\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= \frac{e}{c} F_{\alpha j} u^j \\ &= \frac{e}{c} F_{\alpha 0} u^0+ \frac{e}{c} F_{\alpha \beta} u^\beta \\ &= \frac{e}{c} (-E_{\alpha}) \gamma+ \frac{e}{c} (- \epsilon_{\alpha\beta\gamma} B_\gamma ) \frac{v^\beta}{c} \gamma \end{aligned}

But

\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= -m \frac{d{{(\gamma \mathbf{v}_\alpha)}}}{ds} \\ &= -m \frac{d(\gamma \mathbf{v}_\alpha)}{c \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} dt} \\ &= -\gamma \frac{d(m \gamma \mathbf{v}_\alpha)}{c dt}.\end{aligned}

Canceling the common -\gamma/c terms, and switching to vector notation, we are left with

\begin{aligned}\frac{d( m \gamma \mathbf{v}_\alpha)}{dt} = e \left( E_\alpha + \frac{1}{{c}} (\mathbf{v} \times \mathbf{B})_\alpha \right).\end{aligned} \hspace{\stretch{1}}(2.26)

Now for the energy term. We have

\begin{aligned}m c \frac{d{{u_0}}}{ds} &= \frac{e}{c} F_{0\alpha} u^\alpha \\ &= \frac{e}{c} E_{\alpha} \gamma \frac{v^\alpha}{c} \\ \frac{d{{ (m c \gamma) }}}{ds} &= \frac{e}{c^2} \gamma \mathbf{E} \cdot \mathbf{v}.\end{aligned}

Converting the proper time derivative to a time derivative with ds = c \, dt/\gamma, this takes the vector form

\begin{aligned}\frac{d{{ (m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v},\end{aligned} \hspace{\stretch{1}}(2.27)

or

\begin{aligned}\frac{d{{ \mathcal{E} }}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.28)
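
Both eq. 2.26 and eq. 2.28 can be cross-checked against the index form \frac{e}{c} F_{i j} u^j numerically (a sketch with arbitrarily chosen \mathbf{E}, \mathbf{B}, and \mathbf{v}, in units with e = c = 1; the factors of -\gamma/c and \gamma/c^2 account for the lowered index and the conversion from d/ds to d/dt):

```python
import numpy as np

c, e = 1.0, 1.0
E = np.array([0.3, -1.2, 0.7])   # arbitrary illustrative values
B = np.array([0.5, 0.2, -0.9])
v = np.array([0.2, -0.1, 0.4])   # |v| < c

gamma = 1.0 / np.sqrt(1 - (v @ v) / c**2)
u_upper = gamma * np.concatenate(([1.0], v / c))  # u^i = gamma (1, v/c)

F_lower = np.array([
    [0.0,    E[0],  E[1],  E[2]],
    [-E[0],  0.0,  -B[2],  B[1]],
    [-E[1],  B[2],  0.0,  -B[0]],
    [-E[2], -B[1],  B[0],  0.0],
])

rhs = (e / c) * F_lower @ u_upper  # (e/c) F_{ij} u^j = m c du_i/ds

lorentz = e * (E + np.cross(v, B) / c)  # d(m gamma v)/dt
power = e * (E @ v)                     # d(m c^2 gamma)/dt

# time component: m c du_0/ds = (gamma/c^2) d(m c^2 gamma)/dt
assert np.allclose(rhs[0], (gamma / c**2) * power)
# spatial components: m c du_alpha/ds = -(gamma/c) d(m gamma v_alpha)/dt
assert np.allclose(rhs[1:], -(gamma / c) * lorentz)
```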

Transformation of rank two tensors in matrix and index form.

Transformation of the metric tensor, and some identities.

With

\begin{aligned}\hat{G} = {\left\lVert{ g_{i j} }\right\rVert} = {\left\lVert{ g^{i j} }\right\rVert}\end{aligned} \hspace{\stretch{1}}(3.29)

\paragraph{We claim:}
The rank two tensor \hat{G} transforms in the following sort of sandwich operation, and this leaves it invariant

\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}.\end{aligned} \hspace{\stretch{1}}(3.30)

To demonstrate this let’s consider a transformed vector in coordinate form as follows

\begin{aligned}{x'}^i &= O^{i j} x_j = {O^i}_j x^j \\ {x'}_i &= O_{i j} x^j = {O_i}^j x_j.\end{aligned} \hspace{\stretch{1}}(3.31)

We can thus write the equation in matrix form with

\begin{aligned}X &= {\left\lVert{x^i}\right\rVert} \\ X' &= {\left\lVert{{x'}^i}\right\rVert} \\ \hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ X' &= \hat{O} X\end{aligned} \hspace{\stretch{1}}(3.33)

Our invariant for the vector square, which is required to remain unchanged is

\begin{aligned}{x'}^i {x'}_i &= (O^{i j} x_j)(O_{i k} x^k) \\ &= x^k (O^{i j} O_{i k}) x_j.\end{aligned}

This shows that we have a delta function relationship for the Lorentz transform matrix, when we sum over the first index

\begin{aligned}O^{a i} O_{a j} = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(3.37)

It appears we can put 3.37 into matrix form as

\begin{aligned}\hat{G} \hat{O}^\text{T} \hat{G} \hat{O} = I\end{aligned} \hspace{\stretch{1}}(3.38)

Now, if one considers that the transpose of a rotation is an inverse rotation, and that the transpose of a boost leaves it unchanged, then the transpose of a general Lorentz transformation, a composition of an arbitrary sequence of boosts and rotations, must also be a Lorentz transformation, and must then also leave the norm unchanged. For the transpose of our Lorentz transformation \hat{O} let's write

\begin{aligned}\hat{P} = \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.39)

For the action of this on our position vector let’s write

\begin{aligned}{x''}^i &= P^{i j} x_j = O^{j i} x_j \\ {x''}_i &= P_{i j} x^j = O_{j i} x^j\end{aligned} \hspace{\stretch{1}}(3.40)

so that our norm is

\begin{aligned}{x''}^a {x''}_a &= (O_{k a} x^k)(O^{j a} x_j) \\ &= x^k (O_{k a} O^{j a} ) x_j \\ &= x^j x_j \\ \end{aligned}

We must then also have an identity when summing over the second index

\begin{aligned}{\delta_{k}}^j = O_{k a} O^{j a} \end{aligned} \hspace{\stretch{1}}(3.42)

Armed with these facts on the products of O_{i j} and O^{i j} we can now consider the transformation of the metric tensor.

The rule (definition) supplied to us for the transformation of an arbitrary rank two tensor, is that this transforms as its indexes transform individually. Very much as if it was the product of two coordinate vectors and we transform those coordinates separately. Doing so for the metric tensor we have

\begin{aligned}g^{i j} &\rightarrow {O^i}_k g^{k m} {O^j}_m \\ &= ({O^i}_k g^{k m}) {O^j}_m \\ &= O^{i m} {O^j}_m \\ &= O^{i m} (O_{a m} g^{a j}) \\ &= (O^{i m} O_{a m}) g^{a j}\end{aligned}

However, by 3.42, we have O_{a m} O^{i m} = {\delta_a}^i, and we prove that

\begin{aligned}g^{i j} \rightarrow g^{i j}.\end{aligned} \hspace{\stretch{1}}(3.43)

Finally, we wish to put the above transformation in matrix form. Looking more carefully at the very first line

\begin{aligned}g^{i j}&\rightarrow {O^i}_k g^{k m} {O^j}_m \\ \end{aligned}

which is

\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}\end{aligned} \hspace{\stretch{1}}(3.44)

We see that this particular form of transformation, a sandwich between \hat{O} and \hat{O}^\text{T}, leaves the metric tensor invariant.

Lorentz transformation of the electrodynamic tensor

Having identified that this sandwich composition of Lorentz transformation matrices leaves the metric tensor invariant, it is reasonable to ask how this form of transformation acts on our electrodynamic tensor F^{i j}.

\paragraph{Claim:} A transformation of the following form is required to maintain the norm of the Lorentz force equation

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T} ,\end{aligned} \hspace{\stretch{1}}(3.45)

where \hat{F} = {\left\lVert{F^{i j}}\right\rVert}. Observe that our Lorentz force equation can be written exclusively in upper index quantities as

\begin{aligned}m c \frac{d{{u^i}}}{ds} = \frac{e}{c} F^{i j} g_{j l} u^l\end{aligned} \hspace{\stretch{1}}(3.46)

Because we have a vector on one side of the equation, and it transforms by multiplication with a Lorentz matrix in SO(1,3)

\begin{aligned}\frac{du^i}{ds} \rightarrow \hat{O} \frac{du^i}{ds} \end{aligned} \hspace{\stretch{1}}(3.47)

The LHS of the Lorentz force equation provides us with one invariant

\begin{aligned}(m c)^2 \frac{d{{u^i}}}{ds} \frac{d{{u_i}}}{ds}\end{aligned} \hspace{\stretch{1}}(3.48)

so the RHS must also provide one

\begin{aligned}\frac{e^2}{c^2} F^{i j} g_{j l} u^lF_{i k} g^{k m} u_m=\frac{e^2}{c^2} F^{i j} u_jF_{i k} u^k.\end{aligned} \hspace{\stretch{1}}(3.49)

Let’s look at the RHS in matrix form. Writing

\begin{aligned}U = {\left\lVert{u^i}\right\rVert},\end{aligned} \hspace{\stretch{1}}(3.50)

we can rewrite the Lorentz force equation as

\begin{aligned}m c \dot{U} = \frac{e}{c} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.51)

In this matrix formalism our invariant 3.49 is

\begin{aligned}\frac{e^2}{c^2} (\hat{F} \hat{G} U)^\text{T} \hat{G} \hat{F} \hat{G} U=\frac{e^2}{c^2} U^\text{T} \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.52)

If we compare this to the transformed Lorentz force equation we have

\begin{aligned}m c \hat{O} \dot{U} = \frac{e}{c} \hat{F'} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.53)

Our invariant for the transformed equation is

\begin{aligned}\frac{e^2}{c^2} (\hat{F'} \hat{G} \hat{O} U)^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U&=\frac{e^2}{c^2} U^\text{T} \hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U \\ \end{aligned}

Thus the transformed electrodynamic tensor \hat{F}' must satisfy the identity

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} = \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} \end{aligned} \hspace{\stretch{1}}(3.54)

With the substitution \hat{F}' = \hat{O} \hat{F} \hat{O}^\text{T} the LHS is

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} &= \hat{O}^\text{T} \hat{G} ( \hat{O} \hat{F} \hat{O}^\text{T})^\text{T} \hat{G} (\hat{O} \hat{F} \hat{O}^\text{T}) \hat{G} \hat{O}  \\ &= (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F}^\text{T} (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F} (\hat{O}^\text{T} \hat{G} \hat{O}) \\ \end{aligned}

We’ve argued that \hat{P} = \hat{O}^\text{T} is also a Lorentz transformation, thus

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{O}&=\hat{P} \hat{G} \hat{P}^\text{T} \\ &=\hat{G}\end{aligned}

This is enough to make both sides of 3.54 match, verifying that this transformation does provide the invariant properties desired.

Direct computation of the Lorentz transformation of the electrodynamic tensor.

We can construct the transformed field tensor more directly, by simply transforming the coordinates of the four gradient and the four potential directly. That is

\begin{aligned}F^{i j} = \partial^i A^j - \partial^j A^i&\rightarrow {O^i}_a {O^j}_b \left( \partial^a A^b - \partial^b A^a \right) \\ &={O^i}_a F^{a b} {O^j}_b \end{aligned}

By inspection we can see that this can be represented in matrix form as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.55)
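As a small numerical aside (my own check, not part of the lecture), the equivalence of the index expression {O^i}_a F^{a b} {O^j}_b and the matrix sandwich \hat{O} \hat{F} \hat{O}^\text{T} can be confirmed directly, here for a random antisymmetric F and a boost:

```python
import numpy as np

# A random antisymmetric F^{ab} and a boost O^i_a along x.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
F = M - M.T
beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
O = np.eye(4)
O[0, 0] = O[1, 1] = gamma
O[0, 1] = O[1, 0] = -beta * gamma

index_form = np.einsum('ia,ab,jb->ij', O, F, O)   # O^i_a F^{ab} O^j_b
matrix_form = O @ F @ O.T                         # \hat{O} \hat{F} \hat{O}^T
print(np.allclose(index_form, matrix_form))       # True
```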

Four vector invariants

For three vectors \mathbf{A} and \mathbf{B} invariants are

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = A^\alpha B_\alpha\end{aligned} \hspace{\stretch{1}}(4.56)

For four vectors A^i and B^i invariants are

\begin{aligned}A^i B_i = A^i g_{i j} B^j  \end{aligned} \hspace{\stretch{1}}(4.57)

For F_{i j} what are the invariants? One invariant is

\begin{aligned}g^{i j} F_{i j} = 0,\end{aligned} \hspace{\stretch{1}}(4.58)

but this isn’t interesting since it is uniformly zero (product of symmetric and antisymmetric).

The two invariants are

\begin{aligned}F_{i j}F^{i j}\end{aligned} \hspace{\stretch{1}}(4.59)

and

\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l}\end{aligned} \hspace{\stretch{1}}(4.60)

where

\begin{aligned}\epsilon^{i j k l} =\left\{\begin{array}{l l}0 & \quad \mbox{if any two indexes coincide} \\ 1 & \quad \mbox{for even permutations of $i j k l = 0123$} \\ -1 & \quad \mbox{for odd permutations of $i j k l = 0123$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.61)

We can show (homework) that

\begin{aligned}F_{i j}F^{i j} \propto \mathbf{E}^2 - \mathbf{B}^2\end{aligned} \hspace{\stretch{1}}(4.62)

\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l} \propto \mathbf{E} \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(4.63)
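These proportionalities can be spot-checked numerically. The following sketch is my own (not from the lecture); with the sign conventions of the matrix 5.69 below and \epsilon^{0123} = 1, it verifies that both scalars are unchanged by a boost, and that the proportionality constants work out to F_{ij}F^{ij} = 2(\mathbf{B}^2 - \mathbf{E}^2) and \epsilon^{ijkl}F_{ij}F_{kl} = -8 \mathbf{E} \cdot \mathbf{B}:

```python
import numpy as np
from itertools import permutations

def field_tensor(E, B):
    """F^{ij} for signature (+,-,-,-), matching F_{0 alpha} = E_alpha."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [Ex, 0.0, -Bz, By],
        [Ey, Bz, 0.0, -Bx],
        [Ez, -By, Bx, 0.0],
    ])

# Four dimensional Levi-Civita symbol, eps^{0123} = +1.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = -1.0 if inv % 2 else 1.0

G = np.diag([1.0, -1.0, -1.0, -1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
O = np.eye(4)
O[0, 0] = O[1, 1] = gamma
O[0, 1] = O[1, 0] = -beta * gamma

E = np.array([0.3, -1.1, 0.7])
B = np.array([0.5, 0.2, -0.9])

def invariants(F_up):
    F_down = G @ F_up @ G                       # lower both indexes
    s1 = np.sum(F_down * F_up)                  # F_{ij} F^{ij}
    s2 = np.einsum('ijkl,ij,kl->', eps, F_down, F_down)
    return s1, s2

s1, s2 = invariants(field_tensor(E, B))
s1b, s2b = invariants(O @ field_tensor(E, B) @ O.T)

print(np.isclose(s1, s1b) and np.isclose(s2, s2b))                  # True
print(np.isclose(s1, 2 * (B @ B - E @ E)), np.isclose(s2, -8.0 * (E @ B)))
```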

This first invariant serves as the action density for the Maxwell field equations.

There are some useful properties of these invariants. One is that if the fields are perpendicular in one frame, then they will be in any other.

From the first, note that if {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}, the invariant is positive, and must be positive in all frames, while if {\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert} it is negative in all frames. So if {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert} in one frame, we can transform to a frame with only an \mathbf{E}' component, solve that, and then transform back. Similarly, if {\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert} in one frame, we can transform to a frame with only a \mathbf{B}' component, solve that, and then transform back.

The first half of Maxwell’s equations.

\paragraph{Claim: } The source free portions of Maxwell’s equations are a consequence of the definition of the field tensor alone.

Given

\begin{aligned}F_{i j} = \partial_i A_j - \partial_j A_i,\end{aligned} \hspace{\stretch{1}}(5.64)

where

\begin{aligned}\partial_i = \frac{\partial {}}{\partial {x^i}}\end{aligned} \hspace{\stretch{1}}(5.65)

This alone implies half of Maxwell’s equations. To show this we consider

\begin{aligned}e^{m k i j} \partial_k F_{i j} = 0.\end{aligned} \hspace{\stretch{1}}(5.66)

This is the Bianchi identity. To demonstrate this identity, we’ll have to swap indexes, employ derivative commutation, and then swap indexes once more

\begin{aligned}e^{m k i j} \partial_k F_{i j} &= e^{m k i j} \partial_k (\partial_i A_j - \partial_j A_i) \\ &= 2 e^{m k i j} \partial_k \partial_i A_j \\ &= 2 e^{m k i j} \frac{1}{{2}} \left( \partial_k \partial_i A_j + \partial_i \partial_k A_j \right) \\ &= e^{m k i j} \partial_k \partial_i A_j + e^{m i k j} \partial_k \partial_i A_j  \\ &= (e^{m k i j} - e^{m k i j}) \partial_k \partial_i A_j \\ &= 0 \qquad \square\end{aligned}

This is the 4D analogue of

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} f) = 0\end{aligned} \hspace{\stretch{1}}(5.67)

i.e.

\begin{aligned}e^{\alpha\beta\gamma} \partial_\beta \partial_\gamma f = 0\end{aligned} \hspace{\stretch{1}}(5.68)

Let’s do this explicitly, starting with

\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(5.69)

For the m= 0 case we have

\begin{aligned}\epsilon^{0 k i j} \partial_k F_{i j}&=\epsilon^{\alpha \beta \gamma} \partial_\alpha F_{\beta \gamma} \\ &= \epsilon^{\alpha \beta \gamma} \partial_\alpha (-\epsilon_{\beta \gamma \delta} B_\delta) \\ &= -\epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma }\partial_\alpha B_\delta \\ &= - 2 {\delta^\alpha}_\delta \partial_\alpha B_\delta \\ &= - 2 \partial_\alpha B_\alpha \end{aligned}

We must then have

\begin{aligned}\partial_\alpha B_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.70)

This is just Gauss’s law for magnetism

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0.\end{aligned} \hspace{\stretch{1}}(5.71)
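The three dimensional contraction identity \epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma} = 2 {\delta^\alpha}_\delta used in the derivation above is easy to spot-check numerically (my own aside, not part of the lecture):

```python
import numpy as np
from itertools import permutations

# Three dimensional Levi-Civita symbol, eps^{123} = +1.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    eps[p] = -1.0 if inv % 2 else 1.0

# Contract over the last two indexes of each symbol.
contraction = np.einsum('abc,dbc->ad', eps, eps)
print(np.allclose(contraction, 2.0 * np.eye(3)))  # True
```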

Let’s do the spatial portion, for which we have three equations, one for each \alpha of

\begin{aligned}e^{\alpha j k l} \partial_j F_{k l}&=e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha 0 \gamma \beta} \partial_0 F_{\gamma \beta}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \beta \gamma 0} \partial_\beta F_{\gamma 0}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}+e^{\alpha \gamma \beta 0} \partial_\gamma F_{\beta 0} \\ &=2 \left( e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}\right) \\ &=2 e^{0 \alpha \beta \gamma} \left(-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\right)\end{aligned}

This implies

\begin{aligned}0 =-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\end{aligned} \hspace{\stretch{1}}(5.72)

Referring back to the previous expansions of 2.6 and 2.17, we have

\begin{aligned}0 =\partial_0 \epsilon_{\beta\gamma\mu} B_\mu+\partial_\beta E_\gamma- \partial_\gamma E_{\beta},\end{aligned} \hspace{\stretch{1}}(5.73)

or

\begin{aligned}\frac{1}{{c}} \frac{\partial {B_\alpha}}{\partial {t}} + (\boldsymbol{\nabla} \times \mathbf{E})_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.74)

These are just the components of the Maxwell-Faraday equation

\begin{aligned}0 = \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} + \boldsymbol{\nabla} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.75)

Appendix. Some additional index gymnastics.

Transposition of mixed index tensor.

Is the transpose of a mixed index object just a substitution of the free indexes? It wasn't obvious to me that this would be the case, especially since I'd made an error in some index gymnastics that had me temporarily convinced otherwise. However, working some examples clears the fog. For example, let's take the transpose of 3.37.

\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} &= {\left\lVert{ O^{a i} O_{a j} }\right\rVert}^\text{T} \\ &= \left( {\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \right)^\text{T} \\ &={\left\lVert{ O_{i j} }\right\rVert}^\text{T}{\left\lVert{ O^{j i} }\right\rVert}^\text{T}  \\ &={\left\lVert{ O_{j i} }\right\rVert}{\left\lVert{ O^{i j} }\right\rVert} \\ &={\left\lVert{ O_{a i} O^{a j} }\right\rVert} \\ \end{aligned}

If the transpose of a mixed index tensor just swapped the indexes we would have

\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} = {\left\lVert{ O_{a i} O^{a j} }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.76)

From this it does appear that all we have to do is switch the indexes and we will write

\begin{aligned}{\delta^j}_i = O_{a i} O^{a j} \end{aligned} \hspace{\stretch{1}}(6.77)

We can consider a more general operation

\begin{aligned}{\left\lVert{{A^i}_j}\right\rVert}^\text{T}&={\left\lVert{ A^{i m} g_{m j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}^\text{T}{\left\lVert{ A^{i j} }\right\rVert}^\text{T}  \\ &={\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ A^{j i} }\right\rVert} \\ &={\left\lVert{ g_{i m} A^{j m} }\right\rVert} \\ &={\left\lVert{ {A^{j}}_i }\right\rVert}\end{aligned}

So we see that we do just have to swap indexes.
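This bookkeeping can also be verified numerically (a check of my own). With \hat{G} = \mathrm{diag}(1,-1,-1,-1) and a random A^{ij}, the transpose of {\left\lVert{{A^i}_j}\right\rVert} is exactly the matrix of index-swapped entries {A^j}_i:

```python
import numpy as np

rng = np.random.default_rng(1)
G = np.diag([1.0, -1.0, -1.0, -1.0])   # \hat{G}, with g^{ij} = g_{ij}
A_up = rng.standard_normal((4, 4))     # A^{ij}, no symmetry assumed

mixed = A_up @ G        # matrix of {A^i}_j = A^{im} g_{mj}
swapped = G @ A_up.T    # matrix whose (i, j) entry is {A^j}_i = g_{im} A^{jm}

print(np.allclose(mixed.T, swapped))   # True: transposition swaps the indexes
```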

Transposition of lower index tensor.

We saw above that we had

\begin{aligned}{\left\lVert{ {A^{i}}_j }\right\rVert}^\text{T} &= {\left\lVert{ {A_{j}}^i }\right\rVert} \\ {\left\lVert{ {A_{i}}^j }\right\rVert}^\text{T} &= {\left\lVert{ {A^{j}}_i }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.78)

which followed from careful treatment of the transposition in terms of A^{i j}, for which we defined a transpose operation. We assumed as well that

\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T} = {\left\lVert{ A_{j i} }\right\rVert}.\end{aligned} \hspace{\stretch{1}}(6.80)

However, this does not have to be assumed, provided that g^{i j} = g_{i j}, and (AB)^\text{T} = B^\text{T} A^\text{T}. We see this by expanding this transposition in products of A^{i j} and \hat{G}

\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T}&= \left( {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \right)^\text{T} \\ &= \left( {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \right)^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert}^\text{T} {\left\lVert{ A^{i j}}\right\rVert}^\text{T} {\left\lVert{g^{i j}}\right\rVert}^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{j i}}\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \\ &= {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j}}\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \\ &= {\left\lVert{ A_{j i}}\right\rVert} \end{aligned}

It would be worthwhile to go through all of this index manipulation stuff and lay it out in a structured axiomatic form. What is the minimal set of assumptions, and how does all of this generalize to non-diagonal metric tensors (even in Euclidean spaces)?

Translating the index expression of identity from Lorentz products to matrix form

A verification that the matrix expression 3.38 matches the index expression 3.37 as claimed is worthwhile. It would be easy to guess that something similar, like \hat{O}^\text{T} \hat{G} \hat{O} \hat{G}, is instead the matrix representation. That was in fact my first erroneous attempt to form the matrix equivalent, but it is the transpose of 3.38. Either way you get an identity, but the indexes didn't match.

Since we have g^{i j} = g_{i j} which do we pick to do this verification? This appears to be dictated by requirements to match lower and upper indexes on the summed over index. This is probably clearest by example, so let’s expand the products on the LHS explicitly

\begin{aligned}{\left\lVert{ g^{i j} }\right\rVert} {\left\lVert{ {O^{i}}_j }\right\rVert} ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} &=\left( {\left\lVert{ {O^{i}}_j }\right\rVert} {\left\lVert{ g^{i j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert}  \\ &=\left( {\left\lVert{ {O^{i}}_k g^{k j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i m} {O^{m}}_j }\right\rVert}  \\ &={\left\lVert{ O^{i j} }\right\rVert} ^\text{T}{\left\lVert{ O_{i j} }\right\rVert}  \\ &={\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert}  \\ &={\left\lVert{ O^{k i} O_{k j} }\right\rVert}  \\ \end{aligned}

This matches the {\left\lVert{{\delta^i}_j}\right\rVert} that we have on the RHS, and all is well.

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Lecture 14 (Taught by Simon Freedman). Wave equation in Coulomb and Lorentz gauges.

Posted by peeterjoot on February 17, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 4 material from the text [1].

Covering lecture notes pp.103-114: the wave equation in the relativistic Lorentz gauge (114-114) [Tuesday, Feb. 15; Wednesday, Feb.16]…

Covering lecture notes pp. 114-127: reminder on wave equations (114); reminder on Fourier series and integral (115-117); Fourier expansion of the EM potential in Coulomb gauge and equation of motion for the spatial Fourier components (118-119); the general solution of Maxwell’s equations in vacuum (120-121) [Tuesday, Mar. 1]; properties of monochromatic plane EM waves (122-124); energy and energy flux of the EM field and energy conservation from the equations of motion (125-127) [Wednesday, Mar. 2]

Trying to understand “c”

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(2.1)

Maxwell’s equations in a vacuum were

\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) &= \boldsymbol{\nabla}^2 \mathbf{A}  -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} \\ \boldsymbol{\nabla} \cdot \mathbf{E} &= - \boldsymbol{\nabla}^2 \phi - \frac{1}{{c}} \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}}}{\partial {t}} \end{aligned} \hspace{\stretch{1}}(2.3)

There’s a redundancy here since we can change \phi and \mathbf{A} without changing the EOM

\begin{aligned}(\phi, \mathbf{A}) \rightarrow (\phi', \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(2.5)

with

\begin{aligned}\phi &= \phi' + \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &= \mathbf{A}' - \boldsymbol{\nabla} \chi\end{aligned} \hspace{\stretch{1}}(2.6)

In particular, we are free to pick

\begin{aligned}\chi(\mathbf{x}, t) = c \int dt \phi(\mathbf{x}, t),\end{aligned} \hspace{\stretch{1}}(2.8)

which gives

\begin{aligned}\phi' = 0\end{aligned} \hspace{\stretch{1}}(2.9)

\begin{aligned}(\phi, \mathbf{A}) \sim (\phi = 0, \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(2.10)

Maxwell’s equations are now

\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}') &= \boldsymbol{\nabla}^2 \mathbf{A}'  - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}'}{\partial t^2} \\ \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}'}}{\partial {t}}  &= 0\end{aligned}

Can we make \boldsymbol{\nabla} \cdot \mathbf{A}'' = 0, while \phi'' = 0?

\begin{aligned}\underbrace{\phi}_{=0} &= \underbrace{\phi'}_{=0} + \frac{1}{{c}} \frac{\partial {\chi'}}{\partial {t}} \\ \end{aligned} \hspace{\stretch{1}}(2.11)

We need

\begin{aligned}\frac{\partial {\chi'}}{\partial {t}} = 0\end{aligned} \hspace{\stretch{1}}(2.13)

How about \mathbf{A}'?

\begin{aligned}\mathbf{A}' = \mathbf{A}'' - \boldsymbol{\nabla} \chi'\end{aligned} \hspace{\stretch{1}}(2.14)

We want the divergence of \mathbf{A}'' to be zero, which means

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A}' = \underbrace{\boldsymbol{\nabla} \cdot \mathbf{A}''}_{=0} - \boldsymbol{\nabla}^2 \chi'\end{aligned} \hspace{\stretch{1}}(2.15)

So we want

\begin{aligned}\boldsymbol{\nabla}^2 \chi' = -\boldsymbol{\nabla} \cdot \mathbf{A}'\end{aligned} \hspace{\stretch{1}}(2.16)

Can we solve this?

Recall that in electrostatics we have

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \rho\end{aligned} \hspace{\stretch{1}}(2.17)

and

\begin{aligned}\mathbf{E} = -\boldsymbol{\nabla} \phi\end{aligned} \hspace{\stretch{1}}(2.18)

which meant that we had

\begin{aligned}\boldsymbol{\nabla}^2 \phi = -4 \pi \rho\end{aligned} \hspace{\stretch{1}}(2.19)

This has the identical form (with \phi \sim \chi, and 4 \pi \rho \sim \boldsymbol{\nabla} \cdot \mathbf{A}').

We aren't trying to actually solve this here, just to show that it can be solved. One way to look at this problem is that it is just a Poisson equation, and we could utilize a Green's function solution if desired.

On the Green’s function.

Recall that the Green’s function for the Laplacian was

\begin{aligned}G(\mathbf{x}, \mathbf{x}') = \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}}\end{aligned} \hspace{\stretch{1}}(2.20)

with the property

\begin{aligned}\boldsymbol{\nabla}^2 G(\mathbf{x}, \mathbf{x}') = -4 \pi \delta(\mathbf{x} - \mathbf{x}')\end{aligned} \hspace{\stretch{1}}(2.21)

Our LDE to solve by Green’s method is

\begin{aligned}\boldsymbol{\nabla}^2 \phi = -4 \pi \rho.\end{aligned} \hspace{\stretch{1}}(2.22)

We let this equation (after switching to primed coordinates) operate on the Green’s function

\begin{aligned}\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 \phi(\mathbf{x}') G(\mathbf{x}, \mathbf{x}') =-\int d^3 \mathbf{x}' 4 \pi \rho(\mathbf{x}') G(\mathbf{x}, \mathbf{x}').\end{aligned} \hspace{\stretch{1}}(2.23)

Assuming that the left action of the Green’s function on the test function \phi(\mathbf{x}') is the same as the right action (i.e. \phi(\mathbf{x}') and G(\mathbf{x}, \mathbf{x}') commute), we have for the LHS

\begin{aligned}\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 \phi(\mathbf{x}') G(\mathbf{x}, \mathbf{x}') &=\int d^3 \mathbf{x}' {\boldsymbol{\nabla}'}^2 G(\mathbf{x}, \mathbf{x}') \phi(\mathbf{x}') \\ &=-4 \pi \int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') \phi(\mathbf{x}') \\ &=-4 \pi \phi(\mathbf{x}).\end{aligned}

Substitution of G(\mathbf{x}, \mathbf{x}') = 1/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} on the RHS then gives us the general solution

\begin{aligned}\phi(\mathbf{x}) = \int d^3 \mathbf{x}' \frac{\rho(\mathbf{x}') }{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(2.24)
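As a small numerical sanity check (my own, not part of the lecture), finite differences confirm that 1/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} is harmonic away from the source point, so the whole delta function contribution to \boldsymbol{\nabla}^2 G lives at \mathbf{x} = \mathbf{x}':

```python
import numpy as np

def phi(x, y, z):
    """Candidate Green's function 1/|x - x'|, with the source at the origin."""
    return 1.0 / np.sqrt(x**2 + y**2 + z**2)

def laplacian(f, x, y, z, h=1e-3):
    """Second order central difference approximation of the Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / h**2

# Away from the source point the Laplacian vanishes; the entire
# delta function contribution lives at the singular point.
print(abs(laplacian(phi, 1.0, 0.7, -0.4)))   # ~0
```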

Back to Maxwell’s equations in vacuum.

What are the Maxwell’s vacuum equations now?

With the second gauge substitution we have

\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}'') &= \boldsymbol{\nabla}^2 \mathbf{A}''  - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}''}{\partial t^2} \\ \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}''}}{\partial {t}}  &= 0\end{aligned}

but we can utilize

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A},\end{aligned} \hspace{\stretch{1}}(2.25)

to reduce Maxwell’s equations (after dropping primes) to just

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} - \Delta \mathbf{A} = 0\end{aligned} \hspace{\stretch{1}}(2.26)

where

\begin{aligned}\Delta = \boldsymbol{\nabla}^2 = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\end{aligned} \hspace{\stretch{1}}(2.27)

Note that for this to be correct we have to also explicitly include the gauge condition used. This particular gauge is called the \underline{Coulomb gauge}.

\begin{aligned}\phi &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A}'' &= 0 \end{aligned} \hspace{\stretch{1}}(2.28)

Claim: EM waves propagate with speed c and are transverse.
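Here is a quick symbolic check of this claim (my own, using a transverse plane wave ansatz): with \mathbf{A} = \hat{\mathbf{x}} \cos(k z - \omega t) and the dispersion \omega = c k, both the wave equation 2.26 and the Coulomb gauge condition \boldsymbol{\nabla} \cdot \mathbf{A} = 0 are satisfied:

```python
import sympy as sp

t, x, y, z, c, k = sp.symbols('t x y z c k', positive=True)
omega = c * k                   # dispersion required by the wave equation

# Transverse plane wave: A along x, propagating along z.
Ax = sp.cos(k * z - omega * t)
Ay = sp.Integer(0)
Az = sp.Integer(0)

wave_op = (sp.diff(Ax, t, 2) / c**2
           - sp.diff(Ax, x, 2) - sp.diff(Ax, y, 2) - sp.diff(Ax, z, 2))
div_A = sp.diff(Ax, x) + sp.diff(Ay, y) + sp.diff(Az, z)

print(sp.simplify(wave_op), div_A)   # 0 0
```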

\paragraph{Note:} Is the Coulomb gauge Lorentz invariant?
\paragraph{No.} We can boost which will introduce a non-zero \phi.

The gauge that is Lorentz Invariant is the “Lorentz gauge”. This one uses

\begin{aligned}\partial_i A^i = 0\end{aligned} \hspace{\stretch{1}}(3.30)

Recall that Maxwell’s equations are

\begin{aligned}\partial_i F^{ij} = j^j = 0\end{aligned} \hspace{\stretch{1}}(3.31)

where

\begin{aligned}\partial_i &= \frac{\partial {}}{\partial {x^i}} \\ \partial^i &= \frac{\partial {}}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(3.32)

Writing out the equations in terms of potentials we have

\begin{aligned}0 &= \partial_i (\partial^i A^j - \partial^j A^i)  \\ &= \partial_i \partial^i A^j - \partial_i \partial^j A^i \\ &= \partial_i \partial^i A^j - \partial^j \partial_i A^i \\ \end{aligned}

So, if we pick the gauge condition \partial_i A^i = 0, we are left with just

\begin{aligned}0 = \partial_i \partial^i A^j\end{aligned} \hspace{\stretch{1}}(3.34)

Can we choose {A'}^i such that \partial_i A^i = 0?

Our gauge transformation is

\begin{aligned}A^i = {A'}^i + \partial^i \chi\end{aligned} \hspace{\stretch{1}}(3.35)

Hit it with a derivative for

\begin{aligned}\partial_i A^i = \partial_i {A'}^i + \partial_i \partial^i \chi\end{aligned} \hspace{\stretch{1}}(3.36)

If we want \partial_i A^i = 0, then we have

\begin{aligned}-\partial_i {A'}^i = \partial_i \partial^i \chi = \left( \frac{1}{{c^2}} \frac{\partial^2}{\partial t^2} - \Delta \right) \chi\end{aligned} \hspace{\stretch{1}}(3.37)

This is the physicist proof. Yes, it can be solved. To really solve this, we’d want to use Green’s functions. I seem to recall the Green’s function is a retarded time version of the Laplacian Green’s function, and we can figure that exact form out by switching to a Fourier frequency domain representation.

Anyways. Returning to Maxwell’s equations we have

\begin{aligned}0 &= \partial_i \partial^i A^j \\ 0 &= \partial_i A^i ,\end{aligned} \hspace{\stretch{1}}(3.38)

where the first is Maxwell’s equation, and the second is our gauge condition.

Observe that the gauge condition is now a Lorentz scalar.

\begin{aligned}\partial^i A_i \rightarrow \partial^j {O_j}^i {O_i}^k A_k\end{aligned} \hspace{\stretch{1}}(3.40)

But the Lorentz transform matrices multiply out to identity, in the same way that they do for the transformation of a plain old four vector dot product x^i y_i.

What happens with a Massive vector field?

\begin{aligned}S = \int d^4 x \left( \frac{1}{{4}} F^{ij} F_{ij} + \frac{m^2}{2} A^i A_i \right)\end{aligned} \hspace{\stretch{1}}(4.41)

An aside on units

“Note that this action is expressed in dimensions where \hbar = c = 1, making the action unit-less (energy and time are inverse units of each other). The d^4x has units of m^{-4} (since [x] = \hbar/mc), so F has units of m^2, and then A has units of mass. Therefore d^4x A A has units of m^{-2} and therefore you need something that has units of m^2 to make the action unit-less. When you don’t take c=1, then you’ve got to worry about those factors, but I think you’ll see it works out fine.”

For what it’s worth, I can adjust the units of this action to those that we’ve used in class with,

\begin{aligned}S = \int d^4 x \left( -\frac{1}{{16 \pi c}} F^{ij} F_{ij} - \frac{m^2 c^2}{8 \hbar^2} A^i A_i \right)\end{aligned} \hspace{\stretch{1}}(4.42)

Back to the problem.

The variation of the field invariant is

\begin{aligned}\delta (F_{ij} F^{ij})&=2 (\delta F_{ij}) F^{ij} \\ &=2 (\delta(\partial_i A_j -\partial_j A_i)) F^{ij} \\ &=2 (\partial_i \delta(A_j) -\partial_j \delta(A_i)) F^{ij} \\ &=4 F^{ij} \partial_i \delta(A_j) \\ &=4 \partial_i (F^{ij} \delta(A_j)) - 4 (\partial_i F^{ij}) \delta(A_j).\end{aligned}

Variation of the A^2 term gives us

\begin{aligned}\delta (A^j A_j) = 2 A^j \delta(A_j),\end{aligned} \hspace{\stretch{1}}(4.43)

so we have

\begin{aligned}0 &= \delta S \\ &= \int d^4 x \delta(A_j) \left( -\partial_i F^{ij} + m^2 A^j \right)+ \int d^4 x \partial_i (F^{ij} \delta(A_j))\end{aligned}

The last integral vanishes on the boundary with the assumption that \delta(A_j) = 0 on that boundary.

Since this must be true for all variations, this leaves us with

\begin{aligned}\partial_i F^{ij} = m^2 A^j\end{aligned} \hspace{\stretch{1}}(4.44)

The LHS can be expanded into wave equation and divergence parts

\begin{aligned}\partial_i F^{ij}&=\partial_i (\partial^i A^j - \partial^j A^i) \\ &=(\partial_i \partial^i) A^j - \partial^j (\partial_i A^i) \\ \end{aligned}

With \square for the wave equation operator

\begin{aligned}\square = \partial_i \partial^i = \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta,\end{aligned} \hspace{\stretch{1}}(4.45)

we can manipulate the EOM to pull out an A_i factor

\begin{aligned}0 &= \left( \square -m^2 \right) A^j - \partial^j (\partial_i A^i) \\ &= \left( \square -m^2 \right) g^{ij} A_i - \partial^j (\partial^i A_i) \\ &= \left( \left( \square -m^2 \right) g^{ij} - \partial^j \partial^i \right) A_i.\end{aligned}

If we hit this with a derivative we get

\begin{aligned}0 &= \partial_j \left( \left( \square -m^2 \right) g^{ij} - \partial^j \partial^i \right) A_i \\ &= \left( \left( \square -m^2 \right) \partial^i - \partial_j \partial^j \partial^i \right) A_i \\ &= \left( \left( \square -m^2 \right) \partial^i - \square \partial^i \right) A_i \\ &= \left( \square -m^2 - \square \right) \partial^i A_i \\ &= -m^2 \partial^i A_i \\ \end{aligned}

Since m is presumed to be non-zero here, this means that the Lorentz gauge is already chosen for us by the equations of motion.
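The last derivation can be checked symbolically (my own sketch, not part of the lecture, using sympy with four arbitrary component functions A_i and \hat{G} = \mathrm{diag}(1,-1,-1,-1)): applying \partial_j to the operator of the equation of motion leaves exactly -m^2 \partial^i A_i:

```python
import sympy as sp

t, x, y, z, m = sp.symbols('t x y z m')
X = (t, x, y, z)
g = sp.diag(1, -1, -1, -1)                        # metric, g^{ij} = g_{ij}
A = [sp.Function(f'A{i}')(*X) for i in range(4)]  # arbitrary covariant A_i

def d_up(i, f):    # \partial^i f = g^{ij} \partial_j f (diagonal metric)
    return g[i, i] * sp.diff(f, X[i])

def box(f):        # \square f = \partial_i \partial^i f
    return sum(sp.diff(d_up(i, f), X[i]) for i in range(4))

# \partial_j [ ( (\square - m^2) g^{ij} - \partial^j \partial^i ) A_i ]
lhs = sum(
    sp.diff(
        sum((box(A[i]) - m**2 * A[i]) * g[i, j] - d_up(j, d_up(i, A[i]))
            for i in range(4)),
        X[j])
    for j in range(4))

rhs = -m**2 * sum(d_up(i, A[i]) for i in range(4))
print(sp.simplify(lhs - rhs))   # 0: the divergence is forced to vanish
```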

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning.

Fourier transform solutions and associated energy and momentum for the homogeneous Maxwell equation. (rework once more)

Posted by peeterjoot on December 29, 2009

[Click here for a PDF of this post with nicer formatting]. Note that this PDF file is formatted in a wide-for-screen layout that is probably not good for printing.

These notes build on and replace those formerly posted in Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.

Motivation and notation.

In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], a derivation for the energy and momentum density was derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case examining Fourier transform solutions and the associated energy and momentum density.

A complex (phasor) representation is implied, so taking real parts when all is said and done is required of the fields. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)

The assumed four vector potential will be written

\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)

Subject to the requirement that A is a solution of Maxwell’s equation

\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)

To avoid latex hell, no special notation will be used for the Fourier coefficients,

\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)

When convenient and unambiguous, this (\mathbf{k},t) dependence will be implied.

Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is

\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)

and for the four potential (or the Fourier transform functions), this is

\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)

Setup

The field bivector F = \nabla \wedge A is required for the energy momentum tensor. This is

\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})-(\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\end{aligned}

This last term is a spatial curl and the field is then

\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.7)

Applied to the Fourier representation this is

\begin{aligned}F =\frac{1}{{(\sqrt{2 \pi})^3}} \int\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)

It is only the real parts of this that we are actually interested in, unless physical meaning can be assigned to the complete complex vector field.

Constraints supplied by Maxwell’s equation.

A Fourier transform solution of Maxwell’s vacuum equation \nabla F = 0 has been assumed. Having expressed the Faraday bivector in terms of spatial vector quantities, it is more convenient to do this back substitution after pre-multiplying Maxwell’s equation by \gamma_0, namely

\begin{aligned}0&= \gamma_0 \nabla F \\ &= (\partial_0 + \boldsymbol{\nabla}) F.\end{aligned} \hspace{\stretch{1}}(3.9)

Applied to the spatially decomposed field as specified in (2.7), this is

\begin{aligned}0&=-\partial_0 \boldsymbol{\nabla} \phi-\partial_{00} \mathbf{A}+ \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A}-\boldsymbol{\nabla}^2 \phi- \boldsymbol{\nabla} \partial_0 \mathbf{A}+ \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &=- \partial_0 \boldsymbol{\nabla} \phi - \boldsymbol{\nabla}^2 \phi- \partial_{00} \mathbf{A}- \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}+ \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ) \\ \end{aligned}

All grades of this equation must simultaneously equal zero, and the bivector grades have canceled (assuming commuting space and time partials), leaving two equations of constraint for the system

\begin{aligned}0 &=\boldsymbol{\nabla}^2 \phi + \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}\end{aligned} \hspace{\stretch{1}}(3.11)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}+ \boldsymbol{\nabla} \partial_0 \phi + \boldsymbol{\nabla} ( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.12)

It is immediately evident that a gauge transformation could be helpful to simplify things. In [3] the gauge choice \boldsymbol{\nabla} \cdot \mathbf{A} = 0 is used. From (3.11) this implies that \boldsymbol{\nabla}^2 \phi = 0. Bohm argues that for this current and charge free case this implies \phi = 0, but he also has a periodicity constraint. Without a periodicity constraint it is easy to manufacture non-zero counterexamples. One is a function linear in the space and time coordinates

\begin{aligned}\phi = p x + q y + r z + s t\end{aligned} \hspace{\stretch{1}}(3.13)

This is a valid scalar potential provided that the vector potential also satisfies its wave equation. We can, however, force \phi = 0 by making the transformation A^\mu \rightarrow A^\mu + \partial^\mu \psi, which in non-covariant notation is

\begin{aligned}\phi &\rightarrow \phi + \frac{1}{c} \partial_t \psi \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi\end{aligned} \hspace{\stretch{1}}(3.14)

If the transformed field \phi' = \phi + \partial_t \psi/c can be forced to zero, then the complexity of the associated Maxwell equations is reduced. In particular, antidifferentiating \partial_t \psi = -c \phi yields

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x}, 0) - c \int_{\tau=0}^t \phi(\mathbf{x}, \tau) d\tau.\end{aligned} \hspace{\stretch{1}}(3.16)
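As an illustrative check (my own sketch, with arbitrarily chosen constants), we can verify by finite differences that this choice of \psi gauges away the linear counterexample potential \phi = p x + s t of (3.13), restricted here to one spatial coordinate.

```python
# Finite-difference check that psi(x, t) = -c * int_0^t phi dtau
# gauges away the linear potential phi = p x + s t. Constants arbitrary.
c, p, s = 3.0, 1.5, -0.7

def phi(x, t):
    return p*x + s*t

def psi(x, t):
    # closed form of -c * integral_0^t phi(x, tau) dtau, with psi(x, 0) = 0
    return -c*(p*x*t + s*t*t/2)

def phi_transformed(x, t, h=1e-6):
    dpsi_dt = (psi(x, t + h) - psi(x, t - h))/(2*h)  # central difference
    return phi(x, t) + dpsi_dt/c

print(phi_transformed(0.3, 1.2))  # ~0, while phi(0.3, 1.2) is not
```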

Dropping primes, the transformed Maxwell equations now take the form

\begin{aligned}0 &= \partial_t( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.17)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.18)

There are two classes of solutions that stand out for these equations. If the vector potential is constant in time \mathbf{A}(\mathbf{x},t) = \mathbf{A}(\mathbf{x}), Maxwell’s equations are reduced to the single equation

\begin{aligned}0&= - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.19)

Observe that a gradient can be factored out of this equation

\begin{aligned}- \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} )&=\boldsymbol{\nabla} (-\boldsymbol{\nabla} \mathbf{A} + \boldsymbol{\nabla} \cdot \mathbf{A} ) \\ &=-\boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned}

The solutions are then those \mathbf{A}s that satisfy both

\begin{aligned}0 &= \partial_t \mathbf{A} \\ 0 &= \boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(3.20)

In particular, any time independent potential \mathbf{A} with constant curl provides a solution to Maxwell’s equations, and there may also be more general solutions to (3.19). Returning to (3.17), a second way to satisfy these equations stands out. Instead of requiring \mathbf{A} to have constant curl, a divergence that is constant in time also eliminates (3.17). The simplest resulting equations are those for which the divergence is constant in both time and space (such as zero). The solution set is then spanned by the vectors \mathbf{A} for which

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A} \end{aligned} \hspace{\stretch{1}}(3.22)

\begin{aligned}0 &= \frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(3.23)

Any \mathbf{A} that both has constant divergence and satisfies the wave equation will via (2.7) then produce a solution to Maxwell’s equation.
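A finite-difference sketch (my own example, with an arbitrarily chosen transverse plane wave) confirms that such an \mathbf{A} has both vanishing divergence and a vanishing wave operator.

```python
# Finite-difference check: a transverse plane wave
# A(x, t) = eps cos(k.x - c|k| t) with eps.k = 0 has zero divergence and
# satisfies (1/c^2) d_tt A = laplacian A. All constants arbitrary.
import math

c = 1.0
k = (1.0, 2.0, -0.5)
eps = (2.0, -1.0, 0.0)  # chosen so that eps.k = 0
w = c*math.sqrt(sum(ki*ki for ki in k))  # omega = c |k|

def A(x, t):
    phase = sum(ki*xi for ki, xi in zip(k, x)) - w*t
    return [e*math.cos(phase) for e in eps]

def shift(x, axis, h):
    y = list(x)
    y[axis] += h
    return tuple(y)

x0, t0, h = (0.2, -0.4, 0.7), 0.3, 1e-4

# div A = sum_m dA_m/dx_m, by central differences
div = sum((A(shift(x0, m, h), t0)[m] - A(shift(x0, m, -h), t0)[m])/(2*h)
          for m in range(3))

def wave_residual(m):
    att = (A(x0, t0 + h)[m] - 2*A(x0, t0)[m] + A(x0, t0 - h)[m])/h**2
    lap = sum((A(shift(x0, a, h), t0)[m] - 2*A(x0, t0)[m]
               + A(shift(x0, a, -h), t0)[m])/h**2 for a in range(3))
    return att/c**2 - lap

print(div, [wave_residual(m) for m in range(3)])  # all ~0
```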

Maxwell equation constraints applied to the assumed Fourier solutions.

Let’s consider Maxwell’s equations in all three forms, (3.11), (3.20), and (3.22) and apply these constraints to the assumed Fourier solution.

In all cases the starting point is a pair of Fourier transform relationships, where the Fourier transforms are the functions to be determined

\begin{aligned}\phi(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \phi(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.24)

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.25)

Case I. Constant time vector potential. Scalar potential eliminated by gauge transformation.

From (4.25) we require

\begin{aligned}0 = (2 \pi)^{-3/2} \int \partial_t \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.26)

So the Fourier transform also cannot have any time dependence, and we have

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.27)

What is the curl of this? Temporarily falling back to coordinates is easiest for this calculation

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{x}}&=\sigma_m \partial_m \wedge \sigma_n A^n(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\sigma_m \wedge \sigma_n A^n(\mathbf{k}) i k^m e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i\mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

This gives

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.28)

We want to equate the divergence of this to zero. Neglecting the integral and constant factor this requires

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \left( i \mathbf{k} \wedge \mathbf{A} e^{i\mathbf{k} \cdot \mathbf{x}} \right) \\ &= {\left\langle{{ \sigma_m \partial_m i (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -{\left\langle{{ \sigma_m (\mathbf{k} \wedge \mathbf{A}) k^m e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

Since \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k}, this can vanish only when the component of \mathbf{A}(\mathbf{k}) perpendicular to \mathbf{k} is zero, which implies \mathbf{A} \propto \mathbf{k}. The solution set is then completely described by functions of the form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k},\end{aligned} \hspace{\stretch{1}}(4.29)

where \psi(\mathbf{k}) is an arbitrary scalar valued function. This is however, an extremely uninteresting solution since the curl is uniformly zero

\begin{aligned}F &= \boldsymbol{\nabla} \wedge \mathbf{A} \\ &= (2 \pi)^{-3/2} \int (i \mathbf{k}) \wedge \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

Since \mathbf{k} \wedge \mathbf{k} = 0, when all is said and done the \phi = 0, \partial_t \mathbf{A} = 0 case appears to have only the trivial (zero field) solution. Moving on, …
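The key step above was the vector identity \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k}, whose vanishing kills exactly the transverse part of \mathbf{A}. A small numeric check of that identity (my own sketch, with arbitrary vectors):

```python
# Numeric check of k.(k ^ A) = k^2 A - (k.A) k and of the fact that its
# vanishing leaves only the component of A parallel to k. Vectors arbitrary.
k = (1.0, -2.0, 0.5)
A = (0.3, 0.7, -1.1)

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

# expand k.(k ^ A) with the a.(b ^ c) = (a.b)c - (a.c)b identity
kkA = tuple(dot(k, k)*a - dot(k, A)*b for a, b in zip(A, k))

# the same thing is k^2 times the transverse part of A
A_par  = tuple(dot(k, A)/dot(k, k)*b for b in k)
A_perp = tuple(a - p for a, p in zip(A, A_par))
expected = tuple(dot(k, k)*p for p in A_perp)

print(kkA, expected)  # componentwise equal
```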

Case II. Constant vector potential divergence. Scalar potential eliminated by gauge transformation.

Next in the order of complexity is consideration of the case (3.22). Here we also have \phi = 0, eliminated by gauge transformation, and are looking for solutions with the constraint

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= (2 \pi)^{-3/2} \int i \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

How can this constraint be enforced? The only obvious way is to require \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) to be zero for all (\mathbf{k},t), meaning that the to-be-determined Fourier transform coefficients are required to be perpendicular to the wave number vector parameters at all times.

The remainder of Maxwell’s equations, (3.23), imposes the additional constraint on the Fourier transform \mathbf{A}(\mathbf{k},t)

\begin{aligned}0 &= (2 \pi)^{-3/2} \int \left( \frac{1}{{c^2}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) - i^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t)\right) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.30)

For this to vanish for all \mathbf{x} it appears that we require the Fourier transforms \mathbf{A}(\mathbf{k}) to be harmonic in time

\begin{aligned}\partial_{tt} \mathbf{A}(\mathbf{k}, t) = - c^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.31)

This has the familiar exponential solutions

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{ \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t },\end{aligned} \hspace{\stretch{1}}(4.32)

also subject to a requirement that \mathbf{k} \cdot \mathbf{A}(\mathbf{k}) = 0. Our field, where the \mathbf{A}_{\pm}(\mathbf{k}) are to be determined by initial time conditions, is by (2.7) of the form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( -{\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{+}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{+}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( {\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{-}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{-}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.33)

Since 0 = \mathbf{k} \cdot \mathbf{A}_{\pm}(\mathbf{k}), we have \mathbf{k} \wedge \mathbf{A}_{\pm}(\mathbf{k}) = \mathbf{k} \mathbf{A}_{\pm}. This allows for factoring out of {\left\lvert{\mathbf{k}}\right\rvert}. The structure of the solution is not changed by incorporating the i (2\pi)^{-3/2} {\left\lvert{\mathbf{k}}\right\rvert} factors into \mathbf{A}_{\pm}, leaving the field having the general form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \int ( \hat{\mathbf{k}} - 1 ) \mathbf{A}_{+}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \int ( \hat{\mathbf{k}} + 1 ) \mathbf{A}_{-}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.34)

The original meaning of \mathbf{A}_{\pm} as Fourier transforms of the vector potential is obscured by the tidy up change to absorb {\left\lvert{\mathbf{k}}\right\rvert}, but the geometry of the solution is clearer this way.

It is also particularly straightforward to confirm that \gamma_0 \nabla F = 0 separately for either half of (4.34).
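The claimed harmonic time dependence (4.32) is easy to spot check numerically (a sketch with arbitrary constants, not from the original post):

```python
# Spot check that A(k, t) = A0 exp(+/- ic|k|t) satisfies the oscillator
# equation d_tt A = -c^2 k^2 A of (4.31). Constants arbitrary.
import cmath, math

c = 1.0
k2 = 0.4**2 + 1.1**2 + 2.0**2  # |k|^2 for an arbitrary k
w = c*math.sqrt(k2)
A0 = 1.3 - 0.6j

def A(t, sign):
    return A0*cmath.exp(sign*1j*w*t)

t0, h = 0.37, 1e-5
for sign in (+1, -1):
    att = (A(t0 + h, sign) - 2*A(t0, sign) + A(t0 - h, sign))/h**2
    print(abs(att + c*c*k2*A(t0, sign)))  # ~0 for both signs
```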

Case III. Non-zero scalar potential. No gauge transformation.

Now let’s work from (3.11). In particular, a divergence operation can be factored from (3.11), for

\begin{aligned}0 = \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(4.35)

Right off the top, there is a requirement for

\begin{aligned}\text{constant} = \boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(4.36)

In terms of the Fourier transforms this is

\begin{aligned}\text{constant} = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(i \mathbf{k} \phi(\mathbf{k}, t) + \frac{1}{c} \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.37)

Are there any ways for this to equal a constant for all \mathbf{x} without requiring that constant to be zero? Assuming no for now, and that this constant must be zero, this implies a coupling between the \phi and \mathbf{A} Fourier transforms of the form

\begin{aligned}\phi(\mathbf{k}, t) = -\frac{1}{{i c \mathbf{k}}} \partial_t \mathbf{A}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(4.38)

A secondary implication is that \partial_t \mathbf{A}(\mathbf{k}, t) \propto \mathbf{k} or else \phi(\mathbf{k}, t) is not a scalar. We had a transverse solution by requiring via gauge transformation that \phi = 0, and here we have instead the vector potential in the propagation direction.

A secondary confirmation that this is a required coupling between the scalar and vector potential can be had by evaluating the divergence equation of (4.35)

\begin{aligned}0 = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- \mathbf{k}^2 \phi(\mathbf{k}, t) + \frac{i\mathbf{k}}{c} \cdot \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.39)

Rearranging this also produces (4.38). We want to now substitute this relationship into (3.12).

Starting with just the \partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A} part we have

\begin{aligned}\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}&=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{i}{c^2 \mathbf{k}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + i \mathbf{k} \cdot \mathbf{A}\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.40)

Taking the gradient of this brings down a factor of i\mathbf{k} for

\begin{aligned}\boldsymbol{\nabla} (\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A})&=-\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{c^2} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.41)

(3.12) in its entirety is now

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- (i\mathbf{k})^2 \mathbf{A}- \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.42)

This isn’t terribly pleasant looking. Perhaps it is better to go the other direction. We could write

\begin{aligned}\phi = \frac{i}{c \mathbf{k}} \frac{\partial {\mathbf{A}}}{\partial {t}} = \frac{i}{c} \frac{\partial {\psi}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(4.43)

so that

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{k} \psi(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.44)

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{{c^2}} \mathbf{k} \psi_{tt}- \boldsymbol{\nabla}^2 \mathbf{k} \psi + \boldsymbol{\nabla} \frac{i}{c^2} \psi_{tt}+\boldsymbol{\nabla}( \boldsymbol{\nabla} \cdot (\mathbf{k} \psi) )\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \\ \end{aligned}

Note that the gradients here operate on everything to the right, including and especially the exponential. Each application of the gradient brings down an additional i\mathbf{k} factor, and we have

\begin{aligned}\frac{1}{{(\sqrt{2 \pi})^3}} \int \mathbf{k} \Bigl(\frac{1}{{c^2}} \psi_{tt}- i^2 \mathbf{k}^2 \psi + \frac{i^2}{c^2} \psi_{tt}+i^2 \mathbf{k}^2 \psi \Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

This is identically zero, so we see that this second equation provides no additional information. That is somewhat surprising since there is not a whole lot of constraints supplied by the first equation. The function \psi(\mathbf{k}, t) can be anything. Understanding of this curiosity comes from computation of the Faraday bivector itself. From (2.7), that is

\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(-i \mathbf{k} \frac{i}{c}\psi_t - \frac{1}{c} \mathbf{k} \psi_t + i \mathbf{k} \wedge \mathbf{k} \psi\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.45)

All terms cancel, so we see that a non-zero \phi leads to F = 0, as was the case when considering (4.29) (a case that also resulted in \mathbf{A}(\mathbf{k}) \propto \mathbf{k}).
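The cancellation in (4.45) can also be seen numerically. In the sketch below (my own, with an arbitrary complex value standing in for \psi_t at one point in \mathbf{k} space) the gradient and time-derivative terms are equal and opposite, while i \mathbf{k} \wedge \mathbf{k} \psi vanishes identically.

```python
# Verify the cancellation in (4.45): with A(k) = k psi and
# phi = (i/c) psi_t, the integrand terms -i k phi and -(1/c) k psi_t are
# equal and opposite, while i k ^ (k psi) = 0 since k ^ k = 0.
# The complex value standing in for psi_t is arbitrary.
c = 1.0
psi_t = 0.8 - 0.3j
k = (1.0, -2.0, 0.5)

phi = (1j/c)*psi_t
term_grad = tuple(-1j*ki*phi for ki in k)        # from -grad phi
term_At   = tuple(-(1/c)*ki*psi_t for ki in k)   # from -(1/c) dA/dt

total = tuple(a + b for a, b in zip(term_grad, term_At))
print(total)  # zero in every component, up to rounding
```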

Can this Fourier representation lead to a non-transverse solution to Maxwell’s equation? If so, it is not obvious how.

The energy momentum tensor

The energy momentum tensor is then

\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.46)

Observing that \gamma_0 commutes with spatial bivectors and anticommutes with spatial vectors, and writing \sigma_\mu = \gamma_\mu \gamma_0, the tensor splits neatly into scalar and spatial vector components

\begin{aligned}T(\gamma_\mu) \cdot \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ T(\gamma_\mu) \wedge \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle}_{1}e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.47)

In particular for \mu = 0, we have

\begin{aligned}H &\equiv T(\gamma_0) \cdot \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right)\cdot\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)- (\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)) \cdot (\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t))\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ \mathbf{P} &\equiv T(\gamma_0) \wedge \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(i\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right) \cdot\left(\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)-i\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)\cdot\left(\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.49)

Integrating this over all space and identification of the delta function

\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(5.51)

reduces the tensor to a single integral in the continuous angular wave number space of \mathbf{k}.
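As an aside, the discrete analogue of this delta function identification is easy to verify: on an N point periodic grid, averaging e^{i(\mathbf{k} - \mathbf{k}') \cdot \mathbf{x}_n} over the grid gives 1 for matching DFT wave numbers and 0 otherwise (a sketch of my own, in one dimension with arbitrary mode indices).

```python
# Discrete analogue of the delta function identification (5.51): on an
# N point periodic grid, (1/N) sum_n exp(i(k - k') x_n) is 1 for matching
# DFT wave numbers and 0 otherwise. N and the mode indices are arbitrary.
import cmath, math

N = 64

def bracket(m, mp):
    # (1/N) sum over grid points of exp(2 pi i (m - mp) n / N)
    return sum(cmath.exp(2j*math.pi*(m - mp)*n/N) for n in range(N))/N

print(abs(bracket(5, 5)), abs(bracket(5, 9)))  # ~1 and ~0
```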

\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.52)

Or,

\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} =\frac{\epsilon_0}{2} \text{Real} \int{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.53)

Multiplying out (5.53) yields for \int H

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2+ 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(5.54)

Recall that the only non-trivial solution we found for the assumed Fourier transform representation of F was for \phi = 0, \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) = 0. Thus we have for the energy density integrated over all space, just

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 {\left\lvert{\mathbf{A}}\right\rvert}^2 \right).\end{aligned} \hspace{\stretch{1}}(5.55)

Observe that we have the structure of a harmonic oscillator for the energy of the radiation system. What is the canonical momentum for this system? Will it correspond to the Poynting vector, integrated over all space?

Let’s reduce the vector component of (5.53), after first imposing the \phi=0 and \mathbf{k} \cdot \mathbf{A} = 0 conditions used above for our harmonic oscillator form energy relationship. This is

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( i {\mathbf{A}}^{*}_t \cdot (\mathbf{k} \wedge \mathbf{A})+ i (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{A}_t\right) \\ &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( -i ({\mathbf{A}}^{*}_t \cdot \mathbf{A}) \mathbf{k}+ i \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t)\right)\end{aligned}

This is just

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{c} \text{Real} i \int \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.56)

Recall that the Fourier transforms for the transverse propagation case had the form \mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t}, where the minus generated the advanced wave, and the plus the receding wave. With substitution of the vector potential for the advanced wave into the energy and momentum results of (5.55) and (5.56) respectively, we have

\begin{aligned}\int H d^3 \mathbf{x}   &= \epsilon_0 \int \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k} \\ \int \mathbf{P} d^3 \mathbf{x} &= \epsilon_0 \int \hat{\mathbf{k}} \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.57)

After a somewhat circuitous route, this has the relativistic symmetry that is expected. In particular, for the complete \mu=0 tensor we have, after integration over all space,

\begin{aligned}\int T(\gamma_0) \gamma_0 d^3 \mathbf{x} = \epsilon_0 \int (1 + \hat{\mathbf{k}}) \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.59)

The receding wave solution would give the same result, but directed as 1 - \hat{\mathbf{k}} instead.
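This (1 \pm \hat{\mathbf{k}}) structure can be spot checked numerically from the integrands of (5.55) and (5.56) for a single advanced mode (a sketch of my own with arbitrary constants; the polarization direction is suppressed since only {\left\lvert{\mathbf{A}}\right\rvert}^2 enters).

```python
# Spot check of the (1 + k_hat) structure (5.59): for the advanced mode
# A(k, t) = A0 exp(-ic|k|t), the momentum integrand of (5.56) equals
# k_hat times the energy integrand of (5.55). Constants arbitrary; the
# polarization is suppressed since only |A|^2 enters.
import cmath, math

c, eps0 = 1.0, 1.0
k = (0.3, -0.4, 1.2)
k2 = sum(ki*ki for ki in k)
kmag = math.sqrt(k2)
A0 = 0.7 + 0.2j

t = 0.9
A  = A0*cmath.exp(-1j*c*kmag*t)
At = -1j*c*kmag*A  # dA/dt for the advanced wave

# energy integrand (5.55)
H = (eps0/2)*(abs(At)**2/c**2 + k2*abs(A)**2)

# momentum integrand (5.56): Real(i z) = -Im(z)
coeff = -(eps0/c)*(A.conjugate()*At).imag
P = tuple(coeff*ki for ki in k)

print(P, tuple(H*ki/kmag for ki in k))  # equal: P = k_hat H
```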

Observe that we also have the four divergence conservation statement that is expected

\begin{aligned}\frac{\partial {}}{\partial {t}} \int H d^3 \mathbf{x} + \boldsymbol{\nabla} \cdot \int c \mathbf{P} d^3 \mathbf{x} &= 0.\end{aligned} \hspace{\stretch{1}}(5.60)

This follows trivially since both derivatives are zero. If the integration region were more specific, instead of a 0 + 0 = 0 relationship we would have the rate of change of the energy {\partial {H}}/{\partial {t}} balanced in magnitude by the momentum flux through a bounding surface. For a more general surface the time and spatial dependencies should not necessarily vanish, but we should still have this radiation energy momentum conservation.

References

[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.

[2] Peeter Joot. {Energy and momentum for Complex electric and magnetic field phasors.} [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.


Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.

Posted by peeterjoot on December 22, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation and notation.

In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], the energy and momentum density were derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case, examining Fourier transform solutions and the associated energy and momentum density.

A complex (phasor) representation is implied, so the real parts of the fields must be taken when all is said and done. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)

The assumed four vector potential will be written

\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)

Subject to the requirement that A is a solution of Maxwell’s equation

\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)

To avoid latex hell, no special notation will be used for the Fourier coefficients,

\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)

When convenient and unambiguous, this (\mathbf{k},t) dependence will be implied.

Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is

\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)

and for the four potential (or the Fourier transform functions), this is

\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)

Setup

The field bivector F = \nabla \wedge A is required for the energy momentum tensor. This is

\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})- (\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \end{aligned}

This last term is a spatial curl and the field is then

\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} \end{aligned} \hspace{\stretch{1}}(2.7)

Applied to the Fourier representation this is

\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)

The energy momentum tensor is then

\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(2.9)

The tensor integrated over all space. Energy and momentum?

Integrating this over all space and identifying the delta function

\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(3.10)

reduces the tensor to a single integral in the continuous angular wave number space of \mathbf{k}.

\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int \left( - \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left( - \frac{1}{{c}} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)
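As a sanity check on the transform convention being used (a 1D numerical analog of 1.4 and the delta function identification 3.10; my addition, assuming numpy is available, not part of the derivation), a unit Gaussian should map to a unit Gaussian under the symmetric (2 \pi)^{-1/2} normalization:

```python
import numpy as np

# 1D analog of the symmetric transform convention in (1.4):
# f~(k) = (2 pi)^(-1/2) * Int f(x) exp(-i k x) dx.
# A unit Gaussian is its own transform under this normalization.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

def transform(k):
    # quadrature approximation of the forward transform at one wave number
    return dx * np.sum(f * np.exp(-1j * k * x)) / np.sqrt(2 * np.pi)

for k in (0.0, 0.5, 1.5):
    assert abs(transform(k) - np.exp(-k**2 / 2)) < 1e-8
```

The inverse transform with the opposite sign in the exponent recovers the original function, which is exactly the content of the delta function identification used above.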

Observing that \gamma_0 commutes with spatial bivectors and anticommutes with spatial vectors, and writing \sigma_\mu = \gamma_\mu \gamma_0, one has

\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} = \frac{\epsilon_0}{2} \text{Real} \int {\left\langle{{\left( \frac{1}{{c}} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left( \frac{1}{{c}} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)

The scalar and spatial vector grade selection operator has been added for convenience and does not change the result since those are necessarily the only grades anyhow. The post multiplication by the observer frame time basis vector \gamma_0 serves to separate the energy and momentum like components of the tensor nicely into scalar and vector aspects. In particular for T(\gamma^0), one could write

\begin{aligned}\int T(\gamma^0) d^3 \mathbf{x} = (H + \mathbf{P}) \gamma_0.\end{aligned} \hspace{\stretch{1}}(3.13)

If these are correctly identified with energy and momentum then it also ought to be true that we have the conservation relationship

\begin{aligned}\frac{\partial {H}}{\partial {t}} + \boldsymbol{\nabla} \cdot (c \mathbf{P}) = 0.\end{aligned} \hspace{\stretch{1}}(3.14)

However, multiplying out (3.12) yields for H

\begin{aligned}H &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2 + 2 \frac{\mathbf{k}}{c} \cdot \text{Imag}( {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(3.15)

The vector component takes a bit more work to reduce

\begin{aligned}\mathbf{P} &= \frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} ({{\dot{\mathbf{A}}}}^{*} \cdot (\mathbf{k} \wedge \mathbf{A})+ {{\phi}}^{*} \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A})+ \frac{i}{c} (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \dot{\mathbf{A}}- \phi (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{k}\right) \\ &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \text{Real} \left(\frac{i}{c} \left( ({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{k}) \mathbf{A} -({{\dot{\mathbf{A}}}}^{*} \cdot \mathbf{A}) \mathbf{k} \right)+ {{\phi}}^{*} \left( \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k} \right)+ \frac{i}{c} \left( ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}}) \mathbf{k} - (\mathbf{k} \cdot \dot{\mathbf{A}}) {\mathbf{A}}^{*} \right)+ \phi \left( \mathbf{k}^2 {\mathbf{A}}^{*} -({\mathbf{A}}^{*} \cdot \mathbf{k}) \mathbf{k} \right) \right).\end{aligned}

Canceling and regrouping leaves

\begin{aligned}\mathbf{P}&=\epsilon_0 \int d^3 \mathbf{k} \text{Real} \left(\mathbf{A} \left( \mathbf{k}^2 {{\phi}}^{*} + \frac{i}{c} \mathbf{k} \cdot {{\dot{\mathbf{A}}}}^{*} \right)+ \mathbf{k} \left( -{{\phi}}^{*} (\mathbf{k} \cdot \mathbf{A}) + \frac{i}{c} ({\mathbf{A}}^{*} \cdot \dot{\mathbf{A}})\right)\right).\end{aligned} \hspace{\stretch{1}}(3.16)
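The regrouping above is easy to get wrong by a factor, so here is a numerical spot check with random complex vectors (my addition, assuming numpy; the dot products are the bilinear, unconjugated products used throughout, conjugates appear explicitly, and the \dot{\mathbf{A}}{}^{*} term in the regrouped integrand carries a factor of i/c):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.0  # arbitrary units; any positive value works for the identity

# random complex stand-ins for A(k, t), dA/dt, phi(k, t); k is real
A   = rng.normal(size=3) + 1j * rng.normal(size=3)
Ad  = rng.normal(size=3) + 1j * rng.normal(size=3)
k   = rng.normal(size=3)
phi = complex(rng.normal(), rng.normal())

dot = lambda u, v: np.sum(u * v)  # bilinear dot; no implicit conjugation
Ac, Adc, phic = A.conj(), Ad.conj(), phi.conjugate()

# expanded integrand, second line of the reduction of P
expanded = ((1j / c) * (dot(Adc, k) * A - dot(Adc, A) * k)
            + phic * (dot(k, k) * A - dot(k, A) * k)
            + (1j / c) * (dot(Ac, Ad) * k - dot(k, Ad) * Ac)
            + phi * (dot(k, k) * Ac - dot(Ac, k) * k)).real

# regrouped integrand; the factor 2 absorbs the dropped 1/2 prefactor
regrouped = 2 * (A * (dot(k, k) * phic + (1j / c) * dot(k, Adc))
                 + k * (-phic * dot(k, A) + (1j / c) * dot(Ac, Ad))).real

assert np.allclose(expanded, regrouped)
```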

This has no explicit \mathbf{x} dependence, so the conservation relation (3.14) is violated unless {\partial {H}}/{\partial {t}} = 0. There is no reason to assume that will be the case. In the discrete Fourier series treatment, a gauge transformation allowed for elimination of \phi, and this implied \mathbf{k} \cdot \mathbf{A}_\mathbf{k} = 0 or \mathbf{A}_\mathbf{k} constant. We will probably have a similar result here, eliminating most of the terms in (3.15) and (3.16). Except for the constant \mathbf{A}_\mathbf{k} solution of the field equations there is no obvious way that such a simplified energy expression will have zero derivative.

A more reasonable conclusion is that this approach is flawed. We ought to be looking at the divergence relation as a starting point and, instead of integrating over all space, employing Gauss’s theorem to convert the divergence integral into a surface integral. Expressed without the math, the conservation relationship probably ought to be that the energy change within a volume is matched by a momentum flux through its surface. However, without an integral over all space, we do not get the nice delta function cancellation observed above. How to proceed is not immediately clear. Stepping back to review applications of Gauss’s theorem is probably a good first step.

References

[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.

[2] Peeter Joot. Energy and momentum for Complex electric and magnetic field phasors. [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.

Posted in Math and Physics Learning. | 1 Comment »

Electrodynamic field energy for vacuum (reworked)

Posted by peeterjoot on December 21, 2009

[Click here for a PDF of this post with nicer formatting]

Previous version.

This is a reworked version of a previous post (also available as a PDF).

Reducing the products in the Dirac basis makes life more complicated than it needs to be (this became obvious when attempting to derive an expression for the Poynting integral).

Motivation.

From Energy and momentum for Complex electric and magnetic field phasors [PDF], we now know how to formulate the energy momentum tensor for complex vector fields (i.e. phasors) in the Geometric Algebra formalism. To recap, for the field F = \mathbf{E} + I c \mathbf{B}, where \mathbf{E} and \mathbf{B} may be complex vectors, we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution F of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary i, as in \mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E}), is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor T(a) was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for a = \gamma^0, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term T^{00} = T(\gamma^0) \cdot \gamma^0 and the Poynting spatial vector T(\gamma^0) \wedge \gamma^0 with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [2]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

Setup

Let’s assume a Fourier representation for the four vector potential A for the field F = \nabla \wedge A. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all angular wave number triplets \mathbf{k} = 2 \pi (k_1/\lambda_1, k_2/\lambda_2, k_3/\lambda_3). The Fourier coefficients A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu are allowed to be complex valued, as is the resulting four vector A, and the associated bivector field F.

Fourier inversion, with V = \lambda_1 \lambda_2 \lambda_3, follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{ i \mathbf{k}' \cdot \mathbf{x}} e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{- i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)

Since the four vector potential has been expressed using an explicit split into time and space components it will be natural to re express the bivector field in terms of scalar and (spatial) vector potentials, with the Fourier coefficients. Writing \sigma_m = \gamma_m \gamma_0 for the spatial basis vectors, {A_\mathbf{k}}^0 = \phi_\mathbf{k}, and \mathbf{A} = A^k \sigma_k, this is

\begin{aligned}A_\mathbf{k} = (\phi_\mathbf{k} + \mathbf{A}_\mathbf{k}) \gamma_0.\end{aligned} \quad\quad\quad(9)

The Faraday bivector field F is then

\begin{aligned}F = \sum_\mathbf{k} \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(10)

This is now enough to express the energy momentum tensor T(\gamma^\mu)

\begin{aligned}T(\gamma^\mu) &= -\frac{\epsilon_0}{2} \sum_{\mathbf{k},\mathbf{k}'}\text{Real} \left(\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}'})}}^{*} + i \mathbf{k}' {{\phi_{\mathbf{k}'}}}^{*} - i \mathbf{k}' \wedge {{\mathbf{A}_{\mathbf{k}'}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x}}\right).\end{aligned} \quad\quad\quad(11)

It will be more convenient to work with a scalar plus bivector (spatial vector) form of this tensor, and right multiplication by \gamma_0 produces such a split

\begin{aligned}T(\gamma^\mu) \gamma_0 = \left\langle{{T(\gamma^\mu) \gamma_0}}\right\rangle + \sigma_a \left\langle{{ \sigma_a T(\gamma^\mu) \gamma_0 }}\right\rangle\end{aligned} \quad\quad\quad(12)

The primary object of this treatment will be consideration of the \mu = 0 components of the tensor, which provide a split into energy density T(\gamma^0) \cdot \gamma_0, and Poynting vector (momentum density) T(\gamma^0) \wedge \gamma_0.

Our first step is to integrate (12) over the volume V. This integration and the orthogonality relationship (6), removes the exponentials, leaving

\begin{aligned}\int T(\gamma^\mu) \cdot \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0 }}\right\rangle \\ \int T(\gamma^\mu) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0}}\right\rangle \end{aligned} \quad\quad\quad(13)

Because \gamma_0 commutes with the spatial bivectors, and anticommutes with the spatial vectors, the remainder of the Dirac basis vectors in these expressions can be eliminated

\begin{aligned}\int T(\gamma^0) \cdot \gamma_0&= -\frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(15)

\begin{aligned}\int T(\gamma^0) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(16)

\begin{aligned}\int T(\gamma^m) \cdot \gamma_0&= \frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(17)

\begin{aligned}\int T(\gamma^m) \wedge \gamma_0&= \frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle.\end{aligned} \quad\quad\quad(18)

Expanding the energy momentum tensor components.

Energy

In (15) only the bivector-bivector and vector-vector products produce any scalar grades. Except for the bivector product this can be done by inspection. For that part we utilize the identity

\begin{aligned}\left\langle{{ (\mathbf{k} \wedge \mathbf{a}) (\mathbf{k} \wedge \mathbf{b}) }}\right\rangle= (\mathbf{a} \cdot \mathbf{k}) (\mathbf{b} \cdot \mathbf{k}) - \mathbf{k}^2 (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \quad\quad\quad(19)
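This identity can be spot-checked numerically (my addition, assuming numpy) using the 3D duality between wedge and cross products, under which the scalar grade of a product of two bivectors is minus the dot product of their dual vectors, \left\langle (\mathbf{k} \wedge \mathbf{a})(\mathbf{k} \wedge \mathbf{b}) \right\rangle = -(\mathbf{k} \times \mathbf{a}) \cdot (\mathbf{k} \times \mathbf{b}):

```python
import numpy as np

rng = np.random.default_rng(1)
k, a, b = rng.normal(size=(3, 3))

# <(k ^ a)(k ^ b)> via the cross-product duals of the two bivectors
lhs = -np.dot(np.cross(k, a), np.cross(k, b))
# right hand side of the identity (19)
rhs = np.dot(a, k) * np.dot(b, k) - np.dot(k, k) * np.dot(a, b)
assert np.isclose(lhs, rhs)
```

Both sides are bilinear, so the identity extends to the complex phasor vectors used here even though the check uses real samples.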

This leaves for the energy H = \int T(\gamma^0) \cdot \gamma_0 in the volume

\begin{aligned}H = \frac{\epsilon_0 V}{2} \sum_\mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2 +\mathbf{k}^2 \left( {\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right) - {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2+ \frac{2}{c} \mathbf{k} \cdot \text{Imag} \left( {{\phi_\mathbf{k}}}^{*} \dot{\mathbf{A}}_\mathbf{k} \right)\right)\end{aligned} \quad\quad\quad(20)

We are left with a completely real expression, and one without any explicit Geometric Algebra. This does not look like the Harmonic oscillator Hamiltonian that was expected. A gauge transformation to eliminate \phi_\mathbf{k} and an observation about when \mathbf{k} \cdot \mathbf{A}_\mathbf{k} equals zero will give us that, but first let’s get the mechanical jobs done, and reduce the products for the field momentum.

Momentum

Now move on to (16). For the factors other than \sigma_a only the vector-bivector products can contribute to the scalar product. We have two such products, one of the form

\begin{aligned}\sigma_a \left\langle{{ \sigma_a \mathbf{a} (\mathbf{k} \wedge \mathbf{c}) }}\right\rangle&=\sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) - \sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) \\ &=\mathbf{c} (\mathbf{a} \cdot \mathbf{k}) - \mathbf{k} (\mathbf{a} \cdot \mathbf{c}),\end{aligned}

and the other

\begin{aligned}\sigma_a \left\langle{{ \sigma_a (\mathbf{k} \wedge \mathbf{c}) \mathbf{a} }}\right\rangle&=\sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) - \sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) \\ &=\mathbf{k} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{k}).\end{aligned}
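Both of these vector-bivector reductions are restatements of the BAC-CAB rule, since in 3D \mathbf{a} \cdot (\mathbf{k} \wedge \mathbf{c}) = -\mathbf{a} \times (\mathbf{k} \times \mathbf{c}). A numerical spot check (my addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
a, k, c = rng.normal(size=(3, 3))

# a . (k ^ c) = -a x (k x c) = c (a . k) - k (a . c)
first = -np.cross(a, np.cross(k, c))
assert np.allclose(first, c * np.dot(a, k) - k * np.dot(a, c))

# reversing the product order negates the result:
# (k ^ c) . a = k (a . c) - c (a . k)
second = np.cross(a, np.cross(k, c))
assert np.allclose(second, k * np.dot(a, c) - c * np.dot(a, k))
```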

The momentum \mathbf{P} = \int T(\gamma^0) \wedge \gamma_0 in this volume follows by computation of

\begin{aligned}&\sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \\ &=  i \mathbf{A}_\mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{k} \right)  - i \mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{A}_\mathbf{k} \right)  \\ &- i \mathbf{k} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right)  + i {{\mathbf{A}_{\mathbf{k}}}}^{*} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot \mathbf{k} \right)\end{aligned}

All the products are paired in nice conjugates; taking real parts and premultiplying by -\epsilon_0 V/2 gives the desired result. Observe that two of these terms cancel, and another two have no real part. Those last are

\begin{aligned}-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \left( {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k}+\dot{\mathbf{A}}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)&=-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \frac{d}{dt} \left( \mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)\end{aligned}

The quantity \mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} = {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 and its time derivative are real, so i times that derivative is purely imaginary, and its real part is zero, leaving just

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)+ \mathbf{k}^2 \phi_\mathbf{k} {{ \mathbf{A}_\mathbf{k} }}^{*}- \mathbf{k} {{\phi_\mathbf{k}}}^{*} (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\right)\end{aligned} \quad\quad\quad(21)

I am not sure why exactly, but I actually expected a term with {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2, quadratic in the vector potential. Is there a mistake above?

Gauge transformation to simplify the Hamiltonian.

In (20) something that looked like the Harmonic oscillator was expected. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into the Harmonic oscillator form.

If we are to change our four vector potential A \rightarrow A + \nabla \psi, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(22)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(23)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a \psi such that A^0 + \partial_0 \psi = 0. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(24)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(25)
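This gauge argument hinges on \nabla \wedge \nabla \psi = 0, the vanishing of the curl of a gradient. A symbolic spot check on an arbitrary smooth test function (my addition, assuming sympy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
psi = x**2 * sp.sin(y) * sp.exp(z)  # arbitrary smooth gauge function

grad_psi = [sp.diff(psi, v) for v in (x, y, z)]
# nabla ^ nabla psi = 0 is equivalent to the vanishing of all the
# antisymmetric mixed-partial combinations (the curl of the gradient)
curl_grad = [sp.diff(grad_psi[2], y) - sp.diff(grad_psi[1], z),
             sp.diff(grad_psi[0], z) - sp.diff(grad_psi[2], x),
             sp.diff(grad_psi[1], x) - sp.diff(grad_psi[0], y)]
assert all(sp.simplify(component) == 0 for component in curl_grad)
```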

With such a transformation, the \phi_\mathbf{k} and \dot{\mathbf{A}}_\mathbf{k} cross term in the Hamiltonian (20) vanishes, as does the \phi_\mathbf{k} term in the four vector square of the last term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 - {\left\lvert{ c \mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(26)

Additionally, wedging (5) with \gamma_0 now does not lose any information so our potential Fourier series is reduced to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(27)
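A 1D numerical analog of this series-inversion pair (my addition, assuming numpy, with a handful of arbitrarily chosen coefficients) confirms that the box integral recovers the Fourier coefficients:

```python
import numpy as np

# build A(x) from a few known coefficients on a box of width lam, then
# recover each coefficient with the inversion integral; a uniform Riemann
# sum is exact for these periodic integrands, up to aliasing
lam = 2.0
x = np.linspace(0.0, lam, 2000, endpoint=False)
dx = x[1] - x[0]
coeffs = {-1: 0.5 - 0.25j, 0: 1.0 + 0.0j, 2: -0.3j}

A = sum(c * np.exp(2j * np.pi * n * x / lam) for n, c in coeffs.items())

for n, c in coeffs.items():
    recovered = np.sum(A * np.exp(-2j * np.pi * n * x / lam)) * dx / lam
    assert abs(recovered - c) < 1e-12
```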

The desired harmonic oscillator form would be had in (26) if it were not for the \mathbf{k} \cdot \mathbf{A}_\mathbf{k} term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While \mathbf{A} = A \wedge \gamma_0, the lack of an A^0 component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(29)

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(30)

So, with this A^0 = 0 gauge choice the bivector field F is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(31)

From the left the gradient action on A is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} -  \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(32)

For this vacuum case, premultiplication of Maxwell’s equation by \gamma_0 gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in \mathbf{A}

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(33)
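The underbraced identity used above, \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A}) = \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}), is the dual of the familiar curl-of-curl identity. A symbolic spot check on an arbitrary test field (my addition, assuming sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([x**2 * y, sp.sin(y) * z, x * z**3])  # arbitrary smooth field

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def laplacian(F):
    return sp.Matrix([div(grad(F[i])) for i in range(3)])

# curl(curl A) = grad(div A) - laplacian(A), the dual statement of
# div(nabla ^ A) = laplacian(A) - grad(div A)
difference = sp.simplify(curl(curl(A)) - (grad(div(A)) - laplacian(A)))
assert difference == sp.zeros(3, 1)
```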

If the divergence of the vector potential is constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_\mathbf{k} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}

Since \mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t), there are two ways to have \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0. For each \mathbf{k} we require either \mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0 or \mathbf{A}_\mathbf{k} = \text{constant}. The constant \mathbf{A}_\mathbf{k} solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?

The more interesting seeming case is where we have some non-static time varying state. In this case, if \mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(35)
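As expected for this wave equation, a plane wave with \omega = c {\left\lvert{\mathbf{k}}\right\rvert} satisfies it identically; a 1D symbolic check (my addition, assuming sympy):

```python
import sympy as sp

x, t, c, k = sp.symbols('x t c k', positive=True)
# one transverse component of a plane wave with omega = c k
A = sp.cos(k * x - c * k * t)

wave = -sp.diff(A, t, 2) / c**2 + sp.diff(A, x, 2)
assert sp.simplify(wave) == 0
```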

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(36)

How does the gauge choice alter the Poynting vector? From (21), all the \phi_\mathbf{k} dependence in that integrated momentum density is lost

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)\right).\end{aligned} \quad\quad\quad(37)

The \mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0 solutions to Maxwell’s equation are seen to result in zero momentum for this infinite periodic field. My expectation was something of the form c \mathbf{P} = H \hat{\mathbf{k}}, so intuition is either failing me, or my math is failing me, or this contrived periodic field solution leads to trouble.

Conclusions and followup.

The objective was met: Bohm’s Harmonic oscillator result was reproduced using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear. That automatically cuts complexity from the results. Figuring out how to work this problem with complex valued potentials and also using the Geometric Algebra formulation probably also made the work a bit more difficult since blundering through both simultaneously was required instead of just one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential \phi upfront, and I have to admit I did not follow his logic, whereas when I blindly follow where the math leads me, it all makes sense.

As a bit of followup, I’d like to consider the constant \mathbf{A}_\mathbf{k} case in more detail, and any implications of the freedom to pick \mathbf{A}_0.

The general calculation of T^{\mu\nu} for the assumed Fourier solution should be possible too, but was not attempted. Doing that general calculation with a four dimensional Fourier series is likely tidier than working with scalar and spatial variables as done here.

Now that the math is out of the way (except possibly for the momentum, which doesn’t seem right), some discussion of implications and applications is also in order. My preference is to let the math sink in a bit first and mull over the momentum issues at leisure.

References

[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

Posted in Math and Physics Learning. | 2 Comments »

Electrodynamic field energy for vacuum.

Posted by peeterjoot on December 19, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation.

We now know how to formulate the energy momentum tensor for complex vector fields (i.e. phasors) in the Geometric Algebra formalism. To recap, for the field F = \mathbf{E} + I c \mathbf{B}, where \mathbf{E} and \mathbf{B} may be complex vectors, we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution F of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary i, as in \mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E}), is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor T(a) was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for a = \gamma^0, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term T^{00} = T(\gamma^0) \cdot \gamma^0 and the Poynting spatial vector T(\gamma^0) \wedge \gamma^0 with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [1]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

Setup

Let’s assume a Fourier representation for the four vector potential A for the field F = \nabla \wedge A. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all wave number triplets \mathbf{k} = (p/\lambda_1,q/\lambda_2,r/\lambda_3). The Fourier coefficients A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu are allowed to be complex valued, as is the resulting four vector A, and the associated bivector field F.

Fourier inversion follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{2 \pi i \mathbf{k}' \cdot \mathbf{x}} e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \sum_{m=1}^3 \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)
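Since everything that follows leans on the orthogonality relationship (6), a quick numerical sanity check is reassuring. This sketch (the period and sample count are arbitrary choices) verifies the one dimensional version, from which the 3D integral follows as a product:

```python
import numpy as np

# Check the orthogonality relation (6) in one dimension; the 3D integral
# factors into a product of such 1D integrals.  Uniform samples over a
# full period make the Riemann sum exact for integer-frequency
# exponentials.  L (the period) and N (sample count) are arbitrary.
L, N = 2.0, 64
x = np.arange(N) * (L / N)
for p2 in range(-3, 4):
    for p in range(-3, 4):
        integrand = np.exp(2j * np.pi * p2 * x / L) * np.exp(-2j * np.pi * p * x / L)
        val = integrand.sum() * (L / N) / L      # (1/L) * integral over [0, L)
        assert abs(val - (1.0 if p == p2 else 0.0)) < 1e-12
```

It is exactly this delta function structure that will collapse the double sum over \mathbf{k}', \mathbf{k} below into a single sum.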

We can now form the energy density

\begin{aligned}U = T(\gamma^0) \cdot \gamma^0=-\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} \gamma^0 F \gamma^0 \Bigr).\end{aligned} \quad\quad\quad(9)

With implied summation over all repeated integer indexes (even without matching uppers and lowers), this is

\begin{aligned}U =-\frac{\epsilon_0}{2} \sum_{\mathbf{k}', \mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}'}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}'}}}^{*} \frac{2 \pi i k_m'}{\lambda_m} \right) e^{-2 \pi i \mathbf{k}' \cdot \mathbf{x}}\gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}\gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(10)

The grade selection used here doesn’t change the result since we already have a scalar, but will just make it convenient to filter out any higher order products that will cancel anyway. Integrating over the volume element and taking advantage of the orthogonality relationship (6), the exponentials are removed, leaving the energy contained in the volume

\begin{aligned}H = -\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2}\sum_{\mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}}}}^{*} \frac{2 \pi i k_m}{\lambda_m} \right) \gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) \gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(11)

First reduction of the Hamiltonian.

Let’s take the products involved in sequence, one at a time, evaluating each of the following, and later adding and taking real parts as required

\begin{aligned}\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 (\gamma^0 \wedge \dot{A}_\mathbf{k}) \gamma^0 }}\right\rangle &=-\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle \end{aligned} \quad\quad\quad(12)

\begin{aligned}- \frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) \gamma^0}}\right\rangle &=\frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(13)

\begin{aligned}\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle &=-\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) ( \gamma^n \wedge A_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(14)

\begin{aligned}-\frac{4 \pi^2 k_m k_n}{\lambda_m \lambda_n}\left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0(\gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle. &\end{aligned} \quad\quad\quad(15)

The expectation is to obtain a Hamiltonian for the field that has the structure of harmonic oscillators, where the middle two products would have to be zero or sum to zero or have real parts that sum to zero. The first is expected to contain only products of {\left\lvert{{\dot{A}_\mathbf{k}}^m}\right\rvert}^2, and the last only products of {\left\lvert{{A_\mathbf{k}}^m}\right\rvert}^2.

While initially guessing that (13) and (14) may cancel, this isn’t so obviously the case. The use of cyclic permutation of multivectors within the scalar grade selection operator \left\langle{{A B}}\right\rangle = \left\langle{{B A}}\right\rangle, plus a change of dummy summation indexes in one of the two, shows that this sum is of the form Z + {{Z}}^{*}. This sum is intrinsically real, so we can neglect one of the two and double the other, but we will still be required to show that the real part of either is zero.

Let’s reduce these one at a time, starting with (12), writing \dot{A}_\mathbf{k} = \kappa temporarily

\begin{aligned}\left\langle{{ (\gamma^0 \wedge {{\kappa}}^{*} ) (\gamma^0 \wedge \kappa) }}\right\rangle &={\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma^0 \gamma_m \gamma^0 \gamma_{m'} }}\right\rangle \\ &=-{\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma_m \gamma_{m'} }}\right\rangle  \\ &={\kappa^m}^{{*}} \kappa^{m'}\delta_{m m'}.\end{aligned}

So the first of our Hamiltonian terms is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_\mathbf{k}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle &=\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}{\left\lvert{{{\dot{A}}_{\mathbf{k}}}^m}\right\rvert}^2.\end{aligned} \quad\quad\quad(16)

Note that summation over m is still implied here, so we’d be better off with a spatial vector representation of the Fourier coefficients \mathbf{A}_\mathbf{k} = A_\mathbf{k} \wedge \gamma_0. With such a notation, this contribution to the Hamiltonian is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2} \dot{\mathbf{A}}_\mathbf{k} \cdot {{\dot{\mathbf{A}}_\mathbf{k}}}^{*}.\end{aligned} \quad\quad\quad(17)

To reduce (13) and (14), this time writing \kappa = A_\mathbf{k}, we can start with just the scalar selection

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) ( \gamma^0 \wedge \dot{\kappa} ) }}\right\rangle &=\Bigl( \gamma^m {{(\kappa^0)}}^{*} - {{\kappa}}^{*} \underbrace{(\gamma^m \cdot \gamma^0)}_{=0} \Bigr) \cdot \dot{\kappa} \\ &={{(\kappa^0)}}^{*} \dot{\kappa}^m\end{aligned}

Thus the contribution to the Hamiltonian from (13) and (14) is

\begin{aligned}\frac{2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \pi k_m}{c \lambda_m} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \dot{A_\mathbf{k}}^m \Bigr)=\frac{2 \pi \epsilon_0 \lambda_1 \lambda_2 \lambda_3}{c} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \Bigr).\end{aligned} \quad\quad\quad(18)

Most definitely not zero in general. Our final expansion (15) is the messiest. Again with A_\mathbf{k} = \kappa for short, the grade selection of this term in coordinates is

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- {{\kappa_\mu}}^{*} \kappa^\nu   \left\langle{{ (\gamma^m \wedge \gamma^\mu) \gamma^0 (\gamma_n \wedge \gamma_\nu) \gamma^0 }}\right\rangle\end{aligned} \quad\quad\quad(19)

Expanding this out yields

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- ( {\left\lvert{\kappa^0}\right\rvert}^2 - {\left\lvert{\kappa^a}\right\rvert}^2 ) \delta_{m n} + {{\kappa^n}}^{*} \kappa^m.\end{aligned} \quad\quad\quad(20)

The contribution to the Hamiltonian from this, with \phi_\mathbf{k} = A^0_\mathbf{k}, is then

\begin{aligned}2 \pi^2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \Bigl(-\mathbf{k}^2 {{\phi_\mathbf{k}}}^{*} \phi_\mathbf{k} + \mathbf{k}^2 ({{\mathbf{A}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k})+ (\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*}) (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\Bigr).\end{aligned} \quad\quad\quad(21)

A final reassembly of the Hamiltonian from the parts (17) and (18) and (21) is then

\begin{aligned}H = \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2 c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{2 \pi}{c} \text{Real} \Bigl( i {{ \phi_\mathbf{k} }}^{*} (\mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k}) \Bigr)+2 \pi^2 \Bigl(\mathbf{k}^2 ( -{\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 ) + {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(22)

This is finally reduced to a completely real expression, and one without any explicit Geometric Algebra. All the four vector Fourier coefficients have been written out explicitly in terms of the spacetime split A_\mathbf{k} = (\phi_\mathbf{k}, \mathbf{A}_\mathbf{k}), which is natural since an explicit time and space split was the starting point.

Gauge transformation to simplify the Hamiltonian.

While (22) has considerably simpler form than (11), what was expected was something that looked like the harmonic oscillator. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into harmonic oscillator form.

If we are to change our four vector potential A \rightarrow A + \nabla \psi, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(23)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(24)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a \psi such that A^0 + \partial_0 \psi = 0. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(25)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \int_0^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(26)

With such a transformation, the \phi_\mathbf{k} and \dot{\mathbf{A}}_\mathbf{k} cross term in the Hamiltonian (22) vanishes, as does the \phi_\mathbf{k} term in the four vector square of the last term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 + {\left\lvert{ ( 2 \pi c \mathbf{k}) \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(27)

Additionally, wedging (5) with \gamma_0 now does not lose any information, so our potential Fourier series is reduced to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(28)

The desired harmonic oscillator form would be had in (27) if it were not for the \mathbf{k} \cdot \mathbf{A}_\mathbf{k} term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While \mathbf{A} = A \wedge \gamma_0, the lack of an A^0 component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(30)

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(31)

So, with this A^0 = 0 gauge choice the bivector field F is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(32)

From the left the gradient action on A is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} -  \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(33)

For this vacuum case, premultiplication of Maxwell’s equation by \gamma_0 gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in \mathbf{A}

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(34)

If the divergence of the vector potential is zero we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ &=2 \pi i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

Since \mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t), there are two ways for \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0. For each \mathbf{k} \ne 0 there must be a requirement for either \mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0 or \mathbf{A}_\mathbf{k} = \text{constant}. The constant \mathbf{A}_\mathbf{k} solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?
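The per-mode divergence formula above is easy to verify symbolically for a single Fourier mode. In this sketch \mathbf{k} stands for the full wave number triplet (the 1/\lambda_m factors folded in), and the constant coefficient names are illustrative:

```python
import sympy as sp

# Verify div(A_k e^{2 pi i k.x}) = 2 pi i (A_k . k) e^{2 pi i k.x}
# for a single Fourier mode with constant coefficients A1, A2, A3.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
k1, k2, k3 = sp.symbols('k1 k2 k3', real=True)
A1, A2, A3 = sp.symbols('A1 A2 A3')
phase = sp.exp(2 * sp.pi * sp.I * (k1*x1 + k2*x2 + k3*x3))
A = [A1 * phase, A2 * phase, A3 * phase]
div = sum(sp.diff(Am, xm) for Am, xm in zip(A, (x1, x2, x3)))
expected = 2 * sp.pi * sp.I * (A1*k1 + A2*k2 + A3*k3) * phase
assert sp.simplify(div - expected) == 0
```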

The more interesting seeming case is where we have some non-static time varying state. In this case, if \mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0 for all \mathbf{k} \ne 0, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(36)

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(37)
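Each transverse mode in this Hamiltonian is an independent harmonic oscillator with angular frequency \omega_\mathbf{k} = 2 \pi c {\left\lvert{\mathbf{k}}\right\rvert}. As a sanity check, a crude numerical integration of one mode’s equation of motion should conserve the per-mode energy (the \epsilon_0 \lambda_1 \lambda_2 \lambda_3 / c^2 prefactor is dropped and all numerical values are illustrative):

```python
import numpy as np

# One transverse mode obeys d^2A/dt^2 = -omega^2 A with
# omega = 2 pi c |k|, so the per-mode energy
#   (1/2) Adot^2 + (1/2) (omega A)^2
# should be conserved.  Semi-implicit (symplectic) Euler keeps the
# energy error bounded over many periods.
c, k = 1.0, 1.0
omega = 2 * np.pi * c * k
A, Adot = 1.0, 0.0
dt, steps = 1e-4, 20000
energy = lambda A, Adot: 0.5 * Adot**2 + 0.5 * (omega * A)**2
E0 = energy(A, Adot)
for _ in range(steps):
    Adot -= omega**2 * A * dt    # update velocity first (symplectic Euler)
    A += Adot * dt
assert abs(energy(A, Adot) - E0) / E0 < 1e-2
```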

Conclusions and followup.

The objective was met: a reproduction of Bohm’s harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear: that automatically cuts complexity from the results. Figuring out how to work this problem with complex valued potentials while also using the Geometric Algebra formulation probably made the work a bit more difficult, since blundering through both simultaneously was required instead of tackling one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential \phi upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads me all makes sense.

As a bit of followup, I’d like to consider the constant \mathbf{A}_\mathbf{k} case, and any implications of the freedom to pick \mathbf{A}_0. I’d also like to construct the Poynting vector T(\gamma^0) \wedge \gamma_0, and see what the structure of that is with this Fourier representation.

A general calculation of T^{\mu\nu} for an assumed Fourier solution should be possible too, but working in spatial quantities for the general case is probably torture. A four dimensional Fourier series is likely a superior option for the general case.

References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , | 1 Comment »

Energy and momentum for Complex electric and magnetic field phasors.

Posted by peeterjoot on December 15, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation.

In [1] a complex phasor representations of the electric and magnetic fields is used

\begin{aligned}\mathbf{E} &= \boldsymbol{\mathcal{E}} e^{-i\omega t} \\ \mathbf{B} &= \boldsymbol{\mathcal{B}} e^{-i\omega t}.\end{aligned} \quad\quad\quad(1)

Here the vectors \boldsymbol{\mathcal{E}} and \boldsymbol{\mathcal{B}} are allowed to take on complex values. Jackson uses the real part of these complex vectors as the true fields, so one is really interested in just these quantities

\begin{aligned}\text{Real} \mathbf{E} &= \boldsymbol{\mathcal{E}}_r \cos(\omega t) + \boldsymbol{\mathcal{E}}_i \sin(\omega t) \\ \text{Real} \mathbf{B} &= \boldsymbol{\mathcal{B}}_r \cos(\omega t) + \boldsymbol{\mathcal{B}}_i \sin(\omega t),\end{aligned} \quad\quad\quad(3)

but carry the whole thing in manipulations to make things simpler. It is stated that the energy for such complex vector fields takes the form (ignoring constant scaling factors and units)

\begin{aligned}\text{Energy} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + \mathbf{B} \cdot {\mathbf{B}}^{*}.\end{aligned} \quad\quad\quad(5)

In some ways this is an obvious generalization. Less obvious is how this and the Poynting vector are related in their corresponding conservation relationships.
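One quick way to see why the conjugated dot products are the natural generalization: the time average of the squared real field recovers exactly half the conjugated product. A numerical sketch (the amplitude and frequency are arbitrary values):

```python
import numpy as np

# For E(t) = Re(Ec e^{-i w t}) with a complex amplitude vector Ec, the
# time average of |E(t)|^2 over one period is (1/2) Re(Ec . Ec*).
# Sampling the period uniformly (endpoint excluded) makes the average
# exact up to floating point.
rng = np.random.default_rng(0)
Ec = rng.normal(size=3) + 1j * rng.normal(size=3)   # arbitrary complex vector
w = 3.0
t = np.linspace(0.0, 2 * np.pi / w, 4096, endpoint=False)
field = np.real(np.exp(-1j * w * t)[:, None] * Ec[None, :])
avg = np.mean(np.sum(field**2, axis=1))
assert np.isclose(avg, 0.5 * np.real(Ec @ np.conj(Ec)))
```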

Here I explore this, employing a Geometric Algebra representation of the energy momentum tensor based on the real field representation found in [2]. Given the complex valued fields and a requirement that both the real and imaginary parts of the field satisfy Maxwell’s equation, it should be possible to derive the conservation relationship between the energy density and Poynting vector from first principles.

Review of GA formalism for real fields.

In SI units the Geometric algebra form of Maxwell’s equation is

\begin{aligned}\nabla F &= J/\epsilon_0 c,\end{aligned} \quad\quad\quad(6)

where one has for the symbols

\begin{aligned}F &= \mathbf{E} + c I \mathbf{B} \\ I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 \\ \mathbf{E} &= E^k \gamma_k \gamma_0  \\ \mathbf{B} &= B^k \gamma_k \gamma_0  \\ (\gamma^0)^2 &= -(\gamma^k)^2 = 1 \\ \gamma^\mu \cdot \gamma_\nu &= {\delta^\mu}_\nu \\ J &= c \rho \gamma_0 + J^k \gamma_k \\ \nabla &= \gamma^\mu \partial_\mu = \gamma^\mu {\partial {}}/{\partial {x^\mu}}.\end{aligned} \quad\quad\quad(7)
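These relations can be spot checked with a concrete matrix representation. Here is a sketch using the standard Dirac matrices (the post itself works representation-free; the matrices are only a verification device):

```python
import numpy as np

# Verify the (+,-,-,-) signature relations and I^2 = -1 using the
# standard Dirac representation of the gamma matrices.
s = [np.array([[0, 1], [1, 0]], dtype=complex),       # Pauli matrices
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
g = [np.block([[np.eye(2), Z], [Z, -np.eye(2)]])]     # gamma_0
g += [np.block([[Z, sk], [-sk, Z]]) for sk in s]      # gamma_1..gamma_3
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]          # {gamma_mu, gamma_nu}
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
I = g[0] @ g[1] @ g[2] @ g[3]                         # pseudoscalar
assert np.allclose(I @ I, -np.eye(4))                 # I^2 = -1
```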

The symmetric electrodynamic energy momentum tensor for real fields \mathbf{E} and \mathbf{B} is

\begin{aligned}T(a) &= \frac{-\epsilon_0}{2} F a F = \frac{\epsilon_0}{2} F a \tilde{F}.\end{aligned} \quad\quad\quad(15)

It may not be obvious that this is in fact a four vector, but this can be seen since it can only have grade one and three components, and also equals its reverse implying that the grade three terms are all zero. To illustrate this explicitly consider the components of T^{\mu 0}

\begin{aligned}\frac{2}{\epsilon_0} T(\gamma^0) &= -(\mathbf{E} + c I \mathbf{B}) \gamma^0 (\mathbf{E} + c I \mathbf{B}) \\ &= (\mathbf{E} + c I \mathbf{B}) (\mathbf{E} - c I \mathbf{B}) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2 + c I (\mathbf{B} \mathbf{E} - \mathbf{E} \mathbf{B})) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c I ( \mathbf{B} \wedge \mathbf{E} ) \gamma^0 \\ &= (\mathbf{E}^2 + c^2 \mathbf{B}^2) \gamma^0 + 2 c ( \mathbf{E} \times \mathbf{B} ) \gamma^0 \\ \end{aligned}

Our result is a four vector in the Dirac basis as expected

\begin{aligned}T(\gamma^0) &= T^{\mu 0} \gamma_\mu \\ T^{0 0} &= \frac{\epsilon_0}{2} (\mathbf{E}^2 + c^2 \mathbf{B}^2) \\ T^{k 0} &= c \epsilon_0 (\mathbf{E} \times \mathbf{B})_k \end{aligned} \quad\quad\quad(16)

Similar expansions are possible for the general tensor components T^{\mu\nu} but lets defer this more general expansion until considering complex valued fields. The main point here is to remind oneself how to express the energy momentum tensor in a fashion that is natural in a GA context. We also know that one has a conservation relationship associated with the divergence of this tensor \nabla \cdot T(a) (ie. \partial_\mu T^{\mu\nu}), and want to rederive this relationship after guessing what form the GA expression for the energy momentum tensor takes when one allows the field vectors to take complex values.
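A quick plug-in check of the component form (16): for a vacuum plane wave, where {\left\lvert{\mathbf{B}}\right\rvert} = {\left\lvert{\mathbf{E}}\right\rvert}/c, the flux components T^{k0} have magnitude equal to the energy density T^{00} (the field strength below is an arbitrary illustrative value):

```python
import numpy as np

# For a plane wave with E along x, B along y, and |B| = |E|/c, the
# components (16) satisfy |T^{k0}| = T^{00} (energy transported at c).
eps0, c = 8.854e-12, 2.998e8
E = np.array([100.0, 0.0, 0.0])            # V/m
B = np.array([0.0, 100.0 / c, 0.0])        # T, |B| = |E|/c
T00 = 0.5 * eps0 * (E @ E + c**2 * (B @ B))
Tk0 = c * eps0 * np.cross(E, B)
assert np.isclose(np.linalg.norm(Tk0), T00)
```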

Computing the conservation relationship for complex field vectors.

As in (5), if one wants

\begin{aligned}T^{0 0} \propto \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*},\end{aligned} \quad\quad\quad(19)

it is reasonable to assume that our energy momentum tensor will take the form

\begin{aligned}T(a) &= \frac{\epsilon_0}{4} \left( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \right)= \frac{\epsilon_0}{2} \text{Real} \left( {{F}}^{*} a \tilde{F} \right)\end{aligned} \quad\quad\quad(20)

For real vector fields this reduces to the previous results and should produce the desired mix of real and imaginary dot products for the energy density term of the tensor. This is also a real four vector even when the field is complex, so the energy density and power density terms will all be real valued, which seems desirable.

Expanding the tensor. Easy parts.

As with real fields expansion of T(a) in terms of \mathbf{E} and \mathbf{B} is simplest for a = \gamma^0. Let’s start with that.

\begin{aligned}\frac{4}{\epsilon_0} T(\gamma^0) \gamma_0&=-({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} )\gamma^0 (\mathbf{E} + c I \mathbf{B}) \gamma_0-(\mathbf{E} + c I \mathbf{B} )\gamma^0 ({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) \gamma_0 \\ &=({\mathbf{E}}^{*} + c I {\mathbf{B}}^{*} ) (\mathbf{E} - c I \mathbf{B}) +(\mathbf{E} + c I \mathbf{B} ) ({\mathbf{E}}^{*} - c I {\mathbf{B}}^{*} ) \\ &={\mathbf{E}}^{*} \mathbf{E} + \mathbf{E} {\mathbf{E}}^{*} + c^2 ({\mathbf{B}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{B}}^{*} ) + c I ( {\mathbf{B}}^{*} \mathbf{E} - {\mathbf{E}}^{*} \mathbf{B} + \mathbf{B} {\mathbf{E}}^{*} - \mathbf{E} {\mathbf{B}}^{*} ) \\ &=2 \mathbf{E} \cdot {\mathbf{E}}^{*} + 2 c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}+ 2 c ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ).\end{aligned}

This gives

\begin{aligned}T(\gamma^0) &=\frac{\epsilon_0}{2} \left( \mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*} \right) \gamma^0+ \frac{\epsilon_0 c}{2} ( \mathbf{E} \times {\mathbf{B}}^{*} + {\mathbf{E}}^{*} \times \mathbf{B} ) \gamma^0\end{aligned} \quad\quad\quad(21)

The sum of {{F}}^{*} a F and its conjugate has produced the desired energy density expression. An implication of this is that one can form and take real parts of a complex Poynting vector \mathbf{S} \propto \mathbf{E} \times {\mathbf{B}}^{*} to calculate the momentum density. This is stated but not demonstrated in Jackson, perhaps considered too obvious or messy to derive.

Observe that the choice to work with complex valued vector fields gives a nice consistency, and one has the same factor of 1/2 in both the energy and momentum terms. While the energy term is obviously real, the momentum terms can be written in an explicitly real notation as well, since one has a quantity plus its conjugate. Using a more conventional four vector notation (omitting the explicit Dirac basis vectors), one can write this out as a strictly real quantity.

\begin{aligned}T(\gamma^0) &=\epsilon_0 \Bigl( \frac{1}{{2}}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*}),c \text{Real}( \mathbf{E} \times {\mathbf{B}}^{*} ) \Bigr)\end{aligned} \quad\quad\quad(22)

Observe that when the vector fields are restricted to real quantities, the conjugate and real part operators can be dropped and the real vector field result (16) is recovered.

Expanding the tensor. Messier parts.

I intended here to compute T(\gamma^k), and my starting point was a decomposition of the field vectors into components that anticommute or commute with \gamma^k

\begin{aligned}\mathbf{E} &= \mathbf{E}_\parallel + \mathbf{E}_\perp \\ \mathbf{B} &= \mathbf{B}_\parallel + \mathbf{B}_\perp.\end{aligned} \quad\quad\quad(23)

The components parallel to the spatial vector \sigma_k = \gamma_k \gamma_0 are anticommuting \gamma^k \mathbf{E}_\parallel = -\mathbf{E}_\parallel \gamma^k, whereas the perpendicular components commute \gamma^k \mathbf{E}_\perp = \mathbf{E}_\perp \gamma^k. The expansion of the tensor products is then

\begin{aligned}({{F}}^{*} \gamma^k \tilde{F} + \tilde{F} \gamma^k {{F}}^{*}) \gamma_k&= - ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) \gamma^k ( \mathbf{E}_\parallel + \mathbf{E}_\perp + c I ( \mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \gamma_k \\ &- (\mathbf{E} + I c \mathbf{B}) \gamma^k ( {\mathbf{E}_\parallel}^{*} + {\mathbf{E}_\perp}^{*} + c I ( {\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \gamma_k \\ &=  ({\mathbf{E}}^{*} + I c {\mathbf{B}}^{*}) ( \mathbf{E}_\parallel - \mathbf{E}_\perp + c I ( -\mathbf{B}_\parallel + \mathbf{B}_\perp ) ) \\ &+ (\mathbf{E} + I c \mathbf{B}) ( {\mathbf{E}_\parallel}^{*} - {\mathbf{E}_\perp}^{*} + c I ( -{\mathbf{B}_\parallel}^{*} + {\mathbf{B}_\perp}^{*} ) ) \\ \end{aligned}

This isn’t particularly pretty to expand out. I did attempt it, but my result looked wrong. For the application I have in mind I do not actually need anything more than T^{\mu 0}, so rather than show something wrong, I’ll just omit it (at least for now).

Calculating the divergence.

Working with (20), let’s calculate the divergence and see what one finds for the corresponding conservation relationship.

\begin{aligned}\frac{4}{\epsilon_0} \nabla \cdot T(a) &=\left\langle{{ \nabla ( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} )}}\right\rangle \\ &=-\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} a + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F a }}\right\rangle \\ &=-{\left\langle{{ F \stackrel{ \leftrightarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftrightarrow }\nabla F }}\right\rangle}_{1} \cdot a \\ &=-{\left\langle{{ F \stackrel{ \rightarrow }\nabla {{F}}^{*} +F \stackrel{ \leftarrow }\nabla {{F}}^{*} + {{F}}^{*} \stackrel{ \leftarrow }\nabla F+ {{F}}^{*} \stackrel{ \rightarrow }\nabla F}}\right\rangle}_{1} \cdot a \\ &=-\frac{1}{{\epsilon_0 c}} {\left\langle{{ F {{J}}^{*} - J {{F}}^{*} - {{J}}^{*} F+ {{F}}^{*} J}}\right\rangle}_{1} \cdot a \\ &= \frac{2}{\epsilon_0 c} a \cdot ( J \cdot {{F}}^{*} + {{J}}^{*} \cdot F) \\ &= \frac{4}{\epsilon_0 c} a \cdot \text{Real} ( J \cdot {{F}}^{*} ).\end{aligned}

We have then for the divergence

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(25)

Let’s write out J \cdot {{F}}^{*} in the (stationary) observer frame where J = (c\rho + \mathbf{J}) \gamma_0. This is

\begin{aligned}J \cdot {{F}}^{*} &={\left\langle{{ (c\rho + \mathbf{J}) \gamma_0 ( {\mathbf{E}}^{*} + I c {\mathbf{B}}^{*} ) }}\right\rangle}_{1} \\ &=- (\mathbf{J} \cdot {\mathbf{E}}^{*} ) \gamma_0- c \left( \rho {\mathbf{E}}^{*} + \mathbf{J} \times {\mathbf{B}}^{*}\right) \gamma_0\end{aligned}

Writing out the four divergence relationships in full one has

\begin{aligned}\nabla \cdot T(\gamma^0) &= - \frac{1}{{ c }} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \\ \nabla \cdot T(\gamma^k) &= - \text{Real} \left( \rho {{(E^k)}}^{*} + (\mathbf{J} \times {\mathbf{B}}^{*})_k \right)\end{aligned} \quad\quad\quad(26)

Just as in the real field case one has a nice relativistic split into energy density and force (momentum change) components, but one has to take real parts and conjugate half the terms appropriately when one has complex fields.

Combining the divergence relation for T(\gamma^0) with (22), the conservation relation for this subset of the energy momentum tensor becomes

\begin{aligned}\frac{1}{{c}} \frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ c \epsilon_0 \text{Real} \boldsymbol{\nabla} \cdot (\mathbf{E} \times {\mathbf{B}}^{*} )=- \frac{1}{{c}} \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) \end{aligned} \quad\quad\quad(28)

Or

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \text{Real} \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0\end{aligned} \quad\quad\quad(29)

It is this last term that puts some meaning behind Jackson’s treatment since we now know how the energy and momentum are related as a four vector quantity in this complex formalism.
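The divergence term in this conservation law rests on the standard identity \boldsymbol{\nabla} \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{A}) - \mathbf{A} \cdot (\boldsymbol{\nabla} \times \mathbf{B}). As a quick sanity check (not part of the original derivation), that identity can be verified symbolically with sympy for arbitrary component functions:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# Arbitrary (undefined) vector field components
E = [sp.Function(f'E{i}')(x, y, z) for i in range(3)]
B = [sp.Function(f'B{i}')(x, y, z) for i in range(3)]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def cross(A, B):
    return [A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0]]

def dot(A, B):
    return sum(a*b for a, b in zip(A, B))

# div(E x B) = B . curl(E) - E . curl(B)
lhs = div(cross(E, B))
rhs = dot(B, curl(E)) - dot(E, curl(B))
assert sp.simplify(lhs - rhs) == 0
```

The same check applies with \mathbf{B} replaced by {\mathbf{B}}^{*}, since conjugation commutes with the spatial derivatives.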

While I’ve used geometric algebra to get to this final result, I would be interested to compare how the intermediate mess compares with the same complex field vector result obtained via traditional vector techniques. I am sure I could try this myself, but am not interested enough to attempt it.

Instead, now that this result is obtained, it is possible to proceed to applications. My intention is to try the vacuum electromagnetic energy density example from [3] using complex exponential Fourier series instead of the doubled sum of sines and cosines that Bohm used.

References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

[2] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.


Electromagnetic Gauge invariance.

Posted by peeterjoot on September 24, 2009

[Click here for a PDF of this post with nicer formatting]

At the end of section 12.1 in Jackson [1] he states that it is obvious that the Lorentz force equations are gauge invariant.

\begin{aligned}\frac{d \mathbf{p}}{dt} &= e \left( \mathbf{E} + \frac{\mathbf{u}}{c} \times \mathbf{B} \right) \\ \frac{d E}{dt} &= e \mathbf{u} \cdot \mathbf{E} \end{aligned} \quad\quad\quad(1)

Since I didn’t remember what gauge invariance was, it wasn’t so obvious to me. But looking ahead to problem 12.2 on this invariance, we have a gauge transformation defined in four vector form as

\begin{aligned}A^\alpha \rightarrow A^\alpha + \partial^\alpha \psi\end{aligned} \quad\quad\quad(3)

In vector form with A = \gamma_\alpha A^\alpha, this gauge transformation can be written

\begin{aligned}A \rightarrow A + \nabla \psi\end{aligned} \quad\quad\quad(4)

so this is really a statement that we add a spacetime gradient of something to the four vector potential. Given this, how does the field transform?

\begin{aligned}F &= \nabla \wedge A \\ &\rightarrow \nabla \wedge (A + \nabla \psi) \\ &= F + \nabla \wedge \nabla \psi\end{aligned}

But \nabla \wedge \nabla \psi = 0 (assuming partials are interchangeable), so the field is invariant regardless of whether we are talking about the field equations themselves

\begin{aligned}\nabla F = J/\epsilon_0 c\end{aligned} \quad\quad\quad(5)

or the Lorentz force

\begin{aligned}\frac{dp}{d\tau} = e F \cdot v/c\end{aligned} \quad\quad\quad(6)

So, once you know the definition of the gauge transformation in four vector form, yes, this is justifiably obvious. However, to anybody who is not familiar with Geometric Algebra, perhaps this is still not so obvious. How does this translate to the more commonplace tensor or spacetime vector notations? The tensor four vector translation is the easier of the two, and there we have

\begin{aligned}F^{\alpha\beta} &= \partial^\alpha A^\beta -\partial^\beta A^\alpha \\ &\rightarrow \partial^\alpha (A^\beta + \partial^\beta \psi) -\partial^\beta (A^\alpha + \partial^\alpha \psi) \\ &= F^{\alpha\beta} + \partial^\alpha \partial^\beta \psi -\partial^\beta \partial^\alpha \psi \end{aligned}

As required for \nabla \wedge \nabla \psi = 0, interchange of partials means the field components F^{\alpha\beta} are unchanged by adding this gradient. Finally, in plain old spatial vector form, how is this gauge invariance expressed?
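The tensor-form invariance above can also be checked symbolically. Here is a minimal sympy sketch (an illustration, not part of the original calculation) that builds F^{\alpha\beta} = \partial^\alpha A^\beta - \partial^\beta A^\alpha with a diagonal +--- metric and confirms it is unchanged by A^\alpha \rightarrow A^\alpha + \partial^\alpha \psi:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
X = (t, x, y, z)
g = [1, -1, -1, -1]  # diagonal +--- metric signature

# Arbitrary contravariant potential components A^a and gauge function psi
A = [sp.Function(f'A{a}')(*X) for a in range(4)]
psi = sp.Function('psi')(*X)

def d_up(a, f):
    # contravariant derivative: \partial^a = g^{aa} \partial_a (diagonal metric)
    return g[a] * sp.diff(f, X[a])

def F(Apot):
    # F^{ab} = \partial^a A^b - \partial^b A^a
    return [[d_up(a, Apot[b]) - d_up(b, Apot[a]) for b in range(4)]
            for a in range(4)]

# Gauge transformed potential: A^a -> A^a + \partial^a psi
A_gauge = [A[a] + d_up(a, psi) for a in range(4)]

# All sixteen components are unchanged, since mixed partials commute
for a in range(4):
    for b in range(4):
        assert sp.simplify(F(A)[a][b] - F(A_gauge)[a][b]) == 0
```

The cancellation happens exactly as in the derivation above: the gauge term contributes \partial^\alpha \partial^\beta \psi - \partial^\beta \partial^\alpha \psi, which vanishes by interchange of partials.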

In components we have

\begin{aligned}A^0 &\rightarrow A^0 + \partial^0 \psi = \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t} \\ A^k &\rightarrow A^k + \partial^k \psi = A^k - \frac{\partial \psi}{\partial x^k}\end{aligned} \quad\quad\quad(7)

This last in vector form is \mathbf{A} \rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi, where the sign inversion comes from \partial^k = -\partial_k = -\partial/\partial x^k, assuming a +--- metric.

We want to apply this to the electric and magnetic field components

\begin{aligned}\mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{1}{{c}}\frac{\partial \mathbf{A}}{\partial t} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \quad\quad\quad(9)

The electric field transforms as

\begin{aligned}\mathbf{E} &\rightarrow -\boldsymbol{\nabla} \left( \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t}\right) - \frac{1}{{c}}\frac{\partial }{\partial t} \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{E} -\frac{1}{{c}} \boldsymbol{\nabla} \frac{\partial \psi}{\partial t} + \frac{1}{{c}}\frac{\partial }{\partial t} \boldsymbol{\nabla} \psi \end{aligned}

With partial interchange this is just \mathbf{E}. For the magnetic field we have

\begin{aligned}\mathbf{B} &\rightarrow \boldsymbol{\nabla} \times \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{B}  - \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi \end{aligned}

Again, since the partials interchange, we have \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi = 0, so this is just the magnetic field.
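These two spatial-vector transformations can also be checked symbolically. The following sympy sketch (an illustration, not part of the original post) builds \mathbf{E} and \mathbf{B} from the Gaussian-units potentials above and confirms both are unchanged by the gauge transformation:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)

# Arbitrary scalar potential, vector potential, and gauge function
phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(f'A{i}')(t, x, y, z) for i in range(3)]
psi = sp.Function('psi')(t, x, y, z)

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def fields(phi_, A_):
    # Gaussian units: E = -grad(phi) - (1/c) dA/dt, B = curl(A)
    E = [-grad(phi_)[i] - sp.diff(A_[i], t)/c for i in range(3)]
    return E, curl(A_)

# Gauge transformed potentials: phi -> phi + (1/c) dpsi/dt, A -> A - grad(psi)
phi_g = phi + sp.diff(psi, t)/c
A_g = [A[i] - grad(psi)[i] for i in range(3)]

E0, B0 = fields(phi, A)
E1, B1 = fields(phi_g, A_g)
assert all(sp.simplify(E1[i] - E0[i]) == 0 for i in range(3))
assert all(sp.simplify(B1[i] - B0[i]) == 0 for i in range(3))
```

The electric field cancellation mirrors the by-hand calculation above (the two \psi terms interchange partials and cancel), and the magnetic field check is just curl of a gradient vanishing.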

Alright. Having worked this three different ways, now I can say it's obvious.

References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.
