Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘delta function’

A final pre-exam update of my notes compilation for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on April 22, 2013

Here’s my third update of my notes compilation for this course, including all of the following:

April 21, 2013 Fermi function expansion for thermodynamic quantities

April 20, 2013 Relativistic Fermi Gas

April 10, 2013 Non integral binomial coefficient

April 10, 2013 energy distribution around mean energy

April 09, 2013 Velocity volume element to momentum volume element

April 04, 2013 Phonon modes

April 03, 2013 BEC and phonons

April 03, 2013 Max entropy, fugacity, and Fermi gas

April 02, 2013 Bosons

April 02, 2013 Relativistic density of states

March 28, 2013 Bosons

plus everything detailed in the description of my previous update and before.

Posted in Math and Physics Learning.

PHY452H1S Basic Statistical Mechanics. Problem Set 7: BEC and phonons

Posted by peeterjoot on April 10, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Bose-Einstein condensation (BEC) in one and two dimensions

Obtain the density of states N(\epsilon) in one and two dimensions for a particle with an energy-momentum relation

\begin{aligned}E_\mathbf{k} = \frac{\hbar^2 \mathbf{k}^2}{2 m}.\end{aligned} \hspace{\stretch{1}}(1.1)

Using this, show that for particles whose number is conserved the BEC transition temperature vanishes in these cases – so we can always pick a chemical potential \mu < 0 which preserves a constant density at any temperature.

Answer

We’d like to evaluate

\begin{aligned}N_d(\epsilon) \equiv\sum_\mathbf{k}\delta(\epsilon - \epsilon_\mathbf{k})\approx\frac{L^d}{(2 \pi)^d} \int d^d \mathbf{k} \delta\left( \epsilon - \frac{\hbar^2 k^2}{2 m} \right).\end{aligned} \hspace{\stretch{1}}(1.2)

We’ll use

\begin{aligned}\delta(g(x)) = \sum_{x_0} \frac{\delta(x - x_0)}{\left\lvert {g'(x_0)} \right\rvert},\end{aligned} \hspace{\stretch{1}}(1.3)

where the roots of g(x) are x_0. With

\begin{aligned}g(k) = \epsilon - \frac{\hbar^2 k^2}{2 m},\end{aligned} \hspace{\stretch{1}}(1.4)

the roots k^{*} of g(k) = 0 are

\begin{aligned}k^{*} = \pm \sqrt{\frac{2 m \epsilon }{\hbar^2}}.\end{aligned} \hspace{\stretch{1}}(1.5)

The derivative of g(k) evaluated at these roots is

\begin{aligned}g'(k^{*}) &= -\frac{\hbar^2 k^{*}}{m} \\ &= \mp \frac{\hbar^2}{m}\frac{\sqrt{2 m \epsilon}}{ \hbar } \\ &= \mp \frac{\hbar \sqrt{2 m \epsilon} }{m}.\end{aligned} \hspace{\stretch{1}}(1.6)

In 2D, we can evaluate over a shell in k space

\begin{aligned}N_2(\epsilon) &= \frac{A}{(2 \pi)^2} \int_0^\infty 2 \pi k dk\left( \delta \left( k - k^{*}  \right) + \delta \left( k + k^{*}  \right)  \right)\frac{m}{\hbar \sqrt{2 m \epsilon} } \\ &= \frac{A}{2 \pi} k^{*} \frac{m}{\hbar^2 k^{*} }\end{aligned} \hspace{\stretch{1}}(1.7)

or

\begin{aligned}\boxed{N_2(\epsilon) = \frac{2 \pi A m}{h^2}.}\end{aligned} \hspace{\stretch{1}}(1.8)

In 1D we have

\begin{aligned}N_1(\epsilon) &= \frac{L}{2 \pi} \int_{-\infty}^\infty dk\left( \delta \left( k - k^{*}  \right) + \delta \left( k + k^{*}  \right)  \right)\frac{m}{\hbar \sqrt{2 m \epsilon} } \\ &= \frac{2 L}{2 \pi} \frac{m}{\hbar \sqrt{2 m \epsilon} }.\end{aligned} \hspace{\stretch{1}}(1.9)

Observe that this time, unlike the 2D case where we used a radial shell in k space, we have contributions from both roots of the delta function. Our end result is

\begin{aligned}\boxed{N_1(\epsilon) =\frac{2 L}{h} \sqrt{\frac{m}{2 \epsilon}}.}\end{aligned} \hspace{\stretch{1}}(1.10)
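As a quick numerical sanity check of these two boxed results, here is a small Python sketch (mine, not part of the original post). It works in natural units \hbar = m = L = 1 (arbitrary illustrative choices) with a periodic box, and compares a direct count of momentum states below an energy against the integrals of N_1 and N_2:

```python
# A quick numeric sanity check of the boxed 1D and 2D results, in
# natural units hbar = m = L = 1 (arbitrary illustrative choices).
# We count periodic-box momentum states below energy eps and compare
# against the integrated densities of states.
import numpy as np

hbar = m = L = 1.0
h = 2.0 * np.pi * hbar
eps = 4000.0

# direct counts: k = (2 pi / L) n, keep hbar^2 k^2 / (2 m) < eps
nmax = int(np.sqrt(2.0 * m * eps) / hbar * L / (2.0 * np.pi)) + 1
n = np.arange(-nmax, nmax + 1)
count_1d = np.count_nonzero((2.0 * np.pi * n / L) ** 2 < 2.0 * m * eps / hbar**2)
nx, ny = np.meshgrid(n, n)
k2 = (2.0 * np.pi / L) ** 2 * (nx**2 + ny**2)
count_2d = np.count_nonzero(k2 < 2.0 * m * eps / hbar**2)

# integrated densities of states:
# int_0^eps (2L/h) sqrt(m/(2 e)) de = (2L/h) sqrt(2 m eps), and
# int_0^eps (2 pi A m / h^2) de with A = L^2
integral_1d = (2.0 * L / h) * np.sqrt(2.0 * m * eps)
integral_2d = (2.0 * np.pi * L**2 * m / h**2) * eps

print(count_1d, integral_1d)  # 29 vs ~28.5
print(count_2d, integral_2d)  # ~640 vs ~636.6
```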

To consider the question of the BEC temperature, we’ll need to calculate the density. For the 2D case we have

\begin{aligned}\rho = \frac{N}{A} &= \frac{1}{A} A \int \frac{d^2 \mathbf{k}}{(2 \pi)^2} f(\epsilon_\mathbf{k}) \\ &= \frac{1}{A} \frac{2 \pi A m}{h^2}\int_0^\infty d\epsilon \frac{1}{{ z^{-1} e^{\beta \epsilon} -1 }} \\ &= \frac{2 \pi m}{h^2 \beta}\int_0^\infty dx \frac{1}{{ z^{-1} e^{x} -1 }} \\ &= -\frac{2 \pi m k_{\mathrm{B}} T}{h^2} \ln (1 - z) \\ &= -\frac{1}{{\lambda^2}} \ln (1 - z).\end{aligned} \hspace{\stretch{1}}(1.11)

Recall for the 3D case that we had an upper bound as z \rightarrow 1. We don’t have that for this 2D density, so for any value of k_{\mathrm{B}} T > 0, a corresponding value of z can be found. That is

\begin{aligned}z &= 1 - e^{-\rho \lambda^2} \\ &= 1 - e^{-\rho h^2/(2 \pi m k_{\mathrm{B}} T)}.\end{aligned} \hspace{\stretch{1}}(1.12)

For the 1D case we have

\begin{aligned}\rho &= \frac{N}{L} \\ &= \frac{1}{L} L \int \frac{dk}{2 \pi} f(\epsilon_\mathbf{k}) \\ &= \frac{1}{L} \frac{2 L}{h} \sqrt{\frac{m}{2}}\int_0^\infty d\epsilon \frac{1}{{\sqrt{\epsilon}}}\frac{1}{{ z^{-1} e^{\beta \epsilon} -1 }} \\ &= \frac{1}{{h}} \sqrt{\frac{2 m}{\beta}} \int_0^\infty dx \frac{x^{1/2 - 1}}{z^{-1} e^x - 1} \\ &= \frac{1}{{h}} \sqrt{\frac{2 m}{\beta}} \Gamma(1/2) f^-_{1/2}(z),\end{aligned} \hspace{\stretch{1}}(1.13)

or

\begin{aligned}\rho= \frac{1}{{\lambda}} f^-_{1/2}(z).\end{aligned} \hspace{\stretch{1}}(1.14)

See fig. 1.1 for plots of f^-_\nu(z) for \nu \in \{1/2, 1, 3/2\}, the results for the 1D, 2D and 3D densities respectively.

Fig 1.1: Density integrals for 1D, 2D and 3D cases

We’ve found that f^-_{1/2}(z) is also unbounded as z \rightarrow 1, so while we cannot invert this easily as in the 2D case, we can at least say that there will be some z for any value of k_{\mathrm{B}} T > 0 that allows the density (and thus the number of particles) to remain fixed.
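That divergence is easy to see numerically. Here is a small Python sketch (mine, not part of the original post) that evaluates f^-_{1/2}(z) = (1/\Gamma(1/2)) \int_0^\infty x^{-1/2} dx/(z^{-1} e^x - 1), using the substitution x = u^2 to remove the endpoint singularity, alongside the closed form -\ln(1 - z) for the 2D case:

```python
# Numeric look at the divergence of the 1D and 2D density integrals
# as z -> 1 (an illustration; the substitution x = u^2 tames the
# x^{-1/2} endpoint of the integrand).
import numpy as np
from scipy.integrate import quad

def f_minus_half(z):
    # after x = u^2: (2/sqrt(pi)) int_0^infty du / (z^{-1} e^{u^2} - 1)
    return 2.0 / np.sqrt(np.pi) * quad(
        lambda u: 1.0 / (np.exp(u * u) / z - 1.0), 0.0, 8.0)[0]

for z in (0.9, 0.99, 0.999, 0.9999):
    print(z, f_minus_half(z), -np.log(1.0 - z))
# both columns grow without bound as z -> 1, in contrast to the 3D
# integral f^-_{3/2}(z), which saturates at zeta(3/2) ~ 2.612
```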

Question: Estimating the BEC transition temperature

Find data for the atomic mass of liquid {}^4 He and its density at ambient atmospheric pressure and hence estimate its BEC temperature assuming interactions are unimportant (even though this assumption is a very bad one!).

For dilute atomic gases of the sort used in Professor Thywissen’s lab, one typically has a cloud of 10^6 atoms confined to an approximate cubic region with linear dimension 1 \mu\,m. Find the density – it is pretty low, so interactions can be assumed to be extremely weak. Assuming these are {}^{87} Rb atoms, estimate the BEC transition temperature.

Answer

With an atomic weight of 4.0026, the mass in grams for one atom of Helium is

\begin{aligned}4.0026 \,\text{amu} \times \frac{\text{g}}{6.022 \times 10^{23} \text{amu}} &= 6.64 \times 10^{-24} \text{g} \\ &= 6.64 \times 10^{-27} \text{kg}.\end{aligned} \hspace{\stretch{1}}(1.15)

With the density of liquid He-4 at its 4.2 K boiling point, 125 grams per liter, the number density is

\begin{aligned}\rho &= \frac{\text{mass}}{\text{volume}} \times \frac{1}{{\text{mass of one He atom}}} \\ &= \frac{125 \text{g}}{10^{-3} m^3} \times \frac{1}{{6.64 \times 10^{-24} g}} \\ &= 1.88 \times 10^{28} m^{-3}.\end{aligned} \hspace{\stretch{1}}(1.16)

In class the T_{\mathrm{BEC}} was found to be

\begin{aligned}T_{\mathrm{BEC}} &= \frac{1}{k_{\mathrm{B}}} \left( \frac{\rho}{\zeta(3/2)}  \right)^{2/3} \frac{ 2 \pi \hbar^2}{M} \\ &= \frac{1}{{1.3806488 \times 10^{-23} m^2 kg/s^2/K}} \left( \frac{\rho}{ 2.61238 }  \right)^{2/3} \frac{ 2 \pi (1.05457173 \times 10^{-34} m^2 kg / s)^2}{M} \\ &= 2.66824 \times 10^{-45} \frac{\rho^{2/3}}{M} K.\end{aligned} \hspace{\stretch{1}}(1.17)

So for liquid helium we have

\begin{aligned}T_{\mathrm{BEC}} &= 2.66824 \times 10^{-45} \left( 1.88 \times 10^{28}  \right)^{2/3} \frac{1}{{ 6.64 \times 10^{-27} }} K \\ &= 2.84 K.\end{aligned} \hspace{\stretch{1}}(1.18)

The number density for the gas in Thywissen’s lab is

\begin{aligned}\rho &= \frac{10^6}{(10^{-6} \text{m})^3} \\ &= 10^{24} m^{-3}.\end{aligned} \hspace{\stretch{1}}(1.19)

The mass of an atom of {}^{87} Rb is

\begin{aligned}86.90 \,\text{amu} \times \frac{10^{-3} \text{kg}}{6.022 \times 10^{23} \text{amu}} = 1.443 \times 10^{-25} \text{kg},\end{aligned} \hspace{\stretch{1}}(1.20)

which gives us

\begin{aligned}T_{\mathrm{BEC}} &= 2.66824 \times 10^{-45} \left( 10^{24}  \right)^{2/3} \frac{1}{{ 1.443 \times 10^{-25} }} K \\ &= 1.85 \times 10^{-4} K.\end{aligned} \hspace{\stretch{1}}(1.21)
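Both of these estimates are easy to verify with a few lines of Python (a sketch using standard constant values, not part of the original post):

```python
# A quick arithmetic check of both transition temperature estimates.
import math

hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J / K
zeta_3_2 = 2.61238     # zeta(3/2)

def T_BEC(rho, M):
    # T_BEC = (2 pi hbar^2 / (M kB)) (rho / zeta(3/2))^(2/3)
    return 2 * math.pi * hbar**2 / (M * kB) * (rho / zeta_3_2) ** (2 / 3)

print(T_BEC(1.88e28, 6.64e-27))    # liquid 4He: ~2.8 K
print(T_BEC(1.0e24, 1.443e-25))    # 87Rb cloud: ~1.9e-4 K
```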

Question: Phonons in two dimensions

Consider phonons (quanta of lattice vibrations) which obey a dispersion relation

\begin{aligned}E_\mathbf{k} = \hbar v \left\lvert {\mathbf{k}} \right\rvert\end{aligned} \hspace{\stretch{1}}(1.22)

for small momenta \left\lvert {\mathbf{k}} \right\rvert, where v is the speed of sound. Assuming a two-dimensional crystal, phonons only propagate along the plane containing the atoms. Find the specific heat of this crystal due to phonons at low temperature. Recall that phonons are not conserved, so there is no chemical potential associated with maintaining a fixed phonon density.

Answer

The energy density of the system is

\begin{aligned}\frac{E}{V} &= \int \frac{d^2 \mathbf{k}}{(2 \pi)^2} \frac{\epsilon}{ e^{\beta \epsilon} - 1 } \\ &= \int d\epsilon \frac{N(\epsilon)}{V} \frac{\epsilon}{ e^{\beta \epsilon} - 1 }.\end{aligned} \hspace{\stretch{1}}(1.23)

For the density of states we have

\begin{aligned}\frac{N(\epsilon) }{V} &= \int \frac{d^2 \mathbf{k}}{(2 \pi)^2} \delta( \epsilon - \epsilon_\mathbf{k} ) \\ &= \frac{1}{{(2 \pi)^2}} 2 \pi \int_0^\infty k dk \delta( \epsilon - \hbar v k ) \\ &= \frac{1}{{2 \pi}} \int_0^\infty k dk \delta \left( k - \frac{\epsilon}{\hbar v}  \right) \frac{1}{{\hbar v}} \\ &= \frac{1}{{2 \pi}} \frac{\epsilon}{(\hbar v)^2}.\end{aligned} \hspace{\stretch{1}}(1.24)

Plugging back into the energy density we have

\begin{aligned}\frac{E}{V} &= \frac{1}{2 \pi (\hbar v)^2}\int_0^\infty d\epsilon \frac{\epsilon^2}{ e^{\beta \epsilon} - 1 } \\ &= \frac{\zeta(3) \left( k_{\mathrm{B}} T \right)^3 }{\pi (\hbar v)^2},\end{aligned} \hspace{\stretch{1}}(1.25)

where we have used \int_0^\infty dx\, x^2/(e^x - 1) = \Gamma(3) \zeta(3) = 2 \zeta(3) \approx 2.40411, with \zeta(3) \approx 1.20206. Taking derivatives we have

\begin{aligned}\boxed{C_V = \frac{dE}{dT} = V\frac{3 \zeta(3) k_{\mathrm{B}}^3 T^2 }{\pi (\hbar v)^2}.}\end{aligned} \hspace{\stretch{1}}(1.26)
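A quick numerical check of the dimensionless integral used in the last step (a Python sketch, not part of the original post):

```python
# Verify int_0^infty x^2/(e^x - 1) dx = Gamma(3) zeta(3) = 2 zeta(3).
from scipy.integrate import quad
from scipy.special import zeta
import numpy as np

val = quad(lambda x: x**2 / np.expm1(x), 1e-12, 100.0)[0]
print(val, 2 * zeta(3))  # 2.404113... vs 2.404113...
```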

Posted in Math and Physics Learning.

Relativistic density of states

Posted by peeterjoot on April 2, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Setup

For photons and high velocity particles our non-relativistic density of states is insufficient. Let’s redo these calculations for particles for which the energy is given by

\begin{aligned}\epsilon = \sqrt{ \left( m c^2 \right)^2 + (p c)^2 }.\end{aligned} \hspace{\stretch{1}}(1.1)

We want to convert a sum over momentum values to an energy integral

\begin{aligned}\mathcal{D}_d(\epsilon) &= \sum_\mathbf{p} \delta( \epsilon - \epsilon_\mathbf{p} ) \\ &\rightarrow L^d\int \frac{d^d \mathbf{k}}{(2 \pi)^d}\delta( \epsilon - \epsilon_\mathbf{p} ) \\ &= L^d\int \frac{d^d \mathbf{p}}{(2 \pi \, \hbar)^d}\delta( \epsilon - \epsilon_\mathbf{p} ) \\ &= L^d\int \frac{d^d (c \mathbf{p})}{(c h)^d}\delta( \epsilon - \epsilon_\mathbf{p} ).\end{aligned} \hspace{\stretch{1}}(1.2)

Now we want to use

\begin{aligned}\delta(g(x)) = \sum_{x_0} \frac{ \delta(x - x_0)}{ \left\lvert {g'(x)} \right\rvert_{x = x_0}},\end{aligned} \hspace{\stretch{1}}(1.3)

where x_0 are the roots of g(x). With

\begin{aligned}g( cp ) = \epsilon - \sqrt{ \left( m c^2  \right)^2 + ( c p )^2 }.\end{aligned} \hspace{\stretch{1}}(1.4)

Writing p^{*} for the roots we have

\begin{aligned}c p^{*} = \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }.\end{aligned} \hspace{\stretch{1}}(1.5)

Note that

\begin{aligned}\sqrt{ \left( m c^2  \right)^2 + ( c p^{*} )^2 }=\sqrt{ \epsilon^2 } = \epsilon,\end{aligned} \hspace{\stretch{1}}(1.6)

so we have

\begin{aligned}\left\lvert {g'( c p )} \right\rvert_{p = p^{*}}= \frac{1}{{2}} \frac{2 (c p^{*})}{\sqrt{ \left( m c^2  \right)^2 + ( c p^{*} )^2 }}=\frac{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } }{\epsilon}.\end{aligned} \hspace{\stretch{1}}(1.7)

3D case

We can now evaluate the density of states, and do the 3D case first. We have

\begin{aligned}\mathcal{D}_3(\epsilon)=\frac{V}{ (c h)^3 } \int_0^\infty 4 \pi (c p)^2 d (c p)\left( \delta \left( c p - \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right) + \delta \left( c p + \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right)  \right)\frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } }.\end{aligned} \hspace{\stretch{1}}(1.8)

Observe that in the switch to spherical coordinates in momentum space, our integration is now over a “radius” of momentum space, requiring just integration over the positive values. This will kill off one of our delta functions, leaving just

\begin{aligned}\mathcal{D}_3(\epsilon)=\frac{4 \pi V}{ (c h)^3 } \left( \epsilon^2 - \left( m c^2  \right)^2  \right)\frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } },\end{aligned} \hspace{\stretch{1}}(1.9)

or

\begin{aligned}\boxed{\mathcal{D}_3(\epsilon)=\frac{4 \pi V}{ (c h)^3 } \epsilon \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }.}\end{aligned} \hspace{\stretch{1}}(1.10)

In particular, for very high energy particles where \epsilon \gg m c^2, our 3D density of states is

\begin{aligned}\boxed{\mathcal{D}_3(\epsilon)\approx\frac{4 \pi V}{ (c h)^3 } \epsilon^2.}\end{aligned} \hspace{\stretch{1}}(1.11)

This is also the desired result for photons or other massless particles.
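As a cross-check of the 3D result, the cumulative number of states below energy \epsilon is N(<\epsilon) = (4 \pi V/3)(p^{*}/h)^3, and its derivative should reproduce \mathcal{D}_3(\epsilon). Here is a quick numeric comparison in illustrative natural units (c = h = V = 1 and m c^2 = 1 are my arbitrary choices, not from the post):

```python
# Numeric cross-check: d/d(eps) of the cumulative state count should
# match D_3(eps) = (4 pi V/(c h)^3) eps sqrt(eps^2 - (m c^2)^2).
import numpy as np

c = h = V = 1.0
mc2 = 1.0

def N_below(e):
    cp = np.sqrt(e**2 - mc2**2)  # c p* = sqrt(eps^2 - (m c^2)^2)
    return (4.0 * np.pi * V / 3.0) * (cp / c) ** 3 / h**3

eps, de = 3.0, 1e-6
numeric = (N_below(eps + de) - N_below(eps - de)) / (2.0 * de)
analytic = 4.0 * np.pi * V / (c * h) ** 3 * eps * np.sqrt(eps**2 - mc2**2)
print(numeric, analytic)  # both ~106.6
```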

2D case

For 2D we have

\begin{aligned}\mathcal{D}_2(\epsilon)=\frac{A}{ (c h)^2 } \int_0^\infty 2 \pi \left\lvert {c p} \right\rvert d (c p)\left( \delta \left( c p - \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right) + \delta \left( c p + \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right)  \right)\frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } }.\end{aligned} \hspace{\stretch{1}}(1.12)

Note again that we are dealing with a “radius” over this shell of momentum space volume. This is a strictly positive value. That, and the corresponding integration range, are important in this case since including the negative range of c p would kill the entire density function because of the pair of delta functions. That wasn’t the case in 3D, where it would have resulted in a factor-of-two error instead. Continuing the evaluation we have

\begin{aligned}\mathcal{D}_2(\epsilon)=\frac{2 \pi A}{ (c h)^2 } \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } \frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } },\end{aligned} \hspace{\stretch{1}}(1.13)

or

\begin{aligned}\boxed{\mathcal{D}_2(\epsilon)=\frac{2 \pi A}{ (c h)^2 } \epsilon.}\end{aligned} \hspace{\stretch{1}}(1.14)

Observe that the mass dependence cancels completely in 2D, so this same expression holds without approximation for an extreme relativistic gas where \epsilon \gg m c^2, and for photons where m = 0.

1D case

\begin{aligned}\mathcal{D}_1(\epsilon)=\frac{L}{ c h } \int d (c p)\left( \delta \left( c p - \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right) + \delta \left( c p + \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 }  \right)  \right)\frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } }.\end{aligned} \hspace{\stretch{1}}(1.15)

Question: For the 1D case we don’t switch to spherical or cylindrical coordinates, so it looks like the second delta function has to be included, with the integration range running over both positive and negative values of c p?

Assuming that’s the case, we have

\begin{aligned}\boxed{\mathcal{D}_1(\epsilon)=\frac{2 L}{ c h } \frac{ \epsilon }{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2 } },}\end{aligned} \hspace{\stretch{1}}(1.16)

and for \epsilon \gg m c^2 or m = 0

\begin{aligned}\boxed{\mathcal{D}_1(\epsilon)=\frac{2 L}{ c h }.}\end{aligned} \hspace{\stretch{1}}(1.17)

Posted in Math and Physics Learning.

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.

Posted in Math and Physics Learning.

PHY452H1S Basic Statistical Mechanics. Lecture 17: Fermi gas thermodynamics. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 26, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Fermi gas thermodynamics

  • Energy was found to be

    \begin{aligned}\frac{E}{N} = \frac{3}{5} \epsilon_{\mathrm{F}}\qquad \text{at} \quad T = 0.\end{aligned} \hspace{\stretch{1}}(1.2.1)

  • Pressure was found to have the form shown in fig. 1.1

    Fig 1.1: Pressure in Fermi gas

  • The chemical potential was found to have the form shown in fig. 1.2.

    \begin{aligned}e^{\beta \mu} = \rho \lambda_{\mathrm{T}}^3\end{aligned} \hspace{\stretch{1}}(1.0.2a)

    \begin{aligned}\lambda_{\mathrm{T}} = \frac{h}{\sqrt{ 2 \pi m k_{\mathrm{B}} T}},\end{aligned} \hspace{\stretch{1}}(1.0.2b)

    so that the zero crossing is approximately when

    \begin{aligned}e^{\beta \times 0} = 1 = \rho \lambda_{\mathrm{T}}^3.\end{aligned} \hspace{\stretch{1}}(1.0.3)

    That last identification provides the relation T \sim  T_{\mathrm{F}}. FIXME: that bit wasn’t clear to me.

    Fig 1.2: Chemical potential in Fermi gas

How about at other temperatures?

  • \mu(T) = ?
  • E(T) = ?
  • C_{\mathrm{V}}(T) = ?

We had

\begin{aligned}N = \sum_k \frac{1}{{e^{\beta (\epsilon_k - \mu)} + 1}} = \sum_{\mathbf{k}} n_{\mathrm{F}}(\epsilon_\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.0.4)

\begin{aligned}E(T) =\sum_k \epsilon_\mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}).\end{aligned} \hspace{\stretch{1}}(1.0.5)

FIXME: references to earlier sections where these were derived.

We can define a density of states

\begin{aligned}\sum_\mathbf{k} &= \sum_\mathbf{k} \int_{-\infty}^\infty d\epsilon  \delta(\epsilon  - \epsilon_\mathbf{k}) \\ &= \int_{-\infty}^\infty d\epsilon \sum_\mathbf{k}\delta(\epsilon  - \epsilon_\mathbf{k}),\end{aligned} \hspace{\stretch{1}}(1.0.6)

where the liberty to informally switch the order of summation and integration has been used. This construction allows us to write a more general sum

\begin{aligned}\sum_\mathbf{k} f(\epsilon_\mathbf{k}) &= \sum_\mathbf{k} \int_{-\infty}^\infty d\epsilon  \delta(\epsilon  - \epsilon_\mathbf{k}) f(\epsilon_\mathbf{k}) \\ &= \sum_\mathbf{k}\int_{-\infty}^\infty d\epsilon \delta(\epsilon  - \epsilon_\mathbf{k})f(\epsilon) \\ &=\int_{-\infty}^\infty d\epsilon  f(\epsilon)\left( \sum_\mathbf{k} \delta(\epsilon  - \epsilon_\mathbf{k}) \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

This sum, evaluated using a continuum approximation, is

\begin{aligned}N(\epsilon ) &\equiv \sum_\mathbf{k}\delta(\epsilon  - \epsilon_\mathbf{k}) \\ &= \frac{V}{(2 \pi)^3} \int d^3 \mathbf{k} \delta\left( \epsilon  - \frac{\hbar^2 k^2}{2 m} \right) \\ &= \frac{V}{(2 \pi)^3} 4 \pi \int_0^\infty k^2 dk \delta\left( \epsilon  - \frac{\hbar^2 k^2}{2 m} \right)\end{aligned} \hspace{\stretch{1}}(1.0.8)

Using

\begin{aligned}\delta(g(x)) = \sum_{x_0} \frac{\delta(x - x_0)}{\left\lvert {g'(x_0)} \right\rvert},\end{aligned} \hspace{\stretch{1}}(1.0.9)

where the roots of g(x) are x_0, we have

\begin{aligned}N(\epsilon ) &= \frac{V}{(2 \pi)^3} 4 \pi \int_0^\infty k^2 dk \delta\left( k - \frac{\sqrt{2 m \epsilon }}{\hbar} \right)\frac{m \hbar }{ \hbar^2 \sqrt{2 m \epsilon }} \\ &= \frac{V}{(2 \pi)^3} 2 \pi \frac{2 m \epsilon }{\hbar^2}\frac{2 m \hbar }{ \hbar^2 \sqrt{2 m \epsilon }} \\ &= V \left( \frac{2 m}{\hbar^2} \right)^{3/2} \frac{1}{{4 \pi^2}} \sqrt{\epsilon }.\end{aligned} \hspace{\stretch{1}}(1.0.10)

In 2D this would be

\begin{aligned}N(\epsilon ) \sim  V \int dk k \delta \left( \epsilon  - \frac{\hbar^2 k^2}{2m} \right) = V \frac{\sqrt{2 m \epsilon }}{\hbar} \frac{m \hbar}{\hbar^2 \sqrt{ 2 m \epsilon }} \sim  V\end{aligned} \hspace{\stretch{1}}(1.0.11)

and in 1D

\begin{aligned}N(\epsilon ) &\sim  V \int dk \delta \left( \epsilon  - \frac{\hbar^2 k^2}{2m} \right) \\ &= V \frac{m \hbar}{\hbar^2 \sqrt{ 2 m \epsilon }} \\ &\sim  \frac{1}{{\sqrt{\epsilon }}}.\end{aligned} \hspace{\stretch{1}}(1.0.12)

What happens when we have linear energy momentum relationships?

Suppose that we have a linear energy momentum relationship like

\begin{aligned}\epsilon_\mathbf{k} = v \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.13)

An example of such a relationship is the high velocity relation between the energy and momentum of a particle

\begin{aligned}\epsilon_\mathbf{k} = \sqrt{ m_0^2 c^4 + p^2 c^2 } \sim  \left\lvert {\mathbf{p}} \right\rvert c.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Another example is graphene, a carbon structure of the form fig. 1.3. The energy and momentum for such a structure are related roughly as shown in fig. 1.4, where

Fig 1.3: Graphene bond structure


Fig 1.4: Graphene energy momentum dependence


\begin{aligned}\epsilon_\mathbf{k} = \pm v_{\mathrm{F}} \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Continuing with the 3D case we have

FIXME: Is this (or how is this) related to the linear energy momentum relationships for Graphene like substances?

\begin{aligned}N = \int_0^\infty d\epsilon\,\underbrace{n_{\mathrm{F}}(\epsilon )}_{1/(e^{\beta (\epsilon  - \mu)} + 1)}\underbrace{N(\epsilon )}_{\propto \epsilon ^{1/2}}\end{aligned} \hspace{\stretch{1}}(1.0.16)

\begin{aligned}\rho &= \frac{N}{V} \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\int_0^\infty d\epsilon  \frac{\epsilon ^{1/2}}{z^{-1} e^{\beta \epsilon } + 1} \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\left( k_{\mathrm{B}} T \right)^{3/2}\int_0^\infty dx \frac{x^{1/2}}{z^{-1} e^{x} + 1}\end{aligned} \hspace{\stretch{1}}(1.0.17)

where z = e^{\beta \mu} as usual, and we write x = \beta \epsilon . For the low temperature asymptotic behavior see [1] appendix section E. For z large it can be shown that this is

\begin{aligned}\int_0^\infty dx \frac{x^{1/2}}{z^{-1} e^{x} + 1}\approx \frac{2}{3}\left( \ln z \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\ln z)^2}} \right),\end{aligned} \hspace{\stretch{1}}(1.0.18)

so that

\begin{aligned}\rho &\approx  \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\left( k_{\mathrm{B}} T \right)^{3/2}\frac{2}{3}\left( \ln z \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\ln z)^2}} \right) \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\frac{2}{3}\mu^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\beta \mu)^2}} \right) \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\frac{2}{3}\mu^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right) \\ &= \rho_{T = 0}\left( \frac{\mu}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.19)
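A numerical spot check of the large z asymptotic form 1.0.18 against direct integration (my sketch, not part of the notes):

```python
# Compare the integral int_0^infty x^{1/2} dx/(z^{-1} e^x + 1) with its
# large-z (Sommerfeld) asymptotic form.
import numpy as np
from scipy.integrate import quad

for lnz in (5.0, 10.0, 20.0):
    z = np.exp(lnz)
    exact = quad(lambda x: np.sqrt(x) / (np.exp(x) / z + 1.0),
                 0.0, lnz + 40.0)[0]
    approx = (2.0 / 3.0) * lnz**1.5 * (1.0 + np.pi**2 / (8.0 * lnz**2))
    print(lnz, exact, approx)
# the two columns agree closely already at ln z = 5
```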

Assuming a quadratic form for the chemical potential at low temperature as in fig. 1.5, we have

Fig 1.5: Assumed quadratic form for low temperature chemical potential


\begin{aligned}1 &= \left( \frac{\mu}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right) \\ &= \left( \frac{\epsilon_{\mathrm{F}} - a T^2}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}} - a T^2} \right)^2 \right) \\ &\approx  \left( 1 - \frac{3}{2} a \frac{T^2}{\epsilon_{\mathrm{F}}} \right)\left( 1 + \frac{\pi^2}{8} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}^2} \right) \\ &\approx  1 - \frac{3}{2} a \frac{T^2}{\epsilon_{\mathrm{F}}} + \frac{\pi^2}{8} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}^2},\end{aligned} \hspace{\stretch{1}}(1.0.20)

or

\begin{aligned}a = \frac{\pi^2}{12} \frac{k_{\mathrm{B}}^2}{\epsilon_{\mathrm{F}}}.\end{aligned} \hspace{\stretch{1}}(1.0.21)

We have used a Taylor expansion (1 + x)^n \approx  1 + n x for small x, for an end result of

\begin{aligned}\mu = \epsilon_{\mathrm{F}} - \frac{\pi^2}{12} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}}.\end{aligned} \hspace{\stretch{1}}(1.0.22)
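A numeric check of this low temperature expansion (my sketch, not from the notes), in units with k_{\mathrm{B}} = \epsilon_{\mathrm{F}} = 1: fix the density at its T = 0 value, solve for \mu(T), and compare with \mu \approx \epsilon_{\mathrm{F}} - (\pi^2/12) T^2/\epsilon_{\mathrm{F}}:

```python
# Solve for mu(T) at fixed density and compare to the Sommerfeld result.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def density(mu, T):
    # proportional to int_0^infty sqrt(e) de / (exp((e - mu)/T) + 1);
    # the tail beyond mu + 40 T is negligible
    f = lambda e: np.sqrt(e) / (np.exp((e - mu) / T) + 1.0)
    return quad(f, 0.0, mu + 40.0 * T)[0]

rho0 = 2.0 / 3.0  # the T = 0 density, int_0^1 sqrt(e) de

for T in (0.02, 0.05, 0.1):
    mu = brentq(lambda m: density(m, T) - rho0, 0.5, 1.5)
    print(T, mu, 1.0 - np.pi**2 * T**2 / 12.0)
# the last two columns agree to O(T^4)
```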

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning.

PHY450H1S. Relativistic Electrodynamics Lecture 18 (Taught by Prof. Erich Poppitz). Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 12, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 136-146: continued reminder of electrostatic Green’s function (136); the retarded Green’s function of the d’Alembert operator: derivation and properties (137-140); the solution of the d’Alembert equation with a source: retarded potentials (141-142)

Solving the forced wave equation.

See the notes for a complex variables and Fourier transform method of deriving the Green’s function. In class, we’ll just pull it out of a magic hat. We wish to solve

\begin{aligned}\square A^k = \partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(2.1)

(with a \partial_i A^i = 0 gauge choice).

Our Green’s method utilizes

\begin{aligned}\square_{(\mathbf{x}, t)} G(\mathbf{x} - \mathbf{x}', t - t') = \delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t')\end{aligned} \hspace{\stretch{1}}(2.2)

If we know such a function, our solution is simple to obtain

\begin{aligned}A^k(\mathbf{x}, t)= \int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t') G(\mathbf{x} - \mathbf{x}', t - t')\end{aligned} \hspace{\stretch{1}}(2.3)

Proof:

\begin{aligned}\square_{(\mathbf{x}, t)} A^k(\mathbf{x}, t)&=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\square_{(\mathbf{x}, t)}G(\mathbf{x} - \mathbf{x}', t - t') \\ &=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t') \\ &=\frac{4 \pi}{c} j^k(\mathbf{x}, t)\end{aligned}

Claim:

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(2.4)

This is the retarded Green’s function of the operator \square, where

\begin{aligned}\square G(\mathbf{x}, t) = \delta^3(\mathbf{x}) \delta(t)\end{aligned} \hspace{\stretch{1}}(2.5)

Proof of the d’Alembertian Green’s function

Our Prof is excellent at motivating any results that he pulls out of magic hats. He’s said that he’s included a derivation using Fourier transforms and tricky contour integration arguments in the class notes for anybody who is interested (and for those who also know how to do contour integration). For those who don’t know contour integration yet (some people are taking it concurrently), one can actually prove this by simply applying the wave equation operator to this function. This treats the delta function as a normal function that one can take the derivatives of, something that can be well defined in the context of generalized functions. Chugging ahead with this approach we have

\begin{aligned}\square G(\mathbf{x}, t)=\left(\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta\right)\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi c^2 {\left\lvert{\mathbf{x}}\right\rvert} }- \Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.6)

This starts things off and now things get a bit hairy. It’s helpful to consider a chain rule expansion of the Laplacian

\begin{aligned}\Delta (u v)&=\partial_{\alpha\alpha} (u v) \\ &=\partial_{\alpha} (v \partial_\alpha u+ u\partial_\alpha v) \\ &=(\partial_\alpha v) (\partial_\alpha u ) + v \partial_{\alpha\alpha} u+(\partial_\alpha u) (\partial_\alpha v ) + u \partial_{\alpha\alpha} v.\end{aligned}

In vector form this is

\begin{aligned}\Delta (u v) = u \Delta v + 2 (\boldsymbol{\nabla} u) \cdot (\boldsymbol{\nabla} v) + v \Delta u.\end{aligned} \hspace{\stretch{1}}(2.7)

Applying this to the Laplacian portion of 2.6 we have

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)\Delta\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}+\left(\boldsymbol{\nabla} \frac{1}{{2 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\right)\cdot\left(\boldsymbol{\nabla}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \right)+\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\Delta\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right).\end{aligned} \hspace{\stretch{1}}(2.8)

Here we make the identification

\begin{aligned}\Delta \frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }} = - \delta^3(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.9)

This could be considered a given from our knowledge of electrostatics, but it’s not too much work to just prove it.

An aside. Proving the Laplacian Green’s function.

If -1/{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} } is a Green’s function for the Laplacian, then the Laplacian of the convolution of this with a test function should recover that test function

\begin{aligned}\Delta \int d^3 \mathbf{x}' \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right) f(\mathbf{x}') = f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.10)

We can directly evaluate the LHS of this equation, following the approach in [2]. First note that the Laplacian can be pulled into the integral and operates only on the presumed Green’s function. For that operation we have

\begin{aligned}\Delta \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right)=-\frac{1}{{4 \pi}} \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.11)

It will be helpful to compute the gradient of various powers of {\left\lvert{\mathbf{x}}\right\rvert}

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}^a&=e_\alpha \partial_\alpha (x^\beta x^\beta)^{a/2} \\ &=e_\alpha \left(\frac{a}{2}\right) 2 x^\beta {\delta_\beta}^\alpha {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2} \\ &=a \mathbf{x} {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2}.\end{aligned}

In particular, when \mathbf{x} \ne 0, this gives us

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} &= \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &= -\frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^3}} &= -3 \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^5}.\end{aligned} \hspace{\stretch{1}}(2.12)

For the Laplacian of 1/{\left\lvert{\mathbf{x}}\right\rvert}, at the points \mathbf{x} \ne 0 where this is well defined we have

\begin{aligned}\Delta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &=\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} \\ &= -\partial_\alpha \frac{x^\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - x^\alpha \partial_\alpha \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - \mathbf{x} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} + 3 \frac{\mathbf{x}^2}{{\left\lvert{\mathbf{x}}\right\rvert}^5}\end{aligned}

So we have a zero. This means that the Laplacian operation

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \Delta \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}},\end{aligned} \hspace{\stretch{1}}(2.15)

can only have a value in a neighborhood of point \mathbf{x}. Writing \Delta = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla} \cdot -\frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(2.16)

Observing that \boldsymbol{\nabla} \cdot f(\mathbf{x} -\mathbf{x}') = -\boldsymbol{\nabla}' f(\mathbf{x} - \mathbf{x}') we can put this in a form that allows for use of Stokes theorem so that we can convert this to a surface integral

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla}' \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^2 \mathbf{x}' \mathbf{n} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= \int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\mathbf{x}' - \mathbf{x}}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= -\int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\epsilon^2}{\epsilon^4}\end{aligned}

where we use (\mathbf{x}' - \mathbf{x})/{\left\lvert{\mathbf{x}' - \mathbf{x}}\right\rvert} as the outwards normal for a sphere centered at \mathbf{x} of radius \epsilon. This integral is just -4 \pi, so we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{-4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.17)

Applying the Laplacian to the convolution of f(\mathbf{x}) with -1/4 \pi {\left\lvert{\mathbf{x}}\right\rvert} produces f(\mathbf{x}), allowing an identification of \Delta \left( -1/4 \pi {\left\lvert{\mathbf{x}}\right\rvert} \right) with a delta function, since the two have the same operational effect

\begin{aligned}\int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.18)
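As a small aside-to-the-aside, here is a finite-difference check (mine, not from the original notes) that 1/{\left\lvert{\mathbf{x}}\right\rvert} is harmonic away from the origin, which is the crux of the argument above (only the origin can contribute to the integral):

```python
# Discrete Laplacian of 1/r at a generic point away from the origin.
import numpy as np

def f(x, y, z):
    return 1.0 / np.sqrt(x * x + y * y + z * z)

p, h = np.array([0.7, -0.4, 1.1]), 1e-4
lap = sum(
    (f(*(p + h * e)) - 2.0 * f(*p) + f(*(p - h * e))) / h**2
    for e in np.eye(3)
)
print(lap)  # ~0, limited only by discretization error
```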

Returning to the d’Alembertian Green’s function.

We need two additional computations to finish the job, the gradient and the Laplacian of the delta function

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ? \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ?\end{aligned}

Consider \boldsymbol{\nabla} f(g(\mathbf{x})). This is

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))&=e_\alpha \frac{\partial {f(g(\mathbf{x}))}}{\partial {x^\alpha}} \\ &=e_\alpha \frac{\partial {f}}{\partial {g}} \frac{\partial {g}}{\partial {x^\alpha}},\end{aligned}

so we have

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))=\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g.\end{aligned} \hspace{\stretch{1}}(2.19)

The Laplacian is similar

\begin{aligned}\Delta f(g)&= \boldsymbol{\nabla} \cdot \left(\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g \right) \\ &= \partial_\alpha \left(\frac{\partial {f}}{\partial {g}} \partial_\alpha g \right) \\ &= \left( \partial_\alpha \frac{\partial {f}}{\partial {g}} \right) \partial_\alpha g +\frac{\partial {f}}{\partial {g}} \partial_{\alpha\alpha} g  \\ &= \frac{\partial^2 {{f}}}{\partial {{g}}^2} \left( \partial_\alpha g \right) (\partial_\alpha g)+\frac{\partial {f}}{\partial {g}} \Delta g,\end{aligned}

so we have

\begin{aligned}\Delta f(g)= \frac{\partial^2 {{f}}}{\partial {{g}}^2} (\boldsymbol{\nabla} g)^2 +\frac{\partial {f}}{\partial {g}} \Delta g\end{aligned} \hspace{\stretch{1}}(2.20)

With g(\mathbf{x}) = {\left\lvert{\mathbf{x}}\right\rvert}, we’ll need the Laplacian of this vector magnitude

\begin{aligned}\Delta {\left\lvert{\mathbf{x}}\right\rvert}&=\partial_\alpha \frac{x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} + x_\alpha \partial_\alpha (x^\beta x^\beta)^{-1/2} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} - \frac{x_\alpha x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned}

So we have

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &=\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned} \hspace{\stretch{1}}(2.21)

Now we have all the bits and pieces of 2.8 ready to assemble

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }&=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \\ &\quad +\frac{1}{{2\pi}} \left( - \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \right)\cdot-\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &\quad +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\left(\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2 }}\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \end{aligned}

Since we also have

\begin{aligned}\frac{1}{{c^2}} \partial_{tt}\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2}\end{aligned} \hspace{\stretch{1}}(2.23)

The \delta'' terms cancel out in the d’Alembertian, leaving just

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.24)

Noting that the spatial delta function is non-zero only when \mathbf{x} = 0, which means \delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c) = \delta(t) in this product, we finally have

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta(t) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.25)

We write

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.26)

Elaborating on the wave equation Green’s function

The Green’s function 2.26 is a distribution that is non-zero only on the future lightcone. Observe that for t < 0 we have

\begin{aligned}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)&=\delta\left(-{\left\lvert{t}\right\rvert} - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \\ &= 0.\end{aligned}

We say that G is supported only on the future light cone. At \mathbf{x} = 0, only the contributions for t > 0 matter. Note that in the “old days”, Green’s functions used to be called influence functions, a name that works particularly well in this case. We have other Green’s functions for the d’Alembertian. The one above is called the retarded Green’s function, and we also have an advanced Green’s function. Writing + for advanced and - for retarded these are

\begin{aligned}G_{\pm} = \frac{\delta\left(t \pm \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.27)

There are also causal and non-causal variations that won’t be of interest for this course.

This arms us now to solve any problem in the Lorentz gauge

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' dt' \frac{\delta\left(t - t' - \frac{{\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}}{c}\right)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}j^k(\mathbf{x}', t')+\text{An arbitrary collection of EM waves.}\end{aligned} \hspace{\stretch{1}}(3.28)

The additional EM waves are the possible contributions from the homogeneous equation.

Since \delta(t - t' - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c) is non-zero only when t' = t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c, the non-homogeneous parts of 3.28 reduce to

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' \frac{j^k(\mathbf{x}', t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c)}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(3.29)

Our potentials at time t and spatial position \mathbf{x} are completely specified in terms of the sums of the currents acting at the retarded time t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c. The field can only depend on the charge and current distribution in the past. Specifically, it can only depend on the charge and current distribution on the past light cone of the spacetime point at which we measure the field.

Example of the Green’s function. Consider a charged particle moving on a worldline

\begin{aligned}(c t, \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.30)

(c for classical)

For this particle

\begin{aligned}\rho(\mathbf{x}, t) &= e \delta^3(\mathbf{x} - \mathbf{x}_c(t)) \\ \mathbf{j}(\mathbf{x}, t) &= e \dot{\mathbf{x}}_c(t) \delta^3(\mathbf{x} - \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.31)

\begin{aligned}\begin{bmatrix}A^0(\mathbf{x}, t) \\ \mathbf{A}(\mathbf{x}, t)\end{bmatrix}&=\frac{1}{{c}}\int d^3 \mathbf{x}' dt'\frac{ \delta\left( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c \right) }{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\begin{bmatrix}c e \\ e \dot{\mathbf{x}}_c(t')\end{bmatrix}\delta^3(\mathbf{x}' - \mathbf{x}_c(t')) \\ &=\int_{-\infty}^\infty dt'\frac{ \delta\left( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c \right) }{{\left\lvert{\mathbf{x}_c(t') - \mathbf{x}}\right\rvert}}\begin{bmatrix}e \\ e \frac{\dot{\mathbf{x}}_c(t')}{c}\end{bmatrix}\end{aligned}

PICTURE: light cones, and curved worldline. Pick an arbitrary point (\mathbf{x}_0, t_0), and draw the past light cone, looking at where this intersects with the trajectory

For the arbitrary point (\mathbf{x}_0, t_0) we see that this point and the retarded time (\mathbf{x}_c(t_r), t_r) obey the relation

\begin{aligned}c (t_0 - t_r) = {\left\lvert{\mathbf{x}_0 - \mathbf{x}_c(t_r)}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.33)

This retarded time is unique. There is only one such intersection.

Our job is to calculate

\begin{aligned}\int_{-\infty}^\infty \delta(f(x)) g(x) dx = \frac{g(x_{*})}{{\left\lvert{f'(x_{*})}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(4.34)

where f(x_{*}) = 0.

\begin{aligned}f(t') = t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c\end{aligned} \hspace{\stretch{1}}(4.35)

\begin{aligned}\frac{\partial {f}}{\partial {t'}}&= -1 - \frac{1}{{c}} \frac{\partial {}}{\partial {t'}} \sqrt{ (\mathbf{x} - \mathbf{x}_c(t')) \cdot (\mathbf{x} - \mathbf{x}_c(t')) } \\ &= -1 + \frac{1}{{c}} \frac{(\mathbf{x} - \mathbf{x}_c(t')) \cdot \mathbf{v}_c(t')}{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\end{aligned}

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

Posted in Math and Physics Learning.

PHY450H1S (relativistic electrodynamics) Problem Set 3.

Posted by peeterjoot on March 2, 2011

[Click here for a PDF of this post with nicer formatting]

Disclaimer.

This problem set is as yet ungraded (although only the second question will be graded).

Problem 1. Fun with \epsilon_{\alpha\beta\gamma}, \epsilon^{ijkl}, F_{ij}, and the duality of Maxwell’s equations in vacuum.

1. Statement. rank 3 spatial antisymmetric tensor identities.

Prove that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}\end{aligned} \hspace{\stretch{1}}(2.1)

and use it to find the familiar relation for

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})\end{aligned} \hspace{\stretch{1}}(2.2)

Also show that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=2 \delta_{\alpha\mu}.\end{aligned} \hspace{\stretch{1}}(2.3)

(Einstein summation implied throughout this problem).

1. Solution

We can explicitly expand the (implied) sum over indexes \gamma. This is

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\epsilon_{\alpha \beta 1} \epsilon_{\mu \nu 1}+\epsilon_{\alpha \beta 2} \epsilon_{\mu \nu 2}+\epsilon_{\alpha \beta 3} \epsilon_{\mu \nu 3}\end{aligned} \hspace{\stretch{1}}(2.4)

For any \alpha \ne \beta only one term is non-zero. For example with \alpha,\beta = 2,3, we have just a contribution from the \gamma = 1 part of the sum

\begin{aligned}\epsilon_{2 3 1} \epsilon_{\mu \nu 1}.\end{aligned} \hspace{\stretch{1}}(2.5)

The value of this for (\mu,\nu) = (\alpha,\beta) is

\begin{aligned}(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.6)

whereas for (\mu,\nu) = (\beta,\alpha) we have

\begin{aligned}-(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.7)

Our sum has value one when (\alpha, \beta) matches (\mu, \nu), and value minus one when (\mu, \nu) are permuted. We can summarize this, by saying that when \alpha \ne \beta we have

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}.}\end{aligned} \hspace{\stretch{1}}(2.8)

However, observe that when \alpha = \beta the RHS is

\begin{aligned}\delta_{\alpha\mu} \delta_{\alpha\nu}-\delta_{\alpha\nu} \delta_{\alpha\mu} = 0,\end{aligned} \hspace{\stretch{1}}(2.9)

as desired, so this form works in general without any \alpha \ne \beta qualifier, completing this part of the problem.

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})&=(\epsilon_{\alpha \beta \gamma} \mathbf{e}^\alpha A^\beta B^\gamma ) \cdot(\epsilon_{\mu \nu \sigma} \mathbf{e}^\mu C^\nu D^\sigma ) \\ &=\epsilon_{\alpha \beta \gamma} A^\beta B^\gamma\epsilon_{\alpha \nu \sigma} C^\nu D^\sigma \\ &=(\delta_{\beta \nu} \delta_{\gamma\sigma}-\delta_{\beta \sigma} \delta_{\gamma\nu} )A^\beta B^\gamma C^\nu D^\sigma \\ &=A^\nu B^\sigma C^\nu D^\sigma-A^\sigma B^\nu C^\nu D^\sigma.\end{aligned}

This gives us

\begin{aligned}\boxed{(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})=(\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D})-(\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C}).}\end{aligned} \hspace{\stretch{1}}(2.10)
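A numeric spot-check of this identity with random vectors (an illustration of mine, not part of the original problem set):

```python
# Check (A x B).(C x D) = (A.C)(B.D) - (A.D)(B.C) for random vectors.
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.standard_normal((4, 3))
lhs = np.dot(np.cross(A, B), np.cross(C, D))
rhs = np.dot(A, C) * np.dot(B, D) - np.dot(A, D) * np.dot(B, C)
print(np.isclose(lhs, rhs))  # True
```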

We have one more identity to deal with.

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}\end{aligned} \hspace{\stretch{1}}(2.11)

We can expand out this (implied) sum slow and dumb as well

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}&=\epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+\epsilon_{\alpha 2 1} \epsilon_{\mu 2 1} \\ &+\epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+\epsilon_{\alpha 3 1} \epsilon_{\mu 3 1} \\ &+\epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}+\epsilon_{\alpha 3 2} \epsilon_{\mu 3 2} \\ &=2 \epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+ 2 \epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+ 2 \epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}\end{aligned}

Now, observe that for any \alpha \in \{1,2,3\} only one term of this sum is picked up. For example, with no loss of generality, pick \alpha = 1. We are left with only

\begin{aligned}2 \epsilon_{1 2 3} \epsilon_{\mu 2 3}\end{aligned} \hspace{\stretch{1}}(2.12)

This has the value

\begin{aligned}2 (\epsilon_{1 2 3})^2 = 2\end{aligned} \hspace{\stretch{1}}(2.13)

when \mu = \alpha and is zero otherwise. We can therefore summarize the evaluation of this sum as

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=  2\delta_{\alpha\mu},}\end{aligned} \hspace{\stretch{1}}(2.14)

completing this problem.
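Both boxed contraction identities are also easy to verify by brute force (an illustration of mine, not from the original post), using an explicitly constructed rank 3 Levi-Civita tensor:

```python
# Verify eps_{abg} eps_{mng} = d_{am} d_{bn} - d_{an} d_{bm}
# and eps_{abg} eps_{mbg} = 2 d_{am}.
from itertools import permutations
import numpy as np

eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    # sign of the permutation via an inversion count
    sign = (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    eps[p] = sign

delta = np.eye(3)
lhs = np.einsum('abg,mng->abmn', eps, eps)
rhs = np.einsum('am,bn->abmn', delta, delta) - np.einsum('an,bm->abmn', delta, delta)
print(np.array_equal(lhs, rhs))                                       # True
print(np.array_equal(np.einsum('abg,mbg->am', eps, eps), 2 * delta))  # True
```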

2. Statement. Determinant of three by three matrix.

Prove that for any 3 \times 3 matrix {\left\lVert{A_{\alpha\beta}}\right\rVert}: \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = \epsilon_{\alpha \beta \gamma} \text{Det} A and that \epsilon_{\alpha\beta\gamma} \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = 6 \text{Det} A.

2. Solution

In class Simon showed us how the first identity can be arrived at using the triple product \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \text{Det}(\mathbf{a} \mathbf{b} \mathbf{c}). It occurred to me later that I’d seen the identity to be proven in the context of Geometric Algebra, but hadn’t recognized it in this tensor form. Basically, a wedge product can be expanded in sums of determinants, and when the number of vectors matches the dimension of the space, we have a pseudoscalar times the determinant of the components.

For example, in \mathbb{R}^{2}, let’s take the wedge product of a pair of vectors. As preparation for the relativistic \mathbb{R}^{4} case, we won’t require an orthonormal basis, but will express the vectors in terms of a reciprocal frame and the associated components

\begin{aligned}a = a^i e_i = a_j e^j\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.16)

When we get to the relativistic case, we can pick (but don’t have to) the standard basis

\begin{aligned}e_0 &= (1, 0, 0, 0) \\ e_1 &= (0, 1, 0, 0) \\ e_2 &= (0, 0, 1, 0) \\ e_3 &= (0, 0, 0, 1),\end{aligned} \hspace{\stretch{1}}(2.17)

for which our reciprocal frame is implicitly defined by the metric

\begin{aligned}e^0 &= (1, 0, 0, 0) \\ e^1 &= (0, -1, 0, 0) \\ e^2 &= (0, 0, -1, 0) \\ e^3 &= (0, 0, 0, -1).\end{aligned} \hspace{\stretch{1}}(2.21)

Anyways. Back to the problem. Let’s examine the \mathbb{R}^{2} case. Our wedge product in coordinates is

\begin{aligned}a \wedge b=a^i b^j (e_i \wedge e_j)\end{aligned} \hspace{\stretch{1}}(2.25)

Since there are only two basis vectors we have

\begin{aligned}a \wedge b=(a^1 b^2 - a^2 b^1) e_1 \wedge e_2 = \text{Det} {\left\lVert{a^i b^j}\right\rVert} (e_1 \wedge e_2).\end{aligned} \hspace{\stretch{1}}(2.26)

Our wedge product is a product of the determinant of the vector coordinates, times the \mathbb{R}^{2} pseudoscalar e_1 \wedge e_2.

This doesn’t look quite like the \mathbb{R}^{3} relation that we want to prove, which had an antisymmetric tensor factor for the determinant. Observe that we get the determinant by picking off the e_1 \wedge e_2 component of the bivector result (the only component in this case), and we can do that by dotting with e^2 \cdot e^1. To get an antisymmetric tensor times the determinant, we have only to dot with a different pseudoscalar (one that differs by a possible sign due to permutation of the indexes). That is

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)&=a^i b^j (e^t \wedge e^s) \cdot (e_i \wedge e_j) \\ &=a^i b^j\left( {\delta^{s}}_i {\delta^{t}}_j-{\delta^{t}}_i {\delta^{s}}_j  \right) \\ &=a^i b^j{\delta^{[t}}_j {\delta^{s]}}_i \\ &=a^i b^j{\delta^{t}}_{[j} {\delta^{s}}_{i]} \\ &=a^{[i} b^{j]}{\delta^{t}}_{j} {\delta^{s}}_{i} \\ &=a^{[s} b^{t]}\end{aligned}

Now, if we write a^i = A^{1 i} and b^j = A^{2 j} we have

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)=A^{1 s} A^{2 t} -A^{1 t} A^{2 s}\end{aligned} \hspace{\stretch{1}}(2.27)

We can write this in two different ways. One of which is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s} =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned} \hspace{\stretch{1}}(2.28)

and the other of which is by introducing free indexes for 1 and 2, and summing antisymmetrically over these. That is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s}=A^{a s} A^{b t} \epsilon_{a b}\end{aligned} \hspace{\stretch{1}}(2.29)

So, we have

\begin{aligned}\boxed{A^{a s} A^{b t} \epsilon_{a b} =A^{1 i} A^{2 j} {\delta^{[t}}_j {\delta^{s]}}_i =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert},}\end{aligned} \hspace{\stretch{1}}(2.30)

This result holds regardless of the metric for the space, and does not require an orthonormal basis. When the metric is Euclidean and we have an orthonormal basis, then all the indexes can be dropped.

The \mathbb{R}^{3} and \mathbb{R}^{4} cases follow in exactly the same way, we just need more vectors in the wedge products.

For the \mathbb{R}^{3} case we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)&=a^i b^j c^k(e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k) \\ &=a^i b^j c^k{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u]}\end{aligned}

Again, with a^i = A^{1 i} and b^j = A^{2 j}, and c^k = A^{3 k} we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i\end{aligned} \hspace{\stretch{1}}(2.31)

and we can choose to write this in either form, resulting in the identity

\begin{aligned}\boxed{\epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c} A^{a s} A^{b t} A^{c u}.}\end{aligned} \hspace{\stretch{1}}(2.32)

The \mathbb{R}^{4} case follows exactly the same way, and we have

\begin{aligned}(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c \wedge d)&=a^i b^j c^k d^l(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k \wedge e_l) \\ &=a^i b^j c^k d^l{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u} d^{v]}.\end{aligned}

This time with a^i = A^{0 i} and b^j = A^{1 j}, and c^k = A^{2 k}, and d^l = A^{3 l} we have

\begin{aligned}\boxed{\epsilon^{s t u v} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{0 i} A^{1 j} A^{2 k} A^{3 l}{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}.}\end{aligned} \hspace{\stretch{1}}(2.33)

This one is almost the identity to be established later in problem 1.4. We have only to raise and lower some indexes to get that one. Note that in the Minkowski standard basis above, because s, t, u, v must be a permutation of 0,1,2,3 for a non-zero result, we must have

\begin{aligned}\epsilon^{s t u v} = (-1)^3 (+1) \epsilon_{s t u v}.\end{aligned} \hspace{\stretch{1}}(2.34)

So raising and lowering the identity above gives us

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{A_{ij}}\right\rVert}=\epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}.\end{aligned} \hspace{\stretch{1}}(2.35)

No sign changes were required for the indexes a, b, c, d, since they are paired.

Until we did the raising and lowering operations here, there was no specific metric required, so our first result 2.33 is the more general one.

There’s one more part to this problem, doing the antisymmetric sums over the indexes s, t, \cdots. For the \mathbb{R}^{2} case we have

\begin{aligned}\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t}&=\epsilon_{s t} \epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2} \epsilon^{1 2} +\epsilon_{2 1} \epsilon^{2 1} \right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( 1^2 + (-1)^2\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned}

We conclude that

\begin{aligned}\boxed{\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t} = 2! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.36)

For the \mathbb{R}^{3} case we have the same operation

\begin{aligned}\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}&=\epsilon_{s t u} \epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2 3} \epsilon^{1 2 3} +\epsilon_{1 3 2} \epsilon^{1 3 2} + \cdots\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=(\pm 1)^2 (3!)\text{Det} {\left\lVert{A^{ij}}\right\rVert}.\end{aligned}

So we conclude

\begin{aligned}\boxed{\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}= 3! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.37)

It’s clear what the pattern is, and if we evaluate the sum of the antisymmetric tensor squares in \mathbb{R}^{4} we have

\begin{aligned}\epsilon_{s t u v} \epsilon_{s t u v}&=\epsilon_{0 1 2 3} \epsilon_{0 1 2 3}+\epsilon_{0 1 3 2} \epsilon_{0 1 3 2}+\epsilon_{0 2 1 3} \epsilon_{0 2 1 3}+ \cdots \\ &= (\pm 1)^2 (4!),\end{aligned}

So, for our SR case we have

\begin{aligned}\boxed{\epsilon_{s t u v} \epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}= 4! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.38)

This was part of question 1.4, albeit in lower index form. Here since all indexes are matched, we have the same result without major change

\begin{aligned}\boxed{\epsilon^{s t u v} \epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}= 4! \text{Det} {\left\lVert{A_{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.39)

The main difference is that we are now taking the determinant of a lower index tensor.
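Here too a mechanical check is easy. The following sympy sketch (mine, again treating every \epsilon as a bare permutation symbol, so this is the Euclidean form of the identity) verifies the 2! and 3! cases; the 4! case runs the same way, just more slowly:

from itertools import product
from sympy import Matrix, LeviCivita, symbols, simplify, factorial

def check(n):
    A = Matrix(n, n, symbols('a:%d' % (n * n)))
    total = 0
    for st in product(range(n), repeat=n):       # the s, t, u, ... indexes
        for ab in product(range(n), repeat=n):   # the a, b, c, ... indexes
            term = LeviCivita(*st) * LeviCivita(*ab)
            for k in range(n):
                term *= A[ab[k], st[k]]          # A^{a s} A^{b t} ...
            total += term
    return simplify(total - factorial(n) * A.det())

print(check(2), check(3))  # 0 0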

3. Statement. Rotational invariance of 3D antisymmetric tensor

Use the previous results to show that \epsilon_{\mu\nu\lambda} is invariant under rotations.

3. Solution

We apply transformations to coordinates (and thus indexes) of the form

\begin{aligned}x_\mu \rightarrow O_{\mu\nu} x_\nu\end{aligned} \hspace{\stretch{1}}(2.40)

With our tensor transforming as its indexes, we have

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\alpha\beta\sigma} O_{\mu\alpha} O_{\nu\beta} O_{\lambda\sigma}.\end{aligned} \hspace{\stretch{1}}(2.41)

We’ve got 2.32, which after dropping indexes, because we are in a Euclidean space, we have

\begin{aligned}\epsilon_{\mu \nu \lambda} \text{Det} {\left\lVert{A_{ij}}\right\rVert} = \epsilon_{\alpha \beta \sigma} A_{\alpha \mu} A_{\beta \nu} A_{\sigma \lambda}.\end{aligned} \hspace{\stretch{1}}(2.42)

Let A_{i j} = O_{j i}, which gives us

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\mu\nu\lambda} \text{Det} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(2.43)

but since \text{Det} \hat{O} = \text{Det} \hat{O}^\text{T} = 1 for a rotation, we have shown that \epsilon_{\mu\nu\lambda} is invariant under rotation.
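Here is a numeric spot check of that invariance (a numpy sketch with an arbitrary rotation; any proper rotation matrix would do):

import numpy as np

# the 3D permutation symbol epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k, sign in [(0,1,2,1), (1,2,0,1), (2,0,1,1),
                      (0,2,1,-1), (2,1,0,-1), (1,0,2,-1)]:
    eps[i, j, k] = sign

theta = 0.7  # arbitrary rotation angle about e_3
O = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])

# epsilon_{mu nu lam} -> epsilon_{a b s} O_{mu a} O_{nu b} O_{lam s}
eps_rotated = np.einsum('abs,ma,nb,ls->mnl', eps, O, O, O)
print(np.allclose(eps_rotated, eps))  # True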

4. Statement. Rotational invariance of 4D antisymmetric tensor

Use the previous results to show that \epsilon_{i j k l} is invariant under Lorentz transformations.

4. Solution

This follows the same way. We assume a transformation of coordinates of the following form

\begin{aligned}(x')^i &= {O^i}_j x^j \\ (x')_i &= {O_i}^j x_j,\end{aligned} \hspace{\stretch{1}}(2.44)

where the determinant of {O^i}_j = 1 (sanity check of sign: {O^i}_j = {\delta^i}_j).

Our antisymmetric tensor transforms as its coordinates individually

\begin{aligned}\epsilon_{i j k l} &\rightarrow \epsilon_{a b c d} {O_i}^a{O_j}^b{O_k}^c{O_l}^d \\ &= \epsilon^{a b c d} O_{i a}O_{j b}O_{k c}O_{l d} \\ \end{aligned}

Let P_{ij} = O_{ji}. Applying 2.35 to P gives

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{P_{ij}}\right\rVert}=\epsilon^{a b c d} P_{a s} P_{b t} P_{c u} P_{d v}.\end{aligned} \hspace{\stretch{1}}(2.46)

We have

\begin{aligned}\epsilon_{i j k l} &= \epsilon^{a b c d} P_{a i}P_{b j}P_{c k}P_{d l} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{P_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{O_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{g_{im} {O^m}_j }\right\rVert} \\ &=-\epsilon_{i j k l} (-1)(1) \\ &=\epsilon_{i j k l}\end{aligned}

Since \epsilon_{i j k l} = -\epsilon^{i j k l} both are therefore invariant under Lorentz transformation.

5. Statement. Sum of contracting symmetric and antisymmetric rank 2 tensors

Show that A^{ij} B_{ij} = 0 if A is symmetric and B is antisymmetric.

5. Solution

We swap indexes in B, switch dummy indexes, then swap indexes in A

\begin{aligned}A^{i j} B_{i j} &= -A^{i j} B_{j i} \\ &= -A^{j i} B_{i j} \\ &= -A^{i j} B_{i j} \\ \end{aligned}

Our result is the negative of itself, so must be zero.
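This one is almost too simple to need a check, but as a pattern for the later, messier verifications, a tiny numpy sketch:

import numpy as np

M = np.random.rand(4, 4)
A, B = M + M.T, M - M.T               # symmetric and antisymmetric parts
print(np.einsum('ij,ij->', A, B))     # zero to within roundoff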

6. Statement. Characteristic equation for the electromagnetic strength tensor

Show that P(\lambda) = \text{Det} {\left\lVert{F_{i j} - \lambda g_{i j}}\right\rVert} is invariant under Lorentz transformations. Consider the polynomial P(\lambda), also called the characteristic polynomial of the matrix {\left\lVert{F_{i j}}\right\rVert}. Find the coefficients of the expansion of P(\lambda) in powers of \lambda in terms of the components of {\left\lVert{F_{i j}}\right\rVert}. Use the result to argue that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariant.

6. Solution

The invariance of the determinant

Let’s consider how any lower index rank 2 tensor transforms. Given a transformation of coordinates

\begin{aligned}(x^i)' &= {O^i}_j x^j \\ (x_i)' &= {O_i}^j x_j ,\end{aligned} \hspace{\stretch{1}}(2.47)

where \text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = 1, and {O_i}^j = {O^m}_n g_{i m} g^{j n}. Let’s reflect briefly on why this determinant is unit valued. We have

\begin{aligned}(x^i)' (x_i)'= {O_i}^a x_a {O^i}_b x^b = x^b x_b,\end{aligned} \hspace{\stretch{1}}(2.49)

which implies that the transformation product is

\begin{aligned}{O_i}^a {O^i}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.50)

the identity matrix. The identity matrix has unit determinant, so we must have

\begin{aligned}1 = (\text{Det} \hat{G})^2 (\text{Det} {\left\lVert{ {O^i}_j }\right\rVert})^2.\end{aligned} \hspace{\stretch{1}}(2.51)

Since \text{Det} \hat{G} = -1 we have

\begin{aligned}\text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = \pm 1,\end{aligned} \hspace{\stretch{1}}(2.52)

which is all that we can say about the determinant of this class of transformations by considering just invariance. If we restrict the transformations of coordinates to those of the same determinant sign as the identity matrix, we rule out reflections in time or space. This seems to be the essence of the SO(1,3) labeling.

Why dwell on this? Well, I wanted to be clear on the conventions I’d chosen, since parts of the course notes used \hat{O} = {\left\lVert{O^{i j}}\right\rVert}, and X' = \hat{O} X, and gave that matrix unit determinant. That O^{i j} looks like it is equivalent to my {O^i}_j, except that the one in the course notes is loose when it comes to lower and upper indexes since it gives (x')^i = O^{i j} x^j.

I’ll write

\begin{aligned}\hat{O} = {\left\lVert{{O^i}_j}\right\rVert},\end{aligned} \hspace{\stretch{1}}(2.53)

and require this (not {\left\lVert{O^{i j}}\right\rVert}) to be the matrix with unit determinant. Having cleared up the index upper and lower confusion I had trying to reconcile the class notes with the rules for index manipulation, let’s now consider the Lorentz transformation of a lower index rank 2 tensor (not necessarily antisymmetric or symmetric).

We have, transforming in the same fashion as a lower index coordinate four vector (but twice, once for each index)

\begin{aligned}A_{i j} \rightarrow A_{k m} {O_i}^k{O_j}^m.\end{aligned} \hspace{\stretch{1}}(2.54)

The determinant of the transformation tensor {O_i}^j is

\begin{aligned}\text{Det} {\left\lVert{ {O_i}^j }\right\rVert} = \text{Det} {\left\lVert{ g_{i m} {O^m}_n g^{n j} }\right\rVert} = (\text{Det} \hat{G}) (1) (\text{Det} \hat{G} ) = (-1)^2 (1) = 1.\end{aligned} \hspace{\stretch{1}}(2.55)

We see that the determinant of a lower index rank 2 tensor is invariant under Lorentz transformation. This would include our characteristic polynomial P(\lambda).

Expanding the determinant.

Utilizing 2.39 we can now calculate the characteristic polynomial. This is

\begin{aligned}\text{Det} {\left\lVert{F_{ij} - \lambda g_{ij} }\right\rVert}&= \frac{1}{{4!}}\epsilon^{s t u v} \epsilon^{a b c d} (F_{ a s } - \lambda g_{a s}) (F_{ b t } - \lambda g_{b t}) (F_{ c u } - \lambda g_{c u}) (F_{ d v } - \lambda g_{d v}) \\ &=-\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} ({F^a}_s - \lambda {g^a}_s) ({F^b}_t - \lambda {g^b}_t) ({F^c}_u - \lambda {g^c}_u) ({F^d}_v - \lambda {g^d}_v) \\ \end{aligned}

Note the minus sign in the second step: raising the indexes on each of the four factors, while lowering them on the second \epsilon, introduces four metric factors, and \epsilon^{a b c d} g_{a a'} g_{b b'} g_{c c'} g_{d d'} = (\text{Det} \hat{G}) \epsilon_{a' b' c' d'} = -\epsilon_{a' b' c' d'}.

However, {g^a}_b = g_{b c} g^{a c}, or {\left\lVert{{g^a}_b}\right\rVert} = \hat{G}^2 = I. This means we have

\begin{aligned}{g^a}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.56)

and our determinant is reduced to

\begin{aligned}\begin{aligned}P(\lambda) &=-\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({F^a}_s {F^b}_t - \lambda( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) + \lambda^2 {\delta^a}_s {\delta^b}_t \Bigr) \\ &\times \qquad \qquad \Bigl({F^c}_u {F^d}_v - \lambda( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + \lambda^2 {\delta^c}_u {\delta^d}_v \Bigr) \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.57)

If we expand this out we have our powers of \lambda coefficients are

\begin{aligned}\lambda^0 &:-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {F^a}_s {F^b}_t {F^c}_u {F^d}_v \\ \lambda^1 &:-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ({\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) {F^a}_s {F^b}_t - ({\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {F^c}_u {F^d}_v \Bigr) \\ \lambda^2 &:-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ \lambda^3 &:-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {\delta^c}_u {\delta^d}_v - {\delta^a}_s {\delta^b}_t  ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) \Bigr) \\ \lambda^4 &:-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^a}_s {\delta^b}_t {\delta^c}_u {\delta^d}_v \Bigr) \\ \end{aligned}

By 2.39, applied to the first line of the expansion above (before any indexes were raised or lowered), the \lambda^0 coefficient is just \text{Det} {\left\lVert{F_{i j}}\right\rVert}.

The \lambda^3 terms can be seen to be zero. For example, the first one is

\begin{aligned}\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^a}_s {F^b}_t {\delta^c}_u {\delta^d}_v &=\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{s b u v} {F^b}_t \\ &=\frac{1}{{4}} \delta^{t}_b {F^b}_t \\ &=\frac{1}{{4}} {F^b}_b \\ &=\frac{1}{{4}} F^{bu} g_{ub} \\ &= 0,\end{aligned}

where the final equality to zero comes from summing a symmetric and antisymmetric product.

Similarly the \lambda^1 coefficients can be shown to be zero. Again the first as a sample is

\begin{aligned}\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^c}_u {F^d}_v {F^a}_s {F^b}_t &=\frac{1}{{24}} \epsilon^{u s t v} \epsilon_{u a b d} {F^d}_v {F^a}_s {F^b}_t  \\ &=\frac{1}{{24}} \delta^{[s}_a\delta^{t}_b\delta^{v]}_d{F^d}_v {F^a}_s {F^b}_t  \\ &=\frac{1}{{24}} {F^a}_{[s}{F^b}_{t}{F^d}_{v]} \\ \end{aligned}

Disregarding the 1/24 factor, let’s just expand this antisymmetric sum

\begin{aligned}{F^a}_{[a}{F^b}_{b}{F^d}_{d]}&={F^a}_{a}{F^b}_{b}{F^d}_{d}+{F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a}-{F^a}_{a}{F^b}_{d}{F^d}_{b}-{F^a}_{d}{F^b}_{b}{F^d}_{a}-{F^a}_{b}{F^b}_{a}{F^d}_{d} \\ &={F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a} \\ \end{aligned}

The two terms retained above are the only ones without a vanishing {F^i}_i factor. Consider the first of these. Employing the metric tensor to raise indexes, so that the antisymmetry of F^{ij} can be utilized, and then finally relabeling all the dummy indexes, we have

\begin{aligned}{F^a}_{d}{F^b}_{a}{F^d}_{b}&=F^{a u}F^{b v}F^{d w}g_{d u}g_{a v}g_{b w} \\ &=(-1)^3F^{u a}F^{v b}F^{w d}g_{d u}g_{a v}g_{b w} \\ &=-(F^{u a}g_{a v})(F^{v b}g_{b w} )(F^{w d}g_{d u})\\ &=-{F^u}_v{F^v}_w{F^w}_u\\ &=-{F^a}_b{F^b}_d{F^d}_a\\ \end{aligned}

This is just the negative of the second term in the sum, leaving us with zero.

Finally, we have for the \lambda^2 coefficient (the sum below is still to be scaled by the overall -1/24 factor)

\begin{aligned}&\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +{\delta^a}_s {F^b}_t {\delta^c}_u {F^d}_v +{\delta^b}_t {F^a}_s {\delta^d}_v {F^c}_u  \\ &\qquad +{\delta^b}_t {F^a}_s {\delta^c}_u {F^d}_v +{\delta^a}_s {F^b}_t {\delta^d}_v {F^c}_u + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{s t u v} \epsilon_{s b u d}  {F^b}_t  {F^d}_v +\epsilon^{s t u v} \epsilon_{a t c v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s t u v} \epsilon_{a t u d}  {F^a}_s  {F^d}_v +\epsilon^{s t u v} \epsilon_{s b c v}  {F^b}_t  {F^c}_u + \epsilon^{s t u v} \epsilon_{s t c d}    {F^c}_u {F^d}_v \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{t v s u } \epsilon_{b d s u}  {F^b}_t  {F^d}_v +\epsilon^{s u t v} \epsilon_{a c t v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s v t u} \epsilon_{a d t u}  {F^a}_s  {F^d}_v +\epsilon^{t u s v} \epsilon_{b c s v}  {F^b}_t  {F^c}_u + \epsilon^{u v s t} \epsilon_{c d s t}    {F^c}_u {F^d}_v \\ &=6\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t  \\ &=6 (2){\delta^{[s}}_a{\delta^{t]}}_b{F^a}_s {F^b}_t  \\ &=12{F^a}_{[a} {F^b}_{b]}  \\ &=12( {F^a}_{a} {F^b}_{b} - {F^a}_{b} {F^b}_{a} ) \\ &=-12 {F^a}_{b} {F^b}_{a} \\ &=-12 F^{a b} F_{b a} \\ &=12 F^{a b} F_{a b}\end{aligned}

Therefore, our characteristic polynomial is

\begin{aligned}\boxed{P(\lambda) = \text{Det} {\left\lVert{F_{i j}}\right\rVert} - \frac{\lambda^2}{2} F^{a b} F_{a b} - \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.58)

Observe that in matrix form our strength tensors are

\begin{aligned}{\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.59)

From these we can compute F^{a b} F_{a b} easily by inspection

\begin{aligned}F^{a b} F_{a b} = 2 (\mathbf{B}^2 - \mathbf{E}^2).\end{aligned} \hspace{\stretch{1}}(2.61)

Computing the determinant is not so easy. The dumb and simple way of expanding by cofactors takes two pages, and yields eventually

\begin{aligned}\text{Det} {\left\lVert{ F^{i j} }\right\rVert} = (\mathbf{E} \cdot \mathbf{B})^2.\end{aligned} \hspace{\stretch{1}}(2.62)

That supplies us with a relation for the characteristic polynomial in \mathbf{E} and \mathbf{B}

\begin{aligned}\boxed{P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{E}^2 - \mathbf{B}^2) - \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.63)
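Given how error prone the two page cofactor expansion is (and the sign gymnastics above), a symbolic cross check seemed prudent. This sympy sketch (my own verification, not part of the assignment) confirms 2.63 directly from the matrix form 2.59:

from sympy import symbols, Matrix, diag, expand

Ex, Ey, Ez, Bx, By, Bz, lam = symbols('E_x E_y E_z B_x B_y B_z lambda')

F = Matrix([[0,   Ex,  Ey,  Ez],
            [-Ex, 0,  -Bz,  By],
            [-Ey, Bz,  0,  -Bx],
            [-Ez, -By, Bx,  0]])       # F_{ij} as in 2.59
g = diag(1, -1, -1, -1)                # g_{ij}

P = expand((F - lam * g).det())
EdotB = Ex*Bx + Ey*By + Ez*Bz
E2, B2 = Ex**2 + Ey**2 + Ez**2, Bx**2 + By**2 + Bz**2

# expect P(lambda) = (E.B)^2 + lambda^2 (E^2 - B^2) - lambda^4
print(expand(P - (EdotB**2 + lam**2 * (E2 - B2) - lam**4)))  # 0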

Observe that we found this for the special case where \mathbf{E} and \mathbf{B} were perpendicular in homework 2. Observe that when we have that perpendicularity, we can solve for the eigenvalues by inspection

\begin{aligned}\lambda \in \{ 0, 0, \pm \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 } \},\end{aligned} \hspace{\stretch{1}}(2.64)

and were able to diagonalize the matrix {F^{i}}_j to solve the Lorentz force equation in parametric form. When {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert} we had real eigenvalues, and an orthogonal diagonalization when \mathbf{B} = 0. For {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert} we had two purely imaginary eigenvalues, and when \mathbf{E} = 0 this was a Hermitian diagonalization. For the general case, when neither \mathbf{E} nor \mathbf{B} was zero, things didn’t have the same nice closed form solution.

In general our eigenvalues are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 + 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(2.65)

For the purposes of this problem we really only wish to show that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariants. When \lambda = 0 we have P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2, a Lorentz invariant. This must mean that \mathbf{E} \cdot \mathbf{B} is itself a Lorentz invariant. Since that is invariant, and we require P(\lambda) to be invariant for any other possible values of \lambda, the difference \mathbf{E}^2 - \mathbf{B}^2 must also be Lorentz invariant.
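As another sanity check on 2.65, the eigenvalues of the mixed tensor {\left\lVert{{F^i}_j}\right\rVert} = \hat{F}\hat{G} can be computed numerically and compared against the closed form (a numpy sketch, with arbitrarily chosen field values):

import numpy as np

E = np.array([0.3, -1.1, 0.7])
B = np.array([0.9, 0.2, -0.4])
(Ex, Ey, Ez), (Bx, By, Bz) = E, B

F = np.array([[0, -Ex, -Ey, -Ez],
              [Ex, 0, -Bz,  By],
              [Ey, Bz,  0, -Bx],
              [Ez, -By, Bx,  0]])     # F^{ij}
G = np.diag([1.0, -1, -1, -1])

computed = np.sort_complex(np.round(np.linalg.eigvals(F @ G), 9))

d = E @ E - B @ B
inner = np.sqrt(d**2 + 4 * (E @ B)**2 + 0j)
predicted = np.sort_complex(np.round(np.array(
    [s1 * np.sqrt((d + s2 * inner) / 2) for s1 in (1, -1) for s2 in (1, -1)]), 9))
print(np.allclose(computed, predicted))  # True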

7. Statement. Show that the pseudoscalar invariant has only boundary effects.

Use integration by parts to show that \int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l } only depends on the values of A^i(x) at the “boundary” of spacetime (e.g. the “surface” depicted on page 105 of the notes) and hence does not affect the equations of motion for the electromagnetic field.

7. Solution

This proceeds in a fairly straightforward fashion

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&=\int d^4 x \epsilon^{i j k l} (\partial_i A_j - \partial_j A_i) F_{ k l } \\ &=\int d^4 x \left( \epsilon^{i j k l} (\partial_i A_j) F_{ k l } -\epsilon^{j i k l} (\partial_i A_j) F_{ k l } \right) \\ &=2 \int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} \left( \frac{\partial {}}{\partial {x^i}}\left(A_j F_{ k l }\right)-A_j \frac{\partial { F_{ k l } }}{\partial {x^i}}\right)\\ \end{aligned}

Now, observe that this second term is zero. Since F_{kl} = \partial_k A_l - \partial_l A_k, we have

\begin{aligned}\epsilon^{i j k l} \frac{\partial { F_{ k l } }}{\partial {x^i}}= 2 \epsilon^{i j k l} \partial_i \partial_k A_l = 0,\end{aligned} \hspace{\stretch{1}}(2.66)

where the zero follows from the contraction of the symmetric second partials with the antisymmetric \epsilon tensor (this is just the Bianchi identity).

Now we have a set of perfect differentials, and can integrate

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&= 2 \int d^4 x \epsilon^{i j k l} \frac{\partial {}}{\partial {x^i}}(A_j F_{ k l })\\ &= 2 \int dx^j dx^k dx^l\epsilon^{i j k l} {\left.{{(A_j F_{ k l })}}\right\vert}_{{\Delta x^i}}\\ \end{aligned}

We are left with only contributions to the integral from the boundary terms on the spacetime hypervolume, the three-volume normals bounding the four-volume integration in the original integral.

8. Statement. Electromagnetic duality transformations.

Show that the Maxwell equations in vacuum are invariant under the transformation: F_{i j} \rightarrow \tilde{F}_{i j}, where \tilde{F}_{i j} = \frac{1}{{2}} \epsilon_{i j k l} F^{k l} is the dual electromagnetic stress tensor. Replacing F with \tilde{F} is known as “electric-magnetic duality”. Explain this name by considering the transformation in terms of \mathbf{E} and \mathbf{B}. Are the Maxwell equations with sources invariant under electric-magnetic duality transformations?

8. Solution

Let’s first consider the explanation of the name. First recall what the expansions are of F_{i j} and F^{i j} in terms of \mathbf{E} and \mathbf{B}. These are

\begin{aligned}F_{0 \alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} - \frac{\partial {\phi}}{\partial {x^\alpha}} \\ &= E_\alpha\end{aligned}

with F^{0 \alpha} = -E^\alpha, and E^\alpha = E_\alpha.

The magnetic field components are

\begin{aligned}F_{\beta \alpha} &= \partial_\beta A_\alpha - \partial_\alpha A_\beta \\ &= -\partial_\beta A^\alpha + \partial_\alpha A^\beta \\ &= \epsilon_{\alpha \beta \sigma} B^\sigma\end{aligned}

with F^{\beta \alpha} = \epsilon^{\alpha \beta \sigma} B_\sigma and B_\sigma = B^\sigma.

Now let’s expand the dual tensors. These are

\begin{aligned}\tilde{F}_{0 \alpha} &=\frac{1}{{2}} \epsilon_{0 \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} F^{\beta \sigma} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\sigma \beta \mu} B_\mu \\ &=-\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\mu \beta \sigma} B_\mu \\ &=-\frac{1}{{2}} (2!) {\delta_\alpha}^\mu B_\mu \\ &=- B_\alpha \\ \end{aligned}

and

\begin{aligned}\tilde{F}_{\beta \alpha} &=\frac{1}{{2}} \epsilon_{\beta \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \left(\epsilon_{\beta \alpha 0 \sigma} F^{0 \sigma} +\epsilon_{\beta \alpha \sigma 0} F^{\sigma 0} \right) \\ &=\epsilon_{0 \beta \alpha \sigma} (-E^\sigma) \\ &=\epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned}

Summarizing we have

\begin{aligned}F_{0 \alpha} &= E^\alpha \\ F^{0 \alpha} &= -E^\alpha \\ F^{\beta \alpha} &= F_{\beta \alpha} = \epsilon_{\alpha \beta \sigma} B^\sigma \\ \tilde{F}_{0 \alpha} &= - B_\alpha \\ \tilde{F}^{0 \alpha} &= B_\alpha \\ \tilde{F}_{\beta \alpha} &= \tilde{F}^{\beta \alpha} = \epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned} \hspace{\stretch{1}}(2.67)

Is there a sign error in the \tilde{F}_{0 \alpha} = - B_\alpha result? Other than that we have the same sort of structure for the tensor with E and B switched around.

Let’s write these in matrix form, to compare

\begin{aligned}\begin{array}{l l}{\left\lVert{ \tilde{F}_{i j} }\right\rVert} &= \begin{bmatrix}0 & -B_x & -B_y & -B_z \\ B_x & 0 & -E_z & E_y \\ B_y & E_z & 0 & -E_x \\ B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ \tilde{F}^{i j} }\right\rVert} &= \begin{bmatrix}0 & B_x & B_y & B_z \\ -B_x & 0 & -E_z & E_y \\ -B_y & E_z & 0 & -E_x \\ -B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}\end{array}\end{aligned} \hspace{\stretch{1}}(2.73)

From these we can see by inspection that we have

\begin{aligned}\tilde{F}^{i j} F_{ij} = \tilde{F}_{i j} F^{ij} = 4 (\mathbf{E} \cdot \mathbf{B})\end{aligned} \hspace{\stretch{1}}(2.74)

This is consistent with the stated result in [1] (except for a factor of c due to units differences), so it appears the signs above are all kosher.
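The sign question above can also be settled mechanically. Building the dual from the definition with sympy (a verification sketch of mine, with \epsilon_{0123} = +1) reproduces the {\left\lVert{\tilde{F}_{ij}}\right\rVert} of 2.73, including \tilde{F}_{0\alpha} = -B_\alpha, as well as the contraction 2.74:

from sympy import Matrix, diag, LeviCivita, Rational, symbols, simplify

Ex, Ey, Ez, Bx, By, Bz = symbols('E_x E_y E_z B_x B_y B_z')

F_up = Matrix([[0, -Ex, -Ey, -Ez],
               [Ex, 0, -Bz,  By],
               [Ey, Bz,  0, -Bx],
               [Ez, -By, Bx,  0]])    # F^{ij}
g = diag(1, -1, -1, -1)
F_dn = g * F_up * g                   # F_{ij}

# dual: tilde{F}_{ij} = (1/2) epsilon_{ijkl} F^{kl}
Fd_dn = Matrix(4, 4, lambda i, j: Rational(1, 2) * sum(
    LeviCivita(i, j, k, l) * F_up[k, l] for k in range(4) for l in range(4)))
Fd_up = g * Fd_dn * g                 # raise both indexes

print(Fd_dn)                          # matches ||tilde{F}_{ij}|| in 2.73
contraction = sum(Fd_up[i, j] * F_dn[i, j] for i in range(4) for j in range(4))
print(simplify(contraction - 4 * (Ex*Bx + Ey*By + Ez*Bz)))  # 0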

Now, let’s see if the dual tensor satisfies the vacuum equations.

\begin{aligned}\partial_j \tilde{F}^{i j}&=\partial_j \frac{1}{{2}} \epsilon^{i j k l} F_{k l} \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j (\partial_k A_l - \partial_l A_k) \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j \partial_k A_l - \frac{1}{{2}} \epsilon^{i j l k} \partial_j \partial_k A_l \\ &=\epsilon^{i j k l} \partial_j \partial_k A_l \\ &= 0 \qquad\square\end{aligned}

The last sum is zero since the symmetric second partials are contracted with the antisymmetric \epsilon^{i j k l}. So the first set of equations checks out, provided we have no sources. If we have sources, then Maxwell’s equations are not preserved by the duality transformation, since \partial_j \tilde{F}^{i j} = 0 identically would force the four current density to be zero.

How about the Bianchi identity? That gives us

\begin{aligned}\epsilon^{i j k l} \partial_j \tilde{F}_{k l} &=\epsilon^{i j k l} \partial_j \frac{1}{{2}} \epsilon_{k l a b} F^{a b} \\ &=\frac{1}{{2}} \epsilon^{k l i j} \epsilon_{k l a b} \partial_j F^{a b} \\ &=\frac{1}{{2}} (2!) {\delta^i}_{[a} {\delta^j}_{b]} \partial_j F^{a b} \\ &=\partial_j (F^{i j} - F^{j i} ) \\ &=2 \partial_j F^{i j} .\end{aligned}

The factor of two is slightly curious. Is there a mistake above? If there is a mistake, it doesn’t change the fact that Maxwell’s equation

\begin{aligned}\partial_k F^{k i} = \frac{4 \pi}{c} j^i\end{aligned} \hspace{\stretch{1}}(2.75)

gives us zero for the Bianchi identity under the source free condition j^i = 0.

Problem 2. Transformation properties of \mathbf{E} and \mathbf{B}, again.

1. Statement

Use the form of F^{i j} from page 82 in the class notes, the transformation law for {\left\lVert{ F^{i j} }\right\rVert} given further down that same page, and the explicit form of the SO(1,3) matrix \hat{O} (say, corresponding to motion in the positive x_1 direction with speed v) to derive the transformation law of the fields \mathbf{E} and \mathbf{B}. Use the transformation law to find the electromagnetic field of a charged particle moving with constant speed v in the positive x_1 direction and check that the result agrees with the one that you obtained in Homework 2.

1. Solution

Given a transformation of coordinates

\begin{aligned}{x'}^i \rightarrow {O^i}_j x^j\end{aligned} \hspace{\stretch{1}}(3.76)

our rank 2 tensor F^{i j} transforms as

\begin{aligned}F^{i j} \rightarrow {O^i}_aF^{a b}{O^j}_b.\end{aligned} \hspace{\stretch{1}}(3.77)

Introducing matrices

\begin{aligned}\hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ \hat{F} &= {\left\lVert{F^{ij}}\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.78)

and noting that \hat{O}^\text{T} = {\left\lVert{{O^j}_i}\right\rVert}, we can express the electromagnetic strength tensor transformation as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}.\end{aligned} \hspace{\stretch{1}}(3.80)

The class notes use {x'}^i \rightarrow O^{ij} x^j, which violates our conventions on mixed upper and lower indexes, but the end result 3.80 is the same.

For a boost along the x_1 direction with rapidity \alpha, the transformation matrix is

\begin{aligned}{\left\lVert{{O^i}_j}\right\rVert} =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.81)

Writing

\begin{aligned}C &= \cosh\alpha = \gamma \\ S &= -\sinh\alpha = -\gamma \beta,\end{aligned} \hspace{\stretch{1}}(3.82)

we can compute the transformed field strength tensor

\begin{aligned}\hat{F}' &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \\ &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}- S E_x        & -C E_x        & -E_y  & -E_z \\ C E_x          & S E_x         & -B_z  & B_y \\ C E_y + S B_z  & S E_y + C B_z & 0     & -B_x \\ C E_z - S B_y  & S E_z - C B_y & B_x   & 0 \end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -C E_y - S B_z & - C E_z + S B_y \\ E_x & 0 & -S E_y - C B_z & - S E_z + C B_y \\ C E_y + S B_z & S E_y + C B_z & 0 & -B_x \\ C E_z - S B_y & S E_z - C B_y & B_x & 0\end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -\gamma(E_y - \beta B_z) & - \gamma(E_z + \beta B_y) \\ E_x & 0 & - \gamma (-\beta E_y + B_z) & \gamma( \beta E_z + B_y) \\ \gamma (E_y - \beta B_z) & \gamma(-\beta E_y + B_z) & 0 & -B_x \\ \gamma (E_z + \beta B_y) & -\gamma(\beta E_z + B_y) & B_x & 0\end{bmatrix}.\end{aligned}

As a check we have the antisymmetry that is expected. There is also a regularity to the end result that is aesthetically pleasing, hinting that things are hopefully error free. In coordinates for \mathbf{E} and \mathbf{B} this is

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y - \beta B_z ) \\ E_z &\rightarrow \gamma ( E_z + \beta B_y ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y + \beta E_z ) \\ B_z &\rightarrow \gamma ( B_z - \beta E_y ) \end{aligned} \hspace{\stretch{1}}(3.84)
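These coordinate transformation laws are easy to verify numerically against 3.80 (a numpy spot check of mine, with arbitrary field values and \beta = 0.6):

import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
C, S = gamma, -gamma * beta

O = np.array([[C, S, 0, 0],
              [S, C, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])

E = np.array([0.2, -0.5, 1.3])
B = np.array([1.1, 0.4, -0.7])
(Ex, Ey, Ez), (Bx, By, Bz) = E, B
F = np.array([[0, -Ex, -Ey, -Ez],
              [Ex, 0, -Bz,  By],
              [Ey, Bz,  0, -Bx],
              [Ez, -By, Bx,  0]])

Fp = O @ F @ O.T
Ep = np.array([Fp[1, 0], Fp[2, 0], Fp[3, 0]])
Bp = np.array([Fp[3, 2], Fp[1, 3], Fp[2, 1]])

print(np.allclose(Ep, [Ex, gamma * (Ey - beta * Bz), gamma * (Ez + beta * By)]))  # True
print(np.allclose(Bp, [Bx, gamma * (By + beta * Ez), gamma * (Bz - beta * Ey)]))  # True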

Writing \boldsymbol{\beta} = \mathbf{e}_1 \beta, we have

\begin{aligned}\boldsymbol{\beta} \times \mathbf{B} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \beta & 0 & 0 \\ B_x & B_y & B_z\end{vmatrix} = \mathbf{e}_2 (-\beta B_z) + \mathbf{e}_3( \beta B_y ),\end{aligned} \hspace{\stretch{1}}(3.90)

which puts us enroute to a tidier vector form

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y + (\boldsymbol{\beta} \times \mathbf{B})_y ) \\ E_z &\rightarrow \gamma ( E_z + (\boldsymbol{\beta} \times \mathbf{B})_z ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y - (\boldsymbol{\beta} \times \mathbf{E})_y ) \\ B_z &\rightarrow \gamma ( B_z - (\boldsymbol{\beta} \times \mathbf{E})_z ).\end{aligned} \hspace{\stretch{1}}(3.91)

For a vector \mathbf{A}, write \mathbf{A}_\parallel = (\mathbf{A} \cdot \hat{\mathbf{v}})\hat{\mathbf{v}}, \mathbf{A}_\perp = \mathbf{A} - \mathbf{A}_\parallel, allowing a compact description of the field transformation

\begin{aligned}\mathbf{E} &\rightarrow \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp + \gamma (\boldsymbol{\beta} \times \mathbf{B})_\perp \\ \mathbf{B} &\rightarrow \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp - \gamma (\boldsymbol{\beta} \times \mathbf{E})_\perp.\end{aligned} \hspace{\stretch{1}}(3.97)

Now, we want to consider the field of a moving particle. In the particle’s (unprimed) rest frame the field due to its potential \phi = q/r is

\begin{aligned}\mathbf{E} &= \frac{q}{r^2} \hat{\mathbf{r}} \\ \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(3.99)

Coordinates for a “stationary” observer, who sees this particle moving along the x-axis at speed v are related by a boost in the -v direction

\begin{aligned}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}=\begin{bmatrix}\gamma & \gamma (v/c) & 0 & 0 \\ \gamma (v/c) & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.101)

Therefore the fields in the observer frame will be

\begin{aligned}\mathbf{E}' &= \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp - \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{B})_\perp = \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp \\ \mathbf{B}' &= \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp + \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp = \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp \end{aligned} \hspace{\stretch{1}}(3.102)

More explicitly with \mathbf{E} = \frac{q}{r^3}(x, y, z) this is

\begin{aligned}\mathbf{E}' &= \frac{q}{r^3}(x, \gamma y, \gamma z) \\ \mathbf{B}' &= \gamma \frac{q v}{c r^3} ( 0, -z, y )\end{aligned} \hspace{\stretch{1}}(3.104)

Comparing to Problem 3 in Problem set 2, I see that this matches the result obtained by separately transforming the gradient, the time partial, and the scalar potential. Actually, if I am being honest, I see that I made a sign error in all the coordinates of \mathbf{E}' when I initially did this (ungraded) problem in problem set 2. That sign error should have been obvious by considering the v=0 case, which would have mysteriously resulted in inversion of all the coordinates of the observed electric field.

2. Statement

A particle is moving with velocity \mathbf{v} in perpendicular \mathbf{E} and \mathbf{B} fields, all given in some particular “stationary” frame of reference.

\begin{enumerate}
\item Show that there exists a frame where the problem of finding the particle trajectory can be reduced to having either only an electric or only a magnetic field.
\item Explain what determines which case takes place.
\item Find the velocity \mathbf{v}_0 of that frame relative to the “stationary” frame.
\end{enumerate}

2. Solution

\paragraph{Part 1 and 2:} Existence of the transformation.

In the single particle Lorentz trajectory problem we wish to solve

\begin{aligned}m c \frac{du^i}{ds} = \frac{e}{c} F^{i j} u_j,\end{aligned} \hspace{\stretch{1}}(3.106)

which in matrix form we can write as

\begin{aligned}\frac{d U}{ds} = \frac{e}{m c^2} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.107)

where we write our column vector proper velocity as U = {\left\lVert{u^i}\right\rVert}. Under transformation of coordinates {u'}^i = {O^i}_j u^j, with \hat{O} = {\left\lVert{{O^i}_j}\right\rVert}, this becomes

\begin{aligned}\hat{O} \frac{d U}{ds} = \frac{e}{m c^2} \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.108)

Suppose we can find eigenvectors for the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G}. That is for some eigenvalue \lambda, we can find an eigenvector \Sigma

\begin{aligned}\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \Sigma = \lambda \Sigma.\end{aligned} \hspace{\stretch{1}}(3.109)

Rearranging we have

\begin{aligned}(\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) \Sigma = 0\end{aligned} \hspace{\stretch{1}}(3.110)

and conclude that \Sigma lies in the null space of the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I and that this difference of matrices must have a zero determinant

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) = -\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda \hat{G}) = 0.\end{aligned} \hspace{\stretch{1}}(3.111)

Since \hat{G} = \hat{O} \hat{G} \hat{O}^\text{T} for any Lorentz transformation \hat{O} in SO(1,3), and \text{Det} ABC = \text{Det} A \text{Det} B \text{Det} C we have

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda G)= \text{Det} (\hat{F} - \lambda \hat{G}).\end{aligned} \hspace{\stretch{1}}(3.112)

In problem 1.6, we called this our characteristic equation P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}). Observe that the characteristic equation is Lorentz invariant for any \lambda, which requires that the eigenvalues \lambda are also Lorentz invariants.

In problem 1.6 of this problem set we computed that this characteristic equation expands to

\begin{aligned}P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{E}^2 - \mathbf{B}^2) - \lambda^4.\end{aligned} \hspace{\stretch{1}}(3.113)

The eigenvalues for the system, also each necessarily Lorentz invariants, are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 + 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(3.114)

Observe that in the specific case where \mathbf{E} \cdot \mathbf{B} = 0, as in this problem, we must have \mathbf{E}' \cdot \mathbf{B}' = 0 in all frames, and the two non-zero eigenvalues of our characteristic polynomial are simply

\begin{aligned}\lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}.\end{aligned} \hspace{\stretch{1}}(3.115)

These and \mathbf{E} \cdot \mathbf{B} = 0 are the invariants for this system. If we have \mathbf{E}^2 > \mathbf{B}^2 in one frame, we must also have {\mathbf{E}'}^2 > {\mathbf{B}'}^2 in another frame, still maintaining perpendicular fields. In particular if \mathbf{B}' = 0 we maintain real eigenvalues. Similarly if \mathbf{B}^2 > \mathbf{E}^2 in some frame, we must always have imaginary eigenvalues, and this is also true in the \mathbf{E}' = 0 case.

While the problem can be posed as a pure diagonalization problem (and even solved numerically this way for the general constant fields case), we can also work symbolically, thinking of the trajectories problem as simply seeking a transformation of frames that reduce the scope of the problem to one that is more tractable. That does not have to be the linear transformation that diagonalizes the system. Instead we are free to transform to a frame where one of the two fields \mathbf{E}' or \mathbf{B}' is zero, provided the invariants discussed are maintained.

\paragraph{Part 3:} Finding the boost velocity that wipes out one of the fields.

Let’s now consider a Lorentz boost \hat{O}, and seek to solve for the boost velocity that wipes out one of the fields, given the invariants that must be maintained for the system

To make things concrete, suppose that our perpendicular fields are given by \mathbf{E} = E \mathbf{e}_2 and \mathbf{B} = B \mathbf{e}_3.

Let’s also assume that we can find the velocity \mathbf{v}_0 for which one or more of the transformed fields is zero. Suppose that velocity is

\begin{aligned}\mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) = v_0 \hat{\mathbf{v}}_0,\end{aligned} \hspace{\stretch{1}}(3.116)

where \alpha_i are the direction cosines of \mathbf{v}_0 so that \sum_i \alpha_i^2 = 1. We will want to compute the components of \mathbf{E} and \mathbf{B} parallel and perpendicular to this velocity.

Those are

\begin{aligned}\mathbf{E}_\parallel &= E \mathbf{e}_2 \cdot (\alpha_1, \alpha_2, \alpha_3) (\alpha_1, \alpha_2, \alpha_3) \\ &= E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) \\ \end{aligned}

\begin{aligned}\mathbf{E}_\perp &= E \mathbf{e}_2 - \mathbf{E}_\parallel \\ &= E (-\alpha_1 \alpha_2, 1 - \alpha_2^2, -\alpha_2 \alpha_3) \\ &= E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) \\ \end{aligned}

For the magnetic field we have

\begin{aligned}\mathbf{B}_\parallel &= B \alpha_3 (\alpha_1, \alpha_2, \alpha_3),\end{aligned}

and

\begin{aligned}\mathbf{B}_\perp &= B \mathbf{e}_3 - \mathbf{B}_\parallel \\ &= B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  \\ \end{aligned}

Now, observe that (\boldsymbol{\beta} \times \mathbf{B})_\parallel \propto ((\hat{\mathbf{v}}_0 \times \mathbf{B}) \cdot \hat{\mathbf{v}}_0) \hat{\mathbf{v}}_0, but this is just zero. So we have (\boldsymbol{\beta} \times \mathbf{B})_\perp = \boldsymbol{\beta} \times \mathbf{B}. So our cross products terms are just

\begin{aligned}\hat{\mathbf{v}}_0 \times \mathbf{B} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & 0 & B         \end{vmatrix} = B (\alpha_2, -\alpha_1, 0) \\ \hat{\mathbf{v}}_0 \times \mathbf{E} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & E & 0         \end{vmatrix} = E (-\alpha_3, 0, \alpha_1)\end{aligned}

We can now express how the fields transform, given this arbitrary boost velocity. From 3.97, this is

\begin{aligned}\mathbf{E} &\rightarrow E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) + \gamma E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) + \gamma \frac{v_0}{c} B (\alpha_2, -\alpha_1, 0) \\ \mathbf{B} &\rightarrow B \alpha_3 (\alpha_1, \alpha_2, \alpha_3)+ \gamma B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  - \gamma \frac{v_0}{c} E (-\alpha_3, 0, \alpha_1)\end{aligned} \hspace{\stretch{1}}(3.117)

Zero Electric field case.

Let’s tackle the two cases separately. First when {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}, we can transform to a frame where \mathbf{E}'=0. In coordinates from 3.117 this supplies us three sets of equations. These are

\begin{aligned}0 &= E \alpha_2 \alpha_1 (1 - \gamma) + \gamma \frac{v_0}{c} B \alpha_2  \\ 0 &= E \alpha_2^2 + \gamma E (\alpha_1^2 + \alpha_3^2) - \gamma \frac{v_0}{c} B \alpha_1  \\ 0 &= E \alpha_2 \alpha_3 (1 - \gamma).\end{aligned} \hspace{\stretch{1}}(3.119)

With an assumed solution the \mathbf{e}_3 coordinate equation implies that one of \alpha_2 or \alpha_3 is zero. Perhaps there are solutions with \alpha_3 = 0 too, but inspection shows that \alpha_2 = 0 nicely kills off the first equation. Since \alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1, that also implies that we are left with

\begin{aligned}0 = E - \frac{v_0}{c} B \alpha_1 \end{aligned} \hspace{\stretch{1}}(3.122)

Or

\begin{aligned}\alpha_1 &= \frac{E}{B} \frac{c}{v_0} \\ \alpha_2 &= 0 \\ \alpha_3 &= \sqrt{1 - \frac{E^2}{B^2} \frac{c^2}{v_0^2} }\end{aligned} \hspace{\stretch{1}}(3.123)

Our velocity was \mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) solving the problem for the {\left\lvert{\mathbf{B}}\right\rvert}^2 > {\left\lvert{\mathbf{E}}\right\rvert}^2 case up to an adjustable constant v_0. That constant comes with constraints however, since we must also have our cosine \alpha_1 \le 1. Expressed another way, the magnitude of the boost velocity is constrained by the relation

\begin{aligned}\frac{v_0}{c} \ge {\left\lvert{\frac{E}{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.126)

It appears we may also pick the equality case, so one velocity (not unique) that should transform away the electric field is

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{E}{B}}\right\rvert} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{B}^2}.}\end{aligned} \hspace{\stretch{1}}(3.127)

This particular boost direction is perpendicular to both fields. Observe that this highlights the invariance condition {\left\lvert{\frac{E}{B}}\right\rvert} < 1 since we see this is required for a physically realizable velocity. Boosting in this direction will reduce our problem to one that has only the magnetic field component.
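A numeric spot check of 3.127, with the fields and the transformation 3.97 as above (a sketch; for this geometry both parallel projections onto \mathbf{v}_0 are zero):

import numpy as np

E, B = 0.4, 1.0                       # |E| < |B|, with E = E e_2, B = B e_3
beta_v = np.array([E / B, 0, 0])      # boost velocity v_0/c along e_1
gamma = 1 / np.sqrt(1 - beta_v @ beta_v)

Evec = np.array([0, E, 0.0])
Bvec = np.array([0, 0, B])

Ep = gamma * (Evec + np.cross(beta_v, Bvec))
Bp = gamma * (Bvec - np.cross(beta_v, Evec))
print(Ep)                             # [0. 0. 0.]: the electric field is gone
print(Bp, B / gamma)                  # remaining field is B/gamma e_3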

Zero Magnetic field case.

Now, let’s consider the case where we transform the magnetic field away, the case when our characteristic polynomial has strictly real eigenvalues \lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}. In this case, if we write out our equations for the transformed magnetic field and require these to separately equal zero, we have

\begin{aligned}0 &= B \alpha_3 \alpha_1 ( 1 - \gamma ) + \gamma \frac{v_0}{c} E \alpha_3 \\ 0 &= B \alpha_2 \alpha_3 ( 1 - \gamma ) \\ 0 &= B (\alpha_3^2 + \gamma (\alpha_1^2 + \alpha_2^2)) - \gamma \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.128)

Similar to before we see that \alpha_3 = 0 kills off the first and second equations, leaving just

\begin{aligned}0 = B - \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.131)

We now have a solution for the family of direction vectors that kill the magnetic field off

\begin{aligned}\alpha_1 &= \frac{B}{E} \frac{c}{v_0} \\ \alpha_2 &= \sqrt{ 1 - \frac{B^2}{E^2} \frac{c^2}{v_0^2} } \\ \alpha_3 &= 0.\end{aligned} \hspace{\stretch{1}}(3.132)

In addition to the initial constraint that {\left\lvert{\frac{B}{E}}\right\rvert} < 1, we have, as before, constraints on the allowable values of v_0

\begin{aligned}\frac{v_0}{c} \ge {\left\lvert{\frac{B}{E}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.135)

Like before we can pick the equality \alpha_1 = 1, yielding a boost direction of

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{B}{E}}\right\rvert} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{E}^2}.}\end{aligned} \hspace{\stretch{1}}(3.136)

Again, we see that the invariance condition {\left\lvert{\mathbf{B}}\right\rvert} < {\left\lvert{\mathbf{E}}\right\rvert} is required for a physically realizable velocity if that velocity is entirely perpendicular to the fields.

Problem 3. Continuity equation for delta function current distributions.

Statement

Show explicitly that the electromagnetic 4-current j^i for a particle moving with constant velocity (considered in class, p. 100-101 of notes) is conserved \partial_i j^i = 0. Give a physical interpretation of this conservation law, for example by integrating \partial_i j^i over some spacetime region and giving an integral form to the conservation law (\partial_i j^i = 0 is known as the “continuity equation”).

Solution

First let’s review. Our four current was defined as

\begin{aligned}j^i(x) = \sum_A c e_A \int_{x(\tau)} dx_A^i(\tau) \delta^4(x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(4.137)

If each of the trajectories x_A(\tau) represents constant motion we have

\begin{aligned}x_A(\tau) = x_A(0) + \gamma_A \tau ( c, \mathbf{v}_A ).\end{aligned} \hspace{\stretch{1}}(4.138)

The spacetime split of this four vector is

\begin{aligned}x_A^0(\tau) &= x_A^0(0) + \gamma_A \tau c \\ \mathbf{x}_A(\tau) &= \mathbf{x}_A(0) + \gamma_A \tau \mathbf{v}_A,\end{aligned} \hspace{\stretch{1}}(4.139)

with differentials

\begin{aligned}dx_A^0(\tau) &= \gamma_A d\tau c \\ d\mathbf{x}_A(\tau) &= \gamma_A d\tau \mathbf{v}_A.\end{aligned} \hspace{\stretch{1}}(4.141)

Writing out the delta functions explicitly we have

\begin{aligned}\begin{aligned}j^i(x) = \sum_A &c e_A \int_{x(\tau)} dx_A^i(\tau) \delta(x^0 - x_A^0(0) - \gamma_A c \tau) \delta(x^1 - x_A^1(0) - \gamma_A v_A^1 \tau) \\ &\delta(x^2 - x_A^2(0) - \gamma_A v_A^2 \tau) \delta(x^3 - x_A^3(0) - \gamma_A v_A^3 \tau)\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.143)

So our time and space components of the current can be written

\begin{aligned}j^0(x) &= \sum_A c^2 e_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau) \\ \mathbf{j}(x) &= \sum_A c e_A \mathbf{v}_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau).\end{aligned} \hspace{\stretch{1}}(4.144)

Each of these integrals can be evaluated with respect to the time coordinate delta function leaving the distribution

\begin{aligned}j^0(x) &= \sum_A c e_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0))) \\ \mathbf{j}(x) &= \sum_A e_A \mathbf{v}_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0)))\end{aligned} \hspace{\stretch{1}}(4.146)

With this more general expression (multi-particle case) it should be possible to show that the four divergence is zero, however, the problem only asks for one particle. For the one particle case, we can make things really easy by taking the initial point in space and time as the origin, and aligning our velocity with one of the coordinates (say x).

Doing so we have the result derived in class

\begin{aligned}j = e \begin{bmatrix}c \\ v \\ 0 \\ 0 \end{bmatrix}\delta(x - v x^0/c)\delta(y)\delta(z).\end{aligned} \hspace{\stretch{1}}(4.148)

Our divergence then has only two portions

\begin{aligned}\frac{\partial {j^0}}{\partial {x^0}} &= e c (-v/c) \delta'(x - v x^0/c) \delta(y) \delta(z) \\ \frac{\partial {j^1}}{\partial {x}} &= e v \delta'(x - v x^0/c) \delta(y) \delta(z).\end{aligned} \hspace{\stretch{1}}(4.149)

and these cancel out when summed. Note that this requires us to be loose with our delta functions, treating them like regular functions that are differentiable.
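sympy knows how to differentiate delta functions, so even this "loose" one particle check can be done mechanically (a sketch of mine, using the current of 4.148):

from sympy import symbols, DiracDelta, diff, simplify

x, y, z, x0, v, c, e = symbols('x y z x0 v c e', positive=True)

d3 = DiracDelta(x - v * x0 / c) * DiracDelta(y) * DiracDelta(z)
j0 = e * c * d3                       # j^0 from 4.148
j1 = e * v * d3                       # j^1 from 4.148

# partial_i j^i = partial_{x^0} j^0 + partial_x j^1 (the y and z parts are zero)
print(simplify(diff(j0, x0) + diff(j1, x)))  # 0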

For the more general multiparticle case, we can treat the sum one particle at a time, and in each case, rotate coordinates so that the four divergence only picks up one term.

As for physical interpretation via integral, we have using the four dimensional divergence theorem

\begin{aligned}\int d^4 x \partial_i j^i = \int j^i dS_i\end{aligned} \hspace{\stretch{1}}(4.151)

where dS_i is the three-volume element perpendicular to a x^i = \text{constant} plane. These volume elements are detailed generally in the text [2], however, they do note that one special case specifically dS_0 = dx dy dz, the element of the three-dimensional (spatial) volume “normal” to hyperplanes ct = \text{constant}.

Without actually computing the determinants, we have something that is roughly of the form

\begin{aligned}0 = \int j^i dS_i=\int c \rho dx dy dz+\int \mathbf{j} \cdot (\mathbf{n}_x c dt dy dz + \mathbf{n}_y c dt dx dz + \mathbf{n}_z c dt dx dy).\end{aligned} \hspace{\stretch{1}}(4.152)

This is cheating a bit, to just write \mathbf{n}_x, \mathbf{n}_y, \mathbf{n}_z. Are there specific orientations required by the metric? To be precise we’d have to calculate the determinants detailed in the text, and then do the duality transformations.

Per unit time, we can write instead

\begin{aligned}\frac{\partial {}}{\partial {t}} \int \rho dV= -\int \mathbf{j} \cdot (\mathbf{n}_x dy dz + \mathbf{n}_y dx dz + \mathbf{n}_z dx dy)\end{aligned} \hspace{\stretch{1}}(4.153)

Loosely, this describes the fact that the rate of change of the charge in a volume must be matched by the “flow” (flux) of current through the surface bounding that volume.

References

[1] Wikipedia. Electromagnetic tensor — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 27-February-2011]. http://en.wikipedia.org/w/index.php?title=Electromagnetic_tensor&oldid=414989505.

[2] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , , , , , , , , , | Leave a Comment »

My submission for PHY356 (Quantum Mechanics I) Problem Set 4.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted with some grading commentary. [Click here for the PDF of the original submission, as found below.]

Problem 1.

Statement

Is it possible to derive the eigenvalues and eigenvectors presented in Section 8.2 from those in Section 8.1.2? What does this say about the potential energy operator in these two situations?

For reference, 8.1.2 was a finite potential well, with V(x) = V_0 for {\left\lvert{x}\right\rvert} > a, and zero in the interior of the well. This had trigonometric solutions in the interior, and died off exponentially past the boundary of the well.

On the other hand, 8.2 was a delta function potential V(x) = -g \delta(x), which had the solution u(x) = \sqrt{\beta} e^{-\beta {\left\lvert{x}\right\rvert}}, where \beta = m g/\hbar^2.

Solution

The pair of figures in the text [1] for these potentials doesn’t make it clear that there are possibly any similarities. The attractive delta function potential isn’t illustrated (although the delta function is, but with opposite sign), and the scaling and the reference energy levels are different. Let’s illustrate these using the same reference energy level and sign conventions to make the similarities more obvious.

Figure: 8.1.2 Finite Well potential (with energy shifted downwards by V_0).

Figure: 8.2 Delta function potential.

The physics isn’t changed by picking a different point for the reference energy level, so let’s compare the two potentials, and their solutions using V(x) = 0 outside of the well for both cases. The method used to solve the finite well problem in the text is hard to follow, so re-doing this from scratch in a slightly tidier way doesn’t hurt.

Schrödinger’s equation for the finite well, in the {\left\lvert{x}\right\rvert} > a region, is

\begin{aligned}-\frac{\hbar^2}{2m} u'' = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.1)

where a positive bound state energy E_B = -E > 0 has been introduced.

Writing

\begin{aligned}\beta = \sqrt{\frac{2 m E_B}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(2.2)

the wave functions outside of the well are

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} &\quad \mbox{for } x < -a \\ u(a) e^{-\beta(x-a)} &\quad \mbox{for } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

Within the well Schrödinger’s equation is

\begin{aligned}-\frac{\hbar^2}{2m} u'' - V_0 u = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.4)

or

\begin{aligned}u'' = - \frac{2m}{\hbar^2} (V_0 - E_B) u,\end{aligned} \hspace{\stretch{1}}(2.5)

Noting that the bound state energies are the E_B < V_0 values, let \alpha^2 = 2m (V_0 - E_B)/\hbar^2, so that the solutions are of the form

\begin{aligned}u(x) = A e^{i\alpha x} + B e^{-i\alpha x}.\end{aligned} \hspace{\stretch{1}}(2.6)

As was done for the wave functions outside of the well, the normalization constants can be expressed in terms of the values of the wave functions on the boundary. That provides a pair of equations to solve

\begin{aligned}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}=\begin{bmatrix}e^{i \alpha a} & e^{-i \alpha a} \\ e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7)

Inverting this and substitution back into 2.6 yields

\begin{aligned}u(x) &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix} \\ &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\frac{1}{{e^{2 i \alpha a} - e^{-2 i \alpha a}}}\begin{bmatrix}e^{i \alpha a} & -e^{-i \alpha a} \\ -e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix} \\ &=\begin{bmatrix}\frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} &\frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}.\end{aligned}

Expanding the last of these matrix products the wave function is close to completely specified.

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} & \quad \mbox{for } x < -a \\ u(a) \frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} +u(-a) \frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)} & \quad \mbox{for } {\left\lvert{x}\right\rvert} < a \\ u(a) e^{-\beta(x-a)} & \quad \mbox{for } x > a \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)

There are still two unspecified constants u(\pm a) and the constraints on E_B have not been determined (both \alpha and \beta are functions of that energy level). It should be possible to eliminate at least one of the u(\pm a) by computing the wavefunction normalization, and since the well is being narrowed the \alpha term will not be relevant. Since only the vanishingly narrow case where a \rightarrow 0, x \in [-a,a] is of interest, the wave function in that interval approaches

\begin{aligned}u(x) \rightarrow \frac{1}{{2}} (u(a) + u(-a)) + \frac{x}{2a} ( u(a) - u(-a) ) \rightarrow \frac{1}{{2}} (u(a) + u(-a)).\end{aligned} \hspace{\stretch{1}}(2.9)

Since no discontinuity is expected this is just u(a) = u(-a). Let’s write \lim_{a\rightarrow 0} u(a) = A for short, and the limited width well wave function becomes

\begin{aligned}u(x) =\left\{\begin{array}{l l}A e^{\beta x} & \quad \mbox{for } x < 0 \\ A e^{-\beta x} & \quad \mbox{for } x > 0 \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.10)

This is now the same form as the delta function potential, and normalization also gives A = \sqrt{\beta}.

One task remains before the attractive delta function potential can be considered a limiting case for the finite well, since the relation between a, V_0, and g has not been established. To do so, integrate the Schrödinger equation over the infinitesimal range [-a,a]. This was done in the text for the delta function potential, and that provided the relation

\begin{aligned}\beta = \frac{mg}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.11)

For the finite well this is

\begin{aligned}-\frac{\hbar^2}{2m} \int_{-a}^a u'' - V_0 \int_{-a}^a u = -E_B \int_{-a}^a u\end{aligned} \hspace{\stretch{1}}(2.12)

In the limit as a \rightarrow 0 this is

\begin{aligned}\frac{\hbar^2}{2m} (u'(a) - u'(-a)) + V_0 2 a u(0) = 2 E_B a u(0).\end{aligned} \hspace{\stretch{1}}(2.13)

Some care is required with the V_0 a term since a \rightarrow 0 as V_0 \rightarrow \infty, but the E_B term is unambiguously killed, leaving

\begin{aligned}\frac{\hbar^2}{2m} u(0) (-2\beta e^{-\beta a}) = -V_0 2 a u(0).\end{aligned} \hspace{\stretch{1}}(2.14)

The exponential factor goes to unity in the limit, leaving

\begin{aligned}\beta = \frac{m (2 a) V_0}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.15)

Comparing to 2.11 from the attractive delta function completes the problem. The conclusion is that when the finite well is narrowed with a \rightarrow 0, also letting V_0 \rightarrow \infty such that the absolute area of the well g = (2 a) V_0 is maintained, the finite potential well produces exactly the attractive delta function wave function and associated bound state energy.
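This limit can also be checked numerically. Using the even ground state matching condition \alpha \tan(\alpha a) = \beta for the finite well, and narrowing the well while keeping g = 2 a V_0 fixed, \beta should approach m g/\hbar^2. A sketch, assuming scipy is available, in units where \hbar = m = g = 1:

import numpy as np
from scipy.optimize import brentq

hbar = m = g = 1.0

def beta_of_a(a):
    V0 = g / (2 * a)                  # keep the well area g = 2 a V_0 fixed
    def match(EB):                    # even ground state: alpha tan(alpha a) = beta
        alpha = np.sqrt(2 * m * (V0 - EB)) / hbar
        beta = np.sqrt(2 * m * EB) / hbar
        return alpha * np.tan(alpha * a) - beta
    EB = brentq(match, 1e-12 * V0, (1 - 1e-12) * V0)
    return np.sqrt(2 * m * EB) / hbar

for a in (0.1, 0.01, 0.001):
    print(a, beta_of_a(a))            # converges to m g / hbar^2 = 1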

Problem 2.

Statement

For the hydrogen atom, determine {\langle {nlm} \rvert}(1/R){\lvert {nlm} \rangle} and 1/{\langle {nlm} \rvert}R{\lvert {nlm} \rangle} for (nlm)=(211), where R is the radial position operator (X^2+Y^2+Z^2)^{1/2}. What do these quantities represent physically, and are they the same?

Solution

Both of the computational tasks for the hydrogen-like atom require expansion of a braket of the following form

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle},\end{aligned} \hspace{\stretch{1}}(3.16)

where A(R) = R = (X^2 + Y^2 + Z^2)^{1/2} or A(R) = 1/R.

The spherical representation of the identity resolution is required to convert this braket into integral form

\begin{aligned}\mathbf{1} = \int r^2 \sin\theta dr d\theta d\phi {\lvert { r \theta \phi} \rangle}{\langle { r \theta \phi} \rvert},\end{aligned} \hspace{\stretch{1}}(3.17)

where the spherical wave function is given by the braket \left\langle{{ r \theta \phi}} \vert {{nlm}}\right\rangle = R_{nl}(r) Y_{lm}(\theta,\phi).

Additionally, the radial form of the delta function will be required, which is

\begin{aligned}\delta(\mathbf{x} - \mathbf{x}') = \frac{1}{{r^2 \sin\theta}} \delta(r - r') \delta(\theta - \theta') \delta(\phi - \phi')\end{aligned} \hspace{\stretch{1}}(3.18)

Two applications of the identity operator to the braket yield

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} &={\langle {nlm} \rvert} \mathbf{1} A(R) \mathbf{1} {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' \left\langle{{nlm}} \vert {{ r \theta \phi}}\right\rangle{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}\left\langle{{ r' \theta' \phi'}} \vert {{nlm}}\right\rangle \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi){\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}R_{nl}(r') Y_{lm}(\theta', \phi').\end{aligned}

To continue, an assumption about the matrix element {\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} is required. It seems reasonable that this would be

\begin{aligned}{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} = \delta(\mathbf{x} - \mathbf{x}') A(r) = \frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r).\end{aligned} \hspace{\stretch{1}}(3.19)

The braket can now be written completely in integral form as

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi').\end{aligned}

Application of the delta functions then reduces the integral. Since the only \theta and \phi dependence is in the (orthonormal) Y_{lm} terms, those factors drop out

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&=\int dr d\theta d\phi r^2 \sin\theta R_{nl}(r) Y_{lm}^{*}(\theta, \phi)A(r)R_{nl}(r) Y_{lm}(\theta, \phi) \\ &=\int dr r^2 R_{nl}(r) A(r)R_{nl}(r) \underbrace{\int\sin\theta d\theta d\phi Y_{lm}^{*}(\theta, \phi)Y_{lm}(\theta, \phi) }_{=1}.\end{aligned}

This leaves just the radial wave functions in the integral

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}=\int dr r^2 R_{nl}^2(r) A(r)\end{aligned} \hspace{\stretch{1}}(3.20)

As a consistency check, observe that with A(r) = 1 this integral evaluates to 1 according to equation (8.274) in the text, so we can interpret (r R_{nl}(r))^2 as the radial probability density.

The problem asks specifically for these expectation values for the {\lvert {211} \rangle} state. For that state the radial wavefunction is found in (8.277) as

\begin{aligned}R_{21}(r) = \left(\frac{Z}{2a_0}\right)^{3/2} \frac{ Z r }{a_0 \sqrt{3}} e^{-Z r/2 a_0}\end{aligned} \hspace{\stretch{1}}(3.21)

The braket can now be written explicitly

\begin{aligned}{\langle {21m} \rvert} A(R) {\lvert {21m} \rangle}=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^4 e^{-Z r/ a_0} A(r)\end{aligned} \hspace{\stretch{1}}(3.22)

Now, let’s consider the two functions A(r) separately. First, for A(r) = r, using the substitution u = Z r/a_0, we have

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^5 e^{-Z r/ a_0} \\ &=\frac{ a_0 }{ 24 Z } \int_0^\infty du u^5 e^{-u}.\end{aligned}

The last integral is \Gamma(6) = 5! = 120, leaving

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ 5 a_0 }{ Z }.\end{aligned} \hspace{\stretch{1}}(3.23)

The radial position expectation value for this {\lvert {21m} \rangle} state is proportional to the Bohr radius. For the hydrogen atom, where Z=1, the average outcome of repeated measurements of the physical quantity associated with the operator R is 5 times the Bohr radius for the n=2, l=1 states.

Our problem actually asks for the inverse of this expectation value, and for reference this is

\begin{aligned}1/ {\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ Z }{ 5 a_0 } \end{aligned} \hspace{\stretch{1}}(3.24)

Performing the same task for A(R) = 1/R

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^3 e^{-Z r/ a_0} \\ &=\frac{1}{{24}} \frac{ Z }{ a_0 } \int_0^\infty du u^3 e^{-u}.\end{aligned}

This last integral has the value 3! = 6, and we have the second part of the computational task complete

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = \frac{1}{{4}} \frac{ Z }{ a_0 } \end{aligned} \hspace{\stretch{1}}(3.25)

This answers the question of whether 3.24 and 3.25 are equal: they are not.
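Since 3.20 reduces everything to one dimensional radial integrals, these values are also easy to verify symbolically. A small sympy sketch (my own check, not part of the problem set):

import sympy as sp

r, Z, a0 = sp.symbols('r Z a_0', positive=True)

# R_21 radial wavefunction from 3.21.
R21 = (Z / (2 * a0))**sp.Rational(3, 2) * (Z * r / (sp.sqrt(3) * a0)) * sp.exp(-Z * r / (2 * a0))

def radial_expectation(A):
    # <A(R)> = integral over [0, oo) of r^2 R_21(r)^2 A(r), per 3.20.
    return sp.simplify(sp.integrate(r**2 * R21**2 * A, (r, 0, sp.oo)))

print(radial_expectation(1))      # normalization: 1
print(radial_expectation(r))      # 5*a_0/Z, matching 3.23
print(radial_expectation(1 / r))  # Z/(4*a_0), matching 3.25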

Still remaining for this problem is the question of what these quantities represent physically.

The quantity {\langle {nlm} \rvert} R {\lvert {nlm} \rangle} is the expectation value for the radial position of the particle measured from the center of mass of the system. This is the average outcome for many measurements of this radial distance when the system is prepared in the state {\lvert {nlm} \rangle} prior to each measurement.

Interestingly, the physical quantity that we associate with the operator R has a different measured average than the inverse of the expectation value of the inverted operator 1/R. Regardless, we have a physical (observable) quantity associated with the operator 1/R, and when the system is prepared in state {\lvert {21m} \rangle} prior to each measurement, the average outcome of many measurements of this physical quantity is {\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = Z/(n^2 a_0), a quantity inversely proportional to the Bohr radius.

ASIDE: Comparing to the general case.

As a confirmation of the results obtained, we can check 3.24 and 3.25 against the general form of the expectation values \left\langle{{R^s}}\right\rangle for various powers s of the radial position operator. These can be found in locations such as farside.ph.utexas.edu (which states them for Z=1, without proof), and in [2] (where these and some harder-looking expectation values are left as an exercise for the reader). Both sources give:

\begin{aligned}\left\langle{{R}}\right\rangle &= \frac{a_0}{2} ( 3 n^2 -l (l+1) ) \\ \left\langle{{1/R}}\right\rangle &= \frac{1}{n^2 a_0} \end{aligned} \hspace{\stretch{1}}(3.26)
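Substituting our quantum numbers n = 2, l = 1 (with Z = 1) gives a quick consistency check against 3.23 and 3.25

\begin{aligned}\left\langle{{R}}\right\rangle = \frac{a_0}{2} ( 3 (2)^2 - 1(1+1) ) = 5 a_0, \qquad \left\langle{{1/R}}\right\rangle = \frac{1}{2^2 a_0} = \frac{1}{4 a_0}.\end{aligned}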

It is curious to me that in the general expectation values noted in 3.26 there is an l quantum number dependence for \left\langle{{R}}\right\rangle, but only an n quantum number dependence for \left\langle{{1/R}}\right\rangle. It is not obvious to me why this should be the case.

References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory Quantum Mechanics. Addison-Wesley, 2003.
