

PHY354 Advanced Classical Mechanics. Problem set 1 (ungraded).

Posted by peeterjoot on February 3, 2012

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Ungraded solutions to posted problem set 1 (I’m auditing half the lectures for this course and won’t be submitting any solutions for grading).

Problem 1. Lorentz force Lagrangian.

Evaluate the Euler-Lagrange equations.

This problem has two parts. The first is to derive the Lorentz force equation

\begin{aligned}\mathbf{F} &= q (\mathbf{E} + \mathbf{v} \times \mathbf{B}) \\ \mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.1)

by applying the Euler-Lagrange equations to the Lagrangian

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \mathbf{v}^2 + q \mathbf{v} \cdot \mathbf{A} - q \phi.\end{aligned} \hspace{\stretch{1}}(2.4)

In coordinates, employing summation convention, this Lagrangian is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m \dot{x}_j \dot{x}_j + q \dot{x}_j A_j - q \phi.\end{aligned} \hspace{\stretch{1}}(2.5)

Taking derivatives

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{x}_i}} = m \dot{x}_i + q A_i,\end{aligned} \hspace{\stretch{1}}(2.6)

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{x}_i}} &= m \ddot{x}_i + q \frac{\partial {A_i}}{\partial {t}}+ q \frac{\partial {A_i}}{\partial {x_j}} \frac{dx_j}{dt} \\ &=m \ddot{x}_i + q \frac{\partial {A_i}}{\partial {t}}+ q \frac{\partial {A_i}}{\partial {x_j}} \dot{x}_j\end{aligned}

This must equal

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {x_i}} = q \dot{x}_j \frac{\partial {A_j}}{\partial {x_i}} - q \frac{\partial {\phi}}{\partial {x_i}},\end{aligned} \hspace{\stretch{1}}(2.7)

So we have

\begin{aligned}m \ddot{x}_i &= -q \frac{\partial {A_i}}{\partial {t}}- q \frac{\partial {A_i}}{\partial {x_j}} \dot{x}_j+q \dot{x}_j \frac{\partial {A_j}}{\partial {x_i}} - q \frac{\partial {\phi}}{\partial {x_i}} \\ &=-q \left( \frac{\partial {A_i}}{\partial {t}} + \frac{\partial {\phi}}{\partial {x_i}} \right)+q v_j \left( \frac{\partial {A_j}}{\partial {x_i}} - \frac{\partial {A_i}}{\partial {x_j}} \right)\end{aligned}

The first term is just q E_i. If we expand out (\mathbf{v} \times \mathbf{B})_i we see that q (\mathbf{v} \times \mathbf{B})_i matches the remaining term

\begin{aligned}(\mathbf{v} \times \mathbf{B})_i&=v_a B_b \epsilon_{abi} \\ &=v_a \partial_r A_s \epsilon_{rsb} \epsilon_{abi} \\ &=v_a \partial_r A_s \delta_{rs}^{[ia]} \\ &=v_a (\partial_i A_a - \partial_a A_i).\end{aligned}

An a \rightarrow j substitution, and comparison of this with the Euler-Lagrange result above, completes the exercise.
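As a sanity check on the index manipulation (my own addition, not part of the problem set), here is a small sympy sketch, assuming sympy is available, verifying that v_j \left( \partial_i A_j - \partial_j A_i \right) agrees component by component with (\mathbf{v} \times (\boldsymbol{\nabla} \times \mathbf{A}))_i, treating the velocity components as constants under the spatial derivatives:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
v = sp.Matrix(sp.symbols('v1 v2 v3'))                 # velocity components, constant under grad
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in (1, 2, 3)])

def curl(F):
    # (curl F)_i = epsilon_{ijk} partial_j F_k, written out explicitly
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

lhs = v.cross(curl(A))                                # (v x B)_i with B = curl A
rhs = sp.Matrix([sum(v[j]*(sp.diff(A[j], X[i]) - sp.diff(A[i], X[j])) for j in range(3))
                 for i in range(3)])
print(sp.simplify(lhs - rhs))                         # zero vector
```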

Show that the Lagrangian is gauge invariant.

With a gauge transformation of the form

\begin{aligned}\phi &\rightarrow \phi + \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \chi,\end{aligned} \hspace{\stretch{1}}(2.8)

show that the Lagrangian is invariant.

We really only have to show that

\begin{aligned}\mathbf{v} \cdot \mathbf{A} - \phi\end{aligned} \hspace{\stretch{1}}(2.10)

is invariant. Making the transformation we have

\begin{aligned}\mathbf{v} \cdot \mathbf{A} - \phi&\rightarrow v_j \left(A_j - \partial_j \chi \right) - \left(\phi + \frac{\partial {\chi}}{\partial {t}} \right) \\ &=v_j A_j - \phi - v_j \partial_j \chi - \frac{\partial {\chi}}{\partial {t}} \\ &=\mathbf{v} \cdot \mathbf{A} - \phi- \left( \frac{d x_j}{dt} \frac{\partial {\chi}}{\partial {x_j}} + \frac{\partial {\chi}}{\partial {t}} \right) \\ &=\mathbf{v} \cdot \mathbf{A} - \phi- \frac{d \chi(\mathbf{x}, t)}{dt}.\end{aligned}

We see then that the Lagrangian transforms as

\begin{aligned}\mathcal{L} \rightarrow \mathcal{L} + \frac{d}{dt}\left( -q \chi \right),\end{aligned} \hspace{\stretch{1}}(2.11)

and differs only by a total derivative. With the lemma from the lecture, we see that this gauge transformation does not have any effect on the end result of applying the Euler-Lagrange equations.

Problem 2. Action minimization problem for surface gravity.

Here we are told to guess at a solution

\begin{aligned}y = a_2 t^2 + a_1 t + a_0,\end{aligned} \hspace{\stretch{1}}(3.12)

for the height of a particle thrown up into the air. With initial condition y(0) = 0 we have

\begin{aligned}a_0 = 0,\end{aligned} \hspace{\stretch{1}}(3.13)

and with a final condition of y(T) = 0 we also have

\begin{aligned}0 &= a_2 T^2 + a_1 T \\ &= T( a_2 T + a_1 ),\end{aligned}

so a_1 = -a_2 T, and we have

\begin{aligned}y(t) &= a_2 t^2 - a_2 T t = a_2 (t^2 - T t) \\ \dot{y}(t) &= a_2 (2 t - T )\end{aligned} \hspace{\stretch{1}}(3.14)

So our Lagrangian is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m a_2^2 (2 t - T )^2 - m g a_2 (t^2 - T t)\end{aligned} \hspace{\stretch{1}}(3.16)

and our action is

\begin{aligned}S = \int_0^T dt \left( \frac{1}{{2}} m a_2^2 (2 t - T )^2 - m g a_2 (t^2 - T t)\right).\end{aligned} \hspace{\stretch{1}}(3.17)

To minimize this action with respect to a_2 we take the derivative

\begin{aligned}\frac{\partial {S}}{\partial {a_2}} = \int_0^T dt \left( m a_2 (2 t - T )^2 - m g (t^2 - T t)\right).\end{aligned} \hspace{\stretch{1}}(3.18)

Integrating we have

\begin{aligned}0 &= \frac{\partial {S}}{\partial {a_2}} \\ &={\left.\left(\frac{1}{{6}} m a_2 (2 t - T )^3 - m g \left(\frac{1}{{3}}t^3 - \frac{1}{{2}}T t^2 \right)\right)\right\vert}_0^T \\ &=\frac{1}{{6}} m a_2 T^3 - m g \left(\frac{1}{{3}}T^3 - \frac{1}{{2}}T^3 \right)-\frac{1}{{6}} m a_2 (- T )^3 \\ &=m T^3 \left( \frac{1}{{3}} a_2 - g \left( \frac{1}{{3}} - \frac{1}{{2}} \right) \right) \\ &=\frac{1}{{3}} m T^3 \left( a_2 - g \left( 1 - \frac{3}{2} \right) \right) \\ \end{aligned}

or

\begin{aligned}a_2 + g/2 = 0,\end{aligned} \hspace{\stretch{1}}(3.19)

which is the result we are required to show.
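Here is a tiny sympy sketch of the same extremization (assuming sympy is available), building the action from the guessed trajectory and solving \partial S/\partial a_2 = 0 directly:

```python
import sympy as sp

t, T, m, g = sp.symbols('t T m g', positive=True)
a2 = sp.symbols('a2')

y = a2*(t**2 - T*t)                      # trajectory with y(0) = y(T) = 0
L = m*sp.diff(y, t)**2/2 - m*g*y         # kinetic minus potential energy
S = sp.integrate(L, (t, 0, T))           # the action 3.17

print(sp.solve(sp.diff(S, a2), a2))      # [-g/2]
```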

Problem 3. Change of variables in a Lagrangian.

Here we want to show that after a change of variables, provided such a transformation is non-singular, the Euler-Lagrange equations are still valid.

Let’s write

\begin{aligned}r_i = r_i(q_1, q_2, \cdots q_N).\end{aligned} \hspace{\stretch{1}}(4.20)

Our “velocity” variables in terms of the original parameterization q_i are

\begin{aligned}\dot{r}_j = \frac{dr_j}{dt} = \frac{\partial {r_j}}{\partial {q_i}} \frac{d q_i}{dt} = \dot{q}_i \frac{\partial {r_j}}{\partial {q_i}},\end{aligned} \hspace{\stretch{1}}(4.21)

so we have

\begin{aligned}\frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}} = \frac{\partial {r_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.22)

Computing the LHS of the Euler-Lagrange equation we find

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {q_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.23)

For our RHS we start with

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{q}_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}}= \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}},\end{aligned} \hspace{\stretch{1}}(4.24)

but {\partial {r_j}}/{\partial {\dot{q}_i}} = 0, so this is just

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{q}_i}} = \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {\dot{q}_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {\dot{q}_i}}= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}}.\end{aligned} \hspace{\stretch{1}}(4.25)

The Euler-Lagrange equations become

\begin{aligned}0 &=\frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}- \frac{d{{}}}{dt} \left(\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {r_j}}{\partial {q_i}}\right) \\ &=   \frac{\partial {\mathcal{L}}}{\partial {r_j}} \frac{\partial {r_j}}{\partial {q_i}}+ \not{{\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{\partial {\dot{r}_j}}{\partial {q_i}}}}- \left( \frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \right) \frac{\partial {r_j}}{\partial {q_i}}- \not{{\frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \frac{d{{}}}{dt} \frac{\partial {r_j}}{\partial {q_i}} }}\\ &=\left( \frac{\partial {\mathcal{L}}}{\partial {r_j}} -\frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}} \right) \frac{\partial {r_j}}{\partial {q_i}}\end{aligned}

Since the transformation is assumed to be non-singular, the matrix of partials

\begin{aligned}{\left\lVert{ \frac{\partial {r_j}}{\partial {q_i}} }\right\rVert}\end{aligned} \hspace{\stretch{1}}(4.26)

is invertible, so contracting with its inverse we have the Euler-Lagrange equations in the r coordinates as well

\begin{aligned}0 = \frac{\partial {\mathcal{L}}}{\partial {r_j}} -\frac{d{{}}}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{r}_j}}.\end{aligned} \hspace{\stretch{1}}(4.27)
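As a concrete illustration of this covariance (my own example, not part of the problem), consider the one dimensional Lagrangian \mathcal{L} = \frac{1}{{2}} m \dot{x}^2 - \frac{1}{{2}} k x^2 with the non-singular change of variables x = r(q) = q^3. The sympy sketch below (assuming sympy is available) confirms that the Euler-Lagrange expression computed directly in the q coordinate is \partial r/\partial q times the Euler-Lagrange expression in x, so one vanishes exactly when the other does:

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)

x = q**3                                  # change of variables r(q) = q^3, non-singular for q != 0
L = m*x.diff(t)**2/2 - k*x**2/2           # the same Lagrangian, written in the q coordinate

# Euler-Lagrange expression in q, using dummy symbols for the partial derivatives
Q, Qd = sp.symbols('Q Qd')
L_flat = L.subs([(qd, Qd), (q, Q)])
EL_q = sp.diff(L_flat, Qd).subs([(Q, q), (Qd, qd)]).diff(t) \
     - sp.diff(L_flat, Q).subs([(Q, q), (Qd, qd)])

# Euler-Lagrange expression in x, evaluated on x = q^3, scaled by dr/dq
EL_x = m*x.diff(t, 2) + k*x
print(sp.simplify(EL_q - sp.diff(x, q)*EL_x))     # 0
```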


PHY450H1S, Relativistic Electrodynamics, Problem Set 2.

Posted by peeterjoot on March 4, 2011

[Click here for a PDF of this post with nicer formatting]

Problem 1.

Statement

A particle of rest mass m whose energy is three times its rest energy collides with an identical particle at rest. Suppose they stick together. Use conservation laws to find the mass of the resulting particle and its velocity. Is its mass greater or smaller than 2m? Comment.

Solution

The energy of the initially moving particle before collision is

\begin{aligned}\mathcal{E} = \frac{m c^2 }{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} = 3 m c^2.\end{aligned} \hspace{\stretch{1}}(1.1)

Solving for the velocity we have

\begin{aligned}{\left\lvert{\frac{\mathbf{v}}{c}}\right\rvert} = \frac{2 \sqrt{2}}{3}.\end{aligned} \hspace{\stretch{1}}(1.2)

Our four velocity is

\begin{aligned}u^i= \gamma \left( 1, \frac{\mathbf{v}}{c} \right) = ( 3, 2 \sqrt{2} ).\end{aligned} \hspace{\stretch{1}}(1.3)

Designate the four momentum for this particle as

\begin{aligned}p_{(1)}^i = m c ( 3, 2 \sqrt{2} ).\end{aligned} \hspace{\stretch{1}}(1.4)

For the second particle we have

\begin{aligned}p_{(2)}^i = m c ( 1, 0 ).\end{aligned} \hspace{\stretch{1}}(1.5)

Our initial and final four momentum will be equal, and our resulting velocity can only be in the direction of the initial particle. This leaves us with

\begin{aligned}p_{(f)}^i&= M c \frac{1}{{\sqrt{1 - \frac{\mathbf{v}_f^2}{c^2}}}} \left( 1, \frac{\mathbf{v}_f}{c} \right) \\ &= m c ( 1, 0 ) + m c ( 3, 2 \sqrt{2} )  \\ &= m c ( 4, 2 \sqrt{2} ) \\ &= 4 m c \left( 1, \frac{1}{{\sqrt{2}}} \right)\end{aligned}

Our final velocity is v_f = c/\sqrt{2}.

We have M \gamma = 4 m for the final particle, but we have

\begin{aligned}\gamma = \frac{1}{\sqrt{1 - 1/2}} = \sqrt{2},\end{aligned} \hspace{\stretch{1}}(1.6)

so our final mass is

\begin{aligned}M = \frac{4 m}{\sqrt{2}} = 2 \sqrt{2} m > 2 m.\end{aligned} \hspace{\stretch{1}}(1.7)

Relativistically, we have conservation of four-momentum, not conservation of mass, so a composite body will not necessarily have a mass measurement that is the sum of the parts. One possible way to reconcile this statement with intuition is to define mass in terms of the four momentum

\begin{aligned}m^2 = \frac{p^i p_i}{c^2},\end{aligned} \hspace{\stretch{1}}(1.8)

and think of it as a derived quantity, not fundamental.
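As a numeric cross check (assuming numpy, and working in units where c = m = 1), summing the two four-momenta and using 1.8 to extract the composite mass reproduces M = 2\sqrt{2} and v_f = c/\sqrt{2}:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])         # metric, signature (+,-,-,-)

gamma = 3.0                                     # energy is three times the rest energy
v = np.sqrt(1 - 1/gamma**2)                     # |v|/c = 2 sqrt(2)/3
p1 = gamma*np.array([1.0, v, 0.0, 0.0])         # moving particle, p^i = m c gamma (1, v/c)
p2 = np.array([1.0, 0.0, 0.0, 0.0])             # particle at rest
p = p1 + p2

M = np.sqrt(p @ eta @ p)                        # mass from p^i p_i, as in 1.8
print(M, 2*np.sqrt(2))                          # 2.828..., 2.828...
print(p[1]/p[0], 1/np.sqrt(2))                  # final v/c, both 0.707...
```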

Problem 2.

Statement

This problem has three parts
\begin{enumerate}
\item Express the “normal” (i.e. not 4-, but 3-) acceleration, equal to \dot{\mathbf{v}}, of a particle in terms of its velocity, \mathbf{E}, and \mathbf{B}, using the equation of motion of a relativistic particle in an external electromagnetic field.
\item Consider now a beam of electrons, moving along the x direction with a known energy \mathcal{E}, entering a region with constant homogeneous \mathbf{E} and \mathbf{B} fields. The fields are perpendicular, \mathbf{E} is along the y direction while \mathbf{B} is along the z direction.
\begin{enumerate}
\item
Show that by tuning the values of \mathbf{E} and \mathbf{B} it is possible to balance electric and magnetic forces so that the beam does not deviate from its original direction (and, say, hits a screen directly ahead).
\item Find a relation determining the mass of the electron using \mathcal{E} and the measured values of the fields for which no deviation occurs. Do not assume a non-relativistic limit and elucidate which part of this problem (a way to measure the mass of the electron) is affected by relativity.
\end{enumerate}
\item Solve for the motion (i.e. find the trajectories) of a relativistic charged particle in perpendicular constant and homogeneous electric and magnetic fields; do not assume \mathbf{E} = \mathbf{B}.
\end{enumerate}

Solution

1. Finding \dot{\mathbf{v}}

With the particle’s energy given by

\begin{aligned}\mathcal{E} = \gamma m c^2,\end{aligned} \hspace{\stretch{1}}(2.9)

we note that

\begin{aligned}\mathcal{E}\mathbf{v} = (\gamma m \mathbf{v}) c^2 = \mathbf{p} c^2.\end{aligned} \hspace{\stretch{1}}(2.10)

Taking derivatives we have

\begin{aligned}c^2 \frac{d{\mathbf{p}}}{dt} &= \mathbf{v} \frac{d{{\mathcal{E}}}}{dt} + \frac{d{\mathbf{v}}}{dt} \mathcal{E} \\ &= \mathbf{v} (e \mathbf{E} \cdot \mathbf{v}) + \frac{d{\mathbf{v}}}{dt} \mathcal{E} \\ \end{aligned}

Rearranging we have

\begin{aligned}\frac{d{\mathbf{v}}}{dt}=\frac{c^2 e \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right) - \mathbf{v} (e \mathbf{E} \cdot \mathbf{v}) }{ \mathcal{E} } \end{aligned} \hspace{\stretch{1}}(2.11)

which leaves us with the desired result

\begin{aligned}\boxed{\dot{\mathbf{v}} =\frac{e}{m} \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} - \frac{\mathbf{v}}{c} \left(\mathbf{E} \cdot \frac{\mathbf{v}}{c} \right) \right)}\end{aligned} \hspace{\stretch{1}}(2.12)

1b. On the energy change rate.

Note that when the problem set was assigned, the relation

\begin{aligned}\frac{d{{\mathcal{E}}}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.13)

had not been demonstrated. To show this observe that we have

\begin{aligned}\frac{d}{dt} \mathcal{E}&= m c^2 \frac{d\gamma}{dt} \\ &= m c^2 \frac{d}{dt} \frac{1}{{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}}} \\ &= m c^2 \frac{\frac{\mathbf{v}}{c^2} \cdot \frac{d\mathbf{v}}{dt}}{\left(1 - \frac{\mathbf{v}^2}{c^2}\right)^{3/2}} \\ &= \frac{m \gamma \mathbf{v} \cdot \frac{d\mathbf{v}}{dt}}{1 - \frac{\mathbf{v}^2}{c^2}}\end{aligned}

We also have

\begin{aligned}\mathbf{v} \cdot \frac{d{\mathbf{p}}}{dt} &= \mathbf{v} \cdot \frac{d{{}}}{dt} \frac{m \mathbf{v}}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} \\ &= m\mathbf{v}^2 \frac{d{{\gamma}}}{dt} + m \gamma \mathbf{v} \cdot \frac{d{\mathbf{v}}}{dt} \\ &= m\mathbf{v}^2 \frac{d{{\gamma}}}{dt} + m c^2 \frac{d{{\gamma}}}{dt} \left( 1 - \frac{\mathbf{v}^2}{c^2} \right) \\ &= m c^2 \frac{d{{\gamma}}}{dt}.\end{aligned}

Utilizing the Lorentz force equation, we have

\begin{aligned}\mathbf{v} \cdot \frac{d{\mathbf{p}}}{dt} = e \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right) \cdot \mathbf{v} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.14)

and, assembling the above, find that we have

\begin{aligned}\frac{d{{(m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v} \end{aligned} \hspace{\stretch{1}}(2.15)
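Here is a small sympy consistency sketch (assuming sympy is available): since d(m c^2 \gamma)/dt = m \gamma^3 \mathbf{v} \cdot \dot{\mathbf{v}}, contracting the boxed acceleration 2.12 with \mathbf{v} should reproduce 2.15, and it does:

```python
import sympy as sp

e, m, c = sp.symbols('e m c', positive=True)
v = sp.Matrix(sp.symbols('v_x v_y v_z', real=True))
E = sp.Matrix(sp.symbols('E_x E_y E_z', real=True))
B = sp.Matrix(sp.symbols('B_x B_y B_z', real=True))

root = sp.sqrt(1 - v.dot(v)/c**2)                        # 1/gamma
a = (e/m)*root*(E + v.cross(B)/c - v*(E.dot(v))/c**2)    # boxed result 2.12

dE_dt = m*v.dot(a)/root**3                               # m gamma^3 v . dv/dt
print(sp.simplify(dE_dt - e*E.dot(v)))                   # 0
```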

2. (a). Tuning \mathbf{E} and \mathbf{B}

Using our previous result with \mathbf{E} = E \hat{\mathbf{y}} and \mathbf{B} = B \hat{\mathbf{z}}, our system of equations takes the form

\begin{aligned}\dot{\mathbf{v}} = \frac{e}{m} \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} \left( E \hat{\mathbf{y}} + \hat{\mathbf{x}} \frac{v_y}{c} B - \hat{\mathbf{y}} \frac{v_x}{c} B - \frac{\mathbf{v}}{c} E \frac{v_y}{c} \right)\end{aligned} \hspace{\stretch{1}}(2.16)

This is really three equations, but they are coupled with the nasty \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} term. However, since it is specified that the particles have a known energy \mathcal{E}, and that energy is

\begin{aligned}\mathcal{E} = \frac{ m c^2 }{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}},\end{aligned} \hspace{\stretch{1}}(2.17)

we can write

\begin{aligned}\sqrt{1 - \frac{\mathbf{v}^2}{c^2}} = \frac{ m c^2 }{\mathcal{E}}\end{aligned} \hspace{\stretch{1}}(2.18)

This eliminates the worst of the coupling, leaving three less hairy equations to solve

\begin{aligned}\dot{v}_x &= \frac{e c^2}{\mathcal{E}} \left( \frac{v_y}{c} B - \frac{v_x v_y}{c^2} E \right) \\ \dot{v}_y &= \frac{e c^2}{\mathcal{E}} \left( E - \frac{v_x}{c} B - \frac{v_y^2}{c^2} E \right) \\ \dot{v}_z &= \frac{e c^2}{\mathcal{E}} \left( - \frac{v_y v_z}{c^2} E \right)\end{aligned} \hspace{\stretch{1}}(2.19)

We don’t actually want to compute general solutions for these equations. Instead we just wish to examine the constraints on E and B that will keep v_y = v_z = 0.

First off we see from the \dot{v}_z equation above that if v_y = 0 or v_z = 0 initially, then \dot{v}_z = 0, and v_z(t) = \text{constant} = v_z(0) = 0. So, if the beam is initially aligned with the x direction, it will not deviate towards the z axis (in the direction of the magnetic field) at all.

Next, if we initially have v_y = 0, then at that point in time, our equations for \dot{v}_x and \dot{v}_y are, respectively,

\begin{aligned}\dot{v}_x &= 0 \\ \dot{v}_y &= \frac{e c^2}{\mathcal{E}} \left( E - \frac{v_x}{c} B \right) \end{aligned} \hspace{\stretch{1}}(2.22)

We are able to solve for the time evolution of the velocities directly

\begin{aligned}v_x(t) &= \text{constant} = v_x(0) \\ v_y(t) &= \frac{e c^2}{\mathcal{E}} \left( E - \frac{v_x(0)}{c} B \right) t\end{aligned} \hspace{\stretch{1}}(2.24)

We can maintain zero deviation in the y direction (v_y(t) = 0) provided we pick

\begin{aligned}E = \frac{v_x(0)}{c} B\end{aligned} \hspace{\stretch{1}}(2.26)
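A quick numeric sketch of this balance condition (assuming numpy; the lumped constant e c^2/\mathcal{E} and the field values below are made up): a crude Euler integration of the system 2.19, once with E tuned to v_x(0) B/c and once detuned, shows the tuned beam staying on the x axis while the detuned beam drifts in the y direction:

```python
import numpy as np

c = 1.0
coeff = 1.0                        # stands in for the constant e c^2 / script-E (made up value)
B = 1.0
vx0 = 0.6*c

def integrate(E_field, steps=2000, dt=1e-3):
    # crude Euler stepping of the system 2.19, purely as a consistency check
    v = np.array([vx0, 0.0, 0.0])
    for _ in range(steps):
        vx, vy, vz = v
        dv = coeff*np.array([vy*B/c - vx*vy*E_field/c**2,
                             E_field - vx*B/c - vy**2*E_field/c**2,
                             -vy*vz*E_field/c**2])
        v = v + dt*dv
    return v

print(integrate(vx0*B/c))          # tuned (2.26): stays at [0.6, 0, 0]
print(integrate(1.1*vx0*B/c))      # detuned: v_y has drifted away from zero
```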

2. (b). Finding the mass of the electron.

Once the fields have been adjusted so that there is no deviation in the y and z directions, our particle’s velocity must be

\begin{aligned}\frac{v_x}{c} = \frac{E}{B}\end{aligned} \hspace{\stretch{1}}(2.27)

If the energy has also been measured, we have a relation determining the mass from

\begin{aligned}\mathcal{E} = \frac{m c^2}{\sqrt{1 - v_x^2/c^2}} = \frac{ m c^2 }{ \sqrt{ 1 - E^2/B^2 }}\end{aligned} \hspace{\stretch{1}}(2.28)

With a slight rearrangement, our mass can then be calculated from the energy \mathcal{E}, and field measurements

\begin{aligned}m = \frac{ \mathcal{E} }{c^2} \sqrt{ 1 - E^2/B^2 }.\end{aligned} \hspace{\stretch{1}}(2.29)
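For example (made up numbers, assuming numpy), generating a fake measurement from a known mass and then applying 2.29 recovers that mass:

```python
import numpy as np

c = 1.0
m_true = 1.0
v = 0.8*c                                    # hypothetical beam velocity
energy = m_true*c**2/np.sqrt(1 - (v/c)**2)   # the measured energy, script-E
B = 2.0                                      # pick a B field, then tune E for no deflection
E = (v/c)*B                                  # the balance condition 2.26

m = (energy/c**2)*np.sqrt(1 - (E/B)**2)      # equation 2.29
print(m)                                     # 1.0
```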

3. Solve for the relativistic trajectory of a particle in perpendicular fields.

Our equation to solve is

\begin{aligned}\frac{d{{u^i}}}{ds} = \frac{e}{m c^2} F^{ij} g_{jk} u^k,\end{aligned} \hspace{\stretch{1}}(2.30)

where

\begin{aligned}{\left\lVert{ F^{ij} g_{jk} }\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}=\begin{bmatrix}0 & E_x & E_y & E_z \\ E_x & 0 & B_z & -B_y \\ E_y & -B_z & 0 & B_x \\ E_z & B_y & -B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.31)

However, with the fields being perpendicular, we are free to align them with our choice of axis. As above, let’s use \mathbf{E} = E \hat{\mathbf{y}}, and \mathbf{B} = B \hat{\mathbf{z}}. Writing u for the column vector with components u^i we have a matrix equation to solve

\begin{aligned}\frac{d{{u}}}{ds} = \frac{ e }{m c^2}\begin{bmatrix}0 & 0 & E & 0 \\ 0 & 0 & B & 0 \\ E & -B & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix} u = F u.\end{aligned} \hspace{\stretch{1}}(2.32)

It is simple to verify that our characteristic equation is

\begin{aligned}0 &= {\left\lvert{ F - \lambda I }\right\rvert} \\ &= \begin{vmatrix}-\lambda & 0 & E & 0 \\ 0 & -\lambda & B & 0 \\ E & -B & -\lambda & 0 \\ 0 & 0 & 0 & -\lambda\end{vmatrix} \\ &= -\lambda^2 ( -\lambda^2 - B^2 + E^2 )\end{aligned}

so that our eigenvalues are

\begin{aligned}\lambda = 0, 0, \pm \sqrt{E^2 - B^2}.\end{aligned} \hspace{\stretch{1}}(2.33)

Since the fields are constant, we can diagonalize this, and solve by exponentiation.

Let

\begin{aligned}D = \sqrt{E^2 - B^2}.\end{aligned} \hspace{\stretch{1}}(2.34)

To solve for the eigenvector e_D for \lambda = D we need solutions to

\begin{aligned}\begin{bmatrix}-D & 0 & E & 0 \\ 0 & -D & B & 0 \\ E & -B & -D & 0 \\ 0 & 0 & 0 & -D\end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d\end{bmatrix}  = 0,\end{aligned} \hspace{\stretch{1}}(2.35)

and it is straightforward to compute

\begin{aligned}e_D = \frac{1}{{\sqrt{2}E}}\begin{bmatrix} E \\ B \\ D \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.36)

Similarly for the \lambda = -D eigenvector e_{-D} we wish to solve

\begin{aligned}\begin{bmatrix}D & 0 & E & 0 \\ 0 & D & B & 0 \\ E & -B & D & 0 \\ 0 & 0 & 0 & D\end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d\end{bmatrix}  = 0,\end{aligned} \hspace{\stretch{1}}(2.37)

and find that

\begin{aligned}e_{-D} = \frac{1}{{\sqrt{2}E}}\begin{bmatrix} E \\ B \\ -D \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.38)

We can also pick orthonormal eigenvectors for the degenerate zero eigenvalues from the null space of the matrix

\begin{aligned}\begin{bmatrix}0 & 0 & E & 0 \\ 0 & 0 & B & 0 \\ E & -B & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(2.39)

By inspection, two such eigenvectors are

\begin{aligned}\frac{1}{{\sqrt{E^2 + B^2}}}\begin{bmatrix} B \\ E \\ 0 \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.40)

Unfortunately, the first is not generally orthogonal to either of e_{\pm D}, so our similarity transformation matrix is not invertible by Hermitian transposition. Regardless, we are now well on track to putting the matrix equation we wish to solve into a much simpler form. With

\begin{aligned}S =\begin{bmatrix}\frac{1}{{\sqrt{2}E}}\begin{bmatrix} E \\ B \\ D \\ 0\end{bmatrix} &\frac{1}{{\sqrt{2}E}}\begin{bmatrix} E \\ B \\ -D \\ 0\end{bmatrix} &\frac{1}{{\sqrt{E^2 + B^2}}}\begin{bmatrix} B \\ E \\ 0 \\ 0 \end{bmatrix} &\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.41)

and

\begin{aligned}\Sigma = \begin{bmatrix}D & 0 & 0 & 0 \\ 0 & -D & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.42)

observe that our Lorentz force equation can now be written

\begin{aligned}\frac{d{{u}}}{ds} = \frac{e}{m c^2} S \Sigma S^{-1} u.\end{aligned} \hspace{\stretch{1}}(2.43)

This we can rearrange, leaving us with a diagonal system that has a trivial solution

\begin{aligned}\frac{d{{}}}{ds} (S^{-1} u) = \frac{e}{m c^2} \Sigma (S^{-1} u).\end{aligned} \hspace{\stretch{1}}(2.44)
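Here is a quick numerical verification of the decomposition (assuming numpy, with arbitrary field values chosen so that E > B and D is real): building S and \Sigma as in 2.41 and 2.42, the product S \Sigma S^{-1} reproduces the matrix of 2.32, with the common e/m c^2 factor dropped:

```python
import numpy as np

E, B = 3.0, 2.0                               # any E > B, so that D is real (made up values)
D = np.sqrt(E**2 - B**2)

F = np.array([[0, 0, E, 0],
              [0, 0, B, 0],
              [E, -B, 0, 0],
              [0, 0, 0, 0.0]])                # the matrix of 2.32, without the e/mc^2 factor

S = np.column_stack([np.array([E, B, D, 0])/(np.sqrt(2)*E),        # e_D
                     np.array([E, B, -D, 0])/(np.sqrt(2)*E),       # e_{-D}
                     np.array([B, E, 0, 0])/np.sqrt(E**2 + B**2),  # null space vectors
                     np.array([0, 0, 0, 1.0])])
Sigma = np.diag([D, -D, 0, 0])

print(np.allclose(S @ Sigma @ np.linalg.inv(S), F))                # True
```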

Let’s write

\begin{aligned}v = S^{-1} u,\end{aligned} \hspace{\stretch{1}}(2.45)

and introduce a sort of proper distance wave number

\begin{aligned}k = \frac{e \sqrt{E^2 - B^2}}{m c^2}.\end{aligned} \hspace{\stretch{1}}(2.46)

With this the Lorentz force equation is left in the form

\begin{aligned}\frac{d{{v}}}{ds} = \begin{bmatrix}k & 0 & 0 & 0 \\ 0 & -k & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} v.\end{aligned} \hspace{\stretch{1}}(2.47)

Integrating once, our solution is

\begin{aligned}v(s) = \begin{bmatrix}e^{ks} & 0 & 0 & 0 \\ 0 & e^{-ks} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} v(s=0)\end{aligned} \hspace{\stretch{1}}(2.48)

Our proper velocity is thus given by

\begin{aligned}u = \frac{d{{X}}}{ds} = S \begin{bmatrix}e^{ks} & 0 & 0 & 0 \\ 0 & e^{-ks} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} S^{-1} u(s=0).\end{aligned} \hspace{\stretch{1}}(2.49)

We can integrate once more for our trajectory, parametrized by proper distance on the worldline of the particle. That is

\begin{aligned}X(s) - X(0) = S \left( \int_{s'=0}^s ds'\begin{bmatrix}e^{ks'} & 0 & 0 & 0 \\ 0 & e^{-ks'} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \right) S^{-1} u(s=0).\end{aligned} \hspace{\stretch{1}}(2.50)

With u(0) = \gamma_0 (1, \mathbf{v}_0/c), and X(0) = (c t_0, \mathbf{x}_0), plus the defining relations 2.41, and 2.46, our parametric equation for the trajectory is fully specified

\begin{aligned}\begin{bmatrix}c t(s) \\ \mathbf{x}^\text{T}(s)\end{bmatrix}- \begin{bmatrix}c t_0 \\ \mathbf{x}_0^\text{T}\end{bmatrix}= S \begin{bmatrix}\frac{1}{{k}}(e^{ks} -1) & 0 & 0 & 0 \\ 0 & -\frac{1}{{k}}(e^{-ks} -1) & 0 & 0 \\ 0 & 0 & s & 0 \\ 0 & 0 & 0 & s \\ \end{bmatrix} S^{-1} \frac{1}{{\sqrt{1 - (\mathbf{v}_0)^2/c^2}}}\begin{bmatrix}1 \\ \mathbf{v}_0^\text{T}/c\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.51)

Observe that for the case E^2 > B^2, our value k is real, so the solution is entirely composed of linear combinations of the hyperbolic functions \cosh(k s) and \sinh(ks). However, for the E^2 < B^2 case where our eigenvalues are purely imaginary, the constant k is also purely imaginary (and our eigenvectors e_{\pm D} are complex). In that case, we can take the real part of this equation, and will be left with a solution that is formed of linear combinations of \sin(ks) and \cos(ks) terms. The E = B case would have to be handled separately, and this is done in depth in the text, so there is little value repeating it here.

Problem 3.

Statement

In class, we introduced the 4-vector potential A^i and its transformation law under Lorentz transformations. While we have not yet discussed how \mathbf{E} and \mathbf{B} transform, knowing how A^i transforms is enough to solve some concrete problems. Suppose in one (unprimed) frame there is a charge at rest, which creates an electrostatic field: A^0 = \phi = \frac{q}{r}, \mathbf{A} = 0.

\begin{enumerate}
\item Find the values of \mathbf{E} and \mathbf{B} in this frame.
\item Consider now the same field in a (primed) frame moving in the x-direction with velocity v. Using the transformation law of the vector potential, find {A^i}' in the primed frame.
\item Use the relations between electric and magnetic field strengths and vector potential (valid in every frame) to find the electric and magnetic fields in the primed frame (i.e. find the electromagnetic field of a moving charge). Sketch the lines of constant electric and magnetic field and comment on the result.
\end{enumerate}

Solution

1.

In the unprimed frame we have

\begin{aligned}\mathbf{E} &= - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ &= -\boldsymbol{\nabla} \phi \\ &= - \hat{\mathbf{r}} q \partial_r (1/r) \\ &= \hat{\mathbf{r}} \frac{q}{r^2},\end{aligned}

and

\begin{aligned}\mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A} = 0\end{aligned}

2.

The coordinates in the moving frame, assuming the frames are overlapping at t=0, are related to the unprimed coordinates by

\begin{aligned}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}=\begin{bmatrix}\gamma & -\gamma \beta & 0 & 0 \\ -\gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.52)

Our four vector potential also transforms in the same fashion, and we have

\begin{aligned}\begin{bmatrix}\phi' \\ A_x' \\ A_y' \\ A_z' \\ \end{bmatrix}=\begin{bmatrix}\gamma & -\gamma \beta & 0 & 0 \\ -\gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}\phi \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}= \gamma \phi ( 1, -\beta, 0, 0 )\end{aligned} \hspace{\stretch{1}}(3.53)

So in the primed frame we have

\begin{aligned}\phi' &= \gamma \frac{q}{r} \\ A_x' &= -\gamma \beta \frac{q}{r} \\ A_y' &= 0 \\ A_z' &= 0 \end{aligned} \hspace{\stretch{1}}(3.54)

3.

In the primed frame our electric and magnetic fields are

\begin{aligned}\mathbf{E}' &= - \boldsymbol{\nabla}' \phi' - \frac{1}{{c}} \frac{\partial {\mathbf{A}'}}{\partial {t'}} \\ \mathbf{B}' &= \boldsymbol{\nabla}' \times \mathbf{A}'\end{aligned} \hspace{\stretch{1}}(3.58)

We have \phi' and \mathbf{A}' expressed in terms of the unprimed coordinates, so we need to calculate the transformation of the gradient and time partial too. These partials transform as

\begin{aligned}\frac{\partial {}}{\partial {c t'}} &= \frac{\partial {ct}}{\partial {ct'}} \frac{\partial {}}{\partial {ct}} + \frac{\partial {x}}{\partial {ct'}} \frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {x'}} &= \frac{\partial {ct}}{\partial {x'}} \frac{\partial {}}{\partial {ct}} + \frac{\partial {x}}{\partial {x'}} \frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y'}} &= \frac{\partial {}}{\partial {y}} \\ \frac{\partial {}}{\partial {z'}} &= \frac{\partial {}}{\partial {z}}\end{aligned} \hspace{\stretch{1}}(3.60)

Utilizing the inverse transformation

\begin{aligned}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}=\begin{bmatrix}\gamma & \gamma \beta & 0 & 0 \\ \gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.64)

we have

\begin{aligned}\frac{\partial {}}{\partial {c t'}} &= \gamma \frac{\partial {}}{\partial {ct}} + \gamma \beta \frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {x'}} &= \gamma \beta \frac{\partial {}}{\partial {ct}} + \gamma \frac{\partial {}}{\partial {x}} \\ \frac{\partial {}}{\partial {y'}} &= \frac{\partial {}}{\partial {y}} \\ \frac{\partial {}}{\partial {z'}} &= \frac{\partial {}}{\partial {z}}\end{aligned} \hspace{\stretch{1}}(3.65)

Since neither \phi' nor \mathbf{A}' has any dependence on the unprimed time t, we have for the electric field in the primed frame

\begin{aligned}\mathbf{E}' &= -\boldsymbol{\nabla}' \phi' - \frac{1}{{c}} \frac{\partial {\mathbf{A}'}}{\partial {t'}} \\ &= -\left( \gamma \frac{\partial {}}{\partial {x}}, \frac{\partial {}}{\partial {y}}, \frac{\partial {}}{\partial {z}} \right) \phi'- \gamma \beta \frac{\partial {\mathbf{A}'}}{\partial {x}} \\ &= -\left( \gamma \frac{\partial {}}{\partial {x}}, \frac{\partial {}}{\partial {y}}, \frac{\partial {}}{\partial {z}} \right) \gamma \frac{q}{r}- \gamma \beta \frac{\partial {}}{\partial {x}} \left( -\gamma \beta \frac{q}{r}, 0, 0 \right) \\ &= -q \left( \gamma^2 ( 1 - \beta^2 ) \frac{\partial {}}{\partial {x}}, \gamma \frac{\partial {}}{\partial {y}}, \gamma \frac{\partial {}}{\partial {z}} \right) \frac{1}{{r}} \\ &= -q \left( \frac{\partial {}}{\partial {x}}, \gamma \frac{\partial {}}{\partial {y}}, \gamma \frac{\partial {}}{\partial {z}} \right) \frac{1}{{r}}\end{aligned}

Our electric field in the primed frame is thus

\begin{aligned}\mathbf{E}' = \frac{q}{r^3} \left( x, \gamma y, \gamma z \right) \end{aligned} \hspace{\stretch{1}}(3.69)

Now for the magnetic field. We want

\begin{aligned}\mathbf{B}' &= \begin{vmatrix}\hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \\ \partial_{x'} & \partial_{y'} & \partial_{z'} \\ -\gamma \beta q/r & 0 & 0\end{vmatrix} \\ &=\left( 0, \partial_{z'}, -\partial_{y'} \right) \frac{-\gamma \beta q}{r} \\ \end{aligned}

\begin{aligned}\mathbf{B}'=\frac{q \gamma \beta}{r^3} \left( 0, z, -y \right)\end{aligned} \hspace{\stretch{1}}(3.70)
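These transformed fields are easy to double check with sympy (a sketch, assuming sympy is available; ctp, xp, yp, zp stand for the primed coordinates c t', x', y', z'): express r through the inverse boost 3.64, apply the primed derivatives to \phi' and \mathbf{A}', and compare against 3.69 and 3.70:

```python
import sympy as sp

q, b = sp.symbols('q beta', positive=True)                # beta = v/c, 0 < beta < 1
ctp, xp, yp, zp = sp.symbols('ctp xp yp zp', real=True)   # primed coordinates
g = 1/sp.sqrt(1 - b**2)                                   # gamma

x = g*(xp + b*ctp)                                        # inverse boost 3.64
y, z = yp, zp
r = sp.sqrt(x**2 + y**2 + z**2)

phi_p = g*q/r                                             # transformed potentials 3.54
A_p = sp.Matrix([-g*b*q/r, 0, 0])

def grad_p(f):
    return sp.Matrix([sp.diff(f, xp), sp.diff(f, yp), sp.diff(f, zp)])

E_p = -grad_p(phi_p) - sp.Matrix([sp.diff(a, ctp) for a in A_p])   # (1/c) d/dt' = d/d(ct')
B_p = sp.Matrix([sp.diff(A_p[2], yp) - sp.diff(A_p[1], zp),
                 sp.diff(A_p[0], zp) - sp.diff(A_p[2], xp),
                 sp.diff(A_p[1], xp) - sp.diff(A_p[0], yp)])

print(sp.simplify(E_p - q/r**3*sp.Matrix([x, g*y, g*z])))          # zero vector (3.69)
print(sp.simplify(B_p - q*g*b/r**3*sp.Matrix([0, z, -y])))         # zero vector (3.70)
```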

FIXME: sketch and comment.

Notes on grading of my solution.

I lost two marks for not reducing my solution for the trajectory in 2.51 to x(t), y(t) or x(y) form. That’s difficult to do in the form in which I solved this for arbitrary initial conditions (it is easy for u^i = (1, 0, 0, 0) when \mathbf{B} = 0). I’ll be curious to see the Professor’s approach later.

FIXME: I’d expanded out the trajectory in the way that appears to have been desired on paper for the special case above. Re-do this and include it here (at least as a check of my final result since I switched the orientation of the fields when I typed it up). Also include a similar special case expansion for the case where the invariant E^2 - B^2 is negative.


PHY450H1S (relativistic electrodynamics) Problem Set 3.

Posted by peeterjoot on March 2, 2011

[Click here for a PDF of this post with nicer formatting]

Disclaimer.

This problem set is as yet ungraded (although only the second question will be graded).

Problem 1. Fun with \epsilon_{\alpha\beta\gamma}, \epsilon^{ijkl}, F_{ij}, and the duality of Maxwell’s equations in vacuum.

1. Statement. rank 3 spatial antisymmetric tensor identities.

Prove that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}\end{aligned} \hspace{\stretch{1}}(2.1)

and use it to find the familiar relation for

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})\end{aligned} \hspace{\stretch{1}}(2.2)

Also show that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=2 \delta_{\alpha\mu}.\end{aligned} \hspace{\stretch{1}}(2.3)

(Einstein summation implied all throughout this problem).

1. Solution

We can explicitly expand the (implied) sum over indexes \gamma. This is

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\epsilon_{\alpha \beta 1} \epsilon_{\mu \nu 1}+\epsilon_{\alpha \beta 2} \epsilon_{\mu \nu 2}+\epsilon_{\alpha \beta 3} \epsilon_{\mu \nu 3}\end{aligned} \hspace{\stretch{1}}(2.4)

For any \alpha \ne \beta only one term is non-zero. For example with \alpha,\beta = 2,3, we have just a contribution from the \gamma = 1 part of the sum

\begin{aligned}\epsilon_{2 3 1} \epsilon_{\mu \nu 1}.\end{aligned} \hspace{\stretch{1}}(2.5)

The value of this for (\mu,\nu) = (\alpha,\beta) is

\begin{aligned}(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.6)

whereas for (\mu,\nu) = (\beta,\alpha) we have

\begin{aligned}-(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.7)

Our sum has value one when (\alpha, \beta) matches (\mu, \nu), and value minus one when (\mu, \nu) is the reversed pair (\beta, \alpha). We can summarize this by saying that when \alpha \ne \beta we have

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}.}\end{aligned} \hspace{\stretch{1}}(2.8)

However, observe that when \alpha = \beta the RHS is

\begin{aligned}\delta_{\alpha\mu} \delta_{\alpha\nu}-\delta_{\alpha\nu} \delta_{\alpha\mu} = 0,\end{aligned} \hspace{\stretch{1}}(2.9)

as desired, so this form works in general without any \alpha \ne \beta qualifier, completing this part of the problem.

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})&=(\epsilon_{\alpha \beta \gamma} \mathbf{e}^\alpha A^\beta B^\gamma ) \cdot(\epsilon_{\mu \nu \sigma} \mathbf{e}^\mu C^\nu D^\sigma ) \\ &=\epsilon_{\alpha \beta \gamma} A^\beta B^\gamma \epsilon_{\alpha \nu \sigma} C^\nu D^\sigma \\ &=(\delta_{\beta \nu} \delta_{\gamma\sigma}-\delta_{\beta \sigma} \delta_{\gamma\nu} )A^\beta B^\gamma C^\nu D^\sigma \\ &=A^\nu B^\sigma C^\nu D^\sigma-A^\sigma B^\nu C^\nu D^\sigma.\end{aligned}

This gives us

\begin{aligned}\boxed{(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})=(\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D})-(\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C}).}\end{aligned} \hspace{\stretch{1}}(2.10)

We have one more identity to deal with.

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}\end{aligned} \hspace{\stretch{1}}(2.11)

We can expand out this (implied) sum slow and dumb as well

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}&=\epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+\epsilon_{\alpha 2 1} \epsilon_{\mu 2 1} \\ &+\epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+\epsilon_{\alpha 3 1} \epsilon_{\mu 3 1} \\ &+\epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}+\epsilon_{\alpha 3 2} \epsilon_{\mu 3 2} \\ &=2 \epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+ 2 \epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+ 2 \epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}\end{aligned}

Now, observe that for any \alpha \in (1,2,3) only one term of this sum is picked up. For example, with no loss of generality, pick \alpha = 1. We are left with only

\begin{aligned}2 \epsilon_{1 2 3} \epsilon_{\mu 2 3}\end{aligned} \hspace{\stretch{1}}(2.12)

This has the value

\begin{aligned}2 (\epsilon_{1 2 3})^2 = 2\end{aligned} \hspace{\stretch{1}}(2.13)

when \mu = \alpha and is zero otherwise. We can therefore summarize the evaluation of this sum as

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=  2\delta_{\alpha\mu},}\end{aligned} \hspace{\stretch{1}}(2.14)

completing this problem.
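Both of these contractions, and the cross product identity 2.10 that follows from the first, are easy to spot check numerically (assuming numpy) by building the rank 3 antisymmetric symbol explicitly and letting einsum do the sums:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
delta = np.eye(3)

# epsilon_{abg} epsilon_{mng} = delta_{am} delta_{bn} - delta_{an} delta_{bm}
lhs = np.einsum('abg,mng->abmn', eps, eps)
rhs = np.einsum('am,bn->abmn', delta, delta) - np.einsum('an,bm->abmn', delta, delta)
print(np.allclose(lhs, rhs))                                       # True

# epsilon_{abg} epsilon_{mbg} = 2 delta_{am}
print(np.allclose(np.einsum('abg,mbg->am', eps, eps), 2*delta))    # True

# the (A x B) . (C x D) identity, with random vectors
rng = np.random.default_rng(0)
A, B, C, D = rng.normal(size=(4, 3))
print(np.isclose(np.dot(np.cross(A, B), np.cross(C, D)),
                 A.dot(C)*B.dot(D) - A.dot(D)*B.dot(C)))           # True
```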

2. Statement. Determinant of three by three matrix.

Prove that for any 3 \times 3 matrix {\left\lVert{A_{\alpha\beta}}\right\rVert}: \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = \epsilon_{\alpha \beta \gamma} \text{Det} A and that \epsilon_{\alpha\beta\gamma} \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = 6 \text{Det} A.

2. Solution

In class Simon showed us how the first identity can be arrived at using the triple product \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \text{Det}(\mathbf{a} \mathbf{b} \mathbf{c}). It occurred to me later that I’d seen the identity to be proven in the context of Geometric Algebra, but hadn’t recognized it in this tensor form. Basically, a wedge product can be expanded in sums of determinants, and when the number of vectors wedged equals the dimension of the space, we have a pseudoscalar times the determinant of the components.

For example, in \mathbb{R}^{2}, let’s take the wedge product of a pair of vectors. As preparation for the relativistic \mathbb{R}^{4} case we won’t require an orthonormal basis, but will express a vector in terms of a reciprocal frame and the associated components

\begin{aligned}a = a^i e_i = a_j e^j\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.16)

When we get to the relativistic case, we can pick (but don’t have to) the standard basis

\begin{aligned}e_0 &= (1, 0, 0, 0) \\ e_1 &= (0, 1, 0, 0) \\ e_2 &= (0, 0, 1, 0) \\ e_3 &= (0, 0, 0, 1),\end{aligned} \hspace{\stretch{1}}(2.17)

for which our reciprocal frame is implicitly defined by the metric

\begin{aligned}e^0 &= (1, 0, 0, 0) \\ e^1 &= (0, -1, 0, 0) \\ e^2 &= (0, 0, -1, 0) \\ e^3 &= (0, 0, 0, -1).\end{aligned} \hspace{\stretch{1}}(2.21)

Anyways. Back to the problem. Let’s examine the \mathbb{R}^{2} case. Our wedge product in coordinates is

\begin{aligned}a \wedge b=a^i b^j (e_i \wedge e_j)\end{aligned} \hspace{\stretch{1}}(2.25)

Since there are only two basis vectors we have

\begin{aligned}a \wedge b=(a^1 b^2 - a^2 b^1) e_1 \wedge e_2 = \text{Det} {\left\lVert{a^i b^j}\right\rVert} (e_1 \wedge e_2).\end{aligned} \hspace{\stretch{1}}(2.26)

Our wedge product is the determinant of the vector coordinates, times the \mathbb{R}^{2} pseudoscalar e_1 \wedge e_2.

This doesn’t look quite like the \mathbb{R}^{3} relation that we want to prove, which had an antisymmetric tensor factor for the determinant. Observe that we get the determinant by picking off the e_1 \wedge e_2 component of the bivector result (the only component in this case), and we can do that by dotting with e^2 \wedge e^1. To get an antisymmetric tensor times the determinant, we have only to dot with a different pseudoscalar (one that differs by a possible sign due to permutation of the indexes). That is

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)&=a^i b^j (e^t \wedge e^s) \cdot (e_i \wedge e_j) \\ &=a^i b^j\left( {\delta^{s}}_i {\delta^{t}}_j-{\delta^{t}}_i {\delta^{s}}_j  \right) \\ &=a^i b^j{\delta^{[t}}_j {\delta^{s]}}_i \\ &=a^i b^j{\delta^{t}}_{[j} {\delta^{s}}_{i]} \\ &=a^{[i} b^{j]}{\delta^{t}}_{j} {\delta^{s}}_{i} \\ &=a^{[s} b^{t]}\end{aligned}

Now, if we write a^i = A^{1 i} and b^j = A^{2 j} we have

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)=A^{1 s} A^{2 t} -A^{1 t} A^{2 s}\end{aligned} \hspace{\stretch{1}}(2.27)

We can write this in two different ways. One of which is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s} =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned} \hspace{\stretch{1}}(2.28)

and the other of which is by introducing free indexes for 1 and 2, and summing antisymmetrically over these. That is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s}=A^{a s} A^{b t} \epsilon_{a b}\end{aligned} \hspace{\stretch{1}}(2.29)

So, we have

\begin{aligned}\boxed{A^{a s} A^{b t} \epsilon_{a b} =A^{1 i} A^{2 j} {\delta^{[t}}_j {\delta^{s]}}_i =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert},}\end{aligned} \hspace{\stretch{1}}(2.30)

This result holds regardless of the metric for the space, and does not require an orthonormal basis. When the metric is Euclidean and we have an orthonormal basis, the distinction between upper and lower indexes can be dropped.

The \mathbb{R}^{3} and \mathbb{R}^{4} cases follow in exactly the same way, we just need more vectors in the wedge products.

For the \mathbb{R}^{3} case we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)&=a^i b^j c^k(e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k) \\ &=a^i b^j c^k{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u]}\end{aligned}

Again, with a^i = A^{1 i} and b^j = A^{2 j}, and c^k = A^{3 k} we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i\end{aligned} \hspace{\stretch{1}}(2.31)

and we can choose to write this in either form, resulting in the identity

\begin{aligned}\boxed{\epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c} A^{a s} A^{b t} A^{c u}.}\end{aligned} \hspace{\stretch{1}}(2.32)

The \mathbb{R}^{4} case follows exactly the same way, and we have

\begin{aligned}(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c \wedge d)&=a^i b^j c^k d^l(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k \wedge e_l) \\ &=a^i b^j c^k d^l{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u} d^{v]}.\end{aligned}

This time with a^i = A^{0 i} and b^j = A^{1 j}, and c^k = A^{2 k}, and d^l = A^{3 l} we have

\begin{aligned}\boxed{\epsilon^{s t u v} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{0 i} A^{1 j} A^{2 k} A^{3 l}{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}.}\end{aligned} \hspace{\stretch{1}}(2.33)

This one is almost the identity to be established later in problem 1.4. We have only to raise and lower some indexes to get that one. Note that in the Minkowski standard basis above, because s, t, u, v must be a permutation of 0,1,2,3 for a non-zero result, we must have

\begin{aligned}\epsilon^{s t u v} = (-1)^3 (+1) \epsilon_{s t u v}.\end{aligned} \hspace{\stretch{1}}(2.34)

So raising and lowering the identity above gives us

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{A_{ij}}\right\rVert}=\epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}.\end{aligned} \hspace{\stretch{1}}(2.35)

No sign changes were required for the indexes a, b, c, d, since they are paired.

Until we did the raising and lowering operations here, there was no specific metric required, so our first result 2.33 is the more general one.

There’s one more part to this problem, doing the antisymmetric sums over the indexes s, t, \cdots. For the \mathbb{R}^{2} case we have

\begin{aligned}\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t}&=\epsilon_{s t} \epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2} \epsilon^{1 2} +\epsilon_{2 1} \epsilon^{2 1} \right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( 1^2 + (-1)^2\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned}

We conclude that

\begin{aligned}\boxed{\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t} = 2! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.36)

For the \mathbb{R}^{3} case we have the same operation

\begin{aligned}\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}&=\epsilon_{s t u} \epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2 3} \epsilon^{1 2 3} +\epsilon_{1 3 2} \epsilon^{1 3 2} + \cdots\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=(\pm 1)^2 (3!)\text{Det} {\left\lVert{A^{ij}}\right\rVert}.\end{aligned}

So we conclude

\begin{aligned}\boxed{\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}= 3! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.37)

It’s clear what the pattern is, and if we evaluate the sum of the antisymmetric tensor squares in \mathbb{R}^{4} we have

\begin{aligned}\epsilon_{s t u v} \epsilon_{s t u v}&=\epsilon_{0 1 2 3} \epsilon_{0 1 2 3}+\epsilon_{0 1 3 2} \epsilon_{0 1 3 2}+\epsilon_{0 2 1 3} \epsilon_{0 2 1 3}+ \cdots \\ &= (\pm 1)^2 (4!),\end{aligned}

So, for our SR case we have

\begin{aligned}\boxed{\epsilon_{s t u v} \epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}= 4! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.38)

This was part of question 1.4, albeit in lower index form. Here since all indexes are matched, we have the same result without major change

\begin{aligned}\boxed{\epsilon^{s t u v} \epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}= 4! \text{Det} {\left\lVert{A_{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.39)

The main difference is that we are now taking the determinant of a lower index tensor.
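All of these determinant identities are easy to spot check numerically for random matrices (assuming numpy). The \epsilon factors here are the plain permutation symbols, so this exercises the forms 2.32, 2.37 and 2.38, where the index positions do not matter:

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    eps = np.zeros((n,)*n)
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        eps[p] = (-1)**inversions
    return eps

rng = np.random.default_rng(1)
eps3, eps4 = levi_civita(3), levi_civita(4)

A = rng.normal(size=(3, 3))
# epsilon_{mnl} A_{am} A_{bn} A_{gl} = epsilon_{abg} Det A
print(np.allclose(np.einsum('mnl,am,bn,gl->abg', eps3, A, A, A),
                  eps3*np.linalg.det(A)))                              # True
# epsilon_{abg} epsilon_{mnl} A_{am} A_{bn} A_{gl} = 3! Det A
print(np.isclose(np.einsum('abg,mnl,am,bn,gl->', eps3, eps3, A, A, A),
                 6*np.linalg.det(A)))                                  # True

M = rng.normal(size=(4, 4))
# epsilon_{stuv} epsilon_{abcd} M_{as} M_{bt} M_{cu} M_{dv} = 4! Det M
print(np.isclose(np.einsum('stuv,abcd,as,bt,cu,dv->', eps4, eps4, M, M, M, M),
                 24*np.linalg.det(M)))                                 # True
```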

3. Statement. Rotational invariance of 3D antisymmetric tensor

Use the previous results to show that \epsilon_{\mu\nu\lambda} is invariant under rotations.

3. Solution

We apply transformations to coordinates (and thus indexes) of the form

\begin{aligned}x_\mu \rightarrow O_{\mu\nu} x_\nu\end{aligned} \hspace{\stretch{1}}(2.40)

With our tensor transforming as its indexes, we have

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\alpha\beta\sigma} O_{\mu\alpha} O_{\nu\beta} O_{\lambda\sigma}.\end{aligned} \hspace{\stretch{1}}(2.41)

We’ve got 2.32, which, after dropping the distinction between upper and lower indexes (since we are in Euclidean space), becomes

\begin{aligned}\epsilon_{\mu \nu \lambda} \text{Det} {\left\lVert{A_{ij}}\right\rVert} = \epsilon_{\alpha \beta \sigma} A_{\alpha \mu} A_{\beta \nu} A_{\sigma \lambda}.\end{aligned} \hspace{\stretch{1}}(2.42)

Let A_{i j} = O_{j i}, which gives us

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\mu\nu\lambda} \text{Det} {\left\lVert{O_{ji}}\right\rVert} = \epsilon_{\mu\nu\lambda} \text{Det} O^\text{T},\end{aligned} \hspace{\stretch{1}}(2.43)

but since \text{Det} O^\text{T} = \text{Det} O = 1 for a rotation, we have shown that \epsilon_{\mu\nu\lambda} is invariant under rotation.
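A numerical spot check of this invariance (assuming numpy), with a rotation about the z axis as the sample orthogonal matrix:

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
O = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])                  # a rotation, Det O = 1

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# epsilon_{mu nu lambda} -> epsilon_{alpha beta sigma} O_{mu alpha} O_{nu beta} O_{lambda sigma}
rotated = np.einsum('abs,ma,nb,ls->mnl', eps, O, O, O)
print(np.allclose(rotated, eps))            # True
```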

4. Statement. Rotational invariance of 4D antisymmetric tensor

Use the previous results to show that \epsilon_{i j k l} is invariant under Lorentz transformations.

4. Solution

This follows the same way. We assume a transformation of coordinates of the following form

\begin{aligned}(x')^i &= {O^i}_j x^j \\ (x')_i &= {O_i}^j x_j,\end{aligned} \hspace{\stretch{1}}(2.44)

where the determinant of {O^i}_j = 1 (sanity check of sign: {O^i}_j = {\delta^i}_j).

Our antisymmetric tensor transforms as its coordinates individually

\begin{aligned}\epsilon_{i j k l} &\rightarrow \epsilon_{a b c d} {O_i}^a{O_j}^b{O_k}^c{O_l}^d \\ &= \epsilon^{a b c d} O_{i a}O_{j b}O_{k c}O_{l d} \\ \end{aligned}

Let P_{ij} = O_{ji}. Restating the identity 2.35 in terms of P, we have

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{P_{ij}}\right\rVert}=\epsilon^{a b c d} P_{a s} P_{b t} P_{c u} P_{d v}.\end{aligned} \hspace{\stretch{1}}(2.46)

We have

\begin{aligned}\epsilon_{i j k l} &\rightarrow \epsilon^{a b c d} P_{a i}P_{b j}P_{c k}P_{d l} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{P_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{O_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{g_{im} {O^m}_j }\right\rVert} \\ &=-\epsilon_{i j k l} (-1)(1) \\ &=\epsilon_{i j k l}\end{aligned}

Since \epsilon_{i j k l} = -\epsilon^{i j k l} both are therefore invariant under Lorentz transformation.

5. Statement. Sum of contracting symmetric and antisymmetric rank 2 tensors

Show that A^{ij} B_{ij} = 0 if A is symmetric and B is antisymmetric.

5. Solution

We swap indexes in B, switch dummy indexes, then swap indexes in A

\begin{aligned}A^{i j} B_{i j} &= -A^{i j} B_{j i} \\ &= -A^{j i} B_{i j} \\ &= -A^{i j} B_{i j} \\ \end{aligned}

Our result is the negative of itself, so must be zero.
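The same thing seen numerically (assuming numpy), with random symmetric and antisymmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = rng.normal(size=(2, 4, 4))
A = M + M.T                       # symmetric
B = N - N.T                       # antisymmetric

print(np.isclose(np.einsum('ij,ij->', A, B), 0.0))   # True
```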

6. Statement. Characteristic equation for the electromagnetic strength tensor

Show that P(\lambda) = \text{Det} {\left\lVert{F_{i j} - \lambda g_{i j}}\right\rVert} is invariant under Lorentz transformations. Consider the polynomial P(\lambda), also called the characteristic polynomial of the matrix {\left\lVert{F_{i j}}\right\rVert}. Find the coefficients of the expansion of P(\lambda) in powers of \lambda in terms of the components of {\left\lVert{F_{i j}}\right\rVert}. Use the result to argue that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariant.

6. Solution

The invariance of the determinant

Let’s consider how any lower index rank 2 tensor transforms. Given a transformation of coordinates

\begin{aligned}(x^i)' &= {O^i}_j x^j \\ (x_i)' &= {O_i}^j x_j ,\end{aligned} \hspace{\stretch{1}}(2.47)

where \text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = 1, and {O_i}^j = {O^m}_n g_{i m} g^{j n}. Let’s reflect briefly on why this determinant is unit valued. We have

\begin{aligned}(x^i)' (x_i)'= {O_i}^a x_a {O^i}_b x^b = x^b x_b,\end{aligned} \hspace{\stretch{1}}(2.49)

which implies that the transformation product is

\begin{aligned}{O_i}^a {O^i}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.50)

the identity matrix. The identity matrix has unit determinant, so we must have

\begin{aligned}1 = (\text{Det} \hat{G})^2 (\text{Det} {\left\lVert{ {O^i}_j }\right\rVert})^2.\end{aligned} \hspace{\stretch{1}}(2.51)

Since \text{Det} \hat{G} = -1 we have

\begin{aligned}\text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = \pm 1,\end{aligned} \hspace{\stretch{1}}(2.52)

which is all that we can say about the determinant of this class of transformations by considering just invariance. If we restrict the transformations of coordinates to those of the same determinant sign as the identity matrix, we rule out reflections in time or space. This seems to be the essence of the SO(1,3) labeling.

Why dwell on this? Well, I wanted to be clear on the conventions I’d chosen, since parts of the course notes used \hat{O} = {\left\lVert{O^{i j}}\right\rVert}, and X' = \hat{O} X, and gave that matrix unit determinant. That O^{i j} looks like it is equivalent to my {O^i}_j, except that the one in the course notes is loose when it comes to lower and upper indexes since it gives (x')^i = O^{i j} x^j.

I’ll write

\begin{aligned}\hat{O} = {\left\lVert{{O^i}_j}\right\rVert},\end{aligned} \hspace{\stretch{1}}(2.53)

and require this (not {\left\lVert{O^{i j}}\right\rVert}) to be the matrix with unit determinant. Having cleared the index upper and lower confusion I had trying to reconcile the class notes with the rules for index manipulation, let’s now consider the Lorentz transformation of a lower index rank 2 tensor (not necessarily antisymmetric or symmetric)

We have, transforming in the same fashion as a lower index coordinate four vector (but twice, once for each index)

\begin{aligned}A_{i j} \rightarrow A_{k m} {O_i}^k{O_j}^m.\end{aligned} \hspace{\stretch{1}}(2.54)

The determinant of the transformation tensor {O_i}^j is

\begin{aligned}\text{Det} {\left\lVert{ {O_i}^j }\right\rVert} = \text{Det} {\left\lVert{ g^{i m} {O^m}_n g^{n j} }\right\rVert} = (\text{Det} \hat{G}) (1) (\text{Det} \hat{G} ) = (-1)^2 (1) = 1.\end{aligned} \hspace{\stretch{1}}(2.55)

We see that the determinant of a lower index rank 2 tensor is invariant under Lorentz transformation. This would include our characteristic polynomial P(\lambda).

Expanding the determinant.

Utilizing 2.39 we can now calculate the characteristic polynomial. This is

\begin{aligned}\text{Det} {\left\lVert{F_{ij} - \lambda g_{ij} }\right\rVert}&= \frac{1}{{4!}}\epsilon^{s t u v} \epsilon^{a b c d} (F_{ a s } - \lambda g_{a s}) (F_{ b t } - \lambda g_{b t}) (F_{ c u } - \lambda g_{c u}) (F_{ d v } - \lambda g_{d v}) \\ &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} ({F^a}_s - \lambda {g^a}_s) ({F^b}_t - \lambda {g^b}_t) ({F^c}_u - \lambda {g^c}_u) ({F^d}_v - \lambda {g^d}_v) \\ \end{aligned}

However, {g^a}_b = g_{b c} g^{a c}, or {\left\lVert{{g^a}_b}\right\rVert} = \hat{G}^2 = I. This means we have

\begin{aligned}{g^a}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.56)

and our determinant is reduced to

\begin{aligned}\begin{aligned}P(\lambda) &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({F^a}_s {F^b}_t - \lambda( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) + \lambda^2 {\delta^a}_s {\delta^b}_t \Bigr) \\ &\times \qquad \qquad \Bigl({F^c}_u {F^d}_v - \lambda( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + \lambda^2 {\delta^c}_u {\delta^d}_v \Bigr) \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.57)

If we expand this out, the coefficients of the powers of \lambda are

\begin{aligned}\lambda^0 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {F^a}_s {F^b}_t {F^c}_u {F^d}_v \\ \lambda^1 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ({\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) {F^a}_s {F^b}_t - ({\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {F^c}_u {F^d}_v \Bigr) \\ \lambda^2 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ \lambda^3 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {\delta^c}_u {\delta^d}_v - {\delta^a}_s {\delta^b}_t  ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) \Bigr) \\ \lambda^4 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^a}_s {\delta^b}_t {\delta^c}_u {\delta^d}_v \Bigr) \\ \end{aligned}

By 2.39 the \lambda^0 coefficient is just \text{Det} {\left\lVert{F_{i j}}\right\rVert}.

The \lambda^3 terms can be seen to be zero. For example, using the contraction \epsilon^{s t u v} \epsilon_{s b u v} = -3! {\delta^{t}}_b, the first one is

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^a}_s {F^b}_t {\delta^c}_u {\delta^d}_v &=-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{s b u v} {F^b}_t \\ &=\frac{1}{{4}} \delta^{t}_b {F^b}_t \\ &=\frac{1}{{4}} {F^b}_b \\ &=\frac{1}{{4}} F^{bu} g_{ub} \\ &= 0,\end{aligned}

where the final equality to zero comes from summing a symmetric and antisymmetric product.

Similarly the \lambda^1 coefficient can be shown to be zero. Using \epsilon^{u s t v} \epsilon_{u a b d} = -\delta^{[s}_a\delta^{t}_b\delta^{v]}_d, the first term as a sample is

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^c}_u {F^d}_v {F^a}_s {F^b}_t &=-\frac{1}{{24}} \epsilon^{u s t v} \epsilon_{u a b d} {F^d}_v {F^a}_s {F^b}_t  \\ &=\frac{1}{{24}} \delta^{[s}_a\delta^{t}_b\delta^{v]}_d{F^d}_v {F^a}_s {F^b}_t  \\ &=\frac{1}{{24}} {F^a}_{[s}{F^b}_{t}{F^d}_{v]} \\ \end{aligned}

Disregarding the 1/24 factor, let’s just expand this antisymmetric sum

\begin{aligned}{F^a}_{[a}{F^b}_{b}{F^d}_{d]}&={F^a}_{a}{F^b}_{b}{F^d}_{d}+{F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a}-{F^a}_{a}{F^b}_{d}{F^d}_{b}-{F^a}_{d}{F^b}_{b}{F^d}_{a}-{F^a}_{b}{F^b}_{a}{F^d}_{d} \\ &={F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a} \\ \end{aligned}

Of the two terms above that were retained, they are the only ones without a zero {F^i}_i factor. Consider the first part of this remaining part of the sum. Employing the metric tensor, to raise indexes so that the antisymmetry of F^{ij} can be utilized, and then finally relabeling all the dummy indexes we have

\begin{aligned}{F^a}_{d}{F^b}_{a}{F^d}_{b}&=F^{a u}F^{b v}F^{d w}g_{d u}g_{a v}g_{b w} \\ &=(-1)^3F^{u a}F^{v b}F^{w d}g_{d u}g_{a v}g_{b w} \\ &=-(F^{u a}g_{a v})(F^{v b}g_{b w} )(F^{w d}g_{d u})\\ &=-{F^u}_v{F^v}_w{F^w}_u\\ &=-{F^a}_b{F^b}_d{F^d}_a\\ \end{aligned}

This is just the negative of the second term in the sum, leaving us with zero.

Finally, we have for the \lambda^2 coefficient (\times 24)

\begin{aligned}&\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +{\delta^a}_s {F^b}_t {\delta^c}_u {F^d}_v +{\delta^b}_t {F^a}_s {\delta^d}_v {F^c}_u  \\ &\qquad +{\delta^b}_t {F^a}_s {\delta^c}_u {F^d}_v +{\delta^a}_s {F^b}_t {\delta^d}_v {F^c}_u + {\delta^a}_s {\delta^b}_t  {F^c}_u {F^d}_v \Bigr) \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{s t u v} \epsilon_{s b u d}  {F^b}_t  {F^d}_v +\epsilon^{s t u v} \epsilon_{a t c v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s t u v} \epsilon_{a t u d}  {F^a}_s  {F^d}_v +\epsilon^{s t u v} \epsilon_{s b c v}  {F^b}_t  {F^c}_u + \epsilon^{s t u v} \epsilon_{s t c d}    {F^c}_u {F^d}_v \\ &=\epsilon^{s t u v} \epsilon_{a b u v}   {F^a}_s {F^b}_t +\epsilon^{t v s u } \epsilon_{b d s u}  {F^b}_t  {F^d}_v +\epsilon^{s u t v} \epsilon_{a c t v}  {F^a}_s  {F^c}_u  \\ &\qquad +\epsilon^{s v t u} \epsilon_{a d t u}  {F^a}_s  {F^d}_v +\epsilon^{t u s v} \epsilon_{b c s v}  {F^b}_t  {F^c}_u + \epsilon^{u v s t} \epsilon_{c d s t}    {F^c}_u {F^d}_v \\ &=6\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t  \\ &=6 (2){\delta^{[s}}_a{\delta^{t]}}_b{F^a}_s {F^b}_t  \\ &=12{F^a}_{[a} {F^b}_{b]}  \\ &=12( {F^a}_{a} {F^b}_{b} - {F^a}_{b} {F^b}_{a} ) \\ &=-12 {F^a}_{b} {F^b}_{a} \\ &=-12 F^{a b} F_{b a} \\ &=12 F^{a b} F_{a b}\end{aligned}

Therefore, our characteristic polynomial is

\begin{aligned}\boxed{P(\lambda) = \text{Det} {\left\lVert{F_{i j}}\right\rVert} + \frac{\lambda^2}{2} F^{a b} F_{a b} + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.58)

Observe that in matrix form our strength tensors are

\begin{aligned}{\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.59)

From these we can compute F^{a b} F_{a b} easily by inspection

\begin{aligned}F^{a b} F_{a b} = 2 (\mathbf{B}^2 - \mathbf{E}^2).\end{aligned} \hspace{\stretch{1}}(2.61)

Computing the determinant is not so easy. The dumb and simple way of expanding by cofactors takes two pages, and yields eventually

\begin{aligned}\text{Det} {\left\lVert{ F^{i j} }\right\rVert} = (\mathbf{E} \cdot \mathbf{B})^2.\end{aligned} \hspace{\stretch{1}}(2.62)
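
As an aside, the two page cofactor expansion behind 2.62 can be checked in a few lines with a computer algebra system. This is just my own verification sketch using sympy (not part of the problem statement), building the matrix of 2.59 directly

# Verification sketch: Det ||F^{ij}|| = (E.B)^2, using the matrix of 2.59.
from sympy import symbols, Matrix, simplify

Ex, Ey, Ez, Bx, By, Bz = symbols('E_x E_y E_z B_x B_y B_z')

F_upper = Matrix([[0, -Ex, -Ey, -Ez],
                  [Ex,  0, -Bz,  By],
                  [Ey,  Bz,  0, -Bx],
                  [Ez, -By,  Bx,  0]])          # ||F^{ij}||

EdotB = Ex * Bx + Ey * By + Ez * Bz
print(simplify(F_upper.det() - EdotB**2))       # prints 0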

That supplies us with a relation for the characteristic polynomial in \mathbf{E} and \mathbf{B}

\begin{aligned}\boxed{P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.63)

Observe that we found this for the special case where \mathbf{E} and \mathbf{B} were perpendicular in homework 2. Observe that when we have that perpendicularity, we can solve for the eigenvalues by inspection

\begin{aligned}\lambda \in \{ 0, 0, \pm \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 } \},\end{aligned} \hspace{\stretch{1}}(2.64)

and were able to diagonalize the matrix {F^{i}}_j to solve the Lorentz force equation in parametric form. When {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert} we had real eigenvalues, and an orthogonal diagonalization when \mathbf{B} = 0. For {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert} we had two purely imaginary eigenvalues, and when \mathbf{E} = 0 this was a Hermitian diagonalization. For the general case, when neither \mathbf{E} nor \mathbf{B} was zero, things did not have the same nice closed form solution.

In general our eigenvalues are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(2.65)

For the purposes of this problem we really only wish to show that \mathbf{E} \cdot \mathbf{B} and \mathbf{E}^2 - \mathbf{B}^2 are Lorentz invariants. When \lambda = 0 we have P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2, a Lorentz invariant. This must mean that \mathbf{E} \cdot \mathbf{B} is itself a Lorentz invariant. Since that is invariant, and we require P(\lambda) to be invariant for any other possible values of \lambda, the difference \mathbf{E}^2 - \mathbf{B}^2 must also be Lorentz invariant.

7. Statement. Show that the pseudoscalar invariant has only boundary effects.

Use integration by parts to show that \int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l } only depends on the values of A^i(x) at the “boundary” of spacetime (e.g. the “surface” depicted on page 105 of the notes) and hence does not affect the equations of motion for the electromagnetic field.

7. Solution

This proceeds in a fairly straightforward fashion

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&=\int d^4 x \epsilon^{i j k l} (\partial_i A_j - \partial_j A_i) F_{ k l } \\ &=\int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } -\epsilon^{j i k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} \left( \frac{\partial {}}{\partial {x^i}}(A_j F_{ k l })-A_j \frac{\partial { F_{ k l } }}{\partial {x^i}}\right)\\ \end{aligned}

Now, observe that by the Bianchi identity, this second term is zero

\begin{aligned}\epsilon^{i j k l} \frac{\partial { F_{ k l } }}{\partial {x^i}}=-\epsilon^{j i k l} \partial_i F_{ k l } = 0\end{aligned} \hspace{\stretch{1}}(2.66)

Now we have a set of perfect differentials, and can integrate

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&= 2 \int d^4 x \epsilon^{i j k l} \frac{\partial {}}{\partial {x^i}}(A_j F_{ k l })\\ &= 2 \int dx^j dx^k dx^l\epsilon^{i j k l} {\left.{{(A_j F_{ k l })}}\right\vert}_{{\Delta x^i}}\\ \end{aligned}

We are left with only contributions to the integral from the boundary terms on the spacetime hypervolume, the three-volumes bounding the four-volume integration in the original integral.
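
As an aside, the perfect differential manipulation above can also be verified symbolically. Here is a small sympy sketch of my own (not part of the problem) that expands both sides of \epsilon^{i j k l} F_{ i j } F_{ k l } = 2 \epsilon^{i j k l} \partial_i (A_j F_{ k l }) for arbitrary potentials and confirms they agree

# Check eps^{ijkl} F_{ij} F_{kl} = 2 eps^{ijkl} d_i (A_j F_{kl}) identically.
from sympy import symbols, Function, diff, expand, simplify, LeviCivita

x = symbols('x0 x1 x2 x3')
A = [Function(f'A{i}')(*x) for i in range(4)]        # arbitrary potentials A_i

def F(i, j):
    return diff(A[j], x[i]) - diff(A[i], x[j])        # F_{ij} = d_i A_j - d_j A_i

lhs = sum(LeviCivita(i, j, k, l) * F(i, j) * F(k, l)
          for i in range(4) for j in range(4)
          for k in range(4) for l in range(4))
rhs = sum(2 * LeviCivita(i, j, k, l) * diff(A[j] * F(k, l), x[i])
          for i in range(4) for j in range(4)
          for k in range(4) for l in range(4))

print(simplify(expand(lhs - rhs)))                    # prints 0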

8. Statement. Electromagnetic duality transformations.

Show that the Maxwell equations in vacuum are invariant under the transformation: F_{i j} \rightarrow \tilde{F}_{i j}, where \tilde{F}_{i j} = \frac{1}{{2}} \epsilon_{i j k l} F^{k l} is the dual electromagnetic stress tensor. Replacing F with \tilde{F} is known as “electric-magnetic duality”. Explain this name by considering the transformation in terms of \mathbf{E} and \mathbf{B}. Are the Maxwell equations with sources invariant under electric-magnetic duality transformations?

8. Solution

Let’s first consider the explanation of the name. First recall what the expansions are of F_{i j} and F^{i j} in terms of \mathbf{E} and \mathbf{E}. These are

\begin{aligned}F_{0 \alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} - \frac{\partial {\phi}}{\partial {x^\alpha}} \\ &= E_\alpha\end{aligned}

with F^{0 \alpha} = -E^\alpha, and E^\alpha = E_\alpha.

The magnetic field components are

\begin{aligned}F_{\beta \alpha} &= \partial_\beta A_\alpha - \partial_\alpha A_\beta \\ &= -\partial_\beta A^\alpha + \partial_\alpha A^\beta \\ &= \epsilon_{\alpha \beta \sigma} B^\sigma\end{aligned}

with F^{\beta \alpha} = \epsilon^{\alpha \beta \sigma} B_\sigma and B_\sigma = B^\sigma.

Now let’s expand the dual tensors. These are

\begin{aligned}\tilde{F}_{0 \alpha} &=\frac{1}{{2}} \epsilon_{0 \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} F^{\beta \sigma} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\sigma \beta \mu} B_\mu \\ &=-\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\mu \beta \sigma} B_\mu \\ &=-\frac{1}{{2}} (2!) {\delta_\alpha}^\mu B_\mu \\ &=- B_\alpha \\ \end{aligned}

and

\begin{aligned}\tilde{F}_{\beta \alpha} &=\frac{1}{{2}} \epsilon_{\beta \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \left(\epsilon_{\beta \alpha 0 \sigma} F^{0 \sigma} +\epsilon_{\beta \alpha \sigma 0} F^{\sigma 0} \right) \\ &=\epsilon_{0 \beta \alpha \sigma} (-E^\sigma) \\ &=\epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned}

Summarizing we have

\begin{aligned}F_{0 \alpha} &= E^\alpha \\ F^{0 \alpha} &= -E^\alpha \\ F^{\beta \alpha} &= F_{\beta \alpha} = \epsilon_{\alpha \beta \sigma} B^\sigma \\ \tilde{F}_{0 \alpha} &= - B_\alpha \\ \tilde{F}^{0 \alpha} &= B_\alpha \\ \tilde{F}_{\beta \alpha} &= \tilde{F}^{\beta \alpha} = \epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned} \hspace{\stretch{1}}(2.67)

Is there a sign error in the \tilde{F}_{0 \alpha} = - B_\alpha result? Other than that we have the same sort of structure for the tensor with E and B switched around.

Let’s write these in matrix form, to compare

\begin{aligned}{\left\lVert{ \tilde{F}_{i j} }\right\rVert} &= \begin{bmatrix}0 & -B_x & -B_y & -B_z \\ B_x & 0 & -E_z & E_y \\ B_y & E_z & 0 & E_x \\ B_z & -E_y & -E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ \tilde{F}^{i j} }\right\rVert} &= \begin{bmatrix}0 & B_x & B_y & B_z \\ -B_x & 0 & -E_z & E_y \\ -B_y & E_z & 0 & -E_x \\ -B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.73)

From these we can see by inspection that we have

\begin{aligned}\tilde{F}^{i j} F_{ij} = \tilde{F}_{i j} F^{ij} = 4 (\mathbf{E} \cdot \mathbf{B})\end{aligned} \hspace{\stretch{1}}(2.74)

This is consistent with the stated result in [1] (except for a factor of c due to units differences), so it appears the signs above are all kosher.

Now, let’s see if the if the dual tensor satisfies the vacuum equations.

\begin{aligned}\partial_j \tilde{F}^{i j}&=\partial_j \frac{1}{{2}} \epsilon^{i j k l} F_{k l} \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j (\partial_k A_l - \partial_l A_k) \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j \partial_k A_l - \frac{1}{{2}} \epsilon^{i j l k} \partial_j \partial_k A_l \\ &=\epsilon^{i j k l} \partial_j \partial_k A_l \\ &= 0 \qquad\square\end{aligned}

The final equality follows since the antisymmetric \epsilon^{i j k l} is contracted with the symmetric \partial_j \partial_k. So the first of the vacuum equations checks out, provided we have no sources. If we have sources, then the duality transformed equations do not hold, since that would require the four current density to be zero.

How about the Bianchi identity? That gives us

\begin{aligned}\epsilon^{i j k l} \partial_j \tilde{F}_{k l} &=\epsilon^{i j k l} \partial_j \frac{1}{{2}} \epsilon_{k l a b} F^{a b} \\ &=\frac{1}{{2}} \epsilon^{k l i j} \epsilon_{k l a b} \partial_j F^{a b} \\ &=\frac{1}{{2}} (2!) {\delta^i}_{[a} {\delta^j}_{b]} \partial_j F^{a b} \\ &=\partial_j (F^{i j} - F^{j i} ) \\ &=2 \partial_j F^{i j} .\end{aligned}

The factor of two is slightly curious. Is there a mistake above? If there is a mistake, it doesn’t change the fact that Maxwell’s equation

\begin{aligned}\partial_k F^{k i} = \frac{4 \pi}{c} j^i\end{aligned} \hspace{\stretch{1}}(2.75)

gives us zero for the Bianchi identity under the source free condition j^i = 0.

Problem 2. Transformation properties of \mathbf{E} and \mathbf{B}, again.

1. Statement

Use the form of F^{i j} from page 82 in the class notes, the transformation law for {\left\lVert{ F^{i j} }\right\rVert} given further down that same page, and the explicit form of the SO(1,3) matrix \hat{O} (say, corresponding to motion in the positive x_1 direction with speed v) to derive the transformation law of the fields \mathbf{E} and \mathbf{B}. Use the transformation law to find the electromagnetic field of a charged particle moving with constant speed v in the positive x_1 direction and check that the result agrees with the one that you obtained in Homework 2.

1. Solution

Given a transformation of coordinates

\begin{aligned}{x'}^i \rightarrow {O^i}_j x^j\end{aligned} \hspace{\stretch{1}}(3.76)

our rank 2 tensor F^{i j} transforms as

\begin{aligned}F^{i j} \rightarrow {O^i}_aF^{a b}{O^j}_b.\end{aligned} \hspace{\stretch{1}}(3.77)

Introducing matrices

\begin{aligned}\hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ \hat{F} &= {\left\lVert{F^{ij}}\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.78)

and noting that \hat{O}^\text{T} = {\left\lVert{{O^j}_i}\right\rVert}, we can express the electromagnetic strength tensor transformation as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}.\end{aligned} \hspace{\stretch{1}}(3.80)

The class notes use {x'}^i \rightarrow O^{ij} x^j, which violates our conventions on mixed upper and lower indexes, but the end result 3.80 is the same.

For the boost along the positive x_1 direction, written in terms of the rapidity \alpha, we have

\begin{aligned}{\left\lVert{{O^i}_j}\right\rVert} =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.81)

Writing

\begin{aligned}C &= \cosh\alpha = \gamma \\ S &= -\sinh\alpha = -\gamma \beta,\end{aligned} \hspace{\stretch{1}}(3.82)

we can compute the transformed field strength tensor

\begin{aligned}\hat{F}' &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \\ &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}- S E_x        & -C E_x        & -E_y  & -E_z \\ C E_x          & S E_x         & -B_z  & B_y \\ C E_y + S B_z  & S E_y + C B_z & 0     & -B_x \\ C E_z - S B_y  & S E_z - C B_y & B_x   & 0 \end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -C E_y - S B_z & - C E_z + S B_y \\ E_x & 0 & -S E_y - C B_z & - S E_z + C B_y \\ C E_y + S B_z & S E_y + C B_z & 0 & -B_x \\ C E_z - S B_y & S E_z - C B_y & B_x & 0\end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -\gamma(E_y - \beta B_z) & - \gamma(E_z + \beta B_y) \\ E_x & 0 & - \gamma (-\beta E_y + B_z) & \gamma( \beta E_z + B_y) \\ \gamma (E_y - \beta B_z) & \gamma(-\beta E_y + B_z) & 0 & -B_x \\ \gamma (E_z + \beta B_y) & -\gamma(\beta E_z + B_y) & B_x & 0\end{bmatrix}.\end{aligned}

As a check we have the antisymmetry that is expected. There is also a regularity to the end result that is aesthetically pleasing, hinting that things are hopefully error free. In coordinates for \mathbf{E} and \mathbf{B} this is

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y - \beta B_z ) \\ E_z &\rightarrow \gamma ( E_z + \beta B_y ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y + \beta E_z ) \\ B_z &\rightarrow \gamma ( B_z - \beta E_y ) \end{aligned} \hspace{\stretch{1}}(3.84)
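
Before putting this in vector form, here is a quick sympy cross check of 3.84 (my own sketch, not from the problem set): build the boost with C = \gamma, S = -\gamma \beta, form \hat{O} \hat{F} \hat{O}^\text{T}, and compare a few entries against the transformation law just read off.

# Check the boosted strength tensor entries against 3.84.
from sympy import symbols, Matrix, sqrt, simplify

Ex, Ey, Ez, Bx, By, Bz, beta = symbols('E_x E_y E_z B_x B_y B_z beta', real=True)
gamma = 1 / sqrt(1 - beta**2)
C, S = gamma, -gamma * beta                 # cosh(alpha), -sinh(alpha)

F = Matrix([[0, -Ex, -Ey, -Ez],
            [Ex,  0, -Bz,  By],
            [Ey,  Bz,  0, -Bx],
            [Ez, -By,  Bx,  0]])
O = Matrix([[C, S, 0, 0],
            [S, C, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]])

Fp = O * F * O.T
# Read off E'_y = -F'^{02}, E'_z = -F'^{03}, B'_y = F'^{13}, B'_z = F'^{21}.
print(simplify(-Fp[0, 2] - gamma * (Ey - beta * Bz)))   # 0
print(simplify(-Fp[0, 3] - gamma * (Ez + beta * By)))   # 0
print(simplify(Fp[1, 3] - gamma * (By + beta * Ez)))    # 0
print(simplify(Fp[2, 1] - gamma * (Bz - beta * Ey)))    # 0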

Writing \boldsymbol{\beta} = \mathbf{e}_1 \beta, we have

\begin{aligned}\boldsymbol{\beta} \times \mathbf{B} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \beta & 0 & 0 \\ B_x & B_y & B_z\end{vmatrix} = \mathbf{e}_2 (-\beta B_z) + \mathbf{e}_3( \beta B_y ),\end{aligned} \hspace{\stretch{1}}(3.90)

which puts us enroute to a tidier vector form

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y + (\boldsymbol{\beta} \times \mathbf{B})_y ) \\ E_z &\rightarrow \gamma ( E_z + (\boldsymbol{\beta} \times \mathbf{B})_z ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y - (\boldsymbol{\beta} \times \mathbf{E})_y ) \\ B_z &\rightarrow \gamma ( B_z - (\boldsymbol{\beta} \times \mathbf{E})_z ).\end{aligned} \hspace{\stretch{1}}(3.91)

For a vector \mathbf{A}, write \mathbf{A}_\parallel = (\mathbf{A} \cdot \hat{\mathbf{v}})\hat{\mathbf{v}}, \mathbf{A}_\perp = \mathbf{A} - \mathbf{A}_\parallel, allowing a compact description of the field transformation

\begin{aligned}\mathbf{E} &\rightarrow \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp + \gamma (\boldsymbol{\beta} \times \mathbf{B})_\perp \\ \mathbf{B} &\rightarrow \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp - \gamma (\boldsymbol{\beta} \times \mathbf{E})_\perp.\end{aligned} \hspace{\stretch{1}}(3.97)

Now, we want to consider the field of a moving particle. In the particle’s (unprimed) rest frame the field due to its potential \phi = q/r is

\begin{aligned}\mathbf{E} &= \frac{q}{r^2} \hat{\mathbf{r}} \\ \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(3.99)

Coordinates for a “stationary” observer, who sees this particle moving along the x-axis at speed v are related by a boost in the -v direction

\begin{aligned}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}=\begin{bmatrix}\gamma & \gamma (v/c) & 0 & 0 \\ \gamma (v/c) & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.101)

Therefore the fields in the observer frame will be

\begin{aligned}\mathbf{E}' &= \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp - \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{B})_\perp = \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp \\ \mathbf{B}' &= \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp + \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp = \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp \end{aligned} \hspace{\stretch{1}}(3.102)

More explicitly with \mathbf{E} = \frac{q}{r^3}(x, y, z) this is

\begin{aligned}\mathbf{E}' &= \frac{q}{r^3}(x, \gamma y, \gamma z) \\ \mathbf{B}' &= \gamma \frac{q v}{c r^3} ( 0, -z, y )\end{aligned} \hspace{\stretch{1}}(3.104)
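
As an aside (my own observation, easily verified), these transformed fields satisfy \mathbf{B}' = \boldsymbol{\beta} \times \mathbf{E}', which is a compact way to remember the magnetic field of a uniformly moving charge. A few sympy lines confirm this for 3.104

# Check B' = beta x E' for the boosted Coulomb field 3.104.
from sympy import symbols, Matrix, simplify

q, v, c, x, y, z, gamma, r = symbols('q v c x y z gamma r', positive=True)

Ep = q / r**3 * Matrix([x, gamma * y, gamma * z])
Bp = gamma * q * v / (c * r**3) * Matrix([0, -z, y])
beta = Matrix([v / c, 0, 0])

print(simplify(beta.cross(Ep) - Bp))    # prints the zero vector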

Comparing to Problem 3 in Problem set 2, I see that this matches the result obtained by separately transforming the gradient, the time partial, and the scalar potential. Actually, if I am being honest, I see that I made a sign error in all the coordinates of \mathbf{E}' when I initially did this (ungraded) problem in problem set 2. That sign error should have been obvious by considering the v=0 case, which would have mysteriously resulted in inversion of all the coordinates of the observed electric field.

2. Statement

A particle is moving with velocity \mathbf{v} in perpendicular \mathbf{E} and \mathbf{B} fields, all given in some particular “stationary” frame of reference.

\begin{enumerate}
\item Show that there exists a frame where the problem of finding the particle trajectory can be reduced to having either only an electric or only a magnetic field.
\item Explain what determines which case takes place.
\item Find the velocity \mathbf{v}_0 of that frame relative to the “stationary” frame.
\end{enumerate}

2. Solution

\paragraph{Part 1 and 2:} Existence of the transformation.

In the single particle Lorentz trajectory problem we wish to solve

\begin{aligned}m c \frac{du^i}{ds} = \frac{e}{c} F^{i j} u_j,\end{aligned} \hspace{\stretch{1}}(3.106)

which in matrix form we can write as

\begin{aligned}\frac{d U}{ds} = \frac{e}{m c^2} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.107)

where we write our column vector proper velocity as U = {\left\lVert{u^i}\right\rVert}. Under transformation of coordinates {u'}^i = {O^i}_j u^j, with \hat{O} = {\left\lVert{{O^i}_j}\right\rVert}, this becomes

\begin{aligned}\hat{O} \frac{d U}{ds} = \frac{e}{m c^2} \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.108)

Suppose we can find eigenvectors for the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G}. That is for some eigenvalue \lambda, we can find an eigenvector \Sigma

\begin{aligned}\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \Sigma = \lambda \Sigma.\end{aligned} \hspace{\stretch{1}}(3.109)

Rearranging we have

\begin{aligned}(\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) \Sigma = 0\end{aligned} \hspace{\stretch{1}}(3.110)

and conclude that \Sigma lies in the null space of the matrix \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I and that this difference of matrices must have a zero determinant

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) = -\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda \hat{G}) = 0.\end{aligned} \hspace{\stretch{1}}(3.111)

Since \hat{G} = \hat{O} \hat{G} \hat{O}^\text{T} for any Lorentz transformation \hat{O} in SO(1,3), and \text{Det} ABC = \text{Det} A \text{Det} B \text{Det} C we have

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda G)= \text{Det} (\hat{F} - \lambda \hat{G}).\end{aligned} \hspace{\stretch{1}}(3.112)

In problem 1.6, we called this our characteristic equation P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}). Observe that the characteristic equation is Lorentz invariant for any \lambda, which requires that the eigenvalues \lambda are also Lorentz invariants.

In problem 1.6 of this problem set we computed that this characteristic equation expands to

\begin{aligned}P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.\end{aligned} \hspace{\stretch{1}}(3.113)

The eigenvalues for the system, also each necessarily Lorentz invariants, are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(3.114)

Observe that in the specific case where \mathbf{E} \cdot \mathbf{B} = 0, as in this problem, we must have \mathbf{E}' \cdot \mathbf{B}' = 0 in all frames, and the two non-zero eigenvalues of our characteristic polynomial are simply

\begin{aligned}\lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}.\end{aligned} \hspace{\stretch{1}}(3.115)

These and \mathbf{E} \cdot \mathbf{B} = 0 are the invariants for this system. If we have \mathbf{E}^2 > \mathbf{B}^2 in one frame, we must also have {\mathbf{E}'}^2 > {\mathbf{B}'}^2 in another frame, still maintaining perpendicular fields. In particular if \mathbf{B}' = 0 we maintain real eigenvalues. Similarly if \mathbf{B}^2 > \mathbf{E}^2 in some frame, we must always have imaginary eigenvalues, and this is also true in the \mathbf{E}' = 0 case.

While the problem can be posed as a pure diagonalization problem (and even solved numerically this way for the general constant fields case), we can also work symbolically, thinking of the trajectories problem as simply seeking a transformation of frames that reduces the scope of the problem to one that is more tractable. That does not have to be the linear transformation that diagonalizes the system. Instead we are free to transform to a frame where one of the two fields \mathbf{E}' or \mathbf{B}' is zero, provided the invariants discussed are maintained.

\paragraph{Part 3:} Finding the boost velocity that wipes out one of the fields.

Let’s now consider a Lorentz boost \hat{O}, and seek to solve for the boost velocity that wipes out one of the fields, given the invariants that must be maintained for the system

To make things concrete, suppose that our perpendicular fields are given by \mathbf{E} = E \mathbf{e}_2 and \mathbf{B} = B \mathbf{e}_3.

Let us also assume that we can find the velocity \mathbf{v}_0 for which one or more of the transformed fields is zero. Suppose that velocity is

\begin{aligned}\mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) = v_0 \hat{\mathbf{v}}_0,\end{aligned} \hspace{\stretch{1}}(3.116)

where \alpha_i are the direction cosines of \mathbf{v}_0 so that \sum_i \alpha_i^2 = 1. We will want to compute the components of \mathbf{E} and \mathbf{B} parallel and perpendicular to this velocity.

Those are

\begin{aligned}\mathbf{E}_\parallel &= E \mathbf{e}_2 \cdot (\alpha_1, \alpha_2, \alpha_3) (\alpha_1, \alpha_2, \alpha_3) \\ &= E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) \\ \end{aligned}

\begin{aligned}\mathbf{E}_\perp &= E \mathbf{e}_2 - \mathbf{E}_\parallel \\ &= E (-\alpha_1 \alpha_2, 1 - \alpha_2^2, -\alpha_2 \alpha_3) \\ &= E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) \\ \end{aligned}

For the magnetic field we have

\begin{aligned}\mathbf{B}_\parallel &= B \alpha_3 (\alpha_1, \alpha_2, \alpha_3),\end{aligned}

and

\begin{aligned}\mathbf{B}_\perp &= B \mathbf{e}_3 - \mathbf{B}_\parallel \\ &= B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  \\ \end{aligned}

Now, observe that (\boldsymbol{\beta} \times \mathbf{B})_\parallel \propto ((\mathbf{v}_0 \times \mathbf{B}) \cdot \mathbf{v}_0) \mathbf{v}_0, but this is just zero. So we have (\boldsymbol{\beta} \times \mathbf{B})_\perp = \boldsymbol{\beta} \times \mathbf{B}, and likewise for the \boldsymbol{\beta} \times \mathbf{E} term. So our cross product terms are just

\begin{aligned}\hat{\mathbf{v}}_0 \times \mathbf{B} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & 0 & B         \end{vmatrix} = B (\alpha_2, -\alpha_1, 0) \\ \hat{\mathbf{v}}_0 \times \mathbf{E} &=         \begin{vmatrix}         \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\         \alpha_1 & \alpha_2 & \alpha_3 \\         0 & E & 0         \end{vmatrix} = E (-\alpha_3, 0, \alpha_1)\end{aligned}

We can now express how the fields transform, given this arbitrary boost velocity. From 3.97, this is

\begin{aligned}\mathbf{E} &\rightarrow E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) + \gamma E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) + \gamma \frac{v_0}{c} B (\alpha_2, -\alpha_1, 0) \\ \mathbf{B} &\rightarrow B \alpha_3 (\alpha_1, \alpha_2, \alpha_3)+ \gamma B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2)  - \gamma \frac{v_0}{c} E (-\alpha_3, 0, \alpha_1)\end{aligned} \hspace{\stretch{1}}(3.117)

Zero Electric field case.

Let’s tackle the two cases separately. First when {\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}, we can transform to a frame where \mathbf{E}'=0. In coordinates from 3.117 this supplies us three sets of equations. These are

\begin{aligned}0 &= E \alpha_2 \alpha_1 (1 - \gamma) + \gamma \frac{v_0}{c} B \alpha_2  \\ 0 &= E \alpha_2^2 + \gamma E (\alpha_1^2 + \alpha_3^2) - \gamma \frac{v_0}{c} B \alpha_1  \\ 0 &= E \alpha_2 \alpha_3 (1 - \gamma).\end{aligned} \hspace{\stretch{1}}(3.119)

With an assumed solution the \mathbf{e}_3 coordinate equation implies that one of \alpha_2 or \alpha_3 is zero. Perhaps there are solutions with \alpha_3 = 0 too, but inspection shows that \alpha_2 = 0 nicely kills off the first equation. Since \alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1, that also implies that we are left with

\begin{aligned}0 = E - \frac{v_0}{c} B \alpha_1 \end{aligned} \hspace{\stretch{1}}(3.122)

Or

\begin{aligned}\alpha_1 &= \frac{E}{B} \frac{c}{v_0} \\ \alpha_2 &= 0 \\ \alpha_3 &= \sqrt{1 - \frac{E^2}{B^2} \frac{c^2}{v_0^2} }\end{aligned} \hspace{\stretch{1}}(3.123)

Our velocity was \mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) solving the problem for the {\left\lvert{\mathbf{B}}\right\rvert}^2 > {\left\lvert{\mathbf{E}}\right\rvert}^2 case up to an adjustable constant v_0. That constant comes with constraints however, since we must also have our cosine \alpha_1 \le 1. Expressed another way, the magnitude of the boost velocity is constrained by the relation

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{E}{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.126)

It appears we may also pick the equality case, so one velocity (not unique) that should transform away the electric field is

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{E}{B}}\right\rvert} \mathbf{e}_1 = \pm c {\left\lvert{\frac{E}{B}}\right\rvert} \frac{\mathbf{E} \times \mathbf{B}}{{\left\lvert{\mathbf{E}}\right\rvert} {\left\lvert{\mathbf{B}}\right\rvert}}.}\end{aligned} \hspace{\stretch{1}}(3.127)

This particular boost direction is perpendicular to both fields. Observe that this highlights the invariance condition {\left\lvert{\frac{E}{B}}\right\rvert} < 1 since we see this is required for a physically realizable velocity. Boosting in this direction will reduce our problem to one that has only the magnetic field component.
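
As a numeric sanity check of this (my own sketch, in units where c = 1, using the transformation law 3.97), boosting along \mathbf{E} \times \mathbf{B} with this velocity should leave a purely magnetic field of magnitude \sqrt{\mathbf{B}^2 - \mathbf{E}^2}

# Boost perpendicular crossed fields (|B| > |E|) and confirm E' vanishes.
import numpy as np

E = np.array([0.0, 0.3, 0.0])              # E = E e_2
B = np.array([0.0, 0.0, 0.7])              # B = B e_3, with |B| > |E|
beta = np.cross(E, B) / np.dot(B, B)       # boost velocity, magnitude E/B (c = 1)
gamma = 1 / np.sqrt(1 - np.dot(beta, beta))

n = beta / np.linalg.norm(beta)
E_par, B_par = np.dot(E, n) * n, np.dot(B, n) * n
E_perp, B_perp = E - E_par, B - B_par

E_prime = E_par + gamma * (E_perp + np.cross(beta, B_perp))   # 3.97
B_prime = B_par + gamma * (B_perp - np.cross(beta, E_perp))

print(np.round(E_prime, 12))               # [0, 0, 0]
print(np.round(B_prime, 12))               # along e_3, magnitude sqrt(B^2 - E^2)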

Zero Magnetic field case.

Now, let’s consider the case where we transform the magnetic field away, the case when our characteristic polynomial has strictly real eigenvalues \lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}. In this case, if we write out our equations for the transformed magnetic field and require these to separately equal zero, we have

\begin{aligned}0 &= B \alpha_3 \alpha_1 ( 1 - \gamma ) + \gamma \frac{v_0}{c} E \alpha_3 \\ 0 &= B \alpha_2 \alpha_3 ( 1 - \gamma ) \\ 0 &= B (\alpha_3^2 + \gamma (\alpha_1^2 + \alpha_2^2)) - \gamma \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.128)

Similar to before we see that \alpha_3 = 0 kills off the first and second equations, leaving just

\begin{aligned}0 = B - \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.131)

We now have a solution for the family of direction vectors that kill the magnetic field off

\begin{aligned}\alpha_1 &= \frac{B}{E} \frac{c}{v_0} \\ \alpha_2 &= \sqrt{ 1 - \frac{B^2}{E^2} \frac{c^2}{v_0^2} } \\ \alpha_3 &= 0.\end{aligned} \hspace{\stretch{1}}(3.132)

In addition to the initial constraint that {\left\lvert{\frac{B}{E}}\right\rvert} < 1, we have as before, constraints on the allowable values of v_0

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{B}{E}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.135)

Like before we can pick the equality \alpha_1^2 = 1, yielding a boost direction of

\begin{aligned}\boxed{\mathbf{v}_0 = c {\left\lvert{\frac{B}{E}}\right\rvert} \mathbf{e}_1 = \pm c {\left\lvert{\frac{B}{E}}\right\rvert} \frac{\mathbf{E} \times \mathbf{B}}{{\left\lvert{\mathbf{E}}\right\rvert} {\left\lvert{\mathbf{B}}\right\rvert}}.}\end{aligned} \hspace{\stretch{1}}(3.136)

Again, we see that the invariance condition {\left\lvert{\mathbf{B}}\right\rvert} < {\left\lvert{\mathbf{E}}\right\rvert} is required for a physically realizable velocity if that velocity is entirely perpendicular to the fields.

Problem 3. Continuity equation for delta function current distributions.

Statement

Show explicitly that the electromagnetic 4-current j^i for a particle moving with constant velocity (considered in class, p. 100-101 of notes) is conserved \partial_i j^i = 0. Give a physical interpretation of this conservation law, for example by integrating \partial_i j^i over some spacetime region and giving an integral form to the conservation law (\partial_i j^i = 0 is known as the “continuity equation”).

Solution

First lets review. Our four current was defined as

\begin{aligned}j^i(x) = \sum_A c e_A \int_{x(\tau)} dx_A^i(\tau) \delta^4(x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(4.137)

If each of the trajectories x_A(\tau) represents constant motion we have

\begin{aligned}x_A(\tau) = x_A(0) + \gamma_A \tau ( c, \mathbf{v}_A ).\end{aligned} \hspace{\stretch{1}}(4.138)

The spacetime split of this four vector is

\begin{aligned}x_A^0(\tau) &= x_A^0(0) + \gamma_A \tau c \\ \mathbf{x}_A(\tau) &= \mathbf{x}_A(0) + \gamma_A \tau \mathbf{v}_A,\end{aligned} \hspace{\stretch{1}}(4.139)

with differentials

\begin{aligned}dx_A^0(\tau) &= \gamma_A d\tau c \\ d\mathbf{x}_A(\tau) &= \gamma_A d\tau \mathbf{v}_A.\end{aligned} \hspace{\stretch{1}}(4.141)

Writing out the delta functions explicitly we have

\begin{aligned}\begin{aligned}j^i(x) = \sum_A &c e_A \int_{x(\tau)} dx_A^i(\tau) \delta(x^0 - x_A^0(0) - \gamma_A c \tau) \delta(x^1 - x_A^1(0) - \gamma_A v_A^1 \tau) \\ &\delta(x^2 - x_A^2(0) - \gamma_A v_A^2 \tau) \delta(x^3 - x_A^3(0) - \gamma_A v_A^3 \tau)\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.143)

So our time and space components of the current can be written

\begin{aligned}j^0(x) &= \sum_A c^2 e_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau) \\ \mathbf{j}(x) &= \sum_A c e_A \mathbf{v}_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau).\end{aligned} \hspace{\stretch{1}}(4.144)

Each of these integrals can be evaluated with respect to the time coordinate delta function leaving the distribution

\begin{aligned}j^0(x) &= \sum_A c e_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0))) \\ \mathbf{j}(x) &= \sum_A e_A \mathbf{v}_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0)))\end{aligned} \hspace{\stretch{1}}(4.146)

With this more general expression (the multi-particle case) it should be possible to show that the four divergence is zero. However, the problem only asks for one particle. For the one particle case, we can make things really easy by taking the initial point in space and time as the origin, and aligning our velocity with one of the coordinates (say x).

Doing so we have the result derived in class

\begin{aligned}j = e \begin{bmatrix}c \\ v \\ 0 \\ 0 \end{bmatrix}\delta(x - v x^0/c)\delta(y)\delta(z).\end{aligned} \hspace{\stretch{1}}(4.148)

Our divergence then has only two portions

\begin{aligned}\frac{\partial {j^0}}{\partial {x^0}} &= e c (-v/c) \delta'(x - v x^0/c) \delta(y) \delta(z) \\ \frac{\partial {j^1}}{\partial {x}} &= e v \delta'(x - v x^0/c) \delta(y) \delta(z).\end{aligned} \hspace{\stretch{1}}(4.149)

and these cancel out when summed. Note that this requires us to be loose with our delta functions, treating them like regular functions that are differentiable.
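
If we are willing to let sympy be equally loose with delta functions, this cancellation can be checked mechanically. This is just my own sketch, not required by the problem

# Formal check of d_0 j^0 + d_1 j^1 = 0 for the single particle current 4.148.
from sympy import symbols, DiracDelta, diff, simplify

x0, x, y, z, e, v, c = symbols('x0 x y z e v c')

common = DiracDelta(x - v * x0 / c) * DiracDelta(y) * DiracDelta(z)
j0 = e * c * common            # time component (c rho)
j1 = e * v * common            # x component of the current

print(simplify(diff(j0, x0) + diff(j1, x)))    # prints 0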

For the more general multiparticle case, we can treat the sum one particle at a time, and in each case, rotate coordinates so that the four divergence only picks up one term.

As for physical interpretation via integral, we have using the four dimensional divergence theorem

\begin{aligned}\int d^4 x \partial_i j^i = \int j^i dS_i\end{aligned} \hspace{\stretch{1}}(4.151)

where dS_i is the three-volume element perpendicular to a x^i = \text{constant} plane. These volume elements are detailed generally in the text [2], however, they do note that one special case specifically dS_0 = dx dy dz, the element of the three-dimensional (spatial) volume “normal” to hyperplanes ct = \text{constant}.

Without actually computing the determinants, we have something that is roughly of the form

\begin{aligned}0 = \int j^i dS_i=\int c \rho dx dy dz+\int \mathbf{j} \cdot (\mathbf{n}_x c dt dy dz + \mathbf{n}_y c dt dx dz + \mathbf{n}_z c dt dx dy).\end{aligned} \hspace{\stretch{1}}(4.152)

This is cheating a bit, to just write \mathbf{n}_x, \mathbf{n}_y, \mathbf{n}_z. Are there specific orientations required by the metric? To be precise we would have to calculate the determinants detailed in the text, and then do the duality transformations.

Per unit time, we can write instead

\begin{aligned}\frac{\partial {}}{\partial {t}} \int \rho dV= -\int \mathbf{j} \cdot (\mathbf{n}_x dy dz + \mathbf{n}_y dx dz + \mathbf{n}_z dx dy)\end{aligned} \hspace{\stretch{1}}(4.153)

Loosely, this describes the fact that the rate of change of the charge within a volume must be matched by the “flow” of current through the surface bounding that volume in the same interval of time.

References

[1] Wikipedia. Electromagnetic tensor — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 27-February-2011]. http://en.wikipedia.org/w/index.php?title=Electromagnetic_tensor&oldid=414989505.

[2] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY450H1S. Relativistic Electrodynamics Lecture 11 (Taught by Prof. Erich Poppitz). Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.

Posted by peeterjoot on February 24, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Reading.

Covering chapter 3 material from the text [1].

Covering lecture notes pp. 74-83: Lorentz transformation of the strength tensor (82) [Tuesday, Feb. 8] [extra reading for the mathematically minded: gauge field, strength tensor, and gauge transformations in differential form language, not to be covered in class (83)]

Covering lecture notes pp. 84-102: Lorentz invariants of the electromagnetic field (84-86); Bianchi identity and the first half of Maxwell’s equations (87-90)

Chewing on the four vector form of the Lorentz force equation.

After much effort, we arrived at

\begin{aligned}\frac{d{{(m c u_l) }}}{ds} = \frac{e}{c} \left( \partial_l A_i - \partial_i A_l \right) u^i\end{aligned} \hspace{\stretch{1}}(2.1)

or

\begin{aligned}\frac{d{{ p_l }}}{ds} = \frac{e}{c} F_{l i} u^i\end{aligned} \hspace{\stretch{1}}(2.2)

Elements of the strength tensor

\paragraph{Claim}: there are only 6 independent elements of this matrix (tensor)

\begin{aligned}\begin{bmatrix}0 & . & . & . \\    & 0 & . & . \\    &   & 0 & . \\    &   &   & 0 \\  \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)

This is a no-brainer, for we just have to mechanically plug in the elements of the field strength tensor

Recall

\begin{aligned}A^i &= (\phi, \mathbf{A}) \\ A_i &= (\phi, -\mathbf{A})\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}F_{0\alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0  \\ &= -\partial_0 (\mathbf{A})_\alpha - \partial_\alpha \phi  \\ \end{aligned}

\begin{aligned}F_{0\alpha} = E_\alpha\end{aligned} \hspace{\stretch{1}}(2.6)

For the purely spatial index combinations we have

\begin{aligned}F_{\alpha\beta} &= \partial_\alpha A_\beta - \partial_\beta A_\alpha  \\ &= -\partial_\alpha (\mathbf{A})_\beta + \partial_\beta (\mathbf{A})_\alpha  \\ \end{aligned}

Written out explicitly, these are

\begin{aligned}F_{1 2} &= \partial_2 (\mathbf{A})_1 -\partial_1 (\mathbf{A})_2  \\ F_{2 3} &= \partial_3 (\mathbf{A})_2 -\partial_2 (\mathbf{A})_3  \\ F_{3 1} &= \partial_1 (\mathbf{A})_3 -\partial_3 (\mathbf{A})_1 .\end{aligned} \hspace{\stretch{1}}(2.7)

We can compare this to the elements of \mathbf{B}

\begin{aligned}\mathbf{B} = \begin{vmatrix}\hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \\ \partial_1 & \partial_2 & \partial_3 \\ A_x & A_y & A_z\end{vmatrix}\end{aligned} \hspace{\stretch{1}}(2.10)

We see that

\begin{aligned}(\mathbf{B})_z &= \partial_1 A_y - \partial_2 A_x \\ (\mathbf{B})_x &= \partial_2 A_z - \partial_3 A_y \\ (\mathbf{B})_y &= \partial_3 A_x - \partial_1 A_z\end{aligned} \hspace{\stretch{1}}(2.11)

So we have

\begin{aligned}F_{1 2} &= - (\mathbf{B})_3 \\ F_{2 3} &= - (\mathbf{B})_1 \\ F_{3 1} &= - (\mathbf{B})_2.\end{aligned} \hspace{\stretch{1}}(2.14)

These can be summarized as simply

\begin{aligned}F_{\alpha\beta} = - \epsilon_{\alpha\beta\gamma} B_\gamma.\end{aligned} \hspace{\stretch{1}}(2.17)

This provides all the info needed to fill in the matrix above

\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.18)

Index raising of rank 2 tensor

To raise indexes we compute

\begin{aligned}F^{i j} = g^{i l} g^{j k} F_{l k}.\end{aligned} \hspace{\stretch{1}}(2.19)

Justifying the raising operation.

To justify this consider raising one index at a time by applying the metric tensor to our definition of F_{l k}. That is

\begin{aligned}g^{a l} F_{l k} &=g^{a l} (\partial_l A_k - \partial_k A_l) \\ &=\partial^a A_k - \partial_k A^a.\end{aligned}

Now apply the metric tensor once more

\begin{aligned}g^{b k} g^{a l} F_{l k} &=g^{b k} (\partial^a A_k - \partial_k A^a) \\ &=\partial^a A^b - \partial^b A^a.\end{aligned}

This is, by definition F^{a b}. Since a rank 2 tensor has been defined as an object that transforms like the product of two pairs of coordinates, it makes sense that this particular tensor raises in the same fashion as would a product of two vector coordinates (in this case, it happens to be an antisymmetric product of two vectors, and one of which is an operator, but we have the same idea).

Consider the components of the raised F_{i j} tensor.

\begin{aligned}F^{0\alpha} &= -F_{0\alpha} \\ F^{\alpha\beta} &= F_{\alpha\beta}.\end{aligned} \hspace{\stretch{1}}(2.20)

\begin{aligned}{\left\lVert{ F^{i j} }\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.22)
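
As a small cross check of the raising operation (my own sympy sketch), the same result follows from the matrix product \hat{G} {\left\lVert{F_{ij}}\right\rVert} \hat{G}

# Raise both indexes of F_{ij} as a matrix sandwich G F G and print the result.
from sympy import symbols, Matrix, diag

Ex, Ey, Ez, Bx, By, Bz = symbols('E_x E_y E_z B_x B_y B_z')

F_lower = Matrix([[0,   Ex,  Ey,  Ez],
                  [-Ex,  0, -Bz,  By],
                  [-Ey,  Bz,  0, -Bx],
                  [-Ez, -By,  Bx,  0]])   # ||F_{ij}||
G = diag(1, -1, -1, -1)                   # ||g^{ij}|| = ||g_{ij}||

print(G * F_lower * G)                    # reproduces ||F^{ij}|| of 2.22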

Back to chewing on the Lorentz force equation.

\begin{aligned}m c \frac{d{{ u_i }}}{ds} = \frac{e}{c} F_{i j} u^j\end{aligned} \hspace{\stretch{1}}(2.23)

\begin{aligned}u^i &= \gamma \left( 1, \frac{\mathbf{v}}{c} \right) \\ u_i &= \gamma \left( 1, -\frac{\mathbf{v}}{c} \right)\end{aligned} \hspace{\stretch{1}}(2.24)

For the spatial components of the Lorentz force equation we have

\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= \frac{e}{c} F_{\alpha j} u^j \\ &= \frac{e}{c} F_{\alpha 0} u^0+ \frac{e}{c} F_{\alpha \beta} u^\beta \\ &= \frac{e}{c} (-E_{\alpha}) \gamma+ \frac{e}{c} (- \epsilon_{\alpha\beta\gamma} B_\gamma ) \frac{v^\beta}{c} \gamma \end{aligned}

But

\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= -m \frac{d{{(\gamma \mathbf{v}_\alpha)}}}{ds} \\ &= -m \frac{d(\gamma \mathbf{v}_\alpha)}{c \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} dt} \\ &= -\gamma \frac{d(m \gamma \mathbf{v}_\alpha)}{c dt}.\end{aligned}

Canceling the common -\gamma/c terms, and switching to vector notation, we are left with

\begin{aligned}\frac{d( m \gamma \mathbf{v}_\alpha)}{dt} = e \left( E_\alpha + \frac{1}{{c}} (\mathbf{v} \times \mathbf{B})_\alpha \right).\end{aligned} \hspace{\stretch{1}}(2.26)

Now for the energy term. We have

\begin{aligned}m c \frac{d{{u_0}}}{ds} &= \frac{e}{c} F_{0\alpha} u^\alpha \\ &= \frac{e}{c} E_{\alpha} \gamma \frac{v^\alpha}{c} \\ \frac{d{{ (m c \gamma) }}}{ds} &= \frac{e}{c^2} \gamma E_\alpha v^\alpha\end{aligned}

Putting the final two lines into vector form we have

\begin{aligned}\frac{d{{ (m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v},\end{aligned} \hspace{\stretch{1}}(2.27)

or

\begin{aligned}\frac{d{{ \mathcal{E} }}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.28)

Transformation of rank two tensors in matrix and index form.

Transformation of the metric tensor, and some identities.

With

\begin{aligned}\hat{G} = {\left\lVert{ g_{i j} }\right\rVert} = {\left\lVert{ g^{i j} }\right\rVert}\end{aligned} \hspace{\stretch{1}}(3.29)

\paragraph{We claim:}
The rank two tensor \hat{G} transforms in the following sort of sandwich operation, and this leaves it invariant

\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}.\end{aligned} \hspace{\stretch{1}}(3.30)

To demonstrate this let’s consider a transformed vector in coordinate form as follows

\begin{aligned}{x'}^i &= O^{i j} x_j = {O^i}_j x^j \\ {x'}_i &= O_{i j} x^j = {O_i}^j x_j.\end{aligned} \hspace{\stretch{1}}(3.31)

We can thus write the equation in matrix form with

\begin{aligned}X &= {\left\lVert{x^i}\right\rVert} \\ X' &= {\left\lVert{{x'}^i}\right\rVert} \\ \hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ X' &= \hat{O} X\end{aligned} \hspace{\stretch{1}}(3.33)

Our invariant for the vector square, which is required to remain unchanged is

\begin{aligned}{x'}^i {x'}_i &= (O^{i j} x_j)(O_{i k} x^k) \\ &= x^k (O^{i j} O_{i k}) x_j.\end{aligned}

This shows that we have a delta function relationship for the Lorentz transform matrix, when we sum over the first index

\begin{aligned}O^{a i} O_{a j} = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(3.37)

It appears we can put 3.37 into matrix form as

\begin{aligned}\hat{G} \hat{O}^\text{T} \hat{G} \hat{O} = I\end{aligned} \hspace{\stretch{1}}(3.38)
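
A quick numeric check of 3.38 for an explicit x boost (my own sketch) is reassuring

# Verify G O^T G O = I for a boost along x with beta = 0.6.
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
O = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
G = np.diag([1.0, -1.0, -1.0, -1.0])

print(np.allclose(G @ O.T @ G @ O, np.eye(4)))   # True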

Now, if one considers that the transpose of a rotation is an inverse rotation, and the transpose of a boost leaves it unchanged, the transpose of a general Lorentz transformation, a composition of an arbitrary sequence of boosts and rotations, must also be a Lorentz transformation, and must then also leave the norm unchanged. For the transpose of our Lorentz transformation \hat{O} lets write

\begin{aligned}\hat{P} = \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.39)

For the action of this on our position vector let’s write

\begin{aligned}{x''}^i &= P^{i j} x_j = O^{j i} x_j \\ {x''}_i &= P_{i j} x^j = O_{j i} x^j\end{aligned} \hspace{\stretch{1}}(3.40)

so that our norm is

\begin{aligned}{x''}^a {x''}_a &= (O_{k a} x^k)(O^{j a} x_j) \\ &= x^k (O_{k a} O^{j a} ) x_j \\ &= x^j x_j \\ \end{aligned}

We must then also have an identity when summing over the second index

\begin{aligned}{\delta_{k}}^j = O_{k a} O^{j a} \end{aligned} \hspace{\stretch{1}}(3.42)

Armed with these facts on the products of O_{i j} and O^{i j} we can now consider the transformation of the metric tensor.

The rule (definition) supplied to us for the transformation of an arbitrary rank two tensor, is that this transforms as its indexes transform individually. Very much as if it was the product of two coordinate vectors and we transform those coordinates separately. Doing so for the metric tensor we have

\begin{aligned}g^{i j} &\rightarrow {O^i}_k g^{k m} {O^j}_m \\ &= ({O^i}_k g^{k m}) {O^j}_m \\ &= O^{i m} {O^j}_m \\ &= O^{i m} (O_{a m} g^{a j}) \\ &= (O^{i m} O_{a m}) g^{a j}\end{aligned}

However, by 3.42, we have O_{a m} O^{i m} = {\delta_a}^i, and we prove that

\begin{aligned}g^{i j} \rightarrow g^{i j}.\end{aligned} \hspace{\stretch{1}}(3.43)

Finally, we wish to put the above transformation in matrix form, look more carefully at the very first line

\begin{aligned}g^{i j}&\rightarrow {O^i}_k g^{k m} {O^j}_m \\ \end{aligned}

which is

\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}\end{aligned} \hspace{\stretch{1}}(3.44)

We see that this particular form of transformation, a sandwich between \hat{O} and \hat{O}^\text{T}, leaves the metric tensor invariant.

Lorentz transformation of the electrodynamic tensor

Having identified a composition of Lorentz transformation matrices, when acting on the metric tensor, leaves it invariant, it is a reasonable question to ask how this form of transformation acts on our electrodynamic tensor F^{i j}?

\paragraph{Claim:} A transformation of the following form is required to maintain the norm of the Lorentz force equation

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T} ,\end{aligned} \hspace{\stretch{1}}(3.45)

where \hat{F} = {\left\lVert{F^{i j}}\right\rVert}. Observe that our Lorentz force equation can be written exclusively in upper index quantities as

\begin{aligned}m c \frac{d{{u^i}}}{ds} = \frac{e}{c} F^{i j} g_{j l} u^l\end{aligned} \hspace{\stretch{1}}(3.46)

Because we have a vector on one side of the equation, and it transforms by multiplication with a Lorentz matrix in SO(1,3)

\begin{aligned}\frac{du^i}{ds} \rightarrow \hat{O} \frac{du^i}{ds} \end{aligned} \hspace{\stretch{1}}(3.47)

The LHS of the Lorentz force equation provides us with one invariant

\begin{aligned}(m c)^2 \frac{d{{u^i}}}{ds} \frac{d{{u_i}}}{ds}\end{aligned} \hspace{\stretch{1}}(3.48)

so the RHS must also provide one

\begin{aligned}\frac{e^2}{c^2} F^{i j} g_{j l} u^lF_{i k} g^{k m} u_m=\frac{e^2}{c^2} F^{i j} u_jF_{i k} u^k.\end{aligned} \hspace{\stretch{1}}(3.49)

Let’s look at the RHS in matrix form. Writing

\begin{aligned}U = {\left\lVert{u^i}\right\rVert},\end{aligned} \hspace{\stretch{1}}(3.50)

we can rewrite the Lorentz force equation as

\begin{aligned}m c \dot{U} = \frac{e}{c} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.51)

In this matrix formalism our invariant 3.49 is

\begin{aligned}\frac{e^2}{c^2} (\hat{F} \hat{G} U)^\text{T} G \hat{F} \hat{G} U=\frac{e^2}{c^2} U^\text{T} \hat{G} \hat{F}^\text{T} G \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.52)

If we compare this to the transformed Lorentz force equation we have

\begin{aligned}m c \hat{O} \dot{U} = \frac{e}{c} \hat{F'} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.53)

Our invariant for the transformed equation is

\begin{aligned}\frac{e^2}{c^2} (\hat{F'} \hat{G} \hat{O} U)^\text{T} G \hat{F'} \hat{G} \hat{O} U&=\frac{e^2}{c^2} U^\text{T} \hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} G \hat{F'} \hat{G} \hat{O} U \\ \end{aligned}

Thus the transformed electrodynamic tensor \hat{F}' must satisfy the identity

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} G \hat{F'} \hat{G} \hat{O} = \hat{G} \hat{F}^\text{T} G \hat{F} \hat{G} \end{aligned} \hspace{\stretch{1}}(3.54)

With the substitution \hat{F}' = \hat{O} \hat{F} \hat{O}^\text{T} the LHS is

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} &= \hat{O}^\text{T} \hat{G} ( \hat{O} \hat{F} \hat{O}^\text{T})^\text{T} \hat{G} (\hat{O} \hat{F} \hat{O}^\text{T}) \hat{G} \hat{O}  \\ &= (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F}^\text{T} (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F} (\hat{O}^\text{T} \hat{G} \hat{O}) \\ \end{aligned}

We’ve argued that \hat{P} = \hat{O}^\text{T} is also a Lorentz transformation, thus

\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{O}&=\hat{P} \hat{G} \hat{O}^\text{T} \\ &=\hat{G}\end{aligned}

This is enough to make both sides of 3.54 match, verifying that this transformation does provide the invariant properties desired.

Direct computation of the Lorentz transformation of the electrodynamic tensor.

We can construct the transformed field tensor more directly, by simply transforming the coordinates of the four gradient and the four potential directly. That is

\begin{aligned}F^{i j} = \partial^i A^j - \partial^j A^i&\rightarrow {O^i}_a {O^j}_b \left( \partial^a A^b - \partial^b A^a \right) \\ &={O^i}_a F^{a b} {O^j}_b \end{aligned}

By inspection we can see that this can be represented in matrix form as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.55)

Four vector invariants

For three vectors \mathbf{A} and \mathbf{B} invariants are

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = A^\alpha B_\alpha\end{aligned} \hspace{\stretch{1}}(4.56)

For four vectors A^i and B^i invariants are

\begin{aligned}A^i B_i = A^i g_{i j} B^j  \end{aligned} \hspace{\stretch{1}}(4.57)

For F_{i j} what are the invariants? One invariant is

\begin{aligned}g^{i j} F_{i j} = 0,\end{aligned} \hspace{\stretch{1}}(4.58)

but this isn’t interesting since it is uniformly zero (product of symmetric and antisymmetric).

The two invariants are

\begin{aligned}F_{i j}F^{i j}\end{aligned} \hspace{\stretch{1}}(4.59)

and

\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l}\end{aligned} \hspace{\stretch{1}}(4.60)

where

\begin{aligned}\epsilon^{i j k l} =\left\{\begin{array}{l l}0 & \quad \mbox{if any two indexes coincide} \\ 1 & \quad \mbox{for even permutations of } i j k l = 0123 \\ -1 & \quad \mbox{for odd permutations of } i j k l = 0123\end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.61)

We can show (homework) that

\begin{aligned}F_{i j}F^{i j} \propto \mathbf{E}^2 - \mathbf{B}^2\end{aligned} \hspace{\stretch{1}}(4.62)

\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l} \propto \mathbf{E} \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(4.63)
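
Both of these contractions are easy to grind out mechanically from the explicit matrices. Here is a sympy sketch of that grinding (my own check; the specific numerical factors depend on the conventions used)

# Compute F_{ij} F^{ij} and eps^{ijkl} F_{ij} F_{kl} from the explicit matrices.
from sympy import symbols, Matrix, diag, LeviCivita, simplify, factor

Ex, Ey, Ez, Bx, By, Bz = symbols('E_x E_y E_z B_x B_y B_z')

F_lower = Matrix([[0,   Ex,  Ey,  Ez],
                  [-Ex,  0, -Bz,  By],
                  [-Ey,  Bz,  0, -Bx],
                  [-Ez, -By,  Bx,  0]])
G = diag(1, -1, -1, -1)
F_upper = G * F_lower * G

inv1 = sum(F_lower[i, j] * F_upper[i, j] for i in range(4) for j in range(4))
inv2 = sum(LeviCivita(i, j, k, l) * F_lower[i, j] * F_lower[k, l]
           for i in range(4) for j in range(4)
           for k in range(4) for l in range(4))

print(factor(simplify(inv1)))   # a multiple of B^2 - E^2
print(factor(simplify(inv2)))   # a multiple of E . B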

This first invariant serves as the action density for the Maxwell field equations.

There’s some useful properties of these invariants. One is that if the fields are perpendicular in one frame, then will be in any other.

From the first, note that if {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}, the invariant is positive, and must be positive in all frames, while if {\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert} it must be negative in all frames. So if {\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert} in one frame, we can transform to a frame with only an \mathbf{E}' component, solve that, and then transform back. Similarly if {\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert} in one frame, we can transform to a frame with only a \mathbf{B}' component, solve that, and then transform back.

The first half of Maxwell’s equations.

\paragraph{Claim: } The source free portions of Maxwell’s equations are a consequence of the definition of the field tensor alone.

Given

\begin{aligned}F_{i j} = \partial_i A_j - \partial_j A_i,\end{aligned} \hspace{\stretch{1}}(5.64)

where

\begin{aligned}\partial_i = \frac{\partial {}}{\partial {x^i}}\end{aligned} \hspace{\stretch{1}}(5.65)

This alone implies half of Maxwell’s equations. To show this we consider

\begin{aligned}e^{m k i j} \partial_k F_{i j} = 0.\end{aligned} \hspace{\stretch{1}}(5.66)

This is the Bianchi identity. To demonstrate this identity, we’ll have to swap indexes, employ derivative commutation, and then swap indexes once more

\begin{aligned}e^{m k i j} \partial_k F_{i j} &= e^{m k i j} \partial_k (\partial_i A_j - \partial_j A_i) \\ &= 2 e^{m k i j} \partial_k \partial_i A_j \\ &= 2 e^{m k i j} \frac{1}{{2}} \left( \partial_k \partial_i A_j + \partial_i \partial_k A_j \right) \\ &= e^{m k i j} \partial_k \partial_i A_j + e^{m i k j} \partial_k \partial_i A_j  \\ &= (e^{m k i j} - e^{m k i j}) \partial_k \partial_i A_j \\ &= 0 \qquad \square\end{aligned}

This is the 4D analogue of

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} f) = 0\end{aligned} \hspace{\stretch{1}}(5.67)

i.e.

\begin{aligned}e^{\alpha\beta\gamma} \partial_\beta \partial_\gamma f = 0\end{aligned} \hspace{\stretch{1}}(5.68)

Let’s do this explicitly, starting with

\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(5.69)

For the m= 0 case we have

\begin{aligned}\epsilon^{0 k i j} \partial_k F_{i j}&=\epsilon^{\alpha \beta \gamma} \partial_\alpha F_{\beta \gamma} \\ &= \epsilon^{\alpha \beta \gamma} \partial_\alpha (-\epsilon_{\beta \gamma \delta} B_\delta) \\ &= -\epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma }\partial_\alpha B_\delta \\ &= - 2 {\delta^\alpha}_\delta \partial_\alpha B_\delta \\ &= - 2 \partial_\alpha B_\alpha \end{aligned}

We must then have

\begin{aligned}\partial_\alpha B_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.70)

This is just Gauss’s law for magnetism

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0.\end{aligned} \hspace{\stretch{1}}(5.71)

Let’s do the spatial portion, for which we have three equations, one for each \alpha of

\begin{aligned}e^{\alpha j k l} \partial_j F_{k l}&=e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha 0 \gamma \beta} \partial_0 F_{\gamma \beta}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \beta \gamma 0} \partial_\beta F_{\gamma 0}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}+e^{\alpha \gamma \beta 0} \partial_\gamma F_{\beta 0} \\ &=2 \left( e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}\right) \\ &=2 e^{0 \alpha \beta \gamma} \left(-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\right)\end{aligned}

This implies

\begin{aligned}0 =-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\end{aligned} \hspace{\stretch{1}}(5.72)

Referring back to the previous expansions of 2.6 and 2.17, we have

\begin{aligned}0 =\partial_0 \epsilon_{\beta\gamma\mu} B_\mu+\partial_\beta E_\gamma- \partial_\gamma E_{\beta},\end{aligned} \hspace{\stretch{1}}(5.73)

or

\begin{aligned}\frac{1}{{c}} \frac{\partial {B_\alpha}}{\partial {t}} + (\boldsymbol{\nabla} \times \mathbf{E})_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.74)

These are just the components of the Maxwell-Faraday equation

\begin{aligned}0 = \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} + \boldsymbol{\nabla} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.75)
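
As a final sanity check (my own sketch), both of these source free equations can be confirmed to be identities once \mathbf{E} and \mathbf{B} are written in terms of the potentials, which is all that the Bianchi identity encodes

# With E = -grad(phi) - (1/c) dA/dt and B = curl A, check div B = 0 and
# (1/c) dB/dt + curl E = 0 identically.
from sympy import symbols, Function, diff, simplify, Matrix

t, x, y, z, c = symbols('t x y z c')
X = (x, y, z)
phi = Function('phi')(t, x, y, z)
A = [Function(f'A{i}')(t, x, y, z) for i in range(1, 4)]

E = [-diff(phi, X[a]) - diff(A[a], t) / c for a in range(3)]
B = [diff(A[(a + 2) % 3], X[(a + 1) % 3]) - diff(A[(a + 1) % 3], X[(a + 2) % 3])
     for a in range(3)]                        # B = curl A

div_B = sum(diff(B[a], X[a]) for a in range(3))
faraday = Matrix([diff(B[a], t) / c
                  + diff(E[(a + 2) % 3], X[(a + 1) % 3])
                  - diff(E[(a + 1) % 3], X[(a + 2) % 3])
                  for a in range(3)])          # (1/c) dB/dt + curl E

print(simplify(div_B))       # 0
print(simplify(faraday))     # zero vector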

Appendix. Some additional index gymnastics.

Transposition of mixed index tensor.

Is the transpose of a mixed index object just a substitution of the free indexes? This wasn’t obvious to me that it would be the case, especially since I’d made an error in some index gymnastics that had me temporarily convinced differently. However, working some examples clears the fog. For example let’s take the transpose of 3.37.

\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} &= {\left\lVert{ O^{a i} O_{a j} }\right\rVert}^\text{T} \\ &= \left( {\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \right)^\text{T} \\ &={\left\lVert{ O_{i j} }\right\rVert}^\text{T}{\left\lVert{ O^{j i} }\right\rVert}^\text{T}  \\ &={\left\lVert{ O_{j i} }\right\rVert}{\left\lVert{ O^{i j} }\right\rVert} \\ &={\left\lVert{ O_{a i} O^{a j} }\right\rVert} \\ \end{aligned}

If the transpose of a mixed index tensor just swapped the indexes we would have

\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} = {\left\lVert{ O_{a i} O^{a j} }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.76)

From this it does appear that all we have to do is switch the indexes and we will write

\begin{aligned}{\delta^j}_i = O_{a i} O^{a j} \end{aligned} \hspace{\stretch{1}}(6.77)

We can consider a more general operation

\begin{aligned}{\left\lVert{{A^i}_j}\right\rVert}^\text{T}&={\left\lVert{ A^{i m} g_{m j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}^\text{T}{\left\lVert{ A^{i j} }\right\rVert}^\text{T}  \\ &={\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ A^{j i} }\right\rVert} \\ &={\left\lVert{ g_{i m} A^{j m} }\right\rVert} \\ &={\left\lVert{ {A^{j}}_i }\right\rVert}\end{aligned}

So we see that we do just have to swap indexes.

Transposition of lower index tensor.

We saw above that we had

\begin{aligned}{\left\lVert{ {A^{i}}_j }\right\rVert}^\text{T} &= {\left\lVert{ {A_{j}}^i }\right\rVert} \\ {\left\lVert{ {A_{i}}^j }\right\rVert}^\text{T} &= {\left\lVert{ {A^{j}}_i }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.78)

which followed from a careful treatment of the transposition in terms of A^{i j}, for which we defined a transpose operation. We assumed as well that

\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T} = {\left\lVert{ A_{j i} }\right\rVert}.\end{aligned} \hspace{\stretch{1}}(6.80)

However, this does not have to be assumed, provided that g^{i j} = g_{i j}, and (AB)^\text{T} = B^\text{T} A^\text{T}. We see this by expanding this transposition in products of A^{i j} and \hat{G}

\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T}&= \left( {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \right)^\text{T} \\ &= \left( {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \right)^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert}^\text{T} {\left\lVert{ A^{i j}}\right\rVert}^\text{T} {\left\lVert{g^{i j}}\right\rVert}^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{j i}}\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \\ &= {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j}}\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \\ &= {\left\lVert{ A_{j i}}\right\rVert} \end{aligned}

It would be worthwhile to go through all of this index manipulation stuff and lay it out in a structured axiomatic form. What is the minimal set of assumptions, and how does all of this generalize to non-diagonal metric tensors (even in Euclidean spaces)?

Translating the index expression of identity from Lorentz products to matrix form

A verification that the matrix expression 3.38 matches the index expression 3.37, as claimed, is worthwhile. It would be easy to guess that something similar, like \hat{O}^\text{T} \hat{G} \hat{O} \hat{G}, is instead the matrix representation. That was in fact my first erroneous attempt to form the matrix equivalent, but it is the transpose of 3.38. Either way you get an identity, but the indexes didn’t match.

Since we have g^{i j} = g_{i j} which do we pick to do this verification? This appears to be dictated by requirements to match lower and upper indexes on the summed over index. This is probably clearest by example, so let’s expand the products on the LHS explicitly

\begin{aligned}{\left\lVert{ g^{i j} }\right\rVert} {\left\lVert{ {O^{i}}_j }\right\rVert} ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} &=\left( {\left\lVert{ {O^{i}}_j }\right\rVert} {\left\lVert{ g^{i j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert}  \\ &=\left( {\left\lVert{ {O^{i}}_k g^{k j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i m} {O^{m}}_j }\right\rVert}  \\ &={\left\lVert{ O^{i j} }\right\rVert} ^\text{T}{\left\lVert{ O_{i j} }\right\rVert}  \\ &={\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert}  \\ &={\left\lVert{ O^{k i} O_{k j} }\right\rVert}  \\ \end{aligned}

This matches the {\left\lVert{{\delta^i}_j}\right\rVert} that we have on the RHS, and all is well.
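A numerical spot check of all of this is also comforting. The numpy sketch below (an x-axis boost standing in for a general Lorentz transformation; all of the names are mine) verifies the index expression 3.37, the swapped form 6.77, and the matrix form of the identity in one shot:

import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])            # g_{ij} = g^{ij}
beta = 0.6
gm = 1.0 / np.sqrt(1.0 - beta ** 2)
L = np.array([[gm, -gm * beta, 0, 0],           # {O^i}_j for a boost along x
              [-gm * beta, gm, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

O_uu = L @ G                                    # O^{ij} = {O^i}_m g^{mj}
O_dd = G @ L                                    # O_{ij} = g_{im} {O^m}_j

print(np.allclose(O_uu.T @ O_dd, np.eye(4)))    # 3.37: O^{ai} O_{aj} = delta^i_j
print(np.allclose(O_dd.T @ O_uu, np.eye(4)))    # 6.77: O_{ai} O^{aj} = delta^j_i
print(np.allclose(G @ L.T @ G @ L, np.eye(4)))  # matrix form of the identity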

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


PHY450H1S. Relativistic Electrodynamics Lecture 10 (Taught by Prof. Erich Poppitz). Lorentz force equation energy term, and four vector formulation of the Lorentz force equation.

Posted by peeterjoot on February 8, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 3 material from the text [1].

Covering lecture notes pp. 74-83: gauge transformations in 3-vector language (74); energy of a relativistic particle in EM field (75); variational principle and equation of motion in 4-vector form (76-77); the field strength tensor (78-80); the fourth equation of motion (81)

What is the significance to the gauge invariance of the action?

We had argued that under a gauge transformation

\begin{aligned}A_i \rightarrow A_i + \frac{\partial {\chi}}{\partial {x^i}},\end{aligned} \hspace{\stretch{1}}(2.1)

the action for a particle changes by a boundary term

\begin{aligned}- \frac{e}{c} ( \chi(x_b) - \chi(x_a) ).\end{aligned} \hspace{\stretch{1}}(2.2)

Because S changes only by a boundary term, the variational problem is not affected. The extremal trajectories are then the same, hence the EOM are the same.

A less high brow demonstration.

With our four potential split into space and time components

\begin{aligned}A^i = (\phi, \mathbf{A}),\end{aligned} \hspace{\stretch{1}}(2.3)

the lower index representation of the same vector is

\begin{aligned}A_i = (\phi, -\mathbf{A}).\end{aligned} \hspace{\stretch{1}}(2.4)

Our gauge transformation is then

\begin{aligned}A_0 &\rightarrow A_0 + \frac{\partial {\chi}}{\partial {x^0}} \\ -\mathbf{A} &\rightarrow -\mathbf{A} + \frac{\partial {\chi}}{\partial {\mathbf{x}}}\end{aligned} \hspace{\stretch{1}}(2.5)

or

\begin{aligned}\phi &\rightarrow \phi + \frac{1}{{c}}\frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \chi.\end{aligned} \hspace{\stretch{1}}(2.7)

Now observe how the electric and magnetic fields are transformed

\begin{aligned}\mathbf{E} &= - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ &\rightarrow - \boldsymbol{\nabla} \left( \phi + \frac{1}{{c}}\frac{\partial {\chi}}{\partial {t}} \right) - \frac{1}{{c}}\frac{\partial {}}{\partial {t}} \left( \mathbf{A} - \boldsymbol{\nabla} \chi \right) \\ \end{aligned}

Sufficient continuity of \chi is assumed, allowing commutation of the space and time derivatives, and we are left with just \mathbf{E}.

For the magnetic field we have

\begin{aligned}\mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}  \\ &\rightarrow \boldsymbol{\nabla} \times (\mathbf{A}  - \boldsymbol{\nabla} \chi) \\ \end{aligned}

Again with continuity assumptions, \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \chi) = 0, and we are left with just \mathbf{B}. The electromagnetic fields (as opposed to potentials) do not change under gauge transformations.
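For the disbelieving (or to catch my own sign errors), here’s a sympy sketch of the same computation. It assumes nothing beyond smooth \phi, \mathbf{A} and \chi, and the function and variable names are mine. Both prints should give the zero vector:

import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
X = [x, y, z]
phi = sp.Function('phi')(t, x, y, z)
chi = sp.Function('chi')(t, x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(t, x, y, z) for i in range(3)])

def E_field(phi_, A_):
    return -sp.Matrix([phi_.diff(xi) for xi in X]) - A_.diff(t) / c

def B_field(A_):
    return sp.Matrix([A_[2].diff(y) - A_[1].diff(z),
                      A_[0].diff(z) - A_[2].diff(x),
                      A_[1].diff(x) - A_[0].diff(y)])

phi2 = phi + chi.diff(t) / c                       # gauge transformed potentials
A2 = A - sp.Matrix([chi.diff(xi) for xi in X])

print(sp.simplify(E_field(phi2, A2) - E_field(phi, A)))
print(sp.simplify(B_field(A2) - B_field(A)))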

We conclude that the \{A_i\} description is hugely redundant, but despite that, local \mathcal{L} and H can only be written in terms of the potentials A_i.

Energy term of the Lorentz force. Three vector approach.

With the Lagrangian for the particle given by

\begin{aligned}\mathcal{L} = - mc^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} + \frac{e}{c} \mathbf{A} \cdot \mathbf{v} - e \phi,\end{aligned} \hspace{\stretch{1}}(2.9)

we define the energy as

\begin{aligned}\mathcal{E} = \mathbf{v} \cdot \frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} - \mathcal{L}\end{aligned} \hspace{\stretch{1}}(2.10)

This is not necessarily a conserved quantity, but we define it as the energy anyways (we don’t really have a Hamiltonian when the fields are time dependent). Associated with this quantity is the general relationship

\begin{aligned}\frac{d{{\mathcal{E}}}}{dt} = -\frac{\partial {\mathcal{L}}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(2.11)

and when the Lagrangian is invariant with respect to time translation the energy \mathcal{E} will be a conserved quantity (and also the Hamiltonian).

Our canonical momentum is

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} = \gamma m \mathbf{v} + \frac{e}{c} \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.12)

So our energy is

\begin{aligned}\mathcal{E} = \gamma m \mathbf{v}^2 + \frac{e}{c} \mathbf{A} \cdot \mathbf{v} - \left( - mc^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} + \frac{e}{c} \mathbf{A} \cdot \mathbf{v} - e \phi \right).\end{aligned}

Or

\begin{aligned}\mathcal{E} = \underbrace{\frac{m c^2}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}}}_{({*})} + e \phi.\end{aligned} \hspace{\stretch{1}}(2.13)

The contribution of ({*}) to the energy \mathcal{E} comes from the free (kinetic) particle portion of the Lagrangian \mathcal{L} = -m c^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}}, and we identify the remainder as a potential energy

\begin{aligned}\mathcal{E} = \frac{m c^2}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} + \underbrace{e \phi}_{\text{"potential"}}.\end{aligned} \hspace{\stretch{1}}(2.14)

For the kinetic portion we can also show that we have

\begin{aligned}\frac{d}{dt} \mathcal{E}_{\text{kinetic}} =\frac{d}{dt} \frac{m c^2}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} = e \mathbf{E} \cdot \mathbf{v}.\end{aligned} \hspace{\stretch{1}}(2.15)

To show this observe that we have

\begin{aligned}\frac{d}{dt} \mathcal{E}_{\text{kinetic}} &= m c^2 \frac{d\gamma}{dt} \\ &= m c^2 \frac{d}{dt} \frac{1}{{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}}} \\ &= m c^2 \frac{\frac{\mathbf{v}}{c^2} \cdot \frac{d\mathbf{v}}{dt}}{\left(1 - \frac{\mathbf{v}^2}{c^2}\right)^{3/2}} \\ &= \frac{m \gamma \mathbf{v} \cdot \frac{d\mathbf{v}}{dt}}{1 - \frac{\mathbf{v}^2}{c^2}}\end{aligned}

We also have

\begin{aligned}\mathbf{v} \cdot \frac{d{\mathbf{p}}}{dt} &= \mathbf{v} \cdot \frac{d{{}}}{dt} \frac{m \mathbf{v}}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} \\ &= m\mathbf{v}^2 \frac{d{{\gamma}}}{dt} + m \gamma \mathbf{v} \cdot \frac{d{\mathbf{v}}}{dt} \\ &= m\mathbf{v}^2 \frac{d{{\gamma}}}{dt} + m c^2 \frac{d{{\gamma}}}{dt} \left( 1 - \frac{\mathbf{v}^2}{c^2} \right) \\ &= m c^2 \frac{d{{\gamma}}}{dt}.\end{aligned}

Utilizing the Lorentz force equation, we have

\begin{aligned}\mathbf{v} \cdot \frac{d{\mathbf{p}}}{dt} = e \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right) \cdot \mathbf{v} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.16)

and are able to assemble the above, and find that we have

\begin{aligned}\frac{d{{(m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v} \end{aligned} \hspace{\stretch{1}}(2.17)
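The kinematic identity buried in this chain, d(m c^2 \gamma)/dt = \mathbf{v} \cdot d\mathbf{p}/dt, is easy to fumble, so here’s a one dimensional sympy sanity check (mine, not from the lecture); combining that identity with the Lorentz force equation is what gives 2.17. The print should produce 0:

import sympy as sp

t = sp.Symbol('t')
m, c = sp.symbols('m c', positive=True)
v = sp.Function('v', real=True)(t)      # one dimensional motion as a sanity check

gamma = 1 / sp.sqrt(1 - v ** 2 / c ** 2)
E_kin = m * c ** 2 * gamma
p = m * gamma * v

print(sp.simplify(E_kin.diff(t) - v * p.diff(t)))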

Four vector Lorentz force

Using ds = \sqrt{ dx^i dx_i } our action can be rewritten

\begin{aligned}S &= \int \left( -m c ds - \frac{e}{c} u^i A_i ds \right) \\ &= \int \left( -m c ds - \frac{e}{c} dx^i A_i \right) \\ &= \int \left( -m c \sqrt{ dx^i dx_i} - \frac{e}{c} dx^i A_i \right) \\ \end{aligned}

Here x^i(\tau) is a worldline with fixed endpoints x^i(0) = a^i, x^i(1) = b^i.

We want \delta S = S[ x + \delta x ] - S[ x ] = 0 (to linear order in \delta x)

The variation of our proper length is

\begin{aligned}\delta ds &=\delta \sqrt{ dx^i dx_i } \\ &= \frac{1}{{ 2 \sqrt{ dx^i dx_i }}} \delta (dx^j dx_j)\end{aligned}

Observe that for the numerator we have

\begin{aligned}\delta (dx^j dx_j) &= \delta ( dx^j g_{jk} dx^k ) \\ &= \delta ( dx^j ) g_{jk} dx^k + dx^j g_{jk} \delta ( dx^k ) \\ &= \delta ( dx^j ) g_{jk} dx^k + dx^k g_{kj} \delta ( dx^j ) \\ &= 2 \delta ( dx^j ) g_{jk} dx^k \\ &= 2 \delta ( dx^j ) dx_j \end{aligned}

TIP: If this goes too quickly, or there is any disbelief, write these all out explicitly as dx^j dx_j = dx^0 dx_0 + dx^1 dx_1 + dx^2 dx_2 + dx^3 dx_3 and compute it that way.

For the four vector potential our variation is

\begin{aligned}\delta A_i = A_i(x + \delta x) - A_i = \frac{\partial {A_i}}{\partial {x^j}} \delta x^j = \partial_j A_i \delta x^j\end{aligned} \hspace{\stretch{1}}(3.18)

(i.e. By chain rule)

Completing the proper length variations above we have

\begin{aligned}\delta \sqrt{ dx^i dx_i } &= \frac{1}{{ \sqrt{ dx^i dx_i }}} \delta (dx^j) dx_j \\ &= \delta (dx^j) \frac{d{{x_j}}}{ds}  \\ &= \delta (dx^j) u_j \\ &= d \delta x^j u_j\end{aligned}

We are now ready to assemble results and do the integration by parts

\begin{aligned}\delta S &= \int \left( -m c d (\delta x^j) u_j- \frac{e}{c} d (\delta x^i) A_i - \frac{e}{c} dx^i \partial_j A_i \delta x^j\right) \\ &= {\left. \left( -m c (\delta x^j) u_j - \frac{e}{c} (\delta x^i) A_i \right)\right\vert}_a^b+\int \left( m c \delta x^j d u_j+ \frac{e}{c} (\delta x^i) d A_i - \frac{e}{c} dx^i \partial_j A_i \delta x^j\right) \\ \end{aligned}

Our variation at the endpoints is zero {\left.{{\delta x^i}}\right\vert}_{{a}} = {\left.{{\delta x^i}}\right\vert}_{{b}} = 0, killing the non-integral terms

\begin{aligned}\delta S &= \int \delta x^j\left( m c d u_j+ \frac{e}{c} d A_j - \frac{e}{c} dx^i \partial_j A_i \right).\end{aligned}

Observe that our differential can also be expanded by chain rule

\begin{aligned}d A_j = \frac{\partial {A_j}}{\partial {x^i}} dx^i = \partial_i A_j dx^i,\end{aligned} \hspace{\stretch{1}}(3.19)

which simplifies the variation further

\begin{aligned}\delta S &= \int \delta x^j\left( m c d u_j+ \frac{e}{c} dx^i ( \partial_i A_j - \partial_j A_i )\right) \\ &= \int \delta x^j ds\left( m c \frac{d u_j}{ds}+ \frac{e}{c} u^i ( \partial_i A_j - \partial_j A_i )\right) \\ \end{aligned}

Since this is true for arbitrary variations \delta x^j, the integrand must vanish everywhere along the trajectory. The antisymmetric portion, a rank 2 four-tensor, is called the electromagnetic field strength tensor, and is written

\begin{aligned}\boxed{F_{ij} = \partial_i A_j - \partial_j A_i.}\end{aligned} \hspace{\stretch{1}}(3.20)

In matrix form this is

\begin{aligned}{\left\lVert{ F_{ij} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.21)
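As a cross check of this matrix against F_{ij} = \partial_i A_j - \partial_j A_i, here’s a sympy sketch. It assumes the conventions of these notes, x^0 = ct, A_i = (\phi, -\mathbf{A}), \mathbf{E} = -\boldsymbol{\nabla} \phi - (1/c) {\partial \mathbf{A}}/{\partial t}, and \mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}; the code names are mine. The print should give the zero matrix:

import sympy as sp

c, t, x, y, z = sp.symbols('c t x y z')
phi = sp.Function('phi')(t, x, y, z)
Ax, Ay, Az = [sp.Function(f'A{s}')(t, x, y, z) for s in 'xyz']
A_lower = [phi, -Ax, -Ay, -Az]                    # A_i = (phi, -A)

def d(i, f):
    # partial_i = d/dx^i, with x^0 = c t
    return f.diff(t) / c if i == 0 else f.diff([x, y, z][i - 1])

F = sp.Matrix(4, 4, lambda i, j: d(i, A_lower[j]) - d(j, A_lower[i]))

Ex, Ey, Ez = [-phi.diff(s) - a.diff(t) / c for s, a in zip([x, y, z], [Ax, Ay, Az])]
Bx = Az.diff(y) - Ay.diff(z)
By = Ax.diff(z) - Az.diff(x)
Bz = Ay.diff(x) - Ax.diff(y)

expected = sp.Matrix([[0,   Ex,  Ey,  Ez],
                      [-Ex,  0, -Bz,  By],
                      [-Ey,  Bz,  0, -Bx],
                      [-Ez, -By,  Bx,  0]])
print(sp.simplify(F - expected))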

In terms of the field strength tensor our Lorentz force equation takes the form

\begin{aligned}\boxed{\frac{d{{(m c u_i)}}}{ds} = \frac{e}{c} F_{ij} u^j.}\end{aligned} \hspace{\stretch{1}}(3.22)

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


Energy term of the Lorentz force equation.

Posted by peeterjoot on February 8, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation.

In class this week, the Lorentz force was derived from an action (the simplest Lorentz invariant, gauge invariant, action that could be constructed)

\begin{aligned}S = - m c \int ds - \frac{e}{c} \int ds A^i u_i.\end{aligned} \hspace{\stretch{1}}(1.1)

We end up with the familiar equation, with the exception that the momentum includes the relativistically required gamma factor

\begin{aligned}\frac{d (\gamma m \mathbf{v})}{dt} = e \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(1.2)

I asked what the energy term of this equation would be and was answered that we would get to it, and it could be obtained by a four vector minimization of the action which produces the Lorentz force equation of the following form

\begin{aligned}\frac{du^i}{d\tau} \propto e F^{ij} u_j.\end{aligned} \hspace{\stretch{1}}(1.3)

Let’s see if we can work this out without the four-vector approach, using the action expressed with an explicit space time split, then also work it out in the four vector form and compare as a consistency check.

Three vector approach.

The Lorentz force derivation.

For completeness, let’s work out the Lorentz force equation from the action 1.1. Parameterizing by time we have

\begin{aligned}S &= -m c^2 \int dt \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} - e \int dt \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} \gamma \left( 1, \frac{1}{{c}} \mathbf{v}\right) \cdot (\phi, \mathbf{A}) \\ &= -m c^2 \int dt \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} - e \int dt \left( \phi - \frac{1}{{c}} \mathbf{A} \cdot \mathbf{v} \right)\end{aligned}

Our Lagrangian is therefore

\begin{aligned}\mathcal{L}(\mathbf{x}, \mathbf{v}, t) = -m c^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} - e \phi(\mathbf{x}, t) + \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.4)

We can calculate our conjugate momentum easily enough

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} = \gamma m \mathbf{v} + \frac{e}{c} \mathbf{A},\end{aligned} \hspace{\stretch{1}}(2.5)

and for the gradient portion of the Euler-Lagrange equations we have

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\mathbf{x}}} = -e \boldsymbol{\nabla} \phi + e \boldsymbol{\nabla} \left( \frac{\mathbf{v}}{c} \cdot \mathbf{A} \right).\end{aligned} \hspace{\stretch{1}}(2.6)

Utilizing the convective derivative (i.e. chain rule in fancy clothes)

\begin{aligned}\frac{d}{dt} = \mathbf{v} \cdot \boldsymbol{\nabla} + \frac{\partial {}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(2.7)

This gives us

\begin{aligned}-e \boldsymbol{\nabla} \phi + e \boldsymbol{\nabla} \left( \frac{\mathbf{v}}{c} \cdot \mathbf{A} \right) = \frac{d(\gamma m \mathbf{v})}{dt} + \frac{e}{c} (\mathbf{v} \cdot \boldsymbol{\nabla}) \mathbf{A}+ \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(2.8)

and a final bit of rearranging gives us

\begin{aligned}\frac{d(\gamma m \mathbf{v})}{dt} =e \left( -\boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}}\right)+ \frac{e}{c} \left( \boldsymbol{\nabla} \left( \mathbf{v} \cdot \mathbf{A} \right) - (\mathbf{v} \cdot \boldsymbol{\nabla}) \mathbf{A}\right).\end{aligned} \hspace{\stretch{1}}(2.9)

The first set of derivatives we identify with the electric field \mathbf{E}. For the second, utilizing the vector triple product identity [1]

\begin{aligned}\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b} (\mathbf{a} \cdot \mathbf{c}) - (\mathbf{a} \cdot \mathbf{b}) \mathbf{c},\end{aligned} \hspace{\stretch{1}}(2.10)

we recognize as related to the magnetic field \mathbf{v} \times \mathbf{B} = \mathbf{v} \times (\boldsymbol{\nabla} \times \mathbf{A}).
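The identity being used, \boldsymbol{\nabla} (\mathbf{v} \cdot \mathbf{A}) - (\mathbf{v} \cdot \boldsymbol{\nabla}) \mathbf{A} = \mathbf{v} \times (\boldsymbol{\nabla} \times \mathbf{A}), holds when \mathbf{v} is held fixed by the gradient, which is exactly the situation in the {\partial \mathcal{L}}/{\partial \mathbf{x}} evaluation above. Here’s a sympy sketch of that check (names are mine, and only a smooth \mathbf{A} is assumed). The print should give the zero vector:

import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
v = sp.Matrix(sp.symbols('v1 v2 v3'))    # constant with respect to the gradient
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in range(3)])

grad_vA = sp.Matrix([sum(v[j] * A[j].diff(X[i]) for j in range(3)) for i in range(3)])
v_grad_A = sp.Matrix([sum(v[j] * A[i].diff(X[j]) for j in range(3)) for i in range(3)])
curlA = sp.Matrix([A[2].diff(y) - A[1].diff(z),
                   A[0].diff(z) - A[2].diff(x),
                   A[1].diff(x) - A[0].diff(y)])

print(sp.simplify(grad_vA - v_grad_A - v.cross(curlA)))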

The power (energy) term.

When we start with an action explicitly constructed with Lorentz invariance as a requirement, it is somewhat odd to end up with a result that has only the spatial vector portion of what should logically be a four vector result. We have an equation for the particle momentum, but not one for the energy. In tutorial Simon provided the hint of how to approach this, and asked if we had calculated the Hamiltonian for the Lorentz force. We had only calculated the Hamiltonian for the free particle.

Considering this, we can only actually calculate a Hamiltonian for the case where \phi(\mathbf{x}, t) = \phi(\mathbf{x}) and \mathbf{A}(\mathbf{x}, t) = \mathbf{A}(\mathbf{x}), because when the potentials have any sort of time dependence we do not have a Lagrangian that is invariant under time translation. Returning to the derivation of the Hamiltonian conservation equation, we see that we must modify the argument slightly when there is a time dependence and get instead

\begin{aligned}\frac{d}{dt} \left( \frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} \cdot \mathbf{v} - \mathcal{L} \right) + \frac{\partial {\mathcal{L}}}{\partial {t}} = 0.\end{aligned} \hspace{\stretch{1}}(2.11)

Only when there is no time dependence in the Lagrangian, do we have our conserved quantity, what we label as energy, or Hamiltonian.

From 2.5, we have

\begin{aligned}0 &= \frac{d}{dt} \left( \left( \gamma m \mathbf{v} + \frac{e}{c} \mathbf{A} \right) \cdot \mathbf{v} +m c^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} + e \phi - \frac{e}{c} \mathbf{A} \cdot \mathbf{v}\right) - e \frac{\partial {\phi}}{\partial {t}} + \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {t}} \cdot \mathbf{v} \\ \end{aligned}

Our \mathbf{A} \cdot \mathbf{v} terms cancel, and we can combine the \gamma and \gamma^{-1} terms, then apply the convective derivative again

\begin{aligned}\frac{d}{dt} \left( \gamma m c^2 \right) &= - e \left( \mathbf{v} \cdot \boldsymbol{\nabla} + \frac{\partial {}}{\partial {t}} \right) \phi + e \frac{\partial {\phi}}{\partial {t}} - \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {t}} \cdot \mathbf{v} \\ &= - e \mathbf{v} \cdot \boldsymbol{\nabla} \phi - \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {t}} \cdot \mathbf{v} \\ &= + e \mathbf{v} \cdot \left( - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \right).\end{aligned}

This is just

\begin{aligned}\frac{d}{dt} \left( \gamma m c^2 \right) = e \mathbf{v} \cdot \mathbf{E},\end{aligned} \hspace{\stretch{1}}(2.12)

and we find the rate of change of energy term of our four momentum equation

\begin{aligned}\frac{d}{dt}\left( \frac{E}{c}, \mathbf{p}\right) = e \left( \frac{\mathbf{v}}{c} \cdot \mathbf{E}, \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(2.13)

Specified explicitly, this is

\begin{aligned}\frac{d}{dt}\left( \gamma m \left( c, \mathbf{v} \right) \right)= e \left( \frac{\mathbf{v}}{c} \cdot \mathbf{E}, \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(2.14)

While this was the result I was looking for, once written it now stands out as incomplete relativistically. We have an equation that specifies the time derivative of a four vector. What about the spatial derivatives? We really ought to have a rank two tensor result, and not a four vector result relating the fields and the energy and momentum of the particle. The Lorentz force equation, even when expanded to four vector form, does not seem complete relativistically.

With u^i = dx^i/ds, we can rewrite 2.14 as

\begin{aligned}\partial_0 (\gamma m u^i) = e \left( \frac{\mathbf{v}}{c} \cdot \mathbf{E}, \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(2.15)

If we were to vary the action with respect to a spatial coordinate instead of time, we should end up with a similar equation of the form \partial_i (\gamma m u^i) = ?. Having been pointed at the explicitly invariant result, I wonder if those equations are independent. Let’s defer exploring this, until at least after calculating the result using a four vector form of the action.

Four vector approach.

The Lorentz force derivation from invariant action.

We can rewrite our action, parameterizing with proper time. This is

\begin{aligned}S = -m c \int d\tau \sqrt{ \frac{dx^i}{d\tau} \frac{dx_i}{d\tau} }- \frac{e}{c} \int d\tau A_i \frac{dx^i}{d\tau}\end{aligned} \hspace{\stretch{1}}(3.16)

Writing \dot{x}^i = dx^i/d\tau, our Lagrangian is then

\begin{aligned}\mathcal{L}(x^i, \dot{x}^i, \tau)= -m c \sqrt{ \dot{x}^i \dot{x}_i }- \frac{e}{c} A_i \dot{x}^i\end{aligned} \hspace{\stretch{1}}(3.17)

The Euler-Lagrange equations take the form

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {x^i}} = \frac{d}{d\tau} \frac{\partial {\mathcal{L}}}{\partial {\dot{x}^i}} .\end{aligned} \hspace{\stretch{1}}(3.18)

Our gradient and conjugate momentum, simplified using \sqrt{ \dot{x}^j \dot{x}_j } = c for the proper time parameterization (imposed only after the derivatives are taken), are

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {x^i}} &= - \frac{e}{c} \frac{\partial {A_j}}{\partial {x^i}} \dot{x}^j  \\ \frac{\partial {\mathcal{L}}}{\partial {\dot{x}^i}}  &= -m \dot{x}_i - \frac{e}{c} A_i.\end{aligned} \hspace{\stretch{1}}(3.19)

With our convective derivative taking the form

\begin{aligned}\frac{d}{d\tau} = \dot{x}^i \frac{\partial {}}{\partial {x^i}},\end{aligned} \hspace{\stretch{1}}(3.21)

we have

\begin{aligned}m \frac{d^2 x_i}{d\tau^2} &= \frac{e}{c} \frac{\partial {A_j}}{\partial {x^i}} \dot{x}^j- \frac{e}{c} \dot{x}^j \frac{\partial {A_i}}{\partial {x^j}} \\ &=\frac{e}{c} \dot{x}^j \left( \frac{\partial {A_j}}{\partial {x^i}} -\frac{\partial {A_i}}{\partial {x^j}} \right) \\ &=\frac{e}{c} \dot{x}^j \left( \partial_i A_j - \partial_j A_i\right) \\ &=\frac{e}{c} \dot{x}^j F_{ij}\end{aligned}

Our Prof wrote this with indexes raised and lowered respectively

\begin{aligned}m \frac{d^2 x^i}{d\tau^2} = \frac{e}{c} F^{ij} \dot{x}_j .\end{aligned} \hspace{\stretch{1}}(3.22)

Following the text [2] he also writes u^i = dx^i/ds = (1/c) dx^i/d\tau, and in that form we have

\begin{aligned}\frac{d (m c u^i)}{ds} = \frac{e}{c} F^{ij} u_j.\end{aligned} \hspace{\stretch{1}}(3.23)

Expressed explicitly in terms of the three vector fields.

The power term.

From 3.23, lets extract the i=0 term, relating the rate of change of energy to the field and particle velocity. With

\begin{aligned}\frac{d{{}}}{d\tau} = \frac{dt}{d\tau} \frac{d}{dt} = \gamma \frac{d{{}}}{dt},\end{aligned} \hspace{\stretch{1}}(3.24)

we have

\begin{aligned}\frac{d{{(m \gamma \frac{dx^i}{dt})}}}{dt} = \frac{e}{c} F^{ij} \frac{d{{x_j}}}{dt}.\end{aligned} \hspace{\stretch{1}}(3.25)

For i=0 we have

\begin{aligned}F^{0j} \frac{d{{x_j}}}{dt} = -F^{0\alpha} \frac{d{{x^\alpha}}}{dt} \end{aligned} \hspace{\stretch{1}}(3.26)

That component of the field is

\begin{aligned}F^{\alpha 0} &=\partial^\alpha A^0 - \partial^0 A^\alpha \\ &=-\frac{\partial {\phi}}{\partial {x^\alpha}} - \frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} \\ &= \left( -\boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \right)^\alpha.\end{aligned}

This verifies the result obtained with considerably more difficulty, using the Hamiltonian like conservation relation obtained for a time translation of a time dependent Lagrangian

\begin{aligned}\frac{d{{(m \gamma c^2 )}}}{dt} = e \mathbf{E} \cdot \mathbf{v}.\end{aligned} \hspace{\stretch{1}}(3.27)

The Lorentz force terms.

Let’s also verify the signs for the i > 0 terms. For those we have

\begin{aligned}\frac{d{{(m \gamma \frac{dx^\alpha}{dt})}}}{dt} &= \frac{e}{c} F^{\alpha j} \frac{d{{x_j}}}{dt} \\ &= \frac{e}{c} F^{\alpha 0} \frac{d{{x_0}}}{dt}+\frac{e}{c} F^{\alpha \beta} \frac{d{{x_\beta}}}{dt} \\ &= e E^\alpha- \sum_{\beta} \frac{e}{c} \left( \partial^\alpha A^\beta - \partial^\beta A^\alpha\right)v^\beta \\ \end{aligned}

Since we have only spatial indexes left, let’s be sloppy and imply summation over all repeated indexes, even if unmatched upper and lower. This leaves us with

\begin{aligned}-\left( \partial^\alpha A^\beta - \partial^\beta A^\alpha \right) v^\beta &=\left( \partial_\alpha A^\beta - \partial_\beta A^\alpha \right) v^\beta \\ &=\epsilon_{\alpha \beta \gamma} B^\gamma v^\beta\end{aligned}

With the v^\beta contraction we have

\begin{aligned}\epsilon_{\alpha \beta \gamma} B^\gamma v^\beta = (\mathbf{v} \times \mathbf{B})^\alpha,\end{aligned} \hspace{\stretch{1}}(3.28)

leaving our first result obtained by the time parameterization of the Lagrangian

\begin{aligned}\frac{d{{(m \gamma \mathbf{v})}}}{dt} = e \left(\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(3.29)
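The Levi-Civita contraction used in 3.28 is easy to spot check numerically too. Here’s a small numpy sketch (mine), comparing the explicit \epsilon_{\alpha \beta \gamma} B^\gamma v^\beta sum against the built in cross product; the print should give True:

import numpy as np

eps = np.zeros((3, 3, 3))                 # Levi-Civita as a 3x3x3 array
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

v = np.random.rand(3)
B = np.random.rand(3)

contraction = np.einsum('abg,g,b->a', eps, B, v)
print(np.allclose(contraction, np.cross(v, B)))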

The result 3.29 now has a nice symmetrical form. It’s slightly disappointing not to have a rank two tensor on the LHS, as we do for the fields, where the symmetric stress tensor collects the Poynting vector, the energy density, and the related terms that tie the field energy and momentum to \mathbf{E} \cdot \mathbf{J} and the charge density equivalents of the Lorentz force equation. Is there such a symmetric relationship for particles too?

References

[1] Wikipedia. Triple product — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 7-February-2011]. http://en.wikipedia.org/w/index.php?title=Triple_product&oldid=407455209.

[2] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


Classical Electrodynamic gauge interaction.

Posted by peeterjoot on October 22, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

In [1] chapter 6, we have a statement that in classical mechanics the electromagnetic interaction is due to a transformation of the following form

\begin{aligned}\mathbf{p} &\rightarrow \mathbf{p} - \frac{e}{c} \mathbf{A} \\ E &\rightarrow E - e \phi\end{aligned} \hspace{\stretch{1}}(1.1)

Let’s verify that this does produce the classical interaction law. Putting a more familiar label on this, we should see that we obtain the Lorentz force law from a transformation of the Hamiltonian.

Hamiltonian equations.

Recall that the Hamiltonian was defined in terms of conjugate momentum components p_k as

\begin{aligned}H(x_k, p_k) = \dot{x}_k p_k - \mathcal{L}(x_k, \dot{x}_k),\end{aligned} \hspace{\stretch{1}}(2.3)

we can take x_k partials to obtain the first of the Hamiltonian system of equations for the motion

\begin{aligned}\frac{\partial {H}}{\partial {x_k}} &= - \frac{\partial {\mathcal{L}}}{\partial {x_k}}  \\ &= - \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{x}_k}} \end{aligned}

With p_k \equiv {\partial {\mathcal{L}}}/{\partial {\dot{x}_k}}, and taking p_k partials too, we have the system of equations

\begin{subequations}

\begin{aligned} \frac{\partial {H}}{\partial {x_k}} &= - \frac{d p_k}{dt}\end{aligned} \hspace{\stretch{1}}(2.4a)

\begin{aligned} \frac{\partial {H}}{\partial {p_k}} &= \dot{x}_k\end{aligned} \hspace{\stretch{1}}(2.4b)

\end{subequations}

Classical interaction

Starting with the free particle Hamiltonian

\begin{aligned}H = \frac{\mathbf{p}^2}{2m},\end{aligned} \hspace{\stretch{1}}(3.5)

we make the transformation required to both the energy and momentum terms

\begin{aligned}H - e\phi = \frac{\left(\mathbf{p} - \frac{e}{c} \mathbf{A}\right)^2 }{2m} = \frac{1}{{2m}} \mathbf{p}^2 - \frac{e}{m c} \mathbf{p} \cdot \mathbf{A} + \frac{1}{{2m}} \left(\frac{e}{c}\right)^2 \mathbf{A}^2 \end{aligned} \hspace{\stretch{1}}(3.6)

From 2.4b we find

\begin{aligned}\frac{d x_k}{dt} = \frac{\partial {H}}{\partial {p_k}} = \frac{1}{{m}} \left( p_k - \frac{e}{c} A_k \right),\end{aligned} \hspace{\stretch{1}}(3.7)

or

\begin{aligned}p_k = m \frac{d x_k}{dt} + \frac{e}{c} A_k.\end{aligned} \hspace{\stretch{1}}(3.8)

Taking derivatives and employing 2.4a we have

\begin{aligned}\frac{d p_k}{dt} &= m \frac{d^2 x_k}{dt^2} + \frac{e}{c} \frac{d A_k}{dt}  \\ &= -\frac{\partial {H}}{\partial {x_k}} \\ &=\frac{1}{{m}} \frac{e}{c} p_n \frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}- \frac{1}{{m}} \left(\frac{e}{c}\right)^2 A_n \frac{\partial {A_n}}{\partial {x_k}} \\ &=\frac{1}{{m}} \frac{e}{c} \left(m \frac{d x_n}{dt} + \frac{e}{c} A_n\right)\frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}- \frac{1}{{m}} \left(\frac{e}{c}\right)^2 A_n \frac{\partial {A_n}}{\partial {x_k}} \\ &=\frac{e}{c} \frac{d x_n}{dt}\frac{\partial {A_n}}{\partial {x_k}} - e \frac{\partial {\phi}}{\partial {x_k}}\end{aligned}

Rearranging and utilizing the convective derivative expansion d/dt = (d x_a/dt) {\partial {}}/{\partial {x_a}} (ie: chain rule), we have

\begin{aligned}m \frac{d^2 x_k}{dt^2} &=\frac{e}{c} \frac{d x_n}{dt}\left( \frac{\partial {A_n}}{\partial {x_k}}- \frac{\partial {A_k}}{\partial {x_n}} \right) - e \frac{\partial {\phi}}{\partial {x_k}}\end{aligned} \hspace{\stretch{1}}(3.9)

We guess and expect that the first term of 3.9 is e (\mathbf{v}/c \times \mathbf{B})_k. Let’s verify this

\begin{aligned}(\mathbf{v} \times \mathbf{B})_k&= \dot{x}_m B_d \epsilon_{k m d} \\ &= \dot{x}_m ( \epsilon_{d a b} \partial_a A_b ) \epsilon_{k m d} \\ &= \dot{x}_m \partial_a A_b \epsilon_{d a b} \epsilon_{d k m}\end{aligned}

Since \epsilon_{d a b} \epsilon_{d k m} = \delta_{a k} \delta_{b m} - \delta_{a m} \delta_{b k} we have

\begin{aligned}(\mathbf{v} \times \mathbf{B})_k&= \dot{x}_m \partial_a A_b \epsilon_{d a b} \epsilon_{d k m} \\ &=\dot{x}_m \partial_a A_b \delta_{a k} \delta_{b m} -\dot{x}_m \partial_a A_b \delta_{a m} \delta_{b k} \\ &= \dot{x}_m ( \partial_k A_m - \partial_m A_k )\end{aligned}

Except for a difference in dummy summation variables, this matches what we had in 3.9. Thus we are able to put that into the traditional Lorentz force vector form

\begin{aligned}m \frac{d^2 \mathbf{x}}{dt^2} &= e \frac{\mathbf{v}}{c} \times \mathbf{B} + e \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(3.10)
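The whole chain from the transformed Hamiltonian to 3.10 can also be verified symbolically. Here’s a sympy sketch (all of the names are mine) that pushes the minimally coupled Hamiltonian through the Hamiltonian equations and compares against the Lorentz force, assuming static potentials so that d\mathbf{A}/dt = (\mathbf{v} \cdot \boldsymbol{\nabla}) \mathbf{A}. The print should give the zero vector:

import sympy as sp

x, y, z = sp.symbols('x y z')
m, e, c = sp.symbols('m e c', positive=True)
p = sp.Matrix(sp.symbols('p1 p2 p3'))
X = [x, y, z]

phi = sp.Function('phi')(x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in range(3)])

H = (p - e * A / c).dot(p - e * A / c) / (2 * m) + e * phi

v = sp.Matrix([H.diff(p[k]) for k in range(3)])           # dx_k/dt = dH/dp_k
pdot = sp.Matrix([-H.diff(X[k]) for k in range(3)])       # dp_k/dt = -dH/dx_k

# m d^2x/dt^2 = dp/dt - (e/c) dA/dt, with dA/dt = (v . grad) A for static A
dA_dt = sp.Matrix([sum(v[j] * A[k].diff(X[j]) for j in range(3)) for k in range(3)])
m_accel = pdot - e * dA_dt / c

E = -sp.Matrix([phi.diff(X[k]) for k in range(3)])
B = sp.Matrix([A[2].diff(y) - A[1].diff(z),
               A[0].diff(z) - A[2].diff(x),
               A[1].diff(x) - A[0].diff(y)])

print(sp.simplify(m_accel - e * (E + v.cross(B) / c)))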

It’s good to see that we get the classical interaction from this transformation before moving on to the trickier seeming QM interaction.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


Non-covariant Lagrangian and Hamiltonian for Lorentz force.

Posted by peeterjoot on November 28, 2009

[Click here for a PDF containing this post with nicer formatting]

In [1], the Lagrangian for a charged particle is given as (12.9) as

\begin{aligned}\mathcal{L} = -m c^2 \sqrt{1 - \mathbf{u}^2/c^2} + \frac{e}{c} \mathbf{u} \cdot \mathbf{A} - e \Phi.\end{aligned} \quad\quad\quad(1)

Let’s work in detail from this to the Lorentz force law and the Hamiltonian and from the Hamiltonian again to the Lorentz force law using the Hamiltonian equations. We should get the same results in each case, and have enough details in doing so to render the text a bit more comprehensible.

Canonical momenta

We need the conjugate momenta for both the Euler-Lagrange evaluation and the Hamiltonian, so lets get that first. The components of this are

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\dot{x}_i}} &= - \frac{1}{{2}} m c^2 \gamma (-2/c^2) \dot{x}_i + \frac{e}{c} A_i \\ &= m \gamma \dot{x}_i + \frac{e}{c} A_i.\end{aligned}

In vector form the canonical momenta are then

\begin{aligned}\mathbf{P} &= \gamma m \mathbf{u} + \frac{e}{c} \mathbf{A}.\end{aligned} \quad\quad\quad(2)

Euler-Lagrange evaluation.

Completing the Euler-Lagrange equation evaluation is the calculation of

\begin{aligned}\frac{d\mathbf{P}}{dt} = \boldsymbol{\nabla} \mathcal{L}.\end{aligned} \quad\quad\quad(3)

On the left hand side we have

\begin{aligned}\frac{d\mathbf{P}}{dt} = \frac{d(\gamma m \mathbf{u})}{dt} + \frac{e}{c} \frac{d\mathbf{A} }{dt},\end{aligned} \quad\quad\quad(4)

and on the right, with implied summation over repeated indexes, we have

\begin{aligned}\boldsymbol{\nabla} \mathcal{L} = \frac{e}{c} \mathbf{e}_k (\mathbf{u} \cdot \partial_k \mathbf{A}) - e \boldsymbol{\nabla} \Phi.\end{aligned} \quad\quad\quad(5)

Putting things together we have

\begin{aligned}\frac{d(\gamma m \mathbf{u})}{dt} &= -e \left(\boldsymbol{\nabla} \Phi + \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}}+ \frac{1}{c} \left(\frac{\partial {\mathbf{A}}}{\partial {x_a}} \frac{\partial {x_a}}{\partial {t}} - \mathbf{e}_k (\mathbf{u} \cdot \partial_k \mathbf{A}) \right)\right) \\ &= -e \left(\boldsymbol{\nabla} \Phi + \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}}+ \frac{1}{c} \mathbf{e}_b u_a\left(\frac{\partial {A_b}}{\partial {x_a}} -\frac{\partial {A_a}}{\partial {x_b}}\right)\right).\end{aligned}

With

\begin{aligned}\mathbf{E} = -\boldsymbol{\nabla} \Phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}},\end{aligned} \quad\quad\quad(6)

the first two terms are recognizable as the electric field. To put some structure in the remainder start by writing

\begin{aligned}\frac{\partial {A_b}}{\partial {x_a}} - \frac{\partial {A_a}}{\partial {x_b}} = \epsilon^{fab} {(\boldsymbol{\nabla} \times \mathbf{A})}_f.\end{aligned} \quad\quad\quad(7)

The remaining term, with \mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A} is now

\begin{aligned}- \frac{e}{c} \mathbf{e}_b u_a \epsilon^{gab} B_g&=\frac{e}{c} \mathbf{e}_a u_b \epsilon^{abg} B_g \\ &= \frac{e}{c} \mathbf{u} \times \mathbf{B}.\end{aligned}

We are left with the momentum portion of the Lorentz force law as expected

\begin{aligned}\frac{d(\gamma m \mathbf{u})}{dt} = e \left( \mathbf{E} + \frac{1}{c} \mathbf{u} \times \mathbf{B} \right).\end{aligned} \quad\quad\quad(8)

Observe that with a small velocity Taylor expansion of the Lagrangian we obtain the approximation

\begin{aligned}-m c^2 \sqrt{ 1 -\mathbf{u}^2/c^2} \approx - m c^2 \left( 1 - \frac{1}{{2}} \mathbf{u}^2/c^2 \right) = \frac{1}{{2}} m \mathbf{u}^2 - m c^2\end{aligned} \quad\quad\quad(9)

The constant -m c^2 has no effect on the equations of motion. If that approximation is our starting place, we can only obtain the non-relativistic approximation of the momentum change by evaluating the Euler-Lagrange equations

\begin{aligned}\frac{d (m \mathbf{u})}{dt} = e \left( \mathbf{E} + \frac{1}{c} \mathbf{u} \times \mathbf{B} \right).\end{aligned} \quad\quad\quad(10)

That was an exercise previously attempted while working the Tong Lagrangian problem set [2].

Hamiltonian.

Having confirmed the by old fashioned Euler-Lagrange equation evaluation that our Lagrangian provides the desired equations of motion, let’s now try it using the Hamiltonian approach. First we need the Hamiltonian, which is nothing more than

\begin{aligned}H = \mathbf{P} \cdot \mathbf{u} - \mathcal{L}\end{aligned} \quad\quad\quad(11)

However, in the Lagrangian and the dot product we have velocity terms that we must eliminate in favor of the canonical momenta. The Hamiltonian remains valid in either form, but to apply the Hamiltonian equations we need H = H(\mathbf{P}, \mathbf{x}), and not H = H(\mathbf{u}, \mathbf{P}, \mathbf{x}).

\begin{aligned}H = \mathbf{P} \cdot \mathbf{u} + m c^2 \sqrt{1 - \mathbf{u}^2/c^2} - \frac{e}{c} \mathbf{u} \cdot \mathbf{A} + e \Phi.\end{aligned} \quad\quad\quad(12)

Or

\begin{aligned}H = \mathbf{u} \cdot \left(\mathbf{P} - \frac{e}{c} \mathbf{A}\right) + m c^2 \sqrt{1 - \mathbf{u}^2/c^2} + e \Phi.\end{aligned} \quad\quad\quad(13)

We can rearrange 2 for \mathbf{u}

\begin{aligned}\mathbf{u} = \frac{1}{{m \gamma}} \left( \mathbf{P} - \frac{e}{c} \mathbf{A} \right),\end{aligned} \quad\quad\quad(14)

but \gamma also has a \mathbf{u} dependence, so this is not complete. Squaring gets us closer

\begin{aligned}\mathbf{u}^2 = \frac{1 - \mathbf{u}^2/c^2}{m^2} {\left( \mathbf{P} - \frac{e}{c} \mathbf{A} \right)}^2,\end{aligned} \quad\quad\quad(15)

and a bit of final rearrangement yields

\begin{aligned}\mathbf{u}^2 = \frac{(c \mathbf{P} - e \mathbf{A})^2}{m^2 c^2 + {\left( \mathbf{P} - \frac{e}{c} \mathbf{A} \right)}^2}.\end{aligned} \quad\quad\quad(16)

Writing \mathbf{p} = \mathbf{P} - e \mathbf{A}/c, we can rearrange and find

\begin{aligned}\sqrt{1 - \mathbf{u}^2/c^2} = \frac{m c }{\sqrt{m^2 c^2 +\mathbf{p}^2}}\end{aligned} \quad\quad\quad(17)

Also, taking roots of 16, the directions of \mathbf{u} and \mathbf{P} - \frac{e}{c} \mathbf{A} can differ only by a rotation. From 14 we also know that these are collinear, and therefore we have

\begin{aligned}\mathbf{u} = \frac{c \mathbf{P} - e \mathbf{A}}{\sqrt{m^2 c^2 + {\left( \mathbf{P} - \frac{e}{c} \mathbf{A} \right)}^2}}.\end{aligned} \quad\quad\quad(18)

This and 17 can now be substituted into 13, for

\begin{aligned}H = \frac{c}{\sqrt{m^2 c^2 + \mathbf{p}^2}} \left({\left(\mathbf{P} - \frac{e}{c} \mathbf{A}\right)}^2 + m^2 c^2 \right)+ e \Phi.\end{aligned} \quad\quad\quad(19)

Dividing out the common factors we finally have the Hamiltonian in a tidy form

\begin{aligned}H = \sqrt{ (c \mathbf{P} - e \mathbf{A})^2 + m^2 c^4 } + e\Phi.\end{aligned} \quad\quad\quad(20)
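Since eliminating \mathbf{u} was the fiddly part, here’s a scalar sympy sanity check (mine) that \mathbf{u} \cdot \mathbf{p} + m c^2 \sqrt{1 - \mathbf{u}^2/c^2} really does collapse to \sqrt{c^2 \mathbf{p}^2 + m^2 c^4}, treating everything as magnitudes with p standing in for the magnitude of \mathbf{P} - e\mathbf{A}/c. Both prints should give 0:

import sympy as sp

m, c, p = sp.symbols('m c p', positive=True)

u = c * p / sp.sqrt(m ** 2 * c ** 2 + p ** 2)       # eq 18, as a magnitude
gamma = 1 / sp.sqrt(1 - u ** 2 / c ** 2)

print(sp.simplify(m * gamma * u - p))                # eq 2: m gamma u = p
print(sp.simplify(u * p + m * c ** 2 / gamma - sp.sqrt(c ** 2 * p ** 2 + m ** 2 * c ** 4)))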

Hamiltonian equation evaluation.

Let’s now go through the exercise of evaluating the Hamiltonian equations. We want the starting point to be just the energy expression 20, and the use of the Hamiltonian equations and none of what led up to that. If we were given only this Hamiltonian and the Hamiltonian principle

\begin{aligned}\frac{\partial {H}}{\partial {P_k}} &= u_k \\  \frac{\partial {H}}{\partial {x_k}} &= -\dot{P}_k, \end{aligned} \quad\quad\quad(21)

how far can we go?

For the particle velocity we have no \Phi dependence and get

\begin{aligned}u_k &= \frac{c (c P_k -e A_k)}{\sqrt{ (c \mathbf{P} - e \mathbf{A})^2 + m^2 c^4 }}\end{aligned} \quad\quad\quad(23)

This is 18 in coordinate form, one of our stepping stones on the way to the Hamiltonian, and we recover it quickly with our first set of derivatives. We have the gradient part \dot{\mathbf{P}} = -\boldsymbol{\nabla} H of the Hamiltonian left to evaluate

\begin{aligned}\frac{d\mathbf{P}}{dt} = \frac{e (c P_k -e A_k) \boldsymbol{\nabla} A_k }{\sqrt{ (c \mathbf{P} - e \mathbf{A})^2 + m^2 c^4 }} - e \boldsymbol{\nabla} \Phi.\end{aligned} \quad\quad\quad(24)

Or

\begin{aligned}\frac{d\mathbf{P}}{dt} = e \left( \frac{u_k}{c} \boldsymbol{\nabla} A_k - \boldsymbol{\nabla} \Phi \right)\end{aligned} \quad\quad\quad(25)

This looks nothing like the Lorentz force law. Knowing that \mathbf{P} - e\mathbf{A}/c is of significance (because we know where we started which is kind of a cheat), we can subtract derivatives of this from both sides, and use the convective derivative operator d/dt = {\partial {}}/{\partial {t}} + \mathbf{u} \cdot \boldsymbol{\nabla} (ie. chain rule) yielding

\begin{aligned}\frac{d}{dt}(\mathbf{P} - e\mathbf{A}/c) = e \left( -\frac{1}{{c}}\frac{\partial {\mathbf{A}}}{\partial {t}} - \frac{1}{{c}} (\mathbf{u} \cdot \boldsymbol{\nabla}) \mathbf{A} + \frac{u_k}{c} \boldsymbol{\nabla} A_k - \boldsymbol{\nabla} \Phi \right).\end{aligned} \quad\quad\quad(26)

The first and last terms sum to the electric field, and we saw when evaluating the Euler-Lagrange equations that the remainder is u_k \boldsymbol{\nabla} A_k - (\mathbf{u} \cdot \boldsymbol{\nabla}) \mathbf{A} = \mathbf{u} \times (\boldsymbol{\nabla} \times \mathbf{A}). We have therefore gotten close to the familiar Lorentz force law, and have

\begin{aligned}\frac{d}{dt}(\mathbf{P} - e\mathbf{A}/c) = e \left( \mathbf{E} + \frac{\mathbf{u}}{c} \times \mathbf{B} \right).\end{aligned} \quad\quad\quad(27)

The only untidy detail left is that \mathbf{P} - e \mathbf{A}/c doesn’t look much like \gamma m \mathbf{u}, what we recognize as the relativistically corrected momentum. We ought to have that implied somewhere and 23 looks like the place. That turns out to be the case, and some rearrangement gives us this directly

\begin{aligned}\mathbf{P} - \frac{e}{c}\mathbf{A} = \frac{m \mathbf{u}}{\sqrt{1 - \mathbf{u}^2/c^2}}\end{aligned} \quad\quad\quad(28)

This completes the exercise, and we’ve now obtained the momentum part of the Lorentz force law. This is still unsatisfactory from a relativistic context since we do not have momentum and energy on equal footing. We likely need to utilize a covariant Lagrangian and Hamiltonian formulation to fix up that deficiency.

References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

[2] Dr. David Tong. Classical Mechanics Lagrangian Problem Set 1. [online]. http://www.damtp.cam.ac.uk/user/tong/dynamics/mf1.pdf.


Electromagnetic Gauge invariance.

Posted by peeterjoot on September 24, 2009

[Click here for a PDF of this post with nicer formatting]

At the end of section 12.1 in Jackson [1] he states that it is obvious that the Lorentz force equations are gauge invariant.

\begin{aligned}\frac{d \mathbf{p}}{dt} &= e \left( \mathbf{E} + \frac{\mathbf{u}}{c} \times \mathbf{B} \right) \\ \frac{d E}{dt} &= e \mathbf{u} \cdot \mathbf{E} \end{aligned} \quad\quad\quad(1)

Since I didn’t remember what gauge invariance was, it wasn’t so obvious. But looking ahead to problem 12.2 on this invariance, we have a gauge transformation defined in four vector form as

\begin{aligned}A^\alpha \rightarrow A^\alpha + \partial^\alpha \psi\end{aligned} \quad\quad\quad(3)

In vector form with A = \gamma_\alpha A^\alpha, this gauge transformation can be written

\begin{aligned}A \rightarrow A + \nabla \psi\end{aligned} \quad\quad\quad(4)

so this is really a statement that we add a spacetime gradient of something to the four vector potential. Given this, how does the field transform?

\begin{aligned}F &= \nabla \wedge A \\ &\rightarrow \nabla \wedge (A + \nabla \psi) \\ &= F + \nabla \wedge \nabla \psi\end{aligned}

But \nabla \wedge \nabla \psi = 0 (assuming partials are interchangeable), so the field is invariant, regardless of whether we are talking about the field equations themselves

\begin{aligned}\nabla F = J/\epsilon_0 c\end{aligned} \quad\quad\quad(5)

or the Lorentz force

\begin{aligned}\frac{dp}{d\tau} = e F \cdot v/c\end{aligned} \quad\quad\quad(6)

So, once you know the definition of the gauge transformation in four vector form, yes, this is justifiably obvious. However, to anybody who is not familiar with Geometric Algebra, perhaps this is still not so obvious. How does this translate to the more commonplace tensor or spacetime vector notations? The tensor four vector translation is the easier of the two, and there we have

\begin{aligned}F^{\alpha\beta} &= \partial^\alpha A^\beta -\partial^\beta A^\alpha \\ &\rightarrow \partial^\alpha (A^\beta + \partial^\beta \psi) -\partial^\beta (A^\alpha + \partial^\alpha \psi) \\ &= F^{\alpha\beta} + \partial^\alpha \partial^\beta \psi -\partial^\beta \partial^\alpha \psi \\ \end{aligned}

Just as for \nabla \wedge \nabla \psi = 0, interchange of partials means the field components F^{\alpha\beta} are unchanged by adding this gradient. Finally, in plain old spatial vector form, how is this gauge invariance expressed?

In components we have

\begin{aligned}A^0 &\rightarrow A^0 + \partial^0 \psi = \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t} \\ A^k &\rightarrow A^k + \partial^k \psi = A^k - \frac{\partial \psi}{\partial x^k}\end{aligned} \quad\quad\quad(7)

This last in vector form is \mathbf{A} \rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi, where the sign inversion comes from \partial^k = -\partial_k = -\partial/\partial x^k, assuming a +--- metric.

We want to apply this to the electric and magnetic field components

\begin{aligned}\mathbf{E} &= -\boldsymbol{\nabla} \phi - \frac{1}{{c}}\frac{\partial \mathbf{A}}{\partial t} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \quad\quad\quad(9)

The electric field transforms as

\begin{aligned}\mathbf{E} &\rightarrow -\boldsymbol{\nabla} \left( \phi + \frac{1}{{c}}\frac{\partial \psi}{\partial t}\right) - \frac{1}{{c}}\frac{\partial }{\partial t} \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{E} -\frac{1}{{c}} \boldsymbol{\nabla} \frac{\partial \psi}{\partial t} + \frac{1}{{c}}\frac{\partial }{\partial t} \boldsymbol{\nabla} \psi \end{aligned}

With partial interchange this is just \mathbf{E}. For the magnetic field we have

\begin{aligned}\mathbf{B} &\rightarrow \boldsymbol{\nabla} \times \left( \mathbf{A} - \boldsymbol{\nabla} \psi \right) \\ &= \mathbf{B}  - \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi \end{aligned}

Again since the partials interchange we have \boldsymbol{\nabla} \times \boldsymbol{\nabla} \psi = 0, so this is just the magnetic field.

Alright. Having worked this three different ways, now I can say it’s obvious.

References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.


Lorentz force from Lagrangian (non-covariant)

Posted by peeterjoot on September 22, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation

Jackson [1] gives the Lorentz force non-covariant Lagrangian

\begin{aligned}L = - m c^2 \sqrt{1 -\mathbf{u}^2/c^2} + \frac{e}{c} \mathbf{u} \cdot \mathbf{A} - e \phi\end{aligned} \quad\quad\quad(1)

and leaves it as an exercise for the reader to verify that this produces the Lorentz force law. Felt like trying this anew since I recall having trouble the first time I tried it (the covariant derivation was easier).

Guts

Jackson gives a tip to use the convective derivative (yet another name for the chain rule), and using this in the Euler Lagrange equations we have

\begin{aligned}\boldsymbol{\nabla} \mathcal{L} = \frac{d}{dt} \boldsymbol{\nabla}_\mathbf{u} \mathcal{L} = \left( \frac{\partial}{\partial t} + \mathbf{u} \cdot \boldsymbol{\nabla} \right) \sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}\end{aligned} \quad\quad\quad(2)

where \{\sigma_a\} is the spatial basis. The first order of business is calculating the gradient and conjugate momenta. For the latter we have

\begin{aligned}\sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}&=\sigma_a \left(- m c^2 \gamma \frac{1}{{2}} (-2) \dot{x}^a/c^2 + \frac{e}{c} A^a \right) \\ &=m \gamma \mathbf{u} + \frac{e}{c} \mathbf{A} \\ &\equiv \mathbf{p} + \frac{e}{c}\mathbf{A}\end{aligned}

Applying the convective derivative we have

\begin{aligned}\frac{d}{dt} \sigma_a \frac{\partial \mathcal{L}}{\partial \dot{x}^a}&=\frac{d\mathbf{p}}{dt} + \frac{e}{c} \frac{\partial \mathbf{A}}{\partial t}+ \frac{e}{c} \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A}\end{aligned}

For the gradient we have

\begin{aligned}\sigma_a \frac{\partial \mathcal{L}}{\partial x^a} = e\left( \frac{1}{{c}}\dot{x}^b \boldsymbol{\nabla} A^b - \boldsymbol{\nabla} \phi \right)\end{aligned}

Rearranging 2 for this Lagrangian we have

\begin{aligned}\frac{d\mathbf{p}}{dt} =e \left( - \boldsymbol{\nabla} \phi- \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}- \frac{1}{c} \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A} +\frac{1}{{c}} \dot{x}^b \boldsymbol{\nabla} A^b \right)\end{aligned}

The first two terms are the electric field

\begin{aligned}\mathbf{E} \equiv- \boldsymbol{\nabla} \phi- \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}\end{aligned}

So it remains to be shown that the remaining two equal (\mathbf{u}/c) \times \mathbf{B} = (\mathbf{u}/c) \times (\boldsymbol{\nabla} \times \mathbf{A}). Using the Hestenes notation, with primes denoting what the gradient is operating on, we have

\begin{aligned}\dot{x}^b \boldsymbol{\nabla} A^b - \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A}&=\boldsymbol{\nabla}' \mathbf{u} \cdot \mathbf{A}' - \mathbf{u} \cdot \boldsymbol{\nabla} \mathbf{A} \\ &=-\mathbf{u} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A}) \\ &=\frac{1}{{2}} \left((\boldsymbol{\nabla} \wedge \mathbf{A}) \mathbf{u}  -\mathbf{u} (\boldsymbol{\nabla} \wedge \mathbf{A}) \right) \\ &=\frac{I}{2} \left((\boldsymbol{\nabla} \times \mathbf{A}) \mathbf{u} -\mathbf{u} (\boldsymbol{\nabla} \times \mathbf{A}) \right) \\ &=-I (\mathbf{u} \wedge \mathbf{B}) \\ &=-I I (\mathbf{u} \times \mathbf{B}) \\ &=\mathbf{u} \times \mathbf{B} \\ \end{aligned}

I’ve used the Geometric Algebra identities I’m familiar with to regroup things, but this last bit can likely be done with index manipulation too. The exercise is complete, and we have from the Lagrangian

\begin{aligned}\frac{d\mathbf{p}}{dt} = e \left( \mathbf{E} + \frac{1}{{c}} \mathbf{u} \times \mathbf{B} \right)\end{aligned} \quad\quad\quad(3)

References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.
