
# Posts Tagged ‘antisymmetric tensor’

## Lorentz Force Trajectory.

Posted by Peeter Joot on September 10, 2011

# Solving the Lorentz force equation in the non-relativistic limit.

## The problem.

[1] treats the solution of the Lorentz force equation in covariant form. Let’s try this for non-relativistic motion and constant fields, but without making the usual assumptions of perpendicular electric and magnetic fields or of field alignment with the coordinate axes. Our equation to solve is

\begin{aligned}\frac{d}{dt} \left( \gamma m \mathbf{v} \right) = q \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.1)

so in the non-relativistic limit we want to solve the matrix equation

\begin{aligned}\mathbf{v}' &= \frac{q}{m} \mathbf{E} + \frac{q}{ m c } \Omega \mathbf{v} \\ \Omega &=\begin{bmatrix}0 & B_3 & -B_2 \\ -B_3 & 0 & B_1 \\ B_2 & -B_1 & 0 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.2)

## First attempt.

This is very much like the plain old LDE

\begin{aligned}x' = a + b x,\end{aligned} \hspace{\stretch{1}}(1.4)

which we can solve using integrating factors

\begin{aligned}x' - b x &= a \\ e^{b t} \left( x e^{-b t} \right)' &= a.\end{aligned}

This we can rearrange and integrate to find a solution to the non-homogeneous problem

\begin{aligned}x e^{-b t} = \int a e^{-b \tau}.\end{aligned} \hspace{\stretch{1}}(1.5)

This solution to the non-homogeneous equation is thus

\begin{aligned}x - x_0 = e^{b t} \int_{\tau = 0}^t a e^{-b \tau} = \frac{a}{b} \left(e^{bt} - 1 \right).\end{aligned} \hspace{\stretch{1}}(1.6)

Because this already incorporates the homogeneous solution $x = C e^{b t}$, this is also the general form of the solution.

Can we do something similar for the matrix equation of 1.2? It is tempting to try, rearranging in the same way like so

\begin{aligned}e^{ \frac{q}{m c}\Omega t} \left( e^{-\frac{q}{m c} \Omega t} \mathbf{v} \right)' = \frac{q}{m} \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.7)

Our matrix exponentials are perfectly well formed, but attempting this we run into trouble. We get only as far as the analogue of the integral above

\begin{aligned}\mathbf{v} - \mathbf{v}_0 = \frac{q}{m}e^{ \frac{q }{m c} \Omega t}\left( \int_{\tau = 0}^te^{ -\frac{q }{m c} \Omega \tau}\right) \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(1.8)

Only when $\text{Det}{\Omega} \ne 0$ do we have

\begin{aligned}\int e^{ -\frac{q }{m c} \Omega \tau} =- \frac{m c}{q} \Omega^{-1} e^{ -\frac{q }{m c} \Omega \tau},\end{aligned} \hspace{\stretch{1}}(1.9)

but in our case, this determinant is zero, due to the antisymmetry that is built into our magnetic field tensor. It appears that we need a different strategy.
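
Here is a quick numerical illustration of that obstruction, a sketch of my own with arbitrary field components:

```python
import numpy as np

# For any choice of B, the matrix Omega of 1.2 is singular, so the
# inverse required in 1.9 does not exist.
B1, B2, B3 = 1.0, 2.0, 3.0   # arbitrary magnetic field components
Omega = np.array([[0, B3, -B2],
                  [-B3, 0, B1],
                  [B2, -B1, 0]])
print(np.linalg.det(Omega))  # ~0: antisymmetric and odd dimensional
```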

## Second attempt.

It’s natural to attempt to pull out our spectral theorem toolbox. We find three independent eigenvalues for our matrix $\Omega$ (one of which is naturally zero due to the singular nature of the matrix).

These eigenvalues are

\begin{aligned}\lambda_1 &= 0 \\ \lambda_2 &= i{\left\lvert{\mathbf{B}}\right\rvert} \\ \lambda_3 &= -i{\left\lvert{\mathbf{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(1.10)

The corresponding orthonormal eigenvectors are found to be

\begin{aligned}\mathbf{u}_1 &= \frac{\mathbf{B}}{{\left\lvert{\mathbf{B}}\right\rvert}} \\ \mathbf{u}_2 &=\frac{1}{{{\left\lvert{\mathbf{B}}\right\rvert} \sqrt{ 2(B_1^2 + B_3^2) } }}\begin{bmatrix}i B_1 B_2 - B_3 {\left\lvert{\mathbf{B}}\right\rvert} \\ -i(B_1^2 + B_3^2) \\ i B_2 B_3 + B_1 {\left\lvert{\mathbf{B}}\right\rvert} \\ \end{bmatrix} \\ \mathbf{u}_3 &=\frac{1}{{{\left\lvert{\mathbf{B}}\right\rvert} \sqrt{ 2(B_1^2 + B_3^2) } }}\begin{bmatrix}-i B_1 B_2 - B_3 {\left\lvert{\mathbf{B}}\right\rvert} \\ i(B_1^2 + B_3^2) \\ -i B_2 B_3 + B_1 {\left\lvert{\mathbf{B}}\right\rvert} \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.13)

The last pair of eigenvectors are computed with the assumption that not both of $B_1$ and $B_3$ are zero. This allows for the spectral decomposition

\begin{aligned}U &=\begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2 & \mathbf{u}_3\end{bmatrix} \\ D &= \begin{bmatrix}0 & 0 & 0 \\ 0 & i {\left\lvert{\mathbf{B}}\right\rvert} & 0 \\ 0 & 0 & -i {\left\lvert{\mathbf{B}}\right\rvert}\end{bmatrix} \\ \Omega &= U D U^{*}\end{aligned} \hspace{\stretch{1}}(1.16)
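
As a sanity check, the eigenvalues 1.10 and the $\mathbf{u}_2$ of 1.13 are easily verified numerically. A small sketch of mine, again with arbitrary field components:

```python
import numpy as np

B1, B2, B3 = 1.0, 2.0, 3.0
b = np.sqrt(B1**2 + B2**2 + B3**2)
Omega = np.array([[0, B3, -B2], [-B3, 0, B1], [B2, -B1, 0]])

# eigenvalues should be -i|B|, 0, +i|B|
print(np.sort_complex(np.linalg.eigvals(Omega)))

# check Omega u_2 = i |B| u_2 for the u_2 of 1.13
u2 = np.array([1j*B1*B2 - B3*b,
               -1j*(B1**2 + B3**2),
               1j*B2*B3 + B1*b]) / (b * np.sqrt(2*(B1**2 + B3**2)))
print(np.allclose(Omega @ u2, 1j*b*u2))  # True
```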

We can use this to decouple our equation

\begin{aligned}\mathbf{v}' = \frac{q}{m} \mathbf{E} + \frac{q}{ m c } U D U^{*} \mathbf{v}\end{aligned} \hspace{\stretch{1}}(1.19)

\begin{aligned}\mathbf{w} &= U^{*} \mathbf{v} \\ \mathbf{F} &= U^{*} \mathbf{E} \\ \mathbf{w}' &= \frac{q}{m} \mathbf{F} + \frac{q}{ m c } D \mathbf{w}.\end{aligned} \hspace{\stretch{1}}(1.20)

Written out explicitly, this is a set of three independent equations

\begin{aligned}w_1' &= \frac{q}{m} F_1 \\ w_2' &= \frac{q}{m} F_2 + \frac{q i {\left\lvert{\mathbf{B}}\right\rvert}}{ m c } w_2 \\ w_3' &= \frac{q}{m} F_3 - \frac{q i {\left\lvert{\mathbf{B}}\right\rvert}}{ m c } w_3\end{aligned} \hspace{\stretch{1}}(1.23)

Utilizing 1.6 our solution is

\begin{aligned}w_1 - w_1(0) &= \frac{q}{m} F_1 t \\ w_2 - w_2(0) &= - \frac{i c F_2 }{{\left\lvert{\mathbf{B}}\right\rvert}} \left( e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ m c } } - 1 \right) \\ w_3 - w_3(0) &= \frac{i c F_3 }{{\left\lvert{\mathbf{B}}\right\rvert}} \left( e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ m c } } - 1 \right)\end{aligned} \hspace{\stretch{1}}(1.26)

Reinserting matrix form we have

\begin{aligned}\mathbf{w} - \mathbf{w}(0) =\begin{bmatrix}\frac{q}{m} \mathbf{e}_1^\text{T} U^{*} \mathbf{E} t \\ \frac{2 c \mathbf{e}_2^\text{T} U^{*} \mathbf{E} }{{\left\lvert{\mathbf{B}}\right\rvert}}e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ \frac{2 c \mathbf{e}_3^\text{T} U^{*} \mathbf{E} }{{\left\lvert{\mathbf{B}}\right\rvert}}e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.29)

with

\begin{aligned}f_1 &= \frac{q}{m} t \\ f_2 &= \frac{2 c }{{\left\lvert{\mathbf{B}}\right\rvert}} e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \\ f_3 &= \frac{2 c }{{\left\lvert{\mathbf{B}}\right\rvert}} e^{ -\frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right)\end{aligned} \hspace{\stretch{1}}(1.30)

We have

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = U\begin{bmatrix}f_1 \mathbf{e}_1^\text{T} U^{*} \mathbf{E} \\ f_2 \mathbf{e}_2^\text{T} U^{*} \mathbf{E} \\ f_3 \mathbf{e}_3^\text{T} U^{*} \mathbf{E} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.33)

Observe that the dot products embedded here can be nicely expressed in terms of the eigenvectors since

\begin{aligned}U^{*} \mathbf{E}= \begin{bmatrix}\mathbf{u}_1^{*} \cdot \mathbf{E} \\ \mathbf{u}_2^{*} \cdot \mathbf{E} \\ \mathbf{u}_3^{*} \cdot \mathbf{E}\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.34)

Our solution is thus a weighted sum of projections of the electric field vector $\mathbf{E}$ onto the eigenvectors formed strictly from the magnetic field tensor

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \sum f_i \mathbf{u}_i ( \mathbf{u}_i^{*} \cdot \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(1.35)

Recalling that $\mathbf{u}_1 = \mathbf{B}/{\left\lvert{\mathbf{B}}\right\rvert}$, the unit vector that lies in the direction of the magnetic field, we have

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \frac{q t}{m} \hat{\mathbf{B}} (\hat{\mathbf{B}} \cdot \mathbf{E})+ \sum_{i=2}^3 f_i \mathbf{u}_i ( \mathbf{u}_i^{*} \cdot \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(1.36)

Also observe that this is a manifestly real valued solution, since the remaining eigenvectors are a conjugate pair, $\mathbf{u}_2 = \mathbf{u}_3^{*}$, as are the differential solutions, $f_2 = f_3^{*}$. This leaves us with

\begin{aligned}\mathbf{v} - \mathbf{v}(0) = \frac{q t}{m} \hat{\mathbf{B}} (\hat{\mathbf{B}} \cdot \mathbf{E})+ \frac{4 c }{{\left\lvert{\mathbf{B}}\right\rvert}} \sin \left( \frac{q {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } \right) \text{Real} \left(e^{ \frac{q i {\left\lvert{\mathbf{B}}\right\rvert} t}{ 2 m c } } \mathbf{u}_2 ( \mathbf{u}_2^{*} \cdot \mathbf{E} )\right).\end{aligned} \hspace{\stretch{1}}(1.37)
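
Before interpreting this, it is worth checking 1.35 against a direct numerical integration of 1.2. The following is a sketch of my own (arbitrary constants, with $q = m = c = 1$), not part of the original solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

q = m = c = 1.0
E = np.array([0.3, -0.2, 0.5])
B1, B2, B3 = 1.0, 2.0, 2.0
b = np.sqrt(B1**2 + B2**2 + B3**2)
Omega = np.array([[0, B3, -B2], [-B3, 0, B1], [B2, -B1, 0]])

# integrate v' = (q/m) E + (q/(m c)) Omega v numerically
v0 = np.array([0.1, 0.0, -0.1])
sol = solve_ivp(lambda t, v: (q/m)*E + (q/(m*c))*(Omega @ v),
                (0, 10), v0, dense_output=True, rtol=1e-10, atol=1e-12)

# the closed form 1.35: v - v(0) = sum_i f_i u_i (u_i^* . E)
u1 = np.array([B1, B2, B3]) / b
u2 = np.array([1j*B1*B2 - B3*b,
               -1j*(B1**2 + B3**2),
               1j*B2*B3 + B1*b]) / (b * np.sqrt(2*(B1**2 + B3**2)))
u3 = u2.conj()

def v_closed(t):
    theta = q*b*t/(m*c)
    f = [q*t/m,
         (2*c/b)*np.exp(1j*theta/2)*np.sin(theta/2),
         (2*c/b)*np.exp(-1j*theta/2)*np.sin(theta/2)]
    dv = sum(fi * ui * np.vdot(ui, E) for fi, ui in zip(f, (u1, u2, u3)))
    return v0 + dv.real

print(np.allclose(sol.sol(7.3), v_closed(7.3)))  # True
```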

It is natural to express $\mathbf{u}_2$ in terms of the direction cosines $b_i$ of the magnetic field vector $\mathbf{B} = {\left\lvert{\mathbf{B}}\right\rvert}(b_1, b_2, b_3)$

\begin{aligned}\mathbf{u}_2 =\frac{1}{{\sqrt{ 2(b_1^2 + b_3^2) }}}\begin{bmatrix}i b_1 b_2 - b_3 \\ -i(b_1^2 + b_3^2) \\ i b_2 b_3 + b_1 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.38)

Is there a way to express this beastie in a coordinate free fashion? I experimented with this a bit looking for something that would provide some additional geometrical meaning but did not find it. Further play with that is probably something for another day.

What we do see is that our velocity has two main components, one of which increases linearly in proportion to the collinearity of the magnetic and electric fields. The other component is oscillatory. With a better geometrical description of that eigenvector we could perhaps understand the mechanics a bit better.

# References

[1] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.

## PHY450H1S (relativistic electrodynamics) Problem Set 3.

Posted by Peeter Joot on March 2, 2011


# Disclaimer.

This problem set is as yet ungraded (although only the second question will be graded).

# Problem 1. Fun with $\epsilon_{\alpha\beta\gamma}$, $\epsilon^{ijkl}$, $F_{ij}$, and the duality of Maxwell’s equations in vacuum.

## 1. Statement. rank 3 spatial antisymmetric tensor identities.

Prove that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}\end{aligned} \hspace{\stretch{1}}(2.1)

and use it to find the familiar relation for

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})\end{aligned} \hspace{\stretch{1}}(2.2)

Also show that

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}=2 \delta_{\alpha\mu}.\end{aligned} \hspace{\stretch{1}}(2.3)

(Einstein summation is implied throughout this problem.)

## 1. Solution

We can explicitly expand the (implied) sum over the index $\gamma$. This is

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\epsilon_{\alpha \beta 1} \epsilon_{\mu \nu 1}+\epsilon_{\alpha \beta 2} \epsilon_{\mu \nu 2}+\epsilon_{\alpha \beta 3} \epsilon_{\mu \nu 3}\end{aligned} \hspace{\stretch{1}}(2.4)

For any $\alpha \ne \beta$ only one term is non-zero. For example with $\alpha,\beta = 2,3$, we have just a contribution from the $\gamma = 1$ part of the sum

\begin{aligned}\epsilon_{2 3 1} \epsilon_{\mu \nu 1}.\end{aligned} \hspace{\stretch{1}}(2.5)

The value of this for $(\mu,\nu) = (\alpha,\beta)$ is

\begin{aligned}(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.6)

whereas for $(\mu,\nu) = (\beta,\alpha)$ we have

\begin{aligned}-(\epsilon_{2 3 1})^2\end{aligned} \hspace{\stretch{1}}(2.7)

Our sum has value one when $(\alpha, \beta)$ matches $(\mu, \nu)$, and value minus one for when $(\mu, \nu)$ are permuted. We can summarize this, by saying that when $\alpha \ne \beta$ we have

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \nu \gamma}=\delta_{\alpha\mu} \delta_{\beta\nu}-\delta_{\alpha\nu} \delta_{\beta\mu}.}\end{aligned} \hspace{\stretch{1}}(2.8)

However, observe that when $\alpha = \beta$ the RHS is

\begin{aligned}\delta_{\alpha\mu} \delta_{\alpha\nu}-\delta_{\alpha\nu} \delta_{\alpha\mu} = 0,\end{aligned} \hspace{\stretch{1}}(2.9)

as desired, so this form works in general without any $\alpha \ne \beta$ qualifier, completing this part of the problem.

\begin{aligned}(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})&=(\epsilon_{\alpha \beta \gamma} \mathbf{e}^\alpha A^\beta B^\gamma ) \cdot(\epsilon_{\mu \nu \sigma} \mathbf{e}^\mu C^\nu D^\sigma ) \\ &=\epsilon_{\alpha \beta \gamma} A^\beta B^\gamma \epsilon_{\alpha \nu \sigma} C^\nu D^\sigma \\ &=(\delta_{\beta \nu} \delta_{\gamma\sigma}-\delta_{\beta \sigma} \delta_{\gamma\nu} )A^\beta B^\gamma C^\nu D^\sigma \\ &=A^\nu B^\sigma C^\nu D^\sigma-A^\sigma B^\nu C^\nu D^\sigma.\end{aligned}

This gives us

\begin{aligned}\boxed{(\mathbf{A} \times \mathbf{B}) \cdot (\mathbf{C} \times \mathbf{D})=(\mathbf{A} \cdot \mathbf{C})(\mathbf{B} \cdot \mathbf{D})-(\mathbf{A} \cdot \mathbf{D})(\mathbf{B} \cdot \mathbf{C}).}\end{aligned} \hspace{\stretch{1}}(2.10)

We have one more identity to deal with.

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}\end{aligned} \hspace{\stretch{1}}(2.11)

We can expand out this (implied) sum slow and dumb as well

\begin{aligned}\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}&=\epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+\epsilon_{\alpha 2 1} \epsilon_{\mu 2 1} \\ &+\epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+\epsilon_{\alpha 3 1} \epsilon_{\mu 3 1} \\ &+\epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}+\epsilon_{\alpha 3 2} \epsilon_{\mu 3 2} \\ &=2 \epsilon_{\alpha 1 2} \epsilon_{\mu 1 2}+ 2 \epsilon_{\alpha 1 3} \epsilon_{\mu 1 3}+ 2 \epsilon_{\alpha 2 3} \epsilon_{\mu 2 3}\end{aligned}

Now, observe that for any $\alpha \in \{1,2,3\}$ only one term of this sum is picked up. For example, with no loss of generality, pick $\alpha = 1$. We are left with only

\begin{aligned}2 \epsilon_{1 2 3} \epsilon_{\mu 2 3}\end{aligned} \hspace{\stretch{1}}(2.12)

This has the value

\begin{aligned}2 (\epsilon_{1 2 3})^2 = 2\end{aligned} \hspace{\stretch{1}}(2.13)

when $\mu = \alpha$ and is zero otherwise. We can therefore summarize the evaluation of this sum as

\begin{aligned}\boxed{\epsilon_{\alpha \beta \gamma}\epsilon_{\mu \beta \gamma}= 2\delta_{\alpha\mu},}\end{aligned} \hspace{\stretch{1}}(2.14)

completing this problem.
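
Both boxed identities are also easy to verify by brute force. A small numpy sketch of my own:

```python
import numpy as np

# epsilon_{ijk}: +1 on cyclic permutations, -1 on anticyclic ones
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

d = np.eye(3)
lhs = np.einsum('abg,mng->abmn', eps, eps)
rhs = np.einsum('am,bn->abmn', d, d) - np.einsum('an,bm->abmn', d, d)
print(np.allclose(lhs, rhs))                                 # 2.8: True
print(np.allclose(np.einsum('abg,mbg->am', eps, eps), 2*d))  # 2.14: True
```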

## 2. Statement. Determinant of three by three matrix.

Prove that for any $3 \times 3$ matrix ${\left\lVert{A_{\alpha\beta}}\right\rVert}$: $\epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = \epsilon_{\alpha \beta \gamma} \text{Det} A$ and that $\epsilon_{\alpha\beta\gamma} \epsilon_{\mu\nu\lambda} A_{\alpha \mu} A_{\beta\nu} A_{\gamma\lambda} = 6 \text{Det} A$.

## 2. Solution

In class Simon showed us how the first identity can be arrived at using the triple product $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \text{Det}(\mathbf{a} \mathbf{b} \mathbf{c})$. It occurred to me later that I’d seen the identity to be proven in the context of Geometric Algebra, but hadn’t recognized it in this tensor form. Basically, a wedge product can be expanded in sums of determinants, and when the number of vectors matches the dimension of the space, we have a pseudoscalar times the determinant of the components.

For example, in $\mathbb{R}^{2}$, let’s take the wedge product of a pair of vectors. As preparation for the relativistic $\mathbb{R}^{4}$ case, we won’t require an orthonormal basis, but will express the vectors in terms of a reciprocal frame and the associated components

\begin{aligned}a = a^i e_i = a_j e^j\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.16)

When we get to the relativistic case, we can pick (but don’t have to) the standard basis

\begin{aligned}e_0 &= (1, 0, 0, 0) \\ e_1 &= (0, 1, 0, 0) \\ e_2 &= (0, 0, 1, 0) \\ e_3 &= (0, 0, 0, 1),\end{aligned} \hspace{\stretch{1}}(2.17)

for which our reciprocal frame is implicitly defined by the metric

\begin{aligned}e^0 &= (1, 0, 0, 0) \\ e^1 &= (0, -1, 0, 0) \\ e^2 &= (0, 0, -1, 0) \\ e^3 &= (0, 0, 0, -1).\end{aligned} \hspace{\stretch{1}}(2.21)

Anyways. Back to the problem. Let’s examine the $\mathbb{R}^{2}$ case. Our wedge product in coordinates is

\begin{aligned}a \wedge b=a^i b^j (e_i \wedge e_j)\end{aligned} \hspace{\stretch{1}}(2.25)

Since there are only two basis vectors we have

\begin{aligned}a \wedge b=(a^1 b^2 - a^2 b^1) e_1 \wedge e_2 = \text{Det} {\left\lVert{a^i b^j}\right\rVert} (e_1 \wedge e_2).\end{aligned} \hspace{\stretch{1}}(2.26)

Our wedge product is a product of the determinant of the vector coordinates, times the $\mathbb{R}^{2}$ pseudoscalar $e_1 \wedge e_2$.

This doesn’t look quite like the $\mathbb{R}^{3}$ relation that we want to prove, which had an antisymmetric tensor factor for the determinant. Observe that we get the determinant by picking off the $e_1 \wedge e_2$ component of the bivector result (the only component in this case), and we can do that by dotting with $e^2 \wedge e^1$. To get an antisymmetric tensor times the determinant, we have only to dot with a different pseudoscalar (one that differs by a possible sign due to permutation of the indexes). That is

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)&=a^i b^j (e^t \wedge e^s) \cdot (e_i \wedge e_j) \\ &=a^i b^j\left( {\delta^{s}}_i {\delta^{t}}_j-{\delta^{t}}_i {\delta^{s}}_j \right) \\ &=a^i b^j{\delta^{[t}}_j {\delta^{s]}}_i \\ &=a^i b^j{\delta^{t}}_{[j} {\delta^{s}}_{i]} \\ &=a^{[i} b^{j]}{\delta^{t}}_{j} {\delta^{s}}_{i} \\ &=a^{[s} b^{t]}\end{aligned}

Now, if we write $a^i = A^{1 i}$ and $b^j = A^{2 j}$ we have

\begin{aligned}(e^t \wedge e^s) \cdot (a \wedge b)=A^{1 s} A^{2 t} -A^{1 t} A^{2 s}\end{aligned} \hspace{\stretch{1}}(2.27)

We can write this in two different ways, one of which is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s} =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned} \hspace{\stretch{1}}(2.28)

and the other of which is by introducing free indexes for $1$ and $2$, and summing antisymmetrically over these. That is

\begin{aligned}A^{1 s} A^{2 t} -A^{1 t} A^{2 s}=A^{a s} A^{b t} \epsilon_{a b}\end{aligned} \hspace{\stretch{1}}(2.29)

So, we have

\begin{aligned}\boxed{A^{a s} A^{b t} \epsilon_{a b} =A^{1 i} A^{2 j} {\delta^{[t}}_j {\delta^{s]}}_i =\epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert},}\end{aligned} \hspace{\stretch{1}}(2.30)

This result holds regardless of the metric for the space, and does not require an orthonormal basis. When the metric is Euclidean and we have an orthonormal basis, all the indexes can be dropped.

The $\mathbb{R}^{3}$ and $\mathbb{R}^{4}$ cases follow in exactly the same way; we just need more vectors in the wedge products.

For the $\mathbb{R}^{3}$ case we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)&=a^i b^j c^k(e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k) \\ &=a^i b^j c^k{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u]}\end{aligned}

Again, with $a^i = A^{1 i}$ and $b^j = A^{2 j}$, and $c^k = A^{3 k}$ we have

\begin{aligned}(e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c)=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i\end{aligned} \hspace{\stretch{1}}(2.31)

and we can choose to write this in either form, resulting in the identity

\begin{aligned}\boxed{\epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{1 i} A^{2 j} A^{3 k}{\delta^{[u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c} A^{a s} A^{b t} A^{c u}.}\end{aligned} \hspace{\stretch{1}}(2.32)

The $\mathbb{R}^{4}$ case follows exactly the same way, and we have

\begin{aligned}(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot ( a \wedge b \wedge c \wedge d)&=a^i b^j c^k d^l(e^v \wedge e^u \wedge e^t \wedge e^s) \cdot (e_i \wedge e_j \wedge e_k \wedge e_l) \\ &=a^i b^j c^k d^l{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i \\ &=a^{[s} b^t c^{u} d^{v]}.\end{aligned}

This time with $a^i = A^{0 i}$ and $b^j = A^{1 j}$, and $c^k = A^{2 k}$, and $d^l = A^{3 l}$ we have

\begin{aligned}\boxed{\epsilon^{s t u v} \text{Det} {\left\lVert{A^{ij}}\right\rVert}=A^{0 i} A^{1 j} A^{2 k} A^{3 l}{\delta^{[v}}_l{\delta^{u}}_k{\delta^{t}}_j{\delta^{s]}}_i=\epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}.}\end{aligned} \hspace{\stretch{1}}(2.33)

This one is almost the identity to be established later in problem 1.4. We have only to raise and lower some indexes to get that one. Note that in the Minkowski standard basis above, because $s, t, u, v$ must be a permutation of $0,1,2,3$ for a non-zero result, we must have

\begin{aligned}\epsilon^{s t u v} = (-1)^3 (+1) \epsilon_{s t u v}.\end{aligned} \hspace{\stretch{1}}(2.34)

So raising and lowering the identity above gives us

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{A_{ij}}\right\rVert}=\epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}.\end{aligned} \hspace{\stretch{1}}(2.35)

No sign changes were required for the indexes $a, b, c, d$, since they are paired.

Until we did the raising and lowering operations here, there was no specific metric required, so our first result 2.33 is the more general one.

There’s one more part to this problem, doing the antisymmetric sums over the indexes $s, t, \cdots$. For the $\mathbb{R}^{2}$ case we have

\begin{aligned}\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t}&=\epsilon_{s t} \epsilon^{s t} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2} \epsilon^{1 2} +\epsilon_{2 1} \epsilon^{2 1} \right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( 1^2 + (-1)^2\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert}\end{aligned}

We conclude that

\begin{aligned}\boxed{\epsilon_{s t} \epsilon_{a b} A^{a s} A^{b t} = 2! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.36)

For the $\mathbb{R}^{3}$ case we have the same operation

\begin{aligned}\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}&=\epsilon_{s t u} \epsilon^{s t u} \text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=\left( \epsilon_{1 2 3} \epsilon^{1 2 3} +\epsilon_{1 3 2} \epsilon^{1 3 2} + \cdots\right)\text{Det} {\left\lVert{A^{ij}}\right\rVert} \\ &=(\pm 1)^2 (3!)\text{Det} {\left\lVert{A^{ij}}\right\rVert}.\end{aligned}

So we conclude

\begin{aligned}\boxed{\epsilon_{s t u} \epsilon_{a b c} A^{a s} A^{b t} A^{c u}= 3! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.37)
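
Both of these $\mathbb{R}^{3}$ identities are also quick to check numerically. A sketch of my own (Euclidean here, so index position is irrelevant for the check):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

A = np.random.rand(3, 3)
detA = np.linalg.det(A)
lhs = np.einsum('abc,as,bt,cu->stu', eps, A, A, A)
print(np.allclose(lhs, eps * detA))                            # 2.32
print(np.isclose(np.einsum('stu,stu->', lhs, eps), 6 * detA))  # 2.37: 3! Det A
```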

It’s clear what the pattern is, and if we evaluate the sum of the antisymmetric tensor squares in $\mathbb{R}^{4}$ we have

\begin{aligned}\epsilon_{s t u v} \epsilon_{s t u v}&=\epsilon_{0 1 2 3} \epsilon_{0 1 2 3}+\epsilon_{0 1 3 2} \epsilon_{0 1 3 2}+\epsilon_{0 2 1 3} \epsilon_{0 2 1 3}+ \cdots \\ &= (\pm 1)^2 (4!),\end{aligned}

So, for our SR case we have

\begin{aligned}\boxed{\epsilon_{s t u v} \epsilon_{a b c d} A^{a s} A^{b t} A^{c u} A^{d v}= 4! \text{Det} {\left\lVert{A^{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.38)

This was part of question 1.4, albeit in lower index form. Here since all indexes are matched, we have the same result without major change

\begin{aligned}\boxed{\epsilon^{s t u v} \epsilon^{a b c d} A_{a s} A_{b t} A_{c u} A_{d v}= 4! \text{Det} {\left\lVert{A_{ij}}\right\rVert}.}\end{aligned} \hspace{\stretch{1}}(2.39)

The main difference is that we are now taking the determinant of a lower index tensor.

## 3. Statement. Rotational invariance of 3D antisymmetric tensor

Use the previous results to show that $\epsilon_{\mu\nu\lambda}$ is invariant under rotations.

## 3. Solution

We apply transformations to coordinates (and thus indexes) of the form

\begin{aligned}x_\mu \rightarrow O_{\mu\nu} x_\nu\end{aligned} \hspace{\stretch{1}}(2.40)

With our tensor transforming as its indexes, we have

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\alpha\beta\sigma} O_{\mu\alpha} O_{\nu\beta} O_{\lambda\sigma}.\end{aligned} \hspace{\stretch{1}}(2.41)

We’ve got 2.32, which, after dropping the index positioning (because we are in a Euclidean space), becomes

\begin{aligned}\epsilon_{\mu \nu \lambda} \text{Det} {\left\lVert{A_{ij}}\right\rVert} = \epsilon_{\alpha \beta \sigma} A_{\alpha \mu} A_{\beta \nu} A_{\sigma \lambda}.\end{aligned} \hspace{\stretch{1}}(2.42)

Let $A_{i j} = O_{j i}$, which gives us

\begin{aligned}\epsilon_{\mu\nu\lambda} \rightarrow \epsilon_{\mu\nu\lambda} \text{Det} A^\text{T}\end{aligned} \hspace{\stretch{1}}(2.43)

but since $\text{Det} O = \text{Det} O^\text{T}$, we have shown that $\epsilon_{\mu\nu\lambda}$ is invariant under rotation.
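
A numerical check of this invariance with a random rotation (an aside of my own):

```python
import numpy as np
from scipy.spatial.transform import Rotation

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

# transform epsilon per 2.41 with a random proper rotation (Det O = +1)
O = Rotation.random().as_matrix()
eps_rot = np.einsum('abs,ma,nb,ls->mnl', eps, O, O, O)
print(np.allclose(eps_rot, eps))  # True
```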

## 4. Statement. Rotational invariance of 4D antisymmetric tensor

Use the previous results to show that $\epsilon_{i j k l}$ is invariant under Lorentz transformations.

## 4. Solution

This follows the same way. We assume a transformation of coordinates of the following form

\begin{aligned}(x')^i &= {O^i}_j x^j \\ (x')_i &= {O_i}^j x_j,\end{aligned} \hspace{\stretch{1}}(2.44)

where the determinant of ${O^i}_j = 1$ (sanity check of sign: ${O^i}_j = {\delta^i}_j$).

Our antisymmetric tensor transforms as its coordinates individually

\begin{aligned}\epsilon_{i j k l} &\rightarrow \epsilon_{a b c d} {O_i}^a{O_j}^b{O_k}^c{O_l}^d \\ &= \epsilon^{a b c d} O_{i a}O_{j b}O_{k c}O_{l d} \\ \end{aligned}

Let $P_{ij} = O_{ji}$, and raise and lower all the indexes in 2.35, which gives

\begin{aligned}-\epsilon_{s t u v} \text{Det} {\left\lVert{P_{ij}}\right\rVert}=\epsilon^{a b c d} P_{a s} P_{b t} P_{c u} P_{d v}.\end{aligned} \hspace{\stretch{1}}(2.46)

We have

\begin{aligned}\epsilon_{i j k l} &= \epsilon^{a b c d} P_{a i}P_{b j}P_{c k}P_{d l} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{P_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{O_{ij}}\right\rVert} \\ &=-\epsilon_{i j k l} \text{Det} {\left\lVert{g_{im} {O^m}_j }\right\rVert} \\ &=-\epsilon_{i j k l} (-1)(1) \\ &=\epsilon_{i j k l}\end{aligned}

Since $\epsilon_{i j k l} = -\epsilon^{i j k l}$ both are therefore invariant under Lorentz transformation.

## 5. Statement. Sum of contracting symmetric and antisymmetric rank 2 tensors

Show that $A^{ij} B_{ij} = 0$ if $A$ is symmetric and $B$ is antisymmetric.

## 5. Solution

We swap indexes in $B$, switch dummy indexes, then swap indexes in $A$

\begin{aligned}A^{i j} B_{i j} &= -A^{i j} B_{j i} \\ &= -A^{j i} B_{i j} \\ &= -A^{i j} B_{i j} \\ \end{aligned}

Our result is the negative of itself, so must be zero.
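
A two line numerical illustration of this (my own aside):

```python
import numpy as np

M, N = np.random.rand(4, 4), np.random.rand(4, 4)
A, B = M + M.T, N - N.T  # symmetric A, antisymmetric B
print(np.isclose(np.einsum('ij,ij->', A, B), 0.0))  # True
```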

## 6. Statement. Characteristic equation for the electromagnetic strength tensor

Show that $P(\lambda) = \text{Det} {\left\lVert{F_{i j} - \lambda g_{i j}}\right\rVert}$ is invariant under Lorentz transformations. Consider the polynomial of $P(\lambda)$, also called the characteristic polynomial of the matrix ${\left\lVert{F_{i j}}\right\rVert}$. Find the coefficients of the expansion of $P(\lambda)$ in powers of $\lambda$ in terms of the components of ${\left\lVert{F_{i j}}\right\rVert}$. Use the result to argue that $\mathbf{E} \cdot \mathbf{B}$ and $\mathbf{E}^2 - \mathbf{B}^2$ are Lorentz invariant.

## 6. Solution

### The invariance of the determinant

Let’s consider how any lower index rank 2 tensor transforms. Given a transformation of coordinates

\begin{aligned}(x^i)' &= {O^i}_j x^j \\ (x_i)' &= {O_i}^j x_j ,\end{aligned} \hspace{\stretch{1}}(2.47)

where $\text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = 1$, and ${O_i}^j = {O^m}_n g_{i m} g^{j n}$. Let’s reflect briefly on why this determinant is unit valued. We have

\begin{aligned}(x^i)' (x_i)'= {O_i}^a x_a {O^i}_b x^b = x^b x_b,\end{aligned} \hspace{\stretch{1}}(2.49)

which implies that the transformation product is

\begin{aligned}{O_i}^a {O^i}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.50)

the identity matrix. The identity matrix has unit determinant, so we must have

\begin{aligned}1 = (\text{Det} \hat{G})^2 (\text{Det} {\left\lVert{ {O^i}_j }\right\rVert})^2.\end{aligned} \hspace{\stretch{1}}(2.51)

Since $\text{Det} \hat{G} = -1$ we have

\begin{aligned}\text{Det} {\left\lVert{ {O^i}_j }\right\rVert} = \pm 1,\end{aligned} \hspace{\stretch{1}}(2.52)

which is all that we can say about the determinant of this class of transformations by considering just invariance. If we restrict the transformations of coordinates to those of the same determinant sign as the identity matrix, we rule out reflections in time or space. This seems to be the essence of the $SO(1,3)$ labeling.

Why dwell on this? Well, I wanted to be clear on the conventions I’d chosen, since parts of the course notes used $\hat{O} = {\left\lVert{O^{i j}}\right\rVert}$, and $X' = \hat{O} X$, and gave that matrix unit determinant. That $O^{i j}$ looks like it is equivalent to my ${O^i}_j$, except that the one in the course notes is loose when it comes to lower and upper indexes since it gives $(x')^i = O^{i j} x^j$.

I’ll write

\begin{aligned}\hat{O} = {\left\lVert{{O^i}_j}\right\rVert},\end{aligned} \hspace{\stretch{1}}(2.53)

and require this (not ${\left\lVert{O^{i j}}\right\rVert}$) to be the matrix with unit determinant. Having cleared the index upper and lower confusion I had trying to reconcile the class notes with the rules for index manipulation, let’s now consider the Lorentz transformation of a lower index rank 2 tensor (not necessarily antisymmetric or symmetric)

We have, transforming in the same fashion as a lower index coordinate four vector (but twice, once for each index)

\begin{aligned}A_{i j} \rightarrow A_{k m} {O_i}^k{O_j}^m.\end{aligned} \hspace{\stretch{1}}(2.54)

The determinant of the transformation tensor ${O_i}^j$ is

\begin{aligned}\text{Det} {\left\lVert{ {O_i}^j }\right\rVert} = \text{Det} {\left\lVert{ g^{i m} {O^m}_n g^{n j} }\right\rVert} = (\text{Det} \hat{G}) (1) (\text{Det} \hat{G} ) = (-1)^2 (1) = 1.\end{aligned} \hspace{\stretch{1}}(2.55)

We see that the determinant of a lower index rank 2 tensor is invariant under Lorentz transformation. This would include our characteristic polynomial $P(\lambda)$.

### Expanding the determinant.

Utilizing 2.39 we can now calculate the characteristic polynomial. This is

\begin{aligned}\text{Det} {\left\lVert{F_{ij} - \lambda g_{ij} }\right\rVert}&= \frac{1}{{4!}}\epsilon^{s t u v} \epsilon^{a b c d} (F_{ a s } - \lambda g_{a s}) (F_{ b t } - \lambda g_{b t}) (F_{ c u } - \lambda g_{c u}) (F_{ d v } - \lambda g_{d v}) \\ &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} ({F^a}_s - \lambda {g^a}_s) ({F^b}_t - \lambda {g^b}_t) ({F^c}_u - \lambda {g^c}_u) ({F^d}_v - \lambda {g^d}_v) \\ \end{aligned}

However, ${g^a}_b = g_{b c} g^{a c}$, or ${\left\lVert{{g^a}_b}\right\rVert} = \hat{G}^2 = I$. This means we have

\begin{aligned}{g^a}_b = {\delta^a}_b,\end{aligned} \hspace{\stretch{1}}(2.56)

and our determinant is reduced to

\begin{aligned}\begin{aligned}P(\lambda) &=\frac{1}{{24}}\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({F^a}_s {F^b}_t - \lambda( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) + \lambda^2 {\delta^a}_s {\delta^b}_t \Bigr) \\ &\times \qquad \qquad \Bigl({F^c}_u {F^d}_v - \lambda( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + \lambda^2 {\delta^c}_u {\delta^d}_v \Bigr) \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.57)

If we expand this out we have our powers of $\lambda$ coefficients are

\begin{aligned}\lambda^0 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {F^a}_s {F^b}_t {F^c}_u {F^d}_v \\ \lambda^1 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ({\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) {F^a}_s {F^b}_t - ({\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {F^c}_u {F^d}_v \Bigr) \\ \lambda^2 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) + {\delta^a}_s {\delta^b}_t {F^c}_u {F^d}_v \Bigr) \\ \lambda^3 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl(- ( {\delta^a}_s {F^b}_t + {\delta^b}_t {F^a}_s ) {\delta^c}_u {\delta^d}_v - {\delta^a}_s {\delta^b}_t ( {\delta^c}_u {F^d}_v + {\delta^d}_v {F^c}_u ) \Bigr) \\ \lambda^4 &:\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^a}_s {\delta^b}_t {\delta^c}_u {\delta^d}_v \Bigr) \\ \end{aligned}

By 2.39 the $\lambda^0$ coefficient is just $\text{Det} {\left\lVert{F_{i j}}\right\rVert}$.

The $\lambda^3$ terms can be seen to be zero. For example, the first one is

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^a}_s {F^b}_t {\delta^c}_u {\delta^d}_v &=-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{s b u v} {F^b}_t \\ &=-\frac{1}{{12}} \delta^{t}_b {F^b}_t \\ &=-\frac{1}{{12}} {F^b}_b \\ &=-\frac{1}{{12}} F^{bu} g_{ub} \\ &= 0,\end{aligned}

where the final equality to zero comes from summing a symmetric and antisymmetric product.

Similarly, the $\lambda^1$ coefficient can be shown to be zero. Again taking the first term as a sample, we have

\begin{aligned}-\frac{1}{{24}} \epsilon^{s t u v} \epsilon_{a b c d} {\delta^c}_u {F^d}_v {F^a}_s {F^b}_t &=-\frac{1}{{24}} \epsilon^{u s t v} \epsilon_{u a b d} {F^d}_v {F^a}_s {F^b}_t \\ &=-\frac{1}{{24}} \delta^{[s}_a\delta^{t}_b\delta^{v]}_d{F^d}_v {F^a}_s {F^b}_t \\ &=-\frac{1}{{24}} {F^a}_{[s}{F^b}_{t}{F^d}_{v]} \\ \end{aligned}

Disregarding the $-1/24$ factor, let’s just expand this antisymmetric sum

\begin{aligned}{F^a}_{[a}{F^b}_{b}{F^d}_{d]}&={F^a}_{a}{F^b}_{b}{F^d}_{d}+{F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a}-{F^a}_{a}{F^b}_{d}{F^d}_{b}-{F^a}_{d}{F^b}_{b}{F^d}_{a}-{F^a}_{b}{F^b}_{a}{F^d}_{d} \\ &={F^a}_{d}{F^b}_{a}{F^d}_{b}+{F^a}_{b}{F^b}_{d}{F^d}_{a} \\ \end{aligned}

The two terms retained above are the only ones without a vanishing ${F^i}_i$ factor. Consider the first of these. Employing the metric tensor to raise indexes, so that the antisymmetry of $F^{ij}$ can be utilized, and then finally relabeling all the dummy indexes, we have

\begin{aligned}{F^a}_{d}{F^b}_{a}{F^d}_{b}&=F^{a u}F^{b v}F^{d w}g_{d u}g_{a v}g_{b w} \\ &=(-1)^3F^{u a}F^{v b}F^{w d}g_{d u}g_{a v}g_{b w} \\ &=-(F^{u a}g_{a v})(F^{v b}g_{b w} )(F^{w d}g_{d u})\\ &=-{F^u}_v{F^v}_w{F^w}_u\\ &=-{F^a}_b{F^b}_d{F^d}_a\\ \end{aligned}

This is just the negative of the second term in the sum, leaving us with zero.

Finally, we have for the $\lambda^2$ coefficient ($\times 24$)

\begin{aligned}&\epsilon^{s t u v} \epsilon_{a b c d} \Bigl({\delta^c}_u {\delta^d}_v {F^a}_s {F^b}_t +{\delta^a}_s {F^b}_t {\delta^c}_u {F^d}_v +{\delta^b}_t {F^a}_s {\delta^d}_v {F^c}_u \\ &\qquad +{\delta^b}_t {F^a}_s {\delta^c}_u {F^d}_v +{\delta^a}_s {F^b}_t {\delta^d}_v {F^c}_u + {\delta^a}_s {\delta^b}_t {F^c}_u {F^d}_v \Bigr) \\ &=\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t +\epsilon^{s t u v} \epsilon_{s b u d} {F^b}_t {F^d}_v +\epsilon^{s t u v} \epsilon_{a t c v} {F^a}_s {F^c}_u \\ &\qquad +\epsilon^{s t u v} \epsilon_{a t u d} {F^a}_s {F^d}_v +\epsilon^{s t u v} \epsilon_{s b c v} {F^b}_t {F^c}_u + \epsilon^{s t u v} \epsilon_{s t c d} {F^c}_u {F^d}_v \\ &=\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t +\epsilon^{t v s u } \epsilon_{b d s u} {F^b}_t {F^d}_v +\epsilon^{s u t v} \epsilon_{a c t v} {F^a}_s {F^c}_u \\ &\qquad +\epsilon^{s v t u} \epsilon_{a d t u} {F^a}_s {F^d}_v +\epsilon^{t u s v} \epsilon_{b c s v} {F^b}_t {F^c}_u + \epsilon^{u v s t} \epsilon_{c d s t} {F^c}_u {F^d}_v \\ &=6\epsilon^{s t u v} \epsilon_{a b u v} {F^a}_s {F^b}_t \\ &=6 (2){\delta^{[s}}_a{\delta^{t]}}_b{F^a}_s {F^b}_t \\ &=12{F^a}_{[a} {F^b}_{b]} \\ &=12( {F^a}_{a} {F^b}_{b} - {F^a}_{b} {F^b}_{a} ) \\ &=-12 {F^a}_{b} {F^b}_{a} \\ &=-12 F^{a b} F_{b a} \\ &=12 F^{a b} F_{a b}\end{aligned}

Therefore, our characteristic polynomial is

\begin{aligned}\boxed{P(\lambda) = \text{Det} {\left\lVert{F_{i j}}\right\rVert} + \frac{\lambda^2}{2} F^{a b} F_{a b} + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.58)

Observe that in matrix form our strength tensors are

\begin{aligned}{\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.59)

From these we can compute $F^{a b} F_{a b}$ easily by inspection

\begin{aligned}F^{a b} F_{a b} = 2 (\mathbf{B}^2 - \mathbf{E}^2).\end{aligned} \hspace{\stretch{1}}(2.61)

Computing the determinant is not so easy. The dumb and simple way of expanding by cofactors takes two pages, and yields eventually

\begin{aligned}\text{Det} {\left\lVert{ F^{i j} }\right\rVert} = (\mathbf{E} \cdot \mathbf{B})^2.\end{aligned} \hspace{\stretch{1}}(2.62)
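
That two page cofactor expansion, along with 2.61, can be checked in a few lines of sympy. This is a sketch of my own, not part of the original computation:

```python
import sympy as sp

Ex, Ey, Ez, Bx, By, Bz = sp.symbols('E_x E_y E_z B_x B_y B_z')
F = sp.Matrix([[0, -Ex, -Ey, -Ez],
               [Ex, 0, -Bz, By],
               [Ey, Bz, 0, -Bx],
               [Ez, -By, Bx, 0]])  # ||F^{ij}|| of 2.59
g = sp.diag(1, -1, -1, -1)
F_down = g * F * g                 # ||F_{ij}||

print(sp.factor(F.det()))                 # (B_x*E_x + B_y*E_y + B_z*E_z)**2
print(sp.expand((F * F_down.T).trace()))  # F^{ab} F_{ab} = 2 (B^2 - E^2)
```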

That supplies us with a relation for the characteristic polynomial in $\mathbf{E}$ and $\mathbf{B}$

\begin{aligned}\boxed{P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.}\end{aligned} \hspace{\stretch{1}}(2.63)

We found this in homework 2 for the special case where $\mathbf{E}$ and $\mathbf{B}$ were perpendicular. With that perpendicularity we can solve for the eigenvalues by inspection

\begin{aligned}\lambda \in \{ 0, 0, \pm \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 } \},\end{aligned} \hspace{\stretch{1}}(2.64)

and were able to diagonalize the matrix ${F^{i}}_j$ to solve the Lorentz force equation in parametric form. When ${\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}$ we had real eigenvalues, with an orthogonal diagonalization in the $\mathbf{B} = 0$ case. For ${\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}$ we had two purely imaginary eigenvalues, and when $\mathbf{E} = 0$ this was a Hermitian diagonalization. For the general case, when neither $\mathbf{E}$ nor $\mathbf{B}$ was zero, things didn’t have the same nice closed form solution.

In general our eigenvalues are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(2.65)

For the purposes of this problem we really only wish to show that $\mathbf{E} \cdot \mathbf{B}$ and $\mathbf{E}^2 - \mathbf{B}^2$ are Lorentz invariants. When $\lambda = 0$ we have $P(\lambda) = (\mathbf{E} \cdot \mathbf{B})^2$, a Lorentz invariant. This must mean that $\mathbf{E} \cdot \mathbf{B}$ is itself a Lorentz invariant. Since that is invariant, and we require $P(\lambda)$ to be invariant for any other possible values of $\lambda$, the difference $\mathbf{E}^2 - \mathbf{B}^2$ must also be Lorentz invariant.

## 7. Statement. Show that the pseudoscalar invariant has only boundary effects.

Use integration by parts to show that $\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }$ only depends on the values of $A^i(x)$ at the “boundary” of spacetime (e.g. the “surface” depicted on page 105 of the notes) and hence does not affect the equations of motion for the electromagnetic field.

## 7. Solution

This proceeds in a fairly straightforward fashion

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&=\int d^4 x \epsilon^{i j k l} (\partial_i A_j - \partial_j A_i) F_{ k l } \\ &=\int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } -\epsilon^{j i k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} (\partial_i A_j) F_{ k l } \\ &=2 \int d^4 x \epsilon^{i j k l} \left( \frac{\partial {}}{\partial {x^i}}\left(A_j F_{ k l }\right)-A_j \frac{\partial { F_{ k l } }}{\partial {x^i}}\right)\\ \end{aligned}

Now, observe that by the Bianchi identity, this second term is zero

\begin{aligned}\epsilon^{i j k l} \frac{\partial { F_{ k l } }}{\partial {x^i}}=-\epsilon^{j i k l} \partial_i F_{ k l } = 0\end{aligned} \hspace{\stretch{1}}(2.66)

Now we have a set of perfect differentials, and can integrate

\begin{aligned}\int d^4 x \epsilon^{i j k l} F_{ i j } F_{ k l }&= 2 \int d^4 x \epsilon^{i j k l} \frac{\partial {}}{\partial {x^i}}(A_j F_{ k l })\\ &= 2 \int dx^j dx^k dx^l\epsilon^{i j k l} {\left.{{(A_j F_{ k l })}}\right\vert}_{{\Delta x^i}}\\ \end{aligned}

We are left with only contributions to the integral from the boundary terms on the spacetime hypervolume, the three-volume surfaces bounding the four-volume integration in the original integral.

## 8. Statement. Electromagnetic duality transformations.

Show that the Maxwell equations in vacuum are invariant under the transformation: $F_{i j} \rightarrow \tilde{F}_{i j}$, where $\tilde{F}_{i j} = \frac{1}{{2}} \epsilon_{i j k l} F^{k l}$ is the dual electromagnetic stress tensor. Replacing $F$ with $\tilde{F}$ is known as “electric-magnetic duality”. Explain this name by considering the transformation in terms of $\mathbf{E}$ and $\mathbf{B}$. Are the Maxwell equations with sources invariant under electric-magnetic duality transformations?

## 8. Solution

Let’s first consider the explanation of the name. First recall what the expansions are of $F_{i j}$ and $F^{i j}$ in terms of $\mathbf{E}$ and $\mathbf{B}$. These are

\begin{aligned}F_{0 \alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} - \frac{\partial {\phi}}{\partial {x^\alpha}} \\ &= E_\alpha\end{aligned}

with $F^{0 \alpha} = -E^\alpha$, and $E^\alpha = E_\alpha$.

The magnetic field components are

\begin{aligned}F_{\beta \alpha} &= \partial_\beta A_\alpha - \partial_\alpha A_\beta \\ &= -\partial_\beta A^\alpha + \partial_\alpha A^\beta \\ &= \epsilon_{\alpha \beta \sigma} B^\sigma\end{aligned}

with $F^{\beta \alpha} = \epsilon^{\alpha \beta \sigma} B_\sigma$ and $B_\sigma = B^\sigma$.

Now let’s expand the dual tensors. These are

\begin{aligned}\tilde{F}_{0 \alpha} &=\frac{1}{{2}} \epsilon_{0 \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} F^{\beta \sigma} \\ &=\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\sigma \beta \mu} B_\mu \\ &=-\frac{1}{{2}} \epsilon_{0 \alpha \beta \sigma} \epsilon^{\mu \beta \sigma} B_\mu \\ &=-\frac{1}{{2}} (2!) {\delta_\alpha}^\mu B_\mu \\ &=- B_\alpha \\ \end{aligned}

and

\begin{aligned}\tilde{F}_{\beta \alpha} &=\frac{1}{{2}} \epsilon_{\beta \alpha i j} F^{i j} \\ &=\frac{1}{{2}} \left(\epsilon_{\beta \alpha 0 \sigma} F^{0 \sigma} +\epsilon_{\beta \alpha \sigma 0} F^{\sigma 0} \right) \\ &=\epsilon_{0 \beta \alpha \sigma} (-E^\sigma) \\ &=\epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned}

Summarizing we have

\begin{aligned}F_{0 \alpha} &= E^\alpha \\ F^{0 \alpha} &= -E^\alpha \\ F^{\beta \alpha} &= F_{\beta \alpha} = \epsilon_{\alpha \beta \sigma} B^\sigma \\ \tilde{F}_{0 \alpha} &= - B_\alpha \\ \tilde{F}^{0 \alpha} &= B_\alpha \\ \tilde{F}_{\beta \alpha} &= \tilde{F}^{\beta \alpha} = \epsilon_{\alpha \beta \sigma} E^\sigma\end{aligned} \hspace{\stretch{1}}(2.67)

Is there a sign error in the $\tilde{F}_{0 \alpha} = - B_\alpha$ result? Other than that we have the same sort of structure for the tensor with $E$ and $B$ switched around.

Let’s write these in matrix form, to compare

\begin{aligned}{\left\lVert{ \tilde{F}_{i j} }\right\rVert} &= \begin{bmatrix}0 & -B_x & -B_y & -B_z \\ B_x & 0 & -E_z & E_y \\ B_y & E_z & 0 & E_x \\ B_z & -E_y & -E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ \tilde{F}^{i j} }\right\rVert} &= \begin{bmatrix}0 & B_x & B_y & B_z \\ -B_x & 0 & -E_z & E_y \\ -B_y & E_z & 0 & -E_x \\ -B_z & -E_y & E_x & 0 \\ \end{bmatrix} \\ {\left\lVert{ F^{ij} }\right\rVert} &= \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \\ {\left\lVert{ F_{ij} }\right\rVert} &= \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.73)

From these we can see by inspection that we have

\begin{aligned}\tilde{F}^{i j} F_{ij} = \tilde{F}_{i j} F^{ij} = 4 (\mathbf{E} \cdot \mathbf{B})\end{aligned} \hspace{\stretch{1}}(2.74)

This is consistent with the stated result in [1] (except for a factor of $c$ due to units differences), so it appears the signs above are all kosher.
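
As a further check of the signs, 2.74 can be verified numerically straight from the definition $\tilde{F}_{i j} = \frac{1}{{2}} \epsilon_{i j k l} F^{k l}$, using the convention $\epsilon_{0 1 2 3} = +1$. A sketch of my own:

```python
import numpy as np
from itertools import permutations

def sign(p):
    # parity of a permutation, counting transpositions in a cycle sort
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = sign(p)  # epsilon_{ijkl}, with eps[0,1,2,3] = +1

E, B = np.random.rand(3), np.random.rand(3)
Ex, Ey, Ez = E
Bx, By, Bz = B
F_up = np.array([[0, -Ex, -Ey, -Ez],
                 [Ex, 0, -Bz, By],
                 [Ey, Bz, 0, -Bx],
                 [Ez, -By, Bx, 0]])
F_dual = 0.5 * np.einsum('ijkl,kl->ij', eps, F_up)  # dual, lower indexes
print(np.isclose(np.einsum('ij,ij->', F_dual, F_up), 4*np.dot(E, B)))  # True
```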

Now, let’s see if the dual tensor satisfies the vacuum equations.

\begin{aligned}\partial_j \tilde{F}^{i j}&=\partial_j \frac{1}{{2}} \epsilon^{i j k l} F_{k l} \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j (\partial_k A_l - \partial_l A_k) \\ &=\frac{1}{{2}} \epsilon^{i j k l} \partial_j \partial_k A_l - \frac{1}{{2}} \epsilon^{i j l k} \partial_j \partial_k A_l \\ &=\frac{1}{{2}} \left( \epsilon^{i j k l} - \epsilon^{i j l k} \right) \partial_j \partial_k A_l \\ &=\epsilon^{i j k l} \partial_j \partial_k A_l \\ &= 0 \qquad\square\end{aligned}

The final sum vanishes since the antisymmetric $\epsilon^{i j k l}$ is contracted with the symmetric $\partial_j \partial_k$. So the first equation checks out, provided we have no sources. With sources the dual cannot satisfy Maxwell’s equations, since that would force the four current density to be zero.

How about the Bianchi identity? That gives us

\begin{aligned}\epsilon^{i j k l} \partial_j \tilde{F}_{k l} &=\epsilon^{i j k l} \partial_j \frac{1}{{2}} \epsilon_{k l a b} F^{a b} \\ &=\frac{1}{{2}} \epsilon^{k l i j} \epsilon_{k l a b} \partial_j F^{a b} \\ &=\frac{1}{{2}} (2!) {\delta^i}_{[a} {\delta^j}_{b]} \partial_j F^{a b} \\ &=\partial_j (F^{i j} - F^{j i} ) \\ &=2 \partial_j F^{i j} .\end{aligned}

The factor of two is slightly curious. Is there a mistake above? If there is a mistake, it doesn’t change the fact that Maxwell’s equation

\begin{aligned}\partial_k F^{k i} = \frac{4 \pi}{c} j^i\end{aligned} \hspace{\stretch{1}}(2.75)

gives us zero for the Bianchi identity under the source free condition $j^i = 0$.

# Problem 2. Transformation properties of $\mathbf{E}$ and $\mathbf{B}$, again.

## 1. Statement

Use the form of $F^{i j}$ from page 82 in the class notes, the transformation law for ${\left\lVert{ F^{i j} }\right\rVert}$ given further down that same page, and the explicit form of the $SO(1,3)$ matrix $\hat{O}$ (say, corresponding to motion in the positive $x_1$ direction with speed $v$) to derive the transformation law of the fields $\mathbf{E}$ and $\mathbf{B}$. Use the transformation law to find the electromagnetic field of a charged particle moving with constant speed $v$ in the positive $x_1$ direction and check that the result agrees with the one that you obtained in Homework 2.

## 1. Solution

Given a transformation of coordinates

\begin{aligned}{x'}^i \rightarrow {O^i}_j x^j\end{aligned} \hspace{\stretch{1}}(3.76)

our rank 2 tensor $F^{i j}$ transforms as

\begin{aligned}F^{i j} \rightarrow {O^i}_aF^{a b}{O^j}_b.\end{aligned} \hspace{\stretch{1}}(3.77)

Introducing matrices

\begin{aligned}\hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ \hat{F} &= {\left\lVert{F^{ij}}\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.78)

and noting that $\hat{O}^\text{T} = {\left\lVert{{O^j}_i}\right\rVert}$, we can express the electromagnetic strength tensor transformation as

\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}.\end{aligned} \hspace{\stretch{1}}(3.80)

The class notes use ${x'}^i \rightarrow O^{ij} x^j$, which violates our conventions on mixed upper and lower indexes, but the end result 3.80 is the same.

For a boost with rapidity $\alpha$ along the positive $x_1$ direction, this transformation matrix is

\begin{aligned}{\left\lVert{{O^i}_j}\right\rVert} =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.81)

Writing

\begin{aligned}C &= \cosh\alpha = \gamma \\ S &= -\sinh\alpha = -\gamma \beta,\end{aligned} \hspace{\stretch{1}}(3.82)

we can compute the transformed field strength tensor

\begin{aligned}\hat{F}' &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix} \begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \\ &=\begin{bmatrix}C & S & 0 & 0 \\ S & C & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}- S E_x & -C E_x & -E_y & -E_z \\ C E_x & S E_x & -B_z & B_y \\ C E_y + S B_z & S E_y + C B_z & 0 & -B_x \\ C E_z - S B_y & S E_z - C B_y & B_x & 0 \end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -C E_y - S B_z & - C E_z + S B_y \\ E_x & 0 & -S E_y - C B_z & - S E_z + C B_y \\ C E_y + S B_z & S E_y + C B_z & 0 & -B_x \\ C E_z - S B_y & S E_z - C B_y & B_x & 0\end{bmatrix} \\ &=\begin{bmatrix}0 & -E_x & -\gamma(E_y - \beta B_z) & - \gamma(E_z + \beta B_y) \\ E_x & 0 & - \gamma (-\beta E_y + B_z) & \gamma( \beta E_z + B_y) \\ \gamma (E_y - \beta B_z) & \gamma(-\beta E_y + B_z) & 0 & -B_x \\ \gamma (E_z + \beta B_y) & -\gamma(\beta E_z + B_y) & B_x & 0\end{bmatrix}.\end{aligned}

As a check we have the antisymmetry that is expected. There is also a regularity to the end result that is aesthetically pleasing, hinting that things are hopefully error free. In coordinates for $\mathbf{E}$ and $\mathbf{B}$ this is

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y - \beta B_z ) \\ E_z &\rightarrow \gamma ( E_z + \beta B_y ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y + \beta E_z ) \\ B_z &\rightarrow \gamma ( B_z - \beta E_y ) \end{aligned} \hspace{\stretch{1}}(3.84)

Writing $\boldsymbol{\beta} = \mathbf{e}_1 \beta$, we have

\begin{aligned}\boldsymbol{\beta} \times \mathbf{B} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \beta & 0 & 0 \\ B_x & B_y & B_z\end{vmatrix} = \mathbf{e}_2 (-\beta B_z) + \mathbf{e}_3( \beta B_y ),\end{aligned} \hspace{\stretch{1}}(3.90)

which puts us en route to a tidier vector form

\begin{aligned}E_x &\rightarrow E_x \\ E_y &\rightarrow \gamma ( E_y + (\boldsymbol{\beta} \times \mathbf{B})_y ) \\ E_z &\rightarrow \gamma ( E_z + (\boldsymbol{\beta} \times \mathbf{B})_z ) \\ B_x &\rightarrow B_x \\ B_y &\rightarrow \gamma ( B_y - (\boldsymbol{\beta} \times \mathbf{E})_y ) \\ B_z &\rightarrow \gamma ( B_z - (\boldsymbol{\beta} \times \mathbf{E})_z ).\end{aligned} \hspace{\stretch{1}}(3.91)

For a vector $\mathbf{A}$, write $\mathbf{A}_\parallel = (\mathbf{A} \cdot \hat{\mathbf{v}})\hat{\mathbf{v}}$, $\mathbf{A}_\perp = \mathbf{A} - \mathbf{A}_\parallel$, allowing a compact description of the field transformation

\begin{aligned}\mathbf{E} &\rightarrow \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp + \gamma (\boldsymbol{\beta} \times \mathbf{B})_\perp \\ \mathbf{B} &\rightarrow \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp - \gamma (\boldsymbol{\beta} \times \mathbf{E})_\perp.\end{aligned} \hspace{\stretch{1}}(3.97)
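
A numerical spot check of 3.84 against the matrix form 3.80, a sketch of mine with an arbitrary boost and arbitrary fields:

```python
import numpy as np

beta = 0.6
gamma = 1/np.sqrt(1 - beta**2)
C, S = gamma, -gamma*beta
O = np.array([[C, S, 0, 0],
              [S, C, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

Ex, Ey, Ez, Bx, By, Bz = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6
F = np.array([[0, -Ex, -Ey, -Ez],
              [Ex, 0, -Bz, By],
              [Ey, Bz, 0, -Bx],
              [Ez, -By, Bx, 0]])
Fp = O @ F @ O.T  # the transformation 3.80

print(np.allclose(Fp[1:, 0],
                  [Ex, gamma*(Ey - beta*Bz), gamma*(Ez + beta*By)]))  # E'
print(np.allclose([Fp[3, 2], Fp[1, 3], Fp[2, 1]],
                  [Bx, gamma*(By + beta*Ez), gamma*(Bz - beta*Ey)]))  # B'
```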

Now, we want to consider the field of a moving particle. In the particle’s (unprimed) rest frame the field due to its potential $\phi = q/r$ is

\begin{aligned}\mathbf{E} &= \frac{q}{r^2} \hat{\mathbf{r}} \\ \mathbf{B} &= 0.\end{aligned} \hspace{\stretch{1}}(3.99)

Coordinates for a “stationary” observer, who sees this particle moving along the x-axis at speed $v$ are related by a boost in the $-v$ direction

\begin{aligned}\begin{bmatrix}ct' \\ x' \\ y' \\ z'\end{bmatrix}=\begin{bmatrix}\gamma & \gamma (v/c) & 0 & 0 \\ \gamma (v/c) & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}ct \\ x \\ y \\ z\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.101)

Therefore the fields in the observer frame will be

\begin{aligned}\mathbf{E}' &= \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp - \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{B})_\perp = \mathbf{E}_\parallel + \gamma \mathbf{E}_\perp \\ \mathbf{B}' &= \mathbf{B}_\parallel + \gamma \mathbf{B}_\perp + \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp = \gamma \frac{v}{c}(\mathbf{e}_1 \times \mathbf{E})_\perp \end{aligned} \hspace{\stretch{1}}(3.102)

More explicitly with $\mathbf{E} = \frac{q}{r^3}(x, y, z)$ this is

\begin{aligned}\mathbf{E}' &= \frac{q}{r^3}(x, \gamma y, \gamma z) \\ \mathbf{B}' &= \gamma \frac{q v}{c r^3} ( 0, -z, y )\end{aligned} \hspace{\stretch{1}}(3.104)

Comparing to Problem 3 in Problem set 2, I see that this matches the result obtained by separately transforming the gradient, the time partial, and the scalar potential. Actually, if I am being honest, I see that I made a sign error in all the coordinates of $\mathbf{E}'$ when I initially did (this ungraded problem) in problem set 2. That sign error should have been obvious by considering the $v=0$ case which would have mysteriously resulted in inversion of all the coordinates of the observed electric field.

## 2. Statement

A particle is moving with velocity $\mathbf{v}$ in perpendicular $\mathbf{E}$ and $\mathbf{B}$ fields, all given in some particular “stationary” frame of reference.

\begin{enumerate}
\item Show that there exists a frame where the problem of finding the particle trajectory can be reduced to having either only an electric or only a magnetic field.
\item Explain what determines which case takes place.
\item Find the velocity $\mathbf{v}_0$ of that frame relative to the “stationary” frame.
\end{enumerate}

## 2. Solution

\paragraph{Part 1 and 2:} Existence of the transformation.

In the single particle Lorentz trajectory problem we wish to solve

\begin{aligned}m c \frac{du^i}{ds} = \frac{e}{c} F^{i j} u_j,\end{aligned} \hspace{\stretch{1}}(3.106)

which in matrix form we can write as

\begin{aligned}\frac{d U}{ds} = \frac{e}{m c^2} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.107)

where we write our column vector proper velocity as $U = {\left\lVert{u^i}\right\rVert}$. Under transformation of coordinates ${u'}^i = {O^i}_j u^j$, with $\hat{O} = {\left\lVert{{O^i}_j}\right\rVert}$, this becomes

\begin{aligned}\hat{O} \frac{d U}{ds} = \frac{e}{m c^2} \hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.108)

Suppose we can find eigenvectors for the matrix $\hat{O} \hat{F} \hat{O}^\text{T} \hat{G}$. That is for some eigenvalue $\lambda$, we can find an eigenvector $\Sigma$

\begin{aligned}\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} \Sigma = \lambda \Sigma.\end{aligned} \hspace{\stretch{1}}(3.109)

Rearranging we have

\begin{aligned}(\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) \Sigma = 0\end{aligned} \hspace{\stretch{1}}(3.110)

and conclude that $\Sigma$ lies in the null space of the matrix $\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I$ and that this difference of matrices must have a zero determinant

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} \hat{G} - \lambda I) = -\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda \hat{G}) = 0.\end{aligned} \hspace{\stretch{1}}(3.111)

Since $\hat{G} = \hat{O} \hat{G} \hat{O}^\text{T}$ for any Lorentz transformation $\hat{O}$ in $SO(1,3)$, and $\text{Det} ABC = \text{Det} A \text{Det} B \text{Det} C$ we have

\begin{aligned}\text{Det} (\hat{O} \hat{F} \hat{O}^\text{T} - \lambda \hat{G})= \text{Det} (\hat{F} - \lambda \hat{G}).\end{aligned} \hspace{\stretch{1}}(3.112)

In problem 1.6, we called this our characteristic equation $P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G})$. Observe that the characteristic equation is Lorentz invariant for any $\lambda$, which requires that the eigenvalues $\lambda$ are also Lorentz invariants.

In problem 1.6 of this problem set we computed that this characteristic equation expands to

\begin{aligned}P(\lambda) = \text{Det} (\hat{F} - \lambda \hat{G}) = (\mathbf{E} \cdot \mathbf{B})^2 + \lambda^2 (\mathbf{B}^2 - \mathbf{E}^2) + \lambda^4.\end{aligned} \hspace{\stretch{1}}(3.113)

The eigenvalues for the system, also each necessarily Lorentz invariants, are

\begin{aligned}\lambda = \pm \frac{1}{{\sqrt{2}}} \sqrt{ \mathbf{E}^2 - \mathbf{B}^2 \pm \sqrt{ (\mathbf{E}^2 - \mathbf{B}^2)^2 - 4 (\mathbf{E} \cdot \mathbf{B})^2 }}.\end{aligned} \hspace{\stretch{1}}(3.114)

Observe that in the specific case where $\mathbf{E} \cdot \mathbf{B} = 0$, as in this problem, we must have $\mathbf{E}' \cdot \mathbf{B}' = 0$ in all frames, and the two non-zero eigenvalues of our characteristic polynomial are simply

\begin{aligned}\lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}.\end{aligned} \hspace{\stretch{1}}(3.115)

These and $\mathbf{E} \cdot \mathbf{B} = 0$ are the invariants for this system. If we have $\mathbf{E}^2 > \mathbf{B}^2$ in one frame, we must also have ${\mathbf{E}'}^2 > {\mathbf{B}'}^2$ in another frame, still maintaining perpendicular fields. In particular if $\mathbf{B}' = 0$ we maintain real eigenvalues. Similarly if $\mathbf{B}^2 > \mathbf{E}^2$ in some frame, we must always have imaginary eigenvalues, and this is also true in the $\mathbf{E}' = 0$ case.

While the problem can be posed as a pure diagonalization problem (and even solved numerically this way for the general constant fields case), we can also work symbolically, thinking of the trajectories problem as simply seeking a transformation of frames that reduce the scope of the problem to one that is more tractable. That does not have to be the linear transformation that diagonalizes the system. Instead we are free to transform to a frame where one of the two fields $\mathbf{E}'$ or $\mathbf{B}'$ is zero, provided the invariants discussed are maintained.

\paragraph{Part 3:} Finding the boost velocity that wipes out one of the fields.

Let’s now consider a Lorentz boost $\hat{O}$, and seek to solve for the boost velocity that wipes out one of the fields, given the invariants that must be maintained for the system.

To make things concrete, suppose that our perpendicular fields are given by $\mathbf{E} = E \mathbf{e}_2$ and $\mathbf{B} = B \mathbf{e}_3$.

Let’s also assume that we can find the velocity $\mathbf{v}_0$ for which one or more of the transformed fields is zero. Suppose that velocity is

\begin{aligned}\mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3) = v_0 \hat{\mathbf{v}}_0,\end{aligned} \hspace{\stretch{1}}(3.116)

where $\alpha_i$ are the direction cosines of $\mathbf{v}_0$ so that $\sum_i \alpha_i^2 = 1$. We will want to compute the components of $\mathbf{E}$ and $\mathbf{B}$ parallel and perpendicular to this velocity.

Those are

\begin{aligned}\mathbf{E}_\parallel &= \left( E \mathbf{e}_2 \cdot (\alpha_1, \alpha_2, \alpha_3) \right) (\alpha_1, \alpha_2, \alpha_3) \\ &= E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) \\ \end{aligned}

\begin{aligned}\mathbf{E}_\perp &= E \mathbf{e}_2 - \mathbf{E}_\parallel \\ &= E (-\alpha_1 \alpha_2, 1 - \alpha_2^2, -\alpha_2 \alpha_3) \\ &= E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) \\ \end{aligned}

For the magnetic field we have

\begin{aligned}\mathbf{B}_\parallel &= B \alpha_3 (\alpha_1, \alpha_2, \alpha_3),\end{aligned}

and

\begin{aligned}\mathbf{B}_\perp &= B \mathbf{e}_3 - \mathbf{B}_\parallel \\ &= B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2) \\ \end{aligned}

Now, observe that $(\boldsymbol{\beta} \times \mathbf{B})_\parallel \propto ((\mathbf{v}_0 \times \mathbf{B}) \cdot \mathbf{v}_0) \mathbf{v}_0$, but this is just zero, so we have $(\boldsymbol{\beta} \times \mathbf{B})_\perp = \boldsymbol{\beta} \times \mathbf{B}$. Our cross product terms are then

\begin{aligned}\hat{\mathbf{v}}_0 \times \mathbf{B} &= \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ 0 & 0 & B \end{vmatrix} = B (\alpha_2, -\alpha_1, 0) \\ \hat{\mathbf{v}}_0 \times \mathbf{E} &= \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ 0 & E & 0 \end{vmatrix} = E (-\alpha_3, 0, \alpha_1)\end{aligned}

We can now express how the fields transform, given this arbitrary boost velocity. From 3.97, with $\boldsymbol{\beta} = (v_0/c) \hat{\mathbf{v}}_0$, this is

\begin{aligned}\mathbf{E} &\rightarrow E \alpha_2 (\alpha_1, \alpha_2, \alpha_3) + \gamma E (-\alpha_1 \alpha_2, \alpha_1^2 + \alpha_3^2, -\alpha_2 \alpha_3) + \gamma \frac{v_0}{c} B (\alpha_2, -\alpha_1, 0) \\ \mathbf{B} &\rightarrow B \alpha_3 (\alpha_1, \alpha_2, \alpha_3)+ \gamma B (-\alpha_1 \alpha_3, -\alpha_2 \alpha_3, \alpha_1^2 + \alpha_2^2) - \gamma \frac{v_0}{c} E (-\alpha_3, 0, \alpha_1)\end{aligned} \hspace{\stretch{1}}(3.117)

### Zero Electric field case.

Let’s tackle the two cases separately. First, when ${\left\lvert{\mathbf{B}}\right\rvert} > {\left\lvert{\mathbf{E}}\right\rvert}$, we can transform to a frame where $\mathbf{E}'=0$. In coordinates, 3.117 supplies us with three equations. These are

\begin{aligned}0 &= E \alpha_2 \alpha_1 (1 - \gamma) + \gamma \frac{v_0}{c} B \alpha_2 \\ 0 &= E \alpha_2^2 + \gamma E (\alpha_1^2 + \alpha_3^2) - \gamma \frac{v_0}{c} B \alpha_1 \\ 0 &= E \alpha_2 \alpha_3 (1 - \gamma).\end{aligned} \hspace{\stretch{1}}(3.119)

Assuming a non-trivial boost ($\gamma \ne 1$), the $\mathbf{e}_3$ coordinate equation implies that one of $\alpha_2$ or $\alpha_3$ is zero. Perhaps there are solutions with $\alpha_3 = 0$ too, but inspection shows that $\alpha_2 = 0$ nicely kills off the first equation. Since $\alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1$, that leaves us with

\begin{aligned}0 = E - \frac{v_0}{c} B \alpha_1 \end{aligned} \hspace{\stretch{1}}(3.122)

Or

\begin{aligned}\alpha_1 &= \frac{E}{B} \frac{c}{v_0} \\ \alpha_2 &= 0 \\ \alpha_3 &= \sqrt{1 - \frac{E^2}{B^2} \frac{c^2}{v_0^2} }\end{aligned} \hspace{\stretch{1}}(3.123)

Our velocity is $\mathbf{v}_0 = v_0 (\alpha_1, \alpha_2, \alpha_3)$, solving the problem for the ${\left\lvert{\mathbf{B}}\right\rvert}^2 > {\left\lvert{\mathbf{E}}\right\rvert}^2$ case up to an adjustable constant $v_0$. That constant comes with constraints however, since we must have our direction cosine $\alpha_1 \le 1$. Expressed another way, the magnitude of the boost velocity is constrained by the relation

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{E}{B}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.126)

It appears we may also pick the equality case, so one velocity (not unique) that should transform away the electric field is

\begin{aligned}\boxed{\mathbf{v}_0 = c \frac{E}{B} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{B}^2}.}\end{aligned} \hspace{\stretch{1}}(3.127)

This particular boost direction is perpendicular to both fields, and is just the familiar $c \mathbf{E} \times \mathbf{B} / \mathbf{B}^2$ drift velocity. Observe that this highlights the invariance condition ${\left\lvert{\frac{E}{B}}\right\rvert} < 1$, since this is required for a physically realizable velocity. Boosting in this direction will reduce our problem to one that has only the magnetic field component.

### Zero Magnetic field case.

Now, let’s consider the case where we transform the magnetic field away, the case when our characteristic polynomial has strictly real eigenvalues $\lambda = \pm \sqrt{\mathbf{E}^2 - \mathbf{B}^2}$. In this case, if we write out our equations for the transformed magnetic field and require these to separately equal zero, we have

\begin{aligned}0 &= B \alpha_3 \alpha_1 ( 1 - \gamma ) + \gamma \frac{v_0}{c} E \alpha_3 \\ 0 &= B \alpha_2 \alpha_3 ( 1 - \gamma ) \\ 0 &= B (\alpha_3^2 + \gamma (\alpha_1^2 + \alpha_2^2)) - \gamma \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.128)

Similar to before we see that $\alpha_3 = 0$ kills off the first and second equations, leaving just

\begin{aligned}0 = B - \frac{v_0}{c} E \alpha_1.\end{aligned} \hspace{\stretch{1}}(3.131)

We now have a solution for the family of direction vectors that kill the magnetic field off

\begin{aligned}\alpha_1 &= \frac{B}{E} \frac{c}{v_0} \\ \alpha_2 &= \sqrt{ 1 - \frac{B^2}{E^2} \frac{c^2}{v_0^2} } \\ \alpha_3 &= 0.\end{aligned} \hspace{\stretch{1}}(3.132)

In addition to the initial constraint that ${\left\lvert{\frac{B}{E}}\right\rvert} < 1$, we have, as before, constraints on the allowable values of $v_0$

\begin{aligned}\frac{{\left\lvert{\mathbf{v}_0}\right\rvert}}{c} \ge {\left\lvert{\frac{B}{E}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.135)

As before we can pick the equality case $\alpha_1 = 1$, yielding a boost velocity of

\begin{aligned}\boxed{\mathbf{v}_0 = c \frac{B}{E} \mathbf{e}_1 = c \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{E}^2}.}\end{aligned} \hspace{\stretch{1}}(3.136)

Again, we see that the invariance condition ${\left\lvert{\mathbf{B}}\right\rvert} < {\left\lvert{\mathbf{E}}\right\rvert}$ is required for a physically realizable velocity if that velocity is entirely perpendicular to the fields.
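
As a numerical cross check of both boxed results, here is a small numpy sketch using the standard closed form field transformation law for a boost (the same content as 3.117, but without the direction cosine expansion); the specific field strengths are arbitrary.

```python
import numpy as np

def boost_fields(E, B, beta):
    # Exact field transformation for a boost with velocity c*beta:
    #   E' = g (E + beta x B) - g^2/(g+1) beta (beta . E)
    #   B' = g (B - beta x E) - g^2/(g+1) beta (beta . B)
    g = 1.0 / np.sqrt(1.0 - beta @ beta)
    Ep = g * (E + np.cross(beta, B)) - g**2 / (g + 1) * beta * (beta @ E)
    Bp = g * (B - np.cross(beta, E)) - g**2 / (g + 1) * beta * (beta @ B)
    return Ep, Bp

# |B| > |E|: beta = E x B / B^2 has magnitude E/B and should kill E'.
E = np.array([0.0, 0.6, 0.0])           # E e_2
B = np.array([0.0, 0.0, 1.0])           # B e_3
Ep, Bp = boost_fields(E, B, np.cross(E, B) / (B @ B))
print(np.allclose(Ep, 0.0), Bp)         # True, B' = sqrt(B^2 - E^2) e_3

# |E| > |B|: beta = E x B / E^2 has magnitude B/E and should kill B'.
E = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 0.6])
Ep, Bp = boost_fields(E, B, np.cross(E, B) / (E @ E))
print(np.allclose(Bp, 0.0), Ep)         # True, E' = sqrt(E^2 - B^2) e_2
```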

# Problem 3. Continuity equation for delta function current distributions.

## Statement

Show explicitly that the electromagnetic 4-current $j^i$ for a particle moving with constant velocity (considered in class, p. 100-101 of notes) is conserved $\partial_i j^i = 0$. Give a physical interpretation of this conservation law, for example by integrating $\partial_i j^i$ over some spacetime region and giving an integral form to the conservation law ($\partial_i j^i = 0$ is known as the “continuity equation”).

## Solution

First let’s review. Our four current was defined as

\begin{aligned}j^i(x) = \sum_A c e_A \int_{x(\tau)} dx_A^i(\tau) \delta^4(x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(4.137)

If each of the trajectories $x_A(\tau)$ represents constant motion we have

\begin{aligned}x_A(\tau) = x_A(0) + \gamma_A \tau ( c, \mathbf{v}_A ).\end{aligned} \hspace{\stretch{1}}(4.138)

The spacetime split of this four vector is

\begin{aligned}x_A^0(\tau) &= x_A^0(0) + \gamma_A \tau c \\ \mathbf{x}_A(\tau) &= \mathbf{x}_A(0) + \gamma_A \tau \mathbf{v}_A,\end{aligned} \hspace{\stretch{1}}(4.139)

with differentials

\begin{aligned}dx_A^0(\tau) &= \gamma_A d\tau c \\ d\mathbf{x}_A(\tau) &= \gamma_A d\tau \mathbf{v}_A.\end{aligned} \hspace{\stretch{1}}(4.141)

Writing out the delta functions explicitly we have

\begin{aligned}\begin{aligned}j^i(x) = \sum_A &c e_A \int_{x(\tau)} dx_A^i(\tau) \delta(x^0 - x_A^0(0) - \gamma_A c \tau) \delta(x^1 - x_A^1(0) - \gamma_A v_A^1 \tau) \\ &\delta(x^2 - x_A^2(0) - \gamma_A v_A^2 \tau) \delta(x^3 - x_A^3(0) - \gamma_A v_A^3 \tau)\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.143)

So our time and space components of the current can be written

\begin{aligned}j^0(x) &= \sum_A c^2 e_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau) \\ \mathbf{j}(x) &= \sum_A c e_A \mathbf{v}_A \gamma_A \int_{x(\tau)} d\tau\delta(x^0 - x_A^0(0) - \gamma_A c \tau)\delta^3(\mathbf{x} - \mathbf{x}_A(0) - \gamma_A \mathbf{v}_A \tau).\end{aligned} \hspace{\stretch{1}}(4.144)

Each of these integrals can be evaluated with respect to the time coordinate delta function leaving the distribution

\begin{aligned}j^0(x) &= \sum_A c e_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0))) \\ \mathbf{j}(x) &= \sum_A e_A \mathbf{v}_A \delta^3(\mathbf{x} - \mathbf{x}_A(0) - \frac{\mathbf{v}_A}{c} (x^0 - x_A^0(0)))\end{aligned} \hspace{\stretch{1}}(4.146)

With this more general multi-particle expression it should be possible to show that the four divergence is zero; however, the problem only asks for one particle. For the one particle case we can make things really easy by taking the initial point in space and time as the origin, and aligning our velocity with one of the coordinates (say $x$).

Doing so we have the result derived in class

\begin{aligned}j = e \begin{bmatrix}c \\ v \\ 0 \\ 0 \end{bmatrix}\delta(x - v x^0/c)\delta(y)\delta(z).\end{aligned} \hspace{\stretch{1}}(4.148)

Our divergence then has only two portions

\begin{aligned}\frac{\partial {j^0}}{\partial {x^0}} &= e c (-v/c) \delta'(x - v x^0/c) \delta(y) \delta(z) \\ \frac{\partial {j^1}}{\partial {x}} &= e v \delta'(x - v x^0/c) \delta(y) \delta(z).\end{aligned} \hspace{\stretch{1}}(4.149)

and these cancel out when summed. Note that this requires us to be loose with our delta functions, treating them like regular functions that are differentiable.
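
A small sympy sketch can make both delta function manipulations above concrete, with sympy’s DiracDelta standing in for the distributional calculation:

```python
from sympy import DiracDelta, diff, integrate, oo, simplify, symbols

x, x0, tau = symbols('x x^0 tau')
c, v, g, e = symbols('c v gamma e', positive=True)

# The Jacobian factor picked up evaluating the time coordinate delta in 4.144.
print(integrate(DiracDelta(x0 - g*c*tau), (tau, -oo, oo)))   # 1/(c*gamma)

# The cancellation in the four divergence of 4.148 (one spatial dimension
# shown; the delta functions in y and z just come along for the ride).
d = DiracDelta(x - v*x0/c)
print(simplify(diff(e*c*d, x0) + diff(e*v*d, x)))            # 0
```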

For the more general multiparticle case, we can treat the sum one particle at a time, and in each case, rotate coordinates so that the four divergence only picks up one term.

As for physical interpretation via integral, we have using the four dimensional divergence theorem

\begin{aligned}\int d^4 x \partial_i j^i = \int j^i dS_i\end{aligned} \hspace{\stretch{1}}(4.151)

where $dS_i$ is the three-volume element perpendicular to a $x^i = \text{constant}$ plane. These volume elements are detailed generally in the text [2]; the special case $dS_0 = dx dy dz$ is the element of the three-dimensional (spatial) volume “normal” to hyperplanes $ct = \text{constant}$.

Without actually computing the determinants, we have something that is roughly of the form

\begin{aligned}0 = \int j^i dS_i=\int c \rho dx dy dz+\int \mathbf{j} \cdot (\mathbf{n}_x c dt dy dz + \mathbf{n}_y c dt dx dz + \mathbf{n}_z c dt dx dy).\end{aligned} \hspace{\stretch{1}}(4.152)

It is cheating a bit to just write $\mathbf{n}_x, \mathbf{n}_y, \mathbf{n}_z$: are there specific orientations required by the metric? To be precise we’d have to calculate the determinants detailed in the text, and then do the duality transformations.

Per unit time, we can write instead

\begin{aligned}\frac{\partial {}}{\partial {t}} \int \rho dV= -\int \mathbf{j} \cdot (\mathbf{n}_x dy dz + \mathbf{n}_y dx dz + \mathbf{n}_z dx dy)\end{aligned} \hspace{\stretch{1}}(4.153)

Rather loosely, this says that the rate of change of charge in a volume must be matched by the “flow” of current through the surface within that amount of time.

# References

[1] Wikipedia. Electromagnetic tensor — Wikipedia, The Free Encyclopedia [online]. 2011. [Online; accessed 27-February-2011]. http://en.wikipedia.org/w/index.php?title=Electromagnetic_tensor&oldid=414989505.

[2] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

## Exploring Stokes Theorem in tensor form.

Posted by Peeter Joot on February 22, 2011

[Click here for a PDF of this post with nicer formatting]

# Motivation.

I’ve worked through Stokes theorem concepts a couple times on my own now. One of the first times, I was trying to formulate this in a Geometric Algebra context. I had to resort to a tensor decomposition, and pictures, before ending back in the Geometric Algebra description. Later I figured out how to do it entirely with a Geometric Algebra description, and was able to eliminate reliance on the pictures that made the path to generalization to higher dimensional spaces unclear.

It’s my expectation that if one started with a tensor description, the proof entirely in tensor form would not be difficult. This is what I’d like to try this time. To start off, I’ll temporarily use the Geometric Algebra curl expression so I know what my tensor equation starting point will be, but once that starting point is found, we can work entirely in coordinate representation. For somebody who already knows that this is the starting point, all of this initial motivation can be skipped.

# Translating the exterior derivative to a coordinate representation.

Our starting point is a curl, dotted with a volume element of the same grade, so that the result is a scalar

\begin{aligned}\int d^n x \cdot (\nabla \wedge A).\end{aligned} \hspace{\stretch{1}}(2.1)

Here $A$ is a blade of grade $n-1$, and we wedge this with the gradient for the space

\begin{aligned}\nabla \equiv e^i \partial_i = e_i \partial^i,\end{aligned} \hspace{\stretch{1}}(2.2)

where we work with a basis (not necessarily orthonormal) $\{e_i\}$, and the reciprocal frame for that basis $\{e^i\}$ defined by the relation

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.3)

Our coordinates in these basis sets are

\begin{aligned}x \cdot e^i & \equiv x^i \\ x \cdot e_i & \equiv x_i\end{aligned} \hspace{\stretch{1}}(2.4)

so that

\begin{aligned}x = x^i e_i = x_i e^i.\end{aligned} \hspace{\stretch{1}}(2.6)
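
For a concrete example of such a reciprocal frame, here is a small numpy sketch (a Euclidean $\mathbb{R}^3$ assumption, purely for illustration): if the basis vectors $e_i$ are stacked as the columns of a matrix, the reciprocal vectors $e^i$ are the rows of its inverse, which is exactly the statement $e^i \cdot e_j = {\delta^i}_j$.

```python
import numpy as np

M = np.array([[1.0, 1.0, 0.0],    # e_1, e_2, e_3 as the columns:
              [0.0, 1.0, 1.0],    # a non-orthonormal basis
              [0.0, 0.0, 1.0]])
recip = np.linalg.inv(M)          # rows are e^1, e^2, e^3

print(np.allclose(recip @ M, np.eye(3)))  # e^i . e_j = delta^i_j -> True
```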

The operator coordinates of the gradient are defined in the usual fashion

\begin{aligned}\partial_i & \equiv \frac{\partial }{\partial {x^i}} \\ \partial^i & \equiv \frac{\partial}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(2.7)

We will define the volume element for the subspace that we are integrating over in terms of an arbitrary parametrization

\begin{aligned}x = x(\alpha_1, \alpha_2, \cdots, \alpha_n)\end{aligned} \hspace{\stretch{1}}(2.9)

The subspace can be considered spanned by the differential elements in each of the respective curves where all but the $i$th parameter are held constant.

\begin{aligned}dx_{\alpha_i}= d\alpha_i \frac{\partial x}{\partial {\alpha_i}}= d\alpha_i \frac{\partial {x^j}}{\partial {\alpha_i}} e_j.\end{aligned} \hspace{\stretch{1}}(2.10)

We assume that the integral is being performed in a subspace for which none of these differential elements in that region are linearly dependent (i.e. our Jacobian determinant must be non-zero).

The magnitude of the wedge product of all such differential elements provides the volume of the parallelogram, or parallelepiped (or higher dimensional analogue), and is

\begin{aligned}d^n x=d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial x}{\partial {\alpha_n}} \wedge\cdots \wedge\frac{\partial x}{\partial {\alpha_2}}\wedge\frac{\partial x}{\partial {\alpha_1}}.\end{aligned} \hspace{\stretch{1}}(2.11)

The volume element is an oriented quantity, and may be adjusted with an arbitrary sign (or equivalently an arbitrary permutation of the differential elements in the wedge product), and we’ll see that it is convenient for the translation to tensor form, to express these in reversed order.

Let’s write

\begin{aligned}d^n \alpha = d\alpha_1 d\alpha_2 \cdots d\alpha_n,\end{aligned} \hspace{\stretch{1}}(2.12)

so that our volume element in coordinate form is

\begin{aligned}d^n x = d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ).\end{aligned} \hspace{\stretch{1}}(2.13)

Our curl will also be a grade $n$ blade. We write for the $n-1$ grade blade

\begin{aligned}A = A_{b c \cdots d} (e^b \wedge e^c \wedge \cdots e^d),\end{aligned} \hspace{\stretch{1}}(2.14)

where $A_{b c \cdots d}$ is antisymmetric (i.e. $A = a_1 \wedge a_2 \wedge \cdots a_{n-1}$ for some set of vectors $a_i, i \in 1 .. n-1$).

With our gradient in coordinate form

\begin{aligned}\nabla = e^a \partial_a,\end{aligned} \hspace{\stretch{1}}(2.15)

the curl is then

\begin{aligned}\nabla \wedge A = \partial_a A_{b c \cdots d} (e^a \wedge e^b \wedge e^c \wedge \cdots e^d).\end{aligned} \hspace{\stretch{1}}(2.16)

The differential form for our integral can now be computed by expanding out the dot product. We want

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)=((((( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ) \cdot e^a ) \cdot e^b ) \cdot e^c ) \cdot \cdots ) \cdot e^d.\end{aligned} \hspace{\stretch{1}}(2.17)

Evaluation of the interior dot products introduces the intrinsic antisymmetry required for Stokes theorem. For example, with

\begin{aligned}( e_n \wedge e_{n-1} \wedge \cdots \wedge e_2 \wedge e_1 ) \cdot e^a & =( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_2 ) (e_1 \cdot e^a) \\ & -( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_1 ) (e_2 \cdot e^a) \\ & +( e_n \wedge e_{n-1} \wedge \cdots \wedge e_4 \wedge e_2 \wedge e_1 ) (e_3 \cdot e^a) \\ & \cdots \\ & + (-1)^{n-1}( e_{n-1} \wedge e_{n-2} \wedge \cdots \wedge e_2 \wedge e_1 ) (e_n \cdot e^a)\end{aligned}

Since $e_i \cdot e^a = {\delta_i}^a$ our end result is a completely antisymmetric set of permutations of all the deltas

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)={\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l,\end{aligned} \hspace{\stretch{1}}(2.18)

and the curl integral takes its coordinate form

\begin{aligned}\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}{\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l.\end{aligned} \hspace{\stretch{1}}(2.19)

One final contraction of the paired indexes gives us our Stokes integral in its coordinate representation

\begin{aligned}\boxed{\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}}\end{aligned} \hspace{\stretch{1}}(2.20)

We now have a starting point that is free of any of the abstraction of Geometric Algebra or differential forms. We can identify the products of partials here as components of a scalar hypervolume element (possibly signed depending on the orientation of the parametrization)

\begin{aligned}d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\end{aligned} \hspace{\stretch{1}}(2.21)

This is also a specific computation recipe for these hypervolume components, something that may not be obvious when we allow for general metrics for the space. We are also allowing for non-orthonormal coordinate representations, and arbitrary parametrization of the subspace that we are integrating over (our integral need not have the same dimension as the underlying vector space).

Observe that when the number of parameters equals the dimension of the space, we can write out the antisymmetric term utilizing the determinant of the Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}= \epsilon^{a b \cdots d} {\left\lvert{ \frac{\partial(x^1, x^2, \cdots x^n)}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.22)

When the dimension of the space $n$ is greater than the number of parameters for the integration hypervolume in question, the antisymmetric sum of partials is still the determinant of a Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a_1}}}{\partial {\alpha_1}}\frac{\partial {x^{a_2}}}{\partial {\alpha_2}}\cdots \frac{\partial {x^{a_{n-1}}}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{a_n]}}}{\partial {\alpha_n}}= {\left\lvert{ \frac{\partial(x^{a_1}, x^{a_2}, \cdots x^{a_n})}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.23)

however, we will have one such Jacobian for each unique choice of indexes.

# The Stokes work starts here.

The task is to relate our integral to the boundary of this volume, coming up with an explicit recipe for the description of that bounding surface, and determining the exact form of the reduced rank integral. This job is essentially to reduce the ranks of the tensors that are being contracted in our Stokes integral. With the derivative applied to our rank $n-1$ antisymmetric tensor $A_{b c \cdots d}$, we can apply the chain rule and examine the permutations so that this can be rewritten as a contraction of $A$ itself with a set of rank $n-1$ surface area elements.

\begin{aligned}\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d} = ?\end{aligned} \hspace{\stretch{1}}(3.24)

Now, while the setup here has been completely general, this task is motivated by study of special relativity, where there is a requirement to work in a four dimensional space. Because of that explicit goal, I’m not going to attempt to formulate this in a completely abstract fashion. That task is really one of introducing sufficiently general notation. Instead, I’m going to proceed with a simpleton approach, and do this explicitly, and repeatedly for each of the rank 1, rank 2, and rank 3 tensor cases. It will be clear how this all generalizes by doing so, should one wish to work in still higher dimensional spaces.

## The rank 1 tensor case.

The equation we are working with for this vector case is

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) =\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)\end{aligned} \hspace{\stretch{1}}(3.25)

Expanding out the antisymmetric partials we have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}} & =\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\frac{\partial {x^{b}}}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}},\end{aligned}

with which we can reduce the integral to

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) & =\int \left( d{\alpha_1}\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d{\alpha_2}\frac{\partial {x^{a}}}{\partial {\alpha_2}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ & =\int \left( d\alpha_1 \frac{\partial {A_b}}{\partial {\alpha_1}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d\alpha_2 \frac{\partial {A_b}}{\partial {\alpha_2}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ \end{aligned}

Now, if it happens that

\begin{aligned}\frac{\partial}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}} = \frac{\partial}{\partial {\alpha_2}}\frac{\partial {x^{a}}}{\partial {\alpha_1}} = 0\end{aligned} \hspace{\stretch{1}}(3.26)

then each of the individual integrals in $d\alpha_1$ and $d\alpha_2$ can be carried out. In that case, without any real loss of generality we can designate the integration bounds over the unit parametrization space square $\alpha_i \in [0,1]$, allowing this integral to be expressed as

\begin{aligned}\begin{aligned} & \int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2) \\ & =\int \left( A_b(1, \alpha_2) - A_b(0, \alpha_2) \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( A_b(\alpha_1, 1) - A_b(\alpha_1, 0) \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.27)

It’s also fairly common to see ${\left.{{A}}\right\vert}_{{\partial \alpha_i}}$ used to designate evaluation of this first integral on the boundary, and using this we write

\begin{aligned}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned} \hspace{\stretch{1}}(3.28)

Also note that since we are summing over all $a,b$, and have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}=-\frac{\partial {x^{[b}}}{\partial {\alpha_1}}\frac{\partial {x^{a]}}}{\partial {\alpha_2}},\end{aligned} \hspace{\stretch{1}}(3.29)

we can write this summing over all unique pairs of $a,b$ instead, which eliminates a small bit of redundancy (especially once the dimension of the vector space gets higher)

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.}\end{aligned} \hspace{\stretch{1}}(3.30)

In this form we have recovered the original geometric structure, with components of the curl multiplied by the component of the area element that shares the orientation and direction of that portion of the curl bivector.
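
Here is a sympy sketch exercising 3.30 on a flat parallelogram patch in $\mathbb{R}^3$ (constant tangent vectors, so the proviso discussed next holds); the tangent vectors $u$, $w$ and the components $A_b$ are arbitrary choices.

```python
from sympy import Matrix, diff, integrate, simplify, symbols

a1, a2 = symbols('alpha_1 alpha_2')
x1, x2, x3 = symbols('x^1 x^2 x^3')

u = Matrix([1, 2, 0])          # dx/d(alpha_1), constant
w = Matrix([0, 1, 1])          # dx/d(alpha_2), constant
x = a1*u + a2*w                # the flat patch itself
sub = {x1: x[0], x2: x[1], x3: x[2]}
X = [x1, x2, x3]

A = Matrix([x2*x3, x1**2, x1*x3**2])   # arbitrary polynomial components A_b

# LHS of 3.30: sum over a < b of the 2x2 Jacobian times the curl component.
lhs = 0
for a in range(3):
    for b in range(a + 1, 3):
        jac = u[a]*w[b] - u[b]*w[a]
        curl = (diff(A[b], X[a]) - diff(A[a], X[b])).subs(sub)
        lhs += integrate(jac*curl, (a1, 0, 1), (a2, 0, 1))

# RHS of 3.30: boundary evaluations of A_b against the other tangent vector.
Ab = A.subs(sub)
rhs = integrate(sum((Ab[b].subs(a1, 1) - Ab[b].subs(a1, 0))*w[b]
                    for b in range(3)), (a2, 0, 1)) \
    - integrate(sum((Ab[b].subs(a2, 1) - Ab[b].subs(a2, 0))*u[b]
                    for b in range(3)), (a1, 0, 1))

print(simplify(lhs - rhs))  # 0
```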

This form of the result, with evaluation at the boundaries, assumed that ${\partial {x^a}}/{\partial {\alpha_1}}$ was not a function of $\alpha_2$ and that ${\partial {x^a}}/{\partial {\alpha_2}}$ was not a function of $\alpha_1$. When that is not the case, we appear to have a less pretty result

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}}}\end{aligned} \hspace{\stretch{1}}(3.31)

Can this be reduced any further in the general case? Having seen the statements of Stokes theorem in its differential forms formulation, I initially expected the answer was yes, and only when I got to evaluating my $\mathbb{R}^{4}$ spacetime example below did I realize that the differential displacements for the parallelogram that constituted the area element were functions of both parameters. Perhaps this detail is there in the differential forms version of the general Stokes theorem too, but is just hidden in a tricky fashion by the compact notation.

### Sanity check: $\mathbb{R}^{2}$ case in rectangular coordinates.

For $x^1 = x, x^2 = y$, and $\alpha_1 = x, \alpha_2 = y$, we have for the LHS

\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left(\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}-\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}\right)\partial_1 A_{2}+\left(\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}-\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}\right)\partial_2 A_{1} \\ & =\int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right)\end{aligned}

Our RHS expands to

\begin{aligned} & \int_{y=y_0}^{y_1} dy\left(\left( A_1(x_1, y) - A_1(x_0, y) \right)\frac{\partial {x^{1}}}{\partial y}+\left( A_2(x_1, y) - A_2(x_0, y) \right)\frac{\partial {x^{2}}}{\partial y}\right) \\ & \qquad-\int_{x=x_0}^{x_1} dx\left(\left( A_1(x, y_1) - A_1(x, y_0) \right)\frac{\partial {x^{1}}}{\partial x}+\left( A_2(x, y_1) - A_2(x, y_0) \right)\frac{\partial {x^{2}}}{\partial x}\right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}

We have

\begin{aligned}\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.32)

The RHS is just a positively oriented line integral around the rectangle of integration

\begin{aligned}\int A_x(x, y_0) \hat{\mathbf{x}} \cdot ( \hat{\mathbf{x}} dx )+ A_y(x_1, y) \hat{\mathbf{y}} \cdot ( \hat{\mathbf{y}} dy )+ A_x(x, y_1) \hat{\mathbf{x}} \cdot ( -\hat{\mathbf{x}} dx )+ A_y(x_0, y) \hat{\mathbf{y}} \cdot ( -\hat{\mathbf{y}} dy )= \oint \mathbf{A} \cdot d\mathbf{r}.\end{aligned} \hspace{\stretch{1}}(3.33)

This special case is also recognizable as Green’s theorem, evident with the substitution $A_x = P$, $A_y = Q$, which gives us

\begin{aligned}\int_A dx dy \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)=\oint_C P dx + Q dy.\end{aligned} \hspace{\stretch{1}}(3.34)

Strictly speaking, Green’s theorem is more general, since it applies to integration regions more general than rectangles, but that generalization can be arrived at easily enough, once the region is broken down into adjoining elementary regions.
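
That reduction is simple enough to verify mechanically. A short sympy sketch, with arbitrarily chosen $P$ and $Q$ on an arbitrary rectangle:

```python
from sympy import integrate, simplify, symbols

x, y = symbols('x y')
P = x*y**2                      # arbitrary choices
Q = x**3 - y
xa, xb, ya, yb = 0, 2, -1, 1    # the rectangle

area = integrate(Q.diff(x) - P.diff(y), (x, xa, xb), (y, ya, yb))
line = (integrate(P.subs(y, ya), (x, xa, xb))     # bottom, left to right
        + integrate(Q.subs(x, xb), (y, ya, yb))   # right side, upward
        - integrate(P.subs(y, yb), (x, xa, xb))   # top, right to left
        - integrate(Q.subs(x, xa), (y, ya, yb)))  # left side, downward
print(simplify(area - line))  # 0
```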

### Sanity check: $\mathbb{R}^{3}$ case in rectangular coordinates.

It is expected that we can recover the classical Kelvin-Stokes theorem if we use rectangular coordinates in $\mathbb{R}^{3}$. However, we see that we have to consider three different parametrizations. If one picks rectangular parametrizations $(\alpha_1, \alpha_2) = \{ (x,y), (y,z), (z,x) \}$ in sequence, in each case holding the value of the additional coordinate fixed, we get three different independent Green's theorem like relations

\begin{aligned}\int_A dx dy \left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) & = \oint_C A_x dx + A_y dy \\ \int_A dy dz \left( \frac{\partial {A_z}}{\partial y} - \frac{\partial {A_y}}{\partial z} \right) & = \oint_C A_y dy + A_z dz \\ \int_A dz dx \left( \frac{\partial {A_x}}{\partial z} - \frac{\partial {A_z}}{\partial x} \right) & = \oint_C A_z dz + A_x dx.\end{aligned} \hspace{\stretch{1}}(3.35)

Note that we cannot just add these to form a complete integral $\oint \mathbf{A} \cdot d\mathbf{r}$ since the curves all have different orientations. To recover the $\mathbb{R}^{3}$ Stokes theorem in rectangular coordinates, it appears that we’d have to consider a Riemann sum of triangular surface elements, and relate that to the loops over each of the surface elements. In that limiting argument, only the boundary of the complete surface would contribute to the RHS of the relation.

All that said, we shouldn’t actually have to go to all this work. Instead we can stick to a two variable parametrization of the surface, and use 3.30 directly.

### An illustration for a $\mathbb{R}^{4}$ spacetime surface.

Suppose we have a particle trajectory defined by an active Lorentz transformation from an initial spacetime point

\begin{aligned}x^i = O^{ij} x_j(0) = O^{ij} g_{jk} x^k(0) = {O^{i}}_k x^k(0)\end{aligned} \hspace{\stretch{1}}(3.38)

Let the Lorentz transformation be formed by a composition of boost and rotation

\begin{aligned}{O^i}_j & = {L^i}_k {R^k}_j \\ {L^i}_j & =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ {R^i}_j & =\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.39)

Different rates of evolution of $\alpha$ and $\theta$ define different trajectories, and taken together we have a surface described by the two parameters

\begin{aligned}x^i(\alpha, \theta) = {L^i}_k {R^k}_j x^j(0, 0).\end{aligned} \hspace{\stretch{1}}(3.42)

We can compute displacements along the trajectories formed by keeping either $\alpha$ or $\theta$ fixed and varying the other. Those are

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} d\alpha & = \frac{d{L^i}_k}{d\alpha} {R^k}_j x^j(0, 0) \\ \frac{\partial {x^i}}{\partial {\theta}} d\theta & = {L^i}_k \frac{d{R^k}_j}{d\theta} x^j(0, 0) .\end{aligned} \hspace{\stretch{1}}(3.43)

Writing $y^i = x^i(0,0)$ the computation of the partials above yields

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}\sinh\alpha y^0 -\cosh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ -\cosh\alpha y^0 +\sinh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}-\sinh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ \cosh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ -(\cos\theta y^1 + \sin\theta y^2 ) \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.45)

Different choices of the initial point $y^i$ yield different surfaces, but we can get the idea by picking a simple starting point $y^i = (0, 1, 0, 0)$ leaving

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}-\cosh\alpha \cos\theta \\ \sinh\alpha \cos\theta \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}\sinh\alpha \sin\theta \\ -\cosh\alpha \sin\theta \\ -\cos\theta \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.47)

We can now compute our Jacobian determinants

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha}} \frac{\partial {x^{b]}}}{\partial {\theta}}={\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.49)

Those are

\begin{aligned}{\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} & = \cos\theta \sin\theta \\ {\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = \cosh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^0, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = -\sinh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^1, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^2, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0\end{aligned} \hspace{\stretch{1}}(3.50)
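
These are mechanical enough that a sympy sketch can reproduce them all at once (assuming only the $L$, $R$ matrices above and the starting point $y^i = (0, 1, 0, 0)$):

```python
from sympy import Matrix, cos, cosh, simplify, sin, sinh, symbols

al, th = symbols('alpha theta')
L = Matrix([[cosh(al), -sinh(al), 0, 0],
            [-sinh(al), cosh(al), 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]])
R = Matrix([[1, 0, 0, 0],
            [0, cos(th), sin(th), 0],
            [0, -sin(th), cos(th), 0],
            [0, 0, 0, 1]])
x = L * R * Matrix([0, 1, 0, 0])   # the surface x^i(alpha, theta)

dxa, dxt = x.diff(al), x.diff(th)
for a in range(4):
    for b in range(a + 1, 4):
        print((a, b), simplify(dxa[a]*dxt[b] - dxa[b]*dxt[a]))
# Reproduces (3.50): (0,1) sin(theta)*cos(theta),
# (0,2) cos(theta)**2*cosh(alpha), (1,2) -cos(theta)**2*sinh(alpha),
# and zero for the remaining pairs.
```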

Using this, let’s see a specific 4D example in spacetime for the integral of the curl of some four vector $A^i$, enumerating all the non-zero components of 3.31 for this particular spacetime surface

\begin{aligned}\sum_{a < b}\int d{\alpha} d{\theta}{\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}}\end{aligned} \hspace{\stretch{1}}(3.56)

The LHS is thus found to be

\begin{aligned} & \int d{\alpha} d{\theta}\left({\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+{\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{2} -\partial_2 A_{0} \right)+{\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right) \\ & =\int d{\alpha} d{\theta}\left(\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right)-\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right)\end{aligned}

On the RHS we have

\begin{aligned}\int d\theta\int d\alpha & \frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}} \\ & =\int d\theta\int d\alpha\begin{bmatrix}\sinh\alpha \sin\theta & -\cosh\alpha \sin\theta & -\cos\theta & 0\end{bmatrix}\frac{\partial}{\partial {\alpha}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ & -\int d\theta\int d\alpha\begin{bmatrix}-\cosh\alpha \cos\theta & \sinh\alpha \cos\theta & 0 & 0\end{bmatrix}\frac{\partial}{\partial {\theta}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ \end{aligned}

\begin{aligned}\begin{aligned} & \int d{\alpha} d{\theta}\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right) \\ & \qquad+\int d{\alpha} d{\theta}\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right) \\ & \qquad-\int d{\alpha} d{\theta}\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right) \\ & =\int d\theta \sin\theta \int d\alpha \left( \sinh\alpha \frac{\partial {A_0}}{\partial {\alpha}} - \cosh\alpha \frac{\partial {A_1}}{\partial {\alpha}} \right) \\ & \qquad-\int d\theta \cos\theta \int d\alpha \frac{\partial {A_2}}{\partial {\alpha}} \\ & \qquad+\int d\alpha \cosh\alpha \int d\theta \cos\theta \frac{\partial {A_0}}{\partial {\theta}} \\ & \qquad-\int d\alpha \sinh\alpha \int d\theta \cos\theta \frac{\partial {A_1}}{\partial {\theta}}\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.57)

Because of the complexity of the surface, only the second term on the RHS has the “evaluate on the boundary” characteristic that may have been expected from a Green’s theorem like line integral.

It is also worthwhile to point out that we have had to be very careful with upper and lower indexes all along (and have done so with the expectation that our application would include the special relativity case where our metric determinant is minus one.) Because we worked with upper indexes for the area element, we had to work with lower indexes for the four vector and the components of the gradient that we included in our curl evaluation.

## The rank 2 tensor case.

Let’s consider briefly the terms in the contraction sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc}\end{aligned} \hspace{\stretch{1}}(3.58)

For any choice of a set of three distinct indexes $(a, b, c) \in \{(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)\}$, we have $6 = 3!$ ways of permuting those indexes in this sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\sum_{a < b < c} {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} + {\left\lvert{ \frac{\partial(x^a, x^c, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{cb} + {\left\lvert{ \frac{\partial(x^b, x^c, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ca} \\ & \qquad + {\left\lvert{ \frac{\partial(x^b, x^a, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ac} + {\left\lvert{ \frac{\partial(x^c, x^a, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ab} + {\left\lvert{ \frac{\partial(x^c, x^b, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ba} \\ & =2!\sum_{a < b < c}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right)\end{aligned}

Observe that we have no sign alternation like we had in the vector (rank 1 tensor) case. That sign alternation in this summation expansion appears to occur only for odd grade tensors.

Returning to the problem, we wish to expand the determinant in order to apply a chain rule contraction as done in the rank 1 case. This can be done along any of the rows or columns of the determinant, and we can write any of

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^b}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^b}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^b}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^c}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^c}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^c}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

This allows the contraction of the index $a$, eliminating it from the result

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\left( \frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \right) \frac{\partial {A_{bc}}}{\partial {x^a}} \\ & =\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =2!\sum_{b < c}\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

Dividing out the common $2!$ terms, we can summarize this result as

\begin{aligned}\boxed{\begin{aligned}\sum_{a < b < c} & \int d\alpha_1 d\alpha_2 d\alpha_3 {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right) \\ & =\sum_{b < c}\int d\alpha_2 d\alpha_3 \int d\alpha_1\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert} \\ & -\sum_{b < c}\int d\alpha_1 d\alpha_3 \int d\alpha_2\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert} \\ & +\sum_{b < c}\int d\alpha_1 d\alpha_2 \int d\alpha_3\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert}\end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.59)

In general, as observed in the spacetime surface example above, the two index Jacobians can be functions of the integration variable first being eliminated. In the special cases where this is not the case (such as the $\mathbb{R}^{3}$ case with rectangular coordinates), then we are left with just the evaluation of the tensor element $A_{bc}$ on the boundaries of the respective integrals.
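
To see 3.59 in action, here is a sympy sketch for the simplest rectangular case, $\alpha_i = x^i$ on the unit cube in $\mathbb{R}^{3}$, where every Jacobian factor is zero or one; the tensor components chosen are arbitrary polynomials.

```python
from sympy import integrate, simplify, symbols

x1, x2, x3 = symbols('x1 x2 x3')
X = {1: x1, 2: x2, 3: x3}

A = {(2, 3): x1*x2 + x3**2,    # arbitrary antisymmetric A_{bc}
     (3, 1): x2**2*x3,
     (1, 2): x1*x3}
for (b, c), v in list(A.items()):
    A[(c, b)] = -v

cube = lambda f: integrate(f, (x1, 0, 1), (x2, 0, 1), (x3, 0, 1))
lhs = cube(A[(2, 3)].diff(x1) + A[(3, 1)].diff(x2) + A[(1, 2)].diff(x3))

def face(f, i):
    # Evaluate f on the x^i = 1 and x^i = 0 faces and integrate the rest.
    rest = [v for k, v in X.items() if k != i]
    return integrate(f.subs(X[i], 1) - f.subs(X[i], 0),
                     (rest[0], 0, 1), (rest[1], 0, 1))

rhs = face(A[(2, 3)], 1) - face(A[(1, 3)], 2) + face(A[(1, 2)], 3)
print(simplify(lhs - rhs))  # 0
```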

## The rank 3 tensor case.

The key step is once again just a determinant expansion

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}

so that the sum can be reduced from a four index contraction to a three index contraction

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} & =\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}
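
The alternation of signs in this expansion is just the usual first row cofactor pattern $+,-,+,-$, which a few lines of numpy will confirm for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
minor = lambda j: np.delete(np.delete(M, 0, axis=0), j, axis=1)
expansion = sum((-1)**j * M[0, j] * np.linalg.det(minor(j)) for j in range(4))
print(np.isclose(expansion, np.linalg.det(M)))  # True
```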

That’s the essence of the theorem, but we can play the same combinatorial reduction games to reduce the built in redundancy in the result

\begin{aligned}\boxed{\begin{aligned}\frac{1}{{3!}} & \int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} \\ & =\sum_{a < b < c < d}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \left( \partial_a A_{bcd} -\partial_b A_{cda} +\partial_c A_{dab} -\partial_d A_{abc} \right) \\ & =\qquad \sum_{b < c < d}\int d\alpha_2 d\alpha_3 d\alpha_4 \int d\alpha_1\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_3 d\alpha_4 \int d\alpha_2\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad +\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_4 \int d\alpha_3\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_3 \int d\alpha_4\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \\ \end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.60)

## A note on the four divergence.

Our four divergence integral has the following form

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A^a\end{aligned} \hspace{\stretch{1}}(3.61)

We can relate this to the rank 3 Stokes theorem with a duality transformation, multiplying with a pseudoscalar

\begin{aligned}A^a = \epsilon^{abcd} T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.62)

where $T_{bcd}$ can also be related back to the vector by the same sort of duality transformation

\begin{aligned}A^a \epsilon_{a b c d} = \epsilon^{a e f g} \epsilon_{a b c d} T_{e f g} = \pm 3! \, T_{b c d}.\end{aligned} \hspace{\stretch{1}}(3.63)
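
The $3!$ factor (and, in the $SO(1,3)$ case, an extra metric signature dependent sign, hence the $\pm$ above) can be checked numerically. Here is a numpy sketch of the all lower index Euclidean contraction, with a randomly generated totally antisymmetric $T$:

```python
from itertools import permutations

import numpy as np

# Build the rank 4 Levi-Civita symbol from permutation parities.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])   # +1 or -1

# Antisymmetrize a random rank 3 tensor over all of its indexes.
rng = np.random.default_rng(1)
raw = rng.standard_normal((4, 4, 4))
T = np.zeros_like(raw)
for p in permutations(range(3)):
    T += np.linalg.det(np.eye(3)[list(p)]) * np.transpose(raw, p)
T /= 6.0

lhs = np.einsum('aefg,abcd,efg->bcd', eps, eps, T)
print(np.allclose(lhs, 6 * T))  # 3! = 6 -> True
```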

The divergence integral in terms of the rank 3 tensor is

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a \epsilon^{abcd} T_{bcd}=\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.64)

and we are free to perform the same Stokes reduction of the integral. Of course, this is particularly simple in rectangular coordinates. I still have to think through one subtlety that I feel may be important. We could have started off with an integral of the following form

\begin{aligned}\int dx^1 dx^2 dx^3 dx^4 \partial_a A^a,\end{aligned} \hspace{\stretch{1}}(3.65)

and I think this differs from our starting point slightly because this has none of the antisymmetric structure of the signed 4 volume element that we have used. We do not take the absolute value of our Jacobians anywhere.

## PHY450H1S. Relativistic Electrodynamics Lecture 9 (Taught by Prof. Erich Poppitz). Dynamics in a vector field.

Posted by Peeter Joot on February 3, 2011

[Click here for a PDF of this post with nicer formatting]

Covering chapter 2 material from the text [1].

Covering lecture notes pp. 56.1-72: comments on mass, energy, momentum, and massless particles (56.1-58); particles in external fields: Lorentz scalar field (59-62); reminder of a vector field under spatial rotations (63) and a Lorentz vector field (64-65) [Tuesday, Feb. 1]; the action for a relativistic particle in an external 4-vector field (65-66); the equation of motion of a relativistic particle in an external electromagnetic (4-vector) field (67,68,73) [Wednesday, Feb. 2]; mathematical interlude: (69-72): on 3×3 antisymmetric matrices, 3-vectors, and totally antisymmetric 3-index tensor – please read by yourselves, preferably by Wed., Feb. 2 class! (this is important, we’ll also soon need the 4-dimensional generalization)

# More on the action.

Action for a relativistic particle in an external 4-scalar field

\begin{aligned}S = -m c \int ds - g \int ds \phi(x)\end{aligned} \hspace{\stretch{1}}(2.1)

Unfortunately nature does not provide us with any fundamental 4-scalar fields (at least none associated with particles that are long lived and stable).

PICTURE: 3-vector field, some arrows in various directions.

PICTURE: A vector $\mathbf{A}$ in an $x,y$ frame, and a rotated (counterclockwise by angle $\alpha$) $x', y'$ frame with the components in each shown pictorially.

We have

\begin{aligned}A_x'(x', y') &= \cos\alpha A_x(x,y) + \sin\alpha A_y(x,y) \\ A_y'(x', y') &= -\sin\alpha A_x(x,y) + \cos\alpha A_y(x,y) \end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\begin{bmatrix}A_x'(x', y') \\ A_y'(x', y')\end{bmatrix}=\begin{bmatrix}\cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix}A_x(x, y) \\ A_y(x, y)\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.4)

More generally we have

\begin{aligned}\begin{bmatrix}A_x'(x', y', z') \\ A_y'(x', y', z') \\ A_z'(x', y', z')\end{bmatrix}=\hat{O}\begin{bmatrix}A_x(x, y, z) \\ A_y(x, y, z) \\ A_z(x, y, z)\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.5)

Here $\hat{O}$ is an $SO(3)$ matrix rotating $x \rightarrow x'$

\begin{aligned}\mathbf{A}(\mathbf{x}) \cdot \mathbf{y} = \mathbf{A}'(\mathbf{x}') \cdot \mathbf{y}'\end{aligned} \hspace{\stretch{1}}(2.6)

\begin{aligned}\mathbf{A} \cdot \mathbf{B} = \text{invariant}\end{aligned} \hspace{\stretch{1}}(2.7)

A four vector field is $A^i(x)$, with $x = x^i, i = 0,1,2,3$ and we’d write

\begin{aligned}\begin{bmatrix}(x^0)' \\ (x^1)' \\ (x^2)' \\ (x^3)'\end{bmatrix}=\hat{O}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.8)

Now $\hat{O}$ is an $SO(1,3)$ matrix. Our four vector field is then

\begin{aligned}\begin{bmatrix}(A^0)' \\ (A^1)' \\ (A^2)' \\ (A^3)'\end{bmatrix}=\hat{O}\begin{bmatrix}A^0 \\ A^1 \\ A^2 \\ A^3\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.9)

We have

\begin{aligned}A^i g_{ij} x^j = \text{invariant} = {A'}^i g_{ij} {x'}^j \end{aligned} \hspace{\stretch{1}}(2.10)

From electrodynamics we know that we have a scalar field, the electrostatic potential $\phi$, and a vector field, the vector potential $\mathbf{A}$.

What’s a plausible action?

\begin{aligned}\int ds x^i g_{ij} A^j\end{aligned} \hspace{\stretch{1}}(2.11)

This isn’t translation invariant.


Next simplest is

\begin{aligned}\int ds u^i g_{ij} A^j\end{aligned} \hspace{\stretch{1}}(2.13)

Could also do

\begin{aligned}\int ds A^i g_{ij} A^j\end{aligned} \hspace{\stretch{1}}(2.14)

but it turns out that this isn’t gauge invariant (to be defined and discussed in detail).

Note that the convention for this course is to write

\begin{aligned}u^i = \left( \gamma, \gamma \frac{\mathbf{v}}{c} \right) = \frac{dx^i}{ds}\end{aligned} \hspace{\stretch{1}}(2.15)

Where $u^i$ is dimensionless ($u^i u_i = 1$). Some authors use

\begin{aligned}u^i = \left( \gamma c, \gamma \mathbf{v} \right) = \frac{dx^i}{d\tau}\end{aligned} \hspace{\stretch{1}}(2.16)

The simplest action for a four vector field $A^i$ is then

\begin{aligned}S = - m c \int ds - \frac{e}{c} \int ds u^i A_i\end{aligned} \hspace{\stretch{1}}(2.17)

(Recall that $u^i A_i = u^i g_{ij} A^j$).

In this action $e$ is nothing but a Lorentz scalar, a property of the particle that describes how it “couples” (or “feels”) the electrodynamics field.

Similarly $mc$ is a Lorentz scalar which is a property of the particle (inertia).

It turns out that all the electric charges in nature are quantized, and there are some deep reasons for this (if magnetic monopoles exist).

Another reason for charge quantization apparently has to do with gauge invariance and associated compact groups. Poppitz is amusing himself a bit here, hinting at some stuff that we can eventually learn.

Returning to our discussion, we have

\begin{aligned}S = - m c \int ds - \frac{e}{c} \int ds u^i g_{ij} A^j\end{aligned} \hspace{\stretch{1}}(2.18)

with the electrodynamics four vector potential

\begin{aligned}A^i &= (\phi, \mathbf{A}) \\ u^i &= \left(\gamma, \gamma \frac{\mathbf{v}}{c} \right) \\ u^i g_{ij} A^j &= \gamma \phi - \gamma \frac{\mathbf{v} \cdot \mathbf{A}}{c}\end{aligned} \hspace{\stretch{1}}(2.19)

\begin{aligned}S &= - m c^2 \int dt \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} - \frac{e}{c} \int c dt \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} \left( \gamma \phi - \gamma \frac{\mathbf{v}}{c} \cdot \mathbf{A} \right) \\ &= \int dt \left(- m c^2 \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} - e \phi(\mathbf{x}, t) + \frac{e}{c} \mathbf{v} \cdot \mathbf{A}(\mathbf{x}, t)\right) \\ \end{aligned}

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} = \frac{m c^2}{\sqrt{1 - \frac{\mathbf{v}^2}{c^2}}} \frac{\mathbf{v}}{c^2} + \frac{e}{c} \mathbf{A}(\mathbf{x}, t)\end{aligned} \hspace{\stretch{1}}(2.22)
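
This velocity gradient can be verified mechanically. A small sympy sketch (my own check, not part of the lecture; $\phi$ and $\mathbf{A}$ enter as plain symbols since they do not depend on $\mathbf{v}$):

```python
# A sympy spot check of 2.22 (my own, not from the lecture): differentiate
# the Lagrangian with respect to the velocity components.  phi and A are
# plain symbols here since they don't depend on v.
import sympy as sp

m, c, e = sp.symbols('m c e', positive=True)
phi = sp.Symbol('phi')
v = sp.Matrix(sp.symbols('v1:4'))
A = sp.Matrix(sp.symbols('A1:4'))

L = -m * c**2 * sp.sqrt(1 - v.dot(v) / c**2) - e * phi + (e / c) * v.dot(A)
dLdv = sp.Matrix([sp.diff(L, vi) for vi in v])

gamma = 1 / sp.sqrt(1 - v.dot(v) / c**2)
assert sp.simplify(dLdv - (m * gamma * v + (e / c) * A)) == sp.zeros(3, 1)
```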

\begin{aligned}\frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\mathbf{v}}} = m \frac{d}{dt} (\gamma \mathbf{v}) + \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {t}} + \frac{e}{c} \frac{\partial {\mathbf{A}}}{\partial {x^\alpha}} v^\alpha\end{aligned} \hspace{\stretch{1}}(2.23)

Here $\alpha,\beta = 1,2,3$ and are summed over.

For the other half of the Euler-Lagrange equations we have

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {x^\alpha}} = - e \frac{\partial {\phi}}{\partial {x^\alpha}} + \frac{e}{c} v^\beta \frac{\partial {A^\beta}}{\partial {x^\alpha}}\end{aligned} \hspace{\stretch{1}}(2.24)

Equating these, and switching to coordinates for 2.23, we have

\begin{aligned}m \frac{d}{dt} (\gamma v^\alpha) + \frac{e}{c} \frac{\partial {A^\alpha}}{\partial {t}} + \frac{e}{c} \frac{\partial {A^\alpha}}{\partial {x^\beta}} v^\beta= - e \frac{\partial {\phi}}{\partial {x^\alpha}} + \frac{e}{c} v^\beta \frac{\partial {A^\beta}}{\partial {x^\alpha}}\end{aligned} \hspace{\stretch{1}}(2.25)

A final rearrangement yields

\begin{aligned}\frac{d}{dt} m \gamma v^\alpha = e \underbrace{\left( - \frac{1}{{c}} \frac{\partial {A^\alpha}}{\partial {t}} - \frac{\partial {\phi}}{\partial {x^\alpha}} \right)}_{E^\alpha} + \frac{e}{c} v^\beta \left( \frac{\partial {A^\beta}}{\partial {x^\alpha}} - \frac{\partial {A^\alpha}}{\partial {x^\beta}} \right)\end{aligned} \hspace{\stretch{1}}(2.26)

We can identify the second term with the magnetic field, but first we have to introduce antisymmetric matrices.

# Antisymmetric matrices

\begin{aligned}M_{\mu\nu} &= \frac{\partial {A^\nu}}{\partial {x^\mu}} - \frac{\partial {A^\mu}}{\partial {x^\nu}} \\ &= \epsilon_{\mu\nu\lambda} B_\lambda,\end{aligned}

where

\begin{aligned}\epsilon_{\mu\nu\lambda} =\left\{\begin{array}{l l}0 & \quad \mbox{if any two indexes coincide} \\ 1 & \quad \mbox{for even permutations of $\mu\nu\lambda$} \\ -1 & \quad \mbox{for odd permutations of $\mu\nu\lambda$}\end{array}\right.\end{aligned} \hspace{\stretch{1}}(3.27)

Example:

\begin{aligned}\epsilon_{123} &= 1 \\ \epsilon_{213} &= -1 \\ \epsilon_{231} &= 1.\end{aligned}

We can show that

\begin{aligned}B_\lambda = \frac{1}{{2}} \epsilon_{\lambda\mu\nu} M_{\mu\nu}\end{aligned} \hspace{\stretch{1}}(3.28)

\begin{aligned}B_1 &= \frac{1}{{2}} ( \epsilon_{123} M_{23} + \epsilon_{132} M_{32}) \\ &= \frac{1}{{2}} ( M_{23} - M_{32}) \\ &= \partial_2 A_3 - \partial_3 A_2.\end{aligned}

Using

\begin{aligned}\epsilon_{\mu\nu\alpha} \epsilon_{\sigma\kappa\alpha} = \delta_{\mu\sigma} \delta_{\nu\kappa} - \delta_{\nu\sigma} \delta_{\mu\kappa},\end{aligned} \hspace{\stretch{1}}(3.29)

we can verify the identity 3.28 by expanding

\begin{aligned}\epsilon_{\mu\nu\lambda} B_\lambda&=\frac{1}{{2}} \epsilon_{\mu\nu\lambda} \epsilon_{\lambda\alpha\beta} M_{\alpha\beta} \\ &=\frac{1}{{2}} (\delta_{\mu\alpha} \delta_{\nu\beta} - \delta_{\nu\alpha} \delta_{\mu\beta})M_{\alpha\beta} \\ &=\frac{1}{{2}} (M_{\mu\nu} - M_{\nu\mu}) \\ &=M_{\mu\nu}\end{aligned}
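
Identities 3.28 and 3.29 are also easy to confirm numerically. A minimal numpy sketch (my own, with an arbitrary $\mathbf{B}$ sample) that brute-forces both:

```python
# A numpy sanity check of 3.28 and 3.29.  eps is the Levi-Civita symbol,
# built by brute force from permutation signs.
import itertools
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

# 3.29: contraction on the last index
lhs = np.einsum('mna,ska->mnsk', eps, eps)
delta = np.eye(3)
rhs = np.einsum('ms,nk->mnsk', delta, delta) - np.einsum('ns,mk->mnsk', delta, delta)
assert np.allclose(lhs, rhs)

# 3.28: B -> M -> B round trip for an arbitrary B sample
B = np.array([1.0, -2.0, 3.0])
M = np.einsum('mnl,l->mn', eps, B)       # M_{mu nu} = eps_{mu nu lambda} B_lambda
assert np.allclose(B, 0.5 * np.einsum('lmn,mn->l', eps, M))
```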

Returning to the action evaluation we have

\begin{aligned}\frac{d}{dt} ( m \gamma v^\alpha ) = e E^\alpha + \frac{e}{c} \epsilon_{\alpha\beta\gamma} v^\beta B_\gamma,\end{aligned} \hspace{\stretch{1}}(3.30)

but

\begin{aligned}\epsilon_{\alpha\beta\gamma} v^\beta B_\gamma = (\mathbf{v} \times \mathbf{B})_\alpha.\end{aligned} \hspace{\stretch{1}}(3.31)

So

\begin{aligned}\frac{d}{dt} ( m \gamma \mathbf{v} ) = e \mathbf{E} + \frac{e}{c} \mathbf{v} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(3.32)

or

\begin{aligned}\frac{d}{dt} ( \mathbf{p} ) = e \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned} \hspace{\stretch{1}}(3.33)
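
For completeness, 3.33 is straightforward to integrate numerically. A minimal sketch (my own; the unit choices $m = c = e = 1$, the constant field values, and the initial conditions are arbitrary illustrations, not anything from the lecture):

```python
# A minimal numerical integration of dp/dt = e (E + v/c x B), with
# p = m gamma v.  Units m = c = e = 1; fields and initial data are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

m = c = e = 1.0
E = np.array([0.1, 0.0, 0.0])   # constant electric field
B = np.array([0.0, 0.0, 1.0])   # constant magnetic field

def rhs(t, y):
    x, p = y[:3], y[3:]
    gamma = np.sqrt(1.0 + np.dot(p, p) / (m * c)**2)   # from p = m gamma v
    v = p / (m * gamma)
    return np.concatenate([v, e * (E + np.cross(v, B) / c)])

y0 = np.concatenate([np.zeros(3), [0.5, 0.0, 0.0]])    # initial x and p
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-10, atol=1e-12)
print("final position:", sol.y[:3, -1])
```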

\paragraph{What is the energy component of the Lorentz force equation?}

I asked this not because I don’t know (I could answer it myself from $dp/d\tau = F \cdot v/c$ in the geometric algebra formalism), but because I was curious whether he had a way of determining this from what we’ve derived so far (intuitively I’d expect this to be possible). The answer was:

Observe that this is almost a relativistic equation, but we aren’t going to get to the full equation yet. The energy component can be obtained from

\begin{aligned}\frac{du^0}{ds} = e F^{0j} u_j\end{aligned} \hspace{\stretch{1}}(3.34)

Since the full equation is

\begin{aligned}\frac{du^i}{ds} = e F^{ij} u_j\end{aligned} \hspace{\stretch{1}}(3.35)

“take with a grain of salt, may be off by sign, or factors of $c$”.

Also curious is that he claimed the energy component of this equation was not very important. Why would that be?

# Gauge transformations.

Claim

\begin{aligned}S_{\text{interaction}} = - \frac{e}{c} \int ds u^i A_i\end{aligned} \hspace{\stretch{1}}(4.36)

changes by boundary terms only under

“gauge transformation” :

\begin{aligned}A_i = A_i' + \frac{\partial {\chi}}{\partial {x^i}}\end{aligned} \hspace{\stretch{1}}(4.37)

where $\chi$ is a Lorentz scalar. This ${\partial {}}/{\partial {x^i}}$ is the four gradient.

Because the action changes only by boundary terms, the equations of motion are the same in an external $A^i$ as in an external ${A'}^i$.

Recall that the $\mathbf{E}$ and $\mathbf{B}$ fields do not change under such transformations. Let’s see how the action transforms

\begin{aligned}S &= - \frac{e}{c} \int ds u^i A_i \\ &= - \frac{e}{c} \int ds u^i \left( {A'}_i + \frac{\partial {\chi}}{\partial {x^i}} \right) \\ &= - \frac{e}{c} \int ds u^i {A'}_i - \frac{e}{c} \int ds \frac{dx^i}{ds} \frac{\partial {\chi}}{\partial {x^i}} \\ \end{aligned}

Observe that this last bit is just a chain rule expansion

\begin{aligned}\frac{d}{ds} \chi(x^0, x^1, x^2, x^3) &= \frac{\partial {\chi}}{\partial {x^0}}\frac{dx^0}{ds} + \frac{\partial {\chi}}{\partial {x^1}}\frac{dx^1}{ds} + \frac{\partial {\chi}}{\partial {x^2}}\frac{dx^2}{ds} + \frac{\partial {\chi}}{\partial {x^3}}\frac{dx^3}{ds} \\ &= \frac{\partial {\chi}}{\partial {x^i}} \frac{dx^i}{ds},\end{aligned}

so we have

\begin{aligned}S = - \frac{e}{c} \int ds u^i {A'}_i - \frac{e}{c} \int ds \frac{d \chi}{ds}.\end{aligned} \hspace{\stretch{1}}(4.38)

This allows the line integral to be evaluated, and we find that it only depends on the end points of the interval

\begin{aligned}S = - \frac{e}{c} \int ds u^i {A'}_i - \frac{e}{c} ( \chi(x_b) - \chi(x_a) ),\end{aligned} \hspace{\stretch{1}}(4.39)

which completes the proof of the claim that this gauge transformation results in an action difference that only depends on the end points of the interval.
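
The claim recalled above, that the $\mathbf{E}$ and $\mathbf{B}$ fields do not change under such transformations, can also be verified mechanically. A sympy sketch (my own check; the 3-vector form of the gauge transformation below follows from the $(+,-,-,-)$ metric convention, and the invariance is insensitive to the sign convention for $\chi$):

```python
# A sympy verification that E and B are unchanged under the gauge
# transformation.  In 3-vector form, with the (+,-,-,-) metric, the
# transformation A_i -> A_i + d(chi)/dx^i amounts to phi -> phi + (1/c) dchi/dt
# and A -> A - grad(chi).
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
chi = sp.Function('chi')(t, x, y, z)
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function('A%d' % i)(t, x, y, z) for i in (1, 2, 3)])

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])
E = lambda p, a: -grad(p) - sp.diff(a, t) / c

phi2 = phi + sp.diff(chi, t) / c
A2 = A - grad(chi)

assert sp.simplify(E(phi2, A2) - E(phi, A)) == sp.zeros(3, 1)
assert sp.simplify(curl(A2) - curl(A)) == sp.zeros(3, 1)
```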

\paragraph{What is the significance of this claim?}


## Stokes Theorem for antisymmetric tensors.

Posted by Peeter Joot on January 18, 2011

[Click here for a PDF of this post with nicer formatting]

In [3] I worked through the Geometric Algebra expression for Stokes Theorem. For a $k-1$ grade blade, the final result of that work was

\begin{aligned}\int( \nabla \wedge F ) \cdot d^k x =\frac{1}{{(k-1)!}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {F}}{\partial {a_{u}}} \cdot (dx_r \wedge dx_s \wedge \cdots \wedge dx_t)\end{aligned} \hspace{\stretch{1}}(7.44)

Let’s expand this in coordinates to attempt to get the equivalent expression for an antisymmetric tensor of rank $k-1$.

Starting with the RHS of 7.44 we have

\begin{aligned}F &= \frac{1}{{(k-1)!}}F_{\mu_1 \mu_2 \cdots \mu_{k-1} }\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \\ dx_r \wedge dx_s \wedge \cdots \wedge dx_t &=\frac{\partial {x^{\nu_1}}}{\partial {a_r}}\frac{\partial {x^{\nu_2}}}{\partial {a_s}}\cdots\frac{\partial {x^{\nu_{k-1}}}}{\partial {a_t}}\gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_{k-1}}da_r da_s \cdots da_t\end{aligned} \hspace{\stretch{1}}(7.45)

We need to expand the dot product of the wedges, for which we have

\begin{aligned}\left( \gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \right) \cdot\left( \gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_{k-1}}\right) ={\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k-1}}\end{aligned} \hspace{\stretch{1}}(7.47)

Putting all the RHS bits together we have

\begin{aligned}&\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }{\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k-1}}\frac{\partial {x^{\nu_1}}}{\partial {a_r}}\frac{\partial {x^{\nu_2}}}{\partial {a_s}}\cdots\frac{\partial {x^{\nu_{k-1}}}}{\partial {a_t}}da_r da_s \cdots da_t \\ &=\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }\epsilon^{\mu_{k-1} \mu_{k-2} \cdots \mu_{1}}\frac{\partial {x^{\mu_{k-1}}}}{\partial {a_r}}\frac{\partial {x^{\mu_{k-2}}}}{\partial {a_s}}\cdots\frac{\partial {x^{\mu_1}}}{\partial {a_t}}da_r da_s \cdots da_t \\ &=\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots,x^{\mu_1})}{\partial(a_r, a_s, \cdots, a_t)}}\right\rvert}da_r da_s \cdots da_t \\ \end{aligned}

Now, for the LHS of 7.44 we have

\begin{aligned}\nabla \wedge F &=\gamma^\mu \wedge \partial_\mu F \\ &=\frac{1}{{(k-1)!}}\frac{\partial {}}{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}\gamma^{\mu_k} \wedge\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \end{aligned}

and the volume element is

\begin{aligned}d^k x &=\frac{\partial {x^{\nu_1}}}{\partial {a_1}}\frac{\partial {x^{\nu_2}}}{\partial {a_2}}\cdots\frac{\partial {x^{\nu_{k}}}}{\partial {a_k}}\gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_k}da_1 da_2 \cdots da_k\end{aligned}

Our dot product is

\begin{aligned}\left(\gamma^{\mu_k} \wedge\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \right) \cdot\left( \gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_k} \right)={\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}{\delta^{\mu_{k}} }_{\nu_{k}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k}}\end{aligned} \hspace{\stretch{1}}(7.48)

The LHS of our k-form now evaluates to

\begin{aligned}(\gamma^\mu \wedge \partial_\mu F) \cdot d^k x &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_1} }_{\nu_{k-1}}{\delta^{\mu_k} }_{\nu_k}\epsilon^{\nu_1 \nu_2 \cdots \nu_k}\frac{\partial {x^{\nu_1}}}{\partial {a_1}}\frac{\partial {x^{\nu_2}}}{\partial {a_2}} \cdots \frac{\partial {x^{\nu_k}}}{\partial {a_k}} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}\epsilon^{\mu_{k-1} \mu_{k-2} \cdots \mu_1 \mu_k}\frac{\partial {x^{\mu_{k-1}}}}{\partial {a_1}}\frac{\partial {x^{\mu_{k-2}}}}{\partial {a_2}} \cdots \frac{\partial {x^{\mu_1}}}{\partial {a_{k-1}}}\frac{\partial {x^{\mu_k}}}{\partial {a_k}} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots x^{\mu_1},x^{\mu_k})}{\partial(a_1, a_2, \cdots, a_{k-1}, a_k)}}\right\rvert} da_1 da_2 \cdots da_k \\ \end{aligned}

Presuming no mistakes were made anywhere along the way (including in the original Geometric Algebra expression), we have arrived at Stokes Theorem for rank $k-1$ antisymmetric tensors $F$

\boxed{ \begin{aligned}&\int\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots x^{\mu_1},x^{\mu_k})}{\partial(a_1, a_2, \cdots, a_{k-1}, a_k)}}\right\rvert} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial }{\partial {a_u}} F_{\nu_1 \nu_2 \cdots \nu_{k-1} }{\left\lvert{\frac{\partial(x^{\nu_{k-1}},x^{\nu_{k-2}}, \cdots ,x^{\nu_1})}{\partial(a_r, a_s, \cdots, a_t)}}\right\rvert} da_r da_s \cdots da_t \end{aligned} } \hspace{\stretch{1}}(7.49)

The next task is to validate this, expanding it out for some specific ranks and hypervolume element types, and to compare the results with the familiar 3d expressions.
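
As a first small step in that direction, consider the $k = 2$ case, where $F_\mu$ has a single index. This is my own unpacking of 7.49, so the index ordering should be taken with a grain of salt. On the LHS the $2 \times 2$ Jacobian expands as

\begin{aligned}{\left\lvert{\frac{\partial(x^{\mu_1}, x^{\mu_2})}{\partial(a_1, a_2)}}\right\rvert}=\frac{\partial {x^{\mu_1}}}{\partial {a_1}}\frac{\partial {x^{\mu_2}}}{\partial {a_2}}-\frac{\partial {x^{\mu_1}}}{\partial {a_2}}\frac{\partial {x^{\mu_2}}}{\partial {a_1}},\end{aligned}

so that, after a dummy index interchange, the LHS is the antisymmetrized derivative contracted with the surface element

\begin{aligned}\int \frac{\partial {F_{\mu_1}}}{\partial {x^{\mu_2}}}{\left\lvert{\frac{\partial(x^{\mu_1}, x^{\mu_2})}{\partial(a_1, a_2)}}\right\rvert} da_1 da_2=\int \left( \frac{\partial {F_\nu}}{\partial {x^\mu}} - \frac{\partial {F_\mu}}{\partial {x^\nu}} \right)\frac{\partial {x^\nu}}{\partial {a_1}}\frac{\partial {x^\mu}}{\partial {a_2}}da_1 da_2.\end{aligned}

On the RHS the Jacobian is $1 \times 1$, and the $\epsilon^{r u}$ sum produces the circulation style difference

\begin{aligned}\epsilon^{r u} \int da_u \frac{\partial {F_\nu}}{\partial {a_u}} \frac{\partial {x^\nu}}{\partial {a_r}} da_r=\int da_2 \frac{\partial {F_\nu}}{\partial {a_2}} \frac{\partial {x^\nu}}{\partial {a_1}} da_1-\int da_1 \frac{\partial {F_\nu}}{\partial {a_1}} \frac{\partial {x^\nu}}{\partial {a_2}} da_2,\end{aligned}

which, if no factors were dropped, has exactly the structure of Green’s theorem in the parameters $a_1, a_2$.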

# References

[1] L.D. Landau and E.M. Lifshits. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Peeter Joot. Stokes theorem derivation without tensor expansion of the blade [online]. http://sites.google.com/site/peeterjoot/math2009/stokesNoTensor.pdf.

## Poincare transformations

Posted by Peeter Joot on July 6, 2009

[Click here for a PDF of this post with nicer formatting]

# Motivation

In ([1]) a Poincare transformation is used to develop the symmetric stress energy tensor directly, in contrast to the non-symmetric canonical stress energy tensor that results from spacetime translation.

Here I attempt to decode one part of this article: the use of a Poincare transformation.

# Incremental transformation in GA form.

Equation (11) in the article is labeled an infinitesimal Poincare transformation

\begin{aligned}{x'}^\mu&=x^\mu+ {{\epsilon}^\mu}_\nu x^\nu+ {\epsilon}^\mu \end{aligned} \quad\quad\quad(1)

It is stated that an antisymmetrization condition $\epsilon_{\mu\nu} = -\epsilon_{\nu\mu}$ applies. This is somewhat confusing since the infinitesimal transformation is given by a mixed upper and lower index tensor. Due to the antisymmetry, perhaps this is all a coordinate statement of the following vector to vector linear transformation

\begin{aligned}x' = x + \epsilon + A \cdot x \end{aligned} \quad\quad\quad(2)

This transformation is less restricted than a plain old spacetime translation, as it also contains a projective term, where $x$ is projected onto the spacetime (or spatial) plane $A$ (a bivector), plus a rotation in that plane.

Writing as usual

\begin{aligned}x = \gamma_\mu x^\mu \end{aligned}

So that components are recovered by taking dot products, as in
\begin{aligned}x^\mu = x \cdot \gamma^\mu \end{aligned}

For the bivector term, write

\begin{aligned}A = c \wedge d = c^\alpha d^\beta (\gamma_\alpha \wedge \gamma_\beta) \end{aligned}

For the coordinates of the $A \cdot x$ term we have
\begin{aligned}(A \cdot x ) \cdot \gamma^\mu &=c^\alpha d^\beta x_\sigma ((\gamma_\alpha \wedge \gamma_\beta) \cdot \gamma^\sigma) \cdot \gamma^\mu \\ &=c^\alpha d^\beta x_\sigma ( {\delta_\alpha}^\mu {\delta_\beta}^\sigma -{\delta_\beta}^\mu {\delta_\alpha}^\sigma ) \\ &=(c^\mu d^\sigma -c^\sigma d^\mu ) x_\sigma \end{aligned}

This allows for an identification $\epsilon^{\mu\sigma} = c^\mu d^\sigma -c^\sigma d^\mu$ which is antisymmetric as required. With that identification we can write (1) via the equivalent vector relation (2) if we write

\begin{aligned}{\epsilon^\mu}_\sigma x^\sigma = (c^\mu d_\sigma -c_\sigma d^\mu ) x^\sigma \end{aligned}

Where ${\epsilon^\mu}_\sigma$ is defined implicitly in terms of components of the bivector $A = c \wedge d$.

Is this what a Poincare transformation is? The Poincare Transformation article suggests not. This article suggests that the Poincare transformation is a spacetime translation plus a Lorentz transformation (composition of boosts and rotations). That Lorentz transformation will not be antisymmetric however, so how can these be reconciled? The key is probably the fact that this was an infinitesimal Poincare transformation, so let’s consider a Taylor expansion of the Lorentz boost or rotation rotor, considering instead a transformation of the following form

\begin{aligned}x' &= x + \epsilon + R x \tilde{R} \\ R \tilde{R} &= 1 \end{aligned} \quad\quad\quad(3)

In particular, let’s look at the Lorentz transformation in terms of the exponential form
\begin{aligned}R = e^{I \theta/2} \end{aligned}

Here $\theta$ is either the angle of rotation (when the bivector is a unit spatial plane such as $I = \gamma_k \wedge \gamma_m$), or a rapidity angle (when the bivector is a unit spacetime plane such as $I = \gamma_k \wedge \gamma_0$).

Ignoring the translation in (3) for now, to calculate the first order term in Taylor series we need

\begin{aligned}\frac{dx'}{d\theta} &= \frac{dR}{d\theta} x \tilde{R} +{R} x \frac{d\tilde{R}}{d\theta} \\ &= \frac{dR}{d\theta} \tilde{R} R x \tilde{R} +{R} x \tilde{R} R \frac{d\tilde{R}}{d\theta} \\ &=\frac{1}{2} ( \Omega x' + x' \tilde{\Omega} ) \end{aligned}

where
\begin{aligned}\frac{1}{2}\Omega = \frac{dR}{d\theta} \tilde{R} \end{aligned}

Now, what is the grade of the product $\Omega$? We have both $dR/d\theta$ and $R$ in $\{\bigwedge^0 \oplus \bigwedge^2\}$ so the product can only have even grades $\Omega \in \{\bigwedge^0 \oplus \bigwedge^2 \oplus \bigwedge^4\}$, but the unitary constraint on $R$ restricts this.

Since $R \tilde{R} = 1$ the derivative of this is zero

\begin{aligned}\frac{dR}{d\theta} \tilde{R} + {R} \frac{d\tilde{R}}{d\theta} = 0 \end{aligned}

Or
\begin{aligned}\frac{dR}{d\theta} \tilde{R} = - \left( \frac{dR}{d\theta} \tilde{R} \right)^{\tilde{}} \end{aligned}

Antisymmetry rules out grade zero and four terms, leaving only the possibility of grade 2. That leaves

\begin{aligned}\frac{dx'}{d\theta} = \frac{1}{2}(\Omega x' - x' \Omega) = \Omega \cdot x' \end{aligned}

And the first order Taylor expansion around $\theta =0$ is

\begin{aligned}x'(d\theta) &\approx x'(\theta = 0) + ( \Omega d\theta ) \cdot x' \\ &= x + ( \Omega d\theta ) \cdot x' \end{aligned}

This is close to the postulated form in (2), but differs in one notable way. The dot product with the antisymmetric form $A = \Omega \, d\theta = 2 \frac{dR}{d\theta} \tilde{R} d\theta$ is a dot product with $x'$ and not $x$! One can however invert the identity, writing $x$ in terms of $x'$ (to first order)

\begin{aligned}x = x' - ( \Omega d\theta ) \cdot x' \end{aligned}

Replaying this argument in fast forward for the inverse transformation should give us a relation for $x'$ in terms of $x$ and the incremental Lorentz transform

\begin{aligned}x' &= R x \tilde{R} \\ \implies \\ x &= \tilde{R} x' {R} \end{aligned}

\begin{aligned}\frac{dx}{d\theta}&= \frac{d \tilde{R}}{d\theta} R \tilde{R} x' R + \tilde{R} x' R \tilde{R} \frac{d {R}}{d\theta} \\ &= \left(2 \frac{d \tilde{R}}{d\theta} R \right) \cdot x \end{aligned}

So we have our incremental transformation given by

\begin{aligned}x'= x - \left(2 \frac{d \tilde{R}}{d\theta} R d\theta \right) \cdot x \end{aligned} \quad\quad\quad(5)

# Consider a specific infinitesimal spatial rotation.

The signs and primes involved in arriving at (5) were a bit confusing. To firm things up a bit, let’s consider a specific example.

For a rotation in the $x,y$ plane, we have

\begin{aligned}R &= e^{\gamma_1 \gamma_2 \theta/2} \\ x' &= R x \tilde{R} \end{aligned}

Here also it is easy to get the signs wrong, and it is worth pointing out the sign convention picked here for the Dirac basis is ${\gamma_0}^2 = -{\gamma_k}^2 = 1$. To verify that $R$ does the desired job, we have

\begin{aligned}R \gamma_1 \tilde{R}&=\gamma_1 \tilde{R^2} \\ &=\gamma_1 e^{\gamma_2 \gamma_1 \theta} \\ &=\gamma_1 (\cos\theta + \gamma_2 \gamma_1 \sin\theta) \\ &=\gamma_1 (\cos\theta - \gamma_1 \gamma_2 \sin\theta) \\ &=\gamma_1 \cos\theta + \gamma_2 \sin\theta \end{aligned}

and

\begin{aligned}R \gamma_2 \tilde{R}&=\gamma_2 \tilde{R^2} \\ &=\gamma_2 e^{\gamma_2 \gamma_1 \theta} \\ &=\gamma_2 (\cos\theta + \gamma_2 \gamma_1 \sin\theta) \\ &=\gamma_2 \cos\theta - \gamma_1 \sin\theta \end{aligned}

For $\gamma_3$ or $\gamma_0$, the rotor $R$ commutes, so we have

\begin{aligned}R \gamma_3 \tilde{R} &= R \tilde{R} \gamma_3 = \gamma_3 \\ R \gamma_0 \tilde{R} &= R \tilde{R} \gamma_0 = \gamma_0 \end{aligned}

(leaving the perpendicular basis directions unchanged).

Summarizing the action on the basis vectors in matrix form this is

\begin{aligned}\begin{bmatrix}\gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \end{bmatrix}\rightarrow\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}\gamma_0 \\ \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \end{bmatrix} \end{aligned}

Observe that the basis vectors transform with the transposed matrix to the coordinates, and we have

\begin{aligned}\gamma_0 x^0+ \gamma_1 x^1+ \gamma_2 x^2+ \gamma_3 x^3 \rightarrow\gamma_0 x^0+x^1 (\gamma_1 \cos\theta + \gamma_2 \sin\theta)+x^2 (\gamma_2 \cos\theta - \gamma_1 \sin\theta)+\gamma_3 x^3 \end{aligned}

Dotting ${x'}^\mu = x' \cdot \gamma^\mu$ we have

\begin{aligned}x^0 &\rightarrow x^0 \\ x^1 &\rightarrow x^1 \cos\theta - x^2 \sin\theta \\ x^2 &\rightarrow x^1 \sin\theta +x^2 \cos\theta \\ x^3 &\rightarrow x^3 \end{aligned}

In matrix form this is the expected and familiar rotation matrix in coordinate form
\begin{aligned}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}\rightarrow\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix} \end{aligned}
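
This coordinate matrix can also be cross-checked numerically. Here is a numpy/scipy sketch (my own; it assumes the Dirac matrix representation as a stand-in for the $\gamma_\mu$ basis, builds the rotor by matrix exponentiation, and uses $\text{tr}(\gamma_\mu \gamma_\nu) = 4 \eta_{\mu\nu}$ to pick off the coordinates of $R x \tilde{R}$):

```python
# Numeric cross-check of the rotation coordinate matrix, using the Dirac
# matrix representation of the gamma basis.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g = [np.block([[I2, Z2], [Z2, -I2]])]                  # gamma_0
g += [np.block([[Z2, s], [-s, Z2]]) for s in sig]      # gamma_1, gamma_2, gamma_3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

theta = 0.3
R = expm(g[1] @ g[2] * theta / 2)        # rotor e^{gamma_1 gamma_2 theta/2}
Rrev = expm(-g[1] @ g[2] * theta / 2)    # reversion flips the bivector sign

xc = np.array([4.0, 1.0, 2.0, 3.0])      # arbitrary coordinates x^mu
X = sum(xc[mu] * g[mu] for mu in range(4))
Xp = R @ X @ Rrev

# x'^mu = (1/4) eta^{mu mu} tr(gamma_mu X'), no sum over mu
xp = np.array([eta[mu, mu] * np.trace(g[mu] @ Xp).real / 4 for mu in range(4)])

ct, st = np.cos(theta), np.sin(theta)
M = np.array([[1, 0, 0, 0], [0, ct, -st, 0], [0, st, ct, 0], [0, 0, 0, 1]])
assert np.allclose(xp, M @ xc)
```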

Moving on to the initial verification we have

\begin{aligned}2 \frac{d\tilde{R}}{d\theta} &= 2\frac{d}{d\theta} e^{\gamma_2\gamma_1 \theta/2} \\ &= \gamma_2 \gamma_1 e^{\gamma_2\gamma_1 \theta/2} \end{aligned}

So we have

\begin{aligned}2 \frac{d\tilde{R}}{d\theta} R &= \gamma_2 \gamma_1 e^{\gamma_2\gamma_1 \theta/2} e^{\gamma_1\gamma_2 \theta/2} \\ &= \gamma_2 \gamma_1 \end{aligned}

The antisymmetric form $\epsilon_{\mu\nu}$ in this case therefore appears to be nothing more than the unit bivector for the plane of rotation! We should now be able to verify the incremental transformation result from (5), which is in this specific case now calculated to be

\begin{aligned}x'= x + d\theta (\gamma_1 \gamma_2) \cdot x \end{aligned} \quad\quad\quad(8)

As a final check let’s look at the action of rotation part of the transformation (8) on the coordinates $x^\mu$. Only the $x^1$ and $x^2$ coordinates need be considered since there is no projection of $\gamma_0$ or $\gamma_3$ components onto the plane $\gamma_1 \gamma_2$.

\begin{aligned}d\theta (\gamma_1 \gamma_2) \cdot (x^1 \gamma_1 + x^2 \gamma_2)&= d\theta {\left\langle{{ \gamma_1 \gamma_2 (x^1 \gamma_1 + x^2 \gamma_2) }}\right\rangle}_{1} \\ &= d\theta (\gamma_2 x^1 - \gamma_1 x^2) \end{aligned}

Now compare to the incremental transformation on the coordinates in matrix form. That is

\begin{aligned}\delta R&=d\theta \frac{d}{d\theta}{\left.\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\right\vert}_{\theta=0} \\ &=d\theta{\left.\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & -\sin\theta & -\cos\theta & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}\right\vert}_{\theta=0} \\ &=d\theta\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{aligned}

So acting on the coordinate vector we have

\begin{aligned}\delta R \begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3\end{bmatrix} &= d\theta\begin{bmatrix}0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3\end{bmatrix} \\ &=d\theta\begin{bmatrix}0 \\ -x^2 \\ x^1 \\ 0\end{bmatrix} \end{aligned}

This is exactly what we got above with the bivector dot product. Good.

# Consider a specific infinitesimal boost.

For a boost along the $x$ axis we have

\begin{aligned}R &= e^{\gamma_0\gamma_1 \alpha/2} \\ x' &= R x \tilde{R} \end{aligned}

Verifying, we have

\begin{aligned}x^0 \gamma_0 &\rightarrow x^0 ( \cosh\alpha + \gamma_0 \gamma_1 \sinh\alpha ) \gamma_0 \\ &= x^0 ( \gamma_0 \cosh\alpha - \gamma_1 \sinh\alpha ) \end{aligned}

\begin{aligned}x^1 \gamma_1 &\rightarrow x^1 ( \cosh\alpha + \gamma_0 \gamma_1 \sinh\alpha ) \gamma_1 \\ &= x^1 ( \gamma_1 \cosh\alpha - \gamma_0 \sinh\alpha ) \end{aligned}

Dot products recover the familiar boost matrix
\begin{aligned}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix}'&=\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \\ \end{bmatrix} \end{aligned}

Now, how about the incremental transformation given by (5). A quick calculation shows that we have

\begin{aligned}x' = x + d\alpha (\gamma_0 \gamma_1) \cdot x \end{aligned} \quad\quad\quad(11)
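
As with the rotation example, (11) can be checked against the boost matrix directly. A small numpy sketch (mine; finite differences rather than anything exact) confirming that the $\alpha$ derivative of the boost matrix at $\alpha = 0$ matches the generator implied by $d\alpha \, (\gamma_0 \gamma_1) \cdot x$, namely $\delta x^0 = -d\alpha \, x^1$, $\delta x^1 = -d\alpha \, x^0$:

```python
# Finite-difference check that d/d(alpha) of the boost matrix at alpha = 0
# is the generator implied by (11).
import numpy as np

def boost(a):
    ch, sh = np.cosh(a), np.sinh(a)
    return np.array([[ch, -sh, 0, 0], [-sh, ch, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1]])

h = 1e-6
G = (boost(h) - boost(-h)) / (2 * h)     # dM/d(alpha) at alpha = 0
expected = np.zeros((4, 4))
expected[0, 1] = expected[1, 0] = -1.0   # from d(alpha) (gamma_0 gamma_1) . x
assert np.allclose(G, expected, atol=1e-9)
```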

Just like the (8) case for a rotation in the $x y$ plane, the antisymmetric form is again the unit bivector of the rotation plane (this time the unit bivector in the spacetime plane of the boost.)

This completes the examination of two specific incremental Lorentz transformations. It is clear that the result will be the same for an arbitrarily oriented bivector, and the original guess (2) of a geometric equivalent of tensor relation (1) was correct, provided that $A$ is a unit bivector scaled by the magnitude of the incremental transformation.

The specific case not treated, however, is a transformation where the orientation of the bivector is allowed to change. Parameterizing that by angle is not such an obvious procedure.

# In tensor form.

For an arbitrary bivector $A = a \wedge b$, we can calculate ${\epsilon^\sigma}_\alpha$. That is

\begin{aligned}\epsilon^{\sigma\alpha} x_\alpha&=d\theta\frac{((a^\mu \gamma_\mu \wedge b^\nu \gamma_\nu) \cdot ( x_\alpha \gamma^\alpha)) \cdot \gamma^\sigma }{{\left\lvert{((a^\mu \gamma_\mu) \wedge (b^\nu \gamma_\nu)) \cdot ((a_\alpha \gamma^\alpha) \wedge (b_\beta \gamma^\beta))}\right\rvert}^{1/2}} \\ &=\frac{ a^\sigma b^\alpha - a^\alpha b^\sigma }{{\left\lvert{a^\mu b^\nu( a_\nu b_\mu - a_\mu b_\nu)}\right\rvert}^{1/2}} x_\alpha \end{aligned}

So we have

\begin{aligned}{\epsilon^\sigma}_\alpha&=d\theta\frac{ a^\sigma b_\alpha - a_\alpha b^\sigma }{{\left\lvert{a^\mu b^\nu( a_\nu b_\mu - a_\mu b_\nu)}\right\rvert}^{1/2}} \end{aligned}

The denominator can be subsumed into $d\theta$, so the important factor is just the numerator, which encodes an incremental boost or rotation in some arbitrary spacetime or spatial plane (respectively). The associated antisymmetry can be viewed as a consequence of the bivector nature of the product of the rotor derivative with the reversed rotor, $(dR/d\theta)\tilde{R}$.

# References

[1] M. Montesinos and E. Flores. {Symmetric energy-momentum tensor in Maxwell, Yang-Mills, and Proca theories obtained using only Noether’s theorem}. Arxiv preprint hep-th/0602190, 2006.