# Peeter Joot's Blog.


## PHY450H1S. Relativistic Electrodynamics Lecture 13 (Taught by Prof. Erich Poppitz). Variational principle for the field.

Posted by peeterjoot on February 22, 2011

Covering chapter 4 material from the text [1].

Covering lecture notes pp.103-113: variational principle for the electromagnetic field and the relevant boundary conditions (103-105); the second set of Maxwell’s equations from the variational principle (106-108); Maxwell’s equations in vacuum and the wave equation in the non-relativistic Coulomb gauge (109-111)

# Review. Our action.

\begin{aligned}S &= S_{\text{particles}} + S_{\text{interaction}} + S_{\text{EM field}} \\ &= \sum_A \int_{x_A^i(\tau)} ds ( -m_A c )- \sum_A\frac{e_A}{c}\int dx_A^i A_i(x_A)- \frac{1}{{16 \pi c}} \int d^4 x F^{ij} F_{ij}.\end{aligned}

Our dynamics variables are

\begin{aligned}\left\{\begin{array}{l l}x_A^i(\tau) & \quad \mbox{$A = 1, \cdots, N$} \\ A^i(x) & \quad \mbox{$\forall x$}\end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.1)

We saw that the interaction term could also be written in terms of a delta function current, with

\begin{aligned}S_{\text{interaction}}= -\frac{1}{{c^2}} \int d^4x j^i(x) A_i(x),\end{aligned} \hspace{\stretch{1}}(2.2)

and

\begin{aligned}j^i(x) = \sum_A c e_A \int dx_A^i \delta^4( x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(2.3)
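As a sanity check (my own unpacking, not from the lecture), the time component of this current density recovers the expected charge density. Performing the worldline integral over $x_A^0$ against the time part of the delta function, we have

\begin{aligned}j^0(\mathbf{x}, t)&= \sum_A c e_A \int dx_A^0 \, \delta( x^0 - x_A^0 ) \delta^3( \mathbf{x} - \mathbf{x}_A ) \\ &= \sum_A c e_A \delta^3( \mathbf{x} - \mathbf{x}_A(t) ) \\ &= c \rho(\mathbf{x}, t),\end{aligned}

consistent with the identification $j^0 = c \rho$ used below.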

Variation with respect to $x_A^i(\tau)$ gave us

\begin{aligned}m_A c \frac{d{{u^i_A}}}{ds} = \frac{e_A}{c} u_{A j} F^{ij}.\end{aligned} \hspace{\stretch{1}}(2.4)

Note that it’s easy to get the sign mixed up here. With our $(+,-,-,-)$ metric tensor, if the second index is the summation index, we have a positive sign.

Only the $S_{\text{particles}}$ and $S_{\text{interaction}}$ depend on $x_A^i(\tau)$.
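For reference (a standard unpacking, not part of the lecture summary), the spatial components of 2.4 are just the familiar Lorentz force law, with $\mathbf{p} = \gamma m_A \mathbf{v}$

\begin{aligned}\frac{d\mathbf{p}}{dt} = e_A \left( \mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B} \right).\end{aligned}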

# The field action variation.

\paragraph{Today:} We’ll find the EOM for $A^i(x)$. The dynamical degrees of freedom are the fields $A^i(\mathbf{x},t)$.

\begin{aligned}S[A^i(\mathbf{x}, t)] = -\frac{1}{{16 \pi c}} \int d^4x F_{ij}F^{ij} - \frac{1}{{c^2}} \int d^4 x A^i j_i.\end{aligned} \hspace{\stretch{1}}(3.5)

Here $j^i$ are treated as “sources”.

We demand that

\begin{aligned}\delta S = S[ A^i(\mathbf{x}, t) + \delta A^i(\mathbf{x}, t)] - S[ A^i(\mathbf{x}, t) ] = 0 + O\left( (\delta A)^2 \right).\end{aligned} \hspace{\stretch{1}}(3.6)

We need to impose two conditions.
\begin{itemize}
\item At spatial $\infty$, i.e. at ${\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty, \forall t$, we’ll impose the condition

\begin{aligned}{\left.{{A^i(\mathbf{x}, t)}}\right\vert}_{{{\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty}} \rightarrow 0.\end{aligned} \hspace{\stretch{1}}(3.7)

This is sensible, because fields are created by charges, and charges are assumed to be localized in a bounded region. The field outside of the charges will $\rightarrow 0$ as ${\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty$. Initially we will treat the integration range as finite and bounded, and only later allow the boundary to go to infinity.

\item at $t = -T$ and $t = T$ we’ll imagine that the values of $A^i(\mathbf{x}, \pm T)$ are fixed.

This is analogous to $x(t_i) = x_1$ and $x(t_f) = x_2$ in particle mechanics.

Since $A^i(\mathbf{x}, \pm T)$ is given, equivalent to specifying the initial and final field configurations, the variation at the time boundaries is zero

\begin{aligned}\delta A^i(\mathbf{x}, \pm T) = 0.\end{aligned} \hspace{\stretch{1}}(3.8)

\end{itemize}

PICTURE: a cylinder in spacetime, with an attempt to depict the boundary.

# Computing the variation.

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]= -\frac{1}{{16 \pi c}} \int d^4 x \delta (F_{ij}F^{ij}) - \frac{1}{{c^2}} \int d^4 x \delta(A^i) j_i.\end{aligned} \hspace{\stretch{1}}(4.9)

Looking first at the variation of just the $F^2$ bit we have

\begin{aligned}\delta (F_{ij}F^{ij})&=\delta(F_{ij}) F^{ij} + F_{ij} \delta(F^{ij}) \\ &=2 \delta(F^{ij}) F_{ij} \\ &=2 \delta(\partial^i A^j - \partial^j A^i) F_{ij} \\ &=2 \delta(\partial^i A^j) F_{ij} - 2 \delta(\partial^j A^i) F_{ij} \\ &=2 \delta(\partial^i A^j) F_{ij} - 2 \delta(\partial^i A^j) F_{ji} \\ &=4 \delta(\partial^i A^j) F_{ij} \\ &=4 F_{ij} \partial^i \delta(A^j).\end{aligned}

Our variation is now reduced to

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]&= -\frac{1}{{4 \pi c}} \int d^4 x F_{ij} \partial^i \delta(A^j) - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i) \\ &= -\frac{1}{{4 \pi c}} \int d^4 x F^{ij} \frac{\partial {}}{\partial {x^i}} \delta(A_j) - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i).\end{aligned}

We can integrate this first term by parts

\begin{aligned}\int d^4 x F^{ij} \frac{\partial {}}{\partial {x^i}} \delta(A_j)&=\int d^4 x \frac{\partial {}}{\partial {x^i}} \left( F^{ij} \delta(A_j) \right)-\int d^4 x \left( \frac{\partial {}}{\partial {x^i}} F^{ij} \right) \delta(A_j) \end{aligned}

The first term is a four dimensional divergence, with the contraction of the four gradient $\partial_i$ with a four vector $B^i = F^{ij} \delta(A_j)$.

Prof. Poppitz chose a $dx^0 d^3 \mathbf{x}$ split of $d^4 x$ to illustrate that this can be viewed as regular old spatial three vector divergences. It is probably more rigorous to mandate that the four volume element is oriented $d^4 x = (1/4!)\epsilon_{ijkl} dx^i dx^j dx^k dx^l$, and then utilize the 4D version of the divergence theorem (or its Stokes Theorem equivalent). The completely antisymmetric tensor should do most of the work required to express the oriented boundary volume.

Because we have specified that $\delta A^i$ is zero on the time boundary, and that $A^i$ (and hence $F^{ij}$) falls off at spatial infinity, these boundary terms are killed off. We are left with

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]&= \frac{1}{{4 \pi c}} \int d^4 x \delta (A_j) \partial_i F^{ij} - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i) \\ &=\int d^4 x \delta A_j(x)\left(\frac{1}{{4 \pi c}} \partial_i F^{ij}(x) - \frac{1}{{c^2}} j^j\right) \\ &= 0.\end{aligned}

This gives us

\begin{aligned}\boxed{\partial_i F^{ij} = \frac{4 \pi}{c} j^j}\end{aligned} \hspace{\stretch{1}}(4.10)

# Unpacking these.

Recall that the Bianchi identity

\begin{aligned}\epsilon^{ijkl} \partial_j F_{kl} = 0,\end{aligned} \hspace{\stretch{1}}(5.11)

gave us

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} &= -\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(5.12)

How about the EOM that we have found by varying the action? One of those equations is

\begin{aligned}\partial_\alpha F^{\alpha 0} = \frac{4 \pi}{c} j^0 = 4 \pi \rho,\end{aligned} \hspace{\stretch{1}}(5.14)

since $j^0 = c \rho$.

Because

\begin{aligned}F^{\alpha 0} = (\mathbf{E})^\alpha,\end{aligned} \hspace{\stretch{1}}(5.15)

we have

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \rho.\end{aligned} \hspace{\stretch{1}}(5.16)

The messier one to deal with is

\begin{aligned}\partial_i F^{i\alpha} = \frac{4 \pi}{c} j^\alpha.\end{aligned} \hspace{\stretch{1}}(5.17)

Splitting out the spatial and time indexes for the four gradient we have

\begin{aligned}\partial_i F^{i\alpha}&= \partial_\beta F^{\beta \alpha} + \partial_0 F^{0 \alpha} \\ &= \partial_\beta F^{\beta \alpha} - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}}.\end{aligned}

The spatial index tensor element is

\begin{aligned}F^{\beta \alpha} &= \partial^\beta A^\alpha - \partial^\alpha A^\beta \\ &= - \frac{\partial {A^\alpha}}{\partial {x^\beta}} + \frac{\partial {A^\beta}}{\partial {x^\alpha}} \\ &= \epsilon^{\alpha\beta\gamma} B^\gamma,\end{aligned}

so the sum becomes

\begin{aligned}\partial_i F^{i\alpha}&= \partial_\beta ( \epsilon^{\alpha\beta\gamma} B^\gamma) - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}} \\ &= \epsilon^{\beta\gamma\alpha} \partial_\beta B^\gamma - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}} \\ &= (\boldsymbol{\nabla} \times \mathbf{B})^\alpha - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}}.\end{aligned}

This gives us

\begin{aligned}\frac{4 \pi}{c} j^\alpha= (\boldsymbol{\nabla} \times \mathbf{B})^\alpha - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(5.18)

or in vector form

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} - \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} = \frac{4 \pi}{c} \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.19)

Summarizing what we know so far, we have

\begin{aligned}\boxed{\begin{aligned}\partial_i F^{ij} &= \frac{4 \pi}{c} j^j \\ \epsilon^{ijkl} \partial_j F_{kl} &= 0\end{aligned}}\end{aligned} \hspace{\stretch{1}}(5.20)

or in vector form

\begin{aligned}\boxed{\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 4 \pi \rho \\ \boldsymbol{\nabla} \times \mathbf{B} -\frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{j} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} +\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0\end{aligned}}\end{aligned} \hspace{\stretch{1}}(5.21)
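These unpacked equations can be double checked symbolically. The following sympy sketch (the symbol and function names are mine, not from the lecture) builds $F^{ij}$ from arbitrary potentials with the $(+,-,-,-)$ metric, and confirms that the components of $\partial_i F^{ij}$ are exactly $\boldsymbol{\nabla} \cdot \mathbf{E}$ and $\boldsymbol{\nabla} \times \mathbf{B} - (1/c) \partial_t \mathbf{E}$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
xs = [x, y, z]

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(n)(t, x, y, z) for n in ('Ax', 'Ay', 'Az')]

g = [1, -1, -1, -1]                        # (+,-,-,-) metric (diagonal)
A_up = [phi] + A                           # A^i = (phi, A)
A_lo = [g[i] * A_up[i] for i in range(4)]  # A_i

def d(i, f):                               # partial_i, with x^0 = c t
    return sp.diff(f, t) / c if i == 0 else sp.diff(f, xs[i - 1])

F_lo = [[d(i, A_lo[j]) - d(j, A_lo[i]) for j in range(4)] for i in range(4)]
F_up = [[g[i] * g[j] * F_lo[i][j] for j in range(4)] for i in range(4)]

# E = -grad phi - (1/c) dA/dt, B = curl A
E = [-sp.diff(phi, xs[a]) - sp.diff(A[a], t) / c for a in range(3)]
B = [sp.diff(A[2], y) - sp.diff(A[1], z),
     sp.diff(A[0], z) - sp.diff(A[2], x),
     sp.diff(A[1], x) - sp.diff(A[0], y)]

# LHS of the sourced Maxwell equation: partial_i F^{ij}
lhs = [sum(d(i, F_up[i][j]) for i in range(4)) for j in range(4)]

div_E = sum(sp.diff(E[a], xs[a]) for a in range(3))
curl_B = [sp.diff(B[2], y) - sp.diff(B[1], z),
          sp.diff(B[0], z) - sp.diff(B[2], x),
          sp.diff(B[1], x) - sp.diff(B[0], y)]

# j = 0 component: div E; spatial components: curl B - (1/c) dE/dt
assert sp.simplify(lhs[0] - div_E) == 0
for a in range(3):
    assert sp.simplify(lhs[1 + a] - (curl_B[a] - sp.diff(E[a], t) / c)) == 0
```

The $F^{\alpha 0} = (\mathbf{E})^\alpha$ identification falls out of the same construction.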

# Speed of light

\paragraph{Claim}: “$c$” is the speed of EM waves in vacuum.

Study equations in vacuum (no sources, so $j^i = 0$) for $A^i = (\phi, \mathbf{A})$.

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(6.22)

where

\begin{aligned}\mathbf{E} &= - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \hspace{\stretch{1}}(6.24)

In terms of potentials

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) &= \boldsymbol{\nabla} \times \mathbf{B} \\ &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ &= \frac{1}{{c}} \frac{\partial {}}{\partial {t}} \left( - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \right) \\ &= -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} \end{aligned}

Since we also have

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A},\end{aligned} \hspace{\stretch{1}}(6.26)

some rearrangement gives

\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) = \boldsymbol{\nabla}^2 \mathbf{A} -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2}.\end{aligned} \hspace{\stretch{1}}(6.27)

The remaining equation $\boldsymbol{\nabla} \cdot \mathbf{E} = 0$, in terms of potentials is

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = - \boldsymbol{\nabla}^2 \phi - \frac{1}{{c}} \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}}}{\partial {t}} \end{aligned} \hspace{\stretch{1}}(6.28)

We can make a gauge transformation that renders 6.28 trivial, and reduces 6.27 to a wave equation.

\begin{aligned}(\phi, \mathbf{A}) \rightarrow (\phi', \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(6.29)

with

\begin{aligned}\phi &= \phi' + \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &= \mathbf{A}' - \boldsymbol{\nabla} \chi\end{aligned} \hspace{\stretch{1}}(6.30)
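Before fixing the gauge, it is worth verifying that such a transformation leaves the observable fields untouched. A small sympy sketch (using the convention $\phi' = \phi - (1/c)\partial_t \chi$, $\mathbf{A}' = \mathbf{A} + \boldsymbol{\nabla} \chi$, and arbitrary names for the potentials) confirms that $\mathbf{E}$ and $\mathbf{B}$ are gauge invariant:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
xs = [x, y, z]

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(n)(t, x, y, z) for n in ('Ax', 'Ay', 'Az')]
chi = sp.Function('chi')(t, x, y, z)

# gauge transformed potentials
phi_p = phi - sp.diff(chi, t) / c
A_p = [A[a] + sp.diff(chi, xs[a]) for a in range(3)]

def fields(phi_, A_):
    """E = -grad phi - (1/c) dA/dt, B = curl A."""
    E = [-sp.diff(phi_, xs[a]) - sp.diff(A_[a], t) / c for a in range(3)]
    B = [sp.diff(A_[2], y) - sp.diff(A_[1], z),
         sp.diff(A_[0], z) - sp.diff(A_[2], x),
         sp.diff(A_[1], x) - sp.diff(A_[0], y)]
    return E, B

E, B = fields(phi, A)
Ep, Bp = fields(phi_p, A_p)

for a in range(3):
    assert sp.simplify(Ep[a] - E[a]) == 0   # E unchanged
    assert sp.simplify(Bp[a] - B[a]) == 0   # B unchanged
```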

Can choose $\chi(\mathbf{x}, t)$ to make $\phi' = 0$ ($\forall \phi \exists \chi, \phi' = 0$)

\begin{aligned}\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \chi(\mathbf{x}, t) = \phi(\mathbf{x}, t)\end{aligned} \hspace{\stretch{1}}(6.32)

\begin{aligned}\chi(\mathbf{x}, t) = c \int_{-\infty}^t dt' \phi(\mathbf{x}, t')\end{aligned} \hspace{\stretch{1}}(6.33)

We can also find a transformation that allows $\boldsymbol{\nabla} \cdot \mathbf{A} = 0$

\paragraph{Q:} What would that second transformation be explicitly?
\paragraph{A:} To be revisited next lecture, when this is covered in full detail.

This is the Coulomb gauge

\begin{aligned}\phi &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(6.34)

From 6.27, we then have

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}'}{\partial t^2} -\boldsymbol{\nabla}^2 \mathbf{A}' = 0\end{aligned} \hspace{\stretch{1}}(6.36)

which is the wave equation for the propagation of the vector potential $\mathbf{A}'(\mathbf{x}, t)$ through space at velocity $c$, confirming that $c$ is the speed of electromagnetic propagation (the speed of light).
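As a cross-check (not from the lecture), any profile propagating at speed $c$ satisfies this wave equation. A short sympy verification in one spatial dimension, with an arbitrary profile $f$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)
f = sp.Function('f')

# any profile moving rigidly at speed c
A = f(x - c * t)

# (1/c^2) d^2/dt^2 - d^2/dx^2 applied to the profile
wave_op = sp.diff(A, t, 2) / c**2 - sp.diff(A, x, 2)
assert sp.simplify(wave_op) == 0
```

The same check passes for $f(x + c t)$, the left-moving solution.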

# References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

## Exploring Stokes Theorem in tensor form.

Posted by peeterjoot on February 22, 2011

# Motivation.

I’ve worked through Stokes theorem concepts a couple times on my own now. One of the first times, I was trying to formulate this in a Geometric Algebra context. I had to resort to a tensor decomposition, and pictures, before ending back in the Geometric Algebra description. Later I figured out how to do it entirely with a Geometric Algebra description, and was able to eliminate reliance on the pictures that made the path to generalization to higher dimensional spaces unclear.

It’s my expectation that if one started with a tensor description, the proof entirely in tensor form would not be difficult. This is what I’d like to try this time. To start off, I’ll temporarily use the Geometric Algebra curl expression so I know what my tensor equation starting point will be, but once that starting point is found, we can work entirely in coordinate representation. For somebody who already knows that this is the starting point, all of this initial motivation can be skipped.

# Translating the exterior derivative to a coordinate representation.

Our starting point is a curl, dotted with a volume element of the same grade, so that the result is a scalar

\begin{aligned}\int d^n x \cdot (\nabla \wedge A).\end{aligned} \hspace{\stretch{1}}(2.1)

Here $A$ is a blade of grade $n-1$, and we wedge this with the gradient for the space

\begin{aligned}\nabla \equiv e^i \partial_i = e_i \partial^i,\end{aligned} \hspace{\stretch{1}}(2.2)

where we work with a basis (not necessarily orthonormal) $\{e_i\}$, and the reciprocal frame for that basis $\{e^i\}$ defined by the relation

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.3)

Our coordinates in these basis sets are

\begin{aligned}x \cdot e^i & \equiv x^i \\ x \cdot e_i & \equiv x_i\end{aligned} \hspace{\stretch{1}}(2.4)

so that

\begin{aligned}x = x^i e_i = x_i e^i.\end{aligned} \hspace{\stretch{1}}(2.6)
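A small numerical example may make the reciprocal frame relations concrete. Here is a numpy sketch (the basis chosen is arbitrary) verifying $e^i \cdot e_j = {\delta^i}_j$ and the coordinate expansions above:

```python
import numpy as np

# a non-orthonormal basis for R^2 (rows are the e_i), chosen arbitrarily
e = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# reciprocal frame: rows of (e^{-1})^T satisfy e^i . e_j = delta^i_j
e_recip = np.linalg.inv(e).T
assert np.allclose(e_recip @ e.T, np.eye(2))

# coordinates of a vector in each frame: x^i = x . e^i, x_i = x . e_i
v = np.array([2.0, 3.0])
x_up = e_recip @ v     # x^i
x_lo = e @ v           # x_i

# reconstruction: x = x^i e_i = x_i e^i
assert np.allclose(x_up @ e, v)
assert np.allclose(x_lo @ e_recip, v)
```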

The operator coordinates of the gradient are defined in the usual fashion

\begin{aligned}\partial_i & \equiv \frac{\partial }{\partial {x^i}} \\ \partial^i & \equiv \frac{\partial}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(2.7)

We will define the volume element for the subspace that we are integrating over in terms of an arbitrary parametrization

\begin{aligned}x = x(\alpha_1, \alpha_2, \cdots, \alpha_n)\end{aligned} \hspace{\stretch{1}}(2.9)

The subspace can be considered spanned by the differential elements in each of the respective curves where all but the $i$th parameter are held constant.

\begin{aligned}dx_{\alpha_i}= d\alpha_i \frac{\partial x}{\partial {\alpha_i}}= d\alpha_i \frac{\partial {x^j}}{\partial {\alpha_i}} e_j.\end{aligned} \hspace{\stretch{1}}(2.10)

We assume that the integral is being performed over a region in which none of these differential elements are linearly dependent (i.e. our Jacobian determinant must be non-zero).

The magnitude of the wedge product of all such differential elements provides the volume of the parallelogram, or parallelepiped (or higher dimensional analogue), and is

\begin{aligned}d^n x=d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial x}{\partial {\alpha_n}} \wedge\cdots \wedge\frac{\partial x}{\partial {\alpha_2}}\wedge\frac{\partial x}{\partial {\alpha_1}}.\end{aligned} \hspace{\stretch{1}}(2.11)

The volume element is an oriented quantity, and may be adjusted with an arbitrary sign (or equivalently an arbitrary permutation of the differential elements in the wedge product), and we’ll see that it is convenient, for the translation to tensor form, to express these in reversed order.

Let’s write

\begin{aligned}d^n \alpha = d\alpha_1 d\alpha_2 \cdots d\alpha_n,\end{aligned} \hspace{\stretch{1}}(2.12)

so that our volume element in coordinate form is

\begin{aligned}d^n x = d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ).\end{aligned} \hspace{\stretch{1}}(2.13)

Our curl will also be a grade $n$ blade. We write for the grade $n-1$ blade

\begin{aligned}A = A_{b c \cdots d} (e^b \wedge e^c \wedge \cdots e^d),\end{aligned} \hspace{\stretch{1}}(2.14)

where $A_{b c \cdots d}$ is antisymmetric (i.e. $A = a_1 \wedge a_2 \wedge \cdots a_{n-1}$ for some set of vectors $a_i, i \in 1 .. n-1$).

With our gradient in coordinate form

\begin{aligned}\nabla = e^a \partial_a,\end{aligned} \hspace{\stretch{1}}(2.15)

the curl is then

\begin{aligned}\nabla \wedge A = \partial_a A_{b c \cdots d} (e^a \wedge e^b \wedge e^c \wedge \cdots e^d).\end{aligned} \hspace{\stretch{1}}(2.16)

The differential form for our integral can now be computed by expanding out the dot product. We want

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)=((((( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ) \cdot e^a ) \cdot e^b ) \cdot e^c ) \cdot \cdots ) \cdot e^d.\end{aligned} \hspace{\stretch{1}}(2.17)

Evaluation of the interior dot products introduces the intrinsic antisymmetry required for Stokes theorem. For example, with

\begin{aligned}( e_n \wedge e_{n-1} \wedge \cdots \wedge e_2 \wedge e_1 ) \cdot e^a & =( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_2 ) (e_1 \cdot e^a) \\ & \quad -( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_1 ) (e_2 \cdot e^a) \\ & \quad +( e_n \wedge e_{n-1} \wedge \cdots \wedge e_4 \wedge e_2 \wedge e_1 ) (e_3 \cdot e^a) \\ & \quad \cdots \\ & \quad + (-1)^{n-1}( e_{n-1} \wedge e_{n-2} \wedge \cdots \wedge e_2 \wedge e_1 ) (e_n \cdot e^a)\end{aligned}

Since $e_i \cdot e^a = {\delta_i}^a$ our end result is a completely antisymmetric set of permutations of all the deltas

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)={\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l,\end{aligned} \hspace{\stretch{1}}(2.18)

and the curl integral takes its coordinate form

\begin{aligned}\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}{\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l.\end{aligned} \hspace{\stretch{1}}(2.19)

One final contraction of the paired indexes gives us our Stokes integral in its coordinate representation

\begin{aligned}\boxed{\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}}\end{aligned} \hspace{\stretch{1}}(2.20)

We now have a starting point that is free of any of the abstraction of Geometric Algebra or differential forms. We can identify the products of partials here as components of a scalar hypervolume element (possibly signed depending on the orientation of the parametrization)

\begin{aligned}d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\end{aligned} \hspace{\stretch{1}}(2.21)

This is also a specific computation recipe for these hypervolume components, something that may not be obvious when we allow for general metrics for the space. We are also allowing for non-orthonormal coordinate representations, and arbitrary parametrization of the subspace that we are integrating over (our integral need not have the same dimension as the underlying vector space).

Observe that when the number of parameters equals the dimension of the space, we can write out the antisymmetric term utilizing the determinant of the Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}= \epsilon^{a b \cdots d} {\left\lvert{ \frac{\partial(x^1, x^2, \cdots x^n)}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.22)

When the dimension of the space $n$ is greater than the number of parameters for the integration hypervolume in question, the antisymmetric sum of partials is still the determinant of a Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a_1}}}{\partial {\alpha_1}}\frac{\partial {x^{a_2}}}{\partial {\alpha_2}}\cdots \frac{\partial {x^{a_{n-1}}}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{a_n]}}}{\partial {\alpha_n}}= {\left\lvert{ \frac{\partial(x^{a_1}, x^{a_2}, \cdots x^{a_n})}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.23)

however, we will have one such Jacobian for each unique choice of indexes.
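For a concrete illustration of 2.22, a polar parametrization of the plane produces the familiar area scaling factor. A quick sympy sketch:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# polar parametrization of R^2: x^1 = r cos(theta), x^2 = r sin(theta)
x1 = r * sp.cos(theta)
x2 = r * sp.sin(theta)

# antisymmetrized product of partials for the index pair (a, b) = (1, 2)
antisym = (sp.diff(x1, r) * sp.diff(x2, theta)
           - sp.diff(x2, r) * sp.diff(x1, theta))

J = sp.Matrix([[sp.diff(x1, r), sp.diff(x1, theta)],
               [sp.diff(x2, r), sp.diff(x2, theta)]])

assert sp.simplify(antisym - J.det()) == 0
assert sp.simplify(antisym) == r      # the familiar polar area factor
```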

# The Stokes work starts here.

The task is to relate our integral to the boundary of this volume, coming up with an explicit recipe for the description of that bounding surface, and determining the exact form of the reduced rank integral. This job is essentially to reduce the ranks of the tensors that are being contracted in our Stokes integral. With the derivative applied to our rank $n-1$ antisymmetric tensor $A_{b c \cdots d}$, we can apply the chain rule and examine the permutations so that this can be rewritten as a contraction of $A$ itself with a set of rank $n-1$ surface area elements.

\begin{aligned}\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d} = ?\end{aligned} \hspace{\stretch{1}}(3.24)

Now, while the setup here has been completely general, this task is motivated by study of special relativity, where there is a requirement to work in a four dimensional space. Because of that explicit goal, I’m not going to attempt to formulate this in a completely abstract fashion. That task is really one of introducing sufficiently general notation. Instead, I’m going to proceed with a simpleton approach, and do this explicitly, and repeatedly for each of the rank 1, rank 2, and rank 3 tensor cases. It will be clear how this all generalizes by doing so, should one wish to work in still higher dimensional spaces.

## The rank 1 tensor case.

The equation we are working with for this vector case is

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) =\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)\end{aligned} \hspace{\stretch{1}}(3.25)

Expanding out the antisymmetric partials we have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}} & =\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\frac{\partial {x^{b}}}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}},\end{aligned}

with which we can reduce the integral to

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) & =\int \left( d{\alpha_1}\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d{\alpha_2}\frac{\partial {x^{a}}}{\partial {\alpha_2}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ & =\int \left( d\alpha_1 \frac{\partial {A_b}}{\partial {\alpha_1}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d\alpha_2 \frac{\partial {A_b}}{\partial {\alpha_2}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ \end{aligned}

Now, if it happens that

\begin{aligned}\frac{\partial}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}} = \frac{\partial}{\partial {\alpha_2}}\frac{\partial {x^{a}}}{\partial {\alpha_1}} = 0\end{aligned} \hspace{\stretch{1}}(3.26)

then each of the individual integrals in $d\alpha_1$ and $d\alpha_2$ can be carried out. In that case, without any real loss of generality we can designate the integration bounds over the unit parametrization space square $\alpha_i \in [0,1]$, allowing this integral to be expressed as

\begin{aligned}\begin{aligned} & \int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2) \\ & =\int \left( A_b(1, \alpha_2) - A_b(0, \alpha_2) \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\int \left( A_b(\alpha_1, 1) - A_b(\alpha_1, 0) \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.27)

It’s also fairly common to see ${\left.{{A}}\right\vert}_{{\partial \alpha_i}}$ used to designate evaluation of this first integral on the boundary, and using this we write

\begin{aligned}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned} \hspace{\stretch{1}}(3.28)

Also note that since we are summing over all $a,b$, and have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}=-\frac{\partial {x^{[b}}}{\partial {\alpha_1}}\frac{\partial {x^{a]}}}{\partial {\alpha_2}},\end{aligned} \hspace{\stretch{1}}(3.29)

we can write this summing over all unique pairs of $a,b$ instead, which eliminates a small bit of redundancy (especially once the dimension of the vector space gets higher)

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.}\end{aligned} \hspace{\stretch{1}}(3.30)

In this form we have recovered the original geometric structure, with components of the curl multiplied by the component of the area element that shares the orientation and direction of that portion of the curl bivector.

This form of the result with evaluation at the boundaries in this form, assumed that ${\partial {x^a}}/{\partial {\alpha_1}}$ was not a function of $\alpha_2$ and ${\partial {x^a}}/{\partial {\alpha_2}}$ was not a function of $\alpha_1$. When that is not the case, we appear to have a less pretty result

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}}}\end{aligned} \hspace{\stretch{1}}(3.31)

Can this be reduced any further in the general case? Having seen the statements of Stokes theorem in its differential forms formulation, I initially expected the answer was yes, and only when I got to evaluating my $\mathbb{R}^{4}$ spacetime example below did I realize that the differential displacements for the parallelogram that constituted the area element were functions of both parameters. Perhaps this detail is there in the differential forms version of the general Stokes theorem too, but is just hidden in a tricky fashion by the compact notation.

### Sanity check: $\mathbb{R}^{2}$ case in rectangular coordinates.

For $x^1 = x, x^2 = y$, and $\alpha_1 = x, \alpha_2 = y$, we have for the LHS

\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left(\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}-\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}\right)\partial_1 A_{2}+\left(\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}-\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}\right)\partial_2 A_{1} \\ & =\int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right)\end{aligned}

Our RHS expands to

\begin{aligned} & \int_{y=y_0}^{y_1} dy\left(\left( A_1(x_1, y) - A_1(x_0, y) \right)\frac{\partial {x^{1}}}{\partial y}+\left( A_2(x_1, y) - A_2(x_0, y) \right)\frac{\partial {x^{2}}}{\partial y}\right) \\ & \qquad-\int_{x=x_0}^{x_1} dx\left(\left( A_1(x, y_1) - A_1(x, y_0) \right)\frac{\partial {x^{1}}}{\partial x}+\left( A_2(x, y_1) - A_2(x, y_0) \right)\frac{\partial {x^{2}}}{\partial x}\right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}

We have

\begin{aligned}\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.32)

The RHS is just a positively oriented line integral around the rectangle of integration

\begin{aligned}\int A_x(x, y_0) \hat{\mathbf{x}} \cdot ( \hat{\mathbf{x}} dx )+ A_y(x_1, y) \hat{\mathbf{y}} \cdot ( \hat{\mathbf{y}} dy )+ A_x(x, y_1) \hat{\mathbf{x}} \cdot ( -\hat{\mathbf{x}} dx )+ A_y(x_0, y) \hat{\mathbf{y}} \cdot ( -\hat{\mathbf{y}} dy )= \oint \mathbf{A} \cdot d\mathbf{r}.\end{aligned} \hspace{\stretch{1}}(3.33)

This special case is also recognizable as Green’s theorem, evident with the substitution $A_x = P$, $A_y = Q$, which gives us

\begin{aligned}\int_A dx dy \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)=\oint_C P dx + Q dy.\end{aligned} \hspace{\stretch{1}}(3.34)

Strictly speaking, Green’s theorem is more general, since it applies to integration regions more general than rectangles, but that generalization can be arrived at easily enough, once the region is broken down into adjoining elementary regions.
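Green's theorem is easy to verify for a specific field on a rectangle. A sympy sketch (the field components and rectangle bounds are arbitrary choices of mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# a concrete field on the rectangle [0,1] x [0,2]
P = -y**2        # A_x
Q = x * y        # A_y
x0, x1, y0, y1 = 0, 1, 0, 2

# area integral of (dQ/dx - dP/dy)
lhs = sp.integrate(sp.diff(Q, x) - sp.diff(P, y), (x, x0, x1), (y, y0, y1))

# positively oriented boundary line integral of P dx + Q dy
rhs = (sp.integrate(P.subs(y, y0), (x, x0, x1))      # bottom, left to right
       + sp.integrate(Q.subs(x, x1), (y, y0, y1))    # right side, upward
       - sp.integrate(P.subs(y, y1), (x, x0, x1))    # top, right to left
       - sp.integrate(Q.subs(x, x0), (y, y0, y1)))   # left side, downward

assert sp.simplify(lhs - rhs) == 0   # both sides evaluate to 6
```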

### Sanity check: $\mathbb{R}^{3}$ case in rectangular coordinates.

It is expected that we can recover the classical Kelvin-Stokes theorem if we use rectangular coordinates in $\mathbb{R}^{3}$. However, we see that we have to consider three different parametrizations. If one picks rectangular parametrizations $(\alpha_1, \alpha_2) = \{ (x,y), (y,z), (z,x) \}$ in sequence, in each case holding the value of the additional coordinate fixed, we get three independent Green's theorem like relations

\begin{aligned}\int_A dx dy \left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) & = \oint_C A_x dx + A_y dy \\ \int_A dy dz \left( \frac{\partial {A_z}}{\partial y} - \frac{\partial {A_y}}{\partial z} \right) & = \oint_C A_y dy + A_z dz \\ \int_A dz dx \left( \frac{\partial {A_x}}{\partial z} - \frac{\partial {A_z}}{\partial x} \right) & = \oint_C A_z dz + A_x dx.\end{aligned} \hspace{\stretch{1}}(3.35)

Note that we cannot just add these to form a complete integral $\oint \mathbf{A} \cdot d\mathbf{r}$ since the curves all have different orientations. To recover the $\mathbb{R}^{3}$ Stokes theorem in rectangular coordinates, it appears that we’d have to consider a Riemann sum of triangular surface elements, and relate that to the loops over each of the surface elements. In that limiting argument, only the boundary of the complete surface would contribute to the RHS of the relation.

All that said, we shouldn’t actually have to go to all this work. Instead we can stick to a two variable parametrization of the surface, and use 3.30 directly.

### An illustration for a $\mathbb{R}^{4}$ spacetime surface.

Suppose we have a particle trajectory defined by an active Lorentz transformation from an initial spacetime point

\begin{aligned}x^i = O^{ij} x_j(0) = O^{ij} g_{jk} x^k(0) = {O^{i}}_k x^k(0)\end{aligned} \hspace{\stretch{1}}(3.38)

Let the Lorentz transformation be formed by a composition of boost and rotation

\begin{aligned}{O^i}_j & = {L^i}_k {R^k}_j \\ {L^i}_j & =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ {R^i}_j & =\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.39)
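As a quick numeric sanity check that this composition really is a Lorentz transformation, we can verify that $O = L R$ preserves the $(+,-,-,-)$ metric, i.e. $O^\text{T} g O = g$. A minimal sketch (the parameter values $0.7$ and $1.1$ are arbitrary spot-check choices):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def boost(a):   # the boost {L^i}_j
    ch, sh = math.cosh(a), math.sinh(a)
    return [[ch, -sh, 0, 0], [-sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rot(t):     # the rotation {R^i}_j, acting in the x-y plane
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

O = matmul(boost(0.7), rot(1.1))
gcheck = matmul(transpose(O), matmul(g, O))   # should reproduce g
```

The check works for any parameter values, since $\cosh^2\alpha - \sinh^2\alpha = 1$ and the rotation block is orthogonal.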

Different rates of evolution of $\alpha$ and $\theta$ define different trajectories, and taken together we have a surface described by the two parameters

\begin{aligned}x^i(\alpha, \theta) = {L^i}_k {R^k}_j x^j(0, 0).\end{aligned} \hspace{\stretch{1}}(3.42)

We can compute displacements along the trajectories formed by keeping either $\alpha$ or $\theta$ fixed and varying the other. Those are

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} d\alpha & = \frac{d{L^i}_k}{d\alpha} {R^k}_j x^j(0, 0) \\ \frac{\partial {x^i}}{\partial {\theta}} d\theta & = {L^i}_k \frac{d{R^k}_j}{d\theta} x^j(0, 0) .\end{aligned} \hspace{\stretch{1}}(3.43)

Writing $y^i = x^i(0,0)$ the computation of the partials above yields

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}\sinh\alpha y^0 -\cosh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ -\cosh\alpha y^0 +\sinh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}-\sinh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ \cosh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ -(\cos\theta y^1 + \sin\theta y^2 ) \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.45)

Different choices of the initial point $y^i$ yield different surfaces, but we can get the idea by picking a simple starting point $y^i = (0, 1, 0, 0)$ leaving

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}-\cosh\alpha \cos\theta \\ \sinh\alpha \cos\theta \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}\sinh\alpha \sin\theta \\ -\cosh\alpha \sin\theta \\ -\cos\theta \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.47)

We can now compute our Jacobian determinants

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha}} \frac{\partial {x^{b]}}}{\partial {\theta}}={\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.49)

Those are

\begin{aligned}{\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} & = \cos\theta \sin\theta \\ {\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = \cosh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^0, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = -\sinh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^1, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^2, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0\end{aligned} \hspace{\stretch{1}}(3.50)
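These Jacobians are easy to verify numerically. A sketch, using central finite differences on the explicit parametrization $x(\alpha, \theta) = L R \,(0,1,0,0)$ (the evaluation point is an arbitrary choice):

```python
import math

def x(a, t):
    # x(alpha, theta) = L(alpha) R(theta) y for the initial point y = (0, 1, 0, 0)
    return [-math.sinh(a) * math.cos(t),
            math.cosh(a) * math.cos(t),
            -math.sin(t),
            0.0]

h = 1e-6
a, t = 0.5, 0.8   # an arbitrary point on the surface

# central finite difference partials with respect to alpha and theta
dxa = [(p - m) / (2 * h) for p, m in zip(x(a + h, t), x(a - h, t))]
dxt = [(p - m) / (2 * h) for p, m in zip(x(a, t + h), x(a, t - h))]

def J(i, j):
    # two by two Jacobian determinant d(x^i, x^j)/d(alpha, theta)
    return dxa[i] * dxt[j] - dxa[j] * dxt[i]
```

Comparing `J(0, 1)`, `J(0, 2)`, `J(1, 2)` against the closed forms above (and the remaining three against zero) confirms 3.50.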

Using this, let’s see a specific 4D example in spacetime for the integral of the curl of some four vector $A^i$, enumerating all the non-zero components of 3.31 for this particular spacetime surface

\begin{aligned}\sum_{a < b}\int d{\alpha} d{\theta}{\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}}\end{aligned} \hspace{\stretch{1}}(3.56)

The LHS is thus found to be

\begin{aligned} & \int d{\alpha} d{\theta}\left({\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+{\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{2} -\partial_2 A_{0} \right)+{\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right) \\ & =\int d{\alpha} d{\theta}\left(\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right)-\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right)\end{aligned}

On the RHS we have

\begin{aligned}\int d\theta\int d\alpha & \frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}} \\ & =\int d\theta\int d\alpha\begin{bmatrix}\sinh\alpha \sin\theta & -\cosh\alpha \sin\theta & -\cos\theta & 0\end{bmatrix}\frac{\partial}{\partial {\alpha}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ & -\int d\theta\int d\alpha\begin{bmatrix}-\cosh\alpha \cos\theta & \sinh\alpha \cos\theta & 0 & 0\end{bmatrix}\frac{\partial}{\partial {\theta}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ \end{aligned}

\begin{aligned}\begin{aligned} & \int d{\alpha} d{\theta}\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right) \\ & \qquad+\int d{\alpha} d{\theta}\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right) \\ & \qquad-\int d{\alpha} d{\theta}\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right) \\ & =\int d\theta \sin\theta \int d\alpha \left( \sinh\alpha \frac{\partial {A_0}}{\partial {\alpha}} - \cosh\alpha \frac{\partial {A_1}}{\partial {\alpha}} \right) \\ & \qquad-\int d\theta \cos\theta \int d\alpha \frac{\partial {A_2}}{\partial {\alpha}} \\ & \qquad+\int d\alpha \cosh\alpha \int d\theta \cos\theta \frac{\partial {A_0}}{\partial {\theta}} \\ & \qquad-\int d\alpha \sinh\alpha \int d\theta \cos\theta \frac{\partial {A_1}}{\partial {\theta}}\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.57)

Because of the complexity of the surface, only the second term on the RHS has the “evaluate on the boundary” characteristic that may have been expected from a Green’s theorem like line integral.
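As a cross check on 3.56 itself, note that its two integrands are related pointwise by the chain rule. That is easy to verify numerically on this surface, using an arbitrarily chosen (hypothetical) polynomial field $A_i$ and an arbitrary evaluation point:

```python
import math

def x(a, t):   # the spacetime surface, with y = (0, 1, 0, 0)
    return [-math.sinh(a) * math.cos(t),
            math.cosh(a) * math.cos(t),
            -math.sin(t),
            0.0]

def A(p):      # an arbitrary hypothetical covariant field A_i(x)
    return [p[1] * p[2], p[0] * p[0], p[0] * p[1], 0.0]

h = 1e-5
a, t = 0.4, 0.9
p = x(a, t)

def dA(i, j):  # partial of A_j with respect to x^i, by central differences
    q, r = list(p), list(p)
    q[i] += h
    r[i] -= h
    return (A(q)[j] - A(r)[j]) / (2 * h)

dxa = [(u - v) / (2 * h) for u, v in zip(x(a + h, t), x(a - h, t))]
dxt = [(u - v) / (2 * h) for u, v in zip(x(a, t + h), x(a, t - h))]

# LHS integrand: sum over i < j of the Jacobian times the curl components
lhs = sum((dxa[i] * dxt[j] - dxa[j] * dxt[i]) * (dA(i, j) - dA(j, i))
          for i in range(4) for j in range(i + 1, 4))

# RHS integrand: derivatives of A_b along the surface parameters
Asurf = lambda a_, t_: A(x(a_, t_))
dAa = [(u - v) / (2 * h) for u, v in zip(Asurf(a + h, t), Asurf(a - h, t))]
dAt = [(u - v) / (2 * h) for u, v in zip(Asurf(a, t + h), Asurf(a, t - h))]
rhs = sum(dAa[b] * dxt[b] - dAt[b] * dxa[b] for b in range(4))
```

The two integrands agree to finite difference precision, as the chain rule contraction demands.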

It is also worthwhile to point out that we have had to be very careful with upper and lower indexes all along (and have done so with the expectation that our application would include the special relativity case where our metric determinant is minus one.) Because we worked with upper indexes for the area element, we had to work with lower indexes for the four vector and the components of the gradient that we included in our curl evaluation.

## The rank 2 tensor case.

Let’s consider briefly the terms in the contraction sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc}\end{aligned} \hspace{\stretch{1}}(3.58)

For any choice of a set of three distinct indexes $(a, b, c) \in \{ (0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3) \}$, we have $6 = 3!$ ways of permuting those indexes in this sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\sum_{a < b < c} {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} + {\left\lvert{ \frac{\partial(x^a, x^c, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{cb} + {\left\lvert{ \frac{\partial(x^b, x^c, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ca} \\ & \qquad + {\left\lvert{ \frac{\partial(x^b, x^a, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ac} + {\left\lvert{ \frac{\partial(x^c, x^a, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ab} + {\left\lvert{ \frac{\partial(x^c, x^b, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ba} \\ & =2!\sum_{a < b < c}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right)\end{aligned}

Observe that we have no sign alternation like we had in the vector (rank 1 tensor) case. That sign alternation in this summation expansion appears to occur only for odd grade tensors.

Returning to the problem, we wish to expand the determinant in order to apply a chain rule contraction as done in the rank 1 case. This can be done along any of the rows or columns of the determinant, and we can write any of

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^b}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^b}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^b}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^c}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^c}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^c}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}
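These are just the ordinary Laplace (cofactor) expansions of the determinant along a row, with alternating signs. A minimal numeric spot check of the first of them, for an arbitrary random matrix:

```python
import itertools
import random

def det(M):
    # determinant by signed permutation sum
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign, prod = 1, 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod
    return total

def minor(M, row, col):
    return [[M[r][c] for c in range(len(M)) if c != col]
            for r in range(len(M)) if r != row]

random.seed(1)
M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
# Laplace expansion along the first row: cofactor signs +, -, +
expansion = sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(3))
```

Here `expansion` reproduces `det(M)` exactly (up to float roundoff), which is the content of the row expansion used above.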

This allows the contraction of the index $a$, eliminating it from the result

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\left( \frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \right) \frac{\partial {A_{bc}}}{\partial {x^a}} \\ & =\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =2!\sum_{b < c}\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

Dividing out the common $2!$ terms, we can summarize this result as

\begin{aligned}\boxed{\begin{aligned}\sum_{a < b < c} & \int d\alpha_1 d\alpha_2 d\alpha_3 {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right) \\ & =\sum_{b < c}\int d\alpha_2 d\alpha_3 \int d\alpha_1\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert} \\ & -\sum_{b < c}\int d\alpha_1 d\alpha_3 \int d\alpha_2\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert} \\ & +\sum_{b < c}\int d\alpha_1 d\alpha_2 \int d\alpha_3\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert}\end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.59)

In general, as observed in the spacetime surface example above, the two index Jacobians can be functions of the integration variable first being eliminated. In the special cases where this is not the case (such as the $\mathbb{R}^{3}$ case with rectangular coordinates), then we are left with just the evaluation of the tensor element $A_{bc}$ on the boundaries of the respective integrals.

## The rank 3 tensor case.

The key step is once again just a determinant expansion

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\\ \end{aligned}
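Note that the cofactor signs in a four variable row expansion alternate $(+, -, +, -)$, exactly as for any determinant. A quick random-matrix check of that sign pattern:

```python
import itertools
import random

def det(M):
    # determinant by signed permutation sum
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign, prod = 1, 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod
    return total

def minor(M, row, col):
    return [[M[r][c] for c in range(len(M)) if c != col]
            for r in range(len(M)) if r != row]

random.seed(3)
M = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
# expansion along the first row: cofactor signs (-1)^j, i.e. +, -, +, -
expansion = sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(4))
```

With any other choice of signs for the four terms the expansion fails to reproduce the determinant.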

so that the sum can be reduced from a four index contraction to a three index contraction

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} & =\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}

That’s the essence of the theorem, but we can play the same combinatorial reduction games to reduce the built in redundancy in the result

\begin{aligned}\boxed{\begin{aligned}\frac{1}{{3!}} & \int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} \\ & =\sum_{a < b < c < d}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \left( \partial_a A_{bcd} -\partial_b A_{cda} +\partial_c A_{dab} -\partial_d A_{abc} \right) \\ & =\qquad \sum_{b < c < d}\int d\alpha_2 d\alpha_3 d\alpha_4 \int d\alpha_1\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_3 d\alpha_4 \int d\alpha_2\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad +\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_4 \int d\alpha_3\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_3 \int d\alpha_4\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \\ \end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.60)

## A note on the four divergence.

Our four divergence integral has the following form

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A^a\end{aligned} \hspace{\stretch{1}}(3.61)

We can relate this to the rank 3 Stokes theorem with a duality transformation, multiplying with a pseudoscalar

\begin{aligned}A^a = \epsilon^{abcd} T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.62)

where $T_{bcd}$ can also be related back to the vector by contracting with the Levi-Civita tensor. With $\epsilon^{0123} = 1$ (so that $\epsilon_{0123} = -1$ for our $(+,-,-,-)$ metric), and using $\epsilon^{a e f g} \epsilon_{a b c d} = -\delta^{efg}_{bcd}$, that contraction is

\begin{aligned}A^a \epsilon_{a b c d} = \epsilon^{a e f g} \epsilon_{a b c d} T_{e f g} = -3! \, T_{bcd}.\end{aligned} \hspace{\stretch{1}}(3.63)
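Treating the $\epsilon$'s as permutation symbols, this contraction is easy to verify numerically. The sketch below assumes the convention $\epsilon^{0123} = 1$, $\epsilon_{0123} = -1$, and builds an arbitrary fully antisymmetric $T_{bcd}$ from random values:

```python
import itertools
import random

def perm_sign(seq):
    # sign of a permutation of distinct values; 0 if any value repeats
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] == seq[j]:
                return 0
            if seq[i] > seq[j]:
                sign = -sign
    return sign

eps_up = perm_sign                      # epsilon^{abcd}, with eps^{0123} = +1
eps_down = lambda s: -perm_sign(s)      # epsilon_{abcd} = -epsilon^{abcd} in (+,-,-,-)

random.seed(4)
# a fully antisymmetric rank 3 tensor T_{bcd} with random independent components
T = [[[0.0] * 4 for _ in range(4)] for _ in range(4)]
for combo in itertools.combinations(range(4), 3):
    v = random.uniform(-1, 1)
    for p in itertools.permutations(combo):
        T[p[0]][p[1]][p[2]] = perm_sign(p) * v

# the duality transformation A^a = eps^{abcd} T_{bcd}
A = [sum(eps_up((a, b, c, d)) * T[b][c][d]
         for b in range(4) for c in range(4) for d in range(4))
     for a in range(4)]

# contracting back: A^a eps_{abcd} should give -3! T_{bcd}
S = [[[sum(A[a] * eps_down((a, b, c, d)) for a in range(4))
       for d in range(4)] for c in range(4)] for b in range(4)]
```

Every component of `S` comes out as $-6$ times the corresponding component of `T`, confirming the $-3!$ factor under these conventions.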

The divergence integral in terms of the rank 3 tensor is

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a \epsilon^{abcd} T_{bcd}=\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.64)

and we are free to perform the same Stokes reduction of the integral. Of course, this is particularly simple in rectangular coordinates. I still have to think through one subtlety that I feel may be important. We could have started off with an integral of the following form

\begin{aligned}\int dx^1 dx^2 dx^3 dx^4 \partial_a A^a,\end{aligned} \hspace{\stretch{1}}(3.65)

and I think this differs slightly from our starting point, because it has none of the antisymmetric structure of the signed four volume element that we have used. We do not take the absolute value of our Jacobians anywhere.