

## Stokes theorem in Geometric algebra

Posted by peeterjoot on May 17, 2014

[Click here for a PDF of this post with nicer formatting  (especially since my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Understanding how to apply Stokes theorem to higher dimensional spaces, to non-Euclidean metrics, and with curvilinear coordinates has been a long-standing goal.

A traditional answer to these questions can be found in the formalism of differential forms, as covered for example in [2], and [8]. However, both of those texts, despite their small size, are intensely scary. I also found it counterintuitive to have to express all physical quantities as forms, since there are many times when we have no pressing desire to integrate them.

Later I encountered Denker’s straight wire treatment [1], which states that the geometric algebra formulation of Stokes theorem has the form

\begin{aligned}\int_S \nabla \wedge F = \int_{\partial S} F\end{aligned} \hspace{\stretch{1}}(1.0.1)

This is simple enough looking, but there are some important details left out. In particular the grades do not match, so there must be some sort of implied projection or dot product operations too. We also need to understand how to express the hypervolume and hypersurfaces when evaluating these integrals, especially when we want to use curvilinear coordinates.

I’d attempted to puzzle through these details previously. A collection of these attempts, to be removed from my collection of geometric algebra notes, can be found in [4]. I’d recently reviewed all of these and wrote a compact synopsis [5] of all those notes, but in the process of doing so, I realized there were a couple of fundamental problems with the approach I had used.

One detail that I had failed to understand was the requirement to treat an infinitesimal region in the proof, then sum over such regions to express the boundary integral. Understanding that the boundary integral form and its dot product are both evaluated only at the end points of the integration region is an important detail that follows from such an argument (as used in the proof of Stokes theorem for a 3D Cartesian space in [7].)

I also realized that my previous attempts could only work for the special cases where the dimension of the integration volume also equaled the dimension of the vector space. The key to resolving this issue is the concept of the tangent space, and an understanding of how to express the projection of the gradient onto the tangent space. These concepts are covered thoroughly in [6], which also introduces Stokes theorem as a special case of a more fundamental theorem for integration of geometric algebraic objects. My objective, for now, is still just to understand the generalization of Stokes theorem, and I will leave the fundamental theorem of geometric calculus for later.

Now that these details are understood, the purpose of these notes is to detail the Geometric algebra form of Stokes theorem, covering its generalization to higher dimensional spaces and non-Euclidean metrics (especially those used for special relativity and electromagnetism), and understanding how to properly deal with curvilinear coordinates. This generalization has the form

## Theorem 1. Stokes’ Theorem

For blades $F \in \bigwedge^{s}$, and a volume element $d^k \mathbf{x}$, with $s < k$,

\begin{aligned}\int_V d^k \mathbf{x} \cdot (\boldsymbol{\partial} \wedge F) = \int_{\partial V} d^{k-1} \mathbf{x} \cdot F.\end{aligned}

Here the volume integral is over a $k$ dimensional surface (manifold), $\boldsymbol{\partial}$ is the projection of the gradient onto the tangent space of the manifold, and $\partial V$ indicates integration over the boundary of $V$.

It takes some work to give this more concrete meaning. I will attempt to do so in a gradual fashion, and provide a number of examples that illustrate some of the relevant details.

# Basic notation

A finite dimensional vector space, not necessarily Euclidean, with basis $\left\{ {\mathbf{e}_1, \mathbf{e}_2, \cdots} \right\}$ will be assumed to be the generator of the geometric algebra. A dual or reciprocal basis $\left\{ {\mathbf{e}^1, \mathbf{e}^2, \cdots} \right\}$ for this basis can be calculated, defined by the property

\begin{aligned}\mathbf{e}_i \cdot \mathbf{e}^j = {\delta_i}^j.\end{aligned} \hspace{\stretch{1}}(1.1.2)

This is a Euclidean space when $\mathbf{e}_i = \mathbf{e}^i, \forall i$.
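The reciprocal frame construction is easy to check numerically. As a small sketch of my own (not from the post's references), represent the basis vectors as the rows of a matrix $B$, and the metric as a symmetric matrix $G$ so that $\mathbf{a} \cdot \mathbf{b} = a G b^\mathrm{T}$; the reciprocal vectors are then the rows of $\left( {B G} \right)^{-\mathrm{T}}$:

```python
import numpy as np

# Numerical sketch (my own illustration): for a basis {e_i} stored as the
# rows of B, and a metric encoded by the symmetric matrix G (a . b = a G b^T),
# the reciprocal basis rows R must satisfy  B G R^T = I,  i.e.  R = inv(B G)^T.

def reciprocal_frame(B, G):
    """Rows of the returned array are the reciprocal vectors e^j."""
    return np.linalg.inv(B @ G).T

# A skewed (non-orthogonal) basis for a 3D space.
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Euclidean metric: the reciprocal frame satisfies e_i . e^j = delta_i^j.
G_euclid = np.eye(3)
R_euclid = reciprocal_frame(B, G_euclid)
ok_euclid = np.allclose(B @ G_euclid @ R_euclid.T, np.eye(3))

# Non-Euclidean metric (one direction of negative signature): the same
# construction still produces a reciprocal frame.
G_mink = np.diag([1.0, 1.0, -1.0])
R_mink = reciprocal_frame(B, G_mink)
ok_mink = np.allclose(B @ G_mink @ R_mink.T, np.eye(3))

print(ok_euclid, ok_mink)
```

For an orthonormal basis with Euclidean metric this reduces to $R = B$, recovering the $\mathbf{e}_i = \mathbf{e}^i$ statement above.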

To select from a multivector $A$ the grade $k$ portion, say $A_k$ we write

\begin{aligned}A_k = {\left\langle A \right\rangle}_{k}.\end{aligned} \hspace{\stretch{1}}(1.1.3)

The scalar portion of a multivector $A$ will be written as

\begin{aligned}{\left\langle A \right\rangle}_{0} \equiv \left\langle A \right\rangle.\end{aligned} \hspace{\stretch{1}}(1.1.4)

The grade selection operators can be used to define the outer and inner products. For blades $U$ and $V$ of grades $r$ and $s$ respectively, these are

\begin{aligned}{\left\langle U V \right\rangle}_{{r + s}} \equiv U \wedge V\end{aligned} \hspace{\stretch{1}}(1.0.5.5)

\begin{aligned}{\left\langle U V \right\rangle}_{{\left\lvert {r - s} \right\rvert}} \equiv U \cdot V.\end{aligned} \hspace{\stretch{1}}(1.0.5.5)

Written out explicitly for odd grade blades $A$ (vector, trivector, …), and vector $\mathbf{a}$, the dot and wedge products are respectively

\begin{aligned}\begin{aligned}\mathbf{a} \wedge A &= \frac{1}{2} (\mathbf{a} A - A \mathbf{a}) \\ \mathbf{a} \cdot A &= \frac{1}{2} (\mathbf{a} A + A \mathbf{a}),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.6)

whereas for even grade blades $A$ (bivector, four-vector, …) these are

\begin{aligned}\begin{aligned}\mathbf{a} \wedge A &= \frac{1}{2} (\mathbf{a} A + A \mathbf{a}) \\ \mathbf{a} \cdot A &= \frac{1}{2} (\mathbf{a} A - A \mathbf{a}).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7)

It will be useful to employ the cyclic scalar reordering identity for the scalar selection operator

\begin{aligned}\left\langle{{\mathbf{a} \mathbf{b} \mathbf{c}}}\right\rangle= \left\langle{{\mathbf{b} \mathbf{c} \mathbf{a}}}\right\rangle= \left\langle{{\mathbf{c} \mathbf{a} \mathbf{b}}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(1.0.8)
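Identities like these can be spot checked in a concrete matrix representation. As an aside of my own (not used elsewhere in these notes), $\text{Cl}(3,0)$ is isomorphic to the algebra of $2 \times 2$ complex matrices, with $\mathbf{e}_k \mapsto \sigma_k$ (the Pauli matrices) and scalar grade $\mathrm{Re}\,\mathrm{tr}(M)/2$, so the cyclic identity eq. 1.0.8 reduces to cyclicity of the trace:

```python
import numpy as np

# Sketch (mine): Cl(3,0) represented by 2x2 complex matrices with e_k -> sigma_k.
# The scalar grade of a multivector M is Re(tr(M))/2, so <abc> = <bca> = <cab>
# is just cyclicity of the trace.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):
    """Represent the vector a1 e1 + a2 e2 + a3 e3 as a matrix."""
    return a[0] * s1 + a[1] * s2 + a[2] * s3

def scalar_part(M):
    return np.trace(M).real / 2

rng = np.random.default_rng(0)
a, b, c = (vec(rng.standard_normal(3)) for _ in range(3))

abc = scalar_part(a @ b @ c)
bca = scalar_part(b @ c @ a)
cab = scalar_part(c @ a @ b)
cyclic_ok = np.isclose(abc, bca) and np.isclose(bca, cab)

# The symmetric product of two vectors is the scalar dot product:
# (a b + b a)/2 = (a . b) 1, matching the odd grade case of the dot product.
av, bv = rng.standard_normal(3), rng.standard_normal(3)
sym = (vec(av) @ vec(bv) + vec(bv) @ vec(av)) / 2
dot_ok = np.allclose(sym, np.dot(av, bv) * np.eye(2))

print(cyclic_ok, dot_ok)
```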

For an $N$ dimensional vector space, a product of $N$ orthonormal (up to a sign) unit vectors is referred to as a pseudoscalar for the space, typically denoted by $I$

\begin{aligned}I = \mathbf{e}_1 \mathbf{e}_2 \cdots \mathbf{e}_N.\end{aligned} \hspace{\stretch{1}}(1.0.9)

The pseudoscalar may commute or anticommute with other blades in the space. We may also form a pseudoscalar for a subspace spanned by vectors $\left\{ {\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}} \right\}$ by unit scaling the wedge product $\mathbf{a} \wedge \mathbf{b} \wedge \cdots \wedge \mathbf{c}$ of those vectors.

# Curvilinear coordinates

For our purposes a manifold can be loosely defined as a parameterized surface. For example, a 2D manifold can be considered a surface in an $n$ dimensional vector space, parameterized by two variables

\begin{aligned}\mathbf{x} = \mathbf{x}(a,b) = \mathbf{x}(u^1, u^2).\end{aligned} \hspace{\stretch{1}}(1.0.10)

Note that the indices here do not represent exponentiation. We can construct a basis for the manifold as

\begin{aligned}\mathbf{x}_i = \frac{\partial {\mathbf{x}}}{\partial {u^i}}.\end{aligned} \hspace{\stretch{1}}(1.0.11)

On the manifold we can calculate a reciprocal basis $\left\{ {\mathbf{x}^i} \right\}$, defined by requiring, at each point on the surface

\begin{aligned}\mathbf{x}^i \cdot \mathbf{x}_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(1.0.12)

Associated implicitly with this basis is a curvilinear coordinate representation defined by the projection operation

\begin{aligned}\mathbf{x} = x^i \mathbf{x}_i,\end{aligned} \hspace{\stretch{1}}(1.0.13)

(sums over mixed indices are implied). These coordinates can be calculated by taking dot products with the reciprocal frame vectors

\begin{aligned}\mathbf{x} \cdot \mathbf{x}^i &= x^j \mathbf{x}_j \cdot \mathbf{x}^i \\ &= x^j {\delta_j}^i \\ &= x^i.\end{aligned} \hspace{\stretch{1}}(1.0.13)
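As a sketch of my own (using an arbitrarily chosen paraboloid patch $\mathbf{x}(u,v) = (u, v, u^2 + v^2)$ in a Euclidean 3D space), the reciprocal frame on a manifold can be computed from the inverse of the induced metric $g_{ij} = \mathbf{x}_i \cdot \mathbf{x}_j$, via $\mathbf{x}^i = (g^{-1})^{ij} \mathbf{x}_j$, and reproduces eq. 1.0.12 at each point:

```python
import numpy as np

# Sketch (my own example): tangent vectors x_i = dx/du^i span the tangent
# plane of the surface x(u,v) = (u, v, u^2 + v^2), and the reciprocal frame
# within that plane is x^i = (g^{-1})^{ij} x_j with g_ij = x_i . x_j.

def tangent_frame(u, v):
    # Analytic partial derivatives of x(u, v) = (u, v, u^2 + v^2).
    xu = np.array([1.0, 0.0, 2.0 * u])
    xv = np.array([0.0, 1.0, 2.0 * v])
    return xu, xv

u0, v0 = 0.7, -0.3
xu, xv = tangent_frame(u0, v0)
basis = np.array([xu, xv])          # rows: x_1, x_2
g = basis @ basis.T                 # induced metric g_ij
recip = np.linalg.inv(g) @ basis    # rows: x^1, x^2 (combinations of x_1, x_2,
                                    # so they lie in the tangent plane)

delta_ok = np.allclose(recip @ basis.T, np.eye(2))
print(delta_ok)
```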

In this document all coordinates are with respect to a specific curvilinear basis, and not with respect to the standard basis $\left\{ {\mathbf{e}_i} \right\}$ or its dual basis unless otherwise noted.

Similar to the usual notation for derivatives with respect to the standard basis coordinates we form a lower index partial derivative operator

\begin{aligned}\frac{\partial {}}{\partial {u^i}} \equiv \partial_i,\end{aligned} \hspace{\stretch{1}}(1.0.13)

so that when the complete vector space is spanned by $\left\{ {\mathbf{x}_i} \right\}$ the gradient has the curvilinear representation

\begin{aligned}\boldsymbol{\nabla} = \mathbf{x}^i \frac{\partial {}}{\partial {u^i}}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

This can be motivated by noting that the directional derivative is defined by

\begin{aligned}\mathbf{a} \cdot \boldsymbol{\nabla} f(\mathbf{x}) = \lim_{t \rightarrow 0} \frac{f(\mathbf{x} + t \mathbf{a}) - f(\mathbf{x})}{t}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

When the basis $\left\{ {\mathbf{x}_i} \right\}$ does not span the space, the projection of the gradient onto the tangent space at the point of evaluation is

\begin{aligned}\boldsymbol{\partial} = \mathbf{x}^i \partial_i = \sum_i \mathbf{x}^i \frac{\partial {}}{\partial {u^i}}.\end{aligned} \hspace{\stretch{1}}(1.0.18)

This is called the vector derivative.

See [6] for a more complete discussion of the gradient and vector derivatives in curvilinear coordinates.

# Green’s theorem

Given a two parameter ($u,v$) surface parameterization, the curvilinear coordinate representation of a vector $\mathbf{f}$ has the form

\begin{aligned}\mathbf{f} = f_u \mathbf{x}^u + f_v \mathbf{x}^v + f_\perp \mathbf{x}^\perp.\end{aligned} \hspace{\stretch{1}}(1.19)

We assume that the vector space is of dimension two or greater but otherwise unrestricted, and need not have a Euclidean basis. Here $f_\perp \mathbf{x}^\perp$ denotes the rejection of $\mathbf{f}$ from the tangent space at the point of evaluation. Green’s theorem relates the integral around a closed curve to an “area” integral on that surface

## Theorem 2. Green’s Theorem

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} \mathbf{f} \cdot d\mathbf{l}=\iint \left( {-\frac{\partial {f_u}}{\partial {v}}+\frac{\partial {f_v}}{\partial {u}}} \right)du dv\end{aligned}

Following the arguments used in [7] for Stokes theorem in three dimensions, we first evaluate the loop integral along the differential element of the surface at the point $\mathbf{x}(u_0, v_0)$ evaluated over the range $(du, dv)$, as shown in the infinitesimal loop of fig. 1.1.

Fig 1.1. Infinitesimal loop integral

Over the infinitesimal area, the loop integral decomposes into

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} \mathbf{f} \cdot d\mathbf{l}=\int \mathbf{f} \cdot d\mathbf{x}_1+\int \mathbf{f} \cdot d\mathbf{x}_2+\int \mathbf{f} \cdot d\mathbf{x}_3+\int \mathbf{f} \cdot d\mathbf{x}_4,\end{aligned} \hspace{\stretch{1}}(1.20)

where the differentials along the curve are

\begin{aligned}\begin{aligned}d\mathbf{x}_1 &= {\left.{{ \frac{\partial {\mathbf{x}}}{\partial {u}} }}\right\vert}_{{v = v_0}} du \\ d\mathbf{x}_2 &= {\left.{{ \frac{\partial {\mathbf{x}}}{\partial {v}} }}\right\vert}_{{u = u_0 + du}} dv \\ d\mathbf{x}_3 &= -{\left.{{ \frac{\partial {\mathbf{x}}}{\partial {u}} }}\right\vert}_{{v = v_0 + dv}} du \\ d\mathbf{x}_4 &= -{\left.{{ \frac{\partial {\mathbf{x}}}{\partial {v}} }}\right\vert}_{{u = u_0}} dv.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.21)

It is assumed that the parameterization change $(du, dv)$ is small enough that this loop integral can be considered planar (regardless of the dimension of the vector space). Making use of the fact that $\mathbf{x}^\perp \cdot \mathbf{x}_\alpha = 0$ for $\alpha \in \left\{ {u,v} \right\}$, the loop integral is

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} \mathbf{f} \cdot d\mathbf{l}=\int\left( {f_u \mathbf{x}^u + f_v \mathbf{x}^v + f_\perp \mathbf{x}^\perp} \right)\cdot\Bigl(\mathbf{x}_u(u, v_0) du - \mathbf{x}_u(u, v_0 + dv) du+\mathbf{x}_v(u_0 + du, v) dv - \mathbf{x}_v(u_0, v) dv\Bigr)=\int f_u(u, v_0) du - f_u(u, v_0 + dv) du+f_v(u_0 + du, v) dv - f_v(u_0, v) dv\end{aligned} \hspace{\stretch{1}}(1.22)

With the distances being infinitesimal, these differences can be rewritten as partial differentials

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} \mathbf{f} \cdot d\mathbf{l}=\iint \left( {-\frac{\partial {f_u}}{\partial {v}}+\frac{\partial {f_v}}{\partial {u}}} \right)du dv.\end{aligned} \hspace{\stretch{1}}(1.23)

We can now sum over a larger area as in fig. 1.2

Fig 1.2. Sum of infinitesimal loops

All the opposing oriented loop elements cancel, so the integral around the complete boundary of the surface $\mathbf{x}(u, v)$ is given by the $u,v$ area integral of the partials difference.

We will see that Green’s theorem is a special case of the Curl (Stokes) theorem. This observation will also provide a geometric interpretation of the right hand side area integral of thm. 2, and allow for a coordinate free representation.

Special case:

An important special case of Green’s theorem is for a Euclidean two dimensional space where the vector function is

\begin{aligned}\mathbf{f} = P \mathbf{e}_1 + Q \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(1.24)

Here Green’s theorem takes the form

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} P dx + Q dy=\iint \left( {\frac{\partial {Q}}{\partial {x}}-\frac{\partial {P}}{\partial {y}}} \right)dx dy.\end{aligned} \hspace{\stretch{1}}(1.0.25)

# Curl theorem, two volume vector field

Having examined the right hand side of thm. 1 for the very simplest geometric object $\mathbf{f}$, let’s look at the left hand side, the area integral, in more detail. We restrict our attention for now to vectors $\mathbf{f}$ still defined by eq. 1.19.

First we need to assign a meaning to $d^2 \mathbf{x}$. By this we mean the wedge product of the two differential elements. With

\begin{aligned}d\mathbf{x}_i = du^i \frac{\partial {\mathbf{x}}}{\partial {u^i}} = du^i \mathbf{x}_i,\end{aligned} \hspace{\stretch{1}}(1.26)

that area element is

\begin{aligned}d^2 \mathbf{x}= d\mathbf{x}_1 \wedge d\mathbf{x}_2= du^1 du^2 \mathbf{x}_1 \wedge \mathbf{x}_2.\end{aligned} \hspace{\stretch{1}}(1.0.27)

This is the oriented area element that lies in the tangent plane at the point of evaluation, and has the magnitude of the area of that segment of the surface, as depicted in fig. 1.3.

Fig 1.3. Oriented area element tiling of a surface

Observe that we have no requirement to introduce a normal to the surface to describe the direction of the plane. The wedge product provides the information about the orientation of the plane in the space, even when the vector space that our vector lies in has dimension greater than three.

Proceeding with the expansion of the dot product of the area element with the curl, using eq. 1.0.6, eq. 1.0.7, and eq. 1.0.8, and a scalar selection operation, we have

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= \left\langle{{d^2 \mathbf{x} \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)}}\right\rangle \\ &= \left\langle{{d^2 \mathbf{x}\frac{1}{2}\left( { \stackrel{ \rightarrow }{\boldsymbol{\partial}} \mathbf{f} - \mathbf{f} \stackrel{ \leftarrow }{\boldsymbol{\partial}} } \right)}}\right\rangle \\ &= \frac{1}{2}\left\langle{{d^2 \mathbf{x} \left( { \mathbf{x}^i \left( { \partial_i \mathbf{f}} \right) - \left( {\partial_i \mathbf{f}} \right) \mathbf{x}^i } \right)}}\right\rangle \\ &= \frac{1}{2}\left\langle{{\left( { \partial_i \mathbf{f} } \right) d^2 \mathbf{x} \mathbf{x}^i - \left( { \partial_i \mathbf{f} } \right) \mathbf{x}^i d^2 \mathbf{x}}}\right\rangle \\ &= \left\langle{{\left( { \partial_i \mathbf{f} } \right) \left( { d^2 \mathbf{x} \cdot \mathbf{x}^i } \right)}}\right\rangle \\ &= \partial_i \mathbf{f} \cdot\left( { d^2 \mathbf{x} \cdot \mathbf{x}^i } \right).\end{aligned} \hspace{\stretch{1}}(1.28)

Let’s proceed to expand the inner dot product

\begin{aligned}d^2 \mathbf{x} \cdot \mathbf{x}^i &= du^1 du^2\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \mathbf{x}^i \\ &= du^1 du^2\left( {\mathbf{x}_2 \cdot \mathbf{x}^i \mathbf{x}_1-\mathbf{x}_1 \cdot \mathbf{x}^i \mathbf{x}_2} \right) \\ &= du^1 du^2\left( {{\delta_2}^i \mathbf{x}_1-{\delta_1}^i \mathbf{x}_2} \right).\end{aligned} \hspace{\stretch{1}}(1.29)

The complete curl term is thus

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=du^1 du^2\left( {\frac{\partial {\mathbf{f}}}{\partial {u^2}} \cdot \mathbf{x}_1-\frac{\partial {\mathbf{f}}}{\partial {u^1}} \cdot \mathbf{x}_2} \right)\end{aligned} \hspace{\stretch{1}}(1.30)

This almost has the form of eq. 1.23, although that is not immediately obvious. Working backwards, using the shorthand $u = u^1, v = u^2$, we can show that this coordinate representation can be eliminated

\begin{aligned}-du dv\left( {\frac{\partial {f_v}}{\partial {u}} -\frac{\partial {f_u}}{\partial {v}}} \right) &= du dv\left( {\frac{\partial {}}{\partial {v}}\left( {\mathbf{f} \cdot \mathbf{x}_u} \right)-\frac{\partial {}}{\partial {u}}\left( {\mathbf{f} \cdot \mathbf{x}_v} \right)} \right) \\ &= du dv\left( {\frac{\partial {\mathbf{f}}}{\partial {v}} \cdot \mathbf{x}_u-\frac{\partial {\mathbf{f}}}{\partial {u}} \cdot \mathbf{x}_v+\mathbf{f} \cdot \left( {\frac{\partial {\mathbf{x}_u}}{\partial {v}}-\frac{\partial {\mathbf{x}_v}}{\partial {u}}} \right)} \right) \\ &= du dv \left( {\frac{\partial {\mathbf{f}}}{\partial {v}} \cdot \mathbf{x}_u-\frac{\partial {\mathbf{f}}}{\partial {u}} \cdot \mathbf{x}_v+\mathbf{f} \cdot \left( {\frac{\partial^2 \mathbf{x}}{\partial v \partial u}-\frac{\partial^2 \mathbf{x}}{\partial u \partial v}} \right)} \right) \\ &= du dv \left( {\frac{\partial {\mathbf{f}}}{\partial {v}} \cdot \mathbf{x}_u-\frac{\partial {\mathbf{f}}}{\partial {u}} \cdot \mathbf{x}_v} \right) \\ &= d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right).\end{aligned} \hspace{\stretch{1}}(1.31)

This relates the two parameter surface integral of the curl to the loop integral over its boundary

\begin{aligned}\int d^2 \mathbf{x} \cdot (\boldsymbol{\partial} \wedge \mathbf{f}) = \mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowleft}}\int} \mathbf{f} \cdot d\mathbf{l}.\end{aligned} \hspace{\stretch{1}}(1.0.32)

This is the very simplest special case of Stokes theorem. When written in the general form of Stokes thm. 1

\begin{aligned}\int_A d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f}} \right)=\int_{\partial A} d^1 \mathbf{x} \cdot \mathbf{f}=\int_{\partial A} \left( { d\mathbf{x}_1 - d\mathbf{x}_2 } \right) \cdot \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.0.33)

we must remember (the $\partial A$ notation is a reminder of this) that both the vector $\mathbf{f}$ and the differential elements are implicitly evaluated on the boundaries of the respective integration ranges. A more exact statement is

\begin{aligned}\int_{\partial A} d^1 \mathbf{x} \cdot \mathbf{f}=\int {\left.{{\mathbf{f} \cdot d\mathbf{x}_1}}\right\vert}_{{\Delta u^2}}-{\left.{{\mathbf{f} \cdot d\mathbf{x}_2}}\right\vert}_{{\Delta u^1}}=\int {\left.{{f_1}}\right\vert}_{{\Delta u^2}} du^1-{\left.{{f_2}}\right\vert}_{{\Delta u^1}} du^2.\end{aligned} \hspace{\stretch{1}}(1.0.34)

Expanded out in full this is

\begin{aligned}\int {\left.{{\mathbf{f} \cdot d\mathbf{x}_1}}\right\vert}_{{u^2(1)}}-{\left.{{\mathbf{f} \cdot d\mathbf{x}_1}}\right\vert}_{{u^2(0)}}+{\left.{{\mathbf{f} \cdot d\mathbf{x}_2}}\right\vert}_{{u^1(0)}}-{\left.{{\mathbf{f} \cdot d\mathbf{x}_2}}\right\vert}_{{u^1(1)}},\end{aligned} \hspace{\stretch{1}}(1.0.35)

which can be cross checked against fig. 1.4 to demonstrate that this specifies a clockwise orientation. For the surface with oriented area $d\mathbf{x}_1 \wedge d\mathbf{x}_2$, the clockwise loop is designated with line elements (1)-(4); the contributions around this loop (boxed in the figure) match eq. 1.0.35.

Fig 1.4. Clockwise loop
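Both sides of eq. 1.0.32 can also be checked numerically for a genuinely curved surface. The following is a sketch of my own, with an arbitrarily chosen patch $\mathbf{x}(u,v) = (u, v, u v)$ in a Euclidean 3D space and field $\mathbf{f} = (y^2, z, x y)$, using the coordinate form eq. 1.30 for the area integrand and traversing the boundary clockwise in the parameter plane, as in fig. 1.4:

```python
import numpy as np

# Numerical sketch (my own example): verify that the area integral of
# du dv (df/dv . x_u - df/du . x_v)   (the coordinate form of d^2x . (del ^ f))
# equals the boundary loop integral of f . dl, for the curved patch
# x(u,v) = (u, v, u v) and the arbitrary field f(x,y,z) = (y^2, z, x y).

def x(u, v):  return np.array([u, v, u * v])
def xu(u, v): return np.array([1.0, 0.0, v])     # dx/du
def xv(u, v): return np.array([0.0, 1.0, u])     # dx/dv
def f(p):     return np.array([p[1]**2, p[2], p[0] * p[1]])

n = 200
t = (np.arange(n) + 0.5) / n
h = 1.0 / n
eps = 1e-6

# Area integral, with the partials of F(u,v) = f(x(u,v)) taken by central
# differences on a midpoint grid.
area = 0.0
for u in t:
    for v in t:
        Fu = (f(x(u + eps, v)) - f(x(u - eps, v))) / (2 * eps)
        Fv = (f(x(u, v + eps)) - f(x(u, v - eps))) / (2 * eps)
        area += (Fv @ xu(u, v) - Fu @ xv(u, v)) * h * h

# Boundary loop integral of f . dl, traversed clockwise in the (u, v)
# parameter plane (the orientation paired with the area element x_u ^ x_v).
loop = 0.0
for s in t:
    loop += f(x(s, 1.0)) @ xu(s, 1.0) * h      # v = 1, du > 0
    loop -= f(x(s, 0.0)) @ xu(s, 0.0) * h      # v = 0, du < 0
    loop += f(x(0.0, s)) @ xv(0.0, s) * h      # u = 0, dv > 0
    loop -= f(x(1.0, s)) @ xv(1.0, s) * h      # u = 1, dv < 0

stokes_ok = np.isclose(area, loop, atol=1e-4)
print(area, loop, stokes_ok)
```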

## Example: Green’s theorem, a 2D Cartesian parameterization for a Euclidean space

For a Cartesian 2D Euclidean parameterization of a vector field and the integration space, Stokes theorem should be equivalent to Green’s theorem eq. 1.0.25. Let’s expand both sides of eq. 1.0.32 independently to verify equality. The parameterization is

\begin{aligned}\mathbf{x}(x, y) = x \mathbf{e}_1 + y \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(1.36)

Here the dual basis is the same as the basis, and the projection of the gradient onto the tangent space is just the gradient itself

\begin{aligned}\boldsymbol{\partial} = \boldsymbol{\nabla}= \mathbf{e}_1 \frac{\partial {}}{\partial {x}}+ \mathbf{e}_2 \frac{\partial {}}{\partial {y}}.\end{aligned} \hspace{\stretch{1}}(1.0.37)

The volume element is an area weighted pseudoscalar for the space

\begin{aligned}d^2 \mathbf{x} = dx dy \frac{\partial {\mathbf{x}}}{\partial {x}} \wedge \frac{\partial {\mathbf{x}}}{\partial {y}} = dx dy \mathbf{e}_1 \mathbf{e}_2,\end{aligned} \hspace{\stretch{1}}(1.0.38)

and the curl of a vector $\mathbf{f} = f_1 \mathbf{e}_1 + f_2 \mathbf{e}_2$ is

\begin{aligned}\boldsymbol{\partial} \wedge \mathbf{f}=\left( {\mathbf{e}_1 \frac{\partial {}}{\partial {x}}+ \mathbf{e}_2 \frac{\partial {}}{\partial {y}}} \right) \wedge\left( {f_1 \mathbf{e}_1 + f_2 \mathbf{e}_2} \right)=\mathbf{e}_1 \mathbf{e}_2\left( {\frac{\partial {f_2}}{\partial {x}}-\frac{\partial {f_1}}{\partial {y}}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.38)

So, the LHS of Stokes theorem takes the coordinate form

\begin{aligned}\int d^2 \mathbf{x} \cdot (\boldsymbol{\partial} \wedge \mathbf{f}) =\iint dx dy\underbrace{\left\langle{{\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_2}}\right\rangle}_{=-1}\left( {\frac{\partial {f_2}}{\partial {x}}-\frac{\partial {f_1}}{\partial {y}}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.38)

For the RHS, following fig. 1.5, we have

\begin{aligned}\mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowleft}}\int} \mathbf{f} \cdot d\mathbf{x}=f_2(x_0, y) dy+f_1(x, y_1) dx-f_2(x_1, y) dy-f_1(x, y_0) dx=\int dx \left( {f_1(x, y_1)-f_1(x, y_0)} \right)-\int dy \left( {f_2(x_1, y)-f_2(x_0, y)} \right).\end{aligned} \hspace{\stretch{1}}(1.0.38)

As expected, we can also obtain this by integrating eq. 1.0.38.

Fig 1.5. Euclidean 2D loop

## Example: Cylindrical parameterization

Let’s now consider a cylindrical parameterization of a 4D space with Euclidean metric $++++$ or Minkowski metric $+++-$. For such a space let’s do a brute force expansion of both sides of Stokes theorem to gain some confidence that all is well.

With $\kappa = \mathbf{e}_3 \mathbf{e}_4$, such a space is conveniently parameterized as illustrated in fig. 1.6 as

\begin{aligned}\mathbf{x}(x, y, \rho, \theta) = x \mathbf{e}_1 + y \mathbf{e}_2 + \rho \mathbf{e}_3 e^{\kappa \theta}.\end{aligned} \hspace{\stretch{1}}(1.42)

Fig 1.6. Cylindrical polar parameterization

Note that in the Euclidean case, where $\left( {\mathbf{e}_4} \right)^2 = 1$, the rejection of the non-axial components of $\mathbf{x}$ expands to

\begin{aligned}\left( { \left( { \mathbf{x} \wedge \mathbf{e}_1 \wedge \mathbf{e}_2} \right) \cdot \mathbf{e}^2 } \right) \cdot \mathbf{e}^1 =\rho \left( { \mathbf{e}_3 \cos\theta + \mathbf{e}_4 \sin \theta } \right),\end{aligned} \hspace{\stretch{1}}(1.43)

whereas for the Minkowski case where $\left( {\mathbf{e}_4} \right)^2 = -1$ we have a hyperbolic expansion

\begin{aligned}\left( { \left( { \mathbf{x} \wedge \mathbf{e}_1 \wedge \mathbf{e}_2} \right) \cdot \mathbf{e}^2 } \right) \cdot \mathbf{e}^1 =\rho \left( { \mathbf{e}_3 \cosh\theta + \mathbf{e}_4 \sinh \theta } \right).\end{aligned} \hspace{\stretch{1}}(1.44)

Within such a space consider the surface along $x = c, y = d$, for which the vectors are parameterized by

\begin{aligned}\mathbf{x}(\rho, \theta) = c \mathbf{e}_1 + d \mathbf{e}_2 + \rho \mathbf{e}_3 e^{\kappa \theta}.\end{aligned} \hspace{\stretch{1}}(1.45)

The tangent space basis vectors are

\begin{aligned}\mathbf{x}_\rho= \frac{\partial {\mathbf{x}}}{\partial {\rho}} = \mathbf{e}_3 e^{\kappa \theta},\end{aligned} \hspace{\stretch{1}}(1.46)

and

\begin{aligned}\mathbf{x}_\theta &= \frac{\partial {\mathbf{x}}}{\partial {\theta}} \\ &= \rho \mathbf{e}_3 \mathbf{e}_3 \mathbf{e}_4 e^{\kappa \theta} \\ &= \rho \mathbf{e}_4 e^{\kappa \theta}.\end{aligned} \hspace{\stretch{1}}(1.47)

Observe that both of these vectors have their origin at the point of evaluation, and aren’t relative to the absolute origin used to parameterize the complete space.

We wish to compute the volume element for the tangent plane. Noting that $\mathbf{e}_3$ and $\mathbf{e}_4$ both anticommute with $\kappa$ we have for $\mathbf{a} \in \text{span} \left\{ {\mathbf{e}_3, \mathbf{e}_4} \right\}$

\begin{aligned}\mathbf{a} e^{\kappa \theta} = e^{-\kappa \theta} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.48)

so

\begin{aligned}\mathbf{x}_\rho \wedge \mathbf{x}_\theta &= {\left\langle{{\mathbf{e}_3 e^{\kappa \theta} \rho \mathbf{e}_4 e^{\kappa \theta}}}\right\rangle}_{2} \\ &= \rho {\left\langle{{\mathbf{e}_3 e^{\kappa \theta} e^{-\kappa \theta} \mathbf{e}_4}}\right\rangle}_{2} \\ &= \rho \mathbf{e}_3 \mathbf{e}_4.\end{aligned} \hspace{\stretch{1}}(1.49)

The tangent space volume element is thus

\begin{aligned}d^2 \mathbf{x} = \rho d\rho d\theta \mathbf{e}_3 \mathbf{e}_4.\end{aligned} \hspace{\stretch{1}}(1.50)

With the tangent plane vectors mutually perpendicular we don’t need the general lemma 6 to compute the reciprocal basis, but can do so by inspection

\begin{aligned}\mathbf{x}^\rho = e^{-\kappa \theta} \mathbf{e}^3,\end{aligned} \hspace{\stretch{1}}(1.0.51)

and

\begin{aligned}\mathbf{x}^\theta = e^{-\kappa \theta} \mathbf{e}^4 \frac{1}{{\rho}}.\end{aligned} \hspace{\stretch{1}}(1.0.52)

Observe that the latter depends on the metric signature.
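To make that signature dependence concrete, here is a small numerical sketch of my own: working with coordinates in the $\mathbf{e}_3, \mathbf{e}_4$ plane, the exponentials expand to circular functions for the Euclidean metric and hyperbolic functions for the Minkowski metric (compare eq. 1.43 and eq. 1.44), and in both cases the reciprocal frame obtained from the inverse Gram matrix satisfies $\mathbf{x}^i \cdot \mathbf{x}_j = {\delta^i}_j$:

```python
import numpy as np

# Sketch (mine): represent vectors in the e3-e4 plane by their coordinates.
# Expanding x_rho = e3 e^{k theta} and x_theta = rho e4 e^{k theta} gives
# circular functions when (e4)^2 = +1 (kappa^2 = -1) and hyperbolic functions
# when (e4)^2 = -1 (kappa^2 = +1).

def frames(metric_e4, rho, theta):
    if metric_e4 > 0:                       # Euclidean case
        x_rho   = np.array([np.cos(theta), np.sin(theta)])
        x_theta = rho * np.array([-np.sin(theta), np.cos(theta)])
    else:                                   # Minkowski case
        x_rho   = np.array([np.cosh(theta), np.sinh(theta)])
        x_theta = rho * np.array([np.sinh(theta), np.cosh(theta)])
    return np.array([x_rho, x_theta])

rho, theta = 1.4, 0.6
results = {}
for sign in (+1, -1):
    G = np.diag([1.0, float(sign)])        # metric on span{e3, e4}
    B = frames(sign, rho, theta)
    g = B @ G @ B.T                        # induced metric (Gram matrix)
    R = np.linalg.inv(g) @ B               # reciprocal frame rows
    results[sign] = np.allclose(B @ G @ R.T, np.eye(2))

recip_ok = results[+1] and results[-1]
print(recip_ok)
```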

The vector derivative, the projection of the gradient on the tangent space, is

\begin{aligned}\boldsymbol{\partial} &= \mathbf{x}^\rho \frac{\partial {}}{\partial {\rho}}+\mathbf{x}^\theta \frac{\partial {}}{\partial {\theta}} \\ &= e^{-\kappa \theta} \left( {\mathbf{e}^3 \partial_\rho + \frac{\mathbf{e}^4}{\rho} \partial_\theta } \right).\end{aligned} \hspace{\stretch{1}}(1.0.52)

From this we see that acting with the vector derivative on a scalar radial only dependent function $f(\rho)$ is a vector function that has a radial direction, whereas the action of the vector derivative on an azimuthal only dependent function $g(\theta)$ is a vector function that has only an azimuthal direction. The interpretation of the geometric product action of the vector derivative on a vector function is not as simple since the product will be a multivector.

Expanding the curl in coordinates is messier, but yields in the end when tackled with sufficient care

\begin{aligned}\boldsymbol{\partial} \wedge \mathbf{f} &= {\left\langle{{e^{-\kappa \theta}\left( { e^3 \partial_\rho + \frac{e^4}{\rho} \partial_\theta} \right)\left( { \not{{e_1 x}} + \not{{e_2 y}} + e_3 e^{\kappa \theta } f_\rho + \frac{e^4}{\rho} e^{\kappa \theta } f_\theta} \right)}}\right\rangle}_{2} \\ &= \not{{{\left\langle{{e^{-\kappa \theta} e^3 \partial_\rho \left( { e_3 e^{\kappa \theta } f_\rho} \right)}}\right\rangle}_{2}}}+{\left\langle{{\not{{e^{-\kappa \theta}}} e^3 \partial_\rho \left( { \frac{e^4}{\rho} \not{{e^{\kappa \theta }}} f_\theta} \right)}}\right\rangle}_{2}+{\left\langle{{e^{-\kappa \theta}\frac{e^4}{\rho} \partial_\theta\left( { e_3 e^{\kappa \theta } f_\rho} \right)}}\right\rangle}_{2}+{\left\langle{{e^{-\kappa \theta}\frac{e^4}{\rho} \partial_\theta\left( { \frac{e^4}{\rho} e^{\kappa \theta } f_\theta} \right)}}\right\rangle}_{2} \\ &= \mathbf{e}^3 \mathbf{e}^4 \left( {-\frac{f_\theta}{\rho^2} + \frac{1}{{\rho}} \partial_\rho f_\theta- \frac{1}{{\rho}} \partial_\theta f_\rho} \right)+ \frac{1}{{\rho^2}}{\left\langle{{e^{-\kappa \theta} \left( {\mathbf{e}^4} \right)^2\left( {\mathbf{e}_3 \mathbf{e}_4 f_\theta+ \not{{\partial_\theta f_\theta}}} \right)e^{\kappa \theta}}}\right\rangle}_{2} \\ &= \mathbf{e}^3 \mathbf{e}^4 \left( {-\frac{f_\theta}{\rho^2} + \frac{1}{{\rho}} \partial_\rho f_\theta- \frac{1}{{\rho}} \partial_\theta f_\rho} \right)+ \frac{1}{{\rho^2}}{\left\langle{{\not{{e^{-\kappa \theta} }}\mathbf{e}_3 \mathbf{e}^4 f_\theta\not{{e^{\kappa \theta}}}}}\right\rangle}_{2} \\ &= \frac{\mathbf{e}^3 \mathbf{e}^4 }{\rho}\left( {\partial_\rho f_\theta- \partial_\theta f_\rho} \right).\end{aligned} \hspace{\stretch{1}}(1.0.52)

After all this reduction, we can now state in coordinates the LHS of Stokes theorem explicitly

\begin{aligned}\int d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= \int \rho d\rho d\theta \left\langle{{\mathbf{e}_3 \mathbf{e}_4 \mathbf{e}^3 \mathbf{e}^4 }}\right\rangle\frac{1}{{\rho}}\left( {\partial_\rho f_\theta- \partial_\theta f_\rho} \right) \\ &= \int d\rho d\theta\left( {\partial_\theta f_\rho-\partial_\rho f_\theta} \right) \\ &= \int d\rho {\left.{{f_\rho}}\right\vert}_{{\Delta \theta}}- \int d\theta{\left.{{f_\theta}}\right\vert}_{{\Delta \rho}}.\end{aligned} \hspace{\stretch{1}}(1.0.52)

Now compare this to the direct evaluation of the loop integral portion of Stokes theorem. Expressing this using eq. 1.0.34, we have the same result

\begin{aligned}\int_{\partial A} d^1 \mathbf{x} \cdot \mathbf{f}=\int {\left.{{f_\rho}}\right\vert}_{{\Delta \theta}} d\rho-{\left.{{f_\theta}}\right\vert}_{{\Delta \rho}} d\theta\end{aligned} \hspace{\stretch{1}}(1.0.56)

This example highlights some of the power of Stokes theorem, since the reduction of the volume element differential form was seen to be quite a chore (and easy to make mistakes doing.)

## Example: Composition of boost and rotation

Working in a $\bigwedge^{1,3}$ space with basis $\left\{ {\gamma_0, \gamma_1, \gamma_2, \gamma_3} \right\}$ where $\left( {\gamma_0} \right)^2 = 1$ and $\left( {\gamma_k} \right)^2 = -1, k \in \left\{ {1,2,3} \right\}$, an active composition of boost and rotation has the form

\begin{aligned}\begin{aligned}\mathbf{x}' &= e^{i\alpha/2} \mathbf{x}_0 e^{-i\alpha/2} \\ \mathbf{x}'' &= e^{-j\theta/2} \mathbf{x}' e^{j\theta/2}\end{aligned},\end{aligned} \hspace{\stretch{1}}(1.0.57)

where $i$ is a bivector of a timelike unit vector and perpendicular spacelike unit vector, and $j$ is a bivector of two perpendicular spacelike unit vectors. For example, $i = \gamma_0 \gamma_1$ and $j = \gamma_1 \gamma_2$. For such $i,j$ the respective Lorentz transformation matrices are

\begin{aligned}{\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \end{bmatrix}}'=\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.58)

and

\begin{aligned}{\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \end{bmatrix}}''=\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}{\begin{bmatrix}x^0 \\ x^1 \\ x^2 \\ x^3 \end{bmatrix}}'.\end{aligned} \hspace{\stretch{1}}(1.0.59)
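As a quick numerical check of my own, both matrices (and hence their composition) preserve the Minkowski quadratic form, $\Lambda^\mathrm{T} \eta \Lambda = \eta$ with $\eta = \mathrm{diag}(1,-1,-1,-1)$, as any Lorentz transformation must:

```python
import numpy as np

# Check (mine): the boost and rotation matrices above, and their composition,
# preserve the Minkowski metric eta = diag(1, -1, -1, -1).
alpha, theta = 0.8, 1.1
eta = np.diag([1.0, -1.0, -1.0, -1.0])

boost = np.array([
    [np.cosh(alpha), -np.sinh(alpha), 0, 0],
    [-np.sinh(alpha), np.cosh(alpha), 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]])

rot = np.array([
    [1, 0, 0, 0],
    [0, np.cos(theta), np.sin(theta), 0],
    [0, -np.sin(theta), np.cos(theta), 0],
    [0, 0, 0, 1]])

boost_ok = np.allclose(boost.T @ eta @ boost, eta)
rot_ok = np.allclose(rot.T @ eta @ rot, eta)
comp = rot @ boost
comp_ok = np.allclose(comp.T @ eta @ comp, eta)
print(boost_ok, rot_ok, comp_ok)
```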

Let’s calculate the tangent space vectors for this parameterization, assuming that the particle is at an initial spacetime position of $\mathbf{x}_0$. That is

\begin{aligned}\mathbf{x} = e^{-j\theta/2} e^{i\alpha/2} \mathbf{x}_0e^{-i\alpha/2} e^{j\theta/2}.\end{aligned} \hspace{\stretch{1}}(1.0.60)

To calculate the tangent space vectors for this subspace we note that

\begin{aligned}\frac{\partial {\mathbf{x}'}}{\partial {\alpha}} = \frac{i}{2} \mathbf{x}' - \mathbf{x}' \frac{i}{2} = i \cdot \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(1.0.61)

and

\begin{aligned}\frac{\partial {\mathbf{x}''}}{\partial {\theta}} = -\frac{j}{2} \mathbf{x}'' + \mathbf{x}'' \frac{j}{2} = \mathbf{x}'' \cdot j.\end{aligned} \hspace{\stretch{1}}(1.0.62)

The tangent space vectors are therefore

\begin{aligned}\begin{aligned}\mathbf{x}_\alpha &= e^{-j\theta/2} \left( { i \cdot \mathbf{x}' } \right) e^{j\theta/2} \\ \mathbf{x}_\theta &= \mathbf{x}'' \cdot j = e^{-j\theta/2} \left( { \mathbf{x}' \cdot j } \right) e^{j\theta/2}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.63)

Continuing the specific example where $i = \gamma_0 \gamma_1$, $j = \gamma_1 \gamma_2$, let's also pick $\mathbf{x}_0 = \gamma_0$, the spacetime position of a particle at the origin of a frame at that frame's $c t = 1$. For this initial position $\mathbf{x}' = e^{i\alpha/2} \gamma_0 e^{-i\alpha/2} = \gamma_0 e^{-i\alpha} = \gamma_0 \cosh\alpha - \gamma_1 \sinh\alpha$, so that $i \cdot \mathbf{x}' = \gamma_0 \sinh\alpha - \gamma_1 \cosh\alpha$. The tangent space vectors for the subspace parameterized by this transformation and this initial position are then reduced to

\begin{aligned}\mathbf{x}_\alpha = e^{-j\theta/2} \Bigl( { \gamma_0 \sinh\alpha - \gamma_1 \cosh\alpha } \Bigr) e^{j\theta/2} = \gamma_0 \sinh\alpha - \cosh\alpha \gamma_1 e^{j \theta},\end{aligned} \hspace{\stretch{1}}(1.0.63)

and

\begin{aligned}\mathbf{x}_\theta &= e^{-j\theta/2} \left( { \left( { \gamma_0 e^{-i \alpha} } \right) \cdot j } \right) e^{j\theta/2} \\ &= e^{-j\theta/2} {\left\langle{{ \left( { \gamma_0 \cosh\alpha - \gamma_1 \sinh\alpha } \right) \gamma_1 \gamma_2 }}\right\rangle}_{1} e^{j\theta/2} \\ &= e^{-j\theta/2} \left( { \gamma_2 \sinh\alpha } \right) e^{j\theta/2} \\ &= \sinh\alpha \gamma_2 e^{j \theta}.\end{aligned} \hspace{\stretch{1}}(1.0.63)

These are orthogonal, with $\mathbf{x}_\alpha \cdot \mathbf{x}_\alpha = -1$ and $\mathbf{x}_\theta \cdot \mathbf{x}_\theta = -\sinh^2 \alpha$, so the dual basis for this parameterization can be read off by inspection

\begin{aligned}\begin{aligned}\mathbf{x}^\alpha &= -\gamma_0 \sinh\alpha + \cosh\alpha \gamma_1 e^{j \theta} \\ \mathbf{x}^\theta &= -\frac{\gamma_2 e^{j \theta}}{\sinh\alpha} \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.66)

So, Stokes theorem, applied to a spacetime vector $\mathbf{f}$, for this subspace is

\begin{aligned}\int d\alpha d\theta \sinh\alpha \Bigl( { \sinh\alpha \gamma_0 \gamma_2 e^{j \theta} - \cosh\alpha \gamma_1 \gamma_2 } \Bigr) \cdot \left( {\left( {\mathbf{x}^\alpha \partial_\alpha + \mathbf{x}^\theta \partial_\theta} \right)\wedge \mathbf{f}} \right)=\int d\alpha {\left.{{\mathbf{f} \cdot \Bigl( { \gamma_0 \sinh\alpha - \cosh\alpha \gamma_1 e^{j \theta} } \Bigr)}}\right\vert}_{{\theta_0}}^{{\theta_1}}-\int d\theta {\left.{{\mathbf{f} \cdot \Bigl( { \sinh\alpha \gamma_2 e^{j \theta} } \Bigr)}}\right\vert}_{{\alpha_0}}^{{\alpha_1}}.\end{aligned} \hspace{\stretch{1}}(1.0.67)

Since the point is to avoid the curl integral, we did not actually have to state it explicitly, nor was there any actual need to calculate the dual basis.

## Example: Dual representation in three dimensions

It’s clear that there is a projective nature to the differential form $d^2 \mathbf{x} \cdot \left( {\boldsymbol{\partial} \wedge \mathbf{f}} \right)$. This projective nature allows us, in three dimensions, to re-express Stokes theorem using the gradient instead of the vector derivative, and to utilize the cross product and a normal direction to the plane.

When we parameterize a normal direction to the tangent space, so that for a 2D tangent space spanned by curvilinear coordinates $\mathbf{x}_1$ and $\mathbf{x}_2$ the vector $\mathbf{x}^3$ is normal to both, we can write our vector as

\begin{aligned}\mathbf{f} = f_1 \mathbf{x}^1 + f_2 \mathbf{x}^2 + f_3 \mathbf{x}^3,\end{aligned} \hspace{\stretch{1}}(1.0.68)

and express the orientation of the tangent space area element in terms of a pseudoscalar that includes this normal direction

\begin{aligned}\mathbf{x}_1 \wedge \mathbf{x}_2 =\mathbf{x}^3 \cdot \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) =\mathbf{x}^3 \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right).\end{aligned} \hspace{\stretch{1}}(1.0.69)

Inserting this into an expansion of the curl form we have

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= du^1 du^2 \left\langle{{\mathbf{x}^3 \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right)\left( {\left( {\sum_{i=1,2} x^i \partial_i} \right)\wedge\mathbf{f}} \right)}}\right\rangle \\ &= du^1 du^2 \mathbf{x}^3 \cdot \left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right)\cdot \left( {\boldsymbol{\nabla} \wedge \mathbf{f}} \right)-\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right)\cdot \left( {\mathbf{x}^3 \partial_3 \wedge \mathbf{f}} \right)} \right).\end{aligned} \hspace{\stretch{1}}(1.0.69)

Observe that this last term, the contribution of the component of the gradient perpendicular to the tangent space, has no $\mathbf{x}_3$ components

\begin{aligned}\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right)\cdot \left( {\mathbf{x}^3 \partial_3 \wedge \mathbf{f}} \right) &= \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right)\cdot \left( {\mathbf{x}^3 \wedge \partial_3 \mathbf{f}} \right) \\ &= \left( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \mathbf{x}^3} \right)\cdot \partial_3 \mathbf{f} \\ &= \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \partial_3 \mathbf{f} \\ &= \mathbf{x}_1 \left( { \mathbf{x}_2 \cdot \partial_3 \mathbf{f} } \right)-\mathbf{x}_2 \left( { \mathbf{x}_1 \cdot \partial_3 \mathbf{f} } \right),\end{aligned} \hspace{\stretch{1}}(1.0.69)

leaving

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=du^1 du^2 \mathbf{x}^3 \cdot \left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \left( { \boldsymbol{\nabla} \wedge \mathbf{f}} \right)} \right).\end{aligned} \hspace{\stretch{1}}(1.0.69)

Now scale the normal vector and its dual to have unit norm as follows

\begin{aligned}\begin{aligned}\mathbf{x}^3 &= \alpha \hat{\mathbf{x}}^3 \\ \mathbf{x}_3 &= \frac{1}{{\alpha}} \hat{\mathbf{x}}_3,\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.73)

so that for $\beta > 0$, the volume element can be

\begin{aligned}\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \hat{\mathbf{x}}_3 = \beta I.\end{aligned} \hspace{\stretch{1}}(1.0.73)

This scaling choice is illustrated in fig. 1.7, and represents the “outwards” normal. With such a scaling choice we have

Fig 1.7. Outwards normal

\begin{aligned}\beta du^1 du^2 = dA,\end{aligned} \hspace{\stretch{1}}(1.75)

and almost have the desired cross product representation

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=dA \hat{\mathbf{x}}^3 \cdot \left( { I \cdot \left( {\boldsymbol{\nabla} \wedge \mathbf{f}} \right) } \right)=dA \hat{\mathbf{x}}^3 \cdot \left( { I \left( {\boldsymbol{\nabla} \wedge \mathbf{f}} \right) } \right).\end{aligned} \hspace{\stretch{1}}(1.76)

With the duality identity $\mathbf{a} \wedge \mathbf{b} = I \left( {\mathbf{a} \times \mathbf{b}} \right)$, we have the traditional 3D representation of Stokes theorem

\begin{aligned}\int d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=-\int dA \hat{\mathbf{x}}^3 \cdot \left( {\boldsymbol{\nabla} \times \mathbf{f}} \right) = \mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowleft}}\int} \mathbf{f} \cdot d\mathbf{l}.\end{aligned} \hspace{\stretch{1}}(1.0.77)

Note that the orientation of the loop integral in the traditional statement of the 3D Stokes theorem is counterclockwise instead of clockwise, as written here.
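That traditional (counterclockwise) form is easy to spot check numerically. A minimal sketch, using the sample field $\mathbf{f} = (-y, x, 0)$ on the unit disk, where $\boldsymbol{\nabla} \times \mathbf{f} = (0,0,2)$ so that both sides should equal $2\pi$:

```python
import math

# Sample field f = (-y, x, 0); its curl is the constant (0, 0, 2), so the
# flux through the unit disk (normal e3) is 2 * pi.
curl_flux = 2.0 * math.pi

# Counterclockwise line integral of f around the unit circle (midpoint rule).
n = 10000
line = 0.0
for k in range(n):
    t = 2.0 * math.pi * (k + 0.5) / n
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # dl/dt
    fx, fy = -y, x
    line += (fx * dx + fy * dy) * (2.0 * math.pi / n)

assert abs(line - curl_flux) < 1e-9
```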

# Stokes theorem, three variable volume element parameterization

We can restate the identity of thm. 1 in an equivalent dot product form.

\begin{aligned}\int_V \left( { d^k \mathbf{x} \cdot \mathbf{x}^i } \right) \cdot \partial_i F = \int_{\partial V} d^{k-1} \mathbf{x} \cdot F.\end{aligned} \hspace{\stretch{1}}(1.0.78)

Here $d^{k-1} \mathbf{x} = \sum_i d^k \mathbf{x} \cdot \mathbf{x}^i$, with the implicit assumption that it, and the blade $F$ that it is dotted with, are both evaluated at the end points of the integration variable $u^i$ that has been integrated against.

We’ve seen one specific example of this above in the expansions of eq. 1.28 and eq. 1.29. However, the equivalent result of eq. 1.0.78, somewhat magically, applies to any blade $F$ of degree $s$ and any volume element, provided the degree of the blade is less than that of the volume element (i.e. $s < k$). That magic follows directly from lemma 1.

As an expositional example, consider a three variable volume element parameterization, and a vector blade $\mathbf{f}$

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= \left( { d^3 \mathbf{x} \cdot \mathbf{x}^i } \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3\left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \mathbf{x}^i } \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3\left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) {\delta_3}^i-\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 } \right) {\delta_2}^i+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) {\delta_1}^i} \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3\left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \partial_3 \mathbf{f}-\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 } \right) \cdot \partial_2 \mathbf{f}+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_1 \mathbf{f}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.78)

It should not be surprising that this has the structure found in the theory of differential forms. Using the differentials for each of the parameterization “directions”, we can write this dot product expansion as

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=\left( {du^3 \left( { d\mathbf{x}_1 \wedge d\mathbf{x}_2 } \right) \cdot \partial_3 \mathbf{f}-du^2 \left( { d\mathbf{x}_1 \wedge d\mathbf{x}_3 } \right) \cdot \partial_2 \mathbf{f}+du^1 \left( { d\mathbf{x}_2 \wedge d\mathbf{x}_3 } \right) \cdot \partial_1 \mathbf{f}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.78)

Observe that the sign changes with each element of $d\mathbf{x}_1 \wedge d\mathbf{x}_2 \wedge d\mathbf{x}_3$ that is skipped. In differential forms, the wedge product composition of 1-forms is an abstract quantity. Here the differentials are just vectors, and their wedge product represents an oriented volume element. This interpretation is likely available in the theory of differential forms too, but is arguably less obvious.

## Digression

As was the case with the loop integral, we expect that the coordinate representation can be expressed as a sum of antisymmetric terms. A bit of experimentation shows that such a sum, after dropping the parameter space volume element factor, is

\begin{aligned}\mathbf{x}_1 \left( { -\partial_2 f_3 + \partial_3 f_2 } \right)+\mathbf{x}_2 \left( { -\partial_3 f_1 + \partial_1 f_3 } \right)+\mathbf{x}_3 \left( { -\partial_1 f_2 + \partial_2 f_1 } \right) &= \mathbf{x}_1 \left( { -\partial_2 \mathbf{f} \cdot \mathbf{x}_3 + \partial_3 \mathbf{f} \cdot \mathbf{x}_2 } \right)+\mathbf{x}_2 \left( { -\partial_3 \mathbf{f} \cdot \mathbf{x}_1 + \partial_1 \mathbf{f} \cdot \mathbf{x}_3 } \right)+\mathbf{x}_3 \left( { -\partial_1 \mathbf{f} \cdot \mathbf{x}_2 + \partial_2 \mathbf{f} \cdot \mathbf{x}_1 } \right) \\ &= \left( { \mathbf{x}_1 \partial_3 \mathbf{f} \cdot \mathbf{x}_2 -\mathbf{x}_2 \partial_3 \mathbf{f} \cdot \mathbf{x}_1 } \right)+\left( { \mathbf{x}_3 \partial_2 \mathbf{f} \cdot \mathbf{x}_1 -\mathbf{x}_1 \partial_2 \mathbf{f} \cdot \mathbf{x}_3 } \right)+\left( { \mathbf{x}_2 \partial_1 \mathbf{f} \cdot \mathbf{x}_3 -\mathbf{x}_3 \partial_1 \mathbf{f} \cdot \mathbf{x}_2 } \right) \\ &= \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \partial_3 \mathbf{f}+\left( { \mathbf{x}_3 \wedge \mathbf{x}_1 } \right) \cdot \partial_2 \mathbf{f}+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_1 \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.0.78)
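That experimentation is easy to replicate numerically, since the regrouping above only uses the identity $\left( { \mathbf{a} \wedge \mathbf{b} } \right) \cdot \mathbf{c} = \mathbf{a} \left( { \mathbf{b} \cdot \mathbf{c} } \right) - \mathbf{b} \left( { \mathbf{a} \cdot \mathbf{c} } \right)$. A pure Python check with arbitrary sample vectors, with $\mathbf{p}_k$ standing in for $\partial_k \mathbf{f}$ (so that $\partial_k f_j = \mathbf{p}_k \cdot \mathbf{x}_j$):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def scale(s, u):
    return tuple(s * ui for ui in u)

def add(*vs):
    return tuple(sum(cs) for cs in zip(*vs))

def wedge_dot(a, b, c):
    # (a ^ b) . c = a (b . c) - b (a . c)
    return add(scale(dot(b, c), a), scale(-dot(a, c), b))

# Arbitrary sample tangent vectors and partials p_k standing in for d_k f.
x1, x2, x3 = (1.0, 0.5, -2.0), (0.0, 3.0, 1.0), (2.0, -1.0, 0.5)
p1, p2, p3 = (0.7, 0.2, 1.1), (-1.0, 0.4, 0.0), (0.3, -0.6, 2.0)

lhs = add(scale(-dot(p2, x3) + dot(p3, x2), x1),
          scale(-dot(p3, x1) + dot(p1, x3), x2),
          scale(-dot(p1, x2) + dot(p2, x1), x3))

rhs = add(wedge_dot(x1, x2, p3), wedge_dot(x3, x1, p2), wedge_dot(x2, x3, p1))

assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```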

To proceed with the integration, we must again consider an infinitesimal volume element, for which the partial can be evaluated as the difference of the endpoints, with all else held constant. For this three variable parameterization, say, $(u,v,w)$, let’s delimit such an infinitesimal volume element by the parameterization ranges $[u_0,u_0 + du]$, $[v_0,v_0 + dv]$, $[w_0,w_0 + dw]$. The integral is

\begin{aligned}\begin{aligned}\int_{u = u_0}^{u_0 + du}\int_{v = v_0}^{v_0 + dv}\int_{w = w_0}^{w_0 + dw}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)&=\int_{u = u_0}^{u_0 + du}du\int_{v = v_0}^{v_0 + dv}dv{\left.{{ \Bigl( { \left( { \mathbf{x}_u \wedge \mathbf{x}_v } \right) \cdot \mathbf{f} } \Bigr) }}\right\vert}_{{w = w_0}}^{{w_0 + dw}} \\ &-\int_{u = u_0}^{u_0 + du}du\int_{w = w_0}^{w_0 + dw}dw{\left.{{\Bigl( { \left( { \mathbf{x}_u \wedge \mathbf{x}_w } \right) \cdot \mathbf{f} } \Bigr) }}\right\vert}_{{v = v_0}}^{{v_0 + dv}} \\ &+\int_{v = v_0}^{v_0 + dv}dv\int_{w = w_0}^{w_0 + dw}dw{\left.{{\Bigl( { \left( { \mathbf{x}_v \wedge \mathbf{x}_w } \right) \cdot \mathbf{f} } \Bigr) }}\right\vert}_{{u = u_0}}^{{u_0 + du}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.82)

Extending this over the ranges $[u_0,u_0 + \Delta u]$, $[v_0,v_0 + \Delta v]$, $[w_0,w_0 + \Delta w]$, we have proved Stokes thm. 1 for vectors and a three parameter volume element, provided we have a surface element of the form

\begin{aligned}d^2 \mathbf{x} = {\left. \Bigl( {d\mathbf{x}_u \wedge d\mathbf{x}_v } \Bigr) \right\vert}_{w = w_0}^{w_1}-{\left. \Bigl( {d\mathbf{x}_u \wedge d\mathbf{x}_w } \Bigr) \right\vert}_{v = v_0}^{v_1}+{\left. \Bigl( {d\mathbf{x}_v \wedge d\mathbf{x}_w } \Bigr) \right\vert}_{ u = u_0 }^{u_1},\end{aligned} \hspace{\stretch{1}}(1.0.82)

where the dot products with $\mathbf{f}$ are also evaluated at the same points.
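Before tackling curvilinear examples, eq. 1.0.82 can be spot checked numerically in the simplest possible setting, a Cartesian parameterization $u, v, w = x, y, z$ of the unit cube. Here is such a check for an arbitrary sample polynomial field, using midpoint-rule quadrature for both the volume and the boundary integrals:

```python
# Cartesian parameterization u, v, w = x, y, z on the unit cube, with the
# arbitrary sample field f = (y z, x^2, x + y z).  Both sides of eq. (1.0.82)
# are vector valued here.

def f(x, y, z):
    return (y * z, x * x, x + y * z)

# Analytic partials of f, so that only the quadrature is approximate.
def d1f(x, y, z): return (0.0, 2 * x, 1.0)   # partial w.r.t. u = x
def d2f(x, y, z): return (z, 0.0, z)         # partial w.r.t. v = y
def d3f(x, y, z): return (y, 0.0, y)         # partial w.r.t. w = z

n = 20
h = 1.0 / n
pts = [(k + 0.5) * h for k in range(n)]

# Volume side: (x1^x2).d3f - (x1^x3).d2f + (x2^x3).d1f, expanded with
# (a^b).c = a (b.c) - b (a.c) for the orthonormal frame x_i = e_i.
lhs = [0.0, 0.0, 0.0]
for x in pts:
    for y in pts:
        for z in pts:
            p1, p2, p3 = d1f(x, y, z), d2f(x, y, z), d3f(x, y, z)
            lhs[0] += (p3[1] - p2[2]) * h**3
            lhs[1] += (p1[2] - p3[0]) * h**3
            lhs[2] += (p2[0] - p1[1]) * h**3

# Boundary side: the three face-difference terms of eq. (1.0.82).
rhs = [0.0, 0.0, 0.0]
for a in pts:
    for b in pts:
        top, bot = f(a, b, 1.0), f(a, b, 0.0)   # (e1^e2).f = (f2, -f1, 0)
        rhs[0] += (top[1] - bot[1]) * h**2
        rhs[1] -= (top[0] - bot[0]) * h**2
        top, bot = f(a, 1.0, b), f(a, 0.0, b)   # -(e1^e3).f = (-f3, 0, f1)
        rhs[0] -= (top[2] - bot[2]) * h**2
        rhs[2] += (top[0] - bot[0]) * h**2
        top, bot = f(1.0, a, b), f(0.0, a, b)   # (e2^e3).f = (0, f3, -f2)
        rhs[1] += (top[2] - bot[2]) * h**2
        rhs[2] -= (top[1] - bot[1]) * h**2

assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```

The midpoint rule is exact for the multilinear integrands that show up here, so both sides agree to machine precision.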

## Example: Euclidean spherical polar parameterization of 3D subspace

Consider a Euclidean space where a 3D subspace is parameterized using spherical coordinates, as in

\begin{aligned}\mathbf{x}(x, \rho, \theta, \phi) = \mathbf{e}_1 x + \mathbf{e}_4 \rho \exp\left( { \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta} \right)=\left( {x, \rho \sin\theta \cos\phi, \rho \sin\theta \sin\phi, \rho \cos\theta} \right).\end{aligned} \hspace{\stretch{1}}(1.0.84)

The tangent space basis for the subspace situated at some fixed $x = x_0$, is easy to calculate, and is found to be

\begin{aligned}\begin{aligned}\mathbf{x}_\rho &= \left( {0, \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta} \right) =\mathbf{e}_4 \exp\left( { \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta} \right) \\ \mathbf{x}_\theta &= \rho \left( {0, \cos\theta \cos\phi, \cos\theta \sin\phi, - \sin\theta} \right) =\rho \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \exp\left( { \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta } \right) \\ \mathbf{x}_\phi &=\rho \left( {0, -\sin\theta \sin\phi, \sin\theta \cos\phi, 0} \right)= \rho \sin\theta \mathbf{e}_3 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.85)

We can use the general relation of lemma 7 to compute the reciprocal basis. That is

\begin{aligned}\mathbf{a}^{*} = \left( { \mathbf{b} \wedge \mathbf{c} } \right) \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} }}.\end{aligned} \hspace{\stretch{1}}(1.0.86)
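In a 3D Euclidean space, where $\mathbf{b} \wedge \mathbf{c} = I \left( { \mathbf{b} \times \mathbf{c} } \right)$ and $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} = I \mathbf{a} \cdot \left( { \mathbf{b} \times \mathbf{c} } \right)$, this reduces to the familiar reciprocal frame $\mathbf{b} \times \mathbf{c}/\left( { \mathbf{a} \cdot \left( { \mathbf{b} \times \mathbf{c} } \right) } \right)$. A quick numerical check of that 3D special case, with arbitrary sample vectors:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Arbitrary (linearly independent) sample vectors.
a, b, c = (1.0, 2.0, 0.5), (0.0, 1.0, 3.0), (2.0, -1.0, 1.0)
vol = dot(a, cross(b, c))   # scalar triple product a . (b x c)

recip = [tuple(x / vol for x in cross(b, c)),
         tuple(x / vol for x in cross(c, a)),
         tuple(x / vol for x in cross(a, b))]

# The reciprocal frame satisfies x_i . x^j = delta_i^j.
for i, xi in enumerate((a, b, c)):
    for j, xj in enumerate(recip):
        assert abs(dot(xi, xj) - (1.0 if i == j else 0.0)) < 1e-12
```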

However, a naive attempt at applying this without algebraic software is a route that requires a lot of care, and is easy to make mistakes doing. In this case it is really not necessary since the tangent space basis only requires scaling to orthonormalize, satisfying for $i,j \in \left\{ {\rho, \theta, \phi} \right\}$

\begin{aligned}\mathbf{x}_i \cdot \mathbf{x}_j =\begin{bmatrix} 1 & 0 & 0 \\ 0 & \rho^2 & 0 \\ 0 & 0 & \rho^2 \sin^2 \theta \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.87)

This allows us to read off the dual basis for the tangent volume by inspection

\begin{aligned}\begin{aligned}\mathbf{x}^\rho &=\mathbf{e}_4 \exp\left( { \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta} \right) \\ \mathbf{x}^\theta &= \frac{1}{{\rho}} \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \exp\left( { \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta } \right) \\ \mathbf{x}^\phi &=\frac{1}{{\rho \sin\theta}} \mathbf{e}_3 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.88)
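These normalization claims are easy to spot check numerically from the coordinate forms of eq. 1.0.85, using an arbitrary sample point (away from the poles, where the parameterization degenerates):

```python
import math

rho, theta, phi = 1.3, 0.8, 0.4   # arbitrary sample point, away from the poles

st, ct = math.sin(theta), math.cos(theta)
sp, cp = math.sin(phi), math.cos(phi)

# Coordinate forms of eq. (1.0.85); the first slot is the fixed x direction.
x_rho   = (0.0, st * cp, st * sp, ct)
x_theta = (0.0, rho * ct * cp, rho * ct * sp, -rho * st)
x_phi   = (0.0, -rho * st * sp, rho * st * cp, 0.0)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

basis = (x_rho, x_theta, x_phi)
squares = (1.0, rho**2, (rho * st)**2)
for i, xi in enumerate(basis):
    for j, xj in enumerate(basis):
        want = squares[i] if i == j else 0.0
        assert abs(dot(xi, xj) - want) < 1e-12
```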

Should we wish to explicitly calculate the curl on the tangent space, we would need these. The area and volume elements are also messy to calculate manually. This expansion can be found in the Mathematica notebook sphericalSurfaceAndVolumeElements.nb, and is

\begin{aligned}\begin{aligned}\mathbf{x}_\theta \wedge \mathbf{x}_\phi &=\rho^2 \sin\theta \left( \mathbf{e}_4 \mathbf{e}_2 \sin\theta \sin\phi + \mathbf{e}_2 \mathbf{e}_3 \cos\theta + \mathbf{e}_3 \mathbf{e}_4 \sin\theta \cos\phi \right) \\ \mathbf{x}_\phi \wedge \mathbf{x}_\rho &=\rho \sin\theta \left(-\mathbf{e}_2 \mathbf{e}_3 \sin\theta -\mathbf{e}_2 \mathbf{e}_4 \cos\theta \sin\phi +\mathbf{e}_3 \mathbf{e}_4\cos\theta \cos\phi \right) \\ \mathbf{x}_\rho \wedge \mathbf{x}_\theta &= \mathbf{e}_4 \rho \left(\mathbf{e}_2\cos\phi +\mathbf{e}_3\sin\phi \right) \\ \mathbf{x}_\rho \wedge \mathbf{x}_\theta \wedge \mathbf{x}_\phi &= \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_4 \rho^2 \sin\theta. \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.89)

Those area elements have Geometric algebra factorizations that are perhaps useful

\begin{aligned}\begin{aligned}\mathbf{x}_\theta \wedge \mathbf{x}_\phi &=\rho^2 \sin\theta \mathbf{e}_2 \mathbf{e}_3 \exp\left( {\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta} \right) \\ \mathbf{x}_\phi \wedge \mathbf{x}_\rho &=\rho \sin\theta \mathbf{e}_3 \mathbf{e}_4 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}\exp\left( {\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \theta} \right) \\ \mathbf{x}_\rho \wedge \mathbf{x}_\theta &= \rho \mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}\end{aligned}.\end{aligned} \hspace{\stretch{1}}(1.0.90)

One of the beauties of Stokes theorem is that we don’t actually have to calculate the dual basis on the tangent space to proceed with the integration. For the calculation above, where we had a normal tangent basis, software was still used as an aid, so it is clear that this can generally get pretty messy.

To apply Stokes theorem to a vector field we can use eq. 1.0.82 to write down the integral directly

\begin{aligned}\int_V d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= \int_{\partial V} d^2 \mathbf{x} \cdot \mathbf{f} \\ &= \int {\left.{{ \left( { \mathbf{x}_\theta \wedge \mathbf{x}_\phi } \right) \cdot \mathbf{f} }}\right\vert}_{{\rho = \rho_0}}^{{\rho_1}} d\theta d\phi+\int{\left.{{ \left( { \mathbf{x}_\phi \wedge \mathbf{x}_\rho } \right) \cdot \mathbf{f} }}\right\vert}_{{\theta = \theta_0}}^{{\theta_1}} d\phi d\rho+\int{\left.{{ \left( { \mathbf{x}_\rho \wedge \mathbf{x}_\theta } \right) \cdot \mathbf{f} }}\right\vert}_{{\phi = \phi_0}}^{{\phi_1}} d\rho d\theta.\end{aligned} \hspace{\stretch{1}}(1.0.90)

Observe that eq. 1.0.90 is a vector valued integral that expands to

\begin{aligned}\int {\left.{{ \left( { \mathbf{x}_\theta f_\phi - \mathbf{x}_\phi f_\theta } \right) }}\right\vert}_{{\rho = \rho_0}}^{{\rho_1}} d\theta d\phi+\int {\left.{{ \left( { \mathbf{x}_\phi f_\rho - \mathbf{x}_\rho f_\phi } \right) }}\right\vert}_{{\theta = \theta_0}}^{{\theta_1}} d\phi d\rho+\int {\left.{{ \left( { \mathbf{x}_\rho f_\theta - \mathbf{x}_\theta f_\rho } \right) }}\right\vert}_{{\phi = \phi_0}}^{{\phi_1}} d\rho d\theta.\end{aligned} \hspace{\stretch{1}}(1.0.92)

This could easily be a difficult integral to evaluate since the vectors $\mathbf{x}_i$ evaluated at the endpoints are still functions of two parameters. An easier integral would result from the application of Stokes theorem to a bivector valued field, say $B$, for which we have

\begin{aligned}\int_V d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right) &= \int_{\partial V} d^2 \mathbf{x} \cdot B \\ &= \int {\left.{{ \left( { \mathbf{x}_\theta \wedge \mathbf{x}_\phi } \right) \cdot B }}\right\vert}_{{\rho = \rho_0}}^{{\rho_1}} d\theta d\phi+\int{\left.{{ \left( { \mathbf{x}_\phi \wedge \mathbf{x}_\rho } \right) \cdot B }}\right\vert}_{{\theta = \theta_0}}^{{\theta_1}} d\phi d\rho+\int{\left.{{ \left( { \mathbf{x}_\rho \wedge \mathbf{x}_\theta } \right) \cdot B }}\right\vert}_{{\phi = \phi_0}}^{{\phi_1}} d\rho d\theta \\ &= \int {\left.{{ B_{\phi \theta} }}\right\vert}_{{\rho = \rho_0}}^{{\rho_1}} d\theta d\phi+\int{\left.{{ B_{\rho \phi} }}\right\vert}_{{\theta = \theta_0}}^{{\theta_1}} d\phi d\rho+\int{\left.{{ B_{\theta \rho} }}\right\vert}_{{\phi = \phi_0}}^{{\phi_1}} d\rho d\theta.\end{aligned} \hspace{\stretch{1}}(1.0.92)

There is a geometric interpretation to these oriented area integrals, especially when written out explicitly in terms of the differentials along the parameterization directions. Pulling out a sign explicitly to match the geometry (as we had to also do for the line integrals in the two parameter volume element case), we can write this as

\begin{aligned}\int_{\partial V} d^2 \mathbf{x} \cdot B = -\int {\left.{{ \left( { d\mathbf{x}_\phi \wedge d\mathbf{x}_\theta } \right) \cdot B }}\right\vert}_{{\rho = \rho_0}}^{{\rho_1}} -\int{\left.{{ \left( { d\mathbf{x}_\rho \wedge d\mathbf{x}_\phi } \right) \cdot B }}\right\vert}_{{\theta = \theta_0}}^{{\theta_1}} -\int{\left.{{ \left( { d\mathbf{x}_\theta \wedge d\mathbf{x}_\rho } \right) \cdot B }}\right\vert}_{{\phi = \phi_0}}^{{\phi_1}}.\end{aligned} \hspace{\stretch{1}}(1.0.94)

When written out in this differential form, each of the respective area elements is an oriented area along one of the faces of the parameterization volume, much like the line integral that results from a two parameter volume curl integral. This is visualized in fig. 1.8. In this figure, faces (1) and (3) are “top faces”, those with signs matching the tops of the evaluation ranges in eq. 1.0.94, whereas face (2) is a bottom face with a sign that is correspondingly reversed.

Fig 1.8. Boundary faces of a spherical parameterization region

## Example: Minkowski hyperbolic-spherical polar parameterization of 3D subspace

Working with a three parameter volume element in a Minkowski space does not change much. For example in a 4D space with $\left( {\mathbf{e}_4} \right)^2 = -1$, we can employ a hyperbolic-spherical parameterization similar to that used above for the 4D Euclidean space

\begin{aligned}\mathbf{x}(x, \rho, \alpha, \phi)=\left\{ {x, \rho \sinh \alpha \cos\phi, \rho \sinh \alpha \sin\phi, \rho \cosh \alpha} \right\}=\mathbf{e}_1 x + \mathbf{e}_4 \rho \exp\left( { -\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha } \right).\end{aligned} \hspace{\stretch{1}}(1.0.95)

This has tangent space basis elements

\begin{aligned}\begin{aligned}\mathbf{x}_\rho &= \sinh\alpha \left( { \cos\phi \mathbf{e}_2 + \sin\phi \mathbf{e}_3 } \right) + \cosh\alpha \mathbf{e}_4 = \mathbf{e}_4 \exp\left( {-\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha} \right) \\ \mathbf{x}_\alpha &=\rho \cosh\alpha \left( { \cos\phi \mathbf{e}_2 + \sin\phi \mathbf{e}_3} \right) + \rho \sinh\alpha \mathbf{e}_4=\rho \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \exp\left( {-\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha} \right) \\ \mathbf{x}_\phi &=\rho \sinh\alpha \left( { \mathbf{e}_3 \cos\phi - \mathbf{e}_2 \sin\phi} \right) = \rho\sinh\alpha \mathbf{e}_3 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.96)

This is a normal basis, but again not orthonormal. Specifically, for $i,j \in \left\{ {\rho, \alpha, \phi} \right\}$ we have

\begin{aligned}\mathbf{x}_i \cdot \mathbf{x}_j =\begin{bmatrix}-1 & 0 & 0 \\ 0 & \rho^2 & 0 \\ 0 & 0 & \rho^2 \sinh^2 \alpha \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.97)

where we see that the radial vector $\mathbf{x}_\rho$ is timelike. We can form the dual basis again by inspection

\begin{aligned}\begin{aligned}\mathbf{x}^\rho &= -\mathbf{e}_4 \exp\left( {-\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha} \right) \\ \mathbf{x}^\alpha &= \frac{1}{{\rho}} \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \exp\left( {-\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha} \right) \\ \mathbf{x}^\phi &= \frac{1}{{\rho\sinh\alpha}} \mathbf{e}_3 e^{\mathbf{e}_2 \mathbf{e}_3 \phi}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.98)
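As in the Euclidean case, the scaling used for this dual basis can be spot checked numerically from the coordinate forms of eq. 1.0.96, now with the Minkowski dot product:

```python
import math

rho, alpha, phi = 1.1, 0.6, 0.9   # arbitrary sample point with alpha != 0

sh, ch = math.sinh(alpha), math.cosh(alpha)
sp, cp = math.sin(phi), math.cos(phi)

# Coordinate forms of eq. (1.0.96); the first slot is the fixed x direction.
x_rho   = (0.0, sh * cp, sh * sp, ch)
x_alpha = (0.0, rho * ch * cp, rho * ch * sp, rho * sh)
x_phi   = (0.0, -rho * sh * sp, rho * sh * cp, 0.0)

# Minkowski dot product, with (e4)^2 = -1 in the last slot.
def mdot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2] - u[3] * v[3]

basis = (x_rho, x_alpha, x_phi)
squares = (-1.0, rho**2, (rho * sh)**2)
for i, xi in enumerate(basis):
    for j, xj in enumerate(basis):
        want = squares[i] if i == j else 0.0
        assert abs(mdot(xi, xj) - want) < 1e-12
```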

The area elements are

\begin{aligned}\begin{aligned}\mathbf{x}_\alpha \wedge \mathbf{x}_\phi &=\rho^2 \sinh\alpha \left(\mathbf{e}_4 \mathbf{e}_3 \sinh\alpha \cos\phi+\cosh\alpha \mathbf{e}_2 \mathbf{e}_3+\sinh\alpha \sin\phi \mathbf{e}_2 \mathbf{e}_4\right) \\ \mathbf{x}_\phi \wedge \mathbf{x}_\rho &=\rho \sinh\alpha \left(-\mathbf{e}_2 \mathbf{e}_3 \sinh\alpha-\mathbf{e}_2 \mathbf{e}_4 \cosh\alpha \sin\phi+\cosh\alpha \cos\phi \mathbf{e}_3 \mathbf{e}_4\right) \\ \mathbf{x}_\rho \wedge \mathbf{x}_\alpha &=\mathbf{e}_4 \rho \left(\cos\phi \mathbf{e}_2+\sin\phi \mathbf{e}_3\right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.99)

or

\begin{aligned}\begin{aligned}\mathbf{x}_\alpha \wedge \mathbf{x}_\phi &=\rho^2 \sinh\alpha \mathbf{e}_2 \mathbf{e}_3 \exp\left( { -\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha } \right) \\ \mathbf{x}_\phi \wedge \mathbf{x}_\rho &=\rho\sinh\alpha \mathbf{e}_3 \mathbf{e}_4 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \exp\left( {-\mathbf{e}_4 \mathbf{e}_2 e^{\mathbf{e}_2 \mathbf{e}_3 \phi} \alpha} \right) \\ \mathbf{x}_\rho \wedge \mathbf{x}_\alpha &=\mathbf{e}_4 \mathbf{e}_2 \rho e^{\mathbf{e}_2 \mathbf{e}_3 \phi}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.100)

The volume element also reduces nicely, and is

\begin{aligned}\mathbf{x}_\rho \wedge \mathbf{x}_\alpha \wedge \mathbf{x}_\phi = \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_4 \rho^2 \sinh\alpha.\end{aligned} \hspace{\stretch{1}}(1.0.101)

The area and volume element reductions were once again messy, done in software using the Mathematica notebook sphericalSurfaceAndVolumeElementsMinkowski.nb. However, we really only need eq. 1.0.96 to perform the Stokes integration.

# Stokes theorem, four variable volume element parameterization

Volume elements for up to four parameters are likely of physical interest, with the four volume elements of interest for relativistic physics in $\bigwedge^{3,1}$ spaces. For example, we may wish to use a parameterization $u^1 = x, u^2 = y, u^3 = z, u^4 = \tau = c t$, with a four volume

\begin{aligned}d^4 \mathbf{x}=d\mathbf{x}_x \wedge d\mathbf{x}_y \wedge d\mathbf{x}_z \wedge d\mathbf{x}_\tau.\end{aligned} \hspace{\stretch{1}}(1.102)

We follow the same procedure to calculate the corresponding boundary surface “area” element (with dimensions of volume in this case). This is

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= \left( { d^4 \mathbf{x} \cdot \mathbf{x}^i } \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3 du^4\left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \mathbf{x}^i } \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3 du^4\left( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) {\delta_4}^i-\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_4 } \right) {\delta_3}^i+\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) {\delta_2}^i-\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) {\delta_1}^i} \right) \cdot \partial_i \mathbf{f} \\ &= du^1 du^2 du^3 du^4\left( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_4 \mathbf{f}-\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_4 } \right) \cdot \partial_3 \mathbf{f}+\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \partial_2 \mathbf{f}-\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \partial_1 \mathbf{f}} \right).\end{aligned} \hspace{\stretch{1}}(1.103)

Our boundary value surface element is therefore

\begin{aligned}d^3 \mathbf{x} = d\mathbf{x}_1 \wedge d\mathbf{x}_2 \wedge d\mathbf{x}_3- d\mathbf{x}_1 \wedge d\mathbf{x}_2 \wedge d\mathbf{x}_4+ d\mathbf{x}_1 \wedge d\mathbf{x}_3 \wedge d\mathbf{x}_4- d\mathbf{x}_2 \wedge d\mathbf{x}_3 \wedge d\mathbf{x}_4,\end{aligned} \hspace{\stretch{1}}(1.104)

where it is implied that this (and the dot products with $\mathbf{f}$) are evaluated on the boundaries of the integration ranges of the omitted index. This same boundary form can be used for vector, bivector and trivector variations of Stokes theorem.

# Duality and its relation to the pseudoscalar.

Looking to eq. 1.0.181 of lemma 6, and scaling the wedge product $\mathbf{a} \wedge \mathbf{b}$ by its absolute magnitude, we can express duality using that scaled bivector as a pseudoscalar for the plane that spans $\left\{ {\mathbf{a}, \mathbf{b}} \right\}$. Let’s introduce a subscript notation for such scaled blades

\begin{aligned}I_{\mathbf{a}\mathbf{b}} = \frac{\mathbf{a} \wedge \mathbf{b}}{\left\lvert {\mathbf{a} \wedge \mathbf{b}} \right\rvert}.\end{aligned} \hspace{\stretch{1}}(1.105)

This allows us to express the unit vector in the direction of $\mathbf{a}^{*}$ as

\begin{aligned}\widehat{\mathbf{a}^{*}} = \hat{\mathbf{b}} \frac{\left\lvert {\mathbf{a} \wedge \mathbf{b}} \right\rvert}{\mathbf{a} \wedge \mathbf{b}}= \hat{\mathbf{b}} \frac{1}{{I_{\mathbf{a} \mathbf{b}}}}.\end{aligned} \hspace{\stretch{1}}(1.0.106)

Following the pattern of eq. 1.0.181, it is clear how to express the dual vectors for higher dimensional subspaces. For example, with $\mathbf{a}^{*} = \left( { \mathbf{b} \wedge \mathbf{c} } \right) \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} }}$, the unit vector in the direction of $\mathbf{a}^{*}$ is

\begin{aligned}\widehat{\mathbf{a}^{*}} = I_{\mathbf{b} \mathbf{c}} \frac{1}{{I_{\mathbf{a} \mathbf{b} \mathbf{c}} }}.\end{aligned}

# Divergence theorem.

When the curl integral is a scalar result we are able to apply duality relationships to obtain the divergence theorem for the corresponding space. We will be able to show that a relationship of the following form holds

\begin{aligned}\int_V dV \boldsymbol{\nabla} \cdot \mathbf{f} = \int_{\partial V} dA_i \hat{\mathbf{n}}^i \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.0.107)

Here $\mathbf{f}$ is a vector, $\hat{\mathbf{n}}^i$ is normal to the boundary surface, and $dA_i$ is the area of this bounding surface element. We wish to quantify these more precisely, especially because the orientation of the normal vectors is metric dependent. Working a few specific examples will show the pattern nicely, but it is helpful to first consider some aspects of the general case.

First note that, for a scalar Stokes integral we are integrating the vector derivative curl of a blade $F \in \bigwedge^{k-1}$ over a k-parameter volume element. Because the dimension of the space matches the number of parameters, the projection of the gradient onto the tangent space is exactly that gradient

\begin{aligned}\int_V d^k \mathbf{x} \cdot (\boldsymbol{\partial} \wedge F) =\int_V d^k \mathbf{x} \cdot (\boldsymbol{\nabla} \wedge F).\end{aligned} \hspace{\stretch{1}}(1.0.108)

Multiplication of $F$ by the pseudoscalar will always produce a vector. With the introduction of such a dual vector, as in

\begin{aligned}F = I \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.0.108)

Stokes theorem takes the form

\begin{aligned}\int_V d^k \mathbf{x} \cdot {\left\langle{{\boldsymbol{\nabla} I \mathbf{f}}}\right\rangle}_{k}= \int_{\partial V} \left\langle{{ d^{k-1} \mathbf{x} I \mathbf{f}}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(1.0.108)

or

\begin{aligned}\int_V \left\langle{{ d^k \mathbf{x} \boldsymbol{\nabla} I \mathbf{f}}}\right\rangle= \int_{\partial V} \left( { d^{k-1} \mathbf{x} I} \right) \cdot \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.0.108)

where we will see that the vector $d^{k-1} \mathbf{x} I$ can roughly be characterized as a normal to the boundary surface. Using primes to indicate the scope of the action of the gradient, cyclic permutation within the scalar selection operator can be used to factor out the pseudoscalar

\begin{aligned}\int_V \left\langle{{ d^k \mathbf{x} \boldsymbol{\nabla} I \mathbf{f}}}\right\rangle &= \int_V \left\langle{{ \mathbf{f}' d^k \mathbf{x} \boldsymbol{\nabla}' I}}\right\rangle \\ &= \int_V {\left\langle{{ \mathbf{f}' d^k \mathbf{x} \boldsymbol{\nabla}'}}\right\rangle}_{k} I \\ &= \int_V(-1)^{k+1} d^k \mathbf{x} \left( { \boldsymbol{\nabla} \cdot \mathbf{f}} \right) I \\ &= (-1)^{k+1} I^2\int_V dV\left( { \boldsymbol{\nabla} \cdot \mathbf{f}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.108)

The second last step uses lemma 8, and the last writes $d^k \mathbf{x} = I^2 \left\lvert {d^k \mathbf{x}} \right\rvert = I^2 dV$, where we have assumed (without loss of generality) that $d^k \mathbf{x}$ has the same orientation as the pseudoscalar for the space. We also assume that the parameterization is non-degenerate over the integration volume (i.e. no $d\mathbf{x}_i = 0$), so the sign of this product cannot change.

Let’s now return to the normal vector $d^{k-1} \mathbf{x} I$. With $d^{k-1} u_i = du^1 du^2 \cdots du^{i-1} du^{i+1} \cdots du^k$ (the $i$ indexed differential omitted), and $I_{ab\cdots c} = (\mathbf{x}_a \wedge \mathbf{x}_b \wedge \cdots \wedge \mathbf{x}_c)/\left\lvert {\mathbf{x}_a \wedge \mathbf{x}_b \wedge \cdots \wedge \mathbf{x}_c} \right\rvert$, we have

\begin{aligned}\begin{aligned}d^{k-1} \mathbf{x} I&=d^{k-1} u_i \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \cdots \wedge \mathbf{x}_k} \right) \cdot \mathbf{x}^i I \\ &= I_{1 2 \cdots (k-1)} I \left\lvert {d\mathbf{x}_1 \wedge d\mathbf{x}_2 \wedge \cdots \wedge d\mathbf{x}_{k-1} } \right\rvert \\ &\quad -I_{1 \cdots (k-2) k} I \left\lvert {d\mathbf{x}_1 \wedge \cdots \wedge d\mathbf{x}_{k-2} \wedge d\mathbf{x}_k} \right\rvert+ \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.113)

We’ve seen in eq. 1.0.106 and lemma 7 that the dual of a vector $\mathbf{a}$ with respect to the unit pseudoscalar $I_{\mathbf{b} \cdots \mathbf{c} \mathbf{d}}$ in a subspace spanned by $\left\{ {\mathbf{a}, \cdots \mathbf{c}, \mathbf{d}} \right\}$ is

\begin{aligned}\widehat{\mathbf{a}^{*}} = I_{\mathbf{b} \cdots \mathbf{c} \mathbf{d}} \frac{1}{{ I_{\mathbf{a} \cdots \mathbf{c} \mathbf{d}} }},\end{aligned} \hspace{\stretch{1}}(1.0.114)

or

\begin{aligned}\widehat{\mathbf{a}^{*}} I_{\mathbf{a} \cdots \mathbf{c} \mathbf{d}}^2=I_{\mathbf{b} \cdots \mathbf{c} \mathbf{d}}.\end{aligned} \hspace{\stretch{1}}(1.0.115)

This allows us to write

\begin{aligned}d^{k-1} \mathbf{x} I= I^2 \sum_i \widehat{\mathbf{x}^i} d{A'}_i\end{aligned} \hspace{\stretch{1}}(1.0.116)

where $d{A'}_i = \pm dA_i$, and $dA_i$ is the area of the boundary area element normal to $\mathbf{x}^i$. Note that the $I^2$ term will now cancel cleanly from both sides of the divergence equation, taking both the metric and the orientation specific dependencies with it.

This leaves us with

\begin{aligned}\int_V dV \boldsymbol{\nabla} \cdot \mathbf{f} = (-1)^{k+1} \int_{\partial V} d{A'}_i \widehat{\mathbf{x}^i} \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.0.117)

To spell out the details, we have to be very careful with the signs. However, that is a job best left for specific examples.

## Example: 2D divergence theorem

Let’s start back at

\begin{aligned}\int_A \left\langle{{ d^2 \mathbf{x} \boldsymbol{\nabla} I \mathbf{f} }}\right\rangle = \int_{\partial A} \left( { d^1 \mathbf{x} I} \right) \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.118)

On the left our integral can be rewritten as

\begin{aligned}\int_A \left\langle{{ d^2 \mathbf{x} \boldsymbol{\nabla} I \mathbf{f} }}\right\rangle &= -\int_A \left\langle{{ d^2 \mathbf{x} I \boldsymbol{\nabla} \mathbf{f} }}\right\rangle \\ &= -\int_A d^2 \mathbf{x} I \left( { \boldsymbol{\nabla} \cdot \mathbf{f} } \right) \\ &= - I^2 \int_A dA \boldsymbol{\nabla} \cdot \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.119)

where $d^2 \mathbf{x} = I dA$ and we pick the pseudoscalar with the same orientation as the volume (area in this case) element $I = (\mathbf{x}_1 \wedge \mathbf{x}_2)/\left\lvert {\mathbf{x}_1 \wedge \mathbf{x}_2} \right\rvert$.

For the boundary form we have

\begin{aligned}d^1 \mathbf{x} = du^2 \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \mathbf{x}^1+ du^1 \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \mathbf{x}^2= -du^2 \mathbf{x}_2 +du^1 \mathbf{x}_1.\end{aligned} \hspace{\stretch{1}}(1.120)

The duality relations for the tangent space are

\begin{aligned}\begin{aligned}\mathbf{x}^2 &= \mathbf{x}_1 \frac{1}{{\mathbf{x}_2 \wedge \mathbf{x}_1}} \\ \mathbf{x}^1 &= \mathbf{x}_2 \frac{1}{{\mathbf{x}_1 \wedge \mathbf{x}_2}}\end{aligned},\end{aligned} \hspace{\stretch{1}}(1.0.121)

or

\begin{aligned}\begin{aligned}\widehat{\mathbf{x}^2} &= -\widehat{\mathbf{x}_1} \frac{1}{I} \\ \widehat{\mathbf{x}^1} &= \widehat{\mathbf{x}_2} \frac{1}{I}\end{aligned}.\end{aligned} \hspace{\stretch{1}}(1.0.122)

Back substitution into the line element gives

\begin{aligned}d^1 \mathbf{x} = -du^2 \left\lvert {\mathbf{x}_2} \right\rvert \widehat{\mathbf{x}_2}+du^1 \left\lvert {\mathbf{x}_1} \right\rvert \widehat{\mathbf{x}_1}=-du^2 \left\lvert {\mathbf{x}_2} \right\rvert \widehat{\mathbf{x}^1} I-du^1 \left\lvert {\mathbf{x}_1} \right\rvert \widehat{\mathbf{x}^2} I.\end{aligned} \hspace{\stretch{1}}(1.0.122)

Writing (no sum) $du^i \left\lvert {\mathbf{x}_i} \right\rvert = ds_i$, we have

\begin{aligned}d^1 \mathbf{x} I = -\left( { ds_2 \widehat{\mathbf{x}^1} +ds_1 \widehat{\mathbf{x}^2} } \right) I^2.\end{aligned} \hspace{\stretch{1}}(1.0.122)

This provides us with a divergence and normal relationship, with $-I^2$ terms on each side that can be canceled. Restoring explicit range evaluation, that is

\begin{aligned}\int_A dA \boldsymbol{\nabla} \cdot \mathbf{f}=\int_{\Delta u^2} {\left.{{ ds_2 \widehat{\mathbf{x}^1} \cdot \mathbf{f}}}\right\vert}_{{\Delta u^1}}+ \int_{\Delta u^1} {\left.{{ ds_1 \widehat{\mathbf{x}^2} \cdot \mathbf{f}}}\right\vert}_{{\Delta u^2}}=\int_{\Delta u^2} {\left.{{ ds_2 \widehat{\mathbf{x}^1} \cdot \mathbf{f}}}\right\vert}_{{u^1(1)}}-\int_{\Delta u^2} {\left.{{ ds_2 \widehat{\mathbf{x}^1} \cdot \mathbf{f}}}\right\vert}_{{u^1(0)}}+ \int_{\Delta u^1} {\left.{{ ds_1 \widehat{\mathbf{x}^2} \cdot \mathbf{f}}}\right\vert}_{{u^2(1)}}- \int_{\Delta u^1} {\left.{{ ds_1 \widehat{\mathbf{x}^2} \cdot \mathbf{f}}}\right\vert}_{{u^2(0)}}.\end{aligned} \hspace{\stretch{1}}(1.0.122)

Let’s consider this graphically for a Euclidean metric as illustrated in fig. 1.9.

Fig 1.9. Normals on area element

We see that

1. along $u^2(0)$ the outwards normal is $-\widehat{\mathbf{x}^2}$,
2. along $u^2(1)$ the outwards normal is $\widehat{\mathbf{x}^2}$,
3. along $u^1(0)$ the outwards normal is $-\widehat{\mathbf{x}^1}$, and
4. along $u^1(1)$ the outwards normal is $\widehat{\mathbf{x}^1}$.

Writing that outwards normal as $\hat{\mathbf{n}}$, we have

\begin{aligned}\int_A dA \boldsymbol{\nabla} \cdot \mathbf{f}= \mathop{\rlap{\ensuremath{\mkern3.5mu\circlearrowright}}\int} ds \hat{\mathbf{n}} \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.0.126)

Note that we can use the same algebraic notion of outward normal for non-Euclidean spaces, although we cannot expect the geometry to look anything like that of the figure.
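As an aside, eq. 1.0.126 is easy to verify numerically for a specific curvilinear parameterization. The sketch below (not part of the derivation, and with an arbitrarily chosen field $\mathbf{f} = (x^3, y^3)$ over the unit disk in polar coordinates) uses sympy to check that the area and boundary integrals agree; both come out to $3\pi/2$.

```python
# Numeric sanity check of the 2D divergence theorem in polar coordinates.
# The field f = (x^3, y^3) and the unit-disk region are arbitrary choices.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# f = (x^3, y^3), so div f = 3 x^2 + 3 y^2 = 3 r^2; area element dA = r dr dtheta
lhs = sp.integrate(3 * r**2 * r, (r, 0, 1), (theta, 0, 2 * sp.pi))

# On the boundary r = 1 the outward normal is (cos(theta), sin(theta)),
# so f . n = cos(theta)^4 + sin(theta)^4, with ds = dtheta
rhs = sp.integrate(sp.cos(theta)**4 + sp.sin(theta)**4, (theta, 0, 2 * sp.pi))

print(lhs, rhs)  # both equal 3*pi/2
```

Any other field and region could be substituted; the point is only that the two sides agree for a non-Cartesian parameterization.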

## Example: 3D divergence theorem

As with the 2D example, let’s start back with

\begin{aligned}\int_V \left\langle{{ d^3 \mathbf{x} \boldsymbol{\nabla} I \mathbf{f} }}\right\rangle = \int_{\partial V} \left( { d^2 \mathbf{x} I} \right) \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.127)

In a 3D space, the pseudoscalar commutes with all grades, so we have

\begin{aligned}\int_V \left\langle{{ d^3 \mathbf{x} \boldsymbol{\nabla} I \mathbf{f} }}\right\rangle=\int_V \left( { d^3 \mathbf{x} I } \right) \boldsymbol{\nabla} \cdot \mathbf{f}=I^2 \int_V dV \boldsymbol{\nabla} \cdot \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.128)

where $d^3 \mathbf{x} I = dV I^2$, and we have used a pseudoscalar with the same orientation as the volume element

\begin{aligned}\begin{aligned}I &= \widehat{ \mathbf{x}_{123} } \\ \mathbf{x}_{123} &= \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.129)

In the boundary integral our dual two form is

\begin{aligned}d^2 \mathbf{x} I= \left( { du^1 du^2 \mathbf{x}_1 \wedge \mathbf{x}_2+du^3 du^1 \mathbf{x}_3 \wedge \mathbf{x}_1+du^2 du^3 \mathbf{x}_2 \wedge \mathbf{x}_3} \right) I= \left( { dA_{3} \widehat{ \mathbf{x}_{12} } \frac{1}{I}+dA_{2} \widehat{ \mathbf{x}_{31} } \frac{1}{I}+dA_{1} \widehat{ \mathbf{x}_{23} } \frac{1}{I}} \right) I^2,\end{aligned} \hspace{\stretch{1}}(1.0.129)

where $\mathbf{x}_{ij} = \mathbf{x}_i \wedge \mathbf{x}_j$, and

\begin{aligned}\begin{aligned}dA_1 &= \left\lvert {d\mathbf{x}_2 \wedge d\mathbf{x}_3} \right\rvert \\ dA_2 &= \left\lvert {d\mathbf{x}_3 \wedge d\mathbf{x}_1} \right\rvert \\ dA_3 &= \left\lvert {d\mathbf{x}_1 \wedge d\mathbf{x}_2} \right\rvert.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.131)

Observe that we can do a cyclic permutation of a 3 blade without any change of sign, for example

\begin{aligned}\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 =-\mathbf{x}_2 \wedge \mathbf{x}_1 \wedge \mathbf{x}_3 =\mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_1.\end{aligned} \hspace{\stretch{1}}(1.0.132)

Because of this we can write the dual two form as we expressed the normals in lemma 7

\begin{aligned}d^2 \mathbf{x} I = \left( { dA_1 \widehat{\mathbf{x}_{23}} \frac{1}{{\widehat{\mathbf{x}_{123}}}} + dA_2 \widehat{\mathbf{x}_{31}} \frac{1}{{\widehat{\mathbf{x}_{231}}}} + dA_3 \widehat{\mathbf{x}_{12}} \frac{1}{{\widehat{\mathbf{x}_{312}}}}} \right) I^2=\left( { dA_1 \widehat{\mathbf{x}^1}+dA_2 \widehat{\mathbf{x}^2}+dA_3 \widehat{\mathbf{x}^3} } \right) I^2.\end{aligned} \hspace{\stretch{1}}(1.0.132)

We can now state the 3D divergence theorem, canceling out the metric and orientation dependent term $I^2$ on both sides

\begin{aligned}\int_V dV \boldsymbol{\nabla} \cdot \mathbf{f}=\int dA \hat{\mathbf{n}} \cdot \mathbf{f},\end{aligned} \hspace{\stretch{1}}(1.0.134)

where (sums implied)

\begin{aligned}dA \hat{\mathbf{n}} = dA_i \widehat{\mathbf{x}^i},\end{aligned} \hspace{\stretch{1}}(1.0.135)

and

\begin{aligned}\begin{aligned}{\left.{{\hat{\mathbf{n}}}}\right\vert}_{{u^i = u^i(1)}} &= \widehat{\mathbf{x}^i} \\ {\left.{{\hat{\mathbf{n}}}}\right\vert}_{{u^i = u^i(0)}} &= -\widehat{\mathbf{x}^i}\end{aligned}.\end{aligned} \hspace{\stretch{1}}(1.0.136)

The outwards normals at the upper integration ranges of a three parameter surface are depicted in fig. 1.10.

Fig 1.10. Outwards normals on volume at upper integration ranges.

This sign alternation originates with the two form elements $\left( {d\mathbf{x}_i \wedge d\mathbf{x}_j} \right) \cdot F$ from the Stokes boundary integral, which were explicitly evaluated at the endpoints of the integral. That is, for $k \ne i,j$,

\begin{aligned}\int_{\partial V} \left( { d\mathbf{x}_i \wedge d\mathbf{x}_j } \right) \cdot F\equiv\int_{\Delta u^i} \int_{\Delta u^j} \left( {{\left.{{\left( { \left( { d\mathbf{x}_i \wedge d\mathbf{x}_j } \right) \cdot F } \right)}}\right\vert}_{{u^k = u^k(1)}}-{\left.{{\left( { \left( { d\mathbf{x}_i \wedge d\mathbf{x}_j } \right) \cdot F } \right)}}\right\vert}_{{u^k = u^k(0)}}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.137)

In the context of the divergence theorem, this means that we are implicitly requiring the dot products $\widehat{\mathbf{x}^k} \cdot \mathbf{f}$ to be evaluated specifically at the end points of the integration where $u^k = u^k(1), u^k = u^k(0)$, accounting for the alternation of sign required to describe the normals as uniformly outwards.
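The same sort of numeric spot check can be made for eq. 1.0.134. As a sketch (again not part of the derivation), take a spherical parameterization of the unit ball and the arbitrarily chosen field $\mathbf{f} = (x, y, z)$, for which $\boldsymbol{\nabla} \cdot \mathbf{f} = 3$; both sides evaluate to $4\pi$.

```python
# Numeric sanity check of the 3D divergence theorem in spherical coordinates.
# The field f = (x, y, z) over the unit ball is an arbitrary choice.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# div f = 3 everywhere; volume element dV = r^2 sin(theta) dr dtheta dphi
lhs = sp.integrate(3 * r**2 * sp.sin(th),
                   (r, 0, 1), (th, 0, sp.pi), (ph, 0, 2 * sp.pi))

# On the unit sphere the outward normal is (x, y, z) itself, so f . n = 1,
# with area element dA = sin(theta) dtheta dphi
rhs = sp.integrate(sp.sin(th), (th, 0, sp.pi), (ph, 0, 2 * sp.pi))

print(lhs, rhs)  # both equal 4*pi
```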

## Example: 4D divergence theorem

Applying Stokes theorem to a trivector $T = I \mathbf{f}$ in the 4D case we find

\begin{aligned}-I^2 \int_V d^4 x \boldsymbol{\nabla} \cdot \mathbf{f} = \int_{\partial V} \left( { d^3 \mathbf{x} I} \right) \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.138)

Here the pseudoscalar has been picked to have the same orientation as the hypervolume element $d^4 \mathbf{x} = I d^4 x$. Writing $\mathbf{x}_{ij \cdots k} = \mathbf{x}_i \wedge \mathbf{x}_j \wedge \cdots \wedge \mathbf{x}_k$ the dual of the three form is

\begin{aligned}d^3 \mathbf{x} I &= \left( { du^1 du^2 du^3 \mathbf{x}_{123}-du^1 du^2 du^4 \mathbf{x}_{124}+du^1 du^3 du^4 \mathbf{x}_{134}-du^2 du^3 du^4 \mathbf{x}_{234}} \right) I \\ &= \left( { dA^{123} \widehat{ \mathbf{x}_{123} } -dA^{124} \widehat{ \mathbf{x}_{124} } +dA^{134} \widehat{ \mathbf{x}_{134} } -dA^{234} \widehat{ \mathbf{x}_{234} }} \right) I \\ &= \left( { dA^{123} \widehat{ \mathbf{x}_{123} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}} -dA^{124} \widehat{ \mathbf{x}_{124} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}} +dA^{134} \widehat{ \mathbf{x}_{134} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}} -dA^{234} \widehat{ \mathbf{x}_{234} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}}} \right) I^2 \\ &= -\left( { dA^{123} \widehat{ \mathbf{x}_{123} } \frac{1}{{\widehat{\mathbf{x}_{4123} }}} +dA^{124} \widehat{ \mathbf{x}_{124} } \frac{1}{{\widehat{\mathbf{x}_{3412} }}} +dA^{134} \widehat{ \mathbf{x}_{134} } \frac{1}{{\widehat{\mathbf{x}_{2341} }}} +dA^{234} \widehat{ \mathbf{x}_{234} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}}} \right) I^2 \\ &= -\left( { dA^{123} \widehat{ \mathbf{x}_{123} } \frac{1}{{\widehat{\mathbf{x}_{4123} }}} +dA^{124} \widehat{ \mathbf{x}_{412} } \frac{1}{{\widehat{\mathbf{x}_{3412} }}} +dA^{134} \widehat{ \mathbf{x}_{341} } \frac{1}{{\widehat{\mathbf{x}_{2341} }}} +dA^{234} \widehat{ \mathbf{x}_{234} } \frac{1}{{\widehat{\mathbf{x}_{1234} }}}} \right) I^2 \\ &= -\left( { dA^{123} \widehat{ \mathbf{x}^{4} } +dA^{124} \widehat{ \mathbf{x}^{3} } +dA^{134} \widehat{ \mathbf{x}^{2} } +dA^{234} \widehat{ \mathbf{x}^{1} } } \right) I^2\end{aligned} \hspace{\stretch{1}}(1.139)

Here, we’ve written

\begin{aligned}dA^{ijk} = \left\lvert { d\mathbf{x}_i \wedge d\mathbf{x}_j \wedge d\mathbf{x}_k } \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.140)

Observe that the dual representation nicely removes the alternation of sign that we had in the Stokes theorem boundary integral, since each alternation of the wedged vectors in the pseudoscalar changes the sign once.
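This permutation sign bookkeeping can be checked mechanically by counting inversions: a cyclic rotation of a $k$-blade of distinct vectors is $k-1$ transpositions, so it carries the sign $(-1)^{k-1}$, even for a 3-blade as in eq. 1.0.132 and odd for the 4-blade reorderings used above. A small side sketch (not in the original derivation):

```python
# Inversion-count sign of a permutation. A wedge product of distinct
# vectors changes sign under each transposition, so a cyclic rotation of
# a k-blade carries the sign (-1)^(k-1).
def perm_sign(p):
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

# x2 ^ x3 ^ x1 = + x1 ^ x2 ^ x3 (index order (2,3,1), zero-based (1,2,0))
assert perm_sign((1, 2, 0)) == 1
# x2 ^ x3 ^ x4 ^ x1 = - x1 ^ x2 ^ x3 ^ x4 (index order (2,3,4,1), zero-based (1,2,3,0))
assert perm_sign((1, 2, 3, 0)) == -1
```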

As before, we define the outwards normals as $\hat{\mathbf{n}} = \pm \widehat{\mathbf{x}^i}$ on the upper and lower integration ranges respectively. The scalar area elements on these faces can be written in a dual form

\begin{aligned}\begin{aligned} dA_4 &= dA^{123} \\ dA_3 &= dA^{124} \\ dA_2 &= dA^{134} \\ dA_1 &= dA^{234} \end{aligned},\end{aligned} \hspace{\stretch{1}}(1.0.141)

so that the 4D divergence theorem looks just like the 2D and 3D cases

\begin{aligned}\int_V d^4 x \boldsymbol{\nabla} \cdot \mathbf{f} = \int_{\partial V} d^3 x \hat{\mathbf{n}} \cdot \mathbf{f}.\end{aligned} \hspace{\stretch{1}}(1.0.142)

Here we define the volume scaled normal as

\begin{aligned}d^3 x \hat{\mathbf{n}} = dA_i \widehat{\mathbf{x}^i}.\end{aligned} \hspace{\stretch{1}}(1.0.143)

As before, we have made use of the implicit fact that the three form (and its dot product with $\mathbf{f}$) was evaluated on the boundaries of the integration region, with a toggling of sign on the lower limit of that evaluation that is now reflected in what we have defined as the outwards normal.

We also obtain explicit instructions from this formalism how to compute the “outwards” normal for this surface in a 4D space (unit scaling of the dual basis elements), something that we cannot compute using any sort of geometrical intuition. For free we’ve obtained a result that applies to both Euclidean and Minkowski (or other non-Euclidean) spaces.
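For the Euclidean case, where the reciprocal frame normals reduce to the ordinary unit normals, eq. 1.0.142 can be verified directly. The sketch below (an arbitrary choice of region and field, not from the original derivation) uses the unit hypercube $[0,1]^4$ with $\mathbf{f}(\mathbf{x}) = \mathbf{x}$, so that $\boldsymbol{\nabla} \cdot \mathbf{f} = 4$:

```python
# Direct check of the 4D divergence theorem on the Euclidean unit hypercube
# [0,1]^4 with f(x) = x, where div f = 4.
import sympy as sp

u = sp.symbols('u1:5')  # u1, u2, u3, u4

# volume integral of div f = 4 over the hypercube
lhs = sp.integrate(4, *[(ui, 0, 1) for ui in u])

# boundary: on the face u_i = 1 the outward normal is +e_i and f . n = 1;
# on the face u_i = 0 the normal is -e_i and f . n = -u_i = 0,
# so only the four upper faces contribute
rhs = sum(sp.integrate(1, *[(uj, 0, 1) for uj in u if uj != ui]) for ui in u)

print(lhs, rhs)  # both equal 4
```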

# Volume integral coordinate representations

It may be useful to formulate the curl integrals in tensor form. For a vector $\mathbf{f}$, bivector $B$, and trivector $T$, the coordinate representations of these differential forms (derived in the appendix below) are

\begin{aligned}d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=- d^2 u \epsilon^{ a b } \partial_a f_b\end{aligned} \hspace{\stretch{1}}(1.0.144a)

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=-d^3 u \epsilon^{a b c} \mathbf{x}_a \partial_b f_{c}\end{aligned} \hspace{\stretch{1}}(1.0.144b)

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=-\frac{1}{2} d^4 u \epsilon^{a b c d} \mathbf{x}_a \wedge \mathbf{x}_b \partial_{c} f_{d}\end{aligned} \hspace{\stretch{1}}(1.0.144c)

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)=-\frac{1}{2}d^3 u \epsilon^{a b c} \partial_a B_{b c}\end{aligned} \hspace{\stretch{1}}(1.0.144d)

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)=-\frac{1}{2} d^4 u \epsilon^{a b c d} \mathbf{x}_a \partial_b B_{cd}\end{aligned} \hspace{\stretch{1}}(1.0.144e)

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge T } \right)=-d^4 u\left( {\partial_4 T_{123}-\partial_3 T_{124}+\partial_2 T_{134}-\partial_1 T_{234}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.144f)

Here the bivector $B$ and trivector $T$ are expressed in terms of their curvilinear components on the tangent space

\begin{aligned}B = \frac{1}{2} \mathbf{x}^i \wedge \mathbf{x}^j B_{ij} + B_\perp\end{aligned} \hspace{\stretch{1}}(1.0.145a)

\begin{aligned}T = \frac{1}{{3!}} \mathbf{x}^i \wedge \mathbf{x}^j \wedge \mathbf{x}^k T_{ijk} + T_\perp,\end{aligned} \hspace{\stretch{1}}(1.0.145b)

where

\begin{aligned}B_{ij} = \mathbf{x}_j \cdot \left( { \mathbf{x}_i \cdot B } \right) = -B_{ji}.\end{aligned} \hspace{\stretch{1}}(1.0.146a)

\begin{aligned}T_{ijk} = \mathbf{x}_k \cdot \left( { \mathbf{x}_j \cdot \left( { \mathbf{x}_i \cdot T } \right)} \right).\end{aligned} \hspace{\stretch{1}}(1.0.146b)

The trivector components are also completely antisymmetric, changing sign with any interchange of indices.

Note that eq. 1.0.144d and eq. 1.0.144f appear quite different on the surface, but both have the same structure. This can be seen by writing the former as

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)=-d^3 u\left( { \partial_1 B_{2 3} + \partial_2 B_{3 1} + \partial_3 B_{1 2}} \right)=-d^3 u\left( { \partial_3 B_{1 2} - \partial_2 B_{1 3} + \partial_1 B_{2 3}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.146b)

In both of these we have an alternation of sign, in which the tensor indices skip one of the volume element indices in sequence. We’ve seen in the 4D divergence theorem that this alternation of sign can be related to a duality transformation.
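That sign alternation is easy to double check numerically. The sketch below, with randomly generated values standing in for the partials $\partial_a B_{bc}$ (antisymmetrized in $b, c$), confirms that $-\frac{1}{2}\epsilon^{abc}\partial_a B_{bc}$ equals the three term sum $\partial_2 B_{13} + \partial_3 B_{21} + \partial_1 B_{32}$ written above:

```python
# Numeric check that -1/2 eps^{abc} d_a B_{bc} reproduces the three-term
# sum d_2 B_{13} + d_3 B_{21} + d_1 B_{32}, for arbitrary antisymmetric B.
# dB[a][b][c] stands in for the values of partial_{a+1} B_{(b+1)(c+1)}.
import random

def eps(*idx):
    """Levi-Civita symbol via inversion counting (0 on repeated indices)."""
    if len(set(idx)) != len(idx):
        return 0
    sign, idx = 1, list(idx)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

random.seed(1)
# random "derivatives", antisymmetrized in the B indices: dB[a][b][c] = -dB[a][c][b]
raw = [[[random.random() for _ in range(3)] for _ in range(3)] for _ in range(3)]
dB = [[[raw[a][b][c] - raw[a][c][b] for c in range(3)]
       for b in range(3)] for a in range(3)]

lhs = -sum(eps(a, b, c) * dB[a][b][c]
           for a in range(3) for b in range(3) for c in range(3)) / 2

# d_2 B_{13} + d_3 B_{21} + d_1 B_{32}, with zero-based indices
rhs = dB[1][0][2] + dB[2][1][0] + dB[0][2][1]

assert abs(lhs - rhs) < 1e-12
```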

In integral form (with no sum over the indices $i$ in the $du^i$ terms), these are

\begin{aligned}\int d^2 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=- \epsilon^{ a b } \int {\left.{{du^b f_b}}\right\vert}_{{\Delta u^a}}\end{aligned} \hspace{\stretch{1}}(1.0.148a)

\begin{aligned}\int d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=-\epsilon^{a b c} \int du^a du^c{\left.{{\mathbf{x}_a f_{c}}}\right\vert}_{{\Delta u^b}}\end{aligned} \hspace{\stretch{1}}(1.0.148b)

\begin{aligned}\int d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)=-\frac{1}{2} \epsilon^{a b c d} \int du^a du^b du^d{\left.{{\mathbf{x}_a \wedge \mathbf{x}_b f_{d}}}\right\vert}_{{\Delta u^c}}\end{aligned} \hspace{\stretch{1}}(1.0.148c)

\begin{aligned}\int d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)=-\frac{1}{2}\epsilon^{a b c} \int du^b du^c{\left.{{B_{b c}}}\right\vert}_{{\Delta u^a}}\end{aligned} \hspace{\stretch{1}}(1.0.148d)

\begin{aligned}\int d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)=-\frac{1}{2} \epsilon^{a b c d} \int du^a du^c du^d{\left.{{\mathbf{x}_a B_{cd}}}\right\vert}_{{\Delta u^b}}\end{aligned} \hspace{\stretch{1}}(1.0.148e)

\begin{aligned}\int d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge T } \right)=-\int \left( {du^1 du^2 du^3 {\left.{{T_{123}}}\right\vert}_{{\Delta u^4}}-du^1 du^2 du^4 {\left.{{T_{124}}}\right\vert}_{{\Delta u^3}}+du^1 du^3 du^4 {\left.{{T_{134}}}\right\vert}_{{\Delta u^2}}-du^2 du^3 du^4 {\left.{{T_{234}}}\right\vert}_{{\Delta u^1}}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.148f)

Of these, I suspect that only eq. 1.0.148a and eq. 1.0.148d are of use.

# Final remarks

Because we have used curvilinear coordinates from the get go, we have arrived naturally at a formulation that works for both Euclidean and non-Euclidean geometries, and have demonstrated that Stokes (and the divergence theorem) holds regardless of the geometry or the parameterization. We also know explicitly how to formulate both theorems for any parameterization that we choose, something much more valuable than knowledge that this is possible.

For the divergence theorem we have introduced the concept of outwards normal (for example in 3D, eq. 1.0.136), which still holds for non-Euclidean geometries. We may not be able to form intuitive geometrical interpretations for these normals, but do have an algebraic description of them.

# Appendix

## Question: Expand volume elements in coordinates

Show that the coordinate representation for the volume element dotted with the curl can be represented as a sum of antisymmetric terms. That is

• (a)Prove eq. 1.0.144a
• (b)Prove eq. 1.0.144b
• (c)Prove eq. 1.0.144c
• (d)Prove eq. 1.0.144d
• (e)Prove eq. 1.0.144e
• (f)Prove eq. 1.0.144f

### (a) Two parameter volume, curl of vector

\begin{aligned}d^2 \mathbf{x} \cdot \left( \boldsymbol{\partial} \wedge \mathbf{f} \right) &= d^2 u\Bigl( { \left( \mathbf{x}_1 \wedge \mathbf{x}_2 \right) \cdot \mathbf{x}^i } \Bigr) \cdot \partial_i \mathbf{f} \\ &= d^2 u \left( \mathbf{x}_1 \cdot \partial_2 \mathbf{f}-\mathbf{x}_2 \cdot \partial_1 \mathbf{f} \right) \\ &= d^2 u\left( \partial_2 f_1-\partial_1 f_2 \right) \\ &= - d^2 u \epsilon^{ab} \partial_{a} f_{b}. \qquad\square\end{aligned} \hspace{\stretch{1}}(1.149)

### (b) Three parameter volume, curl of vector

\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right) &= d^3 u\Bigl( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \mathbf{x}^i } \Bigr) \cdot \partial_i \mathbf{f} \\ &= d^3 u\Bigl( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \partial_3 \mathbf{f}+\left( { \mathbf{x}_3 \wedge \mathbf{x}_1 } \right) \cdot \partial_2 \mathbf{f}+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_1 \mathbf{f}} \Bigr) \\ &= d^3 u\Bigl( {\left( { \mathbf{x}_1 \partial_3 \mathbf{f} \cdot \mathbf{x}_2 -\mathbf{x}_2 \partial_3 \mathbf{f} \cdot \mathbf{x}_1 } \right)+\left( { \mathbf{x}_3 \partial_2 \mathbf{f} \cdot \mathbf{x}_1 -\mathbf{x}_1 \partial_2 \mathbf{f} \cdot \mathbf{x}_3 } \right)+\left( { \mathbf{x}_2 \partial_1 \mathbf{f} \cdot \mathbf{x}_3 -\mathbf{x}_3 \partial_1 \mathbf{f} \cdot \mathbf{x}_2 } \right)} \Bigr) \\ &= d^3 u\Bigl( {\mathbf{x}_1 \left( { -\partial_2 \mathbf{f} \cdot \mathbf{x}_3 + \partial_3 \mathbf{f} \cdot \mathbf{x}_2 } \right)+\mathbf{x}_2 \left( { -\partial_3 \mathbf{f} \cdot \mathbf{x}_1 + \partial_1 \mathbf{f} \cdot \mathbf{x}_3 } \right)+\mathbf{x}_3 \left( { -\partial_1 \mathbf{f} \cdot \mathbf{x}_2 + \partial_2 \mathbf{f} \cdot \mathbf{x}_1 } \right)} \Bigr) \\ &= d^3 u\Bigl( {\mathbf{x}_1 \left( { -\partial_2 f_3 + \partial_3 f_2 } \right)+\mathbf{x}_2 \left( { -\partial_3 f_1 + \partial_1 f_3 } \right)+\mathbf{x}_3 \left( { -\partial_1 f_2 + \partial_2 f_1 } \right)} \Bigr) \\ &= - d^3 u \epsilon^{abc} \partial_b f_c. \qquad\square\end{aligned} \hspace{\stretch{1}}(1.150)

### (c) Four parameter volume, curl of vector

\begin{aligned}\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge \mathbf{f} } \right)&=d^4 u\Bigl( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \mathbf{x}^i } \Bigr) \cdot \partial_i \mathbf{f} \\ &=d^4 u\Bigl( {\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_4 \mathbf{f}-\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_4 } \right) \cdot \partial_3 \mathbf{f}+\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \partial_2 \mathbf{f}-\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \cdot \partial_1 \mathbf{f}} \Bigr) \\ &=d^4 u\Bigl( { \\ &\quad\quad \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \mathbf{x}_3 \cdot \partial_4 \mathbf{f}-\left( { \mathbf{x}_1 \wedge \mathbf{x}_3 } \right) \mathbf{x}_2 \cdot \partial_4 \mathbf{f}+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \mathbf{x}_1 \cdot \partial_4 \mathbf{f} \\ &\quad-\left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \mathbf{x}_4 \cdot \partial_3 \mathbf{f}+\left( { \mathbf{x}_1 \wedge \mathbf{x}_4 } \right) \mathbf{x}_2 \cdot \partial_3 \mathbf{f}-\left( { \mathbf{x}_2 \wedge \mathbf{x}_4 } \right) \mathbf{x}_1 \cdot \partial_3 \mathbf{f} \\ &\quad+ \left( { \mathbf{x}_1 \wedge \mathbf{x}_3 } \right) \mathbf{x}_4 \cdot \partial_2 \mathbf{f}-\left( { \mathbf{x}_1 \wedge \mathbf{x}_4 } \right) \mathbf{x}_3 \cdot \partial_2 \mathbf{f}+\left( { \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \mathbf{x}_1 \cdot \partial_2 \mathbf{f} \\ &\quad-\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \mathbf{x}_4 \cdot \partial_1 \mathbf{f}+\left( { \mathbf{x}_2 \wedge \mathbf{x}_4 } \right) \mathbf{x}_3 \cdot \partial_1 \mathbf{f}-\left( { \mathbf{x}_3 \wedge \mathbf{x}_4 } \right) \mathbf{x}_2 \cdot \partial_1 \mathbf{f} \\ &\qquad} \Bigr) \\ &=d^4 u\Bigl( {\mathbf{x}_1 \wedge \mathbf{x}_2 \partial_{[4} f_{3]}+\mathbf{x}_1 \wedge 
\mathbf{x}_3 \partial_{[2} f_{4]}+\mathbf{x}_1 \wedge \mathbf{x}_4 \partial_{[3} f_{2]}+\mathbf{x}_2 \wedge \mathbf{x}_3 \partial_{[4} f_{1]}+\mathbf{x}_2 \wedge \mathbf{x}_4 \partial_{[1} f_{3]}+\mathbf{x}_3 \wedge \mathbf{x}_4 \partial_{[2} f_{1]}} \Bigr) \\ &=- \frac{1}{2} d^4 u \epsilon^{abcd} \mathbf{x}_a \wedge \mathbf{x}_b \partial_{c} f_{d}. \qquad\square\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.151)

### (d) Three parameter volume, curl of bivector

\begin{aligned}\begin{aligned}d^3 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge B } \right)&=d^3 u\Bigl( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \mathbf{x}^i } \Bigr) \cdot \partial_i B \\ &=d^3 u\Bigl( { \left( { \mathbf{x}_1 \wedge \mathbf{x}_2 } \right) \cdot \partial_3 B+\left( { \mathbf{x}_3 \wedge \mathbf{x}_1 } \right) \cdot \partial_2 B+\left( { \mathbf{x}_2 \wedge \mathbf{x}_3 } \right) \cdot \partial_1 B} \Bigr) \\ &=\frac{1}{2} d^3 u\Bigl( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \partial_3 B } \right) -\mathbf{x}_2 \cdot \left( { \mathbf{x}_1 \cdot \partial_3 B } \right) \\ &\qquad +\mathbf{x}_3 \cdot \left( { \mathbf{x}_1 \cdot \partial_2 B } \right) -\mathbf{x}_1 \cdot \left( { \mathbf{x}_3 \cdot \partial_2 B } \right) \\ &\qquad +\mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot \partial_1 B } \right) -\mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot \partial_1 B } \right)} \Bigr) \\ &=\frac{1}{2} d^3 u\Bigl( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \partial_3 B - \mathbf{x}_3 \cdot \partial_2 B } \right) \\ &\qquad +\mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot \partial_1 B - \mathbf{x}_1 \cdot \partial_3 B } \right) \\ &\qquad +\mathbf{x}_3 \cdot \left( { \mathbf{x}_1 \cdot \partial_2 B - \mathbf{x}_2 \cdot \partial_1 B } \right)} \Bigr) \\ &=\frac{1}{2} d^3 u\Bigl( {\mathbf{x}_1 \cdot \left( { \partial_3 \left( { \mathbf{x}_2 \cdot B} \right) - \partial_2 \left( { \mathbf{x}_3 \cdot B} \right) } \right) \\ &\qquad +\mathbf{x}_2 \cdot \left( { \partial_1 \left( { \mathbf{x}_3 \cdot B} \right) - \partial_3 \left( { \mathbf{x}_1 \cdot B} \right) } \right) \\ &\qquad +\mathbf{x}_3 \cdot \left( { \partial_2 \left( { \mathbf{x}_1 \cdot B} \right) - \partial_1 \left( { \mathbf{x}_2 \cdot B} \right) } \right)} \Bigr) \\ &=\frac{1}{2} d^3 u\Bigl( {\partial_2 \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_1 \cdot B} \right) } \right) - \partial_3 \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_1 
\cdot B} \right) } \right) \\ &\qquad+ \partial_3 \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot B} \right) } \right) - \partial_1 \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot B} \right) } \right) \\ &\qquad+ \partial_1 \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot B} \right) } \right) - \partial_2 \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_3 \cdot B} \right) } \right)} \Bigr) \\ &=\frac{1}{2} d^3 u\Bigl( {\partial_2 B_{13} - \partial_3 B_{12}+\partial_3 B_{21} - \partial_1 B_{23}+\partial_1 B_{32} - \partial_2 B_{31}} \Bigr) \\ &=d^3 u\Bigl( {\partial_2 B_{13}+\partial_3 B_{21}+\partial_1 B_{32}} \Bigr) \\ &= - \frac{1}{2} d^3 u \epsilon^{abc} \partial_a B_{bc}. \qquad\square\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.152)

### (e) Four parameter volume, curl of bivector

To start, we require lemma 3. For convenience let’s also write our wedge products as single indexed quantities, as in $\mathbf{x}_{abc}$ for $\mathbf{x}_a \wedge \mathbf{x}_b \wedge \mathbf{x}_c$. The expansion is

\begin{aligned}\begin{aligned}d^4 \mathbf{x} \cdot \left( \boldsymbol{\partial} \wedge B \right) &= d^4 u \left( \mathbf{x}_{1234} \cdot \mathbf{x}^i \right) \cdot \partial_i B \\ &= d^4 u\left( \mathbf{x}_{123} \cdot \partial_4 B - \mathbf{x}_{124} \cdot \partial_3 B + \mathbf{x}_{134} \cdot \partial_2 B - \mathbf{x}_{234} \cdot \partial_1 B \right) \\ &= d^4 u \Bigl( \mathbf{x}_1 \left( \mathbf{x}_{23} \cdot \partial_4 B \right) + \mathbf{x}_2 \left( \mathbf{x}_{31} \cdot \partial_4 B \right) + \mathbf{x}_3 \left( \mathbf{x}_{12} \cdot \partial_4 B \right) \\ &\qquad - \mathbf{x}_1 \left( \mathbf{x}_{24} \cdot \partial_3 B \right) - \mathbf{x}_2 \left( \mathbf{x}_{41} \cdot \partial_3 B \right) - \mathbf{x}_4 \left( \mathbf{x}_{12} \cdot \partial_3 B \right) \\ &\qquad + \mathbf{x}_1 \left( \mathbf{x}_{34} \cdot \partial_2 B \right) + \mathbf{x}_3 \left( \mathbf{x}_{41} \cdot \partial_2 B \right) + \mathbf{x}_4 \left( \mathbf{x}_{13} \cdot \partial_2 B \right) \\ &\qquad - \mathbf{x}_2 \left( \mathbf{x}_{34} \cdot \partial_1 B \right) - \mathbf{x}_3 \left( \mathbf{x}_{42} \cdot \partial_1 B \right) - \mathbf{x}_4 \left( \mathbf{x}_{23} \cdot \partial_1 B \right) \Bigr) \\ &= d^4 u \Bigl( \mathbf{x}_1 \left( \mathbf{x}_{23} \cdot \partial_4 B + \mathbf{x}_{42} \cdot \partial_3 B + \mathbf{x}_{34} \cdot \partial_2 B \right) \\ &\qquad + \mathbf{x}_2 \left( \mathbf{x}_{31} \cdot \partial_4 B + \mathbf{x}_{14} \cdot \partial_3 B + \mathbf{x}_{43} \cdot \partial_1 B \right) \\ &\qquad + \mathbf{x}_3 \left( \mathbf{x}_{12} \cdot \partial_4 B + \mathbf{x}_{41} \cdot \partial_2 B + \mathbf{x}_{24} \cdot \partial_1 B \right) \\ &\qquad + \mathbf{x}_4 \left( \mathbf{x}_{21} \cdot \partial_3 B + \mathbf{x}_{13} \cdot \partial_2 B + \mathbf{x}_{32} \cdot \partial_1 B \right) \Bigr) \\ &= - \frac{1}{2} d^4 u \epsilon^{a b c d} \mathbf{x}_a \partial_b B_{c d}. \qquad\square\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.153)

This last step uses an intermediate result from the eq. 1.0.152 expansion above, since each of the four terms has the same structure we have previously observed.
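A spot check of that final contraction is also straightforward. For arbitrary antisymmetric values standing in for $\partial_b B_{cd}$, the $\mathbf{x}_2$ component of $-\frac{1}{2} \epsilon^{abcd} \mathbf{x}_a \partial_b B_{cd}$ should equal $\partial_1 B_{34} + \partial_3 B_{41} + \partial_4 B_{13}$, that is, $\mathbf{x}_{31} \cdot \partial_4 B + \mathbf{x}_{14} \cdot \partial_3 B + \mathbf{x}_{43} \cdot \partial_1 B$:

```python
# Check the x_2 coefficient of -1/2 eps^{abcd} x_a d_b B_{cd} against
# d_1 B_{34} + d_3 B_{41} + d_4 B_{13}, for arbitrary antisymmetric B.
import random

def eps(*idx):
    """Levi-Civita symbol via inversion counting (0 on repeated indices)."""
    if len(set(idx)) != len(idx):
        return 0
    sign, idx = 1, list(idx)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

random.seed(2)
raw = [[[random.random() for _ in range(4)] for _ in range(4)] for _ in range(4)]
# dB[b][c][d] plays the role of partial_{b+1} B_{(c+1)(d+1)}, antisymmetric in (c, d)
dB = [[[raw[b][c][d] - raw[b][d][c] for d in range(4)]
       for c in range(4)] for b in range(4)]

a = 1  # the x_2 component, zero-based
lhs = -sum(eps(a, b, c, d) * dB[b][c][d]
           for b in range(4) for c in range(4) for d in range(4)) / 2

# d_1 B_{34} + d_3 B_{41} + d_4 B_{13}, zero-based
rhs = dB[0][2][3] + dB[2][3][0] + dB[3][0][2]

assert abs(lhs - rhs) < 1e-12
```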

### (f) Four parameter volume, curl of trivector

Using the $\mathbf{x}_{ijk}$ shorthand again, the initial expansion gives

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge T } \right)=d^4 u\left( {\mathbf{x}_{123} \cdot \partial_4 T - \mathbf{x}_{124} \cdot \partial_3 T + \mathbf{x}_{134} \cdot \partial_2 T - \mathbf{x}_{234} \cdot \partial_1 T} \right).\end{aligned} \hspace{\stretch{1}}(1.0.153)

Applying lemma 4 to expand the inner products within the braces we have

\begin{aligned}\begin{aligned}\mathbf{x}_{123} \cdot \partial_4 T-&\mathbf{x}_{124} \cdot \partial_3 T+\mathbf{x}_{134} \cdot \partial_2 T-\mathbf{x}_{234} \cdot \partial_1 T \\ &=\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot \partial_4 T } \right) } \right)-\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_4 \cdot \partial_3 T } \right) } \right) \\ &\quad +\underbrace{\mathbf{x}_1 \cdot \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \partial_2 T } \right) } \right)-\mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \partial_1 T } \right) } \right)}_{\text{Apply cyclic permutations}}\\ &=\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot \partial_4 T } \right) } \right)-\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_4 \cdot \partial_3 T } \right) } \right) \\ &\quad +\mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_1 \cdot \partial_2 T } \right) } \right)-\mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_2 \cdot \partial_1 T } \right) } \right) \\ &=\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot\left( {\mathbf{x}_3 \cdot \partial_4 T-\mathbf{x}_4 \cdot \partial_3 T} \right) } \right) \\ &\quad +\mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \left( {\mathbf{x}_1 \cdot \partial_2 T-\mathbf{x}_2 \cdot \partial_1 T} \right) } \right) \\ &=\mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot\left( {\partial_4 \left( { \mathbf{x}_3 \cdot T } \right)-\partial_3 \left( { \mathbf{x}_4 \cdot T } \right)} \right) } \right) \\ &\quad +\mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \left( {\partial_2 \left( { \mathbf{x}_1 \cdot T } \right)-\partial_1 \left( { \mathbf{x}_2 \cdot T } \right)} \right) } \right) \\ &=\mathbf{x}_1 \cdot \partial_4 \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot T } \right) } \right)+\mathbf{x}_2 \cdot \partial_3 \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_4 \cdot T } \right) } \right) \\ 
&\quad +\mathbf{x}_3 \cdot \partial_2 \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_1 \cdot T } \right) } \right)+\mathbf{x}_4 \cdot \partial_1 \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot T } \right) } \right) \\ &-\mathbf{x}_1 \cdot \left( { \left( { \partial_4 \mathbf{x}_2} \right) \cdot \left( { \mathbf{x}_3 \cdot T } \right) } \right)-\mathbf{x}_2 \cdot \left( { \left( { \partial_3 \mathbf{x}_1} \right) \cdot \left( { \mathbf{x}_4 \cdot T } \right) } \right) \\ &\quad -\mathbf{x}_3 \cdot \left( { \left( { \partial_2 \mathbf{x}_4} \right) \cdot \left( { \mathbf{x}_1 \cdot T } \right) } \right)-\mathbf{x}_4 \cdot \left( { \left( { \partial_1 \mathbf{x}_3} \right) \cdot \left( { \mathbf{x}_2 \cdot T } \right) } \right) \\ &=\mathbf{x}_1 \cdot \partial_4 \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot T } \right) } \right)+\mathbf{x}_2 \cdot \partial_3 \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_4 \cdot T } \right) } \right) \\ &\quad +\mathbf{x}_3 \cdot \partial_2 \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_1 \cdot T } \right) } \right)+\mathbf{x}_4 \cdot \partial_1 \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot T } \right) } \right) \\ &+\frac{\partial^2 \mathbf{x}}{\partial u^4 \partial u^2}\cdot\not{{\left( {\mathbf{x}_1 \cdot \left( { \mathbf{x}_3 \cdot T } \right)+\mathbf{x}_3 \cdot \left( { \mathbf{x}_1 \cdot T } \right)} \right)}} \\ &\quad +\frac{\partial^2 \mathbf{x}}{\partial u^1 \partial u^3}\cdot\not{{\left( {\mathbf{x}_2 \cdot \left( { \mathbf{x}_4 \cdot T } \right)+\mathbf{x}_4 \cdot \left( { \mathbf{x}_2 \cdot T } \right)} \right)}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.155)

We can cancel those last terms using lemma 5. Using the same reverse chain rule expansion once more we have

\begin{aligned}\begin{aligned}\mathbf{x}_{123} \cdot \partial_4 T-&\mathbf{x}_{124} \cdot \partial_3 T+\mathbf{x}_{134} \cdot \partial_2 T-\mathbf{x}_{234} \cdot \partial_1 T \\ &=\partial_4 \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot T } \right) } \right) } \right)+\partial_3 \left( { \mathbf{x}_2 \cdot \left( { \mathbf{x}_1 \cdot \left( { \mathbf{x}_4 \cdot T } \right) } \right) } \right)+\partial_2 \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_1 \cdot T } \right) } \right) } \right)+\partial_1 \left( { \mathbf{x}_4 \cdot \left( { \mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot T } \right) } \right) } \right) \\ &-\left( { \partial_4 \mathbf{x}_1} \right)\cdot\not{{\left( {\mathbf{x}_2 \cdot \left( { \mathbf{x}_3 \cdot T } \right)+\mathbf{x}_3 \cdot \left( { \mathbf{x}_2 \cdot T } \right)} \right)}}-\left( { \partial_3 \mathbf{x}_2} \right) \cdot\not{{\left( {\mathbf{x}_1 \cdot \left( { \mathbf{x}_4 \cdot T } \right)+\mathbf{x}_4 \cdot \left( { \mathbf{x}_1 \cdot T } \right)} \right)}},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.156)

or

\begin{aligned}d^4 \mathbf{x} \cdot \left( { \boldsymbol{\partial} \wedge T } \right)=d^4 u\Bigl( {\partial_4 T_{3 2 1}+\partial_3 T_{4 1 2}+\partial_2 T_{1 4 3}+\partial_1 T_{2 3 4}} \Bigr).\end{aligned} \hspace{\stretch{1}}(1.0.157)

The final result follows after permuting the indices slightly.

### Lemma 1. Distribution of inner products

Given two blades $A_s, B_r$ with grades subject to $s > r > 0$, and a vector $b$, the inner product distributes according to

\begin{aligned}A_s \cdot \left( { b \wedge B_r } \right) = \left( { A_s \cdot b } \right) \cdot B_r.\end{aligned}

This will allow us, for example, to expand a general inner product of the form $d^k \mathbf{x} \cdot (\boldsymbol{\partial} \wedge F)$.

The proof is straightforward, but also mechanical. Start by expanding the wedge and dot products within a grade selection operator

\begin{aligned}A_s \cdot \left( { b \wedge B_r } \right)={\left\langle{{A_s (b \wedge B_r)}}\right\rangle}_{{s - (r + 1)}}=\frac{1}{2} {\left\langle{{A_s \left( {b B_r + (-1)^{r} B_r b} \right) }}\right\rangle}_{{s - (r + 1)}}\end{aligned} \hspace{\stretch{1}}(1.158)

Solving for $B_r b$ in

\begin{aligned}2 b \cdot B_r = b B_r - (-1)^{r} B_r b,\end{aligned} \hspace{\stretch{1}}(1.159)

we have

\begin{aligned}A_s \cdot \left( { b \wedge B_r } \right)=\frac{1}{2} {\left\langle{{ A_s b B_r + A_s \left( { b B_r - 2 b \cdot B_r } \right) }}\right\rangle}_{{s - (r + 1)}}={\left\langle{{ A_s b B_r }}\right\rangle}_{{s - (r + 1)}}-\not{{{\left\langle{{ A_s \left( { b \cdot B_r } \right) }}\right\rangle}_{{s - (r + 1)}}}}.\end{aligned} \hspace{\stretch{1}}(1.160)

The last term above is zero since we are selecting the $s - r - 1$ grade element of a multivector with grades $s - r + 1$ and $s + r - 1$, which has no terms for $r > 0$. Now we can expand the $A_s b$ multivector product, for

\begin{aligned}A_s \cdot \left( { b \wedge B_r } \right)={\left\langle{{ \left( { A_s \cdot b + A_s \wedge b} \right) B_r }}\right\rangle}_{{s - (r + 1)}}.\end{aligned} \hspace{\stretch{1}}(1.161)

The latter multivector (with the wedge product factor) above has grades $s + 1 - r$ and $s + 1 + r$, so this selection operator finds nothing. This leaves

\begin{aligned}A_s \cdot \left( { b \wedge B_r } \right)={\left\langle{{\left( { A_s \cdot b } \right) \cdot B_r+ \left( { A_s \cdot b } \right) \wedge B_r}}\right\rangle}_{{s - (r + 1)}}.\end{aligned} \hspace{\stretch{1}}(1.162)

The first dot product term has grade $s - 1 - r$ and is selected, whereas the wedge term has grade $s - 1 + r \ne s - r - 1$ (for $r > 0$). $\qquad\square$
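None of this is needed for the proof, but the $s = 2, r = 1$ case of the lemma is cheap to spot check numerically, using nothing beyond the component expansions of the bivector dot products. This is a sketch assuming Python with numpy; the variable names are mine:

```python
import numpy as np

# Spot check (p ^ q) . (b ^ c) = ((p ^ q) . b) . c for random vectors.
rng = np.random.default_rng(0)
p, q, b, c = (rng.standard_normal(4) for _ in range(4))

# Left side: bivector dot bivector,
# (p ^ q) . (b ^ c) = (p . c)(q . b) - (p . b)(q . c).
lhs = (p @ c) * (q @ b) - (p @ b) * (q @ c)

# Right side: (p ^ q) . b = p (q . b) - q (p . b) is a vector,
# which is then dotted with c in the ordinary way.
rhs = (p * (q @ b) - q * (p @ b)) @ c

assert np.isclose(lhs, rhs)
```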

### Lemma 2. Distribution of two bivectors

For vectors $\mathbf{a}$, $\mathbf{b}$, and bivector $B$, we have

\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b} } \right) \cdot B = \frac{1}{2} \left( {\mathbf{a} \cdot \left( { \mathbf{b} \cdot B } \right)-\mathbf{b} \cdot \left( { \mathbf{a} \cdot B } \right)} \right).\end{aligned} \hspace{\stretch{1}}(1.0.163)

Proof follows by applying the scalar selection operator, expanding the wedge product within it, and eliminating any of the terms that cannot contribute grade zero values

\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b} } \right) \cdot B &= \left\langle{{\frac{1}{2} \Bigl( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \Bigr) B}}\right\rangle \\ &= \frac{1}{2}\left\langle{{\mathbf{a} \left( { \mathbf{b} \cdot B + \not{{ \mathbf{b} \wedge B }} } \right)-\mathbf{b} \left( { \mathbf{a} \cdot B + \not{{ \mathbf{a} \wedge B }} } \right)}}\right\rangle \\ &= \frac{1}{2}\left\langle{{\mathbf{a} \cdot \left( { \mathbf{b} \cdot B } \right)+\not{{\mathbf{a} \wedge \left( { \mathbf{b} \cdot B } \right)}}-\mathbf{b} \cdot \left( { \mathbf{a} \cdot B } \right)-\not{{\mathbf{b} \wedge \left( { \mathbf{a} \cdot B } \right)}}}}\right\rangle \\ &= \frac{1}{2}\Bigl( {\mathbf{a} \cdot \left( { \mathbf{b} \cdot B } \right)-\mathbf{b} \cdot \left( { \mathbf{a} \cdot B } \right)} \Bigr)\qquad\square\end{aligned} \hspace{\stretch{1}}(1.0.163)
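This one is also easy to check numerically with $B = \mathbf{u} \wedge \mathbf{v}$, using only the expansion $\mathbf{x} \cdot \left( { \mathbf{u} \wedge \mathbf{v} } \right) = \left( { \mathbf{x} \cdot \mathbf{u} } \right) \mathbf{v} - \left( { \mathbf{x} \cdot \mathbf{v} } \right) \mathbf{u}$. A sketch, assuming Python with numpy (the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, u, v = (rng.standard_normal(3) for _ in range(4))

def vec_dot_biv(x, u, v):
    """x . (u ^ v) = (x . u) v - (x . v) u, a vector."""
    return (x @ u) * v - (x @ v) * u

# (a ^ b) . (u ^ v) = (a . v)(b . u) - (a . u)(b . v), a scalar.
lhs = (a @ v) * (b @ u) - (a @ u) * (b @ v)

# The right hand side of the lemma, built from nested vector dot products.
rhs = 0.5 * (a @ vec_dot_biv(b, u, v) - b @ vec_dot_biv(a, u, v))

assert np.isclose(lhs, rhs)
```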

### Lemma 3. Inner product of trivector with bivector

Given a bivector $B$, and trivector $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$ where $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ are vectors, the inner product is

\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} } \right) \cdot B=\mathbf{a} \Bigl( { \left( { \mathbf{b} \wedge \mathbf{c} } \right) \cdot B } \Bigr)+\mathbf{b} \Bigl( { \left( { \mathbf{c} \wedge \mathbf{a} } \right) \cdot B } \Bigr)+\mathbf{c} \Bigl( { \left( { \mathbf{a} \wedge \mathbf{b} } \right) \cdot B } \Bigr).\end{aligned} \hspace{\stretch{1}}(1.165)

This is also problem 1.1(c) from Exercises 2.1 in [3], and submits to a dumb expansion in successive dot products with a final regrouping. With $B = \mathbf{u} \wedge \mathbf{v}$

\begin{aligned}\begin{aligned}\left( \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \right)\cdot B&={\left\langle{{\left( \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \right) \left( \mathbf{u} \wedge \mathbf{v} \right) }}\right\rangle}_{1} \\ &={\left\langle{{\left( \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \right)\left(\mathbf{u} \mathbf{v}- \mathbf{u} \cdot \mathbf{v}\right) }}\right\rangle}_{1} \\ &=\left(\left( \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} \right) \cdot \mathbf{u} \right) \cdot \mathbf{v} \\ &=\left( \mathbf{a} \wedge \mathbf{b} \right) \cdot \mathbf{v} \left( \mathbf{c} \cdot \mathbf{u} \right)+\left( \mathbf{c} \wedge \mathbf{a} \right) \cdot \mathbf{v} \left( \mathbf{b} \cdot \mathbf{u} \right)+\left( \mathbf{b} \wedge \mathbf{c} \right) \cdot \mathbf{v} \left( \mathbf{a} \cdot \mathbf{u} \right) \\ &=\mathbf{a}\left( \mathbf{b} \cdot \mathbf{v} \right)\left( \mathbf{c} \cdot \mathbf{u} \right)-\mathbf{b}\left( \mathbf{a} \cdot \mathbf{v} \right)\left( \mathbf{c} \cdot \mathbf{u} \right) \\ &\quad +\mathbf{c}\left( \mathbf{a} \cdot \mathbf{v} \right)\left( \mathbf{b} \cdot \mathbf{u} \right)-\mathbf{a}\left( \mathbf{c} \cdot \mathbf{v} \right)\left( \mathbf{b} \cdot \mathbf{u} \right) \\ &\quad +\mathbf{b}\left( \mathbf{c} \cdot \mathbf{v} \right)\left( \mathbf{a} \cdot \mathbf{u} \right)-\mathbf{c}\left( \mathbf{b} \cdot \mathbf{v} \right)\left( \mathbf{a} \cdot \mathbf{u} \right) \\ &=\mathbf{a}\left( \left( \mathbf{b} \cdot \mathbf{v} \right) \left( \mathbf{c} \cdot \mathbf{u} \right) - \left( \mathbf{c} \cdot \mathbf{v} \right) \left( \mathbf{b} \cdot \mathbf{u} \right) \right)\\ &\quad +\mathbf{b}\left( \left( \mathbf{c} \cdot \mathbf{v} \right) \left( \mathbf{a} \cdot \mathbf{u} \right) - \left( \mathbf{a} \cdot \mathbf{v} \right) \left( \mathbf{c} \cdot \mathbf{u} \right) \right)\\ &\quad +\mathbf{c}\left( \left( \mathbf{a} \cdot \mathbf{v} \right) \left( \mathbf{b} \cdot \mathbf{u} \right) - \left( \mathbf{b} \cdot \mathbf{v} 
\right) \left( \mathbf{a} \cdot \mathbf{u} \right) \right) \\ &=\mathbf{a}\left( \mathbf{b} \wedge \mathbf{c} \right)\cdot\left( \mathbf{u} \wedge \mathbf{v} \right)\\ &\quad +\mathbf{b}\left( \mathbf{c} \wedge \mathbf{a} \right)\cdot\left( \mathbf{u} \wedge \mathbf{v} \right)\\ &\quad +\mathbf{c}\left( \mathbf{a} \wedge \mathbf{b} \right) \cdot\left( \mathbf{u} \wedge \mathbf{v} \right)\\ &=\mathbf{a}\left( \mathbf{b} \wedge \mathbf{c} \right)\cdot B+\mathbf{b}\left( \mathbf{c} \wedge \mathbf{a} \right) \cdot B+\mathbf{c}\left( \mathbf{a} \wedge \mathbf{b} \right)\cdot B. \qquad\square\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.166)
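Once again, a numerical spot check with $B = \mathbf{u} \wedge \mathbf{v}$ costs little, computing the left hand side as $\left( { \left( { \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} } \right) \cdot \mathbf{u} } \right) \cdot \mathbf{v}$ and the right hand side as stated in the lemma. A sketch assuming Python with numpy (helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, u, v = (rng.standard_normal(4) for _ in range(5))

def biv_dot_vec(x, y, w):
    """(x ^ y) . w = x (y . w) - y (x . w), a vector."""
    return x * (y @ w) - y * (x @ w)

def biv_dot_biv(x, y):
    """(x ^ y) . (u ^ v) = (x . v)(y . u) - (x . u)(y . v), a scalar."""
    return (x @ v) * (y @ u) - (x @ u) * (y @ v)

# Left side via ((a ^ b ^ c) . u) . v, using
# (a ^ b ^ c) . u = (a ^ b)(c . u) - (a ^ c)(b . u) + (b ^ c)(a . u).
lhs = (biv_dot_vec(a, b, v) * (c @ u)
       - biv_dot_vec(a, c, v) * (b @ u)
       + biv_dot_vec(b, c, v) * (a @ u))

# Right side: the statement of the lemma.
rhs = (a * biv_dot_biv(b, c)
       + b * biv_dot_biv(c, a)
       + c * biv_dot_biv(a, b))

assert np.allclose(lhs, rhs)
```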

### Lemma 4. Distribution of two trivectors

Given a trivector $T$ and three vectors $\mathbf{a}, \mathbf{b}$, and $\mathbf{c}$, the entire inner product can be expanded in terms of any successive set of inner products, subject to a change of sign with interchange of any two adjacent vectors within the dot product sequence

\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} } \right) \cdot T &= \mathbf{a} \cdot \left( { \mathbf{b} \cdot \left( { \mathbf{c} \cdot T } \right) } \right) \\ &= -\mathbf{a} \cdot \left( { \mathbf{c} \cdot \left( { \mathbf{b} \cdot T } \right) } \right) \\ &= \mathbf{b} \cdot \left( { \mathbf{c} \cdot \left( { \mathbf{a} \cdot T } \right) } \right) \\ &= - \mathbf{b} \cdot \left( { \mathbf{a} \cdot \left( { \mathbf{c} \cdot T } \right) } \right) \\ &= \mathbf{c} \cdot \left( { \mathbf{a} \cdot \left( { \mathbf{b} \cdot T } \right) } \right) \\ &= - \mathbf{c} \cdot \left( { \mathbf{b} \cdot \left( { \mathbf{a} \cdot T } \right) } \right).\end{aligned} \hspace{\stretch{1}}(1.167)

To show this, we first expand within a scalar selection operator

\begin{aligned}\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} } \right) \cdot T&=\left\langle{{\left( { \mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c} } \right) T}}\right\rangle \\ &=\frac{1}{6}\left\langle{{ \mathbf{a} \mathbf{b} \mathbf{c} T- \mathbf{a} \mathbf{c} \mathbf{b} T+ \mathbf{b} \mathbf{c} \mathbf{a} T- \mathbf{b} \mathbf{a} \mathbf{c} T+ \mathbf{c} \mathbf{a} \mathbf{b} T- \mathbf{c} \mathbf{b} \mathbf{a} T}}\right\rangle \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.168)

Now consider any single term from the scalar selection, such as the first. This can be reordered using the vector dot product identity

\begin{aligned}\left\langle{{ \mathbf{a} \mathbf{b} \mathbf{c} T}}\right\rangle=\left\langle{{ \mathbf{a} \left( { -\mathbf{c} \mathbf{b} + 2 \mathbf{b} \cdot \mathbf{c} } \right) T}}\right\rangle=-\left\langle{{ \mathbf{a} \mathbf{c} \mathbf{b} T}}\right\rangle+2 \mathbf{b} \cdot \mathbf{c} \not{{\left\langle{{ \mathbf{a} T}}\right\rangle}}.\end{aligned} \hspace{\stretch{1}}(1.0.169)

The vector-trivector product in the latter grade selection operation above produces only bivector and quadvector terms, so it contributes nothing. This can be repeated, showing that

\begin{aligned} \left\langle{{ \mathbf{a} \mathbf{b} \mathbf{c} T }}\right\rangle &= - \left\langle{{ \mathbf{a} \mathbf{c} \mathbf{b} T }}\right\rangle \\ &= + \left\langle{{ \mathbf{b} \mathbf{c} \mathbf{a} T }}\right\rangle \\ &= - \left\langle{{ \mathbf{b} \mathbf{a} \mathbf{c} T }}\right\rangle \\ &= + \left\langle{{ \mathbf{c} \mathbf{a} \mathbf{b} T }}\right\rangle \\ &= - \left\langle{{ \mathbf{c} \mathbf{b} \mathbf{a} T }}\right\rangle.\end{aligned} \hspace{\stretch{1}}(1.0.170)

Substituting this back into eq. 1.0.168 proves lemma 4.
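The sign pattern is also easy to verify numerically for a particular trivector $T = \mathbf{u} \wedge \mathbf{v} \wedge \mathbf{w}$, building the successive dot products from their component expansions. A sketch assuming Python with numpy (helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c, u, v, w = (rng.standard_normal(4) for _ in range(6))

def vec_dot_biv(x, p, q):
    """x . (p ^ q) = (x . p) q - (x . q) p, a vector."""
    return (x @ p) * q - (x @ q) * p

def nested(x, y, z):
    """x . (y . (z . T)) for T = u ^ v ^ w, via successive dot products."""
    # z . T = (z . u) v ^ w - (z . v) u ^ w + (z . w) u ^ v  (a bivector),
    # then dot with y (giving a vector) and finally with x (a scalar).
    yv = ((z @ u) * vec_dot_biv(y, v, w)
          - (z @ v) * vec_dot_biv(y, u, w)
          + (z @ w) * vec_dot_biv(y, u, v))
    return x @ yv

s = nested(a, b, c)  # (a ^ b ^ c) . T
assert np.isclose(s, -nested(a, c, b))
assert np.isclose(s, nested(b, c, a))
assert np.isclose(s, -nested(b, a, c))
assert np.isclose(s, nested(c, a, b))
assert np.isclose(s, -nested(c, b, a))
```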

### Lemma 5. Permutation of two successive dot products with trivector

Given a trivector $T$ and two vectors $\mathbf{a}$ and $\mathbf{b}$, alternating the order of the dot products changes the sign

\begin{aligned}\mathbf{a} \cdot \left( { \mathbf{b} \cdot T } \right)=-\mathbf{b} \cdot \left( { \mathbf{a} \cdot T } \right).\end{aligned} \hspace{\stretch{1}}(1.171)

This and lemma 4 are clearly examples of a more general identity, but I’ll not try to prove that here. To show this one, we have

\begin{aligned}\mathbf{a} \cdot \left( { \mathbf{b} \cdot T } \right) &= {\left\langle{{ \mathbf{a} \left( { \mathbf{b} \cdot T } \right) }}\right\rangle}_{1} \\ &= \frac{1}{2}{\left\langle{{ \mathbf{a} \mathbf{b} T + \mathbf{a} T \mathbf{b} }}\right\rangle}_{1} \\ &= \frac{1}{2}{\left\langle{{ \left( { -\mathbf{b} \mathbf{a} + \not{{2 \mathbf{a} \cdot \mathbf{b}}}} \right) T + \left( { \mathbf{a} \cdot T} \right) \mathbf{b} + \not{{ \mathbf{a} \wedge T}} \mathbf{b} }}\right\rangle}_{1} \\ &= \frac{1}{2}\left( {-\mathbf{b} \cdot \left( { \mathbf{a} \cdot T } \right)+\left( { \mathbf{a} \cdot T } \right) \cdot \mathbf{b}} \right) \\ &= -\mathbf{b} \cdot \left( { \mathbf{a} \cdot T } \right). \qquad\square\end{aligned} \hspace{\stretch{1}}(1.172)

Cancellation of terms above was because they could not contribute to a grade one selection. We also employed the relation $\mathbf{x} \cdot B = - B \cdot \mathbf{x}$ for bivector $B$ and vector $\mathbf{x}$.
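As with lemma 4, a numerical spot check with $T = \mathbf{u} \wedge \mathbf{v} \wedge \mathbf{w}$ costs little. A sketch assuming Python with numpy (helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, u, v, w = (rng.standard_normal(4) for _ in range(5))

def vec_dot_biv(x, p, q):
    """x . (p ^ q) = (x . p) q - (x . q) p, a vector."""
    return (x @ p) * q - (x @ q) * p

def dot_dot(x, y):
    """x . (y . T) for T = u ^ v ^ w, a vector."""
    # y . T expanded term by term, then dotted with x.
    return ((y @ u) * vec_dot_biv(x, v, w)
            - (y @ v) * vec_dot_biv(x, u, w)
            + (y @ w) * vec_dot_biv(x, u, v))

# Alternating the order of the dot products changes the sign.
assert np.allclose(dot_dot(a, b), -dot_dot(b, a))
```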

### Lemma 6. Duality in a plane

For a vector $\mathbf{a}$, and a plane containing $\mathbf{a}$ and $\mathbf{b}$, the dual $\mathbf{a}^{*}$ of this vector with respect to this plane is

\begin{aligned}\mathbf{a}^{*} = \frac{\mathbf{b} \cdot \left( { \mathbf{a} \wedge \mathbf{b} } \right)}{\left( {\mathbf{a} \wedge \mathbf{b}} \right)^2},\end{aligned} \hspace{\stretch{1}}(1.173)

satisfying

\begin{aligned}\mathbf{a}^{*} \cdot \mathbf{a} = 1,\end{aligned} \hspace{\stretch{1}}(1.174)

and

\begin{aligned}\mathbf{a}^{*} \cdot \mathbf{b} = 0.\end{aligned} \hspace{\stretch{1}}(1.175)

To show these, start by expanding the vector-bivector dot product

\begin{aligned}\mathbf{b} \cdot \left( { \mathbf{a} \wedge \mathbf{b} } \right)=\left( { \mathbf{b} \cdot \mathbf{a} } \right) \mathbf{b}-\mathbf{b}^2 \mathbf{a}.\end{aligned} \hspace{\stretch{1}}(1.176)

Dotting with $\mathbf{a}$ we have

\begin{aligned}\mathbf{a} \cdot \left( { \mathbf{b} \cdot \left( { \mathbf{a} \wedge \mathbf{b} } \right) } \right)=\mathbf{a} \cdot \left( {\left( { \mathbf{b} \cdot \mathbf{a} } \right) \mathbf{b}-\mathbf{b}^2 \mathbf{a}} \right)=\left( { \mathbf{b} \cdot \mathbf{a} } \right)^2 - \mathbf{b}^2 \mathbf{a}^2,\end{aligned} \hspace{\stretch{1}}(1.177)

but dotting with $\mathbf{b}$ yields zero

\begin{aligned}\mathbf{b} \cdot \left( { \mathbf{b} \cdot \left( { \mathbf{a} \wedge \mathbf{b} } \right) } \right) &= \mathbf{b} \cdot \left( {\left( { \mathbf{b} \cdot \mathbf{a} } \right) \mathbf{b}-\mathbf{b}^2 \mathbf{a}} \right) \\ &= \left( { \mathbf{b} \cdot \mathbf{a} } \right) \mathbf{b}^2 - \mathbf{b}^2 \left( { \mathbf{a} \cdot \mathbf{b} } \right) \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.178)

To complete the proof, we note that the product in eq. 1.177 is just the wedge squared

\begin{aligned}\left( { \mathbf{a} \wedge \mathbf{b}} \right)^2 &= \left\langle{{\left( { \mathbf{a} \wedge \mathbf{b} } \right)^2}}\right\rangle \\ &= \left\langle{{\left( { \mathbf{a} \mathbf{b} - \mathbf{a} \cdot \mathbf{b} } \right)\left( { \mathbf{a} \mathbf{b} - \mathbf{a} \cdot \mathbf{b} } \right)}}\right\rangle \\ &= \left\langle{{\mathbf{a} \mathbf{b} \mathbf{a} \mathbf{b} - 2 \left( {\mathbf{a} \cdot \mathbf{b}} \right) \mathbf{a} \mathbf{b}}}\right\rangle+\left( { \mathbf{a} \cdot \mathbf{b} } \right)^2 \\ &= \left\langle{{\mathbf{a} \mathbf{b} \left( { -\mathbf{b} \mathbf{a} + 2 \mathbf{a} \cdot \mathbf{b} } \right)}}\right\rangle-\left( { \mathbf{a} \cdot \mathbf{b} } \right)^2 \\ &= \left( { \mathbf{a} \cdot \mathbf{b} } \right)^2-\mathbf{a}^2 \mathbf{b}^2.\end{aligned} \hspace{\stretch{1}}(1.179)

This duality relation can be recast with a linear denominator

\begin{aligned}\mathbf{a}^{*} &= \frac{\mathbf{b} \cdot \left( { \mathbf{a} \wedge \mathbf{b} } \right)}{\left( {\mathbf{a} \wedge \mathbf{b}} \right)^2} \\ &= \mathbf{b} \frac{\mathbf{a} \wedge \mathbf{b} }{\left( {\mathbf{a} \wedge \mathbf{b}} \right)^2} \\ &= \mathbf{b} \frac{\mathbf{a} \wedge \mathbf{b} }{\left\lvert {\mathbf{a} \wedge \mathbf{b} } \right\rvert} \frac{\left\lvert {\mathbf{a} \wedge \mathbf{b}} \right\rvert}{\mathbf{a} \wedge \mathbf{b} }\frac{1}{{\left( {\mathbf{a} \wedge \mathbf{b}} \right)}},\end{aligned} \hspace{\stretch{1}}(1.180)

or

\begin{aligned}\mathbf{a}^{*} = \mathbf{b} \frac{1}{{\left( {\mathbf{a} \wedge \mathbf{b}} \right)}}.\end{aligned} \hspace{\stretch{1}}(1.0.181)

We can use this form after scaling it appropriately to express duality in terms of the pseudoscalar.
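Since everything here reduces to ordinary dot products, the duality relations can be spot checked numerically using eq. 1.176 for the numerator and eq. 1.179 for the denominator. A sketch assuming Python with numpy:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# a* = (b . (a ^ b)) / (a ^ b)^2, using
# b . (a ^ b) = (b . a) b - b^2 a  and  (a ^ b)^2 = (a . b)^2 - a^2 b^2.
a_dual = ((b @ a) * b - (b @ b) * a) / ((a @ b) ** 2 - (a @ a) * (b @ b))

assert np.isclose(a_dual @ a, 1.0)  # a* . a = 1
assert np.isclose(a_dual @ b, 0.0)  # a* . b = 0
```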

### Lemma 7. Dual vector in a three vector subspace

In the subspace spanned by $\left\{ {\mathbf{a}, \mathbf{b}, \mathbf{c}} \right\}$, the dual of $\mathbf{a}$ is

\begin{aligned}\mathbf{a}^{*} = \mathbf{b} \wedge \mathbf{c} \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}}.\end{aligned}

Consider the dot product of $\mathbf{a}^{*}$ with $\mathbf{u} \in \left\{ {\mathbf{a}, \mathbf{b}, \mathbf{c}} \right\}$.

\begin{aligned}\mathbf{u} \cdot \mathbf{a}^{*} &= \left\langle{{ \mathbf{u} \mathbf{b} \wedge \mathbf{c} \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}} }}\right\rangle \\ &= \left\langle{{ \mathbf{u} \cdot \left( { \mathbf{b} \wedge \mathbf{c}} \right) \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}} }}\right\rangle+\left\langle{{ \mathbf{u} \wedge \mathbf{b} \wedge \mathbf{c} \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}} }}\right\rangle \\ &= \not{{\left\langle{{ \left( { \left( { \mathbf{u} \cdot \mathbf{b}} \right) \mathbf{c}-\left( {\mathbf{u} \cdot \mathbf{c}} \right) \mathbf{b}} \right)\frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}} }}\right\rangle}}+\left\langle{{ \mathbf{u} \wedge \mathbf{b} \wedge \mathbf{c} \frac{1}{{\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}}} }}\right\rangle.\end{aligned} \hspace{\stretch{1}}(1.182)

The canceled term is eliminated since it is the product of a vector and trivector producing no scalar term. Substituting $\mathbf{a}, \mathbf{b}, \mathbf{c}$, and noting that $\mathbf{u} \wedge \mathbf{u} = 0$, we have

\begin{aligned}\begin{aligned}\mathbf{a} \cdot \mathbf{a}^{*} &= 1 \\ \mathbf{b} \cdot \mathbf{a}^{*} &= 0 \\ \mathbf{c} \cdot \mathbf{a}^{*} &= 0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.183)
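In $\mathbb{R}^3$ this is just the familiar reciprocal frame vector: $\mathbf{b} \wedge \mathbf{c}$ is dual to the cross product $\mathbf{b} \times \mathbf{c}$, and $\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$ is $\mathbf{a} \cdot \left( { \mathbf{b} \times \mathbf{c} } \right)$ times the pseudoscalar, so the lemma reduces to $\mathbf{a}^{*} = \left( { \mathbf{b} \times \mathbf{c} } \right)/\left( { \mathbf{a} \cdot \left( { \mathbf{b} \times \mathbf{c} } \right) } \right)$. A numerical sketch of that reduction, assuming Python with numpy:

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, c = (rng.standard_normal(3) for _ in range(3))

# Reciprocal frame vector for a within the (generically independent) frame {a, b, c}.
a_dual = np.cross(b, c) / (a @ np.cross(b, c))

assert np.isclose(a_dual @ a, 1.0)
assert np.isclose(a_dual @ b, 0.0)
assert np.isclose(a_dual @ c, 0.0)
```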

### Lemma 8. Pseudoscalar selection

For a grade $k$ blade $K \in \bigwedge^k$ (i.e. a pseudoscalar for the subspace that it represents), and vectors $\mathbf{a}, \mathbf{b}$ lying in that subspace (so that $\mathbf{a} \wedge K = \mathbf{b} \wedge K = 0$), the grade $k$ selection of this blade sandwiched between the vectors is

\begin{aligned}{\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k} = (-1)^{k+1} {\left\langle{{K \mathbf{a} \mathbf{b}}}\right\rangle}_{k} = (-1)^{k+1} K \left( { \mathbf{a} \cdot \mathbf{b}} \right).\end{aligned}

To show this, we have to consider even and odd grades separately. First for even $k$ we have

\begin{aligned}{\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k} &= {\left\langle{{ \left( { \mathbf{a} \cdot K + \not{{\mathbf{a} \wedge K}}} \right) \mathbf{b} }}\right\rangle}_{k} \\ &= \frac{1}{2} {\left\langle{{ \left( { \mathbf{a} K - K \mathbf{a} } \right) \mathbf{b} }}\right\rangle}_{k} \\ &= \frac{1}{2} {\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k}-\frac{1}{2} {\left\langle{{ K \mathbf{a} \mathbf{b} }}\right\rangle}_{k},\end{aligned} \hspace{\stretch{1}}(1.184)

or

\begin{aligned}{\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k} = -{\left\langle{{ K \mathbf{a} \mathbf{b} }}\right\rangle}_{k} = -K \left( { \mathbf{a} \cdot \mathbf{b}} \right).\end{aligned} \hspace{\stretch{1}}(1.185)

Similarly for odd $k$, we have

\begin{aligned}{\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k} &= {\left\langle{{ \left( { \mathbf{a} \cdot K + \not{{\mathbf{a} \wedge K}}} \right) \mathbf{b} }}\right\rangle}_{k} \\ &= \frac{1}{2} {\left\langle{{ \left( { \mathbf{a} K + K \mathbf{a} } \right) \mathbf{b} }}\right\rangle}_{k} \\ &= \frac{1}{2} {\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k}+\frac{1}{2} {\left\langle{{ K \mathbf{a} \mathbf{b} }}\right\rangle}_{k},\end{aligned} \hspace{\stretch{1}}(1.186)

or

\begin{aligned}{\left\langle{{ \mathbf{a} K \mathbf{b} }}\right\rangle}_{k} = {\left\langle{{ K \mathbf{a} \mathbf{b} }}\right\rangle}_{k} = K \left( { \mathbf{a} \cdot \mathbf{b}} \right).\end{aligned} \hspace{\stretch{1}}(1.187)

Adjusting for the signs completes the proof.

# References

[1] John Denker. Magnetic field for a straight wire, 2014. URL http://www.av8n.com/physics/straight-wire.pdf. [Online; accessed 11-May-2014].

[2] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] Peeter Joot. Collection of old notes on Stokes theorem in Geometric algebra, 2014. URL https://sites.google.com/site/peeterjoot3/math2014/bigCollectionOfPartiallyIncorrectStokesTheoremMusings.pdf.

[5] Peeter Joot. Synopsis of old notes on Stokes theorem in Geometric algebra, 2014. URL https://sites.google.com/site/peeterjoot3/math2014/synopsisOfBigCollectionOfPartiallyIncorrectStokesTheoremMusings.pdf.

[6] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[7] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[8] Michael Spivak. Calculus on manifolds, volume 1. Benjamin New York, 1965.

## PHY450H1S. Relativistic Electrodynamics Lecture 18 (Taught by Prof. Erich Poppitz). Green’s function solution to Maxwell’s equation.

Posted by peeterjoot on March 12, 2011

Covering chapter 8 material from the text [1].

Covering lecture notes pp. 136-146: continued reminder of the electrostatic Green’s function (136); the retarded Green’s function of the d’Alembert operator: derivation and properties (137-140); the solution of the d’Alembert equation with a source: retarded potentials (141-142)

# Solving the forced wave equation.

See the notes for a complex variables and Fourier transform method of deriving the Green’s function. In class, we’ll just pull it out of a magic hat. We wish to solve

\begin{aligned}\square A^k = \partial_i \partial^i A^k = \frac{4 \pi}{c} j^k\end{aligned} \hspace{\stretch{1}}(2.1)

(with a $\partial_i A^i = 0$ gauge choice).

Our Green’s method utilizes

\begin{aligned}\square_{(\mathbf{x}, t)} G(\mathbf{x} - \mathbf{x}', t - t') = \delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t')\end{aligned} \hspace{\stretch{1}}(2.2)

If we know such a function, our solution is simple to obtain

\begin{aligned}A^k(\mathbf{x}, t)= \int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t') G(\mathbf{x} - \mathbf{x}', t - t')\end{aligned} \hspace{\stretch{1}}(2.3)

Proof:

\begin{aligned}\square_{(\mathbf{x}, t)} A^k(\mathbf{x}, t)&=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\square_{(\mathbf{x}, t)}G(\mathbf{x} - \mathbf{x}', t - t') \\ &=\int d^3 \mathbf{x}' dt' \frac{4 \pi}{c} j^k(\mathbf{x}', t')\delta^3( \mathbf{x} - \mathbf{x}') \delta( t - t') \\ &=\frac{4 \pi}{c} j^k(\mathbf{x}, t)\end{aligned}

Claim:

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(2.4)

This is the retarded Green’s function of the operator $\square$, where

\begin{aligned}\square G(\mathbf{x}, t) = \delta^3(\mathbf{x}) \delta(t)\end{aligned} \hspace{\stretch{1}}(2.5)

## Proof of the d’Alembertian Green’s function

Our Prof is excellent at motivating any results that he pulls out of magic hats. He’s said that he’s included a derivation using Fourier transforms and tricky contour integration arguments in the class notes for anybody who is interested (and for those who also know how to do contour integration). For those who don’t know contour integration yet (some people are taking it concurrently), one can actually prove this by simply applying the wave equation operator to this function. This treats the delta function as a normal function that one can take the derivatives of, something that can be well defined in the context of generalized functions. Chugging ahead with this approach we have

\begin{aligned}\square G(\mathbf{x}, t)=\left(\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta\right)\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi c^2 {\left\lvert{\mathbf{x}}\right\rvert} }- \Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }.\end{aligned} \hspace{\stretch{1}}(2.6)

This starts things off and now things get a bit hairy. It’s helpful to consider a chain rule expansion of the Laplacian

\begin{aligned}\Delta (u v)&=\partial_{\alpha\alpha} (u v) \\ &=\partial_{\alpha} (v \partial_\alpha u+ u\partial_\alpha v) \\ &=(\partial_\alpha v) (\partial_\alpha u ) + v \partial_{\alpha\alpha} u+(\partial_\alpha u) (\partial_\alpha v ) + u \partial_{\alpha\alpha} v.\end{aligned}

In vector form this is

\begin{aligned}\Delta (u v) = u \Delta v + 2 (\boldsymbol{\nabla} u) \cdot (\boldsymbol{\nabla} v) + v \Delta u.\end{aligned} \hspace{\stretch{1}}(2.7)
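This product rule can be spot checked by central finite differences at a point, for arbitrary smooth test functions (the test functions and step sizes below are my own arbitrary choices). A sketch assuming Python with numpy:

```python
import numpy as np

# Arbitrary smooth test functions of a 3D point.
u = lambda p: np.sin(p[0]) * np.cos(p[1]) + p[2] ** 2
v = lambda p: np.exp(0.3 * p[0]) * p[1] + p[2]

def grad(f, p, h=1e-4):
    e = np.eye(3)
    return np.array([(f(p + h * e[i]) - f(p - h * e[i])) / (2 * h) for i in range(3)])

def laplacian(f, p, h=1e-4):
    e = np.eye(3)
    return sum((f(p + h * e[i]) - 2 * f(p) + f(p - h * e[i])) / h ** 2 for i in range(3))

p = np.array([0.7, -0.2, 1.1])
uv = lambda q: u(q) * v(q)

lhs = laplacian(uv, p)
rhs = u(p) * laplacian(v, p) + 2 * grad(u, p) @ grad(v, p) + v(p) * laplacian(u, p)

# Agreement up to finite difference error.
assert np.isclose(lhs, rhs, atol=1e-4)
```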

Applying this to the Laplacian portion of 2.6 we have

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)\Delta\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}+\left(\boldsymbol{\nabla} \frac{1}{{2 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\right)\cdot\left(\boldsymbol{\nabla}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \right)+\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\Delta\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right).\end{aligned} \hspace{\stretch{1}}(2.8)

Here we make the identification

\begin{aligned}\Delta \frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }} = - \delta^3(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.9)

This could be considered a given from our knowledge of electrostatics, but it’s not too much work to prove it directly.

### An aside. Proving the Laplacian Green’s function.

If $-1/{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }$ is a Green’s function for the Laplacian, then the Laplacian of the convolution of this with a test function should recover that test function

\begin{aligned}\Delta \int d^3 \mathbf{x}' \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right) f(\mathbf{x}') = f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.10)

We can directly evaluate the LHS of this equation, following the approach in [2]. First note that the Laplacian can be pulled into the integral and operates only on the presumed Green’s function. For that operation we have

\begin{aligned}\Delta \left(-\frac{1}{{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} \right)=-\frac{1}{{4 \pi}} \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.11)

It will be helpful to compute the gradient of various powers of ${\left\lvert{\mathbf{x}}\right\rvert}$

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert}^a&=e_\alpha \partial_\alpha (x^\beta x^\beta)^{a/2} \\ &=e_\alpha \left(\frac{a}{2}\right) 2 x^\beta {\delta_\beta}^\alpha {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2} \\ &=a \mathbf{x} {\left\lvert{\mathbf{x}}\right\rvert}^{a - 2}.\end{aligned}

In particular, when $\mathbf{x} \ne 0$, this gives us

\begin{aligned}\boldsymbol{\nabla} {\left\lvert{\mathbf{x}}\right\rvert} &= \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &= -\frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}^3}} &= -3 \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^5}.\end{aligned} \hspace{\stretch{1}}(2.12)
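These power law gradients (and the fact that $1/{\left\lvert{\mathbf{x}}\right\rvert}$ is harmonic away from the origin, which is used just below) can be spot checked with central finite differences at any point away from the origin. A sketch assuming Python with numpy; the sample point and step sizes are my own choices:

```python
import numpy as np

def grad(f, p, h=1e-5):
    e = np.eye(3)
    return np.array([(f(p + h * e[i]) - f(p - h * e[i])) / (2 * h) for i in range(3)])

def laplacian(f, p, h=1e-4):
    e = np.eye(3)
    return sum((f(p + h * e[i]) - 2 * f(p) + f(p - h * e[i])) / h ** 2 for i in range(3))

p = np.array([0.8, -0.5, 1.2])
r = np.linalg.norm(p)

# grad |x|^a = a x |x|^(a-2), checked for the three exponents used above.
for a in (1.0, -1.0, -3.0):
    f = lambda q, a=a: np.linalg.norm(q) ** a
    assert np.allclose(grad(f, p), a * p * r ** (a - 2), atol=1e-6)

# 1/|x| is harmonic at any x != 0.
assert abs(laplacian(lambda q: 1.0 / np.linalg.norm(q), p)) < 1e-4
```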

For the Laplacian of $1/{\left\lvert{\mathbf{x}}\right\rvert}$, at the points $\mathbf{x} \ne 0$ where this is well defined we have

\begin{aligned}\Delta \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} &=\boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \frac{1}{{{\left\lvert{\mathbf{x}}\right\rvert}}} \\ &= -\partial_\alpha \frac{x^\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - x^\alpha \partial_\alpha \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} - \mathbf{x} \cdot \boldsymbol{\nabla} \frac{1}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= -\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}^3} + 3 \frac{\mathbf{x}^2}{{\left\lvert{\mathbf{x}}\right\rvert}^5}\end{aligned}

So this is zero whenever $\mathbf{x} \ne 0$. This means that the Laplacian operation

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \Delta \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}},\end{aligned} \hspace{\stretch{1}}(2.15)

can only have a value in a neighborhood of point $\mathbf{x}$. Writing $\Delta = \boldsymbol{\nabla} \cdot \boldsymbol{\nabla}$ we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla} \cdot \left( -\frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \right).\end{aligned} \hspace{\stretch{1}}(2.16)

Observing that $\boldsymbol{\nabla} \cdot f(\mathbf{x} -\mathbf{x}') = -\boldsymbol{\nabla}' f(\mathbf{x} - \mathbf{x}')$ we can put this in a form that allows for use of Stokes theorem so that we can convert this to a surface integral

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^3 \mathbf{x}' \boldsymbol{\nabla}' \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &=\lim_{\epsilon = {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert} \rightarrow 0}f(\mathbf{x}) \int d^2 \mathbf{x}' \mathbf{n} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= \int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\mathbf{x}' - \mathbf{x}}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}} \cdot \frac{\mathbf{x} - \mathbf{x}'}{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3} \\ &= -\int_{\phi=0}^{2\pi} \int_{\theta = 0}^\pi \epsilon^2 \sin\theta d\theta d\phi \frac{\epsilon^2}{\epsilon^4}\end{aligned}

where we use $(\mathbf{x}' - \mathbf{x})/{\left\lvert{\mathbf{x}' - \mathbf{x}}\right\rvert}$ as the outwards normal for a sphere centered at $\mathbf{x}$ of radius $\epsilon$. This integral is just $-4 \pi$, so we have

\begin{aligned}\Delta \int d^3 \mathbf{x}' \frac{1}{{-4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert} }} f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.17)

The convolution of $f(\mathbf{x})$ with $-\Delta/4 \pi {\left\lvert{\mathbf{x}}\right\rvert}$ produces $f(\mathbf{x})$, allowing an identification of this function with a delta function, since the two have the same operational effect

\begin{aligned}\int d^3 \mathbf{x}' \delta(\mathbf{x} - \mathbf{x}') f(\mathbf{x}') =f(\mathbf{x}).\end{aligned} \hspace{\stretch{1}}(2.18)
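The crucial surface-integral step above can be verified numerically. This snippet (my addition, not part of the original notes; the center point and radii are arbitrary choices) computes the flux of $(\mathbf{x} - \mathbf{x}')/{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}^3$ through a sphere around $\mathbf{x}$, dotted with the outward normal, and confirms the radius-independent value $-4\pi$:

```python
import numpy as np

# Quadrature check: the flux of (x - x')/|x - x'|^3 through a sphere of
# radius eps centered at x, with outward normal n = (x' - x)/|x' - x|,
# is -4*pi for any eps > 0.
def flux(x, eps, n=400):
    theta, dtheta = np.linspace(0.0, np.pi, n, retstep=True)
    phi, dphi = np.linspace(0.0, 2 * np.pi, n, retstep=True)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # outward unit normal on the sphere centered at x
    nrm = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
    xp = x[:, None, None] + eps * nrm              # points x' on the sphere
    F = (x[:, None, None] - xp) / eps**3           # (x - x')/|x - x'|^3
    dA = eps**2 * np.sin(T)                        # spherical area element
    return (np.einsum("ijk,ijk->jk", nrm, F) * dA).sum() * dtheta * dphi

print(flux(np.array([0.3, -1.0, 2.0]), 0.01))      # close to -4*pi for any eps
```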

### Returning to the d’Alembertian Green’s function.

We need two additional computations to finish the job. The first is the gradient of the delta function

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ? \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= ?\end{aligned}

Consider $\boldsymbol{\nabla} f(g(\mathbf{x}))$. This is

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))&=e_\alpha \frac{\partial {f(g(\mathbf{x}))}}{\partial {x^\alpha}} \\ &=e_\alpha \frac{\partial {f}}{\partial {g}} \frac{\partial {g}}{\partial {x^\alpha}},\end{aligned}

so we have

\begin{aligned}\boldsymbol{\nabla} f(g(\mathbf{x}))=\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g.\end{aligned} \hspace{\stretch{1}}(2.19)

The Laplacian is similar

\begin{aligned}\Delta f(g)&= \boldsymbol{\nabla} \cdot \left(\frac{\partial {f}}{\partial {g}} \boldsymbol{\nabla} g \right) \\ &= \partial_\alpha \left(\frac{\partial {f}}{\partial {g}} \partial_\alpha g \right) \\ &= \left( \partial_\alpha \frac{\partial {f}}{\partial {g}} \right) \partial_\alpha g +\frac{\partial {f}}{\partial {g}} \partial_{\alpha\alpha} g \\ &= \frac{\partial^2 {{f}}}{\partial {{g}}^2} \left( \partial_\alpha g \right) (\partial_\alpha g)+\frac{\partial {f}}{\partial {g}} \Delta g,\end{aligned}

so we have

\begin{aligned}\Delta f(g)= \frac{\partial^2 {{f}}}{\partial {{g}}^2} (\boldsymbol{\nabla} g)^2 +\frac{\partial {f}}{\partial {g}} \Delta g\end{aligned} \hspace{\stretch{1}}(2.20)
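These chain rule results are easy to spot-check symbolically. Here is a sympy fragment (my addition, using $f = \sin$ and $g = x^2 + y^2 + z^2$ as arbitrary smooth test functions) confirming the Laplacian composition rule 2.20:

```python
import sympy as sp

# Verify lap f(g) = f''(g) (grad g)^2 + f'(g) lap g for f = sin and
# g = x^2 + y^2 + z^2 (so f' = cos, f'' = -sin).
x, y, z = sp.symbols("x y z")
g = x**2 + y**2 + z**2
f = sp.sin(g)

def lap(h):
    return sum(sp.diff(h, v, 2) for v in (x, y, z))

grad_g_sq = sum(sp.diff(g, v)**2 for v in (x, y, z))
lhs = lap(f)
rhs = -sp.sin(g) * grad_g_sq + sp.cos(g) * lap(g)
print(sp.simplify(lhs - rhs))  # 0
```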

With $g(\mathbf{x}) = {\left\lvert{\mathbf{x}}\right\rvert}$, we’ll need the Laplacian of this vector magnitude

\begin{aligned}\Delta {\left\lvert{\mathbf{x}}\right\rvert}&=\partial_\alpha \frac{x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} + x_\alpha \partial_\alpha (x^\beta x^\beta)^{-1/2} \\ &=\frac{3}{{\left\lvert{\mathbf{x}}\right\rvert}} - \frac{x_\alpha x_\alpha}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \\ &= \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned}
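The result $\Delta {\left\lvert{\mathbf{x}}\right\rvert} = 2/{\left\lvert{\mathbf{x}}\right\rvert}$ can be confirmed with a finite-difference Laplacian at any off-origin point. This check is my addition; the evaluation point and step size are arbitrary:

```python
import numpy as np

# Central second differences for Laplacian(|x|), compared to 2/|x|.
def laplacian_abs(x, h=1e-3):
    r = lambda v: np.linalg.norm(v)
    total = 0.0
    for a in range(3):
        e = np.zeros(3); e[a] = h
        total += (r(x + e) - 2 * r(x) + r(x - e)) / h**2
    return total

x0 = np.array([1.0, -2.0, 0.5])
print(laplacian_abs(x0), 2 / np.linalg.norm(x0))  # the two agree
```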

So that we have

\begin{aligned}\boldsymbol{\nabla} \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &= -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \\ \Delta \delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) &=\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \end{aligned} \hspace{\stretch{1}}(2.21)

Now we have all the bits and pieces of 2.8 ready to assemble

\begin{aligned}\Delta \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }&=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \\ &\quad +\frac{1}{{2\pi}} \left( - \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}^3} \right)\cdot\left( -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{\mathbf{x}}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &\quad +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }}\left(\frac{1}{{c^2}} \delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) -\frac{1}{{c}} \delta'\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \frac{2}{{\left\lvert{\mathbf{x}}\right\rvert}} \right) \\ &=-\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) +\frac{1}{{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2 }}\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \end{aligned}

Since we also have

\begin{aligned}\frac{1}{{c^2}} \partial_{tt}\frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\frac{\delta''\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} c^2}\end{aligned} \hspace{\stretch{1}}(2.23)

The $\delta''$ terms cancel out in the d’Alembertian, leaving just

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.24)

Noting that the spatial delta function is non-zero only when $\mathbf{x} = 0$, which means $\delta(t - {\left\lvert{\mathbf{x}}\right\rvert}/c) = \delta(t)$ in this product, we finally have

\begin{aligned}\square \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }=\delta(t) \delta^3(\mathbf{x}) \end{aligned} \hspace{\stretch{1}}(2.25)

We write

\begin{aligned}G(\mathbf{x}, t) = \frac{\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert} }\end{aligned} \hspace{\stretch{1}}(2.26)

for this Green's function.

# Elaborating on the wave equation Green’s function

The Green’s function 2.26 is a distribution that is non-zero only on the future lightcone. Observe that for $t < 0$ we have

\begin{aligned}\delta\left(t - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)&=\delta\left(-{\left\lvert{t}\right\rvert} - \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right) \\ &= 0.\end{aligned}

We say that $G$ is supported only on the future light cone. At $\mathbf{x} = 0$, only the contributions for $t > 0$ matter. Note that in the “old days”, Green’s functions used to be called influence functions, a name that works particularly well in this case. We have other Green’s functions for the d’Alembertian. The one above is called the retarded Green’s function, and we also have an advanced Green’s function. Writing $+$ for advanced and $-$ for retarded, these are

\begin{aligned}G_{\pm} = \frac{\delta\left(t \pm \frac{{\left\lvert{\mathbf{x}}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x}}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(3.27)

There are also causal and non-causal variations that won’t be of interest for this course.

This arms us now to solve any problem in the Lorentz gauge

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' dt' \frac{\delta\left(t - t' - \frac{{\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}}{c}\right)}{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}j^k(\mathbf{x}', t')+\text{An arbitrary collection of EM waves.}\end{aligned} \hspace{\stretch{1}}(3.28)

The additional EM waves are the possible contributions from the homogeneous equation.

Since $\delta(t - t' - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c)$ is non-zero only when $t' = t - {\left\lvert{\mathbf{x} -\mathbf{x}'}\right\rvert}/c$, the non-homogeneous parts of 3.28 reduce to

\begin{aligned}A^k(\mathbf{x}, t) = \frac{1}{{c}} \int d^3 \mathbf{x}' \frac{j^k(\mathbf{x}', t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c)}{4 \pi {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(3.29)

Our potentials at time $t$ and spatial position $\mathbf{x}$ are completely specified in terms of the sums of the currents acting at the retarded time $t - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c$. The field can only depend on the charge and current distribution in the past. Specifically, it can only depend on the charge and current distribution on the past light cone of the spacetime point at which we measure the field.

# Example of the Green’s function. Consider a charged particle moving on a worldline

\begin{aligned}(c t, \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.30)

($c$ for classical)

For this particle

\begin{aligned}\rho(\mathbf{x}, t) &= e \delta^3(\mathbf{x} - \mathbf{x}_c(t)) \\ \mathbf{j}(\mathbf{x}, t) &= e \dot{\mathbf{x}}_c(t) \delta^3(\mathbf{x} - \mathbf{x}_c(t))\end{aligned} \hspace{\stretch{1}}(4.31)

\begin{aligned}\begin{bmatrix}A^0(\mathbf{x}, t) \\ \mathbf{A}(\mathbf{x}, t)\end{bmatrix}&=\frac{1}{{c}}\int d^3 \mathbf{x}' dt'\frac{ \delta\left( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}/c \right) }{{\left\lvert{\mathbf{x} - \mathbf{x}'}\right\rvert}}\begin{bmatrix}c e \\ e \dot{\mathbf{x}}_c(t')\end{bmatrix}\delta^3(\mathbf{x}' - \mathbf{x}_c(t')) \\ &=\int_{-\infty}^\infty dt'\frac{ \delta\left( t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c \right) }{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\begin{bmatrix}e \\ e \frac{\dot{\mathbf{x}}_c(t')}{c}\end{bmatrix}\end{aligned}

PICTURE: light cones, and curved worldline. Pick an arbitrary point $(\mathbf{x}_0, t_0)$, and draw the past light cone, looking at where this intersects with the trajectory

For the arbitrary point $(\mathbf{x}_0, t_0)$ we see that this point and the retarded time $(\mathbf{x}_c(t_r), t_r)$ obey the relation

\begin{aligned}c (t_0 - t_r) = {\left\lvert{\mathbf{x}_0 - \mathbf{x}_c(t_r)}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.33)

This retarded time is unique. There is only one such intersection.
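For a subluminal worldline the retarded-time condition has exactly one root, which can be found numerically. The snippet below (my addition; the straight-line worldline, observation point, and speed are arbitrary illustrative choices, not the text's example) solves $c(t_0 - t_r) = {\left\lvert{\mathbf{x}_0 - \mathbf{x}_c(t_r)}\right\rvert}$ by bisection:

```python
import numpy as np

# Retarded time for a worldline xc(t) = (v t, 0, 0) with v < c.
# f is positive for very early tr and negative at tr = t0, so a
# bisection on (-100, t0) brackets the unique root.
c, v = 1.0, 0.5
xc = lambda t: np.array([v * t, 0.0, 0.0])
x0, t0 = np.array([2.0, 1.0, 0.0]), 3.0

f = lambda tr: c * (t0 - tr) - np.linalg.norm(x0 - xc(tr))
lo, hi = -100.0, t0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
tr = (lo + hi) / 2
print(tr, f(tr))   # root in the past of t0, residual near zero
```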

Our job is to calculate

\begin{aligned}\int_{-\infty}^\infty \delta(f(x)) g(x) \,dx = \frac{g(x_{*})}{{\left\lvert{f'(x_{*})}\right\rvert}}\end{aligned} \hspace{\stretch{1}}(4.34)

where $x_{*}$ is the single simple root of $f(x_{*}) = 0$.
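This delta-of-a-function rule can be illustrated numerically by replacing the delta with a narrow Gaussian. The check below is my addition; $f$, $g$, and the Gaussian width are arbitrary test choices:

```python
import numpy as np

# Check int delta(f(x)) g(x) dx = g(x*)/|f'(x*)| with a narrow Gaussian
# standing in for the delta.  Here f(x) = 3(x - 2) has its single root
# at x* = 2 with f'(x*) = 3, and g = cos.
sigma = 1e-3
delta = lambda u: np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
f = lambda x: 3.0 * (x - 2.0)
g = lambda x: np.cos(x)

xs = np.linspace(1.0, 3.0, 400001)
integral = np.sum(delta(f(xs)) * g(xs)) * (xs[1] - xs[0])
print(integral, np.cos(2.0) / 3.0)   # the two agree
```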

\begin{aligned}f(t') = t - t' - {\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}/c\end{aligned} \hspace{\stretch{1}}(4.35)

\begin{aligned}\frac{\partial {f}}{\partial {t'}}&= -1 - \frac{1}{{c}} \frac{\partial {}}{\partial {t'}} \sqrt{ (\mathbf{x} - \mathbf{x}_c(t')) \cdot (\mathbf{x} - \mathbf{x}_c(t')) } \\ &= -1 + \frac{1}{{c}} \frac{(\mathbf{x} - \mathbf{x}_c(t')) \cdot \mathbf{v}_c(t')}{{\left\lvert{\mathbf{x} - \mathbf{x}_c(t')}\right\rvert}}\end{aligned}

# References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

## PHY450H1S. Relativistic Electrodynamics Lecture 16 (Taught by Prof. Erich Poppitz). Monochromatic EM fields. Poynting vector and energy density conservation

Posted by peeterjoot on March 3, 2011

Covering chapter 6 material from the text [1].

Covering lecture notes pp. 115-127: properties of monochromatic plane EM waves (122-124); energy and energy flux of the EM field and energy conservation from the equations of motion (125-127) [Wednesday, Mar. 2]

# Review. Solution to the wave equation.

Recall that in the Coulomb gauge

\begin{aligned}A^0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(2.1)

our equation to solve is

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A} = 0.\end{aligned} \hspace{\stretch{1}}(2.3)

We found that the general solution was

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3\mathbf{k}}{(2 \pi)^3} \left(e^{i (\mathbf{k} \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(-\mathbf{k})+e^{i (\mathbf{k} \cdot \mathbf{x} - \omega_k t)} \boldsymbol{\beta}(\mathbf{k})\right)\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}\mathbf{k} \cdot \boldsymbol{\beta}(\mathbf{k}) = 0\end{aligned} \hspace{\stretch{1}}(2.5)

It is clear that this is a solution since

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) e^{i (\mathbf{k} \cdot \mathbf{x} \pm \omega_k t)} = 0\end{aligned} \hspace{\stretch{1}}(2.6)

# Moving to physically relevant results.

Since the most general solution is a sum over $\mathbf{k}$, it is enough to consider only a single $\mathbf{k}$, or equivalently, take

\begin{aligned}\boldsymbol{\beta}(\mathbf{k}) &= \boldsymbol{\beta} ( 2\pi)^3 \delta^3(\mathbf{k} - \mathbf{p}) \\ \boldsymbol{\beta}^{*}(-\mathbf{k}) &= \boldsymbol{\beta}^{*} ( 2\pi)^3 \delta^3(-\mathbf{k} - \mathbf{p})\end{aligned} \hspace{\stretch{1}}(3.7)

but we have the freedom to pick a real and constant $\boldsymbol{\beta}$. Now our solution is

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \boldsymbol{\beta} \left(e^{-i (\mathbf{p} \cdot \mathbf{x} - \omega_k t)}+e^{i (\mathbf{p} \cdot \mathbf{x} - \omega_k t)}\right)= 2 \boldsymbol{\beta} \cos( \omega t - \mathbf{p} \cdot \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.9)

and absorbing the factor of two into $\boldsymbol{\beta}$, we take $\mathbf{A}(\mathbf{x}, t) = \boldsymbol{\beta} \cos( \omega t - \mathbf{p} \cdot \mathbf{x})$,

where

\begin{aligned}\boldsymbol{\beta} \cdot \mathbf{p} = 0\end{aligned} \hspace{\stretch{1}}(3.10)
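That $\mathbf{A} = \boldsymbol{\beta} \cos(\omega t - \mathbf{p} \cdot \mathbf{x})$ with $\omega = c {\left\lvert{\mathbf{p}}\right\rvert}$ really solves the vacuum wave equation is quick to confirm symbolically. This sympy check is my addition; $\boldsymbol{\beta}$ factors out, so a single scalar component suffices:

```python
import sympy as sp

# Check (1/c^2) d_tt A - lap A = 0 for A = cos(w t - p.x), w = c |p|.
t, x, y, z = sp.symbols("t x y z")
c = sp.Symbol("c", positive=True)
px, py, pz = sp.symbols("p_x p_y p_z", real=True)
w = c * sp.sqrt(px**2 + py**2 + pz**2)
phase = w * t - (px * x + py * y + pz * z)
A = sp.cos(phase)          # one Cartesian component; beta factors out

wave_op = sp.diff(A, t, 2) / c**2 - sum(sp.diff(A, v, 2) for v in (x, y, z))
print(sp.simplify(wave_op))  # 0
```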

FIXME:DIY: show that a complex $\boldsymbol{\beta}$ also works.

Let’s choose

\begin{aligned}\mathbf{p} = (p, 0, 0)\end{aligned} \hspace{\stretch{1}}(3.11)

Since

\begin{aligned}\mathbf{p} \cdot \boldsymbol{\beta} = p_x \beta_x\end{aligned} \hspace{\stretch{1}}(3.12)

we must have

\begin{aligned}\boldsymbol{\beta}_x = 0\end{aligned} \hspace{\stretch{1}}(3.13)

so

\begin{aligned}\boldsymbol{\beta} = (0, \beta_y, \beta_z)\end{aligned} \hspace{\stretch{1}}(3.14)

\paragraph{Claim:} The Coulomb gauge $0 = \boldsymbol{\nabla} \cdot \mathbf{A} = (\boldsymbol{\beta} \cdot \mathbf{p})\sin(\omega t - \mathbf{p} \cdot \mathbf{x})$ implies that there are two linearly independent choices of $\boldsymbol{\beta}$ and $\mathbf{p}$.

FIXME: missing exactly how this is?

PICTURE:

$\boldsymbol{\beta}_1$, $\boldsymbol{\beta}_2$, $\mathbf{p}$ all mutually perpendicular.

\begin{aligned}\mathbf{E}&= -\frac{\partial {\mathbf{A}}}{\partial {ct}} \\ &= -\frac{\boldsymbol{\beta}}{c} \frac{\partial {}}{\partial {t}} \cos(\omega t - \mathbf{p} \cdot \mathbf{x}) \\ &= \frac{1}{{c}} \boldsymbol{\beta} \omega_p\sin(\omega t - \mathbf{p} \cdot \mathbf{x})\end{aligned}

(recall: $\omega_p = c{\left\lvert{\mathbf{p}}\right\rvert}$)

\begin{aligned}\boxed{\mathbf{E} = \boldsymbol{\beta} {\left\lvert{\mathbf{p}}\right\rvert} \sin(\omega t - \mathbf{p} \cdot \mathbf{x})}\end{aligned} \hspace{\stretch{1}}(3.15)

\begin{aligned}\mathbf{B}&= \boldsymbol{\nabla} \times \mathbf{A} \\ &= \boldsymbol{\nabla} \times ( \boldsymbol{\beta} \cos(\omega t - \mathbf{p} \cdot \mathbf{x}) ) \\ &= (\boldsymbol{\nabla} \cos(\omega t - \mathbf{p} \cdot \mathbf{x})) \times \boldsymbol{\beta} \\ &= \sin(\omega t - \mathbf{p} \cdot \mathbf{x}) \mathbf{p} \times \boldsymbol{\beta}\end{aligned}

\begin{aligned}\boxed{\mathbf{B} = (\mathbf{p} \times \boldsymbol{\beta}) \sin(\omega t - \mathbf{p} \cdot \mathbf{x})}\end{aligned} \hspace{\stretch{1}}(3.16)
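Both boxed results can be rederived mechanically from $\mathbf{A}$ with sympy. This check is my addition, using the concrete choices $\mathbf{p} \parallel \mathbf{e}_x$ and $\boldsymbol{\beta} \parallel \mathbf{e}_y$ that appear in the examples of this section:

```python
import sympy as sp

# E = -(1/c) dA/dt and B = curl A for A = beta cos(w t - p.x), with
# p = (p, 0, 0), beta = (0, b, 0), and w = c p, compared against the
# boxed results E = beta |p| sin(...) and B = (p x beta) sin(...).
t, x, y, z, c, p, b = sp.symbols("t x y z c p b", positive=True)
phase = c * p * t - p * x
A = sp.Matrix([0, b * sp.cos(phase), 0])

E = -sp.diff(A, t) / c
B = sp.Matrix([                        # curl in Cartesian coordinates
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])
print(sp.simplify(E - sp.Matrix([0, b * p * sp.sin(phase), 0])))   # zero vector
print(sp.simplify(B - sp.Matrix([0, 0, b * p * sp.sin(phase)])))   # zero vector
```

Here $(0, 0, b p)$ is exactly $\mathbf{p} \times \boldsymbol{\beta}$ for these choices, so both boxed formulas check out.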

\paragraph{Example:} $\mathbf{p} \parallel \mathbf{e}_x$, $\mathbf{B} \parallel \mathbf{e}_y$ or $\mathbf{e}_z$

(since we have two linearly independent choices)

\paragraph{Example:} take $\boldsymbol{\beta} \parallel \mathbf{e}_y$

\begin{aligned}\mathbf{E} &= \boldsymbol{\beta} p \sin(c p t - p x) \\ \mathbf{B} &= (\mathbf{p} \times \boldsymbol{\beta}) \sin(c p t - p x)\end{aligned} \hspace{\stretch{1}}(3.17)

At $t = 0$

\begin{aligned}\mathbf{E} &= -\boldsymbol{\beta} p \sin( p x) \\ \mathbf{B} &= - p {\left\lvert{\boldsymbol{\beta}}\right\rvert} \mathbf{e}_z \sin(p x)\end{aligned} \hspace{\stretch{1}}(3.19)

PICTURE: two oscillating mutually perpendicular sinusoids.

So physically, we see that $\mathbf{p}$ is the direction of propagation. We have always

\begin{aligned}\mathbf{p} \perp \mathbf{E}\end{aligned} \hspace{\stretch{1}}(3.21)

and we have two possible polarizations.

Convention is usually to take the direction of oscillation of $\mathbf{E}$ as the polarization of the wave.

This is the starting point for the field of optics, because the polarization of the incident wave is strongly tied to how much of the wave will reflect off of a surface with a given index of refraction $n$.

# EM waves carrying energy and momentum

Maxwell field in vacuum is the sum of plane monochromatic waves, two per wave vector.

PICTURE:

\begin{aligned}\mathbf{E} &\parallel \mathbf{e}_3 \\ \mathbf{B} &\parallel \mathbf{e}_1 \\ \mathbf{k} &\parallel \mathbf{e}_2\end{aligned}

PICTURE:

\begin{aligned}\mathbf{B} &\parallel -\mathbf{e}_3 \\ \mathbf{E} &\parallel \mathbf{e}_1 \\ \mathbf{k} &\parallel \mathbf{e}_2\end{aligned}

(two linearly independent polarizations)

Our wave frequency is

\begin{aligned}\omega_{\mathbf{k}} = c {\left\lvert{\mathbf{k}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(4.22)

The wavelength is the spatial period, the value $\lambda$ such that the wave is unchanged under $x \rightarrow x + \frac{2 \pi}{k}$.

FIXME:DIY: see:

\begin{aligned}\sin(k c t - k x)\end{aligned} \hspace{\stretch{1}}(4.23)

\begin{aligned}\lambda_{\mathbf{k}} = \frac{2 \pi}{k}\end{aligned} \hspace{\stretch{1}}(4.24)

period

\begin{aligned}T = \frac{ 2 \pi} {k c} = \frac{\lambda_\mathbf{k}}{c}\end{aligned} \hspace{\stretch{1}}(4.25)

# Energy and momentum of EM waves.

## Classical mechanics motivation.

To motivate our approach, let’s recall one route from our equations of motion in classical mechanics, to the energy conservation relation. Our EOM in one dimension is

\begin{aligned}m \frac{d}{dt} \dot{x} = - \mathcal{U}'(x).\end{aligned} \hspace{\stretch{1}}(5.26)

We can multiply both sides by what we take the time derivative of

\begin{aligned}m \dot{x} \frac{d{{\dot{x}}}}{dt} = - \dot{x} \mathcal{U}'(x),\end{aligned} \hspace{\stretch{1}}(5.27)

and then manipulate it a bit so that we have time derivatives on both sides

\begin{aligned}\frac{d{{}}}{dt} \frac{m \dot{x}^2}{2} = - \frac{d{{ \mathcal{U}(x) }}}{dt}.\end{aligned} \hspace{\stretch{1}}(5.28)

Taking differences, we have

\begin{aligned}\frac{d{{}}}{dt} \left( \frac{m \dot{x}^2}{2} + \mathcal{U}(x) \right) = 0,\end{aligned} \hspace{\stretch{1}}(5.29)

which allows us to find a conservation relationship that we label energy conservation ($\mathcal{E} = K + \mathcal{U}$).
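This energy conservation argument can be watched in action numerically. The short integration below is my addition; the mass, potential $\mathcal{U} = x^4/4$, and initial conditions are arbitrary choices:

```python
import numpy as np

# Integrate m xdd = -U'(x) for U(x) = x^4/4 with a leapfrog
# (kick-drift-kick) step and confirm E = m v^2/2 + U(x) stays constant.
m, dt = 1.0, 1e-3
U = lambda x: x**4 / 4
Uprime = lambda x: x**3

x, v = 1.0, 0.0
E0 = 0.5 * m * v**2 + U(x)
for _ in range(20000):
    v += -Uprime(x) / m * dt / 2      # half kick
    x += v * dt                        # drift
    v += -Uprime(x) / m * dt / 2      # half kick
E1 = 0.5 * m * v**2 + U(x)
print(E0, E1)  # equal to good accuracy
```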

## Doing the same thing for Maxwell’s equations.

Poppitz claims we have very few tricks in physics, and we really just do the same thing for our EM case. Our equations are a bit messier to start with, and for the vacuum, our non-divergence equations are

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} -\frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{j} \\ \boldsymbol{\nabla} \times \mathbf{E} +\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0\end{aligned} \hspace{\stretch{1}}(5.30)

We can dot these with $\mathbf{E}$ and $\mathbf{B}$ respectively, repeating the trick of “multiplying” by what we take the time derivative of

\begin{aligned}\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) -\frac{1}{{c}} \mathbf{E} \cdot \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j} \\ \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) +\frac{1}{{c}} \mathbf{B} \cdot \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0,\end{aligned} \hspace{\stretch{1}}(5.32)

and then take differences

\begin{aligned}\frac{1}{{c}} \left(\mathbf{B} \cdot \frac{\partial {\mathbf{B}}}{\partial {t}}+ \mathbf{E} \cdot \frac{\partial {\mathbf{E}}}{\partial {t}} \right) + \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) -\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) =-\frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.34)

\paragraph{Claim:}

\begin{aligned}-\mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E}) +\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B}) = \boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} ).\end{aligned} \hspace{\stretch{1}}(5.35)

This is almost trivial with an expansion of the RHS in tensor notation

\begin{aligned}\boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} )&=\partial_\alpha e^{\alpha \beta \sigma} B^\beta E^\sigma \\ &=e^{\alpha \beta \sigma} (\partial_\alpha B^\beta) E^\sigma+e^{\alpha \beta \sigma} B^\beta (\partial_\alpha E^\sigma) \\ &=\mathbf{E} \cdot (\boldsymbol{\nabla} \times \mathbf{B})-\mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{E})\qquad \square\end{aligned}
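The same identity can be spot-checked symbolically, independent of the tensor expansion. This sympy fragment is my addition; the two polynomial fields are arbitrary test choices:

```python
import sympy as sp

# Check div(B x E) = E . curl B - B . curl E for arbitrary test fields.
x, y, z = sp.symbols("x y z")
B = sp.Matrix([x * y, z**2, x + y * z])
E = sp.Matrix([y**3, x * z, x * y * z])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

lhs = div(B.cross(E))
rhs = E.dot(curl(B)) - B.dot(curl(E))
print(sp.simplify(lhs - rhs))  # 0
```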

Regrouping we have

\begin{aligned}\frac{1}{{2 c}} \frac{\partial {}}{\partial {t}} \left(\mathbf{B}^2 + \mathbf{E}^2 \right) - \boldsymbol{\nabla} \cdot ( \mathbf{B} \times \mathbf{E} )=-\frac{4 \pi}{c} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.36)

A final rescaling makes the units natural

\begin{aligned}\frac{\partial {}}{\partial {t}} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} - \boldsymbol{\nabla} \cdot \left( \frac{c}{4 \pi} \mathbf{B} \times \mathbf{E} \right) = - \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.37)

We define the cross product term as the Poynting vector

\begin{aligned}\mathbf{S} &= \frac{c}{4 \pi} \mathbf{B} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.38)
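With this definition of $\mathbf{S}$ (note the lecture's $\mathbf{B} \times \mathbf{E}$ ordering), the vacuum ($\mathbf{j} = 0$) conservation law 5.37 can be verified for the plane wave of the previous section. This sympy check is my addition, reusing the choices $\mathbf{p} \parallel \mathbf{e}_x$, $\boldsymbol{\beta} \parallel \mathbf{e}_y$:

```python
import sympy as sp

# Check d_t (E^2 + B^2)/(8 pi) - div S = 0, with S = (c/4 pi) B x E,
# for the plane wave fields E = (0, b p sin, 0), B = (0, 0, b p sin).
t, x, y, z, c, p, b = sp.symbols("t x y z c p b", positive=True)
phase = c * p * t - p * x
E = sp.Matrix([0, b * p * sp.sin(phase), 0])
B = sp.Matrix([0, 0, b * p * sp.sin(phase)])

U = (E.dot(E) + B.dot(B)) / (8 * sp.pi)
S = (c / (4 * sp.pi)) * B.cross(E)
divS = sum(sp.diff(S[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(sp.diff(U, t) - divS))  # 0
```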

Suppose we integrate over a spatial volume. This gives us

\begin{aligned}\frac{\partial {}}{\partial {t}}\int_V d^3 \mathbf{x} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} - \int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} = - \int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.39)

Our Poynting integral can be converted to a surface integral utilizing Stokes theorem

\begin{aligned}\int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} = \int_{\partial V} d^2 \sigma \mathbf{n} \cdot \mathbf{S} =\int_{\partial V} d^2 \boldsymbol{\sigma} \cdot \mathbf{S}\end{aligned} \hspace{\stretch{1}}(5.40)

We make the interpretations

\begin{aligned}\int_V d^3 \mathbf{x} \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} &= \mbox{energy} \\ \int_V d^3 \mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{S} &= \mbox{energy flow through the surface per unit time} \\ - \int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j} &= \mbox{work done}\end{aligned}

\paragraph{Justifying the sign, and clarifying work done by what, above.}

Recall that the energy term of the Lorentz force equation was

\begin{aligned}\frac{d{{\mathcal{E}_{\text{kinetic}}}}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(5.41)

and

\begin{aligned}\mathbf{j} = e \rho \mathbf{v}\end{aligned} \hspace{\stretch{1}}(5.42)

so

\begin{aligned}\int_V d^3 \mathbf{x} \mathbf{E} \cdot \mathbf{j}\end{aligned} \hspace{\stretch{1}}(5.43)

represents the rate of change of kinetic energy of the charged particles as they move through a field. If this is positive, then the charge distribution has gained energy. The negation of this quantity would represent energy transfer to the field from the charge distribution, the work done \underline{on the field} by the charge distribution.

## Aside: As a four vector relationship.

In tutorial today (after this lecture, but before typing up these lecture notes in full), we used $\mathcal{U}$ for the energy density term above

\begin{aligned}\mathcal{U} = \frac{ \mathbf{E}^2 + \mathbf{B}^2 }{8 \pi} .\end{aligned} \hspace{\stretch{1}}(5.44)

This allows us to group the quantities in our conservation relationship above nicely

\begin{aligned}\frac{\partial {\mathcal{U}}}{\partial {t}} - \boldsymbol{\nabla} \cdot \mathbf{S} = - \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.45)

It appears natural to write 5.45 in the form of a four divergence. Suppose we define

\begin{aligned}P^i = (\mathcal{U}, -\mathbf{S}/c)\end{aligned} \hspace{\stretch{1}}(5.46)

then, with $\partial_i = (\partial_{ct}, \boldsymbol{\nabla})$, we have

\begin{aligned}\partial_i P^i = - \frac{1}{{c}} \mathbf{E} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.47)

Since the LHS has the appearance of a four scalar, this seems to imply that $\mathbf{E} \cdot \mathbf{j}$ is a Lorentz invariant. It is curious that we have only the four scalar that comes from the energy term of the Lorentz force on the RHS of the conservation relationship. Peeking ahead at the text, this appears to be why a rank two energy tensor $T^{ij}$ is introduced. For a relativistically natural quantity, we ought to have a conservation relationship also associated with each of the momentum change components of the four vector Lorentz force equation too.

# References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

## Exploring Stokes Theorem in tensor form.

Posted by peeterjoot on February 22, 2011

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind.

# Original Post:

## Motivation.

I’ve worked through Stokes theorem concepts a couple times on my own now. One of the first times, I was trying to formulate this in a Geometric Algebra context. I had to resort to a tensor decomposition, and pictures, before ending back in the Geometric Algebra description. Later I figured out how to do it entirely with a Geometric Algebra description, and was able to eliminate reliance on the pictures that made the path to generalization to higher dimensional spaces unclear.

It’s my expectation that if one started with a tensor description, the proof entirely in tensor form would not be difficult. This is what I’d like to try this time. To start off, I’ll temporarily use the Geometric Algebra curl expression so I know what my tensor equation starting point will be, but once that starting point is found, we can work entirely in coordinate representation. For somebody who already knows that this is the starting point, all of this initial motivation can be skipped.

# Translating the exterior derivative to a coordinate representation.

Our starting point is a curl, dotted with a volume element of the same grade, so that the result is a scalar

\begin{aligned}\int d^n x \cdot (\nabla \wedge A).\end{aligned} \hspace{\stretch{1}}(2.1)

Here $A$ is a blade of grade $n-1$, and we wedge this with the gradient for the space

\begin{aligned}\nabla \equiv e^i \partial_i = e_i \partial^i,\end{aligned} \hspace{\stretch{1}}(2.2)

where we work with a basis (not necessarily orthonormal) $\{e_i\}$, and the reciprocal frame for that basis $\{e^i\}$ defined by the relation

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.3)
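For a concrete basis the reciprocal frame is just a matrix inverse. This small numeric illustration is my addition; the (non-orthonormal) basis below is an arbitrary choice:

```python
import numpy as np

# Stack the basis vectors e_i as rows of M; the rows of inv(M).T are
# then the reciprocal frame vectors e^i, satisfying e^i . e_j = delta^i_j.
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])      # rows: e_1, e_2, e_3
e_recip = np.linalg.inv(e).T          # rows: e^1, e^2, e^3
print(e_recip @ e.T)                  # identity matrix
```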

Our coordinates in these basis sets are

\begin{aligned}x \cdot e^i & \equiv x^i \\ x \cdot e_i & \equiv x_i\end{aligned} \hspace{\stretch{1}}(2.4)

so that

\begin{aligned}x = x^i e_i = x_i e^i.\end{aligned} \hspace{\stretch{1}}(2.6)

The operator coordinates of the gradient are defined in the usual fashion

\begin{aligned}\partial_i & \equiv \frac{\partial }{\partial {x^i}} \\ \partial^i & \equiv \frac{\partial}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(2.7)

The volume element for the subspace that we are integrating over we will define in terms of an arbitrary parametrization

\begin{aligned}x = x(\alpha_1, \alpha_2, \cdots, \alpha_n)\end{aligned} \hspace{\stretch{1}}(2.9)

The subspace can be considered spanned by the differential elements in each of the respective curves where all but the $i$th parameter are held constant.

\begin{aligned}dx_{\alpha_i}= d\alpha_i \frac{\partial x}{\partial {\alpha_i}}= d\alpha_i \frac{\partial {x^j}}{\partial {\alpha_i}} e_j.\end{aligned} \hspace{\stretch{1}}(2.10)

We assume that the integral is being performed in a subspace for which none of these differential elements in that region are linearly dependent (i.e. our Jacobian determinant must be non-zero).

The magnitude of the wedge product of all such differential elements provides the volume of the parallelogram, or parallelepiped (or higher dimensional analogue), and is

\begin{aligned}d^n x=d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial x}{\partial {\alpha_n}} \wedge\cdots \wedge\frac{\partial x}{\partial {\alpha_2}}\wedge\frac{\partial x}{\partial {\alpha_1}}.\end{aligned} \hspace{\stretch{1}}(2.11)

The volume element is an oriented quantity, and may be adjusted with an arbitrary sign (or equivalently an arbitrary permutation of the differential elements in the wedge product), and we’ll see that it is convenient for the translation to tensor form to express these in reversed order.

Let’s write

\begin{aligned}d^n \alpha = d\alpha_1 d\alpha_2 \cdots d\alpha_n,\end{aligned} \hspace{\stretch{1}}(2.12)

so that our volume element in coordinate form is

\begin{aligned}d^n x = d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ).\end{aligned} \hspace{\stretch{1}}(2.13)

Our curl will also be a grade $n$ blade. We write for the $n-1$ grade blade

\begin{aligned}A = A_{b c \cdots d} (e^b \wedge e^c \wedge \cdots e^d),\end{aligned} \hspace{\stretch{1}}(2.14)

where $A_{b c \cdots d}$ is antisymmetric (i.e. $A = a_1 \wedge a_2 \wedge \cdots a_{n-1}$ for some set of vectors $a_i, i \in 1 .. n-1$).

With our gradient in coordinate form

\begin{aligned}\nabla = e^a \partial_a,\end{aligned} \hspace{\stretch{1}}(2.15)

the curl is then

\begin{aligned}\nabla \wedge A = \partial_a A_{b c \cdots d} (e^a \wedge e^b \wedge e^c \wedge \cdots e^d).\end{aligned} \hspace{\stretch{1}}(2.16)

The differential form for our integral can now be computed by expanding out the dot product. We want

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)=((((( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ) \cdot e^a ) \cdot e^b ) \cdot e^c ) \cdot \cdots ) \cdot e^d.\end{aligned} \hspace{\stretch{1}}(2.17)

Evaluation of the interior dot products introduces the intrinsic antisymmetry required for Stokes theorem. For example, with

\begin{aligned}( e_n \wedge e_{n-1} \wedge \cdots \wedge e_2 \wedge e_1 ) \cdot e^a & =( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_2 ) (e_1 \cdot e^a) \\ & -( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_1 ) (e_2 \cdot e^a) \\ & +( e_n \wedge e_{n-1} \wedge \cdots \wedge e_4 \wedge e_2 \wedge e_1 ) (e_3 \cdot e^a) \\ & \cdots \\ & + (-1)^{n-1}( e_{n-1} \wedge e_{n-2} \wedge \cdots \wedge e_2 \wedge e_1 ) (e_n \cdot e^a)\end{aligned}

Since $e_i \cdot e^a = {\delta_i}^a$ our end result is a completely antisymmetric set of permutations of all the deltas

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)={\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l,\end{aligned} \hspace{\stretch{1}}(2.18)

and the curl integral takes its coordinate form

\begin{aligned}\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}{\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l.\end{aligned} \hspace{\stretch{1}}(2.19)

One final contraction of the paired indexes gives us our Stokes integral in its coordinate representation

\begin{aligned}\boxed{\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}}\end{aligned} \hspace{\stretch{1}}(2.20)

We now have a starting point that is free of any of the abstraction of Geometric Algebra or differential forms. We can identify the products of partials here as components of a scalar hypervolume element (possibly signed depending on the orientation of the parametrization)

\begin{aligned}d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\end{aligned} \hspace{\stretch{1}}(2.21)

This is also a specific computation recipe for these hypervolume components, something that may not be obvious when we allow for general metrics for the space. We are also allowing for non-orthonormal coordinate representations, and arbitrary parametrization of the subspace that we are integrating over (our integral need not have the same dimension as the underlying vector space).

Observe that when the number of parameters equals the dimension of the space, we can write out the antisymmetric term utilizing the determinant of the Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}= \epsilon^{a b \cdots d} {\left\lvert{ \frac{\partial(x^1, x^2, \cdots x^n)}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.22)

When the dimension of the space $n$ is greater than the number of parameters for the integration hypervolume in question, the antisymmetric sum of partials is still the determinant of a Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a_1}}}{\partial {\alpha_1}}\frac{\partial {x^{a_2}}}{\partial {\alpha_2}}\cdots \frac{\partial {x^{a_{n-1}}}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{a_n]}}}{\partial {\alpha_n}}= {\left\lvert{ \frac{\partial(x^{a_1}, x^{a_2}, \cdots x^{a_n})}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.23)

however, we will have one such Jacobian for each unique choice of indexes.
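As an aside not in the original post, the determinant identity above is easy to spot check numerically. The following sketch (my own, using an arbitrary random matrix as a stand-in for the Jacobian) verifies for $n = 3$ that the antisymmetrized product of partials equals the Levi-Civita symbol times the full Jacobian determinant:

```python
# Numeric spot check (not from the post): for n = 3 parameters in a 3D space,
# the antisymmetrized product of partials equals eps^{abc} det(Jacobian).
from itertools import permutations
import random

random.seed(3)
# M[a][i] plays the role of dx^a / d(alpha_i), with arbitrary random entries.
M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]

def sgn(p):
    """Sign of a sequence of distinct values, via inversion count."""
    s = 1
    p = list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def bracket(a, b, c):
    """Antisymmetrized product dx^[a/d(alpha_1) dx^b/d(alpha_2) dx^c]/d(alpha_3)."""
    idx = (a, b, c)
    return sum(sgn(p) * M[idx[p[0]]][0] * M[idx[p[1]]][1] * M[idx[p[2]]][2]
               for p in permutations(range(3)))

def eps(a, b, c):
    return 0 if len({a, b, c}) < 3 else sgn((a, b, c))

# Leibniz formula determinant of the full Jacobian
det = sum(sgn(p) * M[p[0]][0] * M[p[1]][1] * M[p[2]][2]
          for p in permutations(range(3)))

for a in range(3):
    for b in range(3):
        for c in range(3):
            assert abs(bracket(a, b, c) - eps(a, b, c) * det) < 1e-12
print("antisymmetrized partials = eps^{abc} det(Jacobian)")
```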

# The Stokes work starts here.

The task is to relate our integral to the boundary of this volume, coming up with an explicit recipe for the description of that bounding surface, and determining the exact form of the reduced rank integral. This job is essentially to reduce the ranks of the tensors that are being contracted in our Stokes integral. With the derivative applied to our rank $n-1$ antisymmetric tensor $A_{b c \cdots d}$, we can apply the chain rule and examine the permutations so that this can be rewritten as a contraction of $A$ itself with a set of rank $n-1$ surface area elements.

\begin{aligned}\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d} = ?\end{aligned} \hspace{\stretch{1}}(3.24)

Now, while the setup here has been completely general, this task is motivated by study of special relativity, where there is a requirement to work in a four dimensional space. Because of that explicit goal, I’m not going to attempt to formulate this in a completely abstract fashion. That task is really one of introducing sufficiently general notation. Instead, I’m going to proceed with a simpleton approach, and do this explicitly, and repeatedly for each of the rank 1, rank 2, and rank 3 tensor cases. It will be clear how this all generalizes by doing so, should one wish to work in still higher dimensional spaces.

## The rank 1 tensor case.

The equation we are working with for this vector case is

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) =\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)\end{aligned} \hspace{\stretch{1}}(3.25)

Expanding out the antisymmetric partials we have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}} & =\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\frac{\partial {x^{b}}}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}},\end{aligned}

with which we can reduce the integral to

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) & =\int \left( d{\alpha_1}\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d{\alpha_2}\frac{\partial {x^{a}}}{\partial {\alpha_2}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ & =\int \left( d\alpha_1 \frac{\partial {A_b}}{\partial {\alpha_1}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d\alpha_2 \frac{\partial {A_b}}{\partial {\alpha_2}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ \end{aligned}

Now, if it happens that

\begin{aligned}\frac{\partial}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}} = \frac{\partial}{\partial {\alpha_2}}\frac{\partial {x^{a}}}{\partial {\alpha_1}} = 0\end{aligned} \hspace{\stretch{1}}(3.26)

then each of the individual integrals in $d\alpha_1$ and $d\alpha_2$ can be carried out. In that case, without any real loss of generality we can designate the integration bounds over the unit parametrization space square $\alpha_i \in [0,1]$, allowing this integral to be expressed as

\begin{aligned}\begin{aligned} & \int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2) \\ & =\int \left( A_b(1, \alpha_2) - A_b(0, \alpha_2) \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( A_b(\alpha_1, 1) - A_b(\alpha_1, 0) \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.27)

It’s also fairly common to see ${\left.{{A}}\right\vert}_{{\partial \alpha_i}}$ used to designate evaluation of this first integral on the boundary, and using this we write

\begin{aligned}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned} \hspace{\stretch{1}}(3.28)

Also note that since we are summing over all $a,b$, and have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}=-\frac{\partial {x^{[b}}}{\partial {\alpha_1}}\frac{\partial {x^{a]}}}{\partial {\alpha_2}},\end{aligned} \hspace{\stretch{1}}(3.29)

we can write this summing over all unique pairs of $a,b$ instead, which eliminates a small bit of redundancy (especially once the dimension of the vector space gets higher)

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.}\end{aligned} \hspace{\stretch{1}}(3.30)

In this form we have recovered the original geometric structure, with components of the curl multiplied by the component of the area element that shares the orientation and direction of that portion of the curl bivector.

This form of the result with evaluation at the boundaries in this form, assumed that ${\partial {x^a}}/{\partial {\alpha_1}}$ was not a function of $\alpha_2$ and ${\partial {x^a}}/{\partial {\alpha_2}}$ was not a function of $\alpha_1$. When that is not the case, we appear to have a less pretty result

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}}}\end{aligned} \hspace{\stretch{1}}(3.31)

Can this be reduced any further in the general case? Having seen the statements of Stokes theorem in its differential forms formulation, I initially expected the answer was yes, and only when I got to evaluating my $\mathbb{R}^{4}$ spacetime example below did I realize that the differential displacements for the parallelogram that constituted the area element were functions of both parameters. Perhaps this detail is there in the differential forms version of the general Stokes theorem too, but is just hidden in a tricky fashion by the compact notation.
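When the tangent-vector partials are constant, the boundary evaluation form 3.30 does apply, and this can be checked numerically. Here is a sketch of my own (not part of the original post): a linear embedding of the unit parameter square into $\mathbb{R}^{3}$, with arbitrarily chosen constant tangent vectors and an arbitrary covector field.

```python
# Numerical check (my own construction) of the boxed rank 1 result for a
# linear embedding x(a1, a2) = a1 U + a2 V, so dx^a/d(alpha_i) are constant
# and the boundary-evaluation form applies.  A_b is an arbitrary field.
U = (1.0, 0.5, -0.3)   # dx^a/d(alpha_1), arbitrary constants
V = (0.2, 1.1, 0.7)    # dx^a/d(alpha_2), arbitrary constants

def xs(a1, a2):
    return [a1 * U[k] + a2 * V[k] for k in range(3)]

def A(p):
    # an arbitrary covector field on R^3
    return [p[1] * p[2], p[0] ** 2, p[0] * p[1]]

def dA(a, b, p, h=1e-5):
    # central difference for d A_b / d x^a
    q1, q2 = list(p), list(p)
    q1[a] += h
    q2[a] -= h
    return (A(q1)[b] - A(q2)[b]) / (2 * h)

N = 200
h = 1.0 / N
mid = [(i + 0.5) * h for i in range(N)]

# LHS: area-element components times the curl components, midpoint rule
lhs = 0.0
for s in mid:
    for t in mid:
        p = xs(s, t)
        for a in range(3):
            for b in range(a + 1, 3):
                J = U[a] * V[b] - U[b] * V[a]
                lhs += J * (dA(a, b, p) - dA(b, a, p)) * h * h

# RHS: A_b evaluated on the boundary of the unit parameter square
rhs = 0.0
for t in mid:
    rhs += sum((A(xs(1.0, t))[b] - A(xs(0.0, t))[b]) * V[b] for b in range(3)) * h
    rhs -= sum((A(xs(t, 1.0))[b] - A(xs(t, 0.0))[b]) * U[b] for b in range(3)) * h

assert abs(lhs - rhs) < 1e-4
print("rank 1 Stokes check: LHS = RHS =", round(lhs, 5))
```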

### Sanity check: $\mathbb{R}^{2}$ case in rectangular coordinates.

For $x^1 = x, x^2 = y$, and $\alpha_1 = x, \alpha_2 = y$, we have for the LHS

\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left(\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}-\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}\right)\partial_1 A_{2}+\left(\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}-\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}\right)\partial_2 A_{1} \\ & =\int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right)\end{aligned}

Our RHS expands to

\begin{aligned} & \int_{y=y_0}^{y_1} dy\left(\left( A_1(x_1, y) - A_1(x_0, y) \right)\frac{\partial {x^{1}}}{\partial y}+\left( A_2(x_1, y) - A_2(x_0, y) \right)\frac{\partial {x^{2}}}{\partial y}\right) \\ & \qquad-\int_{x=x_0}^{x_1} dx\left(\left( A_1(x, y_1) - A_1(x, y_0) \right)\frac{\partial {x^{1}}}{\partial x}+\left( A_2(x, y_1) - A_2(x, y_0) \right)\frac{\partial {x^{2}}}{\partial x}\right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}

We have

\begin{aligned}\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.32)

The RHS is just a positively oriented line integral around the rectangle of integration

\begin{aligned}\int A_x(x, y_0) \hat{\mathbf{x}} \cdot ( \hat{\mathbf{x}} dx )+ A_y(x_1, y) \hat{\mathbf{y}} \cdot ( \hat{\mathbf{y}} dy )+ A_x(x, y_1) \hat{\mathbf{x}} \cdot ( -\hat{\mathbf{x}} dx )+ A_y(x_0, y) \hat{\mathbf{y}} \cdot ( -\hat{\mathbf{y}} dy )= \oint \mathbf{A} \cdot d\mathbf{r}.\end{aligned} \hspace{\stretch{1}}(3.33)

This special case is also recognizable as Green’s theorem, evident with the substitution $A_x = P$, $A_y = Q$, which gives us

\begin{aligned}\int_A dx dy \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)=\oint_C P dx + Q dy.\end{aligned} \hspace{\stretch{1}}(3.34)

Strictly speaking, Green’s theorem is more general, since it applies to integration regions more general than rectangles, but that generalization can be arrived at easily enough, once the region is broken down into adjoining elementary regions.
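A quick numerical sanity check of this special case (my own, not in the original post), with the arbitrary choices $P = x^2 y$, $Q = x y$ on the unit square, for which both sides evaluate to $1/6$:

```python
# Numerical check of the Green's theorem special case on the unit square,
# with the arbitrary choices P = x^2 y, Q = x y; both sides equal 1/6.
def P(x, y):
    return x * x * y

def Q(x, y):
    return x * y

N = 400
h = 1.0 / N
mid = [(i + 0.5) * h for i in range(N)]

# area integral of dQ/dx - dP/dy = y - x^2 by the midpoint rule
area = sum((y - x * x) * h * h for x in mid for y in mid)

# line integral P dx + Q dy around the positively oriented boundary
line = (sum(P(x, 0.0) for x in mid) * h      # bottom edge, +x direction
        + sum(Q(1.0, y) for y in mid) * h    # right edge,  +y direction
        - sum(P(x, 1.0) for x in mid) * h    # top edge,    -x direction
        - sum(Q(0.0, y) for y in mid) * h)   # left edge,   -y direction

assert abs(area - 1.0 / 6.0) < 1e-5
assert abs(line - 1.0 / 6.0) < 1e-5
print("area integral:", round(area, 6), "line integral:", round(line, 6))
```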

### Sanity check: $\mathbb{R}^{3}$ case in rectangular coordinates.

It is expected that we can recover the classical Kelvin-Stokes theorem if we use rectangular coordinates in $\mathbb{R}^{3}$. However, we see that we have to consider three different parametrizations. If one picks rectangular parametrizations $(\alpha_1, \alpha_2) = \{ (x,y), (y,z), (z,x) \}$ in sequence, in each case holding the value of the additional coordinate fixed, we get three different independent Green’s theorem like relations

\begin{aligned}\int_A dx dy \left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) & = \oint_C A_x dx + A_y dy \\ \int_A dy dz \left( \frac{\partial {A_z}}{\partial y} - \frac{\partial {A_y}}{\partial z} \right) & = \oint_C A_y dy + A_z dz \\ \int_A dz dx \left( \frac{\partial {A_x}}{\partial z} - \frac{\partial {A_z}}{\partial x} \right) & = \oint_C A_z dz + A_x dx.\end{aligned} \hspace{\stretch{1}}(3.35)

Note that we cannot just add these to form a complete integral $\oint \mathbf{A} \cdot d\mathbf{r}$ since the curves all have different orientations. To recover the $\mathbb{R}^{3}$ Stokes theorem in rectangular coordinates, it appears that we’d have to consider a Riemann sum of triangular surface elements, and relate that to the loops over each of the surface elements. In that limiting argument, only the boundary of the complete surface would contribute to the RHS of the relation.

All that said, we shouldn’t actually have to go to all this work. Instead we can stick to a two variable parametrization of the surface, and use 3.30 directly.

### An illustration for a $\mathbb{R}^{4}$ spacetime surface.

Suppose we have a particle trajectory defined by an active Lorentz transformation from an initial spacetime point

\begin{aligned}x^i = O^{ij} x_j(0) = O^{ij} g_{jk} x^k(0) = {O^{i}}_k x^k(0)\end{aligned} \hspace{\stretch{1}}(3.38)

Let the Lorentz transformation be formed by a composition of boost and rotation

\begin{aligned}{O^i}_j & = {L^i}_k {R^k}_j \\ {L^i}_j & =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ {R^i}_j & =\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.39)

Different rates of evolution of $\alpha$ and $\theta$ define different trajectories, and taken together we have a surface described by the two parameters

\begin{aligned}x^i(\alpha, \theta) = {L^i}_k {R^k}_j x^j(0, 0).\end{aligned} \hspace{\stretch{1}}(3.42)

We can compute displacements along the trajectories formed by keeping either $\alpha$ or $\theta$ fixed and varying the other. Those are

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} d\alpha & = \frac{d{L^i}_k}{d\alpha} {R^k}_j x^j(0, 0) \\ \frac{\partial {x^i}}{\partial {\theta}} d\theta & = {L^i}_k \frac{d{R^k}_j}{d\theta} x^j(0, 0) .\end{aligned} \hspace{\stretch{1}}(3.43)

Writing $y^i = x^i(0,0)$ the computation of the partials above yields

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}\sinh\alpha y^0 -\cosh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ -\cosh\alpha y^0 +\sinh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}-\sinh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ \cosh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ -(\cos\theta y^1 + \sin\theta y^2 ) \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.45)

Different choices of the initial point $y^i$ yield different surfaces, but we can get the idea by picking a simple starting point $y^i = (0, 1, 0, 0)$ leaving

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}-\cosh\alpha \cos\theta \\ \sinh\alpha \cos\theta \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}\sinh\alpha \sin\theta \\ -\cosh\alpha \sin\theta \\ -\cos\theta \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.47)

We can now compute our Jacobian determinants

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha}} \frac{\partial {x^{b]}}}{\partial {\theta}}={\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.49)

Those are

\begin{aligned}{\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} & = \cos\theta \sin\theta \\ {\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = \cosh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^0, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = -\sinh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^1, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^2, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0\end{aligned} \hspace{\stretch{1}}(3.50)
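These determinants are easy to get wrong by hand, so here is a finite-difference spot check of my own (not part of the original post), using the explicit surface $x^i(\alpha, \theta) = L R\, (0,1,0,0) = (-\sinh\alpha \cos\theta, \cosh\alpha \cos\theta, -\sin\theta, 0)$:

```python
# Finite-difference spot check (mine, not from the post) of the six
# Jacobian determinants for the spacetime surface with y = (0, 1, 0, 0).
import math

def x(a, t):
    # x^i(alpha, theta) = L(alpha) R(theta) y for y = (0, 1, 0, 0)
    return [-math.sinh(a) * math.cos(t),
            math.cosh(a) * math.cos(t),
            -math.sin(t),
            0.0]

def jac(i, j, a, t, h=1e-6):
    """2x2 determinant d(x^i, x^j)/d(alpha, theta) by central differences."""
    da = [(x(a + h, t)[k] - x(a - h, t)[k]) / (2 * h) for k in (i, j)]
    dt = [(x(a, t + h)[k] - x(a, t - h)[k]) / (2 * h) for k in (i, j)]
    return da[0] * dt[1] - da[1] * dt[0]

a, t = 0.4, 0.9   # an arbitrary point on the surface
expected = {
    (0, 1): math.cos(t) * math.sin(t),
    (0, 2): math.cosh(a) * math.cos(t) ** 2,
    (0, 3): 0.0,
    (1, 2): -math.sinh(a) * math.cos(t) ** 2,
    (1, 3): 0.0,
    (2, 3): 0.0,
}
for (i, j), value in expected.items():
    assert abs(jac(i, j, a, t) - value) < 1e-6
print("all six Jacobian determinants match")
```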

Using this, let’s see a specific 4D example in spacetime for the integral of the curl of some four vector $A^i$, enumerating all the non-zero components of 3.31 for this particular spacetime surface

\begin{aligned}\sum_{a < b}\int d{\alpha} d{\theta}{\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}}\end{aligned} \hspace{\stretch{1}}(3.56)

The LHS is thus found to be

\begin{aligned} & \int d{\alpha} d{\theta}\left({\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+{\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{2} -\partial_2 A_{0} \right)+{\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right) \\ & =\int d{\alpha} d{\theta}\left(\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right)-\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right)\end{aligned}

On the RHS we have

\begin{aligned}\int d\theta\int d\alpha & \frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}} \\ & =\int d\theta\int d\alpha\begin{bmatrix}\sinh\alpha \sin\theta & -\cosh\alpha \sin\theta & -\cos\theta & 0\end{bmatrix}\frac{\partial}{\partial {\alpha}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ & -\int d\theta\int d\alpha\begin{bmatrix}-\cosh\alpha \cos\theta & \sinh\alpha \cos\theta & 0 & 0\end{bmatrix}\frac{\partial}{\partial {\theta}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ \end{aligned}

\begin{aligned}\begin{aligned} & \int d{\alpha} d{\theta}\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right) \\ & \qquad+\int d{\alpha} d{\theta}\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right) \\ & \qquad-\int d{\alpha} d{\theta}\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right) \\ & =\int d\theta \sin\theta \int d\alpha \left( \sinh\alpha \frac{\partial {A_0}}{\partial {\alpha}} - \cosh\alpha \frac{\partial {A_1}}{\partial {\alpha}} \right) \\ & \qquad-\int d\theta \cos\theta \int d\alpha \frac{\partial {A_2}}{\partial {\alpha}} \\ & \qquad+\int d\alpha \cosh\alpha \int d\theta \cos\theta \frac{\partial {A_0}}{\partial {\theta}} \\ & \qquad-\int d\alpha \sinh\alpha \int d\theta \cos\theta \frac{\partial {A_1}}{\partial {\theta}}\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.57)

Because of the complexity of the surface, only the second term on the RHS has the “evaluate on the boundary” characteristic that may have been expected from a Green’s theorem like line integral.

It is also worthwhile to point out that we have had to be very careful with upper and lower indexes all along (and have done so with the expectation that our application would include the special relativity case where our metric determinant is minus one.) Because we worked with upper indexes for the area element, we had to work with lower indexes for the four vector and the components of the gradient that we included in our curl evaluation.

## The rank 2 tensor case.

Let’s consider briefly the terms in the contraction sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc}\end{aligned} \hspace{\stretch{1}}(3.58)

For any choice of a set of three distinct indexes $(a, b, c) \in \{ (0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3) \}$, we have $6 = 3!$ ways of permuting those indexes in this sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\sum_{a < b < c} {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} + {\left\lvert{ \frac{\partial(x^a, x^c, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{cb} + {\left\lvert{ \frac{\partial(x^b, x^c, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ca} \\ & \qquad + {\left\lvert{ \frac{\partial(x^b, x^a, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ac} + {\left\lvert{ \frac{\partial(x^c, x^a, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ab} + {\left\lvert{ \frac{\partial(x^c, x^b, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ba} \\ & =2!\sum_{a < b < c}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right)\end{aligned}

Observe that we have no sign alternation like we had in the vector (rank 1 tensor) case. That sign alternation in this summation expansion appears to occur only for odd grade tensors.
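The $2!$ counting above is easy to confirm numerically. Here is a quick check of my own (not in the post), with random stand-ins for the partials and for $\partial_a A_{bc}$:

```python
# Numeric spot check (mine, not in the post) of the permutation counting:
# summing det * d_a A_bc over all index triples equals 2! times the sum
# over a < b < c of det times the cyclic combination.
import random
random.seed(2)

n = 4
# P[a][i] stands in for dx^a / d(alpha_i)
P = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n)]
R = [[[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
     for _ in range(n)]
# D[a][b][c] stands in for d_a A_bc with A antisymmetric: D[a][b][c] = -D[a][c][b]
D = [[[R[a][b][c] - R[a][c][b] for c in range(n)] for b in range(n)]
     for a in range(n)]

def det3(a, b, c):
    """3x3 determinant built from rows a, b, c of P."""
    r0, r1, r2 = P[a], P[b], P[c]
    return (r0[0] * (r1[1] * r2[2] - r1[2] * r2[1])
            - r0[1] * (r1[0] * r2[2] - r1[2] * r2[0])
            + r0[2] * (r1[0] * r2[1] - r1[1] * r2[0]))

full = sum(det3(a, b, c) * D[a][b][c]
           for a in range(n) for b in range(n) for c in range(n))
cyc = 2 * sum(det3(a, b, c) * (D[a][b][c] + D[b][c][a] + D[c][a][b])
              for a in range(n) for b in range(n) for c in range(n)
              if a < b < c)
assert abs(full - cyc) < 1e-9
print("full sum equals 2! times the a < b < c cyclic sum")
```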

Returning to the problem, we wish to expand the determinant in order to apply a chain rule contraction as done in the rank-1 case. This can be done along any of rows or columns of the determinant, and we can write any of

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^b}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^b}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^b}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^c}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^c}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^c}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

This allows the contraction of the index $a$, eliminating it from the result

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\left( \frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \right) \frac{\partial {A_{bc}}}{\partial {x^a}} \\ & =\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =2!\sum_{b < c}\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

Dividing out the common $2!$ terms, we can summarize this result as

\begin{aligned}\boxed{\begin{aligned}\sum_{a < b < c} & \int d\alpha_1 d\alpha_2 d\alpha_3 {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right) \\ & =\sum_{b < c}\int d\alpha_2 d\alpha_3 \int d\alpha_1\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert} \\ & -\sum_{b < c}\int d\alpha_1 d\alpha_3 \int d\alpha_2\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert} \\ & +\sum_{b < c}\int d\alpha_1 d\alpha_2 \int d\alpha_3\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert}\end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.59)

In general, as observed in the spacetime surface example above, the two index Jacobians can be functions of the integration variable first being eliminated. In the special cases where this is not the case (such as the $\mathbb{R}^{3}$ case with rectangular coordinates), then we are left with just the evaluation of the tensor element $A_{bc}$ on the boundaries of the respective integrals.

## The rank 3 tensor case.

The key step is once again just a determinant expansion

\begin{aligned}& {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \\ & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\\ \end{aligned}

so that the sum can be reduced from a four index contraction to a three index contraction

\begin{aligned}& {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} \\ & =\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}

That’s the essence of the theorem, but we can play the same combinatorial reduction games to reduce the built-in redundancy in the result

\begin{aligned}\boxed{\begin{aligned}\frac{1}{{3!}} & \int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} \\ & =\sum_{a < b < c < d}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \left( \partial_a A_{bcd} -\partial_b A_{cda} +\partial_c A_{dab} -\partial_d A_{abc} \right) \\ & =\qquad \sum_{b < c < d}\int d\alpha_2 d\alpha_3 d\alpha_4 \int d\alpha_1\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_3 d\alpha_4 \int d\alpha_2\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad +\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_4 \int d\alpha_3\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_3 \int d\alpha_4\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \\ \end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.60)
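The alternating cofactor signs are the easy thing to get wrong in these expansions, so here is a small numeric check of my own (not from the post) that expansion along the first row of a $4 \times 4$ determinant carries the signs $+,-,+,-$:

```python
# Check that first-row cofactor expansion of a 4x4 determinant uses the
# alternating signs +,-,+,- (a random matrix stands in for the Jacobian).
from itertools import permutations
from math import prod
import random

random.seed(4)
M = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]

def sgn(p):
    """Sign of a permutation tuple, via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(m):
    """Leibniz-formula determinant, used as the reference value."""
    n = len(m)
    return sum(sgn(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor(m, i, j):
    """The submatrix with row i and column j deleted."""
    return [[m[r][c] for c in range(len(m)) if c != j]
            for r in range(len(m)) if r != i]

expansion = sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(4))
assert abs(expansion - det(M)) < 1e-12
print("first-row cofactor expansion signs alternate +,-,+,-")
```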

## A note on the four divergence.

Our four divergence integral has the following form

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A^a\end{aligned} \hspace{\stretch{1}}(3.61)

We can relate this to the rank 3 Stokes theorem with a duality transformation, multiplying with a pseudoscalar

\begin{aligned}A^a = \epsilon^{abcd} T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.62)

where $T_{bcd}$ can also be related back to the vector by the same sort of duality transformation (up to a sign determined by the metric signature)

\begin{aligned}A^a \epsilon_{a e f g} = \epsilon^{abcd} \epsilon_{a e f g} T_{bcd} = 3! T_{efg}.\end{aligned} \hspace{\stretch{1}}(3.63)
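The normalization of this inverse relation is easy to check numerically. The following sketch of my own (Euclidean signature, so there are no metric signs to track) confirms that contracting $A^a = \epsilon^{abcd} T_{bcd}$ with $\epsilon_{aefg}$ over the single index $a$ recovers $3!\, T_{efg}$ for completely antisymmetric $T$:

```python
# Numeric check (Euclidean signature, my own illustration):
# eps_{aefg} A^a = 3! T_{efg} when A^a = eps^{abcd} T_{bcd} and T is
# completely antisymmetric.
from itertools import permutations
import random

random.seed(0)

def eps(*idx):
    """Levi-Civita symbol: 0 on repeats, else the permutation sign."""
    if len(set(idx)) != len(idx):
        return 0
    s = 1
    idx = list(idx)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                s = -s
    return s

# build a generic completely antisymmetric rank 3 tensor by
# antisymmetrizing a random array M
M = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
     for _ in range(4)]

def T(b, c, d):
    v = (b, c, d)
    return sum(eps(*p) * M[v[p[0]]][v[p[1]]][v[p[2]]]
               for p in permutations(range(3))) / 6.0

A = [sum(eps(a, b, c, d) * T(b, c, d)
         for b in range(4) for c in range(4) for d in range(4))
     for a in range(4)]

for e in range(4):
    for f in range(4):
        for g in range(4):
            lhs = sum(A[a] * eps(a, e, f, g) for a in range(4))
            assert abs(lhs - 6.0 * T(e, f, g)) < 1e-9
print("eps_{aefg} A^a = 3! T_{efg} confirmed")
```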

The divergence integral in terms of the rank 3 tensor is

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a \epsilon^{abcd} T_{bcd}=\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.64)

and we are free to perform the same Stokes reduction of the integral. Of course, this is particularly simple in rectangular coordinates. I still have to think through one subtlety that I feel may be important. We could have started off with an integral of the following form

\begin{aligned}\int dx^1 dx^2 dx^3 dx^4 \partial_a A^a,\end{aligned} \hspace{\stretch{1}}(3.65)

and I think this differs from our starting point slightly because this has none of the antisymmetric structure of the signed 4 volume element that we have used. We do not take the absolute value of our Jacobians anywhere.

## Stokes Theorem for antisymmetric tensors.

Posted by peeterjoot on January 18, 2011

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘stokes theorem in geometric algebra’ [PDF], where this topic has been revisited with this in mind.

# Original Post:

In [3] I worked through the Geometric Algebra expression for Stokes Theorem. For a $k-1$ grade blade, the final result of that work was

\begin{aligned}\int( \nabla \wedge F ) \cdot d^k x =\frac{1}{{(k-1)!}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {F}}{\partial {a_{u}}} \cdot (dx_r \wedge dx_s \wedge \cdots \wedge dx_t)\end{aligned} \hspace{\stretch{1}}(7.44)

Let’s expand this in coordinates to attempt to get the equivalent expression for an antisymmetric tensor of rank $k-1$.

Starting with the RHS of 7.44 we have

\begin{aligned}F &= \frac{1}{{(k-1)!}}F_{\mu_1 \mu_2 \cdots \mu_{k-1} }\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \\ dx_r \wedge dx_s \wedge \cdots \wedge dx_t &=\frac{\partial {x^{\nu_1}}}{\partial {a_r}}\frac{\partial {x^{\nu_2}}}{\partial {a_s}}\cdots\frac{\partial {x^{\nu_{k-1}}}}{\partial {a_t}}\gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_{k-1}}da_r da_s \cdots da_t\end{aligned} \hspace{\stretch{1}}(7.45)

We need to expand the dot product of the wedges, for which we have

\begin{aligned}\left( \gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \right) \cdot\left( \gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_{k-1}}\right) ={\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k-1}}\end{aligned} \hspace{\stretch{1}}(7.47)

Putting all the RHS bits together we have

\begin{aligned}&\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }{\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k-1}}\frac{\partial {x^{\nu_1}}}{\partial {a_r}}\frac{\partial {x^{\nu_2}}}{\partial {a_s}}\cdots\frac{\partial {x^{\nu_{k-1}}}}{\partial {a_t}}da_r da_s \cdots da_t \\ &=\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }\epsilon^{\mu_{k-1} \mu_{k-2} \cdots \mu_{1}}\frac{\partial {x^{\mu_{k-1}}}}{\partial {a_r}}\frac{\partial {x^{\mu_{k-2}}}}{\partial {a_s}}\cdots\frac{\partial {x^{\mu_1}}}{\partial {a_t}}da_r da_s \cdots da_t \\ &=\frac{1}{{((k-1)!)^2}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {}}{\partial {a_{u}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1} }{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots,x^{\mu_1})}{\partial(a_r, a_s, \cdots, a_t)}}\right\rvert}da_r da_s \cdots da_t \\ \end{aligned}

Now, for the LHS of 7.44 we have

\begin{aligned}\nabla \wedge F &=\gamma^\mu \wedge \partial_\mu F \\ &=\frac{1}{{(k-1)!}}\frac{\partial {}}{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}\gamma^{\mu_k} \wedge\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \end{aligned}

and the volume element is

\begin{aligned}d^k x &=\frac{\partial {x^{\nu_1}}}{\partial {a_1}}\frac{\partial {x^{\nu_2}}}{\partial {a_2}}\cdots\frac{\partial {x^{\nu_{k}}}}{\partial {a_k}}\gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_k}da_1 da_2 \cdots da_k\end{aligned}

Our dot product is

\begin{aligned}\left(\gamma^{\mu_k} \wedge\gamma^{\mu_1} \wedge \gamma^{ \mu_2 } \wedge \cdots \wedge \gamma^{\mu_{k-1}} \right) \cdot\left( \gamma_{\nu_1} \wedge \gamma_{ \nu_2 } \wedge \cdots \wedge \gamma_{\nu_k} \right)={\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_{1}} }_{\nu_{k-1}}{\delta^{\mu_{k}} }_{\nu_{k}}\epsilon^{\nu_1 \nu_2 \cdots \nu_{k}}\end{aligned} \hspace{\stretch{1}}(7.48)

The LHS of our k-form now evaluates to

\begin{aligned}(\gamma^\mu \wedge \partial_\mu F) \cdot d^k x &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\delta^{\mu_{k-1}}}_{\nu_1} {\delta^{ \mu_{k-2} }}_{\nu_2} \cdots {\delta^{\mu_1} }_{\nu_{k-1}}{\delta^{\mu_k} }_{\nu_k}\epsilon^{\nu_1 \nu_2 \cdots \nu_k}\frac{\partial {x^{\nu_1}}}{\partial {a_1}}\frac{\partial {x^{\nu_2}}}{\partial {a_2}} \cdots \frac{\partial {x^{\nu_k}}}{\partial {a_k}} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}\epsilon^{\mu_{k-1} \mu_{k-2} \cdots \mu_1 \mu_k}\frac{\partial {x^{\mu_{k-1}}}}{\partial {a_1}}\frac{\partial {x^{\mu_{k-2}}}}{\partial {a_2}} \cdots \frac{\partial {x^{\mu_1}}}{\partial {a_{k-1}}}\frac{\partial {x^{\mu_k}}}{\partial {a_k}} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!}\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots x^{\mu_1},x^{\mu_k})}{\partial(a_1, a_2, \cdots, a_{k-1}, a_k)}}\right\rvert} da_1 da_2 \cdots da_k \\ \end{aligned}

Presuming no mistakes were made anywhere along the way (including in the original Geometric Algebra expression), we have arrived at Stokes Theorem for rank $k-1$ antisymmetric tensors $F$

\boxed{ \begin{aligned}&\int\frac{\partial }{\partial {x^{\mu_k}}} F_{\mu_1 \mu_2 \cdots \mu_{k-1}}{\left\lvert{\frac{\partial(x^{\mu_{k-1}},x^{\mu_{k-2}},\cdots x^{\mu_1},x^{\mu_k})}{\partial(a_1, a_2, \cdots, a_{k-1}, a_k)}}\right\rvert} da_1 da_2 \cdots da_k \\ &= \frac{1}{(k-1)!} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial }{\partial {a_u}} F_{\nu_1 \nu_2 \cdots \nu_{k-1} }{\left\lvert{\frac{\partial(x^{\nu_{k-1}},x^{\nu_{k-2}}, \cdots ,x^{\nu_1})}{\partial(a_r, a_s, \cdots, a_t)}}\right\rvert} da_r da_s \cdots da_t \end{aligned} } \hspace{\stretch{1}}(7.49)

The next task is to validate this, expanding it out for some specific ranks and hypervolume element types, and to compare the results with the familiar 3d expressions.

# References

[1] L.D. Landau and E.M. Lifshits. The classical theory of fields. Butterworth-Heinemann, 1980.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Peeter Joot. Stokes theorem derivation without tensor expansion of the blade [online]. http://sites.google.com/site/peeterjoot/math2009/stokesNoTensor.pdf.

## 4D divergence theorem, continued.

Posted by peeterjoot on July 23, 2009

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind.

# Original Post:

The basic idea of using duality to express the 4D divergence integral as a Stokes boundary surface integral has been explored. Let's consider this in more detail, picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write

\begin{aligned}d^4 x &= ( \gamma_0 dx^0 ) \wedge ( \gamma_1 dx^1 ) \wedge ( \gamma_2 dx^2 ) \wedge ( \gamma_3 dx^3 ) \\ &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 dx^0 dx^1 dx^2 dx^3 \\ &= i dx^0 dx^1 dx^2 dx^3 \\ \end{aligned}

As seen previously (but not separately), the divergence can be expressed as the dual of the curl

\begin{aligned}\nabla \cdot f&=\left\langle{{ \nabla f }}\right\rangle \\ &=-\left\langle{{ \nabla i (\underbrace{i f}_{\text{grade 3}}) }}\right\rangle \\ &=\left\langle{{ i \nabla (i f) }}\right\rangle \\ &=\left\langle{{ i ( \underbrace{\nabla \cdot (i f)}_{\text{grade 2}} + \underbrace{\nabla \wedge (i f)}_{\text{grade 4}} ) }}\right\rangle \\ &=i (\nabla \wedge (i f)) \\ \end{aligned}

So we have $\nabla \wedge (i f) = -i (\nabla \cdot f)$. Putting things together, and writing $i f = -f i$ we have

\begin{aligned}\int (\nabla \wedge (i f)) \cdot d^4 x&= \int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 \\ &=\int dx^0 \partial_0 (f i) \cdot \gamma_{123} dx^1 dx^2 dx^3 \\ &-\int dx^1 \partial_1 (f i) \cdot \gamma_{023} dx^0 dx^2 dx^3 \\ &+\int dx^2 \partial_2 (f i) \cdot \gamma_{013} dx^0 dx^1 dx^3 \\ &-\int dx^3 \partial_3 (f i) \cdot \gamma_{012} dx^0 dx^1 dx^2 \\ \end{aligned}

It is straightforward to reduce each of these dot products. For example

\begin{aligned}\partial_2 (f i) \cdot \gamma_{013}&=\left\langle{{ \partial_2 f \gamma_{0123013} }}\right\rangle \\ &=-\left\langle{{ \partial_2 f \gamma_{2} }}\right\rangle \\ &=- \gamma_2 \partial_2 \cdot f \\ &=\gamma^2 \partial_2 \cdot f \end{aligned}

The rest proceed the same and rather anticlimactically we end up coming full circle

\begin{aligned}\int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 &=\int dx^0 \gamma^0 \partial_0 \cdot f dx^1 dx^2 dx^3 \\ &+\int dx^1 \gamma^1 \partial_1 \cdot f dx^0 dx^2 dx^3 \\ &+\int dx^2 \gamma^2 \partial_2 \cdot f dx^0 dx^1 dx^3 \\ &+\int dx^3 \gamma^3 \partial_3 \cdot f dx^0 dx^1 dx^2 \\ \end{aligned}

This, however, is nothing more than the definition of the divergence itself; no appeal to Stokes theorem is required. If we are integrating over a rectangle and perform each of the four integrals, we have (with $c=1$) from the dual Stokes equation the perhaps less obvious result

\begin{aligned}\int \partial_\mu f^\mu dt dx dy dz&=\int (f^0(t_1) - f^0(t_0)) dx dy dz \\ &+\int (f^1(x_1) - f^1(x_0)) dt dy dz \\ &+\int (f^2(y_1) - f^2(y_0)) dt dx dz \\ &+\int (f^3(z_1) - f^3(z_0)) dt dx dy \\ \end{aligned}

When stated this way one sees that this could just as easily have followed directly from the left hand side. What's the point then of the divergence theorem or Stokes theorem? I think the value must really be that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not.
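The iterated fundamental-theorem-of-calculus structure of this result is easy to verify symbolically. Here is a two dimensional analogue of the same identity on the unit square, with arbitrarily chosen field components (my choice, purely for the check):

```python
import sympy as sp

t, x = sp.symbols('t x')
# hypothetical field components, chosen arbitrarily for this check
f0 = t * x**2
f1 = sp.sin(t) + x

# LHS: integral of the divergence over the rectangle [0,1] x [0,1]
lhs = sp.integrate(sp.diff(f0, t) + sp.diff(f1, x), (t, 0, 1), (x, 0, 1))

# RHS: boundary differences, one per coordinate, integrated over the other
rhs = (sp.integrate(f0.subs(t, 1) - f0.subs(t, 0), (x, 0, 1))
       + sp.integrate(f1.subs(x, 1) - f1.subs(x, 0), (t, 0, 1)))

print(sp.simplify(lhs - rhs))  # 0
```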

## Stokes theorem in Geometric Algebra formalism.

Posted by peeterjoot on July 22, 2009

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind.

# Motivation

Relying on pictorial means and a brute force comparison of left and right hand sides, Stokes theorem was verified for the vector and bivector cases in ([1]). This was more of a confirmation than a derivation, and the technique fails the transition to the trivector case. The trivector case is of particular interest in electromagnetism since that and a duality transformation provides a four-vector divergence theorem.

The fact that the pictorial means of defining the boundary surface doesn't work well in four vector space is not the only unsatisfactory aspect of the previous treatment. The requirement for a coordinate expansion of the hypervolume element and hypersurface element in the LHS and RHS comparisons was particularly ugly. It is a lot of work that essentially has to be undone on the opposing side of the equation. Compared to previous attempts to come to terms with Stokes theorem in ([2]) and ([3]), this more recent attempt at least avoids the requirement for a tensor expansion of the vector or bivector. It should be possible to build on this, minimize the amount of coordinate expansion required, and go directly from the volume integral to the expression of the boundary surface.

# Do it.

## Notation and Setup.

The desire is to relate the curl hypervolume integral to a hypersurface integral on the boundary

\begin{aligned}\int (\nabla \wedge F) \cdot d^k x = \int F \cdot d^{k-1} x\end{aligned} \hspace{\stretch{1}}(2.1)

In order to put meaning to this statement the volume and surface elements need to be properly defined. In order that this be a scalar equation, the object $F$ in the integral is required to be of grade $k-1$, and $k \le n$ where $n$ is the dimension of the vector space that generates the object $F$.

## Reciprocal frames.

As evident in equation (2.1) a metric is required to define the dot product. If an affine non-metric formulation
of Stokes theorem is possible it will not be attempted here. A reciprocal basis pair will be utilized, defined by

\begin{aligned}\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu\end{aligned} \hspace{\stretch{1}}(2.2)

Both of the sets $\{\gamma_\mu\}$ and $\{\gamma^\mu\}$ are taken to span the space, but are not required to be orthogonal. The notation is consistent with the Dirac reciprocal basis, and there will not be anything in this treatment that prohibits the Minkowski metric signature required for such a relativistic space.
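In a Euclidean space, a reciprocal frame satisfying (2.2) can be computed by matrix inversion: if the rows of a matrix hold the (possibly non-orthogonal) basis vectors, the rows of its inverse transpose hold the reciprocal vectors. A small numpy sketch with an arbitrarily chosen skewed basis (a Minkowski signature would require the metric to enter, which this sketch does not attempt):

```python
import numpy as np

# a skewed, non-orthogonal basis for R^3: rows are gamma_1, gamma_2, gamma_3
gamma = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 2.0]])

# reciprocal frame: rows of inv(gamma).T are the gamma^mu,
# since then gamma^mu . gamma_nu = delta^mu_nu
recip = np.linalg.inv(gamma).T

print(np.round(recip @ gamma.T, 12))  # identity matrix
```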

Vector decomposition in terms of coordinates follows by taking dot products. We write

\begin{aligned}x = x^\mu \gamma_\mu = x_\nu \gamma^\nu\end{aligned} \hspace{\stretch{1}}(2.3)

When working with a non-orthonormal basis, the reciprocal frame can be utilized to express the gradient.

\begin{aligned}\nabla \equiv \gamma^\mu \partial_\mu \equiv \sum_\mu \gamma^\mu \frac{\partial {}}{\partial {x^\mu}}\end{aligned} \hspace{\stretch{1}}(2.4)

This definition contains what may seem like an odd mix of upper and lower indexes. This is how the gradient is defined in [4]. Although it is possible to accept this definition and work with it, the form can be justified by requiring the gradient to be consistent with the definition of the directional derivative. A definition of the directional derivative that works for single and multivector functions, in $\mathbb{R}^{3}$ and other more general spaces, is

\begin{aligned}a \cdot \nabla F \equiv \lim_{\lambda \rightarrow 0} \frac{F(x + a\lambda) - F(x)}{\lambda} = {\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}\end{aligned} \hspace{\stretch{1}}(2.5)

Taylor expanding about $\lambda=0$ in terms of coordinates we have

\begin{aligned}{\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}&= a^\mu \frac{\partial {F}}{\partial {x^\mu}} \\ &= (a^\nu \gamma_\nu) \cdot (\gamma^\mu \partial_\mu) F \\ &= a \cdot \nabla F \quad\quad\quad\square\end{aligned}

The lower index representation of the vector coordinates could also have been used, so using the directional derivative to imply a definition of the gradient, we have an additional alternate representation of the gradient

\begin{aligned}\nabla \equiv \gamma_\mu \partial^\mu \equiv \sum_\mu \gamma_\mu \frac{\partial {}}{\partial {x_\mu}}\end{aligned} \hspace{\stretch{1}}(2.6)
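This consistency argument can be exercised numerically: computing the partials $\partial F/\partial x^\mu$ by stepping along a non-orthonormal frame, then recombining with the reciprocal frame, should reproduce the ordinary gradient. A Euclidean $\mathbb{R}^2$ sketch, with a skewed basis and scalar field both chosen arbitrarily for illustration:

```python
import numpy as np

# skewed basis for R^2 (rows: gamma_1, gamma_2) and its reciprocal frame
g = np.array([[2.0, 0.0],
              [1.0, 1.0]])
gr = np.linalg.inv(g).T   # rows: gamma^1, gamma^2

F = lambda p: p[0]**2 * p[1] + np.sin(p[1])   # arbitrary scalar field

def grad_via_frame(p, h=1e-6):
    # stepping along gamma_mu changes the coordinate x^mu by h,
    # so a central difference along gamma_mu gives dF/dx^mu
    parts = [(F(p + h * g[mu]) - F(p - h * g[mu])) / (2 * h) for mu in (0, 1)]
    # gradient = gamma^mu dF/dx^mu, summed over the reciprocal frame
    return sum(parts[mu] * gr[mu] for mu in (0, 1))

p = np.array([0.3, 0.7])
cartesian = np.array([2 * p[0] * p[1], p[0]**2 + np.cos(p[1])])
print(np.allclose(grad_via_frame(p), cartesian, atol=1e-5))  # True
```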

## Volume element

We define the hypervolume in terms of a parametrized vector $x = x(a_1, a_2, \cdots, a_k)$. For this parametrization we can form a pseudoscalar for the spanned subspace by wedging the displacements in each of the directions defined by variation of the parameters. For $i \in [1,k]$ let

\begin{aligned}dx_i = \frac{\partial {x}}{\partial {a_i}} da_i = \gamma_\mu \frac{\partial {x^\mu}}{\partial {a_i}} da_i,\end{aligned} \hspace{\stretch{1}}(2.7)

so the hypervolume element for the subspace in question is

\begin{aligned}d^k x \equiv dx_1 \wedge dx_2 \wedge \cdots \wedge dx_k\end{aligned} \hspace{\stretch{1}}(2.8)

This can be expanded explicitly in coordinates

\begin{aligned}d^k x &= da_1 da_2 \cdots da_k \left(\frac{\partial {x^{\mu_1}}}{\partial {a_1}} \frac{\partial {x^{\mu_2}}}{\partial {a_2}} \cdots\frac{\partial {x^{\mu_k}}}{\partial {a_k}} \right)( \gamma_{\mu_1} \wedge \gamma_{\mu_2} \wedge \cdots \wedge \gamma_{\mu_k} ) \\ \end{aligned}

Observe that when $k$ is also the dimension of the space, we can employ a pseudoscalar $I = \gamma_1 \gamma_2 \cdots \gamma_k$ and can specify our volume element in terms of the Jacobian determinant.

This is

\begin{aligned}d^k x =I da_1 da_2 \cdots da_k {\left\lvert{\frac{\partial {(x^1, x^2, \cdots, x^k)}}{\partial {(a_1, a_2, \cdots, a_k)}}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.9)

However, we won’t have a requirement to express the Stokes result in terms of such Jacobians.
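As a concrete instance of (2.9), consider polar coordinates in the plane, where the coefficient of $\gamma_1 \wedge \gamma_2$ in $d^2 x$ is exactly the Jacobian determinant. A sympy spot check (the coordinate choice is mine, purely for illustration):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# Jacobian matrix d(x^1, x^2)/d(r, theta) and its determinant
J = x.jacobian([r, theta])
det = sp.simplify(J.det())

# the antisymmetrized (wedge) coefficient of gamma_1 ^ gamma_2, written out
wedge_coeff = sp.simplify(J[0, 0] * J[1, 1] - J[1, 0] * J[0, 1])

print(det, wedge_coeff)  # both r
```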

## Expansion of the curl and volume element product

We are now prepared to go on to the meat of the issue. The first order of business is the expansion of the curl and volume element product

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=( \gamma^\mu \wedge \partial_\mu F ) \cdot d^k x \\ &=\left\langle{{ ( \gamma^\mu \wedge \partial_\mu F ) d^k x }}\right\rangle \\ \end{aligned}

The wedge product within the scalar grade selection operator can be expanded in symmetric or antisymmetric sums, but this is a grade dependent operation. For odd grade blades $A$ (vector, trivector, …), and vector $a$ we have for the wedge and dot products respectively

\begin{aligned}a \wedge A = \frac{1}{{2}} (a A - A a) \\ a \cdot A = \frac{1}{{2}} (a A + A a)\end{aligned}

while for even grade blades $A$ (bivector, …) these are interchanged

\begin{aligned}a \wedge A = \frac{1}{{2}} (a A + A a) \\ a \cdot A = \frac{1}{{2}} (a A - A a)\end{aligned}

First treating the odd grade case for $F$ we have

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \gamma^\mu \partial_\mu F d^k x }}\right\rangle - \frac{1}{{2}} \left\langle{{ \partial_\mu F \gamma^\mu d^k x }}\right\rangle \\ \end{aligned}

Employing cyclic scalar reordering within the scalar product for the first term

\begin{aligned}\left\langle{{a b c}}\right\rangle = \left\langle{{b c a}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.10)

we have

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \partial_\mu F (d^k x \gamma^\mu - \gamma^\mu d^k x)}}\right\rangle \\ &=\left\langle{{ \partial_\mu F (d^k x \cdot \gamma^\mu)}}\right\rangle \\ \end{aligned}

The end result is

\begin{aligned}( \nabla \wedge F ) \cdot d^k x &= \partial_\mu F \cdot (d^k x \cdot \gamma^\mu) \end{aligned} \hspace{\stretch{1}}(2.11)

For even grade $F$ (and thus odd grade $d^k x$) it is straightforward to show that (2.11) also holds.

## Expanding the volume dot product

We want to expand the volume integral dot product

\begin{aligned}d^k x \cdot \gamma^\mu\end{aligned} \hspace{\stretch{1}}(2.12)

Picking $k = 4$ will serve to illustrate the pattern, and the generalization (or degeneralization to lower grades) will be clear. We have

\begin{aligned}d^4 x \cdot \gamma^\mu&=( dx_1 \wedge dx_2 \wedge dx_3 \wedge dx_4 ) \cdot \gamma^\mu \\ &= ( dx_1 \wedge dx_2 \wedge dx_3 ) dx_4 \cdot \gamma^\mu \\ &-( dx_1 \wedge dx_2 \wedge dx_4 ) dx_3 \cdot \gamma^\mu \\ &+( dx_1 \wedge dx_3 \wedge dx_4 ) dx_2 \cdot \gamma^\mu \\ &-( dx_2 \wedge dx_3 \wedge dx_4 ) dx_1 \cdot \gamma^\mu \\ \end{aligned}

This avoids the requirement to do the entire Jacobian expansion of (2.9). The dot product of the differential displacement $dx_m$ with $\gamma^\mu$ can now be made explicit without as much mess.

\begin{aligned}dx_m \cdot \gamma^\mu &=da_m \frac{\partial {x^\nu}}{\partial {a_m}} \gamma_\nu \cdot \gamma^\mu \\ &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \\ \end{aligned}

We now have products of the form

\begin{aligned}\partial_\mu F da_m \frac{\partial {x^\mu}}{\partial {a_m}} &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \frac{\partial {F}}{\partial {x^\mu}} \\ &=da_m \frac{\partial {F}}{\partial {a_m}} \\ \end{aligned}
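This collapse is just the multivariable chain rule. A quick symbolic spot check, using a hypothetical one-parameter path $x^\mu(a)$ and an arbitrary field:

```python
import sympy as sp

a = sp.symbols('a')
x1, x2 = sp.symbols('x1 x2')
F = x1**2 * x2                      # arbitrary field, a function of coordinates

# hypothetical parametrized path x^mu(a)
path = {x1: sp.cos(a), x2: a**2}

# sum_mu (dx^mu/da) dF/dx^mu, evaluated on the path
lhs = sum(sp.diff(path[v], a) * sp.diff(F, v).subs(path) for v in (x1, x2))
# versus d/da of F restricted to the path
rhs = sp.diff(F.subs(path), a)

print(sp.simplify(lhs - rhs))  # 0
```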

Now we see that the differential form of (2.11) for this $k=4$ example is reduced to

\begin{aligned}( \nabla \wedge F ) \cdot d^4 x &= da_4 \frac{\partial {F}}{\partial {a_4}} \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- da_3 \frac{\partial {F}}{\partial {a_3}} \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ da_2 \frac{\partial {F}}{\partial {a_2}} \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- da_1 \frac{\partial {F}}{\partial {a_1}} \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned}

While 2.11 was a statement of Stokes theorem in this Geometric Algebra formulation, it was really incomplete without this explicit expansion of $(\partial_\mu F) \cdot (d^k x \cdot \gamma^\mu)$. This expansion for the $k=4$ case serves to illustrate that we would write Stokes theorem as

\begin{aligned}\boxed{\int( \nabla \wedge F ) \cdot d^k x =\frac{1}{{(k-1)!}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {F}}{\partial {a_{u}}} \cdot (dx_r \wedge dx_s \wedge \cdots \wedge dx_t)}\end{aligned} \hspace{\stretch{1}}(2.13)

Here the indexes range over $\{r, s, \cdots, t, u\} \in \{1, 2, \cdots, k\}$. This, together with the definitions (2.7) and (2.8), is really Stokes theorem in its full glory.

Observe that in this Geometric algebra form, the one forms $dx_i = da_i {\partial {x}}/{\partial {a_i}}, i \in [1,k]$ are nothing more abstract than plain old vector differential elements. In the formalism of differential forms these would be one forms, and $(\nabla \wedge F) \cdot d^k x$ would be a $k$ form. In a context where we are already working with vectors or blades, the Geometric Algebra statement of the theorem avoids any requirement to translate to the language of forms.

With a statement of the general theorem complete, let’s return to our $k=4$ case where we can now integrate over each of the $a_1, a_2, \cdots, a_k$ parameters. That is

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^4 x &= \int (F(a_4(1)) - F(a_4(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned}

This is precisely Stokes theorem for the trivector case and makes the enumeration of the boundary surfaces explicit. As derived, there was no requirement for an orthonormal basis, nor a Euclidean metric, nor a parametrization along the basis directions. The only requirement of the parametrization is that the associated volume element is non-trivial (i.e. none of the $dx_q \wedge dx_r$ vanish).

For completeness, note that the boundary surfaces and associated Stokes statements for the bivector and vector cases are, by inspection, respectively

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^3 x &= \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 ) \\ &- \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 ) \\ &+ \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 ) \\ \end{aligned}

and

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^2 x &= \int (F(a_2(1)) - F(a_2(0))) \cdot dx_1 \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot dx_2 \\ \end{aligned}

These three expansions can be summarized by the original single statement of (2.1), which, repeated for reference, is

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^k x = \int F \cdot d^{k-1} x \end{aligned}

Where it is implied that the blade $F$ is evaluated on the boundaries and dotted with the associated hypersurface boundary element. Having expanded this, we now have an explicit statement of exactly what that surface element is for any desired parametrization.
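Up to orientation and sign conventions, the $k=2$ case above is Green's theorem in the plane. A symbolic spot check on the unit square, with an arbitrarily chosen field $(P, Q)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = x * y**2            # arbitrary vector field components for the check
Q = sp.exp(x) + y

# area integral of the curl component
area = sp.integrate(sp.diff(Q, x) - sp.diff(P, y), (x, 0, 1), (y, 0, 1))

# boundary integral, traversed counterclockwise
boundary = (sp.integrate(P.subs(y, 0), (x, 0, 1))        # bottom
            + sp.integrate(Q.subs(x, 1), (y, 0, 1))      # right
            - sp.integrate(P.subs(y, 1), (x, 0, 1))      # top
            - sp.integrate(Q.subs(x, 0), (y, 0, 1)))     # left

print(sp.simplify(area - boundary))  # 0
```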

# Duality relations and special cases.

Some special (and more recognizable) cases of (2.1) are possible considering specific grades of $F$, and in some cases employing duality relations.

## curl surface integral

One important case is the $\mathbb{R}^{3}$ vector result, which can be expressed in terms of the cross product.

Write $\hat{\mathbf{n}} d^2 x = -i dA$. Then we have

\begin{aligned}( \boldsymbol{\nabla} \wedge \mathbf{f} ) \cdot d^2 x&=\left\langle{{ i (\boldsymbol{\nabla} \times \mathbf{f}) (- \hat{\mathbf{n}} i dA) }}\right\rangle \\ &=(\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA\end{aligned}

This recovers the familiar cross product form of Stokes law.

\begin{aligned}\int (\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA = \oint \mathbf{f} \cdot d\mathbf{x}\end{aligned} \hspace{\stretch{1}}(3.14)

## 3D divergence theorem

Duality applied to the bivector Stokes result provides the divergence theorem in $\mathbb{R}^{3}$. For bivector $B$, let $iB = \mathbf{f}$, $d^3 x = i dV$, and $d^2 x = i \hat{\mathbf{n}} dA$. We then have

\begin{aligned}( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x&=\left\langle{{ ( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x }}\right\rangle \\ &=\frac{1}{{2}} \left\langle{{ ( \boldsymbol{\nabla} B + B \boldsymbol{\nabla} ) i dV }}\right\rangle \\ &=\boldsymbol{\nabla} \cdot \mathbf{f} dV \\ \end{aligned}

Similarly

\begin{aligned}B \cdot d^2 x&=\left\langle{{ -i\mathbf{f} i \hat{\mathbf{n}} dA}}\right\rangle \\ &=(\mathbf{f} \cdot \hat{\mathbf{n}}) dA \\ \end{aligned}

This recovers the $\mathbb{R}^{3}$ divergence equation

\begin{aligned}\int \boldsymbol{\nabla} \cdot \mathbf{f} dV = \int (\mathbf{f} \cdot \hat{\mathbf{n}}) dA\end{aligned} \hspace{\stretch{1}}(3.15)
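This reduction can be spot checked symbolically: for an arbitrary polynomial field on the unit cube (my choice, purely for the check), the volume integral of the divergence equals the total outward flux through the six faces.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Matrix([x * y, y**2, x * z])    # arbitrary field for the check

# volume integral of div f over the unit cube
div = sum(sp.diff(f[i], v) for i, v in enumerate((x, y, z)))
vol = sp.integrate(div, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# flux: for each coordinate, outward flux through the two opposing faces
flux = (sp.integrate(f[0].subs(x, 1) - f[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
        + sp.integrate(f[1].subs(y, 1) - f[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
        + sp.integrate(f[2].subs(z, 1) - f[2].subs(z, 0), (x, 0, 1), (y, 0, 1)))

print(vol, flux)  # both 2
```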

## 4D divergence theorem

How about the four dimensional spacetime divergence? Express a trivector as a dual four-vector, $T = if$, and the four volume element as $d^4 x = i dQ$. This gives

\begin{aligned}(\nabla \wedge T) \cdot d^4 x&=\frac{1}{{2}} \left\langle{{ (\nabla T - T \nabla) i }}\right\rangle dQ \\ &=\frac{1}{{2}} \left\langle{{ (\nabla i f - if \nabla) i }}\right\rangle dQ \\ &=\frac{1}{2} \left\langle{{ (\nabla f + f \nabla) }}\right\rangle dQ \\ &=(\nabla \cdot f) dQ\end{aligned}

For the boundary volume integral write $d^3 x = n i dV$, for

\begin{aligned}T \cdot d^3 x &= \left\langle{{ (if) ( n i ) }}\right\rangle dV \\ &= \left\langle{{ f n }}\right\rangle dV \\ &= (f \cdot n) dV\end{aligned}

So we have

\begin{aligned}\int \partial_\mu f^\mu dQ = \int f^\nu n_\nu dV\end{aligned}

The orientation of the fourspace volume element and the boundary normal is defined in terms of the parametrization, the duality relations, and our explicit expansion of the 4D Stokes boundary integral above.


# References

[1] Peeter Joot. Stokes theorem applied to vector and bivector fields [online]. http://sites.google.com/site/peeterjoot/math2009/stokesGradeTwo.pdf.

[2] Peeter Joot. Stokes law in wedge product form [online]. http://sites.google.com/site/peeterjoot/geometric-algebra/vector_integral_relations.pdf.

[3] Peeter Joot. Stokes Law revisited with algebraic enumeration of boundary [online]. http://sites.google.com/site/peeterjoot/geometric-algebra/stokes_revisited.pdf.

[4] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

## bivector form of Stokes theorem

Posted by peeterjoot on July 18, 2009

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘stokes theorem in geometric algebra‘ [PDF], where this topic has been revisited with this in mind.

# Original Post:

A parallelepiped volume element is depicted in the figure below. Three parameters $\alpha$, $\beta$, $\sigma$ generate a set of differential vector displacements spanning the three dimensional subspace.

[Figure: parallelepiped volume element]

Writing the displacements

\begin{aligned}dx_\alpha &= \frac{\partial {x}}{\partial {\alpha}} d\alpha \\ dx_\beta &= \frac{\partial {x}}{\partial {\beta}} d\beta \\ dx_\sigma &= \frac{\partial {x}}{\partial {\sigma}} d\sigma \end{aligned}

We have for the front, right and top face area elements

\begin{aligned}dA_F &= dx_\alpha \wedge dx_\beta \\ dA_R &= dx_\beta \wedge dx_\sigma \\ dA_T &= dx_\sigma \wedge dx_\alpha \\ \end{aligned}

These are the surfaces of constant parameterization, respectively, $\sigma = \sigma_1$, $\alpha = \alpha_1$, and $\beta = \beta_1$. For a bivector $B$, with $dA_P$, $dA_L$, and $dA_B$ denoting the opposing back, left, and bottom face area elements, the flux through the surface is therefore

\begin{aligned}\int B \cdot dA &= (B_{\sigma_1} \cdot dA_F - B_{\sigma_0} \cdot dA_P ) + (B_{\alpha_1} \cdot dA_R - B_{\alpha_0} \cdot dA_L) + (B_{\beta_1} \cdot dA_T - B_{\beta_0} \cdot dA_B) \\ &= d \sigma \frac{\partial {B}}{\partial {\sigma}} \cdot (dx_\alpha \wedge dx_\beta ) + d \alpha \frac{\partial {B}}{\partial {\alpha}} \cdot (dx_\beta \wedge dx_\sigma) + d \beta \frac{\partial {B}}{\partial {\beta}} \cdot (dx_\sigma \wedge dx_\alpha ) \\ \end{aligned}

Written out in full this is a bit of a mess

\begin{aligned}\int B \cdot dA &= d \alpha d\beta d\sigma \partial_\mu B \cdot \left( \left( - \frac{\partial {x^\mu}}{\partial {\sigma}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\alpha}} + \frac{\partial {x^\mu}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} + \frac{\partial {x^\mu}}{\partial {\beta}} \frac{\partial {x^\nu}}{\partial {\sigma}} \frac{\partial {x^\epsilon}}{\partial {\alpha}} \right) (\gamma_\nu \wedge \gamma_\epsilon ) \right) \end{aligned} \quad\quad\quad(5)

It should equal, at least up to a sign, $\int (\nabla \wedge B) \cdot d^3 x$. Expanding the latter is probably easier than regrouping the mess, and doing so we have

\begin{aligned}(\nabla \wedge B) \cdot d^3 x &= d\alpha d\beta d\sigma ( \gamma^\mu \wedge \partial_\mu B) \cdot \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) \\ &= d\alpha d\beta d\sigma \frac{1}{{2}} ( \gamma^\mu \partial_\mu B + \partial_\mu B \gamma^\mu ) \cdot \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) \\ &= d\alpha d\beta d\sigma \frac{1}{{2}} \left\langle{{ ( \gamma^\mu \partial_\mu B + \partial_\mu B \gamma^\mu ) \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) }}\right\rangle \\ &= d\alpha d\beta d\sigma \frac{1}{{2}} \partial_\mu B \cdot {\left\langle{{ \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) \gamma^\mu + \gamma^\mu \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) }}\right\rangle}_{2} \\ &= d\alpha d\beta d\sigma \partial_\mu B \cdot \left( \left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) \cdot \gamma^\mu \right) \\ \end{aligned}

Expanding just that trivector-vector dot product

\begin{aligned}\left( \frac{\partial {x}}{\partial {\alpha}} \wedge \frac{\partial {x}}{\partial {\beta}} \wedge \frac{\partial {x}}{\partial {\sigma}} \right) \cdot \gamma^\mu &= \frac{\partial {x^\lambda}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \left( \gamma_\lambda \wedge \gamma_\nu \wedge \gamma_\epsilon \right) \cdot \gamma^\mu \\ &= \frac{\partial {x^\lambda}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \left( \gamma_\lambda \wedge \gamma_\nu {\delta_\epsilon}^\mu -\gamma_\lambda \wedge \gamma_\epsilon {\delta_\nu}^\mu +\gamma_\nu \wedge \gamma_\epsilon {\delta_\lambda}^\mu \right) \end{aligned}
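This contraction identity is easy to spot check numerically. Below is a minimal sketch (an addition, not part of the original post), restricted to Euclidean $\mathbb{R}^3$ so that $\gamma^\mu = \gamma_\mu = e_\mu$: multivectors are represented as length-8 coefficient arrays over the basis blades, the geometric product is built from a bitmask sign rule, and the identity is tested for random vectors (by multilinearity this is equivalent to the basis-vector statement above).

```python
import numpy as np

# Minimal multivector toolkit for Euclidean R^3.  A multivector is a length-8
# array of blade coefficients indexed by bitmask: bit 0 -> e1, bit 1 -> e2,
# bit 2 -> e3 (so index 3 is e1^e2, index 7 is e1^e2^e3, and so on).

def reorder_sign(a, b):
    """Sign picked up reordering the juxtaposed basis blades a, b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1.0 if swaps % 2 else 1.0

def gp(x, y):
    """Geometric product of two multivectors."""
    out = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def grade(x, r):
    """Grade-r part of a multivector."""
    out = np.zeros(8)
    for blade in range(8):
        if bin(blade).count('1') == r:
            out[blade] = x[blade]
    return out

def wedge(x, y, r, s):
    """Outer product of a grade-r and a grade-s blade: the grade r+s part of xy."""
    return grade(gp(x, y), r + s)

def vec(v):
    """Embed a numpy 3-vector as a grade-1 multivector."""
    out = np.zeros(8)
    out[1], out[2], out[4] = v
    return out

rng = np.random.default_rng(1)
a, b, c = (vec(rng.normal(size=3)) for _ in range(3))
T = wedge(wedge(a, b, 1, 1), c, 2, 1)           # a ^ b ^ c

for mask in (1, 2, 4):                          # e1, e2, e3 in turn
    e_mu = np.zeros(8)
    e_mu[mask] = 1.0
    lhs = grade(gp(T, e_mu), 2)                 # (a ^ b ^ c) . e_mu
    rhs = (wedge(a, b, 1, 1) * gp(c, e_mu)[0]   # (a ^ b)(c . e_mu)
           - wedge(a, c, 1, 1) * gp(b, e_mu)[0]
           + wedge(b, c, 1, 1) * gp(a, e_mu)[0])
    assert np.allclose(lhs, rhs)
```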

So we have

\begin{aligned}(\nabla \wedge B) \cdot d^3 x &= d\alpha d\beta d\sigma \frac{\partial {x^\lambda}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \partial_\mu B \cdot \left( \gamma_\lambda \wedge \gamma_\nu {\delta_\epsilon}^\mu -\gamma_\lambda \wedge \gamma_\epsilon {\delta_\nu}^\mu +\gamma_\nu \wedge \gamma_\epsilon {\delta_\lambda}^\mu \right) \\ &= d\alpha d\beta d\sigma \partial_\mu B \cdot \left( \frac{\partial {x^\lambda}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\mu}}{\partial {\sigma}} \gamma_\lambda \wedge \gamma_\nu + \frac{\partial {x^\lambda}}{\partial {\alpha}} \frac{\partial {x^\mu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \gamma_\epsilon \wedge \gamma_\lambda + \frac{\partial {x^\mu}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \gamma_\nu \wedge \gamma_\epsilon \right) \\ &= d\alpha d\beta d\sigma \partial_\mu B \cdot \left( \left( \frac{\partial {x^\nu}}{\partial {\alpha}} \frac{\partial {x^\epsilon}}{\partial {\beta}} \frac{\partial {x^\mu}}{\partial {\sigma}} + \frac{\partial {x^\epsilon}}{\partial {\alpha}} \frac{\partial {x^\mu}}{\partial {\beta}} \frac{\partial {x^\nu}}{\partial {\sigma}} + \frac{\partial {x^\mu}}{\partial {\alpha}} \frac{\partial {x^\nu}}{\partial {\beta}} \frac{\partial {x^\epsilon}}{\partial {\sigma}} \right) \gamma_\nu \wedge \gamma_\epsilon \right) \\ \end{aligned}

Noting that an $\epsilon$, $\nu$ interchange in the first term inverts the sign, we have an exact match with (5), thus fixing the sign of the bivector form of Stokes theorem for the orientation picked in the figure above

\begin{aligned}\int (\nabla \wedge B) \cdot d^3 x &= \int B \cdot d^2 x \end{aligned}

Like the vector case, there is a requirement to be very specific about the meaning given to the oriented surfaces, and the corresponding oriented volume element (which could be a three dimensional volume subspace of a higher dimensional space).
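To build some confidence in this bivector result, here's a numeric spot check (my addition, restricted to Euclidean $\mathbb{R}^3$, so $\gamma^\mu = \gamma_\mu = e_\mu$). Multivectors are length-8 coefficient arrays over the basis blades with a bitmask sign rule for the geometric product. A hypothetical linear bivector field $B(x) = B_0 + x^k C_k$ is used so that midpoint evaluation of each flat face integral is exact:

```python
import numpy as np

# Bitmask multivector toolkit for Euclidean R^3: a multivector is a length-8
# coefficient array over the basis blades, bit 0 -> e1, bit 1 -> e2, bit 2 -> e3.

def reorder_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1.0 if swaps % 2 else 1.0

def gp(x, y):                  # geometric product
    out = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def grade(x, r):               # grade-r part
    out = np.zeros(8)
    for blade in range(8):
        if bin(blade).count('1') == r:
            out[blade] = x[blade]
    return out

def wedge2(x, y):              # outer product of two vectors
    return grade(gp(x, y), 2)

def vec(v):                    # embed a numpy 3-vector as a grade-1 multivector
    out = np.zeros(8)
    out[1], out[2], out[4] = v
    return out

rng = np.random.default_rng(2)

# linear bivector field B(x) = B0 + x^k C_k with random constant bivectors C_k
B0 = wedge2(vec(rng.normal(size=3)), vec(rng.normal(size=3)))
C = [wedge2(vec(rng.normal(size=3)), vec(rng.normal(size=3))) for _ in range(3)]

def B(x):
    return B0 + sum(x[k] * C[k] for k in range(3))

# parallelepiped x(alpha, beta, sigma) = p + alpha u + beta v + sigma w,
# integrated over the unit parameter cube
p, u, v, w = (rng.normal(size=3) for _ in range(4))
U, V, W = vec(u), vec(v), vec(w)

# left side: (grad ^ B) . d^3 x = (sum_k e_k ^ C_k) . (u ^ v ^ w), a scalar
curlB = np.zeros(8)
for k, mask in enumerate((1, 2, 4)):
    e_k = np.zeros(8)
    e_k[mask] = 1.0
    curlB += grade(gp(e_k, C[k]), 3)
d3x = grade(gp(wedge2(U, V), W), 3)
lhs = gp(curlB, d3x)[0]

# right side: B . dA summed over the six faces with the orientations of the
# post; midpoint evaluation is exact since B is linear and the faces are flat
def flux(center, area):
    return gp(B(center), area)[0]

mid = p + 0.5 * (u + v + w)
rhs = (flux(mid + 0.5 * w, wedge2(U, V)) - flux(mid - 0.5 * w, wedge2(U, V))
     + flux(mid + 0.5 * u, wedge2(V, W)) - flux(mid - 0.5 * u, wedge2(V, W))
     + flux(mid + 0.5 * v, wedge2(W, U)) - flux(mid - 0.5 * v, wedge2(W, U)))

assert np.isclose(lhs, rhs)
```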

## Stokes theorem applied to a vector field.

Posted by peeterjoot on July 17, 2009

# Obsolete with potential errors.

This post may be in error.  I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

See the post ‘Stokes theorem in Geometric algebra’ [PDF], where this topic has been revisited with this in mind.

# Motivation

I found myself forgetting Stokes theorem once again. Let's redo this for the simplest case of a parallelogram area element.

What I recall is that we have on one side the curl dotted into the plane of the surface area element

\begin{aligned} \int ( \nabla \wedge A ) \cdot d^2 x \end{aligned}

and on the other side a loop integral (implying here a counterclockwise orientation: any idea how to render \ointctrclockwise in WordPress?)

\begin{aligned} \int A \cdot dx \end{aligned}

Comparing the two we should end up with the same form and thus determine the form of the grade two Stokes equation (i.e. for curl of a vector).

# Bivector product part.

\begin{aligned} ( \nabla \wedge A ) \cdot d^2 x &= ( \nabla \wedge A ) \cdot \left(\frac{\partial x}{\partial \alpha} \wedge \frac{\partial x}{\partial \beta}\right) d\alpha d\beta \\ &= \partial_\mu A_\nu \frac{\partial x^\sigma}{\partial \alpha} \frac{\partial x^\epsilon}{\partial \beta} (\gamma^\mu \wedge \gamma^\nu) \cdot (\gamma_\sigma \wedge \gamma_\epsilon) d\alpha d\beta \\ &= \partial_\mu A_\nu \frac{\partial x^\sigma}{\partial \alpha} \frac{\partial x^\epsilon}{\partial \beta} ( {\delta^\mu}_\epsilon {\delta^\nu}_\sigma - {\delta^\mu}_\sigma {\delta^\nu}_\epsilon ) d\alpha d\beta \\ &= \partial_\mu A_\nu \left( \frac{\partial x^\nu}{\partial \alpha} \frac{\partial x^\mu}{\partial \beta} - \frac{\partial x^\mu}{\partial \alpha} \frac{\partial x^\nu}{\partial \beta} \right) d\alpha d\beta \\ \end{aligned}

So we have

\begin{aligned} ( \nabla \wedge A ) \cdot d^2 x &= -\partial_\mu A_\nu \frac{\partial (x^\mu, x^\nu)}{\partial (\alpha, \beta)} d\alpha d\beta \end{aligned} \quad\quad\quad(3)
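The bivector contraction $(\gamma^\mu \wedge \gamma^\nu) \cdot (\gamma_\sigma \wedge \gamma_\epsilon) = {\delta^\mu}_\epsilon {\delta^\nu}_\sigma - {\delta^\mu}_\sigma {\delta^\nu}_\epsilon$ used in the expansion above can be spot checked numerically. The sketch below (my addition) assumes a Euclidean signature, so that the reciprocal frame coincides with the frame, and represents the bivector $a \wedge b$ by the antisymmetric matrix $a b^\mathrm{T} - b a^\mathrm{T}$, for which the scalar dot product $(a \wedge b) \cdot (c \wedge d) = (a \cdot d)(b \cdot c) - (a \cdot c)(b \cdot d)$ is half the trace of the matrix product:

```python
import numpy as np

def biv(a, b):
    """Antisymmetric-matrix representation of the bivector a ^ b."""
    return np.outer(a, b) - np.outer(b, a)

def biv_dot(B1, B2):
    """(a^b) . (c^d) = (a.d)(b.c) - (a.c)(b.d), as half a matrix trace."""
    return 0.5 * np.trace(B1 @ B2)

e = np.eye(4)  # four orthonormal frame vectors; Euclidean, so e^mu = e_mu
for mu in range(4):
    for nu in range(4):
        for sig in range(4):
            for eps in range(4):
                lhs = biv_dot(biv(e[mu], e[nu]), biv(e[sig], e[eps]))
                rhs = (mu == eps) * (nu == sig) - (mu == sig) * (nu == eps)
                assert np.isclose(lhs, rhs)
```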

# Loop integral part.

Integrating around a parallelogram spacetime area element with sides $d\alpha \partial x/\partial \alpha$ and $d\beta \partial x/\partial \beta$ we have

[Figure: parallelogram surface area element]

\begin{aligned} \int A \cdot dx &= \int {\left. A \right\vert}_{\beta=\beta_0} \cdot \frac{\partial x}{\partial \alpha} d\alpha + {\left. A \right\vert}_{\alpha=\alpha_1} \cdot \frac{\partial x}{\partial \beta} d\beta + {\left. A \right\vert}_{\beta=\beta_1} \cdot \left( -\frac{\partial x}{\partial \alpha} d\alpha \right) + {\left. A \right\vert}_{\alpha=\alpha_0} \cdot \left( -\frac{\partial x}{\partial \beta} d\beta \right) \\ &= \int \left( {\left. A \right\vert}_{\alpha=\alpha_1} - {\left. A \right\vert}_{\alpha=\alpha_0} \right) \cdot \frac{\partial x}{\partial \beta} d\beta -\left( {\left. A \right\vert}_{\beta=\beta_1} - {\left. A \right\vert}_{\beta=\beta_0} \right) \cdot \frac{\partial x}{\partial \alpha} d\alpha \\ &= \int \frac{\partial A}{\partial \alpha} \cdot \frac{\partial x}{\partial \beta} d\alpha d\beta -\frac{\partial A}{\partial \beta} \cdot \frac{\partial x}{\partial \alpha} d\beta d\alpha \end{aligned}

Expanding the derivatives in terms of coordinates, with $\sigma$ here standing in for either of the parameters $\alpha$ or $\beta$, we have

\begin{aligned} \frac{\partial A}{\partial \sigma} &= \frac{\partial A_\mu}{\partial \sigma} \gamma^\mu \\ &= \frac{\partial A_\mu}{\partial x^\nu}\frac{\partial x^\nu}{\partial \sigma} \gamma^\mu \\ &= \partial_\nu A_\mu \frac{\partial x^\nu}{\partial \sigma} \gamma^\mu \\ \end{aligned}

and

\begin{aligned} \frac{\partial x}{\partial \sigma} &= \frac{\partial x^\nu}{\partial \sigma} \gamma_\nu \end{aligned}

Assembling we have

\begin{aligned} \int A \cdot dx &= \int \partial_\nu A_\mu \left( \frac{\partial x^\nu}{\partial \alpha} \frac{\partial x^\mu}{\partial \beta} - \frac{\partial x^\nu}{\partial \beta} \frac{\partial x^\mu}{\partial \alpha} \right) d\alpha d\beta \end{aligned}

In terms of the Jacobian used in (3) we have

\begin{aligned} \int A \cdot dx &= \int \partial_\mu A_\nu \frac{\partial (x^\mu, x^\nu)}{\partial (\alpha, \beta)} d\alpha d\beta \end{aligned}

Comparing the two, we have only a sign difference, so the conclusion is that Stokes theorem for a vector field (considering only a flat parallelogram area element) is

\begin{aligned} \int ( \nabla \wedge A ) \cdot d^2 x &= -\int A \cdot dx \end{aligned}

Observe that there’s an implied orientation of the area element on the LHS, required to match up with the (reversed) counterclockwise orientation of the RHS integral.
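As a closing sanity check (my addition), the classical statement on the same parallelogram, $\oint A \cdot dx = (\nabla \times A) \cdot (u \times v)$, can be verified numerically for a linear field. This is consistent with the minus sign above, since in $\mathbb{R}^3$ one has $(\nabla \wedge A) \cdot (u \wedge v) = -(\nabla \times A) \cdot (u \times v)$.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))                 # linear field A(x) = M x
p, u, v = (rng.normal(size=3) for _ in range(3))

def A(x):
    return M @ x

def edge(start, d):
    """Line integral of A . dx along start -> start + d; midpoint rule, exact for linear A."""
    return A(start + 0.5 * d) @ d

# counterclockwise loop p -> p+u -> p+u+v -> p+v -> p
circulation = edge(p, u) + edge(p + u, v) + edge(p + u + v, -u) + edge(p + v, -v)

# curl components (grad x A)_i = eps_ijk dA_k/dx^j, with dA_k/dx^j = M[k, j]
curl = np.array([M[2, 1] - M[1, 2], M[0, 2] - M[2, 0], M[1, 0] - M[0, 1]])
flux = curl @ np.cross(u, v)                # (grad x A) . (u x v)

assert np.isclose(circulation, flux)
```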