Peeter Joot's Blog.

Math, physics, perl, and programming obscurity.



Plane wave solutions of Maxwell’s equation using Geometric Algebra

Posted by peeterjoot on September 3, 2012

Motivation

Study of reflection and transmission of radiation in isotropic, charge and current free, linear matter utilizes the plane wave solutions to Maxwell’s equations. These have the structure of phasor equations, with some specific constraints on the components and the exponents.

These constraints are usually derived starting with the plain old vector form of Maxwell’s equations, and it is natural to wonder how this is done directly using Geometric Algebra. [1] provides one such derivation, using the covariant form of Maxwell’s equations. Here’s a slightly more pedestrian way of doing the same.

Maxwell’s equations in media

We start with Maxwell’s equations for linear matter as found in [2]

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1a)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.1b)

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1c)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.1d)

We merge these using the geometric identity

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a} = \boldsymbol{\nabla} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.2.2)

where $I$ is the 3D pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, to find
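As a sanity check on this identity (my own verification aid, not part of the original derivation), we can verify it in the Pauli matrix representation of the 3D geometric algebra, where $\mathbf{e}_k \rightarrow \sigma_k$ and the pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \rightarrow i$ times the identity matrix. Here's a short sympy sketch of that check for an arbitrarily chosen polynomial field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = [x, y, z]

# Pauli matrices: a faithful matrix representation of the 3D GA,
# with e_k -> sigma_k and the pseudoscalar I = e1 e2 e3 -> i * Identity.
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
sigma = [s1, s2, s3]
Id = sp.eye(2)

def vec(a):
    # map a 3-vector (list of components) to its matrix representative a . sigma
    return sum((a[k] * sigma[k] for k in range(3)), sp.zeros(2, 2))

# an arbitrary smooth test field
a = [x**2 * y, y * z + x, sp.sin(x) * z]

div_a = sum(sp.diff(a[k], X[k]) for k in range(3))
curl_a = [sp.diff(a[2], y) - sp.diff(a[1], z),
          sp.diff(a[0], z) - sp.diff(a[2], x),
          sp.diff(a[1], x) - sp.diff(a[0], y)]

# LHS: grad a = e_m d_m a  ->  sigma_m d_m (a . sigma)
lhs = sum((sigma[m] * sp.diff(vec(a), X[m]) for m in range(3)), sp.zeros(2, 2))
# RHS: div a + I curl a  ->  (div a) Id + i (curl a) . sigma
rhs = div_a * Id + sp.I * vec(curl_a)

assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
print("grad a = div a + I curl a verified")
```

The verification reduces to the Pauli algebra $\sigma_j \sigma_k = \delta_{jk} + i \epsilon_{jkm} \sigma_m$, whose scalar and antisymmetric parts are exactly the dot and cross products.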

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} = -I \frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.3a)

\begin{aligned}\boldsymbol{\nabla} \mathbf{B} = I \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.3b)

We want dimensions of $1/L$ for the derivative operator on the RHS of 1.2.3b, so we divide through by $\sqrt{\mu\epsilon} I$ and use $1/I = -I$ for

\begin{aligned}-I \frac{1}{{\sqrt{\mu\epsilon}}} \boldsymbol{\nabla} \mathbf{B} = \sqrt{\mu\epsilon} \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.4)

This can now be added to 1.2.3a for

\begin{aligned}\left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

This is Maxwell’s equation in linear isotropic charge and current free matter in Geometric Algebra form.

Phasor solutions

We write the electromagnetic field as

\begin{aligned}F = \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that for vacuum where $1/\sqrt{\mu \epsilon} = c$ we have the usual $F = \mathbf{E} + I c \mathbf{B}$. Assuming a phasor solution of

\begin{aligned}\tilde{F} = F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.3.7)

where $F_0$ is allowed to be complex, and the actual field is obtained by taking the real part

\begin{aligned}F = \text{Real} \tilde{F} = \text{Real}(F_0) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}(F_0) \sin(\mathbf{k} \cdot \mathbf{x} - \omega t).\end{aligned} \hspace{\stretch{1}}(1.3.8)

Note carefully that we are using a scalar imaginary $i$, as well as the multivector (pseudoscalar) $I$, despite the fact that both square to the scalar $-1$.

We now seek the constraints on $\mathbf{k}$, $\omega$, and $F_0$ that allow this to be a solution to 1.2.5

\begin{aligned}0 = \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

As usual in the non-geometric algebra treatment, we observe that any such solution $F$ to Maxwell's equation is also a wave equation solution. In GA we can show this by left multiplying with the conjugate operator,

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &= \left(\boldsymbol{\nabla} - \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \mu\epsilon \frac{\partial^2}{\partial t^2} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \frac{1}{{v^2}} \frac{\partial^2}{\partial t^2} \right) \tilde{F},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.10)

where $v = 1/\sqrt{\mu\epsilon}$ is the speed of the wave described by this solution.

Inserting the exponential form of our assumed solution 1.3.7 we find

\begin{aligned}0 = -(\mathbf{k}^2 - \omega^2/v^2) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)},\end{aligned} \hspace{\stretch{1}}(1.3.11)

which implies that the wave number vector $\mathbf{k}$ and the angular frequency $\omega$ are related by

\begin{aligned}v^2 \mathbf{k}^2 = \omega^2.\end{aligned} \hspace{\stretch{1}}(1.3.12)

Our assumed solution must also satisfy the first order system 1.3.9. Using 1.3.12 to write $\omega/v = k = \left\lvert {\mathbf{k}} \right\rvert$, we find

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i\left(\mathbf{e}_m k_m - \frac{\omega}{v}\right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i k ( \hat{\mathbf{k}} - 1 ) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.13)

The constraints on $F_0$ must then be given by

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) F_0.\end{aligned} \hspace{\stretch{1}}(1.3.14)

With

\begin{aligned}F_0 = \mathbf{E}_0 + I v \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.15)

we must then have all grades of the multivector equation equal to zero

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) \left(\mathbf{E}_0 + I v \mathbf{B}_0\right).\end{aligned} \hspace{\stretch{1}}(1.3.16)

Writing out all the geometric products, noting that $I$ commutes with all of $\hat{\mathbf{k}}$, $\mathbf{E}_0$, and $\mathbf{B}_0$ and employing the identity $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$ we have

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0- \mathbf{E}_0+ \hat{\mathbf{k}} \wedge \mathbf{E}_0+ I v \hat{\mathbf{k}} \cdot \mathbf{B}_0+ I v \hat{\mathbf{k}} \wedge \mathbf{B}_0- I v \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.17)

Collecting the scalar, vector, bivector, and pseudoscalar grades respectively, this is

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18a)

\begin{aligned}\mathbf{E}_0 =- \hat{\mathbf{k}} \times v \mathbf{B}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18b)

\begin{aligned}v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18c)

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.18d)

This and 1.3.12 describe all the constraints on our phasor that are required for it to be a solution. Note that only one of the two cross product equations is required, because the two are not independent. This can be shown by crossing $\hat{\mathbf{k}}$ with 1.3.18b and using the identity

\begin{aligned}\mathbf{a} \times (\mathbf{a} \times \mathbf{b}) = - \mathbf{a}^2 \mathbf{b} + \mathbf{a} (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \hspace{\stretch{1}}(1.3.19)
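This identity is easy to check numerically (a quick sketch, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# a x (a x b) = -a^2 b + a (a . b)
lhs = np.cross(a, np.cross(a, b))
rhs = -np.dot(a, a) * b + a * np.dot(a, b)

assert np.allclose(lhs, rhs)
```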

One can find easily that 1.3.18b and 1.3.18c provide the same relationship between the $\mathbf{E}_0$ and $\mathbf{B}_0$ components of $F_0$. Writing out the complete expression for $F_0$ we have

\begin{aligned}\begin{aligned}F_0 &= \mathbf{E}_0 + I v \mathbf{B}_0 \\ &=\mathbf{E}_0 + I \hat{\mathbf{k}} \times \mathbf{E}_0 \\ &=\mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.20)

Since $\hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0$, this is

\begin{aligned}F_0 = (1 + \hat{\mathbf{k}}) \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(1.3.21)

Had we been clever enough this could have been deduced directly from 1.3.14, since we require a product that is annihilated by left multiplication with $\hat{\mathbf{k}} - 1$, and $(\hat{\mathbf{k}} - 1)(1 + \hat{\mathbf{k}}) = \hat{\mathbf{k}}^2 - 1 = 0$. Our complete plane wave solution to Maxwell's equation is therefore given by

\begin{aligned}\begin{aligned}F &= \text{Real}(\tilde{F}) = \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \\ \tilde{F} &= (1 \pm \hat{\mathbf{k}}) \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} \mp \omega t)} \\ 0 &= \hat{\mathbf{k}} \cdot \mathbf{E}_0 \\ \mathbf{k}^2 &= \omega^2 \mu \epsilon.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.22)
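We can spot check the annihilation property of this solution numerically, again using the Pauli matrix representation of the 3D GA ($\mathbf{e}_k \rightarrow \sigma_k$, an aside rather than part of the derivation). Since $\hat{\mathbf{k}}^2 = 1$, the factor $(1 + \hat{\mathbf{k}})$ is killed by left multiplication with $\hat{\mathbf{k}} - 1$:

```python
import numpy as np

# Pauli representation of the 3D GA: e_k -> sigma_k
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]])
Id = np.eye(2)

def vec(a):
    # matrix representative of the vector a: a . sigma
    return np.einsum('k,kij->ij', a, s)

rng = np.random.default_rng(1)
khat = rng.standard_normal(3)
khat /= np.linalg.norm(khat)

E0 = rng.standard_normal(3)
E0 -= khat * (khat @ E0)          # enforce the constraint khat . E0 = 0

F0 = (Id + vec(khat)) @ vec(E0)   # F0 = (1 + khat) E0

# (khat - 1) F0 should vanish, as in 1.3.14
residual = (vec(khat) - Id) @ F0
assert np.allclose(residual, 0)
```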

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

Putting the stress tensor (and traction vector) into explicit vector form.

Posted by peeterjoot on April 8, 2012


Motivation.

Exercise 6.1 from [1] is to show that the traction vector can be written in vector form (a rather curious thing to have to say) as

\begin{aligned}\mathbf{t} = -p \hat{\mathbf{n}} + \mu ( 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla})\mathbf{u} + \hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u})).\end{aligned} \hspace{\stretch{1}}(1.1)

Note that the text uses a wedge symbol for the cross product, and I’ve switched to standard notation. I’ve done so because the use of a Geometric-Algebra wedge product also can be used to express this relationship, in which case we would write

\begin{aligned}\mathbf{t} = -p \hat{\mathbf{n}} + \mu ( 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u} + (\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{n}}).\end{aligned} \hspace{\stretch{1}}(1.2)

In either case we have

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{n}}=\hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u})=\boldsymbol{\nabla}' (\hat{\mathbf{n}} \cdot \mathbf{u}') - (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u}\end{aligned} \hspace{\stretch{1}}(1.3)

(where the primes indicate the scope of the gradient, showing here that we are operating only on $\mathbf{u}$, and not $\hat{\mathbf{n}}$).
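For constant $\hat{\mathbf{n}}$ this identity can be verified mechanically with a few lines of sympy (my own sanity check, not from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = sp.Matrix([x, y, z])

# a constant symbolic normal and an arbitrary smooth test field u
n1, n2, n3 = sp.symbols('n1 n2 n3', real=True)
n = sp.Matrix([n1, n2, n3])
u = sp.Matrix([x**2 * y + z, y**2 * z, sp.sin(x) * y])

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

def grad(f):
    return sp.Matrix([sp.diff(f, w) for w in (x, y, z)])

lhs = n.cross(curl(u))
# nabla'(n . u') - (n . nabla) u: since n is constant, the primes take
# care of themselves and grad(n . u) only differentiates u.
rhs = grad(n.dot(u)) - sum((n[k] * sp.diff(u, X[k]) for k in range(3)),
                           sp.zeros(3, 1))

assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```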

After computing this, let's also compute the stress tensor in cylindrical and spherical coordinates (a portion of that is also problem 6.10), something that this allows us to do fairly easily without having to deal with the second order terms that we encountered doing this by computing the difference of squared displacements.

We’ll work primarily with just the strain tensor portion of the traction vector expressions above, calculating

\begin{aligned}2 {\mathbf{e}}_{\hat{\mathbf{n}}}=2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla})\mathbf{u} + \hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u})=2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla})\mathbf{u} + (\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{n}}.\end{aligned} \hspace{\stretch{1}}(1.4)

We’ll see that this gives us a nice way to interpret these tensor relationships. The interpretation was less clear when we computed this from the second order difference method, but here we see that we are just looking at the components of the force in each of the respective directions, dependent on which way our normal is specified.

Verifying the relationship.

\begin{aligned}(\hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u}) + 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u})_i&=n_a (\boldsymbol{\nabla} \times \mathbf{u})_b \epsilon_{a b i} + 2 n_a \partial_a u_i \\ &=n_a \partial_r u_s \epsilon_{r s b} \epsilon_{a b i} + 2 n_a \partial_a u_i \\ &=n_a \partial_r u_s \delta_{ia}^{[rs]} + 2 n_a \partial_a u_i \\ &=n_a ( \partial_i u_a -\partial_a u_i ) + 2 n_a \partial_a u_i \\ &=n_a \partial_i u_a + n_a \partial_a u_i \\ &=n_a (\partial_i u_a + \partial_a u_i) \\ &=\sigma_{i a } n_a\end{aligned}

We can also put the double cross product in wedge product form

\begin{aligned}\hat{\mathbf{n}} \times (\boldsymbol{\nabla} \times \mathbf{u})&=-I \hat{\mathbf{n}} \wedge (\boldsymbol{\nabla} \times \mathbf{u}) \\ &=-\frac{I}{2}\left(\hat{\mathbf{n}} (\boldsymbol{\nabla} \times \mathbf{u})- (\boldsymbol{\nabla} \times \mathbf{u}) \hat{\mathbf{n}}\right) \\ &=-\frac{I}{2}\left(-I \hat{\mathbf{n}} (\boldsymbol{\nabla} \wedge \mathbf{u})+ I (\boldsymbol{\nabla} \wedge \mathbf{u}) \hat{\mathbf{n}}\right) \\ &=-\frac{I^2}{2}\left(- \hat{\mathbf{n}} (\boldsymbol{\nabla} \wedge \mathbf{u})+ (\boldsymbol{\nabla} \wedge \mathbf{u}) \hat{\mathbf{n}}\right) \\ &=(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{n}}\end{aligned}

Equivalently (and easier) we can just expand the dot product of the wedge and the vector using the relationship

\begin{aligned}\mathbf{a} \cdot (\mathbf{c} \wedge \mathbf{d} \wedge \mathbf{e} \wedge \cdots )=(\mathbf{a} \cdot \mathbf{c}) (\mathbf{d} \wedge \mathbf{e} \wedge \cdots ) - (\mathbf{a} \cdot \mathbf{d}) (\mathbf{c} \wedge \mathbf{e} \wedge \cdots ) + (\mathbf{a} \cdot \mathbf{e}) (\mathbf{c} \wedge \mathbf{d} \wedge \cdots ) - \cdots,\end{aligned} \hspace{\stretch{1}}(2.5)

so we find

\begin{aligned}((\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{n}} + 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u})_i&=(\boldsymbol{\nabla}' (\mathbf{u}' \cdot \hat{\mathbf{n}})-(\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u}+ 2 (\hat{\mathbf{n}} \cdot \boldsymbol{\nabla}) \mathbf{u})_i \\ &=\partial_i u_a n_a+n_a \partial_a u_i \\ &=\sigma_{ia} n_a.\end{aligned}

Cylindrical strain tensor.

Let’s now compute the strain tensor (and implicitly the traction vector) in cylindrical coordinates.

Our gradient in cylindrical coordinates is the familiar

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \hat{\boldsymbol{\phi}} \frac{1}{{r }}\frac{\partial {}}{\partial {\phi}} + \hat{\mathbf{z}} \frac{\partial {}}{\partial {z}},\end{aligned} \hspace{\stretch{1}}(3.6)

and our cylindrical velocity is

\begin{aligned}\mathbf{u} = \hat{\mathbf{r}} u_r + \hat{\boldsymbol{\phi}} u_\phi + \hat{\mathbf{z}} u_z.\end{aligned} \hspace{\stretch{1}}(3.7)

Our curl is then

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{u}&=\left(\hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \hat{\boldsymbol{\phi}} \frac{1}{{r }}\frac{\partial {}}{\partial {\phi}} + \hat{\mathbf{z}} \frac{\partial {}}{\partial {z}}\right)\wedge\left(\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\phi}} u_\phi + \hat{\mathbf{z}} u_z\right) \\ &=\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}}\left(\partial_r u_\phi -\frac{1}{{r}} \partial_\phi u_r\right)+\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{z}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+\hat{\mathbf{z}} \wedge \hat{\mathbf{r}}\left(\partial_z u_r - \partial_r u_z\right)+\frac{1}{{r}} \hat{\boldsymbol{\phi}} \wedge \left((\partial_\phi \hat{\mathbf{r}}) u_r+(\partial_\phi \hat{\boldsymbol{\phi}}) u_\phi\right)\end{aligned}

Since $\partial_\phi \hat{\mathbf{r}} = \hat{\boldsymbol{\phi}}$ and $\partial_\phi \hat{\boldsymbol{\phi}} = -\hat{\mathbf{r}}$, we have only one cross term and our curl is

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{u}=\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{z}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+\hat{\mathbf{z}} \wedge \hat{\mathbf{r}}\left(\partial_z u_r - \partial_r u_z\right).\end{aligned} \hspace{\stretch{1}}(3.8)
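Since $\boldsymbol{\nabla} \wedge \mathbf{u} = I (\boldsymbol{\nabla} \times \mathbf{u})$ and $\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}} = I \hat{\mathbf{z}}$ (and cyclically), the bivector coefficients above are just the cylindrical curl components. Here's a sympy spot check of 3.8 against a Cartesian curl computation at an arbitrary point (a verification aid, with an arbitrarily chosen test field, not part of the derivation):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r, phi = sp.sqrt(x**2 + y**2), sp.atan2(y, x)

# arbitrary smooth test components, written as functions of (R, PHI, Z)
R, PHI, Z = sp.symbols('R PHI Z', real=True)
ur_f = R**2 * sp.cos(PHI) + Z
uphi_f = R * Z
uz_f = R * sp.sin(PHI)
to_xyz = {R: r, PHI: phi, Z: z}

rhat = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
phihat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
zhat = sp.Matrix([0, 0, 1])

# the same field in Cartesian form
u = (ur_f.subs(to_xyz) * rhat + uphi_f.subs(to_xyz) * phihat
     + uz_f.subs(to_xyz) * zhat)

curl_u = sp.Matrix([sp.diff(u[2], y) - sp.diff(u[1], z),
                    sp.diff(u[0], z) - sp.diff(u[2], x),
                    sp.diff(u[1], x) - sp.diff(u[0], y)])

# bivector coefficients from 3.8, paired with the dual curl direction
coef_rphi = sp.diff(uphi_f, R) - sp.diff(ur_f, PHI)/R + uphi_f/R
coef_phiz = sp.diff(uz_f, PHI)/R - sp.diff(uphi_f, Z)
coef_zr = sp.diff(ur_f, Z) - sp.diff(uz_f, R)

pt = {x: sp.Rational(3, 2), y: sp.Rational(1, 2), z: 2}
pt_cyl = {R: r.subs(pt), PHI: phi.subs(pt), Z: 2}

for coef, e in [(coef_rphi, zhat), (coef_phiz, rhat), (coef_zr, phihat)]:
    assert abs(float(curl_u.dot(e).subs(pt)) - float(coef.subs(pt_cyl))) < 1e-9
```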

We can now move on to compute the directional derivatives and complete the strain calculation in cylindrical coordinates. Let's consider this computation of the stress for normals in each direction in turn.

With $\hat{\mathbf{n}} = \hat{\mathbf{r}}$.

Our directional derivative component for a $\hat{\mathbf{r}}$ normal direction doesn’t have any cross terms

\begin{aligned}2 (\hat{\mathbf{r}} \cdot \boldsymbol{\nabla}) \mathbf{u}&=2 \partial_r\left(\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\phi}} u_\phi + \hat{\mathbf{z}} u_z\right) \\ &=2\left(\hat{\mathbf{r}} \partial_r u_r + \hat{\boldsymbol{\phi}} \partial_r u_\phi + \hat{\mathbf{z}} \partial_r u_z\right).\end{aligned}

Projecting our curl bivector onto the $\hat{\mathbf{r}}$ direction we have

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{r}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\mathbf{r}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{z}}) \cdot \hat{\mathbf{r}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+(\hat{\mathbf{z}} \wedge \hat{\mathbf{r}}) \cdot \hat{\mathbf{r}}\left(\partial_z u_r - \partial_r u_z\right) \\ &=-\hat{\boldsymbol{\phi}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+\hat{\mathbf{z}}\left(\partial_z u_r - \partial_r u_z\right).\end{aligned}

Putting things together we have

\begin{aligned}2 \mathbf{e}_{\hat{\mathbf{r}}}&=2\left(\hat{\mathbf{r}} \partial_r u_r + \hat{\boldsymbol{\phi}} \partial_r u_\phi + \hat{\mathbf{z}} \partial_r u_z\right)-\hat{\boldsymbol{\phi}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+\hat{\mathbf{z}}\left(\partial_z u_r - \partial_r u_z\right) \\ &=\hat{\mathbf{r}}\left(2 \partial_r u_r\right)+\hat{\boldsymbol{\phi}}\left(2 \partial_r u_\phi-\partial_r u_\phi+\frac{1}{{r}} \partial_\phi u_r- \frac{u_\phi}{r}\right)+\hat{\mathbf{z}}\left(2 \partial_r u_z+\partial_z u_r - \partial_r u_z\right).\end{aligned}

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\mathbf{r}}} = - p \hat{\mathbf{r}} + 2 \mu e_{\hat{\mathbf{r}}},\end{aligned} \hspace{\stretch{1}}(3.9)

we can now read off our components by taking dot products to yield

\begin{subequations}

\begin{aligned}\sigma_{rr}=-p + 2 \mu \frac{\partial {u_r}}{\partial {r}}\end{aligned} \hspace{\stretch{1}}(3.10a)

\begin{aligned}\sigma_{r \phi}=\mu \left( \frac{\partial {u_\phi}}{\partial {r}}+\frac{1}{{r}} \frac{\partial {u_r}}{\partial {\phi}}- \frac{u_\phi}{r}\right)\end{aligned} \hspace{\stretch{1}}(3.10b)

\begin{aligned}\sigma_{r z}=\mu \left( \frac{\partial {u_z}}{\partial {r}}+\frac{\partial {u_r}}{\partial {z}}\right).\end{aligned} \hspace{\stretch{1}}(3.10c)

\end{subequations}

With $\hat{\mathbf{n}} = \hat{\boldsymbol{\phi}}$.

Our directional derivative component for a $\hat{\boldsymbol{\phi}}$ normal direction will have some cross terms since both $\hat{\mathbf{r}}$ and $\hat{\boldsymbol{\phi}}$ are functions of $\phi$

\begin{aligned}2 (\hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla}) \mathbf{u}&=\frac{2}{r}\partial_\phi\left(\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\phi}} u_\phi + \hat{\mathbf{z}} u_z\right) \\ &=\frac{2}{r}\left(\hat{\mathbf{r}} \partial_\phi u_r + \hat{\boldsymbol{\phi}} \partial_\phi u_\phi + \hat{\mathbf{z}} \partial_\phi u_z+(\partial_\phi \hat{\mathbf{r}}) u_r + (\partial_\phi \hat{\boldsymbol{\phi}}) u_\phi\right) \\ &=\frac{2}{r}\left(\hat{\mathbf{r}} (\partial_\phi u_r - u_\phi) + \hat{\boldsymbol{\phi}} (\partial_\phi u_\phi + u_r )+ \hat{\mathbf{z}} \partial_\phi u_z\right) \\ \end{aligned}

Projecting our curl bivector onto the $\hat{\boldsymbol{\phi}}$ direction we have

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\boldsymbol{\phi}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\boldsymbol{\phi}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{z}}) \cdot \hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+(\hat{\mathbf{z}} \wedge \hat{\mathbf{r}}) \cdot \hat{\boldsymbol{\phi}}\left(\partial_z u_r - \partial_r u_z\right) \\ &=\hat{\mathbf{r}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)-\hat{\mathbf{z}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)\end{aligned}

Putting things together we have

\begin{aligned}2 \mathbf{e}_{\hat{\boldsymbol{\phi}}}&=\frac{2}{r}\left(\hat{\mathbf{r}} (\partial_\phi u_r - u_\phi) + \hat{\boldsymbol{\phi}} (\partial_\phi u_\phi + u_r )+ \hat{\mathbf{z}} \partial_\phi u_z\right)+\hat{\mathbf{r}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)-\hat{\mathbf{z}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right) \\ &=\hat{\mathbf{r}}\left(\frac{1}{r}\partial_\phi u_r-\frac{u_\phi}{r}+\partial_r u_\phi\right)+\frac{2}{r} \hat{\boldsymbol{\phi}}\left(\partial_\phi u_\phi + u_r\right)+\hat{\mathbf{z}}\left(\frac{1}{r} \partial_\phi u_z + \partial_z u_\phi\right).\end{aligned}

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\boldsymbol{\phi}}} = - p \hat{\boldsymbol{\phi}} + 2 \mu e_{\hat{\boldsymbol{\phi}}},\end{aligned} \hspace{\stretch{1}}(3.11)

we can now read off our components by taking dot products to yield

\begin{subequations}

\begin{aligned}\sigma_{\phi \phi}=-p + 2 \mu \left(\frac{1}{{r}}\frac{\partial {u_\phi}}{\partial {\phi}} + \frac{u_r}{r}\right)\end{aligned} \hspace{\stretch{1}}(3.12a)

\begin{aligned}\sigma_{\phi z}=\mu \left(\frac{1}{r} \frac{\partial {u_z}}{\partial {\phi}} + \frac{\partial {u_\phi}}{\partial {z}}\right)\end{aligned} \hspace{\stretch{1}}(3.12b)

\begin{aligned}\sigma_{\phi r}=\mu \left(\frac{1}{r}\frac{\partial {u_r}}{\partial {\phi}}-\frac{u_\phi}{r}+\frac{\partial {u_\phi}}{\partial {r}}\right).\end{aligned} \hspace{\stretch{1}}(3.12c)

\end{subequations}

With $\hat{\mathbf{n}} = \hat{\mathbf{z}}$.

Like the $\hat{\mathbf{r}}$ normal direction, our directional derivative component for a $\hat{\mathbf{z}}$ normal direction will not have any cross terms

\begin{aligned}2 (\hat{\mathbf{z}} \cdot \boldsymbol{\nabla}) \mathbf{u}&=2 \partial_z\left(\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\phi}} u_\phi + \hat{\mathbf{z}} u_z\right) \\ &=2\left(\hat{\mathbf{r}} \partial_z u_r + \hat{\boldsymbol{\phi}} \partial_z u_\phi + \hat{\mathbf{z}} \partial_z u_z\right)\end{aligned}

Projecting our curl bivector onto the $\hat{\mathbf{z}}$ direction we have

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{z}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\mathbf{z}}\left(\partial_r u_\phi-\frac{1}{{r}} \partial_\phi u_r+ \frac{u_\phi}{r}\right)+(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{z}}) \cdot \hat{\mathbf{z}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+(\hat{\mathbf{z}} \wedge \hat{\mathbf{r}}) \cdot \hat{\mathbf{z}}\left(\partial_z u_r - \partial_r u_z\right) \\ &=\hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)-\hat{\mathbf{r}}\left(\partial_z u_r - \partial_r u_z\right)\end{aligned}

Putting things together we have

\begin{aligned}2 \mathbf{e}_{\hat{\mathbf{z}}}&=2 \hat{\mathbf{r}} \partial_z u_r + 2 \hat{\boldsymbol{\phi}} \partial_z u_\phi + 2 \hat{\mathbf{z}} \partial_z u_z+\hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)-\hat{\mathbf{r}}\left(\partial_z u_r - \partial_r u_z\right) \\ &=\hat{\mathbf{r}}\left(2 \partial_z u_r -\partial_z u_r + \partial_r u_z\right)+\hat{\boldsymbol{\phi}}\left(2 \partial_z u_\phi +\frac{1}{{r}} \partial_\phi u_z- \partial_z u_\phi\right)+\hat{\mathbf{z}}\left(2 \partial_z u_z\right) \\ &=\hat{\mathbf{r}}\left(\partial_z u_r + \partial_r u_z\right)+\hat{\boldsymbol{\phi}}\left(\partial_z u_\phi +\frac{1}{{r}} \partial_\phi u_z\right)+\hat{\mathbf{z}}\left(2 \partial_z u_z\right).\end{aligned}

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\mathbf{z}}} = - p \hat{\mathbf{z}} + 2 \mu e_{\hat{\mathbf{z}}},\end{aligned} \hspace{\stretch{1}}(3.13)

we can now read off our components by taking dot products to yield

\begin{subequations}

\begin{aligned}\sigma_{z z}=-p + 2 \mu \frac{\partial {u_z}}{\partial {z}}\end{aligned} \hspace{\stretch{1}}(3.14a)

\begin{aligned}\sigma_{z r}=\mu \left(\frac{\partial {u_r}}{\partial {z}}+ \frac{\partial {u_z}}{\partial {r}}\right)\end{aligned} \hspace{\stretch{1}}(3.14b)

\begin{aligned}\sigma_{z \phi}=\mu \left(\frac{\partial {u_\phi}}{\partial {z}}+\frac{1}{{r}} \frac{\partial {u_z}}{\partial {\phi}}\right).\end{aligned} \hspace{\stretch{1}}(3.14c)

\end{subequations}

Summary.

\begin{subequations}

\begin{aligned}\sigma_{rr}=-p + 2 \mu \frac{\partial {u_r}}{\partial {r}}\end{aligned} \hspace{\stretch{1}}(3.15a)

\begin{aligned}\sigma_{\phi \phi}=-p + 2 \mu \left(\frac{1}{{r}}\frac{\partial {u_\phi}}{\partial {\phi}} + \frac{u_r}{r}\right)\end{aligned} \hspace{\stretch{1}}(3.15b)

\begin{aligned}\sigma_{z z}=-p + 2 \mu \frac{\partial {u_z}}{\partial {z}}\end{aligned} \hspace{\stretch{1}}(3.15c)

\begin{aligned}\sigma_{r \phi}=\mu \left( \frac{\partial {u_\phi}}{\partial {r}}+\frac{1}{{r}} \frac{\partial {u_r}}{\partial {\phi}}- \frac{u_\phi}{r}\right)\end{aligned} \hspace{\stretch{1}}(3.15d)

\begin{aligned}\sigma_{\phi z}=\mu \left(\frac{1}{r} \frac{\partial {u_z}}{\partial {\phi}} + \frac{\partial {u_\phi}}{\partial {z}}\right)\end{aligned} \hspace{\stretch{1}}(3.15e)

\begin{aligned}\sigma_{z r}=\mu \left(\frac{\partial {u_r}}{\partial {z}}+ \frac{\partial {u_z}}{\partial {r}}\right)\end{aligned} \hspace{\stretch{1}}(3.15f)

\end{subequations}
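All of these components can be spot checked against the Cartesian rate of strain tensor $2 e_{ij} = \partial_i u_j + \partial_j u_i$, projected onto the cylindrical unit vectors. A sympy sketch of that check follows (my verification aid, with an arbitrarily chosen test field, evaluated numerically at one point):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r, phi = sp.sqrt(x**2 + y**2), sp.atan2(y, x)

R, PHI, Z = sp.symbols('R PHI Z', real=True)
ur_f = R**2 * sp.cos(PHI) + Z      # arbitrary smooth cylindrical components
uphi_f = R * Z + sp.sin(PHI)
uz_f = R * sp.sin(PHI) * Z
to_xyz = {R: r, PHI: phi, Z: z}

rhat = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
phihat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
zhat = sp.Matrix([0, 0, 1])
basis = {'r': rhat, 'phi': phihat, 'z': zhat}

u = (ur_f.subs(to_xyz) * rhat + uphi_f.subs(to_xyz) * phihat
     + uz_f.subs(to_xyz) * zhat)
Xs = [x, y, z]
grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[j], Xs[i]))
e2 = grad_u + grad_u.T             # twice the Cartesian rate of strain

D = sp.diff
formulas = {                       # the strain parts of 3.15a-3.15f (2 e_ab)
    ('r', 'r'): 2*D(ur_f, R),
    ('phi', 'phi'): 2*(D(uphi_f, PHI)/R + ur_f/R),
    ('z', 'z'): 2*D(uz_f, Z),
    ('r', 'phi'): D(uphi_f, R) + D(ur_f, PHI)/R - uphi_f/R,
    ('phi', 'z'): D(uz_f, PHI)/R + D(uphi_f, Z),
    ('z', 'r'): D(ur_f, Z) + D(uz_f, R),
}

pt = {x: sp.Rational(3, 2), y: sp.Rational(1, 2), z: 2}
pt_cyl = {R: r.subs(pt), PHI: phi.subs(pt), Z: 2}

for (a, b), f in formulas.items():
    lhs = float((basis[a].T * e2 * basis[b])[0, 0].subs(pt))
    assert abs(lhs - float(f.subs(pt_cyl))) < 1e-9
```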

Spherical strain tensor.

Having done a first order cylindrical derivation of the strain tensor, let’s also do the spherical case for completeness. Would this have much utility in fluids? Perhaps for flow over a spherical barrier?

We need the gradient in spherical coordinates. Recall that our spherical coordinate velocity was

\begin{aligned}\frac{d\mathbf{r}}{dt} = \hat{\mathbf{r}} \dot{r} + \hat{\boldsymbol{\theta}} (r \dot{\theta}) + \hat{\boldsymbol{\phi}} ( r \sin\theta \dot{\phi} ),\end{aligned} \hspace{\stretch{1}}(4.16)

and our gradient mirrors this structure

\begin{aligned}\boldsymbol{\nabla} = \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \hat{\boldsymbol{\theta}} \frac{1}{{r }}\frac{\partial {}}{\partial {\theta}} + \hat{\boldsymbol{\phi}} \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}}.\end{aligned} \hspace{\stretch{1}}(4.17)

We also previously calculated (eqn. 1010 in the phy454 continuum lecture 2 notes) the unit vector differentials

\begin{subequations}

\begin{aligned}d\hat{\mathbf{r}} = \hat{\boldsymbol{\phi}} \sin\theta d\phi + \hat{\boldsymbol{\theta}} d\theta\end{aligned} \hspace{\stretch{1}}(4.18a)

\begin{aligned}d\hat{\boldsymbol{\theta}} = \hat{\boldsymbol{\phi}} \cos\theta d\phi - \hat{\mathbf{r}} d\theta\end{aligned} \hspace{\stretch{1}}(4.18b)

\begin{aligned}d\hat{\boldsymbol{\phi}} = -(\hat{\mathbf{r}} \sin\theta + \hat{\boldsymbol{\theta}} \cos\theta) d\phi,\end{aligned} \hspace{\stretch{1}}(4.18c)

\end{subequations}

and can use those to read off the partials of all the unit vectors

\begin{aligned}\frac{\partial \hat{\mathbf{r}}}{\partial \{r,\theta, \phi\}} &= \{0, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}} \sin\theta \} \\ \frac{\partial \hat{\boldsymbol{\theta}}}{\partial \{r,\theta, \phi\}} &= \{0, -\hat{\mathbf{r}}, \hat{\boldsymbol{\phi}} \cos\theta \} \\ \frac{\partial \hat{\boldsymbol{\phi}}}{\partial \{r,\theta, \phi\}} &= \{0, 0, -\hat{\mathbf{r}} \sin\theta -\hat{\boldsymbol{\theta}} \cos\theta \}.\end{aligned} \hspace{\stretch{1}}(4.19)
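These partials are also easy to verify directly from the Cartesian coordinate representation of the spherical frame vectors (a quick sympy check, not part of the derivation):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Cartesian components of the spherical orthonormal frame
rhat = sp.Matrix([sp.sin(theta)*sp.cos(phi),
                  sp.sin(theta)*sp.sin(phi),
                  sp.cos(theta)])
thetahat = sp.Matrix([sp.cos(theta)*sp.cos(phi),
                      sp.cos(theta)*sp.sin(phi),
                      -sp.sin(theta)])
phihat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

Zv = sp.zeros(3, 1)
checks = [
    (sp.diff(rhat, theta), thetahat),
    (sp.diff(rhat, phi), sp.sin(theta)*phihat),
    (sp.diff(thetahat, theta), -rhat),
    (sp.diff(thetahat, phi), sp.cos(theta)*phihat),
    (sp.diff(phihat, theta), Zv),
    (sp.diff(phihat, phi), -sp.sin(theta)*rhat - sp.cos(theta)*thetahat),
]
for got, want in checks:
    assert sp.simplify(got - want) == Zv
```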

Finally, our velocity in spherical coordinates is just

\begin{aligned}\mathbf{u} = \hat{\mathbf{r}} u_r + \hat{\boldsymbol{\theta}} u_\theta + \hat{\boldsymbol{\phi}} u_\phi,\end{aligned} \hspace{\stretch{1}}(4.22)

from which we can now compute the curl, and the directional derivative. Starting with the curl we have

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{u}&=\left( \hat{\mathbf{r}} \frac{\partial {}}{\partial {r}} + \hat{\boldsymbol{\theta}} \frac{1}{{r }}\frac{\partial {}}{\partial {\theta}} + \hat{\boldsymbol{\phi}} \frac{1}{{r \sin\theta}} \frac{\partial {}}{\partial {\phi}} \right) \wedge\left( \hat{\mathbf{r}} u_r + \hat{\boldsymbol{\theta}} u_\theta + \hat{\boldsymbol{\phi}} u_\phi \right) \\ &=\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\theta}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r\right)\\ & +\hat{\boldsymbol{\theta}} \wedge \hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta\right)\\ & +\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi\right)\\ & +\frac{1}{{r}} \hat{\boldsymbol{\theta}} \wedge \left(u_\theta \underbrace{\partial_\theta \hat{\boldsymbol{\theta}}}_{-\hat{\mathbf{r}}}+u_\phi \underbrace{\partial_\theta \hat{\boldsymbol{\phi}}}_{0}\right)\\ & +\frac{1}{{r \sin\theta}} \hat{\boldsymbol{\phi}} \wedge \left(u_\theta \underbrace{\partial_\phi \hat{\boldsymbol{\theta}}}_{\hat{\boldsymbol{\phi}} \cos\theta}+u_\phi \underbrace{\partial_\phi \hat{\boldsymbol{\phi}}}_{-\hat{\mathbf{r}} \sin\theta - \hat{\boldsymbol{\theta}} \cos\theta}\right).\end{aligned}

So we have

\begin{aligned}\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{u}&=\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\theta}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)\\ & +\hat{\boldsymbol{\theta}} \wedge \hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)\\ & +\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.23)

With $\hat{\mathbf{n}} = \hat{\mathbf{r}}$.

The directional derivative portion of our strain is

\begin{aligned}2 (\hat{\mathbf{r}} \cdot \boldsymbol{\nabla}) \mathbf{u}&=2 \partial_r (\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\theta}} u_\theta + \hat{\boldsymbol{\phi}} u_\phi ) \\ &=2 (\hat{\mathbf{r}} \partial_r u_r + \hat{\boldsymbol{\theta}} \partial_r u_\theta + \hat{\boldsymbol{\phi}} \partial_r u_\phi ).\end{aligned}

The other portion of our strain tensor is

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{r}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\theta}}) \cdot \hat{\mathbf{r}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)\\ & +(\hat{\boldsymbol{\theta}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\mathbf{r}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)\\ & +(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{r}}) \cdot \hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right) \\ &=-\hat{\boldsymbol{\theta}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)\\ & +\hat{\boldsymbol{\phi}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right).\end{aligned}

Putting these together we find

\begin{aligned}2 {\mathbf{e}}_{\hat{\mathbf{r}}}&=2 (\hat{\mathbf{r}} \cdot \boldsymbol{\nabla})\mathbf{u} + (\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\mathbf{r}} \\ &=2 (\hat{\mathbf{r}} \partial_r u_r + \hat{\boldsymbol{\theta}} \partial_r u_\theta + \hat{\boldsymbol{\phi}} \partial_r u_\phi )-\hat{\boldsymbol{\theta}}\left(\partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)+\hat{\boldsymbol{\phi}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right) \\ &=\hat{\mathbf{r}}\left(2 \partial_r u_r\right)+\hat{\boldsymbol{\theta}}\left(2 \partial_r u_\theta-\partial_r u_\theta + \frac{1}{{r}} \partial_\theta u_r - \frac{u_\theta}{r}\right)+\hat{\boldsymbol{\phi}}\left(2 \partial_r u_\phi+ \frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right).\end{aligned}

Which gives

\begin{aligned}2 {\mathbf{e}}_{\hat{\mathbf{r}}}=\hat{\mathbf{r}}\left(2 \partial_r u_r\right)+\hat{\boldsymbol{\theta}}\left(\partial_r u_\theta+ \frac{1}{{r}} \partial_\theta u_r - \frac{u_\theta}{r}\right)+\hat{\boldsymbol{\phi}}\left(\partial_r u_\phi+ \frac{1}{{r \sin\theta}} \partial_\phi u_r- \frac{u_\phi}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.24)

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\mathbf{r}}} = - p \hat{\mathbf{r}} + 2 \mu e_{\hat{\mathbf{r}}},\end{aligned} \hspace{\stretch{1}}(4.25)

we can now read off our components by taking dot products

\begin{subequations}

\begin{aligned}\sigma_{rr}=-p + 2 \mu \frac{\partial {u_r}}{\partial {r}}\end{aligned} \hspace{\stretch{1}}(4.26a)

\begin{aligned}\sigma_{r \theta}=\mu \left(\frac{\partial {u_\theta}}{\partial {r}}+ \frac{1}{{r}} \frac{\partial {u_r}}{\partial {\theta}} - \frac{u_\theta}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.26b)

\begin{aligned}\sigma_{r \phi}=\mu \left(\frac{\partial {u_\phi}}{\partial {r}}+ \frac{1}{{r \sin\theta}} \frac{\partial {u_r}}{\partial {\phi}}- \frac{u_\phi}{r}\right).\end{aligned} \hspace{\stretch{1}}(4.26c)

\end{subequations}

This is consistent with (15.20) from [3] (after adjusting for minor notational differences).

With $\hat{\mathbf{n}} = \hat{\boldsymbol{\theta}}$.

Now let’s do the $\hat{\boldsymbol{\theta}}$ direction. The directional derivative portion of our strain will be a bit more work to compute because we have $\theta$ variation of the unit vectors

\begin{aligned}(\hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla}) \mathbf{u} &= \frac{1}{r} \partial_\theta (\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\theta}} u_\theta + \hat{\boldsymbol{\phi}} u_\phi ) \\ &= \frac{1}{r} \left( \hat{\mathbf{r}} \partial_\theta u_r + \hat{\boldsymbol{\theta}} \partial_\theta u_\theta + \hat{\boldsymbol{\phi}} \partial_\theta u_\phi \right)+\frac{1}{r} \left( (\partial_\theta \hat{\mathbf{r}}) u_r + (\partial_\theta \hat{\boldsymbol{\theta}}) u_\theta + (\partial_\theta \hat{\boldsymbol{\phi}}) u_\phi \right) \\ &= \frac{1}{r}\left(\hat{\mathbf{r}} \partial_\theta u_r + \hat{\boldsymbol{\theta}} \partial_\theta u_\theta + \hat{\boldsymbol{\phi}} \partial_\theta u_\phi \right)+\frac{1}{r} \left( \hat{\boldsymbol{\theta}} u_r - \hat{\mathbf{r}} u_\theta \right).\end{aligned}

So we have

\begin{aligned}2 (\hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla}) \mathbf{u}=\frac{2}{r} \hat{\mathbf{r}} (\partial_\theta u_r- u_\theta)+ \frac{2}{r} \hat{\boldsymbol{\theta}} (\partial_\theta u_\theta+ u_r) + \frac{2}{r} \hat{\boldsymbol{\phi}} \partial_\theta u_\phi,\end{aligned} \hspace{\stretch{1}}(4.27)

and can move on to projecting our curl bivector onto the $\hat{\boldsymbol{\theta}}$ direction. That portion of our strain tensor is

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\boldsymbol{\theta}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\theta}}) \cdot \hat{\boldsymbol{\theta}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)\\ & +(\hat{\boldsymbol{\theta}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\boldsymbol{\theta}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)\\ & +(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{r}}) \cdot \hat{\boldsymbol{\theta}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right) \\ &=\hat{\mathbf{r}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)-\hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right).\end{aligned}

Putting these together we find

\begin{aligned}2 {\mathbf{e}}_{\hat{\boldsymbol{\theta}}}&=2 (\hat{\boldsymbol{\theta}} \cdot \boldsymbol{\nabla})\mathbf{u} + (\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\boldsymbol{\theta}} \\ &= \frac{2}{r} \hat{\mathbf{r}} (\partial_\theta u_r - u_\theta )+ \frac{2}{r} \hat{\boldsymbol{\theta}} (\partial_\theta u_\theta + u_r )+ \frac{2}{r} \hat{\boldsymbol{\phi}} \partial_\theta u_\phi \\ &+\hat{\mathbf{r}}\left(\partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)-\hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta + \frac{u_\phi \cot\theta}{r}\right).\end{aligned}

Which gives

\begin{aligned}2 {\mathbf{e}}_{\hat{\boldsymbol{\theta}}}=\hat{\mathbf{r}} \left( \frac{1}{r} \partial_\theta u_r + \partial_r u_\theta- \frac{u_\theta}{r}\right)+\hat{\boldsymbol{\theta}} \left( \frac{2}{r} \partial_\theta u_\theta+ \frac{2}{r} u_r\right)+\hat{\boldsymbol{\phi}} \left(\frac{1}{r} \partial_\theta u_\phi+ \frac{1}{{r \sin\theta}} \partial_\phi u_\theta- \frac{u_\phi \cot\theta}{r}\right).\end{aligned} \hspace{\stretch{1}}(4.28)

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\boldsymbol{\theta}}} = - p \hat{\boldsymbol{\theta}} + 2 \mu e_{\hat{\boldsymbol{\theta}}},\end{aligned} \hspace{\stretch{1}}(4.29)

we can now read off our components by taking dot products

\begin{subequations}

\begin{aligned}\sigma_{\theta \theta}=-p+\mu \left( \frac{2}{r} \frac{\partial {u_\theta}}{\partial {\theta}}+ \frac{2}{r} u_r\right)\end{aligned} \hspace{\stretch{1}}(4.30a)

\begin{aligned}\sigma_{\theta \phi}=\mu \left(\frac{1}{r} \frac{\partial {u_\phi}}{\partial {\theta}}+ \frac{1}{{r \sin\theta}} \frac{\partial {u_\theta}}{\partial {\phi}}- \frac{u_\phi \cot\theta}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.30b)

\begin{aligned}\sigma_{\theta r}= \mu \left(\frac{1}{r} \frac{\partial {u_r}}{\partial {\theta}} + \frac{\partial {u_\theta}}{\partial {r}}- \frac{u_\theta}{r}\right).\end{aligned} \hspace{\stretch{1}}(4.30c)

\end{subequations}

This again is consistent with (15.20) from [3].

With $\hat{\mathbf{n}} = \hat{\boldsymbol{\phi}}$.

Finally, let’s do the $\hat{\boldsymbol{\phi}}$ direction. This directional derivative portion of our strain will also be a bit more work to compute because we have $\phi$ variation of the unit vectors

\begin{aligned}(\hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla}) \mathbf{u}&=\frac{1}{r \sin\theta} \partial_\phi (\hat{\mathbf{r}} u_r + \hat{\boldsymbol{\theta}} u_\theta + \hat{\boldsymbol{\phi}} u_\phi) \\ &=\frac{1}{r \sin\theta}(\hat{\mathbf{r}} \partial_\phi u_r+\hat{\boldsymbol{\theta}} \partial_\phi u_\theta+\hat{\boldsymbol{\phi}} \partial_\phi u_\phi+(\partial_\phi \hat{\mathbf{r}} )u_r+(\partial_\phi \hat{\boldsymbol{\theta}} )u_\theta+(\partial_\phi \hat{\boldsymbol{\phi}} )u_\phi) \\ &=\frac{1}{r \sin\theta}(\hat{\mathbf{r}} \partial_\phi u_r+\hat{\boldsymbol{\theta}} \partial_\phi u_\theta+\hat{\boldsymbol{\phi}} \partial_\phi u_\phi+\hat{\boldsymbol{\phi}} \sin\theta u_r+\hat{\boldsymbol{\phi}} \cos\theta u_\theta-(\hat{\mathbf{r}} \sin\theta+ \hat{\boldsymbol{\theta}} \cos\theta)u_\phi).\end{aligned}

So we have

\begin{aligned}2 (\hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla}) \mathbf{u}=2 \hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \frac{u_\phi}{r}\right)+2 \hat{\boldsymbol{\theta}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_\theta-\frac{1}{{r}} \cot\theta u_\phi\right)+2 \hat{\boldsymbol{\phi}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_\phi+ \frac{1}{{r}} u_r+ \frac{1}{{r}} \cot\theta u_\theta\right),\end{aligned} \hspace{\stretch{1}}(4.31)

and can move on to projecting our curl bivector onto the $\hat{\boldsymbol{\phi}}$ direction. That portion of our strain tensor is

\begin{aligned}(\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\boldsymbol{\phi}}&=(\hat{\mathbf{r}} \wedge \hat{\boldsymbol{\theta}}) \cdot \hat{\boldsymbol{\phi}}\left( \partial_r u_\theta - \frac{1}{{r}} \partial_\theta u_r + \frac{u_\theta}{r}\right)\\ & +(\hat{\boldsymbol{\theta}} \wedge \hat{\boldsymbol{\phi}}) \cdot \hat{\boldsymbol{\phi}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)\\ & +(\hat{\boldsymbol{\phi}} \wedge \hat{\mathbf{r}}) \cdot \hat{\boldsymbol{\phi}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right) \\ &=\hat{\boldsymbol{\theta}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)\\ &-\hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right).\end{aligned}

Putting these together we find

\begin{aligned}2 {\mathbf{e}}_{\hat{\boldsymbol{\phi}}}&=2 (\hat{\boldsymbol{\phi}} \cdot \boldsymbol{\nabla})\mathbf{u} + (\boldsymbol{\nabla} \wedge \mathbf{u}) \cdot \hat{\boldsymbol{\phi}} \\ &=2 \hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \frac{u_\phi}{r}\right)+2 \hat{\boldsymbol{\theta}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_\theta-\frac{1}{{r}} \cot\theta u_\phi\right)+2 \hat{\boldsymbol{\phi}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_\phi+ \frac{1}{{r}} u_r+ \frac{1}{{r}} \cot\theta u_\theta\right) \\ &+\hat{\boldsymbol{\theta}}\left(\frac{1}{{r}} \partial_\theta u_\phi - \frac{1}{{r \sin\theta}} \partial_\phi u_\theta+ \frac{u_\phi \cot\theta}{r}\right)-\hat{\mathbf{r}}\left(\frac{1}{{r \sin\theta}} \partial_\phi u_r - \partial_r u_\phi- \frac{u_\phi}{r}\right).\end{aligned}

Which gives

\begin{aligned}2 {\mathbf{e}}_{\hat{\boldsymbol{\phi}}}=\hat{\mathbf{r}} \left( \frac{ \partial_\phi u_r }{r \sin\theta}- \frac{u_\phi}{r}+ \partial_r u_\phi\right)+\hat{\boldsymbol{\theta}} \left(\frac{\partial_\phi u_\theta}{r \sin\theta}- \frac{u_\phi \cot\theta}{r}+\frac{\partial_\theta u_\phi}{r}\right)+2 \hat{\boldsymbol{\phi}} \left(\frac{\partial_\phi u_\phi}{r \sin\theta}+ \frac{u_r}{r}+ \frac{\cot\theta u_\theta}{r}\right).\end{aligned} \hspace{\stretch{1}}(4.32)

For our stress tensor

\begin{aligned}\boldsymbol{\sigma}_{\hat{\boldsymbol{\phi}}} = - p \hat{\boldsymbol{\phi}} + 2 \mu e_{\hat{\boldsymbol{\phi}}},\end{aligned} \hspace{\stretch{1}}(4.33)

we can now read off our components by taking dot products

\begin{subequations}

\begin{aligned}\sigma_{\phi \phi}=-p+2 \mu \left(\frac{1}{{r \sin\theta}} \frac{\partial {u_\phi}}{\partial {\phi}}+ \frac{u_r}{r}+ \frac{\cot\theta u_\theta}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.34a)

\begin{aligned}\sigma_{\phi r}=\mu \left( \frac{1}{r \sin\theta} \frac{\partial {u_r}}{\partial {\phi}}- \frac{u_\phi}{r}+ \frac{\partial {u_\phi}}{\partial {r}}\right)\end{aligned} \hspace{\stretch{1}}(4.34b)

\begin{aligned}\sigma_{\phi \theta}= \mu \left(\frac{1}{r \sin\theta} \frac{\partial {u_\theta}}{\partial {\phi}}- \frac{u_\phi \cot\theta}{r}+\frac{1}{{r}} \frac{\partial {u_\phi}}{\partial {\theta}}\right).\end{aligned} \hspace{\stretch{1}}(4.34c)

\end{subequations}

This again is consistent with (15.20) from [3].

Summary

\begin{subequations}

\begin{aligned}\sigma_{rr}=-p + 2 \mu \frac{\partial {u_r}}{\partial {r}}\end{aligned} \hspace{\stretch{1}}(4.35a)

\begin{aligned}\sigma_{\theta \theta}=-p+2 \mu \left( \frac{1}{r} \frac{\partial {u_\theta}}{\partial {\theta}}+ \frac{ u_r }{r}\right)\end{aligned} \hspace{\stretch{1}}(4.35b)

\begin{aligned}\sigma_{\phi \phi}=-p+2 \mu \left(\frac{1}{{r \sin\theta}} \frac{\partial {u_\phi}}{\partial {\phi}}+ \frac{u_r}{r}+ \frac{\cot\theta u_\theta}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.35c)

\begin{aligned}\sigma_{r \theta}=\mu \left(\frac{\partial {u_\theta}}{\partial {r}}+ \frac{1}{{r}} \frac{\partial {u_r}}{\partial {\theta}} - \frac{u_\theta}{r}\right)\end{aligned} \hspace{\stretch{1}}(4.35d)

\begin{aligned}\sigma_{\theta \phi}= \mu \left(\frac{1}{r \sin\theta} \frac{\partial {u_\theta}}{\partial {\phi}}- \frac{u_\phi \cot\theta}{r}+\frac{1}{{r}} \frac{\partial {u_\phi}}{\partial {\theta}}\right)\end{aligned} \hspace{\stretch{1}}(4.35e)

\begin{aligned}\sigma_{\phi r}=\mu \left( \frac{1}{r \sin\theta} \frac{\partial {u_r}}{\partial {\phi}}- \frac{u_\phi}{r}+ \frac{\partial {u_\phi}}{\partial {r}}\right).\end{aligned} \hspace{\stretch{1}}(4.35f)

\end{subequations}
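As a sanity check, these summary components can be verified with sympy, computing the strain entirely in Cartesian coordinates and then projecting it onto the spherical unit vectors. The displacement field below is an arbitrary polynomial, chosen only for illustration; the variable names are my own, not anything standard.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates on the spherical chart
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

# spherical unit vectors, expressed in the Cartesian basis
rhat  = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph),  sp.cos(th)])
thhat = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
phhat = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

# an arbitrary polynomial displacement field (Cartesian components)
X, Y, Z = sp.symbols('X Y Z')
u = sp.Matrix([X**2 + Y*Z, X*Z + Y**2, X*Y + Z**2])
J = u.jacobian([X, Y, Z])                # J[i, j] = d u_i / d x_j
E2 = (J + J.T).subs({X: x, Y: y, Z: z})  # 2 e_ij, on the spherical chart

# spherical components of the displacement
uc = u.subs({X: x, Y: y, Z: z})
ur, uth, uph = (uc.T*rhat)[0], (uc.T*thhat)[0], (uc.T*phhat)[0]

# 2 e_{ab} from the derived formulas (4.35), with the -p and mu factors dropped
formulas = {
    ('r', 't'): sp.diff(uth, r) + sp.diff(ur, th)/r - uth/r,
    ('t', 'p'): sp.diff(uth, ph)/(r*sp.sin(th)) - uph/(r*sp.tan(th)) + sp.diff(uph, th)/r,
    ('p', 'r'): sp.diff(ur, ph)/(r*sp.sin(th)) - uph/r + sp.diff(uph, r),
    ('r', 'r'): 2*sp.diff(ur, r),
    ('t', 't'): 2*(sp.diff(uth, th)/r + ur/r),
    ('p', 'p'): 2*(sp.diff(uph, ph)/(r*sp.sin(th)) + ur/r + uth/(r*sp.tan(th))),
}

hats = {'r': rhat, 't': thhat, 'p': phhat}
for (a, b), f in formulas.items():
    cartesian = (hats[a].T * E2 * hats[b])[0]  # 2 e_ab by direct projection
    assert sp.simplify(cartesian - f) == 0
print('all spherical strain components verified')
```

If any of the derived components were wrong, the corresponding assertion would fail for this (generic enough) field.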

References

[1] D.J. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990.

[2] Peeter Joot. Continuum mechanics, chapter Introduction and strain tensor.

[3] L.D. Landau and E.M. Lifshitz. Fluid Mechanics (Course of Theoretical Physics, vol. 6). Pergamon Press Ltd., 1987.

Geometric Algebra. The very quickest introduction.

Posted by peeterjoot on March 17, 2012

Motivation.

An attempt to make a relatively concise introduction to Geometric (or Clifford) Algebra. Much more complete introductions to the subject can be found in [1], [2], and [3].

Axioms

We have a couple of basic principles upon which the algebra is based

1. Vectors can be multiplied.
2. The square of a vector is the (squared) length of that vector (with appropriate generalizations for non-Euclidean metrics).
3. Vector products are associative (but not necessarily commutative).

That’s really all there is to it, and the rest, paraphrasing Feynman, can be figured out by anybody sufficiently clever.

By example. The 2D case.

Consider a 2D Euclidean space, and the product of two vectors $\mathbf{a}$ and $\mathbf{b}$ in that space. Utilizing a standard orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2\}$ we can write

\begin{aligned}\mathbf{a} &= \mathbf{e}_1 x_1 + \mathbf{e}_2 x_2 \\ \mathbf{b} &= \mathbf{e}_1 y_1 + \mathbf{e}_2 y_2,\end{aligned} \hspace{\stretch{1}}(3.1)

and let’s write out the product of these two vectors $\mathbf{a} \mathbf{b}$, not yet knowing what we will end up with. That is

\begin{aligned}\mathbf{a} \mathbf{b} &= (\mathbf{e}_1 x_1 + \mathbf{e}_2 x_2 )( \mathbf{e}_1 y_1 + \mathbf{e}_2 y_2 ) \\ &= \mathbf{e}_1^2 x_1 y_1 + \mathbf{e}_2^2 x_2 y_2+ \mathbf{e}_1 \mathbf{e}_2 x_1 y_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 y_1\end{aligned}

From axiom 2 we have $\mathbf{e}_1^2 = \mathbf{e}_2^2 = 1$, so we have

\begin{aligned}\mathbf{a} \mathbf{b} = x_1 y_1 + x_2 y_2 + \mathbf{e}_1 \mathbf{e}_2 x_1 y_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 y_1.\end{aligned} \hspace{\stretch{1}}(3.3)

We’ve multiplied two vectors and ended up with a scalar component (and recognize that this part of the vector product is the dot product), and a component that is a “something else”. We’ll call this something else a bivector, and see that it is characterized by a product of non-collinear vectors. These products $\mathbf{e}_1 \mathbf{e}_2$ and $\mathbf{e}_2 \mathbf{e}_1$ are in fact related, and we can see that by looking at the case of $\mathbf{b} = \mathbf{a}$. For that we have

\begin{aligned}\mathbf{a}^2 &=x_1 x_1 + x_2 x_2 + \mathbf{e}_1 \mathbf{e}_2 x_1 x_2 + \mathbf{e}_2 \mathbf{e}_1 x_2 x_1 \\ &={\left\lvert{\mathbf{a}}\right\rvert}^2 +x_1 x_2 ( \mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_2 \mathbf{e}_1 )\end{aligned}

Since axiom (2) requires the square of a vector to equal its (squared) length, we must then have

\begin{aligned}\mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_2 \mathbf{e}_1 = 0,\end{aligned} \hspace{\stretch{1}}(3.4)

or

\begin{aligned}\mathbf{e}_2 \mathbf{e}_1 = -\mathbf{e}_1 \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(3.5)

We see that Euclidean orthonormal vectors anticommute. What we can see with some additional study is that any collinear vectors commute, and that in Euclidean spaces (of any dimension) vectors that are normal to each other anticommute (this can also be taken as a definition of normal).

We can now return to our product of two vectors 3.3 and simplify it slightly

\begin{aligned}\mathbf{a} \mathbf{b} = x_1 y_1 + x_2 y_2 + \mathbf{e}_1 \mathbf{e}_2 (x_1 y_2 - x_2 y_1).\end{aligned} \hspace{\stretch{1}}(3.6)

The product of two vectors in 2D is seen here to have one scalar component, and one bivector component (an irreducible product of two normal vectors). Observe the symmetric and antisymmetric split of the scalar and bivector components above. This symmetry and antisymmetry can be made explicit, introducing dot and wedge product notation respectively

\begin{aligned}\mathbf{a} \cdot \mathbf{b} &= \frac{1}{{2}}( \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a}) = x_1 y_1 + x_2 y_2 \\ \mathbf{a} \wedge \mathbf{b} &= \frac{1}{{2}}( \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a}) = \mathbf{e}_1 \mathbf{e}_2 (x_1 y_2 - x_2 y_1).\end{aligned} \hspace{\stretch{1}}(3.7)

so that the vector product can be written as

\begin{aligned}\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}.\end{aligned} \hspace{\stretch{1}}(3.9)
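The 2D multiplication table worked out above is small enough to check mechanically. Here is a minimal sketch of the geometric product on the basis $\{1, \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_1 \mathbf{e}_2\}$ (the tuple representation and helper name are mine, not anything standard), verifying eq. (3.6) and the anticommutation of the orthonormal vectors.

```python
# A 2D Euclidean multivector as a tuple of (scalar, e1, e2, e12) coefficients.
def gp(u, v):
    """Geometric product, from the table e1^2 = e2^2 = 1, e2 e1 = -e1 e2."""
    s1, a1, a2, b1 = u
    s2, c1, c2, d1 = v
    return (s1*s2 + a1*c1 + a2*c2 - b1*d1,   # scalar part
            s1*c1 + a1*s2 - a2*d1 + b1*c2,   # e1 part
            s1*c2 + a2*s2 + a1*d1 - b1*c1,   # e2 part
            s1*d1 + b1*s2 + a1*c2 - a2*c1)   # e12 (bivector) part

x1, x2, y1, y2 = 2.0, 3.0, 5.0, 7.0
a = (0.0, x1, x2, 0.0)
b = (0.0, y1, y2, 0.0)

# a b = x1 y1 + x2 y2 + e1 e2 (x1 y2 - x2 y1), as in eq. (3.6)
assert gp(a, b) == (x1*y1 + x2*y2, 0.0, 0.0, x1*y2 - x2*y1)

# orthonormal vectors anticommute: e1 e2 = -e2 e1
e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)
assert gp(e1, e2) == tuple(-c for c in gp(e2, e1))
```

The scalar plus bivector split of the product falls out of the table directly, with no coordinate-free machinery required.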

Pseudoscalar

In many contexts it is useful to introduce an ordered product of all the unit vectors for the space, called the pseudoscalar. In our 2D case this is

\begin{aligned}i = \mathbf{e}_1 \mathbf{e}_2,\end{aligned} \hspace{\stretch{1}}(4.10)

a quantity that we find behaves like the complex imaginary. That can be shown by considering its square

\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2)^2&=(\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2) \\ &=\mathbf{e}_1 (\mathbf{e}_2 \mathbf{e}_1) \mathbf{e}_2 \\ &=-\mathbf{e}_1 (\mathbf{e}_1 \mathbf{e}_2) \mathbf{e}_2 \\ &=-(\mathbf{e}_1 \mathbf{e}_1) (\mathbf{e}_2 \mathbf{e}_2) \\ &=-(1)(1) \\ &= -1.\end{aligned}

Here the anticommutation of normal vectors property has been used, as well as (for the first time) the associative multiplication axiom.

In a 3D context, you’ll see the pseudoscalar in many places (expressing the normals to planes for example). It also shows up in a number of fundamental relationships. For example, if one writes

\begin{aligned}I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(4.11)

for the 3D pseudoscalar, then it’s also possible to show

\begin{aligned}\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + I (\mathbf{a} \times \mathbf{b})\end{aligned} \hspace{\stretch{1}}(4.12)

something that will be familiar to the student of QM, where we see this in the context of Pauli matrices. The Pauli matrices also encode a Clifford algebraic structure, but an explicit matrix representation is not needed to work with it.
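That said, the Pauli correspondence makes eq. (4.12) easy to check numerically. Under the map $\mathbf{a} \rightarrow \sum_k a_k \sigma_k$ the geometric product becomes the matrix product, and the pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ maps to $i$ times the identity. A sketch of that check (the sample vectors are arbitrary):

```python
import numpy as np

# the Pauli matrices
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

def vec(a):
    """Embed a 3-vector as a . sigma."""
    return sum(ai*si for ai, si in zip(a, s))

a = np.array([1.0, 2.0, 3.0])
b = np.array([-0.5, 4.0, 1.5])

# a b = a . b + I (a x b), with I represented by i * identity
lhs = vec(a) @ vec(b)
rhs = np.dot(a, b)*np.eye(2) + 1j*vec(np.cross(a, b))
assert np.allclose(lhs, rhs)

# the pseudoscalar: sigma_1 sigma_2 sigma_3 = i * identity, which squares to -1
I3 = s[0] @ s[1] @ s[2]
assert np.allclose(I3, 1j*np.eye(2))
assert np.allclose(I3 @ I3, -np.eye(2))
```

The same identity holds for any pair of real 3-vectors, since it is an algebraic consequence of the Pauli multiplication rules.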

Rotations

Very much like complex numbers, we can utilize exponentials to perform rotations. A rotation in the sense that takes $\mathbf{e}_1$ toward $\mathbf{e}_2$ can be expressed as

\begin{aligned}\mathbf{a} e^{i \theta}&=(\mathbf{e}_1 x_1 + \mathbf{e}_2 x_2) (\cos\theta + \mathbf{e}_1 \mathbf{e}_2 \sin\theta) \\ &=\mathbf{e}_1 (x_1 \cos\theta - x_2 \sin\theta)+\mathbf{e}_2 (x_2 \cos\theta + x_1 \sin\theta)\end{aligned}

More generally, even in N dimensional Euclidean spaces, if $\mathbf{a}$ is a vector in a plane, and $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$ are perpendicular unit vectors in that plane, then the rotation through angle $\theta$ is given by

\begin{aligned}\mathbf{a} \rightarrow \mathbf{a} e^{\hat{\mathbf{u}} \hat{\mathbf{v}} \theta}.\end{aligned} \hspace{\stretch{1}}(5.13)

This is illustrated in figure (1).

Figure 1: Plane rotation.

Notice that we have expressed the rotation here without utilizing a normal direction for the plane. The sense of the rotation is encoded by the bivector $\hat{\mathbf{u}} \hat{\mathbf{v}}$ that describes the plane and the orientation of the rotation (or, by duality, the direction of the normal in a 3D space). By avoiding any reference to a normal to the plane, we have a method of expressing the rotation that works not only in 3D spaces, but also in 2D and in spaces of dimension greater than three. That isn’t possible with traditional vector algebra, where quantities like the cross product can’t be defined in a 2D or 4D space, despite the fact that the things they represent, like torque, are planar phenomena with no intrinsic requirement for a normal that falls out of the plane.

When $\mathbf{a}$ does not lie in the plane spanned by the vectors $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$ , as in figure (2), we must express the rotations differently. A rotation then takes the form

\begin{aligned}\mathbf{a} \rightarrow e^{-\hat{\mathbf{u}} \hat{\mathbf{v}} \theta/2} \mathbf{a} e^{\hat{\mathbf{u}} \hat{\mathbf{v}} \theta/2}.\end{aligned} \hspace{\stretch{1}}(5.14)

Figure 2: 3D rotation.

In the 2D case, and when the vector lies in the plane this reduces to the one sided complex exponential operator used above. We see these types of paired half angle rotations in QM, and they are also used extensively in computer graphics under the guise of quaternions.
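The half-angle form can also be checked with the Pauli representation used above: the rotor $e^{\hat{\mathbf{u}} \hat{\mathbf{v}} \theta/2}$ maps to $\cos(\theta/2) + i \sin(\theta/2)\, \hat{\mathbf{n}} \cdot \boldsymbol{\sigma}$ with $\hat{\mathbf{n}} = \hat{\mathbf{u}} \times \hat{\mathbf{v}}$, and the two-sided product should agree with an ordinary axis-angle (Rodrigues) rotation about $\hat{\mathbf{n}}$. A sketch of that check, with an arbitrarily chosen angle and vector:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def vsig(a):
    """Embed a 3-vector as a . sigma."""
    return sum(ai*s for ai, s in zip(a, sig))

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
n = np.cross(u, v)                  # normal to the rotation plane
theta = 0.7
a = np.array([0.3, -1.2, 2.0])      # deliberately not in the u, v plane

# rotor e^{u v theta/2}; (u.sigma)(v.sigma) = i n.sigma, and (n.sigma)^2 = identity
R    = np.cos(theta/2)*np.eye(2) + 1j*np.sin(theta/2)*vsig(n)
Rrev = np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*vsig(n)

M = Rrev @ vsig(a) @ R              # e^{-uv theta/2} a e^{uv theta/2}
ap = np.array([np.trace(s @ M).real/2 for s in sig])  # a'_k = tr(sigma_k M)/2

# Rodrigues rotation of a about n by theta
expected = (a*np.cos(theta) + np.cross(n, a)*np.sin(theta)
            + n*np.dot(n, a)*(1 - np.cos(theta)))
assert np.allclose(ap, expected)
```

The component of $\mathbf{a}$ along $\hat{\mathbf{n}}$ is untouched by the sandwich, which is exactly why the two-sided form is needed when the vector does not lie in the plane.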

References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

Exploring Stokes Theorem in tensor form.

Posted by peeterjoot on February 22, 2011

Motivation.

I’ve worked through Stokes theorem concepts a couple times on my own now. One of the first times, I was trying to formulate this in a Geometric Algebra context. I had to resort to a tensor decomposition, and pictures, before ending back in the Geometric Algebra description. Later I figured out how to do it entirely with a Geometric Algebra description, and was able to eliminate reliance on the pictures that made the path to generalization to higher dimensional spaces unclear.

It’s my expectation that if one started with a tensor description, the proof entirely in tensor form would not be difficult. This is what I’d like to try this time. To start off, I’ll temporarily use the Geometric Algebra curl expression so I know what my tensor equation starting point will be, but once that starting point is found, we can work entirely in coordinate representation. For somebody who already knows that this is the starting point, all of this initial motivation can be skipped.

Translating the exterior derivative to a coordinate representation.

Our starting point is a curl, dotted with a volume element of the same grade, so that the result is a scalar

\begin{aligned}\int d^n x \cdot (\nabla \wedge A).\end{aligned} \hspace{\stretch{1}}(2.1)

Here $A$ is a blade of grade $n-1$, and we wedge this with the gradient for the space

\begin{aligned}\nabla \equiv e^i \partial_i = e_i \partial^i,\end{aligned} \hspace{\stretch{1}}(2.2)

where we work with a basis (not necessarily orthonormal) $\{e_i\}$, and the reciprocal frame for that basis $\{e^i\}$ defined by the relation

\begin{aligned}e^i \cdot e_j = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(2.3)

Our coordinates in these basis sets are

\begin{aligned}x \cdot e^i & \equiv x^i \\ x \cdot e_i & \equiv x_i\end{aligned} \hspace{\stretch{1}}(2.4)

so that

\begin{aligned}x = x^i e_i = x_i e^i.\end{aligned} \hspace{\stretch{1}}(2.6)

The operator coordinates of the gradient are defined in the usual fashion

\begin{aligned}\partial_i & \equiv \frac{\partial }{\partial {x^i}} \\ \partial^i & \equiv \frac{\partial}{\partial {x_i}}\end{aligned} \hspace{\stretch{1}}(2.7)

The volume element for the subspace that we are integrating over we will define in terms of an arbitrary parametrization

\begin{aligned}x = x(\alpha_1, \alpha_2, \cdots, \alpha_n)\end{aligned} \hspace{\stretch{1}}(2.9)

The subspace can be considered spanned by the differential elements in each of the respective curves where all but the $i$th parameter are held constant.

\begin{aligned}dx_{\alpha_i}= d\alpha_i \frac{\partial x}{\partial {\alpha_i}}= d\alpha_i \frac{\partial {x^j}}{\partial {\alpha_i}} e_j.\end{aligned} \hspace{\stretch{1}}(2.10)

We assume that the integral is being performed in a subspace for which none of these differential elements in that region are linearly dependent (i.e. our Jacobian determinant must be non-zero).

The magnitude of the wedge product of all such differential elements provides the volume of the parallelogram, or parallelepiped (or higher dimensional analogue), and is

\begin{aligned}d^n x=d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial x}{\partial {\alpha_n}} \wedge\cdots \wedge\frac{\partial x}{\partial {\alpha_2}}\wedge\frac{\partial x}{\partial {\alpha_1}}.\end{aligned} \hspace{\stretch{1}}(2.11)

The volume element is an oriented quantity, and may be adjusted with an arbitrary sign (or equivalently an arbitrary permutation of the differential elements in the wedge product), and we’ll see that it is convenient for the translation to tensor form to express these in reversed order.

Let’s write

\begin{aligned}d^n \alpha = d\alpha_1 d\alpha_2 \cdots d\alpha_n,\end{aligned} \hspace{\stretch{1}}(2.12)

so that our volume element in coordinate form is

\begin{aligned}d^n x = d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ).\end{aligned} \hspace{\stretch{1}}(2.13)

Our curl will also be a grade $n$ blade. We write for the $n-1$ grade blade

\begin{aligned}A = A_{b c \cdots d} (e^b \wedge e^c \wedge \cdots e^d),\end{aligned} \hspace{\stretch{1}}(2.14)

where $A_{b c \cdots d}$ is antisymmetric (i.e. $A = a_1 \wedge a_2 \wedge \cdots a_{n-1}$ for some set of vectors $a_i, i \in 1 .. n-1$).

With our gradient in coordinate form

\begin{aligned}\nabla = e^a \partial_a,\end{aligned} \hspace{\stretch{1}}(2.15)

the curl is then

\begin{aligned}\nabla \wedge A = \partial_a A_{b c \cdots d} (e^a \wedge e^b \wedge e^c \wedge \cdots e^d).\end{aligned} \hspace{\stretch{1}}(2.16)

The differential form for our integral can now be computed by expanding out the dot product. We want

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)=((((( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i ) \cdot e^a ) \cdot e^b ) \cdot e^c ) \cdot \cdots ) \cdot e^d.\end{aligned} \hspace{\stretch{1}}(2.17)

Evaluation of the interior dot products introduces the intrinsic antisymmetry required for Stokes theorem. For example, with

\begin{aligned}( e_n \wedge e_{n-1} \wedge \cdots \wedge e_2 \wedge e_1 ) \cdot e^a & =( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_2 ) (e_1 \cdot e^a) \\ & -( e_n \wedge e_{n-1} \wedge \cdots \wedge e_3 \wedge e_1 ) (e_2 \cdot e^a) \\ & +( e_n \wedge e_{n-1} \wedge \cdots \wedge e_4 \wedge e_2 \wedge e_1 ) (e_3 \cdot e^a) \\ & \cdots \\ & +(-1)^{n-1}( e_{n-1} \wedge e_{n-2} \wedge \cdots \wedge e_2 \wedge e_1 ) (e_n \cdot e^a)\end{aligned}

Since $e_i \cdot e^a = {\delta_i}^a$ our end result is a completely antisymmetric set of permutations of all the deltas

\begin{aligned}( e_l \wedge e_k \wedge \cdots \wedge e_j \wedge e_i )\cdot(e^a \wedge e^b \wedge e^c \wedge \cdots e^d)={\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l,\end{aligned} \hspace{\stretch{1}}(2.18)

and the curl integral takes its coordinate form

\begin{aligned}\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^i}}{\partial {\alpha_1}}\frac{\partial {x^j}}{\partial {\alpha_2}}\cdots \frac{\partial {x^k}}{\partial {\alpha_{n-1}}}\frac{\partial {x^l}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}{\delta^{[a}}_i{\delta^b}_j\cdots {\delta^{d]}}_l.\end{aligned} \hspace{\stretch{1}}(2.19)

One final contraction of the paired indexes gives us our Stokes integral in its coordinate representation

\begin{aligned}\boxed{\int d^n x \cdot ( \nabla \wedge A ) =\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d}}\end{aligned} \hspace{\stretch{1}}(2.20)

We now have a starting point that is free of any of the abstraction of Geometric Algebra or differential forms. We can identify the products of partials here as components of a scalar hypervolume element (possibly signed depending on the orientation of the parametrization)

\begin{aligned}d\alpha_1 d\alpha_2\cdots d\alpha_n\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\end{aligned} \hspace{\stretch{1}}(2.21)

This is also a specific computation recipe for these hypervolume components, something that may not be obvious when we allow for general metrics for the space. We are also allowing for non-orthonormal coordinate representations, and arbitrary parametrization of the subspace that we are integrating over (our integral need not have the same dimension as the underlying vector space).

Observe that when the number of parameters equals the dimension of the space, we can write out the antisymmetric term utilizing the determinant of the Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}= \epsilon^{a b \cdots d} {\left\lvert{ \frac{\partial(x^1, x^2, \cdots x^n)}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.22)

When the dimension of the space $n$ is greater than the number of parameters for the integration hypervolume in question, the antisymmetric sum of partials is still the determinant of a Jacobian matrix

\begin{aligned}\frac{\partial {x^{[a_1}}}{\partial {\alpha_1}}\frac{\partial {x^{a_2}}}{\partial {\alpha_2}}\cdots \frac{\partial {x^{a_{n-1}}}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{a_n]}}}{\partial {\alpha_n}}= {\left\lvert{ \frac{\partial(x^{a_1}, x^{a_2}, \cdots x^{a_n})}{\partial(\alpha_1, \alpha_2, \cdots \alpha_n)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.23)

however, we will have one such Jacobian for each unique choice of indexes.
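The determinant identity underlying eq. (2.22) — the fully antisymmetrized (Levi-Civita contracted) product of partials reproducing the Jacobian determinant — is a quick sympy check. The parametrization below is an arbitrary (hypothetical) choice, picked only so that the partials are non-trivial.

```python
import sympy as sp

a = sp.symbols('alpha_1:4')
# an arbitrary smooth parametrization of a 3D region
x = (a[0]*sp.cos(a[1]), a[0]*sp.sin(a[1]), a[0]*a[2])

# Jacobian matrix J[i, j] = d x^i / d alpha_j
J = sp.Matrix(3, 3, lambda i, j: sp.diff(x[i], a[j]))

# epsilon_{ijk} (dx^i/da_1)(dx^j/da_2)(dx^k/da_3): the antisymmetrized partials
anti = sum(sp.LeviCivita(i, j, k)
           * sp.diff(x[i], a[0]) * sp.diff(x[j], a[1]) * sp.diff(x[k], a[2])
           for i in range(3) for j in range(3) for k in range(3))

assert sp.simplify(anti - J.det()) == 0
```

For fewer parameters than dimensions, as in eq. (2.23), the same contraction runs over each choice of index subset, giving one minor determinant per choice.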

The Stokes work starts here.

The task is to relate our integral to the boundary of this volume, coming up with an explicit recipe for the description of that bounding surface, and determining the exact form of the reduced rank integral. This job is essentially to reduce the ranks of the tensors that are being contracted in our Stokes integral. With the derivative applied to our rank $n-1$ antisymmetric tensor $A_{b c \cdots d}$, we can apply the chain rule and examine the permutations so that this can be rewritten as a contraction of $A$ itself with a set of rank $n-1$ surface area elements.

\begin{aligned}\int d^n \alpha\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^b}}{\partial {\alpha_2}}\cdots \frac{\partial {x^c}}{\partial {\alpha_{n-1}}}\frac{\partial {x^{d]}}}{\partial {\alpha_n}}\partial_a A_{b c \cdots d} = ?\end{aligned} \hspace{\stretch{1}}(3.24)

Now, while the setup here has been completely general, this task is motivated by study of special relativity, where there is a requirement to work in a four dimensional space. Because of that explicit goal, I’m not going to attempt to formulate this in a completely abstract fashion. That task is really one of introducing sufficiently general notation. Instead, I’m going to proceed with a simpleton approach, and do this explicitly, and repeatedly for each of the rank 1, rank 2, and rank 3 tensor cases. It will be clear how this all generalizes by doing so, should one wish to work in still higher dimensional spaces.

The rank 1 tensor case.

The equation we are working with for this vector case is

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) =\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)\end{aligned} \hspace{\stretch{1}}(3.25)

Expanding out the antisymmetric partials we have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}} & =\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\frac{\partial {x^{b}}}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}},\end{aligned}

with which we can reduce the integral to

\begin{aligned}\int d^2 x \cdot (\nabla \wedge A) & =\int \left( d{\alpha_1}\frac{\partial {x^{a}}}{\partial {\alpha_1}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d{\alpha_2}\frac{\partial {x^{a}}}{\partial {\alpha_2}}\frac{\partial {A_{b}}}{\partial {x^a}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ & =\int \left( d\alpha_1 \frac{\partial {A_b}}{\partial {\alpha_1}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( d\alpha_2 \frac{\partial {A_b}}{\partial {\alpha_2}} \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1} \\ \end{aligned}

Now, if it happens that

\begin{aligned}\frac{\partial}{\partial {\alpha_1}}\frac{\partial {x^{a}}}{\partial {\alpha_2}} = \frac{\partial}{\partial {\alpha_2}}\frac{\partial {x^{a}}}{\partial {\alpha_1}} = 0\end{aligned} \hspace{\stretch{1}}(3.26)

then each of the individual integrals in $d\alpha_1$ and $d\alpha_2$ can be carried out. In that case, without any real loss of generality we can designate the integration bounds over the unit parametrization space square $\alpha_i \in [0,1]$, allowing this integral to be expressed as

\begin{aligned}\begin{aligned} & \int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2) \\ & =\int \left( A_b(1, \alpha_2) - A_b(0, \alpha_2) \right)\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-\left( A_b(\alpha_1, 1) - A_b(\alpha_1, 0) \right)\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.27)

It’s also fairly common to see ${\left.{{A}}\right\vert}_{{\partial \alpha_i}}$ used to designate evaluation of this first integral on the boundary, and using this we write

\begin{aligned}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\partial_a A_{b}(\alpha_1, \alpha_2)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.\end{aligned} \hspace{\stretch{1}}(3.28)

Also note that since we are summing over all $a,b$, and have

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}=-\frac{\partial {x^{[b}}}{\partial {\alpha_1}}\frac{\partial {x^{a]}}}{\partial {\alpha_2}},\end{aligned} \hspace{\stretch{1}}(3.29)

we can write this summing over all unique pairs of $a,b$ instead, which eliminates a small bit of redundancy (especially once the dimension of the vector space gets higher)

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int {\left.{{A_b}}\right\vert}_{{\partial \alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}} d{\alpha_2}-{\left.{{A_b}}\right\vert}_{{\partial \alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}} d{\alpha_1}.}\end{aligned} \hspace{\stretch{1}}(3.30)

In this form we have recovered the original geometric structure, with components of the curl multiplied by the component of the area element that shares the orientation and direction of that portion of the curl bivector.

This form of the result, with evaluation at the boundaries, assumed that ${\partial {x^a}}/{\partial {\alpha_1}}$ was not a function of $\alpha_2$, and that ${\partial {x^a}}/{\partial {\alpha_2}}$ was not a function of $\alpha_1$. When that is not the case, we appear to have a less pretty result

\begin{aligned}\boxed{\sum_{a < b}\int d{\alpha_1} d{\alpha_2}\frac{\partial {x^{[a}}}{\partial {\alpha_1}}\frac{\partial {x^{b]}}}{\partial {\alpha_2}}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_1}}\frac{\partial {x^{b}}}{\partial {\alpha_2}}-\int d\alpha_2\int d\alpha_1\frac{\partial {A_b}}{\partial {\alpha_2}}\frac{\partial {x^{b}}}{\partial {\alpha_1}}}\end{aligned} \hspace{\stretch{1}}(3.31)

Can this be reduced any further in the general case? Having seen the statements of Stokes theorem in its differential forms formulation, I initially expected the answer was yes, and only when I got to evaluating my $\mathbb{R}^{4}$ spacetime example below did I realize that the differential displacements for the parallelogram that constituted the area element were functions of both parameters. Perhaps this detail is there in the differential forms version of the general Stokes theorem too, but is just hidden in a tricky fashion by the compact notation.

Sanity check: $\mathbb{R}^{2}$ case in rectangular coordinates.

For $x^1 = x, x^2 = y$, and $\alpha_1 = x, \alpha_2 = y$, we have for the LHS

\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left(\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}-\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}\right)\partial_1 A_{2}+\left(\frac{\partial {x^{2}}}{\partial {\alpha_1}}\frac{\partial {x^{1}}}{\partial {\alpha_2}}-\frac{\partial {x^{1}}}{\partial {\alpha_1}}\frac{\partial {x^{2}}}{\partial {\alpha_2}}\right)\partial_2 A_{1} \\ & =\int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right)\end{aligned}

Our RHS expands to

\begin{aligned} & \int_{y=y_0}^{y_1} dy\left(\left( A_1(x_1, y) - A_1(x_0, y) \right)\frac{\partial {x^{1}}}{\partial y}+\left( A_2(x_1, y) - A_2(x_0, y) \right)\frac{\partial {x^{2}}}{\partial y}\right) \\ & \qquad-\int_{x=x_0}^{x_1} dx\left(\left( A_1(x, y_1) - A_1(x, y_0) \right)\frac{\partial {x^{1}}}{\partial x}+\left( A_2(x, y_1) - A_2(x, y_0) \right)\frac{\partial {x^{2}}}{\partial x}\right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}

We have

\begin{aligned}\begin{aligned} & \int_{x=x_0}^{x_1}\int_{y=y_0}^{y_1}dx dy\left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) \\ & =\int_{y=y_0}^{y_1} dy\left( A_y(x_1, y) - A_y(x_0, y) \right)-\int_{x=x_0}^{x_1} dx\left( A_x(x, y_1) - A_x(x, y_0) \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.32)

The RHS is just a positively oriented line integral around the rectangle of integration

\begin{aligned}\int A_x(x, y_0) \hat{\mathbf{x}} \cdot ( \hat{\mathbf{x}} dx )+ A_y(x_1, y) \hat{\mathbf{y}} \cdot ( \hat{\mathbf{y}} dy )+ A_x(x, y_1) \hat{\mathbf{x}} \cdot ( -\hat{\mathbf{x}} dx )+ A_y(x_0, y) \hat{\mathbf{y}} \cdot ( -\hat{\mathbf{y}} dy )= \oint \mathbf{A} \cdot d\mathbf{r}.\end{aligned} \hspace{\stretch{1}}(3.33)

This special case is also recognizable as Green’s theorem, evident with the substitution $A_x = P$, $A_y = Q$, which gives us

\begin{aligned}\int_A dx dy \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right)=\oint_C P dx + Q dy.\end{aligned} \hspace{\stretch{1}}(3.34)

Strictly speaking, Green’s theorem is more general, since it applies to integration regions more general than rectangles, but that generalization can be arrived at easily enough, once the region is broken down into adjoining elementary regions.
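As an after-the-fact numeric sanity check of 3.34 (my own addition, not part of the original derivation), here is a small Python experiment. The choices $P = y \sin x$, $Q = x y^2$ and the rectangle $[0,1] \times [0,2]$ are arbitrary, and midpoint-rule sums stand in for the integrals

```python
import math

# Arbitrarily chosen test functions P, Q and their analytic partials.
def P(x, y): return y * math.sin(x)
def Q(x, y): return x * y * y
def dQdx(x, y): return y * y          # partial Q / partial x
def dPdy(x, y): return math.sin(x)    # partial P / partial y

x0, x1, y0, y1 = 0.0, 1.0, 0.0, 2.0
N = 400
hx, hy = (x1 - x0) / N, (y1 - y0) / N

# Area integral of dQ/dx - dP/dy over the rectangle (midpoint rule).
area = sum(
    (dQdx(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
     - dPdy(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)) * hx * hy
    for i in range(N) for j in range(N))

# Counterclockwise line integral of P dx + Q dy around the boundary.
line = (
    sum(P(x0 + (i + 0.5) * hx, y0) * hx for i in range(N))      # bottom
    + sum(Q(x1, y0 + (j + 0.5) * hy) * hy for j in range(N))    # right
    - sum(P(x0 + (i + 0.5) * hx, y1) * hx for i in range(N))    # top
    - sum(Q(x0, y0 + (j + 0.5) * hy) * hy for j in range(N)))   # left

# For this P, Q the common value is 8/3 - 2(1 - cos 1), computable by hand.
exact = 8.0 / 3.0 - 2.0 * (1.0 - math.cos(1.0))
```

Both quadratures agree with each other and with the hand-computed value to well within the midpoint-rule error.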

Sanity check: $\mathbb{R}^{3}$ case in rectangular coordinates.

It is expected that we can recover the classical Kelvin-Stokes theorem if we use rectangular coordinates in $\mathbb{R}^{3}$. However, we see that we have to consider three different parametrizations. If one picks rectangular parametrizations $(\alpha_1, \alpha_2) = \{ (x,y), (y,z), (z,x) \}$ in sequence, in each case holding the value of the additional coordinate fixed, we get three different independent Green's theorem like relations

\begin{aligned}\int_A dx dy \left( \frac{\partial {A_y}}{\partial x} - \frac{\partial {A_x}}{\partial y} \right) & = \oint_C A_x dx + A_y dy \\ \int_A dy dz \left( \frac{\partial {A_z}}{\partial y} - \frac{\partial {A_y}}{\partial z} \right) & = \oint_C A_y dy + A_z dz \\ \int_A dz dx \left( \frac{\partial {A_x}}{\partial z} - \frac{\partial {A_z}}{\partial x} \right) & = \oint_C A_z dz + A_x dx.\end{aligned} \hspace{\stretch{1}}(3.35)

Note that we cannot just add these to form a complete integral $\oint \mathbf{A} \cdot d\mathbf{r}$ since the curves all have different orientations. To recover the $\mathbb{R}^{3}$ Stokes theorem in rectangular coordinates, it appears that we’d have to consider a Riemann sum of triangular surface elements, and relate that to the loops over each of the surface elements. In that limiting argument, only the boundary of the complete surface would contribute to the RHS of the relation.

All that said, we shouldn’t actually have to go to all this work. Instead we can stick to a two variable parametrization of the surface, and use 3.30 directly.

An illustration for an $\mathbb{R}^{4}$ spacetime surface.

Suppose we have a particle trajectory defined by an active Lorentz transformation from an initial spacetime point

\begin{aligned}x^i = O^{ij} x_j(0) = O^{ij} g_{jk} x^k(0) = {O^{i}}_k x^k(0)\end{aligned} \hspace{\stretch{1}}(3.38)

Let the Lorentz transformation be formed by a composition of boost and rotation

\begin{aligned}{O^i}_j & = {L^i}_k {R^k}_j \\ {L^i}_j & =\begin{bmatrix}\cosh\alpha & -\sinh\alpha & 0 & 0 \\ -\sinh\alpha & \cosh\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ {R^i}_j & =\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.39)

Different rates of evolution of $\alpha$ and $\theta$ define different trajectories, and taken together we have a surface described by the two parameters

\begin{aligned}x^i(\alpha, \theta) = {L^i}_k {R^k}_j x^j(0, 0).\end{aligned} \hspace{\stretch{1}}(3.42)

We can compute displacements along the trajectories formed by keeping either $\alpha$ or $\theta$ fixed and varying the other. Those are

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} d\alpha & = \frac{d{L^i}_k}{d\alpha} {R^k}_j x^j(0, 0) \\ \frac{\partial {x^i}}{\partial {\theta}} d\theta & = {L^i}_k \frac{d{R^k}_j}{d\theta} x^j(0, 0) .\end{aligned} \hspace{\stretch{1}}(3.43)

Writing $y^i = x^i(0,0)$ the computation of the partials above yields

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}\sinh\alpha y^0 -\cosh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ -\cosh\alpha y^0 +\sinh\alpha (\cos\theta y^1 + \sin\theta y^2) \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}-\sinh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ \cosh\alpha (-\sin\theta y^1 + \cos\theta y^2 ) \\ -(\cos\theta y^1 + \sin\theta y^2 ) \\ 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.45)

Different choices of the initial point $y^i$ yield different surfaces, but we can get the idea by picking a simple starting point $y^i = (0, 1, 0, 0)$ leaving

\begin{aligned}\frac{\partial {x^i}}{\partial {\alpha}} & =\begin{bmatrix}-\cosh\alpha \cos\theta \\ \sinh\alpha \cos\theta \\ 0 \\ 0\end{bmatrix} \\ \frac{\partial {x^i}}{\partial {\theta}} & =\begin{bmatrix}\sinh\alpha \sin\theta \\ -\cosh\alpha \sin\theta \\ -\cos\theta \\ 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.47)

We can now compute our Jacobian determinants

\begin{aligned}\frac{\partial {x^{[a}}}{\partial {\alpha}} \frac{\partial {x^{b]}}}{\partial {\theta}}={\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}.\end{aligned} \hspace{\stretch{1}}(3.49)

Those are

\begin{aligned}{\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} & = \cos\theta \sin\theta \\ {\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = \cosh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^0, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} & = -\sinh\alpha \cos^2\theta \\ {\left\lvert{\frac{\partial(x^1, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0 \\ {\left\lvert{\frac{\partial(x^2, x^3)}{\partial(\alpha, \theta)}}\right\rvert} & = 0\end{aligned} \hspace{\stretch{1}}(3.50)
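These Jacobians are easy to verify with central finite differences (a check of my own, not part of the post), building the boost and rotation matrices explicitly and starting from $y^i = (0, 1, 0, 0)$

```python
import math

def boost(a):  # {L^i}_j, boost in the 0-1 plane
    return [[math.cosh(a), -math.sinh(a), 0.0, 0.0],
            [-math.sinh(a), math.cosh(a), 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def rot(t):  # {R^i}_j, rotation in the 1-2 plane
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, math.cos(t), math.sin(t), 0.0],
            [0.0, -math.sin(t), math.cos(t), 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def x(a, t):
    # x^i(alpha, theta) = L R y, with the initial point y = (0, 1, 0, 0)
    return matvec(boost(a), matvec(rot(t), [0.0, 1.0, 0.0, 0.0]))

a, t, h = 0.3, 0.7, 1e-6
# central difference approximations of the partials
dxda = [(p - m) / (2 * h) for p, m in zip(x(a + h, t), x(a - h, t))]
dxdt = [(p - m) / (2 * h) for p, m in zip(x(a, t + h), x(a, t - h))]

def J(i, j):
    # two parameter Jacobian determinant | d(x^i, x^j)/d(alpha, theta) |
    return dxda[i] * dxdt[j] - dxda[j] * dxdt[i]
```

The six $J(i, j)$ values reproduce 3.50 at any sampled $(\alpha, \theta)$.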

Using this, let’s see a specific 4D example in spacetime for the integral of the curl of some four vector $A^i$, enumerating all the non-zero components of 3.31 for this particular spacetime surface

\begin{aligned}\sum_{a < b}\int d{\alpha} d{\theta}{\left\lvert{\frac{\partial(x^a, x^b)}{\partial(\alpha, \theta)}}\right\rvert}\left( \partial_a A_{b}-\partial_b A_{a} \right)=\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}}\end{aligned} \hspace{\stretch{1}}(3.56)

The LHS is thus found to be

\begin{aligned} & \int d{\alpha} d{\theta}\left({\left\lvert{\frac{\partial(x^0, x^1)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+{\left\lvert{\frac{\partial(x^0, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_0 A_{2} -\partial_2 A_{0} \right)+{\left\lvert{\frac{\partial(x^1, x^2)}{\partial(\alpha, \theta)}}\right\rvert} \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right) \\ & =\int d{\alpha} d{\theta}\left(\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right)+\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right)-\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right)\right)\end{aligned}

On the RHS we have

\begin{aligned}\int d\theta\int d\alpha & \frac{\partial {A_b}}{\partial {\alpha}}\frac{\partial {x^{b}}}{\partial {\theta}}-\int d\theta\int d\alpha\frac{\partial {A_b}}{\partial {\theta}}\frac{\partial {x^{b}}}{\partial {\alpha}} \\ & =\int d\theta\int d\alpha\begin{bmatrix}\sinh\alpha \sin\theta & -\cosh\alpha \sin\theta & -\cos\theta & 0\end{bmatrix}\frac{\partial}{\partial {\alpha}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ & -\int d\theta\int d\alpha\begin{bmatrix}-\cosh\alpha \cos\theta & \sinh\alpha \cos\theta & 0 & 0\end{bmatrix}\frac{\partial}{\partial {\theta}}\begin{bmatrix}A_0 \\ A_1 \\ A_2 \\ A_3 \\ \end{bmatrix} \\ \end{aligned}

\begin{aligned}\begin{aligned} & \int d{\alpha} d{\theta}\cos\theta \sin\theta \left( \partial_0 A_{1} -\partial_1 A_{0} \right) \\ & \qquad+\int d{\alpha} d{\theta}\cosh\alpha \cos^2\theta \left( \partial_0 A_{2} -\partial_2 A_{0} \right) \\ & \qquad-\int d{\alpha} d{\theta}\sinh\alpha \cos^2\theta \left( \partial_1 A_{2} -\partial_2 A_{1} \right) \\ & =\int d\theta \sin\theta \int d\alpha \left( \sinh\alpha \frac{\partial {A_0}}{\partial {\alpha}} - \cosh\alpha \frac{\partial {A_1}}{\partial {\alpha}} \right) \\ & \qquad-\int d\theta \cos\theta \int d\alpha \frac{\partial {A_2}}{\partial {\alpha}} \\ & \qquad+\int d\alpha \cosh\alpha \int d\theta \cos\theta \frac{\partial {A_0}}{\partial {\theta}} \\ & \qquad-\int d\alpha \sinh\alpha \int d\theta \cos\theta \frac{\partial {A_1}}{\partial {\theta}}\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.57)

Because of the complexity of the surface, only the second term on the RHS has the “evaluate on the boundary” characteristic that may have been expected from a Green’s theorem like line integral.

It is also worthwhile to point out that we have had to be very careful with upper and lower indexes all along (and have done so with the expectation that our application would include the special relativity case where our metric determinant is minus one.) Because we worked with upper indexes for the area element, we had to work with lower indexes for the four vector and the components of the gradient that we included in our curl evaluation.

The rank 2 tensor case.

Let’s consider briefly the terms in the contraction sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc}\end{aligned} \hspace{\stretch{1}}(3.58)

For any choice of a set of three distinct indexes $(a, b, c) \in \{ (0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3) \}$, we have $3! = 6$ ways of permuting those indexes in this sum

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\sum_{a < b < c} \Bigl( {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} + {\left\lvert{ \frac{\partial(x^a, x^c, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{cb} + {\left\lvert{ \frac{\partial(x^b, x^c, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ca} \\ & \qquad + {\left\lvert{ \frac{\partial(x^b, x^a, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_b A_{ac} + {\left\lvert{ \frac{\partial(x^c, x^a, x^b)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ab} + {\left\lvert{ \frac{\partial(x^c, x^b, x^a)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_c A_{ba} \Bigr) \\ & =2!\sum_{a < b < c}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right)\end{aligned}

Observe that we have no sign alternation like we had in the vector (rank 1 tensor) case. That sign alternation in this summation expansion appears to occur only for odd grade tensors.
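This $2!$ reduction, and the cyclic sign pattern, can be spot checked numerically. In the following sketch (my own, not from the text), a random $4 \times 3$ array stands in for the partials $\partial x^i / \partial \alpha_j$, and a random tensor antisymmetrized in its last two indexes stands in for $\partial_a A_{bc}$

```python
import itertools, random

random.seed(0)
# D[i][j] stands in for the partial of x^i with respect to alpha_j
D = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]

def det3(a, b, c):
    # | d(x^a, x^b, x^c) / d(alpha_1, alpha_2, alpha_3) |
    r = [D[a], D[b], D[c]]
    return (r[0][0] * (r[1][1] * r[2][2] - r[1][2] * r[2][1])
            - r[0][1] * (r[1][0] * r[2][2] - r[1][2] * r[2][0])
            + r[0][2] * (r[1][0] * r[2][1] - r[1][1] * r[2][0]))

raw = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
       for _ in range(4)]
def B(a, b, c):
    # stand-in for d_a A_{bc}: antisymmetric in b, c only
    return raw[a][b][c] - raw[a][c][b]

# full contraction over all index values (repeats contribute zero)
full = sum(det3(a, b, c) * B(a, b, c)
           for a in range(4) for b in range(4) for c in range(4))

# reduced form: 2! times the sum over unique ordered triples
reduced = 2 * sum(det3(a, b, c) * (B(a, b, c) + B(b, c, a) + B(c, a, b))
                  for a, b, c in itertools.combinations(range(4), 3))
```

The two sums agree to machine precision, confirming both the factor of $2!$ and the cyclic arrangement of the derivative terms.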

Returning to the problem, we wish to expand the determinant in order to apply a chain rule contraction as done in the rank-1 case. This can be done along any of rows or columns of the determinant, and we can write any of

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^b}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^b}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^b}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^c, x^a)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =\frac{\partial {x^c}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^c}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^c}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^a, x^b)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ \end{aligned}

This allows the contraction of the index $a$, eliminating it from the result

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \partial_a A_{bc} & =\left( \frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \right) \frac{\partial {A_{bc}}}{\partial {x^a}} \\ & =\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert} \\ & =2!\sum_{b < c}\left(\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert}-\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert}+\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert}\right) \\ \end{aligned}

Dividing out the common $2!$ terms, we can summarize this result as

\begin{aligned}\boxed{\begin{aligned}\sum_{a < b < c} & \int d\alpha_1 d\alpha_2 d\alpha_3 {\left\lvert{ \frac{\partial(x^a, x^b, x^c)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\left( \partial_a A_{bc} + \partial_b A_{c a} + \partial_c A_{a b} \right) \\ & =\sum_{b < c}\int d\alpha_2 d\alpha_3 \int d\alpha_1\frac{\partial {A_{bc}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_2, \alpha_3)} }\right\rvert} \\ & -\sum_{b < c}\int d\alpha_1 d\alpha_3 \int d\alpha_2\frac{\partial {A_{bc}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_3)} }\right\rvert} \\ & +\sum_{b < c}\int d\alpha_1 d\alpha_2 \int d\alpha_3\frac{\partial {A_{bc}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c)}{\partial(\alpha_1, \alpha_2)} }\right\rvert}\end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.59)

In general, as observed in the spacetime surface example above, the two index Jacobians can be functions of the integration variable first being eliminated. In the special cases where this is not the case (such as the $\mathbb{R}^{3}$ case with rectangular coordinates), then we are left with just the evaluation of the tensor element $A_{bc}$ on the boundaries of the respective integrals.

The rank 3 tensor case.

The key step is once again just a determinant expansion

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} & =\frac{\partial {x^a}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {x^a}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {x^a}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}

so that the sum can be reduced from a four index contraction to a 3 index contraction

\begin{aligned}{\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} & =\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert}+\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert}-\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert}\end{aligned}

That’s the essence of the theorem, but we can play the same combinatorial reduction games to reduce the built in redundancy in the result

\begin{aligned}\boxed{\begin{aligned}\frac{1}{{3!}} & \int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A_{bcd} \\ & =\sum_{a < b < c < d}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \left( \partial_a A_{bcd} -\partial_b A_{cda} +\partial_c A_{dab} -\partial_d A_{abc} \right) \\ & =\qquad \sum_{b < c < d}\int d\alpha_2 d\alpha_3 d\alpha_4 \int d\alpha_1\frac{\partial {A_{bcd}}}{\partial {\alpha_1}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_3 d\alpha_4 \int d\alpha_2\frac{\partial {A_{bcd}}}{\partial {\alpha_2}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_3, \alpha_4)} }\right\rvert} \\ & \qquad +\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_4 \int d\alpha_3\frac{\partial {A_{bcd}}}{\partial {\alpha_3}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_4)} }\right\rvert} \\ & \qquad -\sum_{b < c < d}\int d\alpha_1 d\alpha_2 d\alpha_3 \int d\alpha_4\frac{\partial {A_{bcd}}}{\partial {\alpha_4}} {\left\lvert{ \frac{\partial(x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3)} }\right\rvert} \\ \end{aligned}}\end{aligned} \hspace{\stretch{1}}(3.60)
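The alternating cofactor signs used in these expansions, including the minus sign that goes with the $\alpha_4$ column, can be verified against a brute force permutation-sum determinant. This check is my own addition

```python
import itertools, random

def det(M):
    # determinant via the signed sum over permutations
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign, prod = 1, 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod
    return total

def minor(M, col):
    # delete row 0 and the given column
    return [row[:col] + row[col + 1:] for row in M[1:]]

random.seed(1)
M = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]

# cofactor expansion along the first row: signs +, -, +, -
expansion = sum((-1) ** j * M[0][j] * det(minor(M, j)) for j in range(4))
```

The expansion agrees with the full determinant only with the alternating signs, which is what forces the minus sign on the fourth term of the expansion above.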

A note on the four divergence.

Our four divergence integral has the following form

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a A^a\end{aligned} \hspace{\stretch{1}}(3.61)

We can relate this to the rank 3 Stokes theorem with a duality transformation, multiplying with a pseudoscalar

\begin{aligned}A^a = \epsilon^{abcd} T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.62)

where $T_{bcd}$ can also be related back to the vector by the same sort of duality transformation. In Euclidean signature, where the upper and lower index $\epsilon$'s agree, contracting with distinct dummy indexes gives

\begin{aligned}\epsilon_{a b c d} A^a = \epsilon_{a b c d} \epsilon^{a e f g} T_{e f g} = 3! \, T_{b c d},\end{aligned} \hspace{\stretch{1}}(3.63)

with an additional minus sign showing up for the Minkowski metric, where the metric determinant is minus one.

The divergence integral in terms of the rank 3 tensor is

\begin{aligned}\int d^4 \alpha {\left\lvert{ \frac{\partial(x^1, x^2, x^3, x^4)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a \epsilon^{abcd} T_{bcd}=\int d^4 \alpha {\left\lvert{ \frac{\partial(x^a, x^b, x^c, x^d)}{\partial(\alpha_1, \alpha_2, \alpha_3, \alpha_4)} }\right\rvert} \partial_a T_{bcd},\end{aligned} \hspace{\stretch{1}}(3.64)

and we are free to perform the same Stokes reduction of the integral. Of course, this is particularly simple in rectangular coordinates. I still have to think through one subtlety that I feel may be important. We could have started off with an integral of the following form

\begin{aligned}\int dx^1 dx^2 dx^3 dx^4 \partial_a A^a,\end{aligned} \hspace{\stretch{1}}(3.65)

and I think this differs from our starting point slightly because this has none of the antisymmetric structure of the signed 4 volume element that we have used. We do not take the absolute value of our Jacobians anywhere.

Vector form of Julia fractal

Posted by peeterjoot on December 27, 2010

Motivation.

As outlined in [1], 2-D and N-D Julia fractals can be computed using the geometric product, instead of complex numbers. Explore a couple of details related to that here.

Guts

Fractal patterns like the Mandelbrot and Julia sets are typically generated using iterative computations in the complex plane. For the Julia set, our iteration has the form

\begin{aligned}Z \rightarrow Z^p + C\end{aligned} \hspace{\stretch{1}}(2.1)

where $p$ is an integer constant, and $Z$, and $C$ are complex numbers. For $p=2$, iterating from $Z = 0$ and sweeping $C$ over the plane instead produces the Mandelbrot set. Given the isomorphism between complex numbers and vectors using the geometric product, we can write

\begin{aligned}Z &= \mathbf{x} \hat{\mathbf{n}} \\ C &= \mathbf{c} \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.2)

and reexpress the Julia iterator as

\begin{aligned}\mathbf{x} \rightarrow (\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} + \mathbf{c}\end{aligned} \hspace{\stretch{1}}(2.4)

It’s not obvious that the RHS of this equation is a vector and not a multivector, especially when the vector $\mathbf{x}$ lies in $\mathbb{R}^{3}$ or higher dimensional space. To get a feel for this, let’s start by writing this out in components for $\hat{\mathbf{n}} = \mathbf{e}_1$ and $p=2$. We obtain for the product term

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}} &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \hat{\mathbf{n}} \hat{\mathbf{n}} \\ &= \mathbf{x} \hat{\mathbf{n}} \mathbf{x} \\ &= (x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 )\mathbf{e}_1(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 ) \\ &= (x_1 + x_2 \mathbf{e}_2 \mathbf{e}_1 )(x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 ) \\ &= (x_1^2 - x_2^2 ) \mathbf{e}_1 + 2 x_1 x_2 \mathbf{e}_2\end{aligned}

Looking at the same square in coordinate representation for the $\mathbb{R}^{n}$ case (using summation notation unless otherwise specified), we have

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &= x_k \mathbf{e}_k \mathbf{e}_1 x_m \mathbf{e}_m \\ &= \left(x_1 + \sum_{k>1} x_k \mathbf{e}_k \mathbf{e}_1\right)x_m \mathbf{e}_m \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= x_1 x_m \mathbf{e}_m +\sum_{k>1} x_k x_1 \mathbf{e}_k +\sum_{k>1,m>1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ &= \left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k +\sum_{k \ne m,\; k, m > 1} x_k x_m \mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m \\ \end{aligned}

This last term is zero since $\mathbf{e}_k \mathbf{e}_1 \mathbf{e}_m = -\mathbf{e}_m \mathbf{e}_1 \mathbf{e}_k$, and we are left with

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} =\left(x_1^2 -\sum_{k>1} x_k^2\right) \mathbf{e}_1+2 \sum_{k>1} x_1 x_k \mathbf{e}_k,\end{aligned} \hspace{\stretch{1}}(2.5)

a vector, even for non-planar vectors. How about for an arbitrary orientation of the unit vector in $\mathbb{R}^{n}$? For that we get

\begin{aligned}\mathbf{x} \hat{\mathbf{n}} \mathbf{x} &=((\mathbf{x} \cdot \hat{\mathbf{n}}) \hat{\mathbf{n}} + (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}} ) \hat{\mathbf{n}} \mathbf{x} \\ &=(\mathbf{x} \cdot \hat{\mathbf{n}} + \mathbf{x} \wedge \hat{\mathbf{n}} ) ((\mathbf{x} \cdot \hat{\mathbf{n}}) \hat{\mathbf{n}} + (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}} ) \\ &=((\mathbf{x} \cdot \hat{\mathbf{n}})^2 + (\mathbf{x} \wedge \hat{\mathbf{n}})^2) \hat{\mathbf{n}}+ 2 (\mathbf{x} \cdot \hat{\mathbf{n}}) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}\end{aligned}

We can read 2.5 off of this result by inspection for the $\hat{\mathbf{n}} = \mathbf{e}_1$ case.
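In the planar case, 2.5 is just the complex number square in disguise, which is easy to verify numerically (a two-line check of my own, with arbitrarily picked components)

```python
import random

random.seed(3)
x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)

# components of x n x from 2.5, with nhat = e_1 and x in the x-y plane
v1 = x1 * x1 - x2 * x2
v2 = 2 * x1 * x2

# the same operation under the complex number isomorphism: z -> z^2
z = complex(x1, x2) ** 2
```

The real and imaginary parts of $z^2$ match the $\mathbf{e}_1$ and $\mathbf{e}_2$ components exactly.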

It is now straightforward to show that the product $(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}}$ is a vector for integer $p \ge 2$. We’ve covered the $p=2$ case, justifying an assumption that this product has the following form

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p-1} \hat{\mathbf{n}} = a \hat{\mathbf{n}} + b (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}},\end{aligned} \hspace{\stretch{1}}(2.6)

for scalars $a$ and $b$. The induction test becomes

\begin{aligned}(\mathbf{x} \hat{\mathbf{n}})^{p} \hat{\mathbf{n}} &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} (\mathbf{x} \hat{\mathbf{n}}) \hat{\mathbf{n}} \\ &= (\mathbf{x} \hat{\mathbf{n}})^{p-1} \mathbf{x} \\ &= (a + b (\mathbf{x} \wedge \hat{\mathbf{n}}) ) ((\mathbf{x} \cdot \hat{\mathbf{n}} )\hat{\mathbf{n}} + (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}) \\ &= ( a(\mathbf{x} \cdot \hat{\mathbf{n}} ) + b (\mathbf{x} \wedge \hat{\mathbf{n}})^2 ) \hat{\mathbf{n}}+ ( a + b(\mathbf{x} \cdot \hat{\mathbf{n}} ) ) (\mathbf{x} \wedge \hat{\mathbf{n}}) \hat{\mathbf{n}}.\end{aligned}

Again we have a vector split nicely into projective and rejective components, so for any integer power $p \ge 2$ our iterator 2.4 employing the geometric product is a mapping from vectors to vectors.
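The grade-1 purity of $(\mathbf{x} \hat{\mathbf{n}})^p \hat{\mathbf{n}}$ can also be confirmed with a few lines of code. The bitmap blade representation below follows the general scheme described in [1], but the implementation is my own sketch, assuming a Euclidean metric ($\mathbf{e}_i^2 = +1$)

```python
import math, random

def blade_sign(a, b):
    # sign from the transpositions needed to put the product of
    # basis blades a and b (bitmasks) into canonical order
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(A, B):
    # geometric product of multivectors stored as {bitmask: coeff}
    out = {}
    for ka, va in A.items():
        for kb, vb in B.items():
            k = ka ^ kb
            out[k] = out.get(k, 0.0) + blade_sign(ka, kb) * va * vb
    return out

def vector(c):
    return {1 << i: ci for i, ci in enumerate(c)}

random.seed(4)
x = vector([random.uniform(-1, 1) for _ in range(3)])
n = [random.uniform(-1, 1) for _ in range(3)]
norm = math.sqrt(sum(c * c for c in n))
n = vector([c / norm for c in n])  # arbitrary unit vector nhat

p = 3
xn = gp(x, n)
acc = dict(xn)
for _ in range(p - 1):
    acc = gp(acc, xn)
result = gp(acc, n)  # (x nhat)^p nhat

# largest coefficient on any non grade-1 blade
junk = max([abs(v) for k, v in result.items()
            if bin(k).count("1") != 1], default=0.0)
```

Up to floating point cancellation noise, only the grade-1 components survive, as the induction argument predicts.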

There is a striking image in the text of such a Julia set for a 3D iterator, and an exercise left for the adventurous reader to attempt coding it, based on the 2D $p=2$ sample code provided there.

References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

Multivector commutators and Lorentz boosts.

Posted by peeterjoot on October 31, 2010

Motivation.

In some reading I found that the electrodynamic field components transform in a reversed sense to that of vectors: instead of the components perpendicular to the boost direction remaining unaffected, those are the parts that are altered.

To explore this, look at the Lorentz boost action on a multivector, utilizing symmetric and antisymmetric products to split that multivector into portions affected and unaffected by the boost. For the bivector (electrodynamic) case and the four vector case, examine how these map to dot and wedge (or cross) products.

The underlying motivator for this boost consideration is an attempt to see where equation (6.70) of [1] comes from. We get to this by the very end.

Guts.

Structure of the bivector boost.

Recall that we can write our Lorentz boost in exponential form with

\begin{aligned}L &= e^{\alpha \boldsymbol{\sigma}/2} \\ X' &= L^\dagger X L,\end{aligned} \hspace{\stretch{1}}(2.1)

where $\boldsymbol{\sigma}$ is a spatial vector. This works for our bivector field too, assuming the composite transformation is an outermorphism of the transformed four vectors. Applying the boost to both the gradient and the potential, our transformed field is then

\begin{aligned}F' &= \nabla' \wedge A' \\ &= (L^\dagger \nabla L) \wedge (L^\dagger A L) \\ &= \frac{1}{{2}} \left((L^\dagger \stackrel{ \rightarrow }{\nabla} L) (L^\dagger A L) -(L^\dagger A L) (L^\dagger \stackrel{ \leftarrow }{\nabla} L)\right) \\ &= \frac{1}{{2}} L^\dagger \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) L \\ &= L^\dagger (\nabla \wedge A) L.\end{aligned}

Note that arrows were used briefly to indicate that the partials of the gradient are still acting on $A$ despite their vector components being to one side. We are left with the very simple transformation rule

\begin{aligned}F' = L^\dagger F L,\end{aligned} \hspace{\stretch{1}}(2.3)

which has exactly the same structure as the four vector boost.

Employing the commutator and anticommutator to find the parallel and perpendicular components.

If we apply the boost to a four vector, those components of the four vector that commute with the spatial direction $\boldsymbol{\sigma}$ are unaffected. As an example, which also serves to ensure we have the sign of the rapidity angle $\alpha$ correct, consider $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. We have

\begin{aligned}X' = e^{-\alpha \boldsymbol{\sigma}/2} ( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 ) (\cosh \alpha/2 + \gamma_1 \gamma_0 \sinh \alpha/2 )\end{aligned} \hspace{\stretch{1}}(2.4)

We observe that the scalar and $\boldsymbol{\sigma}_1 = \gamma_1 \gamma_0$ components of the exponential commute with $\gamma_2$ and $\gamma_3$ since there is no vector in common, but that $\boldsymbol{\sigma}_1$ anticommutes with $\gamma_0$ and $\gamma_1$. We can therefore write

\begin{aligned}X' &= x^2 \gamma_2 + x^3 \gamma_3 +( x^0 \gamma_0 + x^1 \gamma_1 ) (\cosh \alpha + \gamma_1 \gamma_0 \sinh \alpha ) \\ &= x^2 \gamma_2 + x^3 \gamma_3 +\gamma_0 ( x^0 \cosh\alpha - x^1 \sinh \alpha )+ \gamma_1 ( x^1 \cosh\alpha - x^0 \sinh \alpha )\end{aligned}

reproducing the familiar matrix result should we choose to write it out. How can we express the commutation property without resorting to components? We could write the four vector as a spatial and timelike component, as in
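A small numeric check of the sign conventions (my own, with arbitrary sample values): the hyperbolic mixing above leaves the interval $(x^0)^2 - \sum_i (x^i)^2$ invariant, with the $\gamma_2, \gamma_3$ components untouched:

```python
import math

# Boost of the (x0, x1) components by rapidity alpha, as read off above;
# x2 and x3 ride along unchanged.  The sample values are arbitrary.
alpha = 0.7
x0, x1, x2, x3 = 4.0, 1.0, 2.0, 3.0

ch, sh = math.cosh(alpha), math.sinh(alpha)
x0p = x0 * ch - x1 * sh
x1p = x1 * ch - x0 * sh

# the spacetime interval is invariant under the boost
interval = x0**2 - x1**2 - x2**2 - x3**2
interval_p = x0p**2 - x1p**2 - x2**2 - x3**2
assert abs(interval - interval_p) < 1e-9
```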

\begin{aligned}X = x^0 \gamma_0 + \mathbf{x} \gamma_0,\end{aligned} \hspace{\stretch{1}}(2.5)

and further separate that into components parallel and perpendicular to the spatial unit vector $\boldsymbol{\sigma}$ as

\begin{aligned}X = x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 + (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0.\end{aligned} \hspace{\stretch{1}}(2.6)

However, it would be nicer to group the first two terms together, since they are the ones that are affected by the transformation. It would also be nice to not have to resort to spatial dot and wedge products, since we get into trouble too easily if we try to mix dot and wedge products of four vector and spatial vector components.

What we can do is employ symmetric and antisymmetric products (the anticommutator and commutator respectively). Recall that we can write any multivector product this way, and in particular

\begin{aligned}M \boldsymbol{\sigma} = \frac{1}{{2}} (M \boldsymbol{\sigma} + \boldsymbol{\sigma} M) + \frac{1}{{2}} (M \boldsymbol{\sigma} - \boldsymbol{\sigma} M).\end{aligned} \hspace{\stretch{1}}(2.7)

Right multiplying by the unit spatial vector $\boldsymbol{\sigma}$ (and using $\boldsymbol{\sigma}^2 = 1$) we have

\begin{aligned}M = \frac{1}{{2}} (M + \boldsymbol{\sigma} M \boldsymbol{\sigma}) + \frac{1}{{2}} (M - \boldsymbol{\sigma} M \boldsymbol{\sigma}) = \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.8)

When $M = \mathbf{a}$ is a spatial vector this is our familiar split into parallel and perpendicular components with the respective projection and rejection operators

\begin{aligned}\mathbf{a} = \frac{1}{{2}} \left\{\mathbf{a},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} + \frac{1}{{2}} \left[{\mathbf{a}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} = (\mathbf{a} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + (\mathbf{a} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.9)
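A numeric illustration of this split (my own, with arbitrary sample vectors): in 3D the rejection $(\mathbf{a} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}$ can be computed as the double cross product $\boldsymbol{\sigma} \times (\mathbf{a} \times \boldsymbol{\sigma})$, and the two pieces recombine to $\mathbf{a}$:

```python
# Projection/rejection split a = (a.s)s + (a^s)s for a unit spatial vector s.
# In 3D the rejection (a^s)s equals the double cross product s x (a x s);
# the sample vectors are arbitrary.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a = [3.0, -1.0, 2.0]
s = [0.0, 1.0, 0.0]                       # unit vector

proj = [dot(a, s) * si for si in s]       # (a . s) s
rej = cross(s, cross(a, s))               # (a ^ s) s
recombined = [p + r for p, r in zip(proj, rej)]

assert all(abs(x - y) < 1e-12 for x, y in zip(recombined, a))
assert abs(dot(rej, s)) < 1e-12           # rejection is perpendicular to s
```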

However, the more general split employing symmetric and antisymmetric products in 2.8 is something we can use for our four vector and bivector objects too.

Observe that we have the commutation and anti-commutation relationships

\begin{aligned}\left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= \boldsymbol{\sigma} \left( \frac{1}{{2}} \left\{{M},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} \right) \\ \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right) \boldsymbol{\sigma} &= -\boldsymbol{\sigma} \left( \frac{1}{{2}} \left[{M},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} \right).\end{aligned} \hspace{\stretch{1}}(2.10)

This split therefore serves to separate the multivector object in question nicely into the portions that are acted on by the Lorentz boost and those that are left unaffected.

Application of the symmetric and antisymmetric split to the bivector field.

Let’s apply 2.8 to the spacetime event $X$ again with an x-axis boost $\boldsymbol{\sigma} = \boldsymbol{\sigma}_1$. The anticommutator portion of $X$ in this boost direction is

\begin{aligned}\frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}_1}\right\} \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)+\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^2 \gamma_2 + x^3 \gamma_3,\end{aligned}

whereas the commutator portion gives us

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}_1}\right] \boldsymbol{\sigma}_1&=\frac{1}{{2}} \left(\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right)-\gamma_1 \gamma_0\left( x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3 \right) \gamma_1 \gamma_0\right) \\ &=x^0 \gamma_0 + x^1 \gamma_1.\end{aligned}

We’ve seen that only these commutator portions are acted on by the boost. We have therefore found the desired logical grouping of the four vector $X$ into portions that are left unchanged by the boost and those that are affected. That is

\begin{aligned}\frac{1}{{2}} \left[{X},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= x^0 \gamma_0 + (\mathbf{x} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \\ \frac{1}{{2}} \left\{{X},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{x} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} \gamma_0 \end{aligned} \hspace{\stretch{1}}(2.12)

Let’s now return to the bivector field $F = \nabla \wedge A = \mathbf{E} + I c \mathbf{B}$, and split that multivector into boostable and unboostable portions with the commutator and anticommutator respectively.

Observing that our pseudoscalar $I$ commutes with all spatial vectors we have for the anticommutator parts that will not be affected by the boost

\begin{aligned}\frac{1}{{2}} \left\{{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right\} \boldsymbol{\sigma} &= (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma},\end{aligned} \hspace{\stretch{1}}(2.14)

and for the components that will be boosted we have

\begin{aligned}\frac{1}{{2}} \left[{\mathbf{E} + I c \mathbf{B}},{\boldsymbol{\sigma}}\right] \boldsymbol{\sigma} &= (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}.\end{aligned} \hspace{\stretch{1}}(2.15)

For the four vector case we saw that the components that lay “perpendicular” to the boost direction were unaffected by the boost. For the field we see the opposite: the components of the individual electric and magnetic fields that are parallel to the boost direction are unaffected.

Our boosted field is therefore

\begin{aligned}F' = (\mathbf{E} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \cdot \boldsymbol{\sigma}) \boldsymbol{\sigma}+ \left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)\end{aligned} \hspace{\stretch{1}}(2.16)

Focusing on just the non-parallel terms we have

\begin{aligned}\left( (\mathbf{E} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma} + I c (\mathbf{B} \wedge \boldsymbol{\sigma}) \boldsymbol{\sigma}\right) \left( \cosh \alpha + \boldsymbol{\sigma} \sinh \alpha \right)&=(\mathbf{E}_\perp + I c \mathbf{B}_\perp ) \cosh\alpha+(I \mathbf{E} \times \boldsymbol{\sigma} - c \mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha \\ &=\mathbf{E}_\perp \cosh\alpha - c (\mathbf{B} \times \boldsymbol{\sigma} ) \sinh\alpha + I ( c \mathbf{B}_\perp \cosh\alpha + (\mathbf{E} \times \boldsymbol{\sigma}) \sinh\alpha ) \\ &=\gamma \left(\mathbf{E}_\perp - c (\mathbf{B} \times \boldsymbol{\sigma} ) {\left\lvert{\mathbf{v}}\right\rvert}/c+ I ( c \mathbf{B}_\perp + (\mathbf{E} \times \boldsymbol{\sigma}) {\left\lvert{\mathbf{v}}\right\rvert}/c) \right)\end{aligned}

A final regrouping gives us

\begin{aligned}F'&=\mathbf{E}_\parallel + \gamma \left( \mathbf{E}_\perp - \mathbf{B} \times \mathbf{v} \right)+I c \left( \mathbf{B}_\parallel + \gamma \left( \mathbf{B}_\perp + \mathbf{E} \times \mathbf{v}/c^2 \right) \right)\end{aligned} \hspace{\stretch{1}}(2.17)
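As a numeric sanity check of 2.17 (my own, with arbitrary field values and an x-axis boost, in units with $c = 1$), both field invariants $\mathbf{E} \cdot \mathbf{B}$ and $\mathbf{E}^2 - c^2 \mathbf{B}^2$ should be preserved:

```python
# Numeric check of the boosted fields of 2.17: both invariants E.B and
# E^2 - c^2 B^2 are preserved.  Vectors and speed are arbitrary samples,
# with the boost direction taken along x and units chosen so c = 1.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

c = 1.0
v = [0.6, 0.0, 0.0]
gamma = 1.0 / (1.0 - dot(v, v) / c**2) ** 0.5

E = [1.0, 2.0, -1.0]
B = [0.5, -0.3, 0.8]

# parallel component (x) unchanged; perpendicular components boosted per 2.17
Bxv, Exv = cross(B, v), cross(E, v)
Ep = [E[0], gamma * (E[1] - Bxv[1]), gamma * (E[2] - Bxv[2])]
Bp = [B[0], gamma * (B[1] + Exv[1] / c**2), gamma * (B[2] + Exv[2] / c**2)]

assert abs(dot(Ep, Bp) - dot(E, B)) < 1e-9
assert abs((dot(Ep, Ep) - c**2 * dot(Bp, Bp))
           - (dot(E, E) - c**2 * dot(B, B))) < 1e-9
```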

In particular when we consider the proton, electron system as in equation (6.70) of [1] where it is stated that the electron will feel a magnetic field given by

\begin{aligned}\mathbf{B} = - \frac{\mathbf{v}}{c} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.18)

we can see where this comes from. If $F = \mathbf{E} + I c (0)$ is the field acting on the electron, then applying a $\mathbf{v}$ boost perpendicular to the field (i.e. radial motion) gives

\begin{aligned}F' = I c \gamma \mathbf{E} \times \mathbf{v}/c^2 =-I c \gamma \frac{\mathbf{v}}{c^2} \times \mathbf{E}\end{aligned} \hspace{\stretch{1}}(2.19)

We also have an additional $1/c$ factor in our result, but that’s a consequence of the choice of units where the dimensions of $\mathbf{E}$ match $c \mathbf{B}$, whereas in the text $\mathbf{E}$ and $\mathbf{B}$ have the same units. We also have an additional $\gamma$ factor, so we must presume that ${\left\lvert{\mathbf{v}}\right\rvert} \ll c$ in this portion of the text. That is actually a requirement here, for if the electron were already in motion, we'd have to boost a field that also included a magnetic component. A consequence of this is that the final interaction Hamiltonian of (6.75) is necessarily non-relativistic.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Newton’s method for intersection of curves in a plane.

Posted by peeterjoot on March 7, 2010

[Click here for a PDF of this post with nicer formatting] Note that this PDF file is formatted in a wide-for-screen layout that is probably not good for printing.

Motivation.

The blog post Problem solving, artificial intelligence and computational linear algebra gives some variations of Newton’s method for finding local minima and maxima.

While I’d seen the Hessian matrix eons ago in the context of back propagation feedback methods, Newton’s method itself I remember as a first order root finding method. Here I refresh my memory of what that simpler Newton’s method was about, and build on it slightly to find the form of the solution for the intersection of an arbitrarily oriented line with a curve, and finally the problem of refining an approximation for the intersection of two curves using the same technique.

Root finding as the intersection with a horizontal.

Refining an approximate horizontal intersection.

The essence of Newton’s method for finding roots is following the tangent from the point of first guess down to the line that one wants to intersect with the curve. This is illustrated in figure (\ref{fig:newtonsIntersectionHorizontal}).

Algebraically, the problem is that of finding the point $x_1$, which is given by the tangent

\begin{aligned}\frac{f(x_0) - b}{x_0 - x_1} = f'(x_0).\end{aligned} \hspace{\stretch{1}}(2.1)

Rearranging and solving for $x_1$, we have

\begin{aligned}x_1 = x_0 - \frac{f(x_0) - b}{f'(x_0)}\end{aligned} \hspace{\stretch{1}}(2.2)

If one presumes convergence, something not guaranteed, then a first guess, if good enough, will get closer and closer to the target with each iteration. If the first guess is far from the target, following the tangent line can ping pong you to some other part of the curve, and it is possible not to find the root, or to find some other one.
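For concreteness, here is a minimal Python sketch of the iteration 2.2; the function, target $b$, and starting guess are arbitrary choices of mine:

```python
# One-variable Newton iteration for the intersection f(x) = b of 2.2,
# demonstrated on f(x) = x^2 with b = 2 (so the answer is sqrt(2)).
# The function choice and starting guess are arbitrary.
def newton(f, fprime, b, x0, steps=20):
    x = x0
    for _ in range(steps):
        x = x - (f(x) - b) / fprime(x)
    return x

root = newton(lambda x: x * x, lambda x: 2.0 * x, b=2.0, x0=1.0)
assert abs(root - 2.0 ** 0.5) < 1e-12
```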

Intersection with a line.

Refining an approximation for the intersection with an arbitrarily oriented line.

The above pictorial treatment works nicely for the intersection of a horizontal line with a curve. Now consider the intersection of an arbitrarily oriented line with a curve, as illustrated in figure (\ref{fig:newtonsIntersectionAnyOrientation}). Here it is useful to set up the problem algebraically from the beginning. Our problem is really still just that of finding the intersection of two lines. The curve itself can be considered the set of end points of the vector

\begin{aligned}\mathbf{r}(x) = x \mathbf{e}_1 + f(x) \mathbf{e}_2,\end{aligned} \hspace{\stretch{1}}(3.3)

for which the tangent direction vector is

\begin{aligned}\mathbf{t}(x) = \frac{d\mathbf{r}}{dx} = \mathbf{e}_1 + f'(x) \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(3.4)

The set of points on this tangent, taken at the point $x_0$, can also be written as a vector, namely

\begin{aligned}(x_0, f(x_0)) + \alpha \mathbf{t}(x_0).\end{aligned} \hspace{\stretch{1}}(3.5)

For the line to intersect this, suppose we have one point on the line $\mathbf{p}_0$, and a direction vector for that line $\hat{\mathbf{u}}$. The points on this line are therefore all the endpoints of

\begin{aligned}\mathbf{p}_0 + \beta \hat{\mathbf{u}}.\end{aligned} \hspace{\stretch{1}}(3.6)

Provided that the tangent and the line of intersection do in fact intersect then our problem becomes finding $\alpha$ or $\beta$ after equating 3.5 and 3.6. This is the solution of

\begin{aligned}(x_0, f(x_0)) + \alpha \mathbf{t}(x_0) = \mathbf{p}_0 + \beta \hat{\mathbf{u}}.\end{aligned} \hspace{\stretch{1}}(3.7)

Since we don’t care which of $\alpha$ or $\beta$ we solve for, setting this up as a matrix equation in two variables isn’t the best approach. Instead we wedge both sides with $\mathbf{t}(x_0)$ (or $\hat{\mathbf{u}}$), essentially using Cramer’s method. This gives

\begin{aligned}\left((x_0, f(x_0)) -\mathbf{p}_0 \right) \wedge \mathbf{t}(x_0) = \beta \hat{\mathbf{u}} \wedge \mathbf{t}(x_0).\end{aligned} \hspace{\stretch{1}}(3.8)

If the lines are not parallel, then both sides are scalar multiples of $\mathbf{e}_1 \wedge \mathbf{e}_2$, and dividing out one gets

\begin{aligned}\beta = \frac{\left((x_0, f(x_0)) -\mathbf{p}_0 \right) \wedge \mathbf{t}(x_0)}{\hat{\mathbf{u}} \wedge \mathbf{t}(x_0)}.\end{aligned} \hspace{\stretch{1}}(3.9)

Writing out $\mathbf{t}(x_0) = \mathbf{e}_1 + f'(x_0) \mathbf{e}_2$, explicitly, this is

\begin{aligned}\beta = \frac{\left((x_0, f(x_0)) -\mathbf{p}_0 \right) \wedge \left(\mathbf{e}_1 + f'(x_0) \mathbf{e}_2\right)}{\hat{\mathbf{u}} \wedge \left(\mathbf{e}_1 + f'(x_0) \mathbf{e}_2\right)}.\end{aligned} \hspace{\stretch{1}}(3.10)

Further, dividing out the common $\mathbf{e}_1 \wedge \mathbf{e}_2$ bivector, we have a ratio of determinants

\begin{aligned}\beta = \frac{\begin{vmatrix}x_0 -\mathbf{p}_0 \cdot \mathbf{e}_1 & f(x_0) - \mathbf{p}_0 \cdot \mathbf{e}_2 \\ 1 & f'(x_0) \\ \end{vmatrix}}{\begin{vmatrix}\hat{\mathbf{u}} \cdot \mathbf{e}_1 & \hat{\mathbf{u}} \cdot \mathbf{e}_2 \\ 1 & f'(x_0) \\ \end{vmatrix}}.\end{aligned} \hspace{\stretch{1}}(3.11)

The final step in the solution is noting that the point of intersection is just

\begin{aligned}\mathbf{p}_0 + \beta \hat{\mathbf{u}},\end{aligned} \hspace{\stretch{1}}(3.12)

and in particular, the $x$ coordinate of this is the desired result of one step of iteration

\begin{aligned}x_1 = \mathbf{p}_0 \cdot \mathbf{e}_1 + (\hat{\mathbf{u}} \cdot \mathbf{e}_1)\frac{\begin{vmatrix}x_0 -\mathbf{p}_0 \cdot \mathbf{e}_1 & f(x_0) - \mathbf{p}_0 \cdot \mathbf{e}_2 \\ 1 & f'(x_0) \\ \end{vmatrix}}{\begin{vmatrix}\hat{\mathbf{u}} \cdot \mathbf{e}_1 & \hat{\mathbf{u}} \cdot \mathbf{e}_2 \\ 1 & f'(x_0) \\ \end{vmatrix}}.\end{aligned} \hspace{\stretch{1}}(3.13)

This looks a whole lot different than the original $x_1$ for the horizontal from back at 2.2, but substitution of $\hat{\mathbf{u}} = \mathbf{e}_1$, and $\mathbf{p}_0 = b \mathbf{e}_2$, shows that these are identical.
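A Python sketch of one step of 3.13 (my own construction, with arbitrary demo values) confirms both that reduction to 2.2 and the convergence against a non-horizontal line:

```python
# One step of 3.13: refine x for the intersection of y = f(x) with the line
# p0 + beta * u.  The curve, line, and starting guess are arbitrary demos.
def step(f, fprime, x0, p0, u):
    # ratio of the two 2x2 determinants of 3.11
    num = (x0 - p0[0]) * fprime(x0) - (f(x0) - p0[1]) * 1.0
    den = u[0] * fprime(x0) - u[1] * 1.0
    beta = num / den
    return p0[0] + u[0] * beta

f = lambda x: x * x
fp = lambda x: 2.0 * x
x0, b = 3.0, 2.0

# sanity check: a horizontal line (u = e1, p0 = (0, b)) reduces to 2.2
assert abs(step(f, fp, x0, (0.0, b), (1.0, 0.0))
           - (x0 - (f(x0) - b) / fp(x0))) < 1e-12

# iterating against the line y = x converges to the intersection of
# y = x^2 with y = x at x = 1
s = 2.0 ** -0.5
x = 3.0
for _ in range(30):
    x = step(f, fp, x, (0.0, 0.0), (s, s))
assert abs(x - 1.0) < 1e-9
```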

Intersection of two curves.

Can we generalize this any further? It seems reasonable that we would be able to use this Newton’s method technique of following the tangent to refine an approximation for the intersection point of two general curves. This is not expected to be much harder, and the geometric idea is illustrated in figure (\ref{fig:newtonsIntersectionTwoCurves})

Refining an approximation for the intersection of two curves in a plane.

The task at hand is to setup this problem algebraically. Suppose the two curves $s(x)$, and $r(x)$ are parameterized as vectors

\begin{aligned}\mathbf{s}(x) &= x \mathbf{e}_1 + s(x) \mathbf{e}_2 \\ \mathbf{r}(x) &= x \mathbf{e}_1 + r(x) \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(4.14)

Tangent direction vectors at the point $x_0$ are then

\begin{aligned}\mathbf{s}'(x_0) &= \mathbf{e}_1 + s'(x_0) \mathbf{e}_2 \\ \mathbf{r}'(x_0) &= \mathbf{e}_1 + r'(x_0) \mathbf{e}_2.\end{aligned} \hspace{\stretch{1}}(4.16)

The intersection of interest is therefore the solution of

\begin{aligned}(x_0, s(x_0)) + \alpha \mathbf{s}' = (x_0, r(x_0)) + \beta \mathbf{r}'.\end{aligned} \hspace{\stretch{1}}(4.18)

Wedging with one of tangent vectors $\mathbf{s}'$ or $\mathbf{r}'$ provides our solution. Solving for $\alpha$ this is

\begin{aligned}\alpha = \frac{(0, r(x_0) - s(x_0)) \wedge \mathbf{r}'}{\mathbf{s}' \wedge \mathbf{r}'} = \frac{\begin{vmatrix}0 & r(x_0) - s(x_0) \\ \mathbf{r}' \cdot \mathbf{e}_1 & \mathbf{r}' \cdot \mathbf{e}_2 \end{vmatrix}}{\begin{vmatrix}\mathbf{s}' \cdot \mathbf{e}_1 & \mathbf{s}' \cdot \mathbf{e}_2 \\ \mathbf{r}' \cdot \mathbf{e}_1 & \mathbf{r}' \cdot \mathbf{e}_2 \end{vmatrix}}= -\frac{r(x_0) - s(x_0)}{r'(x_0) - s'(x_0) }.\end{aligned} \hspace{\stretch{1}}(4.19)

To finish things off, we just have to calculate the new $x$ coordinate on the line for this value of $\alpha$, which gives us

\begin{aligned}x_1 = x_0 -\frac{r(x_0) - s(x_0)}{r'(x_0) - s'(x_0) }.\end{aligned} \hspace{\stretch{1}}(4.20)

It is ironic that generalizing things to two curves leads to a tidier result than the more specific line and curve result from 3.13. With a substitution of $r(x) = f(x)$, and $s(x) = b$, we once again recover the result 2.2, for the horizontal line intersecting a curve.
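The iteration 4.20 is nearly a one-liner; a sketch with an arbitrary example pair of curves (my choice, not from the post):

```python
import math

# Iterating 4.20, x -> x - (r(x) - s(x)) / (r'(x) - s'(x)), for the
# intersection of r(x) = cos(x) with s(x) = x.  The curves and starting
# guess are arbitrary demo choices.
def intersect(r, rp, s, sp, x, steps=25):
    for _ in range(steps):
        x = x - (r(x) - s(x)) / (rp(x) - sp(x))
    return x

x_star = intersect(math.cos, lambda x: -math.sin(x),
                   lambda x: x, lambda x: 1.0, x=1.0)
assert abs(math.cos(x_star) - x_star) < 1e-12   # r(x*) = s(x*)
```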

Followup.

Having completed the play that I set out to do, the next logical step would be to try the min/max problem that leads to the Hessian. That can be for another day.

Area of parallelogram spanned by two vectors

Posted by peeterjoot on August 18, 2009

Parallelogram Area

As depicted in the figure, the area of a parallelogram spanned by two vectors is the base times the height. In the figure $\mathbf{u}$ was picked as the base, with length $\Vert \mathbf{u} \Vert$. Designating the second vector $\mathbf{v}$, we want the component of $\mathbf{v}$ perpendicular to $\hat{\mathbf{u}}$ for the height. An orthogonal decomposition of $\mathbf{v}$ into directions parallel and perpendicular to $\hat{\mathbf{u}}$ can be performed in two ways.

\begin{aligned}\mathbf{v} &= \mathbf{v} \hat{\mathbf{u}} \hat{\mathbf{u}} = (\mathbf{v} \cdot \hat{\mathbf{u}}) \hat{\mathbf{u}} + (\mathbf{v} \wedge \hat{\mathbf{u}}) \hat{\mathbf{u}} \\ &= \hat{\mathbf{u}} \hat{\mathbf{u}} \mathbf{v} = \hat{\mathbf{u}} (\hat{\mathbf{u}} \cdot \mathbf{v}) + \hat{\mathbf{u}} (\hat{\mathbf{u}} \wedge \mathbf{v}) \end{aligned}

The height is the length of the perpendicular component expressed using the wedge as either $\hat{\mathbf{u}} (\hat{\mathbf{u}} \wedge \mathbf{v})$ or $(\mathbf{v} \wedge \hat{\mathbf{u}}) \hat{\mathbf{u}}$.

Multiplying base times height we have the parallelogram area

\begin{aligned}A(\mathbf{u},\mathbf{v}) &= \Vert \mathbf{u} \Vert \Vert \hat{\mathbf{u}} ( \hat{\mathbf{u}} \wedge \mathbf{v} ) \Vert \\ &= \Vert \hat{\mathbf{u}} ( \mathbf{u} \wedge \mathbf{v} ) \Vert \end{aligned}

Since the squared length of a Euclidean vector is the geometric square of that vector, we can compute the squared area of this parallelogram by squaring this single scaled vector

\begin{aligned}A^2 &= (\hat{\mathbf{u}} ( \mathbf{u} \wedge \mathbf{v} ) )^2 \end{aligned}

Utilizing both encodings of the perpendicular to $\hat{\mathbf{u}}$ component of $\mathbf{v}$ computed above we have for the squared area

\begin{aligned}A^2&= (\hat{\mathbf{u}}( \mathbf{u} \wedge {\mathbf{v}} ) )^2 \\ &= (( \mathbf{v} \wedge {\mathbf{u}} ) \hat{\mathbf{u}}) (\hat{\mathbf{u}} ( {\mathbf{u}} \wedge \mathbf{v} )) \\ &= ( \mathbf{v} \wedge \mathbf{u} ) ( \mathbf{u} \wedge \mathbf{v} ) \end{aligned}

Since $\mathbf{u} \wedge \mathbf{v} = -\mathbf{v} \wedge \mathbf{u}$, we have finally

\begin{aligned}A^2 = -( \mathbf{u} \wedge \mathbf{v} )^2 \end{aligned}

There are a few things of note here. One is that the parallelogram area can easily be expressed in terms of the square of a bivector. Another is that the square of a bivector has the same property as a purely imaginary number, a negative square.
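In the plane the bivector $\mathbf{u} \wedge \mathbf{v}$ has the single component $u_x v_y - u_y v_x$, so $A^2 = -(\mathbf{u} \wedge \mathbf{v})^2$ reduces to $A = \left\lvert u_x v_y - u_y v_x \right\rvert$. A quick numeric comparison against the base times height computation (the sample vectors are my own):

```python
import math

# Parallelogram area two ways: |u_x v_y - u_y v_x|, the magnitude of the
# single e1 ^ e2 component of u ^ v, versus base times height.
# The sample vectors are arbitrary.
u = (3.0, 1.0)
v = (1.0, 2.0)

wedge = u[0] * v[1] - u[1] * v[0]        # coefficient of e1 ^ e2 in u ^ v
area_wedge = abs(wedge)                   # A^2 = -(u ^ v)^2 = wedge^2

base = math.hypot(*u)
uhat = (u[0] / base, u[1] / base)
d = v[0] * uhat[0] + v[1] * uhat[1]       # v . uhat
perp = (v[0] - d * uhat[0], v[1] - d * uhat[1])
area_bh = base * math.hypot(*perp)        # base times height

assert abs(area_wedge - area_bh) < 1e-12
```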

It can also be noted that a vector lying completely within a plane anticommutes with the bivector for that plane. More generally, the components of a vector that lie within a plane anticommute with the bivector for that plane, while the perpendicular components of that vector commute with it. These commutation or anticommutation properties depend both on the vector and on the grade of the object that one attempts to commute it with (these properties lie behind the generalized definitions of the dot and wedge product to be seen later).

Stokes theorem in Geometric Algebra formalism.

Posted by peeterjoot on July 22, 2009

Motivation

Relying on pictorial means and a brute force ugly comparison of left and right hand sides, a verification of Stokes theorem for the vector and bivector cases was performed ([1]). This was more of a confirmation than a derivation, and the technique fails the transition to the trivector case. The trivector case is of particular interest in electromagnetism since that and a duality transformation provides a four-vector divergence theorem.

The fact that the pictorial means of defining the boundary surface doesn’t work well in a four vector space is not the only unsatisfactory aspect of the previous treatment. The coordinate expansion of the hypervolume element and hypersurface element that was required for the LHS and RHS comparisons is particularly ugly: it is a lot of work that essentially has to be undone on the opposing side of the equation. Compared to previous attempts to come to terms with Stokes theorem in ([2]) and ([3]), this more recent attempt at least avoids the requirement for a tensor expansion of the vector or bivector. It should be possible to build on this, minimize the amount of coordinate expansion required, and go directly from the volume integral to the expression of the boundary surface.

Do it.

Notation and Setup.

The desire is to relate the curl hypervolume integral to a hypersurface integral on the boundary

\begin{aligned}\int (\nabla \wedge F) \cdot d^k x = \int F \cdot d^{k-1} x\end{aligned} \hspace{\stretch{1}}(2.1)

In order to put meaning to this statement the volume and surface elements need to be properly defined. In order that this be a scalar equation, the object $F$ in the integral is required to be of grade $k-1$, and $k \le n$ where $n$ is the dimension of the vector space that generates the object $F$.

Reciprocal frames.

As evident in equation (2.1) a metric is required to define the dot product. If an affine non-metric formulation
of Stokes theorem is possible it will not be attempted here. A reciprocal basis pair will be utilized, defined by

\begin{aligned}\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu\end{aligned} \hspace{\stretch{1}}(2.2)

Both of the sets $\{\gamma_\mu\}$ and $\{\gamma^\mu\}$ are taken to span the space, but are not required to be orthogonal. The notation is consistent with the Dirac reciprocal basis, and there will not be anything in this treatment that prohibits the Minkowski metric signature required for such a relativistic space.

Vector decomposition in terms of coordinates follows by taking dot products. We write

\begin{aligned}x = x^\mu \gamma_\mu = x_\nu \gamma^\nu\end{aligned} \hspace{\stretch{1}}(2.3)

When working with a non-orthonormal basis, the reciprocal frame can be used to express the gradient.

\begin{aligned}\nabla \equiv \gamma^\mu \partial_\mu \equiv \sum_\mu \gamma^\mu \frac{\partial {}}{\partial {x^\mu}}\end{aligned} \hspace{\stretch{1}}(2.4)

This definition contains what may seem like an odd mix of upper and lower indexes. This is how the gradient is defined in [4]. Although it is possible to accept this definition and work with it, the form can be justified by requiring consistency of the gradient with the definition of the directional derivative. A definition of the directional derivative that works for single and multivector functions, in $\mathbb{R}^{3}$ and other more general spaces, is

\begin{aligned}a \cdot \nabla F \equiv \lim_{\lambda \rightarrow 0} \frac{F(x + a\lambda) - F(x)}{\lambda} = {\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}\end{aligned} \hspace{\stretch{1}}(2.5)

Taylor expanding about $\lambda=0$ in terms of coordinates we have

\begin{aligned}{\left.\frac{\partial {F(x + a\lambda)}}{\partial {\lambda}} \right\vert}_{\lambda=0}&= a^\mu \frac{\partial {F}}{\partial {x^\mu}} \\ &= (a^\nu \gamma_\nu) \cdot (\gamma^\mu \partial_\mu) F \\ &= a \cdot \nabla F \quad\quad\quad\square\end{aligned}

The lower index representation of the vector coordinates could also have been used, so using the directional derivative to imply a definition of the gradient, we have an additional alternate representation of the gradient

\begin{aligned}\nabla \equiv \gamma_\mu \partial^\mu \equiv \sum_\mu \gamma_\mu \frac{\partial {}}{\partial {x_\mu}}\end{aligned} \hspace{\stretch{1}}(2.6)
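The directional derivative definition 2.5 is easy to spot check numerically. For $F(x) = \left\lvert x \right\rvert^2$ we have $a \cdot \boldsymbol{\nabla} F = 2 a \cdot x$, which a symmetric finite difference reproduces (the sample points are my own):

```python
# Finite-difference check of the directional derivative definition 2.5 on
# F(x) = |x|^2, for which a . grad F = 2 a . x.  Sample points are arbitrary.
def F(x):
    return sum(xi * xi for xi in x)

def directional(F, x, a, eps=1e-6):
    # symmetric difference (F(x + eps a) - F(x - eps a)) / (2 eps)
    xp = [xi + eps * ai for xi, ai in zip(x, a)]
    xm = [xi - eps * ai for xi, ai in zip(x, a)]
    return (F(xp) - F(xm)) / (2.0 * eps)

x = [1.0, -2.0, 0.5]
a = [0.3, 0.4, -1.0]
exact = 2.0 * sum(ai * xi for ai, xi in zip(a, x))
assert abs(directional(F, x, a) - exact) < 1e-6
```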

Volume element

We define the hypervolume in terms of a parametrized vector $x = x(a_1, a_2, \cdots, a_k)$. For this vector we can form a pseudoscalar for the subspace spanned by the parametrization by wedging the displacements in each of the directions defined by variation of the parameters. For $i \in [1,k]$ let

\begin{aligned}dx_i = \frac{\partial {x}}{\partial {a_i}} da_i = \gamma_\mu \frac{\partial {x^\mu}}{\partial {a_i}} da_i,\end{aligned} \hspace{\stretch{1}}(2.7)

so the hypervolume element for the subspace in question is

\begin{aligned}d^k x \equiv dx_1 \wedge dx_2 \wedge \cdots \wedge dx_k\end{aligned} \hspace{\stretch{1}}(2.8)

This can be expanded explicitly in coordinates

\begin{aligned}d^k x &= da_1 da_2 \cdots da_k \left(\frac{\partial {x^{\mu_1}}}{\partial {a_1}} \frac{\partial {x^{\mu_2}}}{\partial {a_2}} \cdots\frac{\partial {x^{\mu_k}}}{\partial {a_k}} \right)( \gamma_{\mu_1} \wedge \gamma_{\mu_2} \wedge \cdots \wedge \gamma_{\mu_k} ) \\ \end{aligned}

Observe that when $k$ is also the dimension of the space, we can employ a pseudoscalar $I = \gamma_1 \gamma_2 \cdots \gamma_k$ and can specify our volume element in terms of the Jacobian determinant.

This is

\begin{aligned}d^k x =I da_1 da_2 \cdots da_k {\left\lvert{\frac{\partial {(x^1, x^2, \cdots, x^k)}}{\partial {(a_1, a_2, \cdots, a_k)}}}\right\rvert}\end{aligned} \hspace{\stretch{1}}(2.9)

However, we won’t have a requirement to express the Stokes result in terms of such Jacobians.

Expansion of the curl and volume element product

We are now prepared to go on to the meat of the issue. The first order of business is the expansion of the curl and volume element product

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=( \gamma^\mu \wedge \partial_\mu F ) \cdot d^k x \\ &=\left\langle{{ ( \gamma^\mu \wedge \partial_\mu F ) d^k x }}\right\rangle \\ \end{aligned}

The wedge product within the scalar grade selection operator can be expanded in symmetric or antisymmetric sums, but this is a grade dependent operation. For odd grade blades $A$ (vector, trivector, …), and vector $a$ we have for the wedge and dot products respectively

\begin{aligned}a \wedge A = \frac{1}{{2}} (a A - A a) \\ a \cdot A = \frac{1}{{2}} (a A + A a)\end{aligned}

while for even grade blades $A$ (bivector, quadvector, …) the roles of the symmetric and antisymmetric products are interchanged

\begin{aligned}a \wedge A = \frac{1}{{2}} (a A + A a) \\ a \cdot A = \frac{1}{{2}} (a A - A a)\end{aligned}

First treating the odd grade case for $F$ we have

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \gamma^\mu \partial_\mu F d^k x }}\right\rangle - \frac{1}{{2}} \left\langle{{ \partial_\mu F \gamma^\mu d^k x }}\right\rangle \\ \end{aligned}

Employing cyclic scalar reordering within the scalar product for the first term

\begin{aligned}\left\langle{{a b c}}\right\rangle = \left\langle{{b c a}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.10)

we have

\begin{aligned}( \nabla \wedge F ) \cdot d^k x&=\frac{1}{{2}} \left\langle{{ \partial_\mu F (d^k x \gamma^\mu - \gamma^\mu d^k x)}}\right\rangle \\ &=\left\langle{{ \partial_\mu F (d^k x \cdot \gamma^\mu)}}\right\rangle \\ \end{aligned}

The end result is

\begin{aligned}( \nabla \wedge F ) \cdot d^k x &= \partial_\mu F \cdot (d^k x \cdot \gamma^\mu) \end{aligned} \hspace{\stretch{1}}(2.11)

For even grade $F$ (and thus odd grade $d^k x$) it is straightforward to show that (2.11) also holds.

Expanding the volume dot product

We want to expand the volume integral dot product

\begin{aligned}d^k x \cdot \gamma^\mu\end{aligned} \hspace{\stretch{1}}(2.12)

Picking $k = 4$ will serve to illustrate the pattern; the generalization (or specialization to lower grades) will be clear. We have

\begin{aligned}d^4 x \cdot \gamma^\mu&=( dx_1 \wedge dx_2 \wedge dx_3 \wedge dx_4 ) \cdot \gamma^\mu \\ &= ( dx_1 \wedge dx_2 \wedge dx_3 ) dx_4 \cdot \gamma^\mu \\ &-( dx_1 \wedge dx_2 \wedge dx_4 ) dx_3 \cdot \gamma^\mu \\ &+( dx_1 \wedge dx_3 \wedge dx_4 ) dx_2 \cdot \gamma^\mu \\ &-( dx_2 \wedge dx_3 \wedge dx_4 ) dx_1 \cdot \gamma^\mu \\ \end{aligned}

This avoids the requirement to do the entire Jacobian expansion of (2.9). The dot product of the differential displacement $dx_m$ with $\gamma^\mu$ can now be made explicit without as much mess.

\begin{aligned}dx_m \cdot \gamma^\mu &=da_m \frac{\partial {x^\nu}}{\partial {a_m}} \gamma_\nu \cdot \gamma^\mu \\ &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \\ \end{aligned}

We now have products of the form

\begin{aligned}\partial_\mu F da_m \frac{\partial {x^\mu}}{\partial {a_m}} &=da_m \frac{\partial {x^\mu}}{\partial {a_m}} \frac{\partial {F}}{\partial {x^\mu}} \\ &=da_m \frac{\partial {F}}{\partial {a_m}} \\ \end{aligned}

Now we see that the differential form of (2.11) for this $k=4$ example is reduced to

\begin{aligned}( \nabla \wedge F ) \cdot d^4 x &= da_4 \frac{\partial {F}}{\partial {a_4}} \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- da_3 \frac{\partial {F}}{\partial {a_3}} \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ da_2 \frac{\partial {F}}{\partial {a_2}} \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- da_1 \frac{\partial {F}}{\partial {a_1}} \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned}

While (2.11) was a statement of Stokes theorem in this Geometric Algebra formulation, it was really incomplete without this explicit expansion of $(\partial_\mu F) \cdot (d^k x \cdot \gamma^\mu)$. This expansion for the $k=4$ case serves to illustrate that we would write Stokes theorem as

\begin{aligned}\boxed{\int( \nabla \wedge F ) \cdot d^k x =\frac{1}{{(k-1)!}} \epsilon^{ r s \cdots t u } \int da_u \frac{\partial {F}}{\partial {a_{u}}} \cdot (dx_r \wedge dx_s \wedge \cdots \wedge dx_t)}\end{aligned} \hspace{\stretch{1}}(2.13)

Here the indexes have the range $\{r, s, \cdots, t, u\} \in \{1, 2, \cdots, k\}$. This, together with the definitions (2.7) and (2.8), is really Stokes theorem in its full glory.

Observe that in this Geometric algebra form, the one forms $dx_i = da_i {\partial {x}}/{\partial {a_i}}, i \in [1,k]$ are nothing more abstract than plain old vector differential elements. In the formalism of differential forms these would be one forms, and $(\nabla \wedge F) \cdot d^k x$ would be a $k$ form. In a context where we are working with vectors, or blades already, the Geometric Algebra statement of the theorem avoids a requirement to translate to the language of forms.
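As a numerical sanity check of the theorem in the boxed form above, here is a sketch for the $k = 2$ case in Euclidean $\mathbb{R}^{2}$ with a deliberately skewed parametrization $x(a_1, a_2) = a_1 u + a_2 v$. The field, the vectors $u, v$, and the midpoint rule discretization are all arbitrary choices of mine; the same grade dot product convention $A_2 \cdot B_2 = \left\langle A_2 B_2 \right\rangle$ is assumed, so $(\mathbf{e}_1 \mathbf{e}_2) \cdot (\mathbf{e}_1 \mathbf{e}_2) = -1$.

```python
# Numeric check of the k = 2 Stokes statement in Euclidean R^2 with a skewed
# (non-orthonormal) parametrization x(a1, a2) = a1 u + a2 v on the unit square.
# LHS: (del ^ f) . d^2x with d^2x = (u ^ v) da1 da2, and since e12^2 = -1,
#      (c e12) . (J e12) = -c J.
# RHS: int (f(a2=1) - f(a2=0)) . dx1 - int (f(a1=1) - f(a1=0)) . dx2.

def f(x, y):                      # sample vector field (arbitrary choice)
    return (x * y * y, x**3 - y)

def curl(x, y):                   # coefficient c of del ^ f = c e1 e2
    return 3 * x * x - 2 * x * y  # d(f_y)/dx - d(f_x)/dy, by hand for this f

u = (1.0, 0.3)                    # dx1 = u da1
v = (0.2, 1.0)                    # dx2 = v da2
J = u[0] * v[1] - u[1] * v[0]     # coefficient of u ^ v on e1 e2

def pos(a1, a2):
    return (a1 * u[0] + a2 * v[0], a1 * u[1] + a2 * v[1])

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

def diff(p, q):
    return (p[0] - q[0], p[1] - q[1])

n = 300
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

# LHS: (del ^ f) . d^2x integrated over the parameter square (midpoint rule)
lhs = sum(-curl(*pos(a1, a2)) * J for a1 in mid for a2 in mid) * h * h

# RHS: the two boundary-difference terms
rhs = sum(dot(diff(f(*pos(a1, 1.0)), f(*pos(a1, 0.0))), u) for a1 in mid) * h \
    - sum(dot(diff(f(*pos(1.0, a2)), f(*pos(0.0, a2))), v) for a2 in mid) * h

assert abs(lhs - rhs) < 1e-3
```

Both sides evaluate to $-\oint \mathbf{f} \cdot d\mathbf{l}$ over the parallelogram, so the agreement is exactly the parametrization independence claimed above.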

With a statement of the general theorem complete, let’s return to our $k=4$ case where we can now integrate over each of the $a_1, a_2, \cdots, a_k$ parameters. That is

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^4 x &= \int (F(a_4(1)) - F(a_4(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_3 ) \\ &- \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 \wedge dx_4 ) \\ &+ \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 \wedge dx_4 ) \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 \wedge dx_4 ) \\ \end{aligned}

This is precisely Stokes theorem for the trivector case, and makes the enumeration of the boundary surfaces explicit. As derived, there was no requirement for an orthonormal basis, nor a Euclidean metric, nor a parametrization along the basis directions. The only requirement of the parametrization is that the associated volume element is non-trivial (i.e. none of the $dx_q \wedge dx_r$ vanish).

For completeness, note that our boundary surfaces and the associated Stokes statements for the bivector and vector cases are, by inspection, respectively

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^3 x &= \int (F(a_3(1)) - F(a_3(0))) \cdot ( dx_1 \wedge dx_2 ) \\ &- \int (F(a_2(1)) - F(a_2(0))) \cdot ( dx_1 \wedge dx_3 ) \\ &+ \int (F(a_1(1)) - F(a_1(0))) \cdot ( dx_2 \wedge dx_3 ) \\ \end{aligned}

and

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^2 x &= \int (F(a_2(1)) - F(a_2(0))) \cdot dx_1 \\ &- \int (F(a_1(1)) - F(a_1(0))) \cdot dx_2 \\ \end{aligned}

These three expansions can be summarized by the original single statement of (2.1), which, repeating for reference, is

\begin{aligned}\int ( \nabla \wedge F ) \cdot d^k x = \int F \cdot d^{k-1} x \end{aligned}

Here it is implied that the blade $F$ is evaluated on the boundaries and dotted with the associated hypersurface boundary element. Having performed this expansion, we now have an explicit statement of exactly what that surface element is for any desired parametrization.

Duality relations and special cases.

Some special (and more recognizable) cases of (2.1) are possible considering specific grades of $F$, and in some cases employing duality relations.

curl surface integral

One important case is the $\mathbb{R}^{3}$ vector result, which can be expressed in terms of the cross product.

Write $\hat{\mathbf{n}} d^2 x = -i dA$. Then we have

\begin{aligned}( \boldsymbol{\nabla} \wedge \mathbf{f} ) \cdot d^2 x&=\left\langle{{ i (\boldsymbol{\nabla} \times \mathbf{f}) (- \hat{\mathbf{n}} i dA) }}\right\rangle \\ &=(\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA\end{aligned}

This recovers the familiar cross product form of Stokes law.

\begin{aligned}\int (\boldsymbol{\nabla} \times \mathbf{f}) \cdot \hat{\mathbf{n}} dA = \oint \mathbf{f} \cdot d\mathbf{x}\end{aligned} \hspace{\stretch{1}}(3.14)
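A quick numerical check of this cross product form, using a tilted planar parallelogram and an arbitrarily chosen field (the parametrization, field, and midpoint rule discretization are mine, not anything from the derivation):

```python
# Numeric check of the cross product form of Stokes law on a tilted planar
# parallelogram x(a1, a2) = a1 u + a2 v, traversed counterclockwise in (a1, a2).
# n dA = (u x v) da1 da2, consistent with the right hand rule for that traversal.

def f(x, y, z):                   # sample vector field (arbitrary choice)
    return (z * z, x * y, y)

def curl(x, y, z):                # del x f, computed by hand for this f
    return (1.0, 2.0 * z, y)

def cross(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

u = (1.0, 0.0, 0.5)
v = (0.2, 1.0, -0.3)
ndA = cross(u, v)                 # unnormalized n dA per unit parameter area

def pos(a1, a2):
    return tuple(a1 * ui + a2 * vi for ui, vi in zip(u, v))

n = 300
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

surface = sum(dot(curl(*pos(a1, a2)), ndA) for a1 in mid for a2 in mid) * h * h

# Counterclockwise boundary: bottom (+u), right (+v), top (-u), left (-v)
line = sum(dot(f(*pos(t, 0.0)), u) + dot(f(*pos(1.0, t)), v)
           - dot(f(*pos(t, 1.0)), u) - dot(f(*pos(0.0, t)), v)
           for t in mid) * h

assert abs(surface - line) < 1e-4
```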

3D divergence theorem

Duality applied to the bivector Stokes result provides the divergence theorem in $\mathbb{R}^{3}$. For bivector $B$, let $iB = \mathbf{f}$, $d^3 x = i dV$, and $d^2 x = i \hat{\mathbf{n}} dA$. We then have

\begin{aligned}( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x&=\left\langle{{ ( \boldsymbol{\nabla} \wedge B ) \cdot d^3 x }}\right\rangle \\ &=\frac{1}{{2}} \left\langle{{ ( \boldsymbol{\nabla} B + B \boldsymbol{\nabla} ) i dV }}\right\rangle \\ &=\boldsymbol{\nabla} \cdot \mathbf{f} dV \\ \end{aligned}

Similarly

\begin{aligned}B \cdot d^2 x&=\left\langle{{ -i\mathbf{f} i \hat{\mathbf{n}} dA}}\right\rangle \\ &=(\mathbf{f} \cdot \hat{\mathbf{n}}) dA \\ \end{aligned}

This recovers the $\mathbb{R}^{3}$ divergence theorem

\begin{aligned}\int \boldsymbol{\nabla} \cdot \mathbf{f} dV = \int (\mathbf{f} \cdot \hat{\mathbf{n}}) dA\end{aligned} \hspace{\stretch{1}}(3.15)
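This too is easy to check numerically. A sketch over the unit cube, with an arbitrarily chosen field and a midpoint rule discretization (both my choices for illustration):

```python
# Numeric check of the R^3 divergence theorem on the unit cube [0,1]^3:
# the volume integral of div f versus the outward flux through the six faces.

def f(x, y, z):                   # sample vector field (arbitrary choice)
    return (x * x, x * y, y * z)

def div(x, y, z):                 # div f, computed by hand for this f
    return 2.0 * x + x + y        # = 3x + y

n = 60
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

volume = sum(div(x, y, z) for x in mid for y in mid for z in mid) * h**3

# Outward flux: for each coordinate, (normal component at 1) - (at 0),
# integrated over the opposite face
flux = 0.0
for p in mid:
    for q in mid:
        flux += (f(1.0, p, q)[0] - f(0.0, p, q)[0]    # x faces
               + f(p, 1.0, q)[1] - f(p, 0.0, q)[1]    # y faces
               + f(p, q, 1.0)[2] - f(p, q, 0.0)[2])   # z faces
flux *= h * h

assert abs(volume - flux) < 1e-6
```

For this field both sides equal $2$ exactly (the integrands are linear, so the midpoint rule is exact up to rounding).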

4D divergence theorem

How about the four dimensional spacetime divergence? Express the trivector as a dual four-vector, $T = if$, and the four volume element as $d^4 x = i dQ$. This gives

\begin{aligned}(\nabla \wedge T) \cdot d^4 x&=\frac{1}{{2}} \left\langle{{ (\nabla T - T \nabla) i }}\right\rangle dQ \\ &=\frac{1}{{2}} \left\langle{{ (\nabla i f - if \nabla) i }}\right\rangle dQ \\ &=\frac{1}{2} \left\langle{{ (\nabla f + f \nabla) }}\right\rangle dQ \\ &=(\nabla \cdot f) dQ\end{aligned}

For the boundary volume integral write $d^3 x = n i dV$, for

\begin{aligned}T \cdot d^3 x &= \left\langle{{ (if) ( n i ) }}\right\rangle dV \\ &= \left\langle{{ f n }}\right\rangle dV \\ &= (f \cdot n) dV\end{aligned}

So we have

\begin{aligned}\int \partial_\mu f^\mu dQ = \int f^\nu n_\nu dV\end{aligned}

The orientation of the fourspace volume element and the boundary normal is defined in terms of the parametrization, the duality relations, and our explicit expansion of the 4D Stokes boundary integral above.

4D divergence theorem, continued.

The basic idea of using duality to express the 4D divergence integral as a Stokes boundary surface integral has been explored. Let's consider this in more detail, picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write

\begin{aligned}d^4 x &= ( \gamma_0 dx^0 ) \wedge ( \gamma_1 dx^1 ) \wedge ( \gamma_2 dx^2 ) \wedge ( \gamma_3 dx^3 ) \\ &= \gamma_0 \gamma_1 \gamma_2 \gamma_3 dx^0 dx^1 dx^2 dx^3 \\ &= i dx^0 dx^1 dx^2 dx^3 \\ \end{aligned}

As seen previously (but not separately), the divergence can be expressed as the dual of the curl

\begin{aligned}\nabla \cdot f&=\left\langle{{ \nabla f }}\right\rangle \\ &=-\left\langle{{ \nabla i (\underbrace{i f}_{\text{grade 3}}) }}\right\rangle \\ &=\left\langle{{ i \nabla (i f) }}\right\rangle \\ &=\left\langle{{ i ( \underbrace{\nabla \cdot (i f)}_{\text{grade 2}} + \underbrace{\nabla \wedge (i f)}_{\text{grade 4}} ) }}\right\rangle \\ &=i (\nabla \wedge (i f)) \\ \end{aligned}

So we have $\nabla \wedge (i f) = -i (\nabla \cdot f)$. Putting things together, and writing $i f = -f i$ we have

\begin{aligned}\int (\nabla \wedge (i f)) \cdot d^4 x&= \int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 \\ &=\int dx^0 \partial_0 (f i) \cdot \gamma_{123} dx^1 dx^2 dx^3 \\ &-\int dx^1 \partial_1 (f i) \cdot \gamma_{023} dx^0 dx^2 dx^3 \\ &+\int dx^2 \partial_2 (f i) \cdot \gamma_{013} dx^0 dx^1 dx^3 \\ &-\int dx^3 \partial_3 (f i) \cdot \gamma_{012} dx^0 dx^1 dx^2 \\ \end{aligned}

It is straightforward to reduce each of these dot products. For example

\begin{aligned}\partial_2 (f i) \cdot \gamma_{013}&=\left\langle{{ \partial_2 f \gamma_{0123013} }}\right\rangle \\ &=-\left\langle{{ \partial_2 f \gamma_{2} }}\right\rangle \\ &=- \gamma_2 \cdot \partial_2 f \\ &=\gamma^2 \cdot \partial_2 f \end{aligned}

The rest proceed the same way, and rather anticlimactically, we end up coming full circle

\begin{aligned}\int (\nabla \cdot f) dx^0 dx^1 dx^2 dx^3 &=\int dx^0 \gamma^0 \partial_0 \cdot f dx^1 dx^2 dx^3 \\ &+\int dx^1 \gamma^1 \partial_1 \cdot f dx^0 dx^2 dx^3 \\ &+\int dx^2 \gamma^2 \partial_2 \cdot f dx^0 dx^1 dx^3 \\ &+\int dx^3 \gamma^3 \partial_3 \cdot f dx^0 dx^1 dx^2 \\ \end{aligned}

This is, however, nothing more than the definition of the divergence itself, so no appeal to Stokes theorem was actually required. If we integrate over a rectangle and perform each of the four integrals, we have (with $c=1$) from the dual Stokes equation the perhaps less obvious result

\begin{aligned}\int \partial_\mu f^\mu dt dx dy dz&=\int (f^0(t_1) - f^0(t_0)) dx dy dz \\ &+\int (f^1(x_1) - f^1(x_0)) dt dy dz \\ &+\int (f^2(y_1) - f^2(y_0)) dt dx dz \\ &+\int (f^3(z_1) - f^3(z_0)) dt dx dy \\ \end{aligned}

When stated this way one sees that this could have just as easily have followed directly from the left hand side. What’s the point then of the divergence theorem or Stokes theorem? I think that the value must really be the fact that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not.
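This rectangular form is also easy to verify numerically. A sketch with an arbitrarily chosen four-vector field on a unit 4-rectangle, using a midpoint rule (exact here, up to rounding, since the integrands are linear; the field and discretization are my choices):

```python
# Numeric check of the rectangular 4D result: the integral of d_mu f^mu over a
# unit 4-rectangle equals the sum of the four boundary-difference integrals.

def f(t, x, y, z):                # sample four-vector field (arbitrary choice)
    return (t * x, x * y, y * z, z * t)

def div4(t, x, y, z):             # d_mu f^mu, computed by hand for this f
    return x + y + z + t

n = 20
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

lhs = sum(div4(t, x, y, z)
          for t in mid for x in mid for y in mid for z in mid) * h**4

# Boundary terms: f^mu difference between x^mu = 1 and x^mu = 0, integrated
# over the remaining three coordinates (all with + sign, as in the sum above)
rhs = 0.0
for p in mid:
    for q in mid:
        for r in mid:
            rhs += (f(1.0, p, q, r)[0] - f(0.0, p, q, r)[0]
                  + f(p, 1.0, q, r)[1] - f(p, 0.0, q, r)[1]
                  + f(p, q, 1.0, r)[2] - f(p, q, 0.0, r)[2]
                  + f(p, q, r, 1.0)[3] - f(p, q, r, 0.0)[3])
rhs *= h**3

assert abs(lhs - rhs) < 1e-9
```

Each boundary pair contributes $1/2$, and both sides equal $2$, matching the left hand side computed directly.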

References

[1] Peeter Joot. Stokes theorem applied to vector and bivector fields [online].